Search Results

Search found 4808 results on 193 pages for 'reserved instances'.

  • MySQL access classes in PHP

    - by Mike
    I have a connection class for MySQL that looks like this:

        class MySQLConnect
        {
            private $connection;
            private static $instances = 0;

            function __construct()
            {
                if (MySQLConnect::$instances == 0) {
                    // Connect to MySQL server
                    $this->connection = mysql_connect(MySQLConfig::HOST, MySQLConfig::USER, MySQLConfig::PASS)
                        or die("Error: Unable to connect to the MySQL Server.");
                    MySQLConnect::$instances = 1;
                } else {
                    $msg = "Close the existing instance of the MySQLConnector class.";
                    die($msg);
                }
            }

            public function singleQuery($query, $databasename)
            {
                mysql_select_db(MySQLConfig::DB, $this->connection)
                    or die("Error: Could not select database " . MySQLConfig::DB . " from the server.");
                $result = mysql_query($query) or die('Query failed.');
                return $result;
            }

            public function createResultSet($query, $databasename)
            {
                $rs = new MySQLResultSet($query, MySQLConfig::DB, $this->connection);
                return $rs;
            }

            public function close()
            {
                MySQLConnect::$instances = 0;
                if (isset($this->connection)) {
                    mysql_close($this->connection);
                    unset($this->connection);
                }
            }

            public function __destruct()
            {
                $this->close();
            }
        }

    The MySQLResultSet class looks like this:

        class MySQLResultSet implements Iterator
        {
            private $query;
            private $databasename;
            private $connection;
            private $result;
            private $currentRow;
            private $key = 0;
            private $valid;

            public function __construct($query, $databasename, $connection)
            {
                $this->query = $query;
                // Select the database
                $selectedDatabase = mysql_select_db($databasename, $connection)
                    or die("Error: Could not select database " . $this->dbname . " from the server.");
                $this->result = mysql_query($this->query) or die('Query failed.');
                $this->rewind();
            }

            public function getResult()
            {
                return $this->result;
            }

            // public function getRow()
            // {
            //     return mysql_fetch_row($this->result);
            // }

            public function getNumberRows()
            {
                return mysql_num_rows($this->result);
            }

            // current() returns the current row
            public function current()
            {
                return $this->currentRow;
            }

            // key() returns the current index
            public function key()
            {
                return $this->key;
            }

            // next() moves forward one index
            public function next()
            {
                if ($this->currentRow = mysql_fetch_array($this->result)) {
                    $this->valid = true;
                    $this->key++;
                } else {
                    $this->valid = false;
                }
            }

            // rewind() moves to the starting index
            public function rewind()
            {
                $this->key = 0;
                if (mysql_num_rows($this->result) > 0) {
                    if (mysql_data_seek($this->result, 0)) {
                        $this->valid = true;
                        $this->key = 0;
                        $this->currentRow = mysql_fetch_array($this->result);
                    }
                } else {
                    $this->valid = false;
                }
            }

            // valid() returns 1 if the current position is a valid array index
            // and 0 if it is not valid
            public function valid()
            {
                return $this->valid;
            }
        }

    The following class is an example of how I am accessing the database:

        class ImageCount
        {
            public function getCount()
            {
                $mysqlConnector = new MySQLConnect();
                $query = "SELECT * FROM images;";
                $resultSet = $mysqlConnector->createResultSet($query, MySQLConfig::DB);
                $mysqlConnector->close();
                return $resultSet->getNumberRows();
            }
        }

    I use the ImageCount class like this:

        if (!ImageCount::getCount()) {
            // Do something
        }

    Question: Is this an okay way to access the database? Could anybody recommend an alternative method if it is bad? Thank-you.

  • Load Balancing a UDP server

    - by Hellfrost
    Hello StackOverflow, I have a UDP server that is a central part of my business process. To handle the loads I'm expecting in the production environment, I'll probably need 2 or 3 instances of the server. The server is almost entirely stateless; it mostly collects data, and the layer above it knows how to handle the minimal amount of stale data that can arise from the multiple server instances. My question is: how can I implement load balancing between the servers? I would prefer to distribute the requests as evenly as possible between the servers. I would also like to have some fidelity: if client X was routed to server Y, then I want all of X's subsequent requests to go to server Y, as long as that is sensible and does not overload Y. By the way, it is a .NET system... what would you recommend?
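
    A minimal sketch of the affinity part, assuming a fixed pool of servers: hash each client's source address so the same client always lands on the same server (Python here for brevity; the idea ports directly to .NET):

        import hashlib

        SERVERS = ["10.0.1.10:9000", "10.0.1.11:9000"]  # hypothetical server pool

        def route(client_ip: str) -> str:
            # A stable hash keeps client X pinned to one server across requests.
            digest = hashlib.md5(client_ip.encode("utf-8")).hexdigest()
            return SERVERS[int(digest, 16) % len(SERVERS)]

    The trade-off is that the mapping reshuffles when the pool size changes; consistent hashing fixes that if servers will come and go often.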

  • Reporting Services 2008 & 2012 side by side

    - by Iulian Ilies
    I have installed SQL Server 2008 & 2012 side by side on the same machine, and that includes the Reporting Services for each. Both are named instances: MSSQLSERVER2008 and MSSQLSERVER2012. I didn't configure the 2008 one, but configured 2012 first, and that one is working fine. However, later on when I wanted to configure the 2008 Reporting Services instance, I was not able to do so; it simply cannot be found. Both services are displayed as running; nevertheless, in Reporting Services Configuration Manager only the 2012 instance is displayed. I tried stopping the 2012 one, but still no luck: 2008 won't show up in the RS Configuration Manager. Any suggestions? Thanks in advance Iulian

  • Unable to ping remote server Nagios

    - by williamsowen
    We've recently set up Nagios on one of our Amazon EC2 instances to act as a monitoring server for our other instances. NRPE was installed on our staging server stager and appears to be working fine:

        monitoring_server~: /usr/lib/nagios/plugins/check_nrpe -H xx.xx.xx.xx -p 5666
        NRPE v2.12

    The issue is that when viewing the remote server stager in the Nagios admin screen, it appears to be 'DOWN'. The check_ping command reveals:

        monitoring_server~: /usr/lib/nagios/plugins/check_ping -H 'xx.xx.xx.xx' -w 5000,100% -c 5000,100% -p 1
        PING CRITICAL - Packet loss = 100%|rta=5000.000000ms;5000.000000;5000.000000;0.000000 pl=100%;100;100;0

    Can anyone provide some direction on how to get this working? Not sure what else to do.
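
    One likely culprit, offered as an assumption to check: check_ping sends ICMP echo requests, and EC2 security groups block ICMP by default, which produces exactly this 100% packet loss even while NRPE's TCP port works. A boto3 sketch of opening it (group ID and monitoring-server CIDR are placeholders):

        import boto3

        ec2 = boto3.client("ec2")
        # Allow ICMP echo requests from the monitoring server into the
        # security group that the staging instance belongs to.
        ec2.authorize_security_group_ingress(
            GroupId="sg-0123456789abcdef0",  # hypothetical group ID
            IpPermissions=[{
                "IpProtocol": "icmp",
                "FromPort": 8,    # ICMP type 8 = echo request
                "ToPort": -1,     # all ICMP codes
                "IpRanges": [{"CidrIp": "203.0.113.10/32"}],  # monitoring server
            }],
        )

    The same rule can be added by hand in the console under the group's inbound rules.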

  • Mongodump on Gridfs is killing the host IOs

    - by Raphael
    I'm trying to make a mongodump from our production MongoDB while production is running. We have three production instances: one regular MongoDB, one with a few GB of data on GridFS, and one with a larger amount of data on GridFS. All MongoDB instances are running version 2.4.9 on an Ubuntu 10.04 virtual server. I use a mongodump command to export the databases to another server. Unfortunately our machines are virtually hosted in a "low performance" datacenter (VMware based), so when I try to export the large GridFS db, the disk IO hits 100% (and 50% of the CPU starts waiting for IO too). This has a very negative impact on the production applications, because db access time increases excessively, making the applications unusable. I'm looking for a way to regulate the mongodump so the export goes slower but easier on the hardware resources, allowing better performance for the applications. Has anyone had a similar scenario?
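
    One mitigation to try, assuming Linux and that mongodump runs on the database host: start the dump under the idle I/O scheduling class, so it only gets disk time that production queries aren't using. A sketch (host and paths are placeholders):

        import subprocess

        # ionice class 3 ("idle") deprioritizes mongodump's disk access so
        # production reads win any contention for the virtual disk.
        subprocess.run([
            "ionice", "-c", "3",
            "mongodump",
            "--host", "localhost:27017",
            "--out", "/backup/dump",
        ], check=True)

    If that isn't enough, dumping from a hidden replica-set member keeps the load off the primary entirely.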

  • Connect to MySQL EC2 Instance outside of VPC

    - by Brian W
    I have a VPC set up with a few EC2 instances inside. I'm attempting to connect to a MySQL database on an EC2 instance outside the VPC, with no luck. I have the security groups on the VPC EC2 instances set to outbound 0.0.0.0/0, which I assumed would let them connect to any outbound destination. I also followed a tutorial on creating a NAT, but wasn't exactly sure how to use it to connect to an external database. In any case, if anyone has experience and knows the proper way to connect to a database outside the VPC, it would be greatly appreciated!

  • how can I pass an environment variable through an ssh command?

    - by Ross Rogers
    How can I pass a value into an ssh command, such that the shell started on the remote machine begins with a certain environment variable set to a value of my choosing? EDIT: The goal is to pass the current KDE desktop (from dcop kwin KWinInterface currentDesktop) to the new shell, so that I can pass back an NFS location to my JEdit instance on the original server, which is unique for each KDE desktop (using a mechanism like emacsserver/emacsclient). The reason multiple ssh instances can be in flight at one time is that when I'm setting up my environment, I open a bunch of different ssh instances to different machines.
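
    One approach that needs no server-side configuration: prefix the remote command with the assignment, so the shell starts with the variable already set. A sketch that forwards the current KDE desktop (the variable name KDE_DESKTOP is illustrative):

        import subprocess

        # Ask KDE which desktop this terminal lives on (command taken from
        # the question), then start a remote login shell with it exported.
        desktop = subprocess.run(
            ["dcop", "kwin", "KWinInterface", "currentDesktop"],
            capture_output=True, text=True,
        ).stdout.strip()

        subprocess.run(["ssh", "-t", "remotehost",
                        f"KDE_DESKTOP={desktop} exec bash -l"])

    The declarative alternative is ssh's SendEnv option, but that only works if the remote sshd_config whitelists the variable with AcceptEnv.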

  • Routing application traffic through specific interface

    - by UnicornsAndRainbows
    Hello all! First question here, so please go easy: I have a Debian Linux 5.0 server with two public interfaces. I would like to route outbound traffic from one instance of an application via one interface and from the second instance through the second interface. There are some challenges: both instances of the application use the same protocol, both instances can access the entire internet (so I can't route based on destination network), and I can't change the code of the application. I don't think a typical approach to load balancing all traffic is going to work well, because there are relatively few destination servers being accessed in the outbound traffic, and all traffic would really need to be distributed pretty evenly across these relatively few servers. I could probably run two virtualized servers on the box and bind each of them to a different external IP, but I'm looking for a simpler solution, maybe using iproute or iptables? Any ideas for me? Thanks in advance - and I'm happy to answer any questions.

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non-dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instance of the application so they don't have to run it locally on their own servers. The app is lightweight, and an average server, in theory, could handle 25-50 instances before users would notice a slowdown. However, only one instance can run per Windows instance (because the application writes to a common registry branch), so we'd need something like VMware to create 25-50 Windows instances. We know we eventually need to reprogram it to make it truly cloud-worthy, but in the meantime, what would you recommend for a server farm for this? We don't have the setup to purchase our own servers, so we must use a third party. We have budgeted $500 - $1000 per year per customer for this service. Thanks in advance for your suggestions, experiences and guidance.

  • How to create a new public AMI for windows?

    - by user67081
    I am trying to make a Windows 2008 AMI that is a nice clean 64-bit starter pack (IIS, SQL Express, ASP.NET MVC, etc...). I would like to make it a public AMI when it's done. Therein lies the problem. I can make an AMI from my image, no problem. But I can't seem to get new instances to generate their own passwords. The result is that I have a new instance that works great with my password. So what is the process for converting my EBS-backed instances into an AMI that will auto-generate its password and do all the other setup steps that Amazon goes through when a new instance starts up? Thanks in advance.

  • How can I create an AMI from an existing EC2 instance?

    - by Arkaaito
    (I suspect that this may already be answered somewhere, since it seems like it would be a common operation. But I can't find it, so...) I am a relative AWS newbie. I have inherited a running Amazon EC2 instance, with various items (Apache, MySQL, Sphinx, ...) installed on it and a bunch of configuration. I'd like to turn it into an AMI that I can spin up other instances from. I can't find any information on creating a custom AMI on Amazon's site - only the fact that you can, repeatedly referenced, as if to taunt me... I believe this is not an EBS-backed instance, just an "ordinary" one. I do not know what AMI it was originally created from. How would I create an AMI that I could use for spinning up other instances which will be identical except for the hostname?
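
    For an EBS-backed instance this is a single API call; instance-store instances need the older ec2-bundle-vol / ec2-register route instead. A hedged boto3 sketch, with the instance ID and names as placeholders:

        import boto3

        ec2 = boto3.client("ec2")
        resp = ec2.create_image(
            InstanceId="i-0123456789abcdef0",  # hypothetical inherited instance
            Name="inherited-web-stack-v1",
            Description="Apache + MySQL + Sphinx baseline",
            NoReboot=False,  # rebooting first gives a consistent filesystem image
        )
        print(resp["ImageId"])  # launch new, identical instances from this AMI

    Each new instance normally picks up its own EC2 hostname via DHCP, so copies won't collide on that front.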

  • Xen and Ubuntu?

    - by wag2639
    How does one properly approach having Ubuntu servers on a Xen hypervisor? I don't have any experience with RAID or Xen other than at a theoretical level.

    Additional requirements:
    - mdadm software RAID 5 (can be on separate disks) that multiple instances can access
    - paravirtualized Ubuntu Server guest instances

    Possible ideas for now:
    - Ubuntu host (dom0) with the ubuntu-xen-server package (this purportedly isn't supported); the dom0 host will "own" the RAID 5 partition, with more Ubuntu servers as guests
    - Citrix XenServer bare-metal host; XenServer owns the RAID
    - Citrix XenServer bare-metal host; an Ubuntu guest instance creates and owns the RAID

    Questions and concerns:
    - Can Ubuntu be used as a dom0 Xen host?
    - Can XenServer install packages such as mdadm and create a partition?
    - Can multiple guests have access (R + W) to the same data partition (RAID)?

    Note: since it may have a bearing on support, I'm referring to Ubuntu Server 10.04.

  • Using an AWS EC2 server to host a busy website: I need to set up load balancing

    - by Philip Isaacs
    My company has one EC2 server running on AWS, with a MySQL DB and Apache on the same instance. This one instance hosts a website built on the PHP Zend Framework. The site runs like crap when it starts to get busy with a lot of traffic, so I'm looking for some advice on how to set up something that can handle the load better. My first question is: should I move the MySQL DB onto a separate EC2 instance, or perhaps use AWS's RDS service, which looks like a nice option? I'm sort of new to some of this, but I'm guessing I'll need at least two EC2 instances for serving the website and some sort of load balancing mechanism to distribute traffic. But maybe not, I'm not sure. Also, what are some best practices for replicating the data so that it stays in sync on both instances? Okay, I know these are a lot of questions, but I don't know where to start, so any advice will help.

  • Where would an S3 upload speed cap originate?

    - by CoreyH
    I do a ton of uploading to S3 and am experiencing capped speeds that I can't quite figure out how to address. The setup: Windows Server 2008 R2 x64, an external HD, a Java-based upload tool called Jsh3ll, and custom VBS scripts to kick the jobs off. Running one process at a time, I am always limited to about 4 Mbps. I have FiOS at 35/35 Mbps, so it isn't an outright line limit. And I can run parallel instances and go all the way up to 35 Mbps, so I know the problem isn't gateway/NIC/machine/Amazon related. Running parallel instances works to a degree as a solution, but it greatly increases the complexity of my workflow. Solving this would make my life dramatically easier. When I was first doing this, I played around with a bunch of Windows TCP parameters and was briefly able to get unconstrained bandwidth, but it wasn't repeatable. Thoughts?
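
    Since parallel instances reach full line speed, the cap looks per-TCP-connection, and the usual fix inside a single process is multipart upload with several concurrent parts. A sketch using boto3's transfer manager (a swap for the Java tool; bucket and paths are placeholders):

        import boto3
        from boto3.s3.transfer import TransferConfig

        s3 = boto3.client("s3")
        config = TransferConfig(
            multipart_threshold=8 * 1024 * 1024,  # split files bigger than 8 MB
            multipart_chunksize=8 * 1024 * 1024,
            max_concurrency=10,                   # 10 part uploads in parallel
        )
        s3.upload_file("E:/exports/archive.bin", "my-bucket",
                       "archive.bin", Config=config)

    That keeps the parallelism inside one process, so the VBS-driven workflow stays a single job per file.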

  • Tracking costs within one AWS account

    - by caius howcroft
    I have what I'm sure is a very common problem. Our company has many projects and groups working for different clients. We do a lot of our development work in the cloud and deploy our solutions there. We have a VPC set up that isolates projects from each other in their own subnets, and that VPC has a hardware VPN connection back to HQ. We need to keep track of the cost run up by every project. The way I currently implement this is by providing my own tools for starting and stopping instances, which log which user (and thus which project) to bill the instance to. This works okay for BoxUsage costs but not for other costs. I could create a separate account for each project and use consolidated billing; I think this would allow me to pay once but track costs per "project". However, I would then not be able to share common resources (like bringing account B's running instances inside the same VPC). Does anyone have any suggestions? Cheers C
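
    One alternative to separate accounts: tag every instance with its project at launch and let AWS's cost allocation tags break the bill out by tag, which keeps everything in one account and one VPC. A boto3 sketch (IDs and tag values are placeholders):

        import boto3

        ec2 = boto3.client("ec2")
        ec2.create_tags(
            Resources=["i-0123456789abcdef0"],  # e.g. the ID run_instances returned
            Tags=[
                {"Key": "project", "Value": "client-alpha"},
                {"Key": "owner", "Value": "caius"},
            ],
        )

    Your start/stop tooling already knows the project, so it could apply the tag at the same point where it currently writes the log entry.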

  • Amazon EC2: how to find out detailed CPU usage?

    - by j0nes
    I am running several EC2 instances, and I want to know the exact work my CPU is doing. On "normal" machines I do this with Munin and its CPU plugin, which looks at the statistics provided by /proc/stat. On my EC2 machines, however, I get incorrect graphs. The machine has two cores, so the max CPU usage should be 200% - however it gets as high as 400%. I know that I should use Amazon CloudWatch to see the total CPU usage (and this is Amazon's officially recommended way to do it), but I am specifically looking at how the CPU usage is spent (e.g. system, user, iowait). Is there a way to get detailed CPU usage statistics on EC2 instances?
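
    One detail worth checking on EC2 is steal time (cycles the hypervisor gave to other tenants), which CloudWatch won't show you and which can confuse tools calibrated for physical hosts. A sketch that reads the breakdown straight from /proc/stat, the same source Munin uses:

        import time

        FIELDS = ["user", "nice", "system", "idle",
                  "iowait", "irq", "softirq", "steal"]

        def cpu_times():
            with open("/proc/stat") as f:
                parts = f.readline().split()  # aggregate "cpu" line
            return dict(zip(FIELDS, map(int, parts[1:1 + len(FIELDS)])))

        before = cpu_times()
        time.sleep(5)
        after = cpu_times()
        delta = {k: after[k] - before[k] for k in FIELDS}
        total = sum(delta.values()) or 1
        for k in FIELDS:
            print(f"{k:8s} {100.0 * delta[k] / total:5.1f}%")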

  • AWS autoscaling. Launch Config/Auto Scaling Group and VPC instance with two ifaces

    - by icalvete
    I want to create a Launch Config/Auto Scaling Group to build instances inside a VPC with two subnets ("frontend" and "backend"). I need these instances to have two ifaces: one in the "frontend" subnet and one in the "backend" subnet. I can't see how to do that. It's not possible from the AWS console, and neither with the aws cli: http://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-launch-configuration.html http://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-auto-scaling-group.html The launch configuration docs say nothing about this either: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/create-lc-with-instanceID.html Ideas? Thanks!!!
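
    As far as I can tell the launch configuration really can't declare a second interface, so one workaround is to let each instance attach its own backend ENI as it boots (from user data or an init script). A hedged boto3 sketch, with subnet, group, and instance IDs as placeholders:

        import boto3

        ec2 = boto3.client("ec2")

        # Create an interface in the backend subnet...
        eni_id = ec2.create_network_interface(
            SubnetId="subnet-0backend1234",   # hypothetical backend subnet
            Groups=["sg-0backend1234"],       # hypothetical security group
        )["NetworkInterface"]["NetworkInterfaceId"]

        # ...and attach it as eth1 (eth0 comes from the launch config).
        ec2.attach_network_interface(
            NetworkInterfaceId=eni_id,
            InstanceId="i-0123456789abcdef0",  # read from instance metadata at boot
            DeviceIndex=1,
        )

    The script also needs to delete or reuse the ENI when the group scales in, otherwise orphaned interfaces accumulate.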

  • How to address an EC2 instance from both inside and outside datacenter?

    - by Alexandr Kurilin
    I'm trying to find a good way of being able to address my EC2 database instance from both inside and outside of the datacenter. Other EC2 instances need to be able to call into it, and other clients like pgAdmin might need to connect to it from the outside world as well. It's my understanding that using the internal and external DNS names is not sustainable long term, as each reboot leads to a change. I'm thinking of associating an Elastic IP with the instance and giving it an A record (say db1.mydomain.com), which I will then use both within and outside the datacenter. Further instances in the same role will get the same treatment and a DNS record of db2.mydomain.com, etc. Now, is there a cleaner and more stable way of achieving this result? Am I going about this the wrong way? Suggestions?
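
    One wrinkle with pointing the A record at the Elastic IP: instances inside EC2 would then reach the database over its public address. A CNAME to the instance's public DNS name avoids that, because Amazon's resolvers answer that name with the private IP from inside and the public IP from outside. A hedged Route 53 sketch (zone ID and DNS name are placeholders):

        import boto3

        r53 = boto3.client("route53")
        r53.change_resource_record_sets(
            HostedZoneId="Z0EXAMPLE12345",  # hypothetical hosted zone
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "db1.mydomain.com.",
                    "Type": "CNAME",
                    "TTL": 300,
                    # Resolves to the private IP inside EC2, public outside.
                    "ResourceRecords": [
                        {"Value": "ec2-203-0-113-10.compute-1.amazonaws.com"},
                    ],
                },
            }]},
        )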

  • How do you keep up with Nagios/Capistrano configs when using EC2?

    - by imaginative
    I use Amazon EC2 for my mobile app. Depending on the load on the application at a given time, I might spawn new instances and then take them down when load is lower, to save costs. How does one keep up with Nagios configurations for such a dynamic environment? When one deals with managed hardware, configuration files are predictable. In this case Nagios, Capistrano and a bunch of other configuration files would need to be updated. Capistrano needs to know where to deploy a new build of an app server. Nagios needs to know to remove an existing instance or add a new instance for monitoring. Nagios also needs to know if a node was intentionally taken down or if the host is down due to error. How is this done in the wonderful world of VPS/dynamic instances?
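
    One common pattern is to stop hand-editing host definitions and instead regenerate them from the EC2 API on a schedule, then reload Nagios. A minimal sketch with boto3 (file path and template are assumptions):

        import boto3

        HOST_TEMPLATE = """define host {{
            use        generic-host
            host_name  {name}
            address    {ip}
        }}
        """

        ec2 = boto3.client("ec2")
        running = ec2.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )["Reservations"]

        with open("/etc/nagios/conf.d/ec2-hosts.cfg", "w") as cfg:
            for reservation in running:
                for inst in reservation["Instances"]:
                    cfg.write(HOST_TEMPLATE.format(
                        name=inst["InstanceId"],
                        ip=inst["PrivateIpAddress"],
                    ))
        # Then reload Nagios (e.g. via its init script) to pick up the file.

    An instance you take down on purpose simply disappears from the generated file on the next run, which also separates "intentionally terminated" from "host down in error"; the same generated inventory can feed Capistrano's role definitions.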

  • can a python script know that another instance of the same script is running... and then talk to it?

    - by Justin Grant
    I'd like to prevent multiple instances of the same long-running Python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way? Specifically, I'd like to enable the following behavior:

    1. "foo.py" is launched from the command line, and it will stay running for a long time -- days or weeks, until the machine is rebooted or the parent process kills it.
    2. Every few minutes the same script is launched again, but with different command-line parameters.
    3. When launched, the script should see if any other instances are running.
    4. If other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit.
    5. Instance #1, if it receives command-line parameters from another script, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform.

    So I'm looking for two things: how can a Python program know another instance of itself is running, and how can one Python command-line program communicate with another? Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and not any OS-specific calls. Although if I need to have a Windows codepath and a *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible. I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work), but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible; if a persistent-file-based approach is the only way to do it, I'm open to that option. More details: I'm trying to do this because our servers are using a monitoring tool which supports running Python scripts to collect monitoring data (e.g. results of a database query or web service call), which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query), so we've chosen to keep them running in an infinite loop until the parent process kills them. This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with 1 thread each to one process with 100 threads, each executing the work that, previously, one script was doing. But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep the invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command-line params) over to the "old" script.
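
    A minimal cross-platform sketch of that pattern, using only the standard library: the first instance claims a fixed localhost TCP port (the claim dies with the process, so there is nothing to clean up after a crash); later instances find the port taken, forward their argv over it, and exit. The port number and do_work are placeholders:

        import socket
        import sys
        import threading

        PORT = 47200  # assumed fixed, otherwise-unused local port

        def do_work(params):
            ...  # placeholder: the monitoring work for one parameter set

        def main():
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                # Claiming the port makes us the primary instance; the OS
                # releases it automatically if we die, even ungracefully.
                server.bind(("127.0.0.1", PORT))
                server.listen(5)
            except OSError:
                # Port taken: a primary is running. Hand over our args, exit.
                with socket.create_connection(("127.0.0.1", PORT)) as conn:
                    conn.sendall("\0".join(sys.argv[1:]).encode("utf-8"))
                return

            def accept_loop():
                while True:
                    conn, _ = server.accept()
                    data = conn.recv(65536).decode("utf-8")
                    conn.close()
                    # One new worker thread per forwarded parameter set.
                    threading.Thread(target=do_work,
                                     args=(data.split("\0"),),
                                     daemon=True).start()

            threading.Thread(target=accept_loop, daemon=True).start()
            do_work(sys.argv[1:])  # the primary instance's own parameters

        if __name__ == "__main__":
            main()

    One caveat: anything on the machine can connect to that port, so a real version should add a handshake token.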

  • How would I measure the amount of RAM needed per Glassfish domain? [closed]

    - by oligofren
    Possible Duplicate: Can you help me with my capacity planning?

    In our test environment we have a lot of apps spread out over a few servers and Glassfish domains. To make versioning easier, I would have liked to have one Glassfish domain per customer per app (kind of like a heavyweight version of lots of Jetty instances). But I have heard that Glassfish is kind of heavy on resources, so I would need to measure approximately how many instances would fit in the available RAM. These are low-traffic/low-load testing servers, so CPU is not really an issue, though RAM might be. How would I get an approximate measure of how much RAM is needed? This is one Glassfish 3 instance with one heavy EAR application deployed. top? jvmstat?
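
    A rough way to get that number, assuming Linux: start one representative domain with the EAR deployed, let it warm up, and read its resident set size out of /proc; dividing free RAM by that figure gives an approximate instance count. The "glassfish" match string is an assumption:

        import os

        def find_pids(match="glassfish"):
            # Scan /proc for processes whose command line mentions Glassfish.
            pids = []
            for pid in os.listdir("/proc"):
                if not pid.isdigit():
                    continue
                try:
                    with open(f"/proc/{pid}/cmdline", "rb") as f:
                        cmdline = f.read().decode(errors="replace")
                except OSError:
                    continue
                if match in cmdline:
                    pids.append(int(pid))
            return pids

        def rss_kib(pid):
            # VmRSS is the resident (actually-in-RAM) size, reported in kB.
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        return int(line.split()[1])
            return 0

        total = sum(rss_kib(p) for p in find_pids())
        print(f"Glassfish RSS: {total / 1024:.0f} MiB")

    Sizing the JVM heap (-Xmx) down for low-traffic domains first is worthwhile, since the heap usually dominates the footprint.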

  • pdflush hanging on Amazon EBS drives when using multi-GB files - any workaround?

    - by rhh
    Hello, when I run gunzip on a 1.7GB file (which generates an 8GB file) on an EBS volume, pdflush freezes after gunzip runs and the CPU hangs indefinitely at 100% IO wait. Here's the output from 'ps aux | grep pdflush' - note the D status:

        root        87  0.0  0.0      0     0 ?        D    06:18   0:00 pdflush
        root        88  0.0  0.0      0     0 ?        D    06:18   0:00 pdflush

    The only solution is to kill the pdflush processes, and even they don't die immediately. This problem is repeatable and happens with new instances. I'm running 2xlarge instances, and I have way more RAM free than is being used (i.e. /proc/meminfo shows 20+GB MemFree). Has anyone found a workaround to this problem in the past? Thanks for any thoughts. Robert
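
    A workaround to test rather than a known fix: with 20+ GB free, the default writeback thresholds let several GB of dirty pages pile up and then flush all at once, which slow EBS may not absorb. Lowering the thresholds makes pdflush start earlier and write smaller batches (values are starting guesses; requires root):

        # Tune Linux writeback so dirty pages are flushed early and often.
        settings = {
            "/proc/sys/vm/dirty_background_ratio": "1",  # start async flush at 1% of RAM
            "/proc/sys/vm/dirty_ratio": "5",             # throttle writers at 5% of RAM
        }
        for path, value in settings.items():
            with open(path, "w") as f:
                f.write(value)

    The same values can be made persistent as vm.dirty_background_ratio and vm.dirty_ratio entries in /etc/sysctl.conf.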

  • Separation of memory oriented process and CPU oriented process

    - by Jeevan Dongre
    I am a developer working for an e-commerce company, running an e-commerce application built with Ruby on Rails (Spree Commerce). I am presently running two medium instances in production: one is a high-memory instance with 3.8 GB of RAM and a single-core CPU, and the other is a high-CPU instance with a dual-core CPU. AWS calls them m1.medium and c1.medium instances respectively. My question is: is it possible to separate the processes according to whether they are CPU-intensive or memory-intensive, so that all the CPU-intensive processes run on the high-CPU instance and all the memory-intensive processes run on the high-memory instance? Is any tool available to identify those processes? Kindly give me some heads up. Thank you.
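
    One way to identify candidates, sketched with the third-party psutil package (the thresholds are arbitrary assumptions to tune against your workload):

        import psutil

        # Prime the CPU counters once, then sample each process.
        psutil.cpu_percent(interval=None)

        for p in psutil.process_iter(["pid", "name"]):
            try:
                cpu = p.cpu_percent(interval=0.2)        # % of one core
                rss_mb = p.memory_info().rss / (1024 * 1024)
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
            if cpu > 50:
                print(f"CPU-bound:    {p.info['name']} (pid {p.info['pid']}, {cpu:.0f}% CPU)")
            elif rss_mb > 500:
                print(f"memory-bound: {p.info['name']} (pid {p.info['pid']}, {rss_mb:.0f} MB RSS)")

    For a Rails/Spree stack the split often falls out as app server processes (memory-heavy) versus background workers and asset jobs (CPU-heavy), so process names alone may get you most of the way.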

  • Small footprint on a Laptop

    - by sqldebacle
    I am trying to find a solution where I can do the following: 1) run a small footprint on my laptop, and 2) run virtual instances of OSes with no primary OS installed - every OS I use would be virtualized. I tried playing around with VMware ESXi and got it to boot from the flash drive, etc. But this just runs the server; I cannot actually run my virtual instances from there. Has anyone done this? Something similar implemented with VMware products, without needing two computers, would be great. Thanks, -Subhash

  • Duplicating an instance into a new VPC from a Snapshot

    - by Remmus
    We have a group of instances in an Amazon VPC that we use for our live environment. We have a big release to do and want to test that the deployment will run smoothly. I have created a second VPC, created instances of the same size on the same private IPs, and then removed their original volumes and attached new volumes created from snapshots of the live environment. Unfortunately none of the instances will allow me to connect. They start running fine, but no system log output appears and I can't connect. The only thing I can think of is that the new instances were created from a new AMI, as the old one is deprecated due to new security fixes. Is this a problem? If so, can I fix it in any way? And if this isn't the problem, does anyone have any ideas how I can fix it?
