Search Results

Search found 3025 results on 121 pages for 'amazon ec2'.

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: an Amazon EC2 m1.medium instance running Ubuntu 12.04, with Apache 2.2.22 serving a Drupal site backed by a local MySQL server.

    RAM info:

        ~$ free -gt
                     total       used       free     shared    buffers     cached
        Mem:             3          1          2          0          0          0
        -/+ buffers/cache:          0          2
        Swap:            0          0          0
        Total:           3          1          2

    Hard drive info:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      7.9G  4.7G  2.9G  62% /
        udev            1.9G  8.0K  1.9G   1% /dev
        tmpfs           751M  180K  750M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            1.9G     0  1.9G   0% /run/shm
        /dev/xvdb       394G  199M  374G   1% /mnt

    The problem: about two days ago the site started failing because the MySQL server was killed, with the following message:

        kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld
        kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child
        kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB
        kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal
        kernel: [2963686.169294] init: mysql main process ended, respawning

    That says mysqld was occupying 0.9 GB of virtual memory, but my RAM has 2 GB free, so 1 GB should still have been left over. I understand that on Linux applications can allocate more memory than is physically available; I don't know if this is the problem. It's the first time this has happened. The MySQL server tries to restart, but apparently there's no memory for it and it won't come up. Here is its error log:

        Plugin 'FEDERATED' is disabled.
        The InnoDB memory heap is disabled
        Mutexes and rw_locks use GCC atomic builtins
        Compressed tables use zlib 1.2.3.4
        Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        Completed initialization of buffer pool
        Fatal error: cannot allocate memory for the buffer pool
        Plugin 'InnoDB' init function returned error.
        Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        Unknown/unsupported storage engine: InnoDB
        [ERROR] Aborting
        [Note] /usr/sbin/mysqld: Shutdown complete

    I simply restarted the MySQL service. About two hours later it happened again, and then again 9 hours later. That made me think of the MaxClients parameter in apache.conf, so I checked it: it was set to 150, and I dropped it to 60, like so:

        <IfModule mpm_prefork_module>
        ...
        MaxClients 60
        </IfModule>
        <IfModule mpm_worker_module>
        ...
        MaxClients 60
        </IfModule>
        <IfModule mpm_event_module>
        ...
        MaxClients 60
        </IfModule>

    Once I did that I restarted the apache2 service, and it all went smoothly for about three quarters of a day. Then, during the night, the MySQL service went down once again; this time mysqld itself invoked the OOM killer:

        kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
        kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0
        kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal

    Now I'm out of ideas. Some articles state that the ideal thing to do is to change the kernel's behaviour by adding the following to /etc/sysctl.conf, so that no overcommit takes place:

        vm.overcommit_memory = 2
        vm.overcommit_ratio = 80

    I'm wondering if this is the way to go? Keep in mind I'm no server administrator and have only basic knowledge. Thanks a bunch in advance.
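
    One detail visible in the free output above is that the instance has no swap at all. A common mitigation on swapless EC2 instances, alongside or instead of the overcommit sysctls, is to add a swap file so the kernel has headroom before the OOM killer reaches for mysqld; a minimal sketch, with the 1 GB size and /swapfile path as assumptions:

        # create and enable a 1 GB swap file
        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        # persist it across reboots
        echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

    Lowering innodb_buffer_pool_size in my.cnf is the other commonly suggested lever when the buffer-pool mmap is what fails.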

  • Optimise Apache for EC2 micro instance

    - by Shiyu Sekam
    I'm running apache2 on an EC2 micro instance with ~600 MB RAM. The instance ran for almost a year without problems, but in the last few weeks it keeps crashing because the server reaches MaxClients. The server runs just a few websites: one WordPress blog (not often used), the company website (most used) and 2 small sites which are purely internal. The database for the blog runs on RDS, so there's no MySQL running on this web server. When I came to the company the server was already set up running apache + mod_php + prefork. We want to migrate to nginx + php-fpm in the future, but that still needs testing, so for now I have to stick with the old setup. I also use CloudFlare DDoS protection in front of the server, because it was attacked a couple of times in the last weeks. My company doesn't want to pay for a better web server at this point, so I have to stick with the micro instance too. Additionally, the code for the website we run is really bad and slow, and a single page load can take up to 15 seconds. The whole website is dynamic and written in PHP, so caching isn't really an option here; it's a customized search for users. I've already turned off KeepAlive, which improved performance a little. My prefork config looks like the following:

        StartServers 2
        MinSpareServers 2
        MaxSpareServers 5
        ServerLimit 10
        MaxClients 10
        MaxRequestsPerChild 100

    The server just becomes unresponsive after running for a while. I ran the following command to see how many connections there are:

        netstat | grep http | wc -l
        75

    Restarting apache helps for a short moment, but after a while the apache processes become unresponsive again. I have the following modules enabled (output of apache2ctl -M):

        Loaded Modules:
         core_module (static)
         log_config_module (static)
         logio_module (static)
         version_module (static)
         mpm_prefork_module (static)
         http_module (static)
         so_module (static)
         alias_module (shared)
         authz_host_module (shared)
         deflate_module (shared)
         dir_module (shared)
         expires_module (shared)
         mime_module (shared)
         negotiation_module (shared)
         php5_module (shared)
         rewrite_module (shared)
         setenvif_module (shared)
         ssl_module (shared)
         status_module (shared)
        Syntax OK

    apache2.conf:

        # Security
        ServerTokens OS
        ServerSignature On
        TraceEnable On
        ServerName "web.example.com"
        ServerRoot "/etc/apache2"
        PidFile ${APACHE_PID_FILE}
        Timeout 30
        KeepAlive off
        User www-data
        Group www-data
        AccessFileName .htaccess
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        DefaultType none
        HostnameLookups Off
        ErrorLog /var/log/apache2/error.log
        LogLevel warn
        EnableSendfile On
        #Listen 80
        Include /etc/apache2/mods-enabled/*.load
        Include /etc/apache2/mods-enabled/*.conf
        Include /etc/apache2/ports.conf
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %b" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent
        Include /etc/apache2/conf.d/*.conf
        Include /etc/apache2/sites-enabled/*.conf

    Vhost of the main site:

        <VirtualHost *:80>
            ServerName www.example.com
            ## Vhost docroot
            DocumentRoot /srv/www/jenkins/Web
            ## Directories; there should at least be a declaration for /srv/www/jenkins/Web
            <Directory /srv/www/jenkins/Web>
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
            ## Load additional static includes
            ## Logging
            ErrorLog /var/log/apache2/www.example.com.error.log
            LogLevel warn
            ServerSignature Off
            CustomLog /var/log/apache2/www.example.com.access.log combined
            ## Rewrite rules
            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www.example.com$
            RewriteRule ^.*$ http://www.example.com%{REQUEST_URI} [R=301,L]
            ## Server aliases
            ServerAlias www.example.invalid
            ServerAlias example.com
            ## Custom fragment
            <Location /srv/www/jenkins/Web/library>
                Order Deny,Allow
                Deny from all
            </Location>
            <Files ~ "^\.(.+)">
                Order deny,allow
                deny from all
            </Files>
        </VirtualHost>
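
    With prefork + mod_php, MaxClients is usually sized from how much RAM an average Apache child really uses. A quick way to measure that on the box, assuming the Ubuntu process name apache2:

        # average resident size of the current apache2 children, in MB
        ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%.1f MB average over %d processes\n", sum/n/1024, n}'

    Dividing the memory you can spare by that average gives a defensible ceiling; on ~600 MB with 15-second page loads, even ten slow children can pin the instance, so a low ServerLimit combined with a short Timeout is often the practical lever.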

  • Restarting Haproxy Gracefully

    - by Anand Gupta
    As per various blogs, HAProxy can be gracefully restarted using the following command:

        sudo haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

    To verify this, I set up an Apache Bench script which continuously sent requests to HAProxy. Ideally, restarting the server should have no effect on the Apache Bench execution, but it seems that whenever HAProxy is restarted the Apache Bench script terminates and the connection to the load balancer is lost. Here are the details of my HAProxy configuration file:

        global
            nbproc 4
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            pidfile /var/run/haproxy.pid
            stats socket /home/ubuntu/haproxy.sock
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen webstats
            bind 0.0.0.0:1000
            stats enable
            mode http
            stats uri /lb?stats
            stats auth anand:aaaaaaaa
            #stats refresh

        listen web-farm 0.0.0.0:80
            mode http
            balance roundrobin
            option httpchk HEAD /index.php HTTP/1.0
            server server2.com 1.1.1.1:80
            server serve1.com 1.1.1.2:80

    Please let me know what I am missing here.
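
    For context: -sf is graceful only for established connections; while the old process releases the listening socket and the new one binds it, there is a tiny window in which incoming SYNs are refused, and that refusal is what aborts an ab run. A workaround long suggested in the HAProxy documentation is to silently drop new SYNs for the instant of the reload so clients retransmit instead of failing; a minimal sketch, assuming traffic on port 80:

        # drop new connection attempts during the reload; TCP retransmission hides the gap
        sudo iptables -I INPUT -p tcp --dport 80 --syn -j DROP
        sudo haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
        sudo iptables -D INPUT -p tcp --dport 80 --syn -j DROP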

  • Sun Java Realtime System on VirtualMachine / cloud

    - by portoalet
    Just wondering if anybody can run/compile applications for the Sun Java Real-Time System on a VM such as VMware, or in a cloud such as Amazon EC2? I know it is not ideal to run real-time Java on virtualized infrastructure, but it makes things easier. (Otherwise I just have to install SLES SP2 on physical hardware.)

  • What does %st mean in top?

    - by Ben
    Here is an example from my top:

        Cpu(s): 6.0%us, 3.0%sy, 0.0%ni, 78.7%id, 0.0%wa, 0.0%hi, 0.3%si, 12.0%st

    I am trying to figure out the significance of the %st field. I read that it means "steal" CPU and represents time spent by the hypervisor, but I want to know what that actually means to me. Does it mean I may be on a busy physical server, and someone else is using too much CPU on the server and taking it from my VM? If I am using EBS, could it be related to handling EBS I/O at the hypervisor level? Is it related to things running on my VM, or is it completely unaffected by me?

  • Why can't “knife data bag from file” find existing json file on chef server?

    - by ellisera
    Summary: I'm running into a problem with "knife data bag from file", where knife doesn't recognize the .json data bag file pulled down from a remote git repo.

    Background: I'm currently trying to transition from chef-solo to Chef Server while keeping the cookbooks, data bags and other Chef info from our remote git repo. I've pulled down a copy of our git repo and set the cookbook path and data bag path in knife.rb. I also loaded the cookbooks, made adjustments, etc.

    Details: When trying to load our .json data bags with "knife data bag add from file FOLDER FILE", it looks like it works, but then "knife data bag list" comes up blank. So I tried adding the edit option at the end to see what is being loaded, if anything. This is the error I get:

        knife data bag from file local_settings test.json -e nano
        ERROR: Could not find or open file 'test.json' in current directory or in 'data_bags/local_settings/test.json'

    The data bag file does exist, in the proper location, as a tested, working JSON file. I've also sometimes gotten an error saying: could not open data bag "local_settings". I would obviously like to keep the data bag path within the appropriate git repo folder, to be able to track changes in a more centralized location (our git repo, as opposed to the Chef server). Any solutions, advice or pointers in the right direction are appreciated.
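
    For comparison, knife resolves the file argument relative to the configured data_bag_path (or the current directory), and the bag itself must exist on the server before items can be loaded into it. A minimal sketch of the usual sequence, run from the repo root, with the repo path assumed:

        cd ~/chef-repo                       # the directory that contains data_bags/
        knife data bag create local_settings
        knife data bag from file local_settings data_bags/local_settings/test.json
        knife data bag list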

  • Update RDS db via mysqlbinlog: "you need (at least one of) the SUPER privilege(s)"

    - by timoxley
    We are moving a production site to EC2/RDS, following these instructions: http://geehwan.posterous.com/moving-a-production-mysql-database-to-amazon. I set up row-based binary logging on the production server, did a

        mysqldump --single-transaction --master-data=2 -C -q -u root -p > backup.sql

    and imported that into the RDS instance, no dramas. Due to the size of the db and minimal downtime requirements, I now have to bring the RDS db up to the latest data via the binlogs, and it won't let me:

        mysqlbinlog mysql-bin.000004 --start-position=360812488 | mysql -uroot -p -h

    It says:

        ERROR 1227 (42000) at line 6: Access denied; you need (at least one of) the SUPER privilege(s) for this operation

    My guess, based on what is on line 6 of the binlog, is that it's the 'write to the BINLOG' statements in the SQL backup; because RDS doesn't support this it can't run those statements, or something. I don't really know. Please help.
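
    One way to confirm that guess before choosing a fix is to look at exactly what statement sits at line 6 of the stream being piped in, without executing anything. Both the SET @@session... header statements that mysqlbinlog emits and the base64 BINLOG '...' events produced by row-based logging are known to require SUPER, which the RDS master user does not have:

        # inspect the first lines of the decoded binlog; nothing is executed
        mysqlbinlog mysql-bin.000004 --start-position=360812488 | head -n 20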

  • Unexpected behaviour when dynamically adding a node to an HAProxy server

    - by Anand Soni
    I wanted to use HAProxy for my web app for load balancing. I am trying to add a new rabbitmq node dynamically to the HAProxy server using the command:

        haproxy -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

    I am using tcp mode with the leastconn balance algorithm. What I expect is that when one rabbitmq server has 3 connections and I add a new rabbitmq server to HAProxy, the next connection should go to the 2nd rabbitmq server; that is not happening in my case. It distributes the connections haphazardly. Here is my config file:

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 5000
            srvtimeout 5000

        listen rabbitmq 0.0.0.0:5672
            mode tcp
            stats enable
            balance leastconn
            option tcplog
            server rabbit01 xx.xx.xx.xx:5672 check
            server rabbit02 xx.xx.xx.xx:5672 check

        listen tomcatq 0.0.0.0:80
            mode http
            stats enable
            balance roundrobin
            stats refresh 10s
            stats uri /lb?stats
            stats auth admin:admin
            option httplog

    What is the problem causing this behavior? Any suggestion will be appreciated.
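
    One likely piece of the explanation: a -sf reload starts a brand-new haproxy process, and the new process begins with zero connection counts; it does not inherit the old process's view that one server already holds 3 connections, so leastconn decisions immediately after a reload can look arbitrary. To see what the running process actually counts per server, you can query the stats socket (this assumes a line such as stats socket /var/run/haproxy.sock is added to the global section, and that socat is installed):

        # pxname, svname, scur, smax from the stats CSV
        echo "show stat" | socat stdio /var/run/haproxy.sock | cut -d, -f1,2,5,6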

  • Problem when setting up Ubuntu Enterprise Cloud

    - by Patrick
    Hi, I have a problem when setting up Ubuntu Enterprise Cloud. When I use this command:

        euca-authorize default -P tcp -p 22 -s 0.0.0.0/0

    the system says:

        EC2_ACCESS_KEY environment variable must be set.

    Can anyone help me on this? How can I solve this problem? Thank you. Patrick
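
    The euca2ools commands read credentials from environment variables, which on UEC normally come from the eucarc file inside the credentials archive downloaded from the cloud controller. A minimal sketch, with the archive name and unpack location as assumptions:

        mkdir -p ~/.euca && cd ~/.euca
        unzip mycreds.zip        # the downloaded credentials archive (name assumed)
        . ~/.euca/eucarc         # sets EC2_ACCESS_KEY, EC2_SECRET_KEY, EC2_URL, ...
        euca-authorize default -P tcp -p 22 -s 0.0.0.0/0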

  • How do you create large, growable, shared filesystems on Linux at AWS?

    - by Reece
    What are acceptable/reasonable/best ways to provide large, growable, shared storage at AWS, exposed as a single filesystem? We're currently making 1 TB EBS volumes roughly biweekly and NFS-exporting them with no_subtree_check and nohide, so that distinct exports appear under a single mount on the client. This arrangement does not scale well. The options we've considered:

        - LVM2 with ext4: resize2fs is too slow.
        - Btrfs on Linux: not obviously ready for prime time yet.
        - ZFS on Linux: not obviously ready for prime time yet (although LLNL uses it).
        - ZFS on Solaris: the future of this combo is uncertain (to me), and it adds a new OS to the mix.
        - glusterfs: heard mostly good things, but two scary (and maybe old?) stories.

    The ideal solution would provide sharing, a single fs view, easy expandability, snapshots, and replication. Thanks for sharing ideas and experience.
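
    For readers weighing the first option, the grow path being judged there ("resize2fs is too slow") is the standard add-a-volume flow; a sketch with all device and volume-group names assumed:

        # extend an LVM-backed ext4 filesystem with a freshly attached EBS volume
        sudo pvcreate /dev/xvdg
        sudo vgextend data_vg /dev/xvdg
        sudo lvextend -l +100%FREE /dev/data_vg/data_lv
        sudo resize2fs /dev/data_vg/data_lv    # online grow; this is the slow step cited above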

  • “Disk /dev/xvda1 doesn't contain a valid partition table”

    - by Simpanoz
    I am a newbie to EC2 and Ubuntu 11 (EC2 free tier Ubuntu). I ran the following commands:

        sudo mkfs -t ext4 /dev/xvdf6
        sudo mkdir /db
        sudo vim /etc/fstab      # added the line: /dev/xvdf6 /db ext4 noatime,noexec,nodiratime 0 0
        sudo mount /dev/xvdf6 /db
        fdisk -l

    I got the following output. Can someone guide me on what I am doing wrong and how it can be rectified?

        Disk /dev/xvda1: 8589 MB, 8589934592 bytes
        255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/xvda1 doesn't contain a valid partition table

        Disk /dev/xvdf6: 6442 MB, 6442450944 bytes
        255 heads, 63 sectors/track, 783 cylinders, total 12582912 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/xvdf6 doesn't contain a valid partition table.
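
    For what it's worth, that fdisk warning is usually harmless on EC2: both devices carry a filesystem written directly onto the block device with no partition table at all, which is the normal layout when mkfs is run on a whole EBS device, exactly as done above. One quick way to confirm the filesystems are really there:

        # print what is actually on each block device
        sudo file -s /dev/xvda1
        sudo file -s /dev/xvdf6    # should report ext4 filesystem data if the mkfs succeeded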

  • ephemeral vs EBS partitions

    - by hortitude
    I launched an EBS-backed AMI with all the defaults and noticed that it automatically had an ephemeral disk attached. I was wondering whether there is a good programmatic way to know that this particular device is ephemeral rather than some EBS volume I had decided to attach:

        ubuntu@-----:~$ df -ahT
        Filesystem  Type        Size  Used Avail Use% Mounted on
        /dev/xvda1  ext4        7.9G  867M  6.7G  12% /
        proc        proc           0     0     0    - /proc
        sysfs       sysfs          0     0     0    - /sys
        none        fusectl        0     0     0    - /sys/fs/fuse/connections
        none        debugfs        0     0     0    - /sys/kernel/debug
        none        securityfs     0     0     0    - /sys/kernel/security
        udev        devtmpfs    1.9G   12K  1.9G   1% /dev
        devpts      devpts         0     0     0    - /dev/pts
        tmpfs       tmpfs       751M  172K  750M   1% /run
        none        tmpfs       5.0M     0  5.0M   0% /run/lock
        none        tmpfs       1.9G     0  1.9G   0% /run/shm
        /dev/xvdb   ext3        394G  199M  374G   1% /mnt

        ubuntu@-----:~$ mount
        /dev/xvda1 on / type ext4 (rw)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        /dev/xvdb on /mnt type ext3 (rw,_netdev)
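
    The instance metadata service can answer this from inside the instance: its block-device-mapping tree names every attached device by role (ami, ephemeral0, ebs1, ...). A minimal sketch:

        # list the device roles, then resolve the ephemeral mapping to a device name
        curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/
        curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0
        # typically prints sdb, which this kernel exposes as /dev/xvdb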

  • Mail Server using Postfix

    - by unknown (google)
    I have currently set up my web application on an Amazon EC2 server. As is well known, sending email from EC2 is a problem. As a cheap and long-lasting solution, instead of using "authsmtp", is it possible to rent a server and use it as a mail server? I am currently looking for cheap hosting which will give me root access, so that it can be configured and used as a relayhost. I am currently using Postfix as the MTA. Has anyone implemented this before? I am curious about the feasibility of this solution. I guess the common requirements are:

        - a dedicated IP which is not blacklisted
        - an open relay (open to my server only)
        - any tips for header configuration to keep the mail out of the spam folder

    This is like exactly cloning authsmtp for personal use. Any suggestions for other mail server software instead of Postfix?
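
    On the EC2 side, pointing the existing Postfix at such a box is only a few lines in main.cf; a hedged sketch, with the relay hostname and submission port as placeholders:

        # /etc/postfix/main.cf on the EC2 instance (relay host and port are assumptions)
        relayhost = [relay.example.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous

    followed by postmap /etc/postfix/sasl_passwd and a Postfix reload.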

  • Should I have a Heroku worker dyno to poll an AWS SQS queue?

    - by Luccas
    I'm confused about where I should have a script polling an AWS SQS queue inside a Rails application. If I use a thread inside the web app, it will probably burn CPU cycles listening to this queue forever, affecting performance. And if I reserve a single Heroku worker dyno, it costs $34.50 per month. Does it make sense to pay this price just to poll a single queue, or is this not a case for using a worker? The script code:

        queue = AWS::SQS::Queue.new(SQSADDR['my_queue'])
        queue.poll(:idle_timeout => 20) do |msg|
          # code here
        end

    I need help!! Thanks

  • NIC disabled in AWS instance

    - by Elad Lachmi
    Nothing makes you feel dumber than disabling a NIC while in RDP, but here I am :) I have mounted the volume on another instance and tried editing the registry. I have tried enabling auto logon and using runonce to run a netsh command to enable the NIC, but that does not work. I read something about enabling the NIC through the registry directly, but have had no luck in doing this. Has anyone dealt with this type of issue? I'm going nuts! Thank you!

  • cloud and existing enterprise applications technologies

    - by maxxxee
    What is the significance of new cloud platforms and databases like Microsoft Azure and Amazon EC2? Are they a replacement for enterprise application platforms like .NET or JEE in a cloud environment? Is it necessary to use these or other cloud-specific platforms, or can we implement .NET or JEE on a cloud-based environment?

  • Redirect some URL requests to CloudFront and the rest direct to the normal server?

    - by indiehacker
    Say I have two types of URL requests that must be handled by my REST API:

        http://query.restapi.com/image.png?apikey=abc123
        http://query.restapi.com/2.0/<apiKey>/resource.json?from=umi.us_census00.state_geometry

    Is it possible to redirect only the URL requests for static images (i.e., regex: *.png?.*) to take advantage of CloudFront's caching, and have the rest of the requests go directly to the normal EC2 server (or at least take a speedier indirect route to it)? Perhaps the added request time for the misses to CloudFront is irrelevant to worry about? Or perhaps my situation is not a good fit for CloudFront? I understand I will need to make a DNS change so that current URL requests like http://query.restapi.com/some.png?apikey=0123 get redirected to http://d1234.cloudfront.net/some.png, but I am hoping there is some way to redirect just the static .png requests to take advantage of CloudFront.

  • The future of cloud computing? [closed]

    - by Vimvq1987
    As far as I know, cloud computing is growing rapidly: Amazon EC2, Google App Engine, Microsoft Windows Azure... But I can't imagine how cloud computing will change the world. Will cloud computing play the main role in the software industry? Will our data be stored in one place and then be accessible from anywhere? Will we no longer need powerful PCs, because everything will be processed in the "cloud"? Thank you so much.

  • Trying to send email from nagios

    - by batman
    I'm very new to Nagios and I'm trying to send email alerts, but that doesn't seem to be working. In my Nagios log I can see this:

        SERVICE ALERT: Appserver;Tmp directory;CRITICAL;HARD;1;

    Host notifications are delivered via email; only the service alerts are not working. And when I look at the sendEmail log I can see this:

        Sep 14 12:38:39 x.x.x.x. sendEmail[23005]: ERROR => You must specify a 'from' field! Try --help.
        Sep 14 12:39:39 x.x.x.x.x. sendEmail[23129]: ERROR => You must specify a 'from' field! Try --help.
        Sep 14 12:40:39 x-x-x-x-x sendEmail[23233]: ERROR => You must specify a 'from' field! Try --help.

    Where am I making a mistake? Thanks in advance.
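
    The error is sendEmail's own: it aborts when invoked without a sender address. The usual fix is an explicit -f in the command_line of the service notification command in the Nagios configuration; a hedged sketch of just the invocation, with addresses and SMTP host as placeholders:

        # service notification command body (the Nagios macros expand at notification time)
        /usr/bin/sendEmail -f nagios@example.com -t "$CONTACTEMAIL$" \
          -u "** $NOTIFICATIONTYPE$: $SERVICEDESC$ is $SERVICESTATE$ **" \
          -m "$SERVICEOUTPUT$" -s smtp.example.com:25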

  • How can I setup nginx to serve virtualhosts with rails(unicorn/passenger) and php-fpm

    - by NewAlexandria
    I would like to serve multiple sites on one instance. I installed nginx, php-fpm, and a Rails app, using sites like this to guide me. I configured php-fpm to listen on a local socket:

        listen = /var/run/php-fpm/php-fpm.sock

    I configured nginx with multiple hosts:

        include /etc/nginx/conf.d/*.conf

    I have several PHP site conf files like /etc/nginx/conf.d/site1.conf:

        server {
            listen 80;
            server_name site1.com www.site1.com;
            root /var/www/site1;
            location / {
                index index.html index.php;
            }
            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
            }
        }

    and Rails site conf files like:

        upstream rails {
            server 127.0.0.1:3000;
        }
        server {
            listen 80;
            server_name site2.com www.site2.com;
            root /var/www/site2;
            location / {
                proxy_pass http://rails;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Url-Scheme $scheme;
            }
        }

    I have a unicorn Rails server running via rails s -p 3000. Yet no sites come up for either site1.com or site2.com, although I can get to the Rails site at www.site2.com:3000. What is wrong? I've spent 2 days (nearly 30 hours) trying many different blogs, SO/SF questions, etc. Please share your insight or answer.

    Edit 1: No log entries are created when I try to visit either site. It's like the requests never come in.
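
    Given that nothing reaches the logs, a first hedged check is whether nginx is listening on port 80 at all and whether a request with the right Host header is routed locally; this separates an nginx/vhost problem from DNS or an EC2 security group dropping port 80:

        sudo nginx -t                                      # config syntax and include paths
        sudo netstat -tlnp | grep ':80 '                   # is nginx actually bound to port 80?
        curl -v -H 'Host: site2.com' http://127.0.0.1/     # bypass DNS and test vhost routing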

  • Issue with aborted MySQL connections (error code: 4)

    - by arikfr
    Some of the connections between my application server (Ubuntu, Apache, PHP) and my DB server (Ubuntu, MySQL) are failing with error code 4. According to the documentation, error code 4 is:

        OS error code 4: Interrupted system call

    At first I thought the issue might be that the DB server has too many connections and fails because too many files are open. But that seems not to be the case, because:

        - "Too many open files" has a different error code (24).
        - I've checked, and during peak time the server had 497 files open (checked using the lsof command) while the maximum is 1024.

    The TCP settings were already checked (see prior question). Any ideas what this can be, or what else I should check?
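
    For anyone triaging something similar, the server-side counters are a cheap first step; climbing Aborted_clients/Aborted_connects values point at timeouts or network drops rather than descriptor pressure. A hedged sketch:

        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Aborted_%';"
        mysql -u root -p -e "SHOW VARIABLES LIKE '%timeout%';"
        ulimit -n    # the per-process fd limit on the DB host (the 1024 mentioned above)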

  • Blocking IP addresses Load Balanced Cluster

    - by Dom
    Hi, we're using HAProxy as a front-end load balancer/proxy and are looking for ways to block random IP addresses from jamming the cluster. Is anyone familiar with a conf for HAProxy that can block requests if they exceed a certain threshold from a single IP within a defined period of time? Or can anyone suggest a software solution which could be placed in front of HAProxy to handle this kind of blocking? Thanks, Dom
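
    HAProxy 1.5's stick tables can do this natively by tracking per-source connection rate; a minimal sketch, with the table size and the 50-connections-per-10-seconds threshold as assumptions to tune:

        frontend www
            bind :80
            stick-table type ip size 200k expire 2m store conn_rate(10s)
            tcp-request connection track-sc1 src
            tcp-request connection reject if { sc1_conn_rate gt 50 }
            default_backend cluster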

  • Need to scale quickly. Which cloud service should I use?

    - by mk1000
    Traffic to my Facebook app is growing at an insane rate and I need some suggestions on how to scale. I'm probably not even going to be able to keep it running by day's end, as it's hosted on my already overloaded dedicated server. I need to move it either to its own box or to a cloud service like EC2. Something like EC2 seems like the way to go, but my server admin skills are terrible. Is there a good front-end management UI for EC2, or another hosting service comparable in cost that is fully managed? I don't mind going with something a bit more expensive now if that means I can get everything switched over and running within 24 hours.
