Search Results

Search found 2923 results on 117 pages for 'amazon ami'.


  • PHP and load balancing

    - by StCee
    I have one major domain, but the server spec behind it is not good enough. Hence I want to relay the traffic, in particular PHP/MySQL queries, to multiple smaller servers. How is that normally done? (BTW, I wonder how much traffic, or how many PHP/MySQL requests, a normal setup on an EC2 micro instance can handle?) I did have a look at the EC2 load balancer, but is it only possible to load balance across machines under your own account?
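
    A common pattern, sketched below on the assumption that nginx fronts several PHP backends while MySQL stays on its own host, is to let a reverse proxy spread requests over an upstream pool (all hostnames and paths here are placeholders):

        sudo tee /etc/nginx/conf.d/php-pool.conf <<'EOF'
        # Pool of PHP backends; requests are distributed round-robin by default.
        upstream php_backends {
            server app1.internal:80;
            server app2.internal:80;
        }
        server {
            listen 80;
            server_name www.example.com;
            location / {
                proxy_pass http://php_backends;
                proxy_set_header Host $host;
            }
        }
        EOF
        sudo nginx -s reload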

    Read the article

  • Dynamically changing one-node Cassandra cluster to two nodes

    - by Jason Axelson
    So I have an application that will be very dormant most of the time but will need to handle high bursts a few days out of the month. Since we are deploying on EC2, I would like to keep only one Cassandra server up most of the time, and then on burst days bring one more server up (with more RAM and CPU than the first) to help serve the load. What is the best way to do this? Should I take a different approach? Some notes about what I plan to do:
    - Bring the node up and repair it immediately
    - After the burst time is over, decommission the powerful node
    - Use the always-on server as the seed node
    My main question is how to get the nodes to share all the data, since I want a replication factor of 2 (so both nodes have all the data), but that won't work while there is only one server. Should I bring up 2 extra servers instead of just one?
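
    For what it's worth, the replication-factor side of this can be changed at runtime; a minimal sketch, assuming a Cassandra recent enough for CQL 3 and a keyspace named app_ks (the name is a placeholder), raises the factor once the second node has joined and then repairs:

        # Raise RF to 2 after the second node has joined the ring:
        echo "ALTER KEYSPACE app_ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};" | cqlsh
        # Stream the existing data onto the new replica:
        nodetool repair app_ks
        # When the burst is over, push data back and remove the temporary node:
        nodetool decommission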

    Read the article

  • How can I force all requests to be SSL when using EC2 load balancer?

    - by chris
    I currently have a single EC2 instance which is forcing all requests to be secure by using mod_rewrite:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} !443
        RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R,L]

    I am planning on moving to a load balanced setup, with multiple back-end instances. If I set up my EC2 load balancer with my certs, do I need to use SSL to communicate between the LB and my instances? If not, is it as simple as replacing the RewriteCond with:

        RewriteCond %{HTTP:X-Forwarded_Proto} ^http$

    Edit: I tried using the x-forwarded-proto, but it does not appear to work. Is there another way to detect if someone is connected to the LB via SSL?
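
    For reference, a minimal sketch of the header-based redirect, assuming Apache sits behind an ELB that terminates SSL and forwards plain HTTP (the conf path and redirect code are assumptions); note the header is matched with hyphens rather than an underscore:

        sudo tee /etc/apache2/conf.d/force-https.conf <<'EOF'
        RewriteEngine On
        # The ELB sets X-Forwarded-Proto to "https" on connections it terminated with SSL.
        RewriteCond %{HTTP:X-Forwarded-Proto} !=https
        RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
        EOF
        sudo service apache2 reload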

    Read the article

  • Bootstrapped Ubuntu 12.04 EC2 instance. Where to find log?

    - by nocode
    So I bootstrapped a shell script to install and run a bunch of tasks. It looks like it ran for the most part, but the one part I added, formatting an extra EBS volume, doesn't seem to have taken effect. Pretty straightforward:

        mkfs.ext4 /dev/xvdf
        mkdir -m 000 /vol01
        echo "/dev/xvdf /vol01 auto noatime 0 0" | sudo tee -a /etc/fstab
        sudo mount /vol01

    I was able to install MongoDB, NGINX and Forever. I selected /dev/xvdf in the AWS console and can see the device. The fstab entry (third line above) is not there either. I've searched through various logs in /var/log/ but I don't really see much indicating the execution of the bootstrap. Logs I have looked through: auth.log, boot.log, dmesg, dpkg.log, syslog, udev.
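
    On Ubuntu cloud images the user-data script is executed by cloud-init, so its activity is usually recorded outside the standard syslog files; a hedged place to look (paths are the cloud-init defaults):

        # cloud-init's own log, plus whatever the user-data script reported:
        sudo less /var/log/cloud-init.log
        # The script that was actually run is kept on disk and can be inspected or re-run by hand:
        sudo ls /var/lib/cloud/instance/scripts/
        sudo grep -i xvdf /var/log/cloud-init.log /var/log/boot.log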

    Read the article

  • Why don't cfn-init logs get sent by rsyslog?

    - by Jon M
    I just signed up for Papertrail to aggregate logs from some AWS instances I'm setting up with CloudFormation::Init. I've followed the instructions and added *.* @logs.papertrailapp.com to the end of '/etc/rsyslog.conf'. Some logs are showing up on Papertrail, but notably the contents of '/var/log/cfn-init.log' never get there, and those are the ones I'm interested in right now. Have I set up rsyslog incorrectly? Or do the CloudFormation::Init scripts just not use syslog to write log information?
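
    If cfn-init really does write only to its own file rather than to syslog (as the question suspects), one workaround is to have rsyslog tail the file with its imfile module and forward it like everything else; a sketch using the legacy directive syntax, with the conf filename and tag as assumptions:

        sudo tee /etc/rsyslog.d/30-cfn-init.conf <<'EOF'
        $ModLoad imfile
        $InputFileName /var/log/cfn-init.log
        $InputFileTag cfn-init:
        $InputFileStateFile stat-cfn-init
        $InputFileSeverity info
        $InputRunFileMonitor
        EOF
        sudo service rsyslog restart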

    Read the article

  • EC2 Instance of Wordpress not mapping URLs correctly

    - by Benjamin
    I'm using an AWS EC2 micro instance to run a WordPress blog. I've successfully mapped a subdomain to the Elastic IP for the micro instance. After a few minor changes, the URL I mapped to the Elastic IP (blog.example.com) opens up the WordPress home page, but whenever I click on any of the WordPress links the domain changes to the AWS public DNS for that instance (http://ec2-123-45-678-910.compute-1.amazonaws.com/wordpress/). How do I fix the URLs so that they all follow the subdomain I have set up?
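
    WordPress builds its links from the site URL recorded at install time, so one common fix is to update the stored options; a sketch, where the database name and the default wp_ table prefix are assumptions:

        mysql -u root -p wordpress -e \
          "UPDATE wp_options SET option_value = 'http://blog.example.com' WHERE option_name IN ('siteurl', 'home');"

    Alternatively, WP_HOME and WP_SITEURL can be defined in wp-config.php to the same effect.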

    Read the article

  • Cannot send email from EC2 instance on port 587

    - by Tahsin Mostafiz
    I have written a mail service for our Flask application that uses Celery and RabbitMQ to send emails (via Gmail). I have the Celery consumer and producer communicating okay, but I cannot get it to send emails. I am getting socket.error: [Errno 101] Network is unreachable. I think this means that AWS is blocking port 587, even though in my security group I opened both ports 587 and 25 (inbound and outbound). Any idea why this is happening? Any help will be highly appreciated.
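
    A quick way to separate a network problem from an application one is to test the SMTP port directly from the instance; both commands below are standard tools, and a successful banner would point the blame back at the mail code or its configuration:

        # Does anything answer on 587 at all?
        nc -zv smtp.gmail.com 587
        # Can a STARTTLS session be negotiated? (Ctrl-C to exit after the banner.)
        openssl s_client -starttls smtp -connect smtp.gmail.com:587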

    Read the article

  • With a node.js powered server on EC2, how can I decrease the TCP connection time?

    - by talentedmrjones
    While profiling my application I've noticed that in the Firebug Net panel, the "Connecting" time, that is, the time spent waiting for a TCP connection, is consistently around 70-100 ms. Of course, in the grand scheme of things 100 ms is not long, but I have seen other services that respond with a 0 ms connect time. So if other servers can, I should be able to as well. Any thoughts on how I might even begin to troubleshoot this?
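
    One way to start is to measure where the time actually goes from the command line rather than the browser; curl's timing variables split the DNS lookup from the TCP handshake (the URL is a placeholder), and a connect time that simply tracks the network round-trip to the EC2 region would point away from the node.js server itself:

        curl -o /dev/null -s -w 'dns: %{time_namelookup}s  tcp connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
          http://example.com/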

    Read the article

  • Logs show lots of user attempts from unknown IP

    - by rodling
    I lost access to my instance, which I host on AWS; key-pair authentication stopped working. I detached the volume and attached it to a new instance, and what I found in the logs was a long list of entries like:

        Nov 6 20:15:32 domU-12-31-39-01-7E-8A sshd[4925]: Invalid user cyrus from 210.193.52.113
        Nov 6 20:15:32 domU-12-31-39-01-7E-8A sshd[4925]: input_userauth_request: invalid user cyrus [preauth]
        Nov 6 20:15:33 domU-12-31-39-01-7E-8A sshd[4925]: Received disconnect from 210.193.52.113: 11: Bye Bye [preauth]

    where "cyrus" is replaced by hundreds if not thousands of common names and words. What could this be? A brute-force attack or something else malicious? I traced the IP to Singapore, and I have no connection to Singapore. My thought is that this was a DoS attack, since I lost access and the server seemed to stop working. I'm not too versed in this, but ideas and solutions for this issue are welcome.
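
    The pattern above is the classic SSH dictionary scan; a couple of hedged checks against the recovered log, plus a common mitigation (the mount point of the recovered volume is a placeholder):

        # How big was the scan?
        grep 'Invalid user' /mnt/recovered/var/log/auth.log | wc -l
        # Which source addresses were involved?
        grep 'Invalid user' /mnt/recovered/var/log/auth.log | awk '{print $NF}' | sort | uniq -c | sort -rn | head
        # Check whether any attempt actually succeeded before assuming it was only noise:
        grep 'Accepted' /mnt/recovered/var/log/auth.log | tail
        # On the replacement instance, fail2ban plus a tightened security group cuts this traffic off:
        sudo apt-get install -y fail2ban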

    Read the article

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks if a successful HTTP response (code 200) is returned on port 80. The test fails due to a timeout (no response after 30 seconds). Here's what we've already checked, without success:
    - Since we run our webserver behind a load balancer, I've set the Pingdom test on the load balancer's public DNS and the webserver's public DNS in order to find out if there's a problem with the AWS load balancer; both tests return the same result.
    - We set up Munin on our webserver. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only checks every 5 minutes).
    - I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
    - I have checked /etc/cron.weekly and /etc/crontab for suspicious entries.
    - I have searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):

          touch -t 201209020000 start
          touch -t 201209020015 end
          find / -newer start -and ! -newer end

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!
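
    Since Munin's 5-minute resolution is too coarse for a 2-minute outage, one approach is to capture finer-grained data around the window; a sketch that polls Apache locally every 5 seconds (taking the load balancer and the Internet out of the picture) and records system activity at 1-minute resolution, with the log paths as placeholders:

        # Local HTTP probe with timestamps and response times:
        while true; do
          printf '%s %s\n' "$(date -u '+%F %T')" \
            "$(curl -s -o /dev/null -w '%{http_code} %{time_total}' --max-time 30 http://localhost/)" \
            >> "$HOME/http-probe.log"
          sleep 5
        done &
        # sysstat's sar can record CPU/IO/memory every minute for later inspection of the outage window:
        sudo apt-get install -y sysstat
        sar -o "$HOME/sar-weekend.bin" 60 1500 &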

    Read the article

  • How to keep multiple servers in sync file wise?

    - by GForceSys
    I'm currently managing a cluster of PHP-FPM servers, all of which tend to get out of sync with each other. The application that I'm using on top of the app servers (Magento) allows admins to modify various files on the system, but now that the site is in a clustered setup, modifying a file only modifies it on a single instance (on one of the app servers) of the various machines in the cluster. Is there an open-source application for Linux that may allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from machines to sync. In theory, the perfect application would have small clients that run on each machine to be synced, which would talk to the master server, which would then decide how/what to sync from each machine. I have already examined the possibility of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. As there are multiple app servers (some of which are dynamically created depending on the load of the site), simply setting up an rsync cron job is not efficient, as the cron job would have to be modified on each machine to send files to every other machine in the cluster, and that would just be a whole bunch of unnecessary data transfers/ssh connections.
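
    lsyncd (inotify plus rsync over ssh) is one open-source tool that matches this one-master-pushes-to-clients shape; a minimal sketch for the machine the admins edit on, with hostnames and paths as placeholders:

        sudo apt-get install -y lsyncd
        sudo tee /etc/lsyncd/lsyncd.conf.lua <<'EOF'
        settings { logfile = "/var/log/lsyncd.log", statusFile = "/var/log/lsyncd.status" }
        -- One sync block per app server; changes under the source path are pushed within seconds.
        sync { default.rsyncssh, source = "/var/www/magento", host = "app1.internal", targetdir = "/var/www/magento" }
        sync { default.rsyncssh, source = "/var/www/magento", host = "app2.internal", targetdir = "/var/www/magento" }
        EOF
        sudo service lsyncd restart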

    Read the article

  • options for physical architecture of rails site regarding caching server or cdn

    - by timpone
    I have a Rails app that currently sits on a single server. In production I force_ssl for everything. I am interested in using a caching server for images (I'm fine with CSS and JS being served from the origin for the time being). Would nginx or Varnish (which I have no experience with) be the better solution (as of October 2012)? I'd imagine it would be easy to switch these around while still on this single-server architecture. Or would something like CloudFront (which I also have no experience with) make sense for hosting image files? I know this is a vague question, but I appreciate any current feedback. Thanks in advance.
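
    One detail worth weighing: Varnish does not terminate SSL, so with force_ssl something still has to sit in front of it, whereas nginx can both terminate TLS and cache. A minimal sketch of an image-only proxy cache in nginx, with cert paths, hostnames, ports and sizes all as placeholders:

        sudo tee /etc/nginx/conf.d/image-cache.conf <<'EOF'
        proxy_cache_path /var/cache/nginx/images levels=1:2 keys_zone=images:10m max_size=1g inactive=7d;
        server {
            listen 443 ssl;
            server_name www.example.com;
            ssl_certificate     /etc/ssl/example.crt;
            ssl_certificate_key /etc/ssl/example.key;
            location ~* \.(png|jpe?g|gif)$ {
                proxy_cache       images;
                proxy_cache_valid 200 30m;
                proxy_pass        http://127.0.0.1:8080;   # upstream Rails/app server
            }
            location / {
                proxy_pass http://127.0.0.1:8080;
            }
        }
        EOF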

    Read the article

  • Need to scale quickly. Which cloud service should I use?

    - by mk1000
    Traffic to my Facebook app is growing at an insane rate and I need some suggestions on how to scale. I'm probably not going to even be able to keep it running by the day's end, as it's hosted from my already overloaded dedicated server. I need to either move it to its own box or a cloud service like EC2. Something like EC2 seems like the way to go, but my server admin skills are terrible. Is there a good front-end management UI for EC2, or another hosting service comparable in cost that is fully managed? I don't mind going with something a bit more expensive now if that means I can get everything switched over and running within 24 hours.

    Read the article

  • Large volume at /mnt on AWS instance

    - by rhaag71
    I know this is probably a somewhat 'dumb' question :) I have an AWS (small) instance and I just noticed that there is a ~150 GB volume attached at /mnt; is this normal? It kinda freaked me out, I was thinking maybe someone was trying to capture whatever I mount in /mnt. There is an entry in my fstab too (and I found by googling that others have this); the entry is as follows:

        /dev/xvdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2

    I don't have any volumes this large in my AWS volumes section though. I was just trying to understand this and be sure that someone is not trying to 'get in'... as there are many attempts daily. Thanks
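
    That volume is almost certainly the instance's own ephemeral (instance-store) disk, which cloud-init mounts at /mnt by default and which never shows up under EBS volumes in the console; a hedged way to confirm it from the instance metadata service:

        # List the block-device mapping the instance was launched with:
        curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/; echo
        # "ephemeral0" mapping to sdb/xvdb confirms it is instance-store rather than an attached EBS volume:
        curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0; echo
        df -h /mnt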

    Read the article

  • EC2: map multiple applications to different domains

    - by EsseTi
    I'm playing with EC2 and I've been able to create my instance, which has a Django application on port 80 and a Tomcat on 8080. With an Elastic IP I can manage to point my domain at the Django application. Now I would like to map subdomains to each Tomcat application, for example:

        django app (ec2...:80)          --> mydomain.com
        tomcat (ec2...:8080)            --> tomcat.mydomain.com
        webapp1 (ec2...:8080/webapp1/)  --> webapp1.mydomain.com

    Is this possible with the free account? Ciao
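
    DNS itself cannot carry a port or a path, so the usual answer (and it works fine on the free tier) is to point every subdomain at the same Elastic IP and let a reverse proxy on port 80 route by Host header; a minimal sketch assuming nginx, with every path and port as a placeholder (the Django app currently on port 80 would need to move behind the proxy as well):

        sudo tee /etc/nginx/conf.d/subdomains.conf <<'EOF'
        server {
            listen 80;
            server_name tomcat.mydomain.com;
            location / { proxy_pass http://127.0.0.1:8080/; proxy_set_header Host $host; }
        }
        server {
            listen 80;
            server_name webapp1.mydomain.com;
            location / { proxy_pass http://127.0.0.1:8080/webapp1/; proxy_set_header Host $host; }
        }
        EOF
        sudo nginx -s reload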

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server, an AWS instance, that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the basic Linux ftp client after login:

        ---> SYST
        215 UNIX Type: Apache FtpServer
        Remote system type is UNIX.
        ftp> get outgoing/catalog.gz catalog.gz
        local: catalog.gz remote: outgoing/catalog.gz
        ---> PASV
        227 Entering Passive Mode (64,156,167,125,135,191)
        ---> RETR outgoing/catalog.gz
        150 File status okay; about to open data connection.

    That's it. Then it just sits there and nothing transfers. I have verified that a data connection is made but the client gets no data:

        ss -nt dst 64.156.167.125
        State      Recv-Q Send-Q      Local Address:Port        Peer Address:Port
        ESTAB      0      0          10.185.147.150:41190     64.156.167.125:21
        ESTAB      0      0          10.185.147.150:48871     64.156.167.125:48557

    The FTP server is not in my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same security group (not necessarily the same distro or config, though). I understand it may be some issue on the server side, but I want to know what it is about my particular machine that makes the transfer hang, when on every other machine I can get my hands on it works. Please let me know what the culprit on the client side could be, or ideas on what else to look at.
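
    Given that the data connection is established but no bytes arrive, one client-side suspect is a path-MTU problem on that specific route (large data packets get dropped while the small control-channel packets get through); a couple of hedged checks, with the interface name as an assumption:

        # Watch the data connection directly; with an MTU problem the remote side typically
        # retransmits large segments that never arrive here:
        sudo tcpdump -ni eth0 host 64.156.167.125 and not port 21
        # Probe the largest unfragmented packet that makes it to the FTP server (if it answers ping):
        ping -M do -s 1472 64.156.167.125
        # Current interface MTU for comparison:
        ip link show eth0 | grep mtu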

    Read the article

  • Finding the owner of an AWS access key + secret key pair

    - by nightw
    I would like to have a simple solution (possibly in 1-3 plain API calls to AWS) to find the owner of an AWS access key. I have the password of the "root" AWS account and of course I can manage the users and credentials through IAM, but we have a lot of users and I don't want to go through them one by one looking for the owner of the key. So basically I have a working access key + secret key pair (in fact a couple of them), but I do not know which user's key it is or what permissions it has. What is the easiest way to find out? Thank you in advance.
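
    Two hedged approaches with the AWS CLI (assuming it is available; the key values shown are placeholders): since the key pair itself works, calling STS with those credentials returns the owning identity directly, and failing that the root/admin account can enumerate keys per user and match the key ID:

        # 1. Ask the credentials who they belong to (single API call):
        AWS_ACCESS_KEY_ID=AKIAEXAMPLE AWS_SECRET_ACCESS_KEY=secretexample \
          aws sts get-caller-identity
        # 2. Or list every user's access keys and grep for the key ID:
        for u in $(aws iam list-users --query 'Users[].UserName' --output text); do
          aws iam list-access-keys --user-name "$u" \
            --query 'AccessKeyMetadata[].[UserName,AccessKeyId]' --output text
        done | grep AKIAEXAMPLE
        # The matching user's permissions can then be inspected with:
        #   aws iam list-attached-user-policies --user-name <name>
        #   aws iam list-user-policies --user-name <name>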

    Read the article

  • How to configure S3 or DNS to handle incomplete name (sans www) for web site?

    - by user193116
    I have set up a bucket called "www.mydomainname.com" to host my website, and I have configured a CNAME such that "www.mydomainname.com" points to my endpoint http://www.mydomainname.com.s3-website-us-east-1.amazonaws.com/. It works: people who type the full URL "www.mydomainname.com" are able to see my index page. But most people are in the habit of typing an incomplete domain name; they just type "mydomainname.com" and their browser fails to find my site. Is there a way to configure the CNAME or the S3 bucket such that typing "mydomainname.com" takes them to my S3 website? (I am using Network Solutions as my DNS provider.)
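
    The usual recipe is a second bucket named for the bare domain whose only job is to redirect everything to the www bucket; a sketch with the AWS CLI (assuming it is available), with the caveat that the bare domain then still needs a record pointing at S3, and since a plain CNAME is generally not allowed at the zone apex, many DNS providers handle the bare name through a web-forwarding feature instead:

        aws s3api create-bucket --bucket mydomainname.com
        aws s3api put-bucket-website --bucket mydomainname.com \
          --website-configuration '{"RedirectAllRequestsTo":{"HostName":"www.mydomainname.com"}}'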

    Read the article

  • FTP Server with MySQL access, and POST notification

    - by TIW
    I'm looking for an FTP server solution that we can host either internally on a dedicated server or on Rackspace Cloud/AWS, that provides an HTTP POST notification when a file is uploaded, and that allows user accounts to be created either through an API or a MySQL database. There are several offerings that provide email notification, but has anyone come across anything that matches the above requirements? BrickFTP, being an IaaS system, is an option, but we would prefer something hosted in house. I don't believe the standard FTP servers provided with Apache can do the above... can they?
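
    Pure-FTPd is one self-hostable server that covers both halves: the pure-ftpd-mysql build authenticates virtual users straight from MySQL, and pure-uploadscript runs an arbitrary command after every completed upload, which can simply POST the path; a sketch, with the Debian/Ubuntu package name, paths and the webhook URL all as assumptions:

        sudo apt-get install -y pure-ftpd-mysql
        # /etc/pure-ftpd/db/mysql.conf defines the SQL queries used to look up virtual users.
        sudo tee /usr/local/bin/notify-upload.sh <<'EOF'
        #!/bin/sh
        # $1 is the full path of the file that finished uploading.
        curl -s -X POST --data-urlencode "file=$1" https://example.com/hooks/ftp-upload
        EOF
        sudo chmod +x /usr/local/bin/notify-upload.sh
        # pure-ftpd itself must be started with upload-script support (CallUploadScript enabled);
        # the watcher below then invokes the script once per completed upload:
        sudo pure-uploadscript -B -r /usr/local/bin/notify-upload.sh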

    Read the article

  • How to install stuff on Ubuntu

    - by Industrial
    Hi everyone, I have just launched my first EC2 instance and chose an Ubuntu image to start from, since it's quite well documented. However, I am trying to install the Redis package (http://packages.ubuntu.com/lucid/redis-server). Maybe I am not googling properly, or I'm just stupid because the weekend is approaching, but I keep getting errors:

        root@ip-10-229-123-199:~# sudo apt-get install redis-server
        Reading package lists... Done
        Building dependency tree... Done
        E: Couldn't find package redis-server

    I assume that I need to add a repository or something to Ubuntu to help it find the package I want, but how do I do it? I can only find graphical guides, which don't help me much since I am using SSH. Thanks a lot!
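
    Two things usually explain this on a fresh Lucid instance: the package index has never been fetched, and redis-server lives in the universe component, which some minimal images ship disabled; a hedged sequence to try from the SSH session (the mirror URL is the stock one):

        # Enable universe if it is not already present in /etc/apt/sources.list:
        echo "deb http://archive.ubuntu.com/ubuntu lucid universe" | sudo tee /etc/apt/sources.list.d/universe.list
        # Refresh the package index, then install:
        sudo apt-get update
        sudo apt-get install redis-server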

    Read the article

  • What's the best practice for taking MySQL dump, encrypting it and then pushing to s3?

    - by HalogenCreative
    This current project requires that the DB be dumped, encrypted and pushed to S3. I'm wondering what some "best practices" might be for such a task. As of now I'm using a pretty straightforward method, but I would like to have some better ideas where security is concerned. Here is the start of my script:

        mysqldump -u root --password="lepass" --all-databases --single-transaction > db.backup.sql
        tar -c db.backup.sql | openssl des3 -salt --passphrase foopass > db.backup.tarfile
        s3put backup/db.backup.tarfile db.backup.tarfile

        # Let's pull it down again and untar it for kicks
        s3get surgeryflow-backup/db/db.backup.tarfile db.backup.tarfile
        cat db.backup.tarfile | openssl des3 -d -salt --passphrase foopass | tar -xvj

    Obviously the problem is that this script contains everything an attacker would need to raise hell. Any thoughts, critiques and suggestions for this task will be appreciated.
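
    One widely used improvement, sketched here with the key ID and bucket as placeholders (and the aws CLI standing in for s3put purely for illustration), is to encrypt to a GPG public key so that no decryption secret or database password has to live in the script: mysqldump can read credentials from a chmod-600 ~/.my.cnf, and only the matching private key, kept somewhere else entirely, can restore the dump.

        # ~/.my.cnf (mode 600) holds [client] user= / password=, so nothing appears on the command line.
        mysqldump --all-databases --single-transaction \
          | gzip \
          | gpg --encrypt --recipient backups@example.com --trust-model always \
          > db.backup.sql.gz.gpg
        aws s3 cp db.backup.sql.gz.gpg s3://example-backups/db/db.backup.sql.gz.gpg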

    Read the article

  • mirror sql server 2008 to AWS instance from our datacenter?

    - by Alex
    We are currently running a hosted POS system locally and would like to mirror it to AWS. We are new to AWS and would like to know the most cost-effective way to do this. We have 2 DB and 2 web servers right now in one cabinet in CA, plus one tape drive, one firewall, and one SNA. We are thinking of replicating our system in AWS (using SQL Server 2008), mirroring both systems, and using a witness server between them to keep the data in sync. The goal is: if the CA datacenter goes down, AWS keeps running, users see no downtime, and all data is synced. Is anyone doing something similar? Would it be practical to use AWS in this fashion? Thanks

    Read the article

  • Is there a way to send personal documents on Kindle for Mac app?

    - by Sid
    I have the Kindle app on my Mac, and an Android phone. When I email documents to my @kindle.com address, I am able to see them in my library and subsequently send them to my Android device. However, I'm not able to send them to the Kindle app on my Mac. The Kindle for Mac FAQs clearly state that magazines, personal documents, etc. are not supported. However, I came across a mention that there is a workaround for this, although I've not been able to figure out what it is.

    Read the article
