Search Results

Search found 2912 results on 117 pages for 'amazon vpc'.


  • AWS forwarding email to a Gmail account

    - by user2433617
    So I registered a domain name. I then set up a static web page using AWS (S3 and Route 53). Now what I want to do is forward any email sent to that custom domain to a personal email address I have set up. I can't seem to figure out how to do this. I already have these record sets: A, NS, SOA, and CNAME. I believe I have to set up an MX record, but I'm not sure how. Say I have the custom domain address [email protected] and I want to redirect all email to [email protected]. The personal account is a Gmail (Google Accounts) address. Thanks.
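
    An MX record only names the host that accepts mail for the domain; something must still receive and forward the messages (for example a forwarding service, or Google Apps for the domain). A hedged sketch of the record itself in zone-file form (Route 53 takes the same "priority hostname" value); the forwarder hostname is a placeholder:

        ; hypothetical MX record for mydomain.com pointing at a mail
        ; host that accepts and forwards to the Gmail account
        mydomain.com.  300  IN  MX  10 mail.forwarder.example.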


  • Good/Better config for MySQL on an EC2 Large Instance

    - by Tim Reynolds
    I have an EC2 Large instance dedicated to MySQL. It will be serving a Joomla/Magento combo, so it has a blend of InnoDB and MyISAM tables. I have only worked with MyISAM in the past and am therefore unfamiliar with the settings InnoDB uses. Experiments so far have been less than fruitful, as I keep causing the InnoDB engine to be disabled. My instance is running Ubuntu 10.04 64-bit server edition and has ~7.5G of RAM. MySQL is currently using ~0.6% of that, with somewhat poor performance. I would like to configure it to use as much of the system RAM as is reasonable. Testing some settings, I learned that the InnoDB logs can't collectively be larger than 4G. Would anyone be able to provide some base InnoDB and MyISAM settings to get me started? Thank you, Tim
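
    A likely culprit for InnoDB getting disabled: changing innodb_log_file_size while the old ib_logfile0/ib_logfile1 still exist on disk makes InnoDB refuse to initialize, and MySQL starts without it. Shut down cleanly and move the old log files aside before restarting with new sizes. As a starting point only (illustrative values for a ~7.5G dedicated box, not tuned recommendations), a my.cnf sketch:

        [mysqld]
        # roughly half of RAM for the InnoDB buffer pool on a dedicated box
        innodb_buffer_pool_size = 4G
        innodb_log_file_size    = 256M
        innodb_flush_method     = O_DIRECT
        key_buffer_size         = 1G      # MyISAM index cache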


  • EC2 custom topology

    - by Methos
    Is there any way to create a desired topology of EC2 instances? For example, can I create a three-node topology of nodes A, B, and C, where C gets the public IP address and B and A sit behind it? Something like:

        Internet <-- C <-- B <-- A

    B and A only get private IP addresses, and there is no way for traffic to reach A before hitting B and C. This means I can install whatever I want on C and B to filter, cache, etc. I'm going through the EC2 documentation, but so far I have not seen anything that covers this. I would really appreciate it if anyone knows how to do this on EC2.
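
    This layered layout is what Amazon VPC provides through public and private subnets. A minimal sketch with Python's boto library; the region, CIDR blocks, and the idea that node C does the forwarding/filtering are illustrative assumptions:

        import boto.vpc

        # Sketch: one public subnet (node C) and one private subnet
        # (nodes A and B); region and CIDRs are placeholders.
        conn = boto.vpc.connect_to_region('us-east-1')
        vpc = conn.create_vpc('10.0.0.0/16')
        public_subnet = conn.create_subnet(vpc.id, '10.0.0.0/24')
        private_subnet = conn.create_subnet(vpc.id, '10.0.1.0/24')

        # Only the public subnet is routed to the internet gateway, so
        # traffic for A and B has to traverse whatever C forwards.
        igw = conn.create_internet_gateway()
        conn.attach_internet_gateway(igw.id, vpc.id)
        rt = conn.create_route_table(vpc.id)
        conn.create_route(rt.id, '0.0.0.0/0', gateway_id=igw.id)
        conn.associate_route_table(rt.id, public_subnet.id)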


  • How can I poll different AWS SQS queues in the same process?

    - by Luccas
    What is the right way to poll from different AWS SQS queues in the same process? Suppose I have a Ruby script, listen_queues.rb, and run it. Do I need to create threads to wrap each SQS poll, or start subprocesses?

        t1 = Thread.new do
          queue1.poll do |msg|
            ....
          end
        end
        t2 = Thread.new do
          queue2.poll do |msg|
            ....
          end
        end
        t2.join

    I tried this code, but the poll is not receiving any of the messages available. When I run only one of them (t1 or t2), it works. But I need the two running. What is going on? Thanks!!


  • Can I use nginx to start EC2 instances on demand?

    - by Gabe Hollombe
    TL;DR - Is there a way to make nginx act as an elastic load balancer that will spin up EC2 instances on demand, allowing for the case where periods of no demand mean no instances are running?

    Longer explanation - I have an nginx server that proxy_passes requests to a server on EC2. That server doesn't get many requests, so I'd like to keep it spun down during periods of inactivity (I already have a script to do this). Then, when the instance is spun down and nginx gets a request for it, the request will time out waiting for a response. At that point, can I somehow trigger a shell command on the server to use EC2's command-line tools to spin up the instance, then retry the user's request after it has started?
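
    One hedged pattern: nginx cannot start instances itself, but error_page can route upstream failures to a tiny local hook that does. A sketch of such a hook with Python and boto; the port, region, and instance ID are placeholders, and the client still has to retry after the instance boots:

        import boto.ec2
        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            # Ask EC2 to start the stopped backend, then tell the
            # client to come back once it has had time to boot.
            conn = boto.ec2.connect_to_region('us-east-1')
            conn.start_instances(instance_ids=['i-00000000'])
            start_response('503 Service Unavailable',
                           [('Content-Type', 'text/plain'),
                            ('Retry-After', '60')])
            return [b'Backend starting, please retry shortly.']

        make_server('127.0.0.1', 9000, app).serve_forever()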


  • How do you create large, growable, shared filesystems on Linux at AWS?

    - by Reece
    What are acceptable/reasonable/best ways to provide large, growable, shared storage at AWS, exposed as a single filesystem? We're currently making 1TB EBS volumes ~biweekly and NFS-exporting them with no_subtree_check and nohide; in this setup, distinct exports appear under a single mount on the client. This arrangement does not scale well. The options we've considered:

      - LVM2 with ext4: resize2fs is too slow.
      - Btrfs on Linux: not obviously ready for prime time yet.
      - ZFS on Linux: not obviously ready for prime time yet (although LLNL uses it).
      - ZFS on Solaris: the future of this combo is uncertain (to me), and it adds a new OS to the mix.
      - GlusterFS: heard mostly good things, but two scary (and maybe old?) stories.

    The ideal solution would provide sharing, a single filesystem view, easy expandability, snapshots, and replication. Thanks for sharing ideas and experience.
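
    For reference, the LVM2+ext4 growth path being weighed looks roughly like this (a sketch; device, volume-group, and volume names are placeholders). The online resize2fs step is the slow part the question calls out:

        pvcreate /dev/xvdf                  # label the new EBS volume
        vgextend datavg /dev/xvdf           # add it to the volume group
        lvextend -L +1T /dev/datavg/shared  # grow the logical volume
        resize2fs /dev/datavg/shared        # grow ext4 online (slow)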


  • “Disk /dev/xvda1 doesn't contain a valid partition table”

    - by Simpanoz
    I am a newbie to EC2 and Ubuntu 11 (EC2 free-tier Ubuntu). I ran the following commands:

        sudo mkfs -t ext4 /dev/xvdf6
        sudo mkdir /db
        sudo vim /etc/fstab
        # added to /etc/fstab:
        # /dev/xvdf6 /db ext4 noatime,noexec,nodiratime 0 0
        sudo mount /dev/xvdf6 /db
        fdisk -l

    I got the following output. Can someone guide me as to what I am doing wrong and how it can be rectified?

        Disk /dev/xvda1: 8589 MB, 8589934592 bytes
        255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/xvda1 doesn't contain a valid partition table

        Disk /dev/xvdf6: 6442 MB, 6442450944 bytes
        255 heads, 63 sectors/track, 783 cylinders, total 12582912 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/xvdf6 doesn't contain a valid partition table
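
    Worth noting: fdisk only reads partition tables, and mkfs above wrote the filesystem directly to the device with no partition table, which is routine on EBS volumes. The warning is cosmetic as long as the mount works; a quick check:

        sudo file -s /dev/xvdf6
        # expected output along the lines of:
        # /dev/xvdf6: Linux rev 1.0 ext4 filesystem data, UUID=...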


  • EC2 instance store cloning, or instance store to EBS, via the GUI management console

    - by devnull
    I have found similar questions here, but the answers are either outdated or are from the command line. The case is this: I have an EC2 instance using instance store (this was the only AMI available for Debian 6 in Ireland). Through the AWS GUI I can take a snapshot of the instance volume and even create a volume, but an image made from the snapshot doesn't boot. What is the best way to either clone an EC2 instance that uses instance store, or to launch a new EBS-backed instance (an identical clone) from the snapshot of the instance store, from the GUI AWS Management Console and not the command line? Before turning this down, consider that there is no similar question on how to do it via the AWS Management Console. Hint: "can't be done" is not an appropriate answer, as you can create a snapshot of the instance-store-backed instance and/or a volume, and create an AMI from that snapshot.


  • What ports to open in an AWS security group for nginx?

    - by HarrisonJackson
    I am building the backend for a turn-based game. My experience is mostly with a LAMP stack; I've dabbled in nginx on a Node side project. I just read Scaling PHP Applications by Stephen Corona of Twitpic. He recommends an nginx server over Apache, and says that his Ubuntu machine has ports 32768-61000 open. On AWS, do I need to modify my security group to allow access to those ports? How do I ensure nginx is taking full advantage of this configuration?
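
    For what it's worth, 32768-61000 is Linux's default ephemeral port range: local ports the kernel hands out for outbound connections, not ports a service listens on. Security groups are stateful, so replies to outbound connections are allowed automatically and these ports need no inbound rule. A sketch for inspecting or widening the range (the wider values are illustrative):

        cat /proc/sys/net/ipv4/ip_local_port_range
        sudo sysctl -w net.ipv4.ip_local_port_range="10240 65535"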


  • Intermittent 404 on select assets, LAMP stack

    - by Tom Lagier
    We have a LAMP-stack WordPress server that is serving most assets correctly. However, one plugin's CSS file and several images are returning soft 404s roughly 20% of the time. I can't find any reference to the 404 in the access logs, but the browser is definitely receiving a 404 response from somewhere (WordPress, I would assume). When I use an alias URL that does not match the site URL but does resolve to the asset path, the resource loads correctly 100% of the time. However, using the site URL only resolves the select, problematic assets 20% of the time. You can test one of the problematic assets here: http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg However, the alias link always resolves correctly: http://mr-eco.wordpress.promocampaigns.com/wp-content/uploads/2014/05/zero-cost.jpg Stranger still, if I attempt to access outdated content that definitely does not exist on the server at the live URL, it returns the content roughly 50% of the time; using the alias link, it 404s 100% of the time, which is the correct behavior. The error log and PHP error log are clean. A sample access log (pulled from grep 'zero-cost.jpg' /var/log/httpd/mr-eco-access_log) from several refreshes of the live direct link (where I am not seeing any 404s):

        10.166.202.202 - - [28/May/2014:20:27:41 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:42 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.176.201.37 - - [28/May/2014:20:27:56 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 200 57027

    Chrome's dev tools list the following network activity before displaying 404 page content:

        zero-cost.jpg /wp-content/uploads/2014/05 GET 404 Not Found text/html Other 15.9 KB 73.2 KB 953 ms 947 ms

    My Apache configuration is standard; I've listed the virtual host entry and .htaccess file below, and can provide other parts of the Apache config if necessary. Virtual host:

        <VirtualHost *:80>
            DocumentRoot /var/www/public_html/mr-eco.wordpress.promocampaigns.com
            ServerName www.mreco.org
            ServerAlias mreco.org mr-eco.wordpress.promocampaigns.com
            ErrorLog logs/mr-eco-error_log
            CustomLog logs/mr-eco-access_log common
            <Directory /var/www/public_html/mr-eco.wordpress.promocampaigns.com>
                AllowOverride All
                SetOutputFilter DEFLATE
            </Directory>
        </VirtualHost>

    .htaccess:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I have checked for multiple A records and can confirm that there is a single A record pointing at the domain:

        ;; ANSWER SECTION:
        mreco.org.  60  IN  A  50.18.58.174

    I'm fairly new to systems administration, and at a complete loss as to what could cause this. In the past, inconsistently 404ing assets have been caused by out-of-sync instances behind a load balancer; in this case, it is a single instance behind the load balancer. Because of the inconsistency, it feels like a caching issue. We don't make use of Apache caching, and as far as I know WordPress should not be caching either. What I've done so far:

      - Reset WordPress permalinks
      - Disabled WordPress plugins
      - Re-generated the WordPress .htaccess file
      - Swapped ServerName and ServerAlias directives
      - Cleared browser cache
      - Confirmed disk location of resources
      - Checked PHP, access, and error logs
      - Confirmed correct DNS setup (can post if necessary)

    I'm at a total loss. Thanks for helping me out!
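
    One hedged way to narrow down which layer emits the 404 is to compare responses through the load balancer with responses from the instance directly (substitute the instance's own address for the placeholder below):

        curl -sI http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg
        curl -sI -H 'Host: www.mreco.org' http://INSTANCE_ADDRESS/wp-content/uploads/2014/05/zero-cost.jpg

    If the direct request never 404s while the load-balanced one sometimes does, the inconsistency is being introduced in front of Apache.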


  • Should EC2 servers self-register, or should an admin server manage them?

    - by hortitude
    I'm creating AWS servers using Chef, and I am also planning on enabling auto-scaling. While we have automatic monitoring set up already (Server Density, Nagios, etc.), I was also going to set up a CloudWatch alarm for the status check on each server. This led me down the path of trying to decide whether to install the ec2-command-line tools on the server itself (which then requires me to install Java on the servers, despite no other need for Java in our environment) or to have an "admin" box that will check periodically for servers and make sure they have their alarms set. I expect this paradigm to carry over to other things we want to configure (perhaps ensuring that termination protection is set up on production servers?). Any thoughts as to why to go one direction or another?
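
    For the admin-box direction, the alarm does not have to come from the Java CLI tools at all; a central script with Python's boto can set it. A sketch, with the instance ID, region, and SNS topic as placeholders:

        import boto.ec2.cloudwatch
        from boto.ec2.cloudwatch import MetricAlarm

        # Alarm when the EC2 status check fails twice in a row.
        cw = boto.ec2.cloudwatch.connect_to_region('us-east-1')
        alarm = MetricAlarm(
            name='status-check-i-00000000',
            namespace='AWS/EC2',
            metric='StatusCheckFailed',
            statistic='Maximum',
            comparison='>=',
            threshold=1,
            period=60,
            evaluation_periods=2,
            dimensions={'InstanceId': 'i-00000000'},
            alarm_actions=['arn:aws:sns:us-east-1:123456789012:ops-alerts'])
        cw.create_alarm(alarm)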


  • Does cloud storage replicate the data over many datacenters, and if so do I benefit from content delivery?

    - by Berkay
    Let's assume that I want to use a cloud storage service from one of the cloud storage providers. I have X GB of structured and unstructured data, and I will use this data as the content of my interactive web page. Now I have some doubts about this point. I have many users, and they visit my web page from various countries. To be more specific: first, is my data stored in only one of the cloud storage provider's data centers, or is it replicated over many of them? Second, if so, how can I benefit from a content delivery network (matching and placing users' content in the nearest storage data centers)?


  • How can I automatically cycle a new image in an AWS Auto Scaling Group?

    - by JustinY
    I have a web application set up with a load balancer and an auto scaling group to manage scaling. The source code is in a git repository, so I don't have to update the images when the code changes, but occasionally the environment changes and we create a new image. That image then needs to be cycled into the auto scaling group. Is there a way to cycle the images automatically? Right now I schedule a scale-up and scale-down action, which gets rid of the old instances.
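
    A hedged sketch of the usual mechanics with Python's boto: register a launch configuration built from the new AMI, point the group at it, and let replacement (or the existing scale-up/scale-down schedule) retire the old instances. Names and IDs are placeholders:

        import boto.ec2.autoscale
        from boto.ec2.autoscale import LaunchConfiguration

        asc = boto.ec2.autoscale.connect_to_region('us-east-1')
        lc = LaunchConfiguration(name='web-v2', image_id='ami-00000000',
                                 instance_type='m1.large', key_name='deploy')
        asc.create_launch_configuration(lc)

        # Point the group at the new launch configuration.
        group = asc.get_all_groups(names=['web-asg'])[0]
        group.launch_config_name = 'web-v2'
        group.update()
        # Existing instances keep the old image until replaced; terminating
        # them one at a time lets the group relaunch from the new config.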


  • ephemeral vs EBS partitions

    - by hortitude
    I launched an EBS-backed AMI with all the defaults, and noticed that it automatically had an ephemeral disk attached. I was just wondering if there is a good programmatic way to know that this particular device is ephemeral versus some EBS volume I had decided to attach:

        ubuntu@-----:~$ df -ahT
        Filesystem  Type        Size  Used Avail Use% Mounted on
        /dev/xvda1  ext4        7.9G  867M  6.7G  12% /
        proc        proc           0     0     0    - /proc
        sysfs       sysfs          0     0     0    - /sys
        none        fusectl        0     0     0    - /sys/fs/fuse/connections
        none        debugfs        0     0     0    - /sys/kernel/debug
        none        securityfs     0     0     0    - /sys/kernel/security
        udev        devtmpfs    1.9G   12K  1.9G   1% /dev
        devpts      devpts         0     0     0    - /dev/pts
        tmpfs       tmpfs       751M  172K  750M   1% /run
        none        tmpfs       5.0M     0  5.0M   0% /run/lock
        none        tmpfs       1.9G     0  1.9G   0% /run/shm
        /dev/xvdb   ext3        394G  199M  374G   1% /mnt

        ubuntu@-----:~$ mount
        /dev/xvda1 on / type ext4 (rw)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        /dev/xvdb on /mnt type ext3 (rw,_netdev)
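
    One reliable programmatic source is the instance metadata service, which labels each device by role rather than by name. A sketch (the output shown is typical, not guaranteed):

        curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/
        # typically lists entries such as: ami, root, ephemeral0
        curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0
        # returns the device name (e.g. sdb, which the kernel may expose as xvdb)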


  • Choosing between cloud (Cloud Foundry) and virtual servers - for developers

    - by Mike Z
    I just came across some articles on how to set up your own cloud using Cloud Foundry and Ubuntu, and this got me thinking about choosing our infrastructure. If we want to use our own servers, what's the advantage of a cloud on top of virtual servers versus just using virtual servers and a VPN? If we develop for the cloud now, then later, if we need help, we can quickly move to a cloud provider; but other than that, what are the advantages and disadvantages of a private cloud in these areas?

      - speed of development, testing, and deployment
      - server management
      - security
      - having an extra layer (the cloud): will that have a hit on server performance, and how big?
      - any other advantages/disadvantages?


  • How do I handle MySQL replication in EC2 using private IPs?

    - by chris
    I am trying to set up a MySQL master/slave configuration across two EC2 instances. However, every time I reboot an instance, the IP address (and hostname) changes. I could assign an Elastic IP address, but I would prefer to use the internal IP address. I can't be the first person to do this, but I can't seem to find a solution: there are a lot of "getting started" guides, but none of them mention how to handle changing IP addresses. So what are the best practices for managing master/slave replication in EC2?
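
    One hedged convention is to point the slave at a stable name rather than a raw IP, and keep that name current (via a managed /etc/hosts entry, a dynamic DNS update at boot, or an Elastic IP's public DNS name, which resolves to the private address from inside EC2). The hostname and credentials below are placeholders:

        CHANGE MASTER TO
          MASTER_HOST='mysql-master.internal.example',
          MASTER_USER='repl',
          MASTER_PASSWORD='...',
          MASTER_LOG_FILE='mysql-bin.000001',
          MASTER_LOG_POS=4;
        START SLAVE;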


  • s3fs: how to force remount on errors?

    - by Alexander Gladysh
    I use s3fs 1.33 on Ubuntu 9.10. It regularly gives me errors like this:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        rsync: close failed on "/mnt/s3/mybucket/filename": Software caused connection abort (103)
        rsync error: error in file IO (code 11) at receiver.c(731) [receiver=3.0.6]
        rsync: connection unexpectedly closed (86 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]

    Any attempt to work with the mounted directory after that gives this error:

        Transport endpoint is not connected

    To get rid of this, I have to remount. Is there a way to force a remount automatically?
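
    A hedged workaround is a small watchdog run from cron: when the mount has died, even stat fails with "Transport endpoint is not connected", which the script below uses as its signal (the bucket name and mount point are placeholders):

        #!/bin/sh
        # Remount s3fs if the FUSE endpoint has gone away.
        if ! stat /mnt/s3/mybucket >/dev/null 2>&1; then
            fusermount -u /mnt/s3/mybucket 2>/dev/null
            s3fs mybucket /mnt/s3/mybucket -o allow_other
        fi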


  • AWS Autoscaling issue with existing nodes in ELB

    - by Ram Prasad
    I already have an ELB set up called MyLoadBalancer, with 2 nodes running behind it with health checks (which check a URL on each node to see if it is up). I then:

      - created an auto scaling group (min 2, max 10)
      - associated a launch config, mylaunchconfig, that provisions a node using an AMI
      - created a trigger that checks for average connections of min 100 and max 500 (it watches the load balancer, and is supposed to increase the node count by 1 if average connections exceed the upper threshold, and decrease it by one if below the lower threshold):

        as-create-or-update-trigger MyTrigger \
          --auto-scaling-group MyAutoScalingGroup --namespace "AWS/ELB" \
          --measure RequestCount --statistic Average \
          --dimensions "LoadBalancerName=MyLoadBalancer" --period 60 \
          --lower-threshold 500 --upper-threshold 800 \
          --lower-breach-increment=-1 --upper-breach-increment=1 \
          --breach-duration 600

    Now the issue is: as soon as I put in the trigger, it starts 2 nodes, but there are already two nodes in the LB. So why is it provisioning 2 more nodes when the nodes are there? Is it because it is not recognizing the existing 2 nodes? Then how do I add the existing nodes to the auto scaling group?


  • Websites down, EC2 inaccessible via SSH, CPU utilisation 100% for the last few hours - what should I do?

    - by fuzzybee
    I have multiple websites hosted on a single EC2 instance. One website, "abc", was down for a few hours: it sometimes threw database connection errors and sometimes just took too long to respond. Another website, "def", was incredibly slow but still up and running; the rest of the websites had the same symptoms as "abc". I can afford 15 minutes or less of downtime for "def". Should I then (in the AWS console) reboot my instance, or create an AMI image from my instance, launch it, and associate my Elastic IP with the new instance (or "launch more like this")?

    Background on what may have happened to my EC2: the last changes I made were 21 hours ago. A cron job to create snapshots ran around 19 hours ago and had been running for a long time. Google Analytics shows traffic to my websites, such as kidlander.sg, has been nothing exceptional. Are there any other actions I should take or better options I could have? (I have already contacted AWS support, but their turnaround is 12 hours, so I appreciate all the help I can get.)

    Update: I got everything back up and running, and CPU utilisation is back to normal, around 30%. There is one difference between "def", "abc", and my other websites: "def"'s database is hosted on RDS, while "abc"'s database is hosted on an EC2 instance (different from my web server instance) configured by myself. Nevertheless, I checked the EC2 instance I'm using as the MySQL server yesterday and it was absolutely fine during the incident: low CPU utilisation, and I could log in using the Linux command line.


  • What to do for a 1 million concurrent web application? [duplicate]

    - by Amit Singh
    This question already has an answer here: How do you do load testing and capacity planning for web sites? (3 answers)

    There are a few things that I would like to know here:

      - What server configuration do I need? And if I am deploying on EC2, how many VMs do I need, and what should their configuration be?
      - What options do I have for load testing 1 million concurrent users?
      - Any pointers (for PHP) on how to code, or what to keep in mind, for such an application?

    Admittedly, I don't exactly know what to ask, because this is my first application at this scale. But one thing is clear: this application should pass a load test of 1 million concurrent requests.


  • Is there a way to use something similar to a capture group for the apache2 ServerName?

    - by Zipper
    I have a server that sits behind an AWS load balancer. The LB can't do an automatic redirect from HTTP to HTTPS, and the LB is doing my SSL. So I need to set up Apache on my servers to redirect any request on port 80 to https://FOOBAR, where FOOBAR is the domain that came in. I haven't been able to find a way of doing that so far; I'm an Apache newb, though. What I'm trying to do is something similar to this (using regex as an example):

        <VirtualHost *:80>
            ServerName (.*)
            Redirect / https://\1
        </VirtualHost>

    If there's a better way to do this, please let me know.

    EDIT: Sorry, I should have explained why this is happening. I actually have a Tomcat server running my app on port 8080, and the LB points to that. From what I can tell so far, my requests come in on HTTP (which is expected), but when my app server sends redirects (for login purposes) it tries to redirect to HTTP instead of HTTPS. I haven't had a chance to fully investigate this, but I wanted to work around it for now by pointing the LB at the Apache server, and having any port 80 requests redirect to 443.

    EDIT2: The other reason I'm interested in doing this is that since the LB can't do the redirect, I need another redirect mechanism in place to tell the browser to go to https://FOOBAR
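
    For reference, ServerName can't capture, but mod_rewrite can reuse the incoming Host header; and with an ELB terminating SSL, the X-Forwarded-Proto header distinguishes requests that arrived as plain HTTP. A hedged sketch:

        <VirtualHost *:80>
            RewriteEngine On
            # Redirect only traffic the ELB received over plain HTTP.
            RewriteCond %{HTTP:X-Forwarded-Proto} !https
            RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
        </VirtualHost>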


  • Puppet nodes can't find master, EC2 public versus internal IP addresses and hosts files

    - by Blankman
    If I set up my hosts files such that they reference all other EC2 nodes using the internal IP addresses, will this work, or do I have to use the external IP addresses? Do I need to specify anything in my security group to get internal IP addresses to work? e.g. in /etc/hosts:

        ip-10-11-12-13.internal some_node_name

    If I do this, can I reference some_node_name anywhere in my scripts where I would have used the IP address previously? On my puppet agent servers, I have a reference to my puppet master like:

        public-ip-here puppet

    When I reboot my puppet agents, syslog shows they couldn't find the master, with the message:

        getaddrinfo: name or service not known

    I did get it to work by updating /etc/default/puppet and adding to the options:

        --server=public-ip-here

    From what I read, puppet will by default try using 'puppet', and I set this in my hosts file, so why wouldn't it be picking it up?


  • Options for EC2 ec2-create-snapshot and family

    - by shabda
    I am trying to use the various EC2 command-line tools, e.g. ec2-create-snapshot -h:

        ....
        -K, --private-key KEY
             Specify KEY as the private key to use. Defaults to the value of the
             EC2_PRIVATE_KEY environment variable (if set). Overrides the default.
        -C, --cert CERT
             Specify CERT as the X509 certificate to use. Defaults to the value of the
             EC2_CERT environment variable (if set). Overrides the default.

    -K and -C are two required values, and I can't work out what values these are expecting. If I create a keypair from Elasticfox, I get only one file to download and a fingerprint. So which of these goes where?
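
    For what it's worth, these expect the account's X.509 credentials (the pk-*.pem private key and cert-*.pem certificate from the AWS security credentials page), not the SSH keypair Elasticfox creates. A hedged sketch of the usual setup (file names and the volume ID are placeholders):

        export EC2_PRIVATE_KEY=$HOME/.ec2/pk-XXXXXXXXXXXX.pem
        export EC2_CERT=$HOME/.ec2/cert-XXXXXXXXXXXX.pem
        ec2-create-snapshot vol-00000000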


  • Cause of slow download speed on a particular EC2 instance?

    - by James
    I have a networking issue I'm trying to solve. I have two EC2 instances, same zone, same type. On one of the two (the 'bad' instance), the download speed is really poor (200k/s), while on the other (the 'good' instance), the download speed is fine, comfortably 30M/s+. To clarify, I'm talking about downloading files to the EC2 instance while ssh'd into the server, e.g. running wget with a large file. I've tried different files, including S3 objects and a large Linux ISO from elsewhere. Running ethtool eth0 only returns 'Link detected: yes' for both. When running ifconfig, both return mostly the same, apart from how the good instance shows no error packets while the bad instance shows many:

        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:168372370 errors:5075643 dropped:0 overruns:0 frame:0
        TX packets:122116480 errors:0 dropped:0 overruns:0 carrier:0

    Both servers are configured the same, or at least were supposed to be. How can I go about diagnosing the cause of the slow download speed? Is there anything particular to EC2 instances that could cause this? I'm having trouble knowing where to start. Thanks for any help!

