Search Results

Search found 2845 results on 114 pages for 'amazon glacier'.


  • Sticky Load Balancing with AWS

    - by John Wheal
    I have just set up a load balancer with AWS for a few instances, as search engine crawlers were bringing down the site (it has millions of pages). Parts of the site allow you to log in, so I selected Enable Application Generated Cookie Stickiness and everything works fine. I now wonder how this will affect my SEO and the crawlers. As I selected sticky load balancing, does this mean that a crawler will be stuck on one server and therefore defeat the point of the load balancer? Any recommendations will be appreciated. (A stickiness-configuration sketch follows below.)

    Read the article
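
    For reference, the same application-cookie stickiness described above can also be set up outside the console. Below is a minimal boto3 sketch against the classic ELB API; the load balancer name, policy name, cookie name and port are placeholder assumptions, not values from the question.

        import boto3

        elb = boto3.client("elb", region_name="us-east-1")  # classic ELB API

        # Tie stickiness to the application's own session cookie (names are assumptions).
        elb.create_app_cookie_stickiness_policy(
            LoadBalancerName="my-load-balancer",
            PolicyName="app-cookie-sticky",
            CookieName="PHPSESSID",
        )

        # Attach the policy to the HTTP listener on port 80.
        elb.set_load_balancer_policies_of_listener(
            LoadBalancerName="my-load-balancer",
            LoadBalancerPort=80,
            PolicyNames=["app-cookie-sticky"],
        )

    As I understand this mode, requests that arrive without the application cookie are routed by the normal algorithm, so a crawler that never returns a session cookie should not be pinned to a single instance.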

  • growing EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID0 configuration with two 1GB EBS volumes, mounted at /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its capacity (of 2GB). I then created snapshots of the volumes using ec2-consistent-snapshot and created volumes from those snapshots, but specified the volume size as 2GB (effectively doubling the capacity of each disk). I then spun up a new instance, assembled the RAID0 configuration on /dev/md0 from the two volumes mentioned above, and mounted it at /vol. df -hT showed /vol as 2GB (as expected). Now I ran sudo xfs_growfs -d /vol. The command completed normally but reported blocks changed from 523776 to 524160 (only!) and df -hT still showed /vol as 2GB (instead of the expected 4GB). I rebooted, remounted, and reassembled the RAID, but it still reports the old size. EDIT: trying to grow the RAID using mdadm --grow yields "mdadm: raid0 array /dev/md0 cannot be reshaped". Is there any other way I can grow a RAID0 array? (A sketch of the snapshot-to-larger-volume step follows below.)

    Read the article
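
    For the AWS-side step described above (recreating each RAID member as a larger volume from its snapshot and attaching it to the new instance), here is a minimal boto3 sketch; the snapshot IDs, sizes, availability zone, instance ID and device names are placeholder assumptions, and the mdadm/xfs_growfs work still happens on the instance itself.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Create larger volumes from the existing snapshots of each RAID member.
        snapshot_ids = ["snap-0aaa", "snap-0bbb"]   # hypothetical snapshot IDs
        new_volume_ids = []
        for snap in snapshot_ids:
            vol = ec2.create_volume(
                SnapshotId=snap,
                Size=2,                      # GiB, larger than the 1 GiB originals
                AvailabilityZone="us-east-1a",
            )
            new_volume_ids.append(vol["VolumeId"])

        # Wait until the volumes are ready before attaching them.
        ec2.get_waiter("volume_available").wait(VolumeIds=new_volume_ids)

        # Attach to the new instance (instance ID and device names are assumptions).
        for device, vol_id in zip(["/dev/sdf", "/dev/sdg"], new_volume_ids):
            ec2.attach_volume(InstanceId="i-0123456789abcdef0", VolumeId=vol_id, Device=device)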

  • Is it possible to get a list of running processes with a Cloudwatch Alarm?

    - by jtalarico
    We have an EC2 instance (Ubuntu) that runs a few Java-based applications, and lately we're getting hit with high CPU utilization spikes that trigger one of our CloudWatch alarms. By the time we get into the server to look at the CPU utilization, things have calmed down. What we'd love to see in one of the alarm emails is a list of running processes and their CPU utilization (%) at the time of the alarm. Is this even possible? (A sketch of one approach follows below.)

    Read the article
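
    A CloudWatch alarm itself only carries metric data, not process lists, so one hedged approach is a small script on the instance (run from cron, or kicked off by a subscriber to the alarm's SNS topic) that snapshots the heaviest processes and publishes them; the topic ARN and load threshold below are placeholder assumptions.

        import os
        import subprocess
        import boto3

        sns = boto3.client("sns", region_name="us-east-1")
        TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:cpu-alarm-details"  # hypothetical ARN

        def report_top_processes(load_threshold=4.0):
            # Only report when the 1-minute load average is actually high.
            if os.getloadavg()[0] < load_threshold:
                return
            # Capture the heaviest processes at this moment.
            top = subprocess.check_output(
                "ps aux --sort=-%cpu | head -n 15", shell=True
            ).decode()
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="High CPU: top processes",
                Message=top,
            )

        if __name__ == "__main__":
            report_top_processes()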

  • I get a 403 when requesting a JS file from CloudFront

    - by Roland
    This is new to me, so please excuse me if I have no idea what I'm talking about (: I'm trying to set up my own CDN with CloudFront and S3 through a subdomain, by adding a CNAME on that subdomain pointing to the CloudFront distribution. I get a 403 when trying to load the file. This is the original S3 link: https://s3.amazonaws.com/chaoscod3r_aws_cdn/libs/polyfills/json3_polyfill.js, which works after setting the permission for everyone to open/download. But when requesting the file through the subdomain, http://cdn.chaoscod3r.com/libs/polyfills/json3_polyfill.js, I get that 403. Could anyone help me out with this one? (A sanity-check sketch follows below.)

    Read the article
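
    One hedged sanity check for the 403 is to confirm the object itself is publicly readable; the bucket and key below are taken from the URL in the question, the rest is a sketch.

        import boto3

        s3 = boto3.client("s3")
        BUCKET = "chaoscod3r_aws_cdn"
        KEY = "libs/polyfills/json3_polyfill.js"

        # Inspect the current grants on the object.
        acl = s3.get_object_acl(Bucket=BUCKET, Key=KEY)
        for grant in acl["Grants"]:
            print(grant["Grantee"].get("URI", grant["Grantee"].get("ID")), grant["Permission"])

        # Make the single object world-readable (the API equivalent of the
        # console's "everyone: open/download" permission set in the question).
        s3.put_object_acl(Bucket=BUCKET, Key=KEY, ACL="public-read")

    If the object is readable and the direct S3 URL works, the other usual suspect is the distribution itself: the cdn.chaoscod3r.com hostname has to be listed as an alternate domain name (CNAME) on the CloudFront distribution, otherwise CloudFront answers requests for that Host with a 403.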

  • Backing up data (including mysqldumps) to S3

    - by seengee
    We have a web app on a number of servers and we want to add an additional layer of redundancy by backing up the key data to S3. The key data is the MySQL database and a folder containing dynamically created site assets, predominantly images. Some kind of rsync-based solution would initially seem the best plan. A couple of years ago we played with s3cmd (in particular s3cmd sync) with some success, but we didn't find it particularly reliable, although this may have changed since. It's occurred to me, though, that an rsync solution might not work particularly well with a single db.sql file created with mysqldump, and I assume this means the whole database getting transferred each time; with multiple databases of over 1GB this is going to add up to a lot of traffic (and $s) very quickly. With the image files I could simply transfer files modified within the last day, which would be far simpler. What approach should I look at? (A dump-and-upload sketch follows below.)

    Read the article
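
    A minimal sketch of the dump-and-upload side is below; the bucket name, database names and credentials handling are placeholder assumptions, and rotation/lifecycle rules are left out.

        import gzip
        import shutil
        import subprocess
        from datetime import datetime

        import boto3

        BUCKET = "my-backup-bucket"          # hypothetical bucket
        DATABASES = ["appdb1", "appdb2"]     # hypothetical database names
        s3 = boto3.client("s3")

        for db in DATABASES:
            stamp = datetime.utcnow().strftime("%Y%m%d-%H%M")
            dump_path = f"/tmp/{db}-{stamp}.sql"
            gz_path = dump_path + ".gz"

            # Dump the database (assumes MySQL credentials in ~/.my.cnf).
            with open(dump_path, "wb") as out:
                subprocess.check_call(["mysqldump", "--single-transaction", db], stdout=out)

            # Compress before transfer; gzipped dumps are usually a fraction of the size.
            with open(dump_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
                shutil.copyfileobj(src, dst)

            # Upload with a date-stamped key so each run is kept separately.
            s3.upload_file(gz_path, BUCKET, f"mysql/{db}/{db}-{stamp}.sql.gz")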

  • EC2 Auto-Scaling with Spot and On-Demand Instances?

    - by platforms
    I'm looking to optimize the cost of our auto-scaling EC2 groups by having them launch spot instances instead of on-demand instances. What I really want is to be able to keep some servers in the group as on-demand instances, regardless of what happens to the spot pricing market. Then I want any additional servers in the group, above my configured minimum, to be spot instances. I'm generally OK with the delay in adding servers via spot requests. I can't seem to find any way to do this, even after scouring the AWS documentation. It appears that an ASG can either be on-demand or spot, but not a hybrid. I could possibly manually add an on-demand instance to the Elastic Load Balancer assigned to the auto-scaling group, but then the load of that server would not be factored into the auto-scaling measurements and triggers. I suppose I could enter a ridiculously high bid price in order to ensure that I always get the servers I need, but then I look at the pricing history and see occasional large spikes. The AWS documentation is at odds with itself, since in one place it says that if you enter a server minimum, that number is "ensured" to be there. But then when you read about spot instances, there are no assurances. The price differential for spot is compelling, so I'd like to leverage that as much as I can while still maintaining an always-on baseline. Is this possible? (A hedged sketch of a later API that covers this follows below.)

    Read the article
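
    At the time this was asked an ASG was effectively all-on-demand or all-spot; later versions of the Auto Scaling API added a mixed instances policy that expresses exactly this split (an on-demand base plus spot above it). A hedged boto3 sketch follows; the group name, launch template, subnets and instance types are placeholder assumptions.

        import boto3

        autoscaling = boto3.client("autoscaling", region_name="us-east-1")

        autoscaling.create_auto_scaling_group(
            AutoScalingGroupName="web-asg",                   # hypothetical name
            MinSize=2,
            MaxSize=10,
            DesiredCapacity=2,
            VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # hypothetical subnets
            MixedInstancesPolicy={
                "LaunchTemplate": {
                    "LaunchTemplateSpecification": {
                        "LaunchTemplateName": "web-template",  # hypothetical template
                        "Version": "$Latest",
                    },
                    "Overrides": [{"InstanceType": "m5.large"}, {"InstanceType": "m4.large"}],
                },
                "InstancesDistribution": {
                    # Keep this many instances on-demand regardless of the spot market...
                    "OnDemandBaseCapacity": 2,
                    # ...and make everything above the base 100% spot.
                    "OnDemandPercentageAboveBaseCapacity": 0,
                    "SpotAllocationStrategy": "lowest-price",
                },
            },
        )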

  • backup aws ec2 to separate account

    - by Paul de Goede
    I want to back up my AWS snapshots to a completely separate AWS account for additional security (if my AWS credentials were acquired, someone could delete all my snapshots and volumes). But I'm a bit stumped on how to do this. There doesn't seem to be a way to store a volume or snapshot in S3 such that another user could access that data in S3 and store it in a separate AWS account. Does anyone have any suggestions on how to achieve this? Thanks. (A share-and-copy sketch follows below.)

    Read the article
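
    One hedged pattern is to share each snapshot with the second account and have that account copy it, so a copy exists that the first set of credentials cannot delete. The account ID and snapshot ID below are placeholders, and as far as I know snapshots encrypted with the default aws/ebs key cannot be shared this way.

        import boto3

        # Run with the *primary* account's credentials: grant the backup account access.
        ec2_primary = boto3.client("ec2", region_name="us-east-1")
        ec2_primary.modify_snapshot_attribute(
            SnapshotId="snap-0123456789abcdef0",          # hypothetical snapshot
            Attribute="createVolumePermission",
            OperationType="add",
            UserIds=["222233334444"],                      # backup account ID (placeholder)
        )

        # Run with the *backup* account's credentials: copy the shared snapshot,
        # so the resulting copy is owned (and only deletable) by the backup account.
        ec2_backup = boto3.client("ec2", region_name="us-east-1")
        ec2_backup.copy_snapshot(
            SourceRegion="us-east-1",
            SourceSnapshotId="snap-0123456789abcdef0",
            Description="off-account backup copy",
        )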

  • How to add a second domain to an EC2 instance with Elastic & Route 53

    - by memeLab
    I've got my domain site.com running on EC2, using an Elastic IP and Route 53. I want to park site.net so that it resolves to the same site. I've looked up Migrating an Existing Domain to Route 53 in the docs, but can't find any mention of how to add a second domain. I figured I'd have to create an A record, but when I do so, the record is created as site.net.site.com, which is not quite what I'm after! I've also searched for mixes of 'route 53', 'park domain', 'addon domain', and 'second domain', but no dice. My prior experience is with cPanel and Plesk, so I'm a bit lost! Any pointers would be appreciated! TIA. (A hosted-zone sketch follows below.)

    Read the article
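
    A hedged sketch of the "park site.net on the same Elastic IP" route via the Route 53 API: site.net gets its own hosted zone with its own A record. The Elastic IP is a placeholder; the "site.net.site.com" result in the question is what happens when a non-fully-qualified record name is created inside the site.com zone.

        import time
        import boto3

        route53 = boto3.client("route53")
        ELASTIC_IP = "203.0.113.10"   # placeholder for the instance's Elastic IP

        # site.net needs its own hosted zone; records created under the site.com
        # zone get that zone's domain appended (hence "site.net.site.com").
        zone = route53.create_hosted_zone(
            Name="site.net",
            CallerReference=str(time.time()),   # any unique string
        )
        zone_id = zone["HostedZone"]["Id"]

        # Point the apex of site.net at the same Elastic IP as site.com.
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "site.net.",
                        "Type": "A",
                        "TTL": 300,
                        "ResourceRecords": [{"Value": ELASTIC_IP}],
                    },
                }]
            },
        )

    The registrar for site.net then has to be pointed at the name servers returned for the new zone (in the DelegationSet of the create_hosted_zone response).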

  • Force HTTPS with AWS Elastic load balancer

    - by panos2point0
    I need to redirect all incoming HTTP traffic to HTTPS behind my Elastic Load Balancer. I tried using Apache mod_rewrite:

        RewriteEngine On
        RewriteCond %{HTTP:X-Forwarded-Proto} !https
        RewriteRule !/status https://%{SERVER_NAME}%{REQUEST_URI} [L,R]

    Taking advantage of the X-Forwarded-Proto header added by the load balancer, this rule should instruct the user's browser to request the HTTPS version of the same URL. So far it doesn't work (no redirection happens). What am I doing wrong? Is there a better way to do this?

    Read the article

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing; I've spent a fair bit of time researching it and various providers, but have yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all other instances. I.e. if content is uploaded to the web server's local filesystem from a custom CMS backend, then that content won't be available if subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently and he was claiming that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me... should I be skeptical about this? I'm a little worried because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure that they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down. I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.

    Read the article

  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these ip rules:

        $ sudo ip rule show
        0:      from all lookup local
        32766:  from all lookup main
        32767:  from all lookup default

    I can attach an elastic network interface to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and the local routing table:

        $ sudo ip rule show
        0:      from all lookup local
        32765:  from 10.1.1.190 lookup 10003
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3 proto kernel scope link src 10.1.1.190
        local 10.1.1.190 dev eth3 proto kernel scope host src 10.1.1.190
        broadcast 10.1.1.255 dev eth3 proto kernel scope link src 10.1.1.190
        broadcast 10.1.3.0 dev eth0 proto kernel scope link src 10.1.3.12
        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12
        broadcast 10.1.3.255 dev eth0 proto kernel scope link src 10.1.3.12
        broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
        local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
        local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
        broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to. This is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but didn't see even a single ACK. In my understanding, if ACKs were being sent from the ENI they'd have 10.1.3.12 as their destination, i.e. the same as the local machine's IP, and such packets will now be routed as local packets matching the local routing policy:

        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12

    I'd like all the packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I can do the following: (1) I don't know if it's possible, but probably decrease the priority of the local table so the packets match table 10003; or (2) use iptables to mangle these packets and update the local table route to include the mark information. But I'm not sure if these are the right approaches.

    Read the article

  • VPC SSH port forward into private subnet

    - by CP510
    OK, so I've been racking my brain for DAYS on this dilemma. I have a VPC set up with a public subnet and a private subnet. The NAT is in place, of course. I can SSH into an instance in the public subnet, as well as the NAT. I can even SSH to the private instance from the public instance. I changed the sshd configuration on the private instance to accept both port 22 and an arbitrary port number, 1300. That works fine. But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e.

        ssh -i keyfile.pem <user>@1.2.3.4 -p 1300

    and 1.2.3.4 should route it to the internal server 10.10.10.10. Now I heard iptables is the tool for this, so I went ahead and researched and played around with some routing with that. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this, since AWS apparently pre-configures the NAT instances when you set them up and I heard using iptables can mess that up.

        *filter
        :INPUT ACCEPT [129:12186]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [84:10472]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT
        -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: "
        -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        COMMIT
        # Completed on Wed Apr 17 04:19:29 2013
        # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013
        *nat
        :PREROUTING ACCEPT [2:104]
        :INPUT ACCEPT [2:104]
        :OUTPUT ACCEPT [6:681]
        :POSTROUTING ACCEPT [7:745]
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE
        COMMIT

    So when I try this from home, it just times out. No connection refused messages or anything. And I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communication on these ports in both directions in both subnets and on the NAT. I'm at a loss. What am I doing wrong?

    Read the article

  • Sharing / replicating EBS across AWS nodes

    - by skrat
    I would like to use a single EBS volume across multiple EC2 nodes (web/app servers). I've read some articles on snapshot sharing, but that doesn't suit what we need. We use the filesystem for storing DB record attachments, so when such an attachment gets created, we need it to be immediately available to all nodes (to serve). So far only NFS seems viable, but it's a pain to configure and maintain. Another option could be storing those attachments on S3 instead, but that would cut us off from doing any analysis on that data. This must be quite a common problem when scaling in AWS; what solutions are there? (A small S3 sketch follows below.)

    Read the article
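
    If the S3 route is revisited, below is a minimal sketch of writing an attachment once and letting every node serve it by URL, with analysis jobs pulling objects back down with the same client; the bucket, keys and paths are placeholder assumptions.

        import boto3

        s3 = boto3.client("s3")
        BUCKET = "app-attachments"                 # hypothetical bucket

        # Any node can upload the attachment the moment it is created...
        s3.upload_file("/tmp/attachment-42.png", BUCKET, "attachments/42.png")

        # ...and any other node can immediately hand out a time-limited URL for it,
        # with no shared filesystem involved.
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": "attachments/42.png"},
            ExpiresIn=3600,
        )
        print(url)

        # For offline analysis, objects can be pulled back to any worker.
        s3.download_file(BUCKET, "attachments/42.png", "/tmp/analysis/42.png")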

  • Chef command to create new ec2 instance with second ebs volume attached and mounted instead of the default ephemeral volume?

    - by runamok
    We currently use this command to create a new EC2 instance with Chef:

        knife ec2 server create --node-name=prod-apache-1 --availability-zone us-east-1c --image ami-3d4ff254 --distro ubuntu12.04-gems --groups "default" --ssh-key foo --identity-file ~/.ssh/id_rsa --ssh-user ubuntu --flavor m1.small

    After this command we then run further Chef commands to finish provisioning the server. I was wondering whether, while first setting up the instance, I could have a 100 GB volume created and mounted at /mnt, and have the ephemeral storage mounted at /tmp or /mnt-ephemeral instead. If not, what further commands in Chef would you advise running? I know how to do this via the AWS console and can probably figure out how to do it via the EC2 command line tools, but I am new to Chef and a bit overwhelmed. (A sketch of the underlying launch call follows below.)

    Read the article
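
    I'm not certain which knife-ec2 options cover this, but the underlying EC2 request it would have to make is a block-device mapping at launch time. A hedged boto3 sketch follows, reusing the AMI, key and flavor from the knife command above; the device names are assumptions, and the 100 GB volume still has to be formatted and mounted at /mnt by the recipes afterwards.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        ec2.run_instances(
            ImageId="ami-3d4ff254",            # same AMI as the knife command
            InstanceType="m1.small",
            KeyName="foo",
            MinCount=1,
            MaxCount=1,
            BlockDeviceMappings=[
                {   # extra 100 GB EBS volume, to be mounted at /mnt by the recipes
                    "DeviceName": "/dev/sdf",
                    "Ebs": {"VolumeSize": 100, "DeleteOnTermination": True},
                },
                {   # expose the ephemeral disk on its own device for /mnt-ephemeral
                    "DeviceName": "/dev/sdg",
                    "VirtualName": "ephemeral0",
                },
            ],
        )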

  • Configuring EC2 Instance

    - by Philip Isaacs
    Forgive me if this seems like a dumb question, but I'm wondering how I can increase the processing power (CPU, memory) of an instance I already have running. Right now I have a web server running on an m1.small instance and it's performing poorly at peak times. Is it possible to increase the amount of memory on the instance somehow, or do I need to create a new EC2 instance? What are my options? Please advise. (A resize sketch follows below.)

    Read the article
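
    For an EBS-backed instance the type can be changed in place (stop, modify, start); an instance-store instance would need to be rebuilt from an AMI instead. A hedged boto3 sketch with a placeholder instance ID and an example target type:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")
        INSTANCE_ID = "i-0123456789abcdef0"   # placeholder

        # The instance must be stopped before its type can be changed.
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

        # Switch to a larger type (m1.medium here is just an example target).
        ec2.modify_instance_attribute(
            InstanceId=INSTANCE_ID,
            InstanceType={"Value": "m1.medium"},
        )

        ec2.start_instances(InstanceIds=[INSTANCE_ID])
        ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])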

  • get a list of running ec2 instances programmatically

    - by user113981
    Hi, I have started with AWS and found out that we can get a list of running servers with the AWS PHP SDK. Is there any other way to get the list of all EC2 instances? After getting the list, I want to sync data from one main instance to all the other instances; something like a button click could also trigger the operation. Are rsync and incron the only options, or can it be done with the AWS PHP SDK as well? Please provide some tutorial links. (A listing sketch follows below.)

    Read the article
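
    The PHP SDK works for this; for comparison, here is a minimal boto3 sketch that lists running instances (the region is an assumption). The sync itself would still be rsync or similar, driven by a list like this.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Ask only for instances that are currently running.
        reservations = ec2.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )["Reservations"]

        for reservation in reservations:
            for instance in reservation["Instances"]:
                print(
                    instance["InstanceId"],
                    instance.get("PrivateIpAddress", "-"),
                    instance.get("PublicDnsName", "-"),
                )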

  • How can I match an AWS account number to a key and secret

    - by iwein
    My client gave me a key and a secret to manage his EC2 resources, but to make one of my AMIs available to run I have to fill in the account number. Is it possible to deduce the account number from the key and the secret? Obviously I also asked the client for this information, but since it's the weekend and I'm not fond of waiting, I wanted to see if I could figure it out myself. Have you done this before? (An STS sketch follows below.)

    Read the article
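
    These days the account behind a key pair can be asked for directly via STS (this API may not have existed when the question was written). A minimal sketch, assuming the key and secret are configured in the environment or ~/.aws/credentials:

        import boto3

        # Uses the client's key/secret from the environment, ~/.aws/credentials,
        # or explicit aws_access_key_id / aws_secret_access_key arguments here.
        sts = boto3.client("sts")

        identity = sts.get_caller_identity()
        print("Account number:", identity["Account"])
        print("Caller ARN:", identity["Arn"])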

  • What exactly is an invalid HTTP_HOST header

    - by rolling stone
    I've implemented Django's relatively new ALLOWED_HOSTS setting, which is meant to prevent attackers from submitting requests with a fake HTTP Host header. Since adding that setting, I now get anywhere from 20-100 emails a day notifying me of invalid HTTP_HOST headers. I've copied an example of a typical error message below. I'm hosting my site on EC2 and am relatively new to setting up/maintaining a server, so my question is: what exactly is happening here, and what is the best way to manage these invalid and, I assume, malicious requests? (A settings sketch follows below the example.)

        [Django] ERROR: Invalid HTTP_HOST header: 'www.launchastartup.com'. You may need to add u'www.launchastartup.com' to ALLOWED_HOSTS.

    Read the article
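
    The errors mean requests arrived with a Host header the site is not configured for, which is often scanners hitting the server's bare IP or a stale DNS name pointing at the Elastic IP. A hedged settings.py sketch: list only the hosts actually served, and route the disallowed-host logger away from the mail handler so it stops emailing (the domain names are placeholders, and the logger name applies to later Django versions).

        # settings.py (sketch)

        ALLOWED_HOSTS = [
            "example.com",        # placeholder: the domains you actually serve
            "www.example.com",
        ]

        # In later Django versions these requests are logged to
        # 'django.security.DisallowedHost'; sending that logger to a null handler
        # stops the admin emails without hiding real application errors.
        LOGGING = {
            "version": 1,
            "disable_existing_loggers": False,
            "handlers": {
                "null": {"class": "logging.NullHandler"},
            },
            "loggers": {
                "django.security.DisallowedHost": {
                    "handlers": ["null"],
                    "propagate": False,
                },
            },
        }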

  • bottle.py on EC2 micro instance causes 2 order of magnitude slowdown

    - by user61633
    Cross-posted from StackOverflow: I wrote a little toy script to solve this type of game and put it on my new micro EC2 instance. It works perfectly, but while both the local version and the bottle.py version take under 0.5 seconds to run on my home computer, running the bottle.py version on the EC2 instance takes over 2 minutes. Python has the CPU pegged at 99% the entire time. Memory usage is a consistent 7.4%, with no swapping. The only guess I have is initialization time for bottle.py on EC2, but if it were that, why would it be ~200x faster on my own computer with bottle.py?

    Read the article

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume as a backup. rsync seems like the way to go, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
        total size is 821603948  speedup is 1.00

    Then I ran it again, expecting to see minimal sent/received numbers because it would just be checksums. Instead...

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
        total size is 821603948  speedup is 1.00

    I'm new to rsync, so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.

    Read the article

  • An international mobile app - Should I set up EC2 instances in multiple regions?

    - by ashiina
    I am currently trying to launch a mobile app for users around the world. It is not a spectacular launch that will get millions of users in weeks, just another individual developer releasing an app. I know enough about the techniques of managing timezones, internationalizing strings, and whatnot (the application layer). But I cannot find any information on how I should manage my EC2 instances. Should I be setting up EC2 instances in different regions around the world? Is that a must-do, or is it overkill? I'm aware that it's the ideal solution in terms of performance, but it becomes very tough managing servers in multiple regions: DB issues, AMI management, etc. I'd much rather NOT do so. So I would like to know the general best practice when launching an international app/website. Note: for static content, I know it's better to use a CDN, so I'm planning on doing so. (A latency-routing sketch follows below.)

    Read the article
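
    If servers do end up in more than one region, Route 53 latency-based records are the usual way to send each user to the nearest one; a hedged sketch follows, with a placeholder hosted zone ID, domain and IPs (one record set per region, distinguished by SetIdentifier).

        import boto3

        route53 = boto3.client("route53")
        ZONE_ID = "Z1EXAMPLE"                      # placeholder hosted zone
        DOMAIN = "api.example.com."                # placeholder record name

        REGION_ENDPOINTS = {
            "us-east-1": "203.0.113.10",           # placeholder instance IPs
            "eu-west-1": "203.0.113.20",
        }

        changes = []
        for region, ip in REGION_ENDPOINTS.items():
            changes.append({
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "A",
                    "SetIdentifier": region,       # one record set per region
                    "Region": region,              # enables latency-based routing
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            })

        route53.change_resource_record_sets(
            HostedZoneId=ZONE_ID,
            ChangeBatch={"Changes": changes},
        )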

  • Is S3 cheaper than a EC2 DIY solution (for small files)

    - by Jann
    Is it really cheaper to host images and scripts via S3 than with an EC2 instance running nginx/varnish/etc.? It seems to me (but I'm just getting started with AWS) that the request costs will be the major factor if you don't use sprites or other optimizations... or am I missing something?

    Read the article

  • Uptime concerns in case of AWS outage

    - by Aditya Patawari
    I am running an Elastic Load Balancer backed by 2 instances in different Availability Zones in US East, and I am using Multi-AZ RDS as well. Ideally this should ensure that if one AZ goes down, it does not affect the app, because everything is spread across multiple AZs. But the recent AWS outage took the app down for a long time. I am not sure how this can happen; it would be great if someone could point out what went wrong. The major question I have is: how can I avoid this in future? I can set up app servers across different regions or even providers and use DNS for load balancing, but what do I do with MySQL? Read replicas will introduce some lag, which I would want to avoid.

    Read the article

  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple: I just use the private domain names/IPs. But I'm a little unsure of the best way to handle monitoring the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to be scripting the configuration with the EC2 tools just yet. As I see it, I have two options:

    1. Two Nagios installations (overkill for the small number of servers I'm dealing with). Pros: I don't have to alter the group permissions, nor do I have to pay for the traffic, and there is redundancy in the monitoring solution (I could monitor the Nagios servers). Cons: two installations to deal with, and I'd need to run another server instance.

    2. Have the single installation monitor both regions. Pros: one installation to deal with. Cons: slightly reduced security (the security group will have to have NRPE (5666) opened for one source IP) and paying for a small amount of bandwidth at the Internet rate for data transfer between the regions.

    I guess my question is: how have others handled this problem, and what are your recommendations? Thanks!

    Read the article
