Search Results

Search found 2845 results on 114 pages for 'amazon glacier'.

Page 5/114 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Amazon CloudFront and EC2: Global Load Balancing

    - by Matt Rogish
    We have an app that is going to store and serve up a decent amount of data in S3 to a global audience where latency should be minimized. So, we've been doing tests with Amazon CloudFront and have seen favorable results. However, we need a thin middleware layer (to do security etc.) and we'd like to put that in EC2. Due to security restrictions, this middleware layer will do the file streaming from S3/CloudFront: S3/CloudFront - EC2 - Clients. We can geographically distribute the EC2 nodes (US East/West, and Ireland), but the problem is that a client in the EU would hit our US server and be fed data from there, thus rendering much of the performance benefit of CloudFront moot. I've been digging through the EC2 docs but I can't find a built-in way to get a geographically distributed version of EC2 a la CloudFront. Elastic Load Balancing sounds like the way to go, but I can't seem to find a way to route clients to the nearest region with it... Preferably, we'd like to keep the amount of stuff outside of EC2/S3/etc. to a minimum (for obvious reasons). Any ideas how to do that within the EC2/S3 framework? DNS/routing tricks? Thanks!
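
    One DNS-based idea, written as a hedged sketch rather than a confirmed solution: Route 53 latency-based routing answers each DNS query with the record set whose AWS region has the lowest measured latency to the client, which would send EU clients to the Ireland middleware node and US clients to the US nodes. The hosted zone ID, hostname and IPs below are hypothetical placeholders.

        # Sketch only: latency-based A records for two middleware regions.
        aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
          "Changes": [
            {"Action": "CREATE", "ResourceRecordSet": {
              "Name": "api.example.com.", "Type": "A", "TTL": 60,
              "SetIdentifier": "us-east", "Region": "us-east-1",
              "ResourceRecords": [{"Value": "203.0.113.10"}]}},
            {"Action": "CREATE", "ResourceRecordSet": {
              "Name": "api.example.com.", "Type": "A", "TTL": 60,
              "SetIdentifier": "eu-west", "Region": "eu-west-1",
              "ResourceRecords": [{"Value": "203.0.113.20"}]}}
          ]}'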

    Read the article

  • Amazon AWS VPN how to open a port?

    - by Victor Piousbox
    I have a VPC with public and private subnets; I am considering only the public subnet for now. I can SSH into the node 10.0.0.23. Let's say I want to connect to MySQL on the node using its private address: ubuntu@ip-10-0-0-23:/$ mysql -u root -h 10.0.0.23 ERROR 2003 (HY000): Can't connect to MySQL server on '10.0.0.23' (111) ubuntu@ip-10-0-0-23:/$ mysql -u root -h localhost Welcome to the MySQL monitor. Commands end with ; or \g. --- 8< --- snip --- 8< --- mysql> Why is port 3306 not reachable when I use the private IP? My security group allows port 3306 inbound from 0.0.0.0/0 AND from 10.0.0.0/24. Outbound, everything is allowed. The generic setup done by Amazon through their wizard does not work... I added an ACL that allows everything for everybody, and it still does not work. What am I missing?
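
    A hedged first check (an assumption about the cause, not a confirmed diagnosis): error 111 is "connection refused", which usually means nothing is listening on that address rather than a security group or ACL problem, and MySQL on Ubuntu binds to 127.0.0.1 by default.

        # Check which address mysqld is actually bound to:
        sudo netstat -tlnp | grep 3306
        grep -E 'bind-address|skip-networking' /etc/mysql/my.cnf
        # If it shows bind-address = 127.0.0.1, pointing it at 0.0.0.0 (or the
        # node's private IP) and restarting MySQL is one thing to try:
        sudo service mysql restart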

    Read the article

  • Amazon EC2 Nat Instance - goes out but not back in

    - by nocode
    I've followed Amazon's steps; here is what I've done. I've created 6 subnets (4 private - SN1: 10.50.1.0/24, SN2: 10.50.2.0/24, SN3: 10.50.3.0/24, SN4: 10.50.4.0/24 - and 2 public - SN5: 10.50.101.0/24 and SN6: 10.50.102.0/24). I have a Bastion host and a NAT instance on SN5 and assigned EIPs to both. I created a test instance on SN1. edit: The NAT instance has source/destination check disabled. On the NAT instance, I had the following commands run at bootstrap: echo 1 > /proc/sys/net/ipv4/ip_forward iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -j MASQUERADE In my VPC, the private subnets have their own route table with 0.0.0.0/0 pointing to the NAT instance, and the 4 private subnets are associated with that route table. I have a second route table for my public subnets with 0.0.0.0/0 pointed towards the IGW (and the other 2 subnets associated with it). For security groups, the NAT instance accepts all traffic from each of the 4 private subnets, and all outbound traffic is allowed. For my test server, I have allowed all outbound access and all traffic from the public subnet of the NAT host. I can ping internally with no issues. On my test instance, if I try to ping google.com, DNS resolves but I don't get a reply back. On my NAT instance, I run a tcpdump and can see the request going out to google.com, but the reply is not making it back to the test instance. My NAT host can ping and receive a reply from google.com. From the test host, when I ping the NAT instance, the tcpdump shows both request and reply. Is there something I'm missing? EDIT: I've figured it out - I had to save the iptables config and restart the service.
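
    For reference, a sketch of what that fix looks like on a typical Linux NAT instance (the service name and paths are assumptions; also note the MASQUERADE source range should match the VPC CIDR actually in use, which is 10.50.0.0/16 per the subnets above):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.50.0.0/16 -j MASQUERADE
        service iptables save       # persist the rules so a restart/reboot keeps them
        service iptables restart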

    Read the article

  • Backup Dropbox to Amazon Glacier

    - by joekr
    I'm using Dropbox for backup, which means I keep all my files in my Dropbox folder (encrypted using encfs, but that should not be relevant). I like this solution because it is automatic and keeps copies of my files on several machines at different locations. The only thing I could see go wrong is that Dropbox has some sort of bug that tells all my machines to delete the files. So currently I do a backup of the Dropbox folder to an external hard drive. With Amazon Glacier it seems affordable to automate backup snapshots of my Dropbox. What I am looking for is a tool that will do this for me - the best-case scenario would be that files go from Dropbox (using their API) directly to Amazon, as uploading the ~80GB from my home connection would take forever... Thanks!
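
    For what it's worth, a minimal sketch of the upload side, assuming the AWS CLI and a pre-created vault (the vault name and paths are placeholders). A single upload-archive call is limited to archives of roughly 4 GB, so a full ~80GB snapshot would need either per-folder archives or the multipart upload API, and it would make sense to run it from a machine that already syncs the Dropbox folder rather than the home connection.

        tar czf dropbox-$(date +%F).tar.gz ~/Dropbox
        aws glacier upload-archive --account-id - --vault-name dropbox-backup \
            --archive-description "Dropbox snapshot $(date +%F)" \
            --body dropbox-$(date +%F).tar.gz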

    Read the article

  • Simple Backup Strategy for Amazon EC2 instances / volumes?

    - by minerj
    You have entered Introductory Backups for Amazon EC2 EBS-backed Windows Images 010... I have been browsing my brains out to find a simple backup strategy for our single Windows 2008 server running SharePoint Services. This is an EBS-backed image of one server with one data volume. I don't need anything exotic. I only need a "daily" backup (losing a day's worth of data is not catastrophic). We have created and saved an EBS-backed AMI image (Windows 2008) we are comfortable using. We started off making backups by simply creating a new EBS AMI image. This is really simple, but the running server is put offline during the first 10-15 minutes of creating the image - not ideal. The standard way of creating backups would seem to be creating snapshots of volumes attached to a running instance. Again it's pretty simple and the server remains usable during the snapshot generation. The apparent Catch-22 is that you can't simply launch a new instance directly from a snapshot. I know how to bundle a running instance to S3 storage and then register the AMI from the S3 bucket. This does allow me to capture a backup of a running instance and, if the running instance is lost, register the AMI from the S3 bucket and launch the new AMI to recover the instance, but this seems really convoluted and it seems ridiculous to have to juggle back and forth between the AWS Console and the S3 Organizer plug-in for Firefox to get this accomplished. (Please don't mention the command line approach, this is an 010 level course.) From playing around with EBS-backed images, the following approach appears to work for me (all done within the AWS Console):
    1. For your backups, simply snapshot the system volume (/dev/sda1) as needed.
    2. If you lose your running instance, do the following:
       a. Create a new volume from your last snapshot backup.
       b. Launch another instance of your starting AMI (must be EBS-backed).
       c. Stop this instance.
       d. Detach the existing system volume from the new stopped instance and discard it.
       e. Attach the newly created volume as the system volume (/dev/sda1) to the stopped instance.
       f. Re-start the new instance.
    I have tested this out a couple of times and it seems to work for me. Question: Is there anything wrong with this approach?

    Read the article

  • Amazon Elastic Terms and Conditions

    - by PP
    WARNING: Have you really read Amazon's Terms and Conditions? Would anybody seriously agree to this term on Amazon's Elastic services sign up page? 6.2. Restrictions with Respect to Use of Marks. Your use of any trademarks, service marks, service or trade names, logos, and other designations of AWS and its affiliates or licensors, hereinafter "Marks", shall strictly comply with the following provisions. You may use the Marks in conjunction with the display of the AWS Content and for the purpose of indicating that your Application was created using the Services. You may use the Marks only in the form in which we make them available to you and not in any manner that disparages Amazon, its affiliates or its licensors, or that otherwise dilutes any Mark. Other than your limited right to use the Marks as provided in this Agreement, we and our licensors retain all right, title, and interest in and to the Marks. You will not at any time now or in the future challenge or assist others to challenge the validity of the Marks, or attempt to register confusingly similar trademarks, trade names, service marks or logos. You agree to follow our the Trademark Use Guidelines posted on the Amazon Web Services™ Trademark Guidelines page (the "Trademark Guidelines") as those guidelines may change from time to time. The Trademark Guidelines are incorporated herein by reference. You must immediately discontinue use of any Mark as specified by us at any time in writing. We may modify any Marks provided to you at any time, and upon notice, you will use only the modified Marks and not the old Marks. Other than as specified in this Agreement, you may not use any trademark, service mark, trade name or other business identifier of Amazon or its affiliates unless you obtain Amazon's or its affiliates' prior written consent. The foregoing prohibition includes the use of "amazon," any other trademark of AWS, Amazon or its affiliates, or variations or misspellings of any of them, in the name of an Application or in a URL to the left of the top-level domain name (e.g., ".com", ".net", "co.uk", etc.)-for example, a URL such as "amazon.mydomain.com", "amaozn.com" or "amazonauctions.net" are expressly prohibited. Any use you make of the Marks shall inure to our benefit and you hereby irrevocably assign to us all right, title and interest in the same. In addition, you agree not to misrepresent or embellish the relationship between us and you, for example by implying that we support, sponsor, endorse, or contribute money to you or your business endeavors.
    In other words, if you are a large company and you want to use Amazon's services, you must agree that:
    - you may not use the word "amazon" in any domain name you control (even if you are a forestry company);
    - you may not use any word Amazon chooses to trademark in any domain you control (regardless of whether the name has a different meaning/purpose in your industry);
    - from now until forever, you will never dispute any claim Amazon makes on any word you or anybody else uses.
    Seriously, who would sign such a thing?

    Read the article

  • Why does an EBS volume mounted in an Ubuntu 12.04 EC2 instance as /dev/sdh1 appear as /dev/xvdh1?

    - by Andres
    When attaching an EBS volume on Ubuntu specified as /dev/sdh1, it actually shows up at /dev/xvdh1. The AWS console still thinks it's attached at /dev/sdh1, so it took a while to realize that it was actually there, just at a different device name. I ran into this problem a long time ago using Ubuntu on EC2. I just ran into it again https://forums.aws.amazon.com/post!reply.jspa?messageID=351382 and it seems like I'm not alone: https://forums.aws.amazon.com/thread.jspa?threadID=68957&tstart=0 I haven't found a good answer as to why this happens or how to fix it. Any ideas?
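
    A hedged sketch of how to confirm what is going on (device names are the ones from the question): the kernel's Xen block driver exposes the attachment under the xvd name, while the console keeps reporting the sd name that was requested, so the volume is there, just under a different device node.

        ls -l /dev/sdh1 /dev/xvdh1 2>/dev/null    # usually only the xvd name exists
        sudo mount /dev/xvdh1 /mnt/data           # mount using the xvd name
        # Optional, purely for convenience (an assumption, nothing requires it):
        sudo ln -s /dev/xvdh1 /dev/sdh1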

    Read the article

  • How can I create an AMI from an existing EC2 instance?

    - by Arkaaito
    (I suspect that this may already be answered somewhere, since it seems like it would be a common operation. But I can't find it, so...) I am a relative AWS newbie. I have inherited a running Amazon EC2 instance, with various items (Apache, MySQL, Sphinx, ...) installed on it and a bunch of configuration. I'd like to turn it into an AMI that I can spin up other instances from. I can't find any information on creating a custom AMI on Amazon's site - only the fact that you can, repeatedly referenced, as if to taunt me... I believe this is not an EBS-backed instance, just an "ordinary" one. I do not know what AMI it was originally created from. How would I create an AMI that I could use for spinning up other instances which will be identical except for the hostname?
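
    For an instance-store ("ordinary") instance, a hedged sketch using the classic AMI tools, run on the instance itself; the key, certificate, account ID and bucket names are placeholders (an EBS-backed instance could instead be imaged straight from the console):

        ec2-bundle-vol -k /tmp/pk.pem -c /tmp/cert.pem -u 111122223333 -r x86_64 -d /mnt
        ec2-upload-bundle -b my-ami-bucket -m /mnt/image.manifest.xml \
            -a "$AWS_ACCESS_KEY" -s "$AWS_SECRET_KEY"
        ec2-register my-ami-bucket/image.manifest.xml -n "my-custom-ami"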

    Read the article

  • Amazon SQS invalid binary character in message body

    - by letronje
    I have a web app that sends messages to an Amazon SQS queue. The Amazon SQS library throws an 'AmazonSQSException' because the message contained an invalid binary character. The message is the referrer obtained from an incoming HTTP request. This is what it looks like: http://ads.vrx.adbrite.com/adserver/display_iab_ads.php?sid=1220459&title_color=0000FF&text_color=000000&background_color=FFFFFF&border_color=CCCCCC&url_color=008000&newwin=0&zs=3330305f323530&width=300&height=250&url=http%3A%2F%2Funblockorkutproxy.com%2Fsearch.php%2FOi8vZG93%2FbmxvYWRz%2FLnppZGR1%2FLmNvbS9k%2Fb3dubG9h%2FZGZpbGUv%2FNTY5MTQ3%2FNi9NeUN1%2FdGVHaXJs%2FZnJpZW5k%2FWmFoaXJh%2FLndtdi5o%2FdG1s%2Fb0%2F^Fô}úÃ<99ë)j The garbled characters at the end appear to be the invalid ones. Is there an easy way to filter out characters that are not accepted by Amazon? Here are the characters allowed by Amazon in the message body. I am not sure what regex I should use to replace invalid characters with ''.
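
    A hedged sketch of one way to do the filtering: the SQS documentation lists the allowed code points as #x9, #xA, #xD, #x20-#xD7FF, #xE000-#xFFFD and #x10000-#x10FFFF, so stripping everything outside those ranges before sending should satisfy the check. Shown here as a shell one-liner ($referrer is a placeholder for the raw header value); the same character class carries over to most regex engines.

        echo "$referrer" | perl -CSD -pe \
          's/[^\x{09}\x{0A}\x{0D}\x{20}-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]//g'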

    Read the article

  • Amazon EC2 prices for Windows Instance?

    - by Abhishek Gupta
    Hello guys, I want to ask some Amazon cloud technology experts: is it profitable to deploy our web application on the Amazon cloud as compared to a normal server? Currently there are micro, small, large and other types of instances available. If we start from a micro instance and then realize that our app needs more CPU cycles and RAM, how can we dynamically move to the next, more powerful instance automatically at runtime? What is the approximate minimum yearly cost for a single EC2 Windows small instance? I want to deploy a simple online quiz application (ASP.NET based) on the Amazon cloud, which at any time will have a maximum of 500 users. Please advise, as I am very new to the cloud. Should I go for Azure or Amazon?

    Read the article

  • Living the Amazon Life [Video]

    - by Asian Angel
    Amazon has an amazing selection of products available to satisfy your needs and desires, but what if their services were to expand even more? This humorous video looks at what it might be like if you could literally get anything you wanted through a unique assortment of Amazon sister-sites! Note: Video contains some language that may be considered inappropriate. AMAZON LIFE [via Geeks are Sexy]

    Read the article

  • How do I mount an EBS root volume to a windows instance in Amazon EC2

    - by Kyle
    So basically, I created a large Windows server for development, and then I created a micro Windows server for production. I set up everything how I wanted it on my development server, then I detached the drives and attached them to my micro server. Now I'm trying to get back into my large Windows development server, and I'm getting the error: Invalid value 'i-4896ce28' for instanceId. Instance does not have a volume attached at root (/dev/sda1). This error pops up when I try to start my large Windows server. I've re-attached the drives to the large development server, and I still get this message. I'm not really sure what to do. I've read other posts and everyone is giving what look like command line arguments and talking about other tools, and I really have no clue what any of that means, or where I even have an option to enter any commands without being logged into a specific instance.
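
    A hedged sketch of the same recovery done with the AWS CLI (or the equivalent ec2-* API tools): the error just means the stopped instance has nothing attached at /dev/sda1, so the original root volume has to be attached back at exactly that device before the instance can start. The volume ID below is a placeholder; the instance ID is the one from the error message.

        aws ec2 describe-volumes     # find the old root volume (it may show as attached to the micro instance)
        aws ec2 detach-volume --volume-id vol-aaaaaaaa          # after stopping whichever instance holds it
        aws ec2 attach-volume --volume-id vol-aaaaaaaa --instance-id i-4896ce28 --device /dev/sda1
        aws ec2 start-instances --instance-ids i-4896ce28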

    Read the article

  • Amazon EC2: Not able to open web application even though the port is opened

    - by learner
    I have a t1.micro instance whose public DNS looks similar to ec2-184-72-67-202.compute-1.amazonaws.com (some numbers changed). On this machine, I am running a Django app: $ sudo python manage.py runserver --settings=vlists.settings.dev Validating models... 0 errors found Django version 1.4.1, using settings 'vlists.settings.dev' Development server is running at http://127.0.0.1:8000/ I have opened port 8000 through the AWS console. Now when I hit the following in Chrome, http://ec2-184-72-67-202.compute-1.amazonaws.com:8000, I get "Oops! Google Chrome could not connect to..." What is it that I am doing wrong?
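
    A hedged but very likely explanation given the output above: runserver binds to 127.0.0.1 by default, so it only accepts connections from the instance itself even though the security group opens port 8000. Binding to all interfaces makes it reachable on the public DNS name:

        sudo python manage.py runserver 0.0.0.0:8000 --settings=vlists.settings.dev
        # then browse to http://ec2-184-72-67-202.compute-1.amazonaws.com:8000/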

    Read the article

  • Increase application performance on Amazon AWS

    - by Honus Wagner
    I've got a client with an MVC v1 (.NET) application running on a micro instance. On this instance, I've got .NET, IIS 7.5, and MS SQL Server 2008 running to handle the application. The client has reported that it is taking nearly 10 seconds to process each request. Even loading the initial login page takes about that long, then logging in takes that long, etc. The currently running instance specs are as follows: 615 MB RAM, Intel Xeon CPU E5430 @ 2.66GHz (2.78 GHz), 64-bit. Is the memory availability the issue, or is it the processing power? I foresee two options: change to a larger instance, or set up a 2-tier architecture with two micro instances. Which of these will give the application better performance? Thanks in advance.

    Read the article

  • Unable to login to Amazon EC2 compute server

    - by MasterGaurav
    I am unable to login to the EC2 server. Here's the log of the connection-attempt: $ ssh -v -i ec2-key-incoleg-x002.pem [email protected] OpenSSH_5.6p1, OpenSSL 0.9.8p 16 Nov 2010 debug1: Reading configuration data /home/gvaish/.ssh/config debug1: Applying options for * debug1: Connecting to ec2-50-16-0-207.compute-1.amazonaws.com [50.16.0.207] port 22. debug1: Connection established. debug1: identity file ec2-key-incoleg-x002.pem type -1 debug1: identity file ec2-key-incoleg-x002.pem-cert type -1 debug1: identity file /home/gvaish/.ssh/id_rsa type -1 debug1: identity file /home/gvaish/.ssh/id_rsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'ec2-50-16-0-207.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /home/gvaish/.ssh/known_hosts:8 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: ec2-key-incoleg-x002.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey debug1: Trying private key: /home/gvaish/.ssh/id_rsa debug1: No more authentication methods to try. Permission denied (publickey). What can be the possible reason? How do I fix the issue?
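
    Some hedged general checks for "Permission denied (publickey)" - not a confirmed diagnosis for this instance, just the usual suspects (wrong login user for the AMI, or a key pair other than the one the instance was launched with):

        chmod 400 ec2-key-incoleg-x002.pem     # the key must not be group/world readable
        ssh -i ec2-key-incoleg-x002.pem ec2-user@ec2-50-16-0-207.compute-1.amazonaws.com
        ssh -i ec2-key-incoleg-x002.pem ubuntu@ec2-50-16-0-207.compute-1.amazonaws.com
        ssh -i ec2-key-incoleg-x002.pem root@ec2-50-16-0-207.compute-1.amazonaws.com
        # If none of these work, the instance was probably launched with a different
        # key pair, and the public key would have to be added to ~/.ssh/authorized_keys
        # some other way (e.g. by attaching the root volume to another instance).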

    Read the article

  • Yum installing wrong MySQL version on CentOS 5 (Amazon)

    - by Marius Stuparu
    I'm having trouble with a CentOS server running on AWS. This is CentOS 5.6 i386 from RightImage, but the problem was the same on all RightScale AMIs. When issuing the following command: yum install mysql mysql-server mysql-devel the only packages proposed by yum are MySQL-devel-community and MySQL-server-community. That wouldn't be a problem, except this package is old/incomplete, because it does not create a "mysqld" service, only an /etc/init.d/mysql (notice the missing d). That wouldn't be a problem either - I can start the service with /etc/init.d/mysql start, and it starts OK - but there is no "mysql" (or other mysql*) command available. If I try to force a different version (yum install mysql50-server...) I get this yum error: mysql-5.0.77-4.el5_6.6.i386 from updates has depsolving problems --> mysql conflicts with MySQL-server-community (even though I don't have MySQL-server-community installed). I have tried this before and after yum update, on a fresh image. How can I install a working version of MySQL? I'm stuck on CentOS 5 because I want to install Kloxo (which does not yet support CentOS 6). I'm not interested in Webmin, and I can't afford cPanel. Thanks!
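
    A hedged workaround sketch, assuming the RightScale image ships an extra repository that provides the MySQL-*-community packages (the repo names below are the stock CentOS 5 ones; check yum repolist for what the image actually uses):

        yum repolist                           # find which repo owns MySQL-server-community
        yum remove MySQL-server-community MySQL-devel-community
        yum --disablerepo='*' --enablerepo=base,updates install mysql mysql-server mysql-devel
        service mysqld start && chkconfig mysqld on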

    Read the article

  • Ways to go about optimizing website performance WordPress, Amazon EC2 Apache and RDS MySQL

    - by fuzzybee
    I have 6 WordPress websites running on 1 single EC2 instance. All the websites connect to databases in one single RDS instance. Earlier today, traffic to the largest website peaked and the RDS instance became the bottleneck - CPU utilization was at 100% for over an hour. It affected all of my websites, as it took them all forever to load. In order to prevent such an issue from happening again, which of the following will matter most, so that I know where to invest time and effort first? (I will work on all of them later, I just need to prioritise now.) 1. Improve caching for all websites. 2. Fine-tune the database server. 3. Fine-tune my Apache server. What will be the effect on user experience for my websites? Some quick searches show that I should limit the number of concurrent connections to my web server, but wouldn't that prevent users from accessing my websites? More background: My largest website has 140k visits and 660k page views a month. The other 5 websites together add up to much less than that. I'm using a large EC2 instance as the web server and a medium RDS instance as the database server. What I've already done: use the W3 Total Cache plugin for caching on most of the websites, especially the largest one (there is barely anything else I can do in terms of caching for the largest website). Am I using my resources wastefully, or are there simply not enough resources for my websites - or rather, how do I answer that question myself?

    Read the article

  • LAN, VPN on Amazon EC2, how to?

    The problem is as follows: I have 2 Windows 2003 Server instances running on the cloud. 1) How can I create a local area network from these 2 instances? 2) Assuming that I want to create a VPN network from these 2 instances, how do I do that? (I'm not very good at networking, therefore the above problem description might be incomplete or not very clear.) A detailed answer or clarification would be praised and appreciated! What I tried: 1) Setting up OpenVPN, but I got lost in the process. 2) Creating a VPN from Windows 2003 Server in the following manner: on instance A): set up a DHCP server; set up an "accept incoming VPN" connection, with the following TCP/IP settings: obtain an IP from the DHCP server; on instance B): created a new VPN connection and tried to connect to instance A using instance A's static IP, but error 806 was thrown, something related to the GRE protocol.
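
    Since OpenVPN was the sticking point, here is the smallest possible OpenVPN layout as a hedged sketch - a static shared key and a single point-to-point tunnel, no PKI. The same directives go into the .ovpn config files on the Windows builds; the tunnel IPs are placeholders and UDP 1194 must be open in instance A's security group.

        openvpn --genkey --secret static.key    # run once, copy static.key to both instances
        # instance A (listens):
        openvpn --dev tun --ifconfig 10.8.0.1 10.8.0.2 --secret static.key
        # instance B (connects; EIP-A stands for instance A's Elastic IP):
        openvpn --remote EIP-A --dev tun --ifconfig 10.8.0.2 10.8.0.1 --secret static.key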

    Read the article

  • Receive emails on Amazon EC2 Server

    - by Kartik
    I just got started with an EC2 instance and got my mail sending limit removed, allowing me to send emails from my instance. But due to lack of experience, I have no clue how to enable receiving emails sent to me on that server. The instance has an elastic IP and I have a domain name with an A record pointing to that IP. I can't seem to find good documentation on what steps need to be taken so that if someone sends an email to [email protected], the server either actually receives it or simply forwards it to my personal email. I know that it involves using Postfix, but I can't find a guide to properly configure it after the installation. Thanks
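
    For the "just forward it" case, a minimal Postfix sketch, assuming Postfix is already installed; domain.com and me@personal-mail.com are placeholders, the domain needs an MX record pointing at the instance's public hostname, and port 25 has to be open in the security group.

        postconf -e 'myhostname = mail.domain.com'
        postconf -e 'inet_interfaces = all'
        postconf -e 'mydestination = localhost'               # nothing delivered locally
        postconf -e 'virtual_alias_domains = domain.com'
        postconf -e 'virtual_alias_maps = hash:/etc/postfix/virtual'
        echo 'admin@domain.com   me@personal-mail.com' > /etc/postfix/virtual
        postmap /etc/postfix/virtual
        service postfix restart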

    Read the article

  • Can't authorize a server for Amazon RDS

    - by Parris
    We are attempting to slowly migrate a website over to AWS, among other things. We decided the first thing to move was the database. We have a dedicated server with a different hosting provider, and we only have one IP. I am having trouble authorizing the IP so that the old server can connect to RDS. It simply hangs for a while when using the mysql CLI, then responds: ERROR 2003 (HY000): Can't connect to MySQL server on 'db.address.us-east-1.rds.amazonaws.com' (110) It did work from my laptop though. I am not quite sure what is wrong. I have a feeling I don't quite understand CIDR/IP. I simply took the IP address and tacked /32 onto the end. Then I gleaned some information that it also has to do with the subnet mask? ifconfig reports 255.255.255.0. I found a calculator, and the IP changed a bit and had /24 at the end. That still didn't work. One other note... perhaps I don't know enough about the differences between OSes. The hosting provider is using CentOS, while our development machines are all Ubuntu. Any insight would be extremely helpful! THANKS :)
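
    A hedged sketch of the authorization step, assuming the classic (non-VPC) RDS setup where access is granted per DB security group; the group name and IP are placeholders. A single address is written as /32, and the 255.255.255.0 mask from ifconfig describes the server's local LAN, not what RDS needs. It is also worth confirming the server's public egress IP (for example with curl ifconfig.me), since that is the address RDS actually sees.

        aws rds authorize-db-security-group-ingress \
            --db-security-group-name default \
            --cidr-ip 198.51.100.7/32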

    Read the article

  • Amazon ELB and use of address / server names across multiple servers

    - by Stpn
    I am setting up Nginx servers behind an ELB. I set it up so that api.app.com points to the ELB. I wonder which addresses I should use for remote connections, Nginx settings etc. 1) For example, in Nginx: should I do server { listen 80; #What is the right line here: # server_name <WWW.NAME.COM> OR <ec2-.....compute-1.amazonaws.com> OR <ELB-....amazonaws.com>?; passenger_enabled on; ..... } 2) I connect the servers behind the ELB to a remote Postgres database. In the Postgres settings, should I open access to the ELB address (ELB-...amazonaws.com) or to the individual EC2 IPs?
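
    A hedged take on both questions, with a quick way to verify: for server_name, use the hostname clients actually put in the Host header - the ELB forwards Host unchanged, so that is the public name api.app.com, not the ELB's own *.elb.amazonaws.com name and not the instance's ec2-...compute-1.amazonaws.com name. For Postgres, the connections originate from the EC2 instances themselves (the ELB only fronts inbound HTTP), so pg_hba.conf / the database firewall should allow the instances' addresses, not the ELB. Paths below are the Debian/Ubuntu defaults and are an assumption.

        # confirm which server block answers for the public name, from the instance itself:
        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: api.app.com' http://localhost/
        # see which client addresses Postgres currently allows:
        sudo grep -vE '^\s*(#|$)' /etc/postgresql/*/main/pg_hba.conf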

    Read the article

  • Address already in use - Amazon AWS

    - by Peter
    I've run into a really weird issue. I was debugging a server 500 error script on our EC2 instance and found that we didn't have ioncube loaders installed. So I went to go install them and I created a new file at /etc/php.d/zend.ini and initially I inserted the value of extension=/usr/local/ioncube/ioncube_loader_lin_5.3.so and restarted httpd at which point it told me: The ionCube Loader is a Zend-Engine extension and not a module Please specify the Loader using 'zend_extension' in php.ini PHP Fatal error: Unable to start ionCube Loader module in Unknown on line 0 So I changed the contents of zend.ini to zend_extension=/usr/...etc. Now when I attempt to restart httpd I get this error: Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80 (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down Unable to open logs I can't even run /etc/init.d/httpd stop without it erroring. I've since removed zend.ini to see if that's what caused it and it doesn't seem to be. Any ideas?
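
    A hedged sketch of the usual way out of that state - something (often orphaned httpd workers left behind by the earlier failed restarts) is still holding port 80, so httpd can neither stop nor start cleanly:

        sudo netstat -tlnp | grep ':80 '    # shows the PID/program bound to port 80
        sudo killall -9 httpd               # or kill the specific PID reported above
        sudo /etc/init.d/httpd start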

    Read the article

  • Configure non-destructive Amazon S3 bucket policy

    - by Assaf
    There's a bucket into which some users may write their data for backup purposes. They use s3cmd to put new files into their bucket. I'd like to enforce a non-destruction policy on these buckets - meaning, it should be impossible for users to destroy data; they should only be able to add data. How can I create a bucket policy that only lets a certain user put a file if it doesn't already exist, and doesn't let him do anything else with the bucket?
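
    A hedged sketch of a policy in that direction (bucket name, account ID and user are placeholders). One caveat worth stating up front: s3:PutObject can still overwrite an existing key, so a true "add only, never destroy" guarantee also needs bucket versioning (and possibly MFA delete) on top of the policy - the policy alone cannot express "only if the object does not already exist".

        aws s3api put-bucket-policy --bucket backup-bucket --policy '{
          "Version": "2012-10-17",
          "Statement": [
            {"Sid": "AllowPutAndList",
             "Effect": "Allow",
             "Principal": {"AWS": "arn:aws:iam::111122223333:user/backup-user"},
             "Action": ["s3:PutObject", "s3:ListBucket"],
             "Resource": ["arn:aws:s3:::backup-bucket", "arn:aws:s3:::backup-bucket/*"]},
            {"Sid": "DenyDestructiveActions",
             "Effect": "Deny",
             "Principal": {"AWS": "arn:aws:iam::111122223333:user/backup-user"},
             "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion", "s3:DeleteBucket", "s3:PutBucketPolicy"],
             "Resource": ["arn:aws:s3:::backup-bucket", "arn:aws:s3:::backup-bucket/*"]}
          ]}'
        aws s3api put-bucket-versioning --bucket backup-bucket --versioning-configuration Status=Enabled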

    Read the article

  • Amazon EC2: assign domain name

    - by user41999
    1. Amazon AWS doesn't provide a DNS service? 2. I can only assign a static IP through EC2, so the only way to assign a domain name is to use a third-party DNS service? Which do you all recommend? I need one that is able to add SRV records.

    Read the article

  • Different Amazon image with/without EBS boot?

    - by user41999
    I'm at http://alestic.com/ and can see both Ubuntu 10.04 Lucid Canonical, ubuntu@ and Ubuntu 10.04 Lucid Canonical, ubuntu@ (EBS boot). 1. What is the difference between images with and without EBS boot? Is there any article explaining this? 2. Can someone list the advantages/disadvantages of using or not using EBS boot?

    Read the article
