Search Results

Search found 2846 results on 114 pages for 'amazon elb'.

Page 4/114 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Amazon AWS, RDS

    - by b0x0rz
    Just need quick info on two closely related questions. 1) If we use RDS on Amazon AWS, what happens if some of the machines RDS is running on crash? What happens to the data: is it gone? 2) If we use RDS on Amazon AWS via Beanstalk and decide to stop the environment, is the database, I mean the data, gone? Thanks a lot. A simple yes/no will do, but if you can give more info, or a solution to mitigate these issues if any of the answers is unfavorable, that would be great.
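
    On the mitigation side, one common approach is to turn on automated backups and a Multi-AZ standby, and to take a manual snapshot before tearing an environment down. A minimal sketch with the modern AWS CLI (the instance identifier, retention window, and snapshot name are placeholders):

        # Hedged sketch: enable automated backups and a Multi-AZ standby
        aws rds modify-db-instance \
            --db-instance-identifier mydb \
            --backup-retention-period 7 \
            --multi-az \
            --apply-immediately

        # Manual snapshots persist until you delete them, so take one
        # before stopping or terminating the environment
        aws rds create-db-snapshot \
            --db-instance-identifier mydb \
            --db-snapshot-identifier mydb-final-snapshot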

    Read the article

  • AWS Elastic load balancer doesn't decrease instances from Alarm Trigger

    - by jchysk
    I have a load balancer that I created an auto-scaling group and launch config for. I created the auto-scaling group with a min size of 1 and a max size of 20. I have a scale-down policy:

        as-put-scaling-policy SBMScaleDownPolicy --auto-scaling-group SBMAutoScaleGroup --adjustment=-1 --type ChangeInCapacity --cooldown 300

    Then I set up an alarm:

        mon-put-metric-alarm SBMLowCPUAlarm --comparison-operator LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold 35 --alarm-actions arn:aws:autoscaling:us-east-1:policystuffhere:autoScalingGroupName/SBMAutoScaleGroup:policyName/SBMScaleDownPolicy --dimensions "AutoScalingGroupName=SBMAutoScaleGroup"

    When average CPU usage over 10 minutes is under 35, the alarm shows up in CloudWatch as "In Alarm State", but the number of instances doesn't decrease. Also, if there's only one instance running, it'll spin up another to reach 2 even if no scale-up alarm has fired. It seems like a default value of 2 is set somewhere. How can I change this?
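
    If the group's desired capacity got pinned at 2, one way to inspect and reset it is the same legacy Auto Scaling CLI used above. A hedged sketch (the group name is from the question; the capacity value is illustrative):

        # Show min-size, max-size, and desired-capacity for the group
        as-describe-auto-scaling-groups SBMAutoScaleGroup --headers

        # Manually drop desired capacity back to 1
        as-set-desired-capacity SBMAutoScaleGroup --desired-capacity 1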

    Read the article

  • Setting up scripts in Amazon EC2 Cloud

    - by racket99
    Hello, I am currently running a few Perl and Python scripts on a Windows PC and would like to port them over to Amazon EC2 servers running 64-bit Linux. The scripts are basic web scrapers that go to a variety of websites, get data, and then save it daily as CSV files. I would like to install these in the cloud and get them running in an automated way so that they will run without my intervention. Also, given that I don't want to lose all the data if the instance crashes, I should also upload the CSV files to Amazon S3. Any idea how I can do this? I am not terribly versed in Linux, nor do I know Perl/Python well. What is the best way for me to tackle this?
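
    One common pattern is a cron job that runs each scraper and then pushes the output to S3. A minimal sketch, assuming s3cmd is installed and configured (the script path and bucket name are placeholders):

        # crontab entry: run the scraper daily at 02:00, then copy CSVs to S3
        0 2 * * * python /home/ec2-user/scraper.py && s3cmd put /home/ec2-user/output/*.csv s3://my-csv-backups/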

    Read the article

  • Building nginx 1.0.4 on Amazon EC2 micro - perl and python problems

    - by digitaltoast
    I'd like to run nginx as a reverse proxy with apache2 on my EC2 micro instance. yum install nginx gives me nginx-0.8.53-1.2.amzn1.x86_64.rpm, but the current nginx is 1.0.4. I found and followed this guide: http://kdn2.info/2011/05/install-nginx-on-amazon-ec2/ It works fine up to and including "make". When I get to

        checkinstall --fstrans=no

    I get:

        ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored.
        test -d '/var/log/nginx' || mkdir -p '/var/log/nginx'
        ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored.
        make[1]: Leaving directory `/root/src/nginx-1.0.4'
        ======================== Installation successful ==========================
        Copying documentation directory...
        ./
        ./CHANGES
        ./LICENSE
        ./README
        cp: cannot stat `//var/tmp/gRWoVgIcdbmjfTjoVGBM/newfiles.tmp': No such file or directory
        Copying files to the temporary directory...OK
        Striping ELF binaries and libraries...OK
        Compressing man pages...OK
        Building file list...OK
        Building RPM package... FAILED!
        *** Failed to build the package

    ...and the logfile is full of:

        Building target platforms: x86_64
        Building for target x86_64
        Processing files: nginx-1.0.4-1.x86_64
        error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr
        error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr/doc

    There IS a /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/ but no /usr under it. Following further down the page, it says: "If we want to use, for example, PHP 5.2 we can download PHP and Nginx compatible with Amazon Kernel (Xen Kernel) from the CentosALT Repository." So I install the two repositories, but when I run

        yum install http://centos.alt.ru/pub/nginx/1.0/RPMS/x86_64/nginx-stable-1.0.4-1.el5.x86_64.rpm

    I get:

        Error: Package: nginx-stable-1.0.4-1.el5.x86_64 (/nginx-stable-1.0.4-1.el5.x86_64)
               Requires: perl(:MODULE_COMPAT_5.8.8)
        You could try using --skip-broken to work around the problem

    but that doesn't fix it. When I do yum update, I get:

        --> Finished Dependency Resolution
        Error: Package: python-distribute-0.6.19-10.1.x86_64 (devel_languages_python)
               Requires: python < 2.5
               Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main)
                   python = 1:2.6-1.19.amzn1
        Error: Package: python-distribute-0.6.19-10.1.i586 (devel_languages_python)
               Requires: python < 2.5
               Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main)
                   python = 1:2.6-1.19.amzn1

    I've tried everything, including yum clean all and various other suggestions found on other sites. If anyone has any suggestions, or a known package of the current 1.0.4 nginx working on EC2 Micro (Linux ip-10-56-63-85 2.6.35.11-83.9.amzn1.x86_64 #1 SMP Sat Feb 19 23:42:04 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux, which I think is RHEL 5?), then I'd be grateful. Incidentally, does this repolist look right?

        repo id                   repo name                                                            status
        CentALT                   CentALT Packages for Enterprise Linux 5 - x86_64                     enabled: 112+157
        amzn-main                 amzn-main-Base                                                       enabled: 2,706
        amzn-main-debuginfo       amzn-main-debuginfo                                                  disabled
        amzn-main-nosrc           amzn-main-nosrc                                                      disabled
        amzn-updates              amzn-updates-Base                                                    enabled: 328
        amzn-updates-debuginfo    amzn-updates-debuginfo                                               disabled
        amzn-updates-nosrc        amzn-updates-nosrc                                                   disabled
        devel_languages_python    Python and Python Modules (SLE_10)                                   enabled: 1,452+768
        epel                      Extra Packages for Enterprise Linux 5 - x86_64                       enabled: 5,892+604
        epel-debuginfo            Extra Packages for Enterprise Linux 5 - x86_64 - Debug               disabled
        epel-source               Extra Packages for Enterprise Linux 5 - x86_64 - Source              disabled
        epel-testing              Extra Packages for Enterprise Linux 5 - Testing - x86_64             disabled
        epel-testing-debuginfo    Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Debug     disabled
        epel-testing-source       Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Source    disabled
        s3tools                   Tools for managing Amazon S3 - Simple Storage Service (RHEL_6)       enabled: 2+1
        repolist: 10,492
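
    If the goal is simply a working nginx 1.0.4 rather than an RPM, a plain source install sidesteps checkinstall entirely. A hedged sketch (the configure flags shown are illustrative; pick paths to suit):

        cd /root/src/nginx-1.0.4
        ./configure --prefix=/usr \
            --conf-path=/etc/nginx/nginx.conf \
            --error-log-path=/var/log/nginx/error.log \
            --pid-path=/var/run/nginx.pid
        make
        sudo make install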

    Read the article

  • Prevent Amazon EC2 Time zone from reverting back on yum update

    - by D.Tate
    I use an Amazon EC2 server instance that runs a distro called Amazon Linux AMI. (I've read that it is based on CentOS/Red Hat.) My specific version is the 2012.09 release. Anyway, I was able to change the time zone about a week ago from the default UTC to America/New_York (which is EST/EDT). The command I used to change it was: ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime ...thanks to this other Server Fault question. At that point, I was able to run date from the command line, and it correctly displayed the EDT time. And even after EDT "fell back" to EST this past Sunday, I was pleased to find that running date still produced the correct local time. So that was great. However, after running a yum update yesterday, it seems that my time zone got reverted back to plain ol' UTC. I even checked the last modified time of the /etc/localtime file, and indeed it confirmed that it had been modified around the same time I had updated. Is there any way to prevent this from happening again, or will I be stuck resetting the time zone every time I do a yum update?
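
    One fix that reportedly survives updates is setting the zone in /etc/sysconfig/clock as well, since package updates (glibc in particular) can regenerate /etc/localtime from it. A hedged sketch for Amazon Linux:

        # Record the zone where the distro expects it...
        sudo sed -i 's|^ZONE=.*|ZONE="America/New_York"|' /etc/sysconfig/clock
        # ...and re-point /etc/localtime as before
        sudo ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime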

    Read the article

  • Dealing with upgrade of libevent on Amazon AWS

    - by Dreen
    I am building an application (in Python) on Amazon EC2 that has the following dependency chain: gevent-websocket ---> gevent ---> libevent. The last one (libevent) got upgraded on Sunday, and my server is now generating this error:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
            from gevent import core
        ImportError: libevent-1.4.so.2: cannot open shared object file: No such file or directory

    Not wanting to spend much time on the issue, I tried to mitigate it by creating a symlink to an always-recent version:

        $ sudo ln -s /usr/lib64/libevent.so /usr/lib64/libevent-1.4.so.2

    But it didn't quite work:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
            from gevent import core
        ImportError: /usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/core.so: undefined symbol: current_base

    I am a bit stumped as to how to proceed. Should I create more symlinks? To what? Or is there a better way to solve this problem? PS. For the record, I am using the Amazon AMI.
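
    Since gevent's C extension was compiled against the old libevent ABI, symlinking newer sonames will keep hitting missing symbols. A hedged sketch of the usual remedy, rebuilding gevent against the installed libevent (the package names assume Amazon Linux and may differ):

        sudo yum install gcc libevent-devel python26-devel
        sudo pip install --upgrade --force-reinstall gevent==0.13.7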

    Read the article

  • Cannot connect to MySQL on RDS (Amazon Web Services) from my laptop

    - by Bruno Reis
    I'm having some trouble connecting to a MySQL 5.1 server on an RDS instance on AWS from my laptop. The detailed description of the problem is here: https://forums.aws.amazon.com/thread.jspa?messageID=323397 In short: I have 2 MySQL servers, both with the same DB configuration and firewall (security group) configuration. One of them works fine: I can connect to it from my EC2 instances (i.e., from inside the AWS cloud) and from my laptop. The other one doesn't: I can connect from my EC2 instances but not from my laptop. The symptom: a connection attempt from my laptop just hangs and then times out, as if there were a firewall blocking me (i.e., silently dropping my SYN packets). I must say that everything had been working fine for a very long time, and this problem began suddenly, 3 days ago, without any modifications to DB parameters or the security groups. My current analysis of the situation:

    1. The firewall (i.e., security group) cannot be the problem: both MySQL servers share the same firewall configuration, and I can connect to one of them but not to the other. Later on, I even added a rule to allow inbound connections from 0.0.0.0/0 (i.e., I turned off the firewall), and nothing. I also created a new, fresh security group and changed this instance's SG to the new one (to which I first added my IP address, and then 0.0.0.0/0), but still nothing.

    2. The credentials cannot be the problem: I use the same ones from my laptop and from my EC2 instances, and the user (which is what Amazon calls the master user) has a host of '%' in the database.

    3. MySQL is not blocking my IP due to, say, too many failed connection attempts: I've run FLUSH HOSTS on the database, and I also tried to connect using many different source IP addresses, even from all around the world through a VPN proxy service.

    What could I be missing? I'm asking here because it's been about 36 hours since I posted on the AWS forums but got no answer at all over there... someone here might have a solution! Any input is really appreciated, I'm out of ideas. Thanks!
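
    A quick way to separate a network-level drop from a MySQL-level refusal is to test the TCP handshake directly from the laptop. A hedged sketch (the endpoint below is a placeholder):

        # From the laptop: does the TCP handshake to port 3306 complete at all?
        nc -vz mydb.xxxxxxxxxx.us-east-1.rds.amazonaws.com 3306
        # If nc also times out, the block is below MySQL (routing/firewall),
        # not credentials; run the same probe from an EC2 instance to compare.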

    Read the article

  • Inside Amazon’s Warehouses

    - by Jason Fitzpatrick
    If you’re expecting the inside of Amazon’s warehouses to be some sort of rigidly organized, robot-filled warehouse of tomorrow, you’ll be quite surprised to find that the storage technique they employ is called “chaotic storage”. International Business Times paid a visit to a major Amazon warehouse and took a tour. Rather than finding robots, they found: Amazon must rely on barcodes and human hands to find the ordered items and drop them into the proper bins — without robots, Amazon utilizes a system known as “chaotic storage,” where products are essentially shelved at random. By storing items randomly instead of categorically, the warehouse has a much better flow of material. Even without robots or automation, Amazon can compile a “picking list” where each item needs to be taken off the shelf and scanned again before it can be shipped. The real advantage to chaotic storage is that it’s significantly more flexible than conventional storage systems. If there are big changes in a product range, the company doesn’t need to plan for more space, because the products or their sales volumes don’t need to be known or planned in advance if they’re simply being stored at random.

    Read the article

  • Amazon blocked ports 80 and 443 on my instance

    - by Burak
    Amazon AWS sent me an email warning that my instance has been behaving like a phishing site, which is against the AWS Customer Agreement, and notifying me that they have blocked ports 80 and 443 (HTTP and HTTPS respectively). Google Safe Browsing also reported that code injection was made into one of my websites. After a cleanup, Google stopped blocking my website from displaying in search results. So, how can I get my ports unblocked?

    Read the article

  • Amazon EBS for web site files

    - by MattB
    I'm new to the Amazon EC2/EBS system. I'm trying to figure out the "best practice" for hosting a web application (PHP, ASP.NET, etc.). The way I see it, I have 2 options:

    1. Have the instance hold my web files: no need to worry about attaching volumes, etc.?
    2. Have an EBS volume hold my web files: easily update with new code without needing to recreate the AMI for each update?

    How do others handle this?
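
    For reference, option 2 above looks roughly like the sketch below with the legacy EC2 API tools (volume/instance IDs and device names are placeholders; on newer kernels the device may appear as /dev/xvdf):

        ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf
        # on the instance: make a filesystem once, then mount it at the web root
        sudo mkfs -t ext4 /dev/sdf
        sudo mount /dev/sdf /var/www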

    Read the article

  • How to dynamically insert a keyword in an Amazon Search Widget

    - by ElHaix
    Through Amazon Associates, you can create search widgets that have a place for a search term. In the admin, you can set the default search term, but that seems to be tied to the widget ID. I would like to be able to dynamically set the search term for the widget when it is displayed. How can I accomplish this? Note: I am referring to the following banner script:

        <SCRIPT charset="utf-8" type="text/javascript"
          src="http://ws-na.amazon-adsystem.com/widgets/q?rt=tf_sw&ServiceVersion=20070822&MarketPlace=CA&ID=V20070822%2FCA%2F[PARTNER-ID]%2F8002%2F84cb1754-d9ab-48de-b96b-574927fa9599">
        </SCRIPT>
        <NOSCRIPT>
          <A HREF="http://ws-na.amazon-adsystem.com/widgets/q?rt=tf_sw&ServiceVersion=20070822&MarketPlace=CA&ID=V20070822%2FCA%2F[PARTNER-ID]%2F8002%2F84cb1754-d9ab-48de-b96b-574927fa9599&Operation=NoScript">Amazon.ca Widgets</A>
        </NOSCRIPT>

    Read the article

  • Will Xubuntu 12.10 also have Amazon ads?

    - by Miguel Guasch
    Hello, and thanks in advance for your comments. I'm currently using Ubuntu 12.04 and quite happy with it. I'm using the Unity desktop, and I've got no major complaints. My problem/question is: I've been reading on the news, forums, and various websites that the new version, 12.10, which I'll eventually have to upgrade to if I plan on using Ubuntu, has a lens/Amazon function on the dash that sends queries to Amazon. Now, this disturbs me a bit, since I don't want to see "shopping recommendations" every time I look for something, be they from Amazon or from "future partners". Does this new "function" only apply to the Unity desktop? If I switch to the Xfce desktop, will I be able to "save myself" from sending search data to Amazon and/or receiving shopping recommendations from them? Or will I have to switch distributions entirely in order to evade this? Again, many thanks in advance for your comments and/or help. Regards, Miguel.

    Read the article

  • Amazon Web Services Free Trial: query about get and put requests

    - by abel
    Amazon recently introduced a free tier for its cloud offering. I signed up for AWS, and while signing up for the free tier of S3, I found this: "As part of AWS Free Usage Tier, you can get started with Amazon S3 for free. Upon sign-up, new AWS customers receive 5 GB of Amazon S3 storage, 20,000 Get Requests, 2,000 Put Requests, 15GB of bandwidth in and 15GB of bandwidth out each month for one year." (source: aws.amazon.com, emphasis mine). 20,000 GET requests and 2,000 PUTs mean at most 20,000 page views and 2,000 file uploads per month. Isn't that far lower than what App Engine offers, 43,200,000 requests per day? Am I missing something? Please help.

    Read the article

  • Mounting an Amazon EC2 instance on Mac OS X

    - by hinghoo
    What are some methods for transferring files to and from Amazon EC2 instances? I'm looking for solutions/tools for editing files as well as copying files to EC2 instances from both Mac and Windows. For example, what are some solutions for mounting a drive from an instance locally? Generally, what other methods are out there?
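
    Two common approaches from a Mac, sketched below (the hostname, key path, and mount point are placeholders; sshfs requires a FUSE implementation such as OSXFUSE):

        # One-off copies over SSH
        scp -i ~/.ssh/mykey.pem local.txt ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/home/ec2-user/

        # Mount the instance's filesystem locally for in-place editing
        sudo mkdir -p /Volumes/ec2
        sshfs -o IdentityFile=~/.ssh/mykey.pem \
            ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/ /Volumes/ec2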

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason. I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded but never modified, and then deleted. Are files deleted in this scenario recoverable? Then, we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time. Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage, and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage/transfer costs, but I'm not sure if duplicity allows bucket-to-bucket transfers directly without bringing data down to the controlling client first. So, I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
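
    On the bucket-to-bucket question, newer s3cmd releases can sync between buckets server-side, without routing the data through the client. A hedged sketch (bucket names are placeholders; it's worth verifying your s3cmd version supports remote-to-remote sync):

        # Server-side copy between buckets
        s3cmd sync s3://original-bucket/ s3://backup-bucket-2013-01/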

    Read the article

  • Amazon EC2 - how to determine how busy each CPU is

    - by sally
    I have an Amazon EC2 micro instance. I believe this is 1 core (or 2 for periodic bursts) with 4 CPUs. I'm getting confused by the terminology (ECU vs CPU vs core), but really I would like to see how busy each CPU is. When I look at top, it seems to be showing me just the cores. I want to see whether my process is being spread out across the available processors and how busy each one is. What is the appropriate command to do this?
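
    Two quick ways to see per-processor load, sketched below (the sysstat package name assumes a yum-based distro):

        # In top, press '1' to toggle a separate usage row for each processor.
        # Or use mpstat for one report per CPU every second:
        sudo yum install sysstat
        mpstat -P ALL 1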

    Read the article

  • 100% utilization on Amazon server

    - by user2939830
    Good day, I would just like to know if you have any idea what could be the possible cause of a sudden disconnection of clients and 100% CPU utilization on our Amazon server. This problem started 2 days ago, and on both occasions it happened at just past 7 in the morning GMT+8. What we usually do is reset the socket for it to normalize, and then the next day the same thing happens at 7 in the morning: every client is disconnected from the server.
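
    Since the spike recurs on a schedule, it may help to leave a recorder running so the next one is caught in the act. A hedged sketch (07:00 GMT+8 is 23:00 UTC, which EC2 instances typically use; the log path is illustrative):

        # crontab: every minute from 22:50 to 23:20 UTC, log the top CPU consumers
        50-59 22 * * * top -b -n 1 | head -20 >> /var/log/cpu-watch.log
        0-20 23 * * * top -b -n 1 | head -20 >> /var/log/cpu-watch.log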

    Read the article

  • AWS Autoscaling issue with existing nodes in ELB

    - by Ram Prasad
    I already have an ELB set up called MyLoadBalancer, with 2 nodes running on it with health checks (which check a URL on each node to see if it is up). I created an auto-scaling group (min 2, max 10), associated the launch config mylaunchconfig that provisions a node using an AMI, and created a trigger that watches the average request count on the load balancer and is supposed to increase the node count by 1 above the upper threshold and decrease it by 1 below the lower threshold:

        as-create-or-update-trigger MyTrigger --auto-scaling-group MyAutoScalingGroup --namespace "AWS/ELB" --measure RequestCount --statistic Average --dimensions "LoadBalancerName=MyLoadBalancer" --period 60 --lower-threshold 500 --upper-threshold 800 --lower-breach-increment=-1 --upper-breach-increment=1 --breach-duration 600

    Now the issue is, as soon as I put in the trigger, it starts 2 more nodes, even though there are already two nodes in the LB. So why is it provisioning 2 more nodes when the nodes are there? Is it because it is not recognizing the existing 2 nodes? Then how do I add the existing nodes to the auto-scaling group?
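
    One possibility is that the existing 2 nodes were launched by hand and so are not members of the group, which would make the group launch fresh instances to reach its minimum. If the tooling available supports it, existing instances can be attached explicitly; a hedged sketch with the newer unified AWS CLI (instance IDs are placeholders):

        aws autoscaling attach-instances \
            --instance-ids i-aaaaaaaa i-bbbbbbbb \
            --auto-scaling-group-name MyAutoScalingGroup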

    Read the article

  • AWS: Multi-region setup using single RDS instance

    - by Ion
    I'm trying to scale our web application (PHP, MySQL, memcache) in a multi-region scheme. Currently we are using a setup with two EC2 instances behind an ELB and an RDS instance, all of them in the US-EAST (Virginia) region. We would like to have a presence in the EU (Ireland) region as well. This means at least a new EC2 instance there (identical to the others, serving the same application). I have copied the desired AMI, set up the new instance, set up an identical ELB configuration (required for SSL termination), and configured latency-based routing in Route 53. It works as intended, but clients from the EU have speed problems. This is due to the fact that the EU EC2 instances connect to the US-based RDS instance. As far as I know, Amazon has not yet enabled RDS multi-region replication. Do you have any suggestions on how to properly speed up the whole setup while using the single RDS instance? Also, any ideas in general on how to scale things up? Ideally we would like to continue using RDS for various reasons. Nevertheless, I am open to suggestions (I guess the next idea would be to host our own MySQL servers).
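
    If sticking with RDS, one option Amazon added later is a cross-region read replica, letting the EU instances read locally while writes still go to US-EAST. A hedged sketch (this capability postdates the question; the identifiers, account number, and regions are placeholders):

        aws rds create-db-instance-read-replica \
            --db-instance-identifier mydb-eu \
            --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydb \
            --region eu-west-1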

    Read the article

  • OpenVPN access server on Amazon VPC vs free version

    - by imaginative
    Maybe I'm missing the point, but I'd like to set up simple VPN access with software VPN to access my private network on Amazon VPC. I thought OpenVPN would be a great solution for this, and I thought it might make sense to put it on the NAT instance that comes with VPC so I don't have to spend money on another instance. Is there any advantage to running the following: http://www.openvpn.net/index.php?option=com_content&id=493 vs sticking to the free solution of OpenVPN? What does one offer over the other? Any reason not to run this on the NAT instance itself?
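
    For the free route, the community edition is a straightforward package install on the NAT instance. A hedged sketch (assumes an EPEL-enabled, yum-based AMI; you would still need to allow the VPN port, typically UDP 1194, in the instance's security group):

        sudo yum install openvpn --enablerepo=epel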

    Read the article

  • Amazon EC2 - wildcard subdomain

    - by Sharanc25
    I'm running an EC2 instance on Ubuntu with a LAMP stack. I configured my httpd.conf file to support wildcard subdomains, but it didn't work. My httpd.conf file:

        NameVirtualHost *
        <VirtualHost *>
            DocumentRoot /www/example
            ServerName example.com
            ServerAlias *.example.com
        </VirtualHost>

    I tried all possible solutions but they didn't work. Finally I used Amazon Route 53 to set up a wildcard DNS record that redirects all *.example.com to example.com. My questions are: Is it okay if I use Route 53 instead of the httpd.conf file for wildcard subdomains? Is there an error in my httpd.conf file? (Note: I used the same httpd.conf settings with another hosting provider and it worked perfectly there.) Additional information:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80    is a NameVirtualHost
                default server example.com (/etc/apache2/httpd.conf:1)
                port 80 namevhost example.com (/etc/apache2/httpd.conf:1)
                port 80 namevhost ip-xx-xxx-xx-xxx.ec2.internal (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK

    Read the article

  • Where is Amazon Linux AMI Test Page EC2?

    - by fuzzybee
    I have set up my websites as directories directly under /var/www/html/ and they are working just fine (the websites are mapped to virtual hosts). So this is mainly out of curiosity for the moment. Furthermore, being able to customise this might bring some benefits in the future, e.g. branding the elastic IPs my computer uses temporarily. Notes: I can always create an index.html page under /var/www/html/ and modify it, but that's not my goal here. I can also map the elastic IP address to a directory /var/www/html/default/ and do my stuff there, but that is also not my goal here. My goal is to find the Amazon Linux AMI test page. I've tried running a find command to locate it, but it takes too long, obviously.
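
    A quicker hunt than a filesystem-wide find is to search the places Apache-style distros usually keep the placeholder page. A hedged sketch (on Red Hat-derived systems the test page is typically served via the welcome/noindex config; paths may differ on Amazon Linux):

        grep -ril "test page" /etc/httpd /var/www 2>/dev/null
        # the usual suspects on RHEL-like distros:
        cat /etc/httpd/conf.d/welcome.conf
        less /var/www/error/noindex.html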

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >