Search Results

Search found 3025 results on 121 pages for 'amazon ec2'.

Page 36 of 121

  • What does the EC2 command line say when a machine won't start?

    - by OneSolitaryNoob
    When starting an instance on Amazon EC2, how would I detect a failure, for instance if there's no machine available to fulfill my request? I'm using one of the less-common machine types and am concerned it won't start up, but am having trouble finding out what message to look for to detect this. I'm using the EC2 command-line tools to do this. I know I can look for 'running' when I do ec2-describe-instances to see if the machine is up, but I don't know what to look for to see if the startup failed. Thanks!
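
    A rough sketch of how this tends to surface, written here with the boto Python library rather than the CLI tools (an assumption; any client that polls DescribeInstances sees the same states). A capacity problem can either be refused outright as an API error, or the instance can appear and then flip from 'pending' to 'terminated', in which case the state reason (when present) says why. The AMI id below is hypothetical.

      import time
      import boto.ec2
      from boto.exception import EC2ResponseError

      conn = boto.ec2.connect_to_region('us-east-1')       # region is an assumption

      try:
          reservation = conn.run_instances('ami-xxxxxxxx', instance_type='cc2.8xlarge')
      except EC2ResponseError as e:
          # e.g. InsufficientInstanceCapacity is reported here, before any instance exists
          print('launch refused: %s' % e.error_code)
      else:
          instance = reservation.instances[0]
          while instance.state == 'pending':
              time.sleep(10)
              instance.update()
          if instance.state == 'running':
              print('started: %s' % instance.id)
          else:
              # a failed start usually ends up 'terminated'; state_reason (if set) says why
              print('failed: state=%s reason=%s'
                    % (instance.state, getattr(instance, 'state_reason', None)))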

    Read the article

  • Can EC2 instances be set up to come from different IP ranges?

    - by Joshua Frank
    I need to run a web crawler and I want to do it from EC2 because I want the HTTP requests to come from different IP ranges so I don't get blocked. So I thought distributing this on EC2 instances might help, but I can't find any information about what the outbound IP range will be. I don't want to go to the trouble of figuring out the extra complexity of EC2 and distributed data, only to find that all the instances use the same address block and I get blocked by the server anyway. NOTE: This isn't for a DoS attack or anything. I'm trying to harvest data for a legitimate business purpose, I'm respecting robots.txt, and I'm only making one request per second, but the host is still shutting me down. Edit: Commenter Paul Dixon suggests that the act of blocking even my modest crawl indicates that the host doesn't want me to crawl them and therefore that I shouldn't do it (even assuming I can work around the blocking). Do people agree with this?
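
    For what it's worth, each EC2 instance presents its own public IP, but they are all drawn from Amazon's published address blocks, so a host that blocks by range can still block all of them. A quick way to see which addresses your instances actually present, sketched with boto (an assumption; the region is a placeholder):

      # List the public IPs of all running instances in one region.
      import boto.ec2

      conn = boto.ec2.connect_to_region('us-east-1')   # region is an assumption
      for reservation in conn.get_all_instances():
          for instance in reservation.instances:
              if instance.state == 'running':
                  print('%s  %s' % (instance.id, instance.ip_address))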

    Read the article

  • architecture and tools for a remote control application?

    - by slothbear
    I'm working on the design of a remote control application. From my iPhone or a web browser, I'll send a few commands. Soon my home computer will perform the commands and send back results. I know there are remote desktop apps, but I want something programmable, something simpler, and something that I wrote. My current direction is to use Amazon Simple Queue Service (SQS) as the message bus. The iPhone places some messages in a queue. My local Java/JRuby program notices the messages on the queue, performs the work and sends back status via a different queue. This will be a very low-volume application. At $1.00 for a million requests (plus a handful of data transfer charges), Amazon SQS looks a lot more affordable than having my own server of any type. And super reliable, that's important for me too. Are there better/standard toolkits or architectures for this kind of remote control? Cost is not a big issue, but I prefer the tons I learn by doing it myself. I'm moderately concerned about security, but doubt it will be a problem. The list of commands recognized will be very short, and only recognized in specific contexts. No "erase hard drive" stuff. update: I'll probably distribute these programs to some other people who want the same function, but who don't have Amazon SQS accounts. For now, they'll use anonymous access to my queues, with random 80-character queue names.
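
    A minimal sketch of the command/result loop described above, using boto's SQS bindings from Python rather than Java/JRuby (an assumption made only to keep the example short; the queue names and command whitelist are hypothetical):

      import time
      import boto.sqs
      from boto.sqs.message import Message

      ALLOWED = ('status', 'restart-service')          # only recognize a short list of commands

      def run(cmd):
          # stand-in for the real work the home machine would perform
          return 'ok: %s' % cmd

      conn = boto.sqs.connect_to_region('us-east-1')   # region is an assumption
      commands = conn.create_queue('remote-commands')  # created on first use
      results = conn.create_queue('remote-results')

      while True:
          for msg in commands.get_messages(num_messages=10):
              cmd = msg.get_body().strip()
              outcome = run(cmd) if cmd in ALLOWED else 'ignored: %s' % cmd
              reply = Message()
              reply.set_body(outcome)
              results.write(reply)
              commands.delete_message(msg)
          time.sleep(5)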

    Read the article

  • AWS RDS connection count

    - by wmarbut
    I am using AWS RDS with MySQL for a project and have a "large" instance. The documentation is clear on what this means as far as compute resources and RAM goes, but I can't find anything that documents how many open database connections that I can have. The app that I am using is PHP and it utilizes PDO with persistent connections. This means that the number of open connections could reach the maximum number of PHP child processes running at any given point. How do I ensure that my RDS instance has a max connections setting high enough to be comfortable with this?
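
    For reference, RDS takes max_connections from the DB parameter group (on MySQL the default is a formula based on the instance's memory), so the practical check is to compare that value against your worst-case count of Apache/PHP children across all web servers. A small sketch of that check, assuming the MySQLdb driver and placeholder credentials:

      import MySQLdb   # assumes the MySQL-python package is installed

      conn = MySQLdb.connect(host='mydb.xxxxxxxx.us-east-1.rds.amazonaws.com',  # placeholder endpoint
                             user='admin', passwd='secret')
      cur = conn.cursor()
      cur.execute("SHOW VARIABLES LIKE 'max_connections'")
      print('max_connections: %s' % cur.fetchone()[1])
      cur.execute("SHOW STATUS LIKE 'Max_used_connections'")
      print('max ever used:   %s' % cur.fetchone()[1])
      cur.execute("SHOW STATUS LIKE 'Threads_connected'")
      print('connected now:   %s' % cur.fetchone()[1])
      conn.close()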

    Read the article

  • Deleting multiple objects in an AWS S3 bucket with s3curl.pl?

    - by user183394
    I have been trying to use the AWS "official" command line tool s3curl.pl to test out the recently announced multi-object delete. Here is what I have done:

    First, I tested out s3curl.pl with a set of credentials without a hitch:

      $ s3curl.pl --id=s3 -- http://testbucket-0.s3.amazonaws.com/|xmllint --format -
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100   884    0   884    0     0   4399      0 --:--:-- --:--:-- --:--:--  5703
      <?xml version="1.0" encoding="UTF-8"?>
      <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <Name>testbucket-0</Name>
        <Prefix/>
        <Marker/>
        <MaxKeys>1000</MaxKeys>
        <IsTruncated>false</IsTruncated>
        <Contents>
          <Key>file_1</Key>
          <LastModified>2012-03-22T17:08:17.000Z</LastModified>
          <ETag>"ee0e521a76524034aaa5b331842a8b4e"</ETag>
          <Size>400000</Size>
          <Owner>
            <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
            <DisplayName>zackp</DisplayName>
          </Owner>
          <StorageClass>STANDARD</StorageClass>
        </Contents>
        <Contents>
          <Key>file_2</Key>
          <LastModified>2012-03-22T17:08:19.000Z</LastModified>
          <ETag>"6b32cbf8219a59690a9f69ba6ff3f590"</ETag>
          <Size>600000</Size>
          <Owner>
            <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
            <DisplayName>zackp</DisplayName>
          </Owner>
          <StorageClass>STANDARD</StorageClass>
        </Contents>
      </ListBucketResult>

    Then I followed s3curl.pl's usage instructions:

      s3curl.pl --help
      Usage /usr/local/bin/s3curl.pl --id friendly-name (or AWSAccessKeyId) [options] -- [curl-options] [URL]
       options:
        --key SecretAccessKey        id/key are AWSAcessKeyId and Secret (unsafe)
        --contentType text/plain     set content-type header
        --acl public-read            use a 'canned' ACL (x-amz-acl header)
        --contentMd5 content_md5     add x-amz-content-md5 header
        --put <filename>             PUT request (from the provided local file)
        --post [<filename>]          POST request (optional local file)
        --copySrc bucket/key         Copy from this source key
        --createBucket [<region>]    create-bucket with optional location constraint
        --head                       HEAD request
        --debug                      enable debug logging
       common curl options:
        -H 'x-amz-acl: public-read'  another way of using canned ACLs
        -v                           verbose logging

    Then I tried the following and always got back an error. I would appreciate it very much if someone could point out where I made a mistake:

      $ s3curl.pl --id=s3 --post multi_delete.xml -- http://testbucket-0.s3.amazonaws.com/?delete
      <?xml version="1.0" encoding="UTF-8"?>
      <Error><Code>SignatureDoesNotMatch</Code>
      <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
      <StringToSignBytes>50 4f 53 54 0a 0a 0a 54 68 75 2c 20 30 35 20 41 70 72 20 32 30 31 32 20 30 30 3a 35 30 3a 30 38 20 2b 30 30 30 30 0a 2f 7a 65 74 74 61 72 2d 74 2f 3f 64 65 6c 65 74 65</StringToSignBytes>
      <RequestId>707FBE0EB4A571A8</RequestId>
      <HostId>mP3ZwlPTcRqARQZd6gU4UvBrxGBNIVa0VVe5p0rqGmq5hM65RprwcG/qcXe+pmDT</HostId>
      <SignatureProvided>edkNGuugiSFe0ku4eGzkh8kYgHw=</SignatureProvided>
      <StringToSign>POST Thu, 05 Apr 2012 00:50:08 +0000

    The file multi_delete.xml contains the following:

      cat multi_delete.xml
      <?xml version="1.0" encoding="UTF-8"?>
      <Delete>
        <Quiet>true</Quiet>
        <Object>
          <Key>file_1</Key>
          <VersionId> </VersionId>>
        </Object>
        <Object>
          <Key>file_2</Key>
          <VersionId> </VersionId>
        </Object>
      </Delete>

    Thanks for any help! --Zack
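
    As a sanity check that the bucket and credentials are fine independently of s3curl's signing, the same Multi-Object Delete can be issued from boto, which builds the POST ?delete request and its signature for you. A sketch, with bucket and key names taken from the listing above:

      from boto.s3.connection import S3Connection

      conn = S3Connection()                                # keys from the environment/boto config
      bucket = conn.get_bucket('testbucket-0')
      result = bucket.delete_keys(['file_1', 'file_2'])    # one Multi-Object Delete request
      print('deleted: %s' % [d.key for d in result.deleted])
      print('errors:  %s' % [e.key for e in result.errors])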

    Read the article

  • beanstalk using php-git on windows client

    - by ntidote
    I am trying to set up Elastic Beanstalk for PHP using Git, on a Windows client machine. I am done with the prerequisite installations and credentials setup, and I am following http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_PHP.sdlc.html The following step does not work out (I use Git Bash for git-related commands): "From your Git repository directory, type the following command: git aws.config". This gives the error: git: 'aws.config' is not a git command. Please suggest how to deal with the issue.

    Read the article

  • How do you transfer an AWS RDS snapshot to a different AWS account

    - by Webmonger
    Hi, I have an RDS database that I need to transfer a snapshot of to another AWS account. I understand there are issues being able to do this between availability zones, so I'm really unsure if this is possible. The RDS instance is MySQL. If it's not possible to transfer the snapshot, could you please explain how to transfer the data from one RDS instance to another without downloading any of the contents (the DB is over 200GB)? Thanks in advance

    Read the article

  • When to increase AWS RDS MySQL Server instance to larger CPU/RAM?

    - by rksprst
    I'm wondering at what stage I need to move the RDS MySQL server to a larger CPU/RAM instance. The CPU utilization graph is near 0. The average free memory is around 150MB. The average swap usage is 420MB. Read latency is 0-20ms/op and spikes up randomly. Average write latency is around 5ms/op but spikes up to 10-20ms/op. Are there some common rules here that I should follow? Thanks!
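
    There is no single threshold, but the usual signals are sustained low FreeableMemory (which starves the InnoDB buffer pool), growing SwapUsage, and read latency that stays high rather than spiking. A small sketch for pulling those CloudWatch numbers with boto so they can be graphed or alerted on (the metric names are the standard AWS/RDS ones; the region and instance identifier are placeholders):

      import datetime
      import boto.ec2.cloudwatch

      cw = boto.ec2.cloudwatch.connect_to_region('us-east-1')   # region is an assumption
      end = datetime.datetime.utcnow()
      start = end - datetime.timedelta(hours=24)

      for metric in ('FreeableMemory', 'SwapUsage', 'ReadLatency', 'WriteLatency'):
          points = cw.get_metric_statistics(
              300, start, end, metric, 'AWS/RDS', 'Average',
              dimensions={'DBInstanceIdentifier': 'mydbinstance'})   # placeholder name
          if points:
              latest = sorted(points, key=lambda p: p['Timestamp'])[-1]
              print('%s: %s' % (metric, latest['Average']))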

    Read the article

  • List DB2 version, OS and hardware on Linux? (aws image)

    - by mestika
    Hello everybody, I'm not that familiar with Linux, but I'm currently working on an AWS image for an assignment and I need to display the DB2 version, the OS and the hardware. Is there a command or program of some sort I can use for this purpose? I tried an rpm called "Bonnie", but that only measures the throughput of the system. Thanks, Mestika
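
    There isn't one command that prints all three, but each piece has a standard one: db2level for the DB2 version, uname -a for the OS, and /proc/cpuinfo plus free -m for the hardware. A small Python sketch that simply shells out to them (it assumes the DB2 command-line environment is on the PATH):

      import subprocess

      for label, cmd in [('DB2 version', ['db2level']),
                         ('Operating system', ['uname', '-a']),
                         ('CPU', ['grep', '-m1', 'model name', '/proc/cpuinfo']),
                         ('Memory', ['free', '-m'])]:
          print('--- %s ---' % label)
          try:
              print(subprocess.check_output(cmd))
          except OSError:
              print('command not found: %s' % cmd[0])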

    Read the article

  • Bootstrapping in CloudFormation with Autoscale

    - by PapelPincel
    My CloudFormation template creates an autoscale group and bootstraps it with the utility script /opt/aws/bin/cfn-init. When I remove the bootstrap part from my template, the autoscale group gets created without any problem, but when I add it back the CloudFormation stack fails and adds this line to /var/log/cloud-init.log:

      Error: AutoScalingGroupName does not specify any metadata

    The line above appears right after the following command:

      /opt/aws/bin/cfn-init --verbose --configsets orderedConfig --region us-east-1 --stack AS15 --resource AutoScalingGroupName --access-key XXXXXXXXXXXXX --secret-key XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    Digging a little bit deeper, in cfn-init I added the following lines at the point where it exits:

      from pprint import pprint
      pprint(vars(detail))

    and I get the following trace when running the previous cfn-init command:

      {'_description': None,
       '_lastUpdated': datetime.datetime(2012, 7, 12, 14, 52, 42),
       '_logicalResourceId': u'AutoScalingGroupName',
       '_metadata': None,
       '_physicalResourceId': u'AS15-AutoScalingGroupName-HNPOXXXXXXXX',
       '_resourceStatus': u'CREATE_COMPLETE',
       '_resourceStatusReason': None,
       '_resourceType': u'AWS::AutoScaling::AutoScalingGroup',
       '_stackId': u'arn:aws:cloudformation:us-east-1:XXXXXXXXXXXXX:stack/AS15/XXXXXXXX-cc30-11e1-XXXXXX-XXXXXXXXXX',
       '_stackName': u'AS15'}

    As you can see, the metadata field is empty, and that's the reason why it fails to create the stack. Are there any known side effects for cfn-init when used with autoscale?

    Read the article

  • How to allow IAM users to setup their own virtual MFA devices

    - by Ali
    I want to let my IAM users set up their own MFA devices through the console. Is there a single policy that I can use to achieve this? So far I can achieve this through a number of IAM policies, letting them list all MFA devices, list users (so that they can find themselves in the IAM console) and ... I am basically looking for a more straightforward way of controlling this. I should add that my IAM users are trusted users, so I don't have to lock them down to the minimum possible (although that would be quite nice), so if they can see a list of all users, that is OK.
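
    A single policy can get close: scope the MFA actions to the user's own user and MFA ARNs with the ${aws:username} policy variable, plus the list actions the console needs. A sketch of attaching such a policy with boto; the account id, user name and policy name are placeholders, and the exact action list may need tuning against the console's behaviour:

      import json
      import boto

      ACCOUNT_ID = '123456789012'          # placeholder account id
      policy = {
          "Version": "2012-10-17",
          "Statement": [
              {"Effect": "Allow",
               "Action": ["iam:ListUsers", "iam:ListVirtualMFADevices"],
               "Resource": "*"},
              {"Effect": "Allow",
               "Action": ["iam:ListMFADevices", "iam:CreateVirtualMFADevice",
                          "iam:EnableMFADevice", "iam:ResyncMFADevice"],
               "Resource": [
                   "arn:aws:iam::%s:mfa/${aws:username}" % ACCOUNT_ID,
                   "arn:aws:iam::%s:user/${aws:username}" % ACCOUNT_ID]}
          ]
      }

      iam = boto.connect_iam()
      iam.put_user_policy('some-user', 'self-manage-mfa', json.dumps(policy))   # placeholder user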

    Read the article

  • AWS ELB as backend for Varnish Accelerator

    - by addisonj
    I am working on a large deployment on AWS that has high uptime requirements and variable loads throughout the day. Obviously, this is the perfect use case for ELB (Elastic Load Balancer) and autoscaling. However, we also rely on varnish for caching of API calls. My initial instinct was to structure the stack so that varnish uses ELB as a backend which in turn hits an appGroup. Varnish -> ELB -> AppServers However, according to a few sources that isn't possible as ELB constantly changes the IP address of its DNS hostname, which varnish caches on start, meaning changes to the IP won't be picked up by varnish. Reading around however, it looks like people are doing this so I am wondering what workarounds exist? Perhaps a script to reload the vcl periodically? In the case of where this is really just not a good idea, any idea of other solutions?
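
    The periodic-reload workaround mentioned above can be fairly small: resolve the ELB's DNS name on a schedule, and if the answer changed, rewrite the backend definitions and load a fresh VCL via varnishadm. A sketch; the file paths, ELB hostname and varnishadm invocation are assumptions about a typical layout:

      import socket
      import subprocess
      import time

      ELB = 'my-elb-123456.us-east-1.elb.amazonaws.com'   # placeholder ELB hostname
      last_ips = None

      while True:
          ips = sorted(socket.gethostbyname_ex(ELB)[2])
          if ips != last_ips:
              backends = '\n'.join(
                  'backend app%d { .host = "%s"; .port = "80"; }' % (i, ip)
                  for i, ip in enumerate(ips))
              with open('/etc/varnish/backends.vcl', 'w') as f:   # included from the main VCL
                  f.write(backends + '\n')
              name = 'reload_%d' % int(time.time())
              subprocess.check_call(['varnishadm', 'vcl.load', name, '/etc/varnish/default.vcl'])
              subprocess.check_call(['varnishadm', 'vcl.use', name])
              last_ips = ips
          time.sleep(60)

    The main VCL would still need to include the generated file and spread requests across those backends (a round-robin director, for instance), so this is only the DNS-tracking half of the workaround.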

    Read the article

  • What are the steps needed to set up and use security for AWS command line tools?

    - by chris
    I've been trying to set up the AWS command-line tools following Eric's most useful guide at http://alestic.com/2012/09/aws-command-line-tools. I can't seem to find a good how-to for generating the x.509 certificate and private key, and how that relates to the various security files the guide creates.

    Update: I have found a couple of links that describe some of the steps. These steps seem to work, however I'm not sure if this is secure and the best way to do it:

    1) Create a private key:

      openssl genrsa -out my-private-key.pem 2048

    2) Create an x.509 cert:

      openssl req -new -x509 -key my-private-key.pem -out my-x509-cert.pem -days 365

    Hit enter to accept all of the defaults. Then, from the IAM Dashboard, Users, select a user and click on the "Security Credentials" tab. Click on "Manage Signing Certificates", then "Upload Signing Certificate", paste in the contents of my-x509-cert.pem, click OK and it should be accepted.

    One step that is discussed, but not required for me, was the addition and subsequent removal of a pass phrase on the private key. Should I have been prompted for one, and is my cert potentially unsafe because of this?

    Read the article

  • Create an AWS AMI for Ubuntu with GUI which automatically launches web browser

    - by Rory MacDonald
    I've got an Ubuntu AMI set up with the Ubuntu desktop installed, and Chrome installed and set to launch at startup (via the startup programmes menu within the Ubuntu desktop). I've created an image of this AMI, but any time I launch a new instance from it, the Ubuntu GUI doesn't seem to load until I SSH into the machine, enable VNC and then connect to the machine via Chicken VNC. At that point, the desktop appears to load and starts the browser. I really need the machine to boot and the browser to load without having to VNC into the machine. Any help would be appreciated.

    Read the article

  • Debugging logrotate postrotate script

    - by robert
    Following is my logrotate conf:

      /mnt/je/logs/apache/jesites/web/*.log" {
          missingok
          rotate 0
          size 5M
          copytruncate
          notifempty
          sharedscripts
          postrotate
              /home/bitnami/.conf/compress-and-upload.sh /mnt/je/logs/apache/jesites/web/ web
          endscript
      }

    And the compress-and-upload.sh script:

      #!/bin/sh
      # Perform Rotated Log File Compression
      tar -czPf $1/log.gz $1/*.1

      # Fetch the instance id from the instance
      EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`"
      if [ -z $EC2_INSTANCE_ID ]; then
          echo "Error: Couldn't fetch Instance ID .. Exiting .."
          exit;
      else
          /usr/local/bin/s3cmd put $1/log.gz s3://xxxx/logs/$(date +%Y)/$(date +%m)/$(date +%d)/$2/$EC2_INSTANCE_ID-$(date +%H:%M:%S)-$2.gz
      fi

      # Removing Rotated Compressed Log File
      rm -f $1/log.gz

    The files are rotated, but the shell script is not executed. I don't know how to debug the postrotate script. Is there any log file I can check to see if there are permission issues? If I directly execute the script from the command line, the file upload works. Thanks.

    Read the article

  • What differences are there between an official Ubuntu AMI image and a base install from an ISO?

    - by David Winter
    When creating a new instance on AWS using an official Ubuntu 12.04 server AMI, what differences are there compared to if I was to do a standard server install on a computer of my own? For example, the default user is 'ubuntu'. An SSH public key is added to that users authorized_keys file. Sudo is passwordless for that user. PasswordAuthentication is disabled for SSH. etc etc. Configurations have been changed from their defaults, and I'd like to know if there is a list, or somewhere I could find out the modifications made.

    Read the article

  • s3fs changing s3 permissions?

    - by magd1
    My developer believes that s3fs is changing my bucket's permissions. Is this possible? I want my bucket to be public, but it keeps reverting back to private. Here's my fstab. s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000 0 0 My developer mentioned the "-o default_acl (default="private")" option. The documentation refers to "canned acl", but I don't understand what these are.
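
    A "canned" ACL is just one of S3's predefined permission sets (private, public-read, public-read-write, and so on), and s3fs applies its default_acl to objects it writes, so files created through the mount come out private unless told otherwise. If that's what's happening, the usual fix is to pass the ACL you want as a mount option, along the lines of the fstab line below (a sketch based on the line above; option support varies by s3fs version):

      s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000,default_acl=public-read 0 0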

    Read the article

  • I get a 403 when requesting a JS file from CloudFront

    - by Roland
    This is new to me, so please excuse me if I have no idea what I'm talking about (: I'm trying to set up my own CDN with CloudFront and S3 through a subdomain, by adding a CNAME on that subdomain pointing to the CloudFront distribution. I get a 403 when trying to load the file. The original S3 link is https://s3.amazonaws.com/chaoscod3r_aws_cdn/libs/polyfills/json3_polyfill.js , which works after setting the permission for everyone to open/download. But when I use the subdomain to request the file, http://cdn.chaoscod3r.com/libs/polyfills/json3_polyfill.js , I get that 403. Could anyone help me out with this one?

    Read the article

  • Backing up data (including mysqldumps) to S3

    - by seengee
    We have a web app on a number of servers and we want to add an additional layer of redundancy by backing up the key data to S3. The key data is the MySQL database and a folder containing dynamically created site assets, predominantly images. Some kind of rsync-based solution would initially seem the best plan. A couple of years ago we played with s3cmd (in particular s3cmd sync) with some success, but we didn't find it particularly reliable, although this may have changed since. It's occurred to me, though, that an rsync solution might not work particularly well with a single db.sql file created with mysqldump, and I assume this means the whole database gets transferred each time; with multiple databases of over 1GB this is going to add up to a lot of traffic (and $s) very quickly. With the image files I could simply transfer files modified within the last day, which would be far simpler. What approach should I look at?
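
    One common approach is to treat the two kinds of data differently: sync the image folder incrementally (rsync-style), but for each database pipe mysqldump through gzip and upload the dated dump whole, since compressed dumps don't diff well anyway. A sketch of the dump-and-upload half with boto; the database name, bucket and paths are placeholders, and MySQL credentials are assumed to come from ~/.my.cnf:

      import datetime
      import subprocess
      from boto.s3.connection import S3Connection

      DB = 'mydatabase'                                    # placeholder database name
      dumpfile = '/tmp/%s-%s.sql.gz' % (DB, datetime.date.today())

      # mysqldump | gzip > /tmp/mydatabase-YYYY-MM-DD.sql.gz
      with open(dumpfile, 'wb') as out:
          dump = subprocess.Popen(['mysqldump', '--single-transaction', DB],
                                  stdout=subprocess.PIPE)
          subprocess.check_call(['gzip', '-c'], stdin=dump.stdout, stdout=out)
          dump.wait()

      conn = S3Connection()                                # keys from environment/boto config
      bucket = conn.get_bucket('my-backup-bucket')         # placeholder bucket
      key = bucket.new_key('mysql/%s' % dumpfile.split('/')[-1])
      key.set_contents_from_filename(dumpfile)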

    Read the article

  • Force HTTPS with AWS Elastic load balancer

    - by panos2point0
    I need to redirect all incoming HTTP traffic to HTTPS on my Elastic Load Balancer. I tried using Apache mod_rewrite:

      RewriteEngine On
      RewriteCond %{HTTP:X-Forwarded-Proto} !https
      RewriteRule !/status https://%{SERVER_NAME}%{REQUEST_URI} [L,R]

    Taking advantage of the X-Forwarded-Proto header added by the load balancer, this rule should instruct the user's browser to request the HTTPS version of the same URL. So far it doesn't work (no redirection happens). What am I doing wrong? Is there a better way to do this?

    Read the article

  • VPC SSH port forward into private subnet

    - by CP510
    Ok, so I've been racking my brain for DAYS on this dilemma. I have a VPC set up with a public subnet and a private subnet. The NAT is in place, of course. I can SSH into an instance in the public subnet, as well as the NAT, and I can even SSH to the private instance from the public instance. I changed the SSHD configuration on the private instance to accept both port 22 and an arbitrary port number, 1300. That works fine. But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e.

      ssh -i keyfile.pem [email protected] -p 1300

    and 1.2.3.4 should route it to the internal server 10.10.10.10. Now I heard iptables is the tool for this, so I went ahead and researched and played around with some routing with that. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this, since AWS apparently pre-configures the NAT instances when you set them up, and I heard using iptables can mess that up.

      *filter
      :INPUT ACCEPT [129:12186]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [84:10472]
      -A INPUT -i lo -j ACCEPT
      -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT
      -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: "
      -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT
      -A OUTPUT -o lo -j ACCEPT
      COMMIT
      # Completed on Wed Apr 17 04:19:29 2013
      # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013
      *nat
      :PREROUTING ACCEPT [2:104]
      :INPUT ACCEPT [2:104]
      :OUTPUT ACCEPT [6:681]
      :POSTROUTING ACCEPT [7:745]
      -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
      -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE
      COMMIT

    So when I try this from home, it just times out. No connection refused messages or anything. And I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communications on these ports in both directions in both subnets and on the NAT. I'm at a loss. What am I doing wrong?

    Read the article

  • EC2 Amazon Linux AMI MySQL CPU @ 62% When Idle?

    - by Jeff
    I am running MySQL on an Amazon Linux AMI. There is nothing connected to it: there are no connections and no other applications running that use MySQL. It is completely idle, and yet top reports that mysqld is using 62% of the CPU. Why is this happening and how do I fix it?

      Cpu(s):  0.2%us,  0.2%sy,  0.0%ni, 97.8%id,  0.0%wa,  0.0%hi,  0.0%si,  1.7%st
      Mem:   1738504k total,   390708k used,  1347796k free,    56888k buffers
      Swap:   917500k total,        0k used,   917500k free,   229804k cached

        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
       2959 mysql     20   0  466m  39m 5244 S 62.2  2.3   4:00.67 mysqld
          1 root      20   0 19252 1504 1212 S  0.0  0.1   0:00.20 init
          2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd

    There are no connections...

      mysql> show processlist;
      +----+------+-----------+------+---------+------+-------+------------------+
      | Id | User | Host      | db   | Command | Time | State | Info             |
      +----+------+-----------+------+---------+------+-------+------------------+
      |  5 | root | localhost | NULL | Query   |    0 | NULL  | show processlist |
      +----+------+-----------+------+---------+------+-------+------------------+

    Read the article

  • Amazon: how does their remarkable search work?

    - by JonH
    We are working on a fairly large CRM/knowledge management system in ASP.NET. The DB is SQL Server and is growing in size based on all the various relationships. Upper management keeps asking us to implement search much like Amazon does: right from their search you can choose to search certain categories like outdoor equipment, clothing, etc., and you can even select all. I keep mentioning to upper management that we need to define the various fields to search on. Their response is all fields... they probably look at the search and assume that it is so simple. I'm the guy who has to say, hold on guys, we are talking about Amazon here. My question is how Amazon can run a search on an "all" category. Also, one of the things management here likes is the dynamic filters. For instance, searching robot brings up filters specific to a robot toy. How can I put management in check and at least come up with search functionality that works like Amazon's? We are using ASP.NET, SQL Server 2008 and jQuery.

    Read the article

  • Recommended Method to Watch Amazon Prime using Ubuntu 14.04 LTS

    - by Kurt Sanger
    I realize that HAL is no longer in the Ubuntu Software Center for Ubuntu 14.04 and is only available from a third party at this time. But I would like to know what Ubuntu's plans are for integrating DRM into Linux. Especially with Amazon's integration into the search tool, one would hope that they would make it easier for their Amazon Prime customers to watch Instant Videos. Is the repository for getting HAL for 13.10 safe to use? What will that break if I install it onto 14.04? Or do we need to find another OS that has DRM built into it? If HAL is okay to add to the OS using a third-party repo, then why doesn't the Ubuntu Software Center support it too? I imagine that Amazon's contract with the video copyright holders requires that they have some protection on electronically distributed media. I also imagine that getting Amazon to change is much harder than getting a bunch of software engineers to fix Ubuntu. Unless they don't want to. At which point Ubuntu isn't really a complete OS. Very disappointing. In general the ease of use of Ubuntu, the Software Center, and the large variety of applications was alluring. But breaking DRM wasn't a great idea. Can't wait to see what fails in our next update. Please tell us that there is a plan that is going to work in our future.

    Read the article
