Search Results

Search found 2923 results on 117 pages for 'amazon ami'.

Page 9/117 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • How to address an EC2 instance from both inside and outside the datacenter?

    - by Alexandr Kurilin
    I'm trying to find a good way of being able to address my EC2 database instance from both inside and outside of the datacenter. Other EC2 instances need to be able to call into it, and other clients like pgAdmin might need to connect to it from the outside world as well. It's my understanding that using the internal and external DNS names is not sustainable long-term, as each reboot leads to a change. I'm thinking of associating an Elastic IP with the instance and giving it an A record (say db1.mydomain.com) which I will then use both within and outside the datacenter. Further instances in the same role will get the same treatment and a DNS record of db2.mydomain.com etc. Now, is there a cleaner and more stable way of achieving this result? Am I going about this the wrong way? Suggestions?
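
    A rough sketch of the Elastic IP plus A record setup being described, using boto3 (not part of the original question; the instance ID, addresses, and zone ID are placeholders):

        import boto3

        INSTANCE_ID = 'i-0123456789abcdef0'   # hypothetical instance
        ELASTIC_IP = '203.0.113.10'           # hypothetical, already-allocated Elastic IP

        # Attach the Elastic IP to the database instance
        # (VPC addresses take AllocationId instead of PublicIp)
        ec2 = boto3.client('ec2')
        ec2.associate_address(InstanceId=INSTANCE_ID, PublicIp=ELASTIC_IP)

        # Point db1.mydomain.com at the Elastic IP
        route53 = boto3.client('route53')
        route53.change_resource_record_sets(
            HostedZoneId='Z123EXAMPLE',       # hypothetical hosted zone
            ChangeBatch={'Changes': [{
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': 'db1.mydomain.com',
                    'Type': 'A',
                    'TTL': 300,
                    'ResourceRecords': [{'Value': ELASTIC_IP}],
                },
            }]},
        )

    One caveat: traffic from other instances to the Elastic IP typically routes through the public side, whereas the address's public DNS name resolves to the private IP from within the region, so the single A record trades a little efficiency for a stable name.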

    Read the article

  • Persistent Spot Instance Request with CloudFormation

    - by PapelPincel
    Is it possible to create a "Persistent Spot Instance" request with AWS CloudFormation? I'm going through CloudFormation's Auto Scaling and EC2 template references, but there is no mention of how to set a property so the spot requests stay persistent. When the bid price falls below the actual spot price, AWS brings the instances down. I would like the instances to be started automatically when the spot price is cheaper again. This can be set manually when creating a new spot instance request by checking the "Persistent Request" option in the Request Instances Wizard.
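
    For reference, the underlying EC2 API does expose this as a request type, even though the CloudFormation templates of the time did not. A minimal sketch with boto3 (the bid, AMI ID, and instance type are placeholders):

        import boto3

        ec2 = boto3.client('ec2')
        ec2.request_spot_instances(
            SpotPrice='0.05',            # hypothetical bid
            Type='persistent',           # reopen the request after each interruption
            InstanceCount=1,
            LaunchSpecification={
                'ImageId': 'ami-12345678',   # hypothetical AMI
                'InstanceType': 'm1.small',
            },
        )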

    Read the article

  • Detaching EBS Volumes (in LVM) takes a lot of time

    - by Cheezo
    I have an EC2 instance (EBS-backed root partition) with additional EBS volumes configured via LVM. I have formatted the logical volume as ext4 and can mount it to store data. Now I want to take a snapshot of the root partition, so I go and detach the other non-root EBS volumes (configured in LVM). Here a regular detach does not work, and I almost always have to "force" the detach. However, in another similar setup with RAID instead of LVM, I can easily detach after stopping the RAID array. The whole setup is running Ubuntu Maverick 10.10. Please assist me with this.
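
    One plausible explanation: just as a RAID array has to be stopped before its members detach cleanly, LVM holds the block devices open until the volume group is deactivated, which would explain the need to force the detach. A sketch of the order of operations, assuming a hypothetical mount point and volume group name, with boto3 for the API call:

        import subprocess
        import boto3

        # Unmount the filesystem and deactivate the volume group so LVM
        # releases its hold on the underlying EBS block devices
        subprocess.check_call(['umount', '/mnt/data'])
        subprocess.check_call(['vgchange', '-an', 'data_vg'])

        # A plain (non-forced) detach should now succeed
        ec2 = boto3.client('ec2')
        ec2.detach_volume(VolumeId='vol-0123456789abcdef0')  # hypothetical volume ID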

    Read the article

  • How can I tell my dd-wrt router to use someone's Amazon Affiliates link when I point my browser to amazon.com?

    - by Michael Paul
    Here's what I'd like to do. Instead of a one-time donation to one of my favorite free tools (junecloud.com), I'd like to do what they suggest here and use their Amazon Affiliates link for all my Amazon shopping. I shop at Amazon once or twice a week, so this is a great way to let them earn lots of long-term cash without me dropping a dime. My thought was to go into my dd-wrt enabled router and tell it, "any time I go to amazon.com on any computer in the house, please go to http://www.amazon.com/gp/redirect.html?link_code=ur2&tag=junecloud-20&camp=1789&creative=9325&location=%2F instead." (That URL simply redirects me to amazon.com, but every purchase I make during that session is credited to JuneCloud.) Once logged into dd-wrt, I went to Services > Services > DNSMasq, but I'm not really sure how to get it to work from there, or if it's even possible. I know I can redirect IP addresses, but I'm looking to redirect someone on my network from amazon.com to the special Amazon affiliate link. Hope that's clear. Thanks for any replies!

    Read the article

  • C# code to GZip and upload a string to Amazon S3

    - by BigJoe714
    Hello. I currently use the following C# code to retrieve and decompress string data from Amazon S3:

        GetObjectRequest getObjectRequest = new GetObjectRequest()
            .WithBucketName(bucketName)
            .WithKey(key);

        using (S3Response getObjectResponse = client.GetObject(getObjectRequest))
        {
            using (Stream s = getObjectResponse.ResponseStream)
            {
                using (GZipStream gzipStream = new GZipStream(s, CompressionMode.Decompress))
                {
                    StreamReader Reader = new StreamReader(gzipStream, Encoding.Default);
                    string Html = Reader.ReadToEnd();
                    parseFile(Html);
                }
            }
        }

    I want to reverse this code so that I can compress and upload string data to S3 without it being written to disk. I tried the following, but I am getting an exception:

        using (AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(AWSAccessKeyID, AWSSecretAccessKeyID))
        {
            string awsPath = AWSS3PrefixPath + "/" + keyName + ".htm.gz";
            byte[] buffer = Encoding.UTF8.GetBytes(content);
            using (MemoryStream ms = new MemoryStream())
            {
                using (GZipStream zip = new GZipStream(ms, CompressionMode.Compress))
                {
                    zip.Write(buffer, 0, buffer.Length);
                    PutObjectRequest request = new PutObjectRequest();
                    request.InputStream = ms;
                    request.Key = awsPath;
                    request.BucketName = AWSS3BuckenName;
                    using (S3Response putResponse = client.PutObject(request))
                    {
                        // process response
                    }
                }
            }
        }

    The exception I am getting is: "Cannot access a closed Stream." What am I doing wrong?
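
    A likely culprit: PutObject is called while the GZipStream is still open, so the gzip footer has not been flushed yet and the MemoryStream is positioned at its end. A hedged sketch of one common fix, reusing the names from the snippet above: dispose the GZipStream first (leaving the MemoryStream open via the leaveOpen constructor argument), rewind, then upload.

        byte[] buffer = Encoding.UTF8.GetBytes(content);
        using (MemoryStream ms = new MemoryStream())
        {
            // leaveOpen: true, so disposing the GZipStream flushes the
            // gzip footer without closing the underlying MemoryStream
            using (GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true))
            {
                zip.Write(buffer, 0, buffer.Length);
            }

            ms.Position = 0;  // rewind so the SDK reads from the start

            PutObjectRequest request = new PutObjectRequest();
            request.InputStream = ms;
            request.Key = awsPath;
            request.BucketName = AWSS3BuckenName;
            using (S3Response putResponse = client.PutObject(request))
            {
                // process response
            }
        }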

    Read the article

  • Amazon Web Services - retrieving a wishlist

    - by izb
    I've been tinkering with Yahoo Pipes and the Amazon E-Commerce Service (ECS) SDK to retrieve my wishlist. The problem is that although I can get all the items on my wishlist just fine, it seems to include items that I've deleted too. Has anyone else used this API and noticed this? Is there a way around it? UPDATE: Additional information was requested in the comments. Here is the URL I use to fetch the wishlist XML:

        http://webservices.amazon.co.uk/onca/xml?SubscriptionId=[my subs id]&Service=AWSECommerceService&ResponseGroup=ListItems&ProductPage=1&ProductGroup=Book&Operation=ListLookup&ListType=WishList&ListId=[my list id]

    And here is the relevant part of the XML response:

        <ListId>[my list id]</ListId>
        <ListName>Wishlist</ListName>
        <TotalItems>132</TotalItems>
        <TotalPages>14</TotalPages>
        <ListItem>
          <ListItemId>EPIE5559HKT391</ListItemId>
          <DateAdded>2003-11-17</DateAdded>
          <QuantityDesired>1</QuantityDesired>
          <QuantityReceived>0</QuantityReceived>
          <Item>
            <ASIN>5557205521</ASIN>
            <ItemAttributes>
              <Title>Horton hears a who</Title>
            </ItemAttributes>
          </Item>
        </ListItem>
        ...

    The rest of the XML is just more list items like that, or information about the request at the top of the response.

    Read the article

  • Issues with Rails, Amazon S3, and protected URLs

    - by Shpigford
    So I followed this little tutorial about protecting downloads of files that are uploaded to Amazon S3 with Paperclip. When I've developed locally, it's worked fine, but since pushing the exact same code to a production server, I now get this error from Amazon when I try to access the files:

        <Error>
          <Code>InvalidArgument</Code>
          <Message>Either the Signature query string parameter or the Authorization header should be specified, not both</Message>
          <ArgumentValue>Basic dGVjaHVrdWxlbGU6ZWxlbHVrdWhjZXQ=</ArgumentValue>
          <ArgumentName>Authorization</ArgumentName>
          <RequestId>F6E455857C54F95A</RequestId>
          <HostId>X4QA2pw9wpHtJtJ2T8qxCyINjq4PLHQVF4VrlYjpX7Ayh694BgQprh5p8H7NRCAt</HostId>
        </Error>

    Example URL:

        http://s3.amazonaws.com/media.example.com/assets/videos/1/original.mov?AWSAccessKeyId=MY_ACCESS_KEY&Expires=1271972624&Signature=7wWH2WYHPO0o9szwPJbimUMqAig%3D

    That URL is generated using AWS::S3::S3Object.url_for from the aws-s3 gem. So I'm not even sure where to start. The fact that it works fine when the app is running locally but not in production really doesn't make sense. The production server is running Ubuntu 8.04.4 LTS (Hardy).

    Read the article

  • What settings need to be changed to allow EC2 instances to use Amazon's Route 53 for DNS?

    - by ks78
    I have a number of Amazon EC2 instances, all running Ubuntu, which I'd like to configure to use Amazon's Route 53. I set up a script, following Shlomo Swidler's article, but ran into script-related issues, which were answered here. Now I have the script working, but my instances are still not able to use Route 53 for DNS. By this I mean they are not able to resolve hostnames to IP addresses. My instances are currently configured with the DNS server IP address Amazon pushes out to them by default; does that need to be changed when using Route 53? I'm also IP-restricting my instances using the Security Groups. Could that be the problem? Is there a certain IP address or port I should open to allow communication with Route 53? It seems that DNS requests should be originating from my instances, so the Security Groups shouldn't be an issue, but I've been wrong before. If anyone has any ideas, I'd really appreciate it.

    Read the article

  • How can I get the size of an Amazon S3 bucket?

    - by Garret Heaton
    I'd like to graph the size (in bytes, and # of items) of an Amazon S3 bucket and am looking for an efficient way to get the data. The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name, but I'm worried about its ability to scale since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-Months it seems odd that they don't expose this value directly. Although Amazon's REST API returns the number of items in a bucket, [s3cmd] doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l but that seems like a hack. The Ruby AWS::S3 library looked promising, but only provides the # of bucket items, not the total bucket size. Is anyone aware of any other command line tools or libraries (prefer Perl, PHP, Python, or Ruby) which provide ways of getting this data?
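
    For what it's worth, the listing API still requires walking every key, but a short boto3 sketch can tally both numbers in one pass (the bucket name is a placeholder):

        import boto3

        s3 = boto3.client('s3')
        total_bytes = 0
        total_items = 0

        # ListObjectsV2 returns at most 1000 keys per call, so paginate
        paginator = s3.get_paginator('list_objects_v2')
        for page in paginator.paginate(Bucket='bucket_name'):
            for obj in page.get('Contents', []):
                total_bytes += obj['Size']
                total_items += 1

        print(total_items, total_bytes)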

    Read the article

  • Amazon CloudFront Cache Invalidation – Fill out the Survey!

    - by joelvarty
    Amazon has come up with a survey regarding how cached objects stored on their CloudFront servers can be invalidated. http://survey.amazonwebservices.com/survey/s?s=1369 This is a key feature for Agility CMS, and for a lot of other applications. If it's important to you, I suggest you spend a few minutes and fill it out. more later - joel

    Read the article

  • How to set up IPSec with Amazon EC2

    - by bonzi
    How do I set up an IPSec connection from my Ubuntu laptop to an Amazon EC2 instance? I tried setting it up using an Elastic IP and VPC with the following openswan configuration, but it is not working:

        conn host-to-host
            left=%defaultroute
            leftsubnet=EC2PRIVATEIP/32    # Local netmask
            leftid=ELASTICIP
            leftrsasigkey=
            connaddrfamily=ipv4
            right=1laptopip               # Remote IP address
            rightid=laptopip
            rightrsasigkey=
            ike=aes128                    # IKE algorithms (AES cipher)
            esp=aes128                    # ESP algorithms (AES cipher)
            auto=add
            pfs=yes
            forceencaps=yes
            type=tunnel

    Read the article

  • What is a good Amazon S3 client?

    - by Eyal
    I've been using the Amazon S3 Management console to browse my S3 files. Unfortunately, it doesn't seem to be able to sort files (in a given bucket) by anything other than whatever its default is (which seems to be by name). I'd like a nice GUI client for seeing these files which will let me sort them by date, so the newest will appear on top. I did find a Firefox plug-in - S3Fox - but it doesn't work for the current version of Firefox.

    Read the article

  • Controlling number of downloads on Amazon S3

    - by m7d
    Is there a way to control the number of downloads of digital content on Amazon S3, or via some middleman software that talks to S3? I already use their timed links, but I would like to limit the number of downloads as well. Any ideas of how to accomplish this using S3, or suggestions about alternative services that could? Thanks!
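
    S3 itself doesn't count downloads, so the limit has to live in the middleman. A rough sketch of that pattern, assuming Flask and boto3 (the bucket name, route, quota, and in-memory counter are all placeholders; a real deployment would persist the counts):

        import boto3
        from flask import Flask, abort, redirect

        app = Flask(__name__)
        s3 = boto3.client('s3')
        download_counts = {}    # use a real datastore in practice
        MAX_DOWNLOADS = 3

        @app.route('/download/<key>')
        def download(key):
            # Refuse once the per-item quota is spent
            if download_counts.get(key, 0) >= MAX_DOWNLOADS:
                abort(403)
            download_counts[key] = download_counts.get(key, 0) + 1

            # Hand out a short-lived signed URL, like the timed links above
            url = s3.generate_presigned_url(
                'get_object',
                Params={'Bucket': 'my-bucket', 'Key': key},
                ExpiresIn=60,
            )
            return redirect(url)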

    Read the article

  • Emulating Amazon SQS during development

    - by pyo
    I'm quite interested in beginning some development using Amazon SQS, and perhaps SimpleDB too. My question is this: are there any open source solutions that mimic the functionality, just for the purposes of development? I've already encountered the Eucalyptus project (http://open.eucalyptus.com) for creating an EC2-esque cloud. I've not had any success with Google; I suspect it's because the cost of entry is so inexpensive, but still, does anyone know of anything like this?

    Read the article

  • Send an XML message to Amazon SQS

    - by bartligthart
    I am a newbie to Amazon SQS and Ruby on Rails, and I am working on a project where some XML messages must be sent to SQS. How do I do that? Right now I have this in the controller after the .save:

        def create
          @thing = Thing.new(params[:thing])
          respond_to do |format|
            if @thing.save
              message = @thing.to_xml

    and this in the model:

        inputqueue.send_message(message)

    Is this the way I can send an XML file to SQS, or...?

    Read the article

  • How to get the list of price offers on an item from Amazon with python-amazon-product-api item_lookup

    - by miernik
    I am trying to write a function to get a list of offers (their prices) for an item based on the ASIN:

        def price_offers(asin):
            from amazonproduct import API, ResultPaginator, AWSError
            from config import AWS_KEY, SECRET_KEY
            api = API(AWS_KEY, SECRET_KEY, 'de')
            str_asin = str(asin)
            node = api.item_lookup(id=str_asin, ResponseGroup='Offers',
                                   Condition='All', MerchantId='All')
            for a in node:
                print a.Offer.OfferListing.Price.FormattedPrice

    I am reading http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?ItemLookup.html and trying to make this work, but all the time it just says:

        Failure instance: Traceback: <type 'exceptions.AttributeError'>:
        no such child: {http://webservices.amazon.com/AWSECommerceService/2009-10-01}Offer
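
    Judging by the traceback, the loop is asking the wrong node for an Offer child; in the Offers response group, offers are nested under Items/Item/Offers. A hedged sketch of the iteration, assuming that standard response layout:

        for item in node.Items.Item:
            for offer in item.Offers.Offer:
                print offer.OfferListing.Price.FormattedPrice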

    Read the article

  • Delete Amazon S3 buckets?

    - by Kyle Cronin
    I've been interacting with Amazon S3 through S3Fox and I can't seem to delete my buckets. I select a bucket, hit delete, confirm the delete in a popup, and... nothing happens. Is there another tool that I should use?
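
    One possibility: S3 refuses to delete a bucket that still contains objects, and some clients fail silently on that error. A sketch of emptying and then deleting a bucket with boto3 (the bucket name is a placeholder):

        import boto3

        s3 = boto3.resource('s3')
        bucket = s3.Bucket('my-bucket')

        # S3 only deletes empty buckets, so remove every object first
        bucket.objects.all().delete()
        bucket.delete()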

    Read the article

  • Web-based client for Amazon S3

    - by Dick Lebavo
    We are looking for a secure online solution to access our files stored on Amazon S3. We have about 3K files, mostly media and documents, that we need to make available to our employees on the move. We don't want to develop anything in-house if there is an existing solution. Please note that our employees are not technologically minded, so a simple web-based upload/download GUI would work best.

    Read the article

  • Amazon Product Advertising API: XXX is not a valid value for BrowseNodeId

    - by Chris
    Hello, I am using the Amazon Product Advertising API to fetch the product categories. For US categories it is working, but using browse nodes from different sites I get the following error: "569604 is not a valid value for BrowseNodeId. Please change this value and retry your request." I got the browse nodes from the following site: http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?BrowseNodeIDs.html Where is the problem? Thanks for your help!

    Read the article

  • Can no longer deploy to Amazon AWS using VS 2010

    - by KevinDeus
    Did something change on Amazon recently? I'm trying to redeploy to my Amazon instance, and the "Publish to Amazon CloudFormation" plugin for VS 2010 no longer appears to update my instance. It tells me that the upload is successful, but my instance does not appear to be updating on Amazon. I've tried disabling all my instances and using the tool to create a new instance, but no luck. I do see that the URL of the deployed application (which looks like this: http://ec2-107-20-11-27.compute-1.amazonaws.com) does not appear to match up with the public IP of my instance on Amazon (even when it creates a new one). This seems to indicate that something might be broken. Any clues? (By the way, whenever the Amazon VS 2010 plugin creates a new instance, I am sure to reconnect my Elastic IP with the new instance.)

    Read the article

  • Converting an EC2 AMI to vmdk image

    - by Reed G. Law
    I've come quite close to getting Amazon Linux to boot inside VirtualBox, thanks to this answer and these websites. A quick overview of the steps I've taken:

        1. Launch an EC2 instance with the Amazon Linux 2011.09 64-bit AMI.
        2. dd the contents of the EBS volume over ssh to a local image file.
        3. Mount the image file as a loopback device and then to a local mount point.
        4. Create a new empty disk image file, partition it with an offset for a bootloader, and create an ext4 filesystem.
        5. Mount the new image's partition and copy everything over from the EC2 image.
        6. Install grub (using Ubuntu's grub-legacy-ec2 package, not grub2).
        7. Convert the image file to vmdk using qemu-img.
        8. Create a new VirtualBox VM with the vmdk.

    Now the VM boots, grub loads, and the kernel is found. But it fails when it tries to mount the root device:

        dracut Warning: No root device "block:/dev/xvda1" found
        dracut Warning: Boot has failed. To debug this issue add "rdshell" to the kernel command line.
        dracut Warning: Signal caught!
        dracut Warning: Boot has failed. To debug this issue add "rdshell" to the kernel command line.
        Kernel panic - not syncing: Attempted to kill init!
        Pid: 1, comm: init Not tainted 2.6.35.14-107.1.39.amzn1.x86_64 #1

    I have tried changing /boot/grub/menu.lst to find the root device by label and by UUID, but nothing works. I'm guessing the Xen kernel is not compatible with VirtualBox. The reasoning behind all this effort is to make a Vagrant box that is as close as possible to the production environment, so deploys can be tested locally. I know it's cheap to do test runs on EC2, but poor connectivity often ruins the experience. Plus it would be really nice to have a virtual machine with the production environment so that co-workers don't have to install everything under the sun just to get up and running with app development. If I were to try running a different kernel, what kernel could I get to be as close as possible to Amazon Linux 2011.09?

    Read the article

  • How do you persist installed software & configurations on an Amazon EC2 instance?

    - by Richard
    I've gotten a base Debian AMI up and running and now I need to know the best way to maintain it. I've run the updates (aptitude update/upgrade) and installed/configured my software (Apache, Ruby, etc.), but if I reboot the instance or start a new one I'll have to do all this work over again. How do you persist these types of things over a reboot? Do you build a new AMI every time you adjust some tiny piece of the system? Or is there some way to feed it a script on startup that configures it in "real-time"? I know I could go all the way with a Reductive Labs Puppet-style setup, but that's a bit too much for my needs right now (1-2 servers). Any best practices on this? Update: I found a bit of information on using User-Data to run scripts at instance boot time.
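
    Building on that update, a sketch of launching an instance with a user-data script that redoes the setup at boot, using boto3 and assuming the AMI runs user-data scripts on first boot, as cloud-init images do (the AMI ID, instance type, and package list are placeholders):

        import boto3

        # Runs as root on the instance's first boot
        user_data = """#!/bin/bash
        aptitude update && aptitude -y safe-upgrade
        aptitude -y install apache2 ruby
        """

        ec2 = boto3.client('ec2')
        ec2.run_instances(
            ImageId='ami-12345678',     # hypothetical Debian AMI
            InstanceType='t2.micro',
            MinCount=1,
            MaxCount=1,
            UserData=user_data,         # boto3 base64-encodes this automatically
        )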

    Read the article
