Search Results

Search found 3627 results on 146 pages for 'amazon ses'.

Page 8 of 146

  • AWS: Should my EC2 and RDS instances be in the same Availability Zone?

    - by DOOManiac
    I just noticed that all of our EC2 instances are in zone us-west-2b, but our Multi-AZ RDS instance is in us-west-2a. Performance-wise everything seems to be okay, and it would be a hassle to "move" the instances to one place, since you have to stop and re-create them all. However, if either of the two zones goes down, we will have some downtime; if everything is in one zone, then at least we have a higher chance of not being in the zone that has downtime... Is this something worth fixing, or am I over-thinking it? (I was about to purchase some EC2 Reserved Instances, which are tied to specific AZs, so I wanted to make sure before going through with it.) Thanks!

    Read the article

  • Providing a static IP for resources behind AWS Elastic Load Balancer (ELB)

    - by tharrison
    I need a static IP address that handles SSL traffic from a known source (a partner). Our servers are behind an AWS Elastic Load Balancer (ELB), which cannot provide a static IP address; there are many threads about this here. My thought is to create an instance in EC2 whose sole purpose in life is to be a reverse proxy server with its own IP address, accepting HTTPS requests and forwarding them to the load balancer. Are there better solutions?

    Read the article

  • Picking only the value field out of Cloudwatch Dimensions, Java

    - by GroovyUser
    I have some data that was retrieved from the CloudWatch APIs; specifically, I used listMetrics. The data I got from this call is:

      {Metrics: [
        {Namespace: Metric from grails, MetricName: hello123,
         Dimensions: [{Name: name, Value: 1425, }], },
        {Namespace: Metric from grails, MetricName: hello123,
         Dimensions: [{Name: name, Value: 1068, }], },

    That is the data I would expect, but I need a way to return only the Value fields, not the other attributes. Is there any way to do this in Java? Thanks in advance.

    Read the article

  • How to address an EC2 instance from both inside and outside datacenter?

    - by Alexandr Kurilin
    I'm trying to find a good way to address my EC2 database instance from both inside and outside of the datacenter. Other EC2 instances need to be able to call into it, and other clients like pgAdmin might need to connect to it from the outside world as well. It's my understanding that using the internal and external DNS names is not sustainable long term, as each reboot leads to a change. I'm thinking of associating an Elastic IP with the instance and giving it an A record (say db1.mydomain.com), which I will then use both within and outside the datacenter. Further instances in the same role will get the same treatment and a DNS record of db2.mydomain.com, etc. Now, is there a cleaner and more stable way of achieving this result? Am I going about this the wrong way? Suggestions?
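
    A minimal boto3 sketch of the Elastic IP plus A record approach described above; the instance ID and hosted zone ID are placeholders, and boto3 itself is an assumption (any AWS SDK exposes the same calls):

      import boto3

      ec2 = boto3.client("ec2", region_name="us-west-2")
      r53 = boto3.client("route53")

      # Allocate an Elastic IP and attach it to the database instance.
      alloc = ec2.allocate_address(Domain="vpc")
      ec2.associate_address(InstanceId="i-0123456789abcdef0",   # placeholder
                            AllocationId=alloc["AllocationId"])

      # Point db1.mydomain.com at the Elastic IP with an A record.
      r53.change_resource_record_sets(
          HostedZoneId="Z0000000EXAMPLE",                       # placeholder
          ChangeBatch={"Changes": [{
              "Action": "UPSERT",
              "ResourceRecordSet": {
                  "Name": "db1.mydomain.com",
                  "Type": "A",
                  "TTL": 300,
                  "ResourceRecords": [{"Value": alloc["PublicIp"]}],
              },
          }]},
      )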

    Read the article

  • Disk doesn't contain a valid partition table

    - by Jeevan Dongre
    I was running an m1.small EBS-backed Ubuntu EC2 instance. I was running out of disk space, so I upgraded my instance to medium. When I upgraded I actually got 429.5 GB of space, and after that I added a 10 GB volume too. When I run the "sudo fdisk -l" command I get these results:

      Disk /dev/sda1: 8589 MB, 8589934592 bytes
      255 heads, 63 sectors/track, 1044 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/sda1 doesn't contain a valid partition table

      Disk /dev/sda2: 429.5 GB, 429461078016 bytes
      255 heads, 63 sectors/track, 52212 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/sda2 doesn't contain a valid partition table

      Disk /dev/sdf: 10.7 GB, 10737418240 bytes
      255 heads, 63 sectors/track, 1305 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

    sda1 is the primary partition, and sda2 is what got added when upgrading my system to medium. But the problem persists: I am not able to pull the code from git, which gives me this error:

      remote: Counting objects: 409, done.
      remote: Compressing objects: 100% (236/236), done.
      fatal: write error: No space left on device
      fatal: index-pack failed

    Read the article

  • Persistent Spot Instance Request with CloudFormation

    - by PapelPincel
    Is it possible to create a "Persistent Spot Instance" request with AWS CloudFormation? I'm going through the Auto Scaling and EC2 CloudFormation template references, but there is no mention of a property that makes spot requests persistent. When the bid price falls below the current spot price, AWS brings the instances down; I would like the instances to be started automatically when the spot price is cheaper again. This can be set manually when creating a new spot instance request by checking the "Persistent Request" option in the Request Instances Wizard.
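
    For reference, the raw EC2 API does expose persistence, even if the CloudFormation resources discussed above do not; a boto3 sketch, where the bid, AMI, and instance type are placeholders:

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # A persistent request is re-evaluated whenever the spot price
      # drops back below the bid, instead of ending after interruption.
      response = ec2.request_spot_instances(
          SpotPrice="0.05",                         # placeholder bid
          InstanceCount=1,
          Type="persistent",
          LaunchSpecification={
              "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
              "InstanceType": "t3.micro",           # placeholder type
          },
      )
      print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])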

    Read the article

  • Detaching EBS Volumes (in LVM) takes a lot of time

    - by Cheezo
    I have an EC2 instance (EBS-backed root partition) with EBS volumes configured via LVM. I have formatted the logical volume as ext4 and can mount it to store data, etc. Now I want to take a snapshot of the root partition, so I go and detach the other, non-root EBS volumes (configured in LVM). Here a regular detach does not work, and I almost always have to "force" detach. In another, similar setup with RAID instead of LVM, I can easily detach after stopping the RAID array. The whole setup is running Ubuntu Maverick 10.10. Please assist me with this.
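
    One likely explanation is that an active volume group keeps the block devices busy, so a clean detach only succeeds after LVM releases them. A sketch of that order of operations, assuming a hypothetical mount point and volume group name:

      import subprocess
      import boto3

      # Release the devices before asking EC2 to detach them: unmount
      # the filesystem, then deactivate the volume group.
      subprocess.run(["umount", "/mnt/data"], check=True)         # placeholder mount point
      subprocess.run(["vgchange", "-an", "data_vg"], check=True)  # placeholder VG name

      ec2 = boto3.client("ec2", region_name="us-east-1")
      ec2.detach_volume(VolumeId="vol-0123456789abcdef0")         # placeholder volume ID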

    Read the article

  • How to change the computer name on a server configured by Puppet

    - by David Sulpy
    I am new to Puppet and I'm trying to get Puppet to configure my EC2 instances after they're started from a CloudFormation template in AWS. The problem is that all the nodes started from the CloudFormation template have the same name (the name from the AMI that the new nodes derive from). I would love to find a way to have Puppet rename the nodes when they start up (although, as far as I know, a computer name change requires a reboot, a separate issue...). If you can point me to some documentation that can help me figure this out, or if you have any ideas, that would be great. My ultimate goal is to have each EC2 instance start with a unique name so that I can use New Relic server monitoring to report on the different servers.
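
    One common pattern, sketched below under stated assumptions (IMDSv1 metadata access, a systemd-based image, and a hypothetical "app-" prefix), is to derive the hostname from the instance ID at boot, e.g. from user data before the first Puppet run:

      import subprocess
      import urllib.request

      # Fetch this instance's unique ID from the EC2 metadata service
      # (IMDSv1 shown for brevity; IMDSv2 needs a session token first).
      METADATA = "http://169.254.169.254/latest/meta-data/instance-id"
      instance_id = urllib.request.urlopen(METADATA, timeout=2).read().decode()

      # Derive a unique hostname and apply it (systemd images).
      hostname = "app-" + instance_id
      subprocess.run(["hostnamectl", "set-hostname", hostname], check=True)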

    Read the article

  • How can I tell my dd-wrt router to use someone's Amazon Affiliates link when I point my browser to amazon.com?

    - by Michael Paul
    Here's what I'd like to do. Instead of a one-time donation to one of my favorite free tools (junecloud.com), I'd like to do what they suggest here and use their Amazon Affiliates link for all my Amazon shopping. I shop at Amazon once or twice a week, so this is a great way to let them earn lots of long-term cash without me dropping a dime. My thought was to go into my dd-wrt enabled router and tell it, "any time I go to amazon.com on any computer in the house, please go to http://www.amazon.com/gp/redirect.html?link_code=ur2&tag=junecloud-20&camp=1789&creative=9325&location=%2F instead." (That URL simply redirects me to amazon.com, but every purchase I make during that session is credited to JuneCloud.) Once logged into dd-wrt, I went to Services > Services > DNSMasq, but I'm not really sure how to get it to work from there, or if it's even possible. I know I can redirect IP addresses, but I'm looking to redirect someone on my network from amazon.com to the special Amazon affiliate link. Hope that's clear. Thanks for any replies!
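
    Worth noting: DNSMasq can only map a hostname to an IP address; it cannot append the affiliate query string. One workaround is to point the bare amazon.com name (only that name, so www.amazon.com still resolves normally) at a machine on the LAN running a tiny HTTP redirector. A sketch in Python, with the affiliate URL copied from the question:

      from http.server import BaseHTTPRequestHandler, HTTPServer

      AFFILIATE_URL = ("http://www.amazon.com/gp/redirect.html?link_code=ur2"
                       "&tag=junecloud-20&camp=1789&creative=9325&location=%2F")

      class AffiliateRedirect(BaseHTTPRequestHandler):
          def do_GET(self):
              # Send every request off to the affiliate redirect URL.
              self.send_response(302)
              self.send_header("Location", AFFILIATE_URL)
              self.end_headers()

      # Port 80 so plain http://amazon.com/ hits it; needs root privileges.
      HTTPServer(("0.0.0.0", 80), AffiliateRedirect).serve_forever()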

    Read the article

  • C# code to GZip and upload a string to Amazon S3

    - by BigJoe714
    Hello. I currently use the following code to retrieve and decompress string data from Amazon S3 in C#:

      GetObjectRequest getObjectRequest = new GetObjectRequest()
          .WithBucketName(bucketName).WithKey(key);
      using (S3Response getObjectResponse = client.GetObject(getObjectRequest))
      {
          using (Stream s = getObjectResponse.ResponseStream)
          {
              using (GZipStream gzipStream = new GZipStream(s, CompressionMode.Decompress))
              {
                  StreamReader Reader = new StreamReader(gzipStream, Encoding.Default);
                  string Html = Reader.ReadToEnd();
                  parseFile(Html);
              }
          }
      }

    I want to reverse this code so that I can compress and upload string data to S3 without it being written to disk. I tried the following, but I am getting an exception:

      using (AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(AWSAccessKeyID, AWSSecretAccessKeyID))
      {
          string awsPath = AWSS3PrefixPath + "/" + keyName + ".htm.gz";
          byte[] buffer = Encoding.UTF8.GetBytes(content);
          using (MemoryStream ms = new MemoryStream())
          {
              using (GZipStream zip = new GZipStream(ms, CompressionMode.Compress))
              {
                  zip.Write(buffer, 0, buffer.Length);

                  PutObjectRequest request = new PutObjectRequest();
                  request.InputStream = ms;
                  request.Key = awsPath;
                  request.BucketName = AWSS3BuckenName;

                  using (S3Response putResponse = client.PutObject(request))
                  {
                      //process response
                  }
              }
          }
      }

    The exception I am getting is: Cannot access a closed Stream. What am I doing wrong?

    Read the article

  • Amazon Web services - retrieving a wishlist

    - by izb
    I've been tinkering with Yahoo Pipes and the Amazon E-Commerce Service (ECS) SDK to retrieve my wishlist. The problem is that although I can get all the items on my wishlist just fine, it seems to include items that I've deleted too. Has anyone else used this API and noticed this? Is there a way around it? UPDATE: Requested additional information in comments... Here is the URL I use to fetch the wishlist XML:

      http://webservices.amazon.co.uk/onca/xml?SubscriptionId=[my subs id]&Service=AWSECommerceService&ResponseGroup=ListItems&ProductPage=1&ProductGroup=Book&Operation=ListLookup&ListType=WishList&ListId=[my list id]

    And here is the relevant part of the XML response:

      <ListId>[my list id]</ListId>
      <ListName>Wishlist</ListName>
      <TotalItems>132</TotalItems>
      <TotalPages>14</TotalPages>
      <ListItem>
        <ListItemId>EPIE5559HKT391</ListItemId>
        <DateAdded>2003-11-17</DateAdded>
        <QuantityDesired>1</QuantityDesired>
        <QuantityReceived>0</QuantityReceived>
        <Item>
          <ASIN>5557205521</ASIN>
          <ItemAttributes>
            <Title>Horton hears a who</Title>
          </ItemAttributes>
        </Item>
      </ListItem>
      ...

    The rest of the XML is just either more list items like that, or information about the request at the top of the response.

    Read the article

  • Issues with Rails, Amazon S3, and protected URLs

    - by Shpigford
    So I followed this little tutorial about protecting downloads of files that are uploaded to Amazon S3 with Paperclip. When I've developed locally, it's worked fine, but since pushing the exact same code to a production server... I now get this error from Amazon when I try to access the files:

      <Error>
        <Code>InvalidArgument</Code>
        <Message>Either the Signature query string parameter or the Authorization header should be specified, not both</Message>
        <ArgumentValue>Basic dGVjaHVrdWxlbGU6ZWxlbHVrdWhjZXQ=</ArgumentValue>
        <ArgumentName>Authorization</ArgumentName>
        <RequestId>F6E455857C54F95A</RequestId>
        <HostId>X4QA2pw9wpHtJtJ2T8qxCyINjq4PLHQVF4VrlYjpX7Ayh694BgQprh5p8H7NRCAt</HostId>
      </Error>

    Example URL:

      http://s3.amazonaws.com/media.example.com/assets/videos/1/original.mov?AWSAccessKeyId=MY_ACCESS_KEY&Expires=1271972624&Signature=7wWH2WYHPO0o9szwPJbimUMqAig%3D

    That URL is generated using AWS::S3::S3Object.url_for with the aws-s3 gem. So... not even sure where to start. The fact that it works fine when the app is running locally but not in production really doesn't make sense. The production server is running Ubuntu 8.04.4 LTS (Hardy).
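
    As the error says, S3 rejects requests that carry both a signed query string and an Authorization header (the Basic value in ArgumentValue suggests the production site's HTTP Basic auth is being forwarded to S3). For comparison, a query-string-only signed URL in boto3 — a stand-in for the Ruby gem's url_for, with the bucket and key copied from the example:

      import boto3

      # Generate a presigned GET URL; all auth lives in the query string,
      # so the client must not also send an Authorization header.
      s3 = boto3.client("s3")
      url = s3.generate_presigned_url(
          "get_object",
          Params={"Bucket": "media.example.com",
                  "Key": "assets/videos/1/original.mov"},
          ExpiresIn=3600,
      )
      print(url)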

    Read the article

  • What settings need to be changed to allow EC2 instances to use Amazon's Route 53 for DNS?

    - by ks78
    I have a number of Amazon EC2 instances, all running Ubuntu, which I'd like to configure to use Amazon's Route 53. I set up a script, following Shlomo Swidler's article, but ran into script-related issues, which were answered here. Now I have the script working, but my instances are still not able to access Route 53's DNS. By this I mean they are not able to resolve hostnames to IP addresses. My instances are currently configured with the DNS server IP address Amazon pushes out to them by default; does that need to be changed when using Route 53? I'm also IP-restricting my instances using the Security Groups. Could that be the problem? Is there a certain IP address or port I should open to allow communication with Route 53? It seems that DNS requests should be originating from my instances, so the Security Groups shouldn't be an issue, but I've been wrong before. If anyone has any ideas, I'd really appreciate it.

    Read the article

  • How can I get the size of an Amazon S3 bucket?

    - by Garret Heaton
    I'd like to graph the size (in bytes, and number of items) of an Amazon S3 bucket and am looking for an efficient way to get the data. The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name, but I'm worried about its ability to scale, since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-months, it seems odd that they don't expose this value directly. Although Amazon's REST API returns the number of items in a bucket, s3cmd doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l, but that seems like a hack. The Ruby AWS::S3 library looked promising, but it only provides the number of bucket items, not the total bucket size. Is anyone aware of any other command line tools or libraries (Perl, PHP, Python, or Ruby preferred) which provide ways of getting this data?
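
    For comparison, a minimal boto3 sketch that walks the bucket and sums sizes and item counts. Like s3cmd du, it still lists every object (1,000 keys per API call), so its cost grows with bucket size; the bucket name is a placeholder:

      import boto3

      s3 = boto3.client("s3")
      total_bytes = 0
      total_items = 0

      # Page through every key in the bucket, summing as we go.
      for page in s3.get_paginator("list_objects_v2").paginate(Bucket="bucket_name"):
          for obj in page.get("Contents", []):
              total_bytes += obj["Size"]
              total_items += 1

      print(f"{total_items} items, {total_bytes} bytes")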

    Read the article

  • Amazon CloudFront Cache Invalidation – Fill out the Survey!

    - by joelvarty
    Amazon has come up with a survey regarding how cached objects stored on their CloudFront servers can be invalidated. http://survey.amazonwebservices.com/survey/s?s=1369 This is a key feature for Agility CMS, and for a lot of other applications. If it's important to you, I suggest you spend a few minutes and fill it out. more later - joel
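
    CloudFront has since gained an invalidation API; a boto3 sketch, with a placeholder distribution ID and path:

      import boto3
      import time

      cf = boto3.client("cloudfront")
      cf.create_invalidation(
          DistributionId="E1234EXAMPLE",   # placeholder
          InvalidationBatch={
              "Paths": {"Quantity": 1, "Items": ["/images/logo.png"]},
              # CallerReference must be unique per invalidation request.
              "CallerReference": str(time.time()),
          },
      )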

    Read the article

  • How to setup IPSec with Amazon EC2

    - by bonzi
    How do I set up an IPSec connection from my Ubuntu laptop to an Amazon EC2 instance? I tried setting it up using an Elastic IP and VPC with the following Openswan configuration, but it is not working:

      conn host-to-host
          left=%defaultroute
          leftsubnet=EC2PRIVATEIP/32  # Local netmask
          leftid=ELASTICIP
          leftrsasigkey=
          connaddrfamily=ipv4
          right=laptopip              # Remote IP address
          rightid=laptopip
          rightrsasigkey=
          ike=aes128                  # IKE algorithms (AES cipher)
          esp=aes128                  # ESP algorithms (AES cipher)
          auto=add
          pfs=yes
          forceencaps=yes
          type=tunnel

    Read the article

  • What is a good Amazon S3 client?

    - by Eyal
    I've been using the Amazon S3 Management Console to browse my S3 files. Unfortunately, it doesn't seem to be able to sort files (in a given bucket) by anything other than its default (which seems to be by name). I'd like a nice GUI client for browsing these files that will let me sort them by date, so the newest appear on top. I did find a Firefox plug-in, S3Fox, but it doesn't work with the current version of Firefox.

    Read the article

  • Controlling number of downloads on Amazon S3

    - by m7d
    Is there a way to control the number of downloads of digital content on Amazon S3, either directly or via some middleman software that talks to S3? I already use their timed links, but I would like to control the number of downloads as well. Any ideas of how to accomplish this using S3, or suggestions about alternative services that could? Thanks!
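
    S3 offers no built-in per-user download limit, so the usual pattern is a small middleman that counts downloads and then hands out a short-lived signed URL. A sketch, assuming boto3, a hypothetical bucket name, and an in-memory counter (a real deployment would use a datastore):

      import boto3

      MAX_DOWNLOADS = 3
      download_counts = {}  # (user_id, key) -> count; use a real datastore in production

      def get_download_url(user_id, key):
          """Hand out a short-lived signed URL, at most MAX_DOWNLOADS times."""
          count = download_counts.get((user_id, key), 0)
          if count >= MAX_DOWNLOADS:
              raise PermissionError("download limit reached")
          download_counts[(user_id, key)] = count + 1

          s3 = boto3.client("s3")
          return s3.generate_presigned_url(
              "get_object",
              Params={"Bucket": "my-content-bucket", "Key": key},  # placeholder bucket
              ExpiresIn=300,  # the same kind of timed link, just rationed
          )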

    Read the article

  • Emulating Amazon SQS during development

    - by pyo
    I'm quite interested in beginning some development using Amazon SQS, and perhaps SimpleDB too. My question is this: are there any open source solutions that mimic the functionality, just for the purposes of development? I've already encountered the Eucalyptus project (http://open.eucalyptus.com) for creating an EC2-esque cloud. I've not had any success with Google; I suspect it's because the cost of entry is so inexpensive. But still, does anyone know of anything like this?
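
    Emulators in this spirit do exist (ElasticMQ and LocalStack are two commonly cited ones), and the SDK only needs its endpoint overridden to use them. A boto3 sketch, assuming LocalStack's default edge port of 4566 and dummy credentials:

      import boto3

      # Point the SDK at a local emulator instead of AWS.
      sqs = boto3.client(
          "sqs",
          endpoint_url="http://localhost:4566",
          region_name="us-east-1",
          aws_access_key_id="test",
          aws_secret_access_key="test",
      )

      queue_url = sqs.create_queue(QueueName="dev-queue")["QueueUrl"]
      sqs.send_message(QueueUrl=queue_url, MessageBody="hello from the emulator")
      reply = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=1)
      print(reply["Messages"][0]["Body"])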

    Read the article

  • Send an XML message to Amazon SQS

    - by bartligthart
    I am a newbie to Amazon SQS and Ruby on Rails, and I am working on a project where some XML messages must be sent to SQS. How do I do that? Right now I have this in the controller, after the .save:

      def create
        @thing = Thing.new(params[:thing])
        respond_to do |format|
          if @thing.save
            message = @thing.to_xml

    and in the model:

      inputqueue.send_message(message)

    Is this the way I can send an XML file to SQS, or??

    Read the article

  • How to get the list of price offers on an item from Amazon with python-amazon-product-api item_lookup

    - by miernik
    I am trying to write a function to get a list of offers (their prices) for an item based on the ASIN:

      def price_offers(asin):
          from amazonproduct import API, ResultPaginator, AWSError
          from config import AWS_KEY, SECRET_KEY
          api = API(AWS_KEY, SECRET_KEY, 'de')
          str_asin = str(asin)
          node = api.item_lookup(id=str_asin, ResponseGroup='Offers',
                                 Condition='All', MerchantId='All')
          for a in node:
              print a.Offer.OfferListing.Price.FormattedPrice

    I am reading http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?ItemLookup.html and trying to make this work, but all the time it just says:

      Failure instance: Traceback: <type 'exceptions.AttributeError'>:
      no such child: {http://webservices.amazon.com/AWSECommerceService/2009-10-01}Offer

    Read the article

  • Delete Amazon S3 buckets?

    - by Kyle Cronin
    I've been interacting with Amazon S3 through S3Fox and I can't seem to delete my buckets. I select a bucket, hit delete, confirm the delete in a popup, and... nothing happens. Is there another tool that I should use?
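
    One frequent culprit, worth checking before switching tools: S3 only deletes a bucket that is already empty, and some clients fail silently when it isn't. A boto3 sketch that empties and then deletes a bucket (the bucket name is a placeholder; a versioned bucket would also need its object versions removed):

      import boto3

      bucket = boto3.resource("s3").Bucket("my-bucket")  # placeholder name
      bucket.objects.all().delete()  # batch-delete every object first
      bucket.delete()                # now the empty bucket can be removed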

    Read the article

  • Web based client for Amazon S3

    - by Dick Lebavo
    We are looking for a secure online solution to access our files stored on Amazon S3. We have about 3K files, mostly media and documents, that we need to make available to our employees on the move. We don't want to develop anything in-house if there is an existing solution. Please note that our employees are not technologically minded, so a simple web-based upload/download GUI would work best.

    Read the article
