Search Results

Search found 2864 results on 115 pages for 'amazon sns'.

Page 9/115

  • Amazon EC2 EBS automatic backup one-liner works manually but not from cron

    - by dan
    I am trying to implement an automatic backup system for my EBS volume on Amazon AWS. When I run this command as ec2-user: /opt/aws/bin/ec2-create-snapshot --region us-east-1 -K /home/ec2-user/pk.pem -C /home/ec2-user/cert.pem -d "vol-******** snapshot" vol-******** everything works fine. But if I add this line to /etc/crontab and restart the crond service: 15 12 * * * ec2-user /opt/aws/bin/ec2-create-snapshot --region us-east-1 -K /home/ec2-user/pk.pem -C /home/ec2-user/cert.pem -d "vol-******** snapshot" vol-******** it doesn't work. I checked /var/log/cron and this line is there, so the command does get executed: Dec 13 12:15:01 ip-10-204-111-94 CROND[4201]: (ec2-user) CMD (/opt/aws/bin/ec2-create-snapshot --region us-east-1 -K /home/ec2-user/pk.pem -C /home/ec2-user/cert.pem -d "vol-******** snapshot" vol-******** ) Can you please help me troubleshoot the problem? I suspect it is an environment problem - maybe some variable is missing - but if that's the case I don't know what to do about it. Thanks.
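
    A common cause here is that cron runs with a minimal environment, so the Java-based EC2 API tools can't find JAVA_HOME or EC2_HOME (interactive shells pick them up from /etc/profile.d, cron does not). A minimal sketch of an /etc/crontab that sets them explicitly - the exact paths are assumptions and differ per AMI:

        JAVA_HOME=/usr/lib/jvm/jre
        EC2_HOME=/opt/aws/apitools/ec2
        PATH=/usr/local/bin:/usr/bin:/bin:/opt/aws/bin

        # snapshot the volume daily at 12:15 as ec2-user
        15 12 * * * ec2-user /opt/aws/bin/ec2-create-snapshot --region us-east-1 -K /home/ec2-user/pk.pem -C /home/ec2-user/cert.pem -d "vol-******** snapshot" vol-********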

    Read the article

  • Amazon Product Availability

    - by user201140
    Many products on Amazon are unavailable, but a date is given for when they'll be back in stock. I've used AWS and got an XML reply, but I can't find the date information anywhere. Is it possible to get this information? Thanks. NOTE: This is the request I'm sending - what should I alter? http://ecs.amazonaws.com/onca/xml?AWSAccessKeyId=MYID&AssociateTag=MYTAG&ItemId=THEITEMID&Operation=ItemLookup&ResponseGroup=Large&ReviewSort=-HelpfulVotes&Service=AWSECommerceService&Signature=MYSIGNATURE&Timestamp=2009-12-04T17%3A35%3A43Z&Version=2009-06-01

    Read the article

  • Using amazon s3 with cloudfront as a CDN

    - by weezybizzle
    I would like to serve user-uploaded content (pictures, videos, and other files) from a CDN. Using Amazon S3 with CloudFront seems like a reasonable way to go. My only question is about the speed of the file system. My plan was to host user media under URIs like cdn.mycompany.com/u/u/i/d/uuid.jpg. I don't have any prior experience with S3 or CDNs, and I was wondering whether this strategy would scale well to handle a large amount of user-uploaded content, and whether there is a more conventional way to accomplish this.
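
    For what it's worth, S3 keys are a flat namespace, so the /u/u/i/d/ "directories" are purely cosmetic; CloudFront simply uses the bucket as its origin. A rough sketch of the setup with the AWS CLI - the bucket name and hostname are placeholders:

        # upload a user file under the uuid-style key
        aws s3 cp uuid.jpg s3://mycompany-user-media/u/u/i/d/uuid.jpg

        # create a CloudFront distribution with the bucket as its origin
        aws cloudfront create-distribution --origin-domain-name mycompany-user-media.s3.amazonaws.com

    cdn.mycompany.com would then be a CNAME pointing at the distribution's *.cloudfront.net domain (and added to the distribution as an alternate domain name).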

    Read the article

  • Which of Amazon's free tier services to use for a Java app

    - by vikas devde
    This is the first time I am going to host a Java app. Amazon offers a free tier which provides free usage for one year, and I am going to use it. But there are so many services (S3, EC2, etc.) that I can't figure out which service is for web hosting or what each service is used for; their docs are so huge I am confused about what to read. Can anybody write some good points about which services to use specifically for Java apps and how much I will be charged? Also, they ask for credit/debit card credentials at signup - does that mean they will debit me even during the free period?

    Read the article

  • increasing amazon root volume size

    - by OCD
    I have a default Amazon EC2 instance with an 8GB root volume. I am running out of space. I have: detached the current EBS volume in the AWS Management Console (web), created a snapshot of this volume, created a new volume with 50GB of space from my snapshot, and attached the new volume back to the instance at /dev/sda1. However, when I reconnect to the instance, I can see from the management console that my new volume is attached, but df -h still shows:
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/xvda1             8256952   8173624         0 100% /
        tmpfs                   308508        40    308468   1% /dev/shm
    It's still not using my new volume's size - how do I make this work?
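
    The snapshot carries the old filesystem with it, so after the bigger volume is attached the ext filesystem still has to be grown to fill the device. A minimal sketch, assuming the device is /dev/xvda1 as in the df output above:

        # confirm the block device itself now reports ~50GB
        cat /proc/partitions

        # grow the ext3/ext4 filesystem to fill the enlarged device (works online)
        sudo resize2fs /dev/xvda1

        # verify the new size
        df -h /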

    Read the article

  • Best practices for using Amazon SQS - Polling the queue

    - by alex
    I'm designing a service for sending out emails for our eCommerce site (order confirmations, alerts, etc.). The plan is to have a "SendEmail" method that generates a chunk of XML representing the email to be sent and puts it on an Amazon SQS queue. My web app(s) and other applications will use this to "send" emails. I then need a way of checking the queue and physically sending out the email messages. (I know how I'm going to be dispatching the emails.) I'm curious what the best way to "poll" the queue would be. Should I create a Windows service and use something like Quartz.net to schedule it to check the queue every x minutes, for example? Is there a better way of doing this?
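
    Instead of waking up every few minutes, a worker can sit in a loop and rely on SQS long polling, where the receive call itself blocks for up to 20 seconds until a message arrives. The stack in the question is .NET; the sketch below only illustrates the pattern with the AWS CLI, and the queue URL is a placeholder:

        QUEUE_URL="https://sqs.us-east-1.amazonaws.com/123456789012/outgoing-email"

        while true; do
            # long poll: wait up to 20s for a message instead of returning immediately
            RECEIPT=$(aws sqs receive-message --queue-url "$QUEUE_URL" \
                      --wait-time-seconds 20 --max-number-of-messages 1 \
                      --output text --query 'Messages[0].ReceiptHandle')
            if [ -n "$RECEIPT" ] && [ "$RECEIPT" != "None" ]; then
                # ...fetch the message body the same way, hand it to the mail dispatcher...
                aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$RECEIPT"
            fi
        done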

    Read the article

  • Problem downloading .exe file from Amazon S3 with a signed URL in IE

    - by Joe Corkery
    I have a large collection of Windows .exe files which are being stored/distributed using Amazon S3. We use signed URLs to control access to the files, and this works great except in one case: downloading a .exe file with Internet Explorer (version 8). It works just fine in Firefox, and it also works fine if you don't use a signed URL (but that is not an option). What happens is that the IE downloader changes the name from 'myfile.exe' to 'myfile[1]' and Windows no longer recognizes it as an executable. Any advice would be greatly appreciated. Thanks.
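
    One workaround worth trying is to store an explicit Content-Type and Content-Disposition on each object, so the browser is told the filename and type regardless of what it makes of the signed query string. A sketch with the AWS CLI - bucket and key are placeholders:

        aws s3 cp myfile.exe s3://my-installer-bucket/myfile.exe \
            --content-type application/octet-stream \
            --content-disposition 'attachment; filename="myfile.exe"'

    The same headers can also be forced per request by adding the response-content-disposition / response-content-type override parameters when generating the signed URL.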

    Read the article

  • Amazon-like ecommerce site

    - by Soule
    Hey there, my idea was to make an e-commerce site a lot like Amazon. Not exactly cloning it, but since it's for a niche market, I need something like it. I was thinking of using Magento or something like that as a base, but I can't figure out how to allow users to: sign up for an account and get verified by me; add items so they can be searched; and write product reviews. What can I use to achieve/make this, and what are some suggestions? I can code in PHP and Python, thanks!

    Read the article

  • how to 'load data infile' on amazon RDS?

    - by feydr
    Not sure if this is a question better suited for Server Fault, but I've been messing with Amazon RDS lately and am having trouble granting 'file' privileges to my web host's MySQL user. I'd assume that a simple: grant file on *.* to 'webuser@'%'; would work, but it does not, and I can't seem to do it with my 'root' user either. What gives? The reason we use LOAD DATA is that it is super fast for doing thousands of inserts at once. Does anyone know how to remedy this, or do I need to find a different way? This page, http://docs.amazonwebservices.com/AmazonRDS/latest/DeveloperGuide/index.html?Concepts.DBInstance.html seems to suggest that I need to find a different way around this. Help? UPDATE: I'm not trying to import a database -- I just want to use the file load option to insert several hundred thousand rows at a time. After digging around, this is what we have:
        mysql> grant file on *.* to 'devuser'@'%';
        ERROR 1045 (28000): Access denied for user 'root'@'%' (using password: YES)
        mysql> select User, File_priv, Grant_priv, Super_priv from mysql.user;
        +----------+-----------+------------+------------+
        | User     | File_priv | Grant_priv | Super_priv |
        +----------+-----------+------------+------------+
        | rdsadmin | Y         | Y          | Y          |
        | root     | N         | Y          | N          |
        | devuser  | N         | N          | N          |
        +----------+-----------+------------+------------+
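
    For background, RDS never grants the FILE privilege because you have no access to the database server's local filesystem, but the client-side variant LOAD DATA LOCAL INFILE reads the file from the machine running the mysql client and does work against RDS. A rough sketch - host, database, table, and file names are placeholders:

        mysql --local-infile=1 -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u devuser -p mydatabase

        mysql> LOAD DATA LOCAL INFILE '/tmp/rows.csv'
            -> INTO TABLE my_table
            -> FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';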

    Read the article

  • How to get Amazon s3 PHP SDK working?

    - by JakeRow123
    I'm trying to set up S3 for the first time and am trying to run the sample file that comes with the PHP SDK, which creates a bucket and attempts to upload some demo files to it. But this is the error I am getting: The difference between the request time and the current time is too large. I read in another question on SO that this is because Amazon determines a valid request by comparing the times between the server and the client - the two must be within a 15-minute window of one another. Now here is the problem. My laptop's time is 12:30 AM, June 8, 2012 at the moment. On my server I created a file called servertime.php and placed this code in it: <?php print strftime('%c'); ?> and the output is: Fri Jun 8 00:31:22 2012. It looks like the day is correct but I don't know what to make of 00:31:22. In any case, how is it possible to always make sure the time between the client and server is within a 15-minute window of one another? What if I have a user in China who wishes to upload a file on my site, which uses S3 for the CDN? Then the time difference would be over a day. How can I make sure all my users' times are within 15 minutes of my server time? What if the user is in the U.S. but the time on their machine is misconfigured? Basically, how do I get the S3 bucket creation and upload to work?
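
    Worth noting: the 15-minute check is against whichever machine signs the request - here the web server running the PHP SDK - and it compares UTC timestamps, so neither your visitors' clocks nor their time zones matter. If the server's clock has drifted, syncing it with NTP is the usual fix; a minimal sketch (package and service names vary by distro):

        # one-off sync against a public NTP pool
        sudo ntpdate pool.ntp.org

        # keep the clock synced from now on
        sudo service ntpd start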

    Read the article

  • amazon product advertising api - item lookup request working example

    - by I__
    Would anyone have a working example of an Amazon ItemLookup? I have the following code, but it does not seem to work:

        string ISBN = "0393326381";
        string ASIN = "";
        if (!(string.IsNullOrEmpty(ISBN) && string.IsNullOrEmpty(ASIN)))
        {
            AWSECommerceServicePortTypeChannel service = new AWSECommerceServicePortTypeChannel();
            ItemLookup lookup = new ItemLookup();
            ItemLookupRequest request = new ItemLookupRequest();
            lookup.AssociateTag = secretKey;
            lookup.AWSAccessKeyId = accessKeyId;
            if (string.IsNullOrEmpty(ASIN))
            {
                request.IdType = ItemLookupRequestIdType.ISBN;
                request.ItemId = new string[] { ISBN.Replace("-", "") };
            }
            else
            {
                request.IdType = ItemLookupRequestIdType.ASIN;
                request.ItemId = new string[] { ASIN };
            }
            request.ResponseGroup = new string[] { "OfferSummary" };
            lookup.Request = new ItemLookupRequest[] { request };
            response = service.ItemLookup(lookup);
            if (response.Items.Length > 0 && response.Items[0].Item.Length > 0)
            {
                Item item = response.Items[0].Item[0];
                if (item.MediumImage == null)
                {
                    //bookImageHyperlink.Visible = false;
                }
                else
                {
                    //bookImageHyperlink.ImageUrl = item.MediumImage.URL;
                }
                //bookImageHyperlink.NavigateUrl = item.DetailPageURL;
                //bookTitleHyperlink.Text = item.ItemAttributes.Title;
                //bookTitleHyperlink.NavigateUrl = item.DetailPageURL;
                if (item.OfferSummary.LowestNewPrice == null)
                {
                    if (item.OfferSummary.LowestUsedPrice == null)
                    {
                        //priceHyperlink.Visible = false;
                    }
                    else
                    {
                        //priceHyperlink.Text = string.Format("Buy used {0}", item.OfferSummary.LowestUsedPrice.FormattedPrice);
                        //priceHyperlink.NavigateUrl = item.DetailPageURL;
                    }
                }
                else
                {
                    //priceHyperlink.Text = string.Format("Buy new {0}", item.OfferSummary.LowestNewPrice.FormattedPrice);
                    //priceHyperlink.NavigateUrl = item.DetailPageURL;
                }
                if (item.ItemAttributes.Author != null)
                {
                    //authorLabel.Text = string.Format("By {0}", string.Join(", ", item.ItemAttributes.Author));
                }
                else
                {
                    //authorLabel.Text = string.Format("By {0}", string.Join(", ", item.ItemAttributes.Creator.Select(c => c.Value).ToArray()));
                }
                /*
                ItemLink link = item.ItemLinks.Where(i => i.Description.Contains("Wishlist")).FirstOrDefault();
                if (link == null)
                {
                    //wishListHyperlink.Visible = false;
                }
                else
                {
                    //wishListHyperlink.NavigateUrl = link.URL;
                }
                */
            }
        }
        }

    The problem is with this line - it should be defined differently, but I do not know how:

        AWSECommerceServicePortTypeChannel service = new AWSECommerceServicePortTypeChannel();

    Read the article

  • EC2 persistence of machine

    - by Seagull
    I want to 'persist' my Amazon EC2 images. My scenario: I have a range of Windows and Linux machines. Some machines are EBS-backed, whereas others are S3-backed. I need to be able to persist a machine (put it to sleep), preferably keeping all the settings it had while it was running. I need to be able to quickly wake a machine from sleep [ideally with an SLA of less than 2 minutes to turn on, if such an SLA is available from Amazon]. Here's the stuff that confuses me: AWS allows me to put EBS-backed machines to sleep, but not S3-backed ones. I believe I can put S3 machines into some sort of persistence mode, but this involves shutting down the machine, writing it to S3 storage and then recovering it from there (not a real sleep mode, but at least I don't continue to get billed for CPU). S3 backing seems to take a long time either to write a machine to disk or to recover (turn on) a machine. And I can't tell immediately which machines are EBS-backed and which are S3-backed. It seems like I can instantiate either type, but it's not immediately clear how Amazon decides whether a given machine should be EBS- or S3-backed. Advice?
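
    On the last point, the root device type of each instance or AMI can be queried: 'ebs' means it can be stopped and started again (the "sleep" case), while 'instance-store' (S3-backed) can only be terminated. A sketch with the AWS CLI - the instance ID is a placeholder:

        # list which instances are EBS-backed vs instance-store (S3) backed
        aws ec2 describe-instances \
            --query 'Reservations[].Instances[].[InstanceId,RootDeviceType]' --output table

        # stop ("sleep") an EBS-backed instance and start it again later
        aws ec2 stop-instances  --instance-ids i-0123456789abcdef0
        aws ec2 start-instances --instance-ids i-0123456789abcdef0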

    Read the article

  • Can't Connect to IIS Ftp Site under Amazon EC2

    - by h3n
    IIS 7.5:
    FTP Firewall Support: data port range 49152-65535, using the external static IP of the Amazon EC2 instance
    FTP IPv4 Restrictions: allow: Amazon EC2 static IP
    FTP Authentication: Anonymous: Enabled, Basic: Disabled, IIS Manager: Enabled
    FTP Authorization: Allow All Users: Read/Write
    Windows Firewall (Inbound): open port 21; open port range 49152-65535. (Outbound): open port 20
    Amazon EC2 Security Group: Custom TCP Rule: 21; Custom TCP Rule: 49152-65535
    It works in Internet Explorer when I type the address ftp://localhost on the server, but when I enter the Amazon EC2 static IP (ftp://IPADDRESS) it doesn't connect. I can't connect to it with FileZilla either.

    Read the article

  • Why does my custom Amazon EC2 AMI have limited instance type options?

    - by John
    The basic 64-bit Amazon Linux AMI has the following instance type options available: Micro, Large, Extra Large, High-Memory Extra Large, ... etc. I booted up this AMI as a micro type, made customizations, shut it down, detached the volume, took a snapshot, and registered my own custom AMI: ec2-register --snapshot [snapshot_id] --description "my description" --name "my name" --kernel aki-427d952b That worked. HOWEVER, when I try to create an instance from my custom AMI, only the following instance types are available: Micro, Small, High-CPU Medium ... which coincidentally are the same instance types available if you try to boot up the 32-bit Amazon image. Why do the available instance types of my custom image differ from those of the image I based it on?
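
    One thing worth checking is the architecture recorded on the new AMI: if ec2-register is not told --architecture x86_64 it may fall back to i386, which would match seeing only the 32-bit instance types. A hedged sketch of re-registering with the architecture stated explicitly (same snapshot and kernel as above):

        ec2-register --snapshot [snapshot_id] \
            --description "my description" --name "my name" \
            --architecture x86_64 --kernel aki-427d952b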

    Read the article

  • Are whole VM images backed up on Amazon EC2/S3?

    - by John
    I've been trying to get my head around Amazon Web Services as a VPS provider. My understanding is that an EC2 instance running Windows is basically a Windows VM, very similar to renting a VPS from a more traditional hosting provider. I don't want complex backups, either to administer or to restore - if my restore involves installing SVN, MySQL, Jira, etc. on a new box before I can even try to restore the backup, then it's not great to me. What I really want is a service that backs up my entire VM... if the PC running the VPS dies, then the VM image is installed on a new PC and off we go again. With Amazon being all about flexibility and elasticity, I wondered if they have this service? I can't figure it out from reading their docs.
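
    For EBS-backed instances this is roughly what creating an AMI from the running instance gives you: the root volume is snapshotted, and a replacement machine can later be launched from that image with everything already installed. A sketch with the AWS CLI - the IDs and names are placeholders:

        # bake the whole machine into an AMI (snapshots its EBS volumes)
        aws ec2 create-image --instance-id i-0123456789abcdef0 \
            --name "jira-svn-box-backup" --reboot

        # later, launch a fresh copy from that image
        aws ec2 run-instances --image-id ami-0abc1234 --instance-type m1.small --key-name my-key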

    Read the article

  • Why do I get "Permission denied (publickey)" when trying to SSH from local Ubuntu to an Amazon EC2 se

    - by Vorleak Chy
    I have an instance of an application running in the cloud on an Amazon EC2 instance, and I need to connect to it from my local Ubuntu machine. It works fine from one local Ubuntu machine and also from a laptop, but I get the message "Permission denied (publickey)" when trying to SSH to EC2 from another local Ubuntu machine. It's so strange to me. I'm thinking it's some sort of problem with the security settings on the Amazon EC2 instance, which may have limited which IPs can access it, or the certificate may need to be regenerated. Does anyone know a solution?
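
    The usual cause is that the new machine doesn't have (or isn't offering) the private key of the key pair the instance was launched with, since that .pem file only exists where it was originally downloaded. A quick sketch of connecting with the key given explicitly - the key filename, login user, and hostname are assumptions (the user differs by AMI, e.g. ubuntu, ec2-user or root):

        # copy the key pair's .pem file to the new machine first, then:
        chmod 400 ~/.ssh/my-ec2-key.pem
        ssh -v -i ~/.ssh/my-ec2-key.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
        # -v shows which keys are offered, which usually pinpoints the failure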

    Read the article

  • How to set up Amazon EC2 with my own OS and DB?

    - by Spencer Lim
    I have my own versions of the OS and DB that I want to use - Windows Server 2008 R2 with Hyper-V and SQL Server 2008 R2, both Enterprise edition. May I know how to get them configured and running on Amazon EC2, and what else is needed in combination to make it run? Also, how could I install the operating system and DNS? I have never run a server before, but I just need something like a VPS to support my development and testing. Amazon EC2 seems the best and cheapest service, at only $1 per hour. I'd appreciate any brief guide. Thx =D

    Read the article

  • How to set up Amazon EC2 with my own OS and DB?

    - by SLim
    I have my own versions of the OS and DB that I want to use - Windows Server 2008 R2 with Hyper-V and SQL Server 2008 R2, both Enterprise edition. May I know how to get them configured and running on Amazon EC2, and what else is needed in combination to make it run? Also, how could I install the operating system and DNS? I have never run a server before, but I just need something like a VPS to support my development and testing. Amazon EC2 seems the best and cheapest service, at only $1 per hour.

    Read the article

  • Amazon S3: allow users to upload on a restricted basis (per bucket maybe)?

    - by Tom
    Hi there, I'm thinking about signing up for the Amazon S3 storage service. What I want to do is create a service where other people can register their own bucket with a certain amount of storage. These users will install my software, which then uploads their files. Of course, the users may only upload what they have paid for. For this to work I would like to create a separate bucket for each customer, each with its own properties. Question 1: is this possible with the API? How? This means that the installed software must have the rights needed to upload to my Amazon S3 account. Question 2: can I create individual authentication IDs for each bucket or customer, so that they can only upload within the restrictions I have set? Thanks in advance.
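
    Buckets can be created through the API, and AWS IAM lets you create a separate user per customer whose access keys only work against that customer's bucket (a storage quota per bucket would still have to be enforced by your own software). A rough sketch with the AWS CLI - the names are placeholders and the policy is trimmed to the essentials:

        # one bucket per customer
        aws s3 mb s3://myservice-customer-42

        # a per-customer IAM user whose keys only allow uploads into that bucket
        aws iam create-user --user-name customer-42
        aws iam put-user-policy --user-name customer-42 --policy-name upload-only \
            --policy-document '{
              "Version": "2012-10-17",
              "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                "Resource": "arn:aws:s3:::myservice-customer-42/*"
              }]
            }'
        aws iam create-access-key --user-name customer-42   # keys to ship with that customer's install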

    Read the article

  • How to run AWS sample JAVA code on an EC2

    - by SeaPlusPlus
    I just started with Amazon Web Services, and I have an EC2 instance. I downloaded the Java SDK and the Eclipse toolkit. I am able to run a sample program locally on my PC and connect to the Amazon databases, etc. My question is: what do I need to do to get this working on my EC2 instance? This may not even be specific to AWS. In Eclipse, I can just "Run as Application" and run any code. On the server side, what do I need to do? Should I FTP over my .java files? Should I export them to a jar and upload that? Do I need to install anything special to actually run it? I'm just trying to run the basic DynamoDB example that connects to the database and adds a new table and row.
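
    The usual route is to package a runnable jar locally (Eclipse's Export > Runnable JAR, or Maven), copy it to the instance, and run it there with a JRE installed. A sketch - the key, host, and jar names are assumptions:

        # on the local machine: copy the packaged jar up to the instance
        scp -i my-key.pem dynamodb-sample.jar ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:~/

        # on the EC2 instance: install a JRE (package name varies by AMI) and run the jar
        sudo yum install -y java-1.7.0-openjdk
        java -jar dynamodb-sample.jar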

    Read the article

  • Amazon AWS s3fs mount problem on Fedora 14

    - by Alex
    I successfully compiled and installed s3fs (http://code.google.com/p/s3fs/) on my Fedora 14 machine. I put the password credentials in /etc/ as specified in the guide. When I run: sudo /usr/bin/s3fs bucket_name /mnt/bucket_name/ it runs successfully. (Note: the bucket name is the same as the folder name in /mnt/.) When I run ls in /mnt/ I get the error "ls: cannot access bucket_name: Permission denied". When I run sudo chmod 640 /mnt/bucket_name I get "chmod: changing permissions of `bucket_name': Input/output error". When I reboot the machine I can access the folder /mnt/bucket_name normally, but it is no longer mapped to the S3 bucket. So, basically, I have two questions: 1) How do I access the folder (/mnt/bucket_name) as usual after I mount it to the S3 bucket, and 2) How can I keep it mounted even after a machine restart? Regards
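
    Since the mount is made as root, FUSE hides it from other users unless allow_other is passed, and an /etc/fstab entry takes care of remounting at boot. A sketch assuming the setup above - the uid/gid values are placeholders, and the option names are per the s3fs documentation of that era:

        # mount readable by non-root users and owned by your login user (uid/gid 500 here)
        sudo /usr/bin/s3fs bucket_name /mnt/bucket_name -o allow_other,uid=500,gid=500

        # /etc/fstab entry so the bucket comes back after a reboot
        s3fs#bucket_name /mnt/bucket_name fuse allow_other,_netdev 0 0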

    Read the article

  • Interesting questions related to lighttpd on Amazon EC2

    - by terence410
    This problem appeared today and I have no idea what is going on. Please share your ideas. I have 1 EC2 DB server (MySQL + NFS file sharing + memcached) and 3 EC2 web servers (lighttpd) which mount the NFS folders from the DB server. Everything has gone smoothly for months, but suddenly there is an interesting phenomenon: every 8 to 10 minutes, PHP files become unreachable. This lasts about 1 minute and then everything goes back to normal. Normal files like .html are unaffected. All servers have the same problem at exactly the same time. I have spent one whole day analysing the reason. Finally, I found that when the problem appears, the number of file descriptors held by lighttpd suddenly increases a lot. I used ls /proc/1234/fd | wc -l to check the number of fds. The number of fds is around 250 in normal times; however, when the problem appears, it rises to about 1500 and then drops back to normal. It sounds funny, right? Do you have any idea what's going on? [Attached image: the CPU graph of one of the web servers]
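
    To narrow down what those extra descriptors are when the count spikes (sockets waiting on PHP backends vs files on the NFS mount), lsof against the lighttpd PID gives a quick breakdown by type; a small sketch using PID 1234 as in the question:

        # count lighttpd's open descriptors grouped by type (REG, IPv4, unix, FIFO, ...)
        sudo lsof -p 1234 | awk '{print $5}' | sort | uniq -c | sort -rn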

    Read the article

  • Amazon S3 permissions

    - by Joe
    Trying to understand S3... How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so that only that user has access to that file? It seems like query string authentication requires an expiration date, and that won't work for me - is there another way to do this?
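
    A common pattern is to keep every object private and have the web application mint a short-lived signed URL on each request, after it has checked that the logged-in user owns the file; the expiration then stops mattering because a fresh URL is generated every time. A sketch with the AWS CLI - bucket and key are placeholders:

        # objects stay private; hand the browser a URL that works for 5 minutes
        aws s3 presign s3://myapp-user-files/user123/photo.jpg --expires-in 300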

    Read the article
