Search Results

Search found 1351 results on 55 pages for 'aws s3'.

Page 2 of 55

  • Why wouldn't an S3 ACL "stick"?

    - by Chris Phillips
    We would like to set an ACL to allow a partner account access to one of our buckets. We've tested the process on a test account and everything works fine. On our production account/buckets, however, we can set the ACL and see the update, but as soon as we attempt to access the bucket from the other account we get a forbidden response. Afterwards, when we look at the ACL list for the bucket, the permission is gone. We've tried using both Amazon's new S3 tool in the AWS Management Console and CloudBerry Explorer, and both tools exhibit exactly the same behavior. Using the same process to update an ACL from our test account works as expected (the ACL update "sticks"). What would cause the ACL to not "stick"? Does anyone have any ideas on how to fix/work around the problem?
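
    For reference, a minimal boto3 sketch (the bucket name and canonical user ID below are placeholders) of applying a cross-account grant and immediately reading the ACL back to check whether it stuck:

        import boto3

        s3 = boto3.client("s3")
        BUCKET = "my-production-bucket"           # placeholder
        PARTNER_ID = "partner-canonical-user-id"  # placeholder canonical user ID

        # Read the current ACL so the existing owner and grants are preserved.
        acl = s3.get_bucket_acl(Bucket=BUCKET)
        grants = acl["Grants"] + [{
            "Grantee": {"Type": "CanonicalUser", "ID": PARTNER_ID},
            "Permission": "READ",
        }]
        s3.put_bucket_acl(
            Bucket=BUCKET,
            AccessControlPolicy={"Owner": acl["Owner"], "Grants": grants},
        )

        # Verify: list the grants as S3 now reports them.
        for g in s3.get_bucket_acl(Bucket=BUCKET)["Grants"]:
            print(g["Grantee"].get("ID"), g["Permission"])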

    Read the article

  • How to set up JBoss with S3_Ping on AWS?

    - by Jonik
    I'm looking into running clustered JBoss on Amazon Web Services (AWS). I'd like to try out S3_PING, i.e. making JBoss use an S3 bucket for dynamic node discovery etc., since no multicast is available. I found a piece of example config XML related to S3_PING, but I'm not sure where in the JBoss installation you're supposed to configure this. So, what JBoss config files would I need to tweak to get S3_PING working? Can anyone point me to a more complete example? JBoss 5.1.0 GA. (This is probably more a JGroups/JBoss question than anything else. I've already got the S3 bucket for this set up, so no problem there.)
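
    As a hedged pointer (verify against your exact JBoss version): in JBoss 5.1 the JGroups stacks live in server/<profile>/deploy/cluster/jgroups-channelfactory.sar/META-INF/jgroups-channelfactory-stacks.xml, and the usual change is to swap the MPING/PING discovery element in the stack you use for an S3_PING element along these lines (bucket and keys are placeholders):

        <S3_PING location="my-jboss-bucket"
                 access_key="AWS_ACCESS_KEY"
                 secret_access_key="AWS_SECRET_KEY"
                 timeout="2000"
                 num_initial_members="2"/>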

    Read the article

  • Use personal Amazon S3 account to backup Gmail using Ubuntu

    - by lokheart
    As the title says. I have read online that Backupify offers shifting from their S3 to the user's personal S3; I have a Backupify account but I can't find this option, and besides, I'd rather not have my email processed by somebody else. Is it possible to use my own personal Amazon S3 account to back up Gmail? Preferably as an incremental backup, so I don't use too much bandwidth re-uploading redundant data to S3. I am using Ubuntu, so a script is OK for me. Thanks!
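
    A minimal sketch of the incremental-upload half (it assumes the mail has already been fetched locally, e.g. with getmail or offlineimap into ~/Mail; the bucket name is a placeholder):

        import os
        import boto3

        BUCKET = "my-gmail-backup"   # placeholder bucket name
        MAILDIR = os.path.expanduser("~/Mail")

        s3 = boto3.client("s3")

        # Collect the keys already in the bucket so they are not re-uploaded.
        existing = set()
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
            existing.update(obj["Key"] for obj in page.get("Contents", []))

        # Upload only messages S3 doesn't have yet.
        for root, _dirs, files in os.walk(MAILDIR):
            for name in files:
                path = os.path.join(root, name)
                key = os.path.relpath(path, MAILDIR)
                if key not in existing:
                    s3.upload_file(path, BUCKET, key)
                    print("uploaded", key)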

    Read the article

  • Upload database backup from mysql to Amazon S3 or Glacier without creating local file

    - by Rubem Azenha
    Is there a tool that makes it possible to back up a MySQL database to Amazon S3 or Amazon Glacier without having to create a local file with the database contents? Something like this:

        mysqldump -u root -ppass -h host --all-databases | magical-s3-tool s3-bucket backup-yyyy-mm-dd.sql

    This magical tool would use the piped data and transfer the backup directly to S3, without creating a local file.
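
    A minimal sketch of such a tool (names are placeholders), using boto3's managed upload, which reads the pipe in chunks and issues a multipart upload so the dump never touches the local disk:

        # usage: mysqldump ... | python s3pipe.py my-bucket backup-2014-01-01.sql
        import sys
        import boto3

        bucket, key = sys.argv[1], sys.argv[2]
        s3 = boto3.client("s3")
        # upload_fileobj accepts a non-seekable stream such as stdin.
        s3.upload_fileobj(sys.stdin.buffer, bucket, key)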

    Read the article

  • DNS with name.com and Amazon S3

    - by aledalgrande
    I have a website on a bucket in Amazon S3, and recently started to get emails from Google: "Googlebot can't access your site". When I go to Webmaster Tools and try a fetch, it in fact doesn't work. Also, people in locations different from mine sometimes reported they could not access the website. Out of curiosity, I tried from my terminal:

        $ host xxx
        xxx is an alias for xxx.s3-website-us-west-1.amazonaws.com.
        xxx.s3-website-us-west-1.amazonaws.com is an alias for s3-website-us-west-1.amazonaws.com.
        s3-website-us-west-1.amazonaws.com has address yyy.yyy.yyy.yyy

    And when I try with dig:

        $ dig xxx

        ; <<>> DiG 9.8.3-P1 <<>> xxx
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17860
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;xxx.    IN    A

        ;; ANSWER SECTION:
        xxx.    300    IN    CNAME    xxx.s3-website-us-west-1.amazonaws.com.
        xxx.s3-website-us-west-1.amazonaws.com.    60    IN    CNAME    s3-website-us-west-1.amazonaws.com.
        s3-website-us-west-1.amazonaws.com.    60    IN    A    yyy

        ;; Query time: 1514 msec
        ;; SERVER: 75.75.75.75#53(75.75.75.75)
        ;; WHEN: Fri Aug 22 12:32:13 2014
        ;; MSG SIZE  rcvd: 127

    It seems OK to me. Why would Google tell me there is a DNS error?

    UPDATE: Google also cannot fetch robots.txt, but I can fetch it from my browser.

    UPDATE 2: I have forwarding on the root to the www.* hostname:

        $ dig thenifty.me

        ; <<>> DiG 9.8.3-P1 <<>> thenifty.me
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49286
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;thenifty.me.    IN    A

        ;; AUTHORITY SECTION:
        thenifty.me.    300    IN    SOA    ns1hwy.name.com. support.name.com. 1 10800 3600 604800 300

        ;; Query time: 148 msec
        ;; SERVER: 75.75.75.75#53(75.75.75.75)
        ;; WHEN: Fri Aug 22 13:32:56 2014
        ;; MSG SIZE  rcvd: 88

    Read the article

  • Is it safe to use S3 over HTTP from EC2, as opposed to HTTPS?

    - by Marc
    I found that there is a fair deal of overhead when uploading a lot of small files to S3, and some of this overhead comes from SSL itself. How safe is it to talk to S3 without SSL when running in EC2? From the awesome comments below, here are some clarifications: this is NOT a question about HTTPS versus HTTP or the sensitivity of my data. I'm trying to get a feel for the networking and protocol particularities of EC2 and S3. For example:
    - Are we guaranteed to be passing through only the AWS network when communicating from EC2 to S3?
    - Can other AWS users (apart from staff) sniff my communications between EC2 and S3?
    - Is authentication on their API done on every call, and thus are credentials passed on every call? Or is there some kind of authenticated session?
    I am using the jets3t lib. Feedback from people with some AWS experience would be appreciated. Thanks, Marc
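
    On the last point: every S3 REST request carries its own signature in the Authorization header; there is no session, and the secret key itself never travels on the wire, only a derived HMAC. A sketch of the classic (pre-SigV4) computation, with placeholder credentials:

        import base64, hashlib, hmac
        from email.utils import formatdate

        ACCESS_KEY = "AKIA..."   # placeholder
        SECRET_KEY = "secret"    # placeholder
        date = formatdate(usegmt=True)

        # String-to-sign for a simple GET of /my-bucket/my-key (no MD5/type headers).
        string_to_sign = "GET\n\n\n%s\n/my-bucket/my-key" % date
        signature = base64.b64encode(
            hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha1).digest()
        ).decode()

        headers = {
            "Date": date,
            "Authorization": "AWS %s:%s" % (ACCESS_KEY, signature),
        }
        print(headers)  # sent with every request, even over plain HTTP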

    Read the article

  • Rebuild yum index on AWS S3

    - by Chucks
    I am trying to rebuild a yum repo on AWS S3 after adding new packages. Here are a few commands I am trying, but they are not helping:

        [root@chucks ~]$ createrepo --baseurl http://rpmcopy.xxxxx.com.s3-website-us-east-1.amazonaws.com /repodata/ --update
        Saving Primary metadata
        Saving file lists metadata
        Saving other metadata
        Generating sqlite DBs
        Sqlite DBs complete

    How do I give a path from S3? The /repodata/ path is not relevant, I believe. All my pkgs are under the bucket s3://rpmcopy.xxxxx.com/, and the repodata dir is under s3://rpmcopy.xxxxx.com/repodata.
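
    createrepo only understands local paths, so one hedged approach (placeholder names; stale repodata objects would still need pruning) is to keep a local mirror of the bucket's packages, regenerate the metadata there, and push the repodata/ directory back up:

        import os
        import subprocess
        import boto3

        BUCKET = "rpmcopy.example.com"   # placeholder for the real bucket
        LOCAL = "/var/tmp/repo-mirror"   # local working copy holding the .rpm files

        # Regenerate metadata against the local copy; --update reuses unchanged entries.
        subprocess.check_call(["createrepo", "--update", LOCAL])

        # Upload the refreshed repodata/ back to the bucket.
        s3 = boto3.client("s3")
        repodata = os.path.join(LOCAL, "repodata")
        for name in os.listdir(repodata):
            s3.upload_file(os.path.join(repodata, name), BUCKET, "repodata/" + name)
            print("uploaded repodata/" + name)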

    Read the article

  • Why is uploading to S3 so slow?

    - by Tom Marthenal
    I am using s3cmd to upload to S3:

        # s3cmd put 1gb.bin s3://my-bucket/1gb.bin
        1gb.bin -> s3://my-bucket/1gb.bin  [1 of 1]
         366706688 of 1073741824    34% in  371s   963.22 kB/s

    I am uploading from Linode, which has an outgoing bandwidth cap of 50 Mb/s according to support (roughly 6 MB/s). Why am I getting such slow upload speeds to S3, and how can I improve them?

    Update: Uploading the same file via SCP to an m1.medium EC2 instance (SCP from my Linode to the instance's EBS drive) gives about 44 Mb/s according to iftop (any compression done by the cipher is not a factor).

    Traceroute: Here's a traceroute to the server it's uploading to (according to tcpdump):

        # traceroute s3-1-w.amazonaws.com.
        traceroute to s3-1-w.amazonaws.com. (72.21.194.32), 30 hops max, 60 byte packets
         1  207.99.1.13 (207.99.1.13)  0.635 ms  0.743 ms  0.723 ms
         2  207.99.53.41 (207.99.53.41)  0.683 ms  0.865 ms  0.915 ms
         3  vlan801.tbr1.mmu.nac.net (209.123.10.9)  0.397 ms  0.541 ms  0.527 ms
         4  0.e1-1.tbr1.tl9.nac.net (209.123.10.102)  1.400 ms  1.481 ms  1.508 ms
         5  0.gi-0-0-0.pr1.tl9.nac.net (209.123.11.62)  1.602 ms  1.677 ms  1.699 ms
         6  equinix02-iad2.amazon.com (206.223.115.35)  9.393 ms  8.925 ms  8.900 ms
         7  72.21.220.41 (72.21.220.41)  32.610 ms  9.812 ms  9.789 ms
         8  72.21.222.141 (72.21.222.141)  9.519 ms  9.439 ms  9.443 ms
         9  72.21.218.3 (72.21.218.3)  10.245 ms  10.202 ms  10.154 ms
        10-30  * * *  (no responses)

    The latency looks reasonable, at least until the server stopped responding to ping requests.
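
    Single-stream TCP throughput over a long path is a common bottleneck here; one hedged improvement is a parallel multipart upload. A boto3 sketch (bucket and file names are placeholders):

        import boto3
        from boto3.s3.transfer import TransferConfig

        config = TransferConfig(
            multipart_threshold=8 * 1024 * 1024,  # split anything over 8 MB
            multipart_chunksize=8 * 1024 * 1024,
            max_concurrency=10,                   # ten parts in flight at once
        )
        boto3.client("s3").upload_file("1gb.bin", "my-bucket", "1gb.bin", Config=config)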

    Read the article

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to AWS and Puppet. So, coming to my question: can you distinguish Puppet nodes by AWS machine tags or CNAME domains? A little background on the plan:
    - have multiple clusters of machines: one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster
    - control configuration with Puppet; still pretty new to Puppet, but as a developer I like the idea of being able to version-control the configuration of servers
    - have autoscaling enabled on those clusters; obviously the main benefit of the cloud, which makes the much higher cost of any reasonable performance worth it (those Amazon machines are slower than my phone...)
    - deployment controlled by Capistrano; this makes things a lot easier
    So in AWS you get those super nasty public/private machine DNS names; no way you can identify machines on those. To ease the problem, it seems AWS wants you to tag everything, so I did. I found a script that makes a CNAME record for each machine with the tag "ShortName", thanks to the Route53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster'{} in Puppet. Anyone any clue how to achieve this? Thanks
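
    One hedged approach is to surface the tag as a custom fact. A sketch of a Facter external fact (an executable dropped into /etc/facter/facts.d/; names and paths here are assumptions): it reads the instance's own ID from instance metadata, looks up its "ShortName" tag via the EC2 API, and prints it as a key=value fact:

        #!/usr/bin/env python
        import boto3
        import requests

        # Ask the instance metadata service who we are.
        instance_id = requests.get(
            "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
        ).text

        # Look up our own "ShortName" tag.
        ec2 = boto3.client("ec2")
        tags = ec2.describe_tags(Filters=[
            {"Name": "resource-id", "Values": [instance_id]},
            {"Name": "key", "Values": ["ShortName"]},
        ])["Tags"]

        if tags:
            print("shortname=%s" % tags[0]["Value"])  # external facts emit key=value

    Manifests can then branch on $shortname instead of the EC2 hostname.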

    Read the article

  • S3 file uploading from Mac app through PHP?

    - by Ilija Tovilo
    I have asked this question before, but it was deleted due to too little information, so I'll try to be more concrete this time. I have an Objective-C Mac application which should allow users to upload files to S3 storage. The S3 storage is mine; the users don't have an Amazon account. Until now, the files were uploaded directly to the Amazon servers. After thinking some more about it, that wasn't really a great concept regarding security and flexibility, so I want to add a server in between: the user authenticates with my server, the server opens a session if the authentication was successful, and the file sharing can begin. Now my question. I want to upload the files to S3. One option would be to make a POST request and wait until my server has received the file. The problems here are that there would be a delay while the file is uploaded from my server to the S3 servers, and it would double the upload time. Best would be if I could validate the request and then redirect it, so the client uploads directly to the S3 storage. I'm not sure if this is possible somehow. Uploading directly to S3 doesn't seem to be very smart; after looking into other apps like Droplr and Dropmark, it looks like they don't do this (btw, I checked this using Little Snitch: they have their API on their own web server, and that's it). Could someone clear things up for me? EDIT: How should I transmit my files to S3? Is there a way to "forward" them, or do I have to upload them to my server and then upload them from there to S3? Like I said, other apps can do this efficiently and without communicating with S3 directly.
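
    The "validate, then redirect" idea is exactly what pre-signed URLs provide: after authenticating the client, the server returns a short-lived signed URL, and the client PUTs the file straight to S3 without the bytes ever passing through the server. A hedged sketch (the real server here would be PHP, but the same signing call exists in every AWS SDK; the bucket name is a placeholder):

        import boto3

        def grant_upload(user_is_authenticated, key):
            if not user_is_authenticated:
                return None
            s3 = boto3.client("s3")
            return s3.generate_presigned_url(
                "put_object",
                Params={"Bucket": "my-app-uploads", "Key": key},
                ExpiresIn=300,  # URL is valid for five minutes only
            )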

    Read the article

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, we're looking to serve a range of content using Amazon S3 as a store for the content and Tomcat to host the web application. The content is divided into free and paid-for content. We intend to authenticate the users when they access the web application running in Tomcat; based on their authentication we are able to tell if the user has access to paid-for content or simply free stuff. So I envision the flow of a request being something like this:
    - Authenticated request to Tomcat.
    - If the user is a "paid" user, display links to premium content.
    - Direct requests for paid content back through Tomcat, to prevent direct access to it by non-paying users.
    - Tomcat makes the request to S3 through a web cache, to keep our costs down.
    - Content is returned to the user.
    As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time, to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache if we publish fresh content to S3. So to confirm my proposal: Client Request - Tomcat - Web Cache - S3. To invalidate the cache, I was thinking of using something like PubSubHubbub, with the cache waiting for updates to the feed for content that it should invalidate. I'd appreciate some general feedback on this approach, as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.
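
    A hedged sketch of that gatekeeper step (Tomcat would do this in Java; the logic, shown here with placeholder names, is the same in any language): check entitlement, serve from a local disk cache when possible, and go to S3 (paying for the request) only on a miss:

        import os
        import boto3

        CACHE_DIR = "/var/cache/s3-content"   # placeholder paths and names
        BUCKET = "my-content-bucket"
        s3 = boto3.client("s3")

        def fetch(key, user_is_paid):
            if key.startswith("premium/") and not user_is_paid:
                return None                    # non-paying users never reach S3
            cached = os.path.join(CACHE_DIR, key.replace("/", "_"))
            if os.path.exists(cached):         # cache hit: no S3 request, no cost
                with open(cached, "rb") as f:
                    return f.read()
            body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
            os.makedirs(CACHE_DIR, exist_ok=True)
            with open(cached, "wb") as f:      # populate the cache for next time
                f.write(body)
            return body

    Invalidation then reduces to deleting the cached file when fresh content is published.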

    Read the article

  • Synchronize large objects to S3 efficiently

    - by emk
    I need to synchronize about 30GB of git repositories to S3. These repos may contain some very large pack files, on the rough order of 2GB. I know that S3 has recently added support for large objects, and has new APIs that allow the objects to be uploaded as several parallel chunks. Is there a good command-line tool for Linux that allows me to efficiently synchronize large objects with S3 in a fashion similar to s3sync?
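
    On the "new APIs" point: the multipart calls upload a large object as parallel chunks, and tools such as `aws s3 sync` wrap the same API. A minimal boto3 sketch with placeholder names:

        import os
        from concurrent.futures import ThreadPoolExecutor
        import boto3

        BUCKET, KEY, PATH = "my-repos", "repo.pack", "/repos/repo.pack"
        PART = 64 * 1024 * 1024  # 64 MB parts

        s3 = boto3.client("s3")
        upload = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)

        def send(part_number, offset):
            with open(PATH, "rb") as f:
                f.seek(offset)
                data = f.read(PART)
            r = s3.upload_part(Bucket=BUCKET, Key=KEY, UploadId=upload["UploadId"],
                               PartNumber=part_number, Body=data)
            return {"PartNumber": part_number, "ETag": r["ETag"]}

        offsets = list(range(0, os.path.getsize(PATH), PART))
        with ThreadPoolExecutor(max_workers=8) as pool:
            parts = list(pool.map(lambda i: send(i + 1, offsets[i]),
                                  range(len(offsets))))

        s3.complete_multipart_upload(Bucket=BUCKET, Key=KEY,
                                     UploadId=upload["UploadId"],
                                     MultipartUpload={"Parts": parts})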

    Read the article

  • Choosing gems to work with AWS

    - by Sergii Vozniuk
    Suppose a service written in RoR starts using AWS S3 to store some data. What is the best library for working with AWS S3? Currently the main two alternatives for me are:
    - RightScale AWS Ruby gems: http://github.com/rightscale/right_aws
    - AWS::S3: http://amazon.rubyforge.org/
    What are their main advantages and disadvantages? What if the service later needs to use other AWS services (like EC2)? What other gems do you use, and why? Thanks!

    Read the article

  • Selecting keys based on metadata, possible with Amazon S3?

    - by nbv4
    I'm sending files to my S3 bucket that are basically gzipped database dumps. The keys are a human-readable date ("2010-05-04.dump"), and along with that, I'm setting a metadata field to the UNIX time of the dump. I want to write a script that retrieves the latest dump from the bucket, that is to say, the key with the largest UNIX-time metadata value. Is this possible with Amazon S3, or is this not how S3 is meant to work? I'm using both the command-line tool aws and the Python library boto.
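
    S3 has no server-side query over metadata; user metadata only comes back when you GET or HEAD an individual key. So one hedged approach (bucket and metadata names are placeholders) is to list the keys and HEAD each one:

        import boto3

        BUCKET = "my-dumps"
        s3 = boto3.client("s3")

        def unix_time(key):
            head = s3.head_object(Bucket=BUCKET, Key=key)
            # user metadata comes back in the Metadata dict, with lowercased names
            return int(head["Metadata"].get("unix-time", 0))

        keys = [obj["Key"]
                for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET)
                for obj in page.get("Contents", [])]
        print("latest dump:", max(keys, key=unix_time))

    With date-named keys like these, plain max(keys) would give the same answer without any HEAD requests, since the names sort chronologically.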

    Read the article

  • Can you create a HIPAA compliant Amazon S3 Web Application?

    - by xkingpin
    I am facing some questions when trying to design an S3 application using ASP.NET MVC while staying HIPAA compliant. My initial plan was to require an SSL connection to my web server, encrypt the images on my server, then send them to S3 using my private keys. Here are my obvious concerns: you cannot have unencrypted images stored in any temporary file cache when the client views images within the browser. Even if I set up an ashx to generically handle the image in memory, couldn't this get stored in the cache? Saying the images will be encrypted because you will be connecting to my server via HTTPS still does not guarantee that all browsers will not cache the data. It's not possible to even consider the "query string" with expiration option, since data will be encrypted before being stored on disk at S3, and will again be decrypted at my server in memory. I think my only option would be to write/purchase some sort of ActiveX component that will not expose the image as a simple HTML image source, or write my app as a client-side WinForms application.

    Read the article

  • Manage Your Amazon S3 Account with CloudBerry Explorer

    - by Mysticgeek
    If you have an Amazon S3 account you're using to back up your data, you might want an easy way to manage it. CloudBerry Explorer is a free app that runs on your desktop and provides an easy way to manage your S3 account.

    Installation and Setup

    Just download and install the application with the defaults. When the application launches you'll be prompted to enter your username and email to get a registration key, or you can continue on by clicking Register later. Now you will want to set up your Amazon S3 account: click on File \ Amazon S3 Accounts, then double-click on the New Account icon. Next enter your Amazon account Access and Secret keys, select SSL if you want, then click the Test Connection button. Provided everything was entered correctly, you'll see the Connection Success screen; just close out of it.

    Browse and Manage Files

    Once you have your account set up through the Explorer, you can start viewing and managing your files on S3. The left pane shows your S3 buckets and stored files, while the right side shows your local computer. This allows you to manage the files in your Amazon S3 buckets directly from your desktop! It's very easy to use, and you can drag and drop files from your computer to the S3 account or vice versa. There is also the ability to transfer files between Amazon S3 accounts from within the Explorer. Go into Tools and Content Types and you can control the file types by adding, removing, or editing them. If you end up messing something up along the line, you can always select Reset to defaults and everything will be back to normal. There is a multiple-tabbed view so you can easily keep track of your different accounts and local machine. You can create new storage buckets directly in the Explorer, or delete buckets as well. Different actions can be accessed from the toolbars or by right-clicking and selecting from the context menu. Here we see a cool option that lets you move your data inside Amazon S3: it is faster and doesn't cost money, compared with moving the files to your computer first and then to another account. However, if you want data moved to your local machine first, you have that option as well.

    Not all features are available in the free version; when one isn't, you'll be prompted to purchase a license for the Pro version. We will have a comprehensive review of the Pro version in the near future. If you ever need help with CloudBerry Explorer, go to Tools \ Diagnostics. It will run a quick diagnostics check, and you can send the information to the CloudBerry team for assistance.

    Delete Files from Amazon S3

    To delete a file from your Amazon S3 account, simply highlight the files or folder you want to get rid of, then click Delete on the toolbar. You can also right-click the file and select Delete from the context menu. Click Yes to the confirmation dialog box, then you can watch the progress as your files are deleted in the bottom section of the Explorer.

    Conclusion

    The CloudBerry Explorer free version has several neat features that allow easy, basic control over your Amazon S3 account. The free version may be enough for basic users, but power users will want to upgrade to the Pro version, as it includes a lot more features. Using the free version allows you to get a feel for what CloudBerry Explorer has to offer, and is a good starting point. Keep in mind that Amazon S3 is introducing Reduced Redundancy Storage, which will lower the price of data stored: the price drops from $0.15 per GB to only $0.10 per GB.
    If you’re a Windows Home Server user, check out our review of CloudBerry Online Backup 1.5 for WHS.

    Download CloudBerry Explorer Free for Amazon S3

    Read the article

  • How do I use XML prefixes in C#?

    - by Andrew Mock
    EDIT: I have now published my app: http://pastebin.com/PYAxaTHU

    I was trying to make a console-based application that returns my temperature.

        using System;
        using System.Xml;

        namespace GetTemp {
            class Program {
                static void Main(string[] args) {
                    XmlDocument doc = new XmlDocument();
                    doc.LoadXml(downloadWebPage(
                        "http://www.andrewmock.com/uploads/example.xml"));
                    XmlNamespaceManager man = new XmlNamespaceManager(doc.NameTable);
                    man.AddNamespace("aws", "www.aws.com/aws");
                    XmlNode weather = doc.SelectSingleNode("aws:weather", man);
                    Console.WriteLine(weather.InnerText);
                    Console.ReadKey(false);
                }
            }
        }

    Here is the sample XML:

        <aws:weather xmlns:aws="http://www.aws.com/aws">
          <aws:api version="2.0"/>
          <aws:WebURL>http://weather.weatherbug.com/WA/Kenmore-weather.html?ZCode=Z5546&Units=0&stat=BOTHL</aws:WebURL>
          <aws:InputLocationURL>http://weather.weatherbug.com/WA/Kenmore-weather.html?ZCode=Z5546&Units=0</aws:InputLocationURL>
          <aws:station requestedID="BOTHL" id="BOTHL" name="Moorlands ES" city="Kenmore" state=" WA" zipcode="98028" country="USA" latitude="47.7383346557617" longitude="-122.230278015137"/>
          <aws:current-condition icon="http://deskwx.weatherbug.com/images/Forecast/icons/cond024.gif">Mostly Cloudy</aws:current-condition>
          <aws:temp units="&deg;F">40.2</aws:temp>
          <aws:rain-today units=""">0</aws:rain-today>
          <aws:wind-speed units="mph">0</aws:wind-speed>
          <aws:wind-direction>WNW</aws:wind-direction>
          <aws:gust-speed units="mph">5</aws:gust-speed>
          <aws:gust-direction>NW</aws:gust-direction>
        </aws:weather>

    I'm just not sure how to use XML prefixes correctly here. What is wrong with this?

    Read the article

  • How to read remote video on Amazon S3 using ffmpeg

    - by virtualize
    I need to create poster frames from videos hosted on Amazon S3 via ffmpeg. Is there a way to use the remote video file directly in the ffmpeg command line, like this?

        ffmpeg -i "http://bucket.s3.amazonaws.com/video.mp4" -ss 00:00:10 -vframes 1 -f image2 "image%03d.jpg"

    ffmpeg just returns:

        http://bucket.s3.amazonaws.com/video.mp4: I/O error occurred
        Usually that means that input file is truncated and/or corrupted.

    I also tried forcing ffmpeg to use the video's mp4 container for reading:

        ffmpeg -f mp4 -i "http://bucket.s3.amazonaws.com/video.mp4" ...

    But no luck. Wget-ting this video from S3 and processing it locally works fine, of course, as does reading files remotely from other 'standard' HTTP servers. So I know that ffmpeg supports remote file reading, but why not on S3?
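
    One hedged possibility: if the object isn't public, the bare URL returns an XML error document rather than the video, which ffmpeg then reports as a truncated input. A sketch that signs the URL first and shells out (bucket and key are placeholders):

        import subprocess
        import boto3

        url = boto3.client("s3").generate_presigned_url(
            "get_object",
            Params={"Bucket": "bucket", "Key": "video.mp4"},
            ExpiresIn=600,  # signed link is valid for ten minutes
        )
        subprocess.check_call([
            "ffmpeg", "-i", url,
            "-ss", "00:00:10",   # seek to ten seconds in
            "-vframes", "1",     # grab a single frame
            "-f", "image2", "image%03d.jpg",
        ])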

    Read the article

  • Why would you dual-run an app on Azure and AWS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/11/10/why-would-you-dual-run-an-app-on-azure-and-aws.aspx

    I had this question from a viewer of my Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, and thought I'd publish the response. So why would you dual-run your cloud app by hosting it on Azure and AWS? Sounds like a lot of extra development and management overhead. Well, the most compelling reasons are reliability and portability.

    In 2012 I was working for a client who was making a big investment in the cloud, and at the end of the year we published their first external API for business partners. It was hosted in Azure and used some really nice features to route back into existing on-premise services. We were able to publish a clean, simple API to partners, and hide away the underlying complexity of the internal services while still leveraging them to do all the work. Two days after we went live, we were hit by the Azure SSL certificate expiry outage, and our API was unavailable for the best part of 3 days. Fortunately we had planned a gradual roll-out to partners, so the impact was minimal, but we'd been intending to ramp up quickly, and if the outage had happened a week or two later we would have been in a very bad place. Not least because our app could only run on Azure: we couldn't package it up for another service without going back and reworking the code.

    More recently AWS had an issue with a networking device in one of their data centres which caused an outage that took the best part of a day to resolve. In both scenarios the SLAs are worthless, as you'll get back a small percentage of your cloud expenditure, which is going to be negligible compared to your costs in dealing with the outage. And if your app is built specifically for AWS or Azure, then if there's an extended outage you can't just deploy it onto a new set of kit from a different supplier. And the chances are pretty good there will be another extended outage, both for Microsoft and for Amazon. But the chances are small that it will happen to both at the same time.

    So my basic guidance has been: ignore the SLAs, go for better uptime by using two clouds. As soon as you need to scale beyond a single instance, start by scaling out to another cloud. Then scale out to different data centres in both clouds. Then you've got dual-cloud, quadruple-datacentre redundancy, so any more scaling you need can be left to the clouds to auto-scale themselves. By running in both clouds, you've made your app portable, so in the highly unlikely event that both AWS and Azure go down in multiple regions, you'll have a deployment package which will let you spin up a new stack on yet another cloud, without having to rework your solution.

    Read the article

  • Duplicity not writing to a pre-existing S3 bucket

    - by Saurabh Nanda
    I'm trying to back up a directory to a pre-existing Amazon S3 bucket using the following command:

        duplicity --no-encryption system/ s3+http://MY_BUCKET_NAME/backup

    However, I'm getting the following error consistently:

        S3CreateError: S3CreateError: 409 Conflict
        <?xml version="1.0" encoding="UTF-8"?>
        <Error><Code>BucketAlreadyOwnedByYou</Code><Message>Your previous request to create the named bucket succeeded and you already own it.</Message><BucketName>vacationlabs</BucketName><RequestId>3C1B8C49469E3374</RequestId><HostId>4dU1TKf3Td6R0yvG9MaLKCYvQfwaCpdM8FUcv53aIOh0LeJ6wtVHHduPSTqjDwt0</HostId></Error>

    The S3 bucket is empty and does NOT have the backup directory. The bucket is in the Singapore region.

    Read the article

  • Nginx proxy to s3 bucket gets 400 Invalid Argument

    - by elssar
    I have a Django app in which I serve media files through an nginx proxy to S3. The relevant Python code:

        response = HttpResponse()
        response['X-Accel-Redirect'] = '/s3_redirect/%s' % filefield.url.replace('http://', '')
        response['Content-Disposition'] = 'attachment; filename=%s' % filefield.name
        return response

    The nginx block for the internal redirect:

        location ~* ^/s3_redirect/(.*) {
            internal;
            set $full_url http://$1;
            proxy_pass $full_url;
        }

    And the request logged by S3:

        REST.GET.OBJECT <media file> "GET <media file>" 400 InvalidArgument 354 - 4 - "http://<referer>" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1" -

    I, for the life of me, can't figure out what's wrong. The URL sent to nginx by the app is valid; it works in the browser. And nginx is sending a request to S3.

    Read the article

  • Alias multiple DNS entries to one Amazon S3 Bucket

    - by Tristan
    I have a bucket on Amazon S3; let's call it "webstatic.mydomain.com". I have a DNS alias set up for that bucket:

        webstatic.mydomain.com CNAME -> web-static.mydomain.com.s3.amazonaws.com.

    This all works great. However, for some rather complicated reasons I now need webstatic.myOtherDomain.com to point to that same Amazon bucket, so:

        webstatic.myOtherDomain.com CNAME -> web-static.mydomain.com.s3.amazonaws.com.

    This fails, as the bucket is not named the same as the referring DNS entry. Can anyone tell me how to have two different DNS entries pointing to the same Amazon bucket?

    Read the article

  • Limited access to Amazon S3 buckets

    - by Tomas Markauskas
    Is it possible to somehow limit access to an Amazon S3 account? I don't really like the idea of distributing my secret access key to all of my applications that want to access just a single bucket on my account; if someone gains access to one of the applications, I could lose all my data stored on S3. One way I was thinking of doing it would be to create a second S3 account and give it access to just one bucket of the main account, but that's not really a great solution. Another nice thing for me would be to give the secondary account only write (but not modify/delete) and read access. That way I could upload backups or other files and be sure that they won't get lost.
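
    For what it's worth, IAM now addresses exactly this: create a per-application user whose policy covers only one bucket, and hand that application only the user's keys. A hedged boto3 sketch with placeholder names:

        import json
        import boto3

        iam = boto3.client("iam")
        iam.create_user(UserName="backup-app")
        iam.put_user_policy(
            UserName="backup-app",
            PolicyName="one-bucket-read-write",
            PolicyDocument=json.dumps({
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    # read and write, but no s3:DeleteObject, so the app cannot
                    # delete existing backups (overwrites remain possible unless
                    # bucket versioning is enabled)
                    "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
                    "Resource": ["arn:aws:s3:::my-backups",
                                 "arn:aws:s3:::my-backups/*"],
                }],
            }),
        )
        keys = iam.create_access_key(UserName="backup-app")["AccessKey"]
        print(keys["AccessKeyId"])  # only these keys go to the application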

    Read the article
