Search Results

Search found 800 results on 32 pages for 's3'.

Page 6/32

  • Amazon AWS S3 to Force Download Mp3 File instead of Stream It

    - by Calua
    Hi everybody, I'm using Amazon AWS S3 to store MP3 files and then let our site visitors download them from Amazon AWS. I use S3Fox to manage the files, and everything seemed to work fine until recently, when we got many complaints from visitors that the MP3 was streamed in the browser instead of triggering the browser's save dialog. I tried a few MP3s myself and noticed that for some of them the save dialog appears, while others are streamed in the browser. What can I do to force the MP3 file to be downloaded instead of streamed in the web browser? Any help would be much appreciated. Thanks
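
    The behaviour usually comes down to the Content-Disposition (and Content-Type) headers stored with each object, which would explain why some files prompt a download and others play. Below is a minimal boto3 sketch that rewrites an existing object in place with an attachment disposition; the bucket and key names are placeholders.

        import boto3

        s3 = boto3.client("s3")

        # Rewrite an existing object in place, adding headers that make browsers
        # show a save dialog instead of streaming. Bucket/key are placeholders.
        s3.copy_object(
            Bucket="my-audio-bucket",
            Key="songs/track01.mp3",
            CopySource={"Bucket": "my-audio-bucket", "Key": "songs/track01.mp3"},
            MetadataDirective="REPLACE",
            ContentType="audio/mpeg",
            ContentDisposition='attachment; filename="track01.mp3"',
        )

    New uploads can set the same two headers directly at put time instead.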

    Read the article

  • Backup AWS Dynamodb to S3

    - by Ali
    It has been suggested in the Amazon docs at http://aws.amazon.com/dynamodb/, among other places, that you can back up your DynamoDB tables using Elastic MapReduce. I have a general understanding of how this could work, but I couldn't find any guides or tutorials on it. So my question is: how can I automate DynamoDB backups (using EMR)? So far, I think I need to create a "streaming" job with a map function that reads the data from DynamoDB and a reduce function that writes it to S3, and I believe these could be written in Python (or Java or a few other languages). Any comments, clarifications, code samples, or corrections are appreciated.
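
    Not the EMR streaming job the docs describe, but for modest tables a plain scan-and-dump script is one way to automate a nightly backup; a rough boto3 sketch with made-up table and bucket names follows.

        import json
        import boto3

        dynamodb = boto3.resource("dynamodb")
        s3 = boto3.client("s3")

        table = dynamodb.Table("my-table")   # placeholder table name
        items, start_key = [], None

        # Scan the whole table, following pagination until every item is read.
        while True:
            kwargs = {"ExclusiveStartKey": start_key} if start_key else {}
            resp = table.scan(**kwargs)
            items.extend(resp["Items"])
            start_key = resp.get("LastEvaluatedKey")
            if not start_key:
                break

        # One JSON dump per run; default=str handles Decimal values in items.
        s3.put_object(
            Bucket="my-backup-bucket",
            Key="dynamodb/my-table.json",
            Body=json.dumps(items, default=str).encode("utf-8"),
        )

    A full scan consumes read capacity, which is part of why the docs push EMR for large tables.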

    Read the article

  • How to load secure S3 images into Flex with temporary URLs

    - by Yarin
    I have some secure images on S3 that I need to load into Flex. I was expecting to be able to do this using signed temporary URLs but can't get it working. I know the URLs I'm generating are correct, because they load fine in my browsers' address bar. Moreover, Flex has no problem loading my images with a non-signed url when they are public, but as soon as I try signing the urls all the images fail, whether public or not. I've tried image.source = signedURL, image.load(signedURL), etc. If I try loading the file with URLLoader/URLStream, it looks like I'm getting the data OK, but I'm not sure how to translate those results to an Image control. Is this just an issue with the Image control not being able to recognize signed urls? Do I have to load the image from a byte array? What would that look like?

    Read the article

  • Amazon S3 as secure backup without multiple invoices

    - by Tom Viner
    I'm storing copies of database backups on Amazon S3 using the Python Boto library. But I worry that if my web server were hacked, those backups could be deleted using the credentials I need to do the upload. I know you can grant permissions to another Amazon email address, so I can imagine doing that after an upload and then removing the original user's write access, but in this scenario I end up with two accounts and two sets of invoices to hand to accounts every month. Is there a solution that doesn't require a new Amazon account for each web server I run?
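
    One way to avoid a second AWS account is an IAM user per web server whose policy can upload but not delete; a sketch applying such a policy with boto3, where the user, policy, and bucket names are placeholders. Note that s3:PutObject still allows overwriting an existing key, so bucket versioning is a common companion to this setup.

        import json
        import boto3

        iam = boto3.client("iam")

        # The web server's credentials may add new backups but never delete them.
        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                "Resource": "arn:aws:s3:::my-backup-bucket/*",
            }],
        }

        iam.put_user_policy(
            UserName="webserver-backup",      # placeholder IAM user
            PolicyName="put-only-backups",
            PolicyDocument=json.dumps(policy),
        )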

    Read the article

  • Why wouldn't an S3 ACL "stick"?

    - by Chris Phillips
    We would like to set an ACL to allow access to one of our buckets with a partner account. We've tested the process on a test account and everything works fine. On our production account/buckets, however, we can set the ACL and see the update but as soon as we attempt to access the bucket from the other account we get a forbidden response. Afterwards, when we look at the ACL list for the bucket, the permission is gone. We've tried using both Amazon's new S3 tool in the AWS Management Console and CloudBerry Explorer and both tools exhibit exactly the same behavior. Using the same process to update an ACL from our test account works as expected ( the ACL update "sticks" ). What would cause the ACL to not "stick"? Does anyone have any ideas on how to fix/workaround the problem?

    Read the article

  • How to securely stream video from amazon S3

    - by JP.
    I have a couple of copyrighted videos in my S3 buckets. I want to stream them on my website, but at the same time I don't want users to rip the video out of the video player. I tried googling this but I'm still not confident, because I don't know the intricacies of the options available: 1) server-side encryption (None / AES-256); 2) a very interesting option under the Metadata tab, which shows a couple of keys and values - how can I use them to secure my video content?; 3) adding more metadata and related options?
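
    One common approach is to keep the objects private and hand the player a short-lived presigned URL; a minimal boto3 sketch follows, where the bucket, key, and 5-minute expiry are arbitrary. It will not stop a determined user from capturing the stream, but it does stop the raw URL from being shared or hot-linked.

        import boto3

        s3 = boto3.client("s3")

        # The video object stays private; only this signed URL can fetch it,
        # and the signature stops working after ExpiresIn seconds.
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "my-video-bucket", "Key": "videos/clip01.mp4"},
            ExpiresIn=300,  # 5 minutes
        )
        print(url)  # embed this URL in the player markup on each page view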

    Read the article

  • Using AWS S3 for photo storage

    - by Sam
    I'm going to be using S3 to store user-uploaded photos. Obviously, I won't be serving the image files to user agents without resizing them down. However, no single size will do, as some thumbnails will be smaller than other, larger previews. So I was thinking of making a standard set of dimensions, scaling from a lowest of 16x16 to a highest of around 1024x1024. Is this a good way to solve this problem? What if I need a new size later on? How would you solve this?
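
    A fixed ladder of sizes generated at upload time is one common answer; a rough sketch with Pillow and boto3 follows, where the sizes, bucket, and key layout are illustrative. If a new size is needed later, the stored originals can be re-processed in a one-off batch job.

        import io
        import boto3
        from PIL import Image

        s3 = boto3.client("s3")
        SIZES = [16, 64, 256, 1024]  # one square bounding box per variant

        def upload_variants(path, bucket, base_key):
            original = Image.open(path)
            for size in SIZES:
                img = original.convert("RGB")   # fresh copy, safe to save as JPEG
                img.thumbnail((size, size))     # resizes in place, keeps aspect ratio
                buf = io.BytesIO()
                img.save(buf, format="JPEG")
                s3.put_object(
                    Bucket=bucket,
                    Key=f"{base_key}/{size}.jpg",  # e.g. photos/123/256.jpg
                    Body=buf.getvalue(),
                    ContentType="image/jpeg",
                )

        upload_variants("photo.jpg", "my-photo-bucket", "photos/123")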

    Read the article

  • Uploading Files to AWS S3 from an Android App

    - by Abhishek Kaushik
    Edited 7th June, 2014. My Android app needs a feature where clients can upload their files. I want AWS S3 as my storage. Moreover, I don't want to use the SECRET_KEY and ACCESS_KEY_ID on the client side. What is the best way to do this? Can someone provide working code too? I read that I can ask AWS for a signed URL and then have the client upload directly to that URL. How do I achieve this?
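
    The flow the question describes (backend signs a URL, the app PUTs straight to S3) looks roughly like this on the server side with boto3; the bucket, key, and content type are placeholders, and the Android client then issues a plain HTTP PUT to the returned URL with no AWS keys on the device.

        import boto3

        s3 = boto3.client("s3")

        # Called by the backend when the app asks permission to upload a file.
        def make_upload_url(bucket, key):
            return s3.generate_presigned_url(
                "put_object",
                Params={
                    "Bucket": bucket,
                    "Key": key,
                    "ContentType": "application/octet-stream",
                },
                ExpiresIn=900,  # the app must start the PUT within 15 minutes
            )

        url = make_upload_url("client-uploads", "user-42/report.pdf")
        # Device side: PUT <url>, body = file bytes,
        # Content-Type: application/octet-stream (must match the signed value).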

    Read the article

  • Can a S3 mount be used as the document root for Apache?

    - by Hesse
    Has anyone been successful in having their DocumentRoot reside on an S3 mount (using s3fs)? I currently have a bucket mounted at /mnt/s3, and I can read and write files to it with no problem. In my httpd.conf I have DocumentRoot "/mnt/s3". When I restart Apache I get the error "DocumentRoot must be a directory". Has anyone tried something similar? My goal is to have shared storage so my nodes can scale easily and access the same document root. Thanks

    Read the article

  • Any need to make backup of data on Amazon S3?

    - by Chrille
    I'm hosting 200 GB of product images on S3 (this is my primary file host). Do I need to back that data up somewhere else, or is S3 safe as it is? I have been experimenting with mounting the S3 bucket on an EC2 instance and then making a nightly rsync backup. The problem is that it's about 3 million files, so it takes a while to generate the difference list rsync needs; the backup actually takes about 3 days to complete. Any ideas on how to do this better (if it's even necessary)?
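
    If a second copy is wanted, one option that avoids pulling 3 million files through rsync is a server-side copy into a separate backup bucket; a rough boto3 sketch follows, where the bucket names are placeholders and this naive loop copies every object rather than only the changed ones.

        import boto3

        s3 = boto3.client("s3")
        SRC, DST = "product-images", "product-images-backup"  # placeholder buckets

        # List every key in the source bucket and copy it server-side; the object
        # bytes never leave S3, so no download/upload bandwidth is used.
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=SRC):
            for obj in page.get("Contents", []):
                s3.copy_object(
                    Bucket=DST,
                    Key=obj["Key"],
                    CopySource={"Bucket": SRC, "Key": obj["Key"]},
                )

    Comparing ETags before copying would make the loop roughly incremental.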

    Read the article

  • How to configure S3 or DNS to handle incomplete name (sans www) for web site?

    - by user193116
    I have set up a bucket called "www.mydomainname.com" to host my website, and I have configured the CNAME so that "www.mydomainname.com" points to my endpoint http://www.mydomainname.com.s3-website-us-east-1.amazonaws.com/ It works: people who type the full URL "www.mydomainname.com" are able to see my index page. But most people are in the habit of typing an incomplete domain name -- they just type "mydomainname.com" and their browser fails to find my site. Is there a way to configure the CNAME or the S3 bucket so that typing "mydomainname.com" takes them to my S3 website? (I am using Network Solutions as my DNS provider.)
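
    One way to handle the bare domain is a second bucket, named exactly after it, that does nothing but redirect to the www host; a boto3 sketch using the question's domain as a placeholder. The registrar still has to point the bare domain at that bucket's S3 website endpoint.

        import boto3

        s3 = boto3.client("s3")

        # A bucket named exactly "mydomainname.com" whose only job is to redirect.
        s3.create_bucket(Bucket="mydomainname.com")
        s3.put_bucket_website(
            Bucket="mydomainname.com",
            WebsiteConfiguration={
                "RedirectAllRequestsTo": {
                    "HostName": "www.mydomainname.com",
                    "Protocol": "http",
                }
            },
        )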

    Read the article

  • C# code to GZip and upload a string to Amazon S3

    - by BigJoe714
    Hello. I currently use the following code to retrieve and decompress string data from Amazon S3 in C#:

        GetObjectRequest getObjectRequest = new GetObjectRequest().WithBucketName(bucketName).WithKey(key);
        using (S3Response getObjectResponse = client.GetObject(getObjectRequest))
        {
            using (Stream s = getObjectResponse.ResponseStream)
            {
                using (GZipStream gzipStream = new GZipStream(s, CompressionMode.Decompress))
                {
                    StreamReader Reader = new StreamReader(gzipStream, Encoding.Default);
                    string Html = Reader.ReadToEnd();
                    parseFile(Html);
                }
            }
        }

    I want to reverse this code so that I can compress and upload string data to S3 without it being written to disk. I tried the following, but I am getting an exception:

        using (AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(AWSAccessKeyID, AWSSecretAccessKeyID))
        {
            string awsPath = AWSS3PrefixPath + "/" + keyName + ".htm.gz";
            byte[] buffer = Encoding.UTF8.GetBytes(content);
            using (MemoryStream ms = new MemoryStream())
            {
                using (GZipStream zip = new GZipStream(ms, CompressionMode.Compress))
                {
                    zip.Write(buffer, 0, buffer.Length);
                    PutObjectRequest request = new PutObjectRequest();
                    request.InputStream = ms;
                    request.Key = awsPath;
                    request.BucketName = AWSS3BuckenName;
                    using (S3Response putResponse = client.PutObject(request))
                    {
                        //process response
                    }
                }
            }
        }

    The exception I am getting is: Cannot access a closed Stream. What am I doing wrong?
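
    For comparison, here is a minimal Python sketch of the same compress-in-memory-and-upload flow, with placeholder bucket and key names. The usual pitfalls with this pattern are uploading before the compressor has been closed and not rewinding or re-reading the buffer, so the sketch closes the gzip wrapper before the upload starts.

        import gzip
        import io
        import boto3

        s3 = boto3.client("s3")

        def upload_gzipped_string(content, bucket, key):
            buf = io.BytesIO()
            # Close the gzip wrapper (end of the with-block) before reading the
            # buffer, so the compressed stream is complete.
            with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
                gz.write(content.encode("utf-8"))
            s3.put_object(
                Bucket=bucket,
                Key=key,
                Body=buf.getvalue(),
                ContentType="text/html",
                ContentEncoding="gzip",
            )

        upload_gzipped_string("<html>...</html>", "my-bucket", "pages/example.htm.gz")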

    Read the article

  • Adding S3 metadata using jets3t

    - by billintx
    I'm just starting to use the jets3t API for S3, using version 0.7.2. I can't seem to save metadata with the S3Objects I'm creating. What am I doing wrong? The object is successfully saved when I putObject, but I don't see the metadata after I get the object.

        S3Service s3Service = new RestS3Service(awsCredentials);
        S3Bucket bucket = s3Service.getBucket(BUCKET_NAME);
        String key = "/1783c05a/p1";
        String data = "This is test data at key " + key;
        S3Object object = new S3Object(key, data);
        object.addMetadata("color", "green");
        for (Iterator iterator = object.getMetadataMap().keySet().iterator(); iterator.hasNext();) {
            String type = (String) iterator.next();
            System.out.println(type + "==" + object.getMetadataMap().get(type));
        }
        s3Service.putObject(bucket, object);

        S3Object retreivedObject = s3Service.getObject(bucket, key);
        for (Iterator iterator = object.getMetadataMap().keySet().iterator(); iterator.hasNext();) {
            String type = (String) iterator.next();
            System.out.println(type + "==" + object.getMetadataMap().get(type));
        }

    Here's the output before putObject:

        Content-Length==37
        color==green
        Content-MD5==AOdkk23V6k+rLEV03171UA==
        Content-Type==text/plain; charset=utf-8
        md5-hash==00e764936dd5ea4fab2c4574df5ef550

    Here's the output after putObject/getObject:

        Content-Length==37
        ETag=="00e764936dd5ea4fab2c4574df5ef550"
        request-id==9ED1633672C0BAE9
        Date==Wed Mar 24 09:51:44 CDT 2010
        Content-MD5==AOdkk23V6k+rLEV03171UA==
        Content-Type==text/plain; charset=utf-8
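
    For reference, the same metadata round trip as a minimal boto3 sketch, with a placeholder bucket name: the metadata travels as x-amz-meta-* headers at put time and is read back from the stored copy rather than from the local object that was uploaded.

        import boto3

        s3 = boto3.client("s3")

        # Store user metadata with the object; S3 sends it as x-amz-meta-* headers.
        s3.put_object(
            Bucket="my-bucket",
            Key="1783c05a/p1",
            Body=b"This is test data",
            Metadata={"color": "green"},
        )

        # Read it back from the stored copy (a HEAD request is enough).
        head = s3.head_object(Bucket="my-bucket", Key="1783c05a/p1")
        print(head["Metadata"])   # expected: {'color': 'green'}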

    Read the article

  • Using a "local" S3 emulation layer as a replacement for HDFS?

    - by user183394
    I have been testing out the most recent Cloudera CDH4 hadoop-conf-pseudo (i.e. MRv2 or YARN) on a notebook, which has 4 cores, 8 GB RAM, and an Intel X25-M G2 SSD, and which runs an S3 emulation layer my colleagues and I wrote in C++. The OS is Ubuntu 12.04 LTS 64-bit. So far so good. Following "Setting up hadoop to use S3 as a replacement for HDFS", I would like to do the same on my notebook. Nevertheless, I can't find where I can change jets3t.properties to set the endpoint to localhost. I downloaded hadoop-2.0.1-alpha.tar.gz and searched the source without finding a clue. There is a similar question on SO, "Using s3 as fs.default.name or HDFS?", but I want to use our own lightweight and fast S3 emulation layer instead of AWS S3 for our experiments. I would appreciate a hint as to how I can change the endpoint to a different hostname. Regards, --Zack
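
    Outside Hadoop, pointing an S3 client at a local emulation layer is just an endpoint override, shown below as a boto3 sketch with a placeholder port and placeholder credentials. (For the Hadoop side, jets3t normally reads jets3t.properties from the classpath, so placing one with s3service.s3-endpoint set into the Hadoop conf directory is worth trying; treat that as an assumption to verify.)

        import boto3

        # Any S3-compatible endpoint can be targeted by overriding endpoint_url;
        # the host, port, and credentials here are placeholders for the local layer.
        s3 = boto3.client(
            "s3",
            endpoint_url="http://localhost:8080",
            aws_access_key_id="test",
            aws_secret_access_key="test",
        )

        print(s3.list_buckets()["Buckets"])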

    Read the article

  • Should use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.) Here's the setup: a brand new dedicated server (8 GB RAM, some 140+ GB of disk, RAID 1 via a hardware controller, 15,000 RPM), running as a production web server (with MySQL on it too, not just serving web requests); not a personal desktop computer or similar. The OS is Ubuntu Server 64-bit 10.04 LTS. We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via the AWS console. We are now migrating to the dedicated server, and I want to be able to back up our data to Amazon S3, the main reason being the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server.

    There are two approaches I am thinking of: do a "simple" file-based backup with rsync, dumping the database and other files and uploading to Amazon via S3 API commands, or to an EC2 instance, or something; or do a file-system "freeze" (using XFS) with the usual EBS/EC2 snapshot tool, take a snapshot of part of the file system, and upload it to Amazon.

    Here's my question (or series of questions): Can I safely use XFS for the whole system, as the main and only format on the dedicated server? If not, is it safe to use EXT4, or should I use something else? Would it then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do at all? Any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume; my intention is to do it on a "real"/bare-metal dedicated server.

    Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose between two ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize the two options are more like this: with XFS: 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze XFS. With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze XFS; 4) somehow back up the LVM snapshot. Thanks a lot in advance; let me know if I need to clarify something.

    Read the article

  • How can I prevent double file uploading with Amazon S3?

    - by Tony
    I decided to use Amazon S3 for document storage for an app I am creating. One issue I run into is that while I need to upload the files to S3, I also need to create a document object in my app so my users can perform CRUD actions. One solution is to allow a double upload: a user uploads a document to the server my Rails app lives on, I validate it and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated; most out-of-the-box plugins would show the client that the file has finished uploading because it is on my server, but then there would be a decent delay while the file goes from my server to S3. It also uses bandwidth that does not seem necessary. The other solution I am thinking about is to upload the file directly to S3 with one AJAX request and, when that is successful, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I have to run some clean-up code in S3 if the validation fails. Both seem equally messy. Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation, with "cloud storage" being quite popular today. Maybe I am looking at this wrong.
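
    The direct-to-S3 variant is usually done with a presigned POST generated by the app server, so the browser uploads straight to the bucket and the follow-up AJAX request only has to record the key; a minimal boto3 sketch follows, where the bucket, key, and size limit are arbitrary.

        import boto3

        s3 = boto3.client("s3")

        # The app server generates this and hands url + fields to the browser,
        # which submits a multipart/form-data POST directly to S3.
        post = s3.generate_presigned_post(
            Bucket="my-document-bucket",
            Key="uploads/document-123.pdf",
            Conditions=[["content-length-range", 0, 10 * 1024 * 1024]],  # max 10 MB
            ExpiresIn=600,
        )
        print(post["url"])     # form action
        print(post["fields"])  # hidden inputs to include in the form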

    Read the article

  • Deleting multiple objects in a AWS S3 bucket with s3curl.pl?

    - by user183394
    I have been trying to use the AWS "official" command line tool s3curl.pl to test out the recently announced multi-object delete. Here is what I have done: First, I tested out the s3curl.pl with a set of credentials without a hitch: $ s3curl.pl --id=s3 -- http://testbucket-0.s3.amazonaws.com/|xmllint --format - % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 884 0 884 0 0 4399 0 --:--:-- --:--:-- --:--:-- 5703 <?xml version="1.0" encoding="UTF-8"?> <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> <Name>testbucket-0</Name> <Prefix/> <Marker/> <MaxKeys>1000</MaxKeys> <IsTruncated>false</IsTruncated> <Contents> <Key>file_1</Key> <LastModified>2012-03-22T17:08:17.000Z</LastModified> <ETag>"ee0e521a76524034aaa5b331842a8b4e"</ETag> <Size>400000</Size> <Owner> <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID> <DisplayName>zackp</DisplayName> </Owner> <StorageClass>STANDARD</StorageClass> </Contents> <Contents> <Key>file_2</Key> <LastModified>2012-03-22T17:08:19.000Z</LastModified> <ETag>"6b32cbf8219a59690a9f69ba6ff3f590"</ETag> <Size>600000</Size> <Owner> <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID> <DisplayName>zackp</DisplayName> </Owner> <StorageClass>STANDARD</StorageClass> </Contents> </ListBucketResult> Then, I following the s3curl.pl's usage instructions: s3curl.pl --help Usage /usr/local/bin/s3curl.pl --id friendly-name (or AWSAccessKeyId) [options] -- [curl-options] [URL] options: --key SecretAccessKey id/key are AWSAcessKeyId and Secret (unsafe) --contentType text/plain set content-type header --acl public-read use a 'canned' ACL (x-amz-acl header) --contentMd5 content_md5 add x-amz-content-md5 header --put <filename> PUT request (from the provided local file) --post [<filename>] POST request (optional local file) --copySrc bucket/key Copy from this source key --createBucket [<region>] create-bucket with optional location constraint --head HEAD request --debug enable debug logging common curl options: -H 'x-amz-acl: public-read' another way of using canned ACLs -v verbose logging Then, I tried the following, and always got back error. I would appreciated it very much if someone could point out where I made a mistake? $ s3curl.pl --id=s3 --post multi_delete.xml -- http://testbucket-0.s3.amazonaws.com/?delete <?xml version="1.0" encoding="UTF-8"?> <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><StringToSignBytes>50 4f 53 54 0a 0a 0a 54 68 75 2c 20 30 35 20 41 70 72 20 32 30 31 32 20 30 30 3a 35 30 3a 30 38 20 2b 30 30 30 30 0a 2f 7a 65 74 74 61 72 2d 74 2f 3f 64 65 6c 65 74 65</StringToSignBytes><RequestId>707FBE0EB4A571A8</RequestId><HostId>mP3ZwlPTcRqARQZd6gU4UvBrxGBNIVa0VVe5p0rqGmq5hM65RprwcG/qcXe+pmDT</HostId><SignatureProvided>edkNGuugiSFe0ku4eGzkh8kYgHw=</SignatureProvided><StringToSign>POST Thu, 05 Apr 2012 00:50:08 +0000 The file multi_delete.xml contains the following: cat multi_delete.xml <?xml version="1.0" encoding="UTF-8"?> <Delete> <Quiet>true</Quiet> <Object> <Key>file_1</Key> <VersionId> </VersionId>> </Object> <Object> <Key>file_2</Key> <VersionId> </VersionId> </Object> </Delete> Thanks for any help! --Zack
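
    For comparison, the SDKs take care of the signing and of the Content-MD5 header that the multi-object delete call requires; here is the same two-key delete as a boto3 sketch, reusing the bucket name from the example above.

        import boto3

        s3 = boto3.client("s3")

        # Equivalent of POST /?delete with the multi_delete.xml payload above;
        # the SDK computes the signature and the mandatory Content-MD5 header.
        resp = s3.delete_objects(
            Bucket="testbucket-0",
            Delete={
                "Objects": [{"Key": "file_1"}, {"Key": "file_2"}],
                "Quiet": True,
            },
        )
        print(resp.get("Errors", []))  # empty list means both deletes succeeded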

    Read the article

  • Cannot upload files bigger than 8GB to Amazon S3 by multi-part upload due to broken pipe

    - by spencerho
    I implemented S3 multi-part upload, both the high-level and the low-level version, based on the sample code from http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html When I uploaded files smaller than 4 GB, the upload processes completed without any problem. When I uploaded a file of 13 GB, the code started to throw IO exceptions (broken pipes), and after retries it still failed. Here is the way to repeat the scenario: take the 1.1.7.1 release; create a new bucket in the US Standard region; create a large EC2 instance as the client to upload the file; create a file of 13 GB in size on the EC2 instance; run the sample code from either the high-level or the low-level API S3 documentation page from the EC2 instance; test any one of the three part sizes: the default part size (5 MB), or set the part size to 100,000,000 or 200,000,000 bytes. So far the problem shows up consistently. I have attached a tcpdump file for you to compare; in there, the host on the S3 side kept resetting the socket.
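
    For comparison, the current SDK transfer helpers handle part sizing and retry failed parts individually instead of restarting the whole upload; a boto3 sketch follows, where the file name, bucket, and 64 MB part size are arbitrary. Larger parts also keep a 13 GB upload far below the 10,000-part limit.

        import boto3
        from boto3.s3.transfer import TransferConfig

        s3 = boto3.client("s3")

        # 64 MB parts, a few parts in flight at once; a failed part can be
        # retried on its own rather than redoing the full 13 GB transfer.
        config = TransferConfig(
            multipart_threshold=64 * 1024 * 1024,
            multipart_chunksize=64 * 1024 * 1024,
            max_concurrency=4,
        )

        s3.upload_file("bigfile.bin", "my-bucket", "backups/bigfile.bin", Config=config)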

    Read the article

  • Amazon S3 File Uploads

    - by Abdul Latif
    I can upload files from a form using POST, but I am trying to find out how to add extra fields to the form, e.g. file description, type, etc. Also, can I upload more than one file at once? I know the documentation says you can't do that with POST, but are there any workarounds? Thanks in advance

    Read the article

  • Amazon S3 collisions with heroku and paperclip

    - by poseid
    I have an app on my localhost for development and an app for testing on Heroku. Image upload on localhost with Paperclip always works. However, when I try the same image upload on my Heroku app, the app hangs... and the upload seems to go on forever. I suspect that there is a collision going on. What is needed to get the uploads working? Or do I need to use different buckets for each environment?

    Read the article

  • Special chars in Amazon S3 keys?

    - by Martin
    Is it possible to have special characters like åäö in the key? If I urlencode the key before storing, it works, but I can't really find a way to access the object. If I write åäö in the URL I get Access Denied (as I do if the object is not found). If I urlencode the URL I paste into the browser, I get "InvalidURI: Couldn't parse the specified URI". Is there some way to do this?
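
    Keys themselves are Unicode, so åäö is legal; the trouble is usually the hand-built URL rather than the key. Below is a boto3 sketch, with a placeholder bucket name, that stores such a key and lets the SDK produce a correctly escaped, signed URL for it.

        import boto3

        s3 = boto3.client("s3")

        # The key itself can contain non-ASCII characters.
        s3.put_object(Bucket="my-bucket", Key="mappar/åäö.txt", Body=b"hej")

        # Let the SDK build the URL; it percent-encodes the key and signs the
        # encoded form, so the encoding and the signature always agree.
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "my-bucket", "Key": "mappar/åäö.txt"},
            ExpiresIn=3600,
        )
        print(url)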

    Read the article

  • Automated incremental backups from Plesk on Centos to Amazon S3

    - by ChrisS
    Hi, I've done a fair bit of research on this via Google and there seem to be quite a few ways of possibly doing this. I'm looking to incrementally back up new and updated files in two directories on my Plesk-run CentOS 5.2 server: /backups and /var/www/vhosts (preferably only httpdocs within each vhost). Has anyone got some good feedback from using the various solutions? There seem to be various Java, Perl and Ruby based solutions out there. Many thanks, Chris

    Read the article

  • Securing ClickOnce hosted with Amazon S3 Storage

    - by saifkhan
    Well, since my post on hosting ClickOnce with Amazon S3 Storage, I've received quite a few emails asking how to secure the deployment. At the time of this post, I regret to say that there is no way to secure your ClickOnce deployment hosted with Amazon S3. The S3 storage is secured by ACL, meaning that a username and password have to be provided before access. Amazon CloudFront, which sits on top of S3, allows you to apply security settings to your CloudFront distribution by applying an encryption to the URL or restricting by IP. The problem with CloudFront is that the encryption of the URL is mandatory, and ClickOnce does not provide a way to pass the "Amazon Public Key" to the CloudFront URL (you probably could if you started editing the XML and HTML files ClickOnce generates, but that defeats the purpose of ClickOnce altogether). What would be nice is if Amazon would allow users to restrict by IP addresses or IP blocks. I sent them an email and received a response that this is something they are looking into... I won't hold my breath, though. As an alternative, I suggest you look at Rackspace Cloud hosting (http://www.rackspacecloud.com); they have very competitive pricing and recently started hosting Windows virtual servers. What you can do is rent a virtual server and set up IIS to host your ClickOnce applications. You can then use the IIS security settings to restrict which IPs/blocks can access your ClickOnce payloads. Note: you don't really need Windows Server to host ClickOnce; any web server will do. If you are familiar with Linux, you can run that VM with Rackspace for half the price of Windows. I hope you found this information helpful.

    Read the article
