Search Results

Search found 800 results on 32 pages for 's3'.

Page 4 of 32

  • Customizing the look of S3 expiring urls

    - by Gregoriy
    Creating expiring links for Amazon S3 buckets: can I somehow rename the AWSAccessKeyId parameter to something else (customize it, so to speak), so that the use of Amazon is not disclosed in my web applications? For now, a link looks like this: http://video.mysite.com/T154456.flv?AWSAccessKeyId=1ESOMESPECIALIDJJAKJ6RA82&Expires=1241372284&Signature=ddfr%2BlkoSEPAL%2BGbMwlMzj6q%2BCY%3D Or, so as not to open another question: what other (tricky?) ways are there to expire S3 links without using a proxy?

    Read the article

  • Amazon S3 enforcing access control

    - by KandadaBoggu
    I have several PDF files stored in Amazon S3. Each file is associated with a user, and only the file owner should be able to access it. I have enforced this in my download page, but the actual PDF link points to an Amazon S3 URL, which is accessible to anybody. How do I enforce the access-control rules for this URL (without making my server a proxy for all the PDF download links)?
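
    A common pattern here is to keep the objects private and have the download page hand out short-lived pre-signed URLs instead of the raw S3 link. A rough illustration in Python with boto3 (an assumption about the stack; the bucket name and ownership check are placeholders):

      import boto3

      s3 = boto3.client("s3")

      def pdf_url_for(user, key):
          # Application-level check: only the owner may request this file.
          if not user_owns_file(user, key):   # hypothetical ownership check
              raise PermissionError("not your file")
          # The object itself stays private; the URL below only works for 5 minutes.
          return s3.generate_presigned_url(
              "get_object",
              Params={"Bucket": "my-pdf-bucket", "Key": key},
              ExpiresIn=300,
          )

    The download page then redirects to the generated URL, so files are never proxied through the application server, and a leaked link stops working once it expires.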

    Read the article

  • Getting the download count of a specific S3 object

    - by phidah
    I've got a number of S3 objects that are available to my customers. Since I'd like to bill my customers by usage, I wondered if there is any smart way to get the number of times a given file has been downloaded? Alternatively, I suppose I could parse the log files provided by S3, but with 10M+ fetches per customer this might be a bit of a task. Any ideas?
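
    S3 itself doesn't expose per-object download counters, so the usual route is the one mentioned: enable server access logging on the bucket and aggregate the delivered log files. A very rough sketch in Python, assuming the standard space-delimited S3 server access log format and a hypothetical local directory of fetched log files:

      import glob
      import re
      from collections import Counter

      # Each object download appears with the operation field "REST.GET.OBJECT"
      # followed by the key; this regex is a simplification of the full format.
      GET_LINE = re.compile(r"\sREST\.GET\.OBJECT\s(\S+)\s")

      counts = Counter()
      for path in glob.glob("/var/log/s3-access/*"):   # hypothetical log location
          with open(path) as f:
              for line in f:
                  m = GET_LINE.search(line)
                  if m:
                      counts[m.group(1)] += 1

      for key, n in counts.most_common(20):
          print(n, key)

    With 10M+ fetches per customer it is probably worth aggregating the logs incrementally (say, nightly) rather than reparsing everything on demand.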

    Read the article

  • Upload File Directly to S3 with Progress Bar

    - by John Boker
    Related to this question, http://stackoverflow.com/questions/117810/upload-files-directly-to-amazon-s3-from-asp-net-application, is there any way to do this and have a progress bar? ---- EDIT ---- Two days later and still no luck with a direct way. I found one thing that looks promising, but not free: http://www.flajaxian.com/ It uses Flash to upload directly to S3 with a progress bar.

    Read the article

  • Amazon S3 permissions

    - by Joe
    Trying to understand S3... How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so that only that user has access to a given file? It seems like query string authentication requires an expiration date, and that won't work for me. Is there another way to do this?

    Read the article

  • Automated incremental backups from Plesk on Centos to Amazon S3

    - by ChrisS
    Hi, I've done a fair bit of research on this via Google, and there seem to be quite a few possible ways of doing it. I'm looking to incrementally back up new and updated files in two directories on my Plesk-run CentOS 5.2 server: /backups and /var/www/vhosts (preferably only httpdocs within each vhost). Has anyone had good results with any of the various solutions? There seem to be several Java-, Perl- and Ruby-based tools out there. Many thanks, Chris
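
    If rolling your own is acceptable, one low-dependency approach is to walk the directories and upload only files whose local mtime is newer than the copy already in S3. A sketch in Python with boto3 (an assumption, since the question leaves the tooling open; bucket name and prefixes are placeholders):

      import os
      from datetime import datetime, timezone

      import boto3
      from botocore.exceptions import ClientError

      s3 = boto3.client("s3")
      BUCKET = "my-plesk-backups"   # placeholder bucket name

      def sync_dir(root, prefix):
          for dirpath, _, files in os.walk(root):
              for name in files:
                  local = os.path.join(dirpath, name)
                  key = prefix + os.path.relpath(local, root)
                  mtime = datetime.fromtimestamp(os.path.getmtime(local), tz=timezone.utc)
                  try:
                      head = s3.head_object(Bucket=BUCKET, Key=key)
                      if head["LastModified"] >= mtime:
                          continue          # remote copy is already current
                  except ClientError:
                      pass                  # key doesn't exist yet
                  s3.upload_file(local, BUCKET, key)

      sync_dir("/backups", "backups/")
      sync_dir("/var/www/vhosts", "vhosts/")

    Run from cron this gives a crude incremental sync; established tools such as duplicity or s3cmd sync cover the same ground (plus deletions and encryption) with less code to maintain.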

    Read the article

  • s3fs changing s3 permissions?

    - by magd1
    My developer believes that s3fs is changing my bucket's permissions. Is this possible? I want my bucket to be public, but it keeps reverting to private. Here's my fstab entry: s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000 0 0 My developer mentioned the "-o default_acl" option (default: "private"). The documentation refers to "canned ACLs", but I don't understand what these are.
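
    "Canned ACLs" are just Amazon's predefined permission sets (private, public-read, public-read-write, authenticated-read, and so on). s3fs writes objects with the private ACL unless told otherwise, which would explain files appearing to "revert". If everything under the mount should be publicly readable, the option can be added to the fstab entry, for example (a sketch based on the line above; verify the option name against your s3fs version):

      s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000,default_acl=public-read 0 0

    Note this only affects objects written through s3fs; it does not retroactively change the ACL on objects uploaded some other way, and it does not touch the bucket's own ACL.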

    Read the article

  • Configure non-destructive Amazon S3 bucket policy

    - by Assaf
    There's a bucket into which some users may write their data for backup purposes. They use s3cmd to put new files into their bucket. I'd like to enforce a non-destruction policy on these buckets - meaning it should be impossible for users to destroy data; they should only be able to add it. How can I create a bucket policy that only lets a certain user put a file if it doesn't already exist, and doesn't let him do anything else with the bucket?
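
    A bucket policy can flatly deny the destructive calls; what it cannot easily express is "only if the key does not already exist", because s3:PutObject on an existing key is an overwrite - enabling bucket versioning is the usual safety net for that part. A rough sketch in Python with boto3 (placeholder bucket name and user ARN):

      import json
      import boto3

      s3 = boto3.client("s3")

      policy = {
          "Version": "2012-10-17",
          "Statement": [
              {   # let the backup user list and add objects
                  "Effect": "Allow",
                  "Principal": {"AWS": "arn:aws:iam::123456789012:user/backup-user"},
                  "Action": ["s3:PutObject", "s3:ListBucket"],
                  "Resource": ["arn:aws:s3:::backup-bucket",
                               "arn:aws:s3:::backup-bucket/*"],
              },
              {   # ...but never allow deletes
                  "Effect": "Deny",
                  "Principal": {"AWS": "arn:aws:iam::123456789012:user/backup-user"},
                  "Action": ["s3:DeleteObject", "s3:DeleteBucket"],
                  "Resource": ["arn:aws:s3:::backup-bucket",
                               "arn:aws:s3:::backup-bucket/*"],
              },
          ],
      }

      s3.put_bucket_policy(Bucket="backup-bucket", Policy=json.dumps(policy))

    With versioning turned on as well, an overwrite or delete only hides the previous version instead of destroying it, which gets much closer to the stated goal.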

    Read the article

  • Amazon S3 Iterating Through Multi-Page Results. (withMarker)

    - by Jitu
    Trying to iterate through an Amazon S3 bucket that has around 5000+ keys, using sample code based on the link provided in the Amazon Developer Guide: http://docs.amazonwebservices.com/AmazonS3/latest/dev/ListingObjectKeysUsingNetSDK.html The issue is that iteration fails when the NextMarker passed in is longer than 128 characters, which seems unusual, as withMarker accepts a string parameter and there is no documented limit on withMarker. request.Marker = response.NextMarker; Has anyone faced a similar issue? Thanks in advance.
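
    Since there is no documented length limit on markers, it may be worth confirming that the marker string isn't being truncated or re-encoded somewhere before it is reassigned. For comparison, this is how the same paging loop looks in Python with boto3 (an assumption, not the poster's stack), where the paginator passes the continuation marker between pages automatically:

      import boto3

      s3 = boto3.client("s3")
      paginator = s3.get_paginator("list_objects_v2")

      total = 0
      # Each page holds up to 1000 keys; the paginator resubmits the
      # continuation token until the listing is exhausted.
      for page in paginator.paginate(Bucket="my-bucket"):
          for obj in page.get("Contents", []):
              total += 1
              # process obj["Key"], obj["Size"], ...

      print("keys listed:", total)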

    Read the article

  • Uploading to S3 using Curl

    - by Carl Crawley
    Hi All, I'm currently using cURL to upload a file from my server to S3, using AJAX to call the script. So I have the following:

      $fullfilepath = '/server/sitepath/files/' . $_POST['file'];
      $upload_url = 'https://'.$_POST['buckets'].'.s3.amazonaws.com/';
      $params = array(
          'key'=>$_POST['key'],
          'AWSAccessKeyId'=>$_POST['AWSAccessKeyId'],
          'acl'=>$_POST['acl'],
          'success_action_status'=>$_POST['success_action_status'],
          'policy'=>$_POST['policy'],
          'signature'=>$_POST['signature'],
          'Content-Type'=>$_POST['Content-Type'],
          'file'=>"@$fullfilepath"
      );
      $ch = curl_init();
      curl_setopt($ch, CURLOPT_VERBOSE, 1);
      curl_setopt($ch, CURLOPT_URL, $upload_url);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
      curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
      $response = curl_exec($ch);
      curl_close($ch);
      echo $response;

    However, when it posts I get the following S3 error, which I don't understand because I'm not passing JSON to it:

      <?xml version="1.0" encoding="UTF-8"?>
      <Error><Code>InvalidPolicyDocument</Code><Message>Invalid Policy: Invalid JSON.</Message><RequestId>B29469C6151BE0E8</RequestId><HostId>BFPk6W2kt1b6hTtx0mEq6dWdN/IhO0gNR5bct//7LAOwJxm1C3PrxS4RPv1blzJ8</HostId></Error>

    I've googled it for the last hour or so and can't seem to figure it out. If I change the order of the array fields, I get a different error - I believe the order of the posted fields matters somehow. Any help would be much appreciated! C
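
    The InvalidPolicyDocument / "Invalid JSON" response suggests the policy field reaching S3 does not base64-decode to a valid JSON policy document, so that value is the first thing to check. For reference, a minimal sketch of how the policy and signature are typically produced for this kind of form-style POST upload, written here in Python against the classic signature V2 scheme (placeholder bucket, prefix and credentials):

      import base64
      import hashlib
      import hmac
      import json
      from datetime import datetime, timedelta, timezone

      SECRET_KEY = "YOUR-SECRET-KEY"   # placeholder

      policy = {
          "expiration": (datetime.now(timezone.utc)
                         + timedelta(hours=1)).strftime("%Y-%m-%dT%H:%M:%SZ"),
          "conditions": [
              {"bucket": "my-upload-bucket"},
              ["starts-with", "$key", "uploads/"],
              {"acl": "private"},
              {"success_action_status": "201"},
              ["starts-with", "$Content-Type", ""],
          ],
      }

      # The posted 'policy' field must be the base64 of the JSON document,
      # and 'signature' is base64(HMAC-SHA1(secret key, base64 policy)).
      policy_b64 = base64.b64encode(json.dumps(policy).encode("utf-8"))
      signature = base64.b64encode(
          hmac.new(SECRET_KEY.encode("utf-8"), policy_b64, hashlib.sha1).digest()
      )

      print(policy_b64.decode(), signature.decode())

    Every field the form posts (acl, Content-Type, success_action_status, ...) has to be covered by a condition in the policy, so base64-decoding the policy value the page actually sends and checking that it is well-formed JSON is a good first step.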

    Read the article

  • How to get Amazon s3 PHP SDK working?

    - by JakeRow123
    I'm trying to set up S3 for the first time and trying to run the sample file that comes with the PHP SDK, which creates a bucket and attempts to upload some demo files to it. But this is the error I am getting: The difference between the request time and the current time is too large. I read in another question on SO that this is because Amazon determines a valid request by comparing the times between the server and the client, and that the two must be within a 15-minute span of one another. Now here is the problem. My laptop's time is 12:30 AM, June 8, 2012 at the moment. On my server I created a file called servertime.php and placed this code in that file: <?php print strftime('%c'); ?> and the output is: Fri Jun 8 00:31:22 2012 It looks like the day is correct, but I don't know what to make of 00:31:22. In any case, how is it possible to always make sure the time between the client and server is within a 15-minute window of one another? What if I have a user in China who wishes to upload a file on my site, which uses S3 for the CDN? Then the time difference would be over a day. How can I make sure all my users' times are within 15 minutes of my server time? What if the user is in the U.S. but the time on their machine is misconfigured? Basically, how do I get S3 bucket creation and upload to work?

    Read the article

  • Using .htaccess to serve files from Amazon S3 CloudFront

    - by Adrian A.
    My ideal setup would be to take a current client's site and upload a .htaccess file with a regex inside that matches the URI; if it finds a certain file extension, it would use the same path but with an altered domain. i.e. normal paths: http://www.domain.com/something/images/someimage.jpeg http://www.domain.com/assets/js/jquery.js translated by .htaccess, the above would become: http://mycdn.other.com/something/images/someimage.jpeg http://mycdn.other.com/assets/js/jquery.js I googled this for hours in a row, no luck. Again, this is for actually making use of Amazon's CloudFront. S3 is already mounted to the website for backups and file storage using s3fs, but this doesn't solve the issue, since that uses S3 directly, not CloudFront.
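
    As a sketch of the kind of rule that does this (assuming mod_rewrite is enabled, and using the mycdn.other.com host from the question purely as a placeholder), matching on the file extension and sending the same path to the CDN domain:

      RewriteEngine On
      # Redirect static assets to the CDN, keeping the original path intact.
      RewriteRule ^(.+\.(?:jpe?g|png|gif|css|js))$ http://mycdn.other.com/$1 [R=301,L]

    A rule like this is an external redirect, so the browser still makes two requests per asset; to have pages reference the CDN directly, the URLs usually need to be rewritten in the application or templates instead.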

    Read the article

  • Problem with copying local data onto HDFS on a Hadoop cluster using Amazon EC2/S3

    - by Deepak Konidena
    Hi, I have set up a Hadoop cluster containing 5 nodes on Amazon EC2. Now, when I log into the master node and submit the following command: bin/hadoop jar <program>.jar <arg1> <arg2> <path/to/input/file/on/S3> it throws the following errors (not at the same time). The first error is thrown when I don't replace the slashes with '%2F', and the second when I do replace them: 1) java.lang.IllegalArgumentException: Invalid hostname in URI S3://<ID>:<SECRETKEY>@<BUCKET>/<path-to-inputfile> 2) org.apache.hadoop.fs.S3.S3Exception: org.jets3t.service.S3ServiceException: S3 PUT failed for '/' XML Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method. Notes: 1) When I run jps to see what tasks are running on the master, it just shows 1116 NameNode 1699 Jps 1180 JobTracker, leaving out DataNode and TaskTracker. 2) My secret key contains two forward slashes ('/'), and I replace them with '%2F' in the S3 URI. PS: The program runs fine on EC2 when run on a single node. It's only when I launch a cluster that I run into issues with copying data to/from S3 from/to HDFS. And what does distcp do? Do I need to distribute the data even after I copy it from S3 to HDFS? (I thought HDFS took care of that internally.) If you could direct me to a link that explains running MapReduce programs on a Hadoop cluster using Amazon EC2/S3, that would be great. Regards, Deepak.
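
    A frequently suggested workaround for the signature error (stated here as an assumption, not verified against this exact setup) is to keep the credentials out of the URI entirely, since slashes in the secret key tend to break the URI parsing even when escaped: either regenerate the key pair until the secret contains no '/', or put the keys in conf/core-site.xml and use a plain s3://<BUCKET>/<path> URI, along the lines of:

      <property>
        <name>fs.s3.awsAccessKeyId</name>
        <value>YOUR_ACCESS_KEY</value>
      </property>
      <property>
        <name>fs.s3.awsSecretAccessKey</name>
        <value>YOUR_SECRET_KEY</value>
      </property>

    (For the s3n:// native filesystem the property names are fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey.) As for distcp, it is simply a MapReduce-based parallel copy tool; once data has been copied into HDFS there is no separate "distribution" step needed, since HDFS handles block placement itself.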

    Read the article

  • aws-s3 can't find xml-simple, but in gem list

    - by Dan Donaldson
    I'm transitioning to Heroku, and need to have aws-s3 connections to deal with a variety of graphics. I've installed the aws-s3 gem, but one of its dependencies is not being found: xml-simple. My belief is that this is a standard part of RoR, and it is in the gem list. When I deploy to Heroku, all is fine, but on my development server it isn't being found when the code uses it to check the existence of a graphic. It works fine from the console, using s3sh. I'm not quite sure why this is - what do I need to check? Using OS X 10.6, on a 64-bit machine - can this be part of it?

    Read the article

  • Can I rely on S3 to keep my data secure?

    - by Jamie Hale
    I want to back up sensitive personal data to S3 via an rsync-style interface. I'm currently using s3cmd - a great tool - but it doesn't yet support encrypted syncs. This means that while my data is encrypted (via SSL) during transfer, it's stored on their end unencrypted. I want to know if this is a big deal. The S3 FAQ says "Amazon S3 uses proven cryptographic methods to authenticate users... If you would like extra security, there is no restriction on encrypting your data before storing it in Amazon S3." Why would I like extra security? Is there some way my buckets could be opened to prying eyes without my knowing? Or are they just trying to save you when you accidentally change your ACLs and make your buckets world-readable?

    Read the article

  • s3 / php script looping (strace)

    - by Neil
    Is anyone using the following PHP S3 client library? http://undesigned.org.za/2007/10/22/amazon-s3-php-class It had been working fine for me for a few days, but I've just noticed that a script I have in place now ends up hanging. Running this through strace, I see something like:

      poll([{fd=4, events=POLLOUT}], 1, 1000) = 1 ([{fd=4, revents=POLLHUP}])
      poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLHUP}])
      poll([{fd=4, events=POLLOUT}], 1, 1000) = 1 ([{fd=4, revents=POLLHUP}])
      poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLHUP}])
      poll([{fd=4, events=POLLOUT}], 1, 1000) = 1 ([{fd=4, revents=POLLHUP}])

    Looking at what's running, I see that it's not even getting to the point where it makes the curl call. Any thoughts? Thanks!

    Read the article

  • Using amazon s3 with cloudfront as a CDN

    - by weezybizzle
    I would like to serve user-uploaded content (pictures, videos, and other files) from a CDN. Using Amazon S3 with CloudFront seems like a reasonable way to go. My only question is about the speed of the file system. My plan was to host user media at URIs of the following form: cdn.mycompany.com/u/u/i/d/uuid.jpg I don't have any prior experience with S3 or CDNs, and I was just wondering whether this strategy would scale well to handle a large amount of user-uploaded content, and whether there is a more conventional way to accomplish this.

    Read the article

  • Amazon S3 - HTTPS/SSL - Is it possible?

    - by Kerry
    I saw a few other questions regarding this without any real answers or information (or so it appeared). I have an image here: http://furniture.retailcatalog.us/products/2061/6262u9665.jpg which is redirecting to: http://furniture.retailcatalog.us.s3.amazonaws.com/products/2061/6262u9665.jpg I need it to be: https://furniture.retailcatalog.us/products/2061/6262u9665.jpg So I installed a wildcard SSL certificate on retailcatalog.us (we have other subdomains), but it wasn't working. I then checked https://furniture.retailcatalog.us.s3.amazonaws.com/products/2061/6262u9665.jpg and that wasn't working either. How do I make this work?

    Read the article

  • Amazon S3 copy request timeouts

    - by jean27
    We're uploading files to a temporary folder in a bucket. After that, we try to copy the uploaded files to their actual folder and then delete the files in the temporary folder. It doesn't time out when working with a single file. We're using the ThreeSharp API. Stack trace:

      [WebException: The operation has timed out]
      System.Net.HttpWebRequest.GetRequestStream() +5322142
      Affirma.ThreeSharp.Query.ThreeSharpQuery.GenerateAndSendHttpWebRequest(Request request) in C:\Consulting\Amazon\Amazon S3\Affirma.ThreeSharp\Affirma.ThreeSharp\Query\ThreeSharpQuery.cs:386
      Affirma.ThreeSharp.Query.ThreeSharpQuery.Invoke(Request request) in C:\Consulting\Amazon\Amazon S3\Affirma.ThreeSharp\Affirma.ThreeSharp\Query\ThreeSharpQuery.cs:479

    Read the article

  • How can I get the size of an Amazon S3 bucket?

    - by Garret Heaton
    I'd like to graph the size (in bytes, and # of items) of an Amazon S3 bucket and am looking for an efficient way to get the data. The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name, but I'm worried about its ability to scale since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-Months it seems odd that they don't expose this value directly. Although Amazon's REST API returns the number of items in a bucket, [s3cmd] doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l but that seems like a hack. The Ruby AWS::S3 library looked promising, but only provides the # of bucket items, not the total bucket size. Is anyone aware of any other command line tools or libraries (prefer Perl, PHP, Python, or Ruby) which provide ways of getting this data?
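
    For what it's worth, here is what the brute-force listing approach looks like in Python with boto3 (Python being one of the languages the question lists; the library choice is an assumption). It pages through the bucket and sums object sizes, so it does the same per-object work that s3cmd du does, just without shelling out:

      import boto3

      s3 = boto3.client("s3")
      paginator = s3.get_paginator("list_objects_v2")

      total_bytes = 0
      total_items = 0
      for page in paginator.paginate(Bucket="bucket_name"):
          for obj in page.get("Contents", []):
              total_bytes += obj["Size"]
              total_items += 1

      print(f"{total_items} objects, {total_bytes / 1024 ** 3:.2f} GiB")

    This is still one LIST request per 1000 keys, so the scaling concern in the question stands; anything cheaper has to come from Amazon's own usage reporting rather than the API.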

    Read the article

  • Why S3 website redirect location is not followed by CloudFront?

    - by ychaze
    I have a website hosted on Amazon S3. It is the new version of an old website hosted on WordPress. I have set up some files with the Website Redirect Location metadata to handle the old locations and redirect them to the new website pages. For example: I had http://www.mysite.com/solution that I want to redirect to http://mysite.s3-website-us-east-1.amazonaws.com/product.html So I created an empty file named solution inside my bucket with the correct metadata: Website Redirect Location = /product.html The S3 redirect metadata is equivalent to a 301 Moved Permanently, which is great for SEO. This works great when accessing the URL directly on the S3 domain. I have also set up a CloudFront distribution based on the website bucket, but when I try to access it through the distribution, the redirect does not work, i.e. http://xxxx123.cloudfront.net/solution does not redirect but downloads the empty file instead. So my question is: how do I keep the redirection working through the CloudFront distribution? Or is there another way to handle the redirection without hurting SEO? Thanks
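
    The redirect metadata is only honoured by the S3 website endpoint, not by the plain REST endpoint, so the usual fix is to give CloudFront the website hostname (mysite.s3-website-us-east-1.amazonaws.com) as a custom origin instead of selecting the bucket as an S3 origin; the redirects then pass through unchanged. For completeness, a small sketch of setting the same redirect programmatically with boto3 (an assumption about tooling, not what the poster used):

      import boto3

      s3 = boto3.client("s3")

      # Recreate the empty placeholder object with the redirect metadata attached.
      s3.put_object(
          Bucket="mysite-bucket",            # placeholder bucket name
          Key="solution",
          Body=b"",
          WebsiteRedirectLocation="/product.html",
      )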

    Read the article
