Search Results

Search found 1018 results on 41 pages for 'galaxy s3'.

Page 4 of 41

  • Filesize with SWFUpload and Amazon S3

    - by Dodinas
    Hello all, I'm currently using SWFUpload to upload files to my S3 bucket. And it's working great. I'm using the script from a website here: http://www.anedix.com/news/article/50 Again, the upload to my S3 works fine, however, I've been running into an issue when attempting to upload larger files. It seems that I cannot upload anything over 50MB. I have tried this from both my webhost and locally, using my local testing environment. My question is this: When uploading with SWFUpload, it should be going straight to Amazon S3, correct? If so, then PHP settings such as MAX_UPLOAD_SIZE should not affect it? (Even though in my local environment, I've set it to 1024MB.) Essentially, what the script does is, shows that it's uploading the file (it takes the appropriate amount of time), redirects to the success page, and does not throw any errors. Any ideas on why this would be happening, or how I can troubleshoot this? Thanks!
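
    (Note: when the browser hands the file straight to S3, PHP upload limits never come into play; the effective cap is normally whatever the signed POST policy allows via its content-length-range condition, plus SWFUpload's own file_size_limit setting. Below is an illustrative sketch of such a policy document, with a hypothetical bucket and prefix, permitting uploads up to roughly 200 MB.)

        {
          "expiration": "2015-01-01T00:00:00.000Z",
          "conditions": [
            {"bucket": "my-upload-bucket"},
            ["starts-with", "$key", "uploads/"],
            {"acl": "private"},
            ["content-length-range", 0, 209715200]
          ]
        }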

    Read the article

  • How to securely serve S3 files to blog

    - by Hugo Palma
    I'm starting a blog and I'm in the process of choosing where I should host it. For now I want a free solution like Blogger or Wordpress.com. The problem I'm facing is that I want to use files I have in an S3 bucket on my blog, but none of the blog solutions I've found supports any kind of server-side code, which means that in order to use S3 query string authentication I would have to put sensitive credentials in the client. For obvious reasons I don't want to do that. So I'm looking for ideas on how I can safely include content from S3 in a free blog host.
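
    A frequently suggested workaround is to pre-generate time-limited, query-string-authenticated URLs from a machine you control and paste those into the posts, so no credentials ever reach the blog. A minimal sketch with boto3 (the bucket and key names are placeholders):

        import boto3

        s3 = boto3.client("s3")  # credentials come from the local environment/config, never the blog
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "my-blog-assets", "Key": "images/header.png"},
            ExpiresIn=7 * 24 * 3600,  # valid for a week, the maximum for presigned URLs
        )
        print(url)

    The trade-off is that such links still expire, so long-lived posts eventually need the URLs regenerated (or the objects made public-read).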

    Read the article

  • Ruby: Streaming large AWS S3 object freezes

    - by Peter
    I am using the Ruby aws/s3 library to retrieve files from Amazon S3. I stream an object and write it to a file as per the documentation (with debug output every 100 chunks to confirm progress). This works for small files, but randomly freezes while downloading large (150MB) files on an Ubuntu VPS. Fetching the same 150MB files from my Mac on a much slower connection works just fine. When it hangs there is no error thrown, and the last line of debug output is the 'Finished chunk' line. I've seen it write between 100 and 10,000 chunks before freezing. Has anyone come across this, or have ideas on what the cause might be? Thanks. The code that hangs:

        i = 1
        open(local_file, 'w') do |f|
          AWS::S3::S3Object.value(key, @s3_bucket) do |chunk|
            puts("Writing chunk #{i}")
            f.write chunk.read_body
            puts("Finished chunk #{i}")
            i = i + 1
          end
        end

    Read the article

  • The Birth and Life of a Disk Galaxy [Video]

    - by Jason Fitzpatrick
    In this video, rendered over a million CPU hours by the Pleiades supercomputer at NASA’s Ames Research Center, we see the birth and life of a massive disk galaxy. Computer Model Shows a Disk Galaxy’s Life History [via Geeks Are Sexy]

    Read the article

  • Access files on Samsung Galaxy S3 external sd card using ubuntu 12.04

    - by nense
    I have a Samsung Galaxy S3 running the stock Samsung ROM, and I'm trying to transfer files (videos, photos, music and downloads) from my handset to my system, which runs Ubuntu 12.04, via USB. I have followed the links suggested in "How to connect Samsung Galaxy S3 via USB?" but it all goes over my head. Can anyone help me with a simple GUI program, or a link, so I can simply copy and paste selected files from my phone onto my system?

    Read the article

  • Samsung Galaxy tab Textbox issue

    - by csharpnewbie
    We are working on a mobile web application (.NET based). We are facing an issue with text boxes on the Samsung Galaxy Tab; please find the details below, and let me know if you need more information. The problem is on the Samsung Galaxy Tab 750: we have a login screen, and when focus moves to the input text box the Samsung keypad pops up and the design completely collapses. The same page works fine on the iPad. Is there any setting or CSS property we can use to fix this issue?

    Read the article

  • NSURLSession and amazon S3 uploads

    - by George Green
    I have an app which is currently uploading images to Amazon S3. I have been trying to switch it from using NSURLConnection to NSURLSession so that the uploads can continue while the app is in the background. I seem to be hitting a bit of an issue: the NSURLRequest is created and passed to the NSURLSession, but Amazon sends back a 403 Forbidden response, whereas if I pass the same request to an NSURLConnection it uploads the file perfectly. Here is the code that creates the request:

        NSString *requestURLString = [NSString stringWithFormat:@"http://%@.%@/%@/%@", BUCKET_NAME, AWS_HOST, DIRECTORY_NAME, filename];
        NSURL *requestURL = [NSURL URLWithString:requestURLString];
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:requestURL
                                                                cachePolicy:NSURLRequestReloadIgnoringLocalAndRemoteCacheData
                                                            timeoutInterval:60.0];

        // Configure request
        [request setHTTPMethod:@"PUT"];
        [request setValue:[NSString stringWithFormat:@"%@.%@", BUCKET_NAME, AWS_HOST] forHTTPHeaderField:@"Host"];
        [request setValue:[self formattedDateString] forHTTPHeaderField:@"Date"];
        [request setValue:@"public-read" forHTTPHeaderField:@"x-amz-acl"];
        [request setHTTPBody:imageData];

    And this signs the request (I think this came from another SO answer):

        NSString *contentMd5 = [request valueForHTTPHeaderField:@"Content-MD5"];
        NSString *contentType = [request valueForHTTPHeaderField:@"Content-Type"];
        NSString *timestamp = [request valueForHTTPHeaderField:@"Date"];
        if (nil == contentMd5) contentMd5 = @"";
        if (nil == contentType) contentType = @"";

        NSMutableString *canonicalizedAmzHeaders = [NSMutableString string];
        NSArray *sortedHeaders = [[[request allHTTPHeaderFields] allKeys] sortedArrayUsingSelector:@selector(caseInsensitiveCompare:)];
        for (id key in sortedHeaders) {
            NSString *keyName = [(NSString *)key lowercaseString];
            if ([keyName hasPrefix:@"x-amz-"]) {
                [canonicalizedAmzHeaders appendFormat:@"%@:%@\n", keyName, [request valueForHTTPHeaderField:(NSString *)key]];
            }
        }

        NSString *bucket = @"";
        NSString *path = request.URL.path;
        NSString *query = request.URL.query;
        NSString *host = [request valueForHTTPHeaderField:@"Host"];
        if (![host isEqualToString:@"s3.amazonaws.com"]) {
            bucket = [host substringToIndex:[host rangeOfString:@".s3.amazonaws.com"].location];
        }

        NSString *canonicalizedResource;
        if (nil == path || path.length < 1) {
            if (nil == bucket || bucket.length < 1) {
                canonicalizedResource = @"/";
            } else {
                canonicalizedResource = [NSString stringWithFormat:@"/%@/", bucket];
            }
        } else {
            canonicalizedResource = [NSString stringWithFormat:@"/%@%@", bucket, path];
        }
        if (query != nil && [query length] > 0) {
            canonicalizedResource = [canonicalizedResource stringByAppendingFormat:@"?%@", query];
        }

        NSString *stringToSign = [NSString stringWithFormat:@"%@\n%@\n%@\n%@\n%@%@",
                                  [request HTTPMethod], contentMd5, contentType, timestamp,
                                  canonicalizedAmzHeaders, canonicalizedResource];
        NSString *signature = [self signatureForString:stringToSign];
        [request setValue:[NSString stringWithFormat:@"AWS %@:%@", self.S3AccessKey, signature] forHTTPHeaderField:@"Authorization"];

    Then if I use this line of code:

        [NSURLConnection connectionWithRequest:request delegate:self];

    it works and uploads the file, but if I use:

        NSURLSessionUploadTask *task = [self.session uploadTaskWithRequest:request fromFile:[NSURL fileURLWithPath:filePath]];
        [task resume];

    I get the Forbidden error. Has anyone tried uploading to S3 with this and hit similar issues? I wonder if it has to do with the way the session pauses and resumes uploads, or whether it is doing something funny to the request. One possible solution would be to upload the file to an interim server that I control and have that forward it to S3 when it is complete, but this is clearly not an ideal solution! Any help is much appreciated! Thanks!
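
    For reference, the Objective-C above is building the legacy S3 "signature version 2" Authorization header: an HMAC-SHA1, keyed with the secret key, over a newline-joined string of the verb, Content-MD5, Content-Type, Date, the canonicalized x-amz-* headers, and the canonicalized resource. A language-neutral sketch of that computation (Python, purely illustrative):

        import base64, hashlib, hmac

        def s3_v2_signature(secret_key, verb, content_md5, content_type, date, amz_headers, resource):
            # amz_headers: dict of x-amz-* request headers; resource: e.g. "/bucket/dir/file.jpg"
            canonical_amz = "".join(
                "%s:%s\n" % (k.lower(), v)
                for k, v in sorted(amz_headers.items(), key=lambda kv: kv[0].lower())
                if k.lower().startswith("x-amz-")
            )
            string_to_sign = "\n".join([verb, content_md5, content_type, date]) + "\n" + canonical_amz + resource
            digest = hmac.new(secret_key.encode("utf-8"), string_to_sign.encode("utf-8"), hashlib.sha1).digest()
            return base64.b64encode(digest).decode("ascii")

    Because every signed value has to reach S3 exactly as it was signed, it is worth checking whether the upload task alters the request: uploadTaskWithRequest:fromFile: is documented to ignore any HTTPBody set on the request, and if anything that was signed (the Date, Content-Type, the x-amz-acl header, or the resource path) differs by the time the request reaches S3, a 403 is the expected result.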

    Read the article

  • Alternative to Amazon’s S3 service?

    - by Cory
    Just wondering if there is a good alternative to Amazon's S3 service? I like S3, but the bandwidth cost is high. I looked at CloudFiles from Rackspace, but the cost is even higher. I don't mind prepaying or paying monthly in order to greatly reduce the bandwidth cost. Thank you for any help.

    Read the article

  • Rails upload to s3 performance issue

    - by Denis
    Hello, I'm building an app to store files on my S3 account. I'm using Rails 3.0.0.beta. A lot of files can be uploaded at the same time, and the cost (from a performance point of view) of an upload is quite heavy, so my app will be busy handling uploads all the time! Maybe a solution is to upload directly to S3, but I would still need a request back to my app, at least to store the file's name. I'm wondering what the best solution is?

    Read the article

  • Securing S3 via your own application

    - by Neil Middleton
    Imagine the following use case: you have a Basecamp-style application hosting files with S3. Accounts each have their own files, but they are all stored on S3. How, then, would a developer go about securing files so that users of account 1 couldn't somehow get to the files of account 2? We're talking Rails, if that's any help.

    Read the article

  • Customizing the look of S3 expiring urls

    - by Gregoriy
    When creating expiring links for Amazon S3 buckets, could I somehow change the name of the AWSAccessKeyId parameter to something else (to customise it, so to speak), so as not to disclose the use of Amazon in my web applications? For now, a link looks like this: http://video.mysite.com/T154456.flv?AWSAccessKeyId=1ESOMESPECIALIDJJAKJ6RA82&Expires=1241372284&Signature=ddfr%2BlkoSEPAL%2BGbMwlMzj6q%2BCY%3D Or, so as not to open another question: what are other (tricky?) ways to expire S3 links without the use of a proxy?

    Read the article

  • Amazon S3 enforcing access control

    - by KandadaBoggu
    I have several PDF files stored in Amazon S3. Each file is associated with a user, and only the file's owner can access it. I have enforced this on my download page, but the actual PDF link points to an Amazon S3 URL, which is accessible to anybody. How do I enforce the access-control rules for this URL (without making my server a proxy for all the PDF download links)?

    Read the article

  • Getting the download count of a specific S3 object

    - by phidah
    I've got a number of S3 objects that are available to my customers. Since I'd like to bill my customers by usage, I wondered if there is any smart way to get the number of times a given file has been downloaded? Alternatively, I suppose I could parse the log files provided by S3, but with 10m+ fetches per customer this might be a bit of a task. Any ideas?
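
    If it does come down to the S3 server access logs, each log line records the operation (for example REST.GET.OBJECT), the key and the HTTP status, so a small script can roll the counts up per key. A rough sketch, assuming logging is enabled on the bucket and the log files have already been downloaded into a local logs/ directory:

        import collections, glob, re

        counts = collections.Counter()
        # operation, key, quoted request URI, then the 3-digit HTTP status
        pattern = re.compile(r'REST\.GET\.OBJECT (\S+) "[^"]*" (\d{3})')
        for path in glob.glob("logs/*"):
            with open(path) as f:
                for line in f:
                    m = pattern.search(line)
                    if m and m.group(2) == "200":  # simplification: ignores 206 partial downloads
                        counts[m.group(1)] += 1

        for key, n in counts.most_common(10):
            print(n, key)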

    Read the article

  • Upload File Directly to S3 with Progress Bar

    - by John Boker
    Relating to this question, http://stackoverflow.com/questions/117810/upload-files-directly-to-amazon-s3-from-asp-net-application, is there any way to do this and have a progress bar? EDIT (two days later): still no luck with a direct way. I found one thing that looks promising, but not free: http://www.flajaxian.com/ uses Flash to upload directly to S3 with a progress bar.

    Read the article

  • Amazon S3 permissions

    - by Joe
    Trying to understand S3: how do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so that only that user has access to a given file? It seems like query string authentication requires an expiration date, and that won't work for me. Is there another way to do this?

    Read the article

  • Automated incremental backups from Plesk on Centos to Amazon S3

    - by ChrisS
    Hi, I've done a fair bit of research on this via Google, and there seem to be quite a few ways of possibly doing this. I'm looking to incrementally back up new and updated files in two directories on my Plesk-run CentOS 5.2 server: /backups and /var/www/vhosts (preferably only the httpdocs directory within each vhost). Has anyone had good results with the various solutions? There seem to be various Java, Perl and Ruby based options out there. Many thanks, Chris
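
    As one concrete (and hedged) example of the command-line route: s3cmd can do rsync-style incremental syncs on its own, so a pair of nightly cron entries along these lines would cover both directories without any Java, Perl or Ruby dependencies (the bucket name and schedule are placeholders):

        # /etc/cron.d/s3-backup  -- illustrative only
        30 2 * * * root s3cmd sync /backups/ s3://my-backup-bucket/backups/
        45 2 * * * root s3cmd sync /var/www/vhosts/ s3://my-backup-bucket/vhosts/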

    Read the article

  • s3fs changing s3 permissions?

    - by magd1
    My developer believes that s3fs is changing my bucket's permissions. Is this possible? I want my bucket to be public, but it keeps reverting back to private. Here's my fstab entry:

        s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000 0 0

    My developer mentioned the "-o default_acl" option (default: "private"). The documentation refers to "canned ACLs", but I don't understand what these are.
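
    For what it's worth, "canned ACLs" are simply the predefined ACL names S3 accepts: private, public-read, public-read-write, authenticated-read, and a few others. If the bucket's content should stay publicly readable, the likely change is adding default_acl to the existing mount options, along these lines (illustrative, based on the fstab entry above):

        s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000,default_acl=public-read 0 0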

    Read the article

  • Configure non-destructive Amazon S3 bucket policy

    - by Assaf
    There's a bucket into which some users may write their data for backup purposes. They use s3cmd to put new files into their bucket. I'd like to enforce a non-destruction policy on these buckets, meaning it should be impossible for users to destroy data; they should only be able to add it. How can I create a bucket policy that only lets a certain user put a file if it doesn't already exist, and doesn't let him do anything else with the bucket?
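
    A starting point might be a bucket policy that grants the backup user only s3:PutObject and s3:ListBucket, so objects can be added and listed but never deleted. A sketch follows; the account ID, user and bucket names are placeholders. Note that s3:PutObject still allows overwriting an existing key, so true add-only behaviour usually also needs bucket versioning enabled, so that an overwrite creates a new version instead of destroying data.

        {
          "Version": "2008-10-17",
          "Statement": [
            {
              "Sid": "AddOnlyForBackupUser",
              "Effect": "Allow",
              "Principal": {"AWS": "arn:aws:iam::111122223333:user/backup-user"},
              "Action": ["s3:PutObject", "s3:ListBucket"],
              "Resource": [
                "arn:aws:s3:::my-backup-bucket",
                "arn:aws:s3:::my-backup-bucket/*"
              ]
            }
          ]
        }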

    Read the article

  • Amazon S3 Iterating Through Multi-Page Results. (withMarker)

    - by Jitu
    I'm trying to iterate through an Amazon S3 bucket that has around 5000+ keys stored in it, using sample code based on the Amazon Developer Guide: http://docs.amazonwebservices.com/AmazonS3/latest/dev/ListingObjectKeysUsingNetSDK.html The issue is that iteration fails when the NextMarker being passed is longer than 128 characters, which seems unusual, as withMarker accepts a string parameter and there is no documented limit on it. The relevant line is:

        request.Marker = response.NextMarker;

    Has anyone faced a similar issue? Thanks in advance.
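
    One documented subtlety with the V1 ListObjects call: NextMarker is only returned when a Delimiter is specified; otherwise the marker for the next page is supposed to be the last key of the previous response, which can explain odd behaviour when relying on NextMarker alone. For comparison, here is the same loop in Python with boto3 (illustrative; the bucket name is a placeholder, and the .NET request.Marker pattern is equivalent), using the V2 listing call and its continuation token:

        import boto3

        s3 = boto3.client("s3")
        kwargs = {"Bucket": "my-big-bucket", "MaxKeys": 1000}
        while True:
            resp = s3.list_objects_v2(**kwargs)
            for obj in resp.get("Contents", []):
                print(obj["Key"])
            if not resp.get("IsTruncated"):
                break
            kwargs["ContinuationToken"] = resp["NextContinuationToken"]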

    Read the article

  • Uploading to S3 using Curl

    - by Carl Crawley
    Hi all, I'm currently using cURL to upload a file from my server to S3, using AJAX to call the script. So I have the following:

        $fullfilepath = '/server/sitepath/files/' . $_POST['file'];
        $upload_url = 'https://' . $_POST['buckets'] . '.s3.amazonaws.com/';

        $params = array(
            'key' => $_POST['key'],
            'AWSAccessKeyId' => $_POST['AWSAccessKeyId'],
            'acl' => $_POST['acl'],
            'success_action_status' => $_POST['success_action_status'],
            'policy' => $_POST['policy'],
            'signature' => $_POST['signature'],
            'Content-Type' => $_POST['Content-Type'],
            'file' => "@$fullfilepath"
        );

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_URL, $upload_url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
        $response = curl_exec($ch);
        curl_close($ch);
        echo $response;

    However, when it posts I get the following S3 error, and I'm unsure why, because I'm not passing JSON to it:

        <?xml version="1.0" encoding="UTF-8"?>
        <Error><Code>InvalidPolicyDocument</Code><Message>Invalid Policy: Invalid JSON.</Message><RequestId>B29469C6151BE0E8</RequestId><HostId>BFPk6W2kt1b6hTtx0mEq6dWdN/IhO0gNR5bct//7LAOwJxm1C3PrxS4RPv1blzJ8</HostId></Error>

    I've googled it for the last hour or so and can't seem to figure it out. If I change the order of the array fields, it gives me a different error; I believe the order of the posted fields is somehow important. Any help would be much appreciated! C
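
    A common cause of "Invalid Policy: Invalid JSON" is that the policy field reaching S3 is not the base64-encoded policy document the signature was computed over (for example raw JSON, or a value mangled on the way through the form). Purely as a sketch of how the policy and signature fields are normally produced (the secret key, bucket and key prefix are placeholders):

        import base64, hashlib, hmac, json
        from datetime import datetime, timedelta

        SECRET_KEY = "..."  # placeholder: AWS secret access key

        policy_doc = {
            "expiration": (datetime.utcnow() + timedelta(hours=1)).strftime("%Y-%m-%dT%H:%M:%S.000Z"),
            "conditions": [
                {"bucket": "my-upload-bucket"},
                ["starts-with", "$key", "files/"],
                {"acl": "private"},
                {"success_action_status": "201"},
                ["starts-with", "$Content-Type", ""],
            ],
        }

        policy_b64 = base64.b64encode(json.dumps(policy_doc).encode("utf-8")).decode("ascii")
        signature = base64.b64encode(
            hmac.new(SECRET_KEY.encode("utf-8"), policy_b64.encode("ascii"), hashlib.sha1).digest()
        ).decode("ascii")
        # The "policy" and "signature" form fields must carry these two values, and every
        # other posted field (key, acl, Content-Type, ...) has to satisfy a condition in
        # the policy document.

    It is also worth knowing that S3 requires the file field to be the last field in the POST form, which is one reason the field order appears to matter.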

    Read the article

  • How to get Amazon s3 PHP SDK working?

    - by JakeRow123
    I'm trying to set up S3 for the first time and running the sample file that comes with the PHP SDK, which creates a bucket and attempts to upload some demo files to it. This is the error I am getting: "The difference between the request time and the current time is too large." I read in another question on SO that this happens because Amazon validates a request by comparing the times of the server and the client, and the two must be within a 15-minute span of one another. Now here is the problem. My laptop's time is 12:30 AM, June 8, 2012 at the moment. On my server I created a file called servertime.php containing:

        <?php print strftime('%c'); ?>

    and the output is: Fri Jun 8 00:31:22 2012. It looks like the day is correct, but I don't know what to make of 00:31:22. In any case, how is it possible to always make sure the time between the client and server is within a 15-minute window of one another? What if I have a user in China who wishes to upload a file on my site, which uses S3 as the CDN? Then the time difference would be over a day. How can I make sure all my users' times are within 15 minutes of my server time? What if the user is in the U.S. but the time on their machine is misconfigured? Basically, how do I get S3 bucket creation and upload to work?
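
    One point that may help untangle this: the signature (and therefore the timestamp check) is produced on whatever machine runs the SDK. Since the PHP SDK signs requests on the web server, only the server's clock has to be within Amazon's roughly 15-minute window; visitors' clocks, whether in China or simply misconfigured, never enter into it. The 00:31:22 above is just 12:31 AM written in 24-hour form, so the server and laptop roughly agree; the skew error usually means the server's idea of UTC (its clock or its timezone setting) is off. The usual fix is to sync the clock with NTP and double-check the UTC time, for example:

        sudo ntpdate pool.ntp.org   # one-off sync (or keep ntpd/chrony running)
        date -u                     # AWS compares against the time in UTC/GMT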

    Read the article

  • Using .htaccess to serve files from Amazon S3 / CloudFront

    - by Adrian A.
    My ideal setup would be to take a current client's site and upload a .htaccess file with a regex inside that matches the URI: if it finds a certain file extension, it should use the same path, but with an altered domain. i.e. normal paths:

        http://www.domain.com/something/images/someimage.jpeg
        http://www.domain.com/assets/js/jquery.js

    translated by the .htaccess would become:

        http://mycdn.other.com/something/images/someimage.jpeg
        http://mycdn.other.com/assets/js/jquery.js

    I googled this for hours in a row, with no luck. Again, this is for actually making use of Amazon's CloudFront. S3 is already mounted to the website for backups and storing files using s3fs, but that doesn't solve the issue, since it uses S3 directly, not CloudFront.
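
    For what it's worth, a hedged sketch of the kind of rule this needs is below (the CDN hostname is a placeholder). One caveat: with a plain RewriteRule to another host, mod_rewrite issues a redirect, so the browser makes a second request to CloudFront rather than Apache silently serving the remote file (proxying it through Apache with the [P] flag would defeat the purpose of the CDN). Avoiding even that extra round trip means rewriting the asset URLs in the HTML itself to point at the CDN.

        RewriteEngine On
        # Send common static assets to the CDN, keeping the same path.
        RewriteCond %{REQUEST_URI} \.(jpe?g|png|gif|css|js)$ [NC]
        RewriteRule ^(.*)$ http://mycdn.other.com/$1 [R=302,L]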

    Read the article
