I'm trying to stream a video that is stored on Amazon S3.
I've tried passing a URL with embedded authentication to MPMoviePlayerController, but with no success.
Like this: http://theusername:[email protected]/path/to/the/video
Has anyone done this before? I would appreciate some advice, thanks.
Hi,
I'm looking for an open source project that provides a file-manager-style interface to S3: the ability to view files and "folders", add/edit/delete files and folders, etc.
I've seen http://s3fm.com, but I'd like to host something like that myself. Does anything like this exist?
Thanks.
Someone posted something similar but it didn't really solve the problem.
I want to move all my static files (images, JavaScript, CSS) to an Amazon S3 bucket when I deploy my app, and rewrite those paths in the app as well. Is there a simple way to accomplish this, or am I in for a huge amount of work here?
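For reference, the upload half of this can be scripted fairly simply. Here is a minimal sketch using the AWS SDK for Java's TransferManager; the bucket name, key prefix, directory and credentials are placeholders, not anything prescribed by the question.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.transfer.MultipleFileUpload;
import com.amazonaws.services.s3.transfer.TransferManager;
import java.io.File;

public class StaticDeploy {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials and names -- substitute your own.
        TransferManager tm = new TransferManager(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        // Recursively upload the local static directory under the "static/" key prefix.
        MultipleFileUpload upload = tm.uploadDirectory(
                "my-static-bucket", "static", new File("public/static"), true);
        upload.waitForCompletion();
        tm.shutdownNow();
    }
}

Rewriting the paths is then mostly a configuration matter: point the app's asset base URL at the bucket (e.g. http://my-static-bucket.s3.amazonaws.com/static/) instead of the local path.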
I've been interacting with Amazon S3 through S3Fox and I can't seem to delete my buckets. I select a bucket, hit delete, confirm the delete in a popup, and... nothing happens. Is there another tool that I should use?
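One thing worth checking: S3 only deletes buckets that are completely empty, so a delete that silently does nothing is often just a non-empty bucket. If the GUI tools keep misbehaving, a programmatic fallback along these lines (AWS SDK for Java, placeholder names and credentials) empties the bucket first and then removes it.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class NukeBucket {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        String bucket = "bucket-to-delete";
        // Delete every object; listObjects pages through keys 1000 at a time.
        ObjectListing listing = s3.listObjects(bucket);
        while (true) {
            for (S3ObjectSummary summary : listing.getObjectSummaries()) {
                s3.deleteObject(bucket, summary.getKey());
            }
            if (!listing.isTruncated()) {
                break;
            }
            listing = s3.listNextBatchOfObjects(listing);
        }
        // Only now will the bucket delete succeed.
        s3.deleteBucket(bucket);
    }
}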
We are looking for a secure online solution to access our files stored on Amazon S3. We have about 3K files, mostly media and documents, that we need to make available to our employees on the move. We don't want to develop anything in-house if there is an existing solution.
Please note that our employees are not technologically minded, so a simple web-based upload/download GUI would work best.
Besides the Amazon Integrator from /n software, are there any other Amazon S3 components available that can be used with Delphi 2010? I would use the one from /n software, but it has some issues (e.g. GetObjectInfo doesn't work if the object is stored in a specific location) and limitations (e.g. copying objects doesn't let you define replacement meta-data).
I don't have the time or resources to create such a component myself.
Thanks!
Hi all,
I have an application running on GAE/J that streams video from AWS S3.
I need a solution for protecting the video from being stolen, and I found that pre-signed URLs might be the answer(?).
How can I create pre-signed URLs from GAE/J, or is there a better solution for securing the videos?
Thanks
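For reference, generating a pre-signed (query-string authenticated) URL is essentially a one-liner with the AWS SDK for Java. The sketch below assumes the SDK runs in your GAE/J environment and uses placeholder bucket/key names; if the full SDK is awkward on GAE, the same HMAC-SHA1 query-string signature can be computed by hand following the S3 documentation.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import java.net.URL;
import java.util.Date;

public class SignedVideoUrl {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        // URL is valid for 15 minutes, after which S3 returns 403 Forbidden.
        Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);
        URL url = s3.generatePresignedUrl("my-video-bucket", "videos/clip.mp4", expiration);
        System.out.println(url);
    }
}

The objects themselves stay private (no public-read ACL), so the only way to fetch them is through a URL that expires. Note this protects against hot-linking and shared permanent links; it cannot stop a user from saving the stream once they have legitimately received it.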
So, right now we're using DropBox to share various data files around between approximately 10 Mac OS X systems.
However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. We'd like to avoid any kind of local storage (sharing a disk on a desktop or something) since we're a geographically distributed team.
So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that.
There are plenty of docs out there that have bits and pieces of what I want but some of the tools don't seem to fit the requirements:
Transport security via SSL to the bucket
Encryption of bucket contents
Bi-directional syncing
Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3; the docs don't say, but the protocol looks like plain old HTTP: http://www.nongnu.org/duplicity/duplicity.1.html#sect6).
Many scripts use gpg to encrypt files. This seems like it could work; however, I have to make sure that each OS X client is able to use the same key to encrypt and decrypt files (key management is left to me). FTP and other client-based apps don't seem to support this at all.
Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository", they fail this one.
Whew. So, I'd love a single tool that does this but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync.
I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts?
THANKS!
I've been using a VPS to host 7 WordPress websites; most of them need a lot of storage but very little RAM and traffic. So I'm thinking of moving the static content (the uploads folder) to Amazon S3, and I'm looking for the most viable way to do this.
I want every website to have its own bucket, with newly uploaded media files automatically transferred to Amazon S3 without using a plugin. I'm OK with a cron job; for example, files would be uploaded to my server first, then transferred to S3 and deleted from my server every 24 hours. Or is there any way for me to change the default upload directory to my S3 bucket without sacrificing any WordPress functionality (resizing, titles, etc.)?
What do you think is the most efficient way to do this? Currently I'm looking at this plus a cron job, but I would like to know if a better option exists.
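For reference, the "upload then clean up" pass described above is easy to run from cron with any S3 client. Here is a rough sketch of the idea (AWS SDK for Java, with hypothetical paths and bucket names, and one bucket per site as requested):

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import java.io.File;

public class UploadsToS3 {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        // Only move files older than 24 hours, mirroring the cron idea above.
        long cutoff = System.currentTimeMillis() - 24L * 60 * 60 * 1000;
        // Hypothetical path and bucket -- one bucket per site.
        sync(s3, "site-one-media", new File("/var/www/site-one/wp-content/uploads"), "uploads/", cutoff);
    }

    static void sync(AmazonS3 s3, String bucket, File dir, String prefix, long cutoff) {
        File[] children = dir.listFiles();
        if (children == null) {
            return;
        }
        for (File f : children) {
            if (f.isDirectory()) {
                sync(s3, bucket, f, prefix + f.getName() + "/", cutoff);
            } else if (f.lastModified() < cutoff) {
                // Key preserves the year/month sub-folders, e.g. uploads/2012/05/photo.jpg
                s3.putObject(bucket, prefix + f.getName(), f);
                f.delete();  // free the VPS disk once the copy is on S3
            }
        }
    }
}

The remaining piece is making the site serve those files from S3 (rewriting the media URLs or CNAMEing the bucket), since the local copies are gone once the job runs.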
I blogged about the differences and similarities between AWS CloudFormation and Oracle Assembler Builder for packaging your software stack for deployment/provisioning to the cloud. However, these tools do not deal with software stack versioning and configuration management. This is where tools like Chef and Puppet come into play.
Puppet and Chef points of interest:
1. Can be used in any cloud environment (Rackspace, private cloud, etc.).
2. There is a debate about which is better. I am not going to get into that debate other than to say Puppet is more mature.
3. AWS CloudFormation can integrate with both Chef and Puppet.
A good blog post on AWS CloudFormation and the need for something more:
AWS CloudFormation
To continue from yesterday, let's set up a scenario that enables us to make use of this drag/drop service in NetBeans IDE:
The above service is applicable to Amazon S3, Amazon's storage service, which is typically used to store large binary files. In Amazon S3, every object stored is contained in a bucket. Buckets partition the namespace of objects stored in Amazon S3. More on buckets here. Let's use the tools in NetBeans IDE to create a Java application that accesses our Amazon S3 buckets.
Create a Java application named "AmazonBuckets" with a main class named "AmazonBuckets". Open the main class and then drag the above service into the main method of the class. Now, NetBeans IDE will create all the other classes and the properties file that you see in the screenshot below.
The first thing to do is to open the properties file above and enter the access key and secret:
access_key=SOMETHING
secret=SOMETHINGELSE
Now you're all set up. Make sure to, of course, actually have some buckets available:
Then rewrite the Java class to parse the XML that is returned via the generated code:
package amazonbuckets;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.netbeans.saas.amazon.AmazonS3Service;
import org.netbeans.saas.RestResponse;
import org.w3c.dom.DOMException;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;

public class AmazonBuckets {

    public static void main(String[] args) {
        try {
            // Call the generated SaaS wrapper and grab the raw XML response.
            RestResponse result = AmazonS3Service.getBuckets();
            String dataAsString = result.getDataAsString();

            // Parse the XML and print the name of each <Bucket> element.
            DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
            DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
            Document doc = dBuilder.parse(
                    new InputSource(new ByteArrayInputStream(dataAsString.getBytes("utf-8"))));
            NodeList bucketList = doc.getElementsByTagName("Bucket");
            for (int i = 0; i < bucketList.getLength(); i++) {
                Node node = bucketList.item(i);
                System.out.println("Bucket Name: " + node.getFirstChild().getTextContent());
            }
        } catch (IOException | ParserConfigurationException | SAXException | DOMException ex) {
            // Left empty in this demo; a real application should at least log the exception.
        }
    }
}

That's all. This is simpler to set up than the scenario described yesterday.
Also notice that there are other Amazon S3 services you can interact with from your Java code, again after a heap of code is generated by the drag/drop into a Java source file:
I tried the above, e.g., I created a new Amazon S3 bucket by dragging "createBucket", adding my credentials to the properties file, and then running the generated code. That is, without adding a single line of code, I was able to create new buckets programmatically.
The above outlines a handy set of tools and techniques to use if you want to let your users store and access data in Amazon S3 buckets directly from the application you've created for them.
I am trying to migrate my site hosting from Bluehost to an AWS cloud-based service.
I have the site up and running on AWS with an elastic IP configured, it loads fine when I specify the IP address in the browser.
I have gone into Route 53 on the AWS console and created a "hosted zone" for the domain. I then created a new record set of type "A" using the IP address as the value.
I have a domain name registered with Bluehost. I've logged into the Bluehost account and updated the domain name servers to point to those specified in Route 53 in the AWS console.
When I hit the IP address directly the site loads; however, it doesn't load when I use the domain name (I get a Google Chrome "Oops" error page saying the page was not found).
I've tried using this site: http://dns.squish.net/ to debug, but it seems to be giving me the correct results.
fizaclegems.com 300 IN A 107.20.209.78
Where 107.20.209.78 matches the elastic IP configured in the AWS console. This is the result it gives for all 4 name servers.
Am I missing a step here? Does anyone know what else I should be doing or looking for?
I've tested most of the included samples in the AWS SDK for .NET and they all work fine.
I can PUT objects, LIST objects and DELETE objects in a bucket, but... let's say I delete the originals locally and want to sync down the files that are missing?
I would like to do a GET object (by key/name and bucket, of course). I can find the object, but how do I read the binary data from S3 through the API?
Do I have to write my own SOAP wrapper for this, or is there some kind of sample for this out "here"? :o)
Hoping for a sample. It does not have to tolerate exceptions, etc. I just need to see the main parts that connect, retrieve and store the file back in my ASP.NET or C# project.
Anyone???
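For reference, no SOAP wrapper is needed: the SDKs expose a GET that hands back the object's bytes as a stream you copy wherever you like. The sketch below uses the AWS SDK for Java purely to illustrate the flow (bucket, key and file names are placeholders); the .NET SDK has an equivalent GetObject call whose response exposes a response stream handled the same way, so check its docs for the exact member names.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.S3Object;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class GetObjectSample {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        S3Object object = s3.getObject("my-bucket", "path/to/file.bin");
        // The content comes back as a stream; copy it to a local file.
        try (InputStream in = object.getObjectContent();
             OutputStream out = new FileOutputStream("file.bin")) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}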
Hi all,
I was wondering if it was possible to generate security credentials per individual Amazon S3 bucket. I am working with a developer and would like to grant him access only to the bucket we are working with. It's not a trust issue, it's more a concern that he'll delete the wrong bucket or its contents.
For example: If we were working on an application that used a bucket called test-application I could generate the credentials for just that one bucket. These credentials would not allow access to other buckets in my account.
Is this possible?
Thanks,
Tony
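For reference, this is what AWS Identity and Access Management (IAM) is for: create a second user under the same account (so there is still only one bill) and attach a policy scoped to the one bucket. A rough sketch of such a policy, assuming the bucket really is called test-application:

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::test-application",
        "arn:aws:s3:::test-application/*"
      ]
    }
  ]
}

Some client tools also want s3:ListAllMyBuckets on arn:aws:s3:::* just to display a bucket list; add that as a separate statement if the developer's tool complains.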
It has been suggested in the Amazon docs (http://aws.amazon.com/dynamodb/), among other places, that you can back up your DynamoDB tables using Elastic MapReduce.
I have a general understanding of how this could work, but I couldn't find any guides or tutorials on it.
So my question is: how can I automate DynamoDB backups (using EMR)?
So far, I think I need to create a "streaming" job with a map function that reads the data from DynamoDB and a reduce step that writes it to S3, and I believe these could be written in Python (or Java, or a few other languages).
Any comments, clarifications, code samples, corrections are appreciated.
I have some secure images on S3 that I need to load into Flex. I was expecting to be able to do this using signed temporary URLs but can't get it working. I know the URLs I'm generating are correct, because they load fine in my browser's address bar. Moreover, Flex has no problem loading my images with a non-signed URL when they are public, but as soon as I try signing the URLs all the images fail, whether public or not.
I've tried image.source = signedURL, image.load(signedURL), etc. If I try loading the file with URLLoader/URLStream, it looks like I'm getting the data OK, but I'm not sure how to translate those results to an Image control.
Is this just an issue with the Image control not being able to recognize signed urls? Do I have to load the image from a byte array? What would that look like?
I'm storing copies of database backups on Amazon S3 using the Python Boto library. But I worry that if my web server was hacked, those backups could be deleted using the credentials I need to do the upload.
OK, so I know you can grant permissions to another Amazon email address, so I can imagine doing that after an upload and then removing the original user's write access. BUT in this scenario I now end up with two accounts and two sets of invoices to give to accounting every month.
Is there a solution to this that doesn't require a new Amazon account for each web server I run?
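For reference, one approach that stays within a single AWS account (and a single invoice) is an IAM user dedicated to the web server, whose keys are only allowed to write new objects; Boto then uses those keys instead of the master credentials. A rough sketch of such a policy (the bucket name is a placeholder):

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::my-backup-bucket/*"
    }
  ]
}

Without s3:DeleteObject a compromised server can add backups but not remove existing ones; enabling versioning on the bucket additionally protects against a malicious overwrite of an existing key.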
I have a couple of copyrighted videos in my S3 buckets. I want to stream them on my website, but at the same time I don't want users to be able to rip the video from the video player.
I tried to Google this, but I'm still not confident about it, because I don't know the intricacies of the options available, such as:
1) Server-side encryption: None / AES-256
2) A very interesting option under the Metadata tab: it shows a couple of keys and values. How can I use them to secure my video content?
3) Adding more metadata and related options?
I'm going to be using S3 to store user-uploaded photos. Obviously, I won't be serving the image files to user agents without resizing them down. However, no single size will do, as some thumbnails will be smaller than other, larger previews. So, I was thinking of making a standard set of dimensions, scaling from the smallest at 16x16 to the largest at 1024x1024. Is this a good way to solve this problem? What if I need a new size later on? How would you solve this?
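For reference, a fixed whitelist of sizes is a common approach: generate the handful of sizes actually used at upload time, keep the untouched original in the bucket, and if a new size is ever needed, run a one-off batch job over the originals. A rough sketch of the resize step with plain javax.imageio / java.awt, assuming square bounding boxes and a local file name as a placeholder:

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class Thumbnails {
    // Whitelisted bounding boxes; anything else is simply not generated.
    static final int[] SIZES = {16, 32, 64, 128, 256, 512, 1024};

    public static void main(String[] args) throws IOException {
        BufferedImage original = ImageIO.read(new File("upload.jpg"));
        for (int size : SIZES) {
            // Scale to fit inside size x size while keeping the aspect ratio.
            double scale = Math.min((double) size / original.getWidth(),
                                    (double) size / original.getHeight());
            int w = Math.max(1, (int) Math.round(original.getWidth() * scale));
            int h = Math.max(1, (int) Math.round(original.getHeight() * scale));
            BufferedImage resized = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = resized.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                               RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            g.drawImage(original, 0, 0, w, h, null);
            g.dispose();
            // Each variant would then be uploaded to S3 under a size-suffixed key.
            ImageIO.write(resized, "jpg", new File("upload_" + size + ".jpg"));
        }
    }
}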
Has anyone been successful in having their DocumentRoot reside on an S3 mount (using s3fs)?
I currently have a mounted bucket at /mnt/s3. I can read and write files to it no problem.
In my httpd.conf I have DocumentRoot "/mnt/s3".
When I restart Apache I get the error "DocumentRoot must be a directory".
Has anyone tried something similar? My goal is to have shared storage space so my nodes can scale easily and access the same document root.
Thanks