Search Results

Search found 800 results on 32 pages for 's3'.

Page 8/32 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • How should secret files be pushed to an EC2 (on AWS) Ruby on Rails application?

    - by nikc
    How should secret files be pushed to an EC2 Ruby on Rails application deployed with AWS Elastic Beanstalk? I add the files to a git repository and push to GitHub, but I want to keep my secret files out of the git repository. I'm deploying to AWS using:

        git aws.push

    The following files are in the .gitignore:

        /config/database.yml
        /config/initializers/omniauth.rb
        /config/initializers/secret_token.rb

    Following this link I attempted to add an S3 file to my deployment: http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html

    Quoting from that link: "Example Snippet - The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp:"

        sources:
          /etc/myapp: http://s3.amazonaws.com/mybucket/myobject

    Following those directions, I uploaded a file to an S3 bucket and added the following to a private.config file in the .elasticbeanstalk .ebextensions directory:

        sources:
          /var/app/current/: https://s3.amazonaws.com/mybucket/config.tar.gz

    That config.tar.gz file extracts to:

        /config/database.yml
        /config/initializers/omniauth.rb
        /config/initializers/secret_token.rb

    However, when the application is deployed, the config.tar.gz file on the S3 host is never copied or extracted. I still receive errors that the database.yml couldn't be located, and the EC2 log has no record of the config file. Here is the error message:

        Error message: No such file or directory - /var/app/current/config/database.yml
        Exception class: Errno::ENOENT
        Application root: /var/app/current
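
    If the sources: approach keeps silently failing (the documented snippet uses a public URL, and a private S3 object generally cannot be fetched that way without authorization), one hedged alternative is to pull the files down explicitly with a small script run during deployment. A minimal Python/boto3 sketch; the bucket name is hypothetical and the target paths are taken from the question:

        # Hedged sketch (not the poster's actual setup): fetch the ignored config
        # files from a private S3 bucket during deployment. The bucket name is a
        # placeholder; the file paths come from the question above.
        import os
        import boto3

        BUCKET = "my-secret-config-bucket"   # hypothetical private bucket
        APP_ROOT = "/var/app/current"
        SECRET_FILES = [
            "config/database.yml",
            "config/initializers/omniauth.rb",
            "config/initializers/secret_token.rb",
        ]

        s3 = boto3.client("s3")
        for key in SECRET_FILES:
            target = os.path.join(APP_ROOT, key)
            os.makedirs(os.path.dirname(target), exist_ok=True)
            s3.download_file(BUCKET, key, target)   # streams the object to disk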

    Read the article

  • Connecting a Samsung Galaxy S3 in Ubuntu 13.04

    - by Squishy
    In 13.04, whenever I connect an Android device, one of three things happens:

        1. It mounts successfully (maybe one attempt in three).
        2. It fails to mount with the following error message: Oops! Something went wrong. Unhandled error message: Unable to open MTP device
        3. This one occasionally happens: Unhandled error message: No such interface `org.gtk.vfs.Mount' on object at path /org/gtk/vfs/mount/1

    Regardless of activity (even when successfully mounted) it will continuously spam the following error message: Unable to mount SAMSUNG_Android. Unable to open MTP Device '[usb:003,00x]', where x seems to be an arbitrary number below 10 and keeps counting up with each new error message until the device is unplugged. I've also just noticed that even if it mounts successfully, it unmounts after about 30 seconds and starts spamming the error message above. The Android device is unlocked, always on and fully charged. ADB seems to function normally. Any suggestions? Further info: this happens on both a stock Samsung S3 and an Xperia Arc S running a custom AOSP-based ROM. I've also tried the steps outlined in this Stack Overflow answer, but the problem persists.

    UPDATE: After doing a dist-upgrade (May 8th 2013), the Xperia Arc S on the AOSP ROM now mounts and behaves normally. The S3, however, still behaves as described above.

    UPDATE: After careful observation, ADB does not, in fact, behave normally. If the error message above appears while sending an app to the device, the attempt is aborted with an error message saying that the device is unavailable.

    Read the article

  • Django app using Amazon AWS S3 storage instead of a DB?

    - by farble1670
    New to Python here, so bear with me... I'm looking at Django for a rapid prototype of a photo-sharing app with an Amazon AWS S3 storage back end. However, as far as I can tell, Django is tailored toward the typical database MVC type of pattern. Is there a way to, for example, provide a custom Django model implementation that talks to S3 instead of a DB? A custom DB engine? Would either of these be practical, or am I looking in the wrong direction? Thanks.
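
    A common pattern is to keep model metadata in the regular database and push only the photo bytes to S3 through a custom storage backend, rather than replacing the model layer or the DB engine. A minimal sketch, assuming boto3 and a hypothetical bucket name (in practice the django-storages package already provides a maintained S3 backend):

        # Hedged sketch of a custom Django storage backend; the bucket and key
        # names are illustrative assumptions, not a drop-in replacement for the DB.
        import boto3
        from botocore.exceptions import ClientError
        from django.core.files.base import ContentFile
        from django.core.files.storage import Storage

        class S3PhotoStorage(Storage):
            """Stores uploaded photo files in S3; model fields stay in the DB."""

            def __init__(self, bucket="my-photo-bucket"):
                self.bucket = bucket
                self.client = boto3.client("s3")

            def _save(self, name, content):
                # Upload the raw bytes under the given key and return the key.
                self.client.put_object(Bucket=self.bucket, Key=name, Body=content.read())
                return name

            def _open(self, name, mode="rb"):
                obj = self.client.get_object(Bucket=self.bucket, Key=name)
                return ContentFile(obj["Body"].read(), name=name)

            def exists(self, name):
                try:
                    self.client.head_object(Bucket=self.bucket, Key=name)
                    return True
                except ClientError:
                    return False

            def url(self, name):
                return f"https://{self.bucket}.s3.amazonaws.com/{name}"

    A model could then use it via, for example, models.ImageField(storage=S3PhotoStorage()), keeping the usual DB-backed models while the photo bytes live in S3.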

    Read the article

  • What does BucketAlreadyOwnedByYou error (from Amazon S3) actually mean? I can't find any reason affe

    - by Phyo Wai Win
    Hi there, I am using Amazon S3 to back up my Rails app's MySQL database, using the astrails-safe plugin, and I get the "Your previous request to create the named bucket succeeded and you already own it. (AWS::S3::BucketAlreadyOwnedByYou)" error back whenever I try to update it. I have checked that the bucket I am backing up into already exists in my account. It's just that I can't upload the files from the code (using astrails-safe). Any help would be appreciated! Thanks.
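
    As the message itself says, the error is raised when a bucket-creation request is repeated for a bucket the caller already owns, so the usual approach is to create the bucket only when it is missing, or to treat that specific error as success. A hedged Python/boto3 sketch with a hypothetical bucket name:

        # Illustrative only: create the backup bucket if it does not exist yet,
        # treating "already owned by you" as success rather than a failure.
        import boto3

        s3 = boto3.client("s3")
        bucket = "my-backup-bucket"  # hypothetical name

        try:
            s3.create_bucket(Bucket=bucket)
        except (s3.exceptions.BucketAlreadyOwnedByYou, s3.exceptions.BucketAlreadyExists):
            pass  # the bucket is already there; safe to proceed with the upload

        s3.upload_file("backup.sql.gz", bucket, "backups/backup.sql.gz")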

    Read the article

  • What Amazon S3 .NET Library is most useful and efficient?

    - by Geo
    There are two main open source .NET Amazon S3 libraries: Three Sharp and LitS3. I am currently using LitS3 in our MVC demo project, but there is some criticism about it. Has anyone here used both libraries so they can give an objective point of view? Below are some sample calls using LitS3.

    On the demo controller:

        private S3Service s3 = new S3Service()
        {
            AccessKeyID = "Thekey",
            SecretAccessKey = "testing"
        };

        public ActionResult Index()
        {
            ViewData["Message"] = "Welcome to ASP.NET MVC!";
            return View("Index", s3.GetAllBuckets());
        }

    On the demo view:

        <% foreach (var item in Model) { %>
            <p>
                <%= Html.Encode(item.Name) %>
            </p>
        <% } %>

    EDIT 1: Since I have to keep moving and there is no clear indication of which library is more effective and kept more up to date, I have implemented a repository pattern with an interface that will allow me to change the library if I need to in the future. Below is a section of the S3Repository that I have created, which will let me swap libraries if necessary:

        using LitS3;

        namespace S3Helper.Models
        {
            public class S3Repository : IS3Repository
            {
                private S3Service _repository;

                #region IS3Repository Members

                public IQueryable<Bucket> FindAllBuckets()
                {
                    return _repository.GetAllBuckets().AsQueryable();
                }

                public IQueryable<ListEntry> FindAllObjects(string BucketName)
                {
                    return _repository.ListAllObjects(BucketName).AsQueryable();
                }

                #endregion

    If you have any information about this question, please let me know in a comment and I will come back and edit the question.

    EDIT 2: Since this question is not getting attention, I integrated both libraries in my web app to see the differences in design. I know this is probably a waste of time, but I really want a good long-run solution. Below are two samples of the same action with the two libraries; maybe this will motivate some of you to let me know your thoughts.

    WITH THE THREE SHARP LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            List<string> list = new List<string>();
            using (BucketListRequest request = new BucketListRequest(null))
            using (BucketListResponse response = service.BucketList(request))
            {
                XmlDocument bucketXml = response.StreamResponseToXmlDocument();
                XmlNodeList buckets = bucketXml.SelectNodes("//*[local-name()='Name']");
                foreach (XmlNode bucket in buckets)
                {
                    list.Add(bucket.InnerXml);
                }
            }
            return list.Cast<T>().AsQueryable();
        }

    WITH THE LITS3 LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            return _repository.GetAllBuckets()
                .Cast<T>()
                .AsQueryable();
        }

    Read the article

  • With the attachment_fu rails plugin, is there any way to delete files uploaded to Amazon S3?

    - by Eric Nguyen
    Let's say I'm using attachment_fu to attach profile pics to user profiles in a system, with Amazon S3 used as the actual file storage. When users upload new profile pics, I'd like to replace the attached file with the new one. I can do this within my database (i.e. the file metadata) easily, but attachment_fu doesn't seem to provide methods for deleting the files from S3. Am I missing something, or am I approaching this the wrong way? Many thanks!
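
    If the plugin turns out not to expose a deletion helper for its S3 backend, the stale object can always be removed directly through the S3 API once the new picture is in place. A hedged Python/boto3 sketch with a hypothetical bucket and key layout (not attachment_fu's actual scheme):

        # Illustrative only: delete the previous profile picture's object from S3.
        import boto3

        s3 = boto3.client("s3")
        s3.delete_object(Bucket="my-app-uploads", Key="profile_pics/42/old_avatar.png")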

    Read the article

  • Why isn't a JS file hosted on Amazon S3 getting cached by the browser?

    - by Paras Chopra
    I have a JS file hosted on Amazon S3 (http://s3.amazonaws.com/wingify/vis_opt.js). I want the file to be cached in users' browsers so that they don't have to download it with every page view. However, in spite of setting a Cache-Control header, I don't think it is getting cached: the browser still contacts the Amazon server with every page view. Here is an example page where this script is embedded: http://myjugaad.in/ If you have Firebug, you will be able to see that the browser requests it with every page view. What can I do so that the file gets permanently cached? Thanks for the help.
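
    For reference, S3 stores Cache-Control as object metadata, so the header has to be set when the object is uploaded (or re-set via a copy) for S3 to return it with responses. A hedged Python/boto3 sketch, assuming a local copy of the file and the bucket/key from the question:

        # Illustrative only: upload the script with a Cache-Control header so S3
        # serves it with the response and browsers can cache it.
        import boto3

        s3 = boto3.client("s3")
        s3.upload_file(
            "vis_opt.js", "wingify", "vis_opt.js",
            ExtraArgs={
                "ContentType": "application/javascript",
                "CacheControl": "public, max-age=604800",  # one week
            },
        )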

    Read the article

  • Are whole VM images backed up on Amazon EC2/S3?

    - by John
    I've been trying to get my head around Amazon Web Services as a VPS provider. My understanding is that an EC2 instance running Windows is basically a Windows VM, very similar to renting a VPS from a more traditional hosting provider. I don't want to have complex backups, either to administer or to restore - if my restore involves installing SVN, MySQL, Jira, etc. on a new box before I can even try to restore the backup, then it's not great to me. What I really want is a service which backs up my entire VM... if the PC running the VPS dies, then the VM image is installed on a new PC and off we go again. With Amazon being all about flexibility and elasticity, I wondered if they have this service? I can't figure it out from reading their docs.
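
    For what it's worth, the closest EC2 equivalent to "image the whole VM" is creating an AMI from the instance (for EBS-backed instances this snapshots the attached volumes), which can later be launched onto fresh hardware. A hedged Python/boto3 sketch with a hypothetical instance ID and instance type:

        # Illustrative only: capture an image of a running instance and later
        # launch a replacement from it. Instance ID and sizing are placeholders.
        import boto3

        ec2 = boto3.client("ec2")

        # Create the AMI (EBS-backed instances reboot briefly unless NoReboot=True).
        image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="full-vm-backup")

        # Restore path: launch a new instance from that image when needed.
        ec2.run_instances(ImageId=image["ImageId"], InstanceType="t2.micro",
                          MinCount=1, MaxCount=1)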

    Read the article

  • aws s3 works with script but not on cron

    - by user3800017
    Guys, my first post! Hope it's not the last... I have a bunch of servers on the AWS EC2 platform. I made a simple script to back up my custom logs to their S3 storage bucket. The problem is that the script works fine when run by hand, but when I add it to the crontab everything executes except the s3 sync/mv part! Here is my code:

        NOW=$(date "+%b_%d_%Y")
        MY_HOSTNAME=`uname -n`
        mv /opt/req/req* /opt/req/bkup/
        mv /opt/response/res* /opt/req/bkup/
        cd /opt/req/bkup/
        tar -cvf ${MY_HOSTNAME}_req_bkup_${NOW}.tar re*
        rm *.txt
        aws s3 mv /opt/req/bkup/* s3://req
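
    A very common cause of "works in a shell, fails under cron" with the aws CLI is cron's minimal environment (a PATH that doesn't include the aws binary, and no AWS credential/region variables). One hedged alternative is to do the upload in Python with boto3, which sidesteps the PATH issue; a minimal sketch using the bucket name and paths from the script above:

        # Illustrative only: upload every file left in the backup directory to S3
        # and remove the local copy, roughly what `aws s3 mv` does in the script.
        import os
        import boto3

        BKUP_DIR = "/opt/req/bkup"
        BUCKET = "req"

        s3 = boto3.client("s3")
        for name in os.listdir(BKUP_DIR):
            path = os.path.join(BKUP_DIR, name)
            if os.path.isfile(path):
                s3.upload_file(path, BUCKET, name)
                os.remove(path)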

    Read the article

  • SQL SERVER – Sends backups to a Network Folder, FTP Server, Dropbox, Google Drive or Amazon S3

    - by pinaldave
    Let me tell you about one of the most useful SQL tools that every DBA should use - SQLBackupAndFTP. I have been using this tool since 2009, and it is the first program I install on a SQL Server. Download the free version, spend a minute on configuration, and your daily backups are safe in the cloud. In summary, SQLBackupAndFTP:

        - Creates SQL Server database and file backups on schedule
        - Compresses and encrypts the backups
        - Sends backups to a network folder, FTP server, Dropbox, Google Drive or Amazon S3
        - Sends email notifications of a job's success or failure

    SQLBackupAndFTP comes in Free and Paid versions (starting from $29) - see the version comparison. The Free version is fully functional for unlimited ad hoc backups or for scheduled backups of up to two databases - it will be sufficient for many small customers.

    What has impressed me from the beginning is that I understood how it works and was able to configure the job from a single form (see Image 1 - Main form above):

        1. Connect to your SQL Server and select the databases to be backed up
        2. Click "Add backup destination" to configure where backups should go (network folder, FTP server, Dropbox, Google Drive or Amazon S3)
        3. Enter your email to receive email confirmations
        4. Set the time to start daily full backups (or go to Settings if you need Differential or Transaction Log backups on a flexible schedule)
        5. Press the "Run Now" button to test

    You can get to this form if you click the "Settings" button in the "Schedule" section. Select what types of backups you want and how often you want to run them, and you will see the scheduled backups in the "Estimated backup plan" list. A detailed tutorial is available on the developer's website.

    Along with SQLBackupAndFTP, the setup gives you the option to install "One-Click SQL Restore" (you can install it stand-alone too) - a basic tool for restoring just full backups. However basic, you can drag-and-drop onto it the zip file created by SQLBackupAndFTP; it unzips the BAK file if necessary, connects to the SQL Server on start, selects the right database, and is smart enough to restart the server to drop open connections if necessary - very handy for developers who need to restore databases often.

    You may ask why this tool is better than the maintenance tasks available in SQL Server. While maintenance tasks are easy to set up, SQLBackupAndFTP is still way easier and integrates compression, encryption, FTP, cloud storage and email in one solution, which makes it superior to maintenance tasks in every aspect.

    On the flip side, SQLBackupAndFTP is not the fanciest tool for managing backups or checking their health. It only works reliably on local SQL Server instances; in other words, it has to be installed on the SQL Server itself. For remote servers it uses scripting, which is less reliable. This limitation is actually inherent in SQL Server itself, as the BACKUP DATABASE command creates the backup not on the client but on the server itself.

    This tool is compatible with almost all the known SQL Server versions. It works with SQL Server 2008 (all versions) and many of the previous versions. It is especially useful for SQL Server Express 2005 and SQL Server Express 2008, as they lack built-in tools for backup.

    I strongly recommend this tool to all DBAs. They must absolutely try it, as it is free and does exactly what it promises. You can download your free copy of the tool from here. Please share your experience about using this tool. I am eager to receive your feedback regarding this article.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Fail rate of EBS snapshots and AMIs?

    - by user784637
    According to Amazon, the fail rate of EBS volumes is:

        "As an example, volumes that operate with 20 GB or less of modified data since their most recent Amazon EBS snapshot can expect an annual failure rate (AFR) of between 0.1% - 0.5%, where failure refers to a complete loss of the volume." http://aws.amazon.com/ebs/

    However, I was curious to know the fail rate of the EBS snapshots and private AMIs that a system admin would take. Since EBS snapshots and AMIs are stored in Amazon S3, is it safe to assume that the likelihood you cannot roll back to a previous snapshot or AMI is the same as the likelihood that a file gets lost in S3? Amazon S3's standard storage is:

        "... Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year." http://aws.amazon.com/s3/
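
    To put the two quoted figures side by side, a tiny back-of-the-envelope calculation (purely illustrative, using the upper bound of the quoted AFR and made-up fleet sizes):

        # Rough comparison of the quoted EBS and S3 figures; the rates come from
        # the quotes above, the volume/object counts are made up for illustration.
        ebs_afr = 0.005                 # 0.5% annual failure rate (upper bound)
        s3_durability = 0.99999999999   # "eleven nines" per object per year

        volumes = 100
        snapshot_objects = 1_000_000

        print(volumes * ebs_afr)                       # ~0.5 expected volume losses per year
        print(snapshot_objects * (1 - s3_durability))  # ~0.00001 expected object losses per year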

    Read the article

  • Amazon CloudFront and EC2: Global Load Balancing

    - by Matt Rogish
    We have an app that is going to store and serve up a decent amount of data in S3 to a global audience where latency should be minimized. So, we've been doing tests with Amazon CloudFront and have seen favorable results. However, we need a thin middleware layer (to do security etc.) and we'd like to put that in EC2. Due to security restrictions, this middleware layer will do the file streaming from S3/CloudFront:

        S3/CloudFront -> EC2 -> Clients

    We can geographically distribute the EC2 nodes (US East/West, and Ireland), but the problem is that a client in the EU would hit our US server and be fed data from there, thus rendering much of the performance benefit of CloudFront moot. I've been digging through the EC2 docs but I can't find a built-in way to get a geographically distributed version of EC2 a la CloudFront. Elastic Load Balancing sounds like the way to go, but I can't seem to find a way to direct traffic with it based on the client's location... Preferably, we'd like to keep the amount of stuff outside of EC2/S3/etc. to a minimum (for obvious reasons). Any ideas how to do that within the EC2/S3 framework? DNS/routing tricks? Thanks!

    Read the article

  • s3fs not maintaining uid/gid

    - by Publiccert
    I'm mounting S3 using s3fs with the following command:

        sudo /usr/local/bin/s3fs -o use_cache=/tmp,uid=1000,gid=1000,allow_other svn.domain.com /svn

    allow_other is confirmed to be working, along with the cache. However, no variation or different placement of the uid/gid options is having any effect on the metadata in S3: both the uid and gid come up as '0' in S3. I have created a user called svn with a uid of 1000 to see if that would fix the problem. No luck.

    Read the article

  • S3sync not working

    - by user57833
    Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the AWS Management Console. Downloading the data back to a test folder results in the following error message:

        root@mybucketname:/var/s3sync# ./week_download.sh
        s3Prefix backups/weekly
        localPrefix /var/s3sync/testdown/weekly
        s3TreeRecurse mybucketname backups/weekly
        Creating new connection
        Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries left
        Response code: 200
        prefix found: /
        s3TreeRecurse mybucketname backups/weekly /
        Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries left
        Response code: 200
        S3 item backups/weekly/
        s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri Oct 29 14:21:53 UTC 2010
        local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date:
        source:
        dest:
        Update node
        s3sync.rb:638:in `initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (Errno::ENOENT)
                from s3sync.rb:638:in `open'
                from s3sync.rb:638:in `updateFrom'
                from s3sync.rb:393:in `main'
                from s3sync.rb:735

    I am using the following download script:

        #!/bin/bash
        # script to download local directory upto s3
        cd /var/s3sync/
        export AWS_ACCESS_KEY_ID=nothing to see here
        export AWS_SECRET_ACCESS_KEY=nothing to see here
        export SSL_CERT_DIR=/var/s3sync/certs
        ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown
        # copy and modify line above for each additional folder to be synced

    Any ideas? Does the download script need to point at the same place on Amazon S3 that the upload went to, i.e. the testup folder? I was hoping that, in the event of a complete failure where the original folders no longer exist, it would just download everything for me. Note: I changed my bucket names to "mybucketname" so that they are not public!

    Read the article

  • EC2 persistence of machine

    - by Seagull
    I want to 'persist' my Amazon EC2 images. My scenario:

        - I have a range of Windows and Linux machines
        - Some machines are EBS backed, whereas others are S3 backed
        - I need to be able to persist a machine (put it to sleep), preferably keeping all the settings active that I had when the machine was running
        - I need to be able to quickly wake a machine from sleep (ideally with an SLA of less than 2 minutes to turn on, if such an SLA is available with Amazon)

    Here's the stuff that confuses me:

        - AWS allows me to put EBS backed machines to sleep, but not S3 backed ones.
        - I believe I can put S3 machines into some sort of persistence mode, but this involves shutting down the machine, writing it to S3 storage and then recovering from there (not a real sleep mode, but at least I don't continue to get billed for CPU).
        - S3 backing seems to take a long time either to write a machine to disk or to recover (turn on) a machine.
        - I can't tell immediately which machines are EBS backed and which are S3 backed. It seems like I can instantiate either type, but it's not immediately clear how Amazon decides whether a given machine should be EBS or S3 backed.

    Advice?
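
    As a reference point for the EBS-backed case, "sleep" maps onto stopping and later starting the instance, which keeps its volumes and most settings; a hedged Python/boto3 sketch with a placeholder instance ID (S3/instance-store backed machines cannot be stopped this way):

        # Illustrative only: stop an EBS-backed instance ("sleep") and start it
        # again later; the instance ID is a placeholder.
        import boto3

        ec2 = boto3.client("ec2")
        instance_id = "i-0123456789abcdef0"

        ec2.stop_instances(InstanceIds=[instance_id])   # compute billing stops; EBS storage is still billed
        # ... later ...
        ec2.start_instances(InstanceIds=[instance_id])  # typically back up within a couple of minutes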

    Read the article

  • How to connect Samsung Galaxy S3 via USB?

    - by dez93_2000
    Connecting as either MTP or PTP: neither allows me to see pictures saved by default by the phone camera to the DCIM folder on the external SD card. Similar problems with previous models (e.g. the S2) were solvable via 'USB utilities' in the wireless & networking settings, but this option is no longer present. Other suggestions have mentioned uninstalling various libraries... but I don't want to just start cutting stuff without knowing it'll help. Any thoughts on how to mount a Samsung Galaxy S3 over USB?

    Read the article

  • I've got some problems with a new Acer Aspire S3

    - by xcariba
    I just got a new Acer Aspire S3 laptop and I've installed Ubuntu on it. Here are some problems:

        1. Bluetooth (AR3012) can't find any devices. rfkill shows that everything is fine, but I still can't connect to any device. (Maybe it was turned off in the stock Windows install?)
        2. I can't change the screen brightness. (I've tried Timex's solution with the new 3.2-rc kernel and some grub options; the touchpad works fine, but I still can't use Fn+Right/Left.)
        3. 720p video in all players (VLC, mplayer and totem) plays incorrectly. I found some similar problems on Intel video chips related to vblank sync or something like that. Video plays as it should, but I always see a line which appears in action scenes (it looks like some parts of the screen update faster).

    Maybe there are some solutions from similar laptops that I can find?

    PS: I've tried Ubuntu 11.04 with the old kernel and Linux Mint 12-rc with the 3.2-rc2 kernel.

    Read the article

  • Does Amazon S3's HTTP Uploads feature support web-hook style callbacks?

    - by Gabe Hollombe
    When uploading files to Amazon S3 using the browser http upload feature, I know I can specify a success_action_redirect field/value that will tell my browser where to go when the upload is done. I'm wondering: is it possible to ask Amazon to make a web hook style POST request to my web server whenever a file gets uploaded? Basically, I want a way of being notified whenever a client uploads a new file, so that my server can process the upload. I'd like to do this without relying on the client to make the request to my server to tell me the file has been uploaded (never trust the client, right?).
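
    For context, the browser-upload flow being described is driven by a signed HTML POST policy whose fields (including success_action_redirect) are generated server-side, so the notification still arrives via the client's redirect rather than a direct call from S3 to your server. A hedged Python/boto3 sketch of generating such a form; the bucket, key prefix and redirect URL are placeholders:

        # Illustrative only: generate the action URL and hidden fields for a
        # browser-based S3 upload form that redirects to our server on success,
        # so the redirect handler can kick off processing.
        import boto3

        s3 = boto3.client("s3")
        post = s3.generate_presigned_post(
            Bucket="my-upload-bucket",
            Key="uploads/${filename}",
            Fields={"success_action_redirect": "https://example.com/uploads/done"},
            Conditions=[
                {"success_action_redirect": "https://example.com/uploads/done"},
                ["starts-with", "$key", "uploads/"],
            ],
            ExpiresIn=3600,
        )
        # post["url"] is the form action; post["fields"] are the hidden inputs.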

    Read the article
