Search Results

Search found 3148 results on 126 pages for 'amazon s3'.

Page 12 of 126

  • How to set up Amazon EC2 with my own OS and DB?

    - by SLim
    I have my own OS and DB: Windows Server 2008 R2 with Hyper-V and SQL Server 2008 R2, both Enterprise edition. How do I configure them so they are up and running on Amazon EC2, and what else do I need to combine with EC2 to make this work? Also, how would I install the operating system and DNS? I have never run a server before, but I just need something like a VPS to support my development and testing. Amazon EC2 seems like the best and cheapest service at only $1 per hour.

    Read the article

  • How to run AWS sample Java code on an EC2 instance

    - by SeaPlusPlus
    I just started with Amazon Web Services, and I have an EC2 instance. I downloaded the Java SDK and the Eclipse toolkit. I am able to run a sample program locally on my PC and connect to the Amazon databases, etc. My question is, what do I need to do to get this working on my EC2 instance? This may not even be specific to AWS. In Eclipse, I can just "Run as Application" and run any code. On the server side, what do I need to do? Should I FTP over my .java files? Should I export a jar and upload that? Do I need to install anything special to actually run it? I'm just trying to run the basic DynamoDB example that connects to the database and adds a new table and row.

    Read the article

  • Interesting questions related to lighttpd on Amazon EC2

    - by terence410
    This problem appeared today and I have no idea what is going on; please share your ideas. I have one EC2 DB server (MySQL + NFS file sharing + Memcached) and three EC2 web servers (lighttpd) that mount the NFS folders from the DB server. Everything ran smoothly for months, but suddenly there is an interesting phenomenon: every 8 to 10 minutes, PHP files become unreachable. This lasts about one minute and then everything returns to normal. Plain files like .html are unaffected, and all servers show the problem at exactly the same time. I have spent a whole day analysing the cause. Finally, I found that when the problem appears, lighttpd's file descriptor count suddenly increases a lot. I used ls /proc/1234/fd | wc -l to check the number of fds: it is around 250 in normal times, but when the problem appears it rises to about 1500 and then drops back to normal. It sounds funny, right? Do you have any idea what's going on? (A CPU graph of one of the web servers was attached to the original post.)
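
    One way to correlate the descriptor spikes with the 8-10 minute cycle is to poll the same /proc counter and log a timestamp whenever it jumps. A minimal Python sketch of that check; the PID and threshold come from the figures in the question and are otherwise assumptions:

        import datetime
        import os
        import time

        PID = 1234        # lighttpd PID from the question
        THRESHOLD = 500   # well above the ~250 fds seen in normal operation

        while True:
            fd_count = len(os.listdir(f"/proc/{PID}/fd"))
            if fd_count > THRESHOLD:
                stamp = datetime.datetime.now().isoformat(timespec="seconds")
                print(f"{stamp}  fd count spiked to {fd_count}")
            time.sleep(5)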

    Read the article

  • Serving gzipped files from S3 using the Asset Pipeline

    - by kmurph79
    I have a Rails 3.2.3 app on Heroku and I'm using the asset_sync gem to serve my assets from S3, via these instructions. It works great, except S3 is not serving the gzipped CSS/JS files (just the uncompressed versions). I've enabled gzip compression, to no avail:

        config.gzip_compression = true

    According to "Using GZIP with html pages served from Amazon S3", I need to add metadata to the S3 object when uploading. How would I do this in concert with the Asset Pipeline? Thank you for any help.
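
    For reference, the metadata the linked answer refers to is a Content-Encoding: gzip header stored on the object itself at upload time. The sketch below is not asset_sync-specific; it is a minimal boto3 illustration of the S3 side, with hypothetical bucket and file names and the usual assumption that AWS credentials are configured:

        import gzip
        import shutil
        import boto3  # assumes boto3 is installed and credentials are configured

        s3 = boto3.client("s3")

        def upload_gzipped_asset(local_path, bucket, key, content_type):
            """Gzip a compiled asset and upload it with the metadata S3 needs
            to serve it as gzip (Content-Encoding) to browsers."""
            gz_path = local_path + ".gz"
            with open(local_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
                shutil.copyfileobj(src, dst)
            s3.upload_file(
                gz_path, bucket, key,
                ExtraArgs={
                    "ContentType": content_type,        # e.g. "text/css"
                    "ContentEncoding": "gzip",          # tells browsers to decompress
                    "CacheControl": "public, max-age=31536000",
                },
            )

        # Hypothetical bucket and asset names, for illustration only
        upload_gzipped_asset("public/assets/application.css",
                             "my-assets-bucket", "assets/application.css", "text/css")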

    Read the article

  • Serving images from Amazon S3 in PHP application

    - by luckytaxi
    So it just occurred to me that once I upload profile pics to S3, I have to figure out a way to keep track of the files. For example, if "susan" uploads 3 profile pics, I need to recall those 3 pictures and display them on her profile page when someone views it. With that said, would the following work?

    1. The user uploads a picture from a form.
    2. Save the file information (filename, user info, etc.) into the DB and reference the S3 URL.
    3. Upload the photo to S3.
    4. When displaying pictures, query the DB for the info and display the images from S3 accordingly.
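
    That flow works in practice; the key point is that the DB stores only the S3 key (or URL) while the image bytes live in S3. A minimal Python/boto3 sketch of the four steps follows; the bucket name, table schema, and URL style are assumptions for illustration, and public objects or pre-signed URLs would be needed for the links to resolve:

        import sqlite3
        import uuid
        import boto3  # assumes boto3 is installed and credentials are configured

        BUCKET = "my-profile-pics"          # hypothetical bucket name
        s3 = boto3.client("s3")
        db = sqlite3.connect("profiles.db") # hypothetical local DB for the example
        db.execute("CREATE TABLE IF NOT EXISTS photos (user TEXT, s3_key TEXT)")

        def save_profile_pic(username, local_path):
            # Steps 1-2: generate a unique key and record it alongside the user
            key = f"profiles/{username}/{uuid.uuid4().hex}.jpg"
            db.execute("INSERT INTO photos VALUES (?, ?)", (username, key))
            db.commit()
            # Step 3: upload the image itself to S3
            s3.upload_file(local_path, BUCKET, key)
            return key

        def profile_pic_urls(username):
            # Step 4: look up the keys in the DB and build S3 URLs for the page
            rows = db.execute("SELECT s3_key FROM photos WHERE user = ?", (username,))
            return [f"https://{BUCKET}.s3.amazonaws.com/{key}" for (key,) in rows]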

    Read the article

  • PHP exec problem with s3-put

    - by schneck
    Hi there, I use the s3-bash project to upload data to an S3 bucket. My command looks like this:

        /mypath/s3_bash/s3-put -v -k '123456789' -s '/mypath/secret' -T '/mypath/upload/myuploadfile' '/my.bucket/mykeyname'

    I can run the command from the command line (Mac OS X) and it works well. Now I want to execute it from a PHP script:

        exec($command, $output);

    but in $output, the s3-put command only returns the command's help text. I log the command, and it works if I copy and paste it from the log to the command line, so the command itself is not the problem. It seems that PHP does not pass all the parameters to the command line, although I run escapeshellarg() over all the parameters. I'm using a local XAMPP test environment; safe_mode is off. Any ideas?

    Read the article

  • Remote backup with Amazon S3

    - by mjr
    Does anyone know of some good Mac software to sync a network-attached storage device or a Drobo to Amazon S3? I'd love to mail them a storage device for the first dump and then do nightly syncs.

    Read the article

  • How to have SSL on Amazon Elastic Load Balancer with a Gunicorn EC2 server?

    - by Riegie Godwin
    I'm a self-taught back-end engineer, so I'm learning all of this as I go along. For the longest time, I've been using basic authentication for my users. Many developers advise against this approach, since each request contains the username and password in clear text; anyone with the right skills can sniff the connection between my iOS application and my Django/Gunicorn server and obtain the password. I don't want to put my users' credentials at risk, so I would like to implement a more secure form of authentication, and SSL seems to be the most viable option. My server doesn't serve any static content or anything of that sort; all it does is send and receive JSON responses from and to my iOS application. Here is my current topology: iOS application ------ Amazon Elastic Load Balancer ------ EC2 instances running Gunicorn over HTTP on port 8000. I have a CNAME record from GoDaddy for the Elastic Load Balancer DNS name, so instead of using the long DNS name to make requests, I just use server.example.com and send requests to server.example.com:8000/. This setup works and has been solid, but I need something more secure. I would like to set up SSL between my iOS application and my Elastic Load Balancer. How can I go about doing this? Since I am only sending JSON responses to my application, do I really need to buy a certificate from a CA, or can I create my own? (Browsers will not be interacting with my servers; they are only designed to send JSON responses to my iOS application.)
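
    A common arrangement for this topology is to terminate SSL at the load balancer: add an HTTPS listener on port 443 that forwards to the instances over plain HTTP on port 8000, using a certificate uploaded to IAM (or issued by ACM). A hedged boto3 sketch for a classic ELB follows; the load balancer name and certificate ARN are placeholders, not values from the question:

        import boto3  # assumes boto3 is installed and credentials are configured

        elb = boto3.client("elb", region_name="us-east-1")  # classic ELB API

        # Add an HTTPS listener that terminates SSL at the load balancer and
        # forwards decrypted traffic to Gunicorn over plain HTTP on port 8000.
        elb.create_load_balancer_listeners(
            LoadBalancerName="my-load-balancer",            # hypothetical name
            Listeners=[{
                "Protocol": "HTTPS",
                "LoadBalancerPort": 443,
                "InstanceProtocol": "HTTP",
                "InstancePort": 8000,
                # Certificate previously uploaded to IAM (or issued via ACM)
                "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/example-cert",
            }],
        )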

    Read the article

  • CheckPoint/Amazon VPC VPN tunnel working inconsistently

    - by Lee
    First-time poster, so please be gentle and correct me if there's Server Fault etiquette I'm missing. We have two CheckPoint edge devices at sites A and B, independently managed, connecting to two Amazon private clouds. In both cases, the two Amazon VPCs are in the same community on the CheckPoint device, and a VPN tunnel exists between the two CheckPoint devices as well. Between sites A and B and the Amazon VPC in Northern Virginia, we are unable to keep more than one tunnel up. Both will come up, but tunnel 2 will drop an hour after initiation and will not come back up while tunnel 1 is up. We believe the 1-hour period is due to IPsec phase 2 renegotiation, but can't be sure. On our side, we see the tunnel 2 remote endpoint as not responding to phase 2 negotiation. Between sites A and B and the Amazon VPC in Oregon, we have no issues: both tunnels are up and fail over properly. The CheckPoint gateways are using domain-based VPNs, which, according to CheckPoint's advice to Amazon, won't work. Yet in Oregon it does. We've pursued this with Amazon and, despite the fact that it's working in Oregon, they've refused to troubleshoot with us further. Can anyone suggest anything we can do to try to get this stabilized? Going to route-based VPNs is not an option for us.

    Read the article

  • How to define AUTHPARAMS in Amazon EC2 API call

    - by The Joker
    I am trying to make an API call to EC2 to add my IP address to a security group:

        https://ec2.amazonaws.com/?Action=AuthorizeSecurityGroupIngress
            &GroupName=grppp20
            &GroupId=sg-b2z982mq
            &IpPermissions.1.IpProtocol=tcp
            &IpPermissions.1.FromPort=3389
            &IpPermissions.1.ToPort=3389
            &IpPermissions.1.IpRanges.1.CidrIp=22.951.17.728/32
            &&AWSAccessKeyId=AOPLDRACULALK6U7A

    And I get the following error:

        AWS was not able to validate the provided access credentials

    I have a secret access key and a username. I searched the internet and found that we have to make a signature from the secret key and use it in the request instead of adding the key directly. Can anyone tell me how to make a signature from my AWS secret key and how to use it with my API call? Let my secret key be 2WwRiQzBs7RTFG4545PIOJ7812CXZ and my username thejoker.
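
    That error usually means the request is missing the Signature, SignatureVersion, SignatureMethod and Timestamp parameters the EC2 Query API expects. As a hedged illustration rather than a drop-in fix (placeholder credentials, an assumed API version, and an example IP are used instead of the values in the question), a Signature Version 2 signed request can be built in Python roughly like this:

        import base64
        import hashlib
        import hmac
        import time
        import urllib.parse

        # Placeholder credentials -- never paste real keys into code or questions
        ACCESS_KEY = "AKIDEXAMPLE"
        SECRET_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
        HOST = "ec2.amazonaws.com"

        def signed_ec2_url(params):
            """Build a Signature Version 2 signed query URL for the EC2 Query API."""
            def q(s):
                # RFC 3986 percent-encoding, as SigV2 requires
                return urllib.parse.quote(s, safe="-_.~")

            params = dict(params)
            params.update({
                "AWSAccessKeyId": ACCESS_KEY,
                "SignatureVersion": "2",
                "SignatureMethod": "HmacSHA256",
                "Timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "Version": "2012-06-01",   # assumed API version, for illustration
            })
            # Canonical query string: parameters sorted by name, then encoded
            canonical = "&".join(f"{q(k)}={q(v)}" for k, v in sorted(params.items()))
            string_to_sign = "\n".join(["GET", HOST, "/", canonical])
            digest = hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha256).digest()
            signature = q(base64.b64encode(digest).decode())
            return f"https://{HOST}/?{canonical}&Signature={signature}"

        url = signed_ec2_url({
            "Action": "AuthorizeSecurityGroupIngress",
            "GroupId": "sg-b2z982mq",
            "IpPermissions.1.IpProtocol": "tcp",
            "IpPermissions.1.FromPort": "3389",
            "IpPermissions.1.ToPort": "3389",
            "IpPermissions.1.IpRanges.1.CidrIp": "203.0.113.10/32",  # example address
        })
        print(url)  # fetch the URL with any HTTP client once the values are real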

    Read the article

  • Can't access ephemeral storage on Amazon Ubuntu instance

    - by matt burns
    I want to utilise my ephemeral storage as mentioned in this question, but I seem to be falling at the first hurdle: I can't even see /mnt:

        ~$ df -ah
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      8.0G  855M  6.8G  12% /
        proc               0     0     0    - /proc
        sysfs              0     0     0    - /sys
        none               0     0     0    - /sys/fs/fuse/connections
        none               0     0     0    - /sys/kernel/debug
        none               0     0     0    - /sys/kernel/security
        udev            288M  8.0K  288M   1% /dev
        devpts             0     0     0    - /dev/pts
        tmpfs           119M  152K  118M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            296M     0  296M   0% /run/shm

    This is from a vanilla instance of an Ubuntu AMI (12.04-amd64-server-20120424, ami-a29943cb). I'm not bothered about resizing the partition; I just want to be able to use the space for writing temp files.

    Read the article

  • How do I login once I promote my Windows Server 2012 to domain controller in my Amazon VPC?

    - by Developr
    I am following this guide: http://d36cz9buwru1tt.cloudfront.net/pdf/EC2_AD_How_to.pdf to set up my domain controller. I get AD installed correctly, but when I do the promotion to DC, the server restarts, and when I try to access it I am unable to log in using any of the local system accounts. I even created my own separate user account, but that did not help. I made sure to disable the Amazon settings for renaming the machine; the machine has a static IP and has been renamed.

    Read the article

  • CloudWatch alarms from an Amazon AWS EC2 instance are always in UTC; how can I change the alarm time zone to Eastern?

    - by RightHandedMonkey
    I am running an Amazon Linux AMI, and the alarms that I've set up all arrive showing UTC (universal time). Reading these alarms is inconvenient, and I'd like them to arrive in the Eastern time zone (America/New_York). I've already pointed /etc/localtime at /usr/share/zoneinfo/America/New_York:

        ln -s /usr/share/zoneinfo/America/New_York /etc/localtime

    But it is still sending alarms in UTC. Does anyone have a solution to this?
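
    CloudWatch reports alarm times in UTC regardless of the instance's /etc/localtime, so the usual workaround is to convert the timestamps wherever the alarm text is consumed. A small Python sketch of that conversion; the timestamp format shown is an assumption, and real alarm payloads may differ slightly:

        from datetime import datetime, timezone
        from zoneinfo import ZoneInfo  # Python 3.9+

        def to_eastern(utc_timestamp: str) -> str:
            """Convert a CloudWatch-style UTC timestamp to America/New_York."""
            dt = datetime.strptime(utc_timestamp, "%Y-%m-%dT%H:%M:%SZ")
            dt = dt.replace(tzinfo=timezone.utc)
            return dt.astimezone(ZoneInfo("America/New_York")).strftime("%Y-%m-%d %H:%M:%S %Z")

        print(to_eastern("2012-06-15T18:30:00Z"))  # -> 2012-06-15 14:30:00 EDT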

    Read the article

  • Amazon S3: iterating through multi-page results (withMarker)

    - by Jitu
    Trying to iterate through an Amazon S3 bucket that has around 5000+ keys, using sample code based on the linked page in the Amazon Developer Guide: http://docs.amazonwebservices.com/AmazonS3/latest/dev/ListingObjectKeysUsingNetSDK.html The issue is that iteration fails when the NextMarker passed in is longer than 128 characters, which seems unusual, since withMarker accepts a string parameter and there is no documented limit on it.

        request.Marker = response.NextMarker;

    Has anyone faced a similar issue? Thanks in advance.
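
    The question uses the .NET SDK, but the marker pattern is the same in every SDK. For comparison, a minimal boto3 sketch of the loop (the bucket name is a placeholder), including the detail that NextMarker is only returned when a Delimiter is supplied:

        import boto3  # assumes boto3 is installed and credentials are configured

        s3 = boto3.client("s3")
        BUCKET = "my-bucket"   # hypothetical bucket name

        # Manual marker loop, mirroring request.Marker = response.NextMarker
        marker = ""
        while True:
            resp = s3.list_objects(Bucket=BUCKET, Marker=marker, MaxKeys=1000)
            for obj in resp.get("Contents", []):
                print(obj["Key"])
            if not resp.get("IsTruncated"):
                break
            # NextMarker is only present when a Delimiter is used; otherwise
            # the last key of this page is the marker for the next request.
            marker = resp.get("NextMarker") or resp["Contents"][-1]["Key"]

        # Or let the SDK paginate for you:
        for page in s3.get_paginator("list_objects").paginate(Bucket=BUCKET):
            for obj in page.get("Contents", []):
                print(obj["Key"])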

    Read the article

  • PostgreSQL performance on EC2/EBS

    - by matija
    What gives the best performance for running PostgreSQL on EC2: EBS volumes in RAID, or PGDATA on /mnt? Do you have any preferences or experiences? The main plus of running PostgreSQL on EBS is being able to switch from one instance to another. Could that be the reason it is slower than the /mnt partition? P.S. I'm running PostgreSQL 8.4 with a data size of about 50 GB on an Amazon EC2 xlarge (64-bit) instance.

    Read the article

  • A decent S3 bucket manager for Ubuntu

    - by Luke
    I'm looking for a decent S3 bucket manager for Ubuntu (GNOME). I'd prefer it to integrate with Nautilus so buckets look like any other drive (a la WebDAV), but so far I haven't been able to find anything I'd like to use on a daily basis. What bucket managers do you use on Ubuntu, or what would you recommend? UPDATE: S3FS seems to be what I'd really want, since it lets me integrate my buckets directly into my file system. However, after trying S3FS I don't get the impression that it's ready for prime time. I'm stunned that there are no decent bucket managers out there for Ubuntu/GNOME; I guess I'll have to build one myself...

    Read the article

  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
    Here is the problem. I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to serve static files (CDN-style) from my 'home' server, so I wrote a script that syncs them from S3, but there is a window of (at least) one minute before a file is in sync. I see two solutions to the problem of pictures not yet being available on the 'home' server:

    1. Write a script on EC2 (where the app resides) that fetches from the DB the pictures with a "not-yet-synced" status, which is the default when a user uploads a picture. The script pings each picture and, if it gets an OK response, updates the DB from "not-yet-synced" to "synced".
    2. The preferred solution would be to let Apache (in this case) redirect a request for an image to S3 when it sees a 404 (i.e. it doesn't find the requested image). That way I wouldn't need the script from solution 1.

    Which approach would you suggest for solving this redundancy problem, and what is common practice in production environments? To further clarify: I'd like to serve images from the 'home' server first, and fall back to S3 if that fails. Tnx, Alan
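
    As a rough sketch of solution 1 (the table and column names, host, and status values here are assumptions for illustration, not the poster's schema), a periodic job could look like this using only the Python standard library:

        import sqlite3
        import urllib.error
        import urllib.request

        HOME_BASE = "http://static.example.com"   # hypothetical 'home' server host
        db = sqlite3.connect("app.db")            # assumed table: images(path, status)

        def mark_synced_images():
            """Ping each not-yet-synced image on the home server and flip its
            status once the file responds with HTTP 200."""
            rows = db.execute("SELECT path FROM images WHERE status = 'not-yet-synced'")
            for (path,) in rows.fetchall():
                try:
                    req = urllib.request.Request(f"{HOME_BASE}/{path}", method="HEAD")
                    if urllib.request.urlopen(req, timeout=5).status == 200:
                        db.execute("UPDATE images SET status = 'synced' WHERE path = ?", (path,))
                except urllib.error.URLError:
                    pass  # still missing on the home server; try again on the next run
            db.commit()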

    Read the article

  • Using a script with Duplicity + S3 that excludes large files

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error; however, if I copy and paste the same command generated by the script, everything works... Here is the script:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'  # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When the script is run I get the error:

        Command line error: Expected 2 args, got 6

    Where am I going wrong?
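
    The "Expected 2 args, got 6" error is typically a quoting problem: the backtick-expanded exclude output is word-split, but the literal single quotes it contains are never parsed by the shell, so filenames with spaces become extra positional arguments. As a hedged alternative sketch rather than the poster's fix, the underlying point can be illustrated by building the argument list programmatically and invoking duplicity without a shell at all, for example from Python (same hypothetical paths and placeholder credentials as the script):

        import os
        import subprocess

        SOURCE = "/home/"
        DEST = "s3+http://s3bucket"            # hypothetical bucket, as in the script
        GPG_KEY = "gpgkey"
        LIMIT = 100 * 1024 * 1024              # 100 MB

        # Build --exclude arguments as a real list, so no shell quoting is involved
        excludes = []
        for root, _dirs, files in os.walk("/home/jason"):
            for name in files:
                path = os.path.join(root, name)
                if os.path.isfile(path) and os.path.getsize(path) > LIMIT:
                    excludes += ["--exclude", f"**{name}"]

        env = dict(os.environ,
                   AWS_ACCESS_KEY_ID="accesskey",
                   AWS_SECRET_ACCESS_KEY="secretaccesskey",
                   PASSPHRASE="password")

        cmd = ["duplicity", f"--encrypt-key={GPG_KEY}", f"--sign-key={GPG_KEY}",
               *excludes, SOURCE, DEST]
        print("Using command:", " ".join(cmd))
        subprocess.run(cmd, env=env, check=True)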

    Read the article

  • Problem with script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error; however, if I copy and paste the same command generated by the script, everything works... Here is the script:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'  # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When the script is run I get the error:

        Command line error: Expected 2 args, got 6

    Where am I going wrong?

    Read the article

  • Backup script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. My script generates the proper command, but when it is run from within the script it produces an error; however, if the same command is run manually, everything works... Here is the script, based on one easily found with Google:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="7743E14E"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'  # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    If I run it, I receive the error:

        Command line error: Expected 2 args, got 6
        Enter 'duplicity --help' for help screen.

    Any help you could offer would be greatly appreciated.

    Read the article

  • Where would an S3 upload speed cap originate?

    - by CoreyH
    I do a ton of uploading to S3 and am experiencing capped speeds, and I can't quite figure out how to address it. The setup: Windows Server 2008 R2 x64, an external HD, a Java-based upload tool called Jsh3ll, and custom VBS scripts to kick the jobs off. Running one process at a time, I am always limited to about 4 Mbps. I have FiOS at 35/35 Mbps, so it isn't an outright bandwidth limit. And I can run parallel instances all the way up to 35 Mbps, so I know the problem isn't gateway/NIC/machine/Amazon related. Running parallel instances works as a solution to a degree, but it greatly increases the complexity of my workflow; solving this would make my life dramatically easier. When I was first doing this I played around with a bunch of Windows TCP parameters and was briefly able to get unconstrained bandwidth, but it wasn't repeatable. Thoughts?
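
    A per-process ceiling like this is consistent with single-connection TCP throughput limits rather than an S3-side cap. One way to use more of the link from a single process is multipart upload with several concurrent parts; the sketch below uses boto3 rather than the Jsh3ll tool the poster uses, and the file and bucket names are placeholders:

        import boto3  # assumes boto3 is installed and credentials are configured
        from boto3.s3.transfer import TransferConfig

        s3 = boto3.client("s3")

        # Split large objects into 16 MB parts and upload up to 10 parts at once,
        # so one logical upload uses several TCP connections in parallel.
        config = TransferConfig(
            multipart_threshold=16 * 1024 * 1024,
            multipart_chunksize=16 * 1024 * 1024,
            max_concurrency=10,
        )

        s3.upload_file("D:/exports/archive.bin",      # hypothetical local file
                       "my-upload-bucket",            # hypothetical bucket
                       "archive.bin",
                       Config=config)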

    Read the article

  • Reinstall default apps on Galaxy S3 before updating to Jelly Bean

    - by Bruno-P
    I want to update my Galaxy S3 to Jelly Bean, but after downloading the firmware using Kies, the update starts and then stops with a "dead" Android and a red triangle icon. I think it's because I have removed some default apps like ChatOn and Yahoo widgets. Is there any way to get them back, or to install the official Jelly Bean update without a factory reset? I don't want to reinstall my apps and lose my settings each time I need to update the OS (I also don't want the pre-installed bloatware apps).

    Read the article

  • Amazon S3 - Storage Class and Server Side Encryption

    - by Steven
    Ahhh! I am using Amazon S3 for some low-price storage to clear down our SAN. I created the bucket, created a root folder, and set the storage class to Standard and server-side encryption to AES. I started a copy job to move the files; some files copied over and I checked them: Reduced Redundancy, encryption set to none. WTF? So I deleted all files and folders, manually recreated the folder structure, and again set the storage class and encryption level. I copied some files and bam, still showing (at the file level) Reduced Redundancy and no encryption. So my question is this: (a) is it really stored redundantly and encrypted, just not showing it properly (the root folder shows it, so how can the files not?), or (b) am I being a huge tool and missing something?
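
    Storage class and server-side encryption are per-object properties chosen each time an object is written or copied; they are not inherited from the bucket or a "folder" prefix, so a copy tool has to request them for every object. A hedged boto3 sketch of what that looks like, with placeholder bucket and key names:

        import boto3  # assumes boto3 is installed and credentials are configured

        s3 = boto3.client("s3")
        BUCKET = "my-archive-bucket"   # hypothetical bucket name

        # Upload: storage class and encryption are set when the object is written,
        # not inherited from the "folder".
        s3.upload_file("report.pdf", BUCKET, "archive/report.pdf",
                       ExtraArgs={"StorageClass": "STANDARD",
                                  "ServerSideEncryption": "AES256"})

        # Changing an existing object's settings means rewriting it via a copy:
        s3.copy_object(Bucket=BUCKET, Key="archive/report.pdf",
                       CopySource={"Bucket": BUCKET, "Key": "archive/report.pdf"},
                       StorageClass="STANDARD",
                       ServerSideEncryption="AES256",
                       MetadataDirective="COPY")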

    Read the article

  • How to maximize parallel download from S3

    - by StCee
    I have a lot of images to load from Amazon S3 on a single page, and sometimes it takes quite some time to load them all. I've heard that splitting the images across different sub-domains helps parallel downloads, but what does the actual implementation look like? While it is easy to split into sub-domains like static, image, etc., should I make something like 10 sub-domains (image1, image2, ...) to load, say, 100 images? Or is there a cleverer way to do it? (By the way, I am considering using memcache to cache the S3 images; I am not sure if that is possible.) I would be grateful for any further comments. Thanks a lot!
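
    A common pattern is to create a small, fixed set of CNAMEs that all point at the same bucket or CDN and pick the hostname from a hash of the image key, so each image always maps to the same sub-domain and stays cache-friendly. A minimal Python sketch under that assumption; the img0..img3.example.com names are hypothetical:

        import zlib

        SHARDS = 4  # assumed CNAMEs img0..img3.example.com, all aliases for one origin

        def sharded_url(key: str) -> str:
            """Map an image key to a stable sub-domain so browsers open parallel
            connections across hosts while each URL stays cache-friendly."""
            shard = zlib.crc32(key.encode()) % SHARDS
            return f"https://img{shard}.example.com/{key}"

        for key in ["avatars/1.jpg", "avatars/2.jpg", "banners/home.png"]:
            print(sharded_url(key))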

    Read the article
