Search Results

Search found 2846 results on 114 pages for 'amazon elb'.


  • Amazon EC2 pricing question

    - by Industrial
    Hi everyone, I took my first step today in working with cloud servers and chose Amazon EC2 for this project. Since I am a bit of a newcomer to this, I didn't fully understand their pricing: what happens when an instance is idling with no connections being made? Does it still cost us money? It would be sad to have instances idling and costing us money when we do not use them... Thanks a lot!

  • NSURLSession and Amazon S3 uploads

    - by George Green
    I have an app which currently uploads images to Amazon S3. I have been trying to switch it from NSURLConnection to NSURLSession so that the uploads can continue while the app is in the background, but I seem to be hitting an issue. The NSURLRequest is created and passed to the NSURLSession, but Amazon sends back a 403 Forbidden response; if I pass the same request to an NSURLConnection, it uploads the file perfectly. Here is the code that creates the request:

        NSString *requestURLString = [NSString stringWithFormat:@"http://%@.%@/%@/%@", BUCKET_NAME, AWS_HOST, DIRECTORY_NAME, filename];
        NSURL *requestURL = [NSURL URLWithString:requestURLString];
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:requestURL
                                                                cachePolicy:NSURLRequestReloadIgnoringLocalAndRemoteCacheData
                                                            timeoutInterval:60.0];

        // Configure request
        [request setHTTPMethod:@"PUT"];
        [request setValue:[NSString stringWithFormat:@"%@.%@", BUCKET_NAME, AWS_HOST] forHTTPHeaderField:@"Host"];
        [request setValue:[self formattedDateString] forHTTPHeaderField:@"Date"];
        [request setValue:@"public-read" forHTTPHeaderField:@"x-amz-acl"];
        [request setHTTPBody:imageData];

    And then this signs the request (I think this came from another SO answer):

        NSString *contentMd5 = [request valueForHTTPHeaderField:@"Content-MD5"];
        NSString *contentType = [request valueForHTTPHeaderField:@"Content-Type"];
        NSString *timestamp = [request valueForHTTPHeaderField:@"Date"];
        if (nil == contentMd5) contentMd5 = @"";
        if (nil == contentType) contentType = @"";

        NSMutableString *canonicalizedAmzHeaders = [NSMutableString string];
        NSArray *sortedHeaders = [[[request allHTTPHeaderFields] allKeys] sortedArrayUsingSelector:@selector(caseInsensitiveCompare:)];
        for (id key in sortedHeaders) {
            NSString *keyName = [(NSString *)key lowercaseString];
            if ([keyName hasPrefix:@"x-amz-"]) {
                [canonicalizedAmzHeaders appendFormat:@"%@:%@\n", keyName, [request valueForHTTPHeaderField:(NSString *)key]];
            }
        }

        NSString *bucket = @"";
        NSString *path = request.URL.path;
        NSString *query = request.URL.query;
        NSString *host = [request valueForHTTPHeaderField:@"Host"];
        if (![host isEqualToString:@"s3.amazonaws.com"]) {
            bucket = [host substringToIndex:[host rangeOfString:@".s3.amazonaws.com"].location];
        }

        NSString *canonicalizedResource;
        if (nil == path || path.length < 1) {
            if (nil == bucket || bucket.length < 1) {
                canonicalizedResource = @"/";
            } else {
                canonicalizedResource = [NSString stringWithFormat:@"/%@/", bucket];
            }
        } else {
            canonicalizedResource = [NSString stringWithFormat:@"/%@%@", bucket, path];
        }
        if (query != nil && [query length] > 0) {
            canonicalizedResource = [canonicalizedResource stringByAppendingFormat:@"?%@", query];
        }

        NSString *stringToSign = [NSString stringWithFormat:@"%@\n%@\n%@\n%@\n%@%@",
                                  [request HTTPMethod], contentMd5, contentType, timestamp,
                                  canonicalizedAmzHeaders, canonicalizedResource];
        NSString *signature = [self signatureForString:stringToSign];
        [request setValue:[NSString stringWithFormat:@"AWS %@:%@", self.S3AccessKey, signature] forHTTPHeaderField:@"Authorization"];

    Then if I use this line of code:

        [NSURLConnection connectionWithRequest:request delegate:self];

    it works and uploads the file, but if I use:

        NSURLSessionUploadTask *task = [self.session uploadTaskWithRequest:request fromFile:[NSURL fileURLWithPath:filePath]];
        [task resume];

    I get the forbidden error. Has anyone tried uploading to S3 this way and hit similar issues? I wonder if it has to do with the way the session pauses and resumes uploads, or whether it is doing something funny to the request. One possible workaround would be to upload the file to an interim server that I control and have it forward the file to S3 once complete, but this is clearly not an ideal solution. Any help is much appreciated! Thanks!
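
    One way to rule out a signing bug is to cross-check the Objective-C output against a known-good AWS Signature Version 2 implementation. A minimal sketch in Python; the secret key, date and resource below are placeholders, laid out the same way as the string-to-sign built above:

        import base64
        import hashlib
        import hmac

        def sign_s3_v2(secret_key, string_to_sign):
            """Return the base64-encoded HMAC-SHA1 that S3 Signature Version 2 expects."""
            digest = hmac.new(secret_key.encode("utf-8"),
                              string_to_sign.encode("utf-8"),
                              hashlib.sha1).digest()
            return base64.b64encode(digest).decode("ascii")

        # VERB \n Content-MD5 \n Content-Type \n Date \n canonicalized x-amz- headers + resource
        string_to_sign = ("PUT\n"
                          "\n"
                          "\n"
                          "Wed, 28 Mar 2012 01:29:59 GMT\n"
                          "x-amz-acl:public-read\n"
                          "/my-bucket/photos/image.jpg")

        print(sign_s3_v2("MY_SECRET_KEY", string_to_sign))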

  • Filesize with SWFUpload and Amazon S3

    - by Dodinas
    Hello all, I'm currently using SWFUpload to upload files to my S3 bucket, and it's working great. I'm using the script from a website here: http://www.anedix.com/news/article/50. Again, the upload to my S3 bucket works fine; however, I've been running into an issue when attempting to upload larger files. It seems that I cannot upload anything over 50MB. I have tried this from both my web host and locally, using my local testing environment. My question is this: when uploading with SWFUpload, the file should be going straight to Amazon S3, correct? If so, then PHP settings such as upload_max_filesize and post_max_size should not affect it? (Even though in my local environment I've set them to 1024MB.) Essentially, what the script does is show that it's uploading the file (it takes the appropriate amount of time), redirect to the success page, and not throw any errors. Any ideas on why this would be happening, or how I can troubleshoot it? Thanks!
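
    If the upload really does go straight to S3, one thing worth checking is the POST policy generated by the signing script: browser uploads to S3 are capped by the policy's content-length-range condition rather than by PHP's ini settings. A sketch of such a policy with a roughly 500MB cap (bucket name, key prefix and expiration are placeholders):

        {
          "expiration": "2015-01-01T00:00:00Z",
          "conditions": [
            { "bucket": "my-upload-bucket" },
            [ "starts-with", "$key", "uploads/" ],
            { "acl": "public-read" },
            [ "content-length-range", 0, 524288000 ]
          ]
        }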

  • Using .htaccess to serve files from Amazon S3 / CloudFront

    - by Adrian A.
    My ideal setup would be to take a current client's site and upload a .htaccess with a regex inside that matches the URI; if it finds a certain file extension, it would use the same path but with an altered domain. i.e. normal paths:

        http://www.domain.com/something/images/someimage.jpeg
        http://www.domain.com/assets/js/jquery.js

    translated by .htaccess, the above would turn into:

        http://mycdn.other.com/something/images/someimage.jpeg
        http://mycdn.other.com/assets/js/jquery.js

    I googled this for hours in a row, with no luck. Again, this is for actually making use of Amazon CloudFront. S3 is already mounted to the website for backups and storing files using s3fs, but that doesn't solve the issue, since it uses S3 directly rather than CloudFront.
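
    A sketch of the kind of .htaccess rule being described, redirecting requests for static asset extensions to the CDN hostname with mod_rewrite (the CDN domain is the placeholder one from the question):

        RewriteEngine On
        # send common static assets to the CDN, keeping the original path
        RewriteRule \.(jpe?g|png|gif|css|js)$ http://mycdn.other.com%{REQUEST_URI} [R=301,L]

    Note that this costs one HTTP redirect per asset; rewriting the asset URLs in the HTML to point at the CDN directly avoids that round-trip, but the rule above matches what is being asked for.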

  • Amazon EC2 Load Balancer: Defending against DoS attack?

    - by netvope
    We usually blacklist IP addresses with iptables. But in Amazon EC2, if a connection goes through the Elastic Load Balancer, the remote address is replaced by the load balancer's address, rendering iptables useless. In the case of HTTP, apparently the only way to find out the real remote address is to look at the X-Forwarded-For HTTP header. To me, blocking IPs at the web application level is not an effective approach. What is the best practice for defending against DoS attacks in this scenario? In this article, someone suggested replacing the Elastic Load Balancer with HAProxy. However, there are certain disadvantages to doing this, and I'm trying to see if there are any better alternatives.
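
    For reference, ELB appends the connecting client's address to X-Forwarded-For, so the right-most entry is the address the balancer actually saw, even if a client spoofs its own header. A minimal WSGI-style sketch in Python of filtering on that value (the blacklist is hypothetical) — application-level, as noted above, but it at least recovers the real client IP behind ELB:

        BLOCKED_IPS = {"198.51.100.23", "203.0.113.99"}  # hypothetical blacklist

        def ip_filter(app):
            """Wrap a WSGI app and reject requests from blacklisted client IPs."""
            def middleware(environ, start_response):
                xff = environ.get("HTTP_X_FORWARDED_FOR", "")
                # ELB appends the connecting client's IP, so take the right-most entry
                client_ip = xff.split(",")[-1].strip() if xff else environ.get("REMOTE_ADDR", "")
                if client_ip in BLOCKED_IPS:
                    start_response("403 Forbidden", [("Content-Type", "text/plain")])
                    return [b"Forbidden"]
                return app(environ, start_response)
            return middleware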

  • Amazon S3 security credentials per bucket

    - by slythic
    Hi all, I was wondering if it is possible to generate security credentials per individual Amazon S3 bucket. I am working with a developer and would like to grant him access only to the bucket we are working with. It's not a trust issue; it's more a concern that he'll delete the wrong bucket or its contents. For example: if we were working on an application that used a bucket called test-application, I could generate credentials for just that one bucket, and those credentials would not allow access to other buckets in my account. Is this possible? Thanks, Tony
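
    This is the kind of restriction IAM handles: create a separate IAM user for the developer and attach a policy scoped to the single bucket. A sketch of such a policy, assuming the bucket name from the question:

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": "s3:ListBucket",
              "Resource": "arn:aws:s3:::test-application"
            },
            {
              "Effect": "Allow",
              "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
              "Resource": "arn:aws:s3:::test-application/*"
            }
          ]
        }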

  • Some questions about setting up Amazon S3 with Ruby on Rails

    - by ben
    I'm trying to set up Amazon S3 hosting with my Ruby on Rails 3 app, which is hosted on Heroku. After reading these instructions in the Heroku docs, I'm trying to use the aws-s3 gem. The instructions say to put the S3 account details in config/amazon_s3.yml, but the aws-s3 GitHub page says you create a connection like this:

        AWS::S3::Base.establish_connection!(
          :access_key_id     => 'abc',
          :secret_access_key => '123'
        )

    Why is the connection created by providing the details directly if they're already provided in the config file? Is that not the correct way to establish a connection? Do I have to establish a connection for each user every time an upload is about to occur, or is a connection established for the application as a whole? Thanks for reading.

  • Installing XAMPP in Amazon EC2

    - by Woho87
    Hi! Can someone explain to me (not a rocket scientist) all the steps for installing XAMPP on an EC2 instance running Linux? And yes, I have looked all over the web and found nothing, except "Deploying a LAMP stack" and "Starting Amazon EC2 with Mac OS X". I found the latter more useful, but as I said, I'm not a rocket scientist. I got stuck at the step in the latter guide where I should edit the .bash_profile file with a text editor. I tried vi over SSH, but why can't I open it in the text editor included on all Macs? Is there an easier way to set up a XAMPP server on a Linux server in EC2?
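
    For what it's worth, XAMPP for Linux is just an archive that gets unpacked under /opt and started with its own control script — a sketch of the usual steps once the tarball is on the instance (the version number is illustrative):

        # unpack XAMPP for Linux and start the bundled Apache/MySQL/ProFTPD
        sudo tar xvfz xampp-linux-1.7.7.tar.gz -C /opt
        sudo /opt/lampp/lampp start
        sudo /opt/lampp/lampp status

        # nano is usually friendlier than vi for editing files such as ~/.bash_profile over SSH
        nano ~/.bash_profile

    Remember that port 80 also has to be opened in the instance's EC2 security group before the site is reachable from outside.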

  • How to read remote video on Amazon S3 using ffmpeg

    - by virtualize
    I need to create poster frames from videos hosted on Amazon S3 via ffmpeg. Is there a way to use the remote video file directly on the ffmpeg command line, like this?

        ffmpeg -i "http://bucket.s3.amazonaws.com/video.mp4" -ss 00:00:10 -vframes 1 -f image2 "image%03d.jpg"

    ffmpeg just returns:

        http://bucket.s3.amazonaws.com/video.mp4: I/O error occurred
        Usually that means that input file is truncated and/or corrupted.

    I also tried forcing ffmpeg to use the video's mp4 container for reading:

        ffmpeg -f mp4 -i "http://bucket.s3.amazonaws.com/video.mp4" ...

    But no luck. Fetching the video from S3 with wget and processing it locally works fine, of course, as does reading files remotely from other 'standard' HTTP servers. So I know that ffmpeg supports remote file reading, but why not from S3?
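
    A common workaround when ffmpeg cannot seek within a remote MP4 (the moov atom often sits at the end of the file) is to pull the object down first and grab the frame from the local copy — a sketch using the same placeholder bucket and key:

        # fetch the object, then extract the poster frame locally
        curl -o /tmp/video.mp4 "http://bucket.s3.amazonaws.com/video.mp4"
        ffmpeg -i /tmp/video.mp4 -ss 00:00:10 -vframes 1 -f image2 "image%03d.jpg"
        rm /tmp/video.mp4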

  • Quantity in Amazon cart?

    - by user201140
    Hi, I'd like to change the quantity in an AWS order. I've used the form below, and that works fine for adding 1 item to the cart, but when I try to change it to a quantity of 2 it says there are no items in my cart. What am I doing wrong? Thanks.

        <form method="GET" action="http://www.amazon.com/gp/aws/cart/add.html" id="Quantity">
          <input type="hidden" name="AWSAccessKeyId" value="Access Key ID" />
          <input type="hidden" name="AssociateTag" value="Associate Tag" />
          <?php echo "<input type='hidden' name='ASIN.1' value='".$itemid."'/>"; ?>
          <input type="hidden" name="Quantity.1" value="1"/><br/>
          <input type="image" src="buynow.gif" value="Submit" alt="Submit">
        </form>

  • Force CloudFront distribution/file update

    - by Martin
    I'm using Amazon's CloudFront to serve the static files of my web apps. Is there no way to tell a CloudFront distribution that it needs to refresh its files, or to point out a single file that should be refreshed? Amazon recommends that you version your files (logo_1.gif, logo_2.gif and so on) as a workaround for this problem, but that seems like a pretty stupid solution. Is there absolutely no other way?
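
    CloudFront also supports explicit invalidation of individual objects in a distribution. A sketch with the AWS CLI (the distribution ID and path are placeholders); note that only a limited number of invalidation paths per month are free, so versioned file names remain the cheaper option for frequently changing assets:

        # invalidate a single object
        aws cloudfront create-invalidation \
            --distribution-id E1ABCDEF2GHIJK \
            --paths "/logo.gif"

        # check progress
        aws cloudfront list-invalidations --distribution-id E1ABCDEF2GHIJK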

  • Apache DirectorySlash ignores X-Forwarded-Proto header

    - by Sharique Abdullah
    It seems that Apache's DirectorySlash directive causes issues when used behind Amazon ELB with the HTTPS protocol. Say I access the URL:

        https://myserver.com/svn/MyProject

    it redirects me to:

        http://myserver.com/svn/MyProject/

    My ELB configuration forwards port 443 to port 80 on Apache, but Apache should be aware of the X-Forwarded-Proto header in the request and thus keep the protocol as https in the redirect URL too. Any thoughts?

  • How to encrypt Amazon CloudFront signature for private content access using canned policy

    - by Chet
    Has anyone using .NET actually worked out how to successfully generate the signature needed for CloudFront private content? After a couple of days of attempts, all I can get is Access Denied. I have been working with variations of the following code, and have also tried OpenSSL.Net and the AWSSDK, but the latter does not have a sign method for RSA-SHA1 yet. The policy (the data being signed) looks like this:

        {"Statement":[{"Resource":"http://xxxx.cloudfront.net/xxxx.jpg","Condition":{"DateLessThan":{"AWS:EpochTime":1266922799}}}]}

    This method attempts to sign the policy for use in the canned URL. Some of the variations have included changing the padding used in the hash, and also reversing the byte[] before signing, as apparently OpenSSL does it that way.

        public string Sign(string data)
        {
            using (SHA1Managed SHA1 = new SHA1Managed())
            {
                RSACryptoServiceProvider provider = new RSACryptoServiceProvider();
                RSACryptoServiceProvider.UseMachineKeyStore = false;

                // Amazon PEM converted to XML using OpenSslKey
                provider.FromXmlString("<RSAKeyValue><Modulus>.....");

                byte[] plainbytes = System.Text.Encoding.UTF8.GetBytes(data);
                byte[] hash = SHA1.ComputeHash(plainbytes);
                //Array.Reverse(sig); // I have seen some examples that reverse the hash
                byte[] sig = provider.SignHash(hash, "SHA1");
                return Convert.ToBase64String(sig);
            }
        }

    It's useful to note that I have verified the content is set up correctly in S3 and CloudFront by generating a CloudFront canned-policy URL using CloudBerry Explorer. How do they do it? Any ideas would be much appreciated. Thanks
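
    One classic pitfall here: CloudFront expects the base64 signature to be made URL-safe by replacing '+' with '-', '=' with '_' and '/' with '~' before it goes into the query string, and the policy must be signed exactly as sent (no added whitespace, no byte reversal). A minimal sketch of the canned-policy signing step in Python with the cryptography package, useful for cross-checking the .NET output (key path, URL and expiry are placeholders):

        import base64
        import json

        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import padding

        def cloudfront_canned_signature(resource_url, expires_epoch, pem_path):
            """Sign a canned policy with RSA-SHA1 and return CloudFront's URL-safe signature."""
            policy = json.dumps({
                "Statement": [{
                    "Resource": resource_url,
                    "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
                }],
            }, separators=(",", ":"))

            with open(pem_path, "rb") as f:
                key = serialization.load_pem_private_key(f.read(), password=None)

            signature = key.sign(policy.encode("utf-8"), padding.PKCS1v15(), hashes.SHA1())

            # CloudFront's URL-safe base64: '+' -> '-', '=' -> '_', '/' -> '~'
            b64 = base64.b64encode(signature).decode("ascii")
            return b64.replace("+", "-").replace("=", "_").replace("/", "~")

        print(cloudfront_canned_signature("http://xxxx.cloudfront.net/xxxx.jpg", 1266922799, "cf-private-key.pem"))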

  • When should one use the following: Amazon EC2, Google App Engine, Microsoft Azure and Salesforce.com

    - by vicky21
    I am asking this in a very general sense, both from the cloud provider's and the cloud consumer's perspective. The question is not about any specific kind of application; in fact, the intention is to know which types of applications/domains fit into which cloud layer - SaaS, PaaS or IaaS. My understanding so far is: IaaS: raw hardware (processors, networks, storage). PaaS: OS, system software, development frameworks, virtual machines. SaaS: software applications. It would be great if Stackoverflowers could share their understanding and experiences of the cloud computing concept.
    EDIT: OK, I will put it in a more specific way -
    Amazon EC2: You don't have control over the hardware layer, but you can take your choice of OS image, dev framework (.NET, J2EE, LAMP) and application and put it on EC2 hardware. Can you deploy applications built with Google App Engine or Azure on EC2?
    Google App Engine: You don't have control over hardware or OS, and you get a specific dev framework to build your application. Can you take any existing Java or Python application and port it to GAE? Or vice versa, can applications that were built on GAE be taken out of GAE and ported to an application server like WebSphere or WebLogic?
    Azure: You don't have control over hardware or OS, and you get a specific dev framework to build your application. Can you take any existing .NET application and port it to Azure? Or vice versa, can applications that were built on Azure be taken out of Azure and ported to an application server like BizTalk?

  • Amazon EC2 and jbossws

    - by avjaz
    Hi - I've deployed a web service to a JBoss instance running on Amazon EC2. The web service works fine locally, but when I deploy it on EC2 and go to the /jbossws/services page, the Endpoint Address for the web service is the private DNS of the EC2 instance (domU-X-X-X-X etc...), not the public DNS (which I would like it to be). I've tried loading the WSDL by changing the private hostname to the public IP; that works, but when I try to call any of the operations I get a HostNotFoundException, I'm guessing because the generated WSDL has the stanza:

        <service name='XXXService'>
          <port binding='tns:XXXBinding' name='XXXPort'>
            <soap:address location='http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal:8080/xx/xx/xx'/>
          </port>
        </service>

    where http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal is the internal DNS of the EC2 instance. The WSDL is auto-generated - is there a JAX-WS annotation I can use to force the generated WSDL to use the public DNS of the EC2 instance? Many thanks -

  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my images uploaded via Carrierwave locally to using Amazon S3 via the fog gem in my Rails 3.1 app. While images are being added fine, when I click on an image in my application, the URL is providing my access key and a signature. Here is a sample URL (XXX replaces the actual values):

        https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418

    This is happening in development (localhost:3000) and when I am using Heroku for production. Here is my uploader:

        class ImageUploader < CarrierWave::Uploader::Base
          include CarrierWave::RMagick

          storage :fog

          def store_dir
            "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
          end

          process :convert => :jpg
          process :resize_to_limit => [640, 640]

          version :thumb do
            process :convert => :jpg
            process :resize_to_fill => [280, 205]
          end

          version :avatar do
            process :convert => :jpg
            process :resize_to_fill => [120, 120]
          end
        end

    And my config/initializers/fog.rb:

        CarrierWave.configure do |config|
          config.fog_credentials = {
            :provider              => 'AWS',
            :aws_access_key_id     => 'XXX',
            :aws_secret_access_key => 'XXX',
          }
          config.fog_directory = 'bucketname'
          config.fog_public = false
        end

    Anyone know how to make sure this information isn't available?

  • architecture and tools for a remote control application?

    - by slothbear
    I'm working on the design of a remote control application. From my iPhone or a web browser, I'll send a few commands. Soon after, my home computer will perform the commands and send back results. I know there are remote desktop apps, but I want something programmable, something simpler, and something that I wrote. My current direction is to use Amazon Simple Queue Service (SQS) as the message bus. The iPhone places some messages in a queue. My local Java/JRuby program notices the messages on the queue, performs the work and sends back status via a different queue. This will be a very low-volume application. At $1.00 per million requests (plus a handful of data transfer charges), Amazon SQS looks a lot more affordable than having my own server of any type. And it's super reliable, which is important for me too. Are there better/standard toolkits or architectures for this kind of remote control? Cost is not a big issue, but I value how much I learn by doing it myself. I'm moderately concerned about security, but doubt it will be a problem. The list of commands recognized will be very short, and only recognized in specific contexts. No "erase hard drive" stuff. Update: I'll probably distribute these programs to some other people who want the same functionality but who don't have Amazon SQS accounts. For now, they'll use anonymous access to my queues, with random 80-character queue names.
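
    As a sanity check on the queue design, a minimal polling-worker sketch in Python with boto3 (queue URLs and the command vocabulary are placeholders — the original Java/JRuby worker would follow the same receive/process/delete loop):

        import boto3

        sqs = boto3.client("sqs", region_name="us-east-1")
        COMMAND_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/commands"  # placeholder
        RESULT_QUEUE  = "https://sqs.us-east-1.amazonaws.com/123456789012/results"   # placeholder

        ALLOWED = {"status", "screenshot", "restart-service"}  # short, explicit command list

        def run_command(command):
            # stand-in for the real work performed on the home computer
            return "ran " + command

        while True:
            resp = sqs.receive_message(QueueUrl=COMMAND_QUEUE,
                                       MaxNumberOfMessages=1,
                                       WaitTimeSeconds=20)  # long polling
            for msg in resp.get("Messages", []):
                command = msg["Body"].strip()
                if command in ALLOWED:  # ignore anything unexpected
                    sqs.send_message(QueueUrl=RESULT_QUEUE, MessageBody=run_command(command))
                sqs.delete_message(QueueUrl=COMMAND_QUEUE, ReceiptHandle=msg["ReceiptHandle"])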

  • Can't attach EC2 instance to Network Interface

    - by Ian Warburton
    When trying to attach a network interface, it says:

        No instances were found for this availability zone.

    My instance is in us-east-1c and my network interface is in us-east-1b. Is that significant? If so, how do I create the VPC subnet in the same zone, and if not, why this error? EDIT: I've re-created the VPC, and the network interface is now in us-east-1c and the EC2 instance is also in us-east-1c. Same error message, though!
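
    For reference, a network interface lives in a subnet and a subnet is tied to a single Availability Zone, so the two AZs do have to match — and the instance must be a VPC instance (EC2-Classic instances cannot have extra ENIs attached). A quick way to compare the two with the AWS CLI (the IDs are placeholders):

        # AZ of the instance
        aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
            --query 'Reservations[].Instances[].Placement.AvailabilityZone'

        # AZ and subnet of the network interface
        aws ec2 describe-network-interfaces --network-interface-ids eni-0123456789abcdef0 \
            --query 'NetworkInterfaces[].[AvailabilityZone,SubnetId]'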

  • Using Cloud Formation provisioned security group with specific subnet

    - by Fred Clausen
    Summary: I'm attempting to create an AWS CloudFormation template containing an instance for which I want to select a particular subnet. If I specify the subnet ID, I get the following error:

        The parameter groupName cannot be used with the parameter subnet.

    From reading this thread, it appears I need to provide security group IDs, not names. How can I create a security group in CloudFormation and then get its ID after the fact? Details: the relevant part of the instance config is as follows:

        "WebServerHost": {
          "Type" : "AWS::EC2::Instance",
          <..skipping metadata...>
          "Properties": {
            "ImageId" : "ami-1234",
            "InstanceType" : { "Ref" : "WebServerInstanceType" },
            "SecurityGroups" : [ { "Ref" : "WebServerSecurityGroup" } ],
            "SubnetId" : "subnet-abcdef123",

    and the security group looks as follows:

        "WebServerSecurityGroup" : {
          "Type" : "AWS::EC2::SecurityGroup",
          "Properties" : {
            "GroupDescription" : "Enable HTTP and SSH",
            "SecurityGroupIngress" : [
              { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0" },
              { "IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : "0.0.0.0/0" }
            ]
          }
        },

    How can I create and then get that security group's ID?
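
    For reference, a sketch of the pattern that avoids the groupName/subnet clash: give the security group a VpcId so it is created as a VPC group, and hand its ID to the instance through SecurityGroupIds using Fn::GetAtt on the GroupId attribute (the VpcId reference below assumes a template parameter):

        "Properties": {
          "ImageId"          : "ami-1234",
          "InstanceType"     : { "Ref" : "WebServerInstanceType" },
          "SubnetId"         : "subnet-abcdef123",
          "SecurityGroupIds" : [ { "Fn::GetAtt" : [ "WebServerSecurityGroup", "GroupId" ] } ]
        },

        "WebServerSecurityGroup" : {
          "Type" : "AWS::EC2::SecurityGroup",
          "Properties" : {
            "VpcId" : { "Ref" : "VpcId" },
            "GroupDescription" : "Enable HTTP and SSH",
            "SecurityGroupIngress" : [
              { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0" },
              { "IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : "0.0.0.0/0" }
            ]
          }
        }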

  • growing EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID0 configuration with two 1GB EBS volumes, assembled as /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its capacity (of 2GB). I then created snapshots of the volumes using ec2-consistent-snapshot and created volumes from those snapshots, but specified the volume size as 2GB (effectively doubling the capacity of each disk). I then spun up a new instance, assembled the RAID0 configuration on /dev/md0 from the 2 volumes mentioned above, and mounted it at /vol. df -hT showed /vol as 2GB (as expected). Now I ran sudo xfs_growfs -d /vol. The command completed normally but reported blocks changed from 523776 to 524160 (only!), and df -hT still shows /vol as 2GB (instead of the expected 4GB). I rebooted, remounted and reassembled the RAID, but it still reports the old size. Any clue as to what went wrong?
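
    One thing to verify before growing the filesystem is whether the md array itself ever picked up the larger component volumes — if the array still reports roughly 2GB, xfs_growfs has nothing to grow into. A diagnostic sketch:

        # does the array report the new capacity?
        cat /proc/mdstat
        sudo mdadm --detail /dev/md0        # compare "Array Size" with the enlarged EBS volumes

        # what size does the kernel think the block device is?
        sudo blockdev --getsize64 /dev/md0

        # only once the array shows ~4GB is there anything for XFS to claim
        sudo xfs_growfs /vol
        df -hT /vol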

  • AWS RDS connection count

    - by wmarbut
    I am using AWS RDS with MySQL for a project and have a "large" instance. The documentation is clear on what this means as far as compute resources and RAM go, but I can't find anything that documents how many open database connections I can have. The app I am using is PHP and it utilizes PDO with persistent connections. This means that the number of open connections could reach the maximum number of PHP child processes running at any given point. How do I ensure that my RDS instance has a max_connections setting high enough to be comfortable with this?
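
    For reference, RDS derives the default max_connections value from the instance class's memory via a formula in the default DB parameter group, and it can be inspected and overridden. A sketch (endpoint, user, parameter group name and value are placeholders):

        # current limit and current usage, via the mysql client
        mysql -h mydb.abc123.us-east-1.rds.amazonaws.com -u admin -p \
              -e "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Threads_connected';"

        # raise the limit through a custom DB parameter group attached to the instance
        aws rds modify-db-parameter-group \
            --db-parameter-group-name my-custom-params \
            --parameters "ParameterName=max_connections,ParameterValue=500,ApplyMethod=immediate"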

  • Unable to connect to EC2 instance after "reboot"

    - by KPL
    I am not able to connect to my m1.small instance after rebooting it. I have already associated the public IP with this instance. Upon checking the system log, this seems to be the issue:

        cloud-init-nonet[11.84]: waiting 10 seconds for network device
        cloud-init-nonet[21.85]: waiting 120 seconds for network device
        cloud-init-nonet[141.85]: gave up waiting for a network device.
        Cloud-init v. 0.7.3 running 'init' at Sun, 18 May 2014 07:02:55 +0000. Up 142.54 seconds.
        ci-info: +++++++++++++++++++++++Net device info++++++++++++++++++++++++
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: | Device |   Up  |  Address  |    Mask   |     Hw-Address    |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: |   lo   |  True | 127.0.0.1 | 255.0.0.0 |         .         |
        ci-info: |  eth0  | False |     .     |     .     | 02:43:xx:xx:xx:xx |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    A bunch of these follow the above message:

        2014-05-18 07:02:56,178 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by: [Errno 101] Network is unreachable)]

    This is obviously related to the network interface not working correctly. I have tried the following so far: (1) relaunching a new instance from the custom AMI (created from EBS) of the failing instance; the same error shows up in the logs. (2) Attaching a new network interface to the EC2 instance; the error still persists, and eth1 shows up in the list but the "up" column is False.

  • SSH broken after hostname change on EC2-hosted Ubuntu

    - by dimadima
    I changed my instance's hostname using the hostname utility and then set it in /etc/hostname so that the new name survives a reboot. My main motivation was differentiating between instances at the prompt, using the \h format in PS1.
    EDIT: I also changed permissions on my home directory. I made my home directory group-writable. END EDIT
    Now I can no longer SSH into the machine. The short of it is the error Permission denied (publickey). Running ssh -v, the more verbose output is:

        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/dmitry/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /Users/dmitry/.ssh/ec2key.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Should I have done something after changing the hostname? Now I can't get into the instance! :(
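
    The group-writable home directory mentioned in the edit is a classic cause of exactly this symptom: with StrictModes enabled (the default), sshd refuses to use authorized_keys when the home directory or ~/.ssh has loose permissions, so the key is silently ignored. A sketch of the permissions to restore, assuming Ubuntu's default user, run from a rescue instance with the volume attached or via any session that still has access:

        # tighten permissions so sshd will trust authorized_keys again
        chmod 755 /home/ubuntu
        chmod 700 /home/ubuntu/.ssh
        chmod 600 /home/ubuntu/.ssh/authorized_keys
        chown -R ubuntu:ubuntu /home/ubuntu/.ssh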

  • Static NAT in AWS's Virtual Private Cloud (VPC)

    - by user1050797
    Currently, in a VPC with a public and a private subnet, all internet-bound traffic from the private subnet can be routed via a NAT instance. The NAT instance port-address-translates the packet's source IP to use the NAT instance's Elastic IP, so the public server can reply to this public address. This is a PAT mechanism. My question: is there a way for me to do static NAT on my NAT instance -- using the same NAT instance to statically NAT an unassociated but reserved Elastic IP to a private-subnet host? The NAT instance would then behave like a physical firewall doing static NAT for a bunch of private IPs.
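
    One pattern that comes close: add a secondary private IP to the NAT instance's ENI, associate the reserved Elastic IP with that secondary address, and add DNAT/SNAT rules on the NAT instance so traffic for it is forwarded to the private host. A sketch (all addresses are placeholders, and the instance's source/destination check must be disabled):

        # 10.0.0.20 is the secondary private IP carrying the extra EIP;
        # 10.0.1.50 is the host in the private subnet
        sudo iptables -t nat -A PREROUTING  -d 10.0.0.20 -j DNAT --to-destination 10.0.1.50
        sudo iptables -t nat -A POSTROUTING -s 10.0.1.50  -j SNAT --to-source 10.0.0.20

        # make sure the kernel forwards packets between the interfaces
        sudo sysctl -w net.ipv4.ip_forward=1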
