Search Results

Search found 2856 results on 115 pages for 'amazon beanstalk'.

Page 20 of 115

  • How can I set up an nginx cache strategy that tries Amazon S3 first, then memcached, with a fallback on miss?

    - by Tim
    I have a large site with a lot of pages that almost never change. Right now I am using two memcached servers (Amazon ElastiCache), but this is really expensive. That's why I want to upload these files that barely ever change to Amazon S3 and shut down one memcached server. Here is my conf:

        location ~ /longterm/(.*) {
            proxy_pass http://amazonS3bucket;
            proxy_intercept_errors on;
            proxy_next_upstream http_404;
            error_page 404 503 = @fallback_memcached
        }
        location @fallback_memcache {
            set $memcached_key $uri;
            memcached_pass name:11211;
            error_page 404 @fallback;
        }
        location @fallback {
            try_files $uri $uri/index.html
        }

    I don't know why, but the config doesn't work on the final fallback. If I get an Amazon S3 hit it works, and if I get an S3 miss followed by a memcached hit it works, but on an S3 miss followed by a memcached miss, resolving the last fallback fails. I am also thinking of using the Amazon S3 FUSE filesystem (http://code.google.com/p/s3fs/) instead of the proxy_pass; I think it would be easier to implement, but would it also be less performant?
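
    Two details stand out in the config above (a guess at the cause, not a verified fix): the named location is declared as @fallback_memcache, but the error_page directive points at @fallback_memcached, and the error_page and try_files lines lack trailing semicolons. A minimal sketch of the same three-stage chain with the names aligned:

        location ~ /longterm/(.*) {
            proxy_pass http://amazonS3bucket;
            proxy_intercept_errors on;
            proxy_next_upstream http_404;
            # the name must match the named location below exactly
            error_page 404 503 = @fallback_memcache;
        }
        location @fallback_memcache {
            set $memcached_key $uri;
            memcached_pass name:11211;
            # on a memcached miss, fall through to local files
            error_page 404 = @fallback;
        }
        location @fallback {
            # final stop: serve from disk or return nginx's own 404
            try_files $uri $uri/index.html =404;
        }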

    Read the article

  • Amazon Product API: "Your request is missing a required parameter combination" on Blended ItemSearch

    - by Daniel Schaffer
    I'm having some problems trying to do an ItemSearch on the Blended index using the Amazon Product API. According to the documentation, Blended requests cannot specify the MerchantId parameter, and indeed, if I try to include it I get an error telling me so. However, when I don't include it, I get an error telling me that my request is missing a required parameter combination and that a valid combination includes MerchantId... what the hell? Here's the XML response:

        <Items xmlns="http://webservices.amazon.com/AWSECommerceService/2005-10-05">
          <Request>
            <IsValid>False</IsValid>
            <ItemSearchRequest>
              <Availability>Available</Availability>
              <Condition>All</Condition>
              <Keywords> home theater pc and other geekery</Keywords>
              <ResponseGroup>Similarities</ResponseGroup>
              <ResponseGroup>SalesRank</ResponseGroup>
              <ResponseGroup>OfferSummary</ResponseGroup>
              <ResponseGroup>Small</ResponseGroup>
              <ResponseGroup>Images</ResponseGroup>
              <SearchIndex>Blended</SearchIndex>
            </ItemSearchRequest>
            <Errors>
              <Error>
                <Code>AWS.MissingParameterCombination</Code>
                <Message>Your request is missing a required parameter combination. Required parameter combinations include MerchantId, Availability.</Message>
              </Error>
            </Errors>
          </Request>
        </Items>

    The failing requests are being sent as part of batches with other requests that are succeeding. I'm using REST to send my requests, so here's an example of a request:

        http://ecs.amazonaws.com/onca/xml?AWSAccessKeyId=-------------&
        ItemSearch.1.Keywords=Mates%20of%20State&
        ItemSearch.1.MerchantId=Amazon&
        ItemSearch.1.SearchIndex=DVD&
        ItemSearch.2.Keywords=teaching%20Lily%20various%20computer%20related%20skills&
        ItemSearch.2.SearchIndex=Blended&
        ItemSearch.Shared.Availability=Available&
        ItemSearch.Shared.Condition=All&
        ItemSearch.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary%2CSimilarities&
        Operation=ItemSearch%2CSimilarityLookup&
        Service=AWSECommerceService&
        SimilarityLookup.1.ItemId=B000FNNHZ2&
        SimilarityLookup.2.ItemId=B000EQ5UPU&
        SimilarityLookup.Shared.Availability=Available&
        SimilarityLookup.Shared.Condition=All&
        SimilarityLookup.Shared.MerchantId=Amazon&
        SimilarityLookup.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary&
        Timestamp=2010-04-02T17%3A18%3A05Z&
        Signature=----------------

    Any ideas as to what I'm doing wrong?
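
    For what it's worth, a guess based on the request above rather than anything confirmed: ItemSearch.Shared.Availability and ItemSearch.Shared.Condition apply to the Blended request too, and Availability is only valid together with MerchantId and Condition, which a Blended search cannot carry. A sketch of the same batch with those two parameters moved from Shared onto the DVD request only:

        http://ecs.amazonaws.com/onca/xml?AWSAccessKeyId=-------------&
        ItemSearch.1.Availability=Available&
        ItemSearch.1.Condition=All&
        ItemSearch.1.Keywords=Mates%20of%20State&
        ItemSearch.1.MerchantId=Amazon&
        ItemSearch.1.SearchIndex=DVD&
        ItemSearch.2.Keywords=teaching%20Lily%20various%20computer%20related%20skills&
        ItemSearch.2.SearchIndex=Blended&
        ItemSearch.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary%2CSimilarities&
        Operation=ItemSearch%2CSimilarityLookup&
        ...(SimilarityLookup parameters, Timestamp, and Signature unchanged)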

    Read the article

  • Monitoring AWS Systems Behind Elastic Beanstalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon cloud: creating IaaS protocols/solutions/standardized implementations, etc., while also being the sysadmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage, CPU usage, etc., in a centralized fashion, e.g. Nagios + Urchin. The BIGGEST impediment to my endeavors is the following: the company application is deployed in the form of a Java *.WAR file, uploaded to an Elastic Beanstalk application environment, load balancing and auto-scaling between 3 (min) and 10 (max) servers, and the EC2 instances that run the application are fired up and disposed of ad hoc. That is to say, I can't monitor the individual EC2 instances for very long, because so many are being terminated and then auto-provisioned/auto-scaled on the fly, so I'd constantly have to "monitor what I'm monitoring" and continuously remove/add EC2 machine addresses to my monitoring lists. Is there some way to use monitoring tools like Zabbix or Nagios to monitor the Elastic Beanstalk environment, and have them automatically add new EC2 instances and remove terminated/failed ones from the monitoring list? Furthermore, is there anything I can do with Graylog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into ONE consolidated set of logs/events? If not Graylog, is there ANYTHING LIKE Graylog that can automatically detect which EC2 members are being added to/removed from the environment and collect their logs automatically? Any and all advice or direction is appreciated. Thanks much, and cheers!!
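
    One approach, as a sketch rather than a turnkey solution (it assumes a configured AWS CLI, the standard elasticbeanstalk:environment-name tag that Beanstalk puts on its instances, and a Nagios config directory at /etc/nagios3/conf.d/; the environment name and paths are placeholders): periodically list the instances carrying the environment tag and regenerate Nagios host definitions from the result.

        #!/bin/bash
        # Regenerate Nagios host definitions from the current set of EC2
        # instances in one Elastic Beanstalk environment, then reload.
        ENV_NAME="my-beanstalk-env"
        OUT="/etc/nagios3/conf.d/beanstalk-hosts.cfg"

        aws ec2 describe-instances \
          --filters "Name=tag:elasticbeanstalk:environment-name,Values=${ENV_NAME}" \
                    "Name=instance-state-name,Values=running" \
          --query 'Reservations[].Instances[].[InstanceId,PrivateIpAddress]' \
          --output text |
        while read -r id ip; do
          # one host definition per live instance
          printf 'define host {\n  use generic-host\n  host_name %s\n  address %s\n}\n' "$id" "$ip"
        done > "$OUT"

        # reload Nagios so the regenerated host list takes effect
        service nagios3 reload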

    Read the article

  • Using Amazon S3/Cloudfront and Encoding.com to deliver web video – step by step for iPhone/iPod/iPad

    - by joelvarty
    The Amazon AWS newsletter for May 2010 had a great link to this article by encoding.com on how you can use their service to encode your video for multi-format, multi-bandwidth streaming to many devices, including iPhone, iPad, and Flash with H.264. It looks like this doesn't actually take advantage of CloudFront streaming, but merely splits your encoded files into the available chunks and includes all of the M3U8 files that point to the different bitrates and such. This looks like a pretty sweet service in general, especially since they seem to have an API as well, so it may be very useful to those of you out there looking to host video. more later – joel

    Read the article

  • Uploading files to EC2 Windows instance

    - by nitramk
    I've created an instance of a Windows Server 2008 AMI on Amazon EC2. I now need to upload some installation files to it. One way to do this would be to activate the FTP server in Windows, set up an account, and use that to upload the files. Is there a better way to do this? Maybe some way to upload directly to an EBS volume?
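
    One common alternative that needs no FTP server (hedged: this is standard RDP client behavior, nothing EC2-specific): RDP drive redirection, which exposes a local drive inside the remote session so files can be copied in Explorer. In a saved .rdp file the relevant lines look like this (the hostname is a placeholder):

        full address:s:ec2-xx-xx-xx-xx.compute-1.amazonaws.com
        username:s:Administrator
        drivestoredirect:s:*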

    Read the article

  • Ubuntu Software RAID 0 on AWS Does Not Survive Reboot

    - by Eric J.
    I'm experimenting with creating a software RAID 0 device from 4 EBS volumes on Ubuntu 9.10 running on Amazon AWS, following this guide: http://alestic.com/2009/06/ec2-ebs-raid The device appears (and according to SysBench is 3.5x faster than a regular attached EBS volume). The problem is, when I reboot the instance, all files on the RAID device are gone. The device is available and mounted where expected, but contains no files. I am able to write new files to it, but they only survive until the next reboot.
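
    A guess worth checking rather than a confirmed diagnosis: if the array definition is never persisted, the instance may assemble or mount something other than the formatted array at boot. A sketch of persisting it, assuming the array is /dev/md0, the filesystem is XFS (as in the Alestic guide), and the mount point is /mnt/raid:

        # capture the running array definition so mdadm can reassemble it at boot
        sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
        # rebuild the initramfs so the boot environment knows about the array
        sudo update-initramfs -u
        # mount the md device itself, not one of the underlying EBS volumes
        echo '/dev/md0 /mnt/raid xfs noatime 0 0' | sudo tee -a /etc/fstab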

    Read the article

  • Store all users' profile images in a single directory, or in subdirectories per id?

    - by Luccas
    I'm using Amazon S3 as storage for users' profile pics. I see that many websites generate large random filenames and put them all in the same root directory, like: http://xxx.us-east-1.amazonaws.com/aHR0cHM6Ly9mYmNkbi1wcm9maWxlLWEuYWthbWFpaGQubmV0L2hwcm9maWxlLWFrLWFzaDIvMjczMzkxXzEwMDAwMDMxMjAxMzg5OV81NTk3MjM4Mzdfbi5qcGc.jpg And my question is: what are the pros and cons of that approach? If I place them in different directories, what problems will I have in the future? http://xxx.us-east-1.amazonaws.com/users/id/username.jpg or http://xxx.us-east-1.amazonaws.com/users/id/random_number.jpg Thanks!

    Read the article

  • Autoscaling EC2 with NFS mounts

    - by Jamie Taylor
    I'm trying to set up a shared filesystem on EC2, and I've read tutorials such as this: http://blog.ronaldmccollam.com/2012/07/configuring-nfs-on-ubuntu-in-amazon-ec2.html In step 2 it talks about configuring the exports; for this I need an IP range, but when I'm auto-scaling I can't predict what the IPs will be before it scales. Is there any other way of doing this while still staying secure? Thanks Edit: Just tried s3fs; it didn't seem to work properly.
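
    One sketch, assuming the instances launch into a known private address range (the CIDR below is a placeholder): export to the whole range instead of to individual hosts, and rely on security groups to keep outsiders off the NFS ports.

        # /etc/exports -- any instance in the private range may mount the share
        /export/shared  10.0.0.0/16(rw,sync,no_subtree_check)

        # apply the change without restarting the NFS server
        sudo exportfs -ra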

    Read the article

  • AWS Free Usage Tier + Cloudflare... possible?

    - by crashintoty
    If I throw my MySQL/PHP app up on an Amazon EC2 instance (using their AWS Free Usage Tier program) and couple it with CloudFlare (the free plan, of course), roughly how many daily visitors can I comfortably handle before performance starts to suffer? Just looking for a rough estimate or educated guess; I understand this setup might be less than ideal, but I'm still very curious nonetheless. Thanks in advance

    Read the article

  • App submitted to Amazon AppStore is both "Status: Incomplete" and "Cannot edit application while in review"

    - by Nicolas Raoul
    Yesterday I submitted my application to the Amazon AppStore. I did not upload a video, because I hadn't made one and it seemed optional, since I was able to submit the app without one. Today I checked the app's status; it says "Status: Incomplete (Missing Multimedia)" with a link to the multimedia section, whose video field says "Video < none uploaded". So I made a video, but I cannot add it because it says "Cannot edit application while in review." I used the "Feedback" link to send a message to Amazon, and I might even get a reply, but does anyone already know a solution to this problem?

    Read the article

  • JVM tuning on Amazon EC2

    - by Shadowman
    We will be deploying a production application to Amazon EC2 very shortly. Initially, we'll just be using a "small" instance, but have plans to scale up not long afterwards. My question is, has any investigation been done on JVM tuning for the EC2 environment? Are there any specific changes that we should make to our JVM parameters to compensate for quirks/characteristics of Amazon EC2? Or, do the normal tuning methodologies apply here as they would in a physical environment? Our application will be deployed on Tomcat 6.x. It is built using JBoss Seam 2.2.x, and uses PostgreSQL 8.x as the backend database. Any advice you can give is greatly appreciated!
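
    As a concrete starting point only (the numbers assume an EC2 small instance with roughly 1.7 GB of RAM and are illustrative, not tuned for any particular workload), the usual levers for Tomcat 6 go in bin/setenv.sh:

        # conservative heap sizing that leaves headroom for the OS page
        # cache, any colocated services, and the JVM's own native memory
        export CATALINA_OPTS="-server \
          -Xms512m -Xmx1024m \
          -XX:MaxPermSize=256m \
          -XX:+UseConcMarkSweepGC \
          -XX:+HeapDumpOnOutOfMemoryError"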

    Read the article

  • PHP: convert images and upload to Amazon S3

    - by faraklit
    I am looking for a best practice for uploading images to Amazon S3 and serving them from there. We need four different sizes of each image, so right after an image is uploaded we convert and scale it to 4 different widths and heights, and then send them to Amazon S3 using the official PHP API:

        // ... image conversions, bucket setting, S3 initialization, etc.
        $sizes = array("", "48", "64", "128");
        foreach ($sizes as $size) {
            $filename = $upload_path.$dest_file.$size.$ext;
            // object key (assumed: the sized destination filename)
            $s3->batch()->create_object($bucket, $dest_file.$size.$ext, array(
                'fileUpload' => $filename,
                'acl'        => AmazonS3::ACL_PUBLIC,
            ));
        }
        // queued batch operations only execute once send() is called
        $responses = $s3->batch()->send();

    But for a 1 MB image the client sometimes waits up to 30 seconds, which is a very long time. Instead of sending images to S3 immediately, it may be better to add them to a job queue, but the user should still see the uploaded image immediately.
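
    One way to keep the user-facing request fast, as a sketch under assumptions not taken from the post (a beanstalkd queue reached through the Pheanstalk client, and a separate worker process that does the resizing and S3 upload; any queue would do): store the original locally, respond immediately, and defer the S3 work.

        // Save the original upload locally and show it to the user right away;
        // the heavy conversion/upload work happens out of band.
        require_once 'pheanstalk_init.php';
        $queue = new Pheanstalk_Pheanstalk('127.0.0.1');

        move_uploaded_file($_FILES['image']['tmp_name'], $local_path);

        // enqueue a job describing the conversions; a worker picks it up,
        // generates the 48/64/128 variants, and pushes them to S3
        $queue->useTube('image-resize')->put(json_encode(array(
            'source' => $local_path,
            'sizes'  => array('', '48', '64', '128'),
            'bucket' => $bucket,
        )));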

    Read the article

  • Correct Path for Git Remote Add from Amazon EC2 Instance to OSX Client Machine

    - by filmnut
    I'm trying to do a git remote add from a repository that sits on a remote Amazon AMI back to a cloned copy of the SAME repository sitting on my local OSX machine. I'm confused about what file path to use. I assume it's something like: git remote add my_clone <OSX_User_Name>@<OSX_HOST_NAME>:<PATH_TO_CLONED_REPO> I obviously know what my <OSX_User_Name> is, and I can figure out my <PATH_TO_CLONED_REPO>, but I have no idea how to determine an <OSX_HOST_NAME> that would actually work. Can I just put in my external IP address, followed by my machine's internal IP address? (Note that I'm working behind a router.) Is ssh:// the correct protocol? Do I need to set up SSH access from the Amazon EC2 machine to the local OSX machine?
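
    One pattern that sidesteps the router entirely (a sketch; the port, user names, and paths are placeholders, and it assumes Remote Login is enabled on the Mac): open a reverse SSH tunnel from the OSX machine to the EC2 instance, then add the remote through the tunnel.

        # on the OSX machine: expose the Mac's sshd on the EC2 instance's port 2222
        ssh -N -R 2222:localhost:22 ubuntu@ec2-host.compute-1.amazonaws.com

        # on the EC2 instance: the Mac is now reachable as localhost:2222
        git remote add my_clone ssh://osx_user@localhost:2222/Users/osx_user/path/to/repo
        git fetch my_clone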

    Read the article

  • apache2 is making my Amazon EC2 instance unavailable, any ideas?

    - by Tim
    I have a web server running on an EC2 c1.medium instance. The instance runs Ubuntu with Apache2 and MySQL. The Ubuntu and Apache versions are the following:

        Ubuntu: DISTRIB_ID=Ubuntu, DISTRIB_RELEASE=11.04, DISTRIB_CODENAME=natty, DISTRIB_DESCRIPTION="Ubuntu 11.04"
        Apache2: Server version: Apache/2.2.17 (Ubuntu), Server built: Feb 22 2011 18:33:02

    Sometimes, randomly, my server "hangs up": I cannot connect to it using normal web access or SSH. If I reboot the instance it reboots fine, and the Amazon system log doesn't show anything weird, but the problem persists. The only way to solve it is stopping the instance and starting it again. I think the problem has something to do with Apache, because the last lines of the error log look like this:

        (normal errors)
        [Sun Jun 19 06:25:09 2011] [notice] Apache/2.2.17 (Ubuntu) PHP/5.3.5-1ubuntu7.2 with Suhosin-Patch configured -- resuming normal operations
        (nobody can connect; no more errors until I stop and start the instance)
        (normal errors)
        [Wed Jun 22 14:21:18 2011] [notice] Apache/2.2.17 (Ubuntu) PHP/5.3.5-1ubuntu7.2 with Suhosin-Patch configured -- resuming normal operations
        (nobody can connect; no more errors until I stop and start the instance)

    Can somebody please help me?
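
    A common culprit on memory-constrained instances (a guess, not a diagnosis from the log above): the default prefork MaxClients is high enough that a traffic spike drives the box deep into swap, which looks exactly like a machine that is up but unreachable until a stop/start. A conservative prefork sizing sketch for a c1.medium sharing memory with MySQL (the numbers are illustrative):

        # /etc/apache2/apache2.conf -- prefork MPM sizing sketch
        <IfModule mpm_prefork_module>
            StartServers           5
            MinSpareServers        5
            MaxSpareServers       10
            MaxClients            40
            MaxRequestsPerChild 1000
        </IfModule>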

    Read the article

  • Shorten Long DNS names

    - by user32425
    Hi, Amazon gives us very long DNS names, e.g. c-123-123-123-255.compute-1.amazonaws.com. Is there a way to map this name to a shorter one? Essentially what I want to do is modify the /etc/hosts file and map the long name to a short one, i.e. "aws1 c-123-123-123-255.compute-1.amazonaws.com", but because the /etc/hosts file only accepts IP address mappings, I cannot do that. Is there any other way to do this? Thanks
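
    If the goal is mostly SSH convenience (an assumption; /etc/hosts genuinely cannot alias one name to another), ~/.ssh/config host aliases do exactly this without touching system files:

        # ~/.ssh/config -- "ssh aws1" now expands to the full Amazon hostname
        Host aws1
            HostName c-123-123-123-255.compute-1.amazonaws.com
            User ubuntu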

    Read the article

  • Read Linux-formatted (ext3) EBS volume mounted on Windows Server 2008 instance

    - by Greg Owen
    I've got a Windows Server 2008 R2 instance set up in Amazon EC2, along with some Ubuntu instances on the same account. I'd like to be able to mount an EBS volume from one of the Ubuntu instances onto the Windows instance as an external drive and then access it from Windows. I've looked at tools like Ext2Fsd and Ext2 IFS, but these haven't worked: I couldn't get the former to work, and the latter claims to support Windows 2008 but gives an error when I try to install it, saying that it only supports up to Windows 2003. I know there are all kinds of tools to view Linux partitions, and that there are filesystems compatible with both Linux and Windows, but neither of those options works here: I want to be able to attach and detach the Ubuntu volumes on command rather than keep a permanent partition, and Ubuntu EBS volumes are ext3 by default. Does anybody know a good tool I should use?

    Read the article

  • Amazon Web Services promises to lower its prices in 2012: an interview with Matt Wood, Technology Evangelist EMEA at Amazon

    Amazon Web Services promises to lower its prices again in 2012. An interview with Matt Wood, Technology Evangelist EMEA at Amazon. Clouds dedicated to developers are multiplying, and they all highlight the same advantages: flexibility, pay-as-you-go billing, outsourced infrastructure management, and now simplified administration tools. After interviewing Laurent Lesaicherre, head of the Windows Azure platform at Microsoft France, it seemed interesting to continue this tour of the market with one of its pioneers: Amazon. Five years ago now, ...

    Read the article

  • Create Webmin user for an EC2 Instance

    - by Dean
    I've set up an Amazon EC2 instance using the Ubuntu 12.04 AMI (ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20120424 (ami-a29943cb)), and I'd like to get Webmin working (so I can set up a DNS). After following the installation instructions on Webmin's site, the installer says I can log in with any username/password of a user who has superuser access. The problem is that the EC2 instance only has one user, "ubuntu", which can only log in using SSH keys, not a password! I've tried creating users manually and I can't log in as those users (even via SSH), so I think it might be a permission thing provided by the AMI. Does anyone know the best way to set up a login for my Webmin?
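
    One common route (hedged: it assumes Webmin's standard changepass.pl helper, which a package install places under /usr/share/webmin): give an account a Webmin-only password instead of loosening the AMI's SSH-key-only policy.

        # set a Webmin login password for root without enabling system
        # password logins (paths from a standard Webmin package install)
        sudo /usr/share/webmin/changepass.pl /etc/webmin root 'newpassword'

        # alternatively, give the ubuntu user a real system password,
        # which Webmin will then accept
        sudo passwd ubuntu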

    Read the article

  • Securing ClickOnce hosted with Amazon S3 Storage

    - by saifkhan
    Well, since my post on hosting ClickOnce with Amazon S3 storage, I've received quite a few emails asking how to secure the deployment. At the time of this post, I regret to say that there is no way to secure your ClickOnce deployment hosted with Amazon S3. The S3 storage is secured by ACLs, meaning that a username and password have to be provided before access. Amazon CloudFront, which sits on top of S3, allows you to apply security settings to your CloudFront distribution by:

        - Applying encryption to the URL.
        - Restricting by IP.

    The problem with CloudFront is that the encryption of the URL is mandatory. ClickOnce does not provide a way to pass the "Amazon Public Key" to the CloudFront URL (you probably could if you started editing the XML and HTML files ClickOnce generates, but that defeats the purpose of ClickOnce altogether). What would be nice is if Amazon allowed users to restrict by IP addresses or IP blocks. I sent them an email and received a response that this is something they are looking into... I won't hold my breath, though. Alternative: I suggest you look at Rackspace Cloud hosting (http://www.rackspacecloud.com); they have very competitive pricing and recently started hosting Windows virtual servers. What you can do is rent a virtual server and set up IIS to host your ClickOnce applications. You can then use IIS security settings to restrict which IPs/blocks can access your ClickOnce payloads. Note: you don't really need Windows Server to host ClickOnce; any web server will do. If you are familiar with Linux, you can run that VM with Rackspace for half the price of Windows. I hope you found this information helpful.
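
    For the IIS route, the restriction described above lives in the IP and Domain Restrictions feature; a web.config sketch for the ClickOnce folder (the feature must be installed, and the address below is a placeholder):

        <!-- allow only one office subnet to reach the ClickOnce payloads -->
        <configuration>
          <system.webServer>
            <security>
              <ipSecurity allowUnlisted="false">
                <add ipAddress="203.0.113.0" subnetMask="255.255.255.0" allowed="true" />
              </ipSecurity>
            </security>
          </system.webServer>
        </configuration>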

    Read the article

  • Sending mail via PHP on EC2

    - by william007
    I have used the following code for sending mail with PHP on Amazon EC2, but I only see 'aatest' as the output and no email ever arrives. By the way, I have already included ses.php, validated the email [email protected], and double-checked that the access key and secret key are correct. Can anyone suggest a way to debug this?

        require_once('ses.php');
        $con = new SimpleEmailService('accesskey', 'secretkey');
        print_r('aa'.$con->listVerifiedEmailAddresses());

        $m = new SimpleEmailServiceMessage();
        $m->addTo('[email protected]');
        $m->setFrom('[email protected]');
        $m->setSubject('Hello, world!');
        $m->setMessageFromString('This is the message body.');
        print_r($con->sendEmail($m));
        echo 'test';
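
    For debugging, a sketch against the commonly used php-aws-ses wrapper (its exact return behavior is my assumption, not something stated in the post): print_r('aa'.$x) silently swallows a false or empty return, so dump each result separately.

        // surface PHP-level warnings the wrapper may emit
        error_reporting(E_ALL);
        ini_set('display_errors', '1');

        require_once('ses.php');
        $con = new SimpleEmailService('accesskey', 'secretkey');

        // an empty or false result here points at credentials or endpoint,
        // not at the message itself
        var_dump($con->listVerifiedEmailAddresses());

        $m = new SimpleEmailServiceMessage();
        $m->addTo('[email protected]');
        $m->setFrom('[email protected]');
        $m->setSubject('Hello, world!');
        $m->setMessageFromString('This is the message body.');

        // success should include a MessageId; false means the send failed
        var_dump($con->sendEmail($m));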

    Read the article
