Search Results

Search found 2918 results on 117 pages for 'amazon rds'.

Page 15/117 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Difference in performance: local machine VS amazon medium instance

    - by user644745
    I see a drastic difference in the performance metrics when I run Apache Bench (ab) against my local machine versus production hosted on an Amazon m1.medium instance. The same concurrency level (5) and the same total number of requests (111) were used against both. The Amazon instance has more memory than my local machine, but my local machine has 2 CPUs versus 1 CPU on the m1.medium. My internet connection is very slow at the moment; I am getting a transfer rate of 25.29 KB/s. How can I improve the performance? I also do not know how to interpret Connect, Processing, Waiting and Total in the ab output. Here is localhost:

      Server Hostname:        localhost
      Server Port:            9999
      Document Path:          /
      Document Length:        7631 bytes
      Concurrency Level:      5
      Time taken for tests:   1.424 seconds
      Complete requests:      111
      Failed requests:        102 (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
      Write errors:           0
      Total transferred:      860808 bytes
      HTML transferred:       847155 bytes
      Requests per second:    77.95 [#/sec] (mean)
      Time per request:       64.148 [ms] (mean)
      Time per request:       12.830 [ms] (mean, across all concurrent requests)
      Transfer rate:          590.30 [Kbytes/sec] received

      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:        0    0    0.5      0      1
      Processing:    14   63   99.9     43    562
      Waiting:       14   60   96.7     39    560
      Total:         14   63   99.9     43    563

    And this is production:

      Document Path:          /
      Document Length:        7783 bytes
      Concurrency Level:      5
      Time taken for tests:   33.883 seconds
      Complete requests:      111
      Failed requests:        0
      Write errors:           0
      Total transferred:      877566 bytes
      HTML transferred:       863913 bytes
      Requests per second:    3.28 [#/sec] (mean)
      Time per request:       1526.258 [ms] (mean)
      Time per request:       305.252 [ms] (mean, across all concurrent requests)
      Transfer rate:          25.29 [Kbytes/sec] received

      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:      290  297   14.0    293    413
      Processing:   897 1178   63.4   1176   1391
      Waiting:      296  606  135.6    588   1171
      Total:       1191 1475   66.0   1471   1684
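
    For reference, a minimal sketch of the benchmark invocation these figures appear to come from; the local port is taken from the output above, while the production hostname is a placeholder:

      # 111 requests total, 5 concurrent, against the local server
      ab -n 111 -c 5 http://localhost:9999/
      # the same run against the production instance (hostname is an assumption)
      ab -n 111 -c 5 http://production-host.example.com/

    As a reading aid for ab's connection table: Connect is the time to establish the TCP connection, Waiting is the time until the first byte of the response, Processing is the full request/response time after the connection is made, and Total is Connect plus Processing. The roughly 290 ms Connect time to production points at network latency from the slow link rather than server load.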

    Read the article

  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application which is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk enables the application to scale automatically by starting additional web server instances based on predefined rules. The shared database runs on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files). We're using git as SCM, and with it we're able to deploy new versions by executing $ git aws.push. In the background, Elastic Beanstalk automatically deletes ($ rm -rf) the current codebase from all servers running in the environment and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment and NFS in the production environment.

    We've managed to set up the environment to the extent where the shared files folder is mounted properly after a reboot. The problem is that, in this setup, the deployment of new versions to running instances fails because $ rm -rf can't remove the mounted directory; as a result the entire environment goes down and we need to restart it, which isn't really an elegant solution.

    Question #1: what would be the proper way to manage shared files in this kind of deployment? Are you running such an environment, and how did you solve the problem?

    Looking at the Elastic Beanstalk Hostmanager code (Ruby), there seems to be a way to hook our functionality (unmount if mounted in pre-deploy, mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in that file (e.g. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent. Question #2: did you manage to hook your functionality into Hostmanager in a portable way (i.e. without changing the core Hostmanager files)?
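
    A minimal sketch of the pre-/post-deploy hook scripts described above, assuming Hostmanager really does execute /tmp/php_pre_deploy_app.sh and /tmp/php_post_deploy_app.sh (only the post-deploy name appears in the question; the pre-deploy name and the mount point are assumptions):

      #!/bin/sh
      # /tmp/php_pre_deploy_app.sh - unmount the shared files dir so rm -rf of the codebase can succeed
      MOUNT_POINT=/var/www/html/sites/default/files   # placeholder path
      if mountpoint -q "$MOUNT_POINT"; then
          umount "$MOUNT_POINT"
      fi

      #!/bin/sh
      # /tmp/php_post_deploy_app.sh - re-mount the shared files dir (s3fs or NFS) after the new code is in place
      MOUNT_POINT=/var/www/html/sites/default/files   # placeholder path
      mkdir -p "$MOUNT_POINT"
      mount "$MOUNT_POINT"   # assumes a matching /etc/fstab entry exists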

    Read the article

  • Creating an ec2 image on amazon fails at mkfs.ext3

    - by Dave Orr
    I'm trying to create an image of my EC2 instance in Amazon's cloud. It's been a bit of an adventure so far. I did manage to install Amazon's ec2-api-tools, which was harder than it seemed like it should have been. Then I ran:

      ec2-bundle-vol -d /mnt -k pk-{key}.pem -c cert-{cert}.pem -u {uid} -s 1536

    Which returned:

      Copying / into the image file /mnt/image...
      Excluding:
           /sys/kernel/debug
           /sys/kernel/security
           /sys
           /proc
           /dev/pts
           /dev
           /dev
           /media
           /mnt
           /proc
           /sys
           /etc/udev/rules.d/70-persistent-net.rules
           /etc/udev/rules.d/z25_persistent-net.rules
           /mnt/image
           /mnt/img-mnt
      1+0 records in
      1+0 records out
      1048576 bytes (1.0 MB) copied, 0.00677357 s, 155 MB/s
      mkfs.ext3: option requires an argument -- 'L'
      Usage: mkfs.ext3 [-c|-l filename] [-b block-size] [-f fragment-size]
             [-i bytes-per-inode] [-I inode-size] [-J journal-options]
             [-G meta group size] [-N number-of-inodes] [-m reserved-blocks-percentage]
             [-o creator-os] [-g blocks-per-group] [-L volume-label]
             [-M last-mounted-directory] [-O feature[,...]] [-r fs-revision]
             [-E extended-option[,...]] [-T fs-type] [-U UUID] [-jnqvFKSV]
             device [blocks-count]
      ERROR: execution failed: "mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L "

    So mkfs.ext3 wants -L, which is a volume name. But ec2-bundle-vol doesn't seem to take a volume name as an argument, and the docs (http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-06-26/creating-an-image.html) don't seem to think one should be needed. Certainly their sample command:

      # ec2-bundle-vol -d /mnt -k ~root/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem -u 495219933132 -s 1536

    doesn't specify anything. So... any help? What am I missing?
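
    For reference, a hedged illustration of the failing call: the bundling tool appears to invoke mkfs.ext3 with nothing after -L, whereas a working invocation supplies a label (the label below is a placeholder for illustration, not an option ec2-bundle-vol is documented to accept):

      # what the tool runs (note the empty argument after -L):
      #   mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L
      # the same call with a non-empty label:
      mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L rootfs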

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that the free memory keeps going down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters:

      -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like pending updates, free disk space and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approximately half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB of memory. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going. The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online that folks say a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts online that showed how to set up Apache and Tomcat to limit memory usage, and I have already performed those steps. The memory is not being used up as quickly, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly stable on those nodes, but they would not be receiving as much traffic. Just to add: when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m when run on my server is as follows:

                         total       used       free     shared    buffers     cached
      Mem:                1657       1380        277          0        158        773
      -/+ buffers/cache:              447       1209
      Swap:                895          0        895

    In your opinion, does this look OK? At what stage would the OS give back memory? Would it wait until the memory reaches 0%, or is this OS dependent?
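
    As a reading aid: the "-/+ buffers/cache" line already discounts the memory the kernel is using for reclaimable buffers and page cache, so applications are really using about 447 MB and roughly 1209 MB can be handed back on demand, not the 277 MB shown on the Mem: line. A minimal sketch for computing the same figure directly from free -m (column positions assume the classic free output shown above):

      # free + buffers + cached = memory the kernel can reclaim for applications on demand
      free -m | awk '/^Mem:/ {printf "available ~ %d MB (free %d + buffers %d + cached %d)\n", $4+$6+$7, $4, $6, $7}'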

    Read the article

  • Amazon EC2 master node hanging

    - by Algorist
    Hi, I am using a Cloudera setup to launch a Hadoop cluster on Amazon. Sometimes the master Hadoop node hangs and we have to restart the job from the beginning. Did anyone face a similar problem and manage to resolve it? Thank you.

    Read the article

  • Alternative to Amazon's S3 service?

    - by Cory
    Just wondering if there is a good alternative to Amazon's S3 service? I like S3, but the bandwidth cost is high. I looked at Cloud Files from Rackspace, but the cost is even higher. I don't mind prepaying or having a monthly payment in order to reduce the bandwidth cost greatly. Thank you for any help.

    Read the article

  • Use personal Amazon S3 account to backup Gmail using Ubuntu

    - by lokheart
    As the title says: I have found online that Backupify offers shifting from their S3 to the user's personal S3, but I have a Backupify account and can't find this option; besides, I'd rather not have my email processed by somebody else. Is it possible to use my own personal Amazon S3 account to back up Gmail? Preferably as an incremental backup, so that I don't have to use too much bandwidth uploading redundant data back to S3. I am using Ubuntu, so a script is OK for me. Thanks!
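
    A minimal sketch of one way this could be scripted under the stated constraints; the tool choices (offlineimap for fetching mail, s3cmd for the upload) and the bucket name are assumptions, not something from the question:

      #!/bin/sh
      # nightly-gmail-backup.sh - hypothetical incremental Gmail backup to a personal S3 bucket
      offlineimap -o                                                         # pull new mail into a local Maildir (configured in ~/.offlineimaprc)
      s3cmd sync --skip-existing ~/Maildir/ s3://my-gmail-backup/maildir/    # only uploads files not already in the bucket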

    Read the article

  • Remote backup with Amazon S3

    - by mjr
    Does anyone know of some good Mac software to sync a network-attached storage device or a Drobo to Amazon S3? I'd love to mail them a storage device for the first dump and then do nightly syncs.

    Read the article

  • Amazon S3 tools for Debian?

    - by Jonik
    I need to (programmatically, in a shell script) upload an EAR file to an Amazon S3 bucket on Debian (5.0.4). What, if any, Debian package provides simple, scriptable tools for that? (I want raw S3 bucket access, so please don't suggest solutions like Jungle Disk.)
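
    A minimal sketch assuming the s3cmd package meets the requirement (whether it is in the stock Debian 5.0 repositories is an assumption; it can also be installed from s3tools.org); bucket and file names are placeholders:

      apt-get install s3cmd             # or fetch the package from s3tools.org if your release lacks it
      s3cmd --configure                 # one-time setup: stores the AWS access key and secret key
      s3cmd put myapp.ear s3://my-deploy-bucket/myapp.ear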

    Read the article

  • Site-to-Site vpn setup amazon ec2 openswan (left) and cisco asa 5540 (right)

    - by user197279
    Need help with this VPN setup on Amazon EC2 using Openswan.

    Left side (EC2):
      - peer IP: according to the client using Cisco (must be public)
      - encrypted network: according to the client using Cisco (must be public)

    Right side (Cisco ASA 5540):
      - peer IP: 3.3.3.3
      - peer host/rightsubnet: 3.3.3.30/32 (public NAT'd IP)

    The goal is to set up a site-to-site VPN connection with the client, and I need guidance on the setup required on EC2. Appreciate the help. Thanks.
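
    A minimal sketch of the Openswan (left) side, written as a shell snippet that appends a connection definition to /etc/ipsec.conf; the left-side address, subnet and pre-shared key are placeholders, and only the right-side values come from the question:

      # left (EC2) values are placeholders; EC2 instances sit behind NAT, so allow UDP 500/4500 in the security group
      cat >> /etc/ipsec.conf <<'EOF'
      conn ec2-to-asa
          type=tunnel
          authby=secret
          left=%defaultroute
          leftid=<elastic-ip-of-the-ec2-instance>
          leftsubnet=10.0.0.0/24
          right=3.3.3.3
          rightsubnet=3.3.3.30/32
          auto=start
      EOF
      # the pre-shared key goes in /etc/ipsec.secrets, e.g.:
      #   <elastic-ip> 3.3.3.3 : PSK "changeme"
      service ipsec restart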

    Read the article

  • Amazon S3 bucket - download only certain files

    - by mottey
    Hi, I have an Amazon S3 bucket with 10,000 images sitting in it, with a standard naming convention:

      001_small.jpg
      001_large.jpg
      002_small.jpg
      002_large.jpg

    Because there is such a large number of files, I don't want to download ALL of them, and I don't want to sit there for a couple of hours selecting just the *_large.jpg files... Can someone suggest an S3 file manager that lets me select only the *_large.jpg files to download? Thanks!
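
    In case a GUI file manager isn't a hard requirement, a minimal command-line sketch using s3cmd's include/exclude filters (the bucket name and local directory are placeholders):

      # copy only the *_large.jpg keys out of the bucket
      s3cmd sync --exclude '*' --include '*_large.jpg' s3://my-image-bucket/ ./large-images/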

    Read the article

  • Amazon ec2 reserve instance

    - by lydonchandra
    Hi, we received an education credit (valid for 1 year) from Amazon, and we're just wondering if we can buy a reserved instance (3 years) using that credit. Also, is there any way to reserve how much bandwidth we can use? Thanks.

    Read the article

  • Amazon EC2 Ami recommendations for free tier?

    - by Console
    Amazon Web Services recently introduced a free tier, where you basically get free resources to try out AWS and run tiny sites and projects, as long as you remain below certain limits on bandwidth, disk storage, etc. Since going over the limits can quickly become quite expensive (for a hobbyist), I would like some recommendations or suggestions about which AMIs I can run on the free tier, for the purpose of trying out Ruby on Rails and/or Django.

    Read the article

  • Creating own Amazon Machine Image - Kernel panic

    - by amra
    I have created my own AMI and registered it on Amazon EC2, but during AMI startup I receive the following error:

      Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,1)

    The image runs locally without any problems. fstab contains:

      proc       /proc   proc   defaults                    0 0
      /dev/sda1  /       ext3   relatime,errors=remount-ro  0 1

    Thanks for the help.
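
    A small diagnostic sketch for ruling out a broken or missing root filesystem before blaming the kernel (AKI) pairing; the image filename and mount point are placeholders, and it assumes the image is an unpartitioned filesystem image as EC2 bundling normally produces:

      # loop-mount the image read-only and check what the chosen kernel will see
      mkdir -p /mnt/img-check
      mount -o loop,ro my-image.img /mnt/img-check
      cat /mnt/img-check/etc/fstab          # does the root device here match what the AKI expects?
      ls /mnt/img-check/lib/modules         # are modules present for the kernel the AMI boots with?
      umount /mnt/img-check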

    Read the article

  • Problem install phpmyadmin on amazon ec2?

    - by yoko
    I googled how to install phpMyAdmin on EC2 and got this command:

      sudo yum install phpmyadmin

    But I keep getting this:

      Loaded plugins: fastestmirror, priorities, security
      Loading mirror speeds from cached hostfile
      amzn-main     | 2.1 kB     00:00
      amzn-updates  | 2.1 kB     00:00
      Setting up Install Process
      No package phpmyadmin available.
      Error: Nothing to do

    I checked my website and it's not installed. Please help.

    EDIT: My server OS is the Amazon Linux AMI, 64 bit. I also tried:

      yum install phpmyadmin --enablerepo=development

    but I still got this error:

      Loaded plugins: fastestmirror, priorities, security
      Error getting repository data for development, repository not found
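
    A minimal sketch assuming phpMyAdmin is provided by the EPEL repository rather than the stock Amazon Linux repos (the repository name, and whether the AMI ships a disabled epel repo definition, are assumptions):

      sudo yum install yum-utils                 # provides yum-config-manager
      sudo yum-config-manager --enable epel      # enable the EPEL repo definition if the AMI ships one
      sudo yum install phpmyadmin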

    Read the article

  • Amazon EC2 + SSL Godaddy are not routing properly in HTTPS

    - by azngunit81
    I have an Amazon EC2 instance with an SSL certificate from GoDaddy just installed. I have successfully managed to install it and get the green HTTPS on the main domain, https://www.example.com. However, it doesn't work for any https://www.example.com/something, although the same route works under http://www.example.com. I am using an .htaccess file for some rewrites:

      Options -MultiViews
      RewriteEngine On
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^ index.php [L]

    The EC2 instance is Ubuntu, if that helps in any way.
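
    One possible explanation (an assumption, not something confirmed in the question) is that the SSL virtual host doesn't apply the same rewrite/override settings as the port-80 host, so the .htaccess rules never run over HTTPS. A minimal sketch for Apache on Ubuntu; the site and directory names are the stock defaults and may differ:

      sudo a2enmod rewrite ssl
      # in the SSL vhost (e.g. /etc/apache2/sites-available/default-ssl), allow .htaccess overrides:
      #   <Directory /var/www>
      #       AllowOverride All
      #   </Directory>
      sudo a2ensite default-ssl
      sudo service apache2 restart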

    Read the article

  • Amazon S3 Iterating Through Multi-Page Results. (withMarker)

    - by Jitu
    Trying to iterate through an Amazon S3 bucket that has around 5,000+ keys stored in it, using sample code based on the Amazon Developer Guide: http://docs.amazonwebservices.com/AmazonS3/latest/dev/ListingObjectKeysUsingNetSDK.html The issue is that iteration fails when the NextMarker passed back is longer than 128 characters, which seems unusual, as the marker is accepted as a string parameter and there is no documented limit on its length:

      request.Marker = response.NextMarker;

    Has anyone faced a similar issue? Thanks in advance.
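
    For comparison, a minimal sketch of the same marker-based paging done at the API level with the aws CLI rather than the .NET SDK from the question (the bucket name is a placeholder):

      marker=""
      while true; do
          # list up to 1000 keys that sort after $marker
          keys=$(aws s3api list-objects --bucket my-bucket --marker "$marker" \
                     --query 'Contents[].Key' --output text)
          if [ -z "$keys" ] || [ "$keys" = "None" ]; then
              break
          fi
          printf '%s\n' $keys                        # emit this page of keys
          marker=$(printf '%s\n' $keys | tail -n 1)  # the last key returned becomes the next marker
      done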

    Read the article

  • Encrypt EC2 API call

    - by Frank
    I have to host an AMI in the Amazon Marketplace. I need to get the instance type whenever a user launches the AMI, e.g. whether it is small, medium or large, and based on that I need to make some changes in the AMI when the instance is created. I can do this with an Amazon API call to get the instance type, but the problem is that the instances created from the AMI will be started by other users, and I cannot embed my AWS credentials in the AMI for the Amazon API. Is there any way to create an anonymous, read-only user that can make only a specific type of EC2 API call? Or can I encrypt my EC2 API credentials so no one else can use them?
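
    Worth noting: the instance type is also exposed locally by the EC2 instance metadata service, which requires no AWS credentials at all. A minimal first-boot sketch (the per-type actions are placeholders):

      # query the metadata service from inside the running instance
      INSTANCE_TYPE=$(curl -s http://169.254.169.254/latest/meta-data/instance-type)
      case "$INSTANCE_TYPE" in
          m1.small)  echo "apply small-instance settings"  ;;   # placeholder action
          m1.medium) echo "apply medium-instance settings" ;;
          m1.large)  echo "apply large-instance settings"  ;;
      esac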

    Read the article

  • AWS autoscaling. Launch Config/Auto Scaling Group and VPC instance with two ifaces

    - by icalvete
    I want to create a Launch Configuration / Auto Scaling Group that builds instances inside a VPC with two subnets ("frontend" and "backend"). I need these instances to have two interfaces: one in the "frontend" subnet and one in the "backend" subnet. I can't see how to do that. It doesn't seem possible from the AWS console, nor with the aws CLI:

      http://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-launch-configuration.html
      http://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-auto-scaling-group.html

    The Launch Configuration documentation doesn't say anything about this either:

      http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/create-lc-with-instanceID.html

    Ideas? Thanks!

    Read the article

  • How to get contents of Amazon shopping cart?

    - by Andrew Whitehouse
    I've been investigating whether it's possible to programmatically get a list of the saved items in my Amazon shopping basket. Their Product Advertising API has methods for getting wishlists (http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?FindingItemsonLists.html) and for working with remote shopping carts (http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?CHAP_WorkingWithRemoteShoppingCarts.html), but the shopping cart of items stored while on the Amazon web site is treated as a local shopping cart, and therefore is not accessible through the Product Advertising API. According to the last link:

      "The opposite of a remote shopping cart is a local shopping cart, which is the shopping cart customers use while shopping on www.amazon.com. It is considered local because Amazon hosts the shopping web pages as well as the shopping cart. Product Advertising API operations work solely with remote shopping carts."

    Has anyone found a way of getting the contents of the "local" cart, apart from scraping the HTML? Thanks.

    Read the article

  • Amazon S3 copy request timeouts

    - by jean27
    We're uploading files to a temporary folder in a bucket. After that, we try to copy the uploaded files to their actual folder and then delete the files in the temporary folder. It doesn't time out when working with a single file. We're using the ThreeSharp API. Stack trace:

      [WebException: The operation has timed out]
        System.Net.HttpWebRequest.GetRequestStream() +5322142
        Affirma.ThreeSharp.Query.ThreeSharpQuery.GenerateAndSendHttpWebRequest(Request request) in C:\Consulting\Amazon\Amazon S3\Affirma.ThreeSharp\Affirma.ThreeSharp\Query\ThreeSharpQuery.cs:386
        Affirma.ThreeSharp.Query.ThreeSharpQuery.Invoke(Request request) in C:\Consulting\Amazon\Amazon S3\Affirma.ThreeSharp\Affirma.ThreeSharp\Query\ThreeSharpQuery.cs:479

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >