Search Results

Search found 3103 results on 125 pages for 'amazon appstore'.

Page 7/125 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Wordpress : Automatically transfer media files to Amazon S3

    - by Ron Ranieri
    I've been using a VPS to host 7 WordPress websites; most of them need a lot of storage but very little RAM and traffic. So I'm thinking of moving the static file content (the uploads folder) to Amazon S3, and I'm looking for the most viable solution to this. I want every website to have its own bucket and newly uploaded media files to be automatically uploaded to Amazon S3 without using a plugin. I'm OK with a cron job; for example, files would be uploaded to my server first, then transferred to S3 and deleted from my server every 24 hours. Or is there any way for me to change the default upload directory to my S3 bucket without sacrificing any WordPress functionality (resizing, titles, etc.)? What do you think is the most efficient way to do this? Currently I'm looking at this plus a cron job, but I would like to know if a better option exists.
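
    A minimal sketch of the cron-driven "upload locally, then push to S3" approach, using Python and boto3 (the uploads path, bucket name, and 24-hour retention below are placeholders, not part of the question):

      #!/usr/bin/env python
      # sync_uploads_to_s3.py -- copy new WordPress uploads to S3, then prune old local copies
      import os
      import time
      import boto3

      UPLOADS_DIR = "/var/www/site1/wp-content/uploads"   # hypothetical uploads folder
      BUCKET = "site1-media"                               # hypothetical bucket, one per site
      MAX_AGE = 24 * 3600                                  # keep local copies for 24 hours

      s3 = boto3.client("s3")

      for root, _, files in os.walk(UPLOADS_DIR):
          for name in files:
              path = os.path.join(root, name)
              key = os.path.relpath(path, UPLOADS_DIR)
              s3.upload_file(path, BUCKET, key)            # re-uploading unchanged files is harmless
              if time.time() - os.path.getmtime(path) > MAX_AGE:
                  os.remove(path)                          # prune local files older than 24 hours

    A crontab entry such as 0 * * * * python /usr/local/bin/sync_uploads_to_s3.py would run it hourly; serving the media from S3 afterwards still requires rewriting the attachment URLs, which is exactly what the plugin route normally handles.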

    Read the article

  • appstore launch request unable to complete

    - by Joey
    I am trying to launch an App Store page with either of these calls: [[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"http://itunes.apple.com/us/app/myappname/id999999999?mt=8"]]; [[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"http://www.itunes.com/apps/myappname"]]; Both of these URLs work when I enter them in a browser. However, from both the simulator and the device, I get a "Your request could not be completed" error dialog right after it appears to try to launch the App Store. Is there something obvious that I'm doing wrong?

    Read the article

  • Setting up scripts in Amazon EC2 Cloud

    - by racket99
    Hello, I am currently running a few Perl and Python scripts on a Windows PC and would like to port them over to Amazon EC2 servers running 64-bit Linux. The scripts are basic web scrapers that go to a variety of websites, get data and then save it daily as CSV files. I would like to install these in the cloud and get them running in an automated way so that they will run without my intervention. Also, given that I don't want to lose all the data if the instance crashes, I should also upload the CSV files to Amazon S3. Any idea how I can do this? I am not terribly well versed in Linux, nor do I know Perl/Python well. What is the best way for me to tackle this?
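
    As one hedged illustration of the "scrape, save a dated CSV, copy it to S3" flow in Python (the target URL and bucket name are placeholders; the real scrapers would parse whatever data they actually need):

      #!/usr/bin/env python
      # daily_scrape.py -- fetch a page, write a dated CSV, push it to S3
      import csv
      import datetime
      import boto3
      import requests

      URL = "https://example.com/data"      # hypothetical site being scraped
      BUCKET = "my-scraper-output"          # hypothetical S3 bucket for the CSVs

      resp = requests.get(URL, timeout=30)
      rows = [["fetched_at", "status", "bytes"],
              [datetime.datetime.utcnow().isoformat(), resp.status_code, len(resp.content)]]

      filename = "scrape-%s.csv" % datetime.date.today().isoformat()
      with open(filename, "w", newline="") as f:
          csv.writer(f).writerows(rows)

      boto3.client("s3").upload_file(filename, BUCKET, filename)

    A crontab line like 0 6 * * * python /home/ubuntu/daily_scrape.py makes it run unattended every morning, and with the CSVs copied to S3 they survive an instance failure.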

    Read the article

  • Installing an EC2 (EBS based) instance on another AWS account

    - by imaginative
    We're moving from one AWS account to another. I have a couple of EC2 instances I would like to move over. These instances are EBS-based. Googling around, I'm seeing all sorts of convoluted answers for how to appropriately launch an EBS-based instance on a new account. Surely there has to be a simple way of going about this. What I have done so far is back up the EBS volume as a snapshot. I then altered the permissions of the snapshot to allow the new AWS account number to access it. I am not sure where to go from here. Do I have to create an image from the EBS snapshot? Upload it to S3? What are the next steps?
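
    For reference, a hedged sketch of the remaining steps with boto3, run from the new account after the snapshot has been shared with it (the snapshot ID, device name, and instance type are placeholders; an older paravirtual instance would also need a matching kernel image specified at registration):

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Register an AMI whose root device is a volume restored from the shared snapshot.
      image = ec2.register_image(
          Name="migrated-instance",
          Architecture="x86_64",
          VirtualizationType="hvm",
          RootDeviceName="/dev/sda1",
          BlockDeviceMappings=[{"DeviceName": "/dev/sda1",
                                "Ebs": {"SnapshotId": "snap-0123456789abcdef0"}}],
      )

      # Launch a new instance in this account from the freshly registered image.
      ec2.run_instances(ImageId=image["ImageId"], InstanceType="m1.small",
                        MinCount=1, MaxCount=1)

    Nothing needs to be uploaded to S3 for an EBS-backed instance; sharing the snapshot and registering an image from it is enough.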

    Read the article

  • Why do I get xfs_freeze "Operation not supported" error with ec2-consistent-snapshot? Debian Squeeze w/ext4 filesystem

    - by Michael Endsley
    I'm running the following command: [root@somehost ~]# ec2-consistent-snapshot --aws-credentials-file '/some/dir/file' --mysql --mysql-socket '/var/run/mysqld/mysql.sock' --mysql-username 'backup' --mysql-password 'password' --freeze-filesystem '/dev/xvda1' vol-xxxxxx It returns this error: xfs_freeze: cannot freeze filesystem at /dev/xvda1: Operation not supported ec2-consistent-snapshot: ERROR: xfs_freeze -f /dev/xvda1: failed(256) snap-eeb66393 xfs_freeze: cannot unfreeze filesystem mounted at /dev/xvda1: Invalid argument ec2-consistent-snapshot: ERROR: xfs_freeze -u /dev/xvda1: failed(256) This is being run on Debian Squeeze with the ext4 Linux filesystem. Can anyone explain this error to me, or what might be its cause? When googling, I found information about it needing to be executed with sudo, but I'm performing the entire operation as root. I also found some posts about trying to run it after a CentOS upgrade using yum, but the situation appeared dissimilar. It's difficult to find things referring to this situation exactly. xfs_freeze is available for use on the filesystem. Is it possible that the filesystem, despite being ext4, somehow doesn't support freezing? Sorry if I've missed some bit of StackExchange etiquette with this post -- it's my first venture here!

    Read the article

  • CloudFormation with Ubuntu throwing errors

    - by Sammaye
    I have been doing some reading and have come to the understanding that if you wish to use a launchConfig with Ubuntu you will need to install the cfn-init file yourself which I have done: "Properties" : { "KeyName" : { "Ref" : "KeyName" }, "SpotPrice" : "0.05", "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] }, "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ], "InstanceType" : { "Ref" : "InstanceType" }, "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [ "#!/bin/bash\n", "apt-get -y install python-setuptools\n", "easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-1.0-6.tar.gz\n", "cfn-init ", " --stack ", { "Ref" : "AWS::StackName" }, " --resource LaunchConfig ", " --configset ALL", " --access-key ", { "Ref" : "WorkerKeys" }, " --secret-key ", {"Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"]}, " --region ", { "Ref" : "AWS::Region" }, " || error_exit 'Failed to run cfn-init'\n" ]]}} But I have a problem with this setup that I cannot seem to get a decent answer to. I keep getting this error in the logs: Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: config-scripts-per-once already ran once Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-boot with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-instance with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-user with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] cc_scripts_user.py[WARNING]: failed to run-parts in /var/lib/cloud/instance/scripts Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[WARNING]: Traceback (most recent call last):#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 117, in run_cc_modules#012 cc.handle(name, run_args, freq=freq)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 78, in handle#012 [name, self.cfg, self.cloud, cloudinit.log, args])#012 File "/usr/lib/python2.7/dist-packages/cloudinit/__init__.py", line 326, in sem_and_run#012 func(*args)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/cc_scripts_user.py", line 31, in handle#012 util.runparts(runparts_path)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 223, in runparts#012 raise RuntimeError('runparts: %i failures' % failed)#012RuntimeError: runparts: 1 failures Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[ERROR]: config handling of scripts-user, None, [] failed Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling keys-to-console with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling phone-home with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling final-message with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] cloud-init-cfg[ERROR]: errors running cloud_config [final]: ['scripts-user'] I have absolutely no idea what scripts-user means and Google is not helping much here either. I can, when I ssh into the server, see that it runs the userdata script since I can access cfn-init as a command whereas I cannot in the original AMI the instance is made from. 
    However, I have a launchConfig: "Comment" : "Install a simple PHP application", "AWS::CloudFormation::Init" : { "configSets" : { "ALL" : ["WorkerRole"] }, "WorkerRole" : { "files" : { "/etc/cron.d/worker.cron" : { "content" : "*/1 * * * * ubuntu /home/ubuntu/worker_cron.php &> /home/ubuntu/worker.log\n", "mode" : "000644", "owner" : "root", "group" : "root" }, "/home/ubuntu/worker_cron.php" : { "content" : { "Fn::Join" : ["", [ "#!/usr/bin/env php", "<?php", "define('ROOT', dirname(__FILE__));", "const AWS_KEY = \"", { "Ref" : "WorkerKeys" }, "\";", "const AWS_SECRET = \"", { "Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"]}, "\";", "const QUEUE = \"", { "Ref" : "InputQueue" }, "\";", "exec('git clone x '.ROOT.'/worker');", "if(!file_exists(ROOT.'/worker/worker_despatcher.php')){", "echo 'git not downloaded right';", "exit();", "}", "echo 'git downloaded';", "include_once ROOT.'/worker/worker_despatcher.php';" ]]}, "mode" : "000755", "owner" : "ubuntu", "group" : "ubuntu" } } } } This does not seem to run at all. I have checked for the file's existence in my home directory and it's not there. I have checked for the cron job entry and it's not there either. I cannot, after reading through the documentation, seem to see what's potentially wrong with my code. Any thoughts on why this is not working? Am I missing something blatant?

    Read the article

  • EBS full device confusion

    - by Mike
    I have a 500GB EBS device (/dev/xvdf) mounted to /vol, and all data on the box seems to be written to /vol correctly (see the du output below). For some reason /dev/xvda1 is totally full. Any idea what's going on here?

      $ df -h
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/xvda1       32G   30G  8.0K 100% /
      udev             34G  8.0K   34G   1% /dev
      tmpfs            14G  176K   14G   1% /run
      none            5.0M     0  5.0M   0% /run/lock
      none             34G     0   34G   0% /run/shm
      /dev/xvdb       827G  201M  785G   1% /mnt
      /dev/xvdf       500G  145G  356G  29% /vol

      $ du -sh *
      8.7M  bin
      18M   boot
      8.0K  dev
      5.1M  etc
      48K   home
      0     initrd.img
      80M   lib
      4.0K  lib64
      16K   lost+found
      4.0K  media
      20K   mnt
      4.0K  opt
      0     proc
      40K   root
      176K  run
      7.1M  sbin
      4.0K  selinux
      4.0K  srv
      0     sys
      4.0K  tmp
      414M  usr
      356M  var
      0     vmlinuz
      145G  vol

    Read the article

  • Auto-Attach EBS-volume to a New Spot Instance?

    - by Jeff
    I am experimenting with EC2 spot instances and need some data to be retained between terminations. As I understand it, when the current price goes above my max bid, the instance will be automatically terminated. I assume any init scripts I have will be run on shutdown, so I can push data off to the EBS volume before unmounting. My question is: how can I automatically mount the same EBS volume on the new spot instance once the price goes down, since it won't have any of the init scripts that I would've loaded onto the root volume the first time? Do I have to create a custom AMI, or is there some other way to achieve this?
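
    One way to do this without baking a custom AMI is to pass a small boot script in the new instance's user data that attaches and mounts the volume itself. A hedged boto3 sketch (the volume ID, device name, and mount point are placeholders, and the instance needs credentials, e.g. an IAM role, to call attach_volume):

      #!/usr/bin/env python
      # attach_data_volume.py -- run at boot on each new spot instance (e.g. via user data)
      import subprocess
      import time
      import boto3
      import requests

      VOLUME_ID = "vol-0123456789abcdef0"   # hypothetical data volume
      DEVICE = "/dev/xvdf"
      MOUNT_POINT = "/data"

      # the instance discovers its own ID from the metadata service
      instance_id = requests.get(
          "http://169.254.169.254/latest/meta-data/instance-id", timeout=2).text

      ec2 = boto3.client("ec2")
      ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=instance_id, Device=DEVICE)

      # wait for the attachment to complete before mounting
      ec2.get_waiter("volume_in_use").wait(VolumeIds=[VOLUME_ID])
      time.sleep(5)  # give the kernel a moment to create the device node
      subprocess.check_call(["mount", DEVICE, MOUNT_POINT])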

    Read the article

  • Can you change an AWS Elastic Load Balancer health check without causing instances to go out of service?

    - by Anton I. Sipos
    For a number of reasons I need to change the health check URL of a live site behind an ELB. The ELB is configured for health checks every 30 seconds, with a healthy threshold of 2 and unhealthy threshold of 2. I need to ensure I make this change with no outage. If I make the change to the health check URL, and assuming the URL checks successfully, will the instances stay healthy on the load balancer, or will they go out of service until they succeed 2 health checks (in 1 minute)?
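
    For what it's worth, the change itself is a single call on a classic ELB, which swaps in the whole health-check configuration at once; a hedged boto3 sketch (the load balancer name and new target URL are placeholders):

      import boto3

      elb = boto3.client("elb")  # classic ELB API
      elb.configure_health_check(
          LoadBalancerName="my-live-elb",
          HealthCheck={
              "Target": "HTTP:80/new-healthcheck.html",  # the new URL to probe
              "Interval": 30,
              "Timeout": 5,
              "HealthyThreshold": 2,
              "UnhealthyThreshold": 2,
          },
      )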

    Read the article

  • Should I persist images on EBS or S3?

    - by enes
    Hi, I am migrating my Java, Tomcat, MySQL server to AWS EC2. I have already attached an EBS volume for storing MySQL data. In my web application people may upload images, so I need to persist them. There are 2 alternatives in my mind: 1- save uploaded images to the EBS volume, or 2- use the S3 service. The following are my notes; please be skeptical about them, as my expertise is not in servers but in software development. EBS plus: S3 storage is more expensive (0.15 $/GB vs 0.1 $/GB). S3 plus: serving static files from EBS may influence my web server's performance negatively. Is this true? Does serving images affect server performance notably? With S3 my server will not be responsible for serving static files. S3 plus: serving static files from EBS may incur I/O cost, though it will probably be minor. EBS plus: people say EBS is faster. S3 plus: people say S3 is safer for persistence. EBS plus: no need to learn an API; it is straightforward to save the images to the EBS volume. In short, I cannot decide, and I would be happy if you could guide me. Thanks

    Read the article

  • How do I set up DNS with nic.io to point to an AWS EC2 server?

    - by Chad Johnson
    I purchased a domain one week ago via nic.io. I have elected to provide my own DNS [because they provided no other option]. I'm trying to point my .io domain at my EC2 server instance. I've allocated an elastic IP and associated it with the instance. I can SSH into the instance and access port 80 via the IP address just fine. The IP is 54.235.201.241. nic.io support said the following: "You have selected to provide your own DNS and therefore if there is an issue with the set-up of the name servers you will need to contact your DNS provider." So, I created a Hosted Zone via Route 53 in AWS. This created NS and SOA records. I then set the Primary and Secondary servers at nic.io's domain admin page to the SOA record domains. Additionally, I set the optional servers to the NS domains. I did this two days ago, and I can't access the server via the domain. I ran a DNS check here...still not sure what I need to do: http://mydnscheck.com/?domain=chadjohnson.io&ns1=&ns2=&ns3=&ns4=&ns5=&ns6=. I have no idea what I'm supposed to do. Does anyone have any ideas?
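
    Assuming the name servers entered at nic.io end up being the NS hosts that Route 53 assigned to the hosted zone, the zone also needs an A record pointing the bare domain at the elastic IP. A hedged boto3 sketch of creating that record (the hosted-zone ID is a placeholder; the IP is the one from the question):

      import boto3

      route53 = boto3.client("route53")
      route53.change_resource_record_sets(
          HostedZoneId="Z0000000000000",   # hypothetical hosted-zone ID from Route 53
          ChangeBatch={"Changes": [{
              "Action": "UPSERT",
              "ResourceRecordSet": {
                  "Name": "chadjohnson.io.",
                  "Type": "A",
                  "TTL": 300,
                  "ResourceRecords": [{"Value": "54.235.201.241"}],
              },
          }]},
      )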

    Read the article

  • Windows Server 2012 VPN Server on AWS VPC EC2 Instance

    - by abran
    I'd like to use Windows Server 2012 VPN on an AWS VPC EC2 instance. The VPC has one public subnet and the EC2 instance has one network adapter. I've taken the following steps, but have been unsuccessful; am I missing a step or configuration? Thanks. Configured an elastic IP for the VPC; enabled protocols 47, 50, & 51; added the RRAS role to the (EC2 instance) server; configured RRAS for VPN only. Note: I'm able to RDP to the EC2 instance, but not able to ping the external IP.
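
    For anyone reproducing the "enabled protocols 47, 50, & 51" step, here is a hedged boto3 sketch of the equivalent security-group rules (the group ID and the wide-open source range are placeholders). Note that the failed ping is expected unless an ICMP rule is also added, since security groups deny all inbound traffic by default:

      import boto3

      ec2 = boto3.client("ec2")
      ec2.authorize_security_group_ingress(
          GroupId="sg-0123456789abcdef0",   # hypothetical security group of the instance
          IpPermissions=[
              {"IpProtocol": "47", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # GRE (PPTP)
              {"IpProtocol": "50", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # ESP (IPsec)
              {"IpProtocol": "51", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # AH (IPsec)
              {"IpProtocol": "tcp", "FromPort": 1723, "ToPort": 1723,        # PPTP control
               "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
          ],
      )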

    Read the article

  • Is it safe to use S3 over HTTP from EC2, as opposed to HTTPS

    - by Marc
    I found that there is a fair amount of overhead when uploading a lot of small files to S3. Some of this overhead comes from SSL itself. How safe is it to talk to S3 without SSL when running in EC2? From the awesome comments below, here are some clarifications: this is NOT a question about HTTPS versus HTTP or the sensitivity of my data. I'm trying to get a feeling for the networking and protocol particularities of EC2 and S3. For example: Are we guaranteed to be passing through only the AWS network when communicating from EC2 to S3? Can other AWS users (apart from staff) sniff my communications between EC2 and S3? Is authentication on their API done on every call, and thus are credentials passed on every call, or is there some kind of authenticated session? I am using the jets3t lib. Feedback from people with some AWS experience would be appreciated. Thanks, Marc
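
    On the authentication point: the S3 REST API signs every request individually with an HMAC derived from the secret key, so the secret itself is never sent on the wire and there is no long-lived session (jets3t, like other S3 clients, does this per call). A small boto3 illustration of the same idea, generating a presigned URL so the signature is visible (bucket and key are placeholders):

      import boto3

      s3 = boto3.client("s3")
      url = s3.generate_presigned_url(
          "get_object",
          Params={"Bucket": "my-bucket", "Key": "some/object.csv"},  # placeholders
          ExpiresIn=300,
      )
      # The query string carries an expiry and an HMAC signature, not the secret key.
      print(url)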

    Read the article

  • How can I manage AWS VPC ssh access accounts and keys across multiple instances?

    - by deitch
    I am setting up a standard AWS VPC structure: a public subnet, some private subnets, hosts on each, ELB, etc. Operational network access will be via either an SSH bastion host or an OpenVPN instance. Once on the network (bastion or OpenVPN), admins use SSH to access the individual instances. From what I can tell, all of the docs seem to depend on a single user with sudo rights and a single public SSH key. But is that really best practice? Isn't it much better to have each user access each host under their own name? I can deploy accounts and SSH public keys to each server, but that rapidly gets unmanageable. How do people recommend managing user accounts? I've looked at: IAM: it doesn't look like IAM has a method for automatically distributing accounts and SSH keys to VPC instances. IAM via LDAP: IAM doesn't have an LDAP API. LDAP: set up my own LDAP servers (redundant, of course); a bit of a pain to manage, but still better than managing on every host, especially as we grow. Shared SSH key: rely on the VPN/bastion to track user activities. I don't love it, but... What do people recommend? NOTE: I moved this over after accidentally posting it on StackOverflow.

    Read the article

  • Easy GUI way to auto scale EC2 and RDS: AWS console, Scalr, Ylastic...?

    - by Zillo
    I am managing all my instances with the AWS Management Console (the GUI web console), but now I want to use Auto Scaling, and it seems that this cannot be done with that console. Yes, there is CloudWatch, but I can only create alarms (e-mail notifications); it seems that CloudWatch needs you to add the auto scaling policy somewhere else (via the command line?). I would like to use some easy GUI interface. Ylastic and Scalr seem to be good options. Which one do you think is better? Regarding Scalr, is there any difference between the open source software Scalr and the service Scalr.net? I mean, is the GUI interface the same? I like the idea of Scalr because I do not need to give my Secret Access Key to a third party (like in Ylastic or in Scalr.net). One question about the Scalr software: does it have to be installed on the instances, or can it be installed on another machine? Do I need to set up all my security permissions, AMIs, snapshots, etc. again, or can I use the AWS Management Console for everything and Scalr just to auto scale?

    Read the article

  • PHP cannot connect to AWS RDS server

    - by Eugene Lim
    I have an EC2 instance with Apache, PHP and phpMyAdmin. I have connected phpMyAdmin to manage the AWS RDS server; they are in the same security group. But when I try to use a PHP script to connect to the AWS RDS server, it gives me SQLSTATE[HY000] [2003] Can't connect to MySQL server on (xxx.xxx.xxx.xxx). I did some research, and most of the results say to use setsebool -P httpd_can_network_connect=1 (reference: http://www.filonov.com/2009/08/07/sqlstatehy000-2003-cant-connect-to-mysql-server-on-xxx-xxx-xxx-xxx-13/). But I have no idea which server to configure, or how.
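
    One hedged way to narrow this down, run from the EC2 instance itself, is a bare TCP check against the RDS endpoint: if the TCP connection fails, the problem is the RDS security group (it must allow inbound 3306 from the instance); if it succeeds, the problem is on the PHP/SELinux side (the setsebool command from the link would be run on the EC2 web server, not on RDS). The endpoint below is a placeholder:

      import socket

      RDS_HOST = "mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com"  # hypothetical RDS endpoint
      PORT = 3306

      try:
          socket.create_connection((RDS_HOST, PORT), timeout=5).close()
          print("TCP connection to RDS succeeded -- look at PHP/SELinux configuration")
      except OSError as exc:
          print("TCP connection failed (%s) -- check that the RDS security group "
                "allows inbound 3306 from this instance" % exc)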

    Read the article

  • How to use private DNS to map a private IP to a "non-registered" domain name

    - by PapelPincel
    I would like to use private DNS (Route 53 in our case) in order to map hosts to EC2 instance private IP addresses. The hosted zone we are using for testing is not declared with any registrar (company-test.com.). There are different servers (Nagios, Puppet, ActiveMQ, ...) all hosted in EC2, which means their IPs can change over time (restart, new instance launch, ...). It would be great if I could use DNS instead of the clients' /etc/hosts for mapping private IPs to internal domain names... The ActiveMQ server URL is activemq.company-test.com and it maps (via an A record) to the private IP address of the AMQ server. This URL only needs to be reachable by other EC2 instances owned by the same AWS account. My question is: how do I configure the EC2 instances so they can reach the ActiveMQ server WITHOUT having to buy the company-test.com domain?
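
    Whichever resolution approach is used, keeping the record current is easy to script. A hedged boto3 sketch that re-points the A record at an instance's current private IP after a restart (the instance ID and hosted-zone ID are placeholders, and it assumes the other instances actually resolve names through that zone):

      import boto3

      INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical ActiveMQ instance
      ZONE_ID = "Z0000000000000"            # hypothetical hosted-zone ID for company-test.com

      ec2 = boto3.client("ec2")
      reservation = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"][0]
      private_ip = reservation["Instances"][0]["PrivateIpAddress"]

      boto3.client("route53").change_resource_record_sets(
          HostedZoneId=ZONE_ID,
          ChangeBatch={"Changes": [{
              "Action": "UPSERT",
              "ResourceRecordSet": {
                  "Name": "activemq.company-test.com.",
                  "Type": "A",
                  "TTL": 60,
                  "ResourceRecords": [{"Value": private_ip}],
              },
          }]},
      )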

    Read the article

  • EC2 persistence of machine

    - by Seagull
    I notice that EBS-backed AMIs are much like VMware instances -- I can stop them and also persist them to disk, and all this is done relatively quickly. However, I believe that S3-backed machines are different: they cannot be 'stopped', but rather can only be shut down, written to S3 and started up again, with at least a 15-minute delay in doing so. Why the difference? How do AMI providers decide whether to use EBS or S3? If I need to stop/persist/restart machines relatively frequently, am I implicitly limited to just EBS-backed machines?

    Read the article

  • Database modularity with EBS volumes

    - by Eclyps19
    I would like to add modularity to my websites on EC2 instances by encapsulating the site files and the MySQL files in their own EBS volumes. The end result that I'm going for is the ability to quickly mount a volume or two to different servers running the same AMI (for testing/development/emergency maintenance, etc.), as well as maintain separate snapshots of each. I'm able to do this fairly easily with a single database by symlinking my mounted database EBS volume to the appropriate places (/var/lib/mysql, /etc/my.cnf, /var/log/mysqld.log), but I'm not sure if it would even be possible to have multiple databases on different EBS volumes running concurrently. Example: /website1/www.website.com /database1/ /website2/www.otherwebsite.com /database2/ Could anybody shed some light on this for me? Is it possible? Is it a bad idea? Thanks.

    Read the article

  • Which is faster for read access on EC2; local drive or EBS?

    - by Phillip Oldham
    Which is faster for read access on an EC2 instance; the "local" drive or an attached EBS volume? I have some data that needs to be persisted so have placed this on an EBS volume. I'm using OpenSolaris, so this volume has been attached as a ZFS pool. However, I have a large chunk of EC2 disk space that's going to go unused, so I'm considering re-purposing this as a ZFS cache volume but I don't want to do this if the disk access is going to be slower than that of the EBS volume as it would potentially have a detrimental effect.

    Read the article

  • Problems migrating an EBS backed instance over AWS Regions

    - by gshankar
    Note: I asked this question on the EC2 forums too but haven't received any love there. Hopefully the ServerFault community will be more awesome. The new AWS Sydney region opening up is something that we've been waiting for for a long time, but I'm having a lot of trouble migrating our instances over from N. California. I managed to migrate 1 instance over using CloudyScripts to move a snapshot and then firing up a new instance in the Sydney region. This was a very new instance, so both the source and destination were running on a Ubuntu 12.04 LTS server and I had no issues there. However, the rest of our instances are all Ubuntu 10.04 LTS, and with these I'm having a lot of problems. I've tried the following: 1- Following the AWS whitepaper on moving instances which was given to us at the recent Customer Appreciation Day in Sydney where the new region was launched. The problem with this approach was with the last step (Step 19), where you register the image: ec2-register -s snap-0f62ec3f -n "Wombat" -d "migrated Wombat" --region ap-southeast-2 -a x86_64 --kernel aki-937e2ed6 --block-device-mapping "/dev/sdk=ephemeral0" I keep getting this error: Client.InvalidAMIID.NotFound: The AMI ID 'ami-937e2ed6' does not exist, which I think is due to the kernel_id not existing in the Sydney region? 2- Using CloudyScripts to move a snapshot and then creating a new volume and attaching it to a new instance in Sydney. This results in the instance just hanging on boot and failing the status checks. I can't SSH in or look at the server log. I suspect that my issue is with finding the right kernel_id for the volume in the new region. However, I can't seem to work out how to find this kernel_id; the ones I've tried (from the original instance) don't result in the Client.InvalidAMIID.NotFound: The AMI ID 'ami-937e2ed6' error, and any other kernel_id just won't boot. I've tried both 12.04 and 10.04 versions of Ubuntu. Nothing seems to work, I've been banging my head against a wall for a while now, please help! New (broken) instance: i-a1acda9b, ami-9b8611a1, aki-31990e0b. Source instance: i-08a6664e, ami-b37e2ef6, aki-937e2ed6. P.S. I also tried following this guide on updating my Ubuntu LTS version to 12.04 before doing the migration, but it didn't seem to work either; I still get stuck on updating the kernel_id: http://ubuntu-smoser.blogspot.com.au/2010/04/upgrading-ebs-instance.html
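
    As a hedged aside: later EC2 API releases (after this question's era) added direct cross-region copies of snapshots and AMIs, which sidestep the kernel-ID juggling entirely. A boto3 sketch, reusing the source AMI ID from the question (the name is a placeholder):

      import boto3

      # run against the destination region (ap-southeast-2, Sydney)
      ec2_sydney = boto3.client("ec2", region_name="ap-southeast-2")

      copy = ec2_sydney.copy_image(
          SourceRegion="us-west-1",        # N. California
          SourceImageId="ami-b37e2ef6",    # source AMI from the question
          Name="wombat-sydney",            # hypothetical name for the copied AMI
      )
      print(copy["ImageId"])               # the new AMI ID, usable in Sydney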

    Read the article

  • AWS Elastic load balancer doesn't decrease instances from Alarm Trigger

    - by jchysk
    I have a load balancer that I created an auto-scaling group and launch config for. I created the auto-scaling group with a min size of 1 and a max size of 20. I have a scale-down policy: as-put-scaling-policy SBMScaleDownPolicy --auto-scaling-group SBMAutoScaleGroup --adjustment=-1 --type ChangeInCapacity --cooldown 300 Then I set up an alarm: mon-put-metric-alarm SBMLowCPUAlarm --comparison-operator LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold 35 --alarm-actions arn:aws:autoscaling:us-east-1:policystuffhere:autoScalingGroupName/SBMAutoScaleGroup:policyName/SBMScaleDownPolicy --dimensions "AutoScalingGroupName=SBMAutoScaleGroup" When average CPU usage over 10 minutes is under 35, the alarm shows up in CloudWatch as "In Alarm State" but doesn't decrease the number of instances. Also, if there's only one instance running, it'll spin up another to reach 2 even if a scale-up alarm isn't hit. It seems like the default value is just set to 2 somehow. How can I change this?
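
    One hedged thing to check (sketched with boto3, reusing the group name from the question) is the group's DesiredCapacity: if it is sitting at 2, Auto Scaling will keep the group at 2 instances regardless of the alarm until something lowers the desired capacity.

      import boto3

      autoscaling = boto3.client("autoscaling")

      group = autoscaling.describe_auto_scaling_groups(
          AutoScalingGroupNames=["SBMAutoScaleGroup"])["AutoScalingGroups"][0]
      print(group["MinSize"], group["DesiredCapacity"], group["MaxSize"])

      # Lower the desired capacity explicitly if it is stuck at 2.
      autoscaling.set_desired_capacity(
          AutoScalingGroupName="SBMAutoScaleGroup",
          DesiredCapacity=1,
          HonorCooldown=True,
      )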

    Read the article

  • AWS elastic load balancer basic issues

    - by Jones
    I have an array of EC2 t1.micro instances behind a load balancer and each node can manage ~100 concurrent users before it starts to get wonky. i would THINK if i have 2 such instances it would allow my network to manage 200 concurrent users... apparently not. When i really slam the server (blitz.io) with a full 275 concurrents, it behaves the same as if there is just one node. it goes from 400ms response time to 1.6 seconds (which for a single t1.micro is expected, but not 6). So the question is, am i simply not doing something right or is ELB effectively worthless? Anyone have some wisdom on this? AB logs: Loadbalancer (3x m1.medium) Document Path: /ping/index.html Document Length: 185 bytes Concurrency Level: 100 Time taken for tests: 11.668 seconds Complete requests: 50000 Failed requests: 0 Write errors: 0 Non-2xx responses: 50001 Total transferred: 19850397 bytes HTML transferred: 9250185 bytes Requests per second: 4285.10 [#/sec] (mean) Time per request: 23.337 [ms] (mean) Time per request: 0.233 [ms] (mean, across all concurrent requests) Transfer rate: 1661.35 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 1 2 4.3 2 63 Processing: 2 21 15.1 19 302 Waiting: 2 21 15.0 19 261 Total: 3 23 15.7 21 304 Single instance (1x m1.medium direct connection) Document Path: /ping/index.html Document Length: 185 bytes Concurrency Level: 100 Time taken for tests: 9.597 seconds Complete requests: 50000 Failed requests: 0 Write errors: 0 Non-2xx responses: 50001 Total transferred: 19850397 bytes HTML transferred: 9250185 bytes Requests per second: 5210.19 [#/sec] (mean) Time per request: 19.193 [ms] (mean) Time per request: 0.192 [ms] (mean, across all concurrent requests) Transfer rate: 2020.01 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 1 9 128.9 3 3010 Processing: 1 10 8.7 9 141 Waiting: 1 9 8.7 8 140 Total: 2 19 129.0 12 3020

    Read the article

  • Disable disk caches in AWS EBS for PostgreSQL?

    - by Alexandr Kurilin
    It's my understanding that, without correctly disabling OS-level and drive-level caching, there is a chance that in case of system failure the Write-Ahead Log might not be saved correctly and in fact might get corrupted, possibly preventing data recovery. I've already made sure that wal_sync_method=fdatasync; however, I was unable to make any configuration changes with hdparm since I get the following: $ sudo hdparm -I /dev/xvdf /dev/xvdf: HDIO_DRIVE_CMD(identify) failed: Invalid argument Looks like that option is not available in the kind of setup you get on EC2. Am I missing anything here? Are there any other obvious caches I have to disable to ensure the WAL's safety?

    Read the article
