Search Results

Search found 14213 results on 569 pages for 'biztalk services'.


  • Amazon AWS, RDS

    - by b0x0rz
    Just need some quick info on two closely related questions. 1) If we use RDS on Amazon AWS, what happens if some of the machines RDS is running on crash? What happens to the data? Is it gone? 2) If we use RDS on Amazon AWS via Beanstalk and decide to stop the instances, is the database (meaning the data) gone? Thanks a lot; a simple yes/no will do, but if you can give more info, or a way to mitigate these issues if any of the answers is unfavorable, that would be great.
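
    A sketch of the mitigation side, assuming the current AWS CLI (which postdates this question) and a placeholder instance identifier "mydb": automated backups give point-in-time recovery if the underlying hardware fails, and a manual snapshot taken before an environment is torn down survives deletion of the instance.

    ```bash
    # Keep automated backups (point-in-time recovery) for 7 days:
    aws rds modify-db-instance --db-instance-identifier mydb \
        --backup-retention-period 7 --apply-immediately

    # Before stopping/deleting the environment, take a manual snapshot; it outlives the instance:
    aws rds create-db-snapshot --db-instance-identifier mydb \
        --db-snapshot-identifier mydb-final-$(date +%Y%m%d)

    # Deleting without a final snapshot discards the data; this form keeps a copy to restore from:
    aws rds delete-db-instance --db-instance-identifier mydb \
        --final-db-snapshot-identifier mydb-final-snapshot
    ```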

    Read the article

  • Is there a way to determine which service makes an outgoing connection?

    - by fluxtendu
    I'm redoing my firewall configuration with more restrictive policies, and I would like to determine the provenance (and/or destination) of some outgoing connections. I have an issue because they come from svchost.exe and go to web content/application delivery providers, or similar:
      5 IPs in range 82.96.58.0 - 82.96.58.255 --> Akamai Technologies akamaitechnologies.com
      3 IPs in range 93.150.110.0 - 93.158.111.255 --> Akamai Technologies akamaitechnologies.com
      2 IPs in range 87.248.194.0 - 87.248.223.255 --> LLNW Europe 2 llnw.net
      205.234.175.175 --> CacheNetworks, Inc. cachefly.net
      188.121.36.239 --> Go Daddy Netherlands B.V. secureserver.net
    So is it possible to know which service makes a particular connection? Or what would you recommend for the rules applied to these? (Comodo Firewall & Windows 7)

    Read the article

  • Can't start SSH on Ubuntu 12.10 AWS EC2

    - by Conor H
    So I've just started playing around with Ubuntu on Amazon EC2. I've just issued the following command to restart ssh, but it has now "killed" SSH: sudo /etc/init.d/ssh restart. I can't seem to SSH to this instance anymore; PuTTY just gives me "connection refused". NOTE: In this case I just restarted SSH to see the result. I didn't change any settings. This was to confirm that it was the restart command that was the problem and not any configs I made. What is the correct way to restart SSH? P.S. That usually works on other Ubuntu boxes. Thanks. EDIT: It is also worth noting that when I ran that command I was taken straight back to a prompt. I didn't get any output on the console.
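
    A quick sketch, assuming the stock OpenSSH package on Ubuntu 12.10 where ssh is an Upstart job; run it from a session you still have (or via EC2 user data if you are already locked out):

    ```bash
    sudo sshd -t                     # validate /etc/ssh/sshd_config before touching the daemon
    sudo service ssh restart         # Upstart job; equivalent to "sudo restart ssh"
    sudo status ssh                  # confirm the job came back up
    sudo netstat -tlnp | grep :22    # confirm sshd is listening again
    ```

    If the restart leaves connections refused, the sshd -t output (or /var/log/auth.log on the instance) usually points at the configuration line responsible.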

    Read the article

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several Web Servers and we have an NFS server. On the NFS server (Linux server) we have several EBS volumes that are mounted and we've used mdadm to implement the different mounted volumes as a single RAID volume. The Web Servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days time. Since all our media is on here it would render our site unusable for the hours while Amazon is working on it. We want to try and prevent this downtime. I was thinking that we can prevent server downtime by perhaps setting up a new server temporarily and attaching the EBS drives (raid volume) to that server and have our web servers point there during maintenance. This is a very high risk operation since this involves several terabytes of our production data. What would be the safe way to move over our logical raid drive (md0) to a new amazon instance? I was hoping that I could start with building the new server, mounting the ebs volumes and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance so that I can first test that everything works and thus having it mounted on two instances at the same time, but I don't believe that is possible with the way that filesystems work. How do I move a Linux software RAID to a new machine? suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so this is not a practical solution. Any ideas?
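
    One hedged outline of the detach/reattach route, assuming the current AWS CLI and placeholder volume, instance and mount-point names; the md array must only ever be assembled on one instance at a time, so a parallel "test mount" on a second instance is not an option:

    ```bash
    # On the old NFS server: stop exports, quiesce writes, unmount, and cleanly stop the array.
    sudo exportfs -ua                    # withdraw NFS exports (daemon/paths vary by distro)
    sudo umount /export/media           # placeholder mount point
    sudo mdadm --stop /dev/md0

    # Move each EBS volume (vol-/i- IDs and device names are placeholders; repeat per volume):
    aws ec2 detach-volume --volume-id vol-11111111
    aws ec2 attach-volume --volume-id vol-11111111 --instance-id i-22222222 --device /dev/sdf

    # On the new instance: reassemble from the md superblocks and mount.
    sudo mdadm --assemble --scan
    sudo mount /dev/md0 /export/media
    ```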

    Read the article

  • Can't connect to EC2 instance in VPC (Amazon AWS)

    - by Ryan Lynch
    I've taken the following steps:
      - Created a VPC (with a single public subnet)
      - Added an EC2 instance to the VPC
      - Allocated an elastic IP
      - Associated the elastic IP with the instance
      - Created a security group and assigned it to the instance
      - Modified the security rules to allow inbound ICMP echo and TCP on port 22
    I've done all this and I still can't ping or ssh into the instance. If I follow the same steps minus the VPC bits I am able to set this up without issue. What step am I missing?
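
    The pieces most often missed at this point are the Internet gateway and the default route on the subnet's route table; a sketch with the current AWS CLI and placeholder IDs:

    ```bash
    aws ec2 create-internet-gateway
    aws ec2 attach-internet-gateway --internet-gateway-id igw-12345678 --vpc-id vpc-12345678
    aws ec2 create-route --route-table-id rtb-12345678 \
        --destination-cidr-block 0.0.0.0/0 --gateway-id igw-12345678

    # Confirm the subnet is associated with that route table, and that the security group
    # and network ACL both allow the inbound ICMP / TCP 22 traffic:
    aws ec2 describe-route-tables --route-table-ids rtb-12345678
    ```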

    Read the article

  • No clue for high load average on top

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 16 CPUs, running the Amazon AMI. Details on the machine: 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform, I/O performance: high, API name: c1.xlarge. One of the machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that top shows no hint of the cause of the load: CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st (see below), and Mem is about 1.5 GB free. Any idea what it could be, or where else we can check? Many thanks for the help.
      # top
      top - 07:57:42 up 4:18, 1 user, load average: 1.36, 1.45, 1.47
      Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
      Cpu(s): 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 7120092k total, 5644920k used, 1475172k free, 532888k buffers
      Swap: 0k total, 0k used, 0k free, 3463936k cached
      PID    USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+    COMMAND
      1557   mysql  20 0  1829m 374m 6448 S 14.3 5.4  11:15.09 mysqld
      6655   apache 20 0  416m  49m  3744 S 9.3  0.7  0:04.85  httpd
      27683  apache 20 0  421m  54m  3708 S 9.0  0.8  0:00.99  httpd
      6682   apache 20 0  424m  57m  3788 S 8.3  0.8  0:03.81  httpd
      16816  apache 20 0  419m  51m  3760 S 4.3  0.7  0:04.09  httpd
      22182  apache 20 0  417m  50m  3756 S 1.7  0.7  0:06.34  httpd
      219    root   20 0  0     0    0    S 0.3  0.0  0:00.34  kworker/7:1
      699    root   20 0  0     0    0    S 0.3  0.0  0:00.40  kworker/3:1
      1      root   20 0  19376 1508 1212 S 0.0  0.0  0:00.29  init
      2      root   20 0  0     0    0    S 0.0  0.0  0:00.00  kthreadd
      3      root   20 0  0     0    0    S 0.0  0.0  0:00.71  ksoftirqd/0
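
    Load average counts tasks in uninterruptible (D) sleep as well as runnable ones, so a high load with an idle CPU usually points at something blocked on I/O or a stuck kernel call. A few generic checks (standard tools, nothing specific to this instance):

    ```bash
    ps -eo state,pid,user,wchan:32,cmd | awk '$1 ~ /D/'   # who is blocked, and on what
    vmstat 1 5                                            # watch the b (blocked) and wa columns
    iostat -xm 1 5                                        # per-device utilisation (sysstat package)
    cat /proc/loadavg                                     # raw figures for comparison over time
    ```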

    Read the article

  • dhcrelay running as both DHCP and DHCPv6 relay agent on CentOS 6.2

    - by Tibor
    I am trying to set up a DHCP relay agent that would relay DHCP requests for both IPv4 and IPv6. I am using CentOS 6.2 and I am using the dhcrelay from the ISC DHCP implementation. I would like to set it up as a service, but the man page for dhcrelay states:
      -6   Run dhcrelay as a DHCPv6 relay agent. Incompatible with the -4 option.
      -4   Run dhcrelay as a DHCPv4/BOOTP relay agent. This is the default mode of operation, so the argument is not necessary, but may be specified for clarity. Incompatible with -6.
    It seems that the -6 and -4 options are incompatible. How would I still make it work for both protocols without rolling my own service wrapper for both cases?
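
    The usual workaround is simply to run two dhcrelay processes, one per protocol, each wrapped in its own init script (for example a copy of /etc/init.d/dhcrelay named dhcrelay6). A sketch of the two invocations, with placeholder interfaces and server addresses (ISC dhcrelay 4.x syntax):

    ```bash
    dhcrelay -4 -i eth0 10.0.0.1                 # DHCPv4/BOOTP: listen on eth0, relay to 10.0.0.1
    dhcrelay -6 -l eth0 -u 2001:db8::1%eth1      # DHCPv6: lower interface eth0, upstream server via eth1
    ```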

    Read the article

  • Connect to RDS inside a VPC using Opsworks located in another VPC

    - by Consuelo Merino
    I have an RDS instance (MySQL) inside a VPC called vpc-a (10.0.0.0/16). This instance is private; it can only be accessed from vpc-a. We created a stack on OpsWorks inside another VPC called vpc-b (10.1.0.0). We want to connect OpsWorks to the RDS instance but it doesn't work; it refuses to connect. I tried adding that subnet to the RDS security group. I've also read a lot of documentation but haven't stumbled across the answer. Any help would be greatly appreciated.
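
    If both VPCs are in the same region, VPC peering (which may postdate this question) plus routes and a security-group rule is the usual answer; a sketch with placeholder IDs:

    ```bash
    aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaaaaaa --peer-vpc-id vpc-bbbbbbbb
    aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-11111111

    # Route each VPC's traffic for the other CIDR over the peering connection:
    aws ec2 create-route --route-table-id rtb-aaaaaaaa --destination-cidr-block 10.1.0.0/16 \
        --vpc-peering-connection-id pcx-11111111
    aws ec2 create-route --route-table-id rtb-bbbbbbbb --destination-cidr-block 10.0.0.0/16 \
        --vpc-peering-connection-id pcx-11111111

    # Let the OpsWorks subnet reach MySQL on the RDS security group:
    aws ec2 authorize-security-group-ingress --group-id sg-11111111 \
        --protocol tcp --port 3306 --cidr 10.1.0.0/16
    ```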

    Read the article

  • Amazon EC2, fastest way to get a node into an existing cluster

    - by imaginative
    I'm new to Amazon AWS. A lot of the time I hear about folks spawning instances and almost instantly putting them behind a load balancer and into an existing cluster. In the traditional world of managed machines, this would include provisioning hardware, installing an OS, configuring the network on the machine and, once the network is available, using a tool of your choice such as CFEngine, Puppet or Chef to bootstrap the machine based on its class. It seems like there are "shortcuts" that are able to get a server of a particular class up and running in Amazon EC2. If I have a particular stack running on my server, such as Erlang, Tomcat 6, etc., what's the fastest way to get these up and running and hooked into Amazon's load balancer? From network, to software stack, to kernel tuning? Is it a combination of creating an AMI and then running a tool like Puppet against the new instance? Any ideas?
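
    A common pattern, sketched with the current AWS CLI and placeholder IDs: bake an AMI with the heavy stack (Erlang, Tomcat, kernel settings) already installed, finish per-node configuration with user data or Puppet/Chef at boot, then register the instance with the ELB.

    ```bash
    aws ec2 create-image --instance-id i-11111111 --name "app-erlang-tomcat-v1"
    aws ec2 run-instances --image-id ami-12345678 --count 1 --instance-type m1.large \
        --user-data file://bootstrap.sh      # e.g. a script that runs puppet agent or chef-client
    aws elb register-instances-with-load-balancer --load-balancer-name my-elb \
        --instances i-22222222
    ```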

    Read the article

  • Reasonable Location to Install Web Service on Server

    - by Mr. Disappointment
    Firstly, I'm a software developer and not qualified as any kind of system or server expert so I'm looking for advice in order to help me prevent faults on our server. I've written a modular system to carry out certain tasks for us autonomously to prevent us from writing the same old code over and over again. This consists of a Windows Service (.NET), a Web Service (WCF), a shared Class Library, and a Database which will run on a Windows Server 2003. The problem comes, for me, in deployment. Specifically the web service - naturally the local service (and required shared library) are persisted (by default and convention) in the Program Files folder, but storing the web service here just seems absurd to me (even though we'd lock it down to appropriate use only). Should the files be stored some place else all together? Or split them up and store the web service elsewhere?

    Read the article

  • Why can't I create an Alias Resource Record Set for an EC2 instance

    - by praterade
    I have been working with AWS for over a year, setting up EC2 instances, domains, ELBs, etc. When I want to assign a subdomain to an EC2 instance, I have to create an elastic IP (that I pay for), then assign a CNAME record to that elastic IP. When I want to assign a subdomain to an ELB (load balancer) instance, I just create an alias resource record set to the ELB. I've read over the docs and don't understand why AWS doesn't support aliasing to instances. Am I missing a key concept here? Wouldn't it be simpler to just alias EC2 instances and skip the whole elastic IP bit?

    Read the article

  • AWS VPC public web application connecting to database via VPN

    - by Chris
    What I am trying to do is set up a web application that is public facing but makes calls to a database that is on an internal network. I have been trying to set up an AWS VPC with a public subnet, private subnet, and hardware VPN access but I can't seem to get it to work. Can someone help me understand what the process flow here should be? My understanding is that I need a public subnet to handle the website requests and then a private subnet to connect to the VPN, but what I do not understand is how to send requests down the chain and get the response. Basically what I am asking is: how can I query the database via VPN from that public website? I've tried route forwarding but I can't successfully complete the process. Does anyone have any advice on something I can read on this subject or an FAQ on setting something like this up? Is it even possible? I'm out of my league here; this is not my area of expertise but I'm being asked to solve this problem. Any help would be appreciated. Thanks
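
    On the routing side, the public subnet only needs a route for the on-premises CIDR that points at the virtual private gateway terminating the VPN; a sketch with placeholder IDs and an assumed internal CIDR of 192.168.0.0/16:

    ```bash
    aws ec2 create-route --route-table-id rtb-11111111 \
        --destination-cidr-block 192.168.0.0/16 --gateway-id vgw-11111111
    # or let the VPN advertise its routes into that table automatically:
    aws ec2 enable-vgw-route-propagation --route-table-id rtb-11111111 --gateway-id vgw-11111111

    # Quick reachability test from the web instance (placeholder DB address/port):
    nc -zv 192.168.10.5 3306
    ```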

    Read the article

  • Reverse Proxy Wordpress with Lighttpd

    - by Jonah
    I am deploying an application and a Wordpress installation on AWS. I have Wordpress set up under Apache on an EC2, and my application under Lighttpd, and I want to reverse-proxy Wordpress through the application node. This works fine, I just set up the reverse proxy in Lighttpd as so:
      $HTTP["url"] =~ "^/blog" {
          proxy.server = ( "/blog" => ( "blog" => ( "host" => "123.456.789.123", "port" => 80 )) )
      }
      url.rewrite-once = ( "^(.*?)$" => "/index.php/$1" )
    However, the issue is in the rewrite. When I enable rewriting, it catches it before the reverse proxy, and routes to index.php on the application server. I need it to not rewrite if it's going to the blog. I tried various regex matches and other configurations, but I haven't been able to get it to support rewriting and proxying at the same time. How can this be done?
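
    One approach that may work, sketched only: scope the rewrite with a negative URL match so requests for /blog are left to the proxy. The config path is a Debian-style placeholder, the IP is the one from the question, and mod_proxy and mod_rewrite must both be loaded.

    ```bash
    sudo tee /etc/lighttpd/conf-enabled/20-blog-proxy.conf > /dev/null <<'EOF'
    $HTTP["url"] =~ "^/blog" {
        proxy.server = ( "/blog" => ( "blog" => ( "host" => "123.456.789.123", "port" => 80 ) ) )
    }
    $HTTP["url"] !~ "^/blog" {
        url.rewrite-once = ( "^(.*?)$" => "/index.php/$1" )
    }
    EOF
    sudo /etc/init.d/lighttpd restart
    ```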

    Read the article

  • Migrate an intermediate CA to a new root

    - by Tim Brigham
    Using the Microsoft CA is there any way to cut over to a new certificate authority from an intermediate authority? Both my systems are Microsoft CAs - I have a 2008 R2 Enterprise CA (intermediate) and an old 2003 CA (root). The 2003 box bit the dust and I don't have good backups. I still have a few months before the CRL expires; instead of having to cut over to a new intermediate authority is there a ready way to simply point this intermediate authority to a new offline CA?

    Read the article

  • Amazon EC2 tools for Debian?

    - by Jonik
    What is the recommended way of getting command-line Amazon EC2 tools on Debian? So, basically the same as this question, but for EC2 instead of S3. Ubuntu has ec2-ami-tools and ec2-api-tools, but I couldn't find equivalent packages for Debian. A blog post titled "Install EC2 AMI & API tools in Debian" talks about installing Amazon's packages outside package management, but that seems a little clumsy.
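
    In the absence of a native Debian package, one hedged route is Amazon's own zip plus a few environment variables; the download URL and credential variables below are the historical ones and may have changed since.

    ```bash
    sudo apt-get install -y unzip default-jre-headless     # the tools need a Java runtime
    wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
    sudo unzip ec2-api-tools.zip -d /usr/local
    export EC2_HOME=$(echo /usr/local/ec2-api-tools-*)     # the zip unpacks into a versioned directory
    export PATH="$PATH:$EC2_HOME/bin"
    export JAVA_HOME=$(readlink -f /usr/bin/java | sed 's|/bin/java||')
    # Credentials: newer tool releases read AWS_ACCESS_KEY / AWS_SECRET_KEY, older ones want
    # X.509 certificates via EC2_CERT / EC2_PRIVATE_KEY; set whichever applies.
    ec2-describe-regions                                    # quick smoke test
    ```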

    Read the article

  • Time between AWS Notifying of Scale Down and Terminating instance

    - by SteveEdson
    Here is the scenario: there are multiple EC2 instances behind a load balancer. When traffic dies down, the SCALE_DOWN policy is triggered from a CloudWatch alarm. What I would like is for the instance that is going to be terminated, or a separate server altogether, to be able to run a quick script that will execute a few commands to ensure all data has been transferred. My initial question was going to be how I can send a notification when an instance is going to be terminated by an Auto Scaling SCALE_DOWN policy. But then I saw this question: Amazon EC2 notifying the instance when the autoscale service terminates it. If the notification is sent, how much time is there before the instance actually gets terminated? Are there any parameters to specify this time? Would it be a better idea to notify an instance that it is no longer needed, and get the instance to terminate itself once it has finished running the final script? Or am I making this into a bigger problem than it actually is, and there's a far simpler solution?
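
    Auto Scaling lifecycle hooks (introduced after this question was asked) address exactly this: the instance is held in a Terminating:Wait state for a configurable heartbeat timeout so a drain script can run, and the hook is released when the script finishes. A sketch with placeholder names:

    ```bash
    aws autoscaling put-lifecycle-hook \
        --lifecycle-hook-name drain-before-terminate \
        --auto-scaling-group-name my-asg \
        --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
        --heartbeat-timeout 300 --default-result CONTINUE

    # When the drain script has finished moving data off the instance, release the hook:
    aws autoscaling complete-lifecycle-action \
        --lifecycle-hook-name drain-before-terminate \
        --auto-scaling-group-name my-asg \
        --lifecycle-action-result CONTINUE \
        --instance-id i-11111111
    ```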

    Read the article

  • Why does an EBS volume mounted in an Ubuntu 12.04 EC2 instance as /dev/sdh1 appear as /dev/xvdh1?

    - by Andres
    When mounting an EBS volume on Ubuntu specified as /dev/sdh1, it actually shows up at /dev/xvdh1. The AWS console still thinks it's attached at /dev/sdh1, so it took a while to realize that it was actually there, just in a different place. I ran into this problem a long time ago using Ubuntu on EC2, and I just ran into it again (https://forums.aws.amazon.com/post!reply.jspa?messageID=351382), and it seems like I'm not alone: https://forums.aws.amazon.com/thread.jspa?threadID=68957&tstart=0. I haven't found a good answer as to why this happens or how to fix it. Any ideas?
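
    The Xen paravirtual block driver exposes the volume as /dev/xvdX even though the API attached it as /dev/sdX; the trailing letter is preserved, only the prefix changes. Referring to the filesystem by UUID sidesteps the rename; device names and mount point below are placeholders:

    ```bash
    lsblk                                   # see what the kernel actually calls the volume
    sudo blkid /dev/xvdh1                   # note its UUID
    echo 'UUID=<uuid-from-blkid> /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
    sudo mount -a
    ```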

    Read the article

  • Running a service with a user from a different domain not working

    - by EWood
    I've been stuck on this for a while, not sure what permission I'm missing. I've got domain A and domain B; A trusts B, but B does not trust A. I'm trying to run a service in domain A with a user account from domain B and I keep getting Access is Denied. I'm using the FQDN after the username and the password is correct. The user account from domain B is a local administrator on the domain A server, and the user account has the "log on locally" and "log on as a service" permissions. Must. Get. This. Working. Update: I found something interesting in the logs I must have missed. This ought to get me pointed in the right direction. Event ID: 40961 - LsaSrv : The Security System could not establish a secured connection with the server ldap/{server fqdn/fqdn@fqdn} No authentication protocol was available. I've found a few fixes for 40961 but nothing has worked so far. I've verified reverse lookup zones. nslookup resolves the correct DC properly. Still working at it. Update: In response to Evan, I ran " runas /env /user:ftp_user@fqdn "notepad" " then entered the user's password and notepad came up. It seems to work successfully. This issue is now resolved. The problem is visible in the screenshot. Windows tries to use the UPN for the user account if you dig your user out of AD with the Browse button. This fails every time even with the right user and password. Simply using the SAM format (Domain\User) works. So simple, yet so annoying. Can't believe I missed this. Thanks to everyone who helped.

    Read the article

  • Running Docker in a VPC and accessing a container from another machine in the VPC

    - by Bogdan Gaza
    I'm having issues while running Docker in an AWS VPC. Here is my setup: I've got two machines running in the VPC, 10.0.100.150 and 10.0.100.151, both with elastic IPs assigned to them and both running in the same internet-enabled subnet. Let's say I'm running a web server that serves static files in a container on the 10.0.100.150 machine; the container has IP 172.17.0.2, and its port 8111 is forwarded to port 8111 on the machine. I'm trying to access the static files from my local machine (or another non-VPC machine; I also tried an EC2 instance not running in the VPC) and it works flawlessly. If I try to access the files from the other machine (10.0.100.151) it hangs. I'm using wget to pull the files. I tried to debug it with tcpdump and ngrep, and what I have seen is that the request reaches the container. If I ngrep on the host machine I see the requests going in but no response going back. If I ngrep on the container I see the requests going in and the response going back. I've tried multiple iptables setups (with postrouting enabled, with manually forwarded ports, etc.) but no success. Help in any way, even debugging directions, would be much appreciated. Thanks!
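
    A few things worth checking on the Docker host (10.0.100.150); these are the stock forwarding/NAT pieces Docker normally sets up, shown as a diagnostic baseline rather than a guaranteed fix:

    ```bash
    sysctl net.ipv4.ip_forward                  # must be 1 for forwarded container traffic
    sudo iptables -t nat -L POSTROUTING -n -v   # expect a MASQUERADE rule for 172.17.0.0/16
    sudo iptables -L FORWARD -n -v              # look for DROP/REJECT rules ahead of Docker's chain

    # If the MASQUERADE rule is missing, replies to 10.0.100.151 leave with the container's
    # address and are dropped; restoring the standard rule looks like this:
    sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
    ```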

    Read the article

  • Sound entirely stopped working on Windows 8 on a Macbook Pro

    - by Kelvin Bongers
    I am currently running Windows 8 (downloaded from DreamSpark) on a Macbook Pro. This worked fine for a while but suddenly all audio stopped working. When I go to "Playback devices" and hit "Test" on the speakers I get treated with the following message: This also shows up right after I try restarting. I tried disallowing exclusive usage of the devices but it makes no difference. Edit: After some looking around I tried changing the sample rate and bit depth so I would get a dialog screen to force Windows to go around the program that's using it. I did get the dialog but then instead of changing it I got the following error: Edit 2: I narrowed it down to a single service failing to start, the Multimedia Class Scheduler service fails to start with the following error:

    Read the article

  • Vagrant doesn't detect chef-solo unless re-installed

    - by nightowl
    I am using Vagrant to test my Chef recipes in Amazon AWS, and I am encountering an irritating issue: I initially assumed that Vagrant would install Chef itself (as it does when using VirtualBox as the provider) but it seems that this needs to be done using the cloud-init script. However, even after I successfully installed the chef gem via cloud-init I was still getting the following error: The chef binary (either chef-solo or chef-client) was not found. A quick google of this error suggested three probable causes:
      - Chef had failed to install
      - It had installed, but the directory was not in the $PATH environment variable
      - It had installed and was in the $PATH, but with incorrect permissions
    I logged in and double checked; chef-solo and chef-client were installed; the path variable for the user, sudo and root all included /usr/local/bin; and permissions were all fine. I managed to solve this problem by uninstalling and reinstalling the gem using sudo gem install chef. I don't understand why this should resolve the issue, and it is a bit of a problem if I have to ssh into a test box and manually install the gem every time. Does anyone have any suggestions why this might be happening?
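
    One hedged alternative to the gem: have cloud-init install Chef with the (historical) Opscode omnibus installer, which puts chef-solo and chef-client in /usr/bin where any PATH will find them. The URL below is the old opscode.com one and may have moved since.

    ```bash
    #!/bin/bash
    # user-data sketch: system-wide Chef install, independent of Ruby/gem bin directories
    set -e
    curl -L https://www.opscode.com/chef/install.sh | bash
    which chef-solo chef-client     # both should resolve under /usr/bin
    ```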

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions and would like to be able to restore files that were deleted regardless of the reason. I have read about the versioning feature for S3 buckets, but I cannot seem to find if recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded, but never modified, and then deleted. Are files deleted in this scenario recoverable? Then, we thought we may just backup the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But, it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to backup S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time. Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then using something like duplicity (http://duplicity.nongnu.org/) we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage, and create a new backup bucket using a new, current copy of the original bucket...and repeat this process. This seems like it would work and minimize the storage / transfer costs, but I'm not sure if duplicity allows bucket-to-bucket transfers directly without bringing data down to the controlling client first. So, I guess there are a couple questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
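
    Two building blocks, sketched with the current AWS CLI (which may postdate parts of this question): on a versioned bucket a DELETE only writes a delete marker, so even never-modified objects keep a recoverable version, and aws s3 sync between buckets copies server-side without pulling the data through the client. Bucket names and the object prefix are placeholders.

    ```bash
    aws s3api put-bucket-versioning --bucket my-original-bucket \
        --versioning-configuration Status=Enabled

    aws s3 sync s3://my-original-bucket s3://my-backup-bucket   # server-side bucket-to-bucket copy

    # Inspect surviving versions (including delete markers) for a given object:
    aws s3api list-object-versions --bucket my-original-bucket --prefix uploads/some-file
    ```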

    Read the article

  • MySQL query very slow on Amazon RDS but really fast on my laptop?

    - by Luc
    I would love to know if anybody knows why this is happening. I've just migrated over to Amazon RDS for our website, and our biggest query, which takes 0.2 seconds to execute on my MacBook, takes 1.3 seconds to execute on the most expensive RDS instance. Obviously I've disabled the query cache (and tested this) on my local computer, and both databases are exactly the same: InnoDB, both with the same indexes, etc. It's costing us a fortune ($2000 per month) for the fastest RDS instance and I'm losing faith quickly. Any ideas?
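
    A quick way to separate network round-trips from server-side execution time is the session profiler, run against both servers; the host, credentials and table name below are placeholders.

    ```bash
    mysql -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -u app -p appdb -e "
      SET profiling = 1;
      SELECT SQL_NO_CACHE COUNT(*) FROM orders;   -- substitute the slow query here
      SHOW PROFILES;
      SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
    ```

    If the profiled execution time matches the laptop but the wall-clock time does not, the gap is network latency between the client and RDS rather than the instance itself.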

    Read the article

  • CPU Utilization LAMP stack

    - by Max
    We've got an EC2 m2.4xlarge running Magento (CentOS 5.6, httpd 2.2, PHP 5.2.17 with eAccelerator 0.9.5.3, MySQL 5.1.52). Right now we're getting a large traffic spike, and our top looks like this:
      top - 09:41:29 up 31 days, 1:12, 1 user, load average: 120.01, 129.03, 113.23
      Tasks: 1190 total, 18 running, 1172 sleeping, 0 stopped, 0 zombie
      Cpu(s): 97.3%us, 1.8%sy, 0.0%ni, 0.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st
      Mem: 71687720k total, 36898928k used, 34788792k free, 49692k buffers
      Swap: 880737784k total, 0k used, 880737784k free, 1586524k cached
      PID    USER   PR NI VIRT  RES  SHR  S %CPU  %MEM TIME+    COMMAND
      2433   mysql  15 0  23.6g 4.5g 7112 S 564.7 6.6  33607:34 mysqld
      24046  apache 16 0  411m  65m  28m  S 26.4  0.1  0:09.05  httpd
      24360  apache 15 0  410m  60m  25m  S 26.4  0.1  0:03.65  httpd
      24993  apache 16 0  410m  57m  21m  S 26.1  0.1  0:01.41  httpd
      24838  apache 16 0  428m  74m  20m  S 24.8  0.1  0:02.37  httpd
      24359  apache 16 0  411m  62m  26m  R 22.3  0.1  0:08.12  httpd
      23850  apache 15 0  411m  64m  27m  S 16.8  0.1  0:14.54  httpd
      25229  apache 16 0  404m  46m  17m  R 10.2  0.1  0:00.71  httpd
      14594  apache 15 0  404m  63m  34m  S 8.4   0.1  1:10.26  httpd
      24955  apache 16 0  404m  50m  21m  R 8.4   0.1  0:01.66  httpd
      24313  apache 16 0  399m  46m  22m  R 8.1   0.1  0:02.30  httpd
      25119  apache 16 0  411m  59m  23m  S 6.8   0.1  0:01.45  httpd
    Questions: Would giving mysqld more memory help it cache queries and react faster? If so, how? Other than splitting MySQL and PHP onto separate servers (which we're about to do), is there anything else we could/should be doing? Thanks!
    UPDATE: Here's our my.cnf along with the output of mysqltuner. It looks like a cache problem. Thanks again!
      # cat /etc/my.cnf
      [client]
      port = ****
      socket = /var/lib/mysql/mysql.sock
      [mysqld]
      datadir=/mnt/persistent/mysql
      port=****
      socket=/var/lib/mysql/mysql.sock
      key_buffer = 512M
      max_allowed_packet = 64M
      table_cache = 1024
      sort_buffer_size = 8M
      read_buffer_size = 4M
      read_rnd_buffer_size = 2M
      myisam_sort_buffer_size = 64M
      thread_cache_size = 128M
      tmp_table_size = 128M
      join_buffer_size = 1M
      query_cache_limit = 2M
      query_cache_size= 64M
      query_cache_type = 1
      max_connections = 1000
      thread_stack = 128K
      thread_concurrency = 48
      log-bin=mysql-bin
      server-id = 1
      wait_timeout = 300
      innodb_data_home_dir = /mnt/persistent/mysql/
      innodb_data_file_path = ibdata1:10M:autoextend
      innodb_buffer_pool_size = 20G
      innodb_additional_mem_pool_size = 20M
      innodb_log_file_size = 64M
      innodb_log_buffer_size = 8M
      innodb_flush_log_at_trx_commit = 1
      innodb_lock_wait_timeout = 50
      innodb_thread_concurrency = 48
      ft_min_word_len=3
      [myisamchk]
      ft_min_word_len=3
      key_buffer = 128M
      sort_buffer_size = 128M
      read_buffer = 2M
      write_buffer = 2M
      # ./mysqltuner.pl
      >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
      >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
      >> Run with '--help' for additional options and output filtering
      -------- General Statistics --------------------------------------------------
      [--] Skipped version check for MySQLTuner script
      [OK] Currently running supported MySQL version 5.1.52-log
      [OK] Operating on 64-bit architecture
      -------- Storage Engine Statistics -------------------------------------------
      [--] Status: +Archive -BDB +Federated +InnoDB -ISAM -NDBCluster
      [--] Data in MyISAM tables: 2G (Tables: 26)
      [--] Data in InnoDB tables: 749M (Tables: 250)
      [!!] Total fragmented tables: 262
      -------- Security Recommendations -------------------------------------------
      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 31d 2h 30m 38s (680M q [253.371 qps], 2M conn, TX: 4825B, RX: 236B)
      [--] Reads / Writes: 89% / 11%
      [--] Total buffers: 20.6G global + 15.1M per thread (1000 max threads)
      [OK] Maximum possible memory usage: 35.4G (51% of installed RAM)
      [OK] Slow queries: 0% (35K/680M)
      [OK] Highest usage of available connections: 53% (537/1000)
      [OK] Key buffer size / total MyISAM indexes: 512.0M/457.2M
      [OK] Key buffer hit rate: 100.0% (9B cached / 264K reads)
      [OK] Query cache efficiency: 42.3% (260M cached / 615M selects)
      [!!] Query cache prunes per day: 4384652
      [OK] Sorts requiring temporary tables: 0% (1K temp sorts / 38M sorts)
      [!!] Joins performed without indexes: 100404
      [OK] Temporary tables created on disk: 17% (7M on disk / 45M total)
      [OK] Thread cache hit rate: 99% (537 created / 2M connections)
      [!!] Table cache hit rate: 0% (1K open / 946K opened)
      [OK] Open file limit used: 9% (453/5K)
      [OK] Table locks acquired immediately: 99% (758M immediate / 758M locks)
      [OK] InnoDB data size / buffer pool: 749.3M/20.0G
      -------- Recommendations -----------------------------------------------------
      General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Enable the slow query log to troubleshoot bad queries
        Adjust your join queries to always utilize indexes
        Increase table_cache gradually to avoid file descriptor limits
      Variables to adjust:
        query_cache_size (> 64M)
        join_buffer_size (> 1.0M, or always use indexes with joins)
        table_cache (> 1024)
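
    A conservative follow-up to the mysqltuner hints above; the numbers are guesses to validate on a staging copy rather than drop-in production values, and the restart drops the warm InnoDB buffer pool, so schedule it for a quiet window.

    ```bash
    sudo cp /etc/my.cnf /etc/my.cnf.$(date +%F).bak

    # Raise the caches mysqltuner flags (table_cache, query_cache_size, join_buffer_size):
    sudo sed -i 's/^table_cache .*/table_cache = 4096/' /etc/my.cnf
    sudo sed -i 's/^query_cache_size.*/query_cache_size = 256M/' /etc/my.cnf
    sudo sed -i 's/^join_buffer_size .*/join_buffer_size = 4M/' /etc/my.cnf

    sudo service mysqld restart

    # Watch whether table opens and query-cache prunes stop climbing afterwards:
    mysql -e "SHOW GLOBAL STATUS LIKE 'Opened_tables'; SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';"
    ```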

    Read the article
