Search Results

Search found 1349 results on 54 pages for 'sec'.


  • --log-slave-updates is OFF but updates received from master are still logged to slave binary log?

    - by quanta
    MySQL version 5.5.14 According to the documentation, by default, a slave does not log to its binary log any updates that are received from a master server. Here is my config on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) but the slave still logs the updates received from the master to its binary log. Here are the file sizes: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 and here is a sample query when reading these binary files with the mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something?
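
    One quick way to narrow this down is to check which server id the binlog events carry: events replicated from the master keep the master's server id, while events written by clients connecting directly to the slave carry the slave's own server_id (log_slave_updates only controls the former). A minimal sketch, assuming shell access on the slave and the binlog path shown above:

        # the slave's own server_id
        mysql -e "SELECT @@server_id;"
        # count events per originating server id in one binlog file
        mysqlbinlog /var/log/mysql/mysql-bin.001260 | grep -o 'server id [0-9]*' | sort | uniq -c

    If most events carry the slave's own id, something is writing to the slave directly (a cron job, an application, a local script) and the setting is working as documented.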

    Read the article

  • It takes a long time until Windows XP recognizes a connected USB drive

    - by Pavol G
    I have a problem with my new USB disk. When I connect it to my laptop with Windows XP SP2 it takes about 4-5 min until Windows recognizes it and shows it as a new disk. I can also see (the disk's LED is blinking) that something is scanning the disk when I connect it; when this is done, Windows immediately recognizes it. Also, when I'm copying data to this disk the speed is about 3.5 MB/sec. It's connected using USB 2.0. I tried checking for spyware (using Spybot) and also tried running Windows in safe mode, but I still have the same problems. Do you have any idea what could help to solve this problem? On Windows Vista (another laptop) everything is OK: the disk loads in about 15 sec and the speed is about 20-30 MB/sec. Edit: I tried updating to SP3 - no change. Edit2: When this "strange" scanning occurs I can see that the DPCs process is taking about 50% of the CPU. When the scan ends (after 5 min) this process takes 0% again. Edit3: About the scan time: currently it's taking about 5 min, but this time is growing as I add more data to the disk. Currently it's about 40 GB and I don't want to see how long it will take with 1000 GB. Thanks a lot for any advice!

    Read the article

  • Running a VM off a USB 2.0 Flash Drive - Mac/Parallels/XP

    - by geerlingguy
    I use a MacBook Air as my primary machine, and the 128GB SSD means space is precious. To save about 10 GB, I've been running Parallels with a Windows XP VM off an external USB hard drive, which performs as well in everyday use as running the VM off the internal SSD. So, I bought a tiny 32GB USB 2.0 flash drive, plugged it into the MacBook Air, formatted it first as ExFAT (which was slow), then as Mac OS Extended (Journaled) (which was also slow), copied over my VM file, and ran Parallels off it. My full experience is documented here: http://www.midwesternmac.com/blogs/jeff-geerling/running-windows-xp-vm Straight file copies are really fast: 30 MB/sec read (solid the whole time), and 10-11 MB/sec write (solid the whole time). But I noticed that once XP started running, the disk access rates were in the low KB ranges. Are USB flash drives really that poor at random access, or could I be missing something (the format of the flash drive, etc.)? Of note, I've tried the following, to no great effect: formatting the drive as either ExFAT or Mac OS Extended (Journaled); unplugging all other USB devices and turning off Bluetooth (which runs on the right-side-port USB bus); plugging the flash drive directly into the right side port, the left side port, or a USB 2.0 hub.
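
    Cheap flash is often dramatically worse at small random I/O than at sequential streaming, which would explain KB-range rates once XP starts touching the registry and page file. A hedged way to measure it, assuming fio is installed (e.g. via Homebrew) and the stick is mounted at the hypothetical path /Volumes/FLASH:

        # sequential read vs. 4 KB random read on the flash drive (lays out a 1 GB test file)
        fio --name=seq  --filename=/Volumes/FLASH/fio.test --size=1g --bs=1m --rw=read     --runtime=30 --time_based
        fio --name=rand --filename=/Volumes/FLASH/fio.test --size=1g --bs=4k --rw=randread --runtime=30 --time_based
        # note: without --direct=1 (not supported on macOS) the OS cache can inflate the second run

    If the random-read result collapses to a few hundred IOPS or less, the drive itself is the bottleneck regardless of filesystem.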

    Read the article

  • Problems using wondershaper on KVM guest

    - by Daniele Testa
    I am trying to limit bandwidth on one of my KVM guests using wondershaper. Doing something like this works fine: wondershaper br23 9000 9000 Doing a wget with the setting above gives a download speed of about 1 MB/sec, like it should. However, it seems this is the highest setting I can use, because the following does not work: wondershaper br23 10000 10000 Doing the same wget with this setting downloads at full speed, about 70 MB/sec in my case. Running a status check returns the following: qdisc cbq 1: root refcnt 2 rate 10000Kbit (bounded,isolated) prio no-transmit Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 borrowed 0 overactions 0 avgidle 12500 undertime 0 qdisc sfq 10: parent 1:10 limit 127p quantum 1514b divisor 1024 perturb 10sec Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 qdisc sfq 20: parent 1:20 limit 127p quantum 1514b divisor 1024 perturb 10sec Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 qdisc sfq 30: parent 1:30 limit 127p quantum 1514b divisor 1024 perturb 10sec Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 qdisc ingress ffff: parent ffff:fff1 ---------------- Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 class cbq 1: root rate 10000Kbit (bounded,isolated) prio no-transmit Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 borrowed 0 overactions 0 avgidle 12500 undertime 0 class cbq 1:1 parent 1: rate 10000Kbit (bounded,isolated) prio 5 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 borrowed 0 overactions 0 avgidle 12500 undertime 0 class cbq 1:10 parent 1:1 leaf 10: rate 10000Kbit prio 1 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 borrowed 0 overactions 0 avgidle 12500 undertime 0 class cbq 1:20 parent 1:1 leaf 20: rate 9000Kbit prio 2 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 borrowed 0 overactions 0 avgidle 12500 undertime 0 class cbq 1:30 parent 1:1 leaf 30: rate 8000Kbit prio 2 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 borrowed 0 overactions 0 avgidle 12500 undertime 0 What am I doing wrong? Does wondershaper have some kind of upper limit?
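
    The classic wondershaper script is built on CBQ, whose rate estimation is widely reported to fall apart above roughly 10 Mbit, so traffic simply passes unshaped at higher settings. A rough sketch of shaping the same interface with tc and HTB directly (the 100 Mbit figure is just an example, not from the question):

        # clear whatever wondershaper installed, then egress-shape br23
        tc qdisc del dev br23 root 2>/dev/null
        tc qdisc add dev br23 root handle 1: htb default 10
        tc class add dev br23 parent 1: classid 1:10 htb rate 100mbit ceil 100mbit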

    Read the article

  • Performance monitoring on Linux/Unix

    - by ervingsb
    I run a few Windows servers as well as Linux (Debian and Ubuntu) and AIX servers. I would like to continuously monitor performance on these systems in order to easily identify bottlenecks as well as to have an overview of the general activity on the servers. On Windows, I use Windows Performance Monitor (perfmon) for this. I set up these counters: For bottlenecks: Processor utilization: System\Processor Queue Length Memory utilization: Memory\Pages Input/Sec Disk utilization: PhysicalDisk\Current Disk Queue Length\driveletter Network problems: Network Interface\Output Queue Length\nic name For general activity: Processor utilization: Processor\% Processor Time_Total Memory utilization: Process\Working Set_Total (or per specific process) Memory utilization: Memory\Available MBytes Disk utilization: PhysicalDisk\Bytes/sec_Total (or per process) Network utilization: Network Interface\Bytes Total/Sec\nic name (More information on the choice of these counters at: http://itcookbook.net/blog/windows-perfmon-top-ten-counters ) This works really well. It allows me to look in one place and identify the most common bottlenecks. So my question is, how can I do something equivalent (or just very similar) on Linux servers? I have looked a bit at nmon (http://www.ibm.com/developerworks/aix/library/au-analyze_aix/), a free performance monitoring tool developed for AIX but also available for Linux. However, I am not sure if nmon allows me to set up the above counters; maybe Linux and AIX do not expose exactly the same measures. If so, which ones should I choose and why? If nmon is not the tool to use for this, then what do you recommend?
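
    For reference, rough Linux counterparts of those counters are available from vmstat and the sysstat tools; a sketch of one-off spot checks (continuous collection would normally go through sar's cron job, collectd, or a monitoring system):

        vmstat 1 5       # 'r' column ~ processor queue length, 'si'/'so' ~ pages swapped in/out per sec
        sar -B 1 5       # pgpgin/s, pgpgout/s, majflt/s ~ paging activity
        iostat -x 1 5    # avgqu-sz ~ current disk queue length, rkB/s + wkB/s ~ disk bytes/sec
        sar -n DEV 1 5   # rxkB/s / txkB/s per NIC ~ network bytes/sec
        free -m          # available memory

    The mapping is approximate (Windows and Linux do not expose identical measures), but these cover the same bottleneck categories.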

    Read the article

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which is cached via Memcached): 100 concurrent calls to API call (http): 115ms (median) 260ms (max) 100 concurrent calls to API call (https): 6.1s (median) 11.9s (max) I've done a bit of research, disabled the most expensive SSL ciphers and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers configured and a bunch of Apache workers. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL) seem to do anything. Sample results below: $ ab -k -n 100 -c 100 https://URL This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking URL.com (be patient).....done Server Software: nginx/1.0.15 Server Hostname: URL.com Server Port: 443 SSL/TLS Protocol: TLSv1/SSLv3,AES256-SHA,2048,256 Document Path: /PATH Document Length: 73142 bytes Concurrency Level: 100 Time taken for tests: 12.204 seconds Complete requests: 100 Failed requests: 0 Write errors: 0 Keep-Alive requests: 0 Total transferred: 7351097 bytes HTML transferred: 7314200 bytes Requests per second: 8.19 [#/sec] (mean) Time per request: 12203.589 [ms] (mean) Time per request: 122.036 [ms] (mean, across all concurrent requests) Transfer rate: 588.25 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 65 168 64.1 162 268 Processing: 385 6096 3438.6 6199 11928 Waiting: 379 6091 3438.5 6194 11923 Total: 449 6264 3476.4 6323 12196 Percentage of the requests served within a certain time (ms) 50% 6323 66% 8244 75% 9321 80% 9919 90% 11119 95% 11720 98% 12076 99% 12196 100% 12196 (longest request)
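
    With ab opening a fresh connection per request, most of that gap is usually the full TLS handshake being repeated every time rather than the request itself. A quick way to see what the handshake alone costs on this box, as a sketch (URL.com is the placeholder hostname from the output above):

        # new handshake on every connection vs. resumed sessions
        openssl s_time -connect URL.com:443 -new   -time 10
        openssl s_time -connect URL.com:443 -reuse -time 10

    A large difference between the two runs points at handshake cost (making SSL session caching and client keep-alive the levers that matter) rather than bulk encryption speed.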

    Read the article

  • install zenoss on ubuntu raises 'No valid ZENHOME' error

    - by bxshi
    I've added a user named zenoss and set export ZENHOME=/usr/local/zenoss in ~/.bashrc under /home/zenoss; echo $ZENHOME shows /usr/local/zenoss. To install Zenoss, I switched to the zenoss user and ran install.sh under zenoss-4.2.0/inst; when it tries to run the tests, the error below occurred. ------------------------------------------------------- T E S T S ------------------------------------------------------- Running org.zenoss.utils.ZenPacksTest Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.045 sec <<< FAILURE! Running org.zenoss.utils.ZenossTest Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.71 sec Results : Tests in error: testGetZenPack(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found. testGetPackPath(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found. testGetAllPacks(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found. Tests run: 6, Failures: 0, Errors: 3, Skipped: 0 [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] [INFO] Zenoss Core ....................................... SUCCESS [27.643s] [INFO] Zenoss Core Utilities ............................. FAILURE [12.742s] [INFO] Zenoss Jython Distribution ........................ SKIPPED [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 40.586s [INFO] Finished at: Wed Sep 26 15:39:24 CST 2012 [INFO] Final Memory: 16M/60M [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project utils: There are test failures. [ERROR] [ERROR] Please refer to /home/zenoss/zenoss-4.2.0/inst/build/java/java/zenoss-utils/target/surefire-reports for the individual test results.
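
    A value exported only in ~/.bashrc is easy to lose: non-interactive shells, sudo and the Maven-forked test JVM will not necessarily see it. As a hedged guess, it may be worth making sure the directory exists and the variable is in the environment of the process that actually runs install.sh, for example:

        sudo mkdir -p /usr/local/zenoss && sudo chown zenoss:zenoss /usr/local/zenoss
        # run the installer with ZENHOME explicitly set for that shell
        su - zenoss -c 'export ZENHOME=/usr/local/zenoss; cd ~/zenoss-4.2.0/inst && ./install.sh'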

    Read the article

  • Not getting gigabit from a gigabit link?

    - by marcusw
    I just upgraded my LAN to gigabit. This is what netperf has to say about things. Before: marcus@lt:~$ netperf -H 192.168.1.1 TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 (192.168.1.1) port 0 AF_INET : demo Recv Send Send Socket Socket Message Elapsed Size Size Size Time Throughput bytes bytes bytes secs. 10^6bits/sec 87380 16384 16384 10.02 94.13 After: marcus@lt:~$ netperf -H 192.168.1.1 TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 (192.168.1.1) port 0 AF_INET : demo Recv Send Send Socket Socket Message Elapsed Size Size Size Time Throughput bytes bytes bytes secs. 10^6bits/sec 87380 16384 16384 10.01 339.15 Only 340 Mbps? What's up with that? Background info: I'm connecting through a gigabit switch to a sheevaplug. I have Cat5e wiring in the walls and the run is maybe 30 feet. If you're not familiar with netperf, it has a tendency to give very stable results and never lie.
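
    Before blaming the cabling or switch, it is worth confirming that both ends really negotiated 1000 Mb/s full duplex and trying larger buffers; small ARM boxes like the SheevaPlug are also commonly CPU-bound well below line rate. A sketch (the interface name eth0 is an assumption):

        ethtool eth0 | grep -E 'Speed|Duplex'              # expect Speed: 1000Mb/s, Duplex: Full on both ends
        netperf -H 192.168.1.1 -- -m 64K -s 256K -S 256K   # larger message size and socket buffers

    Watching CPU usage on the SheevaPlug during the run will also show whether it simply cannot push more than ~340 Mbit/s.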

    Read the article

  • ASP.NET Session State SQL Server 2008 R2 Freezes with High CPU Usage

    - by jtseng
    Our ASP.NET website uses SQL Server as the session state provider. We currently host the database on SQL Server 2005 since it does not play well on 2008 R2. We would like to know why, and how to fix it. Hardware setup: Our current session state server runs SQL Server 2005 with the files hosted on a single local disk. It is one of our oldest servers; it has served us well and we never felt the need to upgrade it. The database is about 2 GB holding 6000 sessions. (The sessions are a little big, but we need it.) We have another server with SQL Server 2008 R2 with a much faster CPU, much more RAM, and a much faster hard disk. Situation: One day, we have a huge surge in traffic. The transaction log growth on SQL Server freezes the server for tens of seconds, so that only a few requests get through in minutes. So we load up the new server with ASPState with very large data and log files and point all of our applications to the new server. It chugs along fine for about 5 minutes, and then the CPU usage jumps up to 50% of the 16 cores that Standard Edition can use and freezes for tens of seconds at a time. The files do not record any autogrowth events. The disk queue is nice and low. RAM usage is low. CPU usage on our old server has never been higher than 5%. What happened on the new server? Alternatively, I would like to hear success stories with ASP.NET session state running on SQL Server 2008 R2 with an average write load of 30 MB/sec with bursts up to 200 MB/sec.

    Read the article

  • If the file changes, send an email about the diff

    - by user62367
    I have two scripts: script A and script B. Script A regularly watches the dhcpacks in the logs for the past 2 minutes [the DHCP lease is configured to 2 minutes] and writes the MAC addresses to a file [/dev/shm/dhcpacks-in-last-2min.txt] every 2 minutes. OK, this is working; active clients are in this file. Super! Script B (pastebinned): I'm trying to create a script that watches for changes in the /dev/shm/dhcpacks-in-last-2min.txt file (every 1 sec). But my watcher script is not working reliably: sometimes it works, sometimes it reports that someone logged out when nothing actually happened, and the problem is not in script A. Can someone point out what I'm missing? How can I watch a file (every second) that contains only MAC addresses, so that when a client doesn't get a dhcpack within 2 minutes and its MAC address disappears from /dev/shm/dhcpacks-in-last-2min.txt, I know which client it was? Thank you for any help. I've been patching my script for days now.
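
    For what script B is trying to do, diffing successive snapshots of the file with comm is usually less fragile than tracking state inside the watcher. A minimal bash sketch (the mail command and address are placeholders, not taken from the original script):

        #!/bin/bash
        cur=/dev/shm/dhcpacks-in-last-2min.txt
        prev=/dev/shm/dhcpacks.prev
        sort "$cur" > "$prev"
        while sleep 1; do
            # MACs present in the previous snapshot but missing now = no dhcpack in the last 2 minutes
            comm -23 "$prev" <(sort "$cur") | while read -r mac; do
                echo "$mac sent no dhcpack in the last 2 minutes" | mail -s "dhcp client gone" admin@example.com
            done
            sort "$cur" > "$prev"
        done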

    Read the article

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing webserver setup for handling long-polling, websockets, etc. I have a VM running (Rackspace) with 1GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put Nginx with a simple proxy to Gunicorn. Using ab, Gunicorn spits out 7700 requests/sec, whereas Nginx only does 5000 requests/sec. Is such a performance degradation expected? Hello world: #!/usr/bin/env python def application(environ, start_response): start_response("200 OK", [("Content-type", "text/plain")]) return [ "Hello World!" ] Gunicorn: gunicorn -w8 -k gevent --keep-alive 60 application:application Nginx (stripped): user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; upstream app_server { server 127.0.0.1:8000 fail_timeout=0; } server { listen 8080 default; keepalive_timeout 5; root /home/app/app/static; location / { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://app_server; } } } Benchmark: (results: nginx TCP, nginx UNIX, gunicorn) ab -c 32 -n 12000 -k http://localhost:[8000|8080]/ Running gunicorn over a unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.

    Read the article

  • Copy/paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got a folder which contains 15,000 tiny images (around 400 bytes each). If I copy/paste this folder on my laptop (Windows 7, latest-gen i7, superfast SSD) it takes about 30 seconds (yes, for 7 MB!); the average transfer rate is 400 KBytes/second, which is very slow. I mean, my usual transfer rate is more like hundreds of MBytes per second! I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box that I could get my hands on. On the other hand, if I do the same on a Linux box (Debian-based, ext3 FS, which runs on the same SAN as all the Windows servers I've tested) it's nearly instantaneous! I'm pretty sure the size/number of the files may stress one filesystem more than another, but such a difference!? Why is that? Why is it so slow on the Windows boxes (more than 30 sec for 7 MB) and so fast on the Linux ones (one sec or so)? (This was not a hardlink that I created; it was a true copy.) Is this normal behaviour or something unusual?

    Read the article

  • Apache https is slow

    - by raucous12
    Hey, I've set apache up to use SSL with a self signed certificate. With http (KeepAlive off), I can get over 5000 requests per second. However, with https, I can only get 13 requests per second. I know there is supposed to be a bit of an overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this. Here is the ab log for https: Server Software: Apache/2.2.3 Server Hostname: 127.0.0.1 Server Port: 443 SSL/TLS Protocol: TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256 Document Path: /hello.html Document Length: 29 bytes Concurrency Level: 5 Time taken for tests: 30.49425 seconds Complete requests: 411 Failed requests: 0 Write errors: 0 Total transferred: 119601 bytes HTML transferred: 11919 bytes Requests per second: 13.68 [#/sec] (mean) Time per request: 365.565 [ms] (mean) Time per request: 73.113 [ms] (mean, across all concurrent requests) Transfer rate: 3.86 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 190 347 74.3 333 716 Processing: 0 14 24.0 1 166 Waiting: 0 11 21.6 0 165 Total: 191 361 80.8 345 716 Percentage of the requests served within a certain time (ms) 50% 345 66% 377 75% 408 80% 421 90% 468 95% 521 98% 578 99% 596 100% 716 (longest request)

    Read the article

  • mount.nfs: access denied by server while mounting (Kerberos authentication)

    - by Nick
    There are plenty of references to this error on Google, and even a question here with the same title, but it seems that "access denied by server while mounting" is a catch-all error. I've tried suggestions that others have used to fix this problem, but they did not work in my case. I'm trying to set up a Kerberos-based NFS file server with shared homes for a Linux network. I'm using Ubuntu 11.04 for both servers and clients. When trying to mount a share using: mount 192.168.1.115:/export/home/ /media/tmp I get: mount.nfs: access denied by server while mounting 192.168.1.115:/export/home/ This is the same whether I mount it from a client machine or from the server itself. On the server, in /var/log/syslog I get: Aug 25 06:22:37 nfs mountd[1580]: authenticated mount request from 192.168.1.115:835 for /export/home (/export/home) Aug 25 06:22:37 nfs mountd[1580]: authenticated unmount request from 192.168.1.115:766 for /export/home (/export/home) Which is odd, since it says it has authenticated the request, not denied it. /etc/exports: /export *(rw,fsid=0,crossmnt,insecure,async,no_subtree_check,sec=krb5p:krb5i:krb5) /export/home *(rw,insecure,async,no_subtree_check,sec=krb5p:krb5i:krb5) On the client: me@dt1:/$ rpcinfo -p 192.168.1.115 program vers proto port 100000 2 tcp 111 portmapper 100024 1 udp 37320 status 100024 1 tcp 48460 status 100003 2 tcp 2049 nfs 100003 3 tcp 2049 nfs 100003 4 tcp 2049 nfs 100227 2 tcp 2049 100227 3 tcp 2049 100003 2 udp 2049 nfs 100003 3 udp 2049 nfs 100003 4 udp 2049 nfs 100227 2 udp 2049 100227 3 udp 2049 100021 1 udp 58625 nlockmgr 100021 3 udp 58625 nlockmgr 100021 4 udp 58625 nlockmgr 100021 1 tcp 49616 nlockmgr 100021 3 tcp 49616 nlockmgr 100021 4 tcp 49616 nlockmgr 100005 1 udp 45627 mountd 100005 1 tcp 60265 mountd 100005 2 udp 45627 mountd 100005 2 tcp 60265 mountd 100005 3 udp 45627 mountd 100005 3 tcp 60265 mountd Any suggestions I could try?
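
    One detail that commonly produces exactly this error with krb5-only exports: a bare mount request defaults to sec=sys, which the server then refuses even though mountd logs the request as authenticated. A hedged sketch of what to try, assuming the client has a machine credential and rpc.gssd running:

        # with fsid=0 the NFSv4 path is relative to /export, and the security flavour must match the export
        sudo mount -t nfs4 -o sec=krb5p 192.168.1.115:/home /media/tmp

    On Ubuntu of that era the gss daemons also need enabling: NEED_GSSD=yes in /etc/default/nfs-common on the client and NEED_SVCGSSD=yes in /etc/default/nfs-kernel-server on the server.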

    Read the article

  • Copying files between linux machines with strong authentication but without encryption

    - by Zizzencs
    I'm looking for a suitable program to copy files from one Linux machine to another. The program should be able to do authentication, but it should not do encryption. The reason behind the latter is the lack of CPU power to do the encryption. I copy backups from ~70 machines to a single backup server simultaneously. The single server is an HP ProLiant DL360 G7 with a 10 Gbps ethernet connection and an FC storage backend that can do 4 Gbps. Through FTP I can write ~400MB/sec to the storage (that's about what I want), but through ssh with arcfour I can only do ~100MB/sec while having 100% CPU usage. That's why I want file transfers not to be encrypted. The alternatives that I found are not really suitable: rcp: no authentication, forget it. FTP: making the authentication "secure" (at least preventing plain-text password exchange) is possible but not really easy, and I haven't found a method to force any FTP daemon to encrypt the control channel (for the authentication) but not the data channel (for data transfers). SCP/SFTP: in fairly recent ssh(d) implementations you can't turn off encryption. The best you can do is use the arcfour cipher, but it still uses too much CPU power for my needs. rsync over ssh: same problems as with SCP/SFTP. Plain rsync: from the documentation of rsyncd: "The authentication protocol used in rsync is a 128 bit MD4 based challenge response system. This is fairly weak protection, though (with at least one brute-force hash-finding algorithm publicly available), so if you want really top-quality security, then I recommend that you run rsync over ssh." It's a no-go. Is there a protocol/program that can do exactly what I want? (A big plus would be if it could work on Windows as well and/or if it supported rsync-style copying/synchronization, e.g. copying only the differences.)

    Read the article

  • sysbench memory test on ec2 small instance

    - by caribio
    I'm seeing a problem with sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results). The instance used was the publicly available 'stock' Ubuntu image with no changes to it: ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey Supplying more arguments (such as: --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks. sysbench --num-threads=4 --test=memory run sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 4 Doing memory operations speed test Memory block size: 1K Memory transfer size: 0M Memory operations type: write Memory scope type: global Threads started! Done. Operations performed: 0 ( 0.00 ops/sec) 0.00 MB transferred (0.00 MB/sec) Test execution summary: total time: 0.0003s total number of events: 0 total time taken by event execution: 0.0000 per-request statistics: min: 18446744073709.55ms avg: 0.00ms max: 0.00ms Threads fairness: events (avg/stddev): 0.0000/0.00 execution time (avg/stddev): 0.0000/0.00

    Read the article

  • Effects of internet connection speeds on server queries

    - by SephMerah
    Can my internet connection significantly affect queries run in phpMyAdmin? I currently have 18 down and 30 up. I switched internet connections today and noticed a deep drop in query performance. The query that I am running is SELECT * FROM table. Simple. The table has one row of data. The MySQL server is on the same server as everything else. It is a VPS hosted by GoDaddy. I don't have any other information. CentOS 6.3, MySQL 5.1, phpMyAdmin 3.4. Okay, I used Google's dev tools to inspect the XHR going out and coming in, and this is what it reported: {"success":true,"message":"<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec )<\/div>","sql_query":"<div id=\"result_query\" align=\"\">\n<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec ) SNIP..................."}. So apparently my server is fine. The strange thing, though, is that the returned XHR comes back as soon as I execute the query on the page, in less than a second. So why does phpMyAdmin not show the result immediately? I am going to try a re-install.
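
    To separate the client's network latency from server-side work, it can help to time the whole HTTP round trip from the client and compare it with the 0.0033 sec the server reports for the query itself. A sketch with curl (the URL is a placeholder; phpMyAdmin will also want a valid session cookie):

        curl -o /dev/null -s -w 'dns %{time_namelookup}s  connect %{time_connect}s  ttfb %{time_starttransfer}s  total %{time_total}s\n' \
             'http://your-vps.example.com/phpmyadmin/index.php'

    If connect and time-to-first-byte dominate, the slowdown is the connection to the VPS, not MySQL or phpMyAdmin.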

    Read the article

  • IIS 7 much slower than IIS 6

    - by JoeJoe
    I have an ASP.NET 3.5 web application running fine on Windows 2003 / IIS 6. I published the exact same application to IIS 7.5 (Win2008R2) on a faster box (i5, 8 GB RAM) and it is significantly slower: 5-6 sec per page vs. 1-2 sec per page. During that time, Task Manager shows CPU always under 10%. Both attach to the same database on another box. The benchmark is consistent from any other client browser or machine. I have connection pooling on both, compression on both, the same network subnet, and forms authentication (no SSL yet). Can you give me steps on how to troubleshoot where the delays are being introduced, or settings in IIS 7 that I may have overlooked? I'm just using defaults. There is only one web site on each box. I understand the role of an Application as defined in IIS has changed; there is no special Application defined in IIS.

    Read the article

  • Apache https is slow

    - by raucous12
    Hey, I've set apache up to use SSL with a self signed certificate. With https (KeepAlive on), I can get over 3000 requests per second. However, with https (KeepAlive off), I can only get 13 requests per second. I know there is supposed to be a bit of an overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this. Here is the ab log for https: Server Software: Apache/2.2.3 Server Hostname: 127.0.0.1 Server Port: 443 SSL/TLS Protocol: TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256 Document Path: /hello.html Document Length: 29 bytes Concurrency Level: 5 Time taken for tests: 30.49425 seconds Complete requests: 411 Failed requests: 0 Write errors: 0 Total transferred: 119601 bytes HTML transferred: 11919 bytes Requests per second: 13.68 [#/sec] (mean) Time per request: 365.565 [ms] (mean) Time per request: 73.113 [ms] (mean, across all concurrent requests) Transfer rate: 3.86 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 190 347 74.3 333 716 Processing: 0 14 24.0 1 166 Waiting: 0 11 21.6 0 165 Total: 191 361 80.8 345 716 Percentage of the requests served within a certain time (ms) 50% 345 66% 377 75% 408 80% 421 90% 468 95% 521 98% 578 99% 596 100% 716 (longest request)

    Read the article

  • Tuning Linux + HAProxy

    - by react
    I'm currently rolling out HAProxy on CentOS 6, which will send requests to some Apache HTTPD servers, and I'm having issues with performance. I've spent the last couple of days googling and still can't seem to get past 10k connections/sec consistently when benchmarking (sometimes I do get 30k/sec though). I've pinned the IRQs of the TX/RX queues for both the internal and external NICs to separate CPU cores and made sure HAProxy is pinned to its own core. I've also made the following adjustments to sysctl.conf: # Max open file descriptors fs.file-max = 331287 # TCP Tuning net.ipv4.tcp_tw_reuse = 1 net.ipv4.ip_local_port_range = 1024 65023 net.ipv4.tcp_max_syn_backlog = 10240 net.ipv4.tcp_max_tw_buckets = 400000 net.ipv4.tcp_max_orphans = 60000 net.ipv4.tcp_synack_retries = 3 net.core.somaxconn = 40000 net.ipv4.tcp_rmem = 4096 8192 16384 net.ipv4.tcp_wmem = 4096 8192 16384 net.ipv4.tcp_mem = 65536 98304 131072 net.core.netdev_max_backlog = 40000 net.ipv4.tcp_tw_reuse = 1 If I use ab to hit a webserver directly I easily get 30k/s connections. If I stop the webservers and use ab to hit HAProxy then I get 30k/s connections, but obviously it's useless. I've also disabled iptables for now since I read that nf_conntrack can slow everything down - no change. I've also disabled the irqbalance service. The fact that I can hit each individual device with 30k/s makes me believe the tuning of the servers is OK and that it must be some HAProxy config? Here's the config, which I've built from reading tuning articles, etc.: http://pastebin.com/zsCyAtgU The server is a dual Xeon CPU E5-2620 (6 cores) with 32GB of RAM, running CentOS 6.2 x64. The private and public interfaces are on separate NICs. Anyone have any ideas? Thanks.
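
    Two cheap things to check while the benchmark runs, since a single saturated core or an overflowing accept queue will cap connections/sec long before the whole box looks busy; a sketch:

        mpstat -P ALL 1                  # is one core (HAProxy's, or a NIC IRQ core) pegged while the rest idle?
        netstat -s | grep -i listen      # "times the listen queue of a socket overflowed"
        ss -lnt                          # Send-Q column shows the effective backlog per listener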

    Read the article

  • MySQL Query That Is Returning a Blatantly Incorrect Result

    - by user866190
    I am building a VPS node that is running Ubuntu 10.10LTS, Apache2, Mysql 5.1 and php5. I could not log in to my website admin through the browser, even though I am using the correct login details. So I logged in from the command line to check the results. When I run this query I get expected results: mysql> select * from users; +----+----------+-----------------------+----------+ | id | username | email | password | +----+----------+-----------------------+----------+ | 1 | myUserName | [email protected] | myPassword | +----+----------+-----------------------+----------+ And the same goes for this query: mysql> select * from users where id = 1; +----+----------+-----------------------+----------+ | id | username | email | password | +----+----------+-----------------------+----------+ | 1 | myUserName | [email protected] | myPassword | +----+----------+-----------------------+----------+ 1 row in set (0.00 sec) But when I run this query I get this 'unexpected response': mysql> select * from users where username = 'myUserName' and password = 'myPassword'; Empty set (0.00 sec) I am not sure why this is happening. Any help would be greatly appreciated. BTW.. I will be encrypting the user details but for now I just want to get it set up. Please help, Thanks
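
    When the row is visibly there but an equality match on it fails, the stored value usually differs invisibly from what is typed: trailing whitespace or a newline from an import, or a hashed value. A quick check from the shell (the database name is a placeholder):

        mysql -e "SELECT id, username, LENGTH(password), HEX(password) FROM users" mydatabase

    A LENGTH() longer than the visible string, or HEX() ending in 20/0A/0D, points at stray whitespace; a much longer value means the column already holds a hash rather than the plain password.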

    Read the article

  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly: I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here? EDIT: I conducted some experiments. Write performance on the laptop: The laptop has an xfs filesystem with full disk encryption. It uses the aes-cbc-essiv:sha256 cipher mode with a 256-bit key length. Disk write performance is 58.8 MB/s. iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024 1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s Read performance on the workstation: The files I copied are on a software RAID-5 over 5 HDDs. On top of the RAID is LVM. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (cache was cold). iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M 10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s Network performance: I ran iperf between the two clients. Network performance is 939 Mbit/s. iblue@raven $ iperf -c 94.135.XXX ------------------------------------------------------------ Client connecting to 94.135.XXX, TCP port 5001 TCP window size: 23.2 KByte (default) ------------------------------------------------------------ [ 3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
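
    With full-disk encryption on the laptop and rsync defaulting to an ssh transport, the 22 MB/s may simply be one CPU core doing AES twice (dm-crypt plus the ssh session). Two hedged ways to test that hypothesis (paths, host and module names are placeholders):

        # a cheaper ssh cipher (must be allowed on the server side)
        rsync -av -e "ssh -c arcfour" bigfile.img user@laptop:/tmp/
        # or skip ssh entirely via the rsync daemon protocol (requires an rsyncd module on the target)
        rsync -av bigfile.img rsync://laptop/module/

    If either of these jumps well past 22 MB/s, the cipher (and the single core running it) is the limiting factor, not the network.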

    Read the article

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume, as a backup. rsync seems like the way to go with this, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume: [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql dump.sql.1 sent 821704315 bytes received 31 bytes 3416650.09 bytes/sec total size is 821603948 speedup is 1.00 Then, I ran it again, expecting to see minimal sent/received numbers because it would just be checksums. Instead... [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql dump.sql.1 sent 821704315 bytes received 31 bytes 3402502.47 bytes/sec total size is 821603948 speedup is 1.00 I'm new to rsync so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
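
    One likely explanation: when both source and destination are given as local paths, as they are here (the remote volume is just an s3fs mount point), rsync defaults to --whole-file and skips the delta algorithm entirely. A sketch to force it back on:

        rsync -v --no-whole-file dump.sql.1 ../backup/dump.sql

    Even then the receiving side still reads the old file and writes a complete new copy, so the actual S3 traffic behind s3fs may not shrink much; mainly the "sent" statistic will.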

    Read the article

  • Issues with "There is already an object named 'xxx' in the database'

    - by Hoser
    I'm fairly new to SQL so this may be an easy mistake, but I haven't been able to find a solid solution anywhere else. Problem is whenever I try to use my temp table, it tells me it cannot be used because there is already an object with that name. I frequently try switching up the names, and sometimes it'll let me work with the table for a little while, but it never lasts for long. Am I dropping the table incorrectly? Also, I've had people suggest to just use a permanent table, but this database does not allow me to do that. create table #RandomTableName(NameOfObject varchar(50), NameOfCounter varchar(50), SampledValue decimal) select vPerformanceRule.ObjectName, vPerformanceRule.CounterName, Perf.vPerfRaw.SampleValue into #RandomTableName from vPerformanceRule, vPerformanceRuleInstance, Perf.vPerfRaw where (ObjectName like 'Processor' AND CounterName like '% Processor Time') OR(ObjectName like 'System' AND CounterName like 'Processor Queue Length') OR(ObjectName like 'Memory' AND CounterName like 'Pages/Sec') OR(ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk Queue Length') OR(ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk sec/Read') OR(ObjectName like 'Physical Disk' and CounterName like '% Disk Time') OR(ObjectName like 'Logical Disk' and CounterName like '% Free Space' AND SampleValue > 70 AND SampleValue < 100) order by ObjectName, SampleValue drop table #RandomTableName

    Read the article

  • amazon ec2-medium apache requests per second terrible

    - by TheDayIsDone
    EDITED: the test is now running from localhost to rule out the network. I have a c1.medium using EBS. When I do an Apache benchmark and I'm just printing a "hello" for the test from localhost (no database hits), it's very slow. I can repeat this test many times with the same results. Any thoughts? Thanks in advance. ab -n 1000 -c 100 http://localhost/home/test/ Benchmarking localhost (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Completed 600 requests Completed 700 requests Completed 800 requests Completed 900 requests Completed 1000 requests Finished 1000 requests Server Software: Apache/2.2.23 Server Hostname: localhost Server Port: 80 Document Path: /home/test/ Document Length: 5 bytes Concurrency Level: 100 Time taken for tests: 25.300 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 816000 bytes HTML transferred: 5000 bytes Requests per second: 39.53 [#/sec] (mean) Time per request: 2530.037 [ms] (mean) Time per request: 25.300 [ms] (mean, across all concurrent requests) Transfer rate: 31.50 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 7 21.0 0 73 Processing: 81 2489 665.7 2500 4057 Waiting: 80 2443 654.0 2445 4057 Total: 85 2496 653.5 2500 4057 Percentage of the requests served within a certain time (ms) 50% 2500 66% 2651 75% 2842 80% 2932 90% 3301 95% 3506 98% 3762 99% 3838 100% 4057 (longest request)
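
    Two cheap ways to narrow down where the ~2.5 s mean per request is going, as a sketch: benchmark a static file to take the application code out of the picture, and check which MPM Apache is actually running:

        ab -n 1000 -c 100 http://localhost/static-test.html   # hypothetical static file in the docroot
        httpd -V | grep -i mpm                                 # or: apachectl -V
        # reverse DNS on every request can also add seconds; check that HostnameLookups is Off

    If the static file is fast, the time is being spent in the application (if it is PHP, an opcode cache such as APC usually helps a lot on small instances); if it is still slow, look at the MPM limits or network configuration.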

    Read the article
