Search Results


  • How to diagnose repeated "Starting up database '<dbname>'"

    - by Richard Slater
    I have a SQL 2008 server which is predominantly used as a development server. In the last two weeks it has been having occasional "fits", and I have isolated the cause of these fits as CHECKDB being run almost continuously. The following is logged to the Windows Event Log (Source: MSSQLSERVER, Category: Server):

        Event: 1073758961, Message: Starting up database 'DBName1'.
        Event: 1073758961, Message: Starting up database 'DBName2'.
        Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required.
        Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required.

    This is repeated every 1-2 seconds until SQL Server is restarted or the offending databases are detached. I initially thought it was a problem with the databases, so I took a backup and restored them to a SQL Express instance; all of the data is intact, and CHECKDB runs without problems. The two databases that were causing a problem last week were not being used, so I took full backups of them and detached the databases, which resolved the problem. However, at 0100 GMT this morning two other, totally unrelated databases started showing the same problems.

    There is nothing in the event log to suggest that something happened to the server, such as a restart; there are no messages about processes crashing or issues being detected with the storage controller. Speaking to the owner of the company, this computer has suffered from "gremlins" in the past; however, advice was taken, the motherboard was replaced and the computer rebuilt. Memory and processor are the same.

    Stats:

        O/S: Windows 2008 Standard Build 6002
        CPU: 2x Pentium Dual-Core E5200 @ 2.5GHz
        RAM: 2GB
        SQL: 2008 Standard 10.0.2531

    Edit: someone posted then deleted a comment about AutoClose; it was turned on on the databases affected. It seems that best practice is to disable it, so I have done that with the following:

        EXECUTE sp_MSforeachdb 'IF (''?'' NOT IN (''master'', ''tempdb'', ''msdb'', ''model''))
            EXECUTE (''ALTER DATABASE [?] SET AUTO_CLOSE OFF WITH NO_WAIT'')'

    I won't know if the problem recurs for some time, so I am still open to further answers.
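
    For anyone hitting the same symptom, a quick way to see which databases still have AutoClose enabled is to query sys.databases (available from SQL Server 2005 onwards); a minimal sketch:

        -- list databases that will keep cycling closed/open under AutoClose
        SELECT name, is_auto_close_on
        FROM sys.databases
        WHERE is_auto_close_on = 1;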

  • SSL_CLIENT_CERT_CHAIN not being passed to backend server

    - by nidkil
    I have client certificates configured and working in Apache. I want to pass the PEM-encoded X.509 certificates of the client to the backend server. I tried SSLOptions +ExportCertData. This does nothing at all, while the documentation states it should add SSL_SERVER_CERT, SSL_CLIENT_CERT and SSL_CLIENT_CERT_CHAINn (with n = 0, 1, 2, ...) as variables. Any ideas why this option is not working?

    I then tried setting the headers myself using RequestHeader. This works fine for all variables except SSL_CLIENT_CERT_CHAINn, which shows null in the header. Any ideas why the certificate chain is not being filled?

    This is my first Apache configuration:

        <VirtualHost 192.168.56.100:443>
            ServerName www.test.org
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
            SSLEngine on
            SSLProxyEngine on
            SSLCertificateFile /etc/apache2/ssl/certs/www.test.org.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.test.org.key
            SSLCACertificateFile /etc/apache2/ssl/ca/ca.crt
            <Proxy *>
                AddDefaultCharset Off
                Order deny,allow
                Allow from all
            </Proxy>
            <Location /carbon>
                ProxyPass http://www.test.org:9763/carbon
                ProxyPassReverse http://www.test.org:9763/carbon
            </Location>
            <Location /services/GbTestProxy>
                SSLVerifyClient require
                SSLVerifyDepth 5
                SSLOptions +ExportCertData
                ProxyPass http://www.test.org:8888/services/GbTestProxy
                ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
            </Location>
        </VirtualHost>

    The second configuration is identical, except that the /services/GbTestProxy location sets the headers explicitly instead of using SSLOptions +ExportCertData:

        <Location /services/GbTestProxy>
            SSLVerifyClient require
            SSLVerifyDepth 5
            RequestHeader set SSL_CLIENT_S_DN "%{SSL_CLIENT_S_DN}s"
            RequestHeader set SSL_CLIENT_I_DN "%{SSL_CLIENT_I_DN}s"
            RequestHeader set SSL_CLIENT_S_DN_CN "%{SSL_SERVER_S_DN_CN}s"
            RequestHeader set SSL_SERVER_S_DN_OU "%{SSL_SERVER_S_DN_OU}s"
            RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s"
            RequestHeader set SSL_CLIENT_CERT_CHAIN0 "%{SSL_CLIENT_CERT_CHAIN0}s"
            RequestHeader set SSL_CLIENT_CERT_CHAIN1 "%{SSL_CLIENT_CERT_CHAIN1}s"
            RequestHeader set SSL_CLIENT_VERIFY "%{SSL_CLIENT_VERIFY}s"
            ProxyPass http://www.test.org:8888/services/GbTestProxy
            ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
        </Location>

    Hope someone can help. Regards, nidkil
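
    A variation that may be worth trying (a sketch, not a verified fix): combine +ExportCertData with +StdEnvVars in the same context, and read the exported environment variables with the %{...}e format instead of the direct mod_ssl %{...}s lookup. Note also that with a per-location SSLVerifyClient the connection is renegotiated, and the chain sent during renegotiation may simply not be retained by mod_ssl, which would explain the null value:

        <Location /services/GbTestProxy>
            SSLVerifyClient require
            SSLVerifyDepth 5
            # export the certificate data into the request environment first...
            SSLOptions +StdEnvVars +ExportCertData
            # ...then copy the environment variables into request headers
            RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}e"
            RequestHeader set SSL_CLIENT_CERT_CHAIN0 "%{SSL_CLIENT_CERT_CHAIN0}e"
            ProxyPass http://www.test.org:8888/services/GbTestProxy
            ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
        </Location>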

  • OS X Snow Leopard 10.6 Refuses to Load Websites the first time intermittently

    - by Brandon
    Many times when I am browsing the web, Snow Leopard will sit and load a site for 20 seconds or more, until it times out and says it cannot be displayed. If I refresh, it loads RIGHT away, every time. The issue is intermittent but happens from once every couple of days to a few times a day. So the long and short of it is this:

        - Aluminum MacBook (Non-Pro) 2.4GHz Core2Duo, 4GB DDR3
        - I am using 10.6.6 but I have had this issue since 10.6.0
        - It happens in Firefox, Chrome, and Safari
        - I have flushed my DNS (using the command 'blablabla flush')
        - I am using custom DNS servers, which I hoped would fix it but had no effect (I recently started using these to escape the awful Cox redirect page on timeouts)
        - I am running Apache currently but haven't been for most of the time I've had this issue
        - I've reformatted multiple times, always experiencing the issue
        - I am on Cox cable internet, with a Motorola Surfboard and a Belkin F6D4230-4 v1 (Pre?) N wireless router. I've put the router in G only, N only, and G+N, to no effect
        - It seems to be domain dependent, as I can sometimes load the Google cache right away, and sometimes other sites will load but Google will refuse
        - My PowerBook G4 with Leopard, other Windows XP laptops, and my wired Win7 desktop do not suffer from the issue

    I'm almost positive the issue has happened on other networks, but I can't recall a specific instance (I have a terrible memory). The problem is intermittent and fixable enough (I just have to wait until it times out and hit refresh one time) but incredibly annoying, since I'm constantly reading documentation from a large variety of sites.

    EDIT: To clarify, this happens with ALL sites, not only specific sites. I haven't been able to detect any pattern to the failures, but one day Google.com will refuse to load while reddit.com will, and the next day vice versa. Keep in mind that waiting for a timeout and hitting refresh loads the page right away, every time. If I don't wait for the timeout, opening more links, hitting refresh, and clicking the link a billion times have no effect. It seems to be domain neutral, affecting sites seemingly at random. It doesn't seem to have anything to do with connection inactivity either, because I will be SSHed into different servers, uploading files, browsing, downloading, etc., and it will just quit loading jQuery.com (for example) until I sit and wait for a timeout. /EDIT

    This is my last resort. Please, someone, tell me what is happening. Thank you.
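
    For reference, the usual DNS cache flush command on 10.6 (assuming that is what the command above refers to) is:

        dscacheutil -flushcache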

  • Moving windows-2003 hdd into virtual machine - with HDD shrink

    - by jm666
    Before you vote to close as an exact duplicate, please read the full question. I have already read:

        - Can I make a virtual machine out of a Windows XP physical machine?
        - Disk2vhd, convert my PC to Hyper-V Virtual Machine
        - Creating a Windows Virtual PC image from a Physical machine
        - physical machine to virtual machine and place into VirtualBox
        - BSOD trying to migrate Windows XP from a physical to a virtual machine
        - http://en.wikipedia.org/wiki/Physical-to-Virtual

    and all other similar questions here, and several external sites too. Unfortunately, I haven't found an answer to my problem.

    I have a physical machine with a 500GB HDD, on which is installed an old Windows 2003 server with one server application. The application, like the Windows installation itself, is too old: there is no support for it today, I don't have the installation media, and so on. On the HDD only approx. 100MB is used (maybe less once I delete all unnecessary files). I want to convert the machine into VirtualBox, and VirtualBox should run on the same machine. Is it possible to do this with the following steps?

        1. Attach another HDD (via USB or internally).
        2. Boot a live Linux from CD and mount the HDDs.
        3. Run "something" on Linux for the conversion (the above Wikipedia article has many pointers to software) and store the image on the USB HDD. Unfortunately, many of the tools use some speciality that exists only in Windows XP and above, with no information about Windows 2003 server - so what is a working solution for Windows 2003?
        4. Try to boot the virtual image with VirtualBox.
        5. When it runs OK, remove the old installation, install Linux on the old 500GB HDD, copy the image over, and run it.

    The above should work (I hope), but here are the problems: I currently have only a 320GB external USB HDD (of course, I can remove it from its box and attach it as an internal HDD too). So for the conversion I am looking for an on-the-fly HDD shrink: while moving the physical 500GB HDD, I need to shrink it onto a smaller HDD - as I said above, only 100MB is used. Does something (free) exist for this? Or is the only way to buy a larger 1TB HDD and use it for the conversion?

    Other questions: does anybody have real experience with a Windows 2003 conversion into VirtualBox? I am looking for an answer from someone who has really done it and can point out real pitfalls (googling I can do myself). Is there a better approach for this?
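
    For the conversion-with-shrink step, one approach that may work (a sketch; device names, paths and the byte count are assumptions, and a Windows 2003 P2V usually also needs the IDE/storage driver situation fixed up before the guest will boot): zero the free space from inside Windows first, then stream the whole disk into a dynamically allocated VDI, so the zeroed, unused blocks are never allocated and the output stays near the used size rather than 500GB:

        # inside Windows beforehand: zero the free space, e.g. with Sysinternals "sdelete -z c:"

        # from the live Linux CD: pipe the physical disk into a dynamic VDI;
        # the trailing number is the disk size in bytes (an assumption - check yours)
        dd if=/dev/sda bs=1M | VBoxManage convertfromraw stdin /mnt/usb/win2003.vdi 500107862016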

  • PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14 + SuExec => Connection reset by peer: mod_fcgid: error reading data from FastCGI server

    - by Zigzag
    Hi, I'm trying to use PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14, but I always get the error "Connection reset by peer: mod_fcgid: error reading data from FastCGI server", and Apache returns a 500 error each time I try to execute a PHP page.

    I compiled Apache with these options:

        ./configure --with-mpm=worker --enable-userdir=shared --enable-actions=shared \
            --enable-alias=shared --enable-auth=shared --enable-so --enable-deflate \
            --enable-cache=shared --enable-disk-cache=shared --enable-info=shared \
            --enable-rewrite=shared --enable-suexec=shared --with-suexec-caller=www-data \
            --with-suexec-userdir=site --with-suexec-logfile=/usr/local/apache2/logs/suexec.log \
            --with-suexec-docroot=/home

    Then PHP:

        ./configure --with-config-file-path=/usr/local/apache2/php \
            --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-zlib \
            --enable-exif --with-gd --enable-cgi

    Then mod_fcgid:

        APXS=/usr/local/apache2/bin/apxs ./configure.apxs

    The vhost is:

        <Directory /home/website_panel/site/>
            FCGIWrapper /home/website_panel/cgi/php .php
            ...
            ErrorLog /home/website_panel/logs/error.log
        </Directory>

    cat /home/website_panel/logs/error.log:

        [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:42 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:42 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:43 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:43 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php

    The suexec log:

        root:/usr/local/apache2# cat /var/log/apache2/suexec.log
        [2010-03-07 22:11:05]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:11:15]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:11:23]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:42]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:43]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php

    The Apache error log:

        root:/usr/local/apache2# cat logs/error_log
        [Sun Mar 07 22:18:47 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/local/apache2/bin/suexec)
        [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Memory Allocated 0 bytes (each conf takes 32 bytes)
        [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Version 0.7 - Initialized [0 Confs]
        [Sun Mar 07 22:18:47 2010] [notice] Apache/2.2.14 (Unix) mod_fcgid/2.3.5 configured -- resuming normal operations

    And the wrapper binary:

        root:/usr/local/apache2# /home/website_panel/cgi/php -v
        PHP 5.3.2 (cli) (built: Mar 7 2010 16:01:49)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    If someone has got an idea, I want to hear it ^^ Thanks!
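
    One detail that stands out (worth checking, though not a guaranteed fix): the wrapper binary reports itself as "PHP 5.3.2 (cli)", while mod_fcgid needs the CGI/FastCGI binary (php-cgi, the one produced by --enable-cgi) rather than the CLI one. A typical wrapper script, with the php-cgi path as an assumption, looks like:

        #!/bin/sh
        # recycle each php-cgi process after 1000 requests
        PHP_FCGI_MAX_REQUESTS=1000
        export PHP_FCGI_MAX_REQUESTS
        # hand control to the FastCGI-capable binary, not the CLI one
        exec /usr/local/bin/php-cgi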

  • OpenBSD pf 'match in all scrub (no-df)' causes HTTPS to be unreachable on mobile network

    - by Frank ter V.
    First of all, excuse my poor English. For several years I have been experiencing problems with the 'match in all scrub (no-df)' rule in pf, and I can't find out what's happening here. I'll try to be clear and simple; the pf.conf has been extremely shortened for this posting:

        set skip on lo0
        match in all scrub (no-df)
        block all
        block in quick from urpf-failed
        pass in on em0 proto tcp from any to 213.125.xxx.xxx port 80 synproxy state
        pass in on em0 proto tcp from any to 213.125.xxx.xxx port 443 synproxy state
        pass out on em0 from 213.125.xxx.xxx to any modulate state

    HTTP and HTTPS were working fine - until the moment a customer in France (Wanadoo DSL) couldn't view HTTPS pages! I blamed his provider and did no investigation of that problem. But then I bought an Android Samsung Galaxy SII (Vodafone) to monitor my servers. Hours after I walked out of the telephone store: no HTTPS connections to my server! I thought my servers were down and drove back to the office very fast. But they were up.

    I discovered that disabling the rule 'match in all scrub (no-df)' solves the problem: the Android phone (Vodafone NL) and Wanadoo DSL FR are now OK over HTTPS. But now I don't have any scrubbing any more, and that is not what I want.

    Does anyone here understand what is going on? I don't. Enabling scrubbing causes HTTPS pages not to load on SOME ISPs, but not all. In systat, I strangely DO see a state created and packets received from those ISPs... Still confused. I'm using OpenBSD 5.1/amd64 and OpenBSD 5.0/i386. I have two ISPs at my office (one DSL and one cable), and the problem affects both, so this can be reproduced quite easily. I hope someone has experience with this problem. Greetings, Frank
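
    I can't diagnose the root cause from here, but the PF User's Guide warns that some hosts generate packets with the DF bit set and a zero IP identification field, and that clearing DF on those can cause problems unless it is paired with random-id; an MSS clamp can also help on mobile/DSL paths with a reduced MTU. A sketch to test with (the 1440 value is an assumption, not a known-good number):

        match in all scrub (no-df random-id max-mss 1440)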

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what I see for various reasons relating to my potential use case.

    We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. They are also automatically built, and therefore generic, with standard tools, not bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, because we don't know what might happen to them when we are not looking. Our current solution is a bit naive: it simply restricts inbound rsync-over-ssh to known MAC-address directories and a known public key. There are plenty of holes to pick in this, I know.

    Let's say I write or use a script like s3cmd/s3sync to push up the files:

        - Would I need to manage hundreds of access keys and have each server customized to include its own (doable, but key management becomes nightmarish)?
        - Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client running the script? (I could deal with a flood of data if someone got into a system.)
        - Having a bucket per remote machine does not seem feasible due to bucket limits?
        - I don't think I want to use a single common key: if one machine is breached, a malicious hacker could get access to the filestore key and start deleting data for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong... I've written more than I should here; perhaps it can be summarised thus: in a perfect world, I just want one of our techs to install a new remote server into a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipedream or feasible? TIA, Aitch
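
    One pattern that may be worth evaluating (a sketch under the assumption that AWS IAM is available to the account, with the bucket and prefix names invented for illustration): a single bucket, one IAM identity per remote box, and a policy that allows only PutObject under that box's own prefix - so a stolen credential can upload, but cannot read, list or delete anything:

        {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::sensor-data-bucket/site-0001/*"
          }]
        }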

  • Issue with VMware vSphere and NFS: recurring APD state

    - by Bastian N.
    I am experiencing issues with VMware vSphere 5.1 and NFS storage on two different setups, which result in an "All Paths Down" state for the NFS shares. This first happened once or twice a day, but lately it occurs much more frequently, especially when Acronis Backup jobs are running.

        Setup 1 (Production): 2 ESXi 5.1 hosts (Essentials Plus) + OpenFiler with NFS as storage
        Setup 2 (Lab): 1 ESXi 5.1 host + Ubuntu 12.04 LTS with NFS as storage

    Here is an example from vmkernel.log:

        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 248: APD Timer started for ident [987c2dd0-02658e1e]
        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 395: Device or filesystem with identifier [987c2dd0-02658e1e] has entered the All Paths Down state.
        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 846: APD Start for ident [987c2dd0-02658e1e]!
        2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4cf28 3
        2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4d0e8 3
        2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 277: APD Timer killed for ident [987c2dd0-02658e1e]
        2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 402: Device or filesystem with identifier [987c2dd0-02658e1e] has exited the All Paths Down state.
        2013-05-28T08:07:41.281Z cpu1:2049)StorageApdHandler: 902: APD Exit for ident [987c2dd0-02658e1e]!
        2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4d0e8 again
        2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4cf28 again

    As long as the issue occurred once or twice a day it really wasn't a problem, but now it has an impact on the VMs. The VMs get slow or even hang, resulting in a reset through vCenter in the production environment.

    I searched the web extensively and asked in forums, but so far nobody has been able to help me. Based on blog posts and VMware KB articles, I tried the following NFS settings:

        Net.TcpipHeapSize = 32
        Net.TcpipHeapMax = 128
        NFS.HeartbeatFrequency = 12
        NFS.HeartbeatMaxFailures = 10
        NFS.HeartbeatTimeout = 5
        NFS.MaxQueueDepth = 64

    Instead of NFS.MaxQueueDepth = 64 I have also tried other values, such as NFS.MaxQueueDepth = 32 or even NFS.MaxQueueDepth = 1, unfortunately without any luck. It would be great if someone could help me with this issue; it is really annoying. Thanks in advance for all the help.

    [UPDATE] As I explained in the comment below, here is the network setup: on the production setup the NFS traffic is bound to a separate VLAN with ID 20. I am using an HP 1810 24-port switch. The OpenFiler system is connected to the VLAN with 4 Intel GbE NICs with dynamic LACP. The ESXi hosts each have 4 Intel GbE NICs using 2 static LACP trunks containing 2 NICs each; one pair is connected to the regular LAN and the other one to VLAN 20. (Screenshots of the vSwitch, switch configuration and port configuration accompanied the original post.) On the lab setup it is a single Intel NIC on each side without a VLAN, but with a different IP subnet.
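
    For anyone trying the same knobs, these advanced options can also be set from the ESXi shell (a sketch using the values above; some NFS options only take effect after a reboot):

        esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64
        esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 12
        esxcli system settings advanced set -o /Net/TcpipHeapMax -i 128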

  • Linux buffer cache effect on IO writes?

    - by Patrick LeBoutillier
    I'm copying large files (3 x 30G) between 2 filesystems on a Linux server (kernel 2.6.37, 16 cores, 32G RAM) and I'm getting poor performance. I suspect that the usage of the buffer cache is killing the I/O performance. To try and narrow down the problem I used fio directly on the SAS disk to monitor the performance. Here is the output of 2 fio runs (the first with direct=1, the second one direct=0).

    Config:

        [test]
        rw=write
        blocksize=32k
        size=20G
        filename=/dev/sda
        # direct=1

    Run 1:

        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/205M /s] [0/6K iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4667
          write: io=20,480MB, bw=199MB/s, iops=6,381, runt=102698msec
            clat (usec): min=104, max=13,388, avg=152.06, stdev=72.43
            bw (KB/s) : min=192448, max=213824, per=100.01%, avg=204232.82, stdev=4084.67
          cpu : usr=3.37%, sys=16.55%, ctx=655410, majf=0, minf=29
          IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
             submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             issued r/w: total=0/655360, short=0/0
             lat (usec): 250=99.50%, 500=0.45%, 750=0.01%, 1000=0.01%
             lat (msec): 2=0.01%, 4=0.02%, 10=0.01%, 20=0.01%

        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=199MB/s, minb=204MB/s, maxb=204MB/s, mint=102698msec, maxt=102698msec

        Disk stats (read/write):
          sda: ios=0/655238, merge=0/0, ticks=0/79552, in_queue=78640, util=76.55%

    Run 2:

        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/0K /s] [0/0 iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4733
          write: io=20,480MB, bw=91,265KB/s, iops=2,852, runt=229786msec
            clat (usec): min=16, max=127K, avg=349.53, stdev=4694.98
            bw (KB/s) : min=56013, max=1390016, per=101.47%, avg=92607.31, stdev=167453.17
          cpu : usr=0.41%, sys=6.93%, ctx=21128, majf=0, minf=33
          IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
             submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             issued r/w: total=0/655360, short=0/0
             lat (usec): 20=5.53%, 50=93.89%, 100=0.02%, 250=0.01%, 500=0.01%
             lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.12%
             lat (msec): 100=0.38%, 250=0.04%

        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=91,265KB/s, minb=93,455KB/s, maxb=93,455KB/s, mint=229786msec, maxt=229786msec

        Disk stats (read/write):
          sda: ios=8/79811, merge=7/7721388, ticks=9/32418456, in_queue=32471983, util=98.98%

    I'm not knowledgeable enough with fio to interpret the results, but I don't expect the overall performance using the buffer cache to be 50% less than with O_DIRECT. Can someone help me interpret the fio output? Are there any kernel tunings that could fix/minimize the problem? Thanks a lot!
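
    Since the question asks about kernel tunings: the usual knobs for limiting how much dirty data the page cache accumulates before writeback kicks in are the vm.dirty_* sysctls. A sketch (the values are illustrative starting points, not tuned recommendations):

        # start background writeback earlier (default is often 10)
        sysctl -w vm.dirty_background_ratio=5
        # block writers sooner so dirty pages cannot pile up (default is often 20)
        sysctl -w vm.dirty_ratio=10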

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this; I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and on to port 7000 on x.x.x.218. Here is some ASCII art:

        |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>------|Squid on x.x.x.218|-----|www|
                 3128                     6999                7000                                            80

    When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth), because this is a satellite link. Here is the ssh tunnel I've been using:

        ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N

    The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel. Am I going about this the wrong way? Should I create the ssh tunnel on x.x.x.218 instead? I use iptables to stop Squid on x.x.x.224 from reading port 80, and to feed from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated; many thanks in advance.

    Regarding Eduardo Ivanec's question, here is a tcpdump -i any port 7000 -nn dump from x.x.x.218:

        14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0
        14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0

    Update 2: When I run the second command, I get the following error in my browser:

        ERROR: The requested URL could not be retrieved

        The following error was encountered while trying to retrieve the URL: http://109.123.109.205/index.php

        Zero Sized Reply

        Squid did not receive any data for this request.

        Your cache administrator is webmaster.
        Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9)

    (remote-site is 172.16.1.224.) When I do tcpdump -i any port 7000 -nn, I get the following:

        root@remote-site:~# tcpdump -i any port 7000 -nn
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        (the same line repeats for each request)
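
    One way to push Squid's traffic into the tunnel (a sketch built on assumptions about which end runs the tunnel: here the squid on x.x.x.224 treats the far end as a parent proxy, reachable through a local forward such as ssh ... -L 7000:localhost:<far squid port> run on x.x.x.224) is a cache_peer rule:

        # squid.conf on x.x.x.224: send every request to the local tunnel endpoint
        cache_peer 127.0.0.1 parent 7000 0 no-query default
        never_direct allow all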

  • Installing ImageMagick on Mac OSX 10.6

    - by Russell C.
    I just got a new Mac and am trying to set up a local Perl development environment. I'm using MAMP, but I also need the ImageMagick Perl module installed in order to do some of the photo processing our scripts require. I tried installing ImageMagick manually but ran into some issues, and after reading online, a lot of people reported having issues going this route. The general consensus was to install it using MacPorts instead, so I went ahead and installed MacPorts. Unfortunately, MacPorts can't seem to install it successfully either. Here is the command I'm using:

        sudo port install p5-perlmagick

    And here are the errors reported during the install:

        ---> Computing dependencies for p5-perlmagick
        ---> Building p5-perlmagick
        Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-perlmagick/work/PerlMagick-6.32" && /usr/bin/make -j2 all " returned error 2
        Command output:
        Magick.xs:10918: error: 'struct Methods' has no member named 'exception'
        Magick.xs:10918: error: request for member 'severity' in something not a structure or union
        Magick.xs:10918: error: 'ErrorException' undeclared (first use in this function)
        Magick.xs:10919: error: 'struct Methods' has no member named 'exception'
        Magick.xs:10920: warning: implicit declaration of function 'GetImageException'
        Magick.xs:10922: error: 'struct PackageInfo' has no member named 'image_info'
        Magick.xs:10922: error: 'struct Methods' has no member named 'adjoin'
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: 'UndefinedException' undeclared (first use in this function)
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'reason' in something not a structure or union
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'reason' in something not a structure or union
        Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: warning: passing argument 2 of 'Perl_sv_catpv' from incompatible pointer type
        Magick.xs:10929: warning: unused variable 'message'
        Magick.xs:10856: warning: unused variable 'filename'
        Magick.c:10784: warning: unused variable 'ref'
        Magick.c:10777: warning: unused variable 'ix'
        Magick.xs: In function 'boot_Image__Magick':
        Magick.xs:2122: warning: implicit declaration of function 'InitializeMagick'
        Magick.xs:2123: warning: implicit declaration of function 'SetWarningHandler'
        Magick.xs:2124: warning: implicit declaration of function 'SetErrorHandler'
        make: *** [Magick.o] Error 1
        Error: Status 1 encountered during processing.
        Before reporting a bug, first run the command again with the -d flag to get complete output.

    I have no idea what the problem might be or how to go about successfully installing ImageMagick. I'd appreciate any help and advice that someone out there who has done this successfully might be able to provide. Thanks in advance!
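
    The errors look like PerlMagick being compiled against mismatched ImageMagick headers, so before digging into Magick.xs itself it may be worth ruling out a stale ports tree or a half-finished build; the usual MacPorts hygiene steps are:

        sudo port selfupdate           # refresh the ports tree
        sudo port clean p5-perlmagick  # discard the failed build directory
        sudo port upgrade outdated     # bring ImageMagick itself up to date
        sudo port install p5-perlmagick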

  • Port forwarding problem

    - by Steve
    I have a modem connecting to the ADSL2 network and a router connecting to the modem. The rest of the machines all connect to the router. The modem's IP is 192.168.1.1 and the router's IP is 192.168.0.1. From the modem configuration, I can see that the modem thinks the router's IP is 192.168.1.2. I can visit the router by using either 192.168.0.1 or 192.168.1.2.

    Now I forward a port from the router to a private machine. It works: I can test it by typing 192.168.1.2, and I am redirected to the private machine. But if I use 192.168.0.1, it is still the router's configuration page. I also do a port forward on my modem. Since the modem sees only the router, I can only forward the port to the router, and I am thinking that by doing this, I can reach the private machine after two port forwards: once on the modem and once on the router.

    I also have a static public IP. I want to achieve the goal that when someone types the public IP, he is redirected to the private machine. But when I use an online port-forwarding tester, the result always says that the port is closed on the public IP.

    My questions:

        1. Why does my router have two IPs? Why does the port forwarding work when visiting one IP but not the other?
        2. I think port forwarding only applies to visits from outside, rather than from both outside and inside. Otherwise, if I set a port forward on port 80 on my router/modem, I would never be able to see its configuration page again - everything would be forwarded. Am I right?
        3. How can I achieve my goal described above? If I can, I will have a dedicated server of my own, and users can visit it from the public IP.

    Can anyone correct any mistakes I have made? I am using a Netconn modem and a D-Link DIR-300 router. Thank you very much for any help.

    Edit: Suppose I have set the whole thing up correctly. Now I want to test my website using the public IP, but the port forwarding doesn't work. Is it that, because I am inside the local network, the port forward isn't used? If so, how can I test it? I've asked friends (outside my local network) to try, and they can see the website. What should I do so that I can test from the inside? Thank you very much.
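
    For what it's worth, the chained ("double NAT") forwarding described above would look roughly like this (the private machine's address and the port are assumptions for illustration); note that testing it from inside the LAN only works if the router supports NAT loopback (hairpin NAT), which many consumer devices do not:

        modem  : <public IP>:80  ->  192.168.1.2:80    (the router's WAN-side address)
        router : <WAN>:80        ->  192.168.0.50:80   (the private machine)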

  • What are some fast methods for navigating to frequently used folders in Windows 7?

    - by fostandy
    (This is a follow-up question from my previous question.) In Windows XP I used to be able to quickly navigate to frequently used folders by making use of the 'Favorites' menu item and its hotkey behaviour. In certain conditions it could be set up so that getting to a particular folder was as easy as Alt-A, X (and without a file explorer window open it was as fast as Win-E, Alt-A, X). I am struggling to get anywhere near this speed in Windows 7 and would like to solicit advice from others regarding fast folder navigation, to see if I am missing any methods. My current way to navigate is basically:

        1. Move hand to mouse.
        2. Move cursor to the navigation pane/pain.
        3. Scroll all the way to the top (because normally the panel is focused on whatever deep directory structure I am already in).
        4. Sift through my 50+ favorites to get the one I want, or click a link to a folder that contains further links in some sort of 'pseudo-tree' functionality.
        5. Select it.

    This is slower than my previous method by upwards of an order of magnitude. There are a couple of things I've contemplated:

        - Add expandable folders, not just direct links, to the favorites menu.
        - Add expandable folders, not just direct links, to the start menu.
        - Add links to my favorite folders in a submenu of the start menu so that they come up when I search for them. They do, but this is still rather cumbersome.
        - Start using 7stacks (http://www.alastria.com/index.php?p=software-7s). This is about the closest I've gotten to some sort of compact, customizable, easy-to-access, tree-based navigation structure.

    How do you power users quickly navigate to your favorite folders? Are there keyboard shortcuts I am missing? Can someone recommend other apps, add-ons or extensions that can achieve this sort of functionality?

    The current solution (thanks to the answers below) is a combination of AutoHotkey and 7stacks: AutoHotkey to launch 7stacks, and 7stacks with the 'menu' stack type for fast, key-enabled navigation to folders organised in a tree structure. This solves about 90% of the issue. The only remaining issues (note that these are really minor; I am splitting hairs more than anything here):

        - Can't use this for existing folder navigation (i.e. when I already have an explorer window open and want to go to another directory).
        - A bit more cumbersome to add/remove entries compared to XP favorites.
        - A little slower than XP favorites.

    Whatever. I'm happy. Thanks guys. I think the answer is a split between John T and Kelbizzle - I've elected to give the answer to John T and +1 to Kelbizzle, as I had already mentioned 7stacks.
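
    For anyone who wants the AutoHotkey half of this without 7stacks, direct folder hotkeys are a one-liner each; a minimal sketch (the paths and key choices are arbitrary examples):

        ; Win+P opens a projects folder in a new Explorer window
        #p::Run, explorer.exe "C:\Projects"

        ; Win+J opens a documents folder
        #j::Run, explorer.exe "C:\Users\me\Documents"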

  • Optical SPDIF audio from motherboard not working with receiver

    - by simon b
    Hi, I hope someone can help. I can't get my SPDIF optical out working through my receiver, and all the responses I can see on the web assume you have a sound card, while I settled for the (seemingly high-end) sound on my motherboard (Asus P7P55D-E PRO), which appears to limit some of my options. My setup is a "new out of the box" one and is:

        - Windows 7 PC (using PowerDVD 10 for DVDs/Blu-rays and Windows Media Player for music)
        - Asus P7P55D-E PRO motherboard, which has 8-channel audio TRS jacks and SPDIF optical and coaxial out
        - An old Yamaha receiver, whose only multi-channel input options are optical in and 6-channel RCA in; however, it can still handle DTS and DD
        - Boston Acoustic Soundware XS 5.1 speakers

    I currently have the SPDIF optical out from the motherboard connected to the in on my receiver, have SPDIF enabled in the sound menu, and the light is glowing red down the fibre. But I'm getting no sound at all. What I want is to be able to play DVDs/Blu-rays in 5.1, but also to play music in multi-channel mode (even though I know this will be "fake" multichannel; it's more about where I sit in the room and my requirement to use the sub, because the Boston is a satellite/sub set-up). My questions are:

        - Will optical work at all for multi-channel? The latest posts I can see suggest it does, but some people say optical only outputs stereo. Whom to believe?
        - Even if it does work, I've read that I have to disable AC-3 decoding, or make various other changes, which don't seem to be possible without the menu options that a sound card brings. Is the motherboard-only option just too inflexible?
        - Although my SPDIF device is enabled in the sound menu, it insists under "Jack information" that it is a "rear panel RCA jack", when of course it is not (both TOSLINK and RCA jacks do exist). Has the PC just forgotten that it has an optical out?
        - I think I could fairly easily connect the 8-channel 3.5mm TRS jacks to my receiver's 6-channel RCA input jacks by way of TRS/RCA cables, but would that stop me from being able to play music from Media Player in multi-channel mode? I'm not sure the motherboard can cope.
        - Or do I need to bite the bullet and buy a sound card? And if so, how can I be sure the one I get doesn't have the same problem?

    Any thoughts gratefully received. Cheers, simon

  • My server freezes within a few hours of logging out. Staying logged in keeps the server running

    - by HappyEngineer
    I have an Ubuntu GoDaddy server I use to host mail and webapps. It started having problems a couple of months ago: it would lock up and stop responding to anything. I couldn't ssh into it, so I'd have GoDaddy power-cycle the server. I have never seen anything that looked suspicious in the logs under /var/log (although I'm no expert at reading them). An fsck turned up no problems. GoDaddy replaced the RAM, but found no hardware problems.

    I started logging the output from top to a log file and found that even that stops running when the server freezes. Now, here is the crazy part: it got so bad that it would go down every few hours, but then it stopped going down. I eventually realized I had left an ssh terminal logged into the machine running top. This seemed an unlikely reason, but after the server stayed up with no problems for a full week (remember, it had been going down after just a few hours), I disconnected from the ssh session. Lo and behold, within a few hours the server froze again! I had them power-cycle it and then left another ssh session open with top. It has now been going without problems for 8 days. I told others about this and they hardly believe me. I simply can't imagine what is going on, and I don't know what else to try other than to get a new server and reinstall everything. Does anyone have any ideas about what I can look for to determine the cause? Is it possible there's some sort of exploit on the server which only runs if everyone is logged out of the system?

    EDIT: The power-management-gone-haywire theory sounds plausible, so I've modified /boot/grub/menu.lst to boot with acpi=off and apm=off. That appears to have kept kacpid and kacpid_notify out of the process list, so I assume I did it right. I've disconnected all my sessions from the server and will check later tonight to see if it's still up. If it goes down, I'll try the pinging-process idea.

    EDIT: It went down again; it lasted about a day. I've had them reboot, so now I'll try running "nohup ping -i 5 google.com &" and then disconnecting. If it goes down again, I'll come back. Hopefully someone will have some more ideas.

  • Windows XP long login (15 minutes +)

    - by Emily Pinkerton
    I'm having a lot of issues with our Windows XP SP3 machines (about 5, but every week another joins the bandwagon). They take forever (15 minutes or more) to apply the user settings once our employees enter their username and password to log in to our domain. It only happens after a user has rebooted the machine and then goes to log back in; then it hangs forever. Reboot and restart are the key words with this issue. Here is what I have tested:

        - Made sure DNS points to our two servers (Server01 and Server02 are DNS domain controllers; 01 is primary and 02 backup).
        - No major changes have been applied to our network.
        - All profiles are local, so I have deleted local profiles that aren't being used on the machines that run slow.
        - I have tried enabling and disabling "fast logon" under the local machine's group policy. It was not configured originally, and when I tested both settings, the machine hung on "Applying computer settings" for about 15 minutes; when it finally reached the login screen, the domain login itself was very quick. However, this doesn't fix my issue, and, even more frustrating, after setting it back to Not Configured it still takes forever to apply computer settings.
        - I enabled the userenv log. Here is what I see (my experience is limited and I'm not sure how to read it exactly; this isn't the whole log, because it's really long):

        USERENV(2ec.2f0) 10:50:41:843 LoadUserProfile: LoadUserProfileP succeeded
        USERENV(2ec.2f0) 10:50:41:843 LoadUserProfile: Returning success. Final Information follows:
        USERENV(2ec.2f0) 10:50:41:843 lpProfileInfo->UserName =
        USERENV(2ec.2f0) 10:50:41:843 lpProfileInfo->lpProfilePath =
        USERENV(2ec.2f0) 10:50:41:843 lpProfileInfo->dwFlags = 0x0
        USERENV(2ec.2f0) 10:50:41:843 LoadUserProfile: Returning TRUE. hProfile = <0x818>
        USERENV(2ec.2f0) 10:50:41:984 IsSyncForegroundPolicyRefresh: Synchronous, Reason:NonCachedCredentials
        USERENV(2ec.248) 10:50:41:984 IsSyncForegroundPolicyRefresh: Synchronous, Reason:NonCachedCredentials
        USERENV(3c4.3dc) 10:51:26:166 LibMain: Process Name: C:\WINDOWS\system\wbem\wmiprvse.exe
        USERENV(2ec.5cc) 11:05:08:741 ProcessGPOs: network name is 192.168.49.0
        USERENV(4a8.888) 11:05:08:804 GetProfileType: Profile already loaded.
        USERENV(4a8.888) 11:05:08:804 LoadProfileInfo: Failed to query central profile with error 2
        USERENV(4a8.888) 11:05:08:804 GetProfileType: ProfileFlags is 0

    Also, this error is in the file quite a lot:

        USERENV(328.5bc) 11:05:29:733 GetUserDNSDomainName: Failed to impersonate user
        USERENV(328.834) 11:05:29:733 ImpersonateUser: Failed to impersonate user with 5.

    I'm really not sure what else to do with my limited experience, but I'm hoping someone can help me. I feel like I'm dealing with an issue way above my level, and any knowledge I can gain from getting this issue fixed would be amazing.
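
    For anyone else chasing this, the verbose userenv logging shown above is enabled through a registry value (Microsoft's documented profile/Group Policy debugging switch), and the log lands in %windir%\Debug\UserMode\userenv.log; a sketch of setting it from a command prompt:

        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v UserEnvDebugLevel /t REG_DWORD /d 0x00030002 /f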

  • Excessive CPU Utilization for Bind 9.8.1 `named` processes

    - by justinzane
    I just noticed that named is eating vast amounts of CPU time for a very small network with only a few domains. Can someone help me determine what is misconfigured, please? Or how to debug this.

    top:

        top - 14:13:08 up 25 days, 14:16, 1 user, load average: 1.04, 1.04, 1.05
        Tasks: 149 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
        %Cpu(s): 17.3 us, 4.3 sy, 0.0 ni, 78.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
        KiB Mem: 2042776 total, 1347916 used, 694860 free, 249396 buffers
        KiB Swap: 3976080 total, 30552 used, 3945528 free, 574164 cached

          PID USER PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
        17445 bind 20 0  244m 42m  3124 S 99.4 2.2  2345:03 named

    rndc stats:

        +++ Statistics Dump +++ (1352931389)
        ++ Incoming Requests ++
        65869 QUERY
        ++ Incoming Queries ++
        31809 A
        241 NS
        3 CNAME
        27455 SOA
        276 PTR
        123 MX
        462 TXT
        5400 AAAA
        7 A6
        1 DS
        14 DNSKEY
        15 SPF
        55 AXFR
        8 ANY
        ++ Outgoing Queries ++
        [View: internal]
        22206 A
        509 NS
        10 SOA
        25 PTR
        12 MX
        524 TXT
        4851 AAAA
        62 DNSKEY
        19 SPF
        3157 DLV
        [View: external]
        87 A
        2 NS
        80 AAAA
        120 DNSKEY
        7 DLV
        [View: _bind]
        ++ Name Server Statistics ++
        65869 IPv4 requests received
        27670 requests with EDNS(0) received
        112 TCP requests received
        65652 responses sent
        20 truncated responses sent
        27670 responses with EDNS(0) sent
        62920 queries resulted in successful answer
        37117 queries resulted in authoritative answer
        28482 queries resulted in non authoritative answer
        7 queries resulted in referral answer
        591 queries resulted in nxrrset
        53 queries resulted in SERVFAIL
        2081 queries resulted in NXDOMAIN
        14530 queries caused recursion
        162 duplicate queries received
        55 requested transfers completed
        ++ Zone Maintenance Statistics ++
        109536 IPv4 notifies sent
        ++ Resolver Statistics ++
        [Common]
        [View: internal]
        29362 IPv4 queries sent
        2013 IPv6 queries sent
        28531 IPv4 responses received
        4209 NXDOMAIN received
        6 SERVFAIL received
        31 FORMERR received
        32 EDNS(0) query failures
        3359 query retries
        836 query timeouts
        5348 IPv4 NS address fetches
        3271 IPv6 NS address fetches
        83 IPv4 NS address fetch failed
        2779 IPv6 NS address fetch failed
        17421 DNSSEC validation attempted
        12731 DNSSEC validation succeeded
        4690 DNSSEC NX validation succeeded
        21104 queries with RTT 10-100ms
        7418 queries with RTT 100-500ms
        3 queries with RTT 500-800ms
        1 queries with RTT 800-1600ms
        [View: external]
        192 IPv4 queries sent
        104 IPv6 queries sent
        192 IPv4 responses received
        2 NXDOMAIN received
        104 query retries
        44 IPv4 NS address fetches
        44 IPv6 NS address fetches
        1 IPv4 NS address fetch failed
        1 IPv6 NS address fetch failed
        4 DNSSEC validation attempted
        3 DNSSEC validation succeeded
        1 DNSSEC NX validation succeeded
        152 queries with RTT 10-100ms
        40 queries with RTT 100-500ms
        [View: _bind]
        ++ Cache DB RRsets ++
        [View: internal (Cache: internal)]
        2007 A
        652 NS
        131 CNAME
        1 MX
        32 TXT
        421 AAAA
        28 DS
        244 RRSIG
        110 NSEC
        3 DNSKEY
        2 !A
        2 !TXT
        89 !AAAA
        2 !SPF
        14 !DLV
        148 NXDOMAIN
        [View: external (Cache: external)]
        55 A
        12 NS
        34 AAAA
        2 DS
        10 RRSIG
        1 DNSKEY
        [View: _bind (Cache: _bind)]
        ++ Socket I/O Statistics ++
        82958 UDP/IPv4 sockets opened
        2118 UDP/IPv6 sockets opened
        4 TCP/IPv4 sockets opened
        1 TCP/IPv6 sockets opened
        82956 UDP/IPv4 sockets closed
        2117 UDP/IPv6 sockets closed
        58 TCP/IPv4 sockets closed
        15 UDP/IPv4 socket bind failures
        2117 UDP/IPv6 socket connect failures
        29554 UDP/IPv4 connections established
        59 TCP/IPv4 connections accepted
        2117 UDP/IPv6 send errors
        5 UDP/IPv4 recv errors
        ++ Per Zone Query Statistics ++
        --- Statistics Dump --- (1352931389)
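
    Given the oddly high SOA query and notify counts in the dump, it may help to watch what named is actually doing in real time; BIND can toggle query logging (and dump in-flight recursion) at runtime:

        rndc querylog    # toggle query logging on/off (output goes to the configured channel/syslog)
        rndc recursing   # dump currently recursing queries to named.recursing
        rndc status      # check server state, including whether query logging is now on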

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible duplicate: How do you do Load Testing and Capacity Planning for Databases

    I have a database set up with MS Access 2007 front ends and a SQL Server 2005 back end. At the moment, all the queries are saved in the front end, as I've only recently moved to a SQL Server back end. I'm wondering how many of those queries I should save as stored procedures/views on SQL Server.

    About the system: the number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is from Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easily.

    I'm inclined to move the queries to the back end, as they are beginning to take noticeable time, and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to upgrade one server rather than 30 PCs. Or am I being overzealous?

    Why my question isn't a duplicate: it seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try to explain how my question is very different. The crux of my question is something like: "Even though my server is technically slower, is it better to have it doing more of the queries?" There are two ways that people could answer this:

        1. "I agree the server is going to be slower, but the extra benefits of such-and-such (like the less Access the better) mean you should move most queries to the server anyway" (or "no, it doesn't outweigh the benefit, keep them in Access").
        2. "Actually the server will be faster, because of such-and-such."

    I'm hoping that people out there could provide some answers like this, and the question in the dupe link doesn't really provide either. OK, sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine with SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of Access) compared to someone giving some quick general guidance, and again, my question is looking for a lot more than an immediate performance benefit.

  • Is there a way to permanently arrange 2 displays under XP?

    - by rumtscho
    When I am at home, on a business trip, or in a meeting, I use my laptop in the usual way. When I get to work, I put it on the docking station and boot it with the lid closed. The image appears on the two displays connected to the docking station: on the left, an old monitor connected over VGA; on the right, a big widescreen connected over DVI.

    Obviously, the video card seems to think that the DVI is the primary output and the VGA the secondary one, so Windows always displays the widescreen on the left and the old FSC monitor on the right. Thus, when I want to move the mouse pointer from the (physically) left display to the (physically) right display, I have to move it from right to left, which is a usability nightmare.

    Of course, I can just drag one display over the other one in the display properties, and then everything is as it should be. The catch: Windows remembers this only as long as it has the two displays. Every time it runs on the laptop display alone, it forgets the setting.

    Physically switching the monitors isn't an option, for ergonomic reasons. I prefer to run the more important applications on the bigger screen with the better colour space, and the shape of my desk forces me to sit off-center, so the more important applications should be shown on the right display. Just switching the video ports doesn't help either: when I connect the big monitor over VGA, image quality deteriorates visibly.

    So what I do now is: every time I bring the laptop to my desk, I boot it, wait the whole 7 minutes of XP booting, syncing network drives, etc., then fire up the display properties, switch to the last tab, drag the widescreen display to the right, and close. Only then can I start working. Does someone have a better idea?

    The laptop is a Dell Latitude 630 with Windows XP SP3. It has an nVidia graphics card (not an onboard chip).

  • svchost.exe crash on wake up

    - by Serge
    Lately, whenever I wake my laptop from sleep, I get a series of errors (generated by a host process failing). I haven't been able to figure out why this happens, but I know which host process fails, and I was wondering if someone had some insight on why this occurs 99% of the time when my laptop wakes up. Here is the host process error:

        Faulting application svchost.exe_SysMain, version 6.0.6001.18000, time stamp 0x47919291,
        faulting module ntdll.dll, version 6.0.6002.18005, time stamp 0x49e0421d,
        exception code 0xc0000006, fault offset 0x000000000005a02d, process id 0x1738,
        application start time 0x01cae656279b1010.

    And here are some of the services that fail because of that host:

        The Windows Audio Endpoint Builder service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service.
        The Wired AutoConfig service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 0 milliseconds: Restart the service.
        The ReadyBoost service terminated unexpectedly. It has done this 2 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service.
        The Human Interface Device Access service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 120000 milliseconds: Restart the service.
        The Network Connections service terminated unexpectedly. It has done this 2 time(s). The following corrective action will be taken in 100 milliseconds: Restart the service.
        The Program Compatibility Assistant Service service terminated unexpectedly. It has done this 2 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service.
        The Superfetch service terminated unexpectedly. It has done this 2 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service.

    Anyway, you get the point; there are a few more. It got really annoying to wait for those services to restart, so I created a batch file that does it automatically whenever the WLAN stops. I'm using Vista x64 on a Studio XPS 1640.
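
    A batch file like the one described might look something like this (a sketch: the short service names below are the usual ones for these services on Vista, but they are worth double-checking with "sc query" before relying on them):

        @echo off
        rem restart the services that die when svchost.exe_SysMain crashes
        for %%S in (SysMain EMDMgmt AudioEndpointBuilder hidserv Netman PcaSvc dot3svc) do (
            net stop %%S
            net start %%S
        )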

    Read the article

  • Web Site Serving, Cloud-Computing, oh, my

    - by Frank
    I'm planning a software-based service. To give it a bit of context (type of traffic), assume it's similar to Facebook in nature (with a little GitHub thrown in). I've been trying to understand my different hosting options. I've been using a shared host with GoDaddy for years just fine; I currently host a WordPress web site there and haven't had any problems. Quite frankly, they've taken good care of me. However, a shared hosting environment is limited by nature. For example, I can't do anything but host a web site there; I cannot even run a Mercurial server. The last time I attempted to build a web application with the intention of eventually launching it via GoDaddy, I ran into all sorts of trouble because it was on a shared host: assembly issues, and so on. At the time, the cost and time sank my project. (The lack of direct access was also frustrating.) (To be fair to GoDaddy, this was over 3 years ago.) I've been looking at Rackspace or Amazon as a possible cloud solution, but that seems to be just processing power and bandwidth (and an OS); from what I understand, I'd need to get Apache and MySQL working on my own. The way cloud hosting is priced, however, seems appealing. I figure my final option might be to use a virtual private host. I think this would be more flexible than a shared host but less scalable than a cloud-based server. So, I guess my question is: what is an appropriate solution for someone who intends to build a web application service? I figure that I need to establish a hosting environment now rather than later so I can plan to use it effectively. I'd prefer to be fairly economical to start out with. I really can't afford to pay $999 (or even $99) while I build up the site and get the core functionality online, but at the same time, I'd like the selected environment to grow as needed. Thank you.

    Read the article

  • Looking for a fiber optic "switch" or "router" for home use

    - by Shrout1
    The gist of my question:
    - What is a "fiber optic" switch called? That is, a layer 2 Ethernet switch that uses fiber TX and RX connections and sends layer 2 network traffic between the connected fiber strands.
    - Can someone purchase a dedicated fiber switch that does not have copper Ethernet ports?
    - What is the current average price of a device like this? Not necessarily looking for product endorsements, just information; it might not make sense to go this route if it is too cost-prohibitive.
    - What type of fiber connector is used for terminating a fiber strand into a jack on the wall?
    - Can fiber be "patched" using two jacks and a "patch" cable? Is signal loss a concern with the longest runs at 100-200 ft, a patch cable, and media converters? (A back-of-the-envelope loss budget is sketched below.)
    The full story: My parents had unterminated fiber optic cable and terminated Cat5e run throughout their home when it was built in 2004. Ten years later, the Cat5e isn't providing the throughput my father needs for multiple HD streams and fast system backups throughout the house; he can't reach gigabit speeds across the distance of the Cat5e runs. We are both interested in terminating the fiber connections and using them as high-speed "backbones" to copper switches in each room of the house. It would be easy to attain gigabit speeds (or better, eventually) over the fiber. I have searched and searched for a "fiber optic switch" or "fiber optic router" and cannot find the correct term to describe this piece of hardware. We could use fiber media converters at the endpoints of each connection, but it would be nice to have a "patch panel" set up in the network closet in the basement that has fiber connections on it and switches the Ethernet streams between the connections/systems in the house. Each fiber media converter costs between $50 and $100 apiece, so after 10 or so terminated connections it might make sense to find a piece of hardware that does not require media converters; that would depend on the cost of such hardware. Somewhat unrelated: if we are able to route between these fiber strands successfully, what is the physical connector type used in a jack on the wall, i.e. the fiber optic equivalent of a standard RJ45 wall outlet? In the interim, could we "patch" a couple of fiber strands together in the network closet? Would signal loss be a concern with a run length of 100-200 feet, a patch cable, and two media converters? If that works, it could tide us over until the funds are available for more.
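    On the signal-loss question, some rough arithmetic suggests the runs are short enough not to matter. The figures in the sketch below (Python) are typical datasheet numbers, not measurements: multimode OM1 fiber attenuates roughly 3.5 dB/km at 850 nm, and a mated connector pair costs up to about 0.75 dB. Treat both as assumptions to check against the actual cable and optics.

        run_km = 200 * 0.3048 / 1000.0        # longest run: 200 ft expressed in km
        fiber_loss = 3.5 * run_km             # attenuation of the glass itself: ~0.2 dB
        mated_pairs = 4                       # jack at each end, plus both ends of a patch cable
        connector_loss = 0.75 * mated_pairs   # worst-case connector loss: 3.0 dB
        total = fiber_loss + connector_loss
        print("total loss ~ %.1f dB" % total)  # ~3.2 dB

    Typical gigabit multimode optics are specified with a power budget of around 7.5 dB, so even the patched interim arrangement should have a comfortable margin on paper.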

    Read the article

  • WinXP - Having trouble sharing internet with 3G USB modem via ICS

    - by Carlos Nunez
    Hi all! I've been banging my head against a wall with this issue for a few days now and am hoping someone can help out. I recently signed up for T-Mobile's webConnect 3G/4G service to replace the faltering (and slow) DSL connection in my apartment. The goal was to put the SIM in one of my old phones and use its built-in WLAN tethering feature to share Internet out to the rest of my computers. I quickly found out that webConnect-provisioned SIMs do not work with regular smartphones, so I was forced to either buy a 4G-compatible router or tether one of my old laptops to my wireless router and share out that way. I chose the latter, and it's sharpening my inner masochist by the day. Here's the setup: the GSM USB modem (via a hub) plugs into the laptop acting as ICS host; the host's 10/100 Mbps Ethernet NIC is the ICS "guest" adapter, connected to the WAN port of my SMC WGBR14N wireless router in bridged mode (i.e. acting as a wireless access point). Ideally, this makes the laptop the DHCP server and Internet gateway, with the WAP giving everyone wireless coverage. I can browse the Internet on the host laptop fine. However, when clients connect, they get a DHCP-assigned IP from the laptop and are able to use the Internet for a few minutes before it completely dies. After that happens, they can re-associate with the WAP and get IP addresses, but they are unable to use the Internet or resolve DNS names until the laptop and router are restarted. If they do get access, it's very, very slow. After running Wireshark on the host machine, it turns out that this is because every TCP connection keeps getting RST. DNS seems to work. I would normally suspect the firewall here, but when the firewall drops packets, it drops them completely; the fact that TCP connections are being ACKed by the destination rules that out. Of course, the Event Log isn't saying anything about what's going on. I also tried disabling power management on the NIC, since that's caused problems in the past; that didn't help either. I finally disabled receive-side scaling as per a Microsoft KB (one that applied to Windows Server 2003 SP2), to no avail. I'm thinking of trying a different NIC (which will be tough; I don't have a spare Ethernet NIC around for the laptop), but I'm getting the impression that this setup simply doesn't work. Can anyone please advise? I apologise for the length of this post; all contributions are much appreciated! -Carlos.
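    In case it helps anyone reproduce what I tried: the Scalable Networking Pack KB I followed boils down to a few registry values under Tcpip\Parameters. Here's a sketch of the same change in Python (2.x, _winreg). The value names are the ones documented for Server 2003 SP2 and are an assumption on XP SP3, so double-check them before running; run elevated and reboot afterwards.

        import _winreg as winreg

        # Turn off the Scalable Networking Pack offloads that are known to
        # interact badly with some NIC drivers (RSS, TCP Chimney, NetDMA).
        key = winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters",
            0, winreg.KEY_SET_VALUE)
        for name in ("EnableRSS", "EnableTCPChimney", "EnableTCPA"):
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, 0)
        winreg.CloseKey(key)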

    Read the article

  • Nginx / uWsgi / Django site can handle more traffic with rewrite URL

    - by Ludo
    Hi there. I'm running a Django app, using uWSGI behind Nginx. I've been doing some performance tuning and load testing using ApacheBench and have discovered something unexpected which I wonder if someone could explain for me. In my Nginx config I have a rewrite directive which catches lots of different URL permutations and then forwards them to the canonical URL I wish to use; e.g., it traps www.mysite.com/whatever and www.mysite.co.uk/whatever and forwards them all to http://mysite.com/whatever. If I load test against any of the URLs with a redirect (i.e., NOT the canonical URL they are eventually forwarded to), the site can serve 15000 concurrent connections without breaking a sweat. If I load test against the canonical URL, which the above tests would have been forwarded to anyway, it can't handle nearly as much: it drops about 4000 of the 15000 requests and can only handle about 9000 reliably. This is the command line I'm using to test:
    ab -c15000 -n15000 http://www.mysite.com/somepath/
    and
    ab -c15000 -n15000 http://mysite.com/somepath/
    I've tried this several times; it makes no difference which order I run them in. This doesn't make sense to me. I could understand the requests involving a redirect handling fewer concurrent connections, but it's happening the other way round. Can anyone explain? I'd really prefer it if the canonical URL were the one that could handle more traffic. Here is my Nginx config. Thanks loads for any help!
    server {
        server_name www.somesite.com somesite.net www.somesite.net somesite.co.uk www.somesite.co.uk;
        rewrite ^(.*) http://somesite.com$1 permanent;
    }
    server {
        root /home/django/domains/somesite.com/live/somesite/;
        server_name somesite.com somesite-live.myserver.somesite.com;
        access_log /home/django/domains/somesite.com/live/log/nginx.log;
        location / {
            uwsgi_pass unix:////tmp/somesite-live.sock;
            include uwsgi_params;
        }
        location /media {
            try_files $uri $uri/ /index.html;
        }
        location /site_media {
            try_files $uri $uri/ /index.html;
        }
        location = /favicon.ico {
            empty_gif;
        }
    }
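    One thing I plan to rule out first: as far as I know, ab does not follow redirects, so the run against www.mysite.com may only ever be timing Nginx's cheap 301 responses, while the run against mysite.com exercises the full uWSGI/Django path. Here's a quick Python 2 check of what each hostname actually returns (the hostnames are the placeholders from my post; adjust as needed):

        import httplib

        for host in ("www.mysite.com", "mysite.com"):
            conn = httplib.HTTPConnection(host)
            conn.request("GET", "/somepath/")
            resp = conn.getresponse()
            # A 301 plus a Location header means ab was only benchmarking the
            # rewrite; a 200 means the request actually reached Django via uWSGI.
            print host, resp.status, resp.getheader("Location")
            conn.close()

    If the www run only ever sees 301s, that would go a long way toward explaining why it looks so much faster.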

    Read the article
