Search Results

Search found 6431 results on 258 pages for 'cache invalidation'.

  • MySQL daemon keeps terminating unexpectedly

    - by Yehia A.Salam
     The MySQL daemon on my CentOS server keeps crashing. I got the logs from /var/logs/mysqld but I am still not sure how to fix this:

         121114 16:22:56 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
         121114 21:55:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
         121114 21:55:11 [Note] Plugin 'FEDERATED' is disabled.
         121114 21:55:11 InnoDB: The InnoDB memory heap is disabled
         121114 21:55:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins
         121114 21:55:11 InnoDB: Compressed tables use zlib 1.2.3
         121114 21:55:11 InnoDB: Using Linux native AIO
         121114 21:55:11 InnoDB: Initializing buffer pool, size = 128.0M
         121114 21:55:11 InnoDB: Completed initialization of buffer pool
         121114 21:55:11 InnoDB: highest supported file format is Barracuda.
         InnoDB: The log sequence number in ibdata files does not match
         InnoDB: the log sequence number in the ib_logfiles!
         121114 21:55:11 InnoDB: Database was not shut down normally!
         InnoDB: Starting crash recovery.
         InnoDB: Reading tablespace information from the .ibd files...
         InnoDB: Restoring possible half-written data pages from the doublewrite
         InnoDB: buffer...
         121114 21:55:12 InnoDB: Waiting for the background threads to start
         121114 21:55:13 InnoDB: 1.1.6 started; log sequence number 77177262
         121114 21:55:13 [Note] Event Scheduler: Loaded 0 events
         121114 21:55:13 [Note] /usr/libexec/mysqld: ready for connections.
         Version: '5.5.12'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL) by Remi
         121115 00:19:44 mysqld_safe Number of processes running now: 0
         121115 00:19:44 mysqld_safe mysqld restarted
         121115 0:19:47 [Note] Plugin 'FEDERATED' is disabled.
         121115 0:19:47 InnoDB: The InnoDB memory heap is disabled
         121115 0:19:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins
         121115 0:19:47 InnoDB: Compressed tables use zlib 1.2.3
         121115 0:19:47 InnoDB: Using Linux native AIO
         121115 0:19:47 InnoDB: Initializing buffer pool, size = 128.0M
         InnoDB: mmap(137363456 bytes) failed; errno 12
         121115 0:19:47 InnoDB: Completed initialization of buffer pool
         121115 0:19:47 InnoDB: Fatal error: cannot allocate memory for the buffer pool
         121115 0:19:47 [ERROR] Plugin 'InnoDB' init function returned error.
         121115 0:19:47 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
         121115 0:19:47 [ERROR] Unknown/unsupported storage engine: InnoDB
         121115 0:19:47 [ERROR] Aborting

     Edit #1

                      total       used       free     shared    buffers     cached
         Mem:           496        370        126          0         24        110
         -/+ buffers/cache:        234        261
         Swap:         1023          9       1014

     Edit #2: Also, the largest table in my MySQL is 20MB, so the memory used should be pretty moderate.

         SELECT CONCAT(table_schema, '.', table_name),
                CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
                CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
                CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx,
                CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
                ROUND(index_length / data_length, 2) idxfrac
         FROM information_schema.TABLES
         ORDER BY data_length + index_length DESC
         LIMIT 10;
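
     The mmap() failure with errno 12 (ENOMEM) suggests the 128M InnoDB buffer pool no longer fits in the ~496MB of RAM shown in Edit #1 when the box is under pressure. A minimal sketch of one direction to try, assuming the stock /etc/my.cnf and /var/log/mysqld.log paths on CentOS and that a smaller pool is acceptable for a 20MB data set:

         # check what is actually free before restarting
         free -m

         # in /etc/my.cnf (stock CentOS path), under [mysqld], try a smaller pool, e.g.:
         #   innodb_buffer_pool_size = 64M
         #   key_buffer_size         = 8M

         service mysqld restart
         tail -f /var/log/mysqld.log     # watch for a clean InnoDB start instead of errno 12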

  • Exchange server not serving mobile devices - how to troubleshoot?

    - by chickeninabiscuit
    Our exchange server has suddenly stopped serving mobile devices. Attempts to connect result in our ActiveSync server returning HTTP 500. It is serving outlook clients fine. Our server is Windows 2003 SBS 6.5 SP2 There are no abnormal events in the system log. I ran the "Exchange ActiveSync with AutoDiscover" at https://www.testexchangeconnectivity.com/ I've notice an abnormality in the exchange properties, Log File Directory shows: Access denied. Facility: Win32 ID no: 80070005 Exchange System Manager As shown in the following image: I think it may be related to a recent issue we had here: http://serverfault.com/questions/40222/windows-server-2003-suddenly-unable-to-connect-to-anything We followed a procedure to reinstall TCP/IP: http://support.microsoft.com/kb/325356 I've run the "exchange activesync" connectivity test at testexchangeconnectivity.com: Attempting to Resolve the host name mail.immersive.com.au in DNS. Host successfully Resolved Additional Details IP(s) returned: 221.133.203.229 Testing TCP Port 443 on host mail.immersive.com.au to ensure it is listening/open. The port was opened successfully. Testing SSL Certificate for validity. The certificate passed all validation requirements. Test Steps Validating certificate name Successfully validated the certificate name Additional Details Found hostname mail.immersive.com.au in Certificate Subject Common name Validating certificate trust for Windows Mobile Devices Certificate is trusted and all certificates are present in chain Additional Details Certificate is trusted for Windows Mobile 5 and Later platforms. Root = [email protected], CN=Thawte Server CA, OU=Certification Services Division, O=Thawte Consulting cc, L=Cape Town, S=Western Cape, C=ZA Testing certificate date to ensure validity Date Validation passed. The certificate is not expired. Additional Details Certificate is valid: NotBefore = 1/5/2009 4:00:00 PM, NotAfter = 1/11/2010 3:59:59 PM Testing Http Authentication Methods for URL https://mail.immersive.com.au/Microsoft-Server-Activesync/ Http Authentication Methods are correct Additional Details Found all expected authentication methods and no disallowed methods. Methods Found: Basic Attempting an Activesync session with server Errors were encountered while testing the ActiveSync session Test Steps Attempting to send OPTIONS command to server OPTIONS response was successfully received and is valid Additional Details Headers received: MicrosoftOfficeWebServer: 5.0_Pub Pragma: no-cache Public: OPTIONS, POST Allow: OPTIONS, POST MS-Server-ActiveSync: 6.5.7638.1 MS-ASProtocolVersions: 1.0,2.0,2.1,2.5 MS-ASProtocolCommands: Sync,SendMail,SmartForward,SmartReply,GetAttachment,GetHierarchy,CreateCollection,DeleteCollection,MoveCollection,FolderSync,FolderCreate,FolderDelete,FolderUpdate,MoveItems,GetItemEstimate,MeetingResponse,ResolveRecipients,ValidateCert,Provision,Search,Notify,Ping Content-Length: 0 Date: Thu, 16 Jul 2009 01:07:27 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET Attempting FolderSync command on ActiveSync session FolderSync command test failed Tell me more about this issue and how to resolve it Additional Details Exchange

  • emerge only prints its parameters along with a "Wrong gcc version" message

    - by Dmitriy Matveev
     Our Gentoo server has been left in an inconsistent state. I don't know what has been done wrong previously, but now I need to fix the system somehow. I've tried to run revdep-rebuild, but it failed:

         ...
         x11-libs/gksu:0
         x11-libs/gtk+:2
         x11-libs/gtkglarea:2
         x11-libs/libgksu:2
         x11-libs/libsvg-cairo:0
         x11-libs/qt-gui:4
         ..........
         IMPORTANT: 12 news items need reading for repository 'gentoo'.
         Use eselect news to read news items.

         Calculating dependencies... done!
         emerge: there are no ebuilds to satisfy "gnome-base/gswitchit-plugins:0".
         emerge: searching for similar names...
         emerge: Maybe you meant any of these: gnome-base/gswitchit-plugins, gnome-extra/gswitchit-plugins, gnome-base/nautilus?

         IMPORTANT: 12 news items need reading for repository 'gentoo'.
         Use eselect news to read news items.

         revdep-rebuild failed to emerge all packages.
         you have the following choices:
         If emerge failed during the build, fix the problems and re-run revdep-rebuild.
         Use /etc/portage/package.keywords to unmask a newer version of the package.
         (and remove 5_order.rr to be evaluated again)
         Modify the above emerge command and run it manually.
         Compile or unmerge unsatisfied packages manually, remove temporary files, and try again.
         (you can edit package/ebuild list first)
         To remove temporary files, please run:
         rm /var/cache/revdep-rebuild/*.rr

     I've tried to remove one of the mentioned packages:

         harley ~ # emerge -C gswitchit-plugins
         Wrong gcc version = echo -C gswitchit-plugins
         harley ~ #

     I don't see any problem with gcc itself, but emerge isn't working:

         harley ~ # gcc --version
         gcc (Gentoo 4.5.2 p1.0, pie-0.4.5) 4.5.2
         Copyright (C) 2010 Free Software Foundation, Inc.
         This is free software; see the source for copying conditions. There is NO
         warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

         harley ~ # gcc-config -l
         [1] i686-pc-linux-gnu-3.3.6
         [2] i686-pc-linux-gnu-3.4.6
         [3] i686-pc-linux-gnu-3.4.6-hardened
         [4] i686-pc-linux-gnu-3.4.6-hardenednopie
         [5] i686-pc-linux-gnu-3.4.6-hardenednopiessp
         [6] i686-pc-linux-gnu-3.4.6-hardenednossp
         [7] i686-pc-linux-gnu-4.1.2
         [8] i686-pc-linux-gnu-4.5.2 *

         harley ~ # emerge --help
         Wrong gcc version = echo --help
         harley ~ # which emerge
         /root/bin/emerge
         harley ~ # emerge
         Wrong gcc version = echo
         harley ~ # emerge fdslkgj
         Wrong gcc version = echo fdslkgj
         harley ~ #

     How can I fix emerge?
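
     The fact that which emerge resolves to /root/bin/emerge rather than the usual Portage binary suggests the command being run is a stray wrapper script that only echoes "Wrong gcc version". A small sketch for checking and bypassing it; the assumption that the real emerge lives at /usr/bin/emerge is mine, not taken from the question:

         # see what the shadowing script actually contains
         cat /root/bin/emerge

         # confirm the real Portage emerge still works when called directly
         /usr/bin/emerge --version

         # if the wrapper is junk, move it out of the way and clear the shell's command cache
         mv /root/bin/emerge /root/bin/emerge.broken
         hash -r
         which emerge    # should now resolve to the real emerge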

  • OS X Snow Leopard 10.6 Refuses to Load Websites the first time intermittently

    - by Brandon
    Many times when I am browsing the web, Snow Leopard will sit and load a site for 20 seconds or more, until it times out and says it cannot be displayed. If I refresh, it loads RIGHT away, every time. The issue is intermittent but happens from once every couple of days to a few times a day. So the long and short of it is this: Aluminum MacBook (Non-Pro) 2.4GHz Core2Duo, 4GB DDR3 I am using 10.6.6 but I have had this issue since 10.6.0 It happens in Firefox, Chrome, and Safari I have flushed my DNS (using the command 'blablabla flush') I am using custom DNS servers which I hoped would fix it but it had no effect* I am running Apache currently but haven't been for most of the time I've reformatted multiple times, always experiencing the issue I am on Cox cable internet, with a Motorola Surfboard & a Belkin F6D4230-4 v1 (Pre?) N wireless router. I've put the router in G only & N only & G+N to no effect It seems to be domain dependant as I can sometimes load the Google cache right away, and sometimes other sites will load but Google will refuse My Powerbook G4 with Leopard, other Windows XP laptops, & my wired Win7 desktop do not suffer from the issue. *I recently started using these to escape the awful Cox redirect page on timeouts I'm almost positive the issue has happened on other networks but I can't recall a specific instance (I have a terrible memory). The problem is intermittent and fixable enough (I just have to wait until it times out and hit refresh one time) but incredibly annoying since I'm constantly reading documentation from a large variety of sites. EDIT: To clarify, this happens with ALL sites, not only specific sites. I haven't been able to detect any pattern to the failures, but one day Google.com will refuse to load while reddit.com will, and the next day vice versa. Keep in mind that waiting for a timeout and hitting refresh loads the page right away, every time. If I don't wait for the timeout, opening more links, hitting refresh, and clicking the link a billion times have no effect. It seems to be domain neutral, affecting sites seemingly at random. It doesn't seem to have anything to do with connection inactivity either, because I will be SSHed into different servers, uploading files, browsing, downloading, etc, and it will just quit loading Jquery.com (for example) until I sit and wait for a timeout. /EDIT This is my last resort. Please, someone, tell me what is happening. Thank you.
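
     For what it's worth, a couple of quick checks can help separate DNS lookup time from the actual page fetch on Snow Leopard. The commands below are standard OS X/BSD tools; the router address and hostnames are placeholders, not taken from this setup:

         # flush the OS X 10.6 DNS cache explicitly
         dscacheutil -flushcache

         # time the name lookup on its own, against the router and against the custom DNS servers
         dig google.com
         dig @192.168.2.1 google.com        # replace with the Belkin router's address

         # time the full fetch, so a slow lookup vs. a slow connection becomes visible
         curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' http://google.com/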

  • Why do I see a large performance hit with DRBD?

    - by BHS
     I see a much larger performance hit with DRBD than their user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've set up a DRBD test and measured throughput of disk and network without DRBD:

         dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct
         536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s

     / is a logical volume on the disk I'm testing with, mounted without DRBD.

     iperf:

         [ 4] 0.0-10.0 sec  1.10 GBytes  941 Mbits/sec

     According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk, and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched. It sounds like I should be able to get around 100 MB/s. So, with the raw drbd device, I get

         dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct
         536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s

     which is slower than I would expect. Then, once I format the device with ext4, I get

         dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct
         536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s

     This doesn't seem right. There must be some other factor playing into this that I'm not aware of.

     global_common.conf

         global { usage-count yes; }
         common { protocol C; }
         syncer { al-extents 1801; rate 33M; }

     data_mirror.res

         resource data_mirror {
             device    /dev/drbd1;
             disk      /dev/sdb1;
             meta-disk internal;
             on cluster1 { address 192.168.33.10:7789; }
             on cluster2 { address 192.168.33.12:7789; }
         }

     For the hardware I have two identical machines:

         6 GB RAM
         Quad core AMD Phenom 3.2Ghz
         Motherboard SATA controller
         7200 RPM 64MB cache 1TB WD drive

     The network is 1Gb connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference?

     Edited: I just tried monitoring the bandwidth used to try to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got:

         avg ~450Mbits writing to ext4
         avg ~800Mbits writing to raw device

     It looks like with ext4, drbd is using about half the bandwidth it uses with the raw device, so there's a bottleneck that is not the network.
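
     Since the raw-device and ext4 numbers diverge so much, it may help to watch DRBD's own view of the replication while the dd runs; one frequently cited factor with a filesystem on top of DRBD is that ext4 issues cache flushes/barriers that a raw dd does not, which is worth checking against the disk-flush settings documented for 8.3 before changing anything. A small observation sketch using only standard tools:

         # in one terminal: DRBD state and counters, refreshed every second
         watch -n1 cat /proc/drbd

         # in another: repeat the same three dd runs back-to-back for comparison
         dd if=/dev/zero of=/data.tmp     bs=512M count=1 oflag=direct   # plain LV, no DRBD
         dd if=/dev/zero of=/dev/drbd2    bs=512M count=1 oflag=direct   # raw DRBD device
         dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct   # ext4 on DRBD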

  • How to block subreddits with BIND9?

    - by user1391189
     Please help me block NSFW subreddits like this one (http://www.reddit.com/r/NSFW/). I would like to keep access to SFW subreddits, but block certain subreddits that are distracting or NSFW. I know how to filter domains (see the files below), but how do I apply the filter only to certain subreddits? So far I have set up the following files:

     blocklist.conf

         zone "adimages.go.com" { type master; file "dummy-block"; };
         zone "admonitor.net" { type master; file "dummy-block"; };
         zone "ads.specificpop.com" { type master; file "dummy-block"; };
         ...

     named.conf

         options {
             allow-query { 127.0.0.1; };
             allow-recursion { 127.0.0.1; };
             directory "c:\bind\etc";
             notify no;
         };

         zone "." IN {
             type hint;
             file "c:\bind\etc\named.root";
         };

         zone "localhost" IN {
             allow-update { none; };
             file "c:\bind\etc\localhost.zone";
             type master;
         };

         zone "0.0.127.in-addr.arpa" IN {
             allow-update { none; };
             file "c:\bind\etc\named.local";
             type master;
         };

         key "rndc-key" {
             algorithm hmac-md5;
             secret "O5VdbBKKEMzuLYjM60CxwuLLURFA6peDYHCBvZCqjoa6KtL1ggD7OTLeLtnu2jR5I5cwA/MQ8UdHc+9tMJRSiw==";
         };

         controls {
             inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
         };

         //Blocklist
         include "c:\bind\etc\blocklist.conf";

     dummy-block

         $TTL 604800
         @   IN  SOA localhost. root.localhost. (
                     2         ; Serial
                     604800    ; Refresh
                     86400     ; Retry
                     2419200   ; Expire
                     604800 )  ; Negative Cache TTL
         ;
         @   IN  NS  localhost.
         @   IN  A   127.0.0.1
         *   IN  A   127.0.0.1
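
     The limiting factor here is that DNS only ever sees host names, never URL paths, so BIND cannot tell /r/NSFW apart from any other page on www.reddit.com. A sketch of the most the DNS side can do, in the same style as the existing blocklist; anything finer-grained would have to live in a component that parses HTTP (a filtering proxy), which is outside this configuration:

         # blocklist.conf: DNS can only black-hole whole domains
         zone "reddit.com"     { type master; file "dummy-block"; };
         zone "www.reddit.com" { type master; file "dummy-block"; };

         # Per-subreddit filtering (the /r/NSFW part of the URL) has to happen in
         # something that sees the HTTP request, not in the resolver.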

  • nginx proxypath https redirect fails without trailing slash

    - by Thermionix
     I'm trying to set up Nginx to forward requests to several backend services using proxy_pass. The links on the pages that lack trailing slashes do have https:// in front, but get redirected to an http request with a trailing slash - which ends in connection refused - and I only want these services to be available through https.

     So if a link is to https://example.com/internal/errorlogs, then when loaded in a browser https://example.com/internal/errorlogs gives Error Code 10061: Connection refused (it redirects to http://example.com/internal/errorlogs/). If I manually append the trailing slash, https://example.com/internal/errorlogs/ loads.

     I've tried with various trailing forward slashes appended to the proxy path and location in proxy.conf to no effect, and have also added server_name_in_redirect off;. This happens on more than one app under nginx, and works in an apache reverse proxy.

     Config files:

     proxy.conf

         location /internal {
             proxy_pass http://localhost:8081/internal;
             include proxy.inc;
         }
         .... more entries ....

     sites-enabled/main

         server {
             listen 443;
             server_name example.com;
             server_name_in_redirect off;
             include proxy.conf;
             ssl on;
         }

     proxy.inc

         proxy_connect_timeout 59s;
         proxy_send_timeout 600;
         proxy_read_timeout 600;
         proxy_buffer_size 64k;
         proxy_buffers 16 32k;
         proxy_pass_header Set-Cookie;
         proxy_redirect off;
         proxy_hide_header Vary;
         proxy_busy_buffers_size 64k;
         proxy_temp_file_write_size 64k;
         proxy_set_header Accept-Encoding '';
         proxy_ignore_headers Cache-Control Expires;
         proxy_set_header Referer $http_referer;
         proxy_set_header Host $host;
         proxy_set_header Cookie $http_cookie;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-Host $host;
         proxy_set_header X-Forwarded-Server $host;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header X-Forwarded-Ssl on;
         proxy_set_header X-Forwarded-Proto https;

     curl output

         -$ curl -I -k https://example.com/internal/errorlogs/
         HTTP/1.1 200 OK
         Server: nginx/1.0.5
         Date: Thu, 24 Nov 2011 23:32:07 GMT
         Content-Type: text/html;charset=utf-8
         Connection: keep-alive
         Content-Length: 14327

         -$ curl -I -k https://example.com/internal/errorlogs
         HTTP/1.1 301 Moved Permanently
         Server: nginx/1.0.5
         Date: Thu, 24 Nov 2011 23:32:11 GMT
         Content-Type: text/html;charset=utf-8
         Connection: keep-alive
         Content-Length: 127
         Location: http://example.com/internal/errorlogs/
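
     The 301 shows a Location header pointing at http://, and proxy_redirect off; tells nginx not to rewrite it on the way back out. A hedged sketch of the kind of change usually tried here - the proxy_redirect directive itself is standard nginx, but the exact mappings below are assumptions about this setup, and they presume the proxy_redirect off; line from proxy.inc no longer applies to this location:

         location /internal {
             proxy_pass http://localhost:8081/internal;
             include proxy.inc;

             # rewrite Location headers coming back from the backend so the
             # trailing-slash redirect stays on https and on the public name
             proxy_redirect http://localhost:8081/internal/ https://example.com/internal/;
             proxy_redirect http://example.com/             https://example.com/;
         }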

  • Windows 7 x64 "upgrade" repair fails

    - by Polynomial
    I've been running into issues with Windows Update, which I can't seem to fix. The hotfixes don't work, nor does the Windows update readyness tool, or the manual SP1 upgrade. I get various esoteric errors which nobody seems to have a fix for. Looks like some of the update cache is corrupt and digital signatures seem to be broken on some packages / Windows Update components. Long story short, I have discovered the only option is to do a repair operation on the OS, to repair everything. It's so corrupt that only a complete replacement will fix it. According to various sources (including MSKB) one can perform a repair by running an in-place upgrade. I've got the Windows 7 Ultimate retail disc, which I've inserted into my machine. I ran setup.exe and went through in the following order: Install now Go online to get latest updates (I've also tried not getting updates) Wait for updates to be downloaded Select Windows 7 Ultimate (x64 architecture) and click next Accept the T&Cs, click next Click Upgrade At this point it spends a minute on the "checking compatibility" screen, after which I get the following error: The following issues are preventing Windows from upgrading. Cancel the upgrade, complete each task, and then restart the upgrade to continue. You can’t upgrade 64-bit Windows to a 32-bit version of Windows. To upgrade, obtain a 64-bit version of the installation disc, or go online to see how to install Windows 7 and keep your files and settings. 32-bit Windows cannot be upgraded to a 64-bit version of Windows. To upgrade, obtain a 32-bit version of the Windows installation disc. It also mentions a warning about potential conflicts with a storage driver and VS2010, but that doesn't seem to be the blocking issue. My currently installed version of Windows is Ultimate 64-bit (absolutely sure of this) and the disc is definitely a x86 / x64 combined Ultimate retail disc. There seem to be a few people who have run into this (e.g. this question), but I've not seen any answers. I've checked the event viewer, but can't spot anything in there that's related. Any idea how I can get this working? P.S: Just to pre-empt the inevitable "are you suuuuuuuuuuuuure it's x64 Ultimate?" questions:

  • Nginx Retry of Requests (Nginx - HAProxy Combination)

    - by vaibhav
     I wanted to ask about Nginx retry of requests. I have Nginx which sends the requests to HAProxy, which then passes them on to the web server where the request is processed. I am reloading my HAProxy config dynamically to provide elasticity. The problem is that requests are dropped while HAProxy reloads, so I wanted a solution where I can simply retry those requests from Nginx.

     I looked through proxy_connect_timeout and proxy_next_upstream in the http module, and max_fails and fail_timeout in the server module. I initially had only one server in the upstream block, so I have now listed it twice and fewer requests get dropped (only when the same server appears twice in the upstream; with the same server 3-4 times, the drops increase).

     So, firstly: when a request cannot establish a connection from Nginx to HAProxy while it is reloading, that connection attempt seems to be treated as an error and the request is dropped straight away. How can I either specify the delay after the failure before Nginx retries the request against the upstream, or the time before which Nginx treats it as a failed request? (I have tried increasing proxy_connect_timeout - didn't help - as well as max_fails, fail_timeout, and putting the same upstream server twice; the last gave the best results so far.)

     Nginx conf file

         upstream gae_sleep {
             server 128.111.55.219:10000;
         }

         server {
             listen 8080;
             server_name 128.111.55.219;
             root /var/apps/sleep/app;

             # Uncomment these lines to enable logging, and comment out the following two
             #access_log /var/log/nginx/sleep.access.log upstream;
             error_log /var/log/nginx/sleep.error.log;
             access_log off;
             #error_log /dev/null crit;
             rewrite_log off;
             error_page 404 = /404.html;
             set $cache_dir /var/apps/sleep/cache;

             location / {
                 proxy_set_header X-Real-IP $remote_addr;
                 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                 proxy_set_header Host $http_host;
                 proxy_redirect off;
                 proxy_pass http://gae_sleep;
                 client_max_body_size 2G;
                 proxy_connect_timeout 30;
                 client_body_timeout 30;
                 proxy_read_timeout 30;
             }

             location /404.html {
                 root /var/apps/sleep;
             }

             location /reserved-channel-appscale-path {
                 proxy_buffering off;
                 tcp_nodelay on;
                 keepalive_timeout 55;
                 proxy_pass http://128.111.55.219:5280/http-bind;
             }
         }
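
     proxy_next_upstream only helps if nginx has another upstream entry to fall back on, which is presumably why duplicating the server line reduced the drops. A small sketch of the upstream/location settings that are commonly combined for this; the directives are standard nginx, but the values are illustrative guesses, not tuned for this deployment:

         upstream gae_sleep {
             # same backend listed twice so nginx has a "next" upstream to retry against
             server 128.111.55.219:10000 max_fails=3 fail_timeout=2s;
             server 128.111.55.219:10000 max_fails=3 fail_timeout=2s;
         }

         location / {
             proxy_pass http://gae_sleep;
             # retry the request on connection errors/timeouts instead of failing immediately
             proxy_next_upstream error timeout;
             proxy_connect_timeout 5;
         }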

  • Having trouble getting "RTSP over HTTP"

    - by Muhammad Adeel Zahid
    There is an axis camera that is connected to our site (camba.tv) through axis one click connection component (which acts as proxy). We can communicate with this camera only through http by setting the proxy to our OCCC server's address. If we want to get RTSP streams (h.264) we are only left with "RTSP over HTTP" option. For this I have followed axis VAPIX 3 documentation section 3.3. I issue requests through fiddler but don't get any response. But when i put the URL (axrtsphttp://1.00408CBEA38B/axis-media/media.amp) in windows media player (with proxy set to OCCC server 212.78.237.156:3128) the player is able to get RTSP stream over HTTP after logging in. I have created a request trace of communication between camera and windows media player through wireshark and the request that brings the stream looks like http://1.00408cbea38b/axis-media/media.amp HTTP/1.1 x-sessioncookie: 619 User-Agent: Axis AMC Host: 1.00408CBEA38B Proxy-Connection: Keep-Alive Pragma: no-cache Authorization: Digest username="root",realm="AXIS_00408CBEA38B",nonce="000a8b40Y0100409c13ac7e6cceb069289041d8feb1691",uri="/axis-media/media.amp",cnonce="9946e2582bd590418c9b70e1b17956c7",nc=00000001,response="f3cab86fc84bfe33719675848e7fdc0a",qop="auth" HTTP/1.0 200 OK Content-Type: application/x-rtsp-tunnelled Date: Tue, 02 Nov 2010 11:45:23 GMT RTSP/1.0 200 OK CSeq: 1 Content-Type: application/sdp Content-Base: rtsp://1.00408CBEA38B/axis-media/media.amp/ Date: Tue, 02 Nov 2010 11:45:23 GMT Content-Length: 410 v=0 o=- 1288698323798001 1288698323798001 IN IP4 1.00408CBEA38B s=Media Presentation e=NONE c=IN IP4 0.0.0.0 b=AS:50000 t=0 0 a=control:* a=range:npt=0.000000- m=video 0 RTP/AVP 96 b=AS:50000 a=framerate:30.0 a=transform:1,0,0;0,1,0;0,0,1 a=control:trackID=1 a=rtpmap:96 H264/90000 a=fmtp:96 packetization-mode=1; profile-level-id=420029; sprop-parameter-sets=Z0IAKeNQFAe2AtwEBAaQeJEV,aM48gA== RTSP/1.0 200 OK CSeq: 2 Session: 3F4763D8; timeout=60 Transport: RTP/AVP/TCP;unicast;interleaved=0-1;ssrc=060922C6;mode="PLAY" Date: Tue, 02 Nov 2010 11:45:24 GMT RTSP/1.0 200 OK CSeq: 3 Session: 3F4763D8 Range: npt=0- RTP-Info: url=rtsp://1.00408CBEA38B/axis-media/media.amp/trackID=1;seq=7392;rtptime=4190934902 Date: Tue, 02 Nov 2010 11:45:24 GMT [Binary Stream Content] But when i copy this request to fiddler, I only get 200 status code with content-type set to application/x-rtsp-tunneled and there is no stream data. The only thing i do different with stream is to use Basic in authorization header instead of Digest and I do not get 401 (Un authorized) status code. Can anyone explain what's happening here? How can I write request sequences to get stream in fiddler? If it is needed, I can upload the wireshark request dump somewhere.

  • Forward all traffic through an ssh tunnel

    - by Eamorr
     I hope someone can follow this and I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and onto port 7000 on x.x.x.218. Here is some ASCII art:

         |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>-----|Squid on x.x.x.218|-----|www|
                  3128                     6999                                    7000                          80

     When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth). This is because it's a satellite link. Here's the ssh tunnel I've been using:

         ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N

     The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel. Am I going about this the wrong way? Should I create an ssh tunnel on x.x.x.218? I use iptables to stop squid on x.x.x.224 from reading port 80, but to feed from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance.

     Regarding Eduardo Ivanec's question, here is a tcpdump -i any port 7000 -nn dump from x.x.x.218:

         14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0
         14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0

     Update 2: When I run the second command, I get the following error in my browser:

         ERROR
         The requested URL could not be retrieved

         The following error was encountered while trying to retrieve the URL: http://109.123.109.205/index.php

         Zero Sized Reply
         Squid did not receive any data for this request.

         Your cache administrator is webmaster.
         Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9)

     remote-site is 172.16.1.224. When I do a tcpdump -i any port 7000 -nn I get the following:

         root@remote-site:~# tcpdump -i any port 7000 -nn
         tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
         listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
         channel 2: open failed: connect failed: Connection refused
         channel 2: open failed: connect failed: Connection refused
         channel 2: open failed: connect failed: Connection refused
         channel 2: open failed: connect failed: Connection refused
         channel 2: open failed: connect failed: Connection refused
         channel 2: open failed: connect failed: Connection refused
         channel 2: open failed: connect failed: Connection refused
         channel 2: open failed: connect failed: Connection refused
         channel 2: open failed: connect failed: Connection refused
         channel 2: open failed: connect failed: Connection refused
         channel 2: open failed: connect failed: Connection refused
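
     One way this kind of squid-to-squid layout is often wired up: run the tunnel on x.x.x.224 so that a local port there leads to the remote Squid, and point the local Squid at that port as a parent. The cache_peer and never_direct lines are standard Squid directives, but treat the whole thing as a rough sketch of an assumed setup rather than a tested recipe (OpenSSH generally does not accept Cipher=none, so only compression is shown):

         # on x.x.x.224: local port 7000 now reaches Squid on x.x.x.218:7000 over ssh
         ssh -f -N -C -o CompressionLevel=9 user@x.x.x.218 -L 7000:127.0.0.1:7000

         # in squid.conf on x.x.x.224: send everything to the far-end Squid via the tunnel
         #   cache_peer 127.0.0.1 parent 7000 0 no-query default
         #   never_direct allow all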

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this: Different data sets (filesystems or subtrees) can have different RAID levels so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a few number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0). If each data set has an explicit size or size limit, then the ability to grow and shrink the size limit (offline if need be) Avoid out-of-kernel modules R/W or read-only COW snapshots. If it's a block-level snapshots, the filesystem should be synced and quiesced during a snapshot. Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution) Not required but nice-to-haves: Transparent compression, per-file or subtree. Even better if, like NetApps, analyzes the data first for compressibility and only compresses compressible data Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good) Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks) Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level. Space-efficient packing of small files I tried ZFS on Linux but the limitations were: Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager Write IOPS does not scale with number of devices in a raidz vdev. Cannot add disks to raidz vdevs Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks ext4 on LVM2 looks like an option except I can't tell whether I can shrink, extend, and redistribute onto new spindles RAID-type logical volumes (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves so I was wondering if there is something better out there. I did look at LVM dangers and caveats but then again, no system is perfect.
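
     As a concrete point of comparison for the ext4-on-LVM2 option: recent LVM2 releases can carve RAID1/RAID5/striped logical volumes out of the same three PVs, which covers the "different RAID level per data set" requirement, though not the snapshot, checksumming, or tiering ones. A hedged sketch; the flags assume a reasonably recent LVM2 with dm-raid support, and the device names and sizes are placeholders:

         pvcreate /dev/sda /dev/sdb /dev/sdc
         vgcreate vg0 /dev/sda /dev/sdb /dev/sdc

         # very important data: 2-way mirror (3-way would be -m 2)
         lvcreate --type raid1 -m 1 -L 50G  -n crit    vg0

         # important data: raid5 across the three disks
         lvcreate --type raid5 -i 2 -L 200G -n bulk    vg0

         # reproducible scratch data: plain striping across all three
         lvcreate -i 3 -L 100G -n scratch vg0

         # ext4 on top can then be grown online and shrunk offline with lvresize/resize2fs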

  • Why is my cron daemon being killed every few minutes? OpenVZ?

    - by user113215
     As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (anywhere between 1 and 30 minutes before the process is killed again). Using strace -f service cron start, I can see that the process is being killed for some reason:

         nanosleep({56, 0}, 0x7fffa7184c80)      = 0
         stat("crontabs", {st_mode=S_IFDIR|S_ISVTX|0730, st_size=4096, ...}) = 0
         stat("/etc/crontab", {st_mode=S_IFREG|0644, st_size=1100, ...}) = 0
         stat("/etc/cron.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
         stat("/etc/cron.d/php5", {st_mode=S_IFREG|0644, st_size=475, ...}) = 0
         stat("/etc/cron.d/anacron", {st_mode=S_IFREG|0644, st_size=244, ...}) = 0
         rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
         rt_sigaction(SIGCHLD, NULL, {0x4036f0, [CHLD], SA_RESTORER|SA_RESTART, 0x2b0e8465f230}, 8) = 0
         rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
         nanosleep({60, 0},  <unfinished ...>
         +++ killed by SIGKILL +++

     There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has about 500 MB of free memory, and cat /proc/loadavg returns 0.10 0.21 0.45, so resources shouldn't be the issue. I also tried removing and reinstalling the cron package using apt-get. What else should I check? How do I find out what's killing my crond?

     Edit: I'm on a virtual machine under OpenVZ (and as such, I have no swap). With cron running, free -m reports:

                      total       used       free     shared    buffers     cached
         Mem:          1024        465        558          0          0          0
         -/+ buffers/cache:        465        558
         Swap:            0          0          0

     My OpenVZ User Beancounters via cat /proc/user_beancounters:

         Version: 2.5
         uid      resource         held      maxheld     barrier       limit     failcnt
         172087:  kmemsize      8275718     25561636    51200000    51200000           0
                  lockedpages         0          968        2048        2048           0
                  privvmpages    113442       266465      262200      262200     3740757
                  shmpages          788         4004      128000      128000           0
                  dummy               0            0           0           0           0
                  numproc            39           98         600         600           0
                  physpages       50521       208434           0  9223372036854775807  0
                  vmguarpages         0            0      512000      512000           0
                  oomguarpages    50521       208447      512000      512000           0
                  numtcpsock          7          323        4096        4096           0
                  numflock            7           64        2048        2048           0
                  numpty              1            4          32          32           0
                  numsiginfo          0           23        1024        1024           0
                  tcpsndbuf      137984     17878480    20480000    20480000           0
                  tcprcvbuf      114688      6983504    20480000    20480000           0
                  othersockbuf   162960      1074440    20480000    20480000           0
                  dgramrcvbuf         0        24208    10240000    10240000           0
                  numothersock      101          353        2048        2048           0
                  dcachesize     459171       747444    10240000    10240000           0
                  numfile          1010         4221       50000       50000           0
                  dummy               0            0           0           0           0
                  dummy               0            0           0           0           0
                  dummy               0            0           0           0           0
                  numiptent          39          424        2048        2048           0
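
     One detail worth highlighting in those beancounters: privvmpages has a barrier/limit of 262200 and a failcnt of 3740757, and OpenVZ refuses (and ultimately kills) allocations inside the container when that counter fails - which would match a silent SIGKILL that never appears in the container's own logs. A small sketch of how this is usually confirmed and, from the hardware node, adjusted; the CTID comes from the output above, the new value is only an example:

         # inside the container: watch whether failcnt keeps climbing around the time cron dies
         grep -E 'privvmpages|oomguarpages' /proc/user_beancounters

         # on the OpenVZ hardware node (requires access to the host): raise the limit
         vzctl set 172087 --privvmpages 524288:524288 --save
         vzctl exec 172087 service cron start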

  • Memcached Debugging/Server Logs - How to Monitor the Memcached Servers?

    - by user1179459
     I have a chat engine which is based on Memcached variables, putting them into arrays and reading them at the other end via jQuery. This works fine 95% of the time, but when the server load is high Memcached (I presume it's Memcached) crashes and the browser gets stuck. I don't think it's a jQuery issue, since this only happens when the server load is very high. I need a way to monitor the Memcached servers, or somehow write a log file recording where the failures/errors come in... Any idea how I can do this? Or any idea why the Memcached servers fail?

     I run Memcached as follows:

         $GLOBALS['MemCached'] = FALSE;
         $GLOBALS['MemCached'] = new Memcache;
         $GLOBALS['MemCached']->pconnect('localhost', 11211);

     My memcached config is as follows:

         #! /bin/sh
         #
         # chkconfig: - 55 45
         # description: The memcached daemon is a network memory cache service.
         # processname: memcached
         # config: /etc/sysconfig/memcached
         # pidfile: /var/run/memcached/memcached.pid

         # Standard LSB functions
         #. /lib/lsb/init-functions

         # Source function library.
         . /etc/init.d/functions

         PORT=11211
         USER=memcached
         MAXCONN=1024
         CACHESIZE=128
         OPTIONS=""

         if [ -f /etc/sysconfig/memcached ];then
             . /etc/sysconfig/memcached
         fi

         # Check that networking is up.
         . /etc/sysconfig/network

         if [ "$NETWORKING" = "no" ]
         then
             exit 0
         fi

         RETVAL=0
         prog="memcached"
         pidfile=${PIDFILE-/var/run/memcached/memcached.pid}
         lockfile=${LOCKFILE-/var/lock/subsys/memcached}

         start () {
             echo -n $"Starting $prog: "
             # Ensure that /var/run/memcached has proper permissions
             if [ "`stat -c %U /var/run/memcached`" != "$USER" ]; then
                 chown $USER /var/run/memcached
             fi
             daemon --pidfile ${pidfile} memcached -d -p $PORT -u $USER -m $CACHESIZE -c $MAXCONN -P ${pidfile} $OPTIONS
             RETVAL=$?
             echo
             [ $RETVAL -eq 0 ] && touch ${lockfile}
         }

         stop () {
             echo -n $"Stopping $prog: "
             killproc -p ${pidfile} /usr/bin/memcached
             RETVAL=$?
             echo
             if [ $RETVAL -eq 0 ] ; then
                 rm -f ${lockfile} ${pidfile}
             fi
         }

         restart () {
             stop
             start
         }

         # See how we were called.
         case "$1" in
             start)
                 start
                 ;;
             stop)
                 stop
                 ;;
             status)
                 status -p ${pidfile} memcached
                 RETVAL=$?
                 ;;
             restart|reload|force-reload)
                 restart
                 ;;
             condrestart|try-restart)
                 [ -f ${lockfile} ] && restart || :
                 ;;
             *)
                 echo $"Usage: $0 {start|stop|status|restart|reload|force-reload|condrestart|try-restart}"
                 RETVAL=2
                 ;;
         esac

         exit $RETVAL
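
     For the monitoring side, memcached exposes counters over its text protocol, so a first pass can be done with nothing more than nc (or the memcached-tool script that ships with memcached). A small sketch; the log path and polling interval are arbitrary choices, not recommendations:

         # one-off snapshot of hit/miss/eviction/connection counters
         printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211

         # poll every 10 seconds into a log file to correlate with the high-load incidents
         while true; do
             { date; printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211; echo; } >> /var/log/memcached-stats.log
             sleep 10
         done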

  • PowerShell remove force

    - by mausch
    Trying to delete a directory recursively with rm -Force -Recurse somedirectory, I get several "The directory is not empty" errors. If I retry the same command, it succeeds. Example: PS I:\Documents and Settings\m\My Documents\prg\net> rm -Force -Recurse .\FileHelpers Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data\RunTime\_svn: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (_svn:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data\RunTime: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (RunTime:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (Data:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (FileHelpers.Tests:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs\nunit\_svn: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (_svn:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs\nunit: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (nunit:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (Libs:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (I:\Documents an...net\FileHelpers:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand PS I:\Documents and Settings\m\My Documents\prg\net> rm -Force -Recurse .\FileHelpers PS I:\Documents and Settings\m\My Documents\prg\net> Of course, this doesn't happen always. 
Also, it doesn't happen only with _svn directories, and I don't have TortoiseSVN cache or anything like that so nothing is blocking the directory. Any ideas?
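
     Since a second Remove-Item always succeeds, one pragmatic workaround often used is a small retry loop around the delete. This is a hedged sketch in PowerShell; the path and retry count are placeholders:

         $target = '.\FileHelpers'
         for ($i = 0; $i -lt 5 -and (Test-Path $target); $i++) {
             Remove-Item $target -Recurse -Force -ErrorAction SilentlyContinue
             Start-Sleep -Milliseconds 500
         }
         if (Test-Path $target) { throw "Could not remove $target" }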

  • 2 Servers setup for redundancy, backup

    - by minal
    I presently have 1 dedicated virtual server running my website/blog/mail, etc. This is on Hyper-V with 512MB RAM. Windows Web2008. With the VM, I have these running within it: SmarterMail – for emails MS DNS – I have my own nameservers on this server SQL Express IIS7 2 IP Address I have now leased 2 physical servers : P4 2.6Ghz 1GB RAM 80GB HDD. With these new servers, I get 2 IPs per server as well. These are running Windows 2008 Standard. With the VM the HDD was obviously on a RAID setup so I was not worried about hardware issues as it fell on the provider to manage. However, with the new servers the HDD is not RAID’d, hence my concern is that if it fails I need a backup position. What would be the most ideal setup to go for? I am thinking: Server 1: (Web/PrimaryDNS) DNS – NS1 SQL Express – OFF turn on when required, ie. Server2 is down SmarterMail – OFF turn on when required, ie. Server2 is down IIS 7 Server2:(SQL/Backup) DNS – NS2 SQL Web Edition SmarterMail IIS 7 How can I set it up so that if 1 goes down I can have everything on 2 instantly or by manual switching over. I am confused as other DNS servers will cache the web servers IP address for requests, and if that server goes down, the backup server will have a different IP. How do I make this work? I will be doing routine backups, in which case I will keep copies of backups on both servers. If I am copying the same stuff on both servers like a mirror then I am losing on using the true performance out of it. It's like 1 server is always on standby. Ideally I want SQL and web on 2 diff machines for best performance. If Server1 goes down, I should be able to switch to Server2 fairly easily. I don't have a problem with manual intervention to start the sql/mail services, etc. In terms of scalabilty, the VM has coped pretty well to date. Moving forward the SQL and IIS workload is going to double pretty quickly. Some ideas would be great.

  • Apache2 - mod_expires and mod_rewrite not working in httpd.conf - serving content from Tomcat

    - by Ankit Agrawal
     I am using an Apache2 server running on Debian which forwards all HTTP requests to Tomcat installed on the same machine. I have two files under my /etc/apache2/ folder, apache2.conf and httpd.conf. I modified the httpd.conf file to look like the following:

         # forward all http requests on port 80 to tomcat
         ProxyPass / ajp://127.0.0.1:8009/
         ProxyPassReverse / ajp://127.0.0.1:8009/

         # gzip text content
         AddOutputFilterByType DEFLATE text/plain
         AddOutputFilterByType DEFLATE text/html
         AddOutputFilterByType DEFLATE text/xml
         AddOutputFilterByType DEFLATE text/css
         AddOutputFilterByType DEFLATE text/javascript
         AddOutputFilterByType DEFLATE application/xml
         AddOutputFilterByType DEFLATE application/xhtml+xml
         AddOutputFilterByType DEFLATE application/rss+xml
         AddOutputFilterByType DEFLATE application/javascript
         AddOutputFilterByType DEFLATE application/x-javascript
         DeflateCompressionLevel 9
         BrowserMatch ^Mozilla/4 gzip-only-text/html
         BrowserMatch ^Mozilla/4\.0[678] no-gzip
         BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

         # Turn on Expires and mark all static content to expire in a week
         # unset last modified and ETag
         ExpiresActive On
         ExpiresDefault A0
         <FilesMatch "\.(jpg|jpeg|png|gif|js|css|ico)$">
             ExpiresDefault A604800
             Header unset Last-Modified
             Header unset ETag
             FileETag None
             Header append Cache-Control "max-age=604800, public"
         </FilesMatch>

         RewriteEngine On
         # rewrite all www.example.com/content/XXX-01.js and YYY-01.css files to XXX.js and YYY.css
         RewriteRule ^content/(js|css)/([a-z]+)-([0-9]+)\.(js|css)$ /content/$1/$2.$4
         # remove all query parameters from URL after we are done with it
         RewriteCond %{THE_REQUEST} ^GET\ /.*\;.*\ HTTP/
         RewriteCond %{QUERY_STRING} !^$
         RewriteRule .* http://example.com%{REQUEST_URI}? [R=301,L]
         # rewrite all www.example.com to example.com
         RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
         RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]

     I want to achieve the following:

     1. forward all traffic to Tomcat
     2. gzip all the text content
     3. put a 1-week expiry header on all static files and unset the ETag/Last-Modified headers
     4. rewrite all js and css files to a certain format
     5. remove all the query parameters from the URL
     6. forward all www.example.com to example.com

     The problem is that only 1 and 2 are working. I tried a lot of combinations, but the expires and rewrite rules (3-6) do not work at all. I also tried moving these rules to the apache2.conf and .htaccess files, but that didn't work either. It does not give any error; these rules are simply ignored. The expires and rewrite modules are ENABLED. Please let me know what I should do to fix this.

     1. Do I need to add something else in the httpd.conf file (like Options +FollowSymLink) or something else?
     2. Do I need to add something in the apache2.conf file?
     3. Do I need to move these rules to a .htaccess file? If yes, what should I write in that file and where should I keep it? In the /etc/apache2/ folder or the /var/www/ folder?
     4. Any other info to make this work?

     Thanks, Ankit
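
     One thing the configuration itself suggests: FilesMatch and FileETag apply to files Apache serves from disk, but here every request is handed to Tomcat by ProxyPass, so those containers never match; likewise, RewriteRules in server context see a URI that starts with "/", so ^content/... cannot match. A hedged sketch of how the expiry part is often expressed for proxied content instead - LocationMatch, mod_headers and mod_expires are standard Apache, but whether this fits the rest of the setup is an assumption:

         # match on the request URL instead of a filesystem path, since everything is proxied
         <LocationMatch "\.(jpg|jpeg|png|gif|js|css|ico)$">
             ExpiresActive On
             ExpiresDefault A604800
             Header unset Last-Modified
             Header unset ETag
             Header append Cache-Control "max-age=604800, public"
         </LocationMatch>

         # in server context the URI starts with "/", so anchor the rewrite accordingly
         RewriteRule ^/content/(js|css)/([a-z]+)-([0-9]+)\.(js|css)$ /content/$1/$2.$4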

  • MySQL on Linux out of memory

    - by Sunrays
    OS: Redhat Enterprise Linux Server Release 5.3 (Tikanga) Architecture: Intel Xeon 64Bit MySQL Server 5.5.20 Enterprise Server advanced edition. Application: Liferay. My database size is 200MB. RAM is 64GB. The memory consumption increases gradually and we run out of memory. Then only rebooting releases all the memory, but then process of memory consumption starts again and reaches 63-64GB in less than a day. Parameters detail: key_buffer_size=16M innodb_buffer_pool_size=3GB inndb_buffer_pool_instances=3 max_connections=1000 innodb_flush_method=O_DIRECT innodb_change_buffering=inserts read_buffer_size=2M read_rnd_buffer_size=256K It's a serious production server issue that I am facing. What could be the reason behind this and how to resolve. This is the report of 2pm today, after Linux was rebooted yesterday @ around 10pm. Output of free -m total used free shared buffers cached Mem: 64455 22053 42402 0 1544 1164 -/+ buffers/cache: 19343 45112 Swap: 74998 0 74998 Output of vmstat 2 5 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 43423976 1583700 1086616 0 0 1 173 22 27 1 1 98 0 0 2 0 0 43280200 1583712 1228636 0 0 0 146 1265 491 2 2 96 1 0 0 0 0 43421940 1583724 1087160 0 0 0 138 1469 738 2 1 97 0 0 1 0 0 43422604 1583728 1086736 0 0 0 5816 1615 934 1 1 97 0 0 0 0 0 43422372 1583732 1086752 0 0 0 2784 1323 545 2 1 97 0 0 Output of top -n 3 -b top - 14:16:22 up 16:32, 5 users, load average: 0.79, 0.77, 0.93 Tasks: 345 total, 1 running, 344 sleeping, 0 stopped, 0 zombie Cpu(s): 1.0%us, 0.9%sy, 0.0%ni, 98.1%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 66002772k total, 22656292k used, 43346480k free, 1582152k buffers Swap: 76798724k total, 0k used, 76798724k free, 1163616k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6434 mysql 15 0 4095m 841m 5500 S 113.5 1.3 426:53.69 mysqld 1 root 15 0 10344 680 572 S 0.0 0.0 0:03.09 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/2 9 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 14 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/4 15 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/4 16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4 17 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/5 18 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/5 19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5 20 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/6 21 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/6 22 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/6 23 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/7 24 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/7 25 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/7 26 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/8 27 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/8 28 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/8 29 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/9 30 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/9 31 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/9 32 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/10 33 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/10 34 
root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/10 35 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/11 36 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/11 37 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/11 38 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/12 39 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/12 40 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/12 41 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/13 42 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/13 43 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/13 44 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/14 45 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/14 46 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/14 47 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/15 48 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/15 49 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/15 50 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/16 51 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/16 52 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/16 53 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/17 54 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/17 55 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/17 56 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/18 57 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/18 58 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/18 59 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/19 60 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/19 61 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/19 62 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/20 63 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/20 64 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/20 65 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/21 66 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/21 67 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/21 68 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/22 69 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/22 70 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/22 71 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/23 72 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/23 73 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/23 74 root 10 -5 0 0 0 S 0.0 0.0 0:00.02 events/0 75 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/1 76 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/2 77 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/3 78 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/4 79 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/5 80 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/6 81 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/7 82 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/8 83 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/9 84 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/10 85 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/11 86 root 10 -5 0 0 0 S 0.0 0.0 0:00.01 events/12 87 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/13 88 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/14 89 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/15 90 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/16 91 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/17 92 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/18 93 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/19 94 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/20 95 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/21 96 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/22 97 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/23 98 root 10 -5 0 0 0 S 0.0 0.0 0:00.01 khelper 615 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kthread 643 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/0 644 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/1 645 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/2 646 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/3 647 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/4 648 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/5 
    [remainder of the top listing condensed: it shows only idle kernel worker threads - kblockd/0-23, kacpid, cqueue/0-23, khubd, kseriod, pdflush, kswapd0/1, aio/0-23, kpsmoused, ata/0-23, ata_aux, scsi_eh_0-3, qla2xxx_2/3_dpc, scsi_wq, fc_wq, fc_dl, kstriped, kjournald, kauditd, kmpathd/0-23, kmpath_handlerd, rpciod/0-11 - plus the usual daemons (udevd, multipathd, auditd, audispd, syslogd, klogd, irqbalance, portmap, rpc.statd), all at 0.0% CPU and 0.0% memory]

    Read the article

  • Apache/JBoss issue - is this a connection timeout?

    - by user115391
    We have an application. The architecture is as below 1 load balancer (apache), which redirects to 2 app servers (jboss). The site is working fine and I am able to access it fine. But sometimes, randomly the homepage takes a while (like 30-40 secs) to load. I tried checking the logs but could not figure out why. I used the httptraffic analyzer, fiddler to see the traffic, but it just says the request/response took 30 secs or so. I checked the apache access logs, mod_jk.log. My configurations are below mod-jk.conf LoadModule jk_module modules/mod_jk.so JkWorkersFile conf/workers.properties JkLogFile logs/mod_jk.log #JkLogLevel info #JkLogLevel debug JkLogLevel error # Select the log format JkLogStampFormat "[%a %b %d %H:%M:%S %Y]" JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories JkRequestLogFormat "%w %V %T %P %{tid}P %D" JkMount /__application__/* loadbalancer JkUnMount /__application__/images/* loadbalancer <VirtualHost *:8080 > JkMountFile conf/uriworkermap.properties </VirtualHost> JkShmFile run/jk.shm <Location /jkstatus> JkMount status Order deny,allow Deny from all Allow from 127.0.0.1 </Location> ----------------------------- uriworkermap.properties Simple worker configuration file # Mount the Servlet context to the ajp13 worker /=loadbalancer /*=loadbalancer ----------------------------- workers.properties worker.list=loadbalancer,status worker.template.port=8009 worker.template.type=ajp13 worker.template.lbfactor=1 worker.template.prepost_timeout=10000 worker.template.connect_timeout=10000 worker.template.ping_mode=A worker.worker1.reference=worker.template worker.worker1.host=hostname1 worker.worker2.reference=worker.template worker.worker2.host=hostname2 worker.loadbalancer.type=lb worker.loadbalancer.balance_workers=worker1,worker2 worker.status.type=status ----------------------------- my jboss server.xml - $JBOSS_HOME/server/default/deploy/jbossweb.sar/server.xml --------------------------------- The logs from access log is below The issue where it took time - look at the seconds column [23/Mar/2012:12:10:38 -0400] "GET / HTTP/1.1" 200 138 x.x.x.x - - [23/Mar/2012:12:10:49 -0400] "GET /index.jsp HTTP/1.1" 302 - x.x.x.x - - [23/Mar/2012:12:11:10 -0400] "GET /home.jsp HTTP/1.1" 200 936 x.x.x.x - - [23/Mar/2012:12:11:31 -0400] "POST /login/ HTTP/1.1" 200 8895 x.x.x.x - - [23/Mar/2012:12:11:52 -0400] "GET /login/includes/login-style.css HTTP/1.1" 304 - The one after the issue x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET / HTTP/1.1" 200 138 x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /index.jsp HTTP/1.1" 302 - x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /home.jsp HTTP/1.1" 200 936 x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "POST /login/ HTTP/1.1" 200 8895 x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /login/includes/login-style.css HTTP/1.1" 304 - Would it be a cache or timeout issue? Any help is appreciated. Thanks.
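
    One way to narrow a stall like this down (a diagnostic sketch only - the timeout values and the log file name below are illustrative assumptions, not taken from the configuration above) is to log per-request latency on the Apache side and to bound the AJP timeouts so a slow backend fails fast instead of hanging:

      # workers.properties - example timeouts (tune to the application)
      worker.template.socket_timeout=10          # drop a dead TCP connection after 10 s
      worker.template.reply_timeout=30000        # give up if the backend does not reply within 30 s
      worker.template.connection_pool_timeout=60

      # httpd.conf - log how long Apache itself spent on each request (%D = microseconds)
      LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
      CustomLog logs/access_timed.log timed

    Comparing this Apache-side duration with the request duration already recorded by JkRequestLogFormat should show whether the 30-40 seconds are spent in Apache/mod_jk or inside JBoss.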

    Read the article

  • VMWare ESXi virtual machine can contact the gateway but not the DNS server

    - by Nathan Palmer
    I am having a bit of a strange issue. I have a VMWare ESXi server with two virtual machines running on it. They are running just fine and can communicate on the network without a problem. I am now trying to add a third. I am installing Ubuntu 8.04 Server. I assign it a static IP address and it's a fresh installation. Once installed I can ping the gateway but I cannot ping the DNS server. It's on the same network with the other two VMs which are communicating just fine. I have tried to reinstall the operating system but it still fails to connect. Here is /etc/network/interfaces auto eth0 iface eth0 inet static address 192.168.1.23 netmask 255.255.255.0 network 192.168.1.0 broadcast 192.168.1.255 gateway 192.168.1.1 dns-nameservers 208.67.222.222 #opendns dns-search mydomain.com Here is route Destination | Gateway | Genmask | Flags | Metric | Ref | Use | Iface localnet | * | 255.255.255.0 | U | 0 | 0 | 0 | eth0 default | 192.168.1.1 | 0.0.0.0 | UG | 100 | 0 | 0 | eth0 Since I'm running this behind a FortiGate this is what the sniff command gives me when I try to ping 208.67.222.222 arp who-has 192.168.1.1 tell 192.168.1.23 arp reply 192.168.1.1 is-at MAC 192.168.1.23 -> 208.67.222.222: icmp: echo request 192.168.1.23 -> 208.67.222.222: icmp: echo request 192.168.1.23 -> 208.67.222.222: icmp: echo request 192.168.1.23 -> 208.67.222.222: icmp: echo request 192.168.1.23 -> 208.67.222.222: icmp: echo request As you can see it looks like I never get a response. One interesting thing I notice is the arp reply's MAC doesn't look right. I have cleared the FortiGate's ARP cache though and checked the entry and it seems correct. The MAC it lists is the one for the router. However if I ping from a different virtual machine that is also Ubuntu 8.04 with a nearly identical configuration I get this. 192.168.1.22 -> 208.67.222.222: icmp: echo request 208.67.222.222 -> 192.168.1.22: icmp: echo reply 192.168.1.22 -> 208.67.222.222: icmp: echo request 208.67.222.222 -> 192.168.1.22: icmp: echo reply 192.168.1.22 -> 208.67.222.222: icmp: echo request 208.67.222.222 -> 192.168.1.22: icmp: echo reply So, what could I be missing? Thanks.
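
    A quick way to see whether the odd ARP reply is really the problem (a sketch of standard checks - the interface name eth0 and the use of sudo are assumptions): compare the MAC the broken guest learns for the gateway with what a working guest learns, and watch whether the echo requests actually leave the NIC:

      # on the affected guest (192.168.1.23)
      arp -n 192.168.1.1                       # note the MAC recorded for the gateway
      sudo arp -d 192.168.1.1                  # clear it and force a fresh resolution
      sudo tcpdump -n -e -i eth0 "arp or icmp" &
      ping -c 3 208.67.222.222

      # on the working guest (192.168.1.22), for comparison
      arp -n 192.168.1.1

    If the two guests record different MACs for 192.168.1.1, the issue is more likely on the vSwitch/port group or an address conflict on the LAN than in this VM's /etc/network/interfaces.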

    Read the article

  • Apache Mod_rewrite rule working on one server, but not another

    - by Mason
    I am using mod_jk and mod_rewrite on httpd 2.2.15. I have a rule.... RewriteCond %{REQUEST_URI} !^/video/play\.xhtml.* RewriteRule ^/video/(.*) /video/play.xhtml?vid=$1 [PT] I just want to rewrite something like /video/videoidhere to /video/play.xhtml?vid=videoidhere This works perfectly on my developer machine, but on production I get a 404 (generated by Jboss, not Apache). here is the tail of access.log and rewrite.log on prod (broken). the rewrite.log is exactly the same on dev(working) applying pattern '^/video/(.*)' to uri '/video/46279d4daf5440b2844ec831413dcc3b' RewriteCond: input='/video/46279d4daf5440b2844ec831413dcc3b' pattern='!^/video/play\.xhtml.*' => matched rewrite '/video/46279d4daf5440b2844ec831413dcc3b' -> '/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b' split uri=/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b -> uri=/video/play.xhtml, args=vid=46279d4daf5440b2844ec831413dcc3b forcing '/video/play.xhtml' to get passed through to next API URI-to-filename handler "GET /video/46279d4daf5440b2844ec831413dcc3b HTTP/1.1" 404 420 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.6) Gecko/20100628 Ubuntu/10.04 (lucid) Firefox/3.6.6" I can access http://www.fivi.com/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b but not /video/46279d4daf5440b2844ec831413dcc3b Both server are even using the EXACT same httpd.conf, and modules. I built Apache with... ./configure --prefix /usr/local/apache2.2.15 --enable-alias --enable-rewrite --enable-cache --enable-disk_cache --enable-mem_cache --enable-ssl --enable-deflate Thanks, Mason ----UPDATE---- -mod-jk.conf JkWorkersFile /usr/local/apache2.2.15/conf/workers.properties JkLogFile /var/log/mod_jk.log JkLogLevel info JkLogStampFormat "[%a %b %d %H:%M:%S %Y]" JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories JkRequestLogFormat "%w %V %T" JkShmFile run/jk.shm <Location /jkstatus> JkMount status Order deny,allow Deny from all Allow from 127.0.0.1 </Location> -workers.properties worker.node1.port=8009 worker.node1.host=75.102.10.74 worker.node1.type=ajp13 worker.node1.lbfactor=20 worker.node1.ping_mode=A #As of mod_jk 1.2.27 worker.node2.port=8009 worker.node2.host=75.102.10.75 worker.node2.type=ajp13 worker.node2.lbfactor=10 worker.node2.ping_mode=A #As of mod_jk 1.2.27 worker.loadbalancer.type=lb worker.loadbalancer.balance_workers=node2,node1 worker.loadbalancer.sticky_session=True worker.status.type=status -httpd.conf ServerName www.fivi.com:80 Include /usr/local/apache2.2.15/conf/mod-jk.conf NameVirtualHost * <VirtualHost *> ServerName * DocumentRoot /usr/local/apache2/htdocs JkUnMount /* loadbalancer RedirectMatch 301 /(.*) http://www.fivi.com/$1 </VirtualHost> <VirtualHost *> ServerName www.fivi.com ServerAlias www.fivi.com images.fivi.com JkMount /* loadbalancer JkMount / loadbalancer [root@fivi conf]# /usr/local/apache2.2.15/bin/httpd -M Loaded Modules: core_module (static) authn_file_module (static) authn_default_module (static) authz_host_module (static) authz_groupfile_module (static) authz_user_module (static) authz_default_module (static) auth_basic_module (static) cache_module (static) disk_cache_module (static) mem_cache_module (static) include_module (static) filter_module (static) deflate_module (static) log_config_module (static) env_module (static) headers_module (static) setenvif_module (static) version_module (static) ssl_module (static) mpm_prefork_module (static) http_module (static) mime_module (static) status_module (static) autoindex_module (static) asis_module 
(static) cgi_module (static) negotiation_module (static) dir_module (static) actions_module (static) userdir_module (static) alias_module (static) rewrite_module (static) so_module (static) jk_module (shared) Syntax OK
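
    A hedged way to localise this (the backend HTTP port 8080 and the example video id are assumptions; the mod_jk log path and node addresses come from the config above): confirm which URI mod_jk forwards on production, then ask each JBoss node directly whether it can serve the rewritten URL:

      # temporarily, in mod-jk.conf on production (remember to set it back)
      JkLogLevel debug

      tail -f /var/log/mod_jk.log | grep -i video

      # assuming the JBoss HTTP connector is still listening on 8080 on each node
      curl -s -o /dev/null -w "%{http_code}\n" "http://75.102.10.74:8080/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b"
      curl -s -o /dev/null -w "%{http_code}\n" "http://75.102.10.75:8080/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b"

    If a node returns 404 here, the difference between dev and prod is in the deployed application rather than in mod_rewrite, which would match the fact that the rewrite.log output is identical on both machines.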

    Read the article

  • Very poor read performance compared to write performance on md(raid1) / crypt(luks) / lvm

    - by Android5360
    I'm experiencing very poor read performance over raid1/crypt/lvm. At the same time, write speeds are about 2x+ faster on the same setup. On another raid1 setup on the same machine I get normal read speeds (maybe because I'm not using cryptsetup). OS related disks: sda + sdb. I have a raid1 configuration with two disks, both in place. I'm using LVM over the RAID. No encryption. Both disks are WD Green, 5400 rpm. IO test results on this raid1: dd if=/dev/zero of=/tmp/output.img3 bs=8k count=256k conv=fsync - 2147483648 bytes (2.1 GB) copied, 22.3392 s, 96.1 MB/s sync echo 3 > /proc/sys/vm/drop_caches dd if=/tmp/output.img3 of=/dev/null bs=8k - 2147483648 bytes (2.1 GB) copied, 15.9 s, 135 MB/s And here is the problematic setup (on the same machine). Currently I have only one disk, sdc (WD Green, 5400 rpm), configured in software raid1 + crypt (luks, serpent-xts-plain) + lvm. Tomorrow I will attach another disk (sdd) to complete this two-disk raid1 setup. IO test results on this raid1: dd if=/dev/zero of=output.img3 bs=8k count=256k conv=fsync 2147483648 bytes (2.1 GB) copied, 17.7235 s, 121 MB/s sync echo 3 > /proc/sys/vm/drop_caches dd if=output.img3 of=/dev/null bs=8k 2147483648 bytes (2.1 GB) copied, 36.2454 s, 59.2 MB/s We can see that the read performance is very bad (59.2 MB/s compared to 135 MB/s when using no encryption). Nothing else is using the disks during the benchmark; I confirmed this with iostat and dstat. Details on the hardware: disks: all are WD Green, 5400 rpm, 64 MB cache. cpu: FX-8350 at stock speed. ram: 4x4GB at 1066 MHz. Details on the software: OS: Debian Wheezy 7, amd64. mdadm: v3.2.5 - 18th May 2012. LVM version: 2.02.95(2) (2012-03-06). LVM Library version: 1.02.74 (2012-03-06). LVM Driver version: 4.22.0. cryptsetup: 1.4.3. Here is how I configured the slow raid1+crypt+lvm setup: parted /dev/sdc mklabel gpt (type: ext4, start: 2048s, end: -1). Now the raid, crypt and lvm configuration: mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdc cryptsetup --cipher serpent-xts-plain luksFormat /dev/md1 cryptsetup luksOpen /dev/md1 md1_crypt vgcreate vg_sql /dev/mapper/md1_crypt lvcreate -l 100%VG vg_sql -n lv_sql mkfs.ext4 /dev/mapper/vg_sql-lv_sql mount /dev/mapper/vg_sql-lv_sql /sql So guys, can you help me identify the reason and fix it? It has to be something to do with cryptsetup, as there is no such read slowdown on the other setup (sda+sdb) where no encryption is present. But I have no idea what to do. Thanks!
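
    One way to find which layer eats the read bandwidth (a diagnostic sketch, run as root; the 2 GB test size is arbitrary, and cryptsetup benchmark only exists from cryptsetup 1.6 onwards, so it may be unavailable on 1.4.3): repeat the sequential read at each level of the stack and see where the drop from ~135 MB/s to ~59 MB/s first appears:

      # raw md device (no decryption, no LVM)
      echo 3 > /proc/sys/vm/drop_caches
      dd if=/dev/md1 of=/dev/null bs=1M count=2048

      # through the dm-crypt mapping (decryption, no LVM/filesystem)
      echo 3 > /proc/sys/vm/drop_caches
      dd if=/dev/mapper/md1_crypt of=/dev/null bs=1M count=2048

      # through the logical volume
      echo 3 > /proc/sys/vm/drop_caches
      dd if=/dev/mapper/vg_sql-lv_sql of=/dev/null bs=1M count=2048

      # if available, shows how fast this CPU can run the chosen cipher at all
      cryptsetup benchmark

    If the slowdown first appears at the md1_crypt step, serpent-xts decryption throughput on this CPU is the prime suspect, since every read has to be decrypted synchronously before it can be returned.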

    Read the article

  • BIND split-view DNS config problem

    - by organicveggie
    We have two DNS servers: one external server controlled by our ISP and one internal server controlled by us. I'd like internal requests for foo.example.com to map to 192.168.100.5 and external requests continue to map to 1.2.3.4, so I'm trying to configure a view in bind. Unfortunately, bind fails when I attempt to reload the configuration. I'm sure I'm missing something simple, but I can't figure out what it is. options { directory "/var/cache/bind"; forwarders { 8.8.8.8; 8.8.4.4; }; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; zone "." { type hint; file "/etc/bind/db.root"; }; zone "localhost" { type master; file "/etc/bind/db.local"; }; zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; }; zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; }; zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; }; view "internal" { zone "example.com" { type master; notify no; file "/etc/bind/db.example.com"; }; }; zone "example.corp" { type master; file "/etc/bind/db.example.corp"; }; zone "100.168.192.in-addr.arpa" { type master; notify no; file "/etc/bind/db.192"; }; I have excluded the entries in the view for allow-recursion and recursion in an attempt to simplify the configuration. If I remove the view and just load the example.com zone directly, it works fine. Any advice on what I might be missing?
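
    For reference, BIND is strict here: once any view is defined, every zone statement has to live inside a view, so a config that mixes a "view" block with top-level zones is rejected on reload. A hedged sketch of the layout (the match-clients ranges and the separate external zone file name are assumptions, not taken from the question):

      view "internal" {
          match-clients { 192.168.100.0/24; 127.0.0.1; };
          zone "example.com" { type master; notify no; file "/etc/bind/db.example.com"; };
          zone "example.corp" { type master; file "/etc/bind/db.example.corp"; };
          zone "100.168.192.in-addr.arpa" { type master; notify no; file "/etc/bind/db.192"; };
          // the root hints and the localhost/127/0/255 zones move inside a view as well
      };
      view "external" {
          match-clients { any; };
          zone "example.com" { type master; file "/etc/bind/db.example.com.external"; };
      };

      named-checkconf /etc/bind/named.conf

    Running named-checkconf (or reading the error printed by the failed reload) will confirm whether this is the actual failure.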

    Read the article

  • Impossible to create/format a partition or install Windows 7 on an unpartitioned HD

    - by fra pet
    Hi guys, I am going nuts reinstalling Windows 7 on one of those Acer Aspire all-in-ones. The original OS (Windows 7 Professional x64) stopped booting; after the initial screen I was dropped to the BIOS. Step 1: I tried to access the system recovery partition and reinstall everything, but could not get that far. Step 2: I set the BIOS to native IDE, inserted my original copy of Windows Professional and attempted a clean installation, but the installer would not let me format or create a partition from the setup screen. Step 3: my bad, I tried to install Ubuntu and wiped the whole hard drive; the installation failed with an error at some point, so I decided to go back to Windows. Step 4: Windows 7 again. At the disk-selection screen of the installer I opened the command prompt and worked with DISKPART: I listed the disks and the HD showed up as disk 0; I selected disk 0; CLEAN on disk 0 succeeded; CREATE PARTITION PRIMARY failed with "cache corrupt" and "disk not up-to-date" errors (after trying to create a partition, disk 0 disappears from LIST DISK and I have to reboot before it is listed again; RESCAN did not help); CLEAN ALL (2 hours) succeeded; creating a primary partition failed again with the same errors. Installing my old copy of Windows XP Pro seemed to work: it created a partition, formatted it (only quick format worked; the full format was still at 0% after 1 hour so I stopped it), started installing, and at around 90% it said it could not copy a file and stopped. Back on Windows 7 again: setup says the hard drive has 490+ GB unpartitioned, but it won't create a partition or format it. I tried DISKPART again, since I thought I had messed up the MBR when I installed Ubuntu, so I followed the instructions below: http://www.sevenforums.com/tutorials/20864-mbr-restore-windows-7-master-boot-record.html http://support.microsoft.com/kb/927392 The errors were: bootsect reported that the system partition was not found and "Data error (cyclic redundancy check)"; bootrec /FixMbr reported "A device attached to the system is not functioning". None of it worked, and I still cannot partition, format or install on a blank HD. I also tried some bootable disk-wiping tools, and they loop forever on the same errors. The BIOS settings are: SATA: native IDE (if I set AHCI, or whatever it is called, neither the HD nor the DVD drive is detected); quick start/quiet start: disabled. Are there any other options or tools I can try before I replace the HD (that is my last option)? Thanks to everyone.
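
    The "Data error (cyclic redundancy check)" during clean/bootsect usually points at the disk itself rather than at the partition table, so before buying a new drive it may be worth running the disk vendor's bootable surface test (whichever brand is fitted in this Aspire - that detail is not in the question) and then retrying the standard sequence from the Windows 7 install prompt. A sketch of that sequence, for reference only since most of it was already attempted:

      diskpart
        list disk
        select disk 0
        clean
        create partition primary
        select partition 1
        active
        format fs=ntfs quick
        assign
        exit
      chkdsk /r c:        (from the recovery prompt, once a partition and drive letter exist)

    If clean or create partition primary still fails with CRC errors after a passing surface test, the SATA cable and controller mode are the next things to rule out; if the surface test fails, replacing the drive is the fix.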

    Read the article

  • 403 Forbidden Error when trying to view localhost on Apache

    - by misbehavens
    I think my Apache must be all screwed up. I don't know if it ever worked. I just upgraded to Snow Leopard, and the first step in this tutorial is to start Apache and check that it's working by opening http://localhost. It starts fine, but when I go to localhost I get a 403 Forbidden error. I don't know where to start figuring out how to fix it, so I wonder if a fresh install of Apache would do the trick. What do you think? Update: I found some error logs in /private/var/log/apache2/. Found this in one of the logs. Not sure what it means: [Tue Nov 10 17:53:08 2009] [notice] caught SIGTERM, shutting down [Tue Nov 10 21:49:17 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] Warning: DocumentRoot [/usr/docs/dummy-host.example.com] does not exist Warning: DocumentRoot [/usr/docs/dummy-host2.example.com] does not exist httpd: Could not reliably determine the server's fully qualified domain name, using Andrews-Mac-Pro.local for ServerName mod_bonjour: Skipping user 'andrew' - cannot read index file '/Users/andrew/Sites/index.html'. [Tue Nov 10 21:49:19 2009] [notice] Digest: generating secret for digest authentication ... [Tue Nov 10 21:49:19 2009] [notice] Digest: done [Tue Nov 10 21:49:19 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 configured -- resuming normal operations Update: I also found something in the dummy-host.example.com-error_log file. I didn't set up these dummy-host entries, by the way. Is this the default configuration? [Tue Nov 10 21:59:57 2009] [error] [client ::1] client denied by server configuration: /usr/docs Update: Woohoo! I found the file that had the virtual host definitions. It was /etc/apache2/extra/httpd-vhosts.conf, and it contained those two dummy virtual host entries. I added a localhost virtual host. Not sure if this is necessary, but since it wasn't working before, I decided to do it anyway. After removing the old virtual hosts, adding my new localhost virtual host, and restarting Apache, it seems to work. So I guess whenever I want to add a virtual host, I only need to add it to this file? Or is there a hosts file somewhere, like there is on Linux? Update: Yes, there is an /etc/hosts file that needs to be changed too. Add the virtual host name to that file.
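
    For anyone hitting the same 403, a minimal sketch of what that vhost file can look like on Snow Leopard's Apache 2.2 (the DocumentRoot and the extra hostname are examples/assumptions, not the poster's exact settings):

      # /etc/apache2/extra/httpd-vhosts.conf
      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName localhost
          DocumentRoot "/Users/andrew/Sites"
          <Directory "/Users/andrew/Sites">
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>

      # /etc/hosts - only needed for extra names that don't resolve on their own, e.g.
      127.0.0.1   localhost mysite.local

    Then sudo apachectl configtest followed by sudo apachectl restart picks the change up; localhost itself already resolves and does not need an /etc/hosts entry.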

    Read the article

< Previous Page | 226 227 228 229 230 231 232 233 234 235 236 237  | Next Page >