Search Results

Search found 9137 results on 366 pages for 'worker thread'.


  • php-cgi memory usage higher than php's memory limit

    - by Josh Nankin
    I'm running Apache with the worker MPM and PHP with FastCGI. The following are my MPM limits: StartServers 5 MinSpareThreads 5 MaxSpareThreads 10 ThreadLimit 64 ThreadsPerChild 10 MaxClients 10 MaxRequestsPerChild 2000 I've also set up my php-cgi with the following: PHP_FCGI_CHILDREN=5 PHP_FCGI_MAX_REQUESTS=500 I'm noticing that the average php-cgi process is using around 200+ MB of RAM, even as soon as it is started. However, my PHP memory_limit is only 128M. How is this possible, and what can I do to lower the php-cgi memory consumption?
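
    (A note on what is actually being measured: PHP's memory_limit only caps what a single script may allocate per request, while the per-process figure reported by top/ps also includes the PHP interpreter, compiled-in extensions and shared libraries, much of which is shared between the FastCGI children. The sketch below, assuming a Linux host with /proc and processes whose command line contains "php-cgi", compares each process's resident size with its private portion, which is the part that really multiplies with PHP_FCGI_CHILDREN.)

        import os, re

        def kb(field, smaps_text):
            # Sum every "<field>: N kB" line in a /proc/<pid>/smaps dump.
            return sum(int(v) for v in re.findall(r'^%s:\s+(\d+) kB' % field, smaps_text, re.M))

        for pid in filter(str.isdigit, os.listdir('/proc')):
            try:
                with open('/proc/%s/cmdline' % pid) as f:
                    if 'php-cgi' not in f.read():
                        continue
                with open('/proc/%s/smaps' % pid) as f:
                    smaps = f.read()
            except IOError:
                continue  # process exited or is not readable
            rss = kb('Rss', smaps)
            private = kb('Private_Clean', smaps) + kb('Private_Dirty', smaps)
            print('pid %s: rss %d MB, private %d MB' % (pid, rss // 1024, private // 1024))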

    Read the article

  • IIS on localhost is very slow

    - by Nyla Pareska
    I am using IIS7 on Windows Vista with a dual-core CPU. The first hit on a WCF service or an ASP.NET web form sometimes takes way longer than a minute, which is not really acceptable for me. I configured the application to use the Classic .NET application pool and tried playing with the Maximum Worker Processes setting, first raising it to 4 but putting it back to 1 as it did not have the expected result. Are there any other things I can try?
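
    (To put a number on the cold-start cost before tuning pool settings, a quick timing probe helps. Below is a minimal sketch, assuming the service is reachable at the hypothetical URL shown, that times the first request against a few warm follow-up requests.)

        import time
        try:
            from urllib.request import urlopen   # Python 3
        except ImportError:
            from urllib2 import urlopen           # Python 2

        URL = 'http://localhost/MyService.svc'    # hypothetical endpoint

        for i in range(4):
            start = time.time()
            urlopen(URL).read()
            print('request %d took %.1f s' % (i + 1, time.time() - start))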

    Read the article

  • email bouncing back

    - by moiz.in
    Some emails are bouncing back with the error message below The following organization rejected your message: cluster-m.mailcontrol.com Also when I looked the further details it gives me this information: Diagnostic information for administrators: Generating server: myserver.com.au [email protected] cluster-m.mailcontrol.com #554 5.7.1 Access denied ## Received: from myserver.com.au ([192.168.0.3]) by myserver.com.au ([192.168.0.3]) with mapi; Mon, 27 Jun 2011 08:04:50 +0800 From: XYZ <[email protected]> To: "XYZ ([email protected])" <[email protected]> Date: Mon, 27 Jun 2011 08:04:49 +0800 Subject: FW: Pic S979888 Thread-Topic: Pic S979888 Thread-Index: Acw0WppDIX2PPJwZR0OGVP1rbUtzDAAAzcuA Message-ID: <[email protected]> Accept-Language: en-US, en-AU Content-Language: en-US X-MS-Has-Attach: yes X-MS-TNEF-Correlator: acceptlanguage: en-US, en-AU Content-Type: multipart/mixed; boundary="_004_573874A6BF36864EA3FB179BF7A43C2B031D388DF7D8bunsrvapp00_" MIME-Version: 1.0 Could you please tell me what is wrong with this and why is it bouncing back?

    Read the article

  • Apache2 benchmarks - very poor performance

    - by andrzejp
    I have two servers on which I test the configuration of apache2. The first server: 4GB of RAM, AMD Athlon (tm) 64 X2 Dual Core Processor 5600 + Apache 2.2.3, mod_php, mpm prefork: Settings: Timeout 100 KeepAlive On MaxKeepAliveRequests 150 KeepAliveTimeout 4 <IfModule Mpm_prefork_module> StartServers 7 MinSpareServers 15 MaxSpareServers 30 MaxClients 250 MaxRequestsPerChild 2000 </ IfModule> Compiled in modules: core.c mod_log_config.c mod_logio.c prefork.c http_core.c mod_so.c Second server: 8GB of RAM, Intel (R) Core (TM) i7 CPU [email protected] Apache 2.2.9, **fcgid, mpm worker, suexec** PHP scripts are running via fcgi-wrapper Settings: Timeout 100 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 4 <IfModule Mpm_worker_module> StartServers 10 MaxClients 200 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 1000 </ IfModule> Compiled in modules: core.c mod_log_config.c mod_logio.c worker.c http_core.c mod_so.c The following test results, which are very strange! New server (dynamic content - php via fcgid+suexec): Server Software: Apache/2.2.9 Server Hostname: XXXXXXXX Server Port: 80 Document Path: XXXXXXX Document Length: 179512 bytes Concurrency Level: 10 Time taken for tests: 0.26276 seconds Complete requests: 1000 Failed requests: 0 Total transferred: 179935000 bytes HTML transferred: 179512000 bytes Requests per second: 38.06 Transfer rate: 6847.88 kb/s received Connnection Times (ms) min avg max Connect: 2 4 54 Processing: 161 257 449 Total: 163 261 503 Old server (dynamic content - mod_php): Server Software: Apache/2.2.3 Server Hostname: XXXXXX Server Port: 80 Document Path: XXXXXX Document Length: 187537 bytes Concurrency Level: 10 Time taken for tests: 173.073 seconds Complete requests: 1000 Failed requests: 22 (Connect: 0, Length: 22, Exceptions: 0) Total transferred: 188003372 bytes HTML transferred: 187546372 bytes Requests per second: 5777.91 Transfer rate: 1086267.40 kb/s received Connnection Times (ms) min avg max Connect: 3 3 28 Processing: 298 1724 26615 Total: 301 1727 26643 Old server: Static content (jpg file) Server Software: Apache/2.2.3 Server Hostname: xxxxxxxxx Server Port: 80 Document Path: /images/top2.gif Document Length: 40486 bytes Concurrency Level: 100 Time taken for tests: 3.558 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 40864400 bytes HTML transferred: 40557482 bytes Requests per second: 281.09 [#/sec] (mean) Time per request: 355.753 [ms] (mean) Time per request: 3.558 [ms] (mean, across all concurrent requests) Transfer rate: 11217.51 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 3 11 4.5 12 23 Processing: 40 329 61.4 339 1009 Waiting: 6 282 55.2 293 737 Total: 43 340 63.0 351 1020 New server - static content (jpg file) Server Software: Apache/2.2.9 Server Hostname: XXXXX Server Port: 80 Document Path: /images/top2.gif Document Length: 40486 bytes Concurrency Level: 100 Time taken for tests: 3.571531 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 41282792 bytes HTML transferred: 41030080 bytes Requests per second: 279.99 [#/sec] (mean) Time per request: 357.153 [ms] (mean) Time per request: 3.572 [ms] (mean, across all concurrent requests) Transfer rate: 11287.88 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 2 63 24.8 66 119 Processing: 124 278 31.8 282 391 Waiting: 3 70 28.5 66 164 Total: 126 341 35.9 350 443 I noticed that in the apache error.log is a lot of 
entries like: [notice] mod_fcgid: call /www/XXXXX/public_html/forum/index.php with wrapper /www/php-fcgi-scripts/XXXXXX/php-fcgi-starter What have I omitted, or what do I not understand? How can there be such a difference in requests per second? Is it even possible? What could be the cause?

    Read the article

  • Capacity Planning IIS6

    - by user45457
    Hello, I am planning to host 5000 sites, each with a separate application pool. Based on my estimate, around 1600-2000 worker processes will be running on the server. So my question is: is it possible to host 5000 sites on a server with IIS 6? Server configuration: 16 cores @ 2.27 GHz, 8 GB RAM. Please tell me if you require any other information. Kartik
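
    (A rough memory check makes the constraint concrete. The figures below are illustrative assumptions, not measurements: an idle w3wp.exe worker is assumed to take 20-40 MB, and the real footprint depends entirely on the applications loaded.)

        # Back-of-envelope RAM estimate for many concurrent IIS worker processes.
        ram_gb = 8
        worker_counts = (1600, 2000)       # estimated concurrent worker processes
        per_worker_mb = (20, 40)           # assumed idle w3wp.exe footprint (illustrative)

        for workers in worker_counts:
            for mb in per_worker_mb:
                needed_gb = workers * mb / 1024.0
                print('%d workers x %d MB = %.0f GB needed (server has %d GB)'
                      % (workers, mb, needed_gb, ram_gb))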

    Read the article

  • multiple webapps in tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/], and proxied with an Apache http server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff, I have to figure out myself what the most sensible way to do this is. My setup has evolved through two scenarios and I'm considering a third for maximal separation of the distinct webapps.

    [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps

        -tomcat
          |_ webapps
             |_ webapp1
             |_ webapp2
             |_ webapp[n]
             |_ WEB-INF (with Cocoon libs)

    This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine, and I did not encounter any memory issues. However, this poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario.

    [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment

        -tomcat
          |_ webapps
             |_ webapp1
             |  |_ WEB-INF (with Cocoon libs)
             |_ webapp2
             |  |_ WEB-INF (with Cocoon libs)
             |_ webapp[n]
                |_ WEB-INF (with Cocoon libs)

    This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory, this works fine: all webapps can be updated independently. However, this soon results in PermGenSpace errors. It seemed that I could manage the problem by increasing memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario.

    [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon environment

        -tomcat
          |_ webapps
             |_ webapp1
                |_ WEB-INF (with Cocoon libs)
        -tomcat
          |_ webapps
             |_ webapp2
                |_ WEB-INF (with Cocoon libs)
        -tomcat
          |_ webapps
             |_ webapp[n]
                |_ WEB-INF (with Cocoon libs)

    I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A single Tomcat distribution can be multiply instantiated with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp. I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply? On the other hand, this approach would complicate management of the Apache http frontend, as it will require the AJP connectors of the different Tomcat instances to be listening on different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added. And there seems to be no way to reload workers.properties without restarting the entire Apache http server. Is there perhaps another / more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron

    Read the article

  • HAProxy -- pause/queue all traffic without losing requests

    - by Marc
    I basically have the same problem as mentioned in this thread -- I would like to temporarily suspend all requests to all servers of a certain backend, so that I can upgrade the backend and the database it uses. Since this is a live system, I would like to queue up requests, and send them to the backend servers once they've been upgraded. Since I'm doing a database upgrade with the code change, I have to upgrade all backend servers simultaneously, so I can't just bring one down at a time. I tried using the tcp-request options combined with removing the static healthcheck file as mentioned in that thread, but had no luck. Setting the default "maxconn" value to 0 seems to pause and queue connections as desired, but then there seems to be no way to increase the value back to a positive number without restarting HAProxy, which kills all requests that had been queued up until that point. (The "hot-reconfiguration" options using -sf and -st start a new process, which doesn't seem to do what I want). Is what I'm trying to do possible?
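
    (One avenue that may be worth testing: if the global section exposes an admin-level stats socket, for example "stats socket /var/run/haproxy.sock level admin", and the running HAProxy version supports the "set maxconn frontend" runtime command, the frontend's maxconn could be dropped and later raised again without a reload, so queued requests drain once the backends are upgraded. Below is a minimal sketch with a hypothetical socket path and frontend name; whether a value of 0 is accepted at runtime needs to be verified against the version in use.)

        import socket

        def haproxy_cmd(cmd, sock_path='/var/run/haproxy.sock'):
            # Send one command to the HAProxy admin socket and return its reply.
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(sock_path)
            s.sendall((cmd + '\n').encode())
            reply = s.recv(65536).decode()
            s.close()
            return reply

        # Pause: mirror the config-level "maxconn 0" trick at runtime.
        print(haproxy_cmd('set maxconn frontend www 0'))
        # ... upgrade the backend servers and the database ...
        # Resume: let the queued connections through again.
        print(haproxy_cmd('set maxconn frontend www 1000'))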

    Read the article

  • Application usage report on XenApp 6

    - by Garrett
    We just deployed the latest Citrix XenApp 6 onto Server 2008 R2, and we'd like to run an application usage report. I've googled around, but all the how-tos seem to be aimed at XenApp 5 and lower, when apparently it was much easier. I came across this Citrix expert thread: http://forums.citrix.com/thread.jspa?threadID=265554 which gives a PowerShell command, Get-XAApplicationReport, but when I run that on our Citrix server in PowerShell 2.0 it says it's not recognized. Do I need to register the Citrix commands in PowerShell somehow? Is there a better way to generate the application usage report?

    Read the article

  • How can I install both mod-perl2 and mod-php5 on Ubuntu?

    - by RickMeasham
    From Ubuntu's package library, I find the two modules I need. However, mod-perl2 requires apache2-mpm-worker, while mod-php5 requires apache2-mpm-prefork. The two Apache MPM packages are mutually exclusive and each asks me to uninstall the other in order to install it, which means I can't get a server running with both mod-perl2 and mod-php5. Any help greatly appreciated.

    Read the article

  • Filezilla/Puttygen doesn't recognize private key file

    - by devzoner
    I have generated a key for an Ubuntu virtual machine running on Azure Cloud Services (http://www.windowsazure.com/en-us/manage/linux/how-to-guides/ssh-into-linux/): openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem When loading the private key into FileZilla, it asks me to convert the format; however, converting the key fails. The same happens with puttygen from the Linux console, using this: puttygen myPrivateKey.key -o myKey.ppk In both cases I get the following error: puttygen: error loading `myPrivateKey.key': unrecognised key type By the way, this key doesn't have a passphrase. I found an old thread about it, but I'm using version 0.6.3, which is newer than what that thread recommends: http://fixunix.com/ssh/541874-puttygen-unable-import-openssh-key.html I've managed to solve this issue by using another GUI client, Fugu for Mac, but one of my co-workers uses Windows and I still have to figure this out. Since FileZilla is the de facto FTP client, I thought it would be easier to solve it there. Thanks

    Read the article

  • ASP.NET web application can't find an assembly

    - by Charlie Somerville
    I deployed an ASP.NET web application last night, and when I woke up this morning it was very slow and would occasionally just throw a 'Service Unavailable' error. I checked the Event Viewer and it was filled up with these errors: I'm puzzled, as it was working perfectly when I deployed it (MonoTorrent is required to retrieve the number of seeders/leechers for a certain torrent off the tracker - this was working fine), but it's no longer working, and whenever code that uses MonoTorrent gets involved, the worker process just crashes. MonoTorrent.dll is in the /bin/ directory.

    Read the article

  • Delay of mail delivery - Hosted exchange provider

    - by alex
    Hi, I recently signed up to a new hosted email provider. When I send mail (from OWA, OR Outlook) there is a delay of up to 3 minutes from when i send the message, to when it's received (in my gmail account for example) I've listed the headers below. Is there anything I can advise my new email host to do? My previous email host delivers within 5 seconds!! New email provider: Delivered-To: ****.*****@******.co.uk.test-google-a.com Received: by 10.223.120.148 with SMTP id d20cs333125far; Mon, 30 Nov 2009 08:49:43 -0800 (PST) Received: by 10.213.106.202 with SMTP id y10mr4864870ebo.35.1259599782838; Mon, 30 Nov 2009 08:49:42 -0800 (PST) Return-Path: Received: from relay005.apm-internet.net (relay005.apm-internet.net [85.119.248.8]) by mx.google.com with SMTP id 26si13016480ewy.43.2009.11.30.08.49.42; Mon, 30 Nov 2009 08:49:42 -0800 (PST) Received-SPF: neutral (google.com: 85.119.248.8 is neither permitted nor denied by best guess record for domain of ****@*******.com) client-ip=85.119.248.8; Authentication-Results: mx.google.com; spf=neutral (google.com: 85.119.248.8 is neither permitted nor denied by best guess record for domain of ****@*******.com) smtp.mail=****@*******.com Received: (qmail 63915 invoked from network); 30 Nov 2009 16:49:41 -0000 Received: from unknown (HELO mx-out-manc2.simplymailsolutions.com) (88.151.129.22) by relay005.apm-internet.net with SMTP; 30 Nov 2009 16:49:42 -0000 X-APM-IP: 88.151.129.22 X-APM-Score: 4 Received-SPF: none (relay005.apm-internet.net: domain at alexjamesbrown.com does not designate permitted sender hosts) Received: from [10.1.20.1] (helo=win-s-manc1.shared.ifeltd.com) by mx-out-manc2.simplymailsolutions.com with esmtp (Exim 4.63) (envelope-from ) id 1NF9QZ-0005By-Hw for ****.*****@******.co.uk; Mon, 30 Nov 2009 16:48:46 +0000 Received: from sha-exch8.shared.ifeltd.com ([10.1.20.8]) by win-s-manc1.shared.ifeltd.com with Microsoft SMTPSVC(6.0.3790.3959); Mon, 30 Nov 2009 16:48:34 +0000 Received: from sha-exch9.shared.ifeltd.com ([10.1.20.9]) by sha-exch8.shared.ifeltd.com with Microsoft SMTPSVC(6.0.3790.3959); Mon, 30 Nov 2009 16:48:34 +0000 Received: from SHA-EXCH13.shared.ifeltd.com (10.1.20.13) by sha-exch9.shared.ifeltd.com (10.1.20.9) with Microsoft SMTP Server (TLS) id 8.1.393.1; Mon, 30 Nov 2009 16:48:25 +0000 Received: from SHA-EXCH12.shared.ifeltd.com ([fe80::ecba:36d0:eec5:c928]) by SHA-EXCH13.shared.ifeltd.com ([fe80::212b:916c:70c7:a4e5%11]) with mapi; Mon, 30 Nov 2009 16:48:05 +0000 From: Alex Brown To: "****.*****@*****.co.uk" Date: Mon, 30 Nov 2009 16:48:04 +0000 Subject: testing Thread-Topic: testing Thread-Index: AQHKcdzZg4oiDsOYIEio/7k6bCk8BQ== Message-ID: Accept-Language: en-US, en-GB Content-Language: en-GB X-MS-Has-Attach: X-MS-TNEF-Correlator: acceptlanguage: en-US, en-GB Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 X-OriginalArrivalTime: 30 Nov 2009 16:48:34.0235 (UTC) FILETIME=[F48178B0:01CA71DC] Here are the headers using my previous exchange host: Delivered-To: ****.*****@******.co.uk.test-google-a.com Received: by 10.223.120.148 with SMTP id d20cs333076far; Mon, 30 Nov 2009 08:48:35 -0800 (PST) Received: by 10.213.2.70 with SMTP id 6mr4797985ebi.25.1259599715739; Mon, 30 Nov 2009 08:48:35 -0800 (PST) Return-Path: Received: from relay005.apm-internet.net (relay005.apm-internet.net [85.119.248.8]) by mx.google.com with SMTP id 26si13030993ewy.23.2009.11.30.08.48.35; Mon, 30 Nov 2009 08:48:35 -0800 (PST) Received-SPF: neutral (google.com: 85.119.248.8 is neither 
permitted nor denied by best guess record for domain of ****@*********.com) client-ip=85.119.248.8; Authentication-Results: mx.google.com; spf=neutral (google.com: 85.119.248.8 is neither permitted nor denied by best guess record for domain of ****@*********.com) smtp.mail=****@*********.com Received: (qmail 60920 invoked from network); 30 Nov 2009 16:48:34 -0000 Received: from unknown (HELO MTAb.MsExchange2007.com) (89.31.236.50) by relay005.apm-internet.net with SMTP; 30 Nov 2009 16:48:35 -0000 X-APM-IP: 89.31.236.50 X-APM-Score: 1 Received-SPF: none (relay005.apm-internet.net: domain at alexjamesbrown.com does not designate permitted sender hosts) Received: from EXHUB02.SL.local (no.ptr.hostlogic.biz [89.31.236.28]) by MTAb.MsExchange2007.com (Spam Firewall) with ESMTP id B677A34FE0F for ; Mon, 30 Nov 2009 16:48:33 +0000 (GMT) Received: from EXHUB02.SL.local (no.ptr.hostlogic.biz [89.31.236.28]) by MTAb.MsExchange2007.com with ESMTP id 8X5B8V4tExVzoNyU for ; Mon, 30 Nov 2009 16:48:34 +0000 (GMT) Received: from EXCCR03STORE.SL.local ([10.0.0.2]) by EXHUB02.SL.local ([192.168.92.64]) with mapi; Mon, 30 Nov 2009 16:48:31 +0000 From: Alex James Brown To: "****.*****@******.co.uk" Date: Mon, 30 Nov 2009 16:48:30 +0000 Subject: testing from o Thread-Topic: testing from o Thread-Index: AQHKcdzyY1iBFWiol0ykG6xPQUZiTg== Message-ID: Accept-Language: en-US, en-GB Content-Language: en-GB X-MS-Has-Attach: X-MS-TNEF-Correlator: acceptlanguage: en-US, en-GB Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0

    Read the article

  • Nginx virtual server only partially working with Java application

    - by MFB
    Final Cut Server is a Java application (made by Apple) which launches from a web page. I have Nginx in front of this web server (amongst others) and, whilst the web server can be browsed externally, the Java app fails to launch correctly and throws the errors below. Can anyone offer clues as to what additional config I many have to add to Nginx to get this working? My existing Nginx config: user xxxx; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 1024; # multi_accept on; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/conf/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; server { server_name _; return 444; } upstream fcs-site { server 10.10.5.20:8080; } server { listen 80; server_name example.com 10.10.5.90; access_log /var/log/nginx/fcs_access.log; error_log /var/log/nginx/fcs_error.log; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 60s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffering off; proxy_temp_file_write_size 64k; proxy_pass http://fcs-site; proxy_redirect off; } } upstream myapp-site { server 127.0.0.1:6543; } server { listen 80; server_name otherexample.com www.otherexample.com; rewrite ^ https://$server_name$request_uri? permanent; } server { listen 443; ssl on; ssl_certificate /etc/ssl/otherapp.crt; ssl_certificate_key /etc/ssl/otherapp.key; server_name otherexample.com www.otherexample.com; access_log /var/log/nginx/otherapp_access.log; error_log /var/log/nginx/other_error.log; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 60s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffering off; proxy_temp_file_write_size 64k; proxy_pass http://myapp-site; proxy_redirect off; } location /static { root /www; expires 30d; add_header Cache-Control public; access_log off; } } Java errors: com.sun.deploy.net.FailedDownloadException: Unable to load resource: http://example.com:8080/FinalCutServer/FinalCutServer_mac.jnlp at com.sun.deploy.net.DownloadEngine.actionDownload(Unknown Source) at com.sun.deploy.net.DownloadEngine._downloadCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source) at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source) at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source) at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source) at com.sun.javaws.Launcher.launch(Unknown Source) at com.sun.javaws.Main.launchApp(Unknown Source) at com.sun.javaws.Main.continueInSecureThread(Unknown Source) at com.sun.javaws.Main.access$000(Unknown Source) at com.sun.javaws.Main$1.run(Unknown Source) at java.lang.Thread.run(Thread.java:722) java.net.ConnectException: Operation timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:378) at sun.net.www.http.HttpClient.openServer(HttpClient.java:473) at sun.net.www.http.HttpClient.(HttpClient.java:203) at sun.net.www.http.HttpClient.New(HttpClient.java:290) at sun.net.www.http.HttpClient.New(HttpClient.java:306) at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:995) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931) at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:849) at com.sun.deploy.net.BasicHttpRequest.doRequest(Unknown Source) at com.sun.deploy.net.BasicHttpRequest.doRequest(Unknown Source) at com.sun.deploy.net.BasicHttpRequest.doGetRequest(Unknown Source) at com.sun.deploy.net.DownloadEngine.actionDownload(Unknown Source) at com.sun.deploy.net.DownloadEngine._downloadCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source) at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source) at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source) at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source) at com.sun.javaws.Launcher.launch(Unknown Source) at com.sun.javaws.Main.launchApp(Unknown Source) at com.sun.javaws.Main.continueInSecureThread(Unknown Source) at com.sun.javaws.Main.access$000(Unknown Source) at com.sun.javaws.Main$1.run(Unknown Source) at java.lang.Thread.run(Thread.java:722)

    Read the article

  • How can I poll different aws sqs in the same process?

    - by Luccas
    What is the right way to poll from different AWS SQS queues in the same process? Suppose I have a Ruby script, listen_queues.rb, and run it. Do I need to create threads to wrap each SQS poll, or start sub-processes?

        t1 = Thread.new do
          queue1.poll do |msg|
            # ....
          end
        end
        t2 = Thread.new do
          queue2.poll do |msg|
            # ....
          end
        end
        t2.join

    I tried this code, but the poll is not receiving any of the messages available. When I run only one of them (t1 or t2), it works. But I need the two of them running. What is going on? Thanks!!
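
    (For comparison only, here is the same pattern sketched in Python with boto3 rather than the Ruby SDK used above: one client object and one long-polling loop per thread, with hypothetical queue URLs.)

        import threading
        import boto3

        QUEUE_URLS = [
            'https://sqs.us-east-1.amazonaws.com/123456789012/queue1',  # hypothetical
            'https://sqs.us-east-1.amazonaws.com/123456789012/queue2',  # hypothetical
        ]

        def poll(queue_url):
            sqs = boto3.client('sqs')                 # a separate client per thread
            while True:
                resp = sqs.receive_message(QueueUrl=queue_url,
                                           MaxNumberOfMessages=10,
                                           WaitTimeSeconds=20)  # long polling
                for msg in resp.get('Messages', []):
                    print(queue_url, msg['Body'])
                    sqs.delete_message(QueueUrl=queue_url,
                                       ReceiptHandle=msg['ReceiptHandle'])

        threads = [threading.Thread(target=poll, args=(url,)) for url in QUEUE_URLS]
        for t in threads:
            t.start()
        for t in threads:
            t.join()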

    Read the article

  • Is there a good host for console applications?

    - by Jamey McElveen
    We have several applications that run in AppHarbor that we are bringing in-house. Our worker processes for AppHarbor are developed as Windows console applications. What is the best way to host these console applications in-house? Of course I can just install and run them, but I am hoping for a more managed way to host them. I want them to restart if they fail, and the ability to run multiple applications. These are things that were managed by AppHarbor. Thanks
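
    (To make the "restart if they fail" requirement concrete, here is a minimal supervisor sketch in Python. It is an illustration of the idea only, not a substitute for a proper Windows service wrapper, and the executable paths are hypothetical.)

        import subprocess
        import threading
        import time

        # Hypothetical console applications to keep alive.
        APPS = [r'C:\workers\WorkerA.exe', r'C:\workers\WorkerB.exe']

        def run_forever(exe):
            while True:
                proc = subprocess.Popen([exe])
                code = proc.wait()              # block until the app exits or crashes
                print('%s exited with code %s, restarting in 5s' % (exe, code))
                time.sleep(5)                   # back off briefly before restarting

        for exe in APPS:
            threading.Thread(target=run_forever, args=(exe,), daemon=True).start()

        while True:
            time.sleep(60)                      # keep the supervisor process alive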

    Read the article

  • Windows 8.1 IRQL_NOT_LESS_OR_EQUAL with Asus PCE-n53

    - by JArsenault89
    I saw the following question, and it is the exact same problem on my machine, I have tracked it to the ASUS PCE-n53 wireless card in my desktop. Does anyone know of a workaround? Windows 8.1 RTM installation crashes The adapter worked fine in windows 8... any ideas? EDIT: Crash Dump Analysis * Bugcheck Analysis * * IRQL_NOT_LESS_OR_EQUAL (a) An attempt was made to access a pageable (or completely invalid) address at an interrupt request level (IRQL) that is too high. This is usually caused by drivers using improper addresses. If a kernel debugger is available get the stack backtrace. Arguments: Arg1: 0000000000000000, memory referenced Arg2: 0000000000000002, IRQL Arg3: 0000000000000001, bitfield : bit 0 : value 0 = read operation, 1 = write operation bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status) Arg4: fffff801ef4f1316, address which referenced memory Debugging Details: WRITE_ADDRESS: 0000000000000000 CURRENT_IRQL: 2 FAULTING_IP: nt!KeReleaseSpinLock+16 fffff801`ef4f1316 f048832100 lock and qword ptr [rcx],0 DEFAULT_BUCKET_ID: WIN8_DRIVER_FAULT BUGCHECK_STR: AV PROCESS_NAME: System ANALYSIS_VERSION: 6.3.9600.16384 (debuggers(dbg).130821-1623) amd64fre TRAP_FRAME: ffffd00020d45550 -- (.trap 0xffffd00020d45550) NOTE: The trap frame does not contain all registers. Some register values may be zeroed or incorrect. rax=0000000000000001 rbx=0000000000000000 rcx=0000000000000000 rdx=0000000055920200 rsi=0000000000000000 rdi=0000000000000000 rip=fffff801ef4f1316 rsp=ffffd00020d456e0 rbp=ffffd00020d45768 r8=0000000055920222 r9=0000000035930000 r10=0000000055920222 r11=ffffd00020d456a8 r12=0000000000000000 r13=0000000000000000 r14=0000000000000000 r15=0000000000000000 iopl=0 nv up ei pl zr na po nc nt!KeReleaseSpinLock+0x16: fffff801ef4f1316 f048832100 lock and qword ptr [rcx],0 ds:0000000000000000=???????????????? 
Resetting default scope LOCK_ADDRESS: fffff801ef6da360 -- (!locks fffff801ef6da360) Resource @ nt!PiEngineLock (0xfffff801ef6da360) Exclusively owned Contention Count = 6 Threads: ffffe000010ff040-01<* 1 total locks, 1 locks currently held PNP_TRIAGE: Lock address : 0xfffff801ef6da360 Thread Count : 1 Thread address: 0xffffe000010ff040 Thread wait : 0x1fbe LAST_CONTROL_TRANSFER: from fffff801ef5647e9 to fffff801ef558ca0 STACK_TEXT: ffffd00020d45408 fffff801ef5647e9 : 000000000000000a 0000000000000000 0000000000000002 0000000000000001 : nt!KeBugCheckEx ffffd00020d45410 fffff801ef56303a : 0000000000000001 0000000000000000 ffff0c83e3e25300 ffffd00020d45550 : nt!KiBugCheckDispatch+0x69 ffffd00020d45550 fffff801ef4f1316 : 00000000000a5890 0000000000000001 0000000000000000 ffffe00004c00000 : nt!KiPageFault+0x23a ffffd00020d456e0 fffff80003b430ad : 00000000000afe80 ffffe00004c00000 00000000000a2f80 0000000035720000 : nt!KeReleaseSpinLock+0x16 ffffd00020d45710 fffff80003ac249f : ffffe00004c00000 00000000000000a8 ffffe00004c85050 0000000000000800 : netr28x+0x840ad ffffd00020d457b0 fffff80000b76475 : ffffd00020d459e8 ffffd00020d459f0 ffffe00004ac2006 ffffe00004ac21a0 : netr28x+0x349f ffffd00020d459a0 fffff80000baa248 : ffffe00004ac2eb8 0000000000000000 ffffe00000000000 ffffe00004ac21a0 : ndis!ndisMInvokeInitialize+0x39 ffffd00020d459e0 fffff80000b74784 : 0000000000000050 ffffe00004907ba0 0000000000000000 01cecbbc328e6cde : ndis!ndisMInitializeAdapter+0x4dc ffffd00020d46050 fffff80000b74d3d : 0000000000000050 ffffe0000443e770 ffffc00000951480 ffffe00004ac21a0 : ndis!ndisInitializeAdapter+0x60 ffffd00020d460a0 fffff80000b74c14 : ffffe00004ac21a0 ffffe00004ac2050 ffffe000047ec2a0 0000000000000000 : ndis!ndisPnPStartDevice+0x89 ffffd00020d460f0 fffff80000b87695 : ffffe00004ac21a0 ffffe00004ac21a0 ffffd00020d461b0 ffffe000047ec2a0 : ndis!ndisStartDeviceSynchronous+0x58 ffffd00020d46140 fffff80000b6a760 : ffffe000047ec2a0 ffffe00004ac21a0 0000000000000000 0000000000000000 : ndis!ndisPnPIrpStartDevice+0x13471 ffffd00020d46170 fffff8000032576c : ffffe00004b11501 ffffe00004b11570 0000000000000001 fffff80000325880 : ndis!ndisPnPDispatch+0x140 ffffd00020d461e0 fffff8000030b40a : ffffe000047ec2a0 0000000000000106 ffffd00020d462f0 ffffe00004b116c0 : Wdf01000!FxPkgFdo::PnpSendStartDeviceDownTheStackOverload+0xe8 ffffd00020d46250 fffff80000305942 : 0000000000000106 ffffd00020d462f0 0000000000000105 ffffd00020d464d0 : Wdf01000!FxPkgPnp::PnpEventInitStarting+0xa ffffd00020d46280 fffff80000305a5a : ffffe00004b116c8 0000000000000002 ffffe00004b11570 ffffe00004b11600 : Wdf01000!FxPkgPnp::PnpEnterNewState+0x102 ffffd00020d46310 fffff80000305bc4 : 0000000000000000 ffffd00020d46400 ffffe00004b116a0 0000000000000000 : Wdf01000!FxPkgPnp::PnpProcessEventInner+0xc2 ffffd00020d46390 fffff8000030c27a : 0000000000000000 ffffe00004b11570 0000000000000000 ffffe00004b11570 : Wdf01000!FxPkgPnp::PnpProcessEvent+0xe4 ffffd00020d46430 fffff80000300936 : ffffe00004b11570 ffffd00020d464c0 0000000000000000 ffffe00004a0e630 : Wdf01000!FxPkgPnp::_PnpStartDevice+0x1e ffffd00020d46460 fffff800002fba18 : ffffe000047ec2a0 ffffe000047ec2a0 0000000000000000 ffffe0000486f020 : Wdf01000!FxPkgPnp::Dispatch+0xd2 ffffd00020d464d0 fffff801ef838796 : 0000000000000000 fffff801ef6aa101 0000000000000000 ffffd000208aa180 : Wdf01000!FxDevice::DispatchWithLock+0x7d8 ffffd00020d465b0 fffff801ef4d5bad : ffffe000011dc3a0 ffffd00020d46659 0000000000000000 fffff801ef7f5ba4 : nt!PnpAsynchronousCall+0x102 ffffd00020d465f0 fffff801ef838e57 : ffffe000011db8d0 
ffffe000011db8d0 ffffe00004a8d060 ffffc00002b11200 : nt!PnpStartDevice+0xc5 ffffd00020d466c0 fffff801ef838fe7 : ffffe000011db8d0 ffffe000011db8d0 0000000000000000 ffffe000011db8d0 : nt!PnpStartDeviceNode+0x147 ffffd00020d46790 fffff801ef7fd19e : ffffe000011db8d0 0000000000000001 0000000000000001 ffffe00000000001 : nt!PipProcessStartPhase1+0x53 ffffd00020d467d0 fffff801ef897b17 : ffffe000011db8d0 0000000000000001 0000000000000000 fffff801ef7ef7b2 : nt!PipProcessDevNodeTree+0x3ce ffffd00020d46a50 fffff801ef4f5033 : 0000000100000003 0000000000000000 0000000000000000 0000000000000000 : nt!PiRestartDevice+0xaf ffffd00020d46aa0 fffff801ef44565d : fffff801ef4f4c90 ffffd00020d46bd0 0000000000000000 ffffe00004a10170 : nt!PnpDeviceActionWorker+0x3a3 ffffd00020d46b50 fffff801ef4eec80 : 0000000000000000 ffffe000010ff040 ffffe000010ff040 ffffe0000035c900 : nt!ExpWorkerThread+0x2b5 ffffd00020d46c00 fffff801ef55f2c6 : ffffd00020472180 ffffe000010ff040 ffffe00000608040 ffffc00000002710 : nt!PspSystemThreadStartup+0x58 ffffd00020d46c60 0000000000000000 : ffffd00020d47000 ffffd00020d41000 0000000000000000 0000000000000000 : nt!KiStartSystemThread+0x16 STACK_COMMAND: kb FOLLOWUP_IP: netr28x+840ad fffff800`03b430ad 4533e4 xor r12d,r12d SYMBOL_STACK_INDEX: 4 SYMBOL_NAME: netr28x+840ad FOLLOWUP_NAME: MachineOwner MODULE_NAME: netr28x IMAGE_NAME: netr28x.sys DEBUG_FLR_IMAGE_TIMESTAMP: 51de7a8d FAILURE_BUCKET_ID: AV_netr28x+840ad BUCKET_ID: AV_netr28x+840ad ANALYSIS_SOURCE: KM FAILURE_ID_HASH_STRING: km:av_netr28x+840ad FAILURE_ID_HASH: {a1f86ced-f566-ac23-afeb-1aa88ea5ab8f} Followup: MachineOwner

    Read the article

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch follow this guide, details as below: mkdir /usr/local/mysqlc732/ cd /usr/local/src/mysql-cluster-gpl-7.3.2 cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON make make install Everything works fine until this step: # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n FLEXASYNCH - Starting normal mode Perform benchmark of insert, update and delete transactions 1 number of concurrent threads 80 number of parallel operation per thread 100 transaction(s) per round 2 iterations Load Factor is 80% 25 attributes per table 1 is the number of 32 bit words per attribute Tables are with logging Transactions are executed with hint provided No force send is used, adaptive algorithm used Key Errors are disallowed Temporary Resource Errors are allowed Insufficient Space Errors are disallowed Node Recovery Errors are allowed Overload Errors are allowed Timeout Errors are allowed Internal NDB Errors are allowed User logic reported Errors are allowed Application Errors are disallowed Using table name TAB0 NDBT_ProgramExit: 1 - Failed ndb_cluster.log: WARNING -- Failed to allocate nodeid for API at 127.0.0.1. Returned eror: 'No free node id found for mysqld(API).' I also have recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in the debug mode? # /usr/local/mysqlc732/bin/flexAsynch -h FLEXASYNCH Perform benchmark of insert, update and delete transactions Arguments: -t Number of threads to start, default 1 -p Number of parallel transactions per thread, default 32 -o Number of transactions per loop, default 500 -l Number of loops to run, default 1, 0=infinite -load_factor Number Load factor in index in percent (40 -> 99) -a Number of attributes, default 25 -c Number of operations per transaction -s Size of each attribute, default 1 (PK is always of size 1, independent of this value) -simple Use simple read to read from database -dirty Use dirty read to read from database -write Use writeTuple in insert and update -n Use standard table names -no_table_create Don't create tables in db -temp Create table(s) without logging -no_hint Don't give hint on where to execute transaction coordinator -adaptive Use adaptive send algorithm (default) -force Force send when communicating -non_adaptive Send at a 10 millisecond interval -local 1 = each thread its own node, 2 = round robin on node per parallel trans 3 = random node per parallel trans -ndbrecord Use NDB Record -r Number of extra loops -insert Only run inserts on standard table -read Only run reads on standard table -update Only run updates on standard table -delete Only run deletes on standard table -create_table Only run Create Table of standard table -drop_table Only run Drop Table on standard table -warmup_time Warmup Time before measurement starts -execution_time Execution Time where measurement is done -cooldown_time Cooldown time after measurement completed -table Number of standard table, default 0

    Read the article

  • Compiling zip component for PHP 5.2.11 in MAMP PRO

    - by Zlatoroh
    Hello. I installed MAMP PRO on my MacBook Pro (10.6) some time ago. Now I would like to use the zip functions in PHP. I found that I must add zip.so to my extension folder and edit php.ini. On my computer I have two different versions of PHP: one in the MAMP folder and another in /usr/lib, which was pre-installed on my system. Now I wish to compile the zip library for the MAMP version. I got the zip sources for my version of PHP, then in the terminal called /Applications/MAMP/bin/php5/bin/phpize so that it uses the MAMP PHP version, then ./configure and make, and then I moved the compiled zip.so to extensions/no-debug-non-zts-20060613. When MAMP is launched it returns this error: [11-Apr-2010 16:33:27] PHP Warning: PHP Startup: zip: Unable to initialize module Module compiled with module API=20090626, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 Can somebody explain to me how to do this the right way?

    Read the article

  • Increasing FreeBSD threads

    - by sh-beta
    For network apps that create one thread per connection (like Pound), thread count can become a bottleneck on the number of concurrent connections you can serve. I'm running FreeBSD 8 x64:

        $ sysctl kern.maxproc
        kern.maxproc: 6164
        $ sysctl kern.threads.max_threads_per_proc
        kern.threads.max_threads_per_proc: 1500
        $ limits
        Resource limits (current):
          cputime          infinity secs
          filesize         infinity kB
          datasize         33554432 kB
          stacksize        524288 kB
          coredumpsize     infinity kB
          memoryuse        infinity kB
          memorylocked     infinity kB
          maxprocesses     5547
          openfiles        200000
          sbsize           infinity bytes
          vmemoryuse       infinity kB
          pseudo-terminals infinity
          swapuse          infinity kB

    I want to increase kern.threads.max_threads_per_proc to 4096. Assuming each thread starts with a stack size of 512k, what else do I need to change to ensure that I don't hose my machine?
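
    (The address-space side of that question can be checked with quick arithmetic. The sketch below uses the 512k per-thread stack figure from the question and only covers virtual address space reserved for stacks; kernel memory per thread is a separate cost.)

        # Rough virtual address space reserved for per-thread stacks.
        threads = 4096
        stack_kb = 512                              # per-thread stack size from the question
        total_mb = threads * stack_kb / 1024.0
        print('%d threads x %d kB stack = %.0f MB of address space'
              % (threads, stack_kb, total_mb))
        # 4096 * 512 kB = 2048 MB, comfortably within a 64-bit address space, but
        # worth comparing against the vmemoryuse and datasize limits shown above.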

    Read the article

  • kernel software trap handling

    - by Tony
    I'm reading a book on Windows Internals and there's something I don't understand: "The kernel handles software interrupts either as part of hardware interrupt handling or synchronously when a thread invokes kernel functions related to the software interrupt." So does this mean that software interrupts or exceptions will only be handled under these conditions:
      a. when the kernel is executing a function from said thread related to the software exception (trap)
      b. when it is already handling a hardware trap
    Is my understanding of this correct? The next bit: "In most cases, the kernel installs front-end trap handling functions that perform general trap handling tasks before and after transferring control to other functions that field the trap." I don't quite understand what it means by 'front-end trap handling functions' and 'field the trap'. Can anyone help me?

    Read the article

  • Apache maximum request number 256?

    - by victor hugo
    I have a very good server running an Apache instance with mod_jk for proxying requests to an application server. I'm doing a load test, and although I'm sending over 600 requests, the status worker keeps showing this: 256 requests currently being processed, 0 idle workers I'm using the prefork MPM: <IfModule prefork.c> ServerLimit 2048 StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 1000 MaxRequestsPerChild 0 </IfModule> Is there a compiled-in limit of 256 requests in Apache, or what am I missing?

    Read the article

  • Run application or script on Windows RDC connection

    - by NickLarsen
    I checked this thread, but it did not solve my exact problem. I need to run a script when a connection is made across my network using Windows Remote Desktop Connection. The thread listed above works for the initial login; however, if I don't log out (which is necessary for some processes running on my network), then it won't run the script again the next time someone connects to the system using Remote Desktop Connection. Previously we were using pcAnywhere to achieve this; however, after running into some graphical issues with pcAnywhere, we decided to move away from it to RDC. For a little more information, we need to have an email sent out anytime a connection is made to particular machines. The login name will always be the same for those systems, and we do not log off when closing the connection.

    Read the article

  • Setup mutt to read through a threaded message and then return to the email index

    - by dat5h
    I have begun using mutt (a terminal email client) recently, and I have it set up very nicely and easy to use. My only gripe is that it is sometimes difficult to tell when I transition from one thread to the next. Is it possible to set up the bindings such that moving to the next undeleted message would check whether the next undeleted message is in the same thread, and if that check is false, return to the index or throw an error? For example, I currently have: bind pager <down> next-undeleted bind pager } next-subthread Is there a more advanced function call that could perform this check? I haven't been able to find much on how to perform more complex function calls with such checks while looking through the manual. Any help would be appreciated.

    Read the article

  • Why did cherokee-admin-launcher crash?

    - by DarenW
    I'm trying out the Cherokee HTTP server on a seemingly fine machine. Following simple set-up instructions, I tried running cherokee-admin-launcher, but it printed error messages and hung. Ctrl-C did not kill it; I had to kill -9 it from another xterm. OTOH, cherokee-admin ran fine (or at least got a lot further). What is the problem with Python and cherokee-admin-launcher, and how do I fix it?

        [root@iron rc.d]# cherokee-admin-launcher
        Checking TCP port 9090 availability.. OK
        Launching: LD_LIBRARY_PATH=/usr/lib /usr/sbin/cherokee-admin
        Exception in thread Thread-1:
        Traceback (most recent call last):
          File "/usr/lib/python2.7/threading.py", line 530, in __bootstrap_inner
            self.run()
          File "/usr/bin/cherokee-admin-launcher", line 209, in run
            return self._run_guts()
          File "/usr/bin/cherokee-admin-launcher", line 217, in _run_guts
            env=self.environ, close_fds=True)
          File "/usr/lib/python2.7/subprocess.py", line 672, in __init__
            errread, errwrite)
          File "/usr/lib/python2.7/subprocess.py", line 1202, in _execute_child
            raise child_exception
        OSError: [Errno 2] No such file or directory
        ^C
        ^C
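
    (The OSError at the end of that traceback is the generic signature of subprocess.Popen being asked to execute a path that does not exist. A minimal sketch with a deliberately non-existent, hypothetical path reproduces it, which suggests checking that the binary the launcher tries to start, /usr/sbin/cherokee-admin in the log above, is really present at that path.)

        import subprocess

        try:
            # Hypothetical path: Popen raises OSError errno 2 if the executable is missing.
            subprocess.Popen(['/usr/sbin/cherokee-admin-missing'], close_fds=True)
        except OSError as e:
            print('launch failed:', e)   # [Errno 2] No such file or directory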

    Read the article
