Search Results

Search found 21053 results on 843 pages for 'process'.

Page 114/843 | < Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >

  • POST data getting lost: Content Length = 0

    - by Igor K
    I've been trying to find a solution for ages with no success. On our app, customers submit a form and on the next page we process it. Sometimes, though, the data never arrives. This seems to happen for just a few of our customers, apparently with IE7 and a proxy. Here are the server variables for one failing request; note the HTTP_VIA header:

        X-REWRITE-URL: /process.asp?r=699743
        APPL_MD_PATH: /LM/W3SVC/31555/ROOT
        APPL_PHYSICAL_PATH: C:\inetpub\vhosts\mysite.com\httpdocs\
        AUTH_PASSWORD:  AUTH_TYPE:  AUTH_USER:
        CERT_COOKIE:  CERT_FLAGS:  CERT_ISSUER:  CERT_KEYSIZE:  CERT_SECRETKEYSIZE:
        CERT_SERIALNUMBER:  CERT_SERVER_ISSUER:  CERT_SERVER_SUBJECT:  CERT_SUBJECT:
        CONTENT_LENGTH: 0
        CONTENT_TYPE: application/x-www-form-urlencoded
        GATEWAY_INTERFACE: CGI/1.1
        HTTPS: off
        HTTPS_KEYSIZE:  HTTPS_SECRETKEYSIZE:  HTTPS_SERVER_ISSUER:  HTTPS_SERVER_SUBJECT:
        INSTANCE_ID: 31555
        INSTANCE_META_PATH: /LM/W3SVC/31555
        LOCAL_ADDR: XXX.XXX.XXX.XXX
        LOGON_USER:
        PATH_INFO: /process.asp
        PATH_TRANSLATED: C:\inetpub\vhosts\mysite.com\httpdocs\process.asp
        QUERY_STRING: r=699743
        REMOTE_ADDR: YYY.YYY.YYY.YYY
        REMOTE_HOST: YYY.YYY.YYY.YYY
        REMOTE_USER:
        REQUEST_METHOD: POST
        SCRIPT_NAME: /process.asp
        SERVER_NAME: www.mysite.com
        SERVER_PORT: 80
        SERVER_PORT_SECURE: 0
        SERVER_PROTOCOL: HTTP/1.1
        SERVER_SOFTWARE: Microsoft-IIS/7.0
        URL: /process.asp
        HTTP_CONNECTION: Keep-Alive
        HTTP_PRAGMA: no-cache
        HTTP_VIA: 1.1 WEBCACHE-2
        HTTP_CONTENT_LENGTH: 0
        HTTP_CONTENT_TYPE: application/x-www-form-urlencoded
        HTTP_ACCEPT: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/x-shockwave-flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
        HTTP_ACCEPT_LANGUAGE: en-gb
        HTTP_COOKIE: ASPSESSIONIDQCKSDCTS=FENMPCMDCHEOENGOJPGDGPLN;
        HTTP_HOST: www.mysite.com
        HTTP_REFERER: http://www.mysite.com/theform.asp
        HTTP_USER_AGENT: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022)
        HTTP_UA_CPU: x86
        HTTP_X_REWRITE_URL: /process.asp?r=699743

    Read the article

  • get all running processes info using QProcess

    - by kaycee
    A few days ago I asked how to get all running processes on the system using QProcess. I found a command line that can output all processes to a file:

        "C:\WINDOWS\system32\wbem\wmic.exe" /OUTPUT:C:\ProcessList.txt PROCESS get Caption

    This creates the file C:\ProcessList.txt containing all running processes on the system. I wonder how I can run it using QProcess and capture its output in a variable. Every time I try to run it and read, nothing happens:

        QString program = "C:\\WINDOWS\\system32\\wbem\\wmic.exe";
        QStringList arguments;
        arguments << "/OUTPUT:C:\\ProcessList.txt" << "PROCESS" << "get" << "Caption";
        process->setStandardOutputFile("process.txt");
        process->start(program, arguments);
        QByteArray result = process->readAll();

    I'd prefer not to create process.txt at all and instead capture all the output in a variable...
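
    The usual QProcess fix for the snippet above is to drop the /OUTPUT argument and the setStandardOutputFile() call, wait with waitForFinished(), and then read via readAllStandardOutput(). For comparison, here is a minimal sketch of the same capture-stdout-into-a-variable pattern in C#; only the wmic.exe path and arguments come from the question, the rest is illustrative:

        using System;
        using System.Diagnostics;

        class WmicCapture
        {
            static void Main()
            {
                var psi = new ProcessStartInfo
                {
                    FileName = @"C:\WINDOWS\system32\wbem\wmic.exe",
                    Arguments = "PROCESS get Caption",  // no /OUTPUT: read stdout instead
                    RedirectStandardOutput = true,
                    UseShellExecute = false,            // required for redirection
                    CreateNoWindow = true
                };
                using (var p = Process.Start(psi))
                {
                    // Read to the end first, then wait, to avoid a full-pipe deadlock.
                    string result = p.StandardOutput.ReadToEnd();
                    p.WaitForExit();
                    Console.WriteLine(result);
                }
            }
        }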

    Read the article

  • How can I open VLC via browser with PHP (Mac OS X)

    - by Damiqib
    I'm trying to open VLC via a browser and make it instantly play a given video file on Mac OS X. This runs on my local server and is only meant to run locally - therefore I already run Apache (MAMP) with my username and with group "staff" (defined in httpd.conf). YES - I do know that VLC has an HTTP interface - however that is not what I need, so do not suggest that... My current setup works without any problems when I run it via Terminal: php /var/www/Movies/index.php - this leads to VLC opening, and the video starts playing fullscreen like intended. Problems start when I run the same PHP page from a browser. Then the VLC process starts, but there's no GUI for it, the video file won't start playing, and the VLC process takes nearly 100% of the CPU. What I've observed:

    - Both the Terminal- and browser-started VLC processes run under the same user (mine).
    - Both have "Parent process" bash.
    - The VLC process begun via Terminal has an empty "Process group" (only a process id-number); the browser-started one has "httpd" + (id-number).
    - The VLC process started via the browser makes 1000 times more "Mach System Calls" than its Terminal-started counterpart.

    Could anyone give me any pointers on how to get this thing working?

    index.php:

        # $j is a file path to the videofile and is defined before
        exec('/var/www/Movies/vlc.sh "' . $j . '" > /dev/null 2>&1 & echo $!;');
        # If I do this in the given PHP-page it tells me that apache is running
        # with my username and with the group "staff" like it should be...
        exec('whoami');

    vlc.sh:

        #!/bin/bash
        # Activate VLC in 5 seconds to make it the front-most window
        (sleep 5; open -a VLC) &
        # Open video file
        /Applications/VLC.app/Contents/MacOS/VLC --quiet --fullscreen "$1"

    Read the article

  • Apache module, is it possible to have asynchronous processing

    - by prashant2361
    Hi, I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client. I am looking for suggestions for this implementation at the server end. Basically what I need is this:

    1. A client connects to the server. I maintain the socket and metadata about the socket. The metadata contains what updates need to be sent to this client.
    2. The server process now waits for new client connections.
    3. One other process has the list of all the open sockets, goes through each of them, and sends the updates if required.

    Can we do something like this in an Apache module:

    1. An Apache process gets the new connection. It maintains the state for the connection, keeps the state in some global memory, and returns back to the root process to signify that it is done, so that the root process can accept a new connection.
    2. The Apache process, though it has returned status to the root process, keeps executing in parallel, going through its global store and sending updates to the client, if any.

    So can an Apache process do these things:

    1. Have more than one connection associated with it?
    2. Asynchronously wait for new connections while at the same time processing the previous connections?

    Regards, Prashant

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js): I don't want to start a PHP process per user, as there is no good way to send chat messages between these PHP children. So I thought about writing my own socket server in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP), I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk or in MySQL but in RAM, as an array or object, for best speed.

    As far as I know, there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; incoming message - write to all socket connections). The problem is that there will most likely be a lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users.

    My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure if it can process multiple events at the same time (wouldn't that need multi-threading?) or if there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but AFAIK there are single-threaded IRC servers that are capable of handling thousands of users (written in C++ or whatever), so maybe it's also possible in PHP. I would prefer PHP over node.js, because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.

    Read the article

  • Why is PLINQ slower than LINQ for this code?

    - by Rob Packwood
    First off, I am running this on a dual-core 2.66 GHz machine. I am not sure if I have the .AsParallel() call in the correct spot. I tried it directly on the range variable too, and that was still slower. I don't understand why... Here are my results:

        Process non-parallel 1000 took 146 milliseconds
        Process parallel 1000 took 156 milliseconds
        Process non-parallel 5000 took 5187 milliseconds
        Process parallel 5000 took 5300 milliseconds

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        namespace DemoConsoleApp
        {
            internal class Program
            {
                private static void Main()
                {
                    ReportOnTimedProcess(
                        () => GetIntegerCombinations(), "non-parallel 1000");
                    ReportOnTimedProcess(
                        () => GetIntegerCombinations(runAsParallel: true), "parallel 1000");
                    ReportOnTimedProcess(
                        () => GetIntegerCombinations(5000), "non-parallel 5000");
                    ReportOnTimedProcess(
                        () => GetIntegerCombinations(5000, true), "parallel 5000");
                    Console.Read();
                }

                private static List<Tuple<int, int>> GetIntegerCombinations(
                    int iterationCount = 1000, bool runAsParallel = false)
                {
                    IEnumerable<int> range = Enumerable.Range(1, iterationCount);
                    IEnumerable<Tuple<int, int>> integerCombinations =
                        from x in range
                        from y in range
                        select new Tuple<int, int>(x, y);
                    return runAsParallel
                        ? integerCombinations.AsParallel().ToList()
                        : integerCombinations.ToList();
                }

                private static void ReportOnTimedProcess(
                    Action process, string processName)
                {
                    var stopwatch = new Stopwatch();
                    stopwatch.Start();
                    process();
                    stopwatch.Stop();
                    Console.WriteLine("Process {0} took {1} milliseconds",
                        processName, stopwatch.ElapsedMilliseconds);
                }
            }
        }
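
    One observation added here, not from the question: the query body does almost no work per element (it only allocates tuples), so AsParallel() on the final materialization mostly buys partitioning and merge overhead. PLINQ tends to pay off when each element carries real CPU work and the source itself is parallelized, as in this minimal sketch (names and the workload are illustrative):

        using System;
        using System.Linq;

        class PlinqSketch
        {
            // Stand-in for non-trivial per-element CPU work.
            static long ExpensiveWork(int n)
            {
                long acc = 0;
                for (int i = 0; i < 10000; i++) acc += (n * i) % 7;
                return acc;
            }

            static void Main()
            {
                var range = Enumerable.Range(1, 20000);

                // Sequential baseline.
                long seq = range.Select(ExpensiveWork).Sum();

                // Same work, partitioned across cores at the source.
                long par = range.AsParallel().Select(ExpensiveWork).Sum();

                Console.WriteLine("{0} {1}", seq, par);
            }
        }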

    Read the article

  • Windows 2008 RenderFarm Service: CreateProcessAsUser "Session 0 Isolation" and OpenGL

    - by holtavolt
    Hello, I have a legacy Windows server service and (spawned) application that works fine on XP-64 and W2K3, but fails on W2K8. I believe it is because of the new "Session 0 isolation" feature. (Note: as a StackOverflow newbie I'm limited to one link in this post, so you'll need to scroll to the bottom to look up the links for the *(...) items.) Consequently, I'm looking for code samples/security-settings mojo that let you create a new process from a Windows service on Windows Server 2008 such that I can restore (and possibly surpass) the previous behavior. I need a solution that:

    1. Creates the new process in a non-zero session to get around session-0 isolation restrictions (no access to graphics hardware from session 0). The official MS line on this is: "Because Session 0 is no longer a user session, services that are running in Session 0 do not have access to the video driver. This means that any attempt that a service makes to render graphics fails. Querying the display resolution and color depth in Session 0 reports the correct results for the system up to a maximum of 1920x1200 at 32 bits per pixel."

    2. Gives the new process a window station/desktop (e.g. winsta0/default) that can be used to create Windows DCs. I've found a solution (that launches OK in an interactive session) for this here: *(Starting an Interactive Client Process in C++ - 2).

    3. Makes the Windows DC, when used as the basis for an *(OpenGL DescribePixelFormat enumeration - 3), able to find and use the hardware-accelerated format (on a system appropriately equipped with OpenGL hardware). Note that our current solution works OK on XP-64 and W2K3, except if a terminal services session is running (VNC works fine). A solution that also allowed the process to run with OpenGL hardware acceleration even when a terminal services session is open would be fantastic, although not required.

    I'm stuck at item #1 currently, and although there are some similar postings that discuss this (like *(this - 4) and *(this - 5)), they are not suitable solutions, as there is no guarantee of a user session being logged in already to "take" a session id from, nor am I running from a LocalSystem account (I'm running from a domain account for the service, for which I can adjust the privileges, within reason, although I'd prefer not to have to escalate privileges to include SeTcbPrivilege).

    For instance, here's a stub that I think should work, but it always returns error 1314 on the SetTokenInformation call (even though AdjustTokenPrivileges returned no errors). I've used some alternate strategies involving LogonUser as well (instead of opening the existing process token), but I can't seem to swap out the session id. I'm also dubious about using WTSGetActiveConsoleSessionId in all cases (for instance, if no interactive user is logged in), although a quick test of the service running with no sessions logged in seemed to return a reasonable session value (1). I've removed error handling for ease of reading (still a bit messy - apologies):

        //Also tried using LogonUser(..) here
        OpenProcessToken(GetCurrentProcess(),
            TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID |
            TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE,
            &hToken)

        GetTokenInformation(hToken, TokenSessionId, &logonSessionId,
            sizeof(DWORD), &dwTokenLength)

        DWORD consoleSessionId = WTSGetActiveConsoleSessionId();

        /* Can't use this - requires very elevated privileges (LOCAL only, SeTcbPrivileges as well)
        if( !WTSQueryUserToken(consoleSessionId, &hToken)) ...
        */

        DuplicateTokenEx(hToken,
            (TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID |
             TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE),
            NULL, SecurityIdentification, TokenPrimary, &hDupToken))

        // Look up the LUID for the TCB Name privilege.
        LookupPrivilegeValue(NULL, SE_TCB_NAME, &tp.Privileges[0].Luid))

        // Enable the TCB Name privilege in the token.
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        if (!AdjustTokenPrivileges(hDupToken, FALSE, &tp, sizeof(TOKEN_PRIVILEGES), NULL, 0))
        {
            DisplayError("AdjustTokenPrivileges");
            ...
        }
        if (GetLastError() == ERROR_NOT_ALL_ASSIGNED)
        {
            DEBUG("Token does not have the necessary privilege.\n");
        }
        else
        {
            DEBUG("No error reported from AdjustTokenPrivileges!\n");
        }
        // Never errors here

        DEBUG(LM_INFO, "Attempting setting of sessionId to: %d\n", consoleSessionId);
        if (!SetTokenInformation(hDupToken, TokenSessionId, &consoleSessionId, sizeof(DWORD)))
            // *** ALWAYS FAILS WITH 1314 HERE ***

    All the debug output looks fine up until the SetTokenInformation call - I see session 0 is my current process session, and in my case it's trying to set session 1 (the result of WTSGetActiveConsoleSessionId). (Note that I'm logged into the W2K8 box via VNC, not RDC.)

    So - the questions:

    1. Is this approach valid, or are all service-initiated processes restricted to session 0 intentionally?
    2. Is there a better approach (short of "launch on logon" and auto-logon for the servers)?
    3. Is there something wrong with this code, or a different way to create a process token where I can swap out the session id to indicate I want to spawn the process in a new session? I did try using LogonUser instead of OpenProcessToken, but that didn't work either. (I don't care if all spawned processes share the same non-zero session or not at this point.)

    Any help much appreciated - thanks!

    2: http://msdn.microsoft.com/en-us/library/aa379608(VS.85).aspx
    3: http://www.opengl.org/resources/faq/technical/mswindows.htm
    4: http://stackoverflow.com/questions/2237696/creating-a-process-in-a-non-zero-session-from-a-service-in-windows-2008-server
    5: http://stackoverflow.com/questions/1602996/how-can-i-lauch-a-process-which-has-a-ui-from-windows-service

    Read the article

  • HTTP Error 503 - Service is unavailable (how to fix?)

    - by SilverLight
    I have a website for downloading mobile files, and there are many users on it. Sometimes I get the error below:

        HTTP Error 503 - Service is unavailable

    1. Why does this error happen, and what does it mean?
    2. As I understand it, Apache frees itself up when it's overloaded, but what about IIS? How can I put some limitations on my server (I have remote access to it) to prevent this error from happening?
       a. Is limiting download speed effective in preventing this error? How can I do that? Is Squid useful for this job, or can I do it with an IIS extension?
       b. Is limiting download bandwidth effective in preventing this error? How can I do that (with IIS or another extension)? On the right side of IIS, in the configure area, I found some limits. What do those limits mean, and can I use them to keep my server alive all the time?

    EDIT: after viewing Event Viewer (Custom Views - Server Roles - Web Server (IIS)) I figured out there are no errors in that area, but many warnings and informational events. The latest warnings are like these:

        warning: A worker process '2408' serving application pool 'ASP.NET 4.0 (Integrated)' failed to stop a listener channel for protocol 'http' in the allotted time. The data field contains the error number.
        warning: A process serving application pool 'ASP.NET 4.0 (Integrated)' exceeded time limits during shut down. The process id was '6764'.
        warning: A worker process '3232' serving application pool 'ASP.NET 4.0 (Integrated)' failed to stop a listener channel for protocol 'http' in the allotted time. The data field contains the error number.
        warning: A process serving application pool 'ASP.NET 4.0 (Integrated)' exceeded time limits during shut down. The process id was '3928'.

    Thanks in advance, best regards

    Read the article

  • HTTP Error 503. The service is unavailable

    - by user1671639
    I'm struggling to set up the environment in IIS8; I searched a lot but couldn't find the right solution. I checked the error logs, but got no idea. From C:\Windows\System32\LogFiles\HTTPERR:

        2013-10-09 09:28:39 192.168.43.205 60172 192.168.43.205 80 HTTP/1.1 GET / 503 2 AppOffline qa.hti.local
        2013-10-09 09:28:39 192.168.43.205 60192 192.168.43.205 80 HTTP/1.1 GET /favicon.ico 503 2 AppOffline qa.hti.local

    Then in Event Viewer, WARNINGS:

        A listener channel for protocol 'http' in worker process '11188' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '7492' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '9088' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '9964' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '7716' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.

    I don't understand what the warning means. ERROR:

        Application pool 'qa.hti.local' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

    Note: I learned that 5 consecutive failures lead to an app pool shutdown, and that this count can be increased. I tried increasing it, but with no success.

    OS: Windows Server 2012. IIS version: 8. Please share your thoughts.

    Read the article

  • What should I do to make sure that IIS does not recycle my application?

    - by AngryHacker
    I have a WCF service app hosted in IIS. On startup, it goes and fetches a really expensive (in terms of time and CPU) resource to use as a local cache. Unfortunately, IIS seems to recycle the process on a fairly regular basis, so I am trying to change the settings on the application pool to make sure that IIS does not recycle the application. So far, I've changed the following:

    - Limit Interval under CPU from 5 to 0.
    - Idle Time-out under Process Model from 20 to 0.
    - Regular Time Interval under Recycling from 1740 to 0.

    Will this be enough? And I have specific questions about the items I changed:

    1. What specifically does the Limit Interval setting under CPU mean? Does it mean that if a certain CPU usage is exceeded, the application pool will be recycled?
    2. What exactly does "recycled" mean? Is the application completely torn down and started up again?
    3. What is the difference between "worker process shutdown" and "application pool recycling"? The documentation for the Idle Time-out under Process Model talks about shutting down the worker process, while the docs for the Regular Time Interval under Recycling talk about application pool recycling. I don't quite grok the difference between the two. I thought w3wp.exe is the worker process which runs the application pool. Can someone explain what the difference means for the application?

    The reason for having both IIS7 and IIS7.5 tags is that the app will run on both, and I hope the answers are the same between the versions.
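
    For what it's worth, those same settings can also be changed programmatically. A minimal sketch using the Microsoft.Web.Administration API; the pool name "MyAppPool" is a placeholder, and this assumes it runs elevated on the server itself:

        using System;
        using Microsoft.Web.Administration;

        class DisableRecycling
        {
            static void Main()
            {
                using (var serverManager = new ServerManager())
                {
                    ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];

                    pool.Cpu.ResetInterval = TimeSpan.Zero;              // "Limit Interval" under CPU
                    pool.ProcessModel.IdleTimeout = TimeSpan.Zero;       // no idle shutdown
                    pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero; // no scheduled recycle

                    serverManager.CommitChanges();
                }
            }
        }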

    Read the article

  • Windows typeperf and pound signs.

    - by Weegee
    I'm currently trying to use typeperf to access some Windows performance counters. Unfortunately, a few of the instances I'm trying to check are of the format service#1. The command

        typeperf "\\server\Process(service#1)\Working Set Peak"

    is unfortunately returning the data for \\server\Process(service)\Working Set Peak rather than the data for the instance service#1. This holds true for any of the services that have pound signs in the counter string. Does anyone know of a method to get around this problem? Sample output:

        I:\>typeperf -s server "\Process(service#1)\Working Set"
        "(PDH-CSV 4.0)","\\server\Process(service)\Working Set"
        "10/08/2009 09:37:29.070","1643274240.000000"
        "10/08/2009 09:37:30.070","1643274240.000000"
        "10/08/2009 09:37:31.070","1643274240.000000"
        The command completed successfully.

        I:\>typeperf -s server "\Process(service#2)\Working Set"
        "(PDH-CSV 4.0)","\\server\Process(service)\Working Set"
        "10/08/2009 09:37:39.273","1643274240.000000"
        "10/08/2009 09:37:40.273","1643274240.000000"
        "10/08/2009 09:37:41.273","1643274240.000000"
        "10/08/2009 09:37:42.273","1643274240.000000"
        "10/08/2009 09:37:43.273","1643274240.000000"
        The command completed successfully.

    I can confirm in PerfMon that the Working Set value "1643274240.000000" is incorrect for both service#1 and service#2. I am running Windows XP Service Pack 2, but a co-worker who is running Windows Server 2003 was having the same troubles.
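
    One possible workaround, sketched here as an untested assumption: the .NET PerformanceCounter class takes the instance name verbatim, pound sign and all, so it may sidestep whatever typeperf does to the '#' when parsing the counter path:

        using System;
        using System.Diagnostics;

        class PoundSignCounter
        {
            static void Main()
            {
                // Instance names like "service#1" are passed through as-is.
                var counter = new PerformanceCounter(
                    "Process", "Working Set Peak", "service#1", "server");

                // The first sample primes the counter; read again for live values.
                Console.WriteLine(counter.NextValue());
            }
        }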

    Read the article

  • Windows service runs file locally but not on server

    - by Ben
    I created a simple Windows service in .NET which runs a file. When I run the service locally, I see the file running in Task Manager just fine. However, when I run the service on the server, it won't run the file. I've checked the path to the file, which is fine. I also checked the permissions on the folder and file, and they're fine as well. Also, there are no exceptions happening. Below is the code used to launch the process which runs the file. I posted this first on Stack Overflow, and some people were thinking this is a config issue, so I moved it here. Any ideas?

        try
        {
            // TODO: Add code here to start your service.
            eventLog1.WriteEntry("VirtualCameraService started");

            // Create an instance of the Process class responsible for starting the new process.
            System.Diagnostics.Process process1 = new System.Diagnostics.Process();

            // Set the directory where the file resides
            process1.StartInfo.WorkingDirectory = "C:\\VirtualCameraServiceSetup\\";

            // Set the filename of the file to be opened
            process1.StartInfo.FileName = "VirtualCameraServiceProject.avc";

            // Start the process
            process1.Start();
        }
        catch (Exception ex)
        {
            eventLog1.WriteEntry("VirtualCameraService exception - " + ex.InnerException);
        }
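
    A hedged diagnosis, not from the question: Process.Start() on a document like an .avc file relies on a shell file association, and on the server the service account may have no such association (and, under session 0, no interactive desktop for the associated GUI application), so the launch can fail without throwing. Also, ex.InnerException is often null for launch failures, which would make the catch block log nothing useful. A sketch with both points addressed; the player path is hypothetical:

        using System;
        using System.Diagnostics;

        class VideoLauncher
        {
            static void Launch()
            {
                try
                {
                    var psi = new ProcessStartInfo
                    {
                        // Hypothetical handler: start the application directly
                        // instead of relying on the .avc shell association.
                        FileName = "C:\\Program Files\\SomePlayer\\Player.exe",
                        Arguments = "\"C:\\VirtualCameraServiceSetup\\VirtualCameraServiceProject.avc\"",
                        WorkingDirectory = "C:\\VirtualCameraServiceSetup\\",
                        UseShellExecute = false
                    };
                    Process.Start(psi);
                }
                catch (Exception ex)
                {
                    // ex.ToString() captures type, message, and stack trace;
                    // ex.InnerException alone is frequently null.
                    EventLog.WriteEntry("VirtualCameraService", ex.ToString(),
                                        EventLogEntryType.Error);
                }
            }
        }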

    Read the article

  • Apache crashes a few seconds after starting.

    - by Nacho
    Hi, I've got a problem with Apache. When I try to start it (/etc/init.d/apache2 start), it dies after a few seconds. It shows up in "ps aux" consuming a lot of memory and then dies. I don't know what could be causing Apache to consume this amount of memory:

        USER     PID   %CPU %MEM VSZ    RSS  TTY STAT START TIME COMMAND
        root     13379 1.0  0.3  14376  3908 ?   Ss   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13383 0.0  0.4  197316 4196 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13390 0.0  0.3  172728 4172 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13396 0.0  0.3  156336 4160 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13400 0.0  0.3  148140 4156 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13403 0.0  0.3  131748 4148 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start

    Here is an htop screenshot: http://i.imgur.com/N4Chh.png

    It happened suddenly; no change had been made to the server config, so I don't know what's causing it. The error log of my virtual servers shows this:

        [Sun Jan 30 22:19:50 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 11 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:19:55 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 19 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:29:40 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=12009): Couldn't create worker thread 18 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:31:06 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=13396): Couldn't create worker thread 15 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:35:02 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 16 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:35:07 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 17 in daemon process 'fb.ebookmetafinder.com'.

    I'm on an Ubuntu server VPS and I use mod_wsgi with Django. Thanks.

    Read the article

  • What could cause a 101 Error in WAMP under Windows 7?

    - by Brayn
    Hey, I've been using WAMP for local development for quite a while now, but lately I've been getting an Error 101 message when I browse localhost sites. It's possible this appeared after the last WAMP update, but I'm not 100% sure. If I try again and again, it works after several page refreshes, but it's really annoying! The exact error message is:

        Error 101 (net::ERR_CONNECTION_RESET): Unknown error.

    This is my configuration:

    - OS: Windows 7
    - Apache: 2.2.11
    - PHP: 5.2.9-2
    - WAMP: 2.0

    The local scripts connect to a remote MySQL server; they don't use the local MySQL (I don't know if it matters, just thought I'd let you know). I've been looking into the Apache logs and found the following. It seems that the Apache server keeps restarting, and I can't figure out why:

        [Wed Oct 14 13:52:30 2009] [notice] Parent: child process exited with status 255 -- Restarting.
        [Wed Oct 14 13:52:30 2009] [notice] Apache/2.2.11 (Win32) PHP/5.2.9-2 configured -- resuming normal operations
        [Wed Oct 14 13:52:30 2009] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Oct 14 13:52:30 2009] [notice] Parent: Created child process 6784
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Child process is running
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Acquired the start mutex.
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Starting 64 worker threads.
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Starting thread to listen on port 80.
        [Wed Oct 14 13:52:32 2009] [notice] Parent: child process exited with status 255 -- Restarting.
        [Wed Oct 14 13:52:33 2009] [notice] Apache/2.2.11 (Win32) PHP/5.2.9-2 configured -- resuming normal operations
        [Wed Oct 14 13:52:33 2009] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Oct 14 13:52:33 2009] [notice] Parent: Created child process 3572
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Child process is running
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Acquired the start mutex.
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Starting 64 worker threads.
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Starting thread to listen on port 80.

    I've also checked Windows Firewall and disabled all other protection on this computer, with no improvement. Thanks!

    Read the article

  • Does IIS Sometimes Allocate More Worker Processes Than Configured?

    - by Paul Williams
    We have an IIS 7.5 web service on Windows Server 2008 that handles WCF requests from C# clients. This service is configured with Maximum Worker Processes = 1, so it is not a web garden. IIS is set up to recycle itself at the same time every day (3 AM). I am trying to debug gnarly connection issues, so I wanted to be sure the application pool was not recycling itself, and I configured the pool to log an event when it recycles. To my surprise, I see the following entries in the System event log:

        Level: Information  Date/Time: 3/23/2012 3:00:00 AM  Source: WAS  Event ID: 5076
        A worker process with process id of '6636' serving application pool 'MyAppPool' has requested a recycle because it reached its scheduled recycle time.

        Level: Information  Date/Time: 3/23/2012 2:59:39 AM  Source: WAS  Event ID: 5076
        A worker process with process id of '9364' serving application pool 'MyAppPool' has requested a recycle because it reached its scheduled recycle time.

    IIS is correctly recycling the application pool at 3 AM. However, I do not understand why I would get two recycle events in the log within a few seconds of each other when the maximum number of processes is 1. Does IIS sometimes allocate multiple processes for an application pool that is specified as having one process?

    -- edit -- I connected at about 4 PM today and saw only one w3wp.exe process. There are no other event log entries that would indicate a crash.
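
    To line up every recycle notice in one place while debugging, a short sketch using the System.Diagnostics.Eventing.Reader API may help; the WAS provider name and event ID 5076 are taken from the entries quoted above, and the filter is an assumption to adjust as needed:

        using System;
        using System.Diagnostics.Eventing.Reader;

        class WasRecycleEvents
        {
            static void Main()
            {
                // All WAS 5076 ("requested a recycle") entries in the System log.
                var query = new EventLogQuery(
                    "System", PathType.LogName,
                    "*[System[Provider[@Name='WAS'] and (EventID=5076)]]");

                using (var reader = new EventLogReader(query))
                {
                    for (EventRecord rec = reader.ReadEvent(); rec != null; rec = reader.ReadEvent())
                    {
                        Console.WriteLine("{0}  {1}", rec.TimeCreated, rec.FormatDescription());
                    }
                }
            }
        }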

    Read the article

  • SQL Server log backups "stalling"

    - by MattK
    I have inherited a box running SQL Server 2008 on Windows 2003 and have had a few events where largeish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35 GB (on a DB with about 17 GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size completes in about 20 minutes.

    This server has a single RAID-1 volume, so the source database files and destination backup files are on the same volume. However, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was.

    Can anyone suggest a cause or a diagnostic process for this situation?

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    Possible Duplicate: How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    We have an ASP.NET Web API server that serves up a SQL Server data-driven website. The API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet) and we're having problems - or just not understanding what needs to be done. There are two domains:

    - The corporate domain - where all users log in normally.
    - The process domain - contains the database the Web API needs to access.

    The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. The ideal configuration is:

        corp domain (end users) <-> firewall (open port 80) <-> DMZ (web server running IIS) <-> firewall (open port 80 or 1433?) <-> process domain (IIS for Web API and SQL Server)

    We don't really understand how to deploy our browser/Web API application in this scenario:

    1. Do we need to break up our application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain?
    2. Does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data?
    3. From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitems?
    4. In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80 since the Web API is an HTTP endpoint?
    5. Or is there some better way of deployment (i.e., how are ASP.NET Web API single-page applications written entirely in HTML5 and JavaScript supposed to be deployed to production environments)?

    NB: The servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.
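
    On question 3, one sketch of what the DMZ front end's server-side call into the process-domain Web API could look like over plain HTTP on port 80; the host name and route are placeholders lifted from the question:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class DmzToProcessDomain
        {
            static async Task Main()
            {
                using (var client = new HttpClient())
                {
                    // Only port 80 needs to be open in the inner firewall for this:
                    // SQL Server (1433) is reached by the Web API inside the
                    // process domain, never directly from the DMZ.
                    string json = await client.GetStringAsync(
                        "http://server/appname/api/getitems");
                    Console.WriteLine(json);
                }
            }
        }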

    Read the article

  • Linux CentOS strange memory readings

    - by user2008937
    I am actually a young junior sysadmin. I have a question - I am trying to understand how Linux deals with memory... While playing around with different monitoring programs I found something strange. When I run top on my laptop, it shows me that the FIREFOX process with PID 8778 takes 18.3% of memory (the %MEM column).

        grep "MemTotal" /proc/meminfo

    The above command gives me 1848336 kB / 1024 ≈ 1805 MB of memory (which is right - I have 2 GB of RAM). So if the Firefox process takes 18.3% of memory (according to top's %MEM column), then it takes 0.183 * 1805, which is approximately 325 MB of memory. Quite a lot for Firefox...

    But well, in Linux there are lots of shared libraries that programs commonly use (like the famous libc). And those libraries are added to the memory utilization of every process that uses them, despite the fact that they are actually reading the same file (a single object in memory). So top may show too big a memory utilization because of those shared libraries. Well, it is time to use pmap, which should show us the real memory utilization of the process. But...

        pmap -d $(pidof firefox)
        mapped: 983460K    writeable/private: 757164K    shared: 66416K

    So pmap shows that 983460 / 1024 = 993 MB of memory is mapped to this process. It is in fact much bigger than the memory utilization shown by top. What's wrong here? How can pmap show more than top, even when top also adds the shared libraries (which in fact are single objects in memory) for each process that uses them, and pmap omits them? Regards, Krzysztof

    Read the article

  • 40k Event Log Errors an Hour: Unknown Username or Bad Password

    - by ErocM
    I am getting about 200k of these an hour:

        An account failed to log on.
        Subject:
            Security ID:       SYSTEM
            Account Name:      TGSERVER$
            Account Domain:    WORKGROUP
            Logon ID:          0x3e7
        Logon Type:            4
        Account For Which Logon Failed:
            Security ID:       NULL SID
            Account Name:      administrator
            Account Domain:    TGSERVER
        Failure Information:
            Failure Reason:    Unknown user name or bad password.
            Status:            0xc000006d
            Sub Status:        0xc0000064
        Process Information:
            Caller Process ID:   0x334
            Caller Process Name: C:\Windows\System32\svchost.exe
        Network Information:
            Workstation Name:       TGSERVER
            Source Network Address: -
            Source Port:            -
        Detailed Authentication Information:
            Logon Process:           Advapi
            Authentication Package:  Negotiate
            Transited Services:      -
            Package Name (NTLM only): -
            Key Length:              0

        This event is generated when a logon request fails. It is generated on the computer
        where access was attempted. The Subject fields indicate the account on the local
        system which requested the logon. This is most commonly a service such as the Server
        service, or a local process such as Winlogon.exe or Services.exe. The Logon Type
        field indicates the kind of logon that was requested. The most common types are 2
        (interactive) and 3 (network). The Process Information fields indicate which account
        and process on the system requested the logon. The Network Information fields
        indicate where a remote logon request originated. Workstation name is not always
        available and may be left blank in some cases. The authentication information fields
        provide detailed information about this specific logon request. Transited services
        indicate which intermediate services have participated in this logon request.
        Package name indicates which sub-protocol was used among the NTLM protocols. Key
        length indicates the length of the generated session key. This will be 0 if no
        session key was requested.

    On my server, I changed my administrative username to something else, and since then I've been inundated with these messages. I found at http://technet.microsoft.com/en-us/library/cc787567(v=WS.10).aspx that the 4 means "Batch logon type is used by batch servers, where processes may be executing on behalf of a user without their direct intervention," which really doesn't shed any light on it for me. I checked the services, and they are all logging in as Local System or Network Service - nothing as administrator. Does anyone have any idea how to tell where these are coming from? I would assume this is a program that is crapping out... Thanks in advance!

    Read the article

  • Unable to copy a file from obj\Debug to bin\Debug

    - by M.H
    I have a project in C#, and I get this error every time I try to compile it:

        Unable to copy file "obj\Debug\Project1.exe" to "bin\Debug\Project1.exe". The process cannot access the file 'bin\Debug\Project1.exe' because it is being used by another process.

    So I have to close the process from Task Manager. My project is only one form, and there is no multithreading. What is the solution (without restarting VS or killing the process)?

    Read the article

  • FindBugs: "may fail to close stream" - is this valid in case of InputStream?

    - by thSoft
    In my Java code, I start a new process, then obtain its input stream to read it:

        BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));

    FindBugs reports an error here:

        may fail to close stream
        Pattern id: OS_OPEN_STREAM, type: OS, category: BAD_PRACTICE

    Must I close the InputStream of another process? And what's more, according to its Javadoc, InputStream#close() does nothing. So is this a false positive, or should I really close the input stream of the process when I'm done?

    Read the article

  • Splitting a test into a set of smaller tests

    - by mkorpela
    I want to be able to split a big test to smaller tests so that when the smaller tests pass they imply that the big test would also pass (so there is no reason to run the original big test). I want to do this because smaller tests usually take less time, less effort and are less fragile. I would like to know if there are test design patterns or verification tools that can help me to achieve this test splitting in a robust way. I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test. An example of what I am aiming at: //Class under test class A { public void setB(B b){ this.b = b; } public Output process(Input i){ return b.process(doMyProcessing(i)); } private InputFromA doMyProcessing(Input i){ .. } .. } //Another class under test class B { public Output process(InputFromA i){ .. } .. } //The Big Test @Test public void theBigTest(){ A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive Input i = createInput(); Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive assertEquals(o, expectedOutput()); } //The splitted tests @PartlyDefines("theBigTest") // <-- so something like this should come from the tool.. @Test public void smallerTest1(){ // this method is a bit too long but its just an example.. Input i = createInput(); InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow Output expected = expectedOutput(); // this should be the same in both tests and it should be ensured somehow B b = mock(B.class); when(b.process(x)).thenReturn(expected); A classUnderTest = createInstanceOfClassA(); classUnderTest.setB(b); Output o = classUnderTest.process(i); assertEquals(o, expected); verify(b).process(x); verifyNoMoreInteractions(b); } @PartlyDefines("theBigTest") // <-- so something like this should come from the tool.. @Test public void smallerTest2(){ InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow Output expected = expectedOutput(); // this should be the same in both tests and it should be ensured somehow B classUnderTest = createInstanceOfClassB(); Output o = classUnderTest.process(x); assertEquals(o, expected); }

    Read the article

< Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >