Search Results

Search found 23098 results on 924 pages for 'multiple processes'.


  • C++ Asymptotic Profiling

    - by Travis
    I have a performance issue where I suspect one standard C library function is taking too long and causing my entire system (a suite of processes) to basically "hiccup". Sure enough, if I comment out the library function call, the hiccup goes away. This prompted me to investigate what standard methods there are for proving this type of thing. What would be the best practice for testing a function to see if it causes an entire system to hang for a second (causing other processes to be momentarily starved)? I would at least like to definitively correlate the function being called with the visible freeze. Thanks
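
    A low-tech way to correlate the suspect call with the visible freeze is to wrap it with a monotonic clock and log only the slow invocations. A minimal C++ sketch (the function name and the 100 ms threshold are placeholders, not from the question):

        #include <chrono>
        #include <cstdio>

        // Placeholder for the standard C library call under suspicion.
        static void suspected_library_call() { /* the real call goes here */ }

        void timed_call()
        {
            using clock = std::chrono::steady_clock;
            const auto start = clock::now();
            suspected_library_call();
            const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                                clock::now() - start).count();
            // Only log the outliers so the wrapper itself stays cheap.
            if (ms > 100)
                std::fprintf(stderr, "suspected_library_call took %lld ms\n",
                             static_cast<long long>(ms));
        }

    Timestamps from this log can then be lined up with the hiccups observed in the other processes.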

    Read the article

  • Internet Explorer, Google Chrome injection

    - by Volim Te
    I wrote code that injects a function into Internet Explorer/Chrome, but it doesn't work with these processes. Basically, it fills one big structure with all the APIs my function needs, strings, and other data; then it opens a process to get a handle, uses VirtualAllocEx to allocate enough memory to store a function and the structure there, and writes the function and the structure into the allocated memory. It then runs CreateRemoteThread with the function as the starting address and the structure as the parameter. It all works great with calc/notepad/winamp processes, but I have problems with browser injection. I'm wondering what it could be; I'm using these APIs: x.xCreateFile, x.xWriteFile, x.xCloseHandle, x.xSleep, x.xVirtualAlloc, x.xVirtualFree, x.xMessageBox, x.xLoadLibrary, x.xShellExecute. Is it because browsers are protected now and run with the lowest privileges?
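
    For reference, a bare-bones C++ sketch of the allocate/write/CreateRemoteThread sequence described above (error handling is minimal and the payload pointer and size are placeholders). If OpenProcess or the remote thread creation is what fails for the browsers, integrity-level and sandbox restrictions are a plausible reason:

        #include <windows.h>

        // Copies 'payload' (position-independent code plus its data block) into
        // the target process and runs it on a remote thread.
        bool inject(DWORD pid, const void* payload, SIZE_T size)
        {
            HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
            if (!proc) return false;                 // common failure point for browsers

            bool ok = false;
            void* remote = VirtualAllocEx(proc, nullptr, size,
                                          MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
            SIZE_T written = 0;
            if (remote && WriteProcessMemory(proc, remote, payload, size, &written))
            {
                HANDLE thread = CreateRemoteThread(proc, nullptr, 0,
                                    (LPTHREAD_START_ROUTINE)remote, nullptr, 0, nullptr);
                if (thread)
                {
                    WaitForSingleObject(thread, INFINITE);
                    CloseHandle(thread);
                    ok = true;
                }
            }
            if (remote) VirtualFreeEx(proc, remote, 0, MEM_RELEASE);
            CloseHandle(proc);
            return ok;
        }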

    Read the article

  • How many concurrent HTTP requests can Erlang handle?

    - by user209123
    I am developing an application for benchmarking purposes, for which I need to create a large number of HTTP connections in a short time. I wrote a program in Java to test how many threads Java is able to create; it turns out that on my 2 GB single-core machine, with 1 GB of memory given to the JVM, the limit varies between 5000 and 6000, after which it hits an OutOfMemoryError with the heap limit reached. It is often suggested that Erlang can handle far more concurrent processes, and I am willing to learn Erlang if it is capable of solving the problem. What I am interested in knowing is whether Erlang can generate somewhere around 100,000 processes, which are essentially HTTP requests waiting for responses, in a matter of a few seconds without hitting any limit such as a memory error.

    Read the article

  • Task vs. process, is there really any difference?

    - by DASKAjA
    Hi there, I'm studying for my final exams in my CS major on the subjects of distributed systems and operating systems. I'm in need of good definitions for the terms task, process and thread. So far I'm confident that a process is the representation of a running (or suspended, but initiated) program with its own memory, program counter, registers, stack, etc. (the process control block). Processes can run threads, which share memory, so that communication via shared memory is possible, in contrast to processes, which have to communicate via IPC. But what's the difference between a task and a process? I often read that the terms are interchangeable and that the term task isn't used anymore. Is that really true?

    Read the article

  • Improving HTML scraper efficiency with pcntl_fork()

    - by Michael Pasqualone
    With the help from two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork. If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know I am not I/O or CPU bound but just limited by the linear nature of my scraping functions. Using code I've cobbled together from multiple sources, I have this working test:

        <?php

        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $hrefArray = array("http://slashdot.org", "http://slashdot.org",
                           "http://slashdot.org", "http://slashdot.org");

        function doDomStuff($singleHref, $childPid) {
            $html = new DOMDocument();
            $html->loadHtmlFile($singleHref);
            $xPath = new DOMXPath($html);
            $domQuery = '//div[@id="slogan"]/h2';
            $domReturn = $xPath->query($domQuery);
            foreach ($domReturn as $return) {
                $slogan = $return->nodeValue;
                echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
            }
        }

        $pids = array();
        foreach ($hrefArray as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref, $childPid);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    Which raises the following questions:

    1) Given that my hrefArray contains 4 URLs - if the array were to contain, say, 1,000 product URLs, this code would spawn 1,000 child processes? If so, what is the best way to limit the number of processes to, say, 10, and, again with 1,000 URLs as an example, split the child workload into 100 products per child (10 x 100)?

    2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feed them off to child processes to do the processing - so spreading the load across 10 child workers. My brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):

        <?php

        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $maxChildWorkers = 10;

        $html = new DOMDocument();
        $html->loadHtmlFile('http://xxxx');
        $xPath = new DOMXPath($html);
        $domQuery = '//div[@id=productDetail]/a';
        $domReturn = $xPath->query($domQuery);
        $hrefsArray[] = $domReturn->getAttribute('href');

        function doDomStuff($singleHref) {
            // Do stuff here with each product
        }

        // To figure out: split href array into $maxChildWorkers # of
        // workArray1, workArray2 ... workArray10.
        $pids = array();
        foreach ($workArray(1,2,3 ... 10) as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it off to the child processes. Currently everything I've tried causes loops in the child processes, i.e. my hrefsArray gets built in the master and in each subsequent child process. I am sure I am going about this all totally wrong, so I would greatly appreciate just a general nudge in the right direction.

    Read the article

  • removing a line from a text file?

    - by Blackbinary
    Hi all. I am working with a text file which contains a list of processes under my program's control, along with relevant data. At some point, one of the processes will finish and will therefore need to be removed from the file (as it's no longer under control). Here is a sample of the file contents (which has entries added "randomly"):

        PID=25729 IDLE=0.200000 BUSY=0.300000 USER=-10.000000
        PID=26416 IDLE=0.100000 BUSY=0.800000 USER=-20.000000
        PID=26522 IDLE=0.400000 BUSY=0.700000 USER=-30.000000

    So, for example, if I wanted to remove the line that says PID=26416..., how could I do that without writing the file over again? I can use external unix commands, but I am not very familiar with them, so if that is your suggestion, please give an example. Thanks!
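
    There is no portable way to delete a line from the middle of a text file in place, because all the bytes after it would have to shift; the usual approach is to stream the file to a temporary copy, skipping the unwanted line, and then rename the copy over the original. A rough C++ sketch, where the file name and the PID prefix are just examples:

        #include <cstdio>
        #include <fstream>
        #include <string>

        // Copy 'path' to 'path.tmp', dropping any line that starts with 'prefix',
        // then swap the temporary file into place.
        void remove_line(const std::string& path, const std::string& prefix)
        {
            std::ifstream in(path);
            std::ofstream out(path + ".tmp");
            std::string line;
            while (std::getline(in, line))
                if (line.compare(0, prefix.size(), prefix) != 0)
                    out << line << '\n';            // keep every other line
            in.close();
            out.close();
            std::rename((path + ".tmp").c_str(), path.c_str());
        }

        // e.g. remove_line("processes.txt", "PID=26416");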

    Read the article

  • Sharing some info with all DLLs pulled into a process

    - by JBRWilkinson
    Hi all, We've got an enterprise system which has many processes (EXEs, services, DCOM servers, COM+ apps, ISAPI, MMC snap-ins), all of which make use of many COM components. We've recently seen failures in some of the customer deployments, but are finding it hard to troubleshoot the cause. In order to track down the problem, we've augmented the entire source with logging statements where errors occur. In order to identify which logs came from which processes, the C++ logging code (compiled into all components) uses the EXE name to name the log. This works for some cases, but not all - COM+ apps, ISAPI and MMC snap-ins all have system EXE names, and their logs end up interleaved. I saw this post about shared data sections which might help, but what I don't understand is who decides what goes in the shared section. Is there any way I can guarantee that a particular piece of code writes into the shared section before anyone else reads it? Or is there a better solution to this problem?
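
    For context, the shared-section mechanism referred to above is, on MSVC, a named and initialized data segment marked shared at link time; every process that loads the DLL then sees the same bytes. A minimal sketch (the section and variable names are made up); it does not by itself settle who writes first - that usually needs something like a named mutex around the initialization:

        // Compiled into the common logging DLL. Everything placed between the
        // data_seg pragmas goes into a named section; the linker directive marks
        // that section Read/Write/Shared across all processes loading the DLL.
        #pragma data_seg(".shrlog")
        volatile char g_logName[260] = "";   // must be initialized to land in this section
        #pragma data_seg()
        #pragma comment(linker, "/SECTION:.shrlog,RWS")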

    Read the article

  • Reading and writing to SysV shared memory without synchronization (use of semaphores, C/C++, Linux)

    - by user363778
    Hi, I use SysV shared memory to let two processes communicate with each other. I do not want the code to become too complex, so I wondered if I really have to use semaphores to synchronize access to the shared memory. In my C/C++ program the parent process reads from the shared memory and the child process writes to it. I wrote two test applications to see if I could produce some kind of error, like a segmentation fault, but I couldn't (Ubuntu 10.04, 64-bit). Even two processes writing non-stop in a while loop to the same shared memory did not produce any error. I hope someone has experience concerning this matter and can tell me if I really must use semaphores to synchronize the access or if I am OK without synchronization. Thanks
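
    Worth noting: the absence of crashes does not demonstrate safety, since an unsynchronized reader can observe a partially written record (a torn read) without any signal being raised. If synchronization does turn out to be needed, a single SysV semaphore used as a mutex keeps the code small; a sketch with a placeholder key and no error handling:

        #include <sys/ipc.h>
        #include <sys/sem.h>

        // Linux requires the caller to define this union for semctl().
        union semun { int val; struct semid_ds* buf; unsigned short* array; };

        static int sem_id;

        void shm_sync_init(key_t key)
        {
            sem_id = semget(key, 1, IPC_CREAT | 0600);
            union semun arg;
            arg.val = 1;                            // binary semaphore, initially free
            semctl(sem_id, 0, SETVAL, arg);
        }

        void shm_lock()
        {
            struct sembuf op = {0, -1, SEM_UNDO};   // P(): wait and acquire
            semop(sem_id, &op, 1);
        }

        void shm_unlock()
        {
            struct sembuf op = {0, +1, SEM_UNDO};   // V(): release
            semop(sem_id, &op, 1);
        }

    The writer would bracket each update with shm_lock()/shm_unlock(), and the reader would do the same around each read.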

    Read the article

  • How do I view how many concurrent long polling requests there are on my server?

    - by Pascal
    My host is Joyent. My host says I have a 15-process limit, and prstat -J shows those processes, but that doesn't tell me how many long-polling requests are currently being served. I could record it myself, but that would add a lot of performance overhead. I need to know when the server is at its long-polling limits. I know this limit is reached far before the memory or CPU is used up. From experimentation, I've already verified that the number of open long polls is NOT equivalent to the number of processes running, probably because each process has multiple threads, each serving a request. Thanks.

    Read the article

  • Threaded Django task doesn't automatically handle transactions or db connections?

    - by Gabriel Hurley
    I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions, no luck. I switched to manual transaction management and did the rollback manually, that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?

    Read the article

  • malloc hangs in Linux

    - by Rahul
    I am using Linux on a 16 GB machine with two quad-core CPUs. There are 8 processes doing some work (CPU-intensive/network I/O), of which 4 have a memory leak (these are test conditions, so there is no problem having leaks here). The total space occupied by all processes is around 15.4 GB; only 200 MB is free in the system. Things are fine for some hours, but after that malloc hangs (in a process which doesn't have a memory leak). It's stuck for more than 4 minutes (note: the CPU is not at 100%, but I/O has gone up significantly). There is no problem in the hung process itself (it has not corrupted memory). What is malloc doing? (Is it trying to defragment, or building up swap space?) I am using SUSE 10. Any pointers?

    Read the article

  • Sql Server performance

    - by Jose
    I know that I can't get a specific answer to my question, but I would like to know if I can find the tools to get to my answer. OK, we have a SQL Server 2008 database that, for the last 4 days, has had moments where it becomes unresponsive to specific queries for 5-20 minutes. For example, the following queries, run simultaneously in different query windows, have the following results:

        SELECT * FROM Assignment   -- hangs indefinitely
        SELECT * FROM Invoice      -- works fine

    Many of the tables have non-clustered indexes to help speed up SELECTs. Here's what I know: 1) The same query will either hang indefinitely or run normally. 2) In Activity Monitor, in the Processes tab, there are normally around 80-100 processes running. I think that what's happening is: 1) A user updates a table. 2) This causes one or more indexes to get updated. 3) Another user issues a SELECT while the index is updating. Is there a way I can figure out why, at a specific moment in time, SQL Server is being unresponsive for a specific query?

    Read the article

  • Creating an n tiered application

    - by aaron
    I am researching the architecture for a project that will be started next year. It is mainly a C# web app, but there will be a service layer so that it can talk to our Facebook/iPhone app. There are a few long-running processes, which means that I will be creating a Windows service that can handle those. I'm thinking of putting the entire app in the Windows service instead of just the long-running processes: ASP - WCF - BLL vs. ASP - BLL. I know this will be more scalable, but it is probably overkill, as everything will be running on the same box, even the database. This could change down the road if the server can't handle the traffic like marketing says it will. I don't have access to production hardware, just my crappy testing box and my local machine. Has anyone decided to go down this route? But mostly, what is the best way to test both methods to get some metrics?

    Read the article

  • asynchronous writing and reading of a file

    - by tazim
    Hi, I have two processes:

    1.) One process redirects the output of some unix command to a file on the server side; the data is always appended to the file, e.g. find / > tmp.txt

    2.) Another process opens and reads the same file, stores the contents in a string, and sends the entire string to the client.

    These two things happen simultaneously. I am using Python. Any suggestions on possible ways to implement this scenario? Please explain with sample code. Thanks in advance. Tazim.

    Read the article

  • Implementing traceback on i386

    - by markelliott2000
    Hi, I am currently porting our code from an Alpha (Tru64) to an i386 processor (Linux), in C. Everything has gone pretty smoothly up until I looked into porting our exception handling routine. Currently we have a parent process which spawns lots of sub-processes, and when one of these sub-processes fatals (unfielded) I have routines to catch it. I am currently struggling to find the best method of implementing a traceback routine which can list the function addresses in the error log; at the moment my routine just prints the signal which caused the exception and the exception qualifier code. Any help would be gratefully received; ideally I would write error handling for all processors, but at this stage I only really care about i386 and x86_64. Thanks Mark
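
    On Linux/glibc (both i386 and x86_64), one common building block for this is backtrace()/backtrace_symbols_fd() from <execinfo.h>, called from the fatal-signal handler; addresses resolve to names best when the binary is linked with -rdynamic. A rough sketch, not taken from the original code:

        #include <execinfo.h>
        #include <signal.h>
        #include <unistd.h>

        static void fatal_handler(int sig)
        {
            void* frames[64];
            int depth = backtrace(frames, 64);
            // Writes "binary(function+offset) [address]" lines straight to stderr,
            // avoiding malloc inside the signal handler.
            backtrace_symbols_fd(frames, depth, STDERR_FILENO);
            signal(sig, SIG_DFL);
            raise(sig);                       // fall through to the default action
        }

        int main()
        {
            signal(SIGSEGV, fatal_handler);
            signal(SIGBUS,  fatal_handler);
            // ... sub-process work ...
            return 0;
        }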

    Read the article

  • Does TCP actually define 'TCP server' and 'TCP clients'? [closed]

    - by mjn
    In the Wikipedia article, TCP communication is explained using the terms 'client' and 'server'. It also uses the word 'peers'. But TCP actually does not define "TCP clients" and "TCP servers" - in the RFC 675 document (SPECIFICATION OF INTERNET TRANSMISSION CONTROL PROGRAM), the word "client" never appears. The RFC explains that TCP is used to connect processes over ports (sockets), and that 'a pair of sockets form a CONNECTION which can be used to carry data in either direction' [i.e. full duplex]. Calling the originating party the "client" seems to be common practice, but this client/server communication model is not always applicable to TCP communication - take peer-to-peer networks, for example. Calling all processes which open a socket (and wait for incoming connections from peers) "TCP servers" sounds wrong to me. I would not call my uncle's telephone device a "telephony server" if I dial his phone number and he picks up.

    Read the article

  • system-wide hook for 64-bit operating systems

    - by strDisplayName
    Hey everybody, I want to perform a system-wide hook (using SetWindowsHookEx) on a 64-bit operating system. I know that 64-bit processes (= proc64) can load only 64-bit DLLs (= dll64) and 32-bit processes (= proc32) can load only 32-bit DLLs (= dll32). Currently I am planning to call SetWindowsHookEx twice, once with dll32 and once with dll64, expecting that proc64s will load dll64 and proc32s will load dll32 (while dll32 in proc64s and dll64 in proc32s will fail). Is that the correct way to do it, or is there a "more correct" way? Thanks! :-)
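
    One wrinkle with that plan: a single installer process can itself only load the DLL of its own bitness, so the two SetWindowsHookEx calls normally live in two builds (one x86, one x64) of the same installer source. A C++ sketch of that per-bitness installer, where hook.dll and its HookProc export are assumptions rather than real names:

        #include <windows.h>

        // Built twice: the x86 build loads the 32-bit hook DLL, the x64 build
        // loads the 64-bit one. Each installs a global hook for its own bitness.
        int main()
        {
            HMODULE dll = LoadLibraryW(L"hook.dll");               // matching bitness
            if (!dll) return 1;

            HOOKPROC proc = (HOOKPROC)GetProcAddress(dll, "HookProc");
            if (!proc) return 1;

            HHOOK hook = SetWindowsHookExW(WH_CBT, proc, dll, 0);  // 0 = all threads
            if (!hook) return 1;

            MSG msg;                 // keep the installer alive; the hook dies with it
            while (GetMessageW(&msg, nullptr, 0, 0) > 0)
                DispatchMessageW(&msg);

            UnhookWindowsHookEx(hook);
            return 0;
        }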

    Read the article

  • pass custom environment variables to System.Diagnostics.Process

    - by Mike Ruhlin
    I'm working on an app that invokes external processes like so:

        ProcessStartInfo startInfo = new ProcessStartInfo(PathToExecutable, Arguments)
        {
            ErrorDialog = false,
            RedirectStandardError = true,
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true,
            WorkingDirectory = WorkingDirectory
        };

        using (Process process = new Process())
        {
            process.StartInfo = startInfo;
            process.Start();
            process.BeginErrorReadLine();
            process.BeginOutputReadLine();
            process.WaitForExit();
            return process.ExitCode;
        }

    One of the processes I'm calling depends on an environment variable that I'd rather not require my users to set. Is there any way to modify the environment variables that get sent to the external process? Ideally I'd be able to make them visible only to the process that's running, but if I have to set them system-wide programmatically, I'll settle for that (but would UAC force me to run as administrator to do that?). ProcessStartInfo.EnvironmentVariables is read-only, so a lot of help that is...

    Read the article

  • How does Process Explorer enumerate all process names from an XP Guest account?

    - by Joe
    I'm attempting to enumerate the EXE names of all running processes, and have stumbled when attempting this on the XP Guest account. I am able to enumerate all process IDs using EnumProcesses, but when I attempt OpenProcess with PROCESS_QUERY_INFORMATION or PROCESS_VM_READ, the function fails. I fired up Process Explorer under the XP Guest account, and it was able to enumerate all process names (though, as expected, most other information from processes outside the Guest user-space was not present). So, my question is, how can I duplicate the Process Explorer magic to get the process names of services and other processes running outside the Guest account's user-space?
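
    One way to list image names without calling OpenProcess on each PID is the Toolhelp snapshot API, which returns the executable name for every process in a single call; whether this is exactly what Process Explorer does is a guess, but it is worth trying from the Guest account. A small C++ sketch:

        #include <windows.h>
        #include <tlhelp32.h>
        #include <cstdio>

        int main()
        {
            // One snapshot of the whole process list; no per-process handle needed.
            HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
            if (snap == INVALID_HANDLE_VALUE) return 1;

            PROCESSENTRY32W pe;
            pe.dwSize = sizeof(pe);
            if (Process32FirstW(snap, &pe))
            {
                do
                {
                    std::printf("%5lu  %ls\n", pe.th32ProcessID, pe.szExeFile);
                } while (Process32NextW(snap, &pe));
            }
            CloseHandle(snap);
            return 0;
        }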

    Read the article

  • SQL Server 2008 Management Studio Activity Monitor

    - by Angelo
    Hi, I tried to turn on the Activity Monitor in SQL Server 2008 Management Studio (SSMS) through the options window of the application (Tools | Options | Environment | General | At Startup). I restarted SSMS and I am getting the following message: "This operation does not support connections to Microsoft SQL Server Standard Edition version 8.00.2249." I need to be able to monitor the processes and activity inside the database, since I am investigating a particular application that spends a lot of time in its database data-retrieval access, and I am thinking it may be due to some locks or some processes. How do I resolve this? Inputs highly appreciated. Thanks.

    Read the article
