Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.

Page 143 of 976

  • How can I ignore C comments when I process a C source file with Perl?

    - by YoDar
    I'm running code that reads files and does some parsing, but it needs to ignore all comments. There are good explanations of how to do this, like the answer to "How can I strip multiline C comments from a file using Perl?":

        $/ = undef;
        $_ = <>;
        s#/\*[^*]*\*+([^/*][^*]*\*+)*/|("(\\.|[^"\\])*"|'(\\.|[^'\\])*'|.[^/"'\\]*)#defined $2 ? $2 : ""#gse;
        print;

    My first problem is that after running the line $/ = undef; my code no longer works properly. Honestly, I don't know what that line does, but if I could set it back after stripping the comments, that would help. In general, what is a good way to ignore all comments without changing the rest of the code?
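
    The question is about Perl, but the technique in the linked answer (match comments and string literals in one pass, keep the strings, drop the comments) is easy to illustrate with a hedged Python sketch. Reading the whole file at once is the rough analogue of Perl's $/ = undef slurp mode; the command-line handling below is just an assumption for the example.

        import re
        import sys

        # Alternatives are tried in order: comments first, then string
        # literals (captured so they can be kept verbatim).
        PATTERN = re.compile(
            r'/\*.*?\*/'                # block comments
            r'|//[^\n]*'                # line comments
            r'|("(?:\\.|[^"\\])*")'     # double-quoted strings (kept)
            r"|('(?:\\.|[^'\\])*')",    # single-quoted/char literals (kept)
            re.DOTALL,
        )

        def strip_c_comments(source):
            # Keep whichever string group matched; replace comments with a space.
            return PATTERN.sub(lambda m: m.group(1) or m.group(2) or ' ', source)

        if __name__ == '__main__':
            with open(sys.argv[1]) as fh:
                text = fh.read()        # slurp the whole file, like $/ = undef
            sys.stdout.write(strip_c_comments(text))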

  • ASP.Net: HTTP 400 Bad Request error when trying to process http://localhost:5957/http://yahoo.com

    - by mat3
    I'm trying to create something similar to the DiggBar: http://digg.com/http://cnn.com. I'm using Visual Studio 2010 and the ASP.NET Development Server. However, I can't get the dev server to handle the request because the path contains "http:". I've tried creating an HTTPModule to rewrite the URL in BeginRequest, but the event handler doesn't get called when the URL is http://localhost:5957/http://yahoo.com. The event handler does get called when the URL is http://localhost:5957/http/yahoo.com. To summarize:

        http://localhost:5957/http/yahoo.com     works
        http://localhost:5957/http//yahoo.com    does not work
        http://localhost:5957/http://yahoo.com   does not work
        http://localhost:5957/http:/yahoo.com    does not work

    Any ideas?

  • In Perl, how can I wait for threads to end in parallel?

    - by Pmarcoen
    I have a Perl script that launches two threads, one per processor. I need it to wait for a thread to end; when one thread ends, a new one is spawned. It seems that the join method blocks the rest of the program, so the second thread can't end until everything the first thread does is finished, which rather defeats the purpose. I tried the is_joinable method, but that doesn't seem to do it either. Here is some of my code:

        use threads;
        use threads::shared;

        @file_list = @ARGV;            # our file list
        $nofiles   = $#file_list + 1;  # real number of files
        $currfile  = 1;                # current number of file to process
        my %MSG : shared;              # shared hash

        $thr0 = threads->new(\&process, shift(@file_list));
        $currfile++;
        $thr1 = threads->new(\&process, shift(@file_list));
        $currfile++;

        while (1) {
            if ($thr0->is_joinable()) {
                $thr0->join;
                # check if there are files left to process
                if ($currfile <= $nofiles) {
                    $thr0 = threads->new(\&process, shift(@file_list));
                    $currfile++;
                }
            }
            if ($thr1->is_joinable()) {
                $thr1->join;
                # check if there are files left to process
                if ($currfile <= $nofiles) {
                    $thr1 = threads->new(\&process, shift(@file_list));
                    $currfile++;
                }
            }
        }

        sub process {
            print "Opening $currfile of $nofiles\n";
            # do some stuff
            if (some condition) {
                lock(%MSG);
                # write stuff to hash
            }
            print "Closing $currfile of $nofiles\n";
        }

    The output of this is:

        Opening 1 of 4
        Opening 2 of 4
        Closing 1 of 4
        Opening 3 of 4
        Closing 3 of 4
        Opening 4 of 4
        Closing 2 of 4
        Closing 4 of 4
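
    The script above is Perl, but the underlying pattern (keep two workers busy and hand out the next file as soon as either finishes) can be sketched in Python with concurrent.futures for comparison; process_file is a hypothetical stand-in for the process() sub, and the blocking wait() call replaces the spinning while(1) loop.

        import sys
        from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

        def process_file(path):
            # Hypothetical stand-in for the Perl process() sub.
            print("Opening %s" % path)
            # ... do some stuff ...
            print("Closing %s" % path)

        def run(paths, workers=2):
            queue = list(paths)
            pending = set()
            with ThreadPoolExecutor(max_workers=workers) as pool:
                # Prime one task per worker.
                while queue and len(pending) < workers:
                    pending.add(pool.submit(process_file, queue.pop(0)))
                # Block until any task finishes, then top the pool back up.
                while pending:
                    done, pending = wait(pending, return_when=FIRST_COMPLETED)
                    for _ in done:
                        if queue:
                            pending.add(pool.submit(process_file, queue.pop(0)))

        if __name__ == "__main__":
            run(sys.argv[1:])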

  • Communicate multiple times with a process without breaking the pipe?

    - by Manux
    It's not the first time I've had this problem, and it's really bugging me. Whenever I open a pipe using the Python subprocess module, I can only communicate with it once, as the documentation specifies: "Read data from stdout and stderr, until end-of-file is reached."

        proc = sub.Popen("psql -h darwin -d main_db".split(), stdin=sub.PIPE, stdout=sub.PIPE)
        print proc.communicate("select a,b,result from experiment_1412;\n")[0]
        print proc.communicate("select theta,zeta,result from experiment_2099\n")[0]

    The problem here is that the second time, Python isn't happy. Indeed, it decided to close the file after the first communicate:

        Traceback (most recent call last):
          File "a.py", line 30, in <module>
            print proc.communicate("select theta,zeta,result from experiment_2099\n")[0]
          File "/usr/lib64/python2.5/subprocess.py", line 667, in communicate
            return self._communicate(input)
          File "/usr/lib64/python2.5/subprocess.py", line 1124, in _communicate
            self.stdin.flush()
        ValueError: I/O operation on closed file

    So... multiple communications aren't allowed? I hope not ;) Please enlighten me.
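
    For context, communicate() is one-shot by design: it writes the input, closes stdin, and reads until the child exits. A minimal workaround, sketched here against the same psql invocation, is to batch the statements into a single call; truly interleaving reads and writes would instead mean talking to proc.stdin/proc.stdout directly, or using a real PostgreSQL client library.

        import subprocess as sub

        proc = sub.Popen(
            "psql -h darwin -d main_db".split(),
            stdin=sub.PIPE, stdout=sub.PIPE,
            universal_newlines=True,   # text-mode pipes, so plain strings also work on Python 3
        )
        queries = ("select a,b,result from experiment_1412;\n"
                   "select theta,zeta,result from experiment_2099;\n")
        # One communicate() call: everything is written, stdin is closed,
        # and all output is read until psql exits.
        out, _ = proc.communicate(queries)
        print(out)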

  • Running processes at different times stops events from working - C

    - by Jamie Keeling
    This question follows on from my previously answered question here. At first I assumed I had a problem with the way I was creating my events, because the handles returned by OpenEvent were NULL; I have since found the real cause, but I'm not sure how to deal with it. Basically, I use Visual Studio to launch both Process A and Process B at the same time. In the past my OpenEvent call would fail because Process A looked for the event a fraction of a second before Process B had time to create it. My solution was simply to let Process B run before Process A, which fixed the error. The problem I have now is that Process B also reads events from Process A, and as you would expect, it too gets a NULL handle when trying to open the events from Process A. I am creating the events in the WM_CREATE message handler of both processes, and at the same time I also create a thread to open/read/act upon the events. It seems that if I run them at the same time they don't get a chance to see each other's events, while if I run one before the other, one of them misses out and can't open a handle. Can anyone suggest a solution? Thanks.

  • How exactly do cores, processes, and threads work?

    - by unknownthreat
    I need a bit of advice to understand how this whole thing works exactly. If I am incorrect in any part described below, please correct me. On a single-core CPU, the OS runs each process in turn, jumping from one process to another to make the best use of the core. A process can also have many threads, which the core runs through while it is running the respective process. Now, on a multi-core CPU:

    Do the cores work on every process together, or can different cores run different processes at a particular point in time? For instance, if program A runs two threads, can a dual-core CPU run both threads of this program? I think the answer should be yes if we are using something like OpenMP. But while the cores are running this OpenMP-enabled process, can one of the cores simply switch to another process?

    For programs written for a single core, when running at 100%, why is the CPU utilization distributed across the cores (e.g. a dual-core CPU at 80% and 20%, where the utilization of the cores always adds up to 100% in this case)? Do the cores try to help each other run each thread of each process in some way?

    Frankly, I'm not sure how this works exactly. Any advice is appreciated.

  • Perl daemon script for message queue hanging for 20 seconds after each process. Why?

    - by Mike Diena
    I have a daemon script written in Perl that checks a database table for rows, pulls them in one by one, sends the contents via HTTP POST to another service, then logs the result and repeats (only a single child). When there are rows present, the first one is posted and logged immediately, but every subsequent one is delayed by around 20 seconds. There are no sleep() calls running, and I can't find any other obvious delays. Any ideas?

  • Io exception: There is no process to read data written to a pipe.

    - by Srikanth
    I'm using Hibernate 3.2 + WebSphere 6.0 + Struts 1.3. After deploying, the application works fine. After some idle time I get this type of error repeatedly, and I'm not able to log in at all. I'm not using any connection pooling. I suspect that after the idle period it is no longer able to connect to the database. If I restart the server everything works fine for a while, and then it's the same story again. Please help me out.

  • In a Linux user space process what is the address of the vsyscall page?

    - by TomMD
    I would like to acquire the address of the vsyscall page for my own uses. I only have two ideas here: alter the compiler to store this information in some known location after it is given to __start, or read /proc/[pid]/maps. I really don't want to read /proc/, as that is slow and shouldn't be necessary. I also don't want to make compiler modifications. Does anyone have an alternative? Is there a symbol I should know about? It's at the point where I'm tempted to stuff this functionality into an ioctl call in a module I've developed as part of this work!
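
    For reference only, since the asker explicitly wants to avoid /proc: the vsyscall and vDSO mappings do show up as named entries in /proc/self/maps, so a quick sanity check against whatever alternative is found could look like this hedged Python sketch.

        def special_mappings(path="/proc/self/maps"):
            # Return {"[vsyscall]": (start, end), "[vdso]": (start, end)} if present.
            found = {}
            with open(path) as maps:
                for line in maps:
                    fields = line.split()
                    if fields and fields[-1] in ("[vsyscall]", "[vdso]"):
                        start, end = (int(x, 16) for x in fields[0].split("-"))
                        found[fields[-1]] = (start, end)
            return found

        if __name__ == "__main__":
            for name, (start, end) in sorted(special_mappings().items()):
                print("%s: 0x%x-0x%x" % (name, start, end))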

  • What does the Kernel Virtual Memory of each process contain?

    - by claws
    When, say, three programs (executables) are loaded into memory, the layout might look something like this:

        [diagram: address-space layout of the three processes]

    I have the following questions:

    Is the concept of virtual memory limited to user processes? I am wondering where the operating system kernel and drivers live, and what their memory layout looks like. I know it's operating-system specific, so take your pick (Windows/Linux). They say that on a 32-bit machine, in a 4 GB address space, half of it (or, more recently, 1 GB) is occupied by the kernel. I can see in this diagram that "kernel virtual memory" occupies 0xc0000000 - 0xffffffff (= 1 GB). Is that what they are talking about, or is it something else? I just want to confirm.

    What exactly does the kernel virtual memory of each of these processes contain? What is its layout?

    When we do IPC we talk about shared memory. I don't see any memory shared between these processes. Where does it live?

    Resources (files, registry entries in Windows) are global to all processes, so the resource/file handle table must be in some global space. Which area would that be in?

    Where can I learn more about this kernel-side stuff?

  • How does System.TraceListener prepend message with process name?

    - by btlog
    I have been looking at using System.Diagnostics.Trace for logging in a very basic app. Generally it does all I need it to do. The downside is that if I call

        Trace.TraceInformation("Some info");

    the output is "SomeApp.Exe Information: 0: Some info". Initially this entertained me, but no longer. I would like to output just "Some info" to the console. So I thought writing a custom TraceListener, rather than using the built-in ConsoleTraceListener, would solve the problem. I can see a specific format, so I want all the text after the second colon. Here is my attempt to see if this would work:

        class LogTraceListener : TraceListener
        {
            public override void Write(string message)
            {
                int firstColon = message.IndexOf(":");
                int secondColon = message.IndexOf(":", firstColon + 1);
                Console.Write(message);
            }

            public override void WriteLine(string message)
            {
                int firstColon = message.IndexOf(":");
                int secondColon = message.IndexOf(":", firstColon + 1);
                Console.WriteLine(message);
            }
        }

    If I output the value of firstColon it is always -1. If I put a breakpoint there, the message is always just "Some info". So where does all the other information come from? I had a look at the call stack just before Console.WriteLine was called. The method that called my WriteLine method is:

        System.dll!System.Diagnostics.TraceListener.TraceEvent(System.Diagnostics.TraceEventCache eventCache, string source, System.Diagnostics.TraceEventType eventType, int id, string message) + 0x33 bytes

    When I use Reflector to look at this method it all seems pretty straightforward; I can't see any code that changes the value of the string after I have sent it to Console.WriteLine. The only method that could possibly change the underlying string value is a call to UnsafeNativeMethods.EventWriteString, which has a parameter that is a pointer to the message. Does anyone understand what is going on here, and whether I can change the output to be just my message without the additional fluff? It seems like evil magic that I can pass the string "Some info" to Console.WriteLine (or any other method for that matter) and the string that is output is different.

  • How do I upload a file, process it and return a result file in a single request to a REST WCF service?

    - by sharptooth
    I need to implement the following scenario in a REST service implemented in WCF:

      - the user submits a binary file and a set of parameters
      - the server consumes the file, does some clever work, and generates a binary output file
      - the user retrieves that binary result file

    and all of that is done in a single operation from the client's perspective. It's pretty easy in a non-REST service. How do I do it in a REST service? Where do I get started?

  • How do I find out how much application memory a Django process is (or will be) taking?

    - by photographer
    There are different "application memory" options (like 80 MB ... 200 MB) on the Django-friendly host called WebFaction, and I'm confused about which one I should buy. Could someone please walk me through how to figure out how much memory my project might require (excluding the operating system, the main Apache server, and the database server's memory requirements)? I understand that in theory I'll need to perform some kind of load testing, but I thought there might be ways to estimate it in advance with some simple, relatively easy to understand approach. I don't know how strictly they enforce the application memory limit, and another question is: what will happen if more users come to the site and more threads start than I expected? Will the application crash, or will things just become uncomfortably slow? And no, the application is not ready yet, so I can't measure anything right now. The development environment, if it matters, is Windows 7, 64-bit; the hosting itself is some kind of Linux, I think. (Sorry if it's not a Stack Overflow question.)
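
    Once there is something to run, one crude estimate is the resident set size of a single worker process multiplied by however many workers the deployment keeps alive, compared against the plan's limit. A minimal sketch (Linux-only via the resource module, which matches the host rather than the Windows development box; the suggested usage is an assumption):

        import os
        import resource

        def peak_rss_mb():
            # Peak resident set size of the current process; on Linux,
            # getrusage() reports ru_maxrss in kilobytes.
            return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

        if __name__ == "__main__":
            # In practice: import the Django project here (hypothetical), exercise
            # a few of the heavier views, then look at the footprint.
            print("pid %d: ~%.1f MB peak RSS" % (os.getpid(), peak_rss_mb()))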

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtually hosted server running CentOS. In the past month a (Java-based) process that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, it shows (by PID) as using 470 MB of RAM, while the 'used' memory immediately drops by over 1 GB. If I run top, the total RES used across all processes falls short of the 'used' listed at the top by almost 700 MB. The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up in top. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server set-up. Would you also suspect a leaking Java process in this case?

    If I run free before starting the process:

                     total       used       free     shared    buffers     cached
        Mem:       2097152     149264    1947888          0          0          0
        -/+ buffers/cache:      149264    1947888
        Swap:            0          0          0

    and after:

                     total       used       free     shared    buffers     cached
        Mem:       2097152    1094116    1003036          0          0          0
        -/+ buffers/cache:     1094116    1003036
        Swap:            0          0          0

    So it looks as though the process is using (or causing to be used) nearly 1 GB of RAM. Since the process, based on top, is only using 452 MB, does that mean the kernel is all of a sudden using an additional 500 MB?
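
    To illustrate the gap being asked about, the hypothetical sketch below sums VmRSS across /proc and compares it with 'used' from /proc/meminfo. On an ordinary box the difference is mostly page cache, buffers, slab and other kernel-side allocations that never appear as any process's RES; container-style virtualization can skew the accounting further.

        import glob

        def total_rss_kb():
            # Sum VmRSS over every process; roughly what adding up the RES
            # column in top would give (shared pages get counted repeatedly).
            total = 0
            for status in glob.glob("/proc/[0-9]*/status"):
                try:
                    with open(status) as fh:
                        for line in fh:
                            if line.startswith("VmRSS:"):
                                total += int(line.split()[1])   # value is in kB
                                break
                except IOError:   # the process exited while we were reading
                    pass
            return total

        def meminfo_kb(field):
            with open("/proc/meminfo") as fh:
                for line in fh:
                    if line.startswith(field + ":"):
                        return int(line.split()[1])
            return 0

        if __name__ == "__main__":
            used = meminfo_kb("MemTotal") - meminfo_kb("MemFree")
            print("sum of per-process RSS: %d kB" % total_rss_kb())
            print("MemTotal - MemFree:     %d kB" % used)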

  • Which work process in my company should I improve first?

    - by shoren
    I've just started working in a new place, and I see several things they do that I find really terrible. I want to know if they are indeed that wrong, or if I am just too strict. Please let me know whether my criticism is in place, and which problem you think is the worst and should be fixed first. The development is all in Java.

    1) Not using svn:ignore. This means svn status can't be used, and developers forget to add files and break the build.
    2) Generated files go into the same folders as committed files. You can't use a simple mvn clean; you have to find them one by one, and Maven's clean doesn't remove all of them.
    3) Not fixing IDE analysis warnings. Analyzing the code returns about 5,000 warnings of many different kinds.
    4) Not following conventions: Spring bean names sometimes start with an uppercase letter and sometimes not, Ant properties sometimes use an underscore and sometimes a dot as delimiter, etc.
    5) An incremental build takes 6 minutes, even when nothing has changed.
    6) Developers only use remote debugging and don't know how to run the Tomcat server from inside the IDE.
    7) Developers always restart the server after every compilation, instead of dynamically reloading the class and keeping the server's state. It takes them at least 10 minutes to start checking any change in the code.
    8) Developers only compile from the command line. When there are compilation errors, they manually open the file and go to the problematic line.
    9) A complete mess in project dependencies: over 200 open-source libraries are depended on, and no one knows what is really needed or why. They do know that not all dependencies are necessary.
    10) Mixing Maven and Ant in a way that disables the benefits of both. In one case, even dependency checks are not done by Maven.
    11) Not using generics properly.
    12) Developers don't use the Subversion integration in their IDE (Eclipse, IntelliJ IDEA).

    What do you think? Where should I start? Is any of the things I mentioned not really a problem?

  • How do I write a bash script to restart a process if it dies?

    - by Tom
    I have a Python script that will be checking a queue and performing an action on each item:

        # checkqueue.py
        while True:
            check_queue()
            do_something()

    How do I write a bash script that will check whether it's running and, if not, start it? Roughly the following pseudocode (or maybe it should do something like ps | grep?):

        # keepalivescript.sh
        if processidfile exists:
            if processid is running:
                exit, all ok
        run checkqueue.py
        write processid to processidfile

    I'll call that from a crontab:

        # crontab
        */5 * * * * /path/to/keepalivescript.sh

    Thanks in advance.
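
    The question asks for bash, but since the worker is already Python, here is a hedged sketch of the same pidfile logic expressed in Python; the pidfile path and worker command are placeholders, and the liveness test uses kill with signal 0.

        #!/usr/bin/env python
        # keepalive.py: a sketch of the same pidfile check, written in Python since
        # the worker (checkqueue.py) is already Python. Paths below are hypothetical.
        import errno
        import os
        import subprocess

        PIDFILE = "/var/run/checkqueue.pid"
        WORKER = ["python", "/path/to/checkqueue.py"]

        def pid_running(pid):
            try:
                os.kill(pid, 0)                  # signal 0: existence check only
            except OSError as e:
                return e.errno == errno.EPERM    # EPERM: it exists but isn't ours
            return True

        def main():
            try:
                with open(PIDFILE) as fh:
                    if pid_running(int(fh.read().strip())):
                        return                   # already running, nothing to do
            except (IOError, ValueError):
                pass                             # no pidfile, or garbage in it

            proc = subprocess.Popen(WORKER)      # (re)start the worker
            with open(PIDFILE, "w") as fh:
                fh.write(str(proc.pid))

        if __name__ == "__main__":
            main()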

  • How can I get the "Latency" of a process that has a TCP connection open?

    - by Dave
    I am looking to get the "Latency" field of a TCP connection. I notice that the Windows Resource Monitor has this field, and I was wondering if there is a way I can get it, preferably without using WMI. If you are unsure which field I am talking about: open Task Manager, go to the Performance tab and hit the Resource Monitor button. Once Resource Monitor is open, expand the TCP Connections area and you will see a Latency column. Is there any way to access this programmatically? Thanks!
