Search Results

Search found 12281 results on 492 pages for 'memory fences'.

Page 212 of 492

  • .NET Debugging - System.Threading.ExecutionContext.runTryCode

    - by moogs
    We have a bug that appears only about 30% of the time, and only in the Release build. Opening the crash dump in WinDbg shows (snipped "!analyze -v" output):

        FAULTING_IP:
        +4
        00000000`00000004 ?? ???

        EXCEPTION_RECORD: ffffffffffffffff -- (.exr 0xffffffffffffffff)
        ExceptionAddress: 0000000000000004
        ExceptionCode:    c0000005 (Access violation)
        ExceptionFlags:   00000000
        NumberParameters: 2
        Parameter[0]:     0000000000000008
        Parameter[1]:     0000000000000004
        Attempt to execute non-executable address 0000000000000004

        ERROR_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.

        WRITE_ADDRESS: 0000000000000004

        MANAGED_STACK: (TransitionMU)
        0000000024B9E370 000007FEEDA1DD38 mscorlib_ni!System.Threading.ExecutionContext.runTryCode(System.Object)+0x178
        (TransitionUM) (TransitionMU)
        0000000024B9DFB0 000007FF00439010 MyLibrary!DocInfo.IsStatusOK()+0x30

    Now, IsStatusOK() just calls PrintSystemJobInfo.Get(), but that doesn't even appear in the stack. Any ideas on how to debug this? I'm sure runTryCode() is really not the problem... but I'm stuck. Thanks! (I'm really groping here.)
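
    One way to dig further (a generic SOS workflow, not something from the original thread) is to load the managed debugging extension into WinDbg and dump the managed stack and any pending exception directly; the commands below assume a .NET 2.0-era runtime, where SOS loads alongside mscorwks:

        .loadby sos mscorwks
        !threads
        !clrstack -a
        !pe

    !clrstack will often recover managed frames (such as the PrintSystemJobInfo.Get call) that the native stack walk in !analyze -v misses, and ExecutionContext.runTryCode is just the CLR helper that wraps user delegates, which is why it tends to show up in place of the real culprit.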

  • How can I prevent IIS from trying to load a dll?

    - by Abtin Forouzandeh
    My project is a Speech Server application using Windows Workflow. It runs as an app under IIS and supports a plug-in system. Here is what is happening: I load a DLL into memory and set the type on an InvokeWorkflow control. When the InvokeWorkflow control runs, it appears to correctly instantiate the workflow from the loaded assembly (it completes the Initialize method), then everything crashes and burns; the target workflow is never executed. I can resolve this by putting a copy of the DLL in the application's executing directory; the workflow then executes correctly. So it appears that IIS is trying to reload the assembly, even though it's already in memory. Is there any way to alter or disable this behavior in IIS? Perhaps a hook I can write that will intercept the request to load the DLL and use my own logic to do so?
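
    The hook the poster asks about exists at the CLR level rather than in IIS: the AppDomain.AssemblyResolve event fires when a bind by name fails, and a handler can return the plug-in assembly that is already in memory. A minimal sketch, with the installer class name invented for illustration:

        using System;
        using System.Reflection;

        static class PluginAssemblyResolver
        {
            public static void Install()
            {
                // Fires whenever the runtime fails to locate an assembly on disk.
                AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
                {
                    // args.Name is the full name of the assembly being requested.
                    string wanted = new AssemblyName(args.Name).Name;
                    foreach (Assembly loaded in AppDomain.CurrentDomain.GetAssemblies())
                    {
                        if (loaded.GetName().Name == wanted)
                            return loaded;   // hand back the copy already in memory
                    }
                    return null;             // fall through to the normal bind failure
                };
            }
        }

    Whether this fires before IIS's shadow-copy machinery intervenes is untested here; it is a sketch of the CLR hook, not a verified fix for the IIS behavior.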

  • DAO, Spring and Hibernate

    - by EugeneP
    Correct me if anything is wrong. When we use Spring DAO for ORM templates with the @Transactional attribute, we do not have control over the transaction and/or session when the method is called externally rather than from within the method itself. Lazy loading saves resources: fewer queries to the db, and less memory needed to keep all the fetched collections in the app's memory. So if lazy=false, everything is fetched, including all associated collections, which is not efficient if there are 10,000 records in a linked set. Now, I have a method in a DAO class that is supposed to return a User object. It has collections that represent linked tables of the database. I need to get an object by id and then query its collections. Hibernate's "failed to lazily initialize a collection" exception occurs when I try to access the linked collection that this DAO method returns. Please explain: what is the workaround here?

  • How does jQuery store data with .data()?

    - by TK
    I am a little confused about how jQuery stores data with the .data() function. Is this something called expando? Or is it using HTML5 Web Storage (though I think that is very unlikely)? The documentation says: "The .data() method allows us to attach data of any type to DOM elements in a way that is safe from circular references and therefore from memory leaks." From what I read about expando, it seems to carry a risk of memory leaks. Unfortunately my skills are not enough to read and understand the jQuery code itself, but I want to know how jQuery stores such data when .data() is used. http://api.jquery.com/data/

  • GLTessellator crashing

    - by user146780
    I've followed a tutorial to get the GLU tessellator working. It works, except that the interpolation of colors for new points causes a crash (an error reading from memory). This is my callback; it crashes in the marked loop:

        void CALLBACK combineCallback(GLdouble coords[3], GLdouble *vertex_data[4],
                                      GLfloat weight[4], GLdouble **dataOut)
        {
            GLdouble *vertex;
            int i;

            vertex = (GLdouble *) malloc(6 * sizeof(GLdouble));
            vertex[0] = coords[0];
            vertex[1] = coords[1];
            vertex[2] = coords[2];

            // crashes somewhere in this loop
            for (i = 3; i < 6; i++)
            {
                vertex[i] = weight[0] * vertex_data[0][i] +
                            weight[1] * vertex_data[1][i] +
                            weight[2] * vertex_data[2][i] +
                            weight[3] * vertex_data[3][i];
            }

            *dataOut = vertex;
        }

    I looked at memory when it crashes but can't put my finger on exactly what triggers it. I followed this tutorial: http://www.flipcode.com/archives/Polygon_Tessellation_In_OpenGL.shtml Thanks

  • Remove UIViewController from UIScrollView?

    - by tobi
    Hi, I add different views (via setPageID:) to a scroll view, but now I get a memory problem on rotation and I want to remove the views that are not currently shown. How can I do this, or how else can I get rid of the memory problem? Thanks!!!

        - (void)setPageID:(int)page {
            if (page < 0) return;
            if (page >= self.listOfItems.count) return;

            CGFloat cx = 0;
            ScrollingViewStep *controller = [viewControllers objectAtIndex:page];
            if ((NSNull *)controller == [NSNull null]) {
                controller = [[ScrollingViewStep alloc] init];
                [viewControllers replaceObjectAtIndex:page withObject:controller];
                [controller release];
            }

            if (nil == controller.view.superview) {
                if ([[UIDevice currentDevice] orientation] == UIDeviceOrientationPortrait ||
                    [[UIDevice currentDevice] orientation] == UIDeviceOrientationPortraitUpsideDown) {
                    cx = 768.0 * page;
                    controller.view.frame = CGRectMake(cx, 0.0, 768.0f, 926.0f);
                } else {
                    cx = 1024.0 * page;
                    controller.view.frame = CGRectMake(cx, 0.0, 1024.0f, 670.0f);
                }
                [controller setView:ItemID PageID:page Text:[[self.listOfItems objectAtIndex:page] objectForKey:@"step"]];
                [scrollView addSubview:controller.view];
            }
        }

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            CGFloat cx2 = 0;
            for (int i = 0; i < [self.viewControllers count]; i++) {
                ScrollingViewStep *viewController = [self.viewControllers objectAtIndex:i];
                if ((NSNull *)viewController != [NSNull null]) {
                    if ([[UIDevice currentDevice] orientation] == UIDeviceOrientationPortrait ||
                        [[UIDevice currentDevice] orientation] == UIDeviceOrientationPortraitUpsideDown) {
                        cx2 = 768.0 * i;
                        viewController.view.frame = CGRectMake(cx2, 0.0, 768.0f, 926.0f);
                        [viewController repos];
                    } else {
                        cx2 = 1024.0 * i;
                        viewController.view.frame = CGRectMake(cx2, 0.0, 1024.0f, 670.0f);
                        [viewController repos];
                    }
                }
            }

            if (interfaceOrientation == UIInterfaceOrientationPortrait ||
                interfaceOrientation == UIInterfaceOrientationPortraitUpsideDown) {
                CGRect frame = scrollView.frame;
                frame.origin.x = 768 * currentPageInScrollview;
                frame.origin.y = 0;
                [scrollView scrollRectToVisible:frame animated:NO];
            } else {
                CGRect frame = scrollView.frame;
                frame.origin.x = 1024 * currentPageInScrollview;
                [scrollView scrollRectToVisible:frame animated:NO];
            }
            pageControlIsChangingPage = YES;
            return YES;
        }

        - (void)didReceiveMemoryWarning {
            int currentPage = currentPageInScrollview;
            NSLog(@"MEMORY");
            // unload the views+controllers which are no longer visible
            for (int i = 0; i < [self.viewControllers count]; i++) {
                ScrollingViewStep *viewController = [self.viewControllers objectAtIndex:i];
                if ((NSNull *)viewController != [NSNull null]) {
                    if (i < currentPage - 1 || i > currentPage + 1) {
                        [self.viewControllers replaceObjectAtIndex:i withObject:[NSNull null]];
                    }
                }
            }
            [super didReceiveMemoryWarning];
        }

  • Inserting CLOBs into Oracle with ODBC

    - by Carra
    I'm trying to insert a CLOB into Oracle. If I try this with an OdbcConnection it does not insert the data into the database; it reports 1 row affected and no errors, but nothing is inserted. It does work with an OracleConnection. However, using the Microsoft OracleClient makes our web services crash frequently with an AccessViolationException ("Attempted to read or write protected memory. This is often an indication that other memory is corrupt."). This happens a lot when we use OracleDataAdapter.Fill(dataset), so that doesn't seem like an option. Is there any way to insert/update CLOBs with more than 4,000 characters from .NET with an OdbcConnection?
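
    For reference, the usual way around the 4,000-character limit on SQL string literals is to bind the CLOB as a parameter rather than inlining it in the statement text. A minimal sketch (the table and column names are made up, and whether the Oracle ODBC driver accepts this for large CLOBs is exactly what is in question here):

        using System.Data.Odbc;

        static void InsertClob(OdbcConnection conn, int id, string bigText)
        {
            // ODBC parameters are positional '?' placeholders.
            using (var cmd = new OdbcCommand(
                "INSERT INTO documents (id, body) VALUES (?, ?)", conn))
            {
                cmd.Parameters.Add("@id", OdbcType.Int).Value = id;

                // Bound as a parameter, the text is not subject to the
                // literal-length limit of the SQL text itself.
                cmd.Parameters.Add("@body", OdbcType.NText).Value = bigText;

                cmd.ExecuteNonQuery();
            }
        }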

  • Practical non-Turing-complete languages?

    - by Kyle Cronin
    Nearly all programming languages in use are Turing complete, and while this lets the language represent any computable algorithm, it also comes with its own set of problems. Seeing as all the algorithms I write are intended to halt, I would like to be able to represent them in a language that guarantees they will halt. Regular expressions are used for matching strings, and finite state machines are used for lexing, but I'm wondering if there's a more general-purpose language that's not Turing complete? edit: I should clarify: by 'general purpose' I don't necessarily mean being able to write all halting algorithms in the language (I don't think such a language exists), but I suspect there are common threads in halting proofs that can be generalized to produce a language in which all algorithms are guaranteed to halt. There's also another way to tackle this problem: eliminate the need for theoretically infinite memory. Once you limit the amount of memory the machine is allowed, the number of states the machine can be in is finite and countable, and therefore you can determine whether the algorithm will halt (by not allowing the machine to move into a state it's been in before).

  • Adding trend lines/boxplots (by group) in ggplot2

    - by Tal Galili
    Hi all, I have 40 subjects, in two groups, over 15 weeks, with some measured variable (Y). I want a plot where x = time, y = Y, lines are by subject, and colours are by group. I found it can be done like this:

        TIME <- paste("week", 5:20)
        ID <- 1:40
        GROUP <- sample(c("a", "b"), length(ID), replace = T)
        group.id <- data.frame(GROUP, ID)
        a <- expand.grid(TIME, ID)
        colnames(a) <- c("TIME", "ID")
        group.id.time <- merge(a, group.id)
        Y <- rnorm(dim(group.id.time)[1],
                   mean = ifelse(group.id.time$GROUP == "a", 1, 3))
        DATA <- cbind(group.id.time, Y)
        qplot(data = DATA, x = TIME, y = Y, group = ID,
              geom = c("line"), colour = GROUP)

    But now I wish to add to the plot something to show the difference between the two groups (for example, a trend line for each group, with some CI shade lines). How can that be done? I remember once seeing that ggplot2 can (easily) do this with geom_smooth, but I am missing something about how to make it work. Also, I wondered about having the lines act like a boxplot for each group (with a line for the different quantiles and fences and so on), but I imagine answering the first question would help me resolve the second. Thanks.

  • How to determine System.Diagnostics.Process from ServiceController

    - by Alex
    This is my first post, so let me start by saying HELLO! I am writing a Windows service to monitor the running state of a number of other Windows services on the same server. I'd like to extend the application to also print some memory statistics for those services, but I'm having trouble working out how to map from a particular ServiceController object to its associated System.Diagnostics.Process object, which I think I need in order to determine the memory state. I found out how to map from a ServiceController to the original image name, but a number of the services I am monitoring are started from the same image, so this won't be enough to determine the Process. Does anyone know how to get a Process object from a given ServiceController? Perhaps by determining the PID of the service? Or does anyone have another workaround for this problem? Many thanks, Alex
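
    One common route (not from the original post) is to ask WMI for the service's PID and then look the process up from that; a sketch, assuming a project reference to System.Management:

        using System;
        using System.Diagnostics;
        using System.Management;
        using System.ServiceProcess;

        static Process GetProcessForService(ServiceController sc)
        {
            // Win32_Service exposes the PID that ServiceController itself hides.
            var searcher = new ManagementObjectSearcher(
                "SELECT ProcessId FROM Win32_Service WHERE Name = '" + sc.ServiceName + "'");

            foreach (ManagementObject svc in searcher.Get())
            {
                uint pid = (uint)svc["ProcessId"];
                if (pid != 0)                        // 0 means the service is not running
                    return Process.GetProcessById((int)pid);
            }
            return null;
        }

    From the returned Process, properties such as WorkingSet64 give the memory figures; note that services sharing one host process will all map to the same Process object.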

  • ActiveMQ 5.2.0 + REST + HTTP POST = java.lang.OutOfMemoryError

    - by Bruce Loth
    First off, I am a newbie when it comes to JMS and ActiveMQ. I have been looking into a messaging solution to serve as middleware for a message producer that will insert XML messages into a queue via HTTP POST. The producer is an existing system written in C++ that cannot be modified (so Java and the C++ API are out). Using the "demo" examples and some trial and error, I have cobbled together a working example of what I want to do (on a Windows box). The web.xml I configured in a test directory under "webapps" specifies that the HTTP POST messages received from the producer are to be handled by the MessageServlet. I added a line for the test app in "activemq.xml" ('ow' is the test app dir). I created a test script to insert messages into the queue, which works well.

    The problem I am running into is that as I continue to insert messages via REST/HTTP POST, the memory consumption and thread count used by ActiveMQ keep rising (this happens with timely consumers as well as with slow or non-existent consumers). When memory consumption gets to around 250 MB and the thread count exceeds 5000 (as shown in Windows Task Manager), ActiveMQ crashes and I see this in the log:

        Exception in thread "ActiveMQ Transport Initiator: vm://localhost#3564"
        java.lang.OutOfMemoryError: unable to create new native thread

    It is as if Jetty is spawning a new thread to handle each HTTP POST and the thread never dies. I did look at this page: http://activemq.apache.org/javalangoutofmemory.html and tried its suggestions, but that didn't fix the problem (although I didn't fully understand the implications of the changes either). Does anyone have any ideas? Thanks! Bruce Loth

    PS - I included the test message producer Python script below for what it is worth. I created batches of 100 messages and kept running the script manually from the command line while watching the memory consumption and thread count of ActiveMQ in Task Manager.

        def foo():
            import httplib, urllib
            body = "<?xml version='1.0' encoding='UTF-8'?>\n \
                    <ROOT>\n \
                    [snip: xml deleted to save space] </ROOT>"
            headers = {"content-type": "text/xml",
                       "content-length": str(len(body))}
            conn = httplib.HTTPConnection("127.0.0.1:8161")
            conn.request("POST", "/ow/message/RDRCP_Inbox?type=queue", body, headers)
            response = conn.getresponse()
            print response.status, response.reason
            data = response.read()
            conn.close()
        ## end method definition

        ## Begin test code
        count = 0
        while count < 100:   # test with batches of 100 msgs
            count += 1
            foo()

  • Are there any free .NET OCR libraries that will perform OCR on an application window directly?

    - by Kelsey
    I am looking for a free .NET OCR library that can do OCR on a given application window, or on an image in memory (I can take a snapshot of the application window myself). I have looked at tessnet2 and MODI, but both require an image located on disk. I need to use OCR because the application I am trying to write a script for does some wacky stuff that cannot be read using the Windows API, and I need to scrape data from the screen. I have tested both tessnet2 and MODI, and they can both mostly read the text, but because this has to run in an environment that cannot write to disk, I need to read directly from the application window or some type of memory stream. I am thinking OCR is my only solution, but there could be other methods I am not thinking of. Suggestions?
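
    For the capture half of the problem, grabbing a window's pixels into an in-memory bitmap requires no disk access at all; a sketch, assuming the window's screen rectangle is already known (the OCR engine would then need to accept a Bitmap or stream rather than a file path):

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;

        static MemoryStream CaptureWindow(Rectangle windowRect)
        {
            using (var bmp = new Bitmap(windowRect.Width, windowRect.Height))
            {
                using (Graphics g = Graphics.FromImage(bmp))
                {
                    // Copy the window's on-screen pixels straight into the bitmap.
                    g.CopyFromScreen(windowRect.Location, Point.Empty, windowRect.Size);
                }

                // Keep the snapshot in memory; no temp file is ever written.
                var ms = new MemoryStream();
                bmp.Save(ms, ImageFormat.Png);
                ms.Position = 0;
                return ms;
            }
        }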

  • Qt vs. .NET - REAL comparisons for R.A.D. projects

    - by Pirate for Profit
    Man, in all these Qt vs. .NET discussions, 90% of the people argue about the dumbest crap. I'm trying to get a real comparison chart here, because I know a little about both frameworks but I don't know everything. I believe Qt and .NET both have strengths and weaknesses. This is meant to be a comparison that highlights them, so people can make more informed decisions before embarking on a project, in the spirit of R.A.D.

    Event Handling. In Qt the event handling system is very simple. You just emit signals when something cool happens and then catch them in slots, e.g.

        // run some calculations, then
        emit valueChanged(30, false, 20.2);

    and to catch it, any object can easily declare a slot to receive that message:

        void MyObj::valueChanged(int percent, bool ok, float timeRemaining)

    It's easy to block an event or disconnect it when needed, and it works seamlessly across threads. Once you get the hang of it, it just seems a lot more natural and intuitive than the way .NET event handling is set up (you know, void valueChanged(object sender, CustomEventArgs e)). And I'm not just talking about syntax, because in the end the .NET anonymous delegates are the bomb. I'm talking about more than reflection, too (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down, IMO, for the simplest yet still flexible event handling system ever.

    Plugins and such. I do love some of the ease of C# compared to C++, as well as .NET's assembly architecture, even though it leads to a bunch of DLLs (there are ways to combine everything into a single exe, though). That is a big bonus for modular projects, where importing things is a PITA in C++ as far as RAD is concerned.

    Database Ease of Doing Crap. Also, what about datasets and database manipulation? I think .NET wins here, but I'm not sure.

    Threading/Concurrency. What do you think of the threading? In .NET, all I've ever done is make a list of master/worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things. QtConcurrent is the simplest threading mechanism I've ever played with.

    Memory Usage. Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean up when it's about to really lag? Doesn't the just-in-time compiler produce native code that is pretty good, and doesn't that only happen the first time the program is run?

    However, I am a n00b who doesn't know what I'm talking about; please school me on the subject.

  • How to free members (i.e. BSTR, SAFEARRAY, VARIANT) of an IDL user-defined structure which is encapsulated in a VARIANT

    - by Picaro De Vosio
    Hi, I have a structure defined in IDL. This structure has the following members:

        typedef struct
        {
            BSTR m_sFirst;
            BSTR m_sSecond;
            VARIANT m_vChildStruct;           // this member encapsulates a sub-structure
            SAFEARRAY __RPC_FAR * m_saArray;
        } CustomINFO;

    I am allocating the memory for the structs using CoTaskMemAlloc and encapsulating them in a VARIANT as follows:

        vV->vt = VT_RECORD;
        vV->pvRecord = pStruct;   // pointer to struct
        vV->pRecInfo = pRI;       // IRecordInfo interface

    Is it enough to call VariantClear to deallocate the memory of the struct and its members? Will it also release the IRecordInfo interface? Or do I have to manually get the encapsulated struct, deallocate each member myself, and then use CoTaskMemFree to deallocate the struct? Thanks, Picaro De Vosio

  • Google Toolbox For Mac with Core Data on iPhone results in error

    - by JaanusSiim
    I have set up my project to use Google Toolbox for Mac as described on the official wiki, and everything is working as expected. For Core Data usage I have created a 'database' class that, in the final application, uses SQLite storage (this is done based on Xcode template code). For unit tests I have created a separate init method for the 'database' that uses in-memory storage (the store URL is [NSURL URLWithString:@"memory://store"] and the type is NSInMemoryStoreType). Without adding my model file (*.xcdatamodel) to the unit test target, the tests fail in the expected place with the message:

        executeFetchRequest:error: A fetch request must have an entity.

    If I add the model file to the test target, the tests execute as expected (the Core Data part looks OK), but after the tests run I get:

        RunIPhoneUnitTest.sh: line 123: 9487 Segmentation fault "$TARGET_BUILD_DIR/$EXECUTABLE_PATH" -RegisterForSystemEvents
        Command /bin/sh failed with exit code 139

    This problem does not look directly related to Core Data, but it only happens if the model file is added to the target. Any pointers on resolving this issue would be appreciated!

  • binary search and trie

    - by user121196
    Given a large list of alphabetically sorted words in a file, I need to write a program that, given a word x, determines whether x is in the list. Priorities: 1. speed, 2. memory. I already know I can use (n is the number of words, m is the average length of a word):

    1. a trie: time is O(log(n)), space (best case) is O(log(n*m)), space (worst case) is O(n*m);
    2. loading the complete list into memory, then binary search: time is O(log(n)), space is O(n*m).

    I'm not sure about the complexities given for the trie; please correct me if they are wrong. Also, are there other good approaches?
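
    As a baseline for the second option, here is a minimal sketch (not from the original post) that loads the sorted list once and probes it with a binary search, costing O(log n) string comparisons per lookup:

        using System;
        using System.IO;

        class WordList
        {
            private readonly string[] words;

            public WordList(string path)
            {
                // Assumes one word per line, already sorted in ordinal
                // (byte-wise) order; no further preprocessing needed.
                words = File.ReadAllLines(path);
            }

            public bool Contains(string x)
            {
                // Array.BinarySearch returns a non-negative index on a hit.
                return Array.BinarySearch(words, x, StringComparer.Ordinal) >= 0;
            }
        }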

  • Penalty of using QGraphicsObject vs QGraphicsItem?

    - by Dutch
    I currently have a hierarchy of items based on QGraphicsItem. I want to move to QGraphicsObject instead so that I can put properties on my items. I will not be making use of signals/slots or any other features of QObject. I'm told that you shouldn't derive from QObject because it's "heavy" and "slow". To test the impact, I derived from QGraphicsObject, added a couple of properties to my items, and looked at the memory usage of the running app. I created 1000 items using both flavors and didn't notice anything more than about 10 KB of extra memory usage. Since all I am adding to my items are properties, is it safe to say that QObject only adds significant weight if you are using signals/slots?

  • Problem with the return-to-libc method

    - by jth
    Hi, I'm trying to understand the return-to-libc method. I'm using Ubuntu Linux 9.10, 32-bit, with ASLR disabled. In theory it sounds quite easy: overwrite the saved EIP with the address of system() (or whatever function you want), then put the address to which system() should return, and after that the parameter for system(), the "/bin/bash" string. But what happens is that my exploit keeps segfaulting the vulnerable program. I assume something went wrong with the system() address. This is what I did so far.

    I determined the address of system():

        (gdb) print system
        $1 = {<text variable, no debug info>} 0x167020 <system>
        (gdb) x/x system
        0x167020 <system>: 0x890cec83

    I used the subsequent x/x system because those 3 bytes returned by print system looked like an index into some sort of jump table (PLT?), so I assumed 0x890cec83 is the right address to use for overwriting the saved EIP.

    After that I determined the address of the /bin/bash string in memory, using a small C program which basically consists of this line:

        printf("Address of string /bin/bash: %p\n", getenv("SHELL"));

    Then I looked around a little bit in memory and found /bin/bash:

        (gdb) x/s 0xbffff6ca
        0xbffff6ca: "/bin/bash"

    After I had gathered this information, I filled the buffer:

        (gdb) b 9
        Breakpoint 1 at 0x8048407: file victim.c, line 9.
        (gdb) r `perl -e 'print "A"x9 . "\x83\xec\x0c\x89FAKE\xca\xf6\xff\xbf";'`
        Breakpoint 1, main (argc=1111638594, argv=0xc360cca) at victim.c:10
        10 return 0;
        (gdb) x/s 0xbffff6ca
        0xbffff6ca: "/bin/bash"

    The stack frame looks like this:

        (gdb) i f
        Stack level 0, frame at 0xbffff440:
         eip = 0x8048407 in main (victim.c:10); saved eip 0x890cec83
         source language c.
         Arglist at 0xbffff438, args: argc=1111638594, argv=0xc360cca
         Locals at 0xbffff438, Previous frame's sp is 0xbffff440
         Saved registers:
          ebp at 0xbffff438, eip at 0xbffff43c

    This all seems right to me: the saved EIP was overwritten with the (hopefully) correct system() address, the return address for system() was set to "FAKE" (which shouldn't matter), and the address of /bin/bash also seems correct. But when I continue execution, victim segfaults at some strange address, and certainly not at 0x890cec83:

        (gdb) cont
        Continuing.
        Program received signal SIGSEGV, Segmentation fault.
        0x0804840d in main (argc=Cannot access memory at address 0x41414149) at victim.c:11
        11 }

    Does anyone have an explanation or a hint about what happens here and why execution isn't redirected to 0x890cec83? Thanks in advance; any hint, even a vague one, would be appreciated. I have no idea why this doesn't work.

  • Is it possible to implement a GC.GetAliveInstancesOf<T>() (for use in debugging)?

    - by Omer Raviv
    Hi, I know this was answered before, but I'd like to pose a somewhat different question. Is there any conceivable way to implement GC.GetAliveInstancesOf<T>() such that it can be evaluated in the Visual Studio debugger's Watch window? Sasha Goldstein shows one solution in this article, but it requires every class you want to query to inherit from a specific base class. I'll emphasize that I only want to use this method during debugging, so I don't care that the GC may change an object's address in memory during runtime. One idea might be to somehow harness the !dumpheap -type command of SOS and do some magic trick to create a temporary variable pointing at the memory address printed by SOS. Does anyone have a solution that works?
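
    For reference, the base-class approach mentioned above boils down to a static registry of weak references; a rough sketch (all names invented here, and it still requires opting in per class, which is exactly the limitation the poster wants to avoid):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Trackable<T> where T : Trackable<T>
        {
            // Weak references let the GC collect instances normally;
            // the registry never keeps an object alive by itself.
            private static readonly List<WeakReference> instances = new List<WeakReference>();

            protected Trackable()
            {
                lock (instances) instances.Add(new WeakReference(this));
            }

            public static T[] GetAliveInstances()
            {
                lock (instances)
                {
                    instances.RemoveAll(w => !w.IsAlive);   // prune dead entries
                    return instances.Select(w => w.Target)
                                    .OfType<T>()
                                    .ToArray();
                }
            }
        }

    A class declared as class Foo : Trackable<Foo> could then have Trackable<Foo>.GetAliveInstances() evaluated from the Watch window, property evaluation permitting.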

  • Qt vs .NET - a few comparisons [closed]

    - by Pirate for Profit
    Event Handling. In Qt's event handling system you just emit signals when something cool happens and then catch them in slots, for instance

        emit valueChanged(int percent, bool something);

    caught by

        void MyCatcherObj::valueChanged(int p, bool ok) {}

    blocking and disconnecting them when needed, and doing it across threads. Once you get the hang of it, it just seems a lot more natural and intuitive than the way .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate stuff is the bomb. I'm talking about more than reflection, too (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down, IMO. Basically, the footprints make more sense and you can visualize the project more easily without a clunky event handling system. I wish I could explain it better. The only thing is, I do love some of the ease of C# compared to C++, and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++.

    Database Ease of Doing Crap. Also, what about datasets and database manipulation? I think .NET wins here, but I'm not sure.

    Threading/Concurrency. What do you think of the threading? In .NET, all I've ever done is make a list of master/worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things.

    Memory Usage. Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean up when it's about to really lag?

    However, I am a n00b who doesn't know what I'm talking about; please school me on the subject.

  • How/is data shared between fastCGI processes?

    - by Josh the Goods
    I've written a simple Perl script that I'm running via FastCGI on Apache. The application loads a set of XML data files which are used to look up values based upon the query parameters of an incoming request. As I understand it, if I want to increase the number of concurrent requests my application can handle, I need to allow FastCGI to spawn multiple processes. Will each of these processes have to hold a duplicate copy of the XML data in memory? Is there a way to set things up so that I can have one copy of the XML data loaded in memory while increasing the capacity to handle concurrent requests?

  • While loop in IL - why stloc.0 and ldloc.0?

    - by Michael Stum
    I'm trying to understand what a while loop looks like in IL. I have written this C# function:

        static void Brackets()
        {
            while (memory[pointer] > 0)
            {
                // Snipped body of the while loop, as it's not important
            }
        }

    The IL looks like this:

        .method private hidebysig static void Brackets() cil managed
        {
          // Code size 37 (0x25)
          .maxstack 2
          .locals init ([0] bool CS$4$0000)
          IL_0000: nop
          IL_0001: br.s     IL_0012
          IL_0003: nop
          // Snipped body of the while loop, as it's not important
          IL_0011: nop
          IL_0012: ldsfld   uint8[] BFHelloWorldCSharp.Program::memory
          IL_0017: ldsfld   int16 BFHelloWorldCSharp.Program::pointer
          IL_001c: ldelem.u1
          IL_001d: ldc.i4.0
          IL_001e: cgt
          IL_0020: stloc.0
          IL_0021: ldloc.0
          IL_0022: brtrue.s IL_0003
          IL_0024: ret
        } // end of method Program::Brackets

    For the most part this is really simple, except for the part after cgt. What I don't understand is the local [0] and the stloc.0/ldloc.0 pair. As far as I can see, cgt pushes the comparison result onto the stack, stloc.0 pops it into the local variable, ldloc.0 pushes it onto the stack again, and brtrue.s reads it from the stack. What is the purpose of doing this? Couldn't it be shortened to just cgt followed by brtrue.s?

    Read the article

  • Server configuration problem: a PHP script just dies with no error log and no reason

    - by Roberto
    Hi (first of all, thanks for your attention, and sorry for my bad English hahaha; also, this is not a programming error, or that's what I think. I believe this is an error in some configuration of the server or something else, but I don't know what).

    I have a PHP script (running as a Linux process, not in the web browser) that sends SMS via SMPP on port 2055 (using sockets in PHP) and then inserts around 10,000 rows into a MySQL database; the script gets the data from an XML file. At first it was running on a shared server (HostGator is our hosting provider), and in the beginning it worked fine, with no trouble. But 5 months later an error appeared: the process just dies for no reason. The script had sent and inserted only about 700 rows, the process didn't show any warning or error, nothing appears in the error logs, and I hadn't made any change to the script.

    HostGator never helped us hahaha, so we decided to move the script from the shared server to a dedicated server. I thought it was a memory problem or something like that, but when we moved the script to the dedicated server the problem just got worse: the script dies when it has sent and inserted only 40 to 50 rows in the database.

    Some information about this error: the shared server is on Red Hat 4.1.2-46 and the dedicated server is on CentOS 5.4. I have commented out the line that sends the SMS, and the problem remains. On the shared server the script originally died after inserting about 700 rows; now it dies after about 2,500 rows, which is better, but we didn't change anything. On the dedicated server the script dies after inserting about 40 rows. Before it dies, the script turns into a zombie process and we don't know why. Its memory usage appears to be 0.3%, and its CPU usage 0.7% to 1%. I have changed PHP's memory limit to 128 MB and even to -1 (so PHP has no limit), but the problem remains. We have a limit of 50 simultaneous MySQL connections, so I don't think that is the problem. I'm using mysqli to connect from PHP to MySQL. HostGator reports that they haven't made any change or update to the servers.

    What could be the problem? What should I do? What should I search for? Is there something in the logic I'm missing? What steps should I follow when managing and debugging problems with processes on Linux? Thank you very much; I think this is not a programming problem, but you have more experience than me, so you can tell me. Thanks!!! Bye!!! :)
