Search Results

Search found 1544 results on 62 pages for 'heap corruption'.

Page 54/62 | < Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >

  • Delphi Unicode String Type Stored Directly at its Address (or "Unicode ShortString")

    - by Andreas Rejbrand
    I want a string type that is Unicode and that stores the string directly at the address of the variable, as is the case with the (Ansi-only) ShortString type. I mean, if I declare S: ShortString and let S := 'My String', then, at @S, I will find the length of the string (as one byte, so the string cannot contain more than 255 characters) followed by the ANSI-encoded string itself. What I would like is a Unicode variant of this. That is, I want a string type such that, at @S, I will find an unsigned 32-bit integer (or a single byte would be enough, actually) containing the length of the string in bytes (or in characters, which is half the number of bytes) followed by the Unicode representation of the string. I have tried WideString, UnicodeString, and RawByteString, but they all appear only to store an address at @S, and the actual string somewhere else (I guess this has to do with reference counting and such). Update: The most important reason for this is probably that it would be very problematic if sizeof(string) were variable. I suspect that there is no built-in type to use, and that I have to come up with my own way of storing text the way I want (which actually is fun). Am I right? Update: I will, among other things, need to use these strings in packed records. I also need to manually read/write these strings to files/the heap. I could live with fixed-size strings, such as <= 128 characters, and I could redesign the problem so it will work with null-terminated strings. But PChar will not work, for sizeof(PChar) = 1 - it's merely an address. The approach I eventually settled on was to use a static array of bytes. I will post my implementation as a solution later today.
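
    A rough illustration of the layout being described here — a fixed capacity, a length prefix, and the code units stored inline at the variable's own address — sketched as a C++ struct purely for illustration (C++ is the language used for all sketches on this page; the asker's eventual solution is a Delphi static array of bytes). The type name InlineUnicodeString128 and the 128-character capacity are made up for the example.

        // Hypothetical sketch: a fixed-capacity string whose length prefix and
        // UTF-16 code units live directly at the variable's address, so sizeof()
        // is constant and the type can sit inside packed records.
        #include <cstdint>
        #include <cstring>

        struct InlineUnicodeString128 {
            std::uint32_t length;        // number of UTF-16 code units in use
            char16_t      data[128];     // payload stored inline, no heap pointer

            void assign(const char16_t* s, std::uint32_t n) {
                length = (n > 128) ? 128 : n;
                std::memcpy(data, s, length * sizeof(char16_t));
            }
        };

        // No hidden pointer: the text itself starts a few bytes past the object's address.
        static_assert(sizeof(InlineUnicodeString128) == 4 + 128 * sizeof(char16_t),
                      "length prefix followed by inline code units");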

    Read the article

  • C# parameters by reference and .net garbage collection

    - by Yarko
    I have been trying to figure out the intricacies of the .NET garbage collection system and I have a question related to C# reference parameters. If I understand correctly, variables defined in a method are stored on the stack and are not affected by garbage collection. So, in this example: public class Test { public Test() { } public int DoIt() { int t = 7; Increment(ref t); return t; } private int Increment(ref int p) { p++; } } the return value of DoIt() will be 8. Since the location of t is on the stack, then that memory cannot be garbage collected or compacted and the reference variable in Increment() will always point to the proper contents of t. However, suppose we have: public class Test { private int t = 7; public Test() { } public int DoIt() { Increment(ref t); return t; } private int Increment(ref int p) { p++; } } Now, t is stored on the heap as it is a value of a specific instance of my class. Isn't this possibly a problem if I pass this value as a reference parameter? If I pass t as a reference parameter, p will point to the current location of t. However, if the garbage collector moves this object during a compact, won't that mess up the reference to t in Increment()? Or does the garbage collector update even references created by passing reference parameters? Do I have to worry about this at all? The only mention of worrying about memory being compacted on MSDN (that I can find) is in relation to passing managed references to unmanaged code. Hopefully that's because I don't have to worry about any managed references in managed code. :)

    Read the article

  • ColdFusion 8 Slow Performance

    - by JoeBob
    We have started a new CF8 app and it is running dog slow. A test where we go around ColdFusion (queries within a database utility) shows normal speed (80ms). CF8 returns the same query in something like 60 to 80 seconds! I have been looking online and seeing lots of posts about CF8 and performance problems, but don't get any overall sense of a solution; just lots of people trying things and saying that they didn't have the problem with CF7. We are also seeing instability on the server, and some errors relating to garbage collection and the memory heap. We have a number of other applications running on CF8 and they perform adequately... our programmer is not an expert or a guru, he just plugs away. We have isolated this down to a single query that takes forever to return, so it is not a complicated test. Are there any known CF8 problems or obvious tweaks that we should consider trying? If we have to start over and learn a new environment, I will never make the deadline. JoeBob

    Read the article

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, Here is the scenario: SQL Server 2000 (8.0.2055) Table currently has 478 million rows of data. The Primary Key column is an INT with IDENTITY. There is an Unique Constraint imposed on two other columns with a Non-Clustered Index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance" Drop the PK and Clustered Index Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT Recreate the PK, with a NON-CLUSTERED index Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT I am not convinced that this is the right thing to do. I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data. Then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and new indexing scheme and then copying the data over, dropping the old table and renaming the new one be a better approach? I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled. I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj

    Read the article

  • Linux time-based sampling profiler

    - by Caspin
    Short version: Is there a good time-based sampling profiler for Linux? Long version: I generally use OProfile to optimize my applications. I recently found a shortcoming that has me wondering. The problem was a tight loop spawning c++filt to demangle a C++ name. I only stumbled upon the code by accident while chasing down another bottleneck. OProfile didn't show anything unusual about the code, so I almost ignored it, but my code sense told me to optimize the call and see what happened. I changed the popen of c++filt to abi::__cxa_demangle. The runtime went from more than a minute to a little over a second. About a 60x speedup. Is there a way I could have configured OProfile to flag the popen call? As the profile data sits now, OProfile thinks the bottleneck was the heap and std::string calls (which BTW, once optimized, dropped the runtime to less than a second, more than a 2x speedup). Here is my OProfile configuration: $ sudo opcontrol --status Daemon not running Event 0: CPU_CLK_UNHALTED:90000:0:1:1 Separate options: library vmlinux file: none Image filter: /path/to/excutable Call-graph depth: 7 Buffer size: 65536 Is there another profiler for Linux that could have found the bottleneck? I suspect the issue is that OProfile only logs its samples to the currently running process. I'd like it to always log its samples to the process I'm profiling. So if the process is currently switched out (blocking on IO or a popen call) OProfile would just place its sample at the blocked call. If I can't fix this, OProfile will only be useful when the executable is pushing near 100% CPU. It can't help with executables that have inefficient blocking calls.

    Read the article

  • Valgrind 'noise', what does it mean?

    - by Chris Huang-Leaver
    When I used valgrind to help debug an app I was working on I noticed a huge amount of noise which seems to be complaining about standard libraries. As a test I did this: echo 'int main() {return 0;}' | gcc -x c -o test - Then I did this: valgrind ./test ==1096== Use of uninitialised value of size 8 ==1096== at 0x400A202: _dl_new_object (in /lib64/ld-2.10.1.so) ==1096== by 0x400607F: _dl_map_object_from_fd (in /lib64/ld-2.10.1.so) ==1096== by 0x4007A2C: _dl_map_object (in /lib64/ld-2.10.1.so) ==1096== by 0x400199A: map_doit (in /lib64/ld-2.10.1.so) ==1096== by 0x400D495: _dl_catch_error (in /lib64/ld-2.10.1.so) ==1096== by 0x400189E: do_preload (in /lib64/ld-2.10.1.so) ==1096== by 0x4003CCD: dl_main (in /lib64/ld-2.10.1.so) ==1096== by 0x401404B: _dl_sysdep_start (in /lib64/ld-2.10.1.so) ==1096== by 0x4001471: _dl_start (in /lib64/ld-2.10.1.so) ==1096== by 0x4000BA7: (within /lib64/ld-2.10.1.so) * large block of similar snipped * ==1096== Use of uninitialised value of size 8 ==1096== at 0x4F35FDD: (within /lib64/libc-2.10.1.so) ==1096== by 0x4F35B11: (within /lib64/libc-2.10.1.so) ==1096== by 0x4A1E61C: _vgnU_freeres (vg_preloaded.c:60) ==1096== by 0x4E5F2E4: __run_exit_handlers (in /lib64/libc-2.10.1.so) ==1096== by 0x4E5F354: exit (in /lib64/libc-2.10.1.so) ==1096== by 0x4E48A2C: (below main) (in /lib64/libc-2.10.1.so) ==1096== ==1096== ERROR SUMMARY: 3819 errors from 298 contexts (suppressed: 876 from 4) ==1096== malloc/free: in use at exit: 0 bytes in 0 blocks. ==1096== malloc/free: 0 allocs, 0 frees, 0 bytes allocated. ==1096== For counts of detected errors, rerun with: -v ==1096== Use --track-origins=yes to see where uninitialised values come from ==1096== All heap blocks were freed -- no leaks are possible. You can see the full result here: http://pastebin.com/gcTN8xGp I have two questions: first, is there a way to suppress all the noise? --show-below-main is set to no by default, but there doesn't appear to be a --show-after-main equivalent.

    Read the article

  • C++ Beginner Delete Question

    - by Pooch
    Hi all, This is my first year learning C++ so bear with me. I am attempting to dynamically allocate memory to the heap and then delete the allocated memory. Below is the code that is giving me a hard time: // String.cpp #include "String.h" String::String() {} String::String(char* source) { this->Size = this->GetSize(source); this->CharArray = new char[this->Size + 1]; int i = 0; for (; i < this->Size; i++) this->CharArray[i] = source[i]; this->CharArray[i] = '\0'; } int String::GetSize(const char * source) { int i = 0; for (; source[i] != '\0'; i++); return i; } String::~String() { delete[] this->CharArray; } Here is the error I get when the compiler tries to delete the CharArray: 0xC0000005: Access violation reading location 0xccccccc0. And here is the last call on the stack: msvcr100d.dll!operator delete(void * pUserData) Line 52 + 0x3 bytes C++ I am fairly certain the error exists within this piece of code but will provide you with any other information needed. Oh yeah, using VS 2010 for XP. Thanks for any and all help!
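
    For what it's worth, the crash pattern here (delete[] on an address near 0xcccccccc, MSVC's debug fill for uninitialized memory) usually means the destructor ran on a String whose default constructor never set CharArray. Below is a minimal sketch of the usual fix; it is an assumption-laden reconstruction rather than the asker's actual header, and the rule-of-three deletions are added because copying the class as posted would double-delete the buffer.

        // Sketch: every constructor leaves CharArray in a deletable state, so the
        // destructor never frees an uninitialized pointer.
        #include <cstddef>

        class String {
        public:
            String() : Size(0), CharArray(nullptr) {}          // was missing in the posted code

            explicit String(const char* source)
                : Size(GetSize(source)), CharArray(new char[Size + 1]) {
                std::size_t i = 0;
                for (; i < Size; ++i) CharArray[i] = source[i];
                CharArray[i] = '\0';
            }

            ~String() { delete[] CharArray; }                  // deleting nullptr is a no-op

            // Rule of three: copying the original class would double-delete the buffer.
            String(const String&) = delete;
            String& operator=(const String&) = delete;

        private:
            static std::size_t GetSize(const char* source) {
                std::size_t i = 0;
                while (source[i] != '\0') ++i;
                return i;
            }

            std::size_t Size;      // declared before CharArray so it is initialized first
            char*       CharArray;
        };

        int main() {
            String a("hello");
            String b;              // a default-constructed String now destroys cleanly
            return 0;
        }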

    Read the article

  • PHP MySQL Weird Update Problem

    - by Tim
    I have a heap-based table in MySQL that I am trying to update via PHP, but for some reason, the updates do not seem to be taking place. Here is my test code: <?php $freepoints[] = 1; $freepoints[] = 2; $freepoints[] = 3; foreach ($freepoints as $entrypoint) { $query = "update gates set lane='{$entrypoint}' where traffic > 50 limit 50"; echo "$query\n"; mysql_query($query); echo mysql_affected_rows()."\n"; } ?> This outputs the following: update gates set lane='1' where traffic > 50 limit 50 50 update gates set lane='2' where traffic > 50 limit 50 50 update gates set lane='3' where traffic > 50 limit 50 50 In the database to start with lanes 1/2/3 had 0 records and lanes 4/5/6 had 100 records. From this I am expecting all 6 lanes to now have 50 records each. However when I look lanes 4/5/6 still have 100 records and 1/2/3 still have 0 records. When I copy the query "update gates set lane='1' where traffic > 50 limit 50" into phpMyAdmin it works absolutely fine, so any ideas why it isn't working in my PHP script when mysql_affected_rows is saying it has updated 50 records?

    Read the article

  • Python - Checking for membership inside nested dict

    - by victorhooi
    heya, This is a followup question to this one: http://stackoverflow.com/questions/2901422/python-dictreader-skipping-rows-with-missing-columns Turns out I was being silly, and using the wrong ID field. I'm using Python 3.x here. I have a dict of employees, indexed by a string, "directory_id". Each value is a nested dict with employee attributes (phone number, surname etc.). One of these values is a secondary ID, say "internal_id", and another is their manager, call it "manager_internal_id". The "internal_id" field is non-mandatory, and not every employee has one. (I've simplified the fields a little, both to make it easier to read, and also for privacy/compliance reasons). The issue here is that we index (key) each employee by their directory_id, but when we look up their manager, we need to find managers by their "internal_id". Before, when employee.keys() was a list of internal_ids, I was using a membership check on this. Now, the last part of my if statement won't work, since the internal_ids are part of the dict values, instead of the key itself. def lookup_supervisor(manager_internal_id, employees): if manager_internal_id is not None and manager_internal_id != "" and manager_internal_id in employees.keys(): return (employees[manager_internal_id]['mail'], employees[manager_internal_id]['givenName'], employees[manager_internal_id]['sn']) else: return ('Supervisor Not Found', 'Supervisor Not Found', 'Supervisor Not Found') So the first question is, how do I check whether the manager_internal_id is present in the dict's values? I've tried substituting employee.keys() with employee.values(), that didn't work. Also, I'm hoping for something a little more efficient, not sure if there's a way to get a subset of the values, specifically, all the entries for employees[directory_id]['internal_id']. Hopefully there's some Pythonic way of doing this, without using a massive heap of nested for/if loops. My second question is, how do I then cleanly return the required employee attributes (mail, givenname, surname etc.)? My for loop is iterating over each employee, and calling lookup_supervisor. I'm feeling a bit stupid/stumped here. def tidy_data(employees): for directory_id, data in employees.items(): # We really shouldnt' be passing employees back and forth like this - hmm, classes? data['SupervisorEmail'], data['SupervisorFirstName'], data['SupervisorSurname'] = lookup_supervisor(data['manager_internal_id'], employees) Thanks in advance =), Victor

    Read the article

  • Null Pointer Exception in Primavera P6 8.1

    - by gwrichard
    I am getting a null pointer exception in a Primavera P6 8.1 installation. The exception only occurs in one section of the web-client: Application settings.P6 web and the P6 API are deployed on the same WebLogic (10.3.5) node running on a Windows Server 2008 R2 installation. I have done this installation using this same software stack a dozen times and don't have this issue on any of the other installs. Exact error below: Match: beginTraversal Match: digest selected JREDesc: JREDesc[version 1.6.0_20+, heap=-1--1, args=null, href=http://java.sun.com/products/autodl/j2se, sel=false, null, null], JREInfo: JREInfo for index 0: platform is: 1.7 product is: 1.7.0_17 location is: http://java.sun.com/products/autodl/j2se path is: C:\Program Files (x86)\Java\jre7\bin\javaw.exe args is: null native platform is: Windows, x86 [ x86, 32bit ] JavaFX runtime is: JavaFX 2.2.7 found at C:\Program Files (x86)\Java\jre7\ enabled is: true registered is: true system is: true Match: ignoring maxHeap: -1 Match: ignoring InitHeap: -1 Match: digesting vmargs: null Match: digested vmargs: [JVMParameters: isSecure: true, args: ] Match: JVM args after accumulation: [JVMParameters: isSecure: true, args: ] Match: digest LaunchDesc: http://localhost:7001/p6/action/jnlp/appletsjnlp.jnlp?mainClass=com.primavera.pvapplets.adminpreferences.AdminPreferencesApplet&classPath=adminpreferences.jar,prm-applets-common.jar,forms-1.0.7.jar,prm-guisupport.jar,prm-to.jar,jide.jar,tablesupport.jar,formsupport.jar,applets-bo.jar,commons-lang.jar,prm-common.jar,resource_strings.jar,prm-img.jar,commons-logging.jar&name=AdminPreferences&version=8.1.2.0.0602 Match: digest properties: [] Match: JVM args: [JVMParameters: isSecure: true, args: ] Match: endTraversal .. Match: JVM args final: Match: Running JREInfo Version match: 1.7.0.17 == 1.7.0.17 Match: Running JVM args match: have:<> satisfy want:<> Java Plug-in 10.17.2.02 Using JRE version 1.7.0_17-b02 Java HotSpot(TM) Client VM User home directory = C:\Users\gwrichard ---------------------------------------------------- c: clear console window f: finalize objects on finalization queue g: garbage collect h: display this help message l: dump classloader list m: print memory usage o: trigger logging q: hide console r: reload policy configuration s: dump system and deployment properties t: dump thread list v: dump thread stack x: clear classloader cache 0-5: set trace level to <n> ----------------------------------------------------

    Read the article

  • When do instance variables get initialized and values assigned?

    - by AKh
    When does the instance variable get initialized? Is it after the constructor block is done or before it? Consider this example: public abstract class Parent { public Parent(){ System.out.println("Parent Constructor"); init(); } public void init(){ System.out.println("parent Init()"); } } public class Child extends Parent { private Integer attribute1; private Integer attribute2 = null; public Child(){ super(); System.out.println("Child Constructor"); } public void init(){ System.out.println("Child init()"); super.init(); attribute1 = new Integer(100); attribute2 = new Integer(200); } public void print(){ System.out.println("attribute 1 : " +attribute1); System.out.println("attribute 2 : " +attribute2); } } public class Tester { public static void main(String[] args) { Parent c = new Child(); ((Child)c).print(); } } OUTPUT: Parent Constructor Child init() parent Init() Child Constructor attribute 1 : 100 attribute 2 : null When is the memory for attributes 1 & 2 allocated on the heap? Curious to know why attribute 2 is null? Are there any design flaws?

    Read the article

  • Getting a weird issue with the TO_NUMBER function in Oracle

    - by Fazal
    I have been getting an intermittent issue when executing the to_number function in the where clause on a varchar2 column if the number of records exceeds a certain number n. I used n as there is no exact number of records on which it happens. On one DB it happens after n was 1 million, on another when it was 0.1 million. E.g. I have a table with 10 million records, say Table Country, which has field1 varchar2 containing numeric data and Id. If I do a query as an example select * from country where to_number(field1) = 23 and id > 1 and id < 100000 This works But if I do the query select * from country where to_number(field1) = 23 and id > 1 and id < 100001 It fails saying invalid number Next I try the query select * from country where to_number(field1) = 23 and id > 2 and id < 100001 It works again As I only got invalid number it was confusing, but in the log file it said Memory Notification: Library Cache Object loaded into SGA Heap size 3823K exceeds notification threshold (2048K) KGL object name :with sqlplan as ( select c006 object_owner, c007 object_type,c008 object_name from htmldb_collections where COLLECTION_NAME='HTMLDB_QUERY_PLAN' and c007 in ('TABLE','INDEX','MATERIALIZED VIEW','INDEX (UNIQUE)')), ws_schemas as( select schema from wwv_flow_company_schemas where security_group_id = :flow_security_group_id), t as( select s.object_owner table_owner,s.object_name table_name, d.OBJECT_ID from sqlplan s,sys.dba_objects d It seems it's related to SGA size, but Google did not give me much help on this. Does anyone have any idea about this issue with TO_NUMBER or Oracle functions for large data?

    Read the article

  • Setting the default stack size on Linux globally for the program

    - by wowus
    So I've noticed that the default stack size for threads on Linux is 8MB (if I'm wrong, PLEASE correct me), and, incidentally, 1MB on Windows. This is quite bad for my application, as on a 4-core processor that means 64 MB of space is used JUST for threads! The worst part is, I'm never using more than 100kb of stack per thread (I abuse the heap a LOT ;)). My solution right now is to limit the stack size of threads. However, I have no idea how to do this portably. Just for context, I'm using Boost.Thread for my threading needs. I'm okay with a little bit of #ifdef hell, but I'd like to know how to do it easily first. Basically, I want something like this (where windows_* is linked on Windows builds, and posix_* is linked under Linux builds): // windows_stack_limiter.c int limit_stack_size() { // Windows impl. return 0; } // posix_stack_limiter.c int limit_stack_size() { // Linux impl. return 0; } // stack_limiter.cpp int limit_stack_size(); static volatile int placeholder = limit_stack_size(); How do I flesh out those functions? Or, alternatively, am I just doing this entirely wrong? Remember I have no control over the actual thread creation (no new params to CreateThread on Windows), as I'm using Boost.Thread.
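
    For comparison, the underlying POSIX mechanism is pthread_attr_setstacksize at thread-creation time; the sketch below assumes you can reach the pthread_create call, which stock Boost.Thread of that era does not expose, so treat it as an illustration of the mechanism rather than a drop-in limit_stack_size(). (On Linux, the other common lever is lowering the stack ulimit before the process starts, since glibc uses that soft limit as the default thread stack size.)

        // Sketch: create one worker thread with a 256 KB stack instead of the
        // platform default; 256 KB is an arbitrary figure comfortably above
        // PTHREAD_STACK_MIN.  Build with -lpthread.
        #include <pthread.h>
        #include <cstdio>

        static void* worker(void*) {
            // keep automatic variables small; only 256 KB of stack is available here
            return 0;
        }

        int main() {
            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setstacksize(&attr, 256 * 1024);

            pthread_t tid;
            if (pthread_create(&tid, &attr, worker, 0) != 0) {
                std::perror("pthread_create");
                return 1;
            }
            pthread_join(tid, 0);
            pthread_attr_destroy(&attr);
            return 0;
        }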

    Read the article

  • Fastest Java way to remove the first/top line of a file (like a stack)

    - by christangrant
    I am trying to improve an external sort implementation in Java. I have a bunch of BufferedReader objects open for temporary files. I repeatedly remove the top line from each of these files. This pushes the limits of the Java heap. I would like a more scalable method of doing this without losing speed because of a bunch of constructor calls. One solution is to only open files when they are needed, then read the first line and then delete it. But I am afraid that this will be significantly slower. So, using Java libraries, what is the most efficient method of doing this? --Edit-- For external sort, the usual method is to break a large file up into several chunk files. Sort each of the chunks. And then treat the sorted files like buffers, pop the top item from each file, the smallest of all those is the global minimum. Then continue until all items are processed. http://en.wikipedia.org/wiki/External_sorting My temporary files (buffers) are basically BufferedReader objects. The operations performed on these files are the same as stack/queue operations (peek and pop, no push needed). I am trying to make these peek and pop operations more efficient. This is because using many BufferedReader objects takes up too much space.

    Read the article

  • C++: is it safe to work with std::vectors as if they were arrays?

    - by peoro
    I need to have a fixed-size array of elements and to call on them functions that need to know how they're placed in memory, in particular: functions like glVertexPointer, which needs to know where the vertices are, how far apart they are from one another and so on. In my case vertices would be members of the elements to store. To get the index of an element within this array, I'd prefer to avoid having an index field within my elements, but would rather play with pointer arithmetic (i.e.: the index of Element *x will be x - &array[0]) -- btw, this sounds dirty to me: is it good practice or should I do something else? Is it safe to use std::vector for this? Something makes me think that an std::array would be more appropriate but: Constructor and destructor for my structure will be rarely called: I don't mind such overhead. I'm going to set the std::vector capacity to the size I need (the size I would use for an std::array), thus it won't take any overhead due to sporadic reallocation. I don't mind a little space overhead for std::vector's internal structure. I could use the ability to resize the vector (or better: to have a size chosen during setup), and I think there's no way to do this with std::array, since its size is a template parameter (that's too bad: I could do that even with an old C-like array, just dynamically allocating it on the heap). If std::vector is fine for my purpose I'd like to know in detail if it will have some runtime overhead with respect to std::array (or to a plain C array): I know that it'll call the default constructor for any element once I increase its size (but I guess this won't cost anything if my data has got an empty default constructor?), same for destructor. Anything else?
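
    On the layout question specifically: the standard guarantees that std::vector's elements are stored contiguously, so the base pointer and the pointer-arithmetic index trick behave exactly as with a plain array. A small self-contained sketch (Element, its fields and the sizes are made up for illustration; the same base pointer and a sizeof(Element) stride are what a glVertexPointer-style call would be given):

        // Sketch: contiguous storage means &v[0] works wherever a C array would,
        // and an element's index can be recovered by pointer subtraction.
        #include <cstddef>
        #include <iostream>
        #include <vector>

        struct Element {
            float vertex[3];   // interleaved per-element data
            int   payload;
        };

        int main() {
            std::vector<Element> elems(128);        // fixed size chosen at setup

            const Element* base = &elems[0];        // or elems.data() in C++11

            const Element* x = &elems[42];
            std::ptrdiff_t index = x - base;        // == 42, no index member needed

            std::cout << "index = " << index << '\n';
            return 0;
        }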

    Read the article

  • I get this error: "glibc detected"

    - by AKGMA
    Hi I just wrote a piece of CPP code and I compiled it using G++ in ubuntu. When I run my code everything is fine, the code runs well and gives output but doesn't exit and it gives this error: *** glibc detected *** ./a.out: free(): invalid next size (fast): 0x09f931f0 *** ======= Backtrace: ========= /lib/libc.so.6(+0x6c501)[0x3de501] /lib/libc.so.6(+0x6dd70)[0x3dfd70] /lib/libc.so.6(cfree+0x6d)[0x3e2e5d] /usr/lib/libstdc++.so.6(_ZdlPv+0x21)[0x6e2441] ./a.out[0x8049ce6] /lib/libc.so.6(+0x2f69e)[0x3a169e] /lib/libc.so.6(+0x2f70f)[0x3a170f] /lib/libc.so.6(__libc_start_main+0xef)[0x388cef] ./a.out[0x8048a61] ======= Memory map: ======== 00219000-0021a000 r-xp 00000000 00:00 0 [vdso] 00354000-00370000 r-xp 00000000 08:01 8781845 /lib/ld-2.12.1.so 00370000-00371000 r--p 0001b000 08:01 8781845 /lib/ld-2.12.1.so 00371000-00372000 rw-p 0001c000 08:01 8781845 /lib/ld-2.12.1.so 00372000-004c9000 r-xp 00000000 08:01 8781869 /lib/libc-2.12.1.so 004c9000-004ca000 ---p 00157000 08:01 8781869 /lib/libc-2.12.1.so 004ca000-004cc000 r--p 00157000 08:01 8781869 /lib/libc-2.12.1.so 004cc000-004cd000 rw-p 00159000 08:01 8781869 /lib/libc-2.12.1.so 004cd000-004d0000 rw-p 00000000 00:00 0 00638000-00717000 r-xp 00000000 08:01 3935829 /usr/lib/libstdc++.so.6.0.14 00717000-0071b000 r--p 000de000 08:01 3935829 /usr/lib/libstdc++.so.6.0.14 0071b000-0071c000 rw-p 000e2000 08:01 3935829 /usr/lib/libstdc++.so.6.0.14 0071c000-00723000 rw-p 00000000 00:00 0 00909000-0092d000 r-xp 00000000 08:01 8781918 /lib/libm-2.12.1.so 0092d000-0092e000 r--p 00023000 08:01 8781918 /lib/libm-2.12.1.so 0092e000-0092f000 rw-p 00024000 08:01 8781918 /lib/libm-2.12.1.so 00fdb000-00ff5000 r-xp 00000000 08:01 8781903 /lib/libgcc_s.so.1 00ff5000-00ff6000 r--p 00019000 08:01 8781903 /lib/libgcc_s.so.1 00ff6000-00ff7000 rw-p 0001a000 08:01 8781903 /lib/libgcc_s.so.1 08048000-0804b000 r-xp 00000000 08:01 8652645 /home/akg/Desktop/contest/a.out 0804b000-0804c000 r--p 00002000 08:01 8652645 /home/akg/Desktop/contest/a.out 0804c000-0804d000 rw-p 00003000 08:01 8652645 /home/akg/Desktop/contest/a.out 09f93000-09fb4000 rw-p 00000000 00:00 0 [heap] b7600000-b7621000 rw-p 00000000 00:00 0 b7621000-b7700000 ---p 00000000 00:00 0 b7765000-b7768000 rw-p 00000000 00:00 0 b7775000-b7779000 rw-p 00000000 00:00 0 bf9a7000-bf9c8000 rw-p 00000000 00:00 0 [stack] Aborted What does this mean? How can i get rid of it? I'm not using malloc or free , I'm just using vector!
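
    For context, this particular glibc message ("free(): invalid next size") almost always points at a heap buffer overrun somewhere in the program rather than at malloc/free or std::vector themselves; the allocator only notices the damaged bookkeeping when the block is later freed, which matches "runs fine, then aborts at the end". A deliberately broken sketch that typically reproduces the same pattern (the off-by-one is the bug):

        // Sketch: writing one element past a vector's allocation scribbles over the
        // allocator's metadata for the next chunk; the corruption is detected later,
        // when the vector's buffer is freed in its destructor.
        #include <cstddef>
        #include <vector>

        int main() {
            std::vector<int> v(10);
            for (std::size_t i = 0; i <= v.size(); ++i)   // note <= : last iteration is out of bounds
                v[i] = static_cast<int>(i);               // v[10] writes past the allocation
            return 0;                                     // abort typically happens as v is destroyed
        }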

    Read the article

  • MemoryFailPoint fires too early in WinXP 64

    - by msedi
    Hello, I have created a volume class (called VoxelVolume) with self-organizing memory management, since the GC in C# didn't provide a good mechanism for managing the contents of the volume for mapping, unmapping and remapping. Although I could have used the mechanisms of virtual memory, the problem is that the files are often too large to fit into the page file and I don't want to force the users to increase the pagefile size. Currently this system is working quite well and there is no problem with lacking resources or OutOfMemoryExceptions, since the InsufficientMemoryException raised via the MemoryFailPoint works quite well. This was all tested on a 32bit WinXP system with 2GB of main memory. Running the same mechanism on a 64bit system with 32GB of main memory also works well, but when the application runs, the MemoryFailPoint suddenly throws an exception although 24GB of main memory are still free. Another point is that when the MemoryFailPoint has fired once, it fires every time and there is no chance to get rid of it. From what I have read so far, there is a small object heap and a large object heap (SOH and LOH). But the GC only takes real care of the SOH, and I can free the SOH from unused objects by applying GC.Collect() and GC.WaitForPendingFinalizers. The MemoryFailPoint is obviously the only way to get a little bit of control for the LOH, but since there is enough memory left on the system I see no reason why the MemoryFailPoint should fire. Is there any experience around here using the MemoryFailPoint? Thank you for your help Martin

    Read the article

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the nights on our client's server, and then have a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundreds of MB, is moved to the swap file. This happens at night, when our service is not used, and others are scheduled to run (DB backups, AV scans etc). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server, and decrease the overall server throughput. Having said that, our clients just want our app to be responsive. They don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionalities (there is already some internal scheduler that does very little, and makes no difference). If there were a 3rd party tool that provided that kind of service it would have been just as good. I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
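
    For reference, the half-remembered Windows mechanism is the working-set/VirtualLock route; the sketch below is a hedged illustration (pin_region and all the sizes are made up), and it carries exactly the downside described above, so the watchdog approach may well remain preferable.

        // Sketch: raise the process working-set bounds, then pin a hot region so it
        // is not written to the pagefile while the process is running.  All calls
        // used here are documented Win32 APIs; the 64/256 MB bounds are arbitrary.
        #include <windows.h>
        #include <cstdio>
        #include <cstdlib>

        static bool pin_region(void* region, SIZE_T bytes)
        {
            // VirtualLock fails unless the minimum working set can hold the locked
            // pages, hence the call below.
            if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                          64  * 1024 * 1024,    // minimum working set
                                          256 * 1024 * 1024))   // maximum working set
                return false;
            return VirtualLock(region, bytes) != FALSE;
        }

        int main()
        {
            const SIZE_T bytes = 32 * 1024 * 1024;              // hypothetical hot region
            void* region = std::malloc(bytes);
            std::printf("pinned: %s\n", (region && pin_region(region, bytes)) ? "yes" : "no");
            if (region) VirtualUnlock(region, bytes);
            std::free(region);
            return 0;
        }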

    Read the article

  • Using an ActiveX control without a form/dialog/window in C++, VS 2008

    - by younevertell
    I have an ActiveX control generated by Visual Basic 6, very old stuff; it has a couple of methods and one event. Now I have to use the control in C++, Visual Studio 2008, but I don't want to generate a form/dialog/window as a container. I added an MFC wrapper class from the ActiveX control. Now the wrapper class has all the methods defined in the ActiveX control. I definitely need to add an event handler to take care of the event fired by the ActiveX control. With a form/dialogue/window, it could be done with the BEGIN_EVENTSINK_MAP and ON_EVENT macros below. My questions are 1. how do I add the event handler without a form/dialogue/window? Is this impossible? 2. Could I manually add a constructor in the ActiveX control wrapper class, so I could use the new operator to get an instance on the heap? Thanks Add the Event Sinks Map near the start of the definition (.cpp) file. The BEGIN_EVENTSINK_MAP macro takes the container's Class name, then the base class name as parameters. The container's class name is used again in the AFX_EVENTSINK_MAP macro and the ON_EVENT macro.

    Read the article

  • How are two-dimensional arrays formatted in memory?

    - by Chris Cooper
    In C, I know I can dynamically allocate a two-dimensional array on the heap using the following code: int** someNumbers = malloc(arrayRows*sizeof(int*)); for (i = 0; i < arrayRows; i++) { someNumbers[i] = malloc(arrayColumns*sizeof(int)); } Clearly, this actually creates a one-dimensional array of pointers to a bunch of separate one-dimensional arrays of integers, and "The System" can figure out what I mean when I ask for: someNumbers[4][2]; But when I statically declare a 2D array, as in the following line...: int someNumbers[ARRAY_ROWS][ARRAY_COLUMNS]; ...does a similar structure get created on the stack, or is it of another form completely? (i.e. is it a 1D array of pointers? If not, what is it, and how do references to it get figured out?) Also, when I said, "The System," what is actually responsible for figuring that out? The kernel? Or does the C compiler sort it out while compiling?
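
    A small sketch contrasting the two layouts (the names ROWS/COLS/dyn/flat are made up, and exact addresses are of course implementation details). In both cases it is the compiler, not the kernel, that turns someNumbers[i][j] into address arithmetic: a pointer load plus an offset for the malloc'd version, a single i*COLS + j offset from the array base for the declared one.

        // Sketch: the malloc'd version stores ROWS separate row pointers reaching
        // scattered blocks; the declared array is one contiguous block with no
        // pointers stored anywhere.
        #include <cstdio>
        #include <cstdlib>

        enum { ROWS = 4, COLS = 2 };

        int main() {
            // pointer-to-pointer layout: an array of int* plus ROWS separate blocks
            int** dyn = (int**)std::malloc(ROWS * sizeof(int*));
            for (int i = 0; i < ROWS; ++i)
                dyn[i] = (int*)std::malloc(COLS * sizeof(int));

            // flat layout: one block of ROWS*COLS ints, rows adjacent in memory
            int flat[ROWS][COLS];

            std::printf("sizeof(dyn)  = %zu  (just one pointer)\n", sizeof(dyn));
            std::printf("sizeof(flat) = %zu  (ROWS*COLS ints)\n", sizeof(flat));
            std::printf("&flat[1][0] - &flat[0][0] = %td  (== COLS)\n",
                        &flat[1][0] - &flat[0][0]);

            for (int i = 0; i < ROWS; ++i) std::free(dyn[i]);
            std::free(dyn);
            return 0;
        }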

    Read the article

  • How to make an accessor for a Dictionary so that the returned Dictionary cannot be changed (C# / 2.0)

    - by matti
    I thought of solution below because the collection is very very small. But what if it was big? private Dictionary<string, OfTable> _folderData = new Dictionary<string, OfTable>(); public Dictionary<string, OfTable> FolderData { get { return new Dictionary<string,OfTable>(_folderData); } } With List you can make: public class MyClass { private List<int> _items = new List<int>(); public IList<int> Items { get { return _items.AsReadOnly(); } } } That would be nice! Thanks in advance, Cheers & BR - Matti NOW WHEN I THINK THE OBJECTS IN COLLECTION ARE IN HEAP. SO MY SOLUTION DOES NOT PREVENT THE CALLER TO MODIFY THEM!!! CAUSE BOTH Dictionary s CONTAIN REFERENCES TO SAME OBJECT. DOES THIS APPLY TO List EXAMPLE ABOVE? class OfTable { private string _wTableName; private int _table; private List<int> _classes; private string _label; public OfTable() { _classes = new List<int>(); } public int Table { get { return _table; } set { _table = value; } } public List<int> Classes { get { return _classes; } set { _classes = value; } } public string Label { get { return _label; } set { _label = value; } } } so how to make this immutable??

    Read the article

  • maximum memory which malloc can allocate!

    - by Vikas
    I was trying to figure out the maximum amount of memory I can malloc on my machine (1 GB RAM, 160 GB HD, Windows platform). I read that the maximum memory malloc can allocate is limited to physical memory (on the heap). Also, when a program exceeds consumption of memory to a certain level, the computer stops working because other applications do not get enough memory that they require. So to confirm, I wrote a small program in C: int main(){ int *p; while(1){ p=(int *)malloc(4); if(!p)break; } } I hoped that at some point memory allocation would fail and the loop would break. But my computer hung, as it was an infinite loop. I waited for about an hour and finally I had to forcibly shut down my computer. Some questions: Does malloc allocate memory from the HD also? What was the reason for the above behaviour? Why didn't the loop break at any point in time? Why wasn't there any allocation failure?
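
    A sketch of one way to see the limit quickly, under the assumption of a 32-bit process: what malloc usually exhausts first is the free virtual address space (roughly 2 GB of user space on 32-bit Windows), not physical RAM, and 4-byte requests carry so much per-allocation overhead that the machine pages itself to a crawl long before returning NULL. Requesting large chunks and touching them makes both the paging and the eventual NULL visible within a reasonable time.

        // Sketch: allocate 64 MB chunks until malloc finally returns NULL, touching
        // each chunk so the pages are really committed.  The memory is deliberately
        // never freed; the OS reclaims it at process exit.
        #include <cstddef>
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        int main() {
            const std::size_t chunk = 64 * 1024 * 1024;   // 64 MB per request
            std::size_t total_mb = 0;

            for (;;) {
                void* p = std::malloc(chunk);
                if (p == NULL)                            // the failure the 4-byte loop never reached
                    break;
                std::memset(p, 0, chunk);                 // touch the pages
                total_mb += 64;
                std::printf("allocated %zu MB so far\n", total_mb);
            }

            std::printf("malloc returned NULL after about %zu MB\n", total_mb);
            return 0;
        }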

    Read the article

  • Android - BitmapFactory.decodeByteArray - OutOfMemoryError (OOM)

    - by Bob Keathley
    I have read 100s of article about the OOM problem. Most are in regard to large bitmaps. I am doing a mapping application where we download 256x256 weather overlay tiles. Most are totally transparent and very small. I just got a crash on a bitmap stream that was 442 Bytes long while calling BitmapFactory.decodeByteArray(....). The Exception states: java.lang.OutOfMemoryError: bitmap size exceeds VM budget(Heap Size=9415KB, Allocated=5192KB, Bitmap Size=23671KB) The code is: protected Bitmap retrieveImageData() throws IOException { URL url = new URL(imageUrl); InputStream in = null; OutputStream out = null; HttpURLConnection connection = (HttpURLConnection) url.openConnection(); // determine the image size and allocate a buffer int fileSize = connection.getContentLength(); if (fileSize < 0) { return null; } byte[] imageData = new byte[fileSize]; // download the file //Log.d(LOG_TAG, "fetching image " + imageUrl + " (" + fileSize + ")"); BufferedInputStream istream = new BufferedInputStream(connection.getInputStream()); int bytesRead = 0; int offset = 0; while (bytesRead != -1 && offset < fileSize) { bytesRead = istream.read(imageData, offset, fileSize - offset); offset += bytesRead; } // clean up istream.close(); connection.disconnect(); Bitmap bitmap = null; try { bitmap = BitmapFactory.decodeByteArray(imageData, 0, bytesRead); } catch (OutOfMemoryError e) { Log.e("Map", "Tile Loader (241) Out Of Memory Error " + e.getLocalizedMessage()); System.gc(); } return bitmap; } Here is what I see in the debugger: bytesRead = 442 So the Bitmap data is 442 Bytes. Why would it be trying to create a 23671KB Bitmap and running out of memory?

    Read the article

  • Why does this code sample produce a memory leak?

    - by citronas
    At university we were given the following code sample and were told that there is a memory leak when running this code. The sample should demonstrate that this is a situation where the garbage collector can't work. As far as my object oriented programming goes, the only code line able to create a memory leak would be items=Arrays.copyOf(items,2 * size+1); The documentation says that the elements are copied. Does that mean the reference is copied (and therefore another entry on the heap is created) or the object itself is being copied? As far as I know, Object and therefore Object[] are implemented as a reference type. So assigning a new value to 'items' would allow the garbage collector to find that the old 'items' is no longer referenced and can therefore be collected. In my eyes, this code sample does not produce a memory leak. Could somebody prove me wrong? =) import java.util.Arrays; public class Foo { private Object[] items; private int size=0; private static final int ISIZE=10; public Foo() { items= new Object[ISIZE]; } public void push(final Object o){ checkSize(); items[size++]=o; } public Object pop(){ if (size==0) throw new ///... return items[--size]; } private void checkSize(){ if (items.length==size){ items=Arrays.copyOf(items,2 * size+1); } } }

    Read the article

  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exe's, dlls) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines of people in our group. I was successful on two machines and the solution failed to build on my machine. The build on my machine encountered two problems: During a simple build creation of the biggest static library (about 522Mb in debug mode) would fail with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879" Full solution rebuild creates this library, however when it comes to linking the library to main .exe file, devenv.exe spawns link.exe which consumes about 80Mb of physical memory and 250MB of virtual and spawns another link.exe, which does the same. This goes on until the system runs out of memory. On PCs of my colleagues where successful build could be performed, there is only one link.exe process which uses all the memory required for linking (about 500Mb physical). There is a plenty of hard drive space on my machine and the file system is NTFS. All three of our systems are similar - Core2Quad processors, 4Gb of RAM, Windows XP SP3. We are using Visual studio installed from the same source. I tried using a different RAM and CPU, using dedicated graphics adapter to eliminate possibility of video memory sharing influencing the build, putting solution files to different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86 and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106, none of the cases apply to me except for maybe "out of heap space", however I am not sure how I should fight this. The only idea that I have left is reinstalling the OS, however I am not sure that it would help and I am not sure that my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks

    Read the article
