Search Results

Search found 1544 results on 62 pages for 'heap corruption'.

Page 5 of 62

  • Heap Behavior in C++

    - by wowus
    Is there anything wrong with the optimization of overloading the global operator new to round up all allocations to the next power of two? Theoretically, this would lower fragmentation at the cost of higher worst-case memory consumption, but does the OS already do something like this behind the scenes, making it redundant, or does it do its best to conserve memory? Basically, given that memory usage isn't as much of an issue as performance, should I do this?

    Read the article

  • Thread safety with heap-allocated memory

    - by incrediman
    I was reading http://en.wikipedia.org/wiki/Thread_safety and wondered: is the following function thread-safe? void foo(int y){ int * x = new int[50]; /*...do some stuff with the allocated memory...*/ delete[] x; } The article says that to be thread-safe you can only use variables from the stack. Really? Why? Wouldn't separate calls to the above function each allocate their memory elsewhere?

    Read the article

  • Why is heap size fixed on JVMs?

    - by themel
    Can anyone explain to me why JVMs (I didn't check too many, but I've never seen one that didn't do it this way) need to run with a fixed heap size? I know it's easier to implement on a simple contiguous heap, but the Sun JVM is now over a decade old, so I'd expect them to have had time to improve this. Needing to define the maximum memory size of your program at startup time seems such a 1960s thing to do, and then there are the bad interactions with OS virtual memory management (the GC retrieving swapped-out data, the inability to determine from the OS side how much memory the Java process is really using, huge amounts of VM address space wasted (I know, you don't care on your fancy 48-bit machines...)). I also suspect that the various sad attempts to build small operating systems inside the JVM (EE application servers, OSGi) are at least partly to blame on this circumstance, since running multiple Java processes on one system invariably wastes resources: you have to give each of them the memory it might need at peak. Surprisingly, Google didn't turn up the storms of outrage over this that I would have expected; perhaps they are just buried under the millions of people finding out about the fixed heap size and accepting it as a fact.

    Read the article

  • Oracle Unbreakable Enterprise Kernel and Emulex HBA Eliminate Silent Data Corruption

    - by sergio.leunissen
    Yesterday, Emulex announced that it has added support for T10 Protection Information (T10-PI), formerly called T10-DIF, to a number of its HBAs. When used with Oracle's Unbreakable Enterprise Kernel, this will prevent silent data corruption and help ensure the integrity and regulatory compliance of user data as it is transferred from the application to the SAN. From the press release: Traditionally, protecting the integrity of customers' data has been done with multiple discrete solutions, including Error Correcting Code (ECC) and Cyclic Redundancy Check (CRC), but there have been coverage gaps across the I/O path from the operating system to the storage. The implementation of the T10-PI standard via Emulex's BlockGuard feature, in conjunction with other industry players' implementations, ensures that data is validated as it moves through the data path, from the application, to the HBA, to storage, enabling seamless end-to-end integrity. Read the white paper and don't miss the live webcast on eliminating silent data corruption on December 16th!

    Read the article

  • Where is my memory allocated, stack or heap? Can I find out at run-time?

    - by AKN
    I know that memory allocated using new gets its space on the heap, so we need to delete it before the program ends to avoid a memory leak. Let's look at this program... Case 1: char *MyData = new char[20]; _tcscpy(MyData,"Value"); . . . delete[] MyData; MyData = NULL; Case 2: char *MyData = new char[20]; MyData = "Value"; . . . delete[] MyData; MyData = NULL; In case 2, instead of copying the value into the heap memory, the pointer is reassigned to a string literal. Now the delete crashes, AS EXPECTED, since it is no longer deleting heap memory. Is there a way to know whether a pointer is pointing to the heap or to the stack? That way the programmer will not try to delete any stack memory, and he can investigate why this pointer, which initially pointed to heap memory, has been made to refer to local literals. What happened to the heap memory in the middle? Is it now pointed to by another pointer and deleted elsewhere, and so on?

    Read the article

  • Graphical corruption during Linux startup and shutdown - should I be worried?

    - by Macha
    For the last month, when starting up or shutting down my laptop under Linux, I have been getting graphical corruption. Startup shows a colour-inverted, grainy rendition of what should be displayed, while shutdown shows a red background with all the text replaced by grey rectangles. At the very least this affects Fedora, Ubuntu and Xubuntu; Windows is not affected. Outside of startup/shutdown the system is fine. Should I be worried about this?

    Read the article

  • In windbg, how do I get a heap header address from !heap -l results?

    - by Kevin
    I am playing around with windbg's !heap command, particularly the "-l" switch, which detects memory leaks. When -l does detect a leak, I am having problems navigating from its results to a stack trace for the source of the leak. Here is a snippet of the results from !heap -l:
    0:066> !heap -l
    Searching the memory for potential unreachable busy blocks.
    Entry     User      Heap      Segment   Size  PrevSize  Unused  Flags
    0324b500  0324b508  01580000  03230000  20    60        a       busy
    0324b520  0324b528  01580000  03230000  20    20        a       busy
    0324b5c8  0324b5d0  01580000  03230000  20    28        a       busy
    Windbg's documentation for !heap tells me to use dt _DPH_BLOCK_INFORMATION with the header address, followed by dds with the block's StackTrace field. But the output for !heap -l doesn't specify a header address! It only specifies Entry, User, Heap, and Segment. I've racked my brain looking over the other commands but can't figure out how to get the header address from any of these fields. Can someone help?

    Read the article

  • Too many heap subpools might break the upgrade

    - by Mike Dietrich
    Recently one of our new upcoming Oracle Database 11.2 reference customers upgraded their production database - a huge EBS system - from Oracle 9.2.0.8 to Oracle Database 11.2.0.2. They had tested very well, and we had optimized the upgrade process, the recompilation timings etc. But once the live upgrade was done it failed in the JAVA component with this error: begin if initjvmaux.startstep('CREATE_JAVA_SYSTEM') then * ORA-29553: class in use: SYS.javax/mail/folder ORA-06512: at "SYS.INITJVMAUX", line 23 ORA-06512: at line 5 Support's diagnosis was pretty quick and referred to: Bug 10165223 - ORA-29553: class in use: sys.javax/mail/folder during database upgrade But how could this happen? Actually I don't know, as we had used the same init.ora setup on test and production. The only difference: the prod system has more CPUs and RAM. Anyway, the bug lists as workarounds either decreasing the SGA to less than 1 GB or decreasing the number of heap subpools to 1. This query helped to diagnose the number of heap subpools: select count(distinct kghluidx) num_subpools from x$kghlu where kghlushrpool = 1; The result was 2, so we re-ran the upgrade with this parameter set: _kghdsidx_count=1 And this time it worked well. One sad thing: after the upgrade failed, Support recommended restoring the whole database - which took an additional 3-4 hours. As the ORACLE SERVER component had already been upgraded successfully at the stage where the error happened, it would have been fine to continue with the manual upgrade and start the catupgrd.sql script. It would have detected that ORACLE SERVER was already upgraded and just picked up the non-upgraded components. The good news: I now have one extra slide to add to our workshop presentation.

    Read the article

  • Random memory corruption going undetected by memtest86

    - by sds
    Thinkpad T520; Ubuntu 12.04.1 LTS; kernel 3.2.0-33-generic; 16 GB of RAM. Memtest86+ ran for 26 hours, 9 passes, no errors. Booted into "recovery mode"; ran fsck on all filesystems - no errors; "check all packages" - no errors. Apparent random memory corruption: perl/R/chrome segfault every now and then, seemingly at random; sort(1) produces corrupt, unsorted files. What could possibly be wrong, and how do I debug it?

    Read the article

  • CVE-2012-0444 Memory corruption vulnerability in Ogg Vorbis

    - by chandan
    CVE-2012-0444: Memory corruption vulnerability in libvorbis; CVSSv2 base score 10.0. Resolution: Solaris 11 11/11 SRU 8.5; Solaris 10 SPARC patch 148006-01, X86 patch 148007-01. This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Getting the start address of the current process's heap?

    - by beta
    Hey, I am exploring the lower-level workings of the system, and was wondering how malloc determines the start address of the heap. Is the heap at a constant offset, or is there a call of some sort to get the start address? Does the stack affect the start address of the heap? Thanks, Braden McDorman

    Read the article

  • How to get just the free heap size (not together with stack/method memory) in Java?

    - by Pentium10
    I want to calculate the heap usage for my app. I would like to get a percentage value of heap size only. How do I get the value in code for the currently running app? EDIT The upvoted answer is NOT complete/correct. The values returned by those methods include the stack and method area too, and I need to monitor only heap size. With that code I got a HeapError exception when I reached 43%, so I can't use those methods to monitor just the heap.
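
    A minimal sketch of one way to read heap-only figures, assuming a standard JVM where the java.lang.management API is available (it is not part of Android): MemoryMXBean reports the heap separately from the non-heap areas such as the method area.

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;
        import java.lang.management.MemoryUsage;

        public class HeapOnlyUsage {
            public static void main(String[] args) {
                MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
                MemoryUsage heap = memoryBean.getHeapMemoryUsage();       // heap only
                MemoryUsage nonHeap = memoryBean.getNonHeapMemoryUsage(); // method area etc.

                long used = heap.getUsed();
                long max = heap.getMax(); // -1 if no maximum is defined
                double percent = max > 0 ? 100.0 * used / max : Double.NaN;

                System.out.printf("heap: %d of %d bytes used (%.1f%%)%n", used, max, percent);
                System.out.printf("non-heap: %d bytes used%n", nonHeap.getUsed());
            }
        }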

    Read the article

  • Moving files with batch files from one pc to a server, to another pc - worried about disk corruption

    - by AnchientAnt
    I use scheduled tasks that call a batch file, which calls more batch files, to move about three files from a PC to a server, then to multiple other PCs. It all happens very quickly, as they are small files. Are there any pitfalls in how fast these transfers happen? I'm just mildly concerned about somehow causing some disk corruption. I use logic like: 1. Call MapToPc; if files exist, then move the file to a folder on the server; disconnect. 2. Call SendtoPCs; if files exist (the files just moved to the server), then MapToPCs, move all files, disconnect. All of this happens in about 2 seconds or less. Edit: this is on Windows 7, Server 2003 and XP respectively.

    Read the article

  • CVE-2012-5195 Heap Buffer Overrun vulnerability in Perl

    - by Ritwik Ghoshal
    CVE-2012-5195: Heap buffer overrun vulnerability in Perl 5.12; CVSSv2 base score 5.1. Resolution: Solaris 11.1 SRU 11.1.7.5.0. This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Surprising corruption and never-ending fsck after resizing a filesystem.

    - by Steve Kemp
    The system in question has Debian Lenny installed, running a 2.65.27.38 kernel. The system has 16 GB of memory and 8 x 1 TB drives running behind a 3Ware RAID card. The storage is managed via LVM. Short version: we are running a KVM guest which had 1.7 TB of storage allocated to it. The guest was reaching a full disk, so we decided to resize the disk that it was running upon. We're pretty familiar with LVM and KVM, so we figured this would be a painless operation: Stop the KVM guest. Extend the size of the LVM partition: "lvextend -L+500Gb ..." Check the filesystem: "e2fsck -f /dev/mapper/..." Resize the filesystem: "resize2fs /dev/mapper/" Start the guest. The guest booted successfully, and running "df" showed the extra space; however, a short time later the system decided to remount the filesystem read-only, without any explicit indication of error. Being paranoid, we shut the guest down and ran the filesystem check again. Given the new size of the filesystem we expected this to take a while, but it has now been running for 24 hours and there is no indication of how long it will take. Using strace I can see the fsck is "doing stuff"; similarly, running "vmstat 1" I can see that there are a lot of block input/output operations occurring. So now my question is threefold: Has anybody come across a similar situation? Generally we've done this kind of resize in the past with zero issues. What is the most likely cause? (The 3Ware card shows the RAID arrays of the backing stores as being A-OK, the host system hasn't rebooted, and nothing in dmesg looks important/unusual.) Ignoring btrfs + ext3 (not mature enough to trust), should we make our larger partitions with a different filesystem in the future, to avoid either this corruption (whatever the cause) or to reduce the fsck time? XFS seems like the obvious candidate?

    Read the article

  • Screen Corruption in half the screen only

    - by Guy DAmico
    About 50% of my NATTY desktop screen is corrupted. Once that happens I can re-boot as many times as I want but the problem continues. If I log out and then into WINDOWS for a day, I may be successful and boot UBUNTU with a good screen. The desktop is formatted correctly and there's no pixelation; rather, there is a fine-grained white crosshatch pattern covering the entire screen. If I open any application the screen corruption worsens, eventually to the point where I can no longer make out anything. I ran a RAM memory test without any errors. I have no display issues when running WINDOWS 7. Any ideas? My computer is a dual-boot stock DELL 5150 with 3 GB of RAM and on-board video.

    Read the article

  • The DBA Team tackles data corruption

    Paul Randal joins the team in this instalment of the DBA Team saga. In this episode, Monte Bank is trying to cover up insider trading - using data corruption to eliminate the evidence, and a patsy DBA to take the blame. It's a great story with useful advice on how to perform thorough data recovery tasks. "A real time saver" - Andy Doyle, Head of IT Services. Andy and his team saved time by automating backups and restores with SQL Backup Pro. Find out how much time you could save. Download a free trial now.

    Read the article

  • URGENT: Patches Needed to Prevent Data Corruption in Oracle Payments

    - by LuciaC
    Development are seeing a number of datafix bugs being logged related to PPR committing data in Payments (IBY) with the corresponding payments missing in Payables. These bugs have been investigated and fixed; however, customers need to proactively apply these fixes to prevent data corruption. There are two root-cause patches available for this case of partial data commit. It is critical that all R12/12.1 Payments customers apply the following two patches ASAP: a) Patch 11699958: R12: Error during PPR Leads to Incomplete Data Commit and Inconsistent Status (Doc ID 1338425.1) b) Patch 15867522: Confirmed PPR Batches Show Payment Initiated - Data Exist Only in IBY Tables (Doc ID 1506611.1)

    Read the article

  • How to reliably take Java Heap Dumps?

    - by karlcyr
    My team is running into difficulties when trying to take good heap dumps triggered by OutOfMemoryErrors. For specific reasons we are currently taking the dumps with jmap called from a bash script instead of using the HeapDumpOnOutOfMemoryError flag. We're using a 64-bit 1.6 JVM with a heap size around 3 GB. Our heap dumps fail 90% of the time (guesstimate). Is there anything we can do to improve our odds of getting a clean heap dump we can use to troubleshoot memory problems? I have read that jmap had major issues in Java 1.4 but that those issues should be mostly addressed now.
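
    One alternative sometimes tried when jmap keeps failing is to have the JVM dump its own heap via the HotSpotDiagnostic MXBean. This only helps where the dump can be triggered from inside the target process (for example from a catch block for OutOfMemoryError), and the sketch below assumes a HotSpot 1.6 JVM where com.sun.management.HotSpotDiagnosticMXBean is available.

        import com.sun.management.HotSpotDiagnosticMXBean;
        import java.lang.management.ManagementFactory;

        public class HeapDumper {
            // Asks the running HotSpot JVM to write an .hprof heap dump of itself.
            // 'live' = true dumps only objects that are still reachable.
            public static void dump(String path, boolean live) throws Exception {
                HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                        ManagementFactory.getPlatformMBeanServer(),
                        "com.sun.management:type=HotSpotDiagnostic",
                        HotSpotDiagnosticMXBean.class);
                bean.dumpHeap(path, live); // fails if the target file already exists
            }

            public static void main(String[] args) throws Exception {
                dump("troubleshoot.hprof", true);
            }
        }

    For dumps triggered from outside the process, jmap remains the usual route.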

    Read the article

  • Default maximum heap size -- Ubuntu 10.04 LTS, openjdk6-jre

    - by sachin
    I just installed openjdk6-jre on Ubuntu 10.04: java version "1.6.0_20", OpenJDK Runtime Environment (IcedTea6 1.9.2) (6b20-1.9.2-0ubuntu1~10.04.1), OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode). Every time I run "java" I get this error: "Error occurred during initialization of VM. Could not reserve enough space for object heap. Could not create the Java virtual machine." This can be solved by specifying a maximum heap size and running "java -Xmx256m". But is there any way to permanently fix this error (i.e. set the default heap size to 256 MB so that I do not need to specify the max heap size every time I run the command)?

    Read the article

  • What do the 'size' numbers mean in the windbg !heap output?

    - by pj4533
    I see output like this in my DMP file:
    Heap entries for Segment00 in Heap 00150000
    00150640: 00640 . 00040 [01] - busy (40)
    00150680: 00040 . 01808 [01] - busy (1800)
    00151e88: 01808 . 00210 [01] - busy (208)
    00152098: 00210 . 00228 [00]
    001522c0: 00228 . 00030 [01] - busy (22)
    001522f0: 00030 . 00018 [01] - busy (10)
    00152308: 00018 . 00048 [01] - busy (3c)
    The WinDbg docs say this:
    Heap entries for Segment00 in Heap 250000
    0x01 - HEAP_ENTRY_BUSY
    0x02 - HEAP_ENTRY_EXTRA_PRESENT
    0x04 - HEAP_ENTRY_FILL_PATTERN
    0x08 - HEAP_ENTRY_VIRTUAL_ALLOC
    0x10 - HEAP_ENTRY_LAST_ENTRY
    0x20 - HEAP_ENTRY_SETTABLE_FLAG1
    0x40 - HEAP_ENTRY_SETTABLE_FLAG2
    0x80 - HEAP_ENTRY_SETTABLE_FLAG3
    Entry    Prev  Cur
    Address  Size  Size  flags  (Bytes used)  (Tag name)
    00250000: 00000 . 00b90 [01] - busy (b90)
    00250b90: 00b90 . 00038 [01] - busy (38)
    00250bc8: 00038 . 00040 [07] - busy (24), tail fill (NTDLL!LDR Database)
    The spacing is weird in the docs though. Do 'Entry', 'Prev' and 'Cur' belong with 'Address', 'Size' and 'Size' on the line below, or are they not for that line? What do 'prev size' and 'cur size' mean, especially with regard to 'bytes used'? And what is the difference between 'bytes used' and 'cur size'?

    Read the article

  • Gnome3 shell video corruption with ATI Radeon HD 4850 on 11.10 Oneiric

    - by AndyAtTheWebists
    I have a problem similar to what's been mentioned in a few other questions, namely: Ati incompatible with Gnome-shell? Gnome Shell Glitched Top Bar Ubuntu 11.10 I have read a number of other posts here and on other forums and have tried a bunch of different solutions. The Problem: The problem manifests itself only in Gnome3. I have tried KDE, Unity and XFCE and all are fine. The graphics corruption is visible only on gnome panels (see images below). Everything works fine with the free ATI drivers, but they just lack power. The problem occurs with the proprietary ones from AMD/ATI. I have installed versions 11.11 and 12.1 as per the instructions on wiki.cchtml.com/index.php/Ubuntu_Oneiric_Installation_Guide. I have the exact same problem in both cases. I have tried this on clean installs of Ubuntu 11.10 Oneiric Ocelot (x64 and x86) and on Linux Mint 12 (x64) with the same results. Also, it looks like after some time of not using it, the PC just freezes. Maybe it's overheating? I will look into it. Things I've Tried: Different versions of drivers from ATI, including the latest - this worked for some, not for me. Installing the Oneiric-specific package generated from the driver, as well as the default install. Removed the Unity global menu. Disabled the file manager handling the desktop. Disabled Compiz "detect refresh rate". Turned "sync to VBlank" in Compiz on and off. Please help! This is the first time in 10 years that I've finally had the chance to switch my primary desktop to Linux (stopped doing .NET dev work), and this is really getting me down. [Screenshots of the corrupted panels were attached to the original post.]

    Read the article

  • Using Bulk Operations with Coherence Off-Heap Storage

    - by jpurdy
    Some NamedCache methods (including clear(), entrySet(Filter), aggregate(Filter, …), invoke(Filter, …)) may generate large intermediate results. The size of these intermediate results may cause out-of-memory exceptions on cache servers, and in some cases on cache clients. The situation is worse if out-of-memory exceptions occur on more than one server (since these operations may be cluster-wide), or if these exceptions cause additional memory use on the surviving servers as they take over partitions from the failed servers. Clusters that use off-heap storage (such as the NIO or Elastic Data storage options) are especially exposed, since these options allow larger-than-normal cache sizes but do nothing to address the size of intermediate results or final result sets. One workaround is to use a PartitionedFilter, which allows the application to break up a larger operation into a number of smaller operations, each targeting either a set of partitions (useful for reducing the load on each cache server) or a set of members (useful for managing client result set sizes). It is also possible to return a key set first, and then pull in the full entries using that key set. This also allows the application to take advantage of near caching, though that may be of limited value if the result is large enough to cause near cache thrashing.
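
    As a rough illustration of the PartitionedFilter workaround described above - a sketch only, assuming the cache runs on a partitioned service and the com.tangosol PartitionedFilter/PartitionSet query classes are available - querying one partition at a time keeps each intermediate result small:

        import com.tangosol.net.NamedCache;
        import com.tangosol.net.PartitionedService;
        import com.tangosol.net.partition.PartitionSet;
        import com.tangosol.util.Filter;
        import com.tangosol.util.filter.PartitionedFilter;

        import java.util.HashSet;
        import java.util.Set;

        public class PartitionedQuery {
            // Runs 'filter' against one partition at a time so that no single call
            // has to materialize the whole result set on a server or on the client.
            public static Set queryKeysByPartition(NamedCache cache, Filter filter) {
                PartitionedService service = (PartitionedService) cache.getCacheService();
                int partitionCount = service.getPartitionCount();

                Set allKeys = new HashSet();
                PartitionSet parts = new PartitionSet(partitionCount);
                for (int p = 0; p < partitionCount; p++) {
                    parts.add(p);
                    // Returning keys first keeps each result small; full entries can
                    // then be fetched in batches with cache.getAll(...).
                    allKeys.addAll(cache.keySet(new PartitionedFilter(filter, parts)));
                    parts.remove(p);
                }
                return allKeys;
            }
        }

    Grouping several partitions per pass, or batching by member instead, trades fewer round trips for larger per-call results.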

    Read the article
