Search Results

Search found 960 results on 39 pages for 'heap'.

Page 27 of 39

  • Getting a java.lang.OutOfMemoryError while running a MIDlet (using NetBeans)

    - by Jeeka
    I am writing a MIDlet (using NetBeans) that reads a file containing exactly 2400 lines (each line 32 characters long), extracts a part of each line, and puts the results in an array. I do the same for 11 such files (all of them have exactly 2400 lines). The MIDlet runs fine while reading the first 6 files into 6 arrays, but it stops on the 7th file with the following exception:

        TRACE: , startApp threw an Exception
        java.lang.OutOfMemoryError (stack trace incomplete)
        java.lang.OutOfMemoryError (stack trace incomplete)

    I have tried modifying the netbeans.conf file to increase the heap memory (as suggested by many forums and blogs), but nothing works for me. Here are the parameters I modified in netbeans.conf:

        -J-Xss2m -J-Xms1024m -J-Xmx1024m -J-XX:PermSize=1024m -J-XX:MaxPermSize=1536m
        -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled

    Can anyone please help me get out of this? I badly need this sorted out ASAP. Thanks in advance!
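
    One thing worth noting before tuning flags: netbeans.conf only sets options for the NetBeans IDE's own JVM, not for the emulator or device the MIDlet actually runs on, so those -J settings do not enlarge the MIDlet's heap. On the application side, holding 11 x 2400 small per-line objects is expensive on a constrained heap; below is a minimal sketch of packing each file's extracted slices into one flat char array instead (the class, the constants, and PART_LENGTH in particular are hypothetical assumptions, not taken from the question):

        import java.io.EOFException;
        import java.io.IOException;
        import java.io.Reader;

        // Hypothetical helper, shown only as an illustration of compact storage.
        final class PartTable {
            static final int LINE_COUNT = 2400;
            static final int PART_LENGTH = 8; // assumed size of the slice kept from each 32-char line

            // Packs the extracted slice of every line into one flat char[] per file,
            // instead of thousands of small per-line objects.
            static char[] readParts(Reader in) throws IOException {
                char[] line = new char[32];
                char[] parts = new char[LINE_COUNT * PART_LENGTH];
                for (int i = 0; i < LINE_COUNT; i++) {
                    int off = 0;
                    while (off < 32) {                        // read one 32-character record
                        int n = in.read(line, off, 32 - off); // (line separators ignored for brevity)
                        if (n < 0) throw new EOFException();
                        off += n;
                    }
                    System.arraycopy(line, 0, parts, i * PART_LENGTH, PART_LENGTH);
                }
                return parts;
            }
        }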

    Read the article

  • Meaning of the "Unloading class" messages

    - by elec
    Can anyone explain why the lines below appear in the console at runtime? (One possible answer would be a full PermGen, but that can be ruled out since the program only uses 24 MB of the ~100 MB available in PermGen.)

        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor28]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor14]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor4]
        [Unloading class sun.reflect.GeneratedMethodAccessor5]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor38]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor36]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor22]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor8]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor39]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor16]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor2]
        [Unloading class sun.reflect.GeneratedConstructorAccessor1]

    The program runs with the following parameters:

        -Xmx160M -XX:MaxPermSize=96M -XX:PermSize=96M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
        -XX:+PrintGCTaskTimeStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution
        -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
        -verbose:gc -Xloggc:/logs/gc.log

    There's plenty of space in both the heap and PermGen.
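
    For context, the sun.reflect.Generated*Accessor classes are small bytecode stubs that HotSpot generates on the fly once a constructor or method has been invoked reflectively more than a small number of times; with class unloading enabled (for example under CMS with -XX:+CMSClassUnloadingEnabled) they can be collected again once unreferenced, which is what these messages report, and that is usually harmless. A minimal sketch that provokes such a class on HotSpot is shown below; the exact threshold and class names are VM-specific, so treat it as an illustration rather than a guarantee:

        import java.lang.reflect.Constructor;

        // Run with -verbose:class to watch sun.reflect.GeneratedConstructorAccessorN
        // being loaded (and, with class unloading enabled, possibly unloaded later).
        public class AccessorInflationDemo {
            public static void main(String[] args) throws Exception {
                Constructor<StringBuilder> ctor = StringBuilder.class.getConstructor();
                for (int i = 0; i < 100; i++) {
                    // After roughly 15 calls (sun.reflect.inflationThreshold), HotSpot switches
                    // from a native accessor to a generated bytecode accessor class.
                    ctor.newInstance();
                }
            }
        }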

    Read the article

  • Tuning the JVM (GC) for a highly responsive server application

    - by elgcom
    I am running an application server on 64-bit Linux with 8 CPU cores and 6 GB of memory. The server must be highly responsive. After some inspection I found that the application creates a huge amount of short-lived objects and keeps only about 200-400 MB of long-lived objects (as long as there is no memory leak). After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html I use these JVM options:

        -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC

    Result: a minor GC takes 0.01-0.02 s, a major GC takes 1-3 s, and minor GCs happen constantly. How can I further improve or tune the JVM? A larger heap, or will that just make each GC take longer? Larger NewSize and MaxNewSize values for the young generation? Another collector, such as the parallel GC? Is it a good idea to let major GCs happen more often, and if so, how?
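
    Given this allocation profile (many short-lived objects, roughly 200-400 MB live), one common direction is to size the young generation explicitly and generously so that short-lived objects die in minor collections instead of being promoted and feeding the major ones, and to keep GC logging on to verify the effect. The command line below is a hedged starting point only; the numbers are assumptions to validate against the logs, and MyServer is a placeholder:

        java -Xms2g -Xmx2g \
             -Xmn1g -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=4 \
             -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
             -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log \
             MyServer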

    Read the article

  • What is it about Fibonacci numbers?

    - by Ian Bishop
    Fibonacci numbers have become a popular introduction to recursion for Computer Science students, and there's a strong argument that they persist within nature. For these reasons, many of us are familiar with them. They also show up elsewhere in Computer Science, in surprisingly efficient data structures and algorithms based on the sequence. Two main examples come to mind: Fibonacci heaps, which have better amortized running time than binomial heaps, and Fibonacci search, which shares O(log N) running time with binary search on an ordered array. Is there some special property of these numbers that gives them an advantage over other numerical sequences? Is it a density quality? What other possible applications could they have? It seems strange to me, as there are many natural number sequences that occur in other recursive problems, but I've never seen a Catalan heap.
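
    As an illustration of the second example, here is a small sketch of Fibonacci search over a sorted int array: it derives probe indices from consecutive Fibonacci numbers instead of midpoints, which is exactly where the sequence earns its keep. This is a generic textbook formulation, not code taken from the question:

        // Fibonacci search over a sorted array: O(log n) probes, using only
        // addition and subtraction to derive the next index from Fibonacci numbers.
        public final class FibonacciSearch {
            // Returns an index of key in sorted array a, or -1 if absent.
            static int search(int[] a, int key) {
                int n = a.length;
                int fibMm2 = 0;              // F(m-2)
                int fibMm1 = 1;              // F(m-1)
                int fibM = fibMm2 + fibMm1;  // F(m), smallest Fibonacci number >= n
                while (fibM < n) {
                    fibMm2 = fibMm1;
                    fibMm1 = fibM;
                    fibM = fibMm2 + fibMm1;
                }
                int offset = -1;             // eliminated prefix of the array
                while (fibM > 1) {
                    int i = Math.min(offset + fibMm2, n - 1);
                    if (a[i] < key) {        // discard the left block, step down one Fibonacci number
                        fibM = fibMm1;
                        fibMm1 = fibMm2;
                        fibMm2 = fibM - fibMm1;
                        offset = i;
                    } else if (a[i] > key) { // discard the right block, step down two Fibonacci numbers
                        fibM = fibMm2;
                        fibMm1 = fibMm1 - fibMm2;
                        fibMm2 = fibM - fibMm1;
                    } else {
                        return i;
                    }
                }
                if (fibMm1 == 1 && offset + 1 < n && a[offset + 1] == key) {
                    return offset + 1;
                }
                return -1;
            }

            public static void main(String[] args) {
                int[] a = {2, 3, 5, 8, 13, 21, 34, 55};
                System.out.println(search(a, 21)); // prints 5
            }
        }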

    Read the article

  • Prevent GridView from loading all views at once

    - by efpies
    I have about 700 items to display in the grid view. On a Samsung Galaxy Tab 10.1 this is not a problem: it has enough memory. On an HTC Explorer the heap overflows. So I want to load data dynamically based on the current scroll position (N rows per screen plus 5 rows as a tail), and I want to show a scrollbar that reflects the position within the total number of rows, but I don't want to draw items that I can't see. In other words, I want to create something similar to UITableView in iOS. How can I do this?
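
    For what it's worth, a GridView backed by an adapter already creates views only for the visible cells, provided getView recycles convertView and defers the heavy work (such as bitmap decoding) until a cell actually becomes visible; the overflow usually comes from decoding all 700 items up front. A minimal recycling-adapter sketch follows (ImageItem and decodeScaled are hypothetical placeholders, not from the question):

        import android.content.Context;
        import android.view.View;
        import android.view.ViewGroup;
        import android.widget.BaseAdapter;
        import android.widget.ImageView;
        import java.util.List;

        // Recycling adapter: only the visible cells (plus a small scrap pool) ever
        // hold a View and a decoded bitmap at the same time.
        public class LazyGridAdapter extends BaseAdapter {

            // Hypothetical lightweight descriptor: knows how to decode its bitmap on demand.
            public interface ImageItem {
                android.graphics.Bitmap decodeScaled();
            }

            private final Context context;
            private final List<ImageItem> items; // descriptors, not bitmaps

            public LazyGridAdapter(Context context, List<ImageItem> items) {
                this.context = context;
                this.items = items;
            }

            @Override public int getCount() { return items.size(); }
            @Override public Object getItem(int position) { return items.get(position); }
            @Override public long getItemId(int position) { return position; }

            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                ImageView view = (ImageView) convertView;
                if (view == null) {               // reuse a recycled cell when possible
                    view = new ImageView(context);
                }
                // Decode (or fetch from a small cache) only when the cell becomes visible.
                view.setImageBitmap(items.get(position).decodeScaled());
                return view;
            }
        }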

    Read the article

  • Where in memory are nullable types stored?

    - by Ondrej Slinták
    This is perhaps a follow-up to a question about nullable types. Where exactly are nullable value types (int? ...) stored in memory? At first I thought it was clear enough, since Nullable<T> is a struct and structs are value types. Then I found Jon Skeet's article "Memory in .NET", which says:

        Note that a value type variable can never have a value of null - it wouldn't make any sense, as null is a reference type concept, meaning "the value of this reference type variable isn't a reference to any object at all".

    I am a little confused after reading this statement. So let's say I have int? a = null;. Since int is normally a value type, is it stored somehow inside the struct Nullable<T> on the stack (I say "normally" because I don't know what happens to a value type when it becomes nullable)? Or does something else happen here, perhaps on the heap?

    Read the article

  • Leak caused by fread

    - by Jack
    I'm profiling the code of a game I wrote and I'm wondering how the following snippet can cause a 4 KB heap increase (I'm profiling with Xcode's Heapshot Analysis) every time it is executed:

        u8 WorldManager::versionOfMap(FILE *file)
        {
            char magic[4];
            u8 version;

            fread(magic, 4, 1, file);   // <-- this is the line
            fread(&version, 1, 1, file);
            fseek(file, 0, SEEK_SET);

            return version;
        }

    According to the profiler, the highlighted line allocates 4.00 KB with a malloc every time the function is called, and that memory is never released. The same thing seems to happen with other calls to fread around the code, but this was the most blatant one. Is there anything trivial I'm missing? Is it something internal I shouldn't care about? Just as a note: I'm profiling on an iPhone and it's compiled as release (-O2).

    Read the article

  • Transform PDF to HTML, keep layout

    - by Tgr
    What methods are there to transform a PDF to HTML? It could be anything - an online service, software, or a library. (Open source preferred; in the last case, PHP or Python would be preferred.) It has to keep the original layout (including page numbers, footnotes and such), keep the images (combining them into one single background image per page is acceptable) and keep the links. It should preferably output valid XHTML and clean up PDF features such as ligatures, but if some post-processing is required, I can live with that. Something with clean, relatively semantic HTML output would be great. The closest I found was zamzar.org, but it choked on links. (Also, its HTML output is an ugly heap of absolutely positioned divs and needs post-processing because of encoding problems.)

    Read the article

  • "Ambiguous template specialization" problem

    - by Setien
    I'm currently porting a heap of code that has previously only been compiled with Visual Studio 2008. In this code, there's an arrangement like this:

        template <typename T>
        T convert( const char * s )
        {
            // slow catch-all
            std::istringstream is( s );
            T ret;
            is >> ret;
            return ret;
        }

        template <>
        inline int convert<int>( const char * s )
        {
            return (int)atoi( s );
        }

    Generally, there are a lot of specializations of this templated function with different return types, invoked like this:

        int i = convert<int>( szInt );

    The problem is that these template specializations result in an "ambiguous template specialization" error. If it were something other than the return type that differentiated these specializations, I could obviously just use overloads, but that's not an option. How do I solve this without having to change all the places the convert functions are called?

    Read the article

  • How many concurrent HTTP requests can Erlang handle?

    - by user209123
    I am developing an application for benchmarking purposes, for which I need to create a large number of HTTP connections in a short time. I wrote a Java program to test how many threads Java can create; it turns out that on my 2 GB single-core machine, with 1 GB of memory given to the JVM, the limit varies between 5000 and 6000, after which it hits an OutOfMemoryError because the heap limit is reached. It is often suggested that Erlang can create far more concurrent processes, and I am willing to learn Erlang if it can solve the problem. What I want to know is whether Erlang can create somewhere around 100,000 processes, each essentially an HTTP request waiting for a response, within a few seconds and without hitting a limit such as a memory error.

    Read the article

  • Java: best way to transfer images

    - by d.raev
    I have an application that reads a PDF, transforms the content into a collection of TIF files, and sends them to a GlassFish server for saving. Usually there are 1-5 pages and it works fine, but when I got an input file with 100+ pages, it threw an error during the transfer:

        Java heap space
            at java.util.Arrays.copyOf(Arrays.java:2786)
            at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)

    Adding more resources is not a good option in my case, so I'm looking for a way to optimize it somehow. I store the data in:

        HashMap<TifProfile, List<byte[]>>

    Is there a better way to store or send them? EDIT: I did some tests and the final collection for a PDF with 80 pages is over 280 MB (240 TIFFs with different settings inside).
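
    Since the OutOfMemoryError comes from buffering every rendered page as a byte[] in memory, one option is to stream each page to its destination (or to a temporary file) as soon as it is produced, so at most one page is held in the heap at a time. A hedged sketch of the temp-file variant follows; PageRenderer and renderPageToStream are hypothetical stand-ins for whatever the application already does to produce a page:

        import java.io.BufferedOutputStream;
        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        // Sketch: spool each TIF page to a temp file instead of keeping List<byte[]> in memory,
        // then copy it to the outgoing stream with a small fixed-size buffer.
        public class TifSpooler {

            // Hypothetical interface representing the existing PDF-to-TIF conversion step.
            interface PageRenderer {
                void renderPageToStream(int pageIndex, OutputStream out) throws IOException;
            }

            static File spoolPage(int pageIndex, PageRenderer renderer) throws IOException {
                File tmp = File.createTempFile("page-" + pageIndex + "-", ".tif");
                try (OutputStream out = new BufferedOutputStream(new FileOutputStream(tmp))) {
                    renderer.renderPageToStream(pageIndex, out);
                }
                return tmp;
            }

            // Copies a spooled page to the server connection using an 8 KB buffer,
            // so heap usage stays constant regardless of page count or size.
            static void send(File page, OutputStream serverOut) throws IOException {
                byte[] buf = new byte[8192];
                try (InputStream in = new FileInputStream(page)) {
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        serverOut.write(buf, 0, n);
                    }
                }
                serverOut.flush();
            }
        }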

    Read the article

  • Issue with Visual C++ 2010 (Express) External Tools command

    - by espais
    I posted this on Super User, but I was hoping the pros here at SO might have a good idea about how to fix this as well. Normally we develop in VS 2005 Pro, but I wanted to give VS 2010 a spin. We have custom build tools based on GNU make that are called when creating an executable. This is the error I see whenever I call my external tool:

        ...\gnu\make.exe): * couldn't commit memory for cygwin heap, Win32 error 487

    The caveat is that it still works perfectly fine in VS 2005, as well as when called straight from the command line. Also, my external tool is set up exactly the same as in VS 2005. Is there some setting somewhere that could cause this error to be thrown?

    Read the article

  • How to attach boost::shared_ptr (or another smart pointer) to the reference counter of an object's parent?

    - by Checkers
    I remember encountering this concept before, but I can't find it on Google now. If I have an object of type A, which directly embeds an object of type B:

        class A {
            B b;
        };

    how can I have a smart pointer to B, e.g. boost::shared_ptr<B>, that uses the reference count of A? Assuming an instance of A itself is heap-allocated, I can safely get its shared count using, say, enable_shared_from_this.

    Read the article

  • Reallocating memory via "new" in C++

    - by BSchlinker
    A quick question regarding memory management in C++. If I do the following:

        pointer = new char[strlen(someinput_input) + 1];

    and then perform it again, perhaps with a different result returned from strlen(someinput_input), does this leave the memory from the previous new statement allocated? That is, does each new statement receive another block of heap memory from the OS, or is the old block simply reallocated? Assuming I do a final

        delete [] pointer;

    will that deallocate any and all memory that I ever allocated via new through that pointer? Thanks.

    Read the article

  • Android: Does decreasing the size of .png files have any effect on the resulting Bitmap in memory?

    - by nahab
    I'm writing a game with a large number of .png pictures. Everything worked fine until I added a new activity with a WebView and ran into a memory shortage. After that I did an experiment: I replaced the game's .png images with ones that are simply filled with a solid color, and the memory shortage went away. But I thought a Bitmap internally holds each pixel separately, so such a change should have no effect. Maybe it is because the original images have an alpha channel and my test images do not? The actual question is: will decreasing the file size of the .png images reduce the application's VM heap usage or not?
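
    For context, a decoded Bitmap's heap footprint is roughly width x height x bytes-per-pixel, so the compressed .png file size barely matters, but the pixel dimensions and the decode configuration do. A small hedged sketch of decoding more compactly (the resource id and the sample size are placeholders, not values from the question):

        import android.content.res.Resources;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        // Sketch: the decoded bitmap occupies roughly width * height * bytesPerPixel on the heap.
        // RGB_565 halves that compared to ARGB_8888 (at the cost of alpha), and inSampleSize
        // shrinks the decoded dimensions.
        public final class BitmapLoading {
            static Bitmap decodeCompact(Resources res, int resId) {
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inPreferredConfig = Bitmap.Config.RGB_565; // 2 bytes per pixel, no alpha
                opts.inSampleSize = 2;                          // decode at half width and height
                return BitmapFactory.decodeResource(res, resId, opts);
            }
        }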

    Read the article

  • Memory allocation for collections in .NET

    - by Yogendra
    This might be a dupe; I did not find enough information on it. I was discussing memory allocation for collections in .NET. Where is the memory for a collection's elements allocated?

        List<int> myList = new List<int>();

    The variable myList is allocated on the stack and references the List object created on the heap. The question is: when int elements are added to myList, where are they created? Can anyone point me in the right direction?

    Read the article

  • Is there any way to determine what type of memory the segments returned by VirtualQuery() are?

    - by bdbaddog
    Greetings. I'm able to walk a process's memory map using logic like this:

        MEMORY_BASIC_INFORMATION mbi;
        void *lpAddress = (void*)0;
        while (VirtualQuery(lpAddress, &mbi, sizeof(mbi))) {
            fprintf(fptr, "Mem base:%-10x start:%-10x Size:%-10x Type:%-10x State:%-10x\n",
                    mbi.AllocationBase, mbi.BaseAddress, mbi.RegionSize, mbi.Type, mbi.State);
            lpAddress = (void *)((unsigned int)mbi.BaseAddress + (unsigned int)mbi.RegionSize);
        }

    I'd like to know whether a given segment is used for static allocation, the stack, the heap, and/or something else. Is there any way to determine that?

    Read the article

  • Overview/history of resident memory usage

    - by kapet
    I have a fairly complicated program (Python with SWIG'ed C++ code, a long-running server) that shows constantly growing resident memory usage. I've been digging for the leak with the usual tools (valgrind, Python's gc module, etc.) but to no avail so far. I'm a bit afraid that the actual problem is memory fragmentation within Python- and/or libc-managed memory. Anyway, my question is more specific right now: is there a tool to visualize resident memory usage and, ideally, show how it develops over time? I think the raw data is in /proc/$PID/smaps, but I was hoping for a tool that shows a nice graph of the amounts used by mmap'ed files vs. anonymous mmap'ed memory vs. heap over time, so that it's easier to see (literally) what's changing. I couldn't find anything, though. Does anybody know of a ready-to-use tool that graphs memory usage over space and time in an intuitive way?

    Read the article

  • Full GC real time is much more than user+sys time

    - by Stas
    Hi. We have a Java web application running on JBoss with a maximum allowed heap size of about 1.2 GB (total machine physical memory is 2 GB). At some point the application stops responding to clients for several minutes. After some analysis we found that the culprit is the Full GC. Here's an excerpt from the verbose GC log:

        74477.402: [Full GC [PSYoungGen: 3648K->0K(332160K)] [PSOldGen: 778476K->589497K(819200K)] 782124K->589497K(1151360K) [PSPermGen: 102671K->102671K(171328K)], 646.1546860 secs] [Times: user=3.84 sys=3.72, real=646.17 secs]

    What I don't understand is how the real time spent on the Full GC can be about 11 minutes (646 seconds) while user+sys time is just 7.5 seconds. 7.5 seconds sounds like a much more plausible amount of time for cleaning <200 MB out of the old generation. Where does all the other time go? Thanks a lot.

    Read the article

  • delete & new in C++

    - by singh
    Hi. This may be a very simple question, but please help me. I want to know what exactly happens when I call new and delete. For example, in the code below:

        char * ptr = new char[10];
        delete [] ptr;

    The call to new returns a memory address. Does it allocate exactly 10 bytes on the heap, and where is the information about the size stored? When I call delete on the same pointer, I see in the debugger that a lot of bytes change before and after the 10 bytes. Is there a header for each new allocation that contains the number of bytes allocated by new? Thanks a lot.

    Read the article

  • How can I get the edit control in a cell of a DataGridView to validate itself?

    - by Stuart Helwig
    It appears that the only way to capture keypress events within a cell of a DataGridView control, in order to validate user input as they type, is to use the DataGridView's EditingControlShowing event, hook a method up to the editing control's (e.Control) KeyPress event, and do some validation there. My problem is that I've built a heap of custom DataGridView column classes with their own custom cell types. These cells have their own custom edit controls (things like DateTimePickers and numeric or currency textboxes). I want to do numeric validation for the cells whose edit controls are numeric or currency textboxes, but not for all the other cell types. How can I determine, within the DataGridView's OnEditingControlShowing override, whether or not a particular edit control needs numeric validation? (In the meantime I've resorted to setting the Tag property of my custom edit controls to a known value, and any edit controls I find in the OnEditingControlShowing override with that tag, I hook up to my validation routine. I don't like that much!)

    Read the article

  • Multimap Space Issue: Guava

    - by Arpssss
    In my Java code I am using Guava's Multimap (com.google.common.collect.Multimap), created like this:

        Multimap<Integer, Integer> index = HashMultimap.create();

    The Multimap key is one portion of a URL and the value is another portion of the URL (converted to an integer). I give my JVM 2560 MB (2.5 GB) of heap space (using -Xmx and -Xms). However, it can only store about 9 million such (key, value) pairs of integers (approximately 10 million). Theoretically (going by the memory an int occupies) it should store more. Can anybody help me: 1) Why is this happening, i.e. why does the Multimap take so much space? I checked my code without inserting pairs into the Multimap, and it takes only about half a MB. 2) Is there any other way or home-baked solution to this space issue? More specifically, how can I solve this memory problem? Thanks in advance; any idea is perfectly OK for me.
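
    One reason for the gap: a HashMultimap stores every key and value as a boxed Integer inside hash-table entries, so each logical (int, int) pair costs dozens of bytes of object headers, references and buckets rather than 8 bytes of payload. Below is a hedged sketch of a more compact home-baked alternative that keeps the values for each key in a growable int array; note that, unlike HashMultimap, it keeps duplicate values, and the class and method names are hypothetical:

        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.Map;

        // Compact-ish replacement for HashMultimap<Integer, Integer>: values live in a
        // growable int[] per key instead of a HashSet of boxed Integers, removing most
        // of the per-value object overhead. (Keys are still boxed; a primitive-keyed map
        // library would shrink that part too, but this stays dependency-free.)
        public final class IntListMultimap {

            private static final class Bucket {
                int[] values = new int[4];
                int size;
            }

            private final Map<Integer, Bucket> buckets = new HashMap<Integer, Bucket>();

            public void put(int key, int value) {
                Bucket b = buckets.get(key);
                if (b == null) {
                    b = new Bucket();
                    buckets.put(key, b);
                }
                if (b.size == b.values.length) {
                    b.values = Arrays.copyOf(b.values, b.size * 2); // grow geometrically
                }
                b.values[b.size++] = value;
            }

            // Values for a key, trimmed to the actual count (empty if the key is absent).
            public int[] get(int key) {
                Bucket b = buckets.get(key);
                return b == null ? new int[0] : Arrays.copyOf(b.values, b.size);
            }
        }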

    Read the article

  • Can't free memory

    - by atch
    In code:

        int a[3][4] = {1,2,3,4, 5,6,7,8, 9,10,11,12};

        template<class T, int row, int col>
        void invert(T a[row][col])
        {
            T* columns = new T[col];
            T* const free_me = columns;
            for (int i = 0; i < col; ++i)
            {
                for (int j = 0; j < row; ++j)
                {
                    *columns = a[j][i];
                    ++columns; // SOMETIMES VALUE IS 0
                }
            }
            delete[] free_me; // I'M GETTING ERROR OF HEAP ABUSE IN THIS LINE
        }

        int main(int argc, char* argv[])
        {
            invert<int,3,4>(a);
        }

    I've observed that while iterating, the value of the variable columns equals zero, and I think that's the problem. Thanks for your help.

    Read the article

  • What Android tools and methods work best to find memory/resource leaks?

    - by jottos
    I've got an Android app developed, and I'm at the point in phone app development where everything seems to be working well and you want to declare victory and ship, but you know there just have to be some memory and resource leaks in there; there's only 16 MB of heap on Android, and it's apparently surprisingly easy to leak in an Android app. I've been looking around and so far have only been able to dig up info on 'hprof' and 'traceview', and neither gets a lot of favorable reviews. What tools or methods have you come across or developed, and would you care to share them, maybe in an OS project?
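
    As a small, hedged illustration of the hprof route: an app can write its own heap dump at a suspicious moment, and the file can then be pulled off the device and inspected offline (for example after converting it with the SDK's hprof-conv tool). The output path below is a placeholder, and writing to external storage needs the usual permission:

        import android.os.Debug;
        import java.io.IOException;

        // Sketch: dump the app's heap to a file at a point where a leak is suspected,
        // then pull the file off the device and analyze it (e.g. after hprof-conv).
        public final class HeapDumps {
            public static void dump(String path) { // e.g. "/sdcard/myapp-heap.hprof" (placeholder)
                try {
                    Debug.dumpHprofData(path);
                } catch (IOException e) {
                    android.util.Log.w("HeapDumps", "hprof dump failed", e);
                }
            }
        }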

    Read the article

  • C++ Singleton design pattern.

    - by Artem Barger
    Recently I bumped into the following implementation of the Singleton design pattern for C++ (I have adapted it from a real-life example):

        // a lot of methods are omitted here
        class Singleton
        {
        public:
            static Singleton* getInstance( );
            ~Singleton( );
        private:
            Singleton( );
            static Singleton* instance;
        };

    From this declaration I deduce that the instance field is created on the heap, which means there is a memory allocation. What is completely unclear to me is when exactly that memory is going to be deallocated. Or is there a bug and a memory leak? It seems like there is a problem with this implementation. P.S. The main question: how do I implement it the right way?

    Read the article
