Search Results

Search found 1544 results on 62 pages for 'heap corruption'.

  • SetupAPI.DLL to HID.DLL

    - by lexdean
    With SetupAPI.DLL I call SetupDiGetClassDevs and get a pointer or handle. Then I begin a loop: I call SetupDiEnumDeviceInterfaces with SP_DEVICE_INTERFACE_DATA.cbSize = 0 to find out how big SP_DEVICE_INTERFACE_DATA needs to be, then set cbSize to the returned size and call SetupDiEnumDeviceInterfaces again. From the data structures I get DevicePath from SP_DEVINFO_DATA, and a heap of information from the registry if I want it, I believe. What I'm really after is to access HID.DLL and call HidD_GetAttributes to get the VendorID, ProductID, and VersionNumber of this enumerated device, so I can identify it. I expect this particular information comes from the USB device itself. Can anyone show me how to do that? By the way, in my version of Windows XP, when I look at the registry path, I cannot find HKEY_LOCAL_MACHINE\Enum\HID...\Class; I do not even have HKEY_LOCAL_MACHINE\Enum\. I think this is because I have not executed the SetupDiEnumDeviceInterfaces function. Why? I find Lake View Research is the only resource, and its data is incomplete and does not cover this subject. Why is it all over the net when it's junk? Thanks in advance, J Lex Dean.

  • Ant build from Android-generated build file fails - how to fix?

    - by Eno
    Building our Android app from Ant fails with this error:

      [apply]
      [apply] UNEXPECTED TOP-LEVEL ERROR:
      [apply] java.lang.OutOfMemoryError: Java heap space
      [apply]     at java.util.HashMap.<init>(HashMap.java:209)
      [apply]     at java.util.HashSet.<init>(HashSet.java:86)
      [apply]     at com.android.dx.ssa.Dominators.compress(Dominators.java:96)
      [apply]     at com.android.dx.ssa.Dominators.eval(Dominators.java:132)
      [apply]     at com.android.dx.ssa.Dominators.run(Dominators.java:213)
      [apply]     at com.android.dx.ssa.DomFront.run(DomFront.java:84)
      [apply]     at com.android.dx.ssa.SsaConverter.placePhiFunctions(SsaConverter.java:265)
      [apply]     at com.android.dx.ssa.SsaConverter.convertToSsaMethod(SsaConverter.java:51)
      [apply]     at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:100)
      [apply]     at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:74)
      [apply]     at com.android.dx.dex.cf.CfTranslator.processMethods(CfTranslator.java:269)
      [apply]     at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:131)
      [apply]     at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:85)
      [apply]     at com.android.dx.command.dexer.Main.processClass(Main.java:297)
      [apply]     at com.android.dx.command.dexer.Main.processFileBytes(Main.java:276)
      [apply]     at com.android.dx.command.dexer.Main.access$100(Main.java:56)
      [apply]     at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:228)
      [apply]     at com.android.dx.cf.direct.ClassPathOpener.processArchive(ClassPathOpener.java:245)
      [apply]     at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:130)
      [apply]     at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:108)
      [apply]     at com.android.dx.command.dexer.Main.processOne(Main.java:245)
      [apply]     at com.android.dx.command.dexer.Main.processAllFiles(Main.java:183)
      [apply]     at com.android.dx.command.dexer.Main.run(Main.java:139)
      [apply]     at com.android.dx.command.dexer.Main.main(Main.java:120)
      [apply]     at com.android.dx.command.Main.main(Main.java:87)

      BUILD FAILED

    I've tried giving Ant more memory by setting ANT_OPTS="-Xms256m -Xmx512m" (this build machine has 1 GB of RAM). Do I just need more memory, or is there anything else I can try?

  • How to get rank from full-text search query with Linq to SQL?

    - by Stephen Jennings
    I am using Linq to SQL to call a stored procedure which runs a full-text search and returns the rank plus a few specific columns from the table Article. The rank column is the rank returned from the SQL function FREETEXTTABLE(). I've added this sproc to the data model designer with return type Article. This is working to get the columns I need; however, it discards the ranking of each search result. I'd like to get this information so I can display it to the user. So far, I've tried creating a new class RankedArticle which inherits from Article and adds the column Rank, then changing the return type of my sproc mapping to RankedArticle. When I try this, an InvalidOperationException gets thrown: Data member 'Int32 ArticleID' of type 'Heap.Models.Article' is not part of the mapping for type 'RankedArticle'. Is the member above the root of an inheritance hierarchy? I can't seem to find any other questions or Google results from people trying to get the rank column, so I'm probably missing something obvious here.

  • Java Appengine APPSTATS causing java out of memory error

    - by aloo
    I have several servlets in my Java App Engine app that do in-memory sorting and take on the order of seconds to complete. These complete error-free. However, I recently enabled Appstats for App Engine and started receiving the following error:

      java.lang.OutOfMemoryError: Java heap space
          at java.util.Arrays.copyOf(Unknown Source)
          at java.lang.AbstractStringBuilder.expandCapacity(Unknown Source)
          at java.lang.AbstractStringBuilder.append(Unknown Source)
          at java.lang.StringBuilder.append(Unknown Source)
          at java.lang.StringBuilder.append(Unknown Source)
          at java.lang.StringBuilder.append(Unknown Source)
          at com.google.appengine.repackaged.com.google.protobuf.TextFormat$TextGenerator.write(TextFormat.java:344)
          at com.google.appengine.repackaged.com.google.protobuf.TextFormat$TextGenerator.print(TextFormat.java:332)
          at com.google.appengine.repackaged.com.google.protobuf.TextFormat.printUnknownFields(TextFormat.java:249)
          at com.google.appengine.repackaged.com.google.protobuf.TextFormat.print(TextFormat.java:47)
          at com.google.appengine.repackaged.com.google.protobuf.TextFormat.printToString(TextFormat.java:73)
          at com.google.appengine.tools.appstats.Recorder.makeSummary(Recorder.java:157)
          at com.google.appengine.tools.appstats.Recorder.makeSyncCall(Recorder.java:239)
          at com.google.apphosting.api.ApiProxy.makeSyncCall(ApiProxy.java:98)
          at com.google.appengine.api.datastore.DatastoreApiHelper.makeSyncCall(DatastoreApiHelper.java:54)
          at com.google.appengine.api.datastore.PreparedQueryImpl.runQuery(PreparedQueryImpl.java:127)
          at com.google.appengine.api.datastore.PreparedQueryImpl.asQueryResultList(PreparedQueryImpl.java:81)
          at org.datanucleus.store.appengine.query.DatastoreQuery.fulfillEntityQuery(DatastoreQuery.java:379)
          at org.datanucleus.store.appengine.query.DatastoreQuery.executeQuery(DatastoreQuery.java:289)
          at org.datanucleus.store.appengine.query.DatastoreQuery.performExecute(DatastoreQuery.java:239)
          at org.datanucleus.store.appengine.query.JDOQLQuery.performExecute(JDOQLQuery.java:89)
          at org.datanucleus.store.query.Query.executeQuery(Query.java:1489)
          at org.datanucleus.store.query.Query.executeWithArray(Query.java:1371)
          at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243)
          at com.poo.pooserver.dataaccess.DataAccessHelper.getPooStream(DataAccessHelper.java:204)
          at com.poo.pooserver.GetPooStreamServlet.doPost(GetPooStreamServlet.java:58)
          at javax.servlet.http.HttpServlet.service(HttpServlet.java:713)
          at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
          at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
          at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
          at com.google.appengine.tools.appstats.AppstatsFilter.doFilter(AppstatsFilter.java:92)
          at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)

  • Books and resources for Java Performance tuning - when working with databases, huge lists

    - by Arvind
    Hi all, I am relatively new to working on huge applications in Java. I am working on a Java web service which is pretty heavily used by various clients. The service basically queries the database (Hibernate) and then works with a lot of Lists (there are adapters to convert the lists returned from the DB to the interface which the service publishes), and I am seeing a lot of issues with the service, like high CPU usage or high heap usage. While I can troubleshoot the performance issues using a profiler, I want to actually learn what I need to take care of when I write code, like what kind of List to use or using StringBuilder instead of String, etc. Are there any books or blogs I can refer to that will help me while I write new services? Also, my application is multithreaded (each service call from a client is a new thread), and I want to know some best practices around that area as well. I did search the web, but I found many tips which are not relevant to the latest Java 6 releases, so I wanted to know what kind of resources would help a developer starting out now on Java for heavily used applications. Arvind
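
    (As a small illustration of two of the habits mentioned above, the hedged Java sketch below shows pre-sizing a List and building strings with a reused StringBuilder; the sizes are made-up demonstration values, not anything from the question.)

      import java.util.ArrayList;
      import java.util.List;

      public class ListAndStringTips {
          public static void main(String[] args) {
              int expectedSize = 10000;

              // Pre-sizing an ArrayList avoids the repeated internal array copies that
              // happen when the default capacity is outgrown element by element.
              List<Integer> values = new ArrayList<Integer>(expectedSize);
              for (int i = 0; i < expectedSize; i++) {
                  values.add(i);
              }

              // String concatenation in a loop creates a new String per iteration;
              // a single StringBuilder reuses one growing buffer instead.
              StringBuilder report = new StringBuilder(expectedSize * 6);
              for (Integer value : values) {
                  report.append(value).append(',');
              }
              System.out.println("built " + report.length() + " characters");
          }
      }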

  • Using Hibernate's ScrollableResults to slowly read 90 million records

    - by at
    I simply need to read each row in a table in my MySQL database using Hibernate and write a file based on it. But there are 90 million rows and they are pretty big. So it seemed like the following would be appropriate:

      ScrollableResults results = session.createQuery("SELECT person FROM Person person")
          .setReadOnly(true).setCacheable(false).scroll(ScrollMode.FORWARD_ONLY);
      while (results.next())
          storeInFile(results.get()[0]);

    The problem is that the above will try to load all 90 million rows into RAM before moving on to the while loop, and that will kill my memory with OutOfMemoryError: Java heap space exceptions :(. So I guess ScrollableResults isn't what I was looking for? What is the proper way to handle this? I don't mind if this while loop takes days (well, I'd love it not to). I guess the only other way to handle this is to use setFirstResult and setMaxResults to iterate through the results and just use regular Hibernate results instead of ScrollableResults. That feels like it will be inefficient, though, and will start taking a ridiculously long time when I'm calling setFirstResult on the 89 millionth row... UPDATE: setFirstResult/setMaxResults doesn't work; it turns out to take an unusably long time to get to the offsets, like I feared. There must be a solution here! Isn't this a pretty standard procedure?? I'm willing to forgo Hibernate and use JDBC or whatever it takes. UPDATE 2: the solution I've come up with, which works OK but not great, is basically of the form:

      select * from person where id > <offset> and <other_conditions> limit 1

    Since I have other conditions, even all in an index, it's still not as fast as I'd like it to be... so I'm still open to other suggestions.
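
    (One commonly suggested workaround for this exact situation is to drop to plain JDBC and ask MySQL Connector/J to stream the result set instead of buffering it client-side, which is what normally causes the up-front memory blow-up. The sketch below is only an illustration under that assumption; the person table and the storeInFile helper are placeholders taken from the question, not a tested implementation.)

      import java.sql.Connection;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.sql.Statement;

      public class PersonExporter {
          // With TYPE_FORWARD_ONLY, CONCUR_READ_ONLY and a fetch size of Integer.MIN_VALUE,
          // MySQL Connector/J streams rows one at a time rather than materialising the
          // whole result set in the client JVM.
          public static void export(Connection conn) throws SQLException {
              Statement stmt = conn.createStatement(
                      ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
              stmt.setFetchSize(Integer.MIN_VALUE);
              ResultSet rs = stmt.executeQuery("SELECT * FROM person");
              try {
                  while (rs.next()) {
                      storeInFile(rs);          // placeholder: write one row to the output file
                  }
              } finally {
                  rs.close();
                  stmt.close();
              }
          }

          private static void storeInFile(ResultSet row) throws SQLException {
              // placeholder for the real file-writing logic
          }
      }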

  • Looking for a lock-free RT-safe single-reader single-writer structure

    - by moala
    Hi, I'm looking for a lock-free design meeting these requirements: a single writer writes into a structure and a single reader reads from this structure (the structure already exists and is safe for simultaneous read/write), but at some point the structure needs to be changed by the writer, which then initialises, switches to, and writes into a new structure (of the same type but with new content), and the next time the reader reads, it switches to this new structure (if the writer switches to a new structure several times, the reader discards the intermediate ones, ignoring their data). The structures must be reused, i.e. no heap memory allocation/free is allowed during a write/read/switch operation, for RT purposes. I have currently implemented a ring buffer containing multiple instances of these structures, but this implementation suffers from the fact that when the writer has used all the structures present in the ring buffer, there is no free structure left to switch to, while the rest of the ring buffer contains data which does not have to be read by the reader but cannot be reused by the writer. As a consequence, the ring buffer does not fit this purpose. Any idea (a name or a pseudo-implementation) for a lock-free design? Thanks for having considered this problem.
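
    (One design that appears to match these constraints (single writer, single reader, latest value wins, a fixed set of preallocated structures, no allocation after start-up) is an atomic "triple buffer". The Java sketch below only illustrates the idea; the payload type is a placeholder, and a real-time C/C++ version would use the equivalent atomic exchange on an index word.)

      import java.util.concurrent.atomic.AtomicInteger;

      // Hedged sketch of a single-writer / single-reader "triple buffer": the writer
      // always owns one slot, the reader always owns one slot, and the third slot is
      // exchanged through an atomic integer. If the writer publishes several times
      // before the reader looks, the intermediate versions are silently recycled, and
      // nothing is allocated after construction.
      final class TripleBuffer<T> {
          private static final int FRESH = 4;        // bit 2 marks "unread data in the middle slot"
          private static final int INDEX_MASK = 3;

          private final T[] slots;
          private final AtomicInteger middle;        // index of the shared slot, plus the FRESH flag
          private int writeIndex;                    // touched by the writer thread only
          private int readIndex;                     // touched by the reader thread only

          @SuppressWarnings("unchecked")
          TripleBuffer(T a, T b, T c) {              // the three preallocated structures
              slots = (T[]) new Object[] { a, b, c };
              writeIndex = 0;
              readIndex = 1;
              middle = new AtomicInteger(2);
          }

          /** Writer: the structure it may fill in next. */
          T writeSlot() {
              return slots[writeIndex];
          }

          /** Writer: publish the filled slot and reclaim whichever slot it displaces. */
          void publish() {
              int displaced = middle.getAndSet(writeIndex | FRESH);
              writeIndex = displaced & INDEX_MASK;   // an unread slot is simply reused
          }

          /** Reader: the most recently published structure, or null if nothing new. */
          T poll() {
              if ((middle.get() & FRESH) == 0) {
                  return null;                         // nothing published since the last poll
              }
              int taken = middle.getAndSet(readIndex); // hand back our old slot, take the fresh one
              readIndex = taken & INDEX_MASK;
              return slots[readIndex];
          }
      }

    The getAndSet exchange both publishes a structure and recycles the one it displaces, and its volatile semantics give the reader visibility of everything the writer put into the slot before publishing.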

  • What are some Java memory management best practices?

    - by Ascalonian
    I am taking over some applications from a previous developer. When I run the applications through Eclipse, I see the memory usage and the heap size increase a lot. Upon further investigation, I see that they were creating an object over and over in a loop, as well as other things. I started to go through and do some clean-up, but the more I went through, the more questions I had, like "will this actually do anything?" For example, instead of declaring a variable outside the loop mentioned above and just setting its value in the loop, they created the object in the loop. What I mean is:

      for (int i = 0; i < arrayOfStuff.size(); i++) {
          String something = (String) arrayOfStuff.get(i);
          ...
      }

    versus

      String something = null;
      for (int i = 0; i < arrayOfStuff.size(); i++) {
          something = (String) arrayOfStuff.get(i);
      }

    Am I incorrect to say that the bottom loop is better? Perhaps I am wrong. Also, what if, after the second loop above, I set "something" back to null? Would that clear out some memory? In either case, what are some good memory management best practices I could follow that will help keep memory usage low in my applications? Update: I appreciate everyone's feedback so far. However, I was not really asking about the above loops (although on your advice I did go back to the first loop). I am trying to get some best practices that I can keep an eye out for, something along the lines of "when you are done using a Collection, clear it out". I just really need to make sure these applications do not take up as much memory.
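
    (An illustrative sketch only, not an answer from the thread: declaring the loop variable inside the loop affects scope, not allocation, since the cast only creates another reference to an existing object; what does help is dropping references to data you no longer need, for example clearing a long-lived collection once it has been consumed.)

      import java.util.ArrayList;
      import java.util.List;

      public class ScopeAndCleanupDemo {
          public static void main(String[] args) {
              List<Object> arrayOfStuff = new ArrayList<Object>();
              for (int i = 0; i < 1000; i++) {
                  arrayOfStuff.add("item " + i);
              }

              // Declaring 'something' inside the loop does not allocate anything by
              // itself; the cast just creates another reference to an existing object,
              // so both loop styles from the question cost the same.
              for (int i = 0; i < arrayOfStuff.size(); i++) {
                  String something = (String) arrayOfStuff.get(i);
                  process(something);
              }

              // What does matter is dropping references to data you no longer need,
              // e.g. clearing a long-lived collection once it has been consumed, so
              // the garbage collector is free to reclaim its contents.
              arrayOfStuff.clear();
          }

          private static void process(String value) { /* placeholder for real work */ }
      }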

  • Eclipse randomly exits after installation of Blackberry plugin/SDK

    - by canadiancreed
    Hello all. Since adding the BlackBerry Java classes from their website into Eclipse, I've had Eclipse randomly close, with no discernible pattern, rhyme, error, or reason. Here are the environment/software packages that I am using: Windows XP SP2, Eclipse v3.5.1, BlackBerry Java Plugin v1.1.1.200911111641-15, BlackBerry Java SDK 4.5.0.21. I've tried the usual steps of completely uninstalling and reinstalling Eclipse and the accompanying plugins on multiple systems with the same configuration, including one that had a fresh install of Windows XP SP2. Upgrading to Eclipse 3.6 didn't work (the plugin won't install as it's the wrong version), nor did downgrading to 3.4, for the same reason. I also increased the heap size to 512 MB (the system has two gigs of memory), as some research into Eclipse doing this type of thing with Groovy was resolved that way, but again, no dice. Eclipse works great when the BlackBerry plugins are not installed, and no error entries in the event log are helping to show what the issue with these plug-ins might be. So if anyone has run into this issue, and even better, has a solution, I'd love to hear about it. Thanks in advance. EDIT: In addition to my issue, autocomplete with the BlackBerry SDK seems to make this extremely unstable, almost a guaranteed crash. Is this fixable at all?

  • RFC regarding WAM

    - by Noctis Skytower
    Request For Comment regarding Whitespace's Assembly Mnemonics. What follows is a first-generation attempt at creating mnemonics for a Whitespace assembly language.

      STACK
      =====
      push number
      copy
      copy number
      swap
      away
      away number

      MATH
      ====
      add
      sub
      mul
      div
      mod

      HEAP
      ====
      set
      get

      FLOW
      ====
      part label
      call label
      goto label
      zero label
      less label
      back
      exit

      I/O
      ===
      ochr
      oint
      ichr
      iint

    In the interest of making improvements to this small and simple instruction set, this is a second attempt.

      hold N       Push the number onto the stack
      copy         Duplicate the top item on the stack
      copy N       Copy the nth item on the stack (given by the argument) onto the top of the stack
      swap         Swap the top two items on the stack
      drop         Discard the top item on the stack
      drop N       Slide n items off the stack, keeping the top item
      add          Addition
      sub          Subtraction
      mul          Multiplication
      div          Integer Division
      mod          Modulo
      save         Store
      load         Retrieve
      L:           Mark a location in the program
      call L       Call a subroutine
      goto L       Jump unconditionally to a label
      if=0 L       Jump to a label if the top of the stack is zero
      if<0 L       Jump to a label if the top of the stack is negative
      return       End a subroutine and transfer control back to the caller
      exit         End the program
      print chr    Output the character at the top of the stack
      print int    Output the number at the top of the stack
      input chr    Read a character and place it in the location given by the top of the stack
      input int    Read a number and place it in the location given by the top of the stack

    What do you think of the following revised list for Whitespace's assembly instructions? I'm still thinking somewhat outside the box and trying to come up with a better mnemonic set than last time. When the previous interpreter was written, it was completed over two contiguous, rushed evenings. This rewrite deserves significantly more time now that it is the summer. Of course, the next version of Whitespace (0.4) may have its instructions revised even more, but this is just a redesign of what was originally done in a very short amount of time. Hopefully, the instructions make more sense once someone new to programming thinks about them.

  • Using ServletOutputStream to write very large files in a Java servlet without memory issues

    - by Martin
    I am using IBM WebSphere Application Server v6 and Java 1.4, and am trying to write large CSV files to the ServletOutputStream for a user to download. Files range from 50-750 MB at the moment. The smaller files aren't causing too much of a problem, but with the larger files it appears that the data is being written into the heap, which then causes an OutOfMemory error and brings down the entire server. These files can only be served to authenticated users over HTTPS, which is why I am serving them through a servlet instead of just sticking them in Apache. The code I am using is (some fluff removed around this):

      resp.setHeader("Content-length", "" + fileLength);
      resp.setContentType("application/vnd.ms-excel");
      resp.setHeader("Content-Disposition","attachment; filename=\"export.csv\"");

      FileInputStream inputStream = null;
      try {
          inputStream = new FileInputStream(path);
          byte[] buffer = new byte[1024];
          int bytesRead = 0;
          do {
              bytesRead = inputStream.read(buffer, offset, buffer.length);
              resp.getOutputStream().write(buffer, 0, bytesRead);
          } while (bytesRead == buffer.length);
          resp.getOutputStream().flush();
      } finally {
          if (inputStream != null)
              inputStream.close();
      }

    The FileInputStream doesn't seem to be causing a problem, as if I write to another file or just remove the write completely, the memory usage doesn't appear to be a problem. What I am thinking is that the resp.getOutputStream().write is being stored in memory until the data can be sent through to the client. So the entire file might be read and stored in the resp.getOutputStream(), causing my memory issues and crashing! I have tried buffering these streams and also tried using Channels from java.nio, none of which seems to make any bit of difference to my memory issues. I have also flushed the output stream once per iteration of the loop and after the loop, which didn't help.
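
    (As an aside, and only as a hedged sketch rather than a fix for the server-side buffering itself: the copy loop above stops on the first short read and can pass a negative count to write() at end of stream, so a more conventional streaming loop is worth ruling out first. The file path and headers below are placeholders taken from the question.)

      import java.io.FileInputStream;
      import java.io.IOException;
      import javax.servlet.ServletOutputStream;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      public class ExportServlet extends HttpServlet {
          protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
              String path = "/data/export.csv";                 // assumed location of the generated file
              resp.setContentType("application/vnd.ms-excel");
              resp.setHeader("Content-Disposition", "attachment; filename=\"export.csv\"");

              byte[] buffer = new byte[8192];
              FileInputStream in = new FileInputStream(path);
              try {
                  ServletOutputStream out = resp.getOutputStream();
                  int bytesRead;
                  while ((bytesRead = in.read(buffer)) != -1) { // -1 signals end of stream
                      out.write(buffer, 0, bytesRead);          // write each chunk as it is read
                  }
                  out.flush();
              } finally {
                  in.close();
              }
          }
      }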

  • Oracle T4CPreparedStatement memory leaks?

    - by Jay
    A little background on the application I am going to talk about in the next few lines: XYZ is a data-masking workbench Eclipse RCP application. You give it a source table column and a target table column; it applies a transformation (encryption/shuffling/etc.) and copies the row data from the source table to the target table. When I mask n tables at a time, n threads are launched by this app. Here is the issue: I ran into a production issue on the first roll-out of the above app. Unfortunately, I don't have any logs to get to the root cause. However, I tried to run this app in a test region and do a stress test. When I collected .hprof files and ran them through an analyzer (YourKit), I noticed that objects of oracle.jdbc.driver.T4CPreparedStatement were retaining heap. The analysis also tells me that one of my classes is holding a reference to this prepared-statement object, and thereby n threads have n such objects. T4CPreparedStatement seemed to have character arrays, lastBoundChars and bindChars, each of size char[300000]. So I researched a bit (Google!), obtained ojdbc6.jar and tried decompiling T4CPreparedStatement. I see that T4CPreparedStatement extends OraclePreparedStatement, which dynamically manages the array sizes of lastBoundChars and bindChars. So my questions here are: Have you ever run into an issue like this? Do you know the significance of lastBoundChars / bindChars? I am new to profiling, so do you think I am not doing it correctly? (I also ran the hprofs through MAT, and this was the main identified issue, so I don't really think I could be wrong?) I have found something similar on the web here: http://forums.oracle.com/forums/thread.jspa?messageID=2860681 Appreciate your suggestions / advice.
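
    (Since the question itself points at a class that keeps a reference to the PreparedStatement, one hedged sketch of the usual mitigation is shown below: scope each thread's statement to the work it does and close it in a finally block, so the statement and its bind buffers become collectable. This is an illustration only, not the application's actual code.)

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;

      public class MaskingStep {
          // Hypothetical helper: one masking pass over a column, releasing the
          // statement (and with it the large char[] bind buffers seen in the heap
          // dump) as soon as the pass is done.
          public void copyColumn(Connection conn, String sql) throws SQLException {
              PreparedStatement ps = conn.prepareStatement(sql);
              try {
                  ResultSet rs = ps.executeQuery();
                  try {
                      while (rs.next()) {
                          // transform and write the row ...
                      }
                  } finally {
                      rs.close();
                  }
              } finally {
                  ps.close();   // do not hold the statement in a long-lived field
              }
          }
      }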

  • Visual C++ Testing problem

    - by JamesMCCullum
    Hi there. I have installed VisualAssert and cFix. I have been using Visual Studio C++ and programming in C++/CLI. I have a working chess game program that works perfectly by itself, and I have been studying testing and have many examples (with tutorials) I have found on the net that compile and run in Visual Studio. But as soon as I try to implement those tests on my chess game, I get this problem. This is what it's telling me:

      1>------ Build started: Project: ChessRound1, Configuration: Debug Win32 ------
      1>Compiling...
      1>stdafx.cpp
      1>C:\Program Files\VisualAssert\include\cfixpe.h(137) : error C3641: 'CfixpCrtInitEmbedding' : invalid calling convention '__cdecl ' for function compiled with /clr:pure or /clr:safe
      1>C:\Program Files\VisualAssert\include\cfixpe.h(235) : error C4394: 'CfixpCrtInitEmbeddingRegistration' : per-appdomain symbol should not be marked with __declspec(allocate)
      1>C:\Program Files\VisualAssert\include\cfixpe.h(235) : error C2393: 'CfixpCrtInitEmbeddingRegistration' : per-appdomain symbol cannot be allocated in segment '.CRT$XCX'
      1>C:\Program Files\VisualAssert\include\cfixpe.h(244) : error C2440: 'initializing' : cannot convert from 'void (__cdecl *)(void)' to 'const CFIX_CRT_INIT_ROUTINE'
      1>        Address of a function yields __clrcall calling convention in /clr:pure and /clr:safe; consider using __clrcall in target type
      1>C:\Program Files\VisualAssert\include\cfixpe.h(137) : error C3641: 'CfixpCrtInitEmbedding' : invalid calling convention '__cdecl ' for function compiled with /clr:pure or /clr:safe
      1>Build log was saved at "file://c:\Users\james\Documents\Visual Studio 2008\Projects\ChessRound1\ChessRound1\Debug\BuildLog.htm"
      1>ChessRound1 - 4 error(s), 0 warning(s)
      ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Any ideas what I'm doing wrong? I'm working with Windows Forms and have a heap of .cpp source files. Any help would be appreciated. Thanks.

  • Assigning a property across threads

    - by Mike
    I have set a property across threads before, and I found this post http://stackoverflow.com/questions/142003/cross-thread-operation-not-valid-control-accessed-from-a-thread-other-than-the-t about getting a property. I think my issue with the code below is that the collection is an object, and therefore on the heap, so assigning the variable just creates another reference to the same object. So my question is: besides creating a deep copy, or copying the collection into a different List object, is there a better way to do the following to avoid the error during the for loop?

      Cross-thread operation not valid: Control 'lstProcessFiles' accessed from a thread other than the thread it was created on.

    Code:

      private void btnRunProcess_Click(object sender, EventArgs e)
      {
          richTextBox1.Clear();
          BackgroundWorker bg = new BackgroundWorker();
          bg.DoWork += new DoWorkEventHandler(bg_DoWork);
          bg.RunWorkerCompleted += new RunWorkerCompletedEventHandler(bg_RunWorkerCompleted);
          bg.RunWorkerAsync(lstProcessFiles.SelectedItems);
      }

      void bg_DoWork(object sender, DoWorkEventArgs e)
      {
          WorkflowEngine engine = new WorkflowEngine();
          ListBox.SelectedObjectCollection selectedCollection = null;
          if (lstProcessFiles.InvokeRequired)
          {
              // Try #1
              selectedCollection = (ListBox.SelectedObjectCollection)
                  this.Invoke(new GetSelectedItemsDelegate(GetSelectedItems),
                      new object[] { lstProcessFiles });
              // Try #2
              //lstProcessFiles.Invoke(
              //    new MethodInvoker(delegate {
              //        selectedCollection = lstProcessFiles.SelectedItems; }));
          }
          else
          {
              selectedCollection = lstProcessFiles.SelectedItems;
          }

          // *********Same Error on this line********************
          // Cross-thread operation not valid: Control 'lstProcessFiles' accessed
          // from a thread other than the thread it was created on.
          foreach (string l in selectedCollection)
          {
              if (engine.LoadProcessDocument(String.Format(@"C:\TestDirectory\{0}", l)))
              {
                  try
                  {
                      engine.Run();
                      WriteStep(String.Format("Ran {0} Succussfully", l));
                  }
                  catch
                  {
                      WriteStep(String.Format("{0} Failed", l));
                  }
                  engine.PrintProcess();
                  WriteStep(String.Format("Rrinted {0} to debug", l));
              }
          }
      }

      private delegate void WriteDelegate(string p);
      private delegate ListBox.SelectedObjectCollection GetSelectedItemsDelegate(ListBox list);

      private ListBox.SelectedObjectCollection GetSelectedItems(ListBox list)
      {
          return list.SelectedItems;
      }

  • Memory leak of java.lang.ref.WeakReference objects inside JDK classes

    - by mandye
    The following simple code reproduces the growth of java.lang.ref.WeakReference objects in the heap:

      public static void main(String[] args) throws Exception {
          while (true) {
              java.util.logging.Logger.getAnonymousLogger();
              Thread.sleep(1);
          }
      }

    Here is the output of the jmap command, taken a few seconds apart:

      user@t1007:~ jmap -d64 -histo:live 29201|grep WeakReference
        8:   22493   1079664   java.lang.ref.WeakReference
       31:       1     32144   [Ljava.lang.ref.WeakReference;
      106:      17       952   com.sun.jmx.mbeanserver.WeakIdentityHashMap$IdentityWeakReference

      user@t1007:~ jmap -d64 -histo:live 29201|grep WeakReference
        8:   23191   1113168   java.lang.ref.WeakReference
       31:       1     32144   [Ljava.lang.ref.WeakReference;
      103:      17       952   com.sun.jmx.mbeanserver.WeakIdentityHashMap$IdentityWeakReference

      user@t1007:~ jmap -d64 -histo:live 29201|grep WeakReference
        8:   23804   1142592   java.lang.ref.WeakReference
       31:       1     32144   [Ljava.lang.ref.WeakReference;
      103:      17       952   com.sun.jmx.mbeanserver.WeakIdentityHashMap$IdentityWeakReference

    Note that the jmap command forces a full GC. JVM settings:

      export JVM_OPT="\
      -d64 \
      -Xms200m -Xmx200m \
      -XX:MaxNewSize=64m \
      -XX:NewSize=64m \
      -XX:+UseParNewGC \
      -XX:+UseConcMarkSweepGC \
      -XX:MaxTenuringThreshold=10 \
      -XX:SurvivorRatio=2 \
      -XX:CMSInitiatingOccupancyFraction=60 \
      -XX:+UseCMSInitiatingOccupancyOnly \
      -XX:+CMSParallelRemarkEnabled \
      -XX:+DisableExplicitGC \
      -XX:+CMSClassUnloadingEnabled \
      -XX:+PrintGCTimeStamps \
      -XX:+PrintGCDetails \
      -XX:+PrintTenuringDistribution \
      -XX:+PrintGCApplicationConcurrentTime \
      -XX:+PrintGCApplicationStoppedTime \
      -XX:+PrintGCApplicationStoppedTime \
      -XX:+PrintClassHistogram \
      -XX:+ParallelRefProcEnabled \
      -XX:SoftRefLRUPolicyMSPerMB=1 \
      -verbose:gc \
      -Xloggc:$GCLOGFILE"

      java version "1.6.0_18"
      Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
      Java HotSpot(TM) Server VM (build 16.0-b13, mixed mode)
      Solaris 10/Sun Fire(TM) T1000
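
    (A hedged observation, not part of the report: java.util.logging.Logger.getAnonymousLogger() returns a brand-new Logger on every call, so a loop like the one above keeps manufacturing loggers and the weak references that track them; reusing a single named logger, as sketched below, avoids that churn.)

      import java.util.logging.Logger;

      public class LoggerReuse {
          // Logger.getLogger returns the same instance for the same name, so no new
          // Logger (and no new tracking WeakReference) is created per iteration.
          private static final Logger LOG = Logger.getLogger(LoggerReuse.class.getName());

          public static void main(String[] args) throws InterruptedException {
              while (true) {
                  LOG.fine("tick");     // same Logger instance every time
                  Thread.sleep(1);
              }
          }
      }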

  • Can an OOM be caused by not finding enough contiguous memory?

    - by raticulin
    I start some Java code with -Xmx1024m, and at some point I get an hprof due to an OOM. The hprof shows just 320 MB and gives me a stack trace:

      at java.util.Arrays.copyOfRange([CII)[C (Arrays.java:3209)
      at java.lang.String.<init>([CII)V (String.java:215)
      at java.lang.StringBuilder.toString()Ljava/lang/String; (StringBuilder.java:430)
      ...

    This comes from a large string I am copying. I remember reading somewhere (I cannot find where) that what happens in these cases is: the process still has not consumed 1 GB of memory and is way below it; even if the heap is still below 1 GB, it needs some amount of memory, and for copyOfRange() it has to be contiguous memory; so even if it is not over the limit yet, it cannot find a large enough piece of memory on the host and fails with an OOM. I have tried to look for documentation on this (copyOfRange() needing a block of contiguous memory), but could not find any. The other possible culprit would be not enough permgen memory. Can someone confirm or refute the contiguous-memory hypothesis? Any pointer to some documentation would help too.

  • x86_64 printf segfault after brk call

    - by gmb11
    While I was trying to use brk (int 0x80 with 45 in %rax) to implement a simple memory manager program in assembly and print the blocks in order, I kept getting a segfault. After a while I could only reproduce the error, but have no idea why this is happening:

      .section .data
      helloworld:
          .ascii "hello world"

      .section .text
      .globl _start
      _start:
          push %rbp
          mov %rsp, %rbp

          movq $45, %rax
          movq $0, %rbx          #brk(0) should just return the current break of the program
          int $0x80

          #incq %rax             #segfault
          #addq $1, %rax         #segfault
          movq $0, %rax          #works fine?
          #addq $1, %rax         #segfault again?

          movq $helloworld, %rdi
          call printf

          movq $1, %rax          #exit
          int $0x80

    In the example here, if the commented lines are uncommented, I get a segfault, but some commands (like the movq $0, %rax) work just fine. In my other program, the first couple of printf calls work, but the third crashes... Looking for other questions, I heard that printf sometimes allocates some memory, and that brk shouldn't be used, because in this case it corrupts the heap or something... I'm very confused; does anyone know something about this? EDIT: I've just found out that for printf to work you need %rax=0.

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect it to be in the range of 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However, I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger that the pattern of deletion from the left plus random insertion will affect performance, e.g. by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or by taking advantage of the fact that I only ever need to find the smallest item? Update: glad I asked; the binary heap is clearly a better solution for this case, and as promised it turned out to be very easy to implement. Hugo
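
    (For illustration only: the access pattern described, remove the minimum and insert a few larger items, is exactly what a binary min-heap gives in O(log n) per operation. The hedged Java sketch below uses java.util.PriorityQueue, with long standing in for the "big rationals" and a made-up successor rule as the placeholder "work".)

      import java.util.PriorityQueue;

      public class MinHeapWorkLoop {
          public static void main(String[] args) {
              PriorityQueue<Long> queue = new PriorityQueue<Long>();
              queue.add(1L);                               // initialise with one item

              while (!queue.isEmpty()) {
                  long smallest = queue.poll();            // O(log n) removal of the minimum
                  for (long successor : produceSuccessors(smallest)) {
                      queue.add(successor);                // 0-2 new items, always larger
                  }
              }
          }

          private static long[] produceSuccessors(long item) {
              // placeholder "work": stop once items get large enough
              return item < 1000 ? new long[] { item * 2, item * 2 + 1 } : new long[0];
          }
      }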

  • J2ME cache issue

    - by kiennt
    I have to write a J2ME app to retrieve images from a server and display them on a mobile phone. I have seen and tested that Snaptu has a mechanism to cache images, even with 100 images (both normal size and zoom size). I wonder how they can do that? I thought those guys used RMS to save the image stream as data. But when I check the working folder of the simulator (I use Windows XP and the Sun Wireless Toolkit 3.0; the emulator device I use to run my program is CLDC Device 1, and my working folder is C:\Document And Settings\Administrator\javame-sdk\3.0\work\6\appdb), I see some .db files. When I delete these files, I can still view the cached images in my emulator???? I also thought that those guys use heap memory to save the images. But that cannot be correct, because when I set the device memory limit to 2 MB (like some mobile phones) and load and view 100 images in zoom size, it didn't produce an OutOfMemory error. It is so weird. Can anyone help me? Thanks
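
    (One plausible approach, sketched below purely as an assumption about how such an app might cache, not as Snaptu's actual implementation: keep the downloaded image bytes in the MIDP Record Management System and decode them into Image objects only when they are shown, so very little stays on the heap.)

      import javax.microedition.lcdui.Image;
      import javax.microedition.rms.RecordStore;
      import javax.microedition.rms.RecordStoreException;

      // Hedged MIDP sketch: cache raw PNG/JPEG bytes in RMS and decode lazily.
      public class ImageCache {
          private final RecordStore store;

          public ImageCache() throws RecordStoreException {
              store = RecordStore.openRecordStore("imgcache", true); // create if missing
          }

          /** Stores the downloaded bytes and returns the record id to remember. */
          public int put(byte[] imageBytes) throws RecordStoreException {
              return store.addRecord(imageBytes, 0, imageBytes.length);
          }

          /** Re-creates the Image from the cached bytes only when it is actually needed. */
          public Image get(int recordId) throws RecordStoreException {
              byte[] data = store.getRecord(recordId);
              return Image.createImage(data, 0, data.length);
          }
      }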

  • Can't start the portable version of NetBeans 6.9.1 IDE

    - by Coder
    I downloaded the portable version of NetBeans, netbeans-6.9.1-201007282301-ml.zip, from the NetBeans site and changed the config file in etc/netbeans.conf as indicated on the NetBeans site. The file contents are below.

      # ${HOME} will be replaced by JVM user.home system property
      #netbeans_default_userdir="${HOME}/.netbeans/6.9"
      netbeans_default_userdir=".netbeans/6.9"

      # Options used by NetBeans launcher by default, can be overridden by explicit
      # command line switches:
      netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-XX:MaxPermSize=200m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true"
      # Note that a default -Xmx is selected for you automatically.
      # You can find this value in var/log/messages.log file in your userdir.
      # The automatically selected value can be overridden by specifying -J-Xmx here
      # or on the command line.

      # If you specify the heap size (-Xmx) explicitely, you may also want to enable
      # Concurrent Mark & Sweep garbage collector. In such case add the following
      # options to the netbeans_default_options:
      # -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled
      # (see http://wiki.netbeans.org/wiki/view/FaqGCPauses)

      # Default location of JDK, can be overridden by using --jdkhome <dir>:
      #netbeans_jdkhome="/path/to/jdk"
      netbeans_jdkhome="C:\Program Files\Java\jdk1.6.0_24\"

      # Additional module clusters, using ${path.separator} (';' on Windows or ':' on Unix):
      #netbeans_extraclusters="/absolute/path/to/cluster1:/absolute/path/to/cluster2"

      # If you have some problems with detect of proxy settings, you may want to enable
      # detect the proxy settings provided by JDK5 or higher.
      # In such case add -J-Djava.net.useSystemProxies=true to the netbeans_default_options.

    But it refuses to start when I try to run it. If I change the JDK path to something incorrect, it complains that it can't find the JDK, so I think the JDK path is correct. It also creates a .netbeans directory when I try to start it. I don't see any errors; it just doesn't do anything else observable. Does anybody know how to set up this version of NetBeans? Thanks.

  • "Scheduling restart of crashed service", but no call to onStart() follows

    - by kostmo
    In the 1.6 API, is there a way to ensure that the onStart() method of a Service is called after the service is killed due to memory pressure? From the logs, it seems that the "process" that the service belongs to is restarted, but the service itself is not. I have placed a Log.d() call in the onStart() method, and this is not reached. To test my service under memory pressure, I spawn it from an activity, then launch the web browser and visit some Javascript-heavy websites like Slashdot until my service is killed. The logcat reads:

      03-07 16:44:13.778: INFO/ActivityManager(52): Process com.kostmo.charbuilder.full (pid 2909) has died.
      03-07 16:44:13.778: WARN/ActivityManager(52): Scheduling restart of crashed service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService in 5000ms
      03-07 16:44:13.778: INFO/ActivityManager(52): Low Memory: No more background processes.
      03-07 16:44:13.778: ERROR/ActivityThread(52): Failed to find provider info for android.server.checkin
      03-07 16:44:13.778: WARN/Checkin(52): Can't log event SYSTEM_SERVICE_LOOPING: java.lang.IllegalArgumentException: Unknown URL content://android.server.checkin/events
      03-07 16:44:18.908: INFO/ActivityManager(52): Start proc com.kostmo.charbuilder.full for service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService: pid=3560 uid=10027 gids={3003, 1015}
      03-07 16:44:19.868: DEBUG/ddm-heap(3560): Got feature list request
      03-07 16:44:20.128: INFO/ActivityThread(3560): Publishing provider com.kostmo.charbuilder.full.provider.character: com.kostmo.charbuilder.provider.ImageFileContentProvider

  • Boost causes an invalid block while overloading new/delete operators

    - by user555746
    Hi, I have a problem which appears to be an invalid memory block that occurs during a Boost call to boost::runtime::cla::parser::~parser. When the global delete is called on that object, the C++ runtime asserts that the memory block is invalid:

      dbgdel.cpp(52): /* verify block type */ _ASSERTE(_BLOCK_TYPE_IS_VALID(pHead->nBlockUse));

    An investigation I did revealed that the problem happens because of a global overloading of the new/delete operators. Those overloads are placed in a separate DLL. I discovered that the problem happens only when that DLL is compiled in Release while the main application is compiled in Debug. So I thought that the Release/Debug build flavors might have created a problem like this in Boost/CRT when overloading the new/delete operators. I then tried to explicitly call _malloc_dbg and _free_dbg within the overloaded functions, even in release mode, but it didn't solve the invalid heap block problem. Any idea what the root cause of the problem is? Is the situation solvable? I should stress that the problem began only when I started to use Boost. Before that, the CRT never complained about any invalid memory block. So could it be an internal Boost bug? Thanks!

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions: Why do the .Net sequence numbers differ in size from the CLFS sequence numbers? Why are the .Net sequence numbers variable in length? Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes, more if you address longer words)? If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all? It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.

  • RFC: Whitespace's Assembly Mnemonics

    - by Noctis Skytower
    Request For Comment regarding Whitespace's Assembly Mnemonics. What follows is a first-generation attempt at creating mnemonics for a Whitespace assembly language.

      STACK
      =====
      push number
      copy
      copy number
      swap
      away
      away number

      MATH
      ====
      add
      sub
      mul
      div
      mod

      HEAP
      ====
      set
      get

      FLOW
      ====
      part label
      call label
      goto label
      zero label
      less label
      back
      exit

      I/O
      ===
      ochr
      oint
      ichr
      iint

    In the interest of making improvements to this small and simple instruction set, this is a second attempt.

      hold N       Push the number onto the stack
      copy         Duplicate the top item on the stack
      copy N       Copy the nth item on the stack (given by the argument) onto the top of the stack
      swap         Swap the top two items on the stack
      drop         Discard the top item on the stack
      drop N       Slide n items off the stack, keeping the top item
      add          Addition
      sub          Subtraction
      mul          Multiplication
      div          Integer Division
      mod          Modulo
      save         Store
      load         Retrieve
      L:           Mark a location in the program
      call L       Call a subroutine
      goto L       Jump unconditionally to a label
      if=0 L       Jump to a label if the top of the stack is zero
      if<0 L       Jump to a label if the top of the stack is negative
      return       End a subroutine and transfer control back to the caller
      exit         End the program
      print chr    Output the character at the top of the stack
      print int    Output the number at the top of the stack
      input chr    Read a character and place it in the location given by the top of the stack
      input int    Read a number and place it in the location given by the top of the stack

    What is the general consensus on the following revised list for Whitespace's assembly instructions? They definitely come from thinking outside the box and trying to come up with a better mnemonic set than last time. When the previous Python interpreter was written, it was completed over two contiguous, rushed evenings. This rewrite deserves significantly more time now that it is the summer. Of course, the next version of Whitespace (0.4) may have its instructions revised even more, but this is just a redesign of what was originally done in a few hours. Hopefully, the instructions make more sense to those new to programming jargon.

  • ID3D10Device Memory Allocation Strategy and E_OUTOFMEMORY

    - by Buzz
    Hi guys, I want to know more details about the memory allocation strategy in ID3D10Device. Could you give me some help? The first question is: I know D3D10 has done some work on memory virtualization, which means the client doesn't need to consider where a buffer is reserved: GPU memory, AGP memory, or process system memory. Is this correct? The second question is: when I use ID3D10Device to call CreateBuffer repeatedly, no matter what the buffer desc type is, for example

      ID3D10Device::CreateBuffer( ... D3D10_USAGE_DEFAULT ... );
      ID3D10Device::CreateBuffer( ... D3D10_USAGE_IMMUTABLE ... );
      ID3D10Device::CreateBuffer( ... D3D10_USAGE_DYNAMIC ... );
      ID3D10Device::CreateBuffer( ... D3D10_USAGE_STAGING ... );

    etc., if CreateBuffer returns the error code "E_OUTOFMEMORY", does that mean the process's virtual memory is exhausted? And at that point, would memory allocation on the process default heap also fail? Thanks in advance!
