Search Results

Search found 960 results on 39 pages for 'heap'.

Page 17/39 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • How to safely operate on parameters in threads, using C++ & Pthreads?

    - by ChrisCphDK
    Hi. I'm having some trouble with a program using pthreads, where occasional crashes occur that could be related to how the threads operate on data. So I have some basic questions about how to program using threads, and about memory layout: Assume that a public class function performs some operations on some strings, and returns the result as a string. The prototype of the function could be like this: std::string SomeClass::somefunc(const std::string &strOne, const std::string &strTwo) { //Error checking of strings has been omitted std::string result = strOne.substr(0,5) + strTwo.substr(0,5); return result; } Is it correct to assume that strings, being dynamic, are stored on the heap, but that a reference to the string is allocated on the stack at runtime? Stack: [Some mem addr] pointer address to where the string is on the heap Heap: [Some mem addr] memory allocated for the initial string which may grow or shrink To make the function thread safe, the function is extended with the following mutex (which is declared as private in the "SomeClass") locking: std::string SomeClass::somefunc(const std::string &strOne, const std::string &strTwo) { pthread_mutex_lock(&someclasslock); //Error checking of strings has been omitted std::string result = strOne.substr(0,5) + strTwo.substr(0,5); pthread_mutex_unlock(&someclasslock); return result; } Is this a safe way of locking down the operations being done on the strings (all three), or could a thread be stopped by the scheduler in the following cases, which I'd assume would mess up the intended logic: a. Right after the function is called, and the parameters strOne & strTwo have been set in the two reference pointers that the function has on the stack, the scheduler takes away processing time for the thread and lets a new thread in, which overwrites the reference pointers to the function, which then again gets stopped by the scheduler, letting the first thread back in? b. Can the same occur with the "result" string: the first thread builds the result, unlocks the mutex, but before returning the scheduler lets in another thread which performs all of its work, overwriting the result, etc.? Or are the reference parameters / result string pushed onto the stack while another thread is performing its task? Is the safe / correct way of doing this in threads, and "returning" a result, to pass a reference to a string that will be filled with the result instead: void SomeClass::somefunc(const std::string &strOne, const std::string &strTwo, std::string &result) { pthread_mutex_lock(&someclasslock); //Error checking of strings has been omitted result = strOne.substr(0,5) + strTwo.substr(0,5); pthread_mutex_unlock(&someclasslock); } The intended logic is that several objects of the "SomeClass" class create new threads, pass themselves as parameters, and then call the function "somefunc": int SomeClass::startNewThread() { pthread_attr_t attr; pthread_t pThreadID; if(pthread_attr_init(&attr) != 0) return -1; if(pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED) != 0) return -2; if(pthread_create(&pThreadID, &attr, proxyThreadFunc, this) != 0) return -3; if(pthread_attr_destroy(&attr) != 0) return -4; return 0; } void* proxyThreadFunc(void* someClassObjPtr) { return static_cast<SomeClass*> (someClassObjPtr)->somefunc("long string","long string"); } Sorry for the long description, but I hope the questions and intended purpose are clear; if not, let me know and I'll elaborate. Best regards. Chris

    Read the article

  • Inappropriate Updates?

    - by Tony Davis
    A recent Simple-talk article by Kathi Kellenberger dissected the fastest SQL solution, submitted by Peter Larsson as part of Phil Factor's SQL Speed Phreak challenge, to the classic "running total" problem. In its analysis of the code, the article re-ignited a heated debate regarding the techniques that should, and should not, be deemed acceptable in your search for fast SQL code. Peter's code for running total calculation uses a variation of a somewhat contentious technique, sometimes referred to as a "quirky update": SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft This form of the UPDATE statement, @variable = column = expression, is documented and it allows you to set a variable to the value returned by the expression. Microsoft does not guarantee the order in which rows are updated in this technique because, in relational theory, a table doesn’t have a natural order to its rows and the UPDATE statement has no means of specifying the order. Traditionally, in cases where a specific order is required, such as for running aggregate calculations, programmers who used the technique have relied on the fact that the UPDATE statement, without the WHERE clause, is executed in the order imposed by the clustered index, or in heap order, if there isn’t one. Peter wasn’t satisfied with this, and so used the ingenious device of assuring the order of the UPDATE by the use of an "ordered CTE", based on an underlying temporary staging table (a heap). However, in either case, the ordering is still not guaranteed and, in addition, would be broken under conditions of parallelism, or partitioning. Many argue, with validity, that this reliance on a given order where none can ever be guaranteed is an abuse of basic relational principles, and so is a bad practice; perhaps even irresponsible. More importantly, Microsoft doesn't wish to support the technique and offers no guarantee that it will always work. If you put it into production and it breaks in a later version, you can't file a bug. As such, many believe that the technique should never be tolerated in a production system, under any circumstances. Is this attitude justified? After all, both forms of the technique, using a clustered index to guarantee the order or using an ordered CTE, have been tested rigorously and are proven to be robust; although not guaranteed by Microsoft, the ordering is reliable, provided none of the conditions that are known to break it are violated. In Peter's particular case, the technique is being applied to a temporary table, where the developer has full control of the data ordering, and indexing, and knows that the table will never be subject to parallelism or partitioning. It might be argued that, in such circumstances, the technique is not really "quirky" at all and to ban it from your systems would serve no real purpose other than to deprive yourself of a reliable technique that has uses that extend well beyond the running total calculations. Of course, it is doubly important that such a technique, including its unsupported status and the assumptions that underpin its success, is fully and clearly documented, preferably even when posting it online in a competition or forum post. Ultimately, however, this technique has been available to programmers throughout the time Sybase and SQL Server have existed, and so cannot be lightly cast aside, even if one sympathises with Microsoft for the awkwardness of maintaining an archaic way of doing updates.
After all, a Table hint could easily be devised that, if specified in the WITH (<Table_Hint_Limited>) clause, could be used to request the database engine to do the update in the conventional order. Then perhaps everyone would be satisfied. Cheers, Tony.

    Read the article

  • Anatomy of a .NET Assembly - Signature encodings

    - by Simon Cooper
    If you've just joined this series, I highly recommend you read the previous posts in this series, starting here, or at least these posts, covering the CLR metadata tables. Before we look at custom attribute encoding, we first need to have a brief look at how signatures are encoded in an assembly in general. Signature types There are several types of signatures in an assembly, all of which share a common base representation, and are all stored as binary blobs in the #Blob heap, referenced by an offset from various metadata tables. The types of signatures are: Method definition and method reference signatures. Field signatures Property signatures Method local variables. These are referenced from the StandAloneSig table, which is then referenced by method body headers. Generic type specifications. These represent a particular instantiation of a generic type. Generic method specifications. Similarly, these represent a particular instantiation of a generic method. All these signatures share the same underlying mechanism to represent a type Representing a type All metadata signatures are based around the ELEMENT_TYPE structure. This assigns a number to each 'built-in' type in the framework; for example, Uint16 is 0x07, String is 0x0e, and Object is 0x1c. Byte codes are also used to indicate SzArrays, multi-dimensional arrays, custom types, and generic type and method variables. However, these require some further information. Firstly, custom types (ie not one of the built-in types). These require you to specify the 4-byte TypeDefOrRef coded token after the CLASS (0x12) or VALUETYPE (0x11) element type. This 4-byte value is stored in a compressed format before being written out to disk (for more excruciating details, you can refer to the CLI specification). SzArrays simply have the array item type after the SZARRAY byte (0x1d). Multidimensional arrays follow the ARRAY element type with a series of compressed integers indicating the number of dimensions, and the size and lower bound of each dimension. Generic variables are simply followed by the index of the generic variable they refer to. There are other additions as well, for example, a specific byte value indicates a method parameter passed by reference (BYREF), and other values indicating custom modifiers. Some examples... To demonstrate, here's a few examples and what the resulting blobs in the #Blob heap will look like. Each name in capitals corresponds to a particular byte value in the ELEMENT_TYPE or CALLCONV structure, and coded tokens to custom types are represented by the type name in curly brackets. A simple field: int intField; FIELD I4 A field of an array of a generic type parameter (assuming T is the first generic parameter of the containing type): T[] genArrayField FIELD SZARRAY VAR 0 An instance method signature (note how the number of parameters does not include the return type): instance string MyMethod(MyType, int&, bool[][]); HASTHIS DEFAULT 3 STRING CLASS {MyType} BYREF I4 SZARRAY SZARRAY BOOLEAN A generic type instantiation: MyGenericType<MyType, MyStruct> GENERICINST CLASS {MyGenericType} 2 CLASS {MyType} VALUETYPE {MyStruct} For more complicated examples, in the following C# type declaration: GenericType<T> : GenericBaseType<object[], T, GenericType<T>> { ... 
    } the Extends field of the TypeDef for GenericType will point to a TypeSpec with the following blob: GENERICINST CLASS {GenericBaseType} 3 SZARRAY OBJECT VAR 0 GENERICINST CLASS {GenericType} 1 VAR 0 And a static generic method signature (generic parameters on types are referenced using VAR, generic parameters on methods using MVAR): TResult[] GenericMethod<TInput, TResult>( TInput, System.Converter<TInput, TResult>); GENERIC 2 2 SZARRAY MVAR 1 MVAR 0 GENERICINST CLASS {System.Converter} 2 MVAR 0 MVAR 1 As you can see, complicated signatures are recursively built up out of quite simple building blocks to represent all the possible variations in a .NET assembly. Now that we've looked at the basics of normal method signatures, in my next post I'll look at custom attribute application signatures, and how they are different to normal signatures.

    Read the article

  • Talking JavaOne with Rock Star Kirk Pepperdine

    - by Janice J. Heiss
    Kirk Pepperdine is not only a JavaOne Rock Star but a Java Champion and a highly regarded expert in Java performance tuning who works as a consultant, educator, and author. He is the principal consultant at Kodewerk Ltd. He speaks frequently at conferences and co-authored the Ant Developer's Handbook. In the rapidly shifting world of information technology, Pepperdine, as much as anyone, keeps up with what's happening with Java performance tuning. Pepperdine will participate in the following sessions: CON5405 - Are Your Garbage Collection Logs Speaking to You? BOF6540 - Java Champions and JUG Leaders Meet Oracle Executives (with Jeff Genender, Mattias Karlsson, Henrik Stahl, Georges Saab) HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Ellen Kraffmiller, Martijn Verburg, Jeff Genender, and Henri Tremblay) I asked him what technological changes need to be taken into account in performance tuning. “The volume of data we're dealing with just seems to be getting bigger and bigger all the time,” observed Pepperdine. “A couple of years ago you'd never think of needing a heap that was 64g, but today there are deployments where the heap has grown to 256g and tomorrow there are plans for heaps that are even larger. Dealing with all that data simply requires more horsepower and some very specialized techniques. In some cases, teams are trying to push hardware to the breaking point. Under those conditions, you need to be very clever just to get things to work -- let alone to get them to be fast. We are very quickly moving from a world where everything happens in a transaction to one where if you were to even consider using a transaction, you've lost." When asked about the greatest misconceptions about performance tuning that he currently encounters, he said, “If you have a performance problem, you should start looking at code at the very least and for that extra step, whip out an execution profiler. I'm not going to say that I never use execution profilers or look at code. What I will say is that execution profilers are effective for a small subset of performance problems and code is literally the last thing you should look at.” And what is the most exciting thing happening in the world of Java today? “Interesting question because so many people would say that nothing exciting is happening in Java. Some might be disappointed that a few features have slipped in terms of scheduling. But I'd disagree with the first group and I'm not so concerned about the slippage because I still see a lot of exciting things happening. First, lambda will finally be with us and with lambda will come better ways.” For JavaOne, he is proctoring for Heinz Kabutz's lab. “I'm actually looking forward to that more than I am to my own talk,” he remarked. “Heinz will be the third non-Sun/Oracle employee to present a lab and the first since Oracle began hosting JavaOne. He's got a great message. He's spent a ton of time making sure things are going to work, and we've got a great team of proctors to help out. After that, getting my talk done, the Java Champion's panel session and then kicking back and just meeting up and talking to some Java heads." Finally, what should Java developers know that they currently do not know? “’Write Once, Run Everywhere’ is a great slogan and Java has come closer to that dream than any other technology stack that I've used. That said, different hardware bits work differently and as hard as we try, the JVM can't hide all the differences.
Plus, if we are to get good performance we need to work with our hardware and not against it. All this implies that Java developers need to know more about the hardware they are deploying to.” Originally published on blogs.oracle.com/javaone.

    Read the article

  • outofmemoryerror when running jar but not when running in netbeans/ apache poi

    - by Laughy
    I basically have a program that filters records from one Excel file to another Excel file using Apache POI. My program runs fine when it runs using NetBeans. However, upon doing a clean and build and double-clicking the .jar file inside the dist folder, it runs for a very long time (too long!) and gives me the following error (which I got by running the program from the command prompt). Is there any workaround for it? I have already increased the heap size to -Xms1500m inside NetBeans before cleaning and building. Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space at org.apache.xmlbeans.impl.store.Saver$TextSaver.resize(Saver.java:1592) at org.apache.xmlbeans.impl.store.Saver$TextSaver.preEmit(Saver.java:1223) at org.apache.xmlbeans.impl.store.Saver$TextSaver.emit(Saver.java:1144) at org.apache.xmlbeans.impl.store.Saver$TextSaver.emitElement(Saver.java:926) at org.apache.xmlbeans.impl.store.Saver.processElement(Saver.java:456) at org.apache.xmlbeans.impl.store.Saver.process(Saver.java:307) at org.apache.xmlbeans.impl.store.Saver$TextSaver.saveToString(Saver.java:1727) at org.apache.xmlbeans.impl.store.Cursor._xmlText(Cursor.java:546) at org.apache.xmlbeans.impl.store.Cursor.xmlText(Cursor.java:2436) at org.apache.xmlbeans.impl.values.XmlObjectBase.xmlText(XmlObjectBase.java:1455) at org.apache.poi.xssf.model.SharedStringsTable.getKey(SharedStringsTable.java:130) at org.apache.poi.xssf.model.SharedStringsTable.addEntry(SharedStringsTable.java:176) at org.apache.poi.xssf.usermodel.XSSFCell.setCellType(XSSFCell.java:755) at equity.EquityFrame_Updated.copyRowsFromOldToNew(EquityFrame_Updated.java:646) at equity.EquityFrame_Updated.init(EquityFrame_Updated.java:133) at equity.EquityFrame_Updated.createAndShowGUI(EquityFrame_Updated.java:71) at equity.EquityFrame_Updated.<init>(EquityFrame_Updated.java:50) at equity.FileOpener.generateButtonPressed(FileOpener.java:160) at equity.FileOpener.access$100(FileOpener.java:17) at equity.FileOpener$2.actionPerformed(FileOpener.java:61) at javax.swing.AbstractButton.fireActionPerformed(Unknown Source) at javax.swing.AbstractButton$Handler.actionPerformed(Unknown Source) at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source) at javax.swing.DefaultButtonModel.setPressed(Unknown Source) at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(Unknown Source) at java.awt.Component.processMouseEvent(Unknown Source) at javax.swing.JComponent.processMouseEvent(Unknown Source) at java.awt.Component.processEvent(Unknown Source) at java.awt.Container.processEvent(Unknown Source) at java.awt.Component.dispatchEventImpl(Unknown Source) at java.awt.Container.dispatchEventImpl(Unknown Source) at java.awt.Component.dispatchEvent(Unknown Source)
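
    Worth noting: heap settings configured in NetBeans' project properties normally only apply when the IDE itself launches the program, so a double-clicked jar starts with the platform's default heap. A quick way to confirm what the standalone run actually gets is to print the JVM's heap ceiling; the class below is an illustrative sketch, not part of the original program.

      // Hypothetical check (not from the original project): prints the maximum
      // heap the running JVM will use, so the IDE run and the double-clicked
      // jar can be compared directly.
      public class HeapCheck {
          public static void main(String[] args) {
              long maxBytes = Runtime.getRuntime().maxMemory();
              System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
          }
      }

    If the standalone figure turns out to be the small default, launching the jar with something like "java -Xmx1500m -jar dist/MyApp.jar" (jar name assumed) gives the standalone run the larger heap.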

    Read the article

  • Hibernate: OutOfMemoryError persisting Blob when printing log message

    - by paul
    I have a Hibernate Entity: @Entity class Foo { //... @Lob public byte[] getBytes() { return bytes; } //.... } My VM is configured with a maximum heap size of 512 MB. When I try to persist an object which has a 75 MB large object, I get an OutOfMemoryError. The names of the methods in the stack trace (StringBuilder, ByteArrayBlobType.toLoggableString, pretty.Printer.toString) suggest that hibernate is trying to write a very large log message that contains my object. Am I correct about why hibernate is using so much memory? What is the simplest way to work around this problem? java.lang.OutOfMemoryError: Java heap space at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:44) at java.lang.StringBuilder.<init>(StringBuilder.java:81) at org.hibernate.type.ByteArrayBlobType.toString(ByteArrayBlobType.java:117) at org.hibernate.type.ByteArrayBlobType.toLoggableString(ByteArrayBlobType.java:127) at org.hibernate.pretty.Printer.toString(Printer.java:53) at org.hibernate.pretty.Printer.toString(Printer.java:90) at org.hibernate.event.def.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:97) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:26) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000) at org.jboss.seam.persistence.HibernateSessionProxy.flush(HibernateSessionProxy.java:181)
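
    The toLoggableString frames in the trace suggest the flush-time pretty-printer is rendering the whole byte[] into a DEBUG log message, so the most direct fix is usually to raise the log level for org.hibernate.pretty (or org.hibernate) above DEBUG. A complementary workaround, sketched below under the assumption of a Hibernate 3.x-era setup, is to map the lob as java.sql.Blob so the payload can be streamed by the driver instead of being held (and printed) as one array; whether this fully sidesteps the printer depends on the Hibernate version in use.

      import java.sql.Blob;
      import javax.persistence.Entity;
      import javax.persistence.Id;
      import javax.persistence.Lob;

      // Sketch of the entity with a streamed LOB instead of a materialized byte[].
      @Entity
      public class Foo {
          private Long id;
          private Blob bytes;

          @Id
          public Long getId() { return id; }
          public void setId(Long id) { this.id = id; }

          // java.sql.Blob lets the JDBC driver stream the 75 MB payload rather
          // than keeping it in the heap as a single array.
          @Lob
          public Blob getBytes() { return bytes; }
          public void setBytes(Blob bytes) { this.bytes = bytes; }
      }

    In 3.x-era Hibernate such a Blob can be created from an InputStream (for example via the Hibernate.createBlob helper, if it is available in the version in use), so the bytes never need to exist as one contiguous array on the Java side.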

    Read the article

  • OutOfMemoryException, large Private Data

    - by Captain Comic
    Hello, In a previous series: http://stackoverflow.com/questions/2543648/outofmemoryexception-stack-size-is-huge-large-number-of-threads I have a .NET Windows service that consumes a lot of memory. The GC heap is not big. Also, the stack size is not big. What is big is something called private data. Also, I can see in Task Manager that my application consumes a lot of something that Task Manager calls handles. My application consumes 2326 handles. I believe that these handles are some Windows handles that occupy private data. I can see that this private data is occupied by blocks marked as Thread Environment Block. What is that? Screenshot of my application memory usage by VMMap Screenshot of my application memory usage by Task Manager UPDATE: I ran Process Explorer. I have two instances of my service running at the moment. I can see that they consume a lot of virtual memory for the Gen2 GC. This looks suspicious. Also, the total reserved GC heap size is the same for the two processes.

    Read the article

  • Need help with buffer overrun.

    - by Morinar
    I've got a buffer overrun I absolutely can't seem to figure out (in C). First of all, it only happens maybe 10% of the time or so. The data that it is pulling from the DB each time doesn't seem to be all that much different between executions... at least not different enough for me to find any discernible pattern as to when it happens. The exact message from Visual Studio is this: A buffer overrun has occurred in hub.exe which has corrupted the program's internal state. Press Break to debug the program or Continue to terminate the program. For more details please see Help topic 'How to debug Buffer Overrun Issues'. If I debug, I find that it breaks in __report_gsfailure(), which I'm pretty sure is from the /GS flag on the compiler and also signifies that this is an overrun on the stack rather than the heap. I can also see the function it threw this from as it was returning, but I can't see anything in there that would cause this behavior; the function has also existed for a long time (10+ years, albeit with some minor modifications) and as far as I know, this has never happened before. I'd post the code of the function, but it's decently long and references a lot of proprietary functions/variables/etc. I'm basically just looking for either some idea of what I should be looking for that I haven't already, or perhaps some tools that may help. Unfortunately, nearly every tool I've found only helps with debugging overruns on the heap, and unless I'm mistaken, this is on the stack. Thanks in advance.

    Read the article

  • How to make sure Solr/Lucene won't die with java.lang.OutOfMemoryError?

    - by taw
    I'm really puzzled why it keeps dying with java.lang.OutOfMemoryError during indexing even though it has a few GBs of memory. Is there a fundamental reason why it needs manual tweaking of config files / jvm parameters instead of it just figuring out how much memory is available and limiting itself to that? No other programs except Solr ever have this kind of problem. Yes, I can keep tweaking JVM heap size every time such crashes happen, but this is all so backwards. Here's stack trace of the latest such crash in case it is relevant: SEVERE: java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOfRange(Arrays.java:3209) at java.lang.String.<init>(String.java:216) at org.apache.lucene.index.TermBuffer.toTerm(TermBuffer.java:122) at org.apache.lucene.index.SegmentTermEnum.term(SegmentTermEnum.java:169) at org.apache.lucene.search.FieldCacheImpl$StringIndexCache.createValue(FieldCacheImpl.java:701) at org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:208) at org.apache.lucene.search.FieldCacheImpl.getStringIndex(FieldCacheImpl.java:676) at org.apache.lucene.search.FieldComparator$StringOrdValComparator.setNextReader(FieldComparator.java:667) at org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.setNextReader(TopFieldCollector.java:94) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:245) at org.apache.lucene.search.Searcher.search(Searcher.java:171) at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:988) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:884) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:341) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:182) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) at java.lang.Thread.run(Thread.java:619)

    Read the article

  • Can i use a different parser for Axis 1.4?

    - by NishM
    The current SAX parser takes a lot of time (20 minutes) and heap memory (around 400 MB) to deserialize the response coming from the SOAP server, as per the logs. Our response XMLs are of average size 4 MB. A part of the log when it runs the application out of heap is below: DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16d22f1 DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00) DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@112c22) named {}name DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element name DEBUG (org.apache.axis.utils.NSStack) NSPush (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, name) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16d22f1 DEBUG (org.apache.axis.utils.NSStack) NSPop (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Popped element stack to org.apache.axis.message.MessageElement:property DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::endElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::startElement(, value) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(pushHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16880ba DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00) DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@1db74af) named {}value DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element value DEBUG (org.apache.axis.utils.NSStack) NSPush (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, value) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16880ba DEBUG (org.apache.axis.utils.NSStack) NSPop (32) I cannot use Axis2 because of technical reasons. I have tried using HTTP Commons client instead of HTTP client but the response time remains the same. How can I link a different parser (for example, Xerces 2.10.0 or XStream 1.3.1) to the Axis 1.4 framework in this context so that memory management and response time are favorable?
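
    Axis 1.4 normally obtains its SAX parser through the standard JAXP factory lookup (an assumption about its internals, not verified here), so one low-cost experiment is to pin the JAXP factories to a specific Xerces build on the classpath before any Axis client code runs. A sketch:

      // Hypothetical bootstrap: select a specific SAX/DOM implementation for
      // JAXP lookups. The class names are the standard Xerces factory classes;
      // adjust them to whatever parser jar is actually on the classpath.
      public class ParserBootstrap {
          public static void main(String[] args) {
              System.setProperty("javax.xml.parsers.SAXParserFactory",
                      "org.apache.xerces.jaxp.SAXParserFactoryImpl");
              System.setProperty("javax.xml.parsers.DocumentBuilderFactory",
                      "org.apache.xerces.jaxp.DocumentBuilderFactoryImpl");
              // ... then invoke the generated Axis 1.4 stub as usual
          }
      }

    Even with a different parser, much of the time and memory may come from Axis 1.4 building its full MessageElement tree for a 4 MB response, so swapping the parser is worth measuring rather than assuming it will fix the 400 MB figure.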

    Read the article

  • What is the relationship between Turing Machine & Modern Computer ? [closed]

    - by smwikipedia
    I have heard a lot that modern computers are based on the Turing machine. I just cannot build a bridge between a conceptual Turing Machine and a modern computer. Could someone help me build this bridge? Below is my current understanding. I think the computer is a big general-purpose Turing machine. Each program we write is a small specific-purpose Turing machine. The classical Turing machine does its job based on the input and its current state inside, and so do our programs. Let's take a running program (a process) as an example. We know that in the process's address space, there are areas for the stack, heap, and code. A classical Turing machine doesn't have the ability to remember many things, so we borrow the concept of a stack from the push-down automaton. The heap and stack areas contain the state of our specific-purpose Turing machine (our program). The code area represents the logic of this small Turing machine. And various I/O devices supply input to this Turing machine.

    Read the article

  • How should I go about implementing a points-to analysis in Maude?

    - by reprogrammer
    I'm going to implement a points-to analysis algorithm. I'd like to implement this analysis mainly based on the algorithm by Whaley and Lam. Whaley and Lam use a BDD-based implementation of Datalog to represent and compute the points-to analysis relations. The following lists some of the relations that are used in a typical points-to analysis. Note that D(w, z) :- A(w, x), B(x, y), C(y, z) means D(w, z) is true if A(w, x), B(x, y), and C(y, z) are all true. BDD is the data structure used to represent these relations. Relations input vP0 (variable : V, heap : H) input store (base : V, field : F, source : V) input load (base : V, field : F, dest : V) input assign (dest : V, source : V) output vP (variable : V, heap : H) output hP (base : H, field : F, target : H) Rules vP(v, h) :- vP0(v, h) vP(v1, h) :- assign(v1, v2), vP(v2, h) hP(h1, f, h2) :- store(v1, f, v2), vP(v1, h1), vP(v2, h2) vP(v2, h2) :- load(v1, f, v2), vP(v1, h1), hP(h1, f, h2) I need to understand whether Maude is a good environment for implementing points-to analysis. I noticed that Maude uses a BDD library called BuDDy. But it looks like Maude uses BDDs for a different purpose, i.e. unification. So, I thought I might be able to use Maude instead of a Datalog engine to compute the relations of my points-to analysis. I assume Maude propagates independent information concurrently, and this concurrency could potentially make my points-to analysis faster than sequential processing of rules. But I don't know the best way to represent my relations in Maude. Should I implement BDDs in Maude myself, or does Maude's internal BDD-based unification have the same effect?
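
    Independently of the Maude-versus-Datalog question, the rules above are just monotone set equations, so a tiny reference implementation can be useful for sanity-checking whatever engine ends up computing them. Below is a minimal Java sketch of a fixpoint loop for the first two rules only (vP0 seeding plus the assign rule); the names and representation are invented for illustration.

      import java.util.HashMap;
      import java.util.HashSet;
      import java.util.Map;
      import java.util.Set;

      // Naive fixpoint evaluation of:
      //   vP(v, h)  :- vP0(v, h)
      //   vP(v1, h) :- assign(v1, v2), vP(v2, h)
      // Variables and heap objects are plain strings; 'assign' maps dest -> sources.
      class SimplePointsTo {
          static Map<String, Set<String>> solve(Map<String, Set<String>> vP0,
                                                Map<String, Set<String>> assign) {
              Map<String, Set<String>> vP = new HashMap<String, Set<String>>();
              for (Map.Entry<String, Set<String>> e : vP0.entrySet()) {
                  vP.put(e.getKey(), new HashSet<String>(e.getValue()));
              }
              boolean changed = true;
              while (changed) {                 // iterate until no rule adds a new fact
                  changed = false;
                  for (Map.Entry<String, Set<String>> e : assign.entrySet()) {
                      Set<String> destPts = vP.get(e.getKey());
                      if (destPts == null) {
                          destPts = new HashSet<String>();
                          vP.put(e.getKey(), destPts);
                      }
                      for (String src : e.getValue()) {
                          Set<String> srcPts = vP.get(src);
                          if (srcPts != null && destPts.addAll(srcPts)) {
                              changed = true;
                          }
                      }
                  }
              }
              return vP;
          }
      }

    A BDD-backed Datalog engine computes the same closure, including the load/store rules, but with the relations encoded as BDDs so the blowup on real programs stays tractable.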

    Read the article

  • understanding valgrind output

    - by sbsp
    Hi, I made a post earlier asking about checking for memory leaks etc. I did say I wasn't too familiar with the terminal in Linux, but someone said to me it was easy with Valgrind. I have managed to get it running, but I'm not too sure what the output means. Glancing over it, all looks good to me, but I would like to run it past you experienced folk for confirmation if possible. The output is as follows: ^C==2420== ==2420== HEAP SUMMARY: ==2420== in use at exit: 2,240 bytes in 81 blocks ==2420== total heap usage: 82 allocs, 1 frees, 2,592 bytes allocated ==2420== ==2420== LEAK SUMMARY: ==2420== definitely lost: 0 bytes in 0 blocks ==2420== indirectly lost: 0 bytes in 0 blocks ==2420== possibly lost: 0 bytes in 0 blocks ==2420== still reachable: 2,240 bytes in 81 blocks ==2420== suppressed: 0 bytes in 0 blocks ==2420== Reachable blocks (those to which a pointer was found) are not shown. ==2420== To see them, rerun with: --leak-check=full --show-reachable=yes ==2420== ==2420== For counts of detected and suppressed errors, rerun with: -v ==2420== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 13 from 8) Is all good here? The only thing concerning me is the "still reachable" part. Is that OK? Thanks everyone

    Read the article

  • What is the relationship between Turing Machine & Modern Computer ?

    - by smwikipedia
    I have heard a lot that modern computers are based on the Turing machine. I just cannot build a bridge from a conceptual Turing Machine to a real modern computer. Could someone help me build this bridge? Below is my current understanding. I think the computer is a big general-purpose Turing machine. Each program we write is a small specific-purpose Turing machine. The classical Turing machine does its job based on the input and its current state inside, and so do our programs. Let's take a running program (a process) as an example. We know that in the process's address space, there are areas for the stack, heap, and code. A classical Turing machine doesn't have the ability to remember many things, so we borrow the concept of a stack from the push-down automaton. The heap and stack areas contain the state of our specific-purpose Turing machine (our program). The code area represents the logic of this small Turing machine. And various I/O devices supply input to this Turing machine.

    Read the article

  • Eclipse does not start on Windows 7

    - by van
    Suddenly, Eclipse has decided to stop working today. The last thing I did was close all perspectives and close Eclipse. When loading Eclipse from the command prompt, using: "eclipse.exe -clean" the splash screen loads for a split second then exits. When I run the command: eclipsec -consoleLog -debug it results in the following output: Start VM: -Dosgi.requiredJavaVersion=1.6 -Dhelp.lucene.tokenizer=standard -Xms4096m -Xmx4096m -XX:MaxPermSize=512m -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar -os win32 -ws win32 -arch x86_64 -showsplash d:\devtools\eclipse\\plugins\org.eclipse.platform_4.3.0.v20130605-2000\splash.bmp -launcher d:\devtools\eclipse\eclipsec.exe -name Eclipsec --launcher.library d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20130521-0416\eclipse_1503.dll -startup d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar --launcher.appendVmargs -product org.eclipse.epp.package.standard.product -consoleLog -debug -vm C:/Program Files/Java/jdk1.6.0_37/bin\..\jre\bin\server\jvm.dll -vmargs -Dosgi.requiredJavaVersion=1.6 -Dhelp.lucene.tokenizer=standard -Xms4096m -Xmx4096m -XX:MaxPermSize=512m -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar Error occurred during initialization of VM Incompatible minimum and maximum heap sizes specified Checking Task Manager shows no Java process running, and both the CPU and memory usage are very low. I have tried: re-installing Eclipse and re-starting my machine. But running eclipsec -consoleLog -debug from the command prompt still results in the same issue: Error occurred during initialization of VM Incompatible minimum and maximum heap sizes specified

    Read the article

  • Turing Machine & Modern Computer

    - by smwikipedia
    I have heard a lot that modern computers are based on the Turing machine. I'd like to share my understanding and hear your comments. I think the computer is a big general-purpose Turing machine. Each program we write is a small specific-purpose Turing machine. The classical Turing machine does its job based on the input and its current state inside, and so do our programs. Let's take a running program (a process) as an example. We know that in the process's address space, there are areas for the stack, heap, and code. A classical Turing machine doesn't have the ability to remember many things, so we borrow the concept of a stack from the push-down automaton. The heap and stack areas contain the state of our specific-purpose Turing machine (our program). The code area represents the logic of this small Turing machine. And various I/O devices supply input to this Turing machine. The above is my naive understanding of the working paradigm of the modern computer. I can't wait to hear your comments. Thanks very much.

    Read the article

  • What happens to my PriorityQueue if my Comparator throws an exception while it's busy bubbling up or

    - by nieldw
    Hi, I'm trying to order pairs of integers in ascending order, where a pair is considered less than another pair if both its entries are strictly less than those of the other pair, and larger than the other pair if both its entries are strictly larger than those of the other pair. All other cases are considered incomparable. The way I want to solve this is by defining a Comparator that implements the above, but will throw an exception for incomparable cases, and provide that to a PriorityQueue. Of course, while inserting a pair the priority queue does several comparisons while bubbling the new entry up to its correct position in the heap, and many of these will be comparable. But it may happen during the bubbling process that a pair is encountered with which this new pair is incomparable, and an exception will be thrown. If this happens, what will be the state of the PriorityQueue? Will the pair I was trying to insert sit in the heap at the last position it was in before the exception was thrown? If I use the PriorityQueue's remove(Object o) method, will the PriorityQueue be restored to a consistent state? Thanks
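
    For what it's worth, java.util.PriorityQueue makes no promise about its internal state if compare() throws part-way through a sift: in the usual OpenJDK implementation the size field is bumped before the new element reaches its final slot (an implementation detail, not a documented guarantee), so neither "it sits at its last position" nor "remove(Object) repairs it" should be relied on. The safer route is a comparator that never throws, i.e. some total order that refines the intended partial order. A sketch, with the Pair class invented here:

      import java.util.Comparator;
      import java.util.PriorityQueue;

      // Illustration only: a total order consistent with "both entries strictly
      // smaller". Incomparable pairs are ordered arbitrarily (by sum, then by
      // first entry) instead of throwing from compare().
      class Pair {
          final int a, b;
          Pair(int a, int b) { this.a = a; this.b = b; }
      }

      class PairOrder implements Comparator<Pair> {
          public int compare(Pair p, Pair q) {
              if (p.a < q.a && p.b < q.b) return -1;   // strictly smaller in both entries
              if (p.a > q.a && p.b > q.b) return 1;    // strictly larger in both entries
              int bySum = Integer.compare(p.a + p.b, q.a + q.b);   // arbitrary but total
              return bySum != 0 ? bySum : Integer.compare(p.a, q.a);
          }
      }

    Used as new PriorityQueue<Pair>(16, new PairOrder()), the queue's head is always a pair that no other queued pair is strictly smaller than, which is usually what the partial-order version was after.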

    Read the article

  • Java OutOfMemoryError message changes when trying to create Arrays of different sizes

    - by Gordon
    In the question by DKSRathore, How to simulate the Out Of memory : Requested array size exceeds VM limit, some odd behavior was noted when creating arrays. When creating an array of size Integer.MAX_VALUE, an exception with the error java.lang.OutOfMemoryError Requested array size exceeds VM limit was thrown. However, when an array was created with a size less than the max but still above the virtual machine memory limit, the error message read java.lang.OutOfMemoryError: Java heap space. Testing further, I managed to narrow down where the error message changes. long[] l = new long[2147483645]; the exception's message reads - Requested array size exceeds VM limit long[] l = new long[2147483644]; the exception's message reads - Java heap space. I increased my virtual machine memory and still produced the same result. Has anyone any idea why this happens? Some extra info: Integer.MAX_VALUE = 2147483647. Edit: Here's the code I used to find the value, might be helpful. int max = Integer.MAX_VALUE; boolean done = false; while (!done) { try { max--; // Throws an error long[] l = new long[max]; // Exit if an error is no longer thrown done = true; } catch (OutOfMemoryError e) { if (!e.getMessage().contains("Requested array size exceeds VM limit")) { System.out.println("Message changes at " + max); done = true; } } }
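
    One plausible reading (an inference, not from the linked question): lengths within a couple of elements of Integer.MAX_VALUE trip the VM's per-array length check before any allocation is attempted, while 2147483644 passes that check and then simply fails to fit, hence the plain "Java heap space" message. A quick bit of arithmetic shows why the second case cannot fit in an ordinary heap regardless of the -Xmx setting on a typical desktop:

      // Rough arithmetic only: the payload of long[2147483644], ignoring the
      // array header, is about 16 GiB.
      public class ArrayFootprint {
          public static void main(String[] args) {
              long elements = 2147483644L;
              long bytes = elements * 8L;   // 8 bytes per long element
              System.out.printf("%.1f GiB%n", bytes / (1024.0 * 1024 * 1024));
          }
      }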

    Read the article

  • How do I find Microsoft APIs?

    - by Stephen
    I'm a java programmer, and if I see something that: I don't know about or just want to find a method description without opening an ide or am on support I type java [classname] into google, and there it is. If I try this crazy stunt for C# I'll come up with a whole heap of tutorials (how do I use it etc). If I manage to get to MSDN, I have to wade through a page describing every .net technology to see how their syntax references the same object, and then I have to find the appropriate page from there ([class name] Constructor) for example. This is even more pronounced, because I don't have Visual Studio, so I've got nothing to make it easier. There must be something I'm missing or don't know... how does this situation work for Microsoft developers? how can I make my life easier/searches better? are there techniques that work no matter what computer I'm on (e.g. require no computer setup/downloads) Notes It could be thought that java is just "java", but it's just that the java apis are only referenced/defined in the core language. For all the other languages on the JVM, it's assumed that you will just learn the correct syntax to use the java apis. I presume that .Net only lists a whole heap of languages as the api classes are actually different and have different interfaces capabilities (or some approximation of this presumption). Edit While searching msdn works... in the java space I can type 'java [anyclass]' and it will generally be found... whether it's a java core api or a third party library

    Read the article

  • Why are virtual methods considered early bound?

    - by AspOnMyNet
    One definition of binding is that it is the act of replacing function names with memory addresses. a) Thus I assume early binding means function calls are replaced with memory addresses during compilation process, while with late binding this replacement happens during runtime? b) Why are virtual methods also considered early bound (thus the target method is found at compile time, and code is created that will call this method)? As far as I know, with virtual methods the call to actual method is resolved only during runtime and not compile time?! thanx EDIT: 1) A a=new A(); a.M(); As far as I know, it is not known at compile time where on the heap (thus at which memory address ) will instance a be created during runtime. Now, with early binding the function calls are replaced with memory addresses during compilation process. But how can compiler replace function call with memory address, if it doesn’t know where on the heap will object a be created during runtime ( here I’m assuming the address of method a.M will also be at same memory location as a )? 2) v-table calls are neither early nor late bound. Instead there's an offset into a table of function pointers. The offset is fixed at compile time, but which table the function pointer is chosen from depends on the runtime type of the object (the object contains a hidden pointer to its v-table), so the final function address is found at runtime. But assuming the object of type T is created via reflection ( thus app doesn’t even know of existence of type T ), then how can at compile time exist an entry point for that type of object?

    Read the article

  • Problem with using malloc in linked lists (urgent! help please)

    - by Abhinav
    I've been working on this program for five months now. It's a real-time application for a sensor network. I create several linked lists during the life of the program, and I'm using malloc for creating a new node in the list. What happens is that the program suddenly stops or goes crazy and restarts. I'm using AVR and the microcontroller is ATMEGA 1281. After a lot of debugging I figured out that the malloc is causing the problem. I do not free the memory after exiting the function that creates a new link, so I'm guessing that this is eventually causing the heap memory to overflow or something like that. Now if I use the free() function to deallocate the memory at the end of the function using malloc, the program just gets stuck when the control reaches free(). Is this because the memory becomes too fragmented after calling free()? I also create reference tables, for example if 'head' is a new linked list and I create another list called current and make it equal to head. table *head; table *current = head; At the end of the function, if I use free: free(current); current = NULL; then the program gets stuck here. I don't know what to do. What am I doing wrong? Is there a way to increase the size of the heap memory? Please help...

    Read the article

  • Use a non-coalescing parser in Axis2

    - by Nathan
    Does anyone know how I can get Axis2 to use a non-coalescing XMLStreamReader when it parses a SOAP message? I am writing code that reads a large base64 binary text element. Coalescing is the default behaviour, and this causes the default XMLStreamReader to load the entire text into memory rather than returning multiple CHARACTERS events. The upshot of this is that I run out of heap space when running the following code: reader = element.getTextAsStream( true ); The OutOfMemory error occurs in com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next: java.lang.OutOfMemoryError: Java heap space at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208) at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:226) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanContent(XMLDocumentFragmentScannerImpl.java:1552) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2864) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:116) at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:558) at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225) at org.apache.axiom.util.stax.dialect.DisallowDoctypeDeclStreamReaderWrapper.next(DisallowDoctypeDeclStreamReaderWrapper.java:34) at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225) at org.apache.axiom.util.stax.dialect.SJSXPStreamReaderWrapper.next(SJSXPStreamReaderWrapper.java:138) at org.apache.axiom.om.impl.builder.StAXOMBuilder.parserNext(StAXOMBuilder.java:668) at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:214) at org.apache.axiom.om.impl.llom.SwitchingWrapper.updateNextNode(SwitchingWrapper.java:1098) at org.apache.axiom.om.impl.llom.SwitchingWrapper.<init>(SwitchingWrapper.java:198) at org.apache.axiom.om.impl.llom.OMStAXWrapper.<init>(OMStAXWrapper.java:73) at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:67) at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:40) at org.apache.axiom.om.impl.llom.OMElementImpl.getXMLStreamReader(OMElementImpl.java:790) at org.apache.axiom.om.impl.llom.OMElementImplUtil.getTextAsStream(OMElementImplUtil.java:114) at org.apache.axiom.om.impl.llom.OMElementImpl.getTextAsStream(OMElementImpl.java:826) at org.example.UploadFileParser.invokeBusinessLogic(UploadFileParser.java:160)
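
    For reference, plain StAX outside Axiom shows the behaviour being asked for: with coalescing switched off, a large text node arrives as a series of CHARACTERS events whose chunks can be decoded incrementally. The sketch below is not Axis2-specific (and whether Axiom's StAXOMBuilder honours a reader configured this way is a separate question); the property used is the standard javax.xml.stream one.

      import java.io.StringReader;
      import javax.xml.stream.XMLInputFactory;
      import javax.xml.stream.XMLStreamConstants;
      import javax.xml.stream.XMLStreamException;
      import javax.xml.stream.XMLStreamReader;

      // Non-coalescing parsing: the content of <payload> may be delivered as
      // several CHARACTERS events, so each chunk can be fed to a streaming
      // base64 decoder without ever building the full string.
      public class NonCoalescingDemo {
          public static void main(String[] args) throws XMLStreamException {
              XMLInputFactory factory = XMLInputFactory.newInstance();
              factory.setProperty(XMLInputFactory.IS_COALESCING, Boolean.FALSE);
              XMLStreamReader reader = factory.createXMLStreamReader(
                      new StringReader("<payload>QUJDREVG...</payload>"));
              while (reader.hasNext()) {
                  if (reader.next() == XMLStreamConstants.CHARACTERS) {
                      char[] buf = reader.getTextCharacters();
                      int start = reader.getTextStart();
                      int len = reader.getTextLength();
                      // decode buf[start .. start+len) here, one chunk at a time
                  }
              }
          }
      }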

    Read the article

  • Is a call to the following method considered late binding?

    - by AspOnMyNet
    1) Assume: • B1 defines methods virtualM() and nonvirtualM(), where the former method is virtual while the latter is non-virtual • B2 derives from B1 • B2 overrides virtualM() • B2 is defined inside assembly A • Application app doesn’t have a reference to assembly A In the following code, application app dynamically loads assembly A, creates an instance of type B2 and calls the methods virtualM() and nonvirtualM(): Assembly asm = Assembly.Load(“A”); Type t = asm.GetType(“B2”); B1 b = (B1) Activator.CreateInstance(t); b.virtualM(); b.nonvirtualM(); a) Is the call to b.virtualM() considered early binding or late binding? b) I assume a call to b.nonvirtualM() is resolved at compile time? 2) Does the term late binding refer only to looking up the target method at run time, or does it also refer to creating an instance of a given type at runtime? thanx EDIT: 1) A a=new A(); a.M(); As far as I know, it is not known at compile time where on the heap (thus at which memory address) instance a will be created during runtime. Now, with early binding the function calls are replaced with memory addresses during the compilation process. But how can the compiler replace a function call with a memory address, if it doesn’t know where on the heap object a will be created during runtime (here I’m assuming the address of method a.M will also be at the same memory location as a)? 2) The method slot is determined at compile time. I assume that by method slot you’re referring to the entry point in the V-table?

    Read the article

  • Hibernate: Walk millions of rows and don't leak memory

    - by Autocracy
    The below code functions, but Hibernate never lets go of its grip of any object. Calling session.clear() causes exceptions regarding fetching a joined class, and calling session.evict(currentObject) before retrieving the next object also fails to free the memory. Eventually I exhaust my heap space. Checking my heap dumps, StatefulPersistenceContext is the garbage collector's root for all references pointing to my objects. public class CriteriaReportSource implements JRDataSource { private ScrollableResults sr; private Object currentObject; private Criteria c; private static final int scrollSize = 10; private int offset = 1; public CriteriaReportSource(Criteria c) { this.c = c; advanceScroll(); } private void advanceScroll() { // ((Session) Main.em.getDelegate()).clear(); this.sr = c.setFirstResult(offset) .setMaxResults(scrollSize) .scroll(ScrollMode.FORWARD_ONLY); offset += scrollSize; } public boolean next() { if (sr.next()) { currentObject = sr.get(0); if (sr.isLast()) { advanceScroll(); } return true; } return false; } public Object getFieldValue(JRField jrf) throws JRException { Object retVal = null; if(currentObject == null) { return null; } try { retVal = PropertyUtils.getProperty(currentObject, jrf.getName()); } catch (Exception ex) { Logger.getLogger(CriteriaReportSource.class.getName()).log(Level.SEVERE, null, ex); } return retVal; } }
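
    The grip described here is the session's first-level cache, which is exactly what the StatefulPersistenceContext GC root points at; clear() and evict() fight it object by object, whereas a StatelessSession never builds a persistence context in the first place. A rough sketch for a read-only scan, assuming Hibernate 3.x and an available SessionFactory (ReportRow is a stand-in for whatever mapped entity the Criteria targets):

      import org.hibernate.ScrollMode;
      import org.hibernate.ScrollableResults;
      import org.hibernate.SessionFactory;
      import org.hibernate.StatelessSession;

      class ReportRow { }  // stand-in for a mapped entity

      // Read-only scroll without a first-level cache: rows returned here are
      // detached, so nothing accumulates between iterations.
      public class ReportScan {
          public static void scan(SessionFactory factory) {
              StatelessSession session = factory.openStatelessSession();
              try {
                  ScrollableResults sr = session.createCriteria(ReportRow.class)
                          .scroll(ScrollMode.FORWARD_ONLY);
                  while (sr.next()) {
                      Object row = sr.get(0);
                      // hand 'row' to the report filler, then let it become garbage
                  }
                  sr.close();
              } finally {
                  session.close();
              }
          }
      }

    One caveat: a StatelessSession does not lazily load associations or collections, so any joined data the report needs has to be fetched explicitly in the query.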

    Read the article
