Search Results

Search found 1544 results on 62 pages for 'heap corruption'.

Page 45 of 62

  • What causes a JRE 6 JVM code cache leak?

    - by Arturo Knight
    Since switching to JRE 6, my server's code cache usage (non-heap) keeps growing indefinitely. My application creates a lot of classes at runtime, BUT these classes are successfully unloaded during the GC process. I can see these classes getting unloaded in the GC logs, and the permGen usage stays constant. I specifically make sure in my code that these classes are orphaned once I am finished with them, so they correctly get garbage collected from permGen. The code cache, however, keeps growing. I only became aware of the code cache after switching to JRE 6. So I guess my questions are: Does GC include the code cache? What, specifically, could cause a code cache memory leak? Is there a bug in JDK 6 in this area?
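
    For what it's worth, the code cache can be watched from inside the process via the standard java.lang.management API. A minimal monitoring sketch (not from the question; the pool name "Code Cache" is HotSpot-specific, and the cache's ceiling is set with -XX:ReservedCodeCacheSize):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryPoolMXBean;

        public class CodeCacheWatcher {
            public static void main(String[] args) throws InterruptedException {
                while (true) {
                    for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                        // On HotSpot, JIT-compiled code lives in a non-heap pool named
                        // "Code Cache"; it is managed by the JIT and its sweeper, not
                        // by the ordinary heap/permGen garbage collection.
                        if ("Code Cache".equals(pool.getName())) {
                            System.out.printf("code cache used: %d KB of %d KB max%n",
                                    pool.getUsage().getUsed() / 1024,
                                    pool.getUsage().getMax() / 1024);
                        }
                    }
                    Thread.sleep(10000); // sample every 10 seconds
                }
            }
        }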

  • Decoding bitmaps in Android with the right size

    - by hgpc
    I decode bitmaps from the SD card using BitmapFactory.decodeFile. Sometimes the bitmaps are bigger than what the application needs or than the heap allows, so I use BitmapFactory.Options.inSampleSize to request a subsampled (smaller) bitmap. The problem is that the platform does not enforce the exact value of inSampleSize, so I sometimes end up with a bitmap that is either too small, or still too big for the available memory. From http://developer.android.com/reference/android/graphics/BitmapFactory.Options.html#inSampleSize: "Note: the decoder will try to fulfill this request, but the resulting bitmap may have different dimensions than precisely what has been requested. Also, powers of 2 are often faster/easier for the decoder to honor." How should I decode bitmaps from the SD card to get a bitmap of the exact size I need while consuming as little memory as possible during decoding?
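
    A common two-pass approach (a sketch, not from the question) is to decode only the bounds first, pick the largest power-of-two inSampleSize that still yields a bitmap at least as large as the target, and then scale the subsampled result to the exact size:

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        public final class ExactDecoder {
            // Decode `path` to exactly targetW x targetH (both assumed > 0) using
            // as little memory as possible: subsample first, then scale precisely.
            public static Bitmap decode(String path, int targetW, int targetH) {
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inJustDecodeBounds = true;              // read dimensions only
                BitmapFactory.decodeFile(path, opts);

                int sample = 1;
                while (opts.outWidth / (sample * 2) >= targetW
                        && opts.outHeight / (sample * 2) >= targetH) {
                    sample *= 2;                             // powers of 2 decode fastest
                }

                opts = new BitmapFactory.Options();
                opts.inSampleSize = sample;
                Bitmap rough = BitmapFactory.decodeFile(path, opts);
                if (rough == null) return null;

                Bitmap exact = Bitmap.createScaledBitmap(rough, targetW, targetH, true);
                if (exact != rough) rough.recycle();         // free the intermediate copy
                return exact;
            }
        }

    The peak memory cost is then the subsampled bitmap plus the final one, rather than a full-size decode.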

  • Is ACE reactor timer management thread-safe?

    - by idimba
    I have a module that manages timers in my application. This class has basically three functions, and an instance of ACE_Reactor is used internally by the module to manage the timers: schedule timer - calls ACE_Reactor::schedule_timer(); one of the arguments is a callback, called upon timer expiration. cancel timer - calls ACE_Reactor::cancel_timer(). The reactor executes in a private thread of execution, so schedule/cancel and the timeout callback run in different threads. ACE_Reactor::schedule_timer() receives a heap-allocated structure (the arg argument). This structure is later deleted when the timer is canceled or when the timeout handler is called. But since cancel and the timeout handler run in different threads, it looks like there are cases where the structure is deleted twice. Isn't it the responsibility of the reactor to ensure that the timer is canceled when the timeout handler is called?

  • Threads and two-dimensional arrays in Objective-C?

    - by mactonny
    Hey, guys, I am just starting to wrap my head around Objective-C and I am doing a little project on iPhone. And I just encountered a weird problem. I have to deal with images in my program, so I have a lot of local variables declared like temp[width][height]. If I am not using NSThread to perform the image processing, it all works fine. However, if I use NSThread, it keeps giving me EXC_BAD_ACCESS whenever I try to access a 2-D array declared like temp[width][height]. So I have to allocate memory from the heap in order to have a 2-D array. That solves the problem, but I still don't get it. My first thought was stack overflow, but it all worked fine with one thread. I just don't get it.

  • luntbuild + maven + findbugs = OutOfMemoryException

    - by Johannes
    Hi, I've been trying to get Luntbuild to generate and publish a project site for our project, including a Findbugs report. All the other reports (Cobertura, Surefire, JavaDoc, Dashboard) work fine, but Findbugs bails out with an OutOfMemoryError. Excluding Findbugs from report generation fixes the build --- although obviously without a Findbugs report. The funny thing is that I first encountered this problem locally and solved it by setting MAVEN_OPTS=-Xmx512m. This does not seem to be enough in Luntbuild, however: setting that exact same option as an environment variable of my builder doesn't make a difference. I've found a couple of posts on the 'net stating you should also add -XX:MaxPermSize=512m to MAVEN_OPTS and/or pass -Dmaven.findbugs.jvmargs=-Xmx512m to mvn.bat. None of these (or their combination) seem to help, though, so any hints would be greatly appreciated! Cheers, Johannes

    Relevant information: Luntbuild is 1.5.6, Maven is 2.1.0, findbugs-maven-plugin is 2.0.1. This is the Findbugs section of the relevant pom.xml:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>findbugs-maven-plugin</artifactId>
          <version>2.0.1</version>
        </plugin>

    This is the head of my build log:

        User "luntbuild" started the build
        Perform checkout operation for VCS setting:
          Vcs name: Subversion
          Repository url base: http://some.repository.com/repo/
          Repository layout: multiple
          Directory for trunk: trunk
          Directory for branches: branches
          Directory for tags: tags
          Username: xxxx
          Password: xxxx
          Web interface: ViewVC
          URL to web interface: http://some.repository.com/repo/
          Quiet period:
          modules:
            Source path: somepath, Branch: , Label: , Destination path: somewhere
            Source path: somepath, Branch: somewhere1.0.x, Label: , Destination path: somewhere-1.0.x
            Source path: somepath, Branch: somewhere1.1.x, Label: , Destination path: somewhere-1.1.x
          Update url: http://some.repository.com/repo//trunk
        Duration of the checkout operation: 0 minutes
        Perform build with builder setting:
          Builder name: default
          Builder type: Maven2 builder
          Command to run Maven2: "C:\maven\apache-maven-2.1.0\bin\mvn.bat" -e -f somewhere\pom.xml -P site -Dmaven.test.skip=false -DbuildDate="Tue Nov 24 11:13:24 CET 2009" -DbuildVersion="site-core138" -Dsvn.username=xxxx -Dsvn.password=xxxx -DstagingSiteURL=file:///C:/luntbuild/core-reports -Dmaven.findbugs.jvmargs=-Xmx512m
          Directory to run Maven2 in:
          Goals to build: site:stage site:stage-deploy
          Build properties: buildVersion="site-core138" artifactsDir="C:\\Program Files\\Luntbuild\\publish\\somewhere\\site-core\\site-core138\\artifacts" buildDate="Tue Nov 24 11:13:24 CET 2009" junitHtmlReportDir=""
          Environment variables: MAVEN_OPTS="-Xmx512m -XX:MaxPermSize=512m"
          Build success condition: result==0 and builderLogContainsLine("INFO","BUILD SUCCESSFUL")
        Execute command:
        Executing 'C:\maven\apache-maven-2.1.0\bin\mvn.bat' with arguments: '-e' '-f' 'somewhere\pom.xml' '-P' 'site' '-Dmaven.test.skip=false' '-DbuildDate=Tue Nov 24 11:13:24 CET 2009' '-DbuildVersion=site-core138' '-Dsvn.username=xxxxxx' '-Dsvn.password=xxxxxx' '-DstagingSiteURL=file:///C:/luntbuild/reports' '-Dmaven.findbugs.jvmargs=-Xmx512m' '-DbuildVersion=site-core138' '-DartifactsDir=C:\\Program Files\\Luntbuild\\publish\\somewhere\\site-core\\site-core138\\artifacts' '-DbuildDate=Tue Nov 24 11:13:24 CET 2009' '-X' 'site:stage' 'site:stage-deploy'

    This is the tail of my build log:

        Analyzed: C:\luntbuild\somewhere-work\somewhere\...\SomeClass.class
        ...
        Analyzed: C:\luntbuild\somewhere-work\somewhere\...\target\classes
        Aux: C:\luntbuild\somewhere-work\somewhere\...\target\classes
        Aux: c:\maven\local-repo\...\somejar-1.1.1.1-SNAPSHOT.jar
        Aux: c:\maven\local-repo\commons-lang\commons-lang\2.3\commons-lang-2.3.jar
        ....
        Aux: c:\maven\local-repo\org\openoffice\ridl\3.1.0\ridl-3.1.0.jar
        Aux: c:\maven\local-repo\org\openoffice\unoil\3.1.0\unoil-3.1.0.jar
        [INFO] ------------------------------------------------------------------------
        [ERROR] FATAL ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Java heap space
        [INFO] ------------------------------------------------------------------------
        [DEBUG] Trace
        java.lang.OutOfMemoryError: Java heap space
          at java.util.HashMap.<init>(HashMap.java:209)
          at edu.umd.cs.findbugs.ba.type.TypeAnalysis$CachedExceptionSet.<init>(TypeAnalysis.java:114)
          at edu.umd.cs.findbugs.ba.type.TypeAnalysis.getCachedExceptionSet(TypeAnalysis.java:688)
          at edu.umd.cs.findbugs.ba.type.TypeAnalysis.computeThrownExceptionTypes(TypeAnalysis.java:439)
          at edu.umd.cs.findbugs.ba.type.TypeAnalysis.transfer(TypeAnalysis.java:411)
          at edu.umd.cs.findbugs.ba.type.TypeAnalysis.transfer(TypeAnalysis.java:89)
          at edu.umd.cs.findbugs.ba.Dataflow.execute(Dataflow.java:356)
          at edu.umd.cs.findbugs.classfile.engine.bcel.TypeDataflowFactory.analyze(TypeDataflowFactory.java:82)
          at edu.umd.cs.findbugs.classfile.engine.bcel.TypeDataflowFactory.analyze(TypeDataflowFactory.java:44)
          at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.analyzeMethod(AnalysisCache.java:331)
          at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getMethodAnalysis(AnalysisCache.java:281)
          at edu.umd.cs.findbugs.classfile.engine.bcel.CFGFactory.analyze(CFGFactory.java:173)
          at edu.umd.cs.findbugs.classfile.engine.bcel.CFGFactory.analyze(CFGFactory.java:64)
          at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.analyzeMethod(AnalysisCache.java:331)
          at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getMethodAnalysis(AnalysisCache.java:281)
          at edu.umd.cs.findbugs.ba.ClassContext.getMethodAnalysis(ClassContext.java:937)
          at edu.umd.cs.findbugs.ba.ClassContext.getMethodAnalysisNoDataflowAnalysisException(ClassContext.java:921)
          at edu.umd.cs.findbugs.ba.ClassContext.getCFG(ClassContext.java:326)
          at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.analyzeMethod(BuildUnconditionalParamDerefDatabase.java:103)
          at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.considerMethod(BuildUnconditionalParamDerefDatabase.java:93)
          at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.visitClassContext(BuildUnconditionalParamDerefDatabase.java:79)
          at edu.umd.cs.findbugs.DetectorToDetector2Adapter.visitClass(DetectorToDetector2Adapter.java:68)
          at edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:971)
          at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:222)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:86)
          at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:230)
          at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:912)
          at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:756)
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 17 minutes 16 seconds
        [INFO] Finished at: Tue Nov 24 11:31:23 CET 2009
        [INFO] Final Memory: 70M/127M
        [INFO] ------------------------------------------------------------------------
        Maven2 builder failed: build success condition not met!

    Note that apparently Maven itself only uses 70 MB... but that probably doesn't mean anything, since the Findbugs plugin forks its own process.
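
    For reference, later releases of findbugs-maven-plugin expose configuration for the forked analysis JVM so it gets its own heap limit; whether the 2.0.1 release used here supports the maxHeap parameter is an assumption to verify against that version's documentation. A hypothetical configuration sketch:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>findbugs-maven-plugin</artifactId>
          <version>2.0.1</version>
          <configuration>
            <!-- Assumption: maxHeap (in MB) is documented for newer 2.x releases
                 of the plugin; verify it exists in the release you are running. -->
            <maxHeap>512</maxHeap>
          </configuration>
        </plugin>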

  • Regex to match the first file in a rar archive file set in Python

    - by mridang
    I need to uncompress all the files in a directory, and for this I need to find the first file in the set. I'm currently doing this using a bunch of if statements and loops. Can I do this using a regex? Here's a list of files that I need to match: yes.rar yes.part1.rar yes.part01.rar yes.part001.rar yes.r01 yes.r001 These should NOT be matched: no.part2.rar no.part02.rar no.part002.rar no.part011.rar no.r002 no.r02 I found a similar regex on this thread, but it seems that Python doesn't support variable-length lookarounds. A single-line regex would be complicated, but I'll document it well and it's not a problem. It's just one of those problems you beat your head up over. Thanks in advance, guys. :)

  • phing and phpUnderControl ... working together

    - by Paul Hanssen
    Hi, Has anyone got these to work together seamlessly? I have tried, and had some success using the plugin at http://phing.info/trac/wiki/Users/Documentation/CruiseControl, but have failed to: (1) get the metrics graphs working (nothing appears); (2) enable the "PMD" (project mess detection) reports. Are there any other ant-specific commands that must (or can) be run in addition to my phing build script? Also, the front page of the reports section dumps a heap of log information, and I'm trying to get rid of that too. Cheers for any help ... we are running phing 2.3.0 and phpUnderControl 0.4.7. Paul

  • Trying to cause java.lang.OutOfMemoryError

    - by portoalet
    Hi, I am trying to reproduce a java.lang.OutOfMemoryError in JBoss 4, which one of our clients got, presumably after running the J2EE applications for days/weeks. I am trying to find a way to make the webapp spit out a java.lang.OutOfMemoryError in a matter of minutes (instead of days/weeks). One thing that comes to mind is to write a Selenium script and have it bombard the webapp. Another thing we could do is reduce the JVM heap size, but we would prefer not to, as we want to see the limit of our system. Any suggestions? PS: I don't have access to the source code.
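
    Strictly speaking, the JVM throws java.lang.OutOfMemoryError, not an exception. The quickest reproduction is simply to retain allocations so the GC cannot reclaim them; a throwaway sketch (not tied to the original webapp):

        import java.util.ArrayList;
        import java.util.List;

        public class OomGenerator {
            // Static root: everything added here stays reachable forever,
            // so the heap fills up in seconds rather than weeks.
            private static final List<byte[]> HOG = new ArrayList<byte[]>();

            public static void main(String[] args) {
                while (true) {
                    HOG.add(new byte[1024 * 1024]); // retain 1 MB per iteration
                }
            }
        }

    Putting the same static-list trick inside a servlet's doGet() and hammering the URL (Selenium, or curl in a shell loop) would exercise the real JBoss deployment path instead.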

  • Memory Allocation Profiling in C++

    - by Amit Kumar
    I am writing an application and am surprised to see its total memory usage is already too high. I want to profile the dynamic memory usage of my application: How many objects of each kind are there on the heap, and which functions created these objects? Also, how much memory is used by each of the objects? Is there a simple way to do this? I am working on both Linux and Windows, so tools for either platform would suffice. NOTE: I am not concerned with memory leaks here.

  • OutOfMemoryException - out of ideas

    - by Captain Comic
    Hi, I have a .NET Windows service that is constantly throwing OutOfMemoryException. The service has two builds, for x86 and x64 Windows; however, on x64 it consumes a lot more memory. I have tried profiling it with various memory profilers, but I cannot get a clue what the problem is. The diagnosis: the service consumes a lot of VM size. I also tried to look at performance counters (perfmon.exe). What I can see is that the heap size is growing and %GC time is 19%. My application has threads and locking objects, DB connections and a WCF interface. See the first app in the list. The link to a picture with the performance counters view: http://s006.radikal.ru/i215/1003/0b/ddb3d6c80809.jpg

  • *** glibc detected *** perl: munmap_chunk(): invalid pointer

    - by sid_com
    At the end of a script's output (parsing an XHTML site with XML::LibXML::Reader) I get this:

        *** glibc detected *** perl: munmap_chunk(): invalid pointer: 0x0000000000b362e0 ***
        ======= Backtrace: =========
        /lib64/libc.so.6[0x7fb84952fc76]
        /usr/lib64/libxml2.so.2[0x7fb848b75e17]
        /usr/lib64/libxml2.so.2(xmlHashFree+0xa6)[0x7fb848b691b6]
        ...
        ...
        ======= Memory map: ========
        00400000-0053d000 r-xp 00000000 08:01 182002 /usr/local/bin/perl
        0073c000-0073d000 r--p 0013c000 08:01 182002 /usr/local/bin/perl
        0073d000-00741000 rw-p 0013d000 08:01 182002 /usr/local/bin/perl
        00741000-00c60000 rw-p 00000000 00:00 0 [heap]
        7fb8482cd000-7fb8482e3000 r-xp 00000000 08:01 2404 /lib64/libgcc_s.so.1
        ...
        ...

    Is this due to a bug?

  • java.lang.OutOfMemoryError when fetching records from database

    - by Nisarg Mehta
    Hi All, When I try to fetch around 20,000 records and return them in an ArrayList, it throws a Java heap space error:

        JdbcTemplate select = new JdbcTemplate(dataSource);
        String SQL_SELECT_XML_ADDRESS = " SELECT * FROM " + SCHEMA + ".XML_ADDRESS "
                                      + " WHERE FILE_NAME = ? ";
        Object[] parameters = new Object[] { xmlFileName };
        return (ArrayList<XmlAddressDto>) select.query(SQL_SELECT_XML_ADDRESS,
                parameters, new XmAddressMapExt());

    Is there any solution for this? How should I process this effectively?
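
    One way to avoid materializing all 20,000 DTOs at once is to stream the result set with Spring's RowCallbackHandler instead of returning a mapped list. A sketch, assuming rows can be handled (written out, aggregated) one at a time; the ADDRESS column is a hypothetical stand-in for the real ones:

        import java.sql.ResultSet;
        import java.sql.SQLException;
        import javax.sql.DataSource;
        import org.springframework.jdbc.core.JdbcTemplate;
        import org.springframework.jdbc.core.RowCallbackHandler;

        public class XmlAddressStreamer {
            private final JdbcTemplate select;

            public XmlAddressStreamer(DataSource dataSource) {
                select = new JdbcTemplate(dataSource);
                select.setFetchSize(500); // hint the driver to fetch rows in batches
            }

            public void process(String schema, String xmlFileName) {
                String sql = " SELECT * FROM " + schema + ".XML_ADDRESS "
                           + " WHERE FILE_NAME = ? ";
                select.query(sql, new Object[] { xmlFileName }, new RowCallbackHandler() {
                    public void processRow(ResultSet rs) throws SQLException {
                        // Handle one row at a time; nothing accumulates on the heap.
                        String address = rs.getString("ADDRESS"); // hypothetical column
                        // ... write out / aggregate here ...
                    }
                });
            }
        }

    Whether setFetchSize is honored is driver-dependent, but the callback shape alone removes the single giant ArrayList.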

  • CPU monitoring and profiling not supported for remote jvisualvm session

    - by yawn
    When monitoring a remote app (using jstatd), I can neither profile nor monitor CPU consumption. Heap monitoring (provided I do not use G1) works. jvisualvm shows the message "Not supported for this JVM." in the CPU graph window. Is there anything missing in my setup? Google turned up next to no results.

    The local environment (Mac OS X 10.6):

        java version "1.6.0_15"
        Java(TM) SE Runtime Environment (build 1.6.0_15-b03-219)
        Java HotSpot(TM) 64-Bit Server VM (build 14.1-b02-90, mixed mode)

    The remote environment (Linux version 2.6.16.27-0.9-smp (gcc version 4.1.0 (SUSE Linux))):

        java version "1.6.0_16"
        Java(TM) SE Runtime Environment (build 1.6.0_16-b01)
        Java HotSpot(TM) 64-Bit Server VM (build 14.2-b01, mixed mode)

    Local monitoring works as advertised.
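
    jstatd only exposes jvmstat counters, which is why the heap graph works while the CPU graph reports "Not supported for this JVM." A commonly used workaround is to open a JMX connection alongside jstatd; these are the standard JDK 6 flags (shown with authentication and SSL disabled, which is only sensible on a trusted network; yourapp.jar is a placeholder):

        java -Dcom.sun.management.jmxremote.port=3333 \
             -Dcom.sun.management.jmxremote.authenticate=false \
             -Dcom.sun.management.jmxremote.ssl=false \
             -jar yourapp.jar

    Then use File > Add JMX Connection in jvisualvm. CPU monitoring should appear over JMX; full profiling of a remote JVM is a different matter and may still be unavailable.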

  • Storing varchar(max) & varbinary(max) together - Problem?

    - by Tony Basallo
    I have an app that will have entries of both varchar(max) and varbinary(max) data types. I was considering putting these both in a separate table, together, even if only one of the two will be used at any given time. The question is whether storing them together has any impact on performance. Considering that they are stored in the heap, I'm thinking that having them together will not be a problem. However, the varchar(max) column will probably have the 'text in row' table option set. I couldn't find any performance testing or profiling while "googling bing," probably too specific a question? The SQL Server 2008 table looks like this:

        Id
        ParentId
        Version
        VersionDate
        StringContent  - varchar(max)
        BinaryContent  - varbinary(max)

    The app will decide which of the two columns to select when the data is queried. The string column will be used much more frequently than the binary column - will this have any impact on performance?

  • Random Page Cost and Planning

    - by Dave Jarvis
    The query below extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively:

        CREATE UNIQUE INDEX measurement_001_stc_idx
          ON climate.measurement_001
          USING btree (station_id, taken, category_id);

    Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 gave a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan:

        sc.taken_start >= '1900-01-01'::date AND
        sc.taken_end <= '1997-12-31'::date AND

    How do I persuade PostgreSQL to use the indexes regardless of the number of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you!

    Query:

        SELECT
          extract(YEAR FROM m.taken) AS year,
          avg(m.amount) AS amount
        FROM
          climate.city c,
          climate.station s,
          climate.station_category sc,
          climate.measurement m
        WHERE
          c.id = 5182 AND
          earth_distance(
            ll_to_earth(c.latitude_decimal, c.longitude_decimal),
            ll_to_earth(s.latitude_decimal, s.longitude_decimal)) / 1000 <= 30 AND
          s.elevation BETWEEN 0 AND 3000 AND
          s.applicable = TRUE AND
          sc.station_id = s.id AND
          sc.category_id = 1 AND
          sc.taken_start >= '1900-01-01'::date AND
          sc.taken_end <= '1996-12-31'::date AND
          m.station_id = s.id AND
          m.taken BETWEEN sc.taken_start AND sc.taken_end AND
          m.category_id = sc.category_id
        GROUP BY
          extract(YEAR FROM m.taken)
        ORDER BY
          extract(YEAR FROM m.taken)

    1900 to 1996: Index

        Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort Memory: 32kB
          -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)
            -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)
              Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))
              -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)
                Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)
                  Index Cond: (id = 5182)
                -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)
                  -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)
                    Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))
                  -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)
                    Index Cond: (s.id = sc.station_id)
                    Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))
              -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)
                -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)
                  Filter: (m.category_id = 1)
                -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)
                  Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
                  -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)
                    Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
        Total runtime: 2269.264 ms

    1900 to 1997: Full Table Scan

        Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort Memory: 32kB
          -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)
            -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)
              Hash Cond: (m.station_id = sc.station_id)
              Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))
              -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)
                -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)
                  Filter: (category_id = 1)
                -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)
                  Filter: (category_id = 1)
              -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)
                -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)
                  Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                  -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)
                    Index Cond: (id = 5182)
                  -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)
                    Hash Cond: (s.id = sc.station_id)
                    -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)
                      Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))
                    -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)
                      -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)
                        Recheck Cond: (category_id = 1)
                        Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))
                        -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)
                          Index Cond: (category_id = 1)
        Total runtime: 86165.936 ms

  • OutOfMemoryError trying to upload to Blobstore locally

    - by jeanh
    Hi all, I am trying to set up a basic file upload to the blobstore, but I get this OutOfMemoryError:

        WARNING: Error for /_ah/upload/aghvbWdkcmVzc3IcCxIVX19CbG9iVXBsb2FkU2Vzc2lvbl9fGMACDA
        java.lang.OutOfMemoryError: Java heap space
          at java.util.Arrays.copyOf(Arrays.java:2786)
          at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:71)
          at javax.mail.internet.MimeMultipart.readTillFirstBoundary(MimeMultipart.java:316)
          at javax.mail.internet.MimeMultipart.parse(MimeMultipart.java:186)
          at javax.mail.internet.MimeMultipart.getCount(MimeMultipart.java:109)
          at com.google.appengine.api.blobstore.dev.UploadBlobServlet.handleUpload(UploadBlobServlet.java:135)
          at com.google.appengine.api.blobstore.dev.UploadBlobServlet.access$000(UploadBlobServlet.java:72)
          at com.google.appengine.api.blobstore.dev.UploadBlobServlet$1.run(UploadBlobServlet.java:100)
          at java.security.AccessController.doPrivileged(Native Method)
          at com.google.appengine.api.blobstore.dev.UploadBlobServlet.doPost(UploadBlobServlet.java:98)
          at javax.servlet.http.HttpServlet.service(HttpServlet.java:713)
          at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
          at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)

    I used the Memory Analyzer in Eclipse and it said that the memory leak suspect is QueuedThreadPool. I found this information about a memory leak bug: http://jira.codehaus.org/browse/JETTY-1188 Has anyone else had this issue? Thanks, Jean

  • Advanced Java book along the lines of CLR via C# or C# in Depth?

    - by devoured elysium
    I want to learn how things work in depth in Java. Coming from a C# background, there were a couple of very good books that go really deep into C# (C# in Depth and CLR via C#, just to name the most popular). Is there anything like that for Java? I searched on Amazon, but nothing seemed to go as deep into Java as the two above go into C#. I don't want to know more about specific classes, or how to use this library or that other library; I want to learn how objects are created in memory, how they get created on the stack, the heap, etc. A more fundamental knowledge, let's say. I've read some chapters of Effective Java and The Java Programming Language, but they don't seem to go as deep as I'd want them to. Maybe there are other people who know both C# and Java, have read any of the referred books, and know any that might be useful? Thanks

  • Java OutOfMemoryError

    - by dqm
    I am running the following command on a Unix box:

        java -Xms3800m -Xmx3800m org.apache.xalan.xslt.Process -out Cust.txt -in test13l.xml -xsl CustDetails.xsl

    It is a Java command which calls the Xalan processor to parse the XML file (test13l.xml) using the XSL stylesheet (CustDetails.xsl) and returns Cust.txt. The command works fine and the output is generated. It takes 12 minutes to process an XML file of 1.1 GB and 22 minutes to process a file of 1.44 GB. However, when I try to process a file of 1.66 GB, it errors out with the following message:

        (Location of error unknown)XSLT Error (java.lang.OutOfMemoryError): null

    I have already increased the Java heap size to 3800 MB; I'm not sure what more I can do. Many thanks for your help.

  • Android OutOfMemoryError - Loading JSON File

    - by jeremynealbrown
    The app I am working on needs to read a JSON file that may be anywhere from 1.5 to 3 MB in size. It seems to have no problem opening the file and converting the data to a string, but when it attempts to convert the string to a JSONArray, OutOfMemoryErrors are thrown. The exceptions look something like this:

        E/dalvikvm-heap( 5307): Out of memory on a 280-byte allocation.
        W/dalvikvm( 5307): Exception thrown (Ljava/lang/OutOfMemoryError;) while throwing internal exception (Ljava/lang/OutOfMemoryError;)

    One strange thing about this is that the crash only occurs every 2nd or 3rd time the app is run, leaving me to believe that the memory consumed by the app is not being garbage collected each time the app closes. Any insight into how I might get around this issue would be greatly appreciated. I am open to the idea of loading the file in chunks, but I'm not quite sure what the best approach is for such a task. Thank you
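
    A way around holding both the 3 MB string and its parsed twin on the heap at once is to stream the file token by token. This sketch assumes android.util.JsonReader is available (API 11+; on older platforms a streaming parser such as Jackson plays the same role) and a hypothetical "name" field:

        import android.util.JsonReader;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.util.ArrayList;
        import java.util.List;

        public class StreamingJsonLoader {
            // Read a large JSON array item by item so the whole file never
            // has to exist as a single String or JSONArray in memory.
            public static List<String> readNames(String path) throws IOException {
                List<String> names = new ArrayList<String>();
                JsonReader reader = new JsonReader(
                        new InputStreamReader(new FileInputStream(path), "UTF-8"));
                try {
                    reader.beginArray();
                    while (reader.hasNext()) {
                        reader.beginObject();
                        while (reader.hasNext()) {
                            if ("name".equals(reader.nextName())) { // hypothetical field
                                names.add(reader.nextString());
                            } else {
                                reader.skipValue();
                            }
                        }
                        reader.endObject();
                    }
                    reader.endArray();
                } finally {
                    reader.close();
                }
                return names;
            }
        }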

  • Controlling the maximum number of Java standalone instances running in Linux

    - by Gnanam
    Hi, We've developed a Java standalone program. We've configured our Linux (RedHat ES 4) cron schedule to execute this Java standalone every 10 minutes. Each run may sometimes take more than 1 hour to complete, or may complete within 5 minutes. The problem/solution I'm looking for is that the number of Java standalones executing at any time should not exceed, for example, 5 instances. So, for example, before a new Java standalone starts, if there are already 5 instances running, then the new one should not be started; otherwise this would indirectly start creating OutOfMemoryError problems. How do I control this? I would also like to make this limit of 5 configurable. Other information: I've also configured -Xms and -Xmx heap size settings. Is there any tool/mechanism by which we can control this? I also heard about Java Service Wrapper. What is this all about?
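
    One self-contained way to enforce the limit without any external tool is to have each cron-launched JVM try to grab one of N lock files at startup; OS-level file locks are released automatically when the process exits, even on a crash. A sketch (the lock-file path and the max.instances property name are made up for illustration):

        import java.io.RandomAccessFile;
        import java.nio.channels.FileLock;

        public class SlotGate {
            // Keep the file reachable so the lock is held for the JVM's lifetime.
            private static RandomAccessFile slot;

            public static void main(String[] args) throws Exception {
                int maxSlots = Integer.getInteger("max.instances", 5); // -Dmax.instances=N
                FileLock lock = null;
                for (int i = 0; i < maxSlots && lock == null; i++) {
                    RandomAccessFile raf =
                            new RandomAccessFile("/tmp/standalone." + i + ".lock", "rw");
                    lock = raf.getChannel().tryLock(); // null if another process holds it
                    if (lock != null) {
                        slot = raf;
                    } else {
                        raf.close();
                    }
                }
                if (lock == null) {
                    System.err.println(maxSlots + " instances already running; exiting.");
                    return;
                }
                // ... run the actual job here; the slot frees itself on exit ...
            }
        }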

  • The speed of .NET in numerical computing

    - by Yin Zhu
    In my experience, .NET is 2 to 3 times slower than native code. (I implemented L-BFGS for multivariate optimization.) I traced the ads on Stack Overflow to http://www.centerspace.net/products/ and the speed is really amazing: close to native code. How can they do that? They say: Q. Is NMath "pure" .NET? A. The answer depends somewhat on your definition of "pure .NET". NMath is written in C#, plus a small Managed C++ layer. For better performance of basic linear algebra operations, however, NMath does rely on the native Intel Math Kernel Library (included with NMath). But there are no COM components, no DLLs--just .NET assemblies. Also, all memory allocated in the Managed C++ layer and used by native code is allocated from the managed heap. Can someone explain this to me in more detail? Thanks!

  • Possible memory leak problem

    - by MaiTiano
    I wrote the following C code; during a memcheck run with Valgrind, a lot of memory-leak information is reported.

        int GetMemory(int framewidth, int frameheight, int SR/*, int blocksize*//*,int ALL_REF_NUM*/)
        {
            //int i,j;
            int memory_size = 0;
            //int refnum = ALL_REF_NUM;
            int input_search_range = SR;

            memory_size += get_mem2D(&curFrameY, frameheight, framewidth);
            memory_size += get_mem2D(&curFrameU, frameheight>>1, framewidth>>1);
            memory_size += get_mem2D(&curFrameV, frameheight>>1, framewidth>>1);
            // allocate reference frame buffers according to the reference frame number
            memory_size += get_mem3D(&prevFrameY, refnum, frameheight, framewidth);
            memory_size += get_mem3D(&prevFrameU, refnum, frameheight>>1, framewidth>>1);
            memory_size += get_mem3D(&prevFrameV, refnum, frameheight>>1, framewidth>>1);
            memory_size += get_mem2D(&mpFrameY, frameheight, framewidth);
            memory_size += get_mem2D(&mpFrameU, frameheight>>1, framewidth>>1);
            memory_size += get_mem2D(&mpFrameV, frameheight>>1, framewidth>>1);
            // allocate the search window according to the search range
            memory_size += get_mem2D(&searchwindow, input_search_range*2 + blocksize, input_search_range*2 + blocksize);
            /*memory_size +=*/ get_mem1D(/*&SAD_cost, height, width*/);
            // if searchrange is 32, then only 32+1+32+15 pixels are needed in each row
            // and col, so a range of 80 is enough for the search window:
            // memory_size += get_mem2D(&searchwindow, 80, 80);
            memory_size += get_mem2Dint(&all_mv, height/blocksize, width/blocksize);

            return 0;
        }

        void FreeMemory(int refno)
        {
            free_mem2D(curFrameY);
            free_mem2D(curFrameU);
            free_mem2D(curFrameV);
            free_mem3D(prevFrameY, refno);
            free_mem3D(prevFrameU, refno);
            free_mem3D(prevFrameV, refno);
            free_mem2D(mpFrameY);
            free_mem2D(mpFrameU);
            free_mem2D(mpFrameV);
            free_mem2D(searchwindow);
            free_mem1D();
            free_mem2Dint(all_mv);
        }

        void free_mem1D()
        {
            free(SAD_cost);
        }

    Now I hope to find out where the possible problems in my program are. Here is some of the Valgrind output, showing the actual errors:

        ==29105==    by 0x804A906: main (me_search.c:1480)
        ==29105==
        ==29105==
        ==29105== HEAP SUMMARY:
        ==29105==     in use at exit: 124,088 bytes in 18 blocks
        ==29105==   total heap usage: 37 allocs, 21 frees, 749,276 bytes allocated
        ==29105==
        ==29105== 272 bytes in 1 blocks are still reachable in loss record 1 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x804885E: GetMemory (me_search.c:117)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 352 bytes in 1 blocks are still reachable in loss record 2 of 18
        ==29105==    at 0x4024F20: malloc (vg_replace_malloc.c:236)
        ==29105==    by 0x409537E: __fopen_internal (iofopen.c:76)
        ==29105==    by 0x409544B: fopen@@GLIBC_2.1 (iofopen.c:107)
        ==29105==    by 0x804A660: main (me_search.c:1439)
        ==29105==
        ==29105== 584 bytes in 1 blocks are still reachable in loss record 3 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x8048724: GetMemory (me_search.c:106)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 584 bytes in 1 blocks are still reachable in loss record 4 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x8048747: GetMemory (me_search.c:107)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 584 bytes in 1 blocks are still reachable in loss record 5 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x8048809: GetMemory (me_search.c:114)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 584 bytes in 1 blocks are still reachable in loss record 6 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x804882C: GetMemory (me_search.c:115)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 584 bytes in 1 blocks are definitely lost in loss record 7 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x804A4F8: get_mem3D (me_search.c:1393)
        ==29105==    by 0x804879B: GetMemory (me_search.c:110)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 584 bytes in 1 blocks are definitely lost in loss record 8 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x804A4F8: get_mem3D (me_search.c:1393)
        ==29105==    by 0x80487C9: GetMemory (me_search.c:111)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 1,168 bytes in 1 blocks are still reachable in loss record 9 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x8048701: GetMemory (me_search.c:105)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 1,168 bytes in 1 blocks are still reachable in loss record 10 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x80487E6: GetMemory (me_search.c:113)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 1,168 bytes in 1 blocks are definitely lost in loss record 11 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x804A4F8: get_mem3D (me_search.c:1393)
        ==29105==    by 0x804876D: GetMemory (me_search.c:109)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 6,336 bytes in 1 blocks are definitely lost in loss record 12 of 18
        ==29105==    at 0x4024F20: malloc (vg_replace_malloc.c:236)
        ==29105==    by 0x804A25C: get_mem1D (me_search.c:1295)
        ==29105==    by 0x8048866: GetMemory (me_search.c:119)
        ==29105==    by 0x804A757: main (me_search.c:1456)

  • OS X contains heapsort in stdlib.h which conflicts with heapsort in sort library

    - by CryptoQuick
    I'm using Ariel Faigon's sort library, found here: http://www.yendor.com/programming/sort/ I was able to get all my code working on Linux, but unfortunately, when trying to compile with GCC on Mac, its default stdlib.h contains another heapsort, which unfortunately results in a conflicting types error. Here's the man page for Apple heapsort: http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man3/heapsort.3.html Commenting out the heapsort in the sort library header causes a whole heap of problems. (pardon the pun) I also briefly thought of commenting out my use of stdlib.h, but I use malloc and realloc, so that won't work at all. Any ideas?

  • Problem with istringstream in C++

    - by helixed
    Hello, I'm sure I'm just doing something stupid here, but I can't quite figure out what it is. When I try to run this code:

        #include <iostream>
        #include <string>
        #include <sstream>

        using namespace std;

        int main(int argc, char *argv[]) {
            string s("hello");
            istringstream input(s, istringstream::in);
            string s2;
            input >> s2;
            cout << s;
        }

    I get this error:

        malloc: *** error for object 0x100016200: pointer being freed was not allocated
        *** set a breakpoint in malloc_error_break to debug

    The only thing I can think of is that I allocated s2 on the stack, but I thought strings manage their own content on the heap. Any help here would be appreciated. Thanks, helixed

    Read the article
