Search Results

Search found 960 results on 39 pages for 'heap'.

Page 12 of 39

  • Where are unmanaged resources allocated?

    - by Harsha
    Hello all, I am not a comp science guy. Managed resources are allocated on the heap. But I would like to know where unmanaged resources are allocated. If unmanaged resources are also allocated on the heap, is it the same heap used by managed resources or a different one? Thanks in advance. Harsha

    Read the article

  • How to generate a Java thread dump on an out of memory error

    - by Jigar
    Does Java 6 generate a thread dump in addition to the heap dump (java_pid14941.hprof)? This is what happened to one of my applications:

        java.lang.OutOfMemoryError: GC overhead limit exceeded
        Dumping heap to java_pid14941.hprof ...

    I did find java_pid14941.hprof in the working directory, but didn't find any file containing a thread dump. I need to know what all the threads were doing when I got this OutOfMemoryError. Is there any configuration option which will generate a thread dump in addition to the heap dump on an out of memory error?

    Read the article

  • How to show percentage of 'memory used' in a win32 process?

    - by pj4533
    I know that memory usage is a very complex issue on Windows. I am trying to write a UI control for a large application that shows a 'percentage of memory used' number, in order to give the user an indication that it may be time to clear up some memory, or more likely restart the application.

    One implementation used ullAvailVirtual from MEMORYSTATUSEX as a base, then used HeapWalk() to walk the process heap looking for additional free memory. The HeapWalk() step was needed because we noticed that, after a while of running, memory allocated and freed by the heap was never returned and reported in the ullAvailVirtual number. After hours of intensive work, the ullAvailVirtual number would no longer accurately report the amount of memory available. However, this method proved not ideal, due to occasional odd errors that HeapWalk() would return, even when the process heap was not corrupted. Further, since this is a UI control, the heap-walking code was executing every 5-10 seconds. I tried contacting Microsoft about why HeapWalk() was failing, and escalated a case via MSDN, but never got an answer other than "you probably shouldn't do that".

    So, as a second implementation, I used PagefileUsage from PROCESS_MEMORY_COUNTERS as a base. Then I used VirtualQueryEx to walk the virtual address space, adding up all regions that weren't MEM_FREE and that returned a value for GetMappedFileNameA(). My thinking was that PagefileUsage was essentially 'private bytes', so if I added to that value the total size of the DLLs my process was using, it would be a good approximation of the amount of memory my process was using. This second method seems to (sorta) work; at least it doesn't cause crashes like the heap-walker method. However, when both methods are enabled, the values are not the same, so one of the methods is wrong.

    So, StackOverflow world... how would you implement this? Which method is more promising, or do you have a third, better method? Should I go back to the original method and further debug the odd errors? Should I stay away from walking the heap every 5-10 seconds? Keep in mind the whole point is to indicate to the user that it is getting 'dangerous', and they should either free up memory or restart the application. Perhaps a 'percentage used' isn't the best solution to this problem? What is? Another idea I had was a color-based system (red, yellow, green), which I could base on more factors than just a single number.
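
    For a rough single number that avoids walking the heap, one option is to combine the process counters from GetProcessMemoryInfo() with the per-process virtual address space figures that GlobalMemoryStatusEx() reports. The sketch below is only an illustration of that idea, not a replacement for either method described above; the percentage it prints is the share of the process's virtual address space that is reserved or committed, which on 32-bit builds is often the number that goes critical first.

        #include <windows.h>
        #include <psapi.h>   // GetProcessMemoryInfo; link with psapi.lib
        #include <cstdio>

        int main() {
            // Private commit charge (roughly "private bytes") and working set for this process.
            PROCESS_MEMORY_COUNTERS pmc;
            pmc.cb = sizeof(pmc);
            if (!GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
                return 1;

            // Virtual address space totals as seen by the calling process.
            MEMORYSTATUSEX ms;
            ms.dwLength = sizeof(ms);
            if (!GlobalMemoryStatusEx(&ms))
                return 1;

            DWORDLONG usedVa = ms.ullTotalVirtual - ms.ullAvailVirtual;
            double pctAddressSpace = 100.0 * (double)usedVa / (double)ms.ullTotalVirtual;

            std::printf("PagefileUsage (private commit): %llu bytes\n",
                        (unsigned long long)pmc.PagefileUsage);
            std::printf("Working set:                    %llu bytes\n",
                        (unsigned long long)pmc.WorkingSetSize);
            std::printf("Virtual address space used:     %.1f%%\n", pctAddressSpace);
            return 0;
        }

    Neither counter sees free space trapped inside the CRT or Win32 heaps, which is what the HeapWalk() approach was trying to capture, so this is best treated as a cheap baseline to sample every few seconds, with more expensive analysis reserved for when the baseline crosses a threshold.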

    Read the article

  • InputDispatcher Error

    - by StarDust
    INFO/ActivityManager(68): Process com.example (pid 390) has died. ERROR/InputDispatcher(68): channel '406ed580 com.example/com.example.afeTest (server)' ~ Consumer closed input channel or an error occurred. events=0x8 ERROR/InputDispatcher(68): channel '406ed580 com.example/com.example.afeTest (server)' ~ Channel is unrecoverably broken and will be disposed! ERROR/InputDispatcher(68): Received spurious receive callback for unknown input channel. fd=165, events=0x8 Can anyone tell what may be the reason behind this error? I've ported a native code on the Android-ndk. One thing I noticed regarding fd (that may be some reason :S) My code uses fd_sets which was defined in winsock2.h But I didn't find fd_sets defined in android-ndk. So I had included "select.h" where fd_set is a typedef in the android-ndk: typedef __kernel_fd_set fd_set; Here is the log cat: 04-06 11:15:32.405: INFO/DEBUG(31): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 04-06 11:15:32.405: INFO/DEBUG(31): Build fingerprint: 'generic/sdk/generic:2.3.3/GRI34/101070:eng/test-keys' 04-06 11:15:32.415: INFO/DEBUG(31): pid: 335, tid: 348 >>> com.example <<< 04-06 11:15:32.426: INFO/DEBUG(31): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr deadbaad 04-06 11:15:32.426: INFO/DEBUG(31): r0 deadbaad r1 0000000c r2 00000027 r3 00000000 04-06 11:15:32.445: INFO/DEBUG(31): r4 00000080 r5 afd46668 r6 0000a000 r7 00000078 04-06 11:15:32.445: INFO/DEBUG(31): r8 804ab00d r9 002a9778 10 00100000 fp 00000001 04-06 11:15:32.445: INFO/DEBUG(31): ip ffffffff sp 44295d10 lr afd19375 pc afd15ef0 cpsr 00000030 04-06 11:15:32.756: INFO/DEBUG(31): #00 pc 00015ef0 /system/lib/libc.so 04-06 11:15:32.756: INFO/DEBUG(31): #01 pc 00013852 /system/lib/libc.so 04-06 11:15:32.767: INFO/DEBUG(31): code around pc: 04-06 11:15:32.785: INFO/DEBUG(31): afd15ed0 68241c23 d1fb2c00 68dae027 d0042a00 04-06 11:15:32.785: INFO/DEBUG(31): afd15ee0 20014d18 6028447d 48174790 24802227 04-06 11:15:32.785: INFO/DEBUG(31): afd15ef0 f7f57002 2106eb56 ec92f7f6 0563aa01 04-06 11:15:32.796: INFO/DEBUG(31): afd15f00 60932100 91016051 1c112006 e818f7f6 04-06 11:15:32.807: INFO/DEBUG(31): afd15f10 2200a905 f7f62002 f7f5e824 2106eb42 04-06 11:15:32.815: INFO/DEBUG(31): code around lr: 04-06 11:15:32.815: INFO/DEBUG(31): afd19354 b0834a0d 589c447b 26009001 686768a5 04-06 11:15:32.825: INFO/DEBUG(31): afd19364 220ce008 2b005eab 1c28d003 47889901 04-06 11:15:32.836: INFO/DEBUG(31): afd19374 35544306 d5f43f01 2c006824 b003d1ee 04-06 11:15:32.836: INFO/DEBUG(31): afd19384 bdf01c30 000281a8 ffffff88 1c0fb5f0 04-06 11:15:32.846: INFO/DEBUG(31): afd19394 43551c3d a904b087 1c16ac01 604d9004 04-06 11:15:32.856: INFO/DEBUG(31): stack: 04-06 11:15:32.856: INFO/DEBUG(31): 44295cd0 00000408 04-06 11:15:32.867: INFO/DEBUG(31): 44295cd4 afd18407 /system/lib/libc.so 04-06 11:15:32.875: INFO/DEBUG(31): 44295cd8 afd4270c /system/lib/libc.so 04-06 11:15:32.875: INFO/DEBUG(31): 44295cdc afd426b8 /system/lib/libc.so 04-06 11:15:32.885: INFO/DEBUG(31): 44295ce0 00000000 04-06 11:15:32.896: INFO/DEBUG(31): 44295ce4 afd19375 /system/lib/libc.so 04-06 11:15:32.896: INFO/DEBUG(31): 44295ce8 804ab00d /data/data/com.example/lib/libAFE.so 04-06 11:15:32.896: INFO/DEBUG(31): 44295cec afd183d9 /system/lib/libc.so 04-06 11:15:32.906: INFO/DEBUG(31): 44295cf0 00000078 04-06 11:15:32.906: INFO/DEBUG(31): 44295cf4 00000000 04-06 11:15:32.906: INFO/DEBUG(31): 44295cf8 afd46668 04-06 11:15:32.906: INFO/DEBUG(31): 44295cfc 0000a000 [heap] 04-06 11:15:32.916: INFO/DEBUG(31): 44295d00 00000078 
04-06 11:15:32.927: INFO/DEBUG(31): 44295d04 afd18677 /system/lib/libc.so 04-06 11:15:32.927: INFO/DEBUG(31): 44295d08 df002777 04-06 11:15:32.945: INFO/DEBUG(31): 44295d0c e3a070ad 04-06 11:15:32.945: INFO/DEBUG(31): #00 44295d10 002c43a0 [heap] 04-06 11:15:32.945: INFO/DEBUG(31): 44295d14 002a9900 [heap] 04-06 11:15:32.956: INFO/DEBUG(31): 44295d18 afd46608 04-06 11:15:32.966: INFO/DEBUG(31): 44295d1c afd11010 /system/lib/libc.so 04-06 11:15:32.976: INFO/DEBUG(31): 44295d20 002c4298 [heap] 04-06 11:15:32.976: INFO/DEBUG(31): 44295d24 fffffbdf 04-06 11:15:33.006: INFO/DEBUG(31): 44295d28 000000da 04-06 11:15:33.006: INFO/DEBUG(31): 44295d2c afd46450 04-06 11:15:33.006: INFO/DEBUG(31): 44295d30 000001b4 04-06 11:15:33.026: INFO/DEBUG(31): 44295d34 afd13857 /system/lib/libc.so 04-06 11:15:33.026: INFO/DEBUG(31): #01 44295d38 afd46450 04-06 11:15:33.035: INFO/DEBUG(31): 44295d3c afd13857 /system/lib/libc.so 04-06 11:15:33.056: INFO/DEBUG(31): 44295d40 804ab00d /data/data/com.example/lib/libAFE.so 04-06 11:15:33.056: INFO/DEBUG(31): 44295d44 44295e8c 04-06 11:15:33.056: INFO/DEBUG(31): 44295d48 804ab00d /data/data/com.example/lib/libAFE.so 04-06 11:15:33.056: INFO/DEBUG(31): 44295d4c 804bfec3 /data/data/com.example/lib/libAFE.so 04-06 11:15:33.056: INFO/DEBUG(31): 44295d50 002c43a0 [heap] 04-06 11:15:33.066: INFO/DEBUG(31): 44295d54 44295e8c 04-06 11:15:33.066: INFO/DEBUG(31): 44295d58 804ab00d /data/data/com.example/lib/libAFE.so 04-06 11:15:33.076: INFO/DEBUG(31): 44295d5c 002a9778 [heap] 04-06 11:15:33.085: INFO/DEBUG(31): 44295d60 00000078 04-06 11:15:33.085: INFO/DEBUG(31): 44295d64 afd14769 /system/lib/libc.so 04-06 11:15:33.085: INFO/DEBUG(31): 44295d68 44295e8c 04-06 11:15:33.085: INFO/DEBUG(31): 44295d6c 805d9763 /data/data/com.example/lib/libAFE.so 04-06 11:15:33.085: INFO/DEBUG(31): 44295d70 44295e8c 04-06 11:15:33.085: INFO/DEBUG(31): 44295d74 8051dc35 /data/data/com.example/lib/libAFE.so 04-06 11:15:33.085: INFO/DEBUG(31): 44295d78 0000003a 04-06 11:15:33.085: INFO/DEBUG(31): 44295d7c 002a9900 [heap] 04-06 11:15:37.126: DEBUG/Zygote(33): Process 335 terminated by signal (11) 04-06 11:15:37.146: INFO/ActivityManager(68): Process com.example (pid 335) has died. 04-06 11:15:37.178: ERROR/InputDispatcher(68): channel '406f03a0 com.example/com.example.afeTest (server)' ~ Consumer closed input channel or an error occurred. events=0x8 04-06 11:15:37.178: ERROR/InputDispatcher(68): channel '406f03a0 com.example/com.example.afeTest (server)' ~ Channel is unrecoverably broken and will be disposed! 
04-06 11:15:37.185: INFO/BootReceiver(68): Copying /data/tombstones/tombstone_09 to DropBox (SYSTEM_TOMBSTONE) 04-06 11:15:37.576: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 266K, 47% free 4404K/8199K, external 3520K/3903K, paused 306ms 04-06 11:15:37.835: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 203K, 47% free 4457K/8391K, external 3520K/3903K, paused 120ms 04-06 11:15:37.886: INFO/WindowManager(68): WIN DEATH: Window{406f03a0 com.example/com.example.afeTest paused=false} 04-06 11:15:38.095: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 67K, 47% free 4518K/8391K, external 3511K/3903K, paused 94ms 04-06 11:15:38.095: INFO/dalvikvm-heap(68): Grow heap (frag case) to 10.575MB for 196628-byte allocation 04-06 11:15:38.126: DEBUG/dalvikvm(126): GC_EXPLICIT freed 110K, 51% free 2903K/5895K, external 4701K/5293K, paused 2443ms 04-06 11:15:38.217: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 1K, 46% free 4708K/8647K, external 3511K/3903K, paused 96ms 04-06 11:15:38.225: INFO/WindowManager(68): WIN DEATH: Window{406f72f8 com.example/com.example.afeTest paused=false} 04-06 11:15:38.405: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 492K, 50% free 4345K/8647K, external 3511K/3903K, paused 96ms 04-06 11:15:38.485: ERROR/InputDispatcher(68): Received spurious receive callback for unknown input channel. fd=164, events=0x8

    Read the article

  • bad_alloc exception when using new for a struct in C++

    - by bsg
    Hi, I am writing a query processor which allocates large amounts of memory and tries to find matching documents. Whenever I find a match, I create a structure to hold two variables describing the document and add it to a priority queue. Since there is no way of knowing how many times I will do this, I tried creating my structs dynamically using new. When I pop a struct off the priority queue, the queue (STL priority queue implementation) is supposed to call the object's destructor. My struct code has no destructor, so I assume a default destructor is called in that case.

    However, the very first time that I try to create a DOC struct, I get the following error:

        Unhandled exception at 0x7c812afb in QueryProcessor.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012f5dc..

    I don't understand what's happening - have I used up so much memory that the heap is full? It doesn't seem likely. And it's not as if I've even used that pointer before. So: first of all, what am I doing that's causing the error, and secondly, will the following code work more than once? Do I need to have a separate pointer for each struct created, or can I re-use the same temporary pointer and assume that the queue will keep a pointer to each struct? Here is my code:

        struct DOC{
            int docid;
            double rank;
        public:
            DOC() { docid = 0; rank = 0.0; }
            DOC(int num, double ranking) { docid = num; rank = ranking; }
            bool operator>( const DOC & d ) const { return rank > d.rank; }
            bool operator<( const DOC & d ) const { return rank < d.rank; }
        };

        //a lot of processing goes on here; when a matching document is found, I do this:
        rank = calculateRanking(table, num);

        //if the heap is not full, create a DOC struct with the docid and rank and add it to the heap
        if(q.size() < 20) {
            doc = new DOC(num, rank);
            q.push(*doc);
            doc = NULL;
        }
        //if the heap is full, but the new rank is greater than the
        //smallest element in the min heap, remove the current smallest element
        //and add the new one to the heap
        else if(rank > q.top().rank) {
            q.pop();
            cout << "pushing doc on to queue" << endl;
            doc = new DOC(num, rank);
            q.push(*doc);
        }

    Thank you very much, bsg.
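
    One detail worth noting in the code above: std::priority_queue stores its elements by value, so q.push(*doc) copies the newly allocated DOC into the queue and the pointer is then abandoned without a delete, leaking one DOC per push. Because the queue keeps its own copies, the structs do not need to come from new at all. Below is a minimal sketch of the same top-20 logic using values only; the names DocHeap and addMatch are just placeholders, and the min-heap is built with std::greater so that q.top() is the lowest-ranked of the kept documents:

        #include <functional>
        #include <queue>
        #include <vector>

        struct DOC {
            int docid;
            double rank;
            DOC(int num, double ranking) : docid(num), rank(ranking) {}
            bool operator>(const DOC& d) const { return rank > d.rank; }
            bool operator<(const DOC& d) const { return rank < d.rank; }
        };

        // Min-heap holding the 20 highest-ranked documents seen so far.
        typedef std::priority_queue<DOC, std::vector<DOC>, std::greater<DOC> > DocHeap;

        void addMatch(DocHeap& q, int num, double rank) {
            if (q.size() < 20) {
                q.push(DOC(num, rank));           // copied into the queue; nothing to delete
            } else if (rank > q.top().rank) {     // better than the worst of the current top 20
                q.pop();
                q.push(DOC(num, rank));
            }
        }

    If the structs really did have to live on the heap, every new would need a matching delete once the element is popped; re-using one temporary pointer is fine as far as the queue is concerned, because the queue only ever sees the copy, never the pointer.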

    Read the article

  • Extern and static variable storage

    - by Riyaz
    When is memory created for extern and static variables? Is it on the stack or the heap? Since their lifetime lasts until the program ends, they can't be on the stack, so they must be on the heap. But the size of the heap is known only at run time, so I'm somewhat confused here...
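
    On typical C and C++ implementations the answer is neither: variables with static storage duration (globals, extern variables and function-level statics) are placed in the executable's data or BSS segment, which the loader reserves when the program starts, separately from both the stack and the malloc heap. A quick, platform-dependent way to see the different regions is to print one address of each kind (a sketch only; the exact layout depends on the OS, the compiler and address space randomization):

        #include <cstdio>
        #include <cstdlib>

        int global_var = 42;               // external linkage, static storage duration (data segment)

        int main() {
            static int static_var = 7;     // static storage duration, also in the data segment
            int stack_var = 0;             // automatic storage duration, lives on the stack
            int* heap_var = (int*)std::malloc(sizeof(int));  // dynamic storage, lives on the heap

            std::printf("global: %p\n", (void*)&global_var);
            std::printf("static: %p\n", (void*)&static_var);
            std::printf("stack : %p\n", (void*)&stack_var);
            std::printf("heap  : %p\n", (void*)heap_var);

            std::free(heap_var);
            return 0;
        }

    Because the sizes of all static-duration objects are known when the executable is built, that segment can be laid out before the program runs, which resolves the apparent contradiction in the question.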

    Read the article

  • Freeing of allocated memory in Solaris/Linux

    - by user355159
    Hi, I have written a small program and compiled it on the Solaris and Linux platforms to measure the performance of applying this code to my application. The program works like this: initially, using the sbrk(0) system call, I take the base address of the heap region. After that I allocate 1.5GB of memory using malloc, then I use memcpy to copy 1.5GB of content into the allocated memory area. Then I free the allocated memory. After freeing, I use sbrk(0) again to view the heap size. This is where I am a little confused. On Solaris, even though I freed the allocated memory (nearly 1.5GB), the heap size of the process is still huge. But when I run the same application on Linux, after freeing I find that the heap size of the process is equal to the heap size before the 1.5GB allocation. I know Solaris does not free memory immediately, but I don't know how to tune the Solaris kernel to free the memory immediately after free(). Also, please explain why the same problem does not occur on Linux. Can anyone help me out with this? Thanks, Santhosh.
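
    The difference is usually in the C library allocator rather than in the kernel: glibc's malloc services a request this large with a private mmap and unmaps it in free(), so the brk-based heap never grows in the first place, while an sbrk-based allocator such as the Solaris libc one tends to keep the space cached for reuse instead of shrinking the segment. The following sketch just reproduces the measurement being described, assuming a POSIX system where sbrk() is available, and touches the memory so the pages are really committed:

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>
        #include <unistd.h>

        int main() {
            void* brkStart = sbrk(0);                    // current program break
            const size_t size = 1500UL * 1024 * 1024;    // roughly 1.5GB

            char* block = (char*)std::malloc(size);
            if (block == NULL) {
                std::fprintf(stderr, "malloc failed\n");
                return 1;
            }
            std::memset(block, 0xAB, size);              // touch the pages so they are committed
            void* brkAfterAlloc = sbrk(0);

            std::free(block);
            void* brkAfterFree = sbrk(0);

            std::printf("brk at start     : %p\n", brkStart);
            std::printf("brk after malloc : %p\n", brkAfterAlloc);  // unchanged if malloc used mmap
            std::printf("brk after free   : %p\n", brkAfterFree);   // may or may not shrink back
            return 0;
        }

    On glibc, mallopt() and malloc_trim() give some control over when freed space is returned to the OS; on Solaris the practical options are to accept the caching or to link against an alternative allocator (libumem or mtmalloc, for example) rather than to tune the kernel.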

    Read the article

  • System.Windows.Forms.WebBrowser : Force X86?

    - by heap
    This object always uses the default on the system, so on an x64 machine, it will use an x64 Internet Explorer object. Is there any way I can force it to use the x86 IE? The web page element the browser accesses does not work on x64 and is out of my control.

    Read the article

  • Essbase 11.1.2 - JVM_OPTION settings for Essbase

    - by sujata
    When tuning the heap size for Essbase, there are two JVM_OPTION settings available - one for the Essbase agent and one for the Essbase applications that use custom-defined functions (CDFs), custom-defined macros (CDMs), data mining, triggers or external authentication. The ESS_JVM_OPTION setting is used for the applications, mainly for CDFs, CDMs, data mining, triggers and external authentication. The ESS_CSS_JVM_OPTION setting is used to set the heap size for the Essbase agent.

    Read the article

  • Find your HEAPS

    - by NeilHambly
    I will not go into a full discussion as to why you would want to convert a HEAP into a clustered table, as there are plenty of resources out there that describe those elements and the relevant pros and cons. However, you may just want to understand which database tables are of the HEAP variety and how many of them, percentage-wise, exist in each of your databases. So here is a useful script I have (it uses sp_MSforeachdb to iterate through all DBs on an instance) that easily...(read more)

    Read the article

  • Understanding Oracle ASMM

    - by Liu Maclean
    Oracle has been moving toward automatic management of the SGA and PGA for some time, and 10g introduced Automatic Shared Memory Management (ASMM) for the SGA (11g later extended the idea with AMM, which covers the PGA as well). Many DBAs are still wary of it: the early 10.1 and 10.2 releases had their share of ASMM bugs, and handing the sizing of the SGA components over to the database requires a certain amount of trust. Whether to enable it is ultimately the DBA's decision, and that decision is easier to make once you understand what ASMM actually does, so this post walks through its pieces and shows what happens under the covers.

    Since 9i the SGA has been made up of the following components:

        Buffer Cache
            Default pool            - sized by DB_CACHE_SIZE
            Keep pool               - sized by DB_KEEP_CACHE_SIZE
            Non-standard pools      - sized by DB_nK_CACHE_SIZE
            Recycle pool            - sized by DB_RECYCLE_CACHE_SIZE
        Shared Pool                 - sized by SHARED_POOL_SIZE
            Library cache
            Row cache (dictionary cache)
        Java Pool                   - sized by JAVA_POOL_SIZE
        Large Pool                  - sized by LARGE_POOL_SIZE
        Fixed SGA                   - sized by Oracle itself, not taken from the granules

    In 9i all of these were sized manually (MSMM). 9i made the components dynamically resizable and introduced the first advisors, but the DBA still had to decide how much memory each component should get, usually by trial and error. The drawbacks of manual management are familiar: memory sits idle in over-sized pools while under-sized pools struggle, the workload changes over time, and an under-sized shared pool sooner or later throws ORA-04031. ASMM addresses exactly this: the components are resized automatically as the workload shifts, a single SGA_TARGET parameter drives the whole SGA, and memory can flow to where it is needed without an outage.

    ASMM consists of three pieces: the SGA_TARGET parameter; the machinery that moves memory around (the memory components, the Memory Broker and the transfer mechanism); and the memory advisors. The automatically sized components are SHARED_POOL_SIZE, DB_CACHE_SIZE, JAVA_POOL_SIZE, LARGE_POOL_SIZE and STREAMS_POOL_SIZE; if you set any of these explicitly while ASMM is active, the value acts as a minimum. The manually sized components - DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, DB_nK_CACHE_SIZE and LOG_BUFFER, plus the fixed SGA - are carved out of SGA_TARGET first, and whatever remains is distributed among the automatic components. Memory moves between components in units of granules (4MB when the SGA is smaller than 1GB, 16MB when it is larger). The statistics behind the resize decisions are gathered by MMON (Memory Monitor, the background process that takes memory statistics snapshots into AWR and raises alerts for metrics that exceed their thresholds), which is why ASMM requires STATISTICS_LEVEL to be TYPICAL or ALL:

        SQL> show parameter sga

        NAME                                 TYPE        VALUE
        ------------------------------------ ----------- ------------------------------
        lock_sga                             boolean     FALSE
        pre_page_sga                         boolean     FALSE
        sga_max_size                         big integer 2000M
        sga_target                           big integer 2000M

        SQL> show parameter sga_target

        NAME                                 TYPE        VALUE
        ------------------------------------ ----------- ------------------------------
        sga_target                           big integer 2000M

        SQL> alter system set statistics_level=BASIC;
        alter system set statistics_level=BASIC
        *
        ERROR at line 1:
        ORA-02097: parameter cannot be modified because specified value is invalid
        ORA-00830: cannot set statistics_level to BASIC with auto-tune SGA enabled

    When a server parameter file (spfile) is used, the sizes ASMM has arrived at are written back to the spfile as double-underscore parameters, so after a shutdown the instance restarts from the last known distribution instead of starting from scratch. Running strings against the spfile shows entries such as:

        G10R2.__db_cache_size=973078528
        G10R2.__java_pool_size=16777216
        G10R2.__large_pool_size=16777216
        G10R2.__shared_pool_size=1006632960
        G10R2.__streams_pool_size=67108864

    From ASMM's point of view, SGA memory falls into three classes:

        Tunable memory    - memory that can be released and re-acquired at acceptable cost. The buffer cache is the main example: cached blocks can be aged out and re-read from disk later, so apart from pinned buffers and metadata most of it can be given away. Parts of the shared pool are also tunable, such as library cache subheaps that are not in use and cursors that clients do not currently have open.
        Un-tunable memory - memory whose size is dictated by what the application asked for, such as most of the large pool; it cannot be shrunk without breaking whoever is using it.
        Fixed-size memory - the fixed SGA and other structures whose size is set at startup and never changes.

    Resize requests come in three flavours:

        Immediate requests - issued by a foreground process that is about to run out of memory in a pool (the ORA-04031 path). ASMM tries to satisfy the request by transferring a granule from a donor component, if necessary a granule that is not completely free; if nothing can be transferred, the ORA-04031 is raised after all.
        Deferred requests  - issued in the background on the basis of the statistics and deltas MMON collects; these move granules from components that lose little by shrinking to components that gain more by growing.
        Manual requests    - an explicit ALTER SYSTEM issued by the DBA. These use only completely free granules; a grow can fail with ORA-4033 and a shrink with ORA-4034.

    The policy side lives in the Memory Broker: MMON evaluates the statistics of the auto-tuned components, works out the long-term trend, and places resize requests on a system queue; MMAN (Memory Manager, the background process that performs the dynamic resizing of the SGA areas as the workload grows or shrinks) takes the requests off the queue and carries them out.

    The granule mechanics changed between releases. In 10gR1 shrinking the shared pool was problematic: a buffer cache granule consists of the granule header plus metadata (buffer headers and, in RAC, lock elements) plus the buffers themselves, and shared pool durations and chunks had to fit entirely within their own granules, so a granule could only change hands once everything in it had been freed. In 10gR2 the granule layout still has the header and the metadata, but a granule no longer has to be handed over as an all-or-nothing unit: the buffer cache and the shared pool can share a granule, which is what the partial-granule transfers below show (10gR2 also brought the streams pool into the automatically managed set, with its own duration).

    When a granule is transferred from the buffer cache to the shared pool, the steps are roughly:

        MMAN picks a candidate granule in the default buffer cache.
        MMAN moves the granule from the in-use list to the quiesce list ("Moving 1 granule from inuse to quiesce list of DEFAULT buffer cache for an immediate req").
        DBWR writes out any dirty buffers still sitting in the quiesced granule.
        MMAN invokes the shared pool's consume callback and the free chunks of the granule are consumed by the shared pool. If some buffers cannot be freed (pinned buffers, buffer headers, lock elements), the granule becomes a shared ("partial") granule instead of moving across completely.
        The granule is linked onto the shared pool's (partial) in-use list.

    Three hidden parameters are worth knowing about:

        _enabled_shared_pool_duration - controls whether the 10g shared pool duration mechanism is used; it defaults to FALSE when SGA_TARGET is 0. In 10.2.0.5 it is also forced to FALSE when CURSOR_SPACE_FOR_TIME is TRUE, and CURSOR_SPACE_FOR_TIME is deprecated after 10.2.0.5.
        _memory_broker_shrink_heaps   - when 0, Oracle will not shrink the shared pool or the java pool; when greater than 0, it is the number of seconds to wait between shrink requests.
        _memory_management_tracing    - enables tracing of the MMON/MMAN activity, the advisors and the Memory Broker; level 36 is the usual setting when diagnosing ORA-04031, level 8 covers startup, level 23 covers the Memory Broker decisions, and level 32 covers cache resize operations. The levels are cumulative:

        Level   Contents
        0x01    Enables statistics tracing
        0x02    Enables policy tracing
        0x04    Enables transfer of granules tracing
        0x08    Enables startup tracing
        0x10    Enables tuning tracing
        0x20    Enables cache tracing

    Setting the parameter to 63 turns all of the above on; the output appears in the mman trace and the transfer-ops dumps. The traces below come from a session with tracing at 63 while running a workload that forces the shared pool to grow at the expense of the buffer cache.

        SQL> alter system set "_memory_management_tracing"=63;
        System altered

        Operation make shared pool grow and buffer cache shrink!!!..............

    First, a whole-granule transfer driven by an immediate foreground request (comments added after the trace lines):

        AUTO SGA: Request 0xdc9c2628 after pre-processing, ret=0                        -- 0xdc9c2628 is the address of the resize request
        AUTO SGA: IMMEDIATE, FG request 0xdc9c2628                                      -- an immediate request from a foreground process
        AUTO SGA: Receiver of memory is shared pool, size=16, state=3, flg=0            -- the grower is the shared pool, currently 16 granules
        AUTO SGA: Donor of memory is DEFAULT buffer cache, size=106, state=4, flg=0     -- the shrinker is the default buffer cache, 106 granules
        AUTO SGA: Memory requested=3896, remaining=3896                                 -- the immediate request is for 3896 bytes
        AUTO SGA: Memory received=0, minreq=3896, gransz=16777216                       -- nothing received yet; the granule size is 16MB
        AUTO SGA: Request 0xdc9c2628 status is INACTIVE
        AUTO SGA: Init bef rsz for request 0xdc9c2628
        AUTO SGA: Set rq dc9c2628 status to PENDING
        AUTO SGA: 0xca000000 rem=3896, rcvd=16777216, 105, 16777216, 17                 -- the 16MB granule at 0xca000000 is handed over
        AUTO SGA: Returning 4 from kmgs_process for request dc9c2628
        AUTO SGA: Process req dc9c2628 ret 4, 1, a
        AUTO SGA: Resize done for pool DEFAULT, 8192                                    -- the default pool has been resized
        AUTO SGA: Init aft rsz for request 0xdc9c2628
        AUTO SGA: Request 0xdc9c2628 after processing
        AUTO SGA: IMMEDIATE, FG request 0x7fff917964a0
        AUTO SGA: Receiver of memory is shared pool, size=17, state=0, flg=0
        AUTO SGA: Donor of memory is DEFAULT buffer cache, size=105, state=0, flg=0
        AUTO SGA: Memory requested=3896, remaining=0
        AUTO SGA: Memory received=16777216, minreq=3896, gransz=16777216
        AUTO SGA: Request 0x7fff917964a0 status is COMPLETE                             -- the shared pool has gained a full 16MB granule
        AUTO SGA: activated granule 0xca000000 of shared pool

    Next, a partial-granule transfer, where the buffer cache cannot free the whole granule:

        AUTO SGA: Request 0xdc9c2628 after pre-processing, ret=0
        AUTO SGA: IMMEDIATE, FG request 0xdc9c2628
        AUTO SGA: Receiver of memory is shared pool, size=82, state=3, flg=1
        AUTO SGA: Donor of memory is DEFAULT buffer cache, size=36, state=4, flg=1
        AUTO SGA: Memory requested=4120, remaining=4120
        AUTO SGA: Memory received=0, minreq=4120, gransz=16777216
        AUTO SGA: Request 0xdc9c2628 status is INACTIVE
        AUTO SGA: Init bef rsz for request 0xdc9c2628
        AUTO SGA: Set rq dc9c2628 status to PENDING
        AUTO SGA: Moving granule 0x93000000 of DEFAULT buffer cache to activate list
        AUTO SGA: Moving 1 granule 0x8c000000 from inuse to quiesce list of DEFAULT buffer cache for an immediate req
        AUTO SGA: Returning 0 from kmgs_process for request dc9c2628
        AUTO SGA: Process req dc9c2628 ret 0, 1, 20a
        AUTO SGA: activated granule 0x93000000 of DEFAULT buffer cache
        AUTO SGA: NOT_FREE for imm req for gran 0x8c000000                              -- repeated while DBWR writes out the dirty buffers in 0x8c000000
        AUTO SGA: Returning 0 from kmgs_process for request dc9c2628
        AUTO SGA: Process req dc9c2628 ret 0, 1, 20a
        AUTO SGA: NOT_FREE for imm req for gran 0x8c000000
        ...
        AUTO SGA: Rcv shared pool consuming 8192 from 0x8c000000 in granule 0x8c000000; owner is DEFAULT buffer cache
        AUTO SGA: Rcv shared pool consuming 90112 from 0x8c002000 in granule 0x8c000000; owner is DEFAULT buffer cache
        AUTO SGA: Rcv shared pool consuming 24576 from 0x8c01a000 in granule 0x8c000000; owner is DEFAULT buffer cache
        AUTO SGA: Rcv shared pool consuming 65536 from 0x8c022000 in granule 0x8c000000; owner is DEFAULT buffer cache
        AUTO SGA: Rcv shared pool consuming 131072 from 0x8c034000 in granule 0x8c000000; owner is DEFAULT buffer cache
        AUTO SGA: Rcv shared pool consuming 286720 from 0x8c056000 in granule 0x8c000000; owner is DEFAULT buffer cache
        AUTO SGA: Rcv shared pool consuming 98304 from 0x8c09e000 in granule 0x8c000000; owner is DEFAULT buffer cache
        AUTO SGA: Rcv shared pool consuming 106496 from 0x8c0b8000 in granule 0x8c000000; owner is DEFAULT buffer cache
        ...                                                                             -- the shared pool consumes the free chunks of 0x8c000000; the granule's owner is still the buffer cache
        AUTO SGA: Imm xfer 0x8c000000 from quiesce list of DEFAULT buffer cache to partial inuse list of shared pool
        AUTO SGA: Returning 4 from kmgs_process for request dc9c2628
        AUTO SGA: Process req dc9c2628 ret 4, 1, 20a
        AUTO SGA: Init aft rsz for request 0xdc9c2628
        AUTO SGA: Request 0xdc9c2628 after processing
        AUTO SGA: IMMEDIATE, FG request 0x7fffe9bcd0e0
        AUTO SGA: Receiver of memory is shared pool, size=83, state=0, flg=1
        AUTO SGA: Donor of memory is DEFAULT buffer cache, size=35, state=0, flg=1
        AUTO SGA: Memory requested=4120, remaining=0
        AUTO SGA: Memory received=14934016, minreq=4120, gransz=16777216
        AUTO SGA: Request 0x7fffe9bcd0e0 status is COMPLETE                             -- only about 14.2MB of the 16MB granule was transferred: a partial granule

    The partial granule 0x8c000000 can then be examined with the DUMP_TRANSFER_OPS dump:

        SQL> oradebug setmypid;
        Statement processed.
        SQL> oradebug dump DUMP_TRANSFER_OPS 1;
        Statement processed.
        SQL> oradebug tracefile_name;
        /s01/admin/G10R2/udump/g10r2_ora_21482.trc

        =======================trace content==============================
        GRANULE SIZE is 16777216
        COMPONENT NAME : shared pool
        Number of granules in partially inuse list (listid 4) is 23
        Granule addr is 0x8c000000
        Granule owner is DEFAULT buffer cache

        Granule 0x8c000000 dump from owner perspective
        gptr = 0x8c000000, num buf hdrs = 1989, num buffers = 156, ghdr = 0x8cffe000

        BH:0x8cf76018 BA:(nil) st:11 flg:20000
        BH:0x8cf76128 BA:(nil) st:11 flg:20000
        BH:0x8cf76238 BA:(nil) st:11 flg:20000
        ...
        BH:0x8cf76cd8 BA:0x8c018000 st:1 flg:622202
        ...
        Address 0x8cf30000 to 0x8cf74000 not in cache
        Address 0x8cf74000 to 0x8d000000 in cache

        Granule 0x8c000000 dump from receivers perspective
        Dumping layout
        Address 0x8c000000 to 0x8c018000 in sga heap(1,3) (idx=1, dur=4)
        Address 0x8c018000 to 0x8c01a000 not in this pool
        Address 0x8c01a000 to 0x8c020000 in sga heap(1,3) (idx=1, dur=4)
        Address 0x8c020000 to 0x8c022000 not in this pool
        Address 0x8c022000 to 0x8c032000 in sga heap(1,3) (idx=1, dur=4)
        Address 0x8c032000 to 0x8c034000 not in this pool
        Address 0x8c034000 to 0x8c054000 in sga heap(1,3) (idx=1, dur=4)
        Address 0x8c054000 to 0x8c056000 not in this pool
        Address 0x8c056000 to 0x8c09c000 in sga heap(1,3) (idx=1, dur=4)
        Address 0x8c09c000 to 0x8c09e000 not in this pool
        Address 0x8c09e000 to 0x8c0b6000 in sga heap(1,3) (idx=1, dur=4)
        Address 0x8c0b6000 to 0x8c0b8000 not in this pool
        Address 0x8c0b8000 to 0x8c0d2000 in sga heap(1,3) (idx=1, dur=4)

    The dump shows a shared granule: it sits on the shared pool's partially-in-use list while its owner is still the default buffer cache, it still contains buffer blocks and their headers, and the pieces the shared pool took are chunks of duration 4 (the execution duration) in subpool 1. Extents carved out of a quiesced granule go to the execution duration because those chunks are the easiest to free again when the granule eventually has to be given back.

    Finally, the usual views for watching ASMM at work:

        V$SGAINFO                  - Displays summary information about the system global area (SGA).
        V$SGA                      - Displays size information about the SGA, including the sizes of different SGA components, the granule size, and free memory.
        V$SGASTAT                  - Displays detailed information about the SGA.
        V$SGA_DYNAMIC_COMPONENTS   - Displays information about the dynamic SGA components. This view summarizes information based on all completed SGA resize operations since instance startup.
        V$SGA_DYNAMIC_FREE_MEMORY  - Displays information about the amount of SGA memory available for future dynamic SGA resize operations.
        V$SGA_RESIZE_OPS           - Displays information about the last 400 completed SGA resize operations.
        V$SGA_CURRENT_RESIZE_OPS   - Displays information about SGA resize operations that are currently in progress. A resize operation is an enlargement or reduction of a dynamic SGA component.
        V$SGA_TARGET_ADVICE        - Displays information that helps you tune SGA_TARGET.

    The shared pool duration mechanism mentioned above deserves a post of its own and will be covered separately.

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 2

    - by Simon Cooper
    Before we look any further at the CLR metadata, we need a quick diversion to understand how the metadata is actually stored.

    Encoding table information
    As an example, we'll have a look at a row in the TypeDef table. According to the spec, each TypeDef consists of the following:

        Flags specifying various properties of the class, including visibility.
        The name of the type.
        The namespace of the type.
        What type this type extends.
        The field list of this type.
        The method list of this type.

    How is all this data actually represented?

    Offset & RID encoding
    Most assemblies don't need to use a 4 byte value to specify heap offsets and RIDs everywhere, however we can't hard-code every offset and RID to be 2 bytes long as there could conceivably be more than 65535 items in a heap or more than 65535 fields or types defined in an assembly. So heap offsets and RIDs are only represented in the full 4 bytes if it is required; in the header information at the top of the #~ stream are 3 bits indicating if the #Strings, #GUID, or #Blob heaps use 2 or 4 bytes (the #US stream is not accessed from metadata), and the rowcount of each table. If the rowcount for a particular table is greater than 65535 then all RIDs referencing that table throughout the metadata use 4 bytes, else only 2 bytes are used.

    Coded tokens
    Not every field in a table row references a single predefined table. For example, in the TypeDef extends field, a type can extend another TypeDef (a type in the same assembly), a TypeRef (a type in a different assembly), or a TypeSpec (an instantiation of a generic type). A token would have to be used to let us specify the table along with the RID. Tokens are always 4 bytes long; again, this is rather wasteful of space. Cutting the RID down to 2 bytes would make each token 3 bytes long, which isn't really an optimum size for computers to read from memory or disk. However, every use of a token in the metadata tables can only point to a limited subset of the metadata tables. For the extends field, we only need to be able to specify one of 3 tables, which we can do using 2 bits:

        0x0: TypeDef
        0x1: TypeRef
        0x2: TypeSpec

    We could therefore compress the 4-byte token that would otherwise be needed into a coded token of type TypeDefOrRef. For each type of coded token, the least significant bits encode the table the token points to, and the rest of the bits encode the RID within that table. We can work out whether each type of coded token needs 2 or 4 bytes to represent it by working out whether the maximum RID of every table that the coded token type can point to will fit in the space available. The space available for the RID depends on the type of coded token; a TypeOrMethodDef coded token only needs 1 bit to specify the table, leaving 15 bits available for the RID before a 4-byte representation is needed, whereas a HasCustomAttribute coded token can point to one of 18 different tables, and so needs 5 bits to specify the table, only leaving 11 bits for the RID before 4 bytes are needed to represent that coded token type.

    For example, a 2-byte TypeDefOrRef coded token with the value 0x0321 has the following bit pattern:

        0    3    2    1
        0000 0011 0010 0001

    The bottom two bits specify the table - TypeRef; the other bits specify the RID. Because we've used the bottom two bits, we've got to shift everything along two bits:

        00 0000 1100 1000

    This gives us a RID of 0xc8. If any one of the TypeDef, TypeRef or TypeSpec tables had more than 16383 rows (2^14 - 1), then 4 bytes would need to be used to represent all TypeDefOrRef coded tokens throughout the metadata tables.

    Lists
    The third representation we need to consider is 1-to-many references; each TypeDef refers to a list of FieldDef and MethodDef belonging to that type. If we were to specify every FieldDef and MethodDef individually then each TypeDef would be very large and a variable size, which isn't ideal. There is a way of specifying a list of references without explicitly specifying every item; if we order the MethodDef and FieldDef tables by the owning type, then the field list and method list in a TypeDef only have to be a single RID pointing at the first FieldDef or MethodDef belonging to that type; the end of the list can be inferred by the field list and method list RIDs of the next row in the TypeDef table.

    Going back to the TypeDef
    If we have a look back at the definition of a TypeDef, we end up with the following representation for each row:

        Flags      - always 4 bytes
        Name       - a #Strings heap offset.
        Namespace  - a #Strings heap offset.
        Extends    - a TypeDefOrRef coded token.
        FieldList  - a single RID to the FieldDef table.
        MethodList - a single RID to the MethodDef table.

    So, depending on the number of entries in the heaps and tables within the assembly, the rows in the TypeDef table can be as small as 14 bytes, or as large as 24 bytes. Now we've had a look at how information is encoded within the metadata tables, in the next post we can see how they are arranged on disk.
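
    The decoding rule for a TypeDefOrRef coded token is easy to express in a few lines; the helper below is just a standalone sketch of the worked example above, with the 2 tag bits selecting the table and the remaining bits giving the RID:

        #include <cstdint>
        #include <cstdio>

        // A TypeDefOrRef coded token: the low 2 bits select the table,
        // the remaining bits are the RID (row index) within that table.
        struct TypeDefOrRef {
            std::uint32_t table;   // 0 = TypeDef, 1 = TypeRef, 2 = TypeSpec
            std::uint32_t rid;
        };

        TypeDefOrRef decodeTypeDefOrRef(std::uint32_t codedToken) {
            TypeDefOrRef result;
            result.table = codedToken & 0x3;   // the 2 tag bits
            result.rid   = codedToken >> 2;    // everything above the tag bits
            return result;
        }

        int main() {
            TypeDefOrRef t = decodeTypeDefOrRef(0x0321);
            std::printf("table=%u rid=0x%x\n", (unsigned)t.table, (unsigned)t.rid);
            return 0;
        }

    Feeding it 0x0321 yields table 1 (TypeRef) and RID 0xc8, matching the bit pattern walked through above.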

    Read the article

  • UBJsonReader (Libgdx) unable to read UBJson from Python (Blender)

    - by daniel
    I am working on an export tool from Blender to Libgdx that exports things like custom attributes and other information (almost completed). This is a very cool tool that will speed up your work a lot; once it is complete I will post it to the contribute forum. The export format uses Python's standard json module and readable text, and it of course works fine, but I also want a binary JSON export for faster loading, so users can export straight to Libgdx. After searching I found that the UBJson encoding of draft9.py (simpleubjson 0.6.1) seems to match the one FBXConverter's UBJsonWriter (which Xoppa wrote) uses, but when I export, I am not able to read the file, and it gives this error (Java heap space). It seems there is a difference between byte sizes in UBJson (Python) and UBJsonReader. How can I write a correct one in Python that matches Libgdx's UBJsonReader and would be cross-platform?

        Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: java.lang.OutOfMemoryError: Java heap space
            at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:120)
        Caused by: java.lang.OutOfMemoryError: Java heap space
            at com.badlogic.gdx.utils.UBJsonReader.readString(UBJsonReader.java:162)
            at com.badlogic.gdx.utils.UBJsonReader.parseString(UBJsonReader.java:150)
            at com.badlogic.gdx.utils.UBJsonReader.parseObject(UBJsonReader.java:112)
            at com.badlogic.gdx.utils.UBJsonReader.parse(UBJsonReader.java:59)
            at com.badlogic.gdx.utils.UBJsonReader.parse(UBJsonReader.java:52)
            at com.badlogic.gdx.utils.UBJsonReader.parse(UBJsonReader.java:36)
            at com.badlogic.gdx.utils.UBJsonReader.parse(UBJsonReader.java:45)
            at com.me.gdximportexport.GdxImportExport.create(GdxImportExport.java:43)
            at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop(LwjglApplication.java:136)
            at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:114)

    Tested on UbuntuStudio 13.10 with OpenJDK 7, and on Windows 7 with JDK 7. Thanks for any guidance.

    Read the article

  • Why do memory-managed languages retain the `new` keyword?

    - by Channel72
    The new keyword in languages like Java, Javascript, and C# creates a new instance of a class. This syntax seems to have been inherited from C++, where new is used specifically to allocate a new instance of a class on the heap, and return a pointer to the new instance. In C++, this is not the only way to construct an object. You can also construct an object on the stack, without using new - and in fact, this way of constructing objects is much more common in C++. So, coming from a C++ background, the new keyword in languages like Java, Javascript, and C# seemed natural and obvious to me.

    Then I started to learn Python, which doesn't have the new keyword. In Python, an instance is constructed simply by calling the constructor, like: f = Foo() At first, this seemed a bit off to me, until it occurred to me that there's no reason for Python to have new, because everything is an object so there's no need to disambiguate between various constructor syntaxes.

    But then I thought - what's really the point of new in Java? Why should we say Object o = new Object();? Why not just Object o = Object();? In C++ there's definitely a need for new, since we need to distinguish between allocating on the heap and allocating on the stack, but in Java all objects are constructed on the heap, so why even have the new keyword? The same question could be asked for Javascript. In C#, which I'm much less familiar with, I think new may have some purpose in terms of distinguishing between object types and value types, but I'm not sure.

    Regardless, it seems to me that many languages which came after C++ simply "inherited" the new keyword - without really needing it. It's almost like a vestigial keyword. We don't seem to need it for any reason, and yet it's there.

    Question: Am I correct about this? Or is there some compelling reason that new needs to be in C++-inspired memory-managed languages like Java, Javascript and C#?
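
    The stack-versus-heap distinction the question leans on is easy to see in a few lines of C++, where the same type can be given either storage duration; Foo here is just an illustrative type:

        #include <memory>
        #include <string>

        struct Foo {
            std::string name;
            explicit Foo(const std::string& n) : name(n) {}
        };

        int main() {
            // Automatic storage: constructed on the stack, destroyed automatically at end of scope.
            Foo onStack("stack");

            // Dynamic storage: 'new' allocates on the heap and returns a pointer;
            // the caller owns the lifetime and must delete it.
            Foo* onHeap = new Foo("heap");
            delete onHeap;

            // Modern C++ usually wraps the heap allocation in a smart pointer (C++11).
            std::unique_ptr<Foo> managed(new Foo("managed heap"));

            return 0;
        }

    In Java, C# and JavaScript only the second form exists for class instances, which is exactly why the question of whether the keyword still earns its place keeps coming up.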

    Read the article

  • Types of semantic bugs, logic errors [closed]

    - by C-Otto
    I am a PhD student and currently focus on automatically finding instances of new types of bugs in (Java) programs that cannot be found by existing tools like FindBugs. The existing tool is currently used to prove/disprove termination of (Java) programs. I have some ideas (see below), but I could use more input from you (experienced programmers, potential users of my tool). What kind of bugs do you wish to find? What types of bugs exist and might be suitable for my analysis?

    One strength of the approach I use is detailed information about the heap. So in contrast to FindBugs, I can work with knowledge of the form "variable x and variable y are disjoint on the heap" or "variable z is not cyclic". It is also possible to see if a method might have side effects (and if so, which variables may/may not be affected by it).

    Example 1: Vacuous call:

        Graph graphOne = createGraph();
        Graph graphTwo = createGraph();
        Node source = graphTwo.getRootNode();
        for (Node n : graphOne.getNodes()) {
            if (areConnected(source, n)) {
                graphTwo.addNode(n);
            }
        }

    Imagine createGraph() creates a fresh graph, so that graphOne and graphTwo are disjoint on the heap. Then, because source is taken from graphTwo instead of graphOne, the call to areConnected always returns false. In this situation I could find out that the call to areConnected is useless (because it does not have any side effect and the return value always is false), which helps in finding the real bug (taking source from the wrong graph). For this, the information that x and y are disjoint (because graphOne and graphTwo are disjoint) is crucial. This bug is related to calling x.equals(y) where x and y are objects of different classes. In this scenario, most implementations of equals() always return false, which most likely is not the intended result. FindBugs already finds this bug (hardcoded to equals(); the semantics of the implementation are not checked).

    Example 2: Useless code:

        someCode();
        while (something()) {
            yetMoreSomething();
        }
        moreCode();

    In the case that the loop (so the code in something() and yetMoreSomething()) does not modify anything visible outside the loop, it does not make sense to run this code - the program has the same behaviour as someCode(); moreCode() (i.e., without the loop). To find this out, one needs detailed information about the side effects of the (possibly useless) code. If I can prove that the code does not have any side effect that can be observed afterwards (in the example: in moreCode() or later), then the code indeed is useless. Of course, here input/output of any form must be seen as a side effect, so that a System.out.println(...) is not considered useless.

    Example 3: Ignored return value: Instead of x = foo(); and making use of x, the method is called without storing the result: foo();. If the method does not have any side effect, its invocation is useless and can be dropped. Most likely, the bug here is that the returned value should have been used. Here, too, detailed information about side effects is needed.

    Can you think of similar types of bugs that might be detected (only) with detailed information about the heap, side effects, semantics of called methods, ...? Did you encounter bugs related to the ones shown above in "real life"? By the way, the tool is AProVE and Java-related publications can be found on my homepage. Thanks a lot, Carsten

    Read the article

  • Mastering snow and Java development at jDays in Gothenburg

    - by JavaCecilia
    Last weekend, I took the train from Stockholm to Gothenburg to attend and present at the new Java developer conference jDays. It was professionally arranged in the Swedish exhibition hall close to the amusement park Liseberg, and we got a great deal out of the top-level presenters and hallway discussions.

    Understanding and Improving Your Java Process
    Our main purpose was to spread information on the JVM and our monitoring tools for Java processes, so I held a crash course in the most important terms and concepts if you want to affect the performance of your Java process - from the beginning, the JVM specification, to the interpretation of heap usage graphs. For correct analysis, you also need to understand something about process memory: you need space for the Java heap (-Xms for initial size and -Xmx for max heap size), but the process memory also contains the thread stacks (each of size -Xss), JVM internal data structures used for keeping track of Java objects on the heap, method compilation/optimization, native libraries, etc. If you get long pause times, make sure to monitor your application and look at the allocation rate and the frequency of pause times. My colleague Klara Ward then held a presentation on the Java Mission Control product, the profiling and diagnostics tools suite for HotSpot, coming soon. The room was packed and the talk very much appreciated; Klara demonstrated four different scenarios, e.g. how to diagnose and fix latencies due to lock contention for logging. My German colleague, OpenJDK ambassador Dalibor Topic, travelled to Sweden to do the second keynote on "Make the Future Java". He let us in on the coming features and roadmaps of Java, now delivering major versions on a two-year schedule (Java 7 2011, Java 8 2013, etc.), and also let us in on where to download early versions of 8, to report problems early on.

    Software Development in teams
    Being a scout leader, I'm drilled in different team building and workshop techniques for creating strong groups - of course, I had to attend Henrik Berglund's session on building successful teams. He spoke about the importance of clear goals, autonomy and agreed processes. Thomas Sundberg ended the conference by doing live remote pair programming with Alex in Romania and gave concrete tips for people wanting to try it out (for local collaboration, remember to wash and change clothes).

    Memory Master Keynote
    The conference keynote was delivered by the Swedish memory master Mattias Ribbing, showing off by remembering the order of a deck of cards he'd seen once. He made it interactive by forcing the audience to learn a memory mastering technique of remembering ten ordered things by heart, asking us to shout out the order backwards, and we made it! I desperately need this - I bought the book and will get back on the subject.

    Continuous Delivery
    The most impressive presenter was Axel Fontaine on Continuous Delivery. He had very well prepared slides with key images of his message and moved about the stage like a rock star. The topic is of course highly interesting: how to create an infrastructure enabling immediate feedback to developers and the ability to release your product several times per day. Tomek Kaczanowski delivered a funny and useful presentation on good and bad tests, providing comic relief with poorly written tests and useful rules of thumb for how to rewrite them.

    To conclude, we had a great time and hope to see you at jDays next year :)

    Read the article

  • Difference between the address space of parent process and its child process in Linux?

    - by abbas1707
    Hi, I am confused about this. I have read that when a child is created by a parent process, the child gets a copy of its parent's address space. What does "copy" mean here? If I use the code below, it prints the same address for variable 'a', which is on the heap, in both cases, i.e. in the child and in the parent. So what is happening here?

        #include <sys/types.h>
        #include <sys/wait.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <stdlib.h>

        int main () {
            pid_t pid;
            int *a = (int *)malloc(4);
            printf ("heap pointer %p\n", a);

            pid = fork();
            if (pid < 0) {
                fprintf (stderr, "Fork Failed");
                exit(-1);
            }
            else if (pid == 0) {
                printf ("Child\n");
                printf ("in child heap pointer %p\n", a);
            }
            else {
                wait (NULL);
                printf ("Child Complete\n");
                printf ("in parent heap pointer %p\n", a);
                exit(0);
            }
        }
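
    The addresses match because the child receives a copy of the parent's virtual address space: the same virtual address exists in both processes, but after fork it is backed by copy-on-write pages, so a write in one process is not visible in the other. A small variation of the program above makes the separation visible (POSIX assumed):

        #include <cstdio>
        #include <cstdlib>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main() {
            int* a = (int*)std::malloc(sizeof(int));
            *a = 1;

            pid_t pid = fork();
            if (pid < 0) {
                std::perror("fork");
                return 1;
            } else if (pid == 0) {
                *a = 42;   // writes to the child's private copy of the page
                std::printf("child : a=%p *a=%d\n", (void*)a, *a);
                return 0;
            } else {
                waitpid(pid, NULL, 0);
                std::printf("parent: a=%p *a=%d\n", (void*)a, *a);  // still 1
            }
            std::free(a);
            return 0;
        }

    Both lines print the same pointer value, yet the parent still sees 1 after the child has written 42, because the child's write caused its page to be copied.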

    Read the article

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I had always been a fan of ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well – very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me. Background While a performance profiler will track how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program. As an example, I’d been working on a data access layer at work to call a market data web service.  This web service would take a list of symbols to quote and would return back the quote data.  To help consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data.  Not quite long enough to where customers will typically notice a quote go “stale”, but just long enough to be able to collapse multiple quote requests for the same symbol in a short period of time. A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information.  The question is whether or not the extra memory involved in maintaining the cache was worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage. First Impressions The main thing I’ve always loved about the ANTS tools is their ease of use.  Pretty much everything is right there in front of you in a way that makes it easy for you to find what you need with little digging required.  I’ve worked with other, older profilers before (that shall remain nameless other than to hint it was created by a very large chip maker) where it was a mind boggling experience to figure out how to do simple tasks. Not so with AMP.  The opening dialog is very straightforward.  You can choose from here whether to debug an executable, a web application (either in IIS or from VS’s web development server), windows services, etc. So I chose a .NET Executable and navigated to the build location of my test harness.  Then began profiling. At this point while the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections.  At any given point in time, you can take snapshots (to compare states) zoom in, or choose to stop at any time.  Snapshots Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations where collection becomes less frequent). Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes.  Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer.  I once3 made the mistake once of caching data for 30 minutes and found it didn’t get collected very quick after I released my reference because it had been promoted to Gen 2 – doh! Analysis It looks like (from the second pie chart) that the majority of the allocations were in the string class.  
This also is expected for me because the majority of the memory allocated is in the web service responses, so it doesn’t seem the entities I’m adapting to (to prevent being too tightly coupled to the web service proxy classes, which can change easily out from under me) aren’t taking a significant portion of memory. I also appreciate that they have clear summary text in key places such as “No issues with large object heap fragmentation were detected”.  For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom for “What to look for on the summary” which loads a web page of help on key points to look for. Clicking over to the session overview, it’s easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same.  Looking at my snapshots, I’m pretty happy with the fact that memory allocation and heap size seems to be fairly stable and in control: Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends. Back on the analysis tab, we can go to the [Class List] button to get an idea what classes are making up the majority of our memory usage.  As was little surprise to me, System.String was the clear majority of my allocations, though I found it surprising that the System.Reflection.RuntimeMehtodInfo came in second.  I was curious about this, so I selected it and went into the [Instance Categorizer].  This view let me see where these instances to RuntimeMehtodInfo were coming from. So I scrolled back through the graph, and discovered that these were being held by the System.ServiceModel.ChannelFactoryRefCache and I was satisfied this was just an artifact of my WCF proxy. I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find.  For example, if I suspected a memory leak, I might try to filter for survivors in growing classes.  This means that for instances of a class that are growing in memory (more are being created than cleaned up), which ones are survivors (not collected) from garbage collection.  This might allow me to drill down and find places where I’m holding onto references by mistake and not freeing them! Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the “Instance Retention Graph” which creates a graph showing what references are being held in memory and who is holding onto them. Visual Studio Integration Of course, VS has its own profiler built in – and for a free bundled profiler it is quite capable – but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. So once you go back into VS after installation, you’ll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio.   Clicking on one of these options fires up the project in the profiler immediately, allowing you to get right in.  It doesn’t integrate with the Visual Studio windows themselves (like the VS profiler does), but still the plethora of information it provides and the clear and concise manner in which it presents it makes it well worth it. Summary If you like the ANTS series of tools, you shouldn’t be disappointed with the ANTS Memory Profiler.  
    It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for.  I've used other profilers that came with 3-inch-thick tomes you had to read in order to get anywhere with the tool, and this one is not like that at all.  It's built for your everyday developer to get in and find their problems quickly, and I like that!

    Technorati Tags: Influencers, ANTS, Memory, Profiler
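
    As a footnote to the Background section above: the short-lived quote cache the article profiles is straightforward to sketch. The following is a minimal, illustrative version in Java (the article's actual code is .NET and isn't shown; QuoteService, Quote, and fetchQuote are hypothetical stand-ins), showing a 5-second time-based cache that collapses repeated lookups for the same symbol.

    ```java
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of a short-lived (5-second) quote cache; not the article's actual implementation.
    public class QuoteCache {
        private static final long TTL_MILLIS = 5000L;

        private static final class Entry {
            final Quote quote;
            final long fetchedAt;
            Entry(Quote quote, long fetchedAt) { this.quote = quote; this.fetchedAt = fetchedAt; }
        }

        private final Map<String, Entry> cache = new ConcurrentHashMap<String, Entry>();
        private final QuoteService service;

        public QuoteCache(QuoteService service) { this.service = service; }

        public Quote getQuote(String symbol) {
            long now = System.currentTimeMillis();
            Entry entry = cache.get(symbol);
            if (entry != null && now - entry.fetchedAt < TTL_MILLIS) {
                return entry.quote;                    // still fresh: collapse this request into the cached result
            }
            Quote quote = service.fetchQuote(symbol);  // miss or stale: call the web service once
            cache.put(symbol, new Entry(quote, now));
            return quote;
        }
    }

    // Hypothetical collaborator types, included only to make the sketch self-contained.
    interface QuoteService { Quote fetchQuote(String symbol); }
    class Quote { /* symbol, bid, ask, timestamp, ... */ }
    ```

    A production version would also want to coalesce concurrent misses for the same symbol and evict stale entries, but even this naive form shows the trade-off being profiled: a modest map of short-lived entries in exchange for far fewer web service calls.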

    Read the article

  • Appserver runs out of memory

    - by sarego
    We have been facing Out of Memory errors in our app server for some time. We see the used heap size increasing gradually until it finally reaches the available heap size. This happens every 3 weeks, after which a server restart is needed to fix it. Upon analysis of the heap dumps, we find the problem to be objects used in JSPs. Can JSP objects be the real cause of app server memory issues? How do we free up JSP objects (objects which are being instantiated using useBean or other tags)? We have a clustered WebSphere app server with 2 nodes and an IHS.
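
    Whether those JSP beans can be collected usually comes down to scope: request- and page-scoped beans become unreachable as soon as the request completes, while session- and application-scoped beans (scope="session" / scope="application" on jsp:useBean) stay on the heap until the attribute is removed or the session times out. As a rough illustration only – the attribute names below are hypothetical, and the right fix depends on what the heap dump shows is actually holding the references – explicit cleanup with the standard servlet API looks like this:

    ```java
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Illustrative cleanup of session-scoped beans created via <jsp:useBean scope="session" .../>.
    // "quoteHistory" and "reportData" are hypothetical attribute names.
    public class SessionCleanupServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            HttpSession session = req.getSession(false);   // don't create a session just to clean one up
            if (session != null) {
                // Drop individual heavyweight beans once the page flow no longer needs them...
                session.removeAttribute("quoteHistory");
                session.removeAttribute("reportData");
                // ...and cap how long an idle session (and everything it references) survives.
                session.setMaxInactiveInterval(30 * 60);    // 30 minutes
            }
        }
    }
    ```

    If the growth is instead in application scope or in static caches, the same heap-dump paths that pointed at the JSP objects should show which holder needs the equivalent cleanup.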

    Read the article

  • Help with force close occurrences in my app

    - by Ken
    This is the last issue with this app. Periodic force close situations. I think something should be on another thread but I'm not sure what. Anyway, I can always count on a freeze on first install. If I wait, eventually (maybe 10 seconds) the app comes around, maybe more. here is an excerpt from logcat--the three lines occur after full layout is displayed and I attempt to touch a [game] 'peg' which should spawn a sprite, but the freeze occurs there. Can anybody tell what the issue might be?: I/System.out( 279): TouchDown (17.0,106.0) I/System.out( 279): checking (17,106 I/System.out( 279): hit for bounds Rect(3, 98 - 32, 130) [FREEZE BEGINS] W/webcore ( 279): Can't get the viewWidth after the first layout W/WindowManager( 60): Key dispatching timed out sending to com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree W/WindowManager( 60): Previous dispatch state: null W/WindowManager( 60): Current dispatch state: {{null to Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false} @ 1295232880017 lw=Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false} lb=android.os.BinderProxy@440523b8 fin=false gfw=true ed=true tts=0 wf=false fp=false mcf=Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false}}} I/Process ( 60): Sending signal. PID: 279 SIG: 3 I/dalvikvm( 279): threadid=3: reacting to signal 3 D/dalvikvm( 124): GC_EXPLICIT freed 1754 objects / 106104 bytes in 7365ms I/Process ( 60): Sending signal. PID: 60 SIG: 3 I/dalvikvm( 60): threadid=3: reacting to signal 3 I/dalvikvm( 60): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 263 SIG: 3 I/dalvikvm( 263): threadid=3: reacting to signal 3 I/dalvikvm( 279): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 117 SIG: 3 I/dalvikvm( 117): threadid=3: reacting to signal 3 I/dalvikvm( 117): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 254 SIG: 3 I/Process ( 60): Sending signal. PID: 121 SIG: 3 I/dalvikvm( 121): threadid=3: reacting to signal 3 D/AudioSink( 34): bufferCount (4) is too small and increased to 12 I/System.out( 279): making white sprite I/Process ( 60): Sending signal. PID: 186 SIG: 3 I/Process ( 60): Sending signal. PID: 232 SIG: 3 D/MillennialMediaAdSDK( 279): size: 1 D/MillennialMediaAdSDK( 279): num: 1 D/AdWhirl SDK( 279): Millennial success D/AdWhirl SDK( 279): Will call rotateAd() in 120 seconds I/dalvikvm( 232): threadid=3: reacting to signal 3 I/dalvikvm( 121): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 222 SIG: 3 I/MillennialMediaAdSDK( 279): Millennial ad return success D/MillennialMediaAdSDK( 279): View height: 0 D/MillennialMediaAdSDK( 279): nextUrl: [deleted] I/Process ( 60): Sending signal. PID: 239 SIG: 3 I/Process ( 60): Sending signal. PID: 213 SIG: 3 D/AdWhirl SDK( 279): Added subview D/AdWhirl SDK( 279): Pinging URL: [deleted] I/Process ( 60): Sending signal. PID: 197 SIG: 3 I/dalvikvm( 197): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. PID: 164 SIG: 3 I/dalvikvm( 164): threadid=3: reacting to signal 3 D/dalvikvm( 279): GC_FOR_MALLOC freed 7735 objects / 639688 bytes in 217ms I/Process ( 60): Sending signal. PID: 124 SIG: 3 I/dalvikvm( 124): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. PID: 158 SIG: 3 I/dalvikvm( 158): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. 
PID: 127 SIG: 3 E/ActivityManager( 60): ANR in com.live.brainbuilderfree (com.live.brainbuilderfree/.BrainBuilderFree) E/ActivityManager( 60): Reason: keyDispatchingTimedOut E/ActivityManager( 60): Load: 3.46 / 1.69 / 0.65 E/ActivityManager( 60): CPU usage from 28095ms to 140ms ago: E/ActivityManager( 60): system_server: 30% = 25% user + 4% kernel / faults: 3119 minor 66 major E/ActivityManager( 60): mediaserver: 11% = 7% user + 4% kernel / faults: 746 minor 17 major E/ActivityManager( 60): com.svox.pico: 1% = 0% user + 1% kernel / faults: 2833 minor 8 major E/ActivityManager( 60): d.process.acore: 1% = 0% user + 0% kernel / faults: 1146 minor 36 major E/ActivityManager( 60): ndroid.launcher: 1% = 0% user + 0% kernel / faults: 852 minor 6 major E/ActivityManager( 60): m.android.phone: 0% = 0% user + 0% kernel / faults: 621 minor 7 major E/ActivityManager( 60): kswapd0: 0% = 0% user + 0% kernel E/ActivityManager( 60): ronsoft.openwnn: 0% = 0% user + 0% kernel / faults: 337 minor 2 major E/ActivityManager( 60): adbd: 0% = 0% user + 0% kernel / faults: 3 minor E/ActivityManager( 60): zygote: 0% = 0% user + 0% kernel / faults: 169 minor E/ActivityManager( 60): events/0: 0% = 0% user + 0% kernel E/ActivityManager( 60): rild: 0% = 0% user + 0% kernel / faults: 103 minor 3 major E/ActivityManager( 60): pdflush: 0% = 0% user + 0% kernel E/ActivityManager( 60): .quicksearchbox: 0% = 0% user + 0% kernel / faults: 61 minor E/ActivityManager( 60): id.defcontainer: 0% = 0% user + 0% kernel / faults: 12 minor E/ActivityManager( 60): +rainbuilderfree: 0% = 0% user + 0% kernel E/ActivityManager( 60): +sh: 0% = 0% user + 0% kernel E/ActivityManager( 60): +app_process: 0% = 0% user + 0% kernel E/ActivityManager( 60): TOTAL: 100% = 76% user + 21% kernel + 2% iowait + 0% irq + 0% softirq I/dalvikvm( 127): threadid=3: reacting to signal 3 I/dalvikvm( 186): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 3747 objects / 228920 bytes in 609ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.759MB for 36896-byte allocation I/dalvikvm( 239): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 226 objects / 9952 bytes in 546ms I/dalvikvm( 213): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 105 objects / 5816 bytes in 492ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.815MB for 49188-byte allocation I/dalvikvm( 222): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 77 objects / 5232 bytes in 546ms I/dalvikvm( 254): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 105 objects / 55856 bytes in 521ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.876MB for 98360-byte allocation D/dalvikvm( 60): GC_FOR_MALLOC freed 58 objects / 3632 bytes in 340ms D/dalvikvm( 60): GC_FOR_MALLOC freed 1093 objects / 185256 bytes in 572ms W/WindowManager( 60): Continuing to wait for key to be dispatched I/System.out( 279): TouchMove (117.0,124.0) I/System.out( 279): TouchUP (117.0,124.0) D/dalvikvm( 60): GC_FOR_MALLOC freed 141 objects / 108328 bytes in 564ms I/ARMAssembler( 60): generated scanline__00000077:03515104_00000000_00000000 [ 33 ipp] (47 ins) at [0x313d78:0x313e34] in 11621593 ns W/InputManagerService( 60): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@43f66a10 I/dalvikvm( 239): Wrote stack traces to '/data/anr/traces.txt' I/dalvikvm( 263): Wrote stack traces to '/data/anr/traces.txt' etc...
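
    The ANR above reports keyDispatchingTimedOut, which generally means the UI thread was still busy (here, apparently building the sprite in the touch handler) when the next input event needed dispatching. Purely as a sketch – Sprite, makeWhiteSprite() and showSprite() below are hypothetical stand-ins for whatever the real handler does – the slow work can be pushed off the UI thread like this:

    ```java
    import android.app.Activity;

    // Sketch only: Sprite, makeWhiteSprite() and showSprite() are hypothetical stand-ins
    // for whatever the real touch handler builds and displays.
    public class GameActivity extends Activity {

        // Called from the touch handler instead of building the sprite inline on the UI thread.
        private void onPegTouched(final float x, final float y) {
            new Thread(new Runnable() {
                @Override public void run() {
                    final Sprite sprite = makeWhiteSprite(x, y);     // slow work happens off the UI thread
                    runOnUiThread(new Runnable() {
                        @Override public void run() {
                            showSprite(sprite);                      // only the UI thread touches views
                        }
                    });
                }
            }).start();
        }

        private Sprite makeWhiteSprite(float x, float y) { return new Sprite(); }  // placeholder
        private void showSprite(Sprite s) { /* add the sprite to the view/surface here */ }

        private static final class Sprite { }                        // placeholder type
    }
    ```

    An AsyncTask would do the same job on that era of the platform; the key point is that anything long-running in a touch handler (the system only allows a few seconds before declaring an ANR) risks exactly this trace.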

    Read the article

  • Core dump equivalence for Java

    - by m3rLinEz
    So far I have learned about generating a thread dump and a heap dump using jstack and jmap respectively. However, a jstack thread dump contains only text describing the stack of each thread, and opening a heap dump (.hprof file) with Java VisualVM only shows the objects allocated on the heap. What I actually want is to be able to see the stack, switch to a particular stack frame, and watch local variables. This kind of post-mortem debugging can normally be done with tools like WinDbg or gdb and a core file (for a native C++ program). I wonder if such a 'core' file (which would allow me to debug in a non-live environment) exists for Java?
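
    For what it's worth, the jstack/jmap snapshots described above can also be captured from inside the JVM (Java 6+, HotSpot), which helps when the interesting moment is hard to catch by attaching tools afterwards. A minimal sketch using the standard management beans (the output file name is arbitrary):

    ```java
    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Sketch: capture a thread dump and an .hprof heap dump programmatically (Java 6+, HotSpot).
    public class DumpOnDemand {
        public static void main(String[] args) throws Exception {
            // Thread dump: stack traces plus lock information for every live thread.
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                System.out.print(info);   // note: ThreadInfo.toString() truncates very deep stacks
            }

            // Heap dump: same .hprof format that jmap -dump produces ("live" = only reachable objects).
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            diag.dumpHeap("ondemand.hprof", true);
        }
    }
    ```

    On HotSpot, jstack and jmap can also be pointed at an operating-system core file of the java process, which is the closest analogue to the WinDbg/gdb workflow described above, though neither route gives frame-by-frame local-variable inspection the way a native debugger does.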

    Read the article

  • Why does jruby complain about valid java_opts

    - by brad
    I have set my Java min/max heap sizes to be the same, as outlined in the Sun docs for precise heap sizing, using the following: -Xms768m -Xmx768m. This works fine when I start Tomcat, but if I run jruby from the command line it complains, saying:
    Error occurred during initialization of VM
    Incompatible minimum and maximum heap sizes specified
    I read in the JRuby docs about some -J-X params, but it seems silly that I would need to explicitly override my normal JVM settings. The problem arises when I do a deploy: I try running jruby -S rake db:migrate on my server and it complains. Is it true that I need to explicitly override my JVM settings when running jruby? It seems as though ANY Xms/Xmx values cause jruby to complain.
    Update: It seems that some settings do in fact work. For instance, all of these work:
    -Xmx256m -Xms256m
    -Xmx512m -Xms256m
    -Xmx512m -Xms500m
    But these don't:
    -Xmx512m -Xms512m
    -Xmx512m -Xms501m
    -Xmx768m -Xms512m

    Read the article

  • Identify cause of hundreds of AJP threads in Tomcat

    - by Rich
    We have two Tomcat 6.0.20 servers fronted by Apache, with communication between the two using AJP. Tomcat in turn consumes web services on a JBoss cluster. This morning, one of the Tomcat machines was using 100% of CPU on 6 of its 8 cores. We took a heap dump using JConsole, and then tried to connect JVisualVM to get a profile and see what was taking all the CPU, but this caused Tomcat to crash. At least we had the heap dump! I have loaded the heap dump into Eclipse MAT, where I have found that we have 565 instances of java.lang.Thread. Some of these, obviously, are entirely legitimate, but the vast majority are named "ajp-6009-XXX", where XXX is a number. I know my way around Eclipse MAT pretty well, but haven't been able to find an explanation for it. If anyone has some pointers as to why Tomcat may be doing this, or some hints on finding out why using Eclipse MAT, that'd be appreciated!

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >