Search Results

Search found 13869 results on 555 pages for 'memory dump'.

Page 196/555 | < Previous Page | 192 193 194 195 196 197 198 199 200 201 202 203  | Next Page >

  • How to connect two ubuntu computers with ethernet cable

    - by Lukasz Zaroda
    I'm trying to connect two computers - a desktop and a laptop - with an ethernet cable. What I want to do is transfer a lot of data from one to the other. The problem is that I'm following everything from: How to network two Ubuntu computers using ethernet (without a router)? But after that, ping always gives me "Destination host unreachable". I've been searching for a while but couldn't figure out why it doesn't work; maybe it's something about my devices, or maybe someone will have another idea. The ethernet cable came with my router. There is text printed on it: Aurit Data Cable Cat.5 UTP 26AWG 4PAIR AWM PUC 75°C EIA/TIA 568B. It currently connects my desktop to the router, so I can send this question. My desktop: System: Ubuntu 12.04 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03) "ethtool -i eth0" output: driver: r8169 version: 2.3LK-NAPI firmware-version: rtl_nic/rtl8168d-1.fw bus-info: 0000:01:00.0 supports-statistics: yes supports-test: no supports-eeprom-access: no supports-register-dump: yes My laptop: System: Ubuntu 14.04 Ethernet controller: Qualcomm Atheros AR8162 Fast Ethernet (rev 08) "ethtool -i eth0" output: driver: alx version: firmware-version: bus-info: 0000:01:00.0 supports-statistics: no supports-test: no supports-eeprom-access: no supports-register-dump: no supports-priv-flags: no My iptables are accepting everything. Any ideas why I cannot reach the other computer?

    Read the article

  • Using R to Analyze G1GC Log Files

    - by user12620111
    Using R to Analyze G1GC Log Files body, td { font-family: sans-serif; background-color: white; font-size: 12px; margin: 8px; } tt, code, pre { font-family: 'DejaVu Sans Mono', 'Droid Sans Mono', 'Lucida Console', Consolas, Monaco, monospace; } h1 { font-size:2.2em; } h2 { font-size:1.8em; } h3 { font-size:1.4em; } h4 { font-size:1.0em; } h5 { font-size:0.9em; } h6 { font-size:0.8em; } a:visited { color: rgb(50%, 0%, 50%); } pre { margin-top: 0; max-width: 95%; border: 1px solid #ccc; white-space: pre-wrap; } pre code { display: block; padding: 0.5em; } code.r, code.cpp { background-color: #F8F8F8; } table, td, th { border: none; } blockquote { color:#666666; margin:0; padding-left: 1em; border-left: 0.5em #EEE solid; } hr { height: 0px; border-bottom: none; border-top-width: thin; border-top-style: dotted; border-top-color: #999999; } @media print { * { background: transparent !important; color: black !important; filter:none !important; -ms-filter: none !important; } body { font-size:12pt; max-width:100%; } a, a:visited { text-decoration: underline; } hr { visibility: hidden; page-break-before: always; } pre, blockquote { padding-right: 1em; page-break-inside: avoid; } tr, img { page-break-inside: avoid; } img { max-width: 100% !important; } @page :left { margin: 15mm 20mm 15mm 10mm; } @page :right { margin: 15mm 10mm 15mm 20mm; } p, h2, h3 { orphans: 3; widows: 3; } h2, h3 { page-break-after: avoid; } } pre .operator, pre .paren { color: rgb(104, 118, 135) } pre .literal { color: rgb(88, 72, 246) } pre .number { color: rgb(0, 0, 205); } pre .comment { color: rgb(76, 136, 107); } pre .keyword { color: rgb(0, 0, 255); } pre .identifier { color: rgb(0, 0, 0); } pre .string { color: rgb(3, 106, 7); } var hljs=new function(){function m(p){return p.replace(/&/gm,"&").replace(/"}while(y.length||w.length){var v=u().splice(0,1)[0];z+=m(x.substr(q,v.offset-q));q=v.offset;if(v.event=="start"){z+=t(v.node);s.push(v.node)}else{if(v.event=="stop"){var p,r=s.length;do{r--;p=s[r];z+=("")}while(p!=v.node);s.splice(r,1);while(r'+M[0]+""}else{r+=M[0]}O=P.lR.lastIndex;M=P.lR.exec(L)}return r+L.substr(O,L.length-O)}function J(L,M){if(M.sL&&e[M.sL]){var r=d(M.sL,L);x+=r.keyword_count;return r.value}else{return F(L,M)}}function I(M,r){var L=M.cN?'':"";if(M.rB){y+=L;M.buffer=""}else{if(M.eB){y+=m(r)+L;M.buffer=""}else{y+=L;M.buffer=r}}D.push(M);A+=M.r}function G(N,M,Q){var R=D[D.length-1];if(Q){y+=J(R.buffer+N,R);return false}var P=q(M,R);if(P){y+=J(R.buffer+N,R);I(P,M);return P.rB}var L=v(D.length-1,M);if(L){var O=R.cN?"":"";if(R.rE){y+=J(R.buffer+N,R)+O}else{if(R.eE){y+=J(R.buffer+N,R)+O+m(M)}else{y+=J(R.buffer+N+M,R)+O}}while(L1){O=D[D.length-2].cN?"":"";y+=O;L--;D.length--}var r=D[D.length-1];D.length--;D[D.length-1].buffer="";if(r.starts){I(r.starts,"")}return R.rE}if(w(M,R)){throw"Illegal"}}var E=e[B];var D=[E.dM];var A=0;var x=0;var y="";try{var s,u=0;E.dM.buffer="";do{s=p(C,u);var t=G(s[0],s[1],s[2]);u+=s[0].length;if(!t){u+=s[1].length}}while(!s[2]);if(D.length1){throw"Illegal"}return{r:A,keyword_count:x,value:y}}catch(H){if(H=="Illegal"){return{r:0,keyword_count:0,value:m(C)}}else{throw H}}}function g(t){var p={keyword_count:0,r:0,value:m(t)};var r=p;for(var q in e){if(!e.hasOwnProperty(q)){continue}var s=d(q,t);s.language=q;if(s.keyword_count+s.rr.keyword_count+r.r){r=s}if(s.keyword_count+s.rp.keyword_count+p.r){r=p;p=s}}if(r.language){p.second_best=r}return p}function i(r,q,p){if(q){r=r.replace(/^((]+|\t)+)/gm,function(t,w,v,u){return w.replace(/\t/g,q)})}if(p){r=r.replace(/\n/g,"")}return 
r}function n(t,w,r){var x=h(t,r);var v=a(t);var y,s;if(v){y=d(v,x)}else{return}var q=c(t);if(q.length){s=document.createElement("pre");s.innerHTML=y.value;y.value=k(q,c(s),x)}y.value=i(y.value,w,r);var u=t.className;if(!u.match("(\\s|^)(language-)?"+v+"(\\s|$)")){u=u?(u+" "+v):v}if(/MSIE [678]/.test(navigator.userAgent)&&t.tagName=="CODE"&&t.parentNode.tagName=="PRE"){s=t.parentNode;var p=document.createElement("div");p.innerHTML=""+y.value+"";t=p.firstChild.firstChild;p.firstChild.cN=s.cN;s.parentNode.replaceChild(p.firstChild,s)}else{t.innerHTML=y.value}t.className=u;t.result={language:v,kw:y.keyword_count,re:y.r};if(y.second_best){t.second_best={language:y.second_best.language,kw:y.second_best.keyword_count,re:y.second_best.r}}}function o(){if(o.called){return}o.called=true;var r=document.getElementsByTagName("pre");for(var p=0;p|=||=||=|\\?|\\[|\\{|\\(|\\^|\\^=|\\||\\|=|\\|\\||~";this.ER="(?![\\s\\S])";this.BE={b:"\\\\.",r:0};this.ASM={cN:"string",b:"'",e:"'",i:"\\n",c:[this.BE],r:0};this.QSM={cN:"string",b:'"',e:'"',i:"\\n",c:[this.BE],r:0};this.CLCM={cN:"comment",b:"//",e:"$"};this.CBLCLM={cN:"comment",b:"/\\*",e:"\\*/"};this.HCM={cN:"comment",b:"#",e:"$"};this.NM={cN:"number",b:this.NR,r:0};this.CNM={cN:"number",b:this.CNR,r:0};this.BNM={cN:"number",b:this.BNR,r:0};this.inherit=function(r,s){var p={};for(var q in r){p[q]=r[q]}if(s){for(var q in s){p[q]=s[q]}}return p}}();hljs.LANGUAGES.cpp=function(){var a={keyword:{"false":1,"int":1,"float":1,"while":1,"private":1,"char":1,"catch":1,"export":1,virtual:1,operator:2,sizeof:2,dynamic_cast:2,typedef:2,const_cast:2,"const":1,struct:1,"for":1,static_cast:2,union:1,namespace:1,unsigned:1,"long":1,"throw":1,"volatile":2,"static":1,"protected":1,bool:1,template:1,mutable:1,"if":1,"public":1,friend:2,"do":1,"return":1,"goto":1,auto:1,"void":2,"enum":1,"else":1,"break":1,"new":1,extern:1,using:1,"true":1,"class":1,asm:1,"case":1,typeid:1,"short":1,reinterpret_cast:2,"default":1,"double":1,register:1,explicit:1,signed:1,typename:1,"try":1,"this":1,"switch":1,"continue":1,wchar_t:1,inline:1,"delete":1,alignof:1,char16_t:1,char32_t:1,constexpr:1,decltype:1,noexcept:1,nullptr:1,static_assert:1,thread_local:1,restrict:1,_Bool:1,complex:1},built_in:{std:1,string:1,cin:1,cout:1,cerr:1,clog:1,stringstream:1,istringstream:1,ostringstream:1,auto_ptr:1,deque:1,list:1,queue:1,stack:1,vector:1,map:1,set:1,bitset:1,multiset:1,multimap:1,unordered_set:1,unordered_map:1,unordered_multiset:1,unordered_multimap:1,array:1,shared_ptr:1}};return{dM:{k:a,i:"",k:a,r:10,c:["self"]}]}}}();hljs.LANGUAGES.r={dM:{c:[hljs.HCM,{cN:"number",b:"\\b0[xX][0-9a-fA-F]+[Li]?\\b",e:hljs.IMMEDIATE_RE,r:0},{cN:"number",b:"\\b\\d+(?:[eE][+\\-]?\\d*)?L\\b",e:hljs.IMMEDIATE_RE,r:0},{cN:"number",b:"\\b\\d+\\.(?!\\d)(?:i\\b)?",e:hljs.IMMEDIATE_RE,r:1},{cN:"number",b:"\\b\\d+(?:\\.\\d*)?(?:[eE][+\\-]?\\d*)?i?\\b",e:hljs.IMMEDIATE_RE,r:0},{cN:"number",b:"\\.\\d+(?:[eE][+\\-]?\\d*)?i?\\b",e:hljs.IMMEDIATE_RE,r:1},{cN:"keyword",b:"(?:tryCatch|library|setGeneric|setGroupGeneric)\\b",e:hljs.IMMEDIATE_RE,r:10},{cN:"keyword",b:"\\.\\.\\.",e:hljs.IMMEDIATE_RE,r:10},{cN:"keyword",b:"\\.\\.\\d+(?![\\w.])",e:hljs.IMMEDIATE_RE,r:10},{cN:"keyword",b:"\\b(?:function)",e:hljs.IMMEDIATE_RE,r:2},{cN:"keyword",b:"(?:if|in|break|next|repeat|else|for|return|switch|while|try|stop|warning|require|attach|detach|source|setMethod|setClass)\\b",e:hljs.IMMEDIATE_RE,r:1},{cN:"literal",b:"(?:NA|NA_integer_|NA_real_|NA_character_|NA_complex_)\\b",e:hljs.IMMEDIATE_RE,r:10},{cN:"literal",b:"(?:NULL|TRUE|FALSE|T|F|Inf|Na
N)\\b",e:hljs.IMMEDIATE_RE,r:1},{cN:"identifier",b:"[a-zA-Z.][a-zA-Z0-9._]*\\b",e:hljs.IMMEDIATE_RE,r:0},{cN:"operator",b:"|=||   Using R to Analyze G1GC Log Files   Using R to Analyze G1GC Log Files Introduction Working in Oracle Platform Integration gives an engineer opportunities to work on a wide array of technologies. My team’s goal is to make Oracle applications run best on the Solaris/SPARC platform. When looking for bottlenecks in a modern applications, one needs to be aware of not only how the CPUs and operating system are executing, but also network, storage, and in some cases, the Java Virtual Machine. I was recently presented with about 1.5 GB of Java Garbage First Garbage Collector log file data. If you’re not familiar with the subject, you might want to review Garbage First Garbage Collector Tuning by Monica Beckwith. The customer had been running Java HotSpot 1.6.0_31 to host a web application server. I was told that the Solaris/SPARC server was running a Java process launched using a commmand line that included the following flags: -d64 -Xms9g -Xmx9g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=80 -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -XX:+PrintFlagsFinal -XX:+DisableExplicitGC -XX:+UnlockExperimentalVMOptions -XX:ParallelGCThreads=8 Several sources on the internet indicate that if I were to print out the 1.5 GB of log files, it would require enough paper to fill the bed of a pick up truck. Of course, it would be fruitless to try to scan the log files by hand. Tools will be required to summarize the contents of the log files. Others have encountered large Java garbage collection log files. There are existing tools to analyze the log files: IBM’s GC toolkit The chewiebug GCViewer gchisto HPjmeter Instead of using one of the other tools listed, I decide to parse the log files with standard Unix tools, and analyze the data with R. Data Cleansing The log files arrived in two different formats. I guess that the difference is that one set of log files was generated using a more verbose option, maybe -XX:+PrintHeapAtGC, and the other set of log files was generated without that option. Format 1 In some of the log files, the log files with the less verbose format, a single trace, i.e. the report of a singe garbage collection event, looks like this: {Heap before GC invocations=12280 (full 61): garbage-first heap total 9437184K, used 7499918K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) region size 4096K, 1 young (4096K), 0 survivors (0K) compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000) the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000) No shared spaces configured. 2014-05-14T07:24:00.988-0700: 60586.353: [GC pause (young) 7324M->7320M(9216M), 0.1567265 secs] Heap after GC invocations=12281 (full 61): garbage-first heap total 9437184K, used 7496533K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) region size 4096K, 0 young (0K), 0 survivors (0K) compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000) the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000) No shared spaces configured. 
} A simple grep can be used to extract a summary: $ grep "\[ GC pause (young" g1gc.log 2014-05-13T13:24:35.091-0700: 3.109: [GC pause (young) 20M->5029K(9216M), 0.0146328 secs] 2014-05-13T13:24:35.440-0700: 3.459: [GC pause (young) 9125K->6077K(9216M), 0.0086723 secs] 2014-05-13T13:24:37.581-0700: 5.599: [GC pause (young) 25M->8470K(9216M), 0.0203820 secs] 2014-05-13T13:24:42.686-0700: 10.704: [GC pause (young) 44M->15M(9216M), 0.0288848 secs] 2014-05-13T13:24:48.941-0700: 16.958: [GC pause (young) 51M->20M(9216M), 0.0491244 secs] 2014-05-13T13:24:56.049-0700: 24.066: [GC pause (young) 92M->26M(9216M), 0.0525368 secs] 2014-05-13T13:25:34.368-0700: 62.383: [GC pause (young) 602M->68M(9216M), 0.1721173 secs] But that format wasn't easily read into R, so I needed to be a bit more tricky. I used the following Unix command to create a summary file that was easy for R to read. $ echo "SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime" $ grep "\[GC pause (young" g1gc.log | grep -v mark | sed -e 's/[A-SU-z\(\),]/ /g' -e 's/->/ /' -e 's/: / /g' | more SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime 2014-05-13T13:24:35.091-0700 3.109 20 5029 9216 0.0146328 2014-05-13T13:24:35.440-0700 3.459 9125 6077 9216 0.0086723 2014-05-13T13:24:37.581-0700 5.599 25 8470 9216 0.0203820 2014-05-13T13:24:42.686-0700 10.704 44 15 9216 0.0288848 2014-05-13T13:24:48.941-0700 16.958 51 20 9216 0.0491244 2014-05-13T13:24:56.049-0700 24.066 92 26 9216 0.0525368 2014-05-13T13:25:34.368-0700 62.383 602 68 9216 0.1721173 Format 2 In some of the log files, the log files with the more verbose format, a single trace, i.e. the report of a singe garbage collection event, was more complicated than Format 1. Here is a text file with an example of a single G1GC trace in the second format. As you can see, it is quite complicated. It is nice that there is so much information available, but the level of detail can be overwhelming. I wrote this awk script (download) to summarize each trace on a single line. #!/usr/bin/env awk -f BEGIN { printf("SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize\n") } ###################### # Save count data from lines that are at the start of each G1GC trace. 
# Each trace starts out like this: # {Heap before GC invocations=14 (full 0): # garbage-first heap total 9437184K, used 325496K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) ###################### /{Heap.*full/{ gsub ( "\\)" , "" ); nf=split($0,a,"="); split(a[2],b," "); getline; if ( match($0, "first") ) { G1GC=1; IncrementalCount=b[1]; FullCount=substr( b[3], 1, length(b[3])-1 ); } else { G1GC=0; } } ###################### # Pull out time stamps that are in lines with this format: # 2014-05-12T14:02:06.025-0700: 94.312: [GC pause (young), 0.08870154 secs] ###################### /GC pause/ { DateTime=$1; SecondsSinceLaunch=substr($2, 1, length($2)-1); } ###################### # Heap sizes are in lines that look like this: # [ 4842M->4838M(9216M)] ###################### /\[ .*]$/ { gsub ( "\\[" , "" ); gsub ( "\ \]" , "" ); gsub ( "->" , " " ); gsub ( "\\( " , " " ); gsub ( "\ \)" , " " ); split($0,a," "); if ( split(a[1],b,"M") > 1 ) {BeforeSize=b[1]*1024;} if ( split(a[1],b,"K") > 1 ) {BeforeSize=b[1];} if ( split(a[2],b,"M") > 1 ) {AfterSize=b[1]*1024;} if ( split(a[2],b,"K") > 1 ) {AfterSize=b[1];} if ( split(a[3],b,"M") > 1 ) {TotalSize=b[1]*1024;} if ( split(a[3],b,"K") > 1 ) {TotalSize=b[1];} } ###################### # Emit an output line when you find input that looks like this: # [Times: user=1.41 sys=0.08, real=0.24 secs] ###################### /\[Times/ { if (G1GC==1) { gsub ( "," , "" ); split($2,a,"="); UserTime=a[2]; split($3,a,"="); SysTime=a[2]; split($4,a,"="); RealTime=a[2]; print DateTime,SecondsSinceLaunch,IncrementalCount,FullCount,UserTime,SysTime,RealTime,BeforeSize,AfterSize,TotalSize; G1GC=0; } } The resulting summary is about 25X smaller that the original file, but still difficult for a human to digest. SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize ... 2014-05-12T18:36:34.669-0700: 3985.744 561 0 0.57 0.06 0.16 1724416 1720320 9437184 2014-05-12T18:36:34.839-0700: 3985.914 562 0 0.51 0.06 0.19 1724416 1720320 9437184 2014-05-12T18:36:35.069-0700: 3986.144 563 0 0.60 0.04 0.27 1724416 1721344 9437184 2014-05-12T18:36:35.354-0700: 3986.429 564 0 0.33 0.04 0.09 1725440 1722368 9437184 2014-05-12T18:36:35.545-0700: 3986.620 565 0 0.58 0.04 0.17 1726464 1722368 9437184 2014-05-12T18:36:35.726-0700: 3986.801 566 0 0.43 0.05 0.12 1726464 1722368 9437184 2014-05-12T18:36:35.856-0700: 3986.930 567 0 0.30 0.04 0.07 1726464 1723392 9437184 2014-05-12T18:36:35.947-0700: 3987.023 568 0 0.61 0.04 0.26 1727488 1723392 9437184 2014-05-12T18:36:36.228-0700: 3987.302 569 0 0.46 0.04 0.16 1731584 1724416 9437184 Reading the Data into R Once the GC log data had been cleansed, either by processing the first format with the shell script, or by processing the second format with the awk script, it was easy to read the data into R. g1gc.df = read.csv("summary.txt", row.names = NULL, stringsAsFactors=FALSE,sep="") str(g1gc.df) ## 'data.frame': 8307 obs. of 10 variables: ## $ row.names : chr "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:" "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:" ... ## $ SecondsSinceLaunch: num 1.16 1.47 1.97 3.83 6.1 ... ## $ IncrementalCount : int 0 1 2 3 4 5 6 7 8 9 ... ## $ FullCount : int 0 0 0 0 0 0 0 0 0 0 ... ## $ UserTime : num 0.11 0.05 0.04 0.21 0.08 0.26 0.31 0.33 0.34 0.56 ... ## $ SysTime : num 0.04 0.01 0.01 0.05 0.01 0.06 0.07 0.06 0.07 0.09 ... ## $ RealTime : num 0.02 0.02 0.01 0.04 0.02 0.04 0.05 0.04 0.04 0.06 ... 
## $ BeforeSize : int 8192 5496 5768 22528 24576 43008 34816 53248 55296 93184 ... ## $ AfterSize : int 1400 1672 2557 4907 7072 14336 16384 18432 19456 21504 ... ## $ TotalSize : int 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 ... head(g1gc.df) ## row.names SecondsSinceLaunch IncrementalCount ## 1 2014-05-12T14:00:32.868-0700: 1.161 0 ## 2 2014-05-12T14:00:33.179-0700: 1.472 1 ## 3 2014-05-12T14:00:33.677-0700: 1.969 2 ## 4 2014-05-12T14:00:35.538-0700: 3.830 3 ## 5 2014-05-12T14:00:37.811-0700: 6.103 4 ## 6 2014-05-12T14:00:41.428-0700: 9.720 5 ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize ## 1 0 0.11 0.04 0.02 8192 1400 9437184 ## 2 0 0.05 0.01 0.02 5496 1672 9437184 ## 3 0 0.04 0.01 0.01 5768 2557 9437184 ## 4 0 0.21 0.05 0.04 22528 4907 9437184 ## 5 0 0.08 0.01 0.02 24576 7072 9437184 ## 6 0 0.26 0.06 0.04 43008 14336 9437184 Basic Statistics Once the data has been read into R, simple statistics are very easy to generate. All of the numbers from high school statistics are available via simple commands. For example, generate a summary of every column: summary(g1gc.df) ## row.names SecondsSinceLaunch IncrementalCount FullCount ## Length:8307 Min. : 1 Min. : 0 Min. : 0.0 ## Class :character 1st Qu.: 9977 1st Qu.:2048 1st Qu.: 0.0 ## Mode :character Median :12855 Median :4136 Median : 12.0 ## Mean :12527 Mean :4156 Mean : 31.6 ## 3rd Qu.:15758 3rd Qu.:6262 3rd Qu.: 61.0 ## Max. :55484 Max. :8391 Max. :113.0 ## UserTime SysTime RealTime BeforeSize ## Min. :0.040 Min. :0.0000 Min. : 0.0 Min. : 5476 ## 1st Qu.:0.470 1st Qu.:0.0300 1st Qu.: 0.1 1st Qu.:5137920 ## Median :0.620 Median :0.0300 Median : 0.1 Median :6574080 ## Mean :0.751 Mean :0.0355 Mean : 0.3 Mean :5841855 ## 3rd Qu.:0.920 3rd Qu.:0.0400 3rd Qu.: 0.2 3rd Qu.:7084032 ## Max. :3.370 Max. :1.5600 Max. :488.1 Max. :8696832 ## AfterSize TotalSize ## Min. : 1380 Min. :9437184 ## 1st Qu.:5002752 1st Qu.:9437184 ## Median :6559744 Median :9437184 ## Mean :5785454 Mean :9437184 ## 3rd Qu.:7054336 3rd Qu.:9437184 ## Max. :8482816 Max. :9437184 Q: What is the total amount of User CPU time spent in garbage collection? sum(g1gc.df$UserTime) ## [1] 6236 As you can see, less than two hours of CPU time was spent in garbage collection. Is that too much? To find the percentage of time spent in garbage collection, divide the number above by total_elapsed_time*CPU_count. In this case, there are a lot of CPU’s and it turns out the the overall amount of CPU time spent in garbage collection isn’t a problem when viewed in isolation. When calculating rates, i.e. events per unit time, you need to ask yourself if the rate is homogenous across the time period in the log file. Does the log file include spikes of high activity that should be separately analyzed? Averaging in data from nights and weekends with data from business hours may alias problems. If you have a reason to suspect that the garbage collection rates include peaks and valleys that need independent analysis, see the “Time Series” section, below. Q: How much garbage is collected on each pass? The amount of heap space that is recovered per GC pass is surprisingly low: At least one collection didn’t recover any data. (“Min.=0”) 25% of the passes recovered 3MB or less. (“1st Qu.=3072”) Half of the GC passes recovered 4MB or less. (“Median=4096”) The average amount recovered was 56MB. (“Mean=56390”) 75% of the passes recovered 36MB or less. (“3rd Qu.=36860”) At least one pass recovered 2GB. 
(“Max.=2121000”) g1gc.df$Delta = g1gc.df$BeforeSize - g1gc.df$AfterSize summary(g1gc.df$Delta) ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 0 3070 4100 56400 36900 2120000 Q: What is the maximum User CPU time for a single collection? The worst garbage collection (“Max.”) is many standard deviations away from the mean. The data appears to be right skewed. summary(g1gc.df$UserTime) ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 0.040 0.470 0.620 0.751 0.920 3.370 sd(g1gc.df$UserTime) ## [1] 0.3966 Basic Graphics Once the data is in R, it is trivial to plot the data with formats including dot plots, line charts, bar charts (simple, stacked, grouped), pie charts, boxplots, scatter plots histograms, and kernel density plots. Histogram of User CPU Time per Collection I don't think that this graph requires any explanation. hist(g1gc.df$UserTime, main="User CPU Time per Collection", xlab="Seconds", ylab="Frequency") Box plot to identify outliers When the initial data is viewed with a box plot, you can see the one crazy outlier in the real time per GC. Save this data point for future analysis and drop the outlier so that it’s not throwing off our statistics. Now the box plot shows many outliers, which will be examined later, using times series analysis. Notice that the scale of the x-axis changes drastically once the crazy outlier is removed. par(mfrow=c(2,1)) boxplot(g1gc.df$UserTime,g1gc.df$SysTime,g1gc.df$RealTime, main="Box Plot of Time per GC\n(dominated by a crazy outlier)", names=c("usr","sys","elapsed"), xlab="Seconds per GC", ylab="Time (Seconds)", horizontal = TRUE, outcol="red") crazy.outlier.df=g1gc.df[g1gc.df$RealTime > 400,] g1gc.df=g1gc.df[g1gc.df$RealTime < 400,] boxplot(g1gc.df$UserTime,g1gc.df$SysTime,g1gc.df$RealTime, main="Box Plot of Time per GC\n(crazy outlier excluded)", names=c("usr","sys","elapsed"), xlab="Seconds per GC", ylab="Time (Seconds)", horizontal = TRUE, outcol="red") box(which = "outer", lty = "solid") Here is the crazy outlier for future analysis: crazy.outlier.df ## row.names SecondsSinceLaunch IncrementalCount ## 8233 2014-05-12T23:15:43.903-0700: 20741 8316 ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize ## 8233 112 0.55 0.42 488.1 8381440 8235008 9437184 ## Delta ## 8233 146432 R Time Series Data To analyze the garbage collection as a time series, I’ll use Z’s Ordered Observations (zoo). 
“zoo is the creator for an S3 class of indexed totally ordered observations which includes irregular time series.” require(zoo) ## Loading required package: zoo ## ## Attaching package: 'zoo' ## ## The following objects are masked from 'package:base': ## ## as.Date, as.Date.numeric head(g1gc.df[,1]) ## [1] "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:" ## [3] "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:" ## [5] "2014-05-12T14:00:37.811-0700:" "2014-05-12T14:00:41.428-0700:" options("digits.secs"=3) times=as.POSIXct( g1gc.df[,1], format="%Y-%m-%dT%H:%M:%OS%z:") g1gc.z = zoo(g1gc.df[,-c(1)], order.by=times) head(g1gc.z) ## SecondsSinceLaunch IncrementalCount FullCount ## 2014-05-12 17:00:32.868 1.161 0 0 ## 2014-05-12 17:00:33.178 1.472 1 0 ## 2014-05-12 17:00:33.677 1.969 2 0 ## 2014-05-12 17:00:35.538 3.830 3 0 ## 2014-05-12 17:00:37.811 6.103 4 0 ## 2014-05-12 17:00:41.427 9.720 5 0 ## UserTime SysTime RealTime BeforeSize AfterSize ## 2014-05-12 17:00:32.868 0.11 0.04 0.02 8192 1400 ## 2014-05-12 17:00:33.178 0.05 0.01 0.02 5496 1672 ## 2014-05-12 17:00:33.677 0.04 0.01 0.01 5768 2557 ## 2014-05-12 17:00:35.538 0.21 0.05 0.04 22528 4907 ## 2014-05-12 17:00:37.811 0.08 0.01 0.02 24576 7072 ## 2014-05-12 17:00:41.427 0.26 0.06 0.04 43008 14336 ## TotalSize Delta ## 2014-05-12 17:00:32.868 9437184 6792 ## 2014-05-12 17:00:33.178 9437184 3824 ## 2014-05-12 17:00:33.677 9437184 3211 ## 2014-05-12 17:00:35.538 9437184 17621 ## 2014-05-12 17:00:37.811 9437184 17504 ## 2014-05-12 17:00:41.427 9437184 28672 Example of Two Benchmark Runs in One Log File The data in the following graph is from a different log file, not the one of primary interest to this article. I’m including this image because it is an example of idle periods followed by busy periods. It would be uninteresting to average the rate of garbage collection over the entire log file period. More interesting would be the rate of garbage collect in the two busy periods. Are they the same or different? Your production data may be similar, for example, bursts when employees return from lunch and idle times on weekend evenings, etc. Once the data is in an R Time Series, you can analyze isolated time windows. Clipping the Time Series data Flashing back to our test case… Viewing the data as a time series is interesting. You can see that the work intensive time period is between 9:00 PM and 3:00 AM. Lets clip the data to the interesting period:     par(mfrow=c(2,1)) plot(g1gc.z$UserTime, type="h", main="User Time per GC\nTime: Complete Log File", xlab="Time of Day", ylab="CPU Seconds per GC", col="#1b9e77") clipped.g1gc.z=window(g1gc.z, start=as.POSIXct("2014-05-12 21:00:00"), end=as.POSIXct("2014-05-13 03:00:00")) plot(clipped.g1gc.z$UserTime, type="h", main="User Time per GC\nTime: Limited to Benchmark Execution", xlab="Time of Day", ylab="CPU Seconds per GC", col="#1b9e77") box(which = "outer", lty = "solid") Cumulative Incremental and Full GC count Here is the cumulative incremental and full GC count. When the line is very steep, it indicates that the GCs are repeating very quickly. Notice that the scale on the Y axis is different for full vs. incremental. plot(clipped.g1gc.z[,c(2:3)], main="Cumulative Incremental and Full GC count", xlab="Time of Day", col="#1b9e77") GC Analysis of Benchmark Execution using Time Series data In the following series of 3 graphs: The “After Size” show the amount of heap space in use after each garbage collection. Many Java objects are still referenced, i.e. 
alive, during each garbage collection. This may indicate that the application has a memory leak, or may indicate that the application has a very large memory footprint. Typically, an application's memory footprint plateau's in the early stage of execution. One would expect this graph to have a flat top. The steep decline in the heap space may indicate that the application crashed after 2:00. The second graph shows that the outliers in real execution time, discussed above, occur near 2:00. when the Java heap seems to be quite full. The third graph shows that Full GCs are infrequent during the first few hours of execution. The rate of Full GC's, (the slope of the cummulative Full GC line), changes near midnight.   plot(clipped.g1gc.z[,c("AfterSize","RealTime","FullCount")], xlab="Time of Day", col=c("#1b9e77","red","#1b9e77")) GC Analysis of heap recovered Each GC trace includes the amount of heap space in use before and after the individual GC event. During garbage coolection, unreferenced objects are identified, the space holding the unreferenced objects is freed, and thus, the difference in before and after usage indicates how much space has been freed. The following box plot and bar chart both demonstrate the same point - the amount of heap space freed per garbage colloection is surprisingly low. par(mfrow=c(2,1)) boxplot(as.vector(clipped.g1gc.z$Delta), main="Amount of Heap Recovered per GC Pass", xlab="Size in KB", horizontal = TRUE, col="red") hist(as.vector(clipped.g1gc.z$Delta), main="Amount of Heap Recovered per GC Pass", xlab="Size in KB", breaks=100, col="red") box(which = "outer", lty = "solid") This graph is the most interesting. The dark blue area shows how much heap is occupied by referenced Java objects. This represents memory that holds live data. The red fringe at the top shows how much data was recovered after each garbage collection. barplot(clipped.g1gc.z[,c("AfterSize","Delta")], col=c("#7570b3","#e7298a"), xlab="Time of Day", border=NA) legend("topleft", c("Live Objects","Heap Recovered on GC"), fill=c("#7570b3","#e7298a")) box(which = "outer", lty = "solid") When I discuss the data in the log files with the customer, I will ask for an explaination for the large amount of referenced data resident in the Java heap. There are two are posibilities: There is a memory leak and the amount of space required to hold referenced objects will continue to grow, limited only by the maximum heap size. After the maximum heap size is reached, the JVM will throw an “Out of Memory” exception every time that the application tries to allocate a new object. If this is the case, the aplication needs to be debugged to identify why old objects are referenced when they are no longer needed. The application has a legitimate requirement to keep a large amount of data in memory. The customer may want to further increase the maximum heap size. Another possible solution would be to partition the application across multiple cluster nodes, where each node has responsibility for managing a unique subset of the data. Conclusion In conclusion, R is a very powerful tool for the analysis of Java garbage collection log files. The primary difficulty is data cleansing so that information can be read into an R data frame. Once the data has been read into R, a rich set of tools may be used for thorough evaluation.

    Read the article

  • Client-Server MMOG & data structures sync when joining / playing

    - by plang
    After reading a few articles on MMOG architecture, there is still one point on which I cannot find much information: it has to do with how you keep server data in sync on the client, when you join and while you play. A pretty vague question, I agree. Let me refine it: Let's say we have an MMOG virtual world subdivided into geographical cells. A player in a cell is mostly interested in what happens in the cell itself and all the surrounding cells, not more. When joining the game for the first time, the only thing we can do is send some sort of "database dump" of the interesting cells to the client. When playing, I guess it would be very inefficient to do the same thing regularly. I imagine the best thing to do is to send "deltas" to the client, which would allow keeping the local database in sync. Now let's say the player moves, and arrives in another cell. Surrounding cells change, and for all the new cells the player subscribes to, the same technique as used when joining the game has to be used: some sort of "database dump". This mechanic of joining/moving in a cell-based MMOG virtual world interests me, and I was wondering if there were tried and tested techniques in this domain. Thanks!

    Read the article

  • Program received signal: “0”. warning: check_safe_call: could not restore current frame

    - by Kaushik
    I require urgent help! :( I'm developing a game and I'm dealing with around 20 images at the same time. As far as I know, I'm allocating and deallocating the images in the right places. The game runs fine for around 15 minutes but quits with an error message: "Program received signal: “0”. warning: check_safe_call: could not restore current frame" I also tried debugging with the memory leak tools provided in Xcode but could not find any issue with memory management or any increase in memory size. On the simulator it works fine, but not on the device. I'm confused about what the issue can be. Any help is appreciated. Thanks in advance

    Read the article

  • TaskFactory.StartNew versus ThreadPool.QueueUserWorkItem

    - by Dan Tao
    Apparently the TaskFactory.StartNew method in .NET 4.0 is intended as a replacement for ThreadPool.QueueUserWorkItem (according to this post, anyway). My question is simple: does anyone know why? Does TaskFactory.StartNew have better performance? Does it use less memory? Or is it mainly for the additional functionality provided by the Task class? In the latter case, does StartNew possibly have worse performance than QueueUserWorkItem? It seems to me that StartNew would actually potentially use more memory than QueueUserWorkItem, since it returns a Task object with every call and I would expect that to result in more memory allocation. In any case, I'm interested to know which is more appropriate for a high-performance scenario.
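    To make the comparison concrete, here is a minimal C# sketch of the two calls side by side; DoWork is a hypothetical method standing in for whatever the pool should run. The visible difference is that StartNew hands back a Task you can wait on, continue, or inspect for exceptions, while QueueUserWorkItem gives you nothing to observe afterwards.

        using System.Threading;
        using System.Threading.Tasks;

        class StartNewVersusQueue
        {
            static void DoWork() { /* hypothetical workload */ }

            static void Main()
            {
                // Pre-4.0 style: fire-and-forget, no handle to the queued work.
                ThreadPool.QueueUserWorkItem(_ => DoWork());

                // 4.0 style: returns a Task that can be waited on, continued, or
                // inspected, at the cost of allocating the Task object itself.
                Task t = Task.Factory.StartNew(() => DoWork());
                t.Wait(); // exceptions from DoWork surface here (wrapped in AggregateException)
            }
        }

    Whether that extra Task allocation matters in a high-throughput scenario is exactly the benchmarking question the post raises.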

    Read the article

  • Strange error occurring when using wcf to run query against sql server

    - by vondip
    Hi all, I am building an asp.net application, using IIS 6 on windows server 2003 (vps hosting). I am confronted with an error I didn't receive on my development machine (windows 7, iis 7.5, 64 bit). When my wcf service tries launching my query against a local sql server, this is the error I receive: Memory gates checking failed because the free memory (43732992 bytes) is less than 5% of total memory. As a result, the service will not be available for incoming requests. To resolve this, either reduce the load on the machine or adjust the value of minFreeMemoryPercentageToActivateService on the serviceHostingEnvironment config element. Any ideas??

    Read the article

  • Boost.Thread throws bad_alloc exception in VS2010

    - by the_drow
    Upon including <boost/thread.hpp> I get this exception: First-chance exception at 0x7c812afb in CSF.exe: Microsoft C++ exception: boost::exception_detail::clone_impl<boost::exception_detail::bad_alloc_> at memory location 0x0012fc3c.. First-chance exception at 0x7c812afb in CSF.exe: Microsoft C++ exception: [rethrow] at memory location 0x00000000.. I can't catch it, breaking at the memory location brings me to kernel32.dll and at this point I cannot say what's going on but it appears that the exception is thrown after the program ends and VS is capable of catching it. The testcase: #include <boost/thread.hpp> int main() { return 0; }

    Read the article

  • Singleton again, but with multi-thread and Objective-C

    - by Tonny Xu
    I know the Singleton pattern has been discussed so much. But because I don't fully understand the memory management mechanism in Objective-C, when I combined a Singleton implementation with multithreading, I got a big error that cost me a whole day to resolve. I had a singleton object, let's call it ObjectX. I declared an object inside ObjectX that will detach a new thread, let's call that object objectWillSpawnNewThread. When I called [[ObjectX sharedInstance].objectWillSpawnNewThread startNewThread]; the new thread could not be executed correctly, and finally I found out that I should not declare the object objectWillSpawnNewThread inside the singleton class. Here are my questions: How does Objective-C allocate static objects in memory? Where does Objective-C allocate them (main thread stack or somewhere else)? Why would it fail if we spawn a new thread inside the singleton object? I have searched the Objective-C language [ObjC.pdf] and the Objective-C memory management document; maybe I missed something, but currently I could not find any helpful information.

    Read the article

  • GC.AddMemoryPressure

    - by Steve Sheldon
    I am writing an application in C# that makes use of a 3rd party COM DLL; this dll creates a lot of resources (like bitmaps, video, data structures) in unmanaged memory. While digging around I came across the following call for the Garbage Collector: GC.AddMemoryPressure(long bytesAllocated) It is documented in MSDN here: http://msdn.microsoft.com/en-us/library/system.gc.addmemorypressure.aspx This sounds like something I should be calling since this external dll is creating a lot of resources the CLR is unaware of. I guess I have two questions... How do I know how much memory pressure to add when the dll is 3rd party and it's not possible for me to know exactly how much memory this dll is allocating? How important is it to do this?
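    As a rough illustration (the byte count is an assumed estimate, not anything the 3rd party DLL reports), here is a hedged C# sketch of the usual pattern: pair an AddMemoryPressure call with a matching RemoveMemoryPressure when the wrapper that owns the unmanaged resource is disposed. The estimate only has to tell the CLR that collecting this small managed wrapper frees a lot of unmanaged memory.

        using System;

        // Hypothetical wrapper around one unmanaged resource created by the COM DLL.
        sealed class ComResourceHandle : IDisposable
        {
            private readonly long _estimatedBytes; // assumed size of the unmanaged allocation
            private bool _disposed;

            public ComResourceHandle(long estimatedBytes)
            {
                _estimatedBytes = estimatedBytes;
                GC.AddMemoryPressure(_estimatedBytes);   // hint: this object is heavier than it looks
            }

            public void Dispose()
            {
                if (_disposed) return;
                _disposed = true;
                // ... release the underlying COM resource here ...
                GC.RemoveMemoryPressure(_estimatedBytes); // must mirror the earlier AddMemoryPressure
                GC.SuppressFinalize(this);
            }

            ~ComResourceHandle() { Dispose(); }
        }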

    Read the article

  • F# performance in scientific computing

    - by aaa
    Hello. I am curious as to how F# performance compares to C++ performance. I asked a similar question with regards to Java, and the impression I got was that Java is not suitable for heavy number crunching. I have read that F# is supposed to be more scalable and more performant, but how does its real-world performance compare to C++? Specific questions about the current implementation are: How well does it do floating-point? Does it allow vector instructions? How friendly is it towards optimizing compilers? How big a memory footprint does it have? Does it allow fine-grained control over memory locality? Does it have capacity for distributed memory processors, for example Cray? What features does it have that may be of interest to computational science where heavy number processing is involved? Are there actual scientific computing implementations that use it? Thanks

    Read the article

  • Question on Pointer Arithmetic

    - by pws5068
    Heyy Everybody! I am trying to create a memory management system, so that a user can call myMalloc, a method I created. I have a linked list keeping track of my free memory. My problem is when I am attempting to find the end of a free bit in my linked list. I am attempting to add the size of the memory free in that section (which is in the linked list) to the pointer to the front of the free space, like this. void *tailEnd = previousPlace->head_ptr + ((previousPlace->size+1)*(sizeof(int)); I was hoping that this would give me a pointer to the end of that segment. However, I keep getting the warning: "pointer of type 'void*' used in arithmetic" Is there a better way of doing this? Thanks!

    Read the article

  • .NET and 64-bit application

    - by user54064
    I want to make existing .NET applications (WinForms and WebForms) run on 64-bit machines, optimized to take advantage of more memory available on 64-bit machines. What do I need to do to the applications to take advantage of the memory? Do I just select the target CPU as 64-bit? What is the advantage of selecting the target versus just compiling the app for All CPUs and have the .NET optimize the app locally? Will Crystal Reports (in VS 2008) run optimized for 64-bit and take advantage of the upper memory?
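    For what it's worth, here is a minimal C# sketch (not specific to Crystal Reports) of how to confirm at runtime which mode the process actually ended up in, whatever target was chosen at compile time; the pointer-size check works on any .NET version.

        using System;

        class BitnessCheck
        {
            static void Main()
            {
                // 8 bytes per pointer means the process loaded as 64-bit, which is
                // what "Any CPU" gives you on a 64-bit OS; targeting x86 forces 4.
                Console.WriteLine("Pointer size: {0} bytes", IntPtr.Size);
                Console.WriteLine("64-bit process: {0}", IntPtr.Size == 8);
            }
        }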

    Read the article

  • Why is thread local storage so slow?

    - by dsimcha
    I'm working on a custom mark-release style memory allocator for the D programming language that works by allocating from thread-local regions. It seems that the thread local storage bottleneck is causing a huge (~50%) slowdown in allocating memory from these regions compared to an otherwise identical single threaded version of the code, even after designing my code to have only one TLS lookup per allocation/deallocation. This is based on allocating/freeing memory a large number of times in a loop, and I'm trying to figure out if it's an artifact of my benchmarking method. My understanding is that thread local storage should basically just involve accessing something through an extra layer of indirection, similar to accessing a variable via a pointer. Is this incorrect? How much overhead does thread-local storage typically have? Note: Although I mention D, I'm also interested in general answers that aren't specific to D, since D's implementation of thread-local storage will likely improve if it is slower than the best implementations.

    Read the article

  • What is the difference among NSString alloc:initWithCString versus stringWithUTF8String?

    - by mobibob
    I thought these two methods were (memory allocation-wise) equivalent, however, I was seeing "out of scope" and "NSCFString" in the debugger if I used what I thought was the convenient method (commented out below) and when I switched to the more explicit method my code stopped crashing! Notice that I am getting the string that is being stored in my container from sqlite3 query. p = (char*) sqlite3_column_text (queryStmt, 1); // GUID = (NSString*) [NSString stringWithUTF8String: (p!=NULL) ? p : ""]; GUID = [[NSString alloc] initWithCString:(p!=NULL) ? p : "" encoding:NSUTF8StringEncoding]; Also note, that if I looked at the values in the debugger and printed them with NSLog they looked correct, however, I don't think new memory was allocated and the value copied. Instead the memory pointer was stored - went out of scope - referenced later - crash!

    Read the article

  • OutOfMemoryError trying to upload to Blobstore locally

    - by jeanh
    Hi all, I am trying to set up a basic file upload to blobstore, but I get this OutOfMemoryError: WARNING: Error for /_ah/upload/ aghvbWdkcmVzc3IcCxIVX19CbG9iVXBsb2FkU2Vzc2lvbl9fGMACDA java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2786) at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:71) at javax.mail.internet.MimeMultipart.readTillFirstBoundary(MimeMultipart.java: 316) at javax.mail.internet.MimeMultipart.parse(MimeMultipart.java:186) at javax.mail.internet.MimeMultipart.getCount(MimeMultipart.java:109) at com.google.appengine.api.blobstore.dev.UploadBlobServlet.handleUpload(UploadBlobServlet.java: 135) at com.google.appengine.api.blobstore.dev.UploadBlobServlet.access $000(UploadBlobServlet.java:72) at com.google.appengine.api.blobstore.dev.UploadBlobServlet $1.run(UploadBlobServlet.java:100) at java.security.AccessController.doPrivileged(Native Method) at com.google.appengine.api.blobstore.dev.UploadBlobServlet.doPost(UploadBlobServlet.java: 98) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java: 511); I used the Memory Analyzer on Eclipse and it said that the memory leak suspect is QueuedThreadPool. I found this information about a memory leak bug: http://jira.codehaus.org/browse/JETTY-1188 Has anyone else had this issue? Thanks, Jean

    Read the article

  • if cookies are disabled, does asp.net store the cookie as a session cookie instead or not?

    - by Erx_VB.NExT.Coder
    Basically, if cookies are disabled on the client, I'm wondering if this... dim newCookie = New HttpCookie("cookieName", "cookieValue") newCookie.Expires = DateTime.Now.AddDays(1) response.cookies.add(newCookie) Notice I set a date, so it should be stored on disk. If cookies are disabled, does asp.net automatically store this cookie as a session cookie (which is a cookie that lasts in browser memory until the user closes the browser, if I am not mistaken)... OR does asp.net not add the cookie at all (anywhere), in which case I would have to re-add the cookie to the collection without the date (which stores it as a session cookie)... Of course, this would require me to add the cookie twice, perhaps the second time unnecessarily if it is being stored in browser memory anyway... I'm just trying not to store it twice as that's just bad code!! Any ideas if I need to write another line or not? (which would be)... response.cookies.add(New HttpCookie("cookieName", "cookieValue")) ' session cookie in client browser memory Thanks guys
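    For context: the server only emits a Set-Cookie header; whether the browser keeps it (on disk or in memory) is outside ASP.NET's control, so the usual way to know is a round-trip test. A minimal C# sketch of that idea (the question's code is VB.NET, but the mechanism is the same, and the "probe" cookie name is made up):

        using System;
        using System.Web;

        public static class CookieProbe
        {
            // On one request, hand the browser a persistent test cookie.
            public static void Begin(HttpResponse response)
            {
                var probe = new HttpCookie("probe", "1") { Expires = DateTime.Now.AddDays(1) };
                response.Cookies.Add(probe);
            }

            // On a later request, this is only true if the client actually sent it back.
            public static bool CookiesEnabled(HttpRequest request)
            {
                return request.Cookies["probe"] != null;
            }
        }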

    Read the article

  • How to prevent DOS attacks using image resizing in an ASP.NET application?

    - by Waleed Eissa
    I'm currently developing a site where users can upload images to use as avatars. I know this makes me sound a little paranoid, but I was wondering: what if a malicious user uploads an image with incredibly large dimensions that will eat the server memory (as a DOS attack)? I already have a limit on the file size that can be uploaded (250 k), but even that size can allow for an image with incredibly large dimensions if, for example, the image is a JPEG that contains one color and was created with a very low quality setting. Taking into consideration that the image is held as an uncompressed bitmap in memory when being resized, I wonder if such DOS attacks occur; even to check the image dimensions, it has to be loaded into memory first. Did you hear about any attacks that exploited this? Am I too worried?
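    One mitigation I'm aware of, sketched below under the assumption that System.Drawing/GDI+ is doing the resizing: read the image header with validateImageData set to false, which exposes Width and Height without decoding the full pixel data, so absurd dimensions can be rejected before any large uncompressed bitmap is allocated. The 4000-pixel limit is a made-up threshold.

        using System.Drawing;
        using System.IO;

        static class AvatarGuard
        {
            const int MaxDimension = 4000; // arbitrary example threshold

            // Returns true if the upload's dimensions look sane enough to decode fully.
            public static bool HasSaneDimensions(Stream upload)
            {
                upload.Position = 0;
                // validateImageData: false reads header metadata without decoding pixels,
                // so a 250 KB single-colour JPEG with huge dimensions is caught cheaply.
                using (var probe = Image.FromStream(upload, false, false))
                {
                    return probe.Width <= MaxDimension && probe.Height <= MaxDimension;
                }
            }
        }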

    Read the article

  • Multithreaded program in C: calculating thread stack space

    - by SlappyTheFish
    Situation: I am writing a program in C that maintains a number of threads. Once a thread ends, a new one is created. Each thread forks - the child runs a PHP process via exec() and the parent waits for it to finish. Each PHP process takes the next item from a queue, processes it and exits. Basic code: http://www.4pmp.com/2010/03/multitasking-php-in-parallel/ Problem: The PHP processes are Symfony tasks and Symfony requires a fairly huge amount of memory. How can I safely calculate the required stack space for each thread so that PHP processes will have enough memory? The memory limit set in php.ini is 128Mb so should I allocate this much space in the stack?

    Read the article

  • GC.AddMemoryPressure in C#

    - by ssheldon
    I am writing an application in C# that makes use of a 3rd party COM DLL; this dll creates a lot of resources (like bitmaps, video, data structures) in unmanaged memory. While digging around I came across the following call for the Garbage Collector: GC.AddMemoryPressure(long bytesAllocated) It is documented in MSDN here: http://msdn.microsoft.com/en-us/library/system.gc.addmemorypressure.aspx This sounds like something I should be calling since this external dll is creating a lot of resources the CLR is unaware of. I guess I have two questions... How do I know how much memory pressure to add when the dll is 3rd party and it's not possible for me to know exactly how much memory this dll is allocating? How important is it to do this?

    Read the article

  • How to debug unreleased COM references from managed code?

    - by Marek
    I have been searching for a tool to debug unreleased COM references, that usually cause e.g. Word/Outlook processes to hang in memory in case the code does not call Marshal.ReleaseCOMObject on all COM instances correctly. (Outlook 2007 partially fixes this for outlook addins, but this is a generic question). Is there a tool that would display at least a list of COM references (by type) held by managed code? Ideally, it would also display memory profiler-style object trees helping to debug where the reference increment occured. Debugging at runtime is not that important as being able to attach to a hung process - because the problem typically occurs when the code is done with the COM interface and someone forgot to release something - the application (e.g. winword) hangs in memory even after the calling managed application quits. If such tool does not exist, what is the (technical?) reason? It would be very useful for debugging a lot of otherwise very hard to find problems when working with COM interop.
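    For context, the release pattern the question alludes to looks like this in C#; a hedged sketch where the Word interop assembly is an assumed reference and the method body is a placeholder for whatever actually drives the COM object.

        using System.Runtime.InteropServices;
        using Word = Microsoft.Office.Interop.Word; // assumed interop reference; any COM type works the same way

        static class InteropCleanup
        {
            public static void UseWordBriefly()
            {
                Word.Application app = null;
                try
                {
                    app = new Word.Application();
                    // ... drive the COM object here ...
                }
                finally
                {
                    if (app != null)
                    {
                        app.Quit(); // C# 4 optional-argument syntax; older compilers need ref missing arguments
                        // Drops the RCW's reference count; once it reaches zero the COM
                        // object is released and winword.exe is free to exit.
                        Marshal.ReleaseComObject(app);
                    }
                }
            }
        }

    A hung process with no such finally block (or with an exception skipping it) is exactly the leftover-reference scenario the question wants to debug.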

    Read the article

  • resource acquisition is initialization "RAII"

    - by hitech
    In the example below: class X { int *r; public: X() { cout << "X is created"; r = new int[10]; } ~X() { cout << "X is destroyed"; delete [] r; } }; class Y { public: Y() { X x; throw 44; } ~Y() { cout << "Y is destroyed"; } }; I got this example of RAII from one site and have some doubts, please help. In the constructor of X we are not considering the scenario "if the memory allocation fails". Here the destructor of Y is safe, as Y's constructor is not allocating any memory. What if we need to do some memory allocation in Y's constructor as well?

    Read the article

  • How do I find out what objects points to another object i Xcode Instruments

    - by Arlaharen
    I am trying to analyze some code of mine, looking for memory leaks. My problem is that some of my objects are leaking (at least as far as I can see), but the Leaks tool doesn't detect the leaks. My guess is that some iPhone OS object still holds pointers to my leaked objects. The objects I am talking about are subclasses of UIViewController that I use like this: MyController *controller = [[MyController alloc] initWithNibName:@"MyController" bundle:nil]; [self.navigationController pushViewController:controller animated:YES]; When these objects are no longer needed I do: [self.navigationController popViewControllerAnimated:YES]; Without a [controller release] call right now. Now when I look at what objects that gets created I see a lot of MyController instances that never gets destroyed. To me these are memory leaks, but to the Leaks tool they are not. Can someone here tell me if there is some way Instruments can tell me what objects are pointing to my MyController instances and thereby making them not count as memory leaks?

    Read the article

  • VS2010 profiler/leak detection

    - by Noah Roberts
    Anyone know of a profiler and leak detector that will work with VS2010 code? Preferably one that runs on Win7. I've searched here and in google. I've found one leak detector that works (Memory Validator) but I'm not too impressed. For one thing it shows a bunch of menu leaks and stuff which I'm fairly confident are not real. I also tried GlowCode but it's JUST a profiler and refuses to install on win7. I used to use AQtime. It had everything I needed, memory/resource leak detection, profiling various things, static analysis, etc. Unfortunately it gives bogus results now. My main immediate issue is that VS2010 is saying there are leaks in a program that had none in VS2005. I'm almost certain it's false positives but I can't seem to find a good tool to verify this. Memory Validator doesn't show the same ones and the reporting of leaks from VS doesn't seem rational.

    Read the article

  • Legacy Database, Fluent NHibernate, and Testing my mappings

    - by sdanna
    As the post title implies, I have a legacy database (not sure if that matters), I'm using Fluent NHibernate and I'm attempting to test my mappings using the Fluent NHibernate PersistenceSpecification class. My question is really a process one, I want to test these when I build locally in Visual Studio using the built in Unit Testing framework for now. Obviously this implies (I think) that I'm going to need a database. What are some options for getting this into the build? If I use an in memory database does NHibernate or Fluent NHibernate have some some mechanism for sucking the database schema from a target database or maybe the in memory database can do this? Will I need to manually get the schema to feed to an in memory database? Ideally I would like to get this this setup to where the other developers don't really have to think about it other than when they break the build because the tests don't pass.
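    On the schema question: as far as I can tell, NHibernate can generate the schema from the mappings themselves via SchemaExport, so an in-memory SQLite database does not need the legacy database's DDL at all (whether the legacy schema's quirks survive that round trip is a separate concern). A hedged sketch of the usual setup, with MyEntityMap standing in for one of your ClassMaps; the exact SchemaExport overload varies a little between NHibernate versions.

        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;

        public class InMemoryDatabase
        {
            public ISessionFactory SessionFactory { get; private set; }
            public ISession Session { get; private set; }

            public InMemoryDatabase()
            {
                Configuration cfg = null;
                SessionFactory = Fluently.Configure()
                    .Database(SQLiteConfiguration.Standard.InMemory())
                    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<MyEntityMap>())
                    .ExposeConfiguration(c => cfg = c)
                    .BuildSessionFactory();

                // The schema must be created on the open connection, because an
                // in-memory SQLite database disappears when its connection closes.
                Session = SessionFactory.OpenSession();
                new SchemaExport(cfg).Execute(false, true, false, Session.Connection, null);
            }
        }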

    Read the article

  • GDB hardware watchpoint very slow - why?

    - by Laurynas Biveinis
    On a large C application, I have set a hardware watchpoint on a memory address as follows: (gdb) watch *((int*)0x12F5D58) Hardware watchpoint 3: *((int*)0x12F5D58) As you can see, it's a hardware watchpoint, not a software one, which would explain the slowness. Now the application running time under the debugger has changed from less than ten seconds to one hour and counting. The watchpoint has triggered three times so far, the first time after 15 minutes when the memory page containing the address was made readable by sbrk. Surely during those 15 minutes the watchpoint should have been efficient since the memory page was inaccessible? And that still does not explain why it's so slow afterwards. The GDB is $ gdb --version GNU gdb (GDB) 7.0-ubuntu [...] Thanks in advance for any ideas as to what might be the cause or how to fix/work around it.

    Read the article

< Previous Page | 192 193 194 195 196 197 198 199 200 201 202 203  | Next Page >