Search Results

Search found 4206 results on 169 pages for 'equals operator'.

Page 94/169 | < Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >

  • Error in onCreate() method

    - by user1644081
    I am a beginner in Android app development using Java. When I add this code: GCMRegistrar.checkDevice(this); GCMRegistrar.checkManifest(this); final String regId = GCMRegistrar.getRegistrationId(this); if (regId.equals("")) { GCMRegistrar.register(this, SENDER_ID); } else { Log.v(TAG, "Already registered"); } I get the error "cannot be resolved to a variable" on SENDER_ID and TAG.
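
    SENDER_ID and TAG are not part of the GCM library; they are constants you are expected to declare yourself. A minimal sketch (the sender ID below is a placeholder for your Google API project number, and the class name is hypothetical):

      import android.app.Activity;
      import android.os.Bundle;
      import android.util.Log;
      import com.google.android.gcm.GCMRegistrar;

      public class DemoActivity extends Activity {
          // Placeholder: your Google API project number from the API console.
          private static final String SENDER_ID = "123456789012";
          // Any label you like for logcat output.
          private static final String TAG = "GCMDemo";

          @Override
          protected void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              GCMRegistrar.checkDevice(this);
              GCMRegistrar.checkManifest(this);
              final String regId = GCMRegistrar.getRegistrationId(this);
              if (regId.equals("")) {
                  GCMRegistrar.register(this, SENDER_ID);
              } else {
                  Log.v(TAG, "Already registered");
              }
          }
      }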

    Read the article

  • Else without if

    - by user2808951
    I'm trying to write code for my computer programming class for a project due Monday, and I'm pretty new to Java, but I'm trying to write a program that will first determine if a number the user inputs is even or odd and then determine if the number is prime or not. I'm not sure if I got the algorithm right, so if anyone has any corrections to my algorithm or anything else, please say so, but my real issue is that the program refuses to compile. Every time I try, the compiler reports an "else without if" error. Here's a link to my command box: http://s1341.photobucket.com/user/Emi_Nightshade/media/Capture_zps45f9a2ea.png.html Here's my code: import java.io.*; import java.util.*; public class Lesson9p1_ThuotteEmily { public static void main(String args[]) { Scanner kbReader0=new Scanner(System.in); System.out.print("\n\nPlease enter an integer. An integer is whole number, and it can be either negative or positive. Please enter your number: "); long num=kbReader0.nextLong(); if(num%2==0) //if and else with braces { System.out.println("Your integer " + num + " is even."); } else { System.out.println("Your integer " + num + " is odd."); } Scanner kbReader1=new Scanner(System.in); System.out.print("\n\nWould you like to know if your number is prime? Please enter yes or no: "); String yn=kbReader1.nextLine(); if(yn.equals.IgnoreCase("Yes")) { System.out.println("Okay. Give me a moment."); { if(num%2==0) { System.out.println("Your number isn't prime."); } else if(num==2) { System.out.println("Your number is 2, which is the only even prime number in existence. Cool, right?"); } for(int i=3;i*i<=n;i+=2) { if(n%1==0) { System.out.println("Your number isn't prime."); } } else { System.out.println("Your number is prime!"); } } } if(yn.equals.IgnoreCase("No")) { System.out.println("Okay."); } } } If anyone could help me out with this, and also with any problems I may have made elsewhere in the program, I'd be very grateful! Thanks.
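
    For reference, here is one compilable sketch of the intended logic: equalsIgnoreCase is a single method call, an else must attach directly to an if (the stray for loop between them is what breaks compilation), and the divisor test should be num % i == 0 against num, not n:

      import java.util.Scanner;

      public class PrimeCheck {
          public static void main(String[] args) {
              Scanner kb = new Scanner(System.in);
              System.out.print("Please enter an integer: ");
              long num = kb.nextLong();
              System.out.println("Your integer " + num + " is "
                      + (num % 2 == 0 ? "even." : "odd."));

              System.out.print("Would you like to know if your number is prime? Enter yes or no: ");
              String yn = kb.next(); // next() skips the newline left behind by nextLong()
              if (yn.equalsIgnoreCase("yes")) {
                  boolean prime = num >= 2;       // 0, 1 and negatives are not prime
                  for (long i = 2; i * i <= num && prime; i++) {
                      if (num % i == 0) {         // found a divisor, so not prime
                          prime = false;
                      }
                  }
                  System.out.println(prime ? "Your number is prime!"
                                           : "Your number isn't prime.");
              }
          }
      }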

    Read the article

  • VC++ 6.0 application crashing inside CString::Format when %d is given.

    - by viswanathan
    A VC++ 6.0 application crashes when doing a CString::Format operation with the %d format specifier. This does not always occur, but happens once the application's memory grows to 100 MB or more. The same crash is also sometimes observed when a CString copy is done. The call stack looks like this: mfc42u!CFixedAlloc::Alloc+82 mfc42u!CString::AllocBuffer+3f 00000038 00000038 005b5b64 mfc42u!CString::AllocBeforeWrite+31 00000038 0a5bfdbc 005b5b64 mfc42u!CString::AssignCopy+13 00000038 057cb83f 0a5bfe90 mfc42u!CString::operator=+4b and this throws an access violation exception.

    Read the article

  • Using R to Analyze G1GC Log Files

    - by user12620111
    Introduction Working in Oracle Platform Integration gives an engineer opportunities to work on a wide array of technologies. My team’s goal is to make Oracle applications run best on the Solaris/SPARC platform. When looking for bottlenecks in a modern application, one needs to be aware of not only how the CPUs and operating system are executing, but also network, storage, and in some cases, the Java Virtual Machine. I was recently presented with about 1.5 GB of Java Garbage First Garbage Collector log file data. If you’re not familiar with the subject, you might want to review Garbage First Garbage Collector Tuning by Monica Beckwith. The customer had been running Java HotSpot 1.6.0_31 to host a web application server. I was told that the Solaris/SPARC server was running a Java process launched using a command line that included the following flags: -d64 -Xms9g -Xmx9g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=80 -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -XX:+PrintFlagsFinal -XX:+DisableExplicitGC -XX:+UnlockExperimentalVMOptions -XX:ParallelGCThreads=8 Several sources on the internet indicate that if I were to print out the 1.5 GB of log files, it would require enough paper to fill the bed of a pickup truck. Of course, it would be fruitless to try to scan the log files by hand. Tools will be required to summarize the contents of the log files. Others have encountered large Java garbage collection log files. There are existing tools to analyze the log files: IBM’s GC toolkit The chewiebug GCViewer gchisto HPjmeter Instead of using one of the other tools listed, I decided to parse the log files with standard Unix tools, and analyze the data with R. Data Cleansing The log files arrived in two different formats. I guess that the difference is that one set of log files was generated using a more verbose option, maybe -XX:+PrintHeapAtGC, and the other set of log files was generated without that option. Format 1 In some of the log files (those with the less verbose format), a single trace, i.e. the report of a single garbage collection event, looks like this: {Heap before GC invocations=12280 (full 61): garbage-first heap total 9437184K, used 7499918K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) region size 4096K, 1 young (4096K), 0 survivors (0K) compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000) the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000) No shared spaces configured. 2014-05-14T07:24:00.988-0700: 60586.353: [GC pause (young) 7324M->7320M(9216M), 0.1567265 secs] Heap after GC invocations=12281 (full 61): garbage-first heap total 9437184K, used 7496533K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) region size 4096K, 0 young (0K), 0 survivors (0K) compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000) the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000) No shared spaces configured. 
} A simple grep can be used to extract a summary: $ grep "\[GC pause (young" g1gc.log 2014-05-13T13:24:35.091-0700: 3.109: [GC pause (young) 20M->5029K(9216M), 0.0146328 secs] 2014-05-13T13:24:35.440-0700: 3.459: [GC pause (young) 9125K->6077K(9216M), 0.0086723 secs] 2014-05-13T13:24:37.581-0700: 5.599: [GC pause (young) 25M->8470K(9216M), 0.0203820 secs] 2014-05-13T13:24:42.686-0700: 10.704: [GC pause (young) 44M->15M(9216M), 0.0288848 secs] 2014-05-13T13:24:48.941-0700: 16.958: [GC pause (young) 51M->20M(9216M), 0.0491244 secs] 2014-05-13T13:24:56.049-0700: 24.066: [GC pause (young) 92M->26M(9216M), 0.0525368 secs] 2014-05-13T13:25:34.368-0700: 62.383: [GC pause (young) 602M->68M(9216M), 0.1721173 secs] But that format wasn't easily read into R, so I needed to be a bit more tricky. I used the following Unix commands to create a summary file that was easy for R to read. $ echo "SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime" $ grep "\[GC pause (young" g1gc.log | grep -v mark | sed -e 's/[A-SU-z\(\),]/ /g' -e 's/->/ /' -e 's/: / /g' | more SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime 2014-05-13T13:24:35.091-0700 3.109 20 5029 9216 0.0146328 2014-05-13T13:24:35.440-0700 3.459 9125 6077 9216 0.0086723 2014-05-13T13:24:37.581-0700 5.599 25 8470 9216 0.0203820 2014-05-13T13:24:42.686-0700 10.704 44 15 9216 0.0288848 2014-05-13T13:24:48.941-0700 16.958 51 20 9216 0.0491244 2014-05-13T13:24:56.049-0700 24.066 92 26 9216 0.0525368 2014-05-13T13:25:34.368-0700 62.383 602 68 9216 0.1721173 Format 2 In some of the log files (those with the more verbose format), a single trace, i.e. the report of a single garbage collection event, was more complicated than Format 1. Here is a text file with an example of a single G1GC trace in the second format. As you can see, it is quite complicated. It is nice that there is so much information available, but the level of detail can be overwhelming. I wrote this awk script (download) to summarize each trace on a single line. #!/usr/bin/env awk -f BEGIN { printf("SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize\n") } ###################### # Save count data from lines that are at the start of each G1GC trace. 
# Each trace starts out like this: # {Heap before GC invocations=14 (full 0): # garbage-first heap total 9437184K, used 325496K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) ###################### /{Heap.*full/{ gsub ( "\\)" , "" ); nf=split($0,a,"="); split(a[2],b," "); getline; if ( match($0, "first") ) { G1GC=1; IncrementalCount=b[1]; FullCount=substr( b[3], 1, length(b[3])-1 ); } else { G1GC=0; } } ###################### # Pull out time stamps that are in lines with this format: # 2014-05-12T14:02:06.025-0700: 94.312: [GC pause (young), 0.08870154 secs] ###################### /GC pause/ { DateTime=$1; SecondsSinceLaunch=substr($2, 1, length($2)-1); } ###################### # Heap sizes are in lines that look like this: # [ 4842M->4838M(9216M)] ###################### /\[ .*]$/ { gsub ( "\\[" , "" ); gsub ( "\ \]" , "" ); gsub ( "->" , " " ); gsub ( "\\( " , " " ); gsub ( "\ \)" , " " ); split($0,a," "); if ( split(a[1],b,"M") > 1 ) {BeforeSize=b[1]*1024;} if ( split(a[1],b,"K") > 1 ) {BeforeSize=b[1];} if ( split(a[2],b,"M") > 1 ) {AfterSize=b[1]*1024;} if ( split(a[2],b,"K") > 1 ) {AfterSize=b[1];} if ( split(a[3],b,"M") > 1 ) {TotalSize=b[1]*1024;} if ( split(a[3],b,"K") > 1 ) {TotalSize=b[1];} } ###################### # Emit an output line when you find input that looks like this: # [Times: user=1.41 sys=0.08, real=0.24 secs] ###################### /\[Times/ { if (G1GC==1) { gsub ( "," , "" ); split($2,a,"="); UserTime=a[2]; split($3,a,"="); SysTime=a[2]; split($4,a,"="); RealTime=a[2]; print DateTime,SecondsSinceLaunch,IncrementalCount,FullCount,UserTime,SysTime,RealTime,BeforeSize,AfterSize,TotalSize; G1GC=0; } } The resulting summary is about 25X smaller than the original file, but still difficult for a human to digest. SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize ... 2014-05-12T18:36:34.669-0700: 3985.744 561 0 0.57 0.06 0.16 1724416 1720320 9437184 2014-05-12T18:36:34.839-0700: 3985.914 562 0 0.51 0.06 0.19 1724416 1720320 9437184 2014-05-12T18:36:35.069-0700: 3986.144 563 0 0.60 0.04 0.27 1724416 1721344 9437184 2014-05-12T18:36:35.354-0700: 3986.429 564 0 0.33 0.04 0.09 1725440 1722368 9437184 2014-05-12T18:36:35.545-0700: 3986.620 565 0 0.58 0.04 0.17 1726464 1722368 9437184 2014-05-12T18:36:35.726-0700: 3986.801 566 0 0.43 0.05 0.12 1726464 1722368 9437184 2014-05-12T18:36:35.856-0700: 3986.930 567 0 0.30 0.04 0.07 1726464 1723392 9437184 2014-05-12T18:36:35.947-0700: 3987.023 568 0 0.61 0.04 0.26 1727488 1723392 9437184 2014-05-12T18:36:36.228-0700: 3987.302 569 0 0.46 0.04 0.16 1731584 1724416 9437184 Reading the Data into R Once the GC log data had been cleansed, either by processing the first format with the shell script, or by processing the second format with the awk script, it was easy to read the data into R. g1gc.df = read.csv("summary.txt", row.names = NULL, stringsAsFactors=FALSE,sep="") str(g1gc.df) ## 'data.frame': 8307 obs. of 10 variables: ## $ row.names : chr "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:" "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:" ... ## $ SecondsSinceLaunch: num 1.16 1.47 1.97 3.83 6.1 ... ## $ IncrementalCount : int 0 1 2 3 4 5 6 7 8 9 ... ## $ FullCount : int 0 0 0 0 0 0 0 0 0 0 ... ## $ UserTime : num 0.11 0.05 0.04 0.21 0.08 0.26 0.31 0.33 0.34 0.56 ... ## $ SysTime : num 0.04 0.01 0.01 0.05 0.01 0.06 0.07 0.06 0.07 0.09 ... ## $ RealTime : num 0.02 0.02 0.01 0.04 0.02 0.04 0.05 0.04 0.04 0.06 ... 
## $ BeforeSize : int 8192 5496 5768 22528 24576 43008 34816 53248 55296 93184 ... ## $ AfterSize : int 1400 1672 2557 4907 7072 14336 16384 18432 19456 21504 ... ## $ TotalSize : int 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 ... head(g1gc.df) ## row.names SecondsSinceLaunch IncrementalCount ## 1 2014-05-12T14:00:32.868-0700: 1.161 0 ## 2 2014-05-12T14:00:33.179-0700: 1.472 1 ## 3 2014-05-12T14:00:33.677-0700: 1.969 2 ## 4 2014-05-12T14:00:35.538-0700: 3.830 3 ## 5 2014-05-12T14:00:37.811-0700: 6.103 4 ## 6 2014-05-12T14:00:41.428-0700: 9.720 5 ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize ## 1 0 0.11 0.04 0.02 8192 1400 9437184 ## 2 0 0.05 0.01 0.02 5496 1672 9437184 ## 3 0 0.04 0.01 0.01 5768 2557 9437184 ## 4 0 0.21 0.05 0.04 22528 4907 9437184 ## 5 0 0.08 0.01 0.02 24576 7072 9437184 ## 6 0 0.26 0.06 0.04 43008 14336 9437184 Basic Statistics Once the data has been read into R, simple statistics are very easy to generate. All of the numbers from high school statistics are available via simple commands. For example, generate a summary of every column: summary(g1gc.df) ## row.names SecondsSinceLaunch IncrementalCount FullCount ## Length:8307 Min. : 1 Min. : 0 Min. : 0.0 ## Class :character 1st Qu.: 9977 1st Qu.:2048 1st Qu.: 0.0 ## Mode :character Median :12855 Median :4136 Median : 12.0 ## Mean :12527 Mean :4156 Mean : 31.6 ## 3rd Qu.:15758 3rd Qu.:6262 3rd Qu.: 61.0 ## Max. :55484 Max. :8391 Max. :113.0 ## UserTime SysTime RealTime BeforeSize ## Min. :0.040 Min. :0.0000 Min. : 0.0 Min. : 5476 ## 1st Qu.:0.470 1st Qu.:0.0300 1st Qu.: 0.1 1st Qu.:5137920 ## Median :0.620 Median :0.0300 Median : 0.1 Median :6574080 ## Mean :0.751 Mean :0.0355 Mean : 0.3 Mean :5841855 ## 3rd Qu.:0.920 3rd Qu.:0.0400 3rd Qu.: 0.2 3rd Qu.:7084032 ## Max. :3.370 Max. :1.5600 Max. :488.1 Max. :8696832 ## AfterSize TotalSize ## Min. : 1380 Min. :9437184 ## 1st Qu.:5002752 1st Qu.:9437184 ## Median :6559744 Median :9437184 ## Mean :5785454 Mean :9437184 ## 3rd Qu.:7054336 3rd Qu.:9437184 ## Max. :8482816 Max. :9437184 Q: What is the total amount of User CPU time spent in garbage collection? sum(g1gc.df$UserTime) ## [1] 6236 As you can see, less than two hours of CPU time was spent in garbage collection. Is that too much? To find the percentage of time spent in garbage collection, divide the number above by total_elapsed_time*CPU_count. In this case, there are a lot of CPUs, and it turns out that the overall amount of CPU time spent in garbage collection isn’t a problem when viewed in isolation. When calculating rates, i.e. events per unit time, you need to ask yourself if the rate is homogeneous across the time period in the log file. Does the log file include spikes of high activity that should be separately analyzed? Averaging in data from nights and weekends with data from business hours may alias problems. If you have a reason to suspect that the garbage collection rates include peaks and valleys that need independent analysis, see the “Time Series” section, below. Q: How much garbage is collected on each pass? The amount of heap space that is recovered per GC pass is surprisingly low: At least one collection didn’t recover any data. (“Min.=0”) 25% of the passes recovered 3MB or less. (“1st Qu.=3072”) Half of the GC passes recovered 4MB or less. (“Median=4096”) The average amount recovered was 56MB. (“Mean=56390”) 75% of the passes recovered 36MB or less. (“3rd Qu.=36860”) At least one pass recovered 2GB. 
(“Max.=2121000”) g1gc.df$Delta = g1gc.df$BeforeSize - g1gc.df$AfterSize summary(g1gc.df$Delta) ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 0 3070 4100 56400 36900 2120000 Q: What is the maximum User CPU time for a single collection? The worst garbage collection (“Max.”) is many standard deviations away from the mean. The data appears to be right skewed. summary(g1gc.df$UserTime) ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 0.040 0.470 0.620 0.751 0.920 3.370 sd(g1gc.df$UserTime) ## [1] 0.3966 Basic Graphics Once the data is in R, it is trivial to plot the data with formats including dot plots, line charts, bar charts (simple, stacked, grouped), pie charts, boxplots, scatter plots, histograms, and kernel density plots. Histogram of User CPU Time per Collection I don't think that this graph requires any explanation. hist(g1gc.df$UserTime, main="User CPU Time per Collection", xlab="Seconds", ylab="Frequency") Box plot to identify outliers When the initial data is viewed with a box plot, you can see the one crazy outlier in the real time per GC. Save this data point for future analysis and drop the outlier so that it’s not throwing off our statistics. Now the box plot shows many outliers, which will be examined later, using time series analysis. Notice that the scale of the x-axis changes drastically once the crazy outlier is removed. par(mfrow=c(2,1)) boxplot(g1gc.df$UserTime,g1gc.df$SysTime,g1gc.df$RealTime, main="Box Plot of Time per GC\n(dominated by a crazy outlier)", names=c("usr","sys","elapsed"), xlab="Seconds per GC", ylab="Time (Seconds)", horizontal = TRUE, outcol="red") crazy.outlier.df=g1gc.df[g1gc.df$RealTime > 400,] g1gc.df=g1gc.df[g1gc.df$RealTime < 400,] boxplot(g1gc.df$UserTime,g1gc.df$SysTime,g1gc.df$RealTime, main="Box Plot of Time per GC\n(crazy outlier excluded)", names=c("usr","sys","elapsed"), xlab="Seconds per GC", ylab="Time (Seconds)", horizontal = TRUE, outcol="red") box(which = "outer", lty = "solid") Here is the crazy outlier for future analysis: crazy.outlier.df ## row.names SecondsSinceLaunch IncrementalCount ## 8233 2014-05-12T23:15:43.903-0700: 20741 8316 ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize ## 8233 112 0.55 0.42 488.1 8381440 8235008 9437184 ## Delta ## 8233 146432 R Time Series Data To analyze the garbage collection as a time series, I’ll use Z’s Ordered Observations (zoo). 
“zoo is the creator for an S3 class of indexed totally ordered observations which includes irregular time series.” require(zoo) ## Loading required package: zoo ## ## Attaching package: 'zoo' ## ## The following objects are masked from 'package:base': ## ## as.Date, as.Date.numeric head(g1gc.df[,1]) ## [1] "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:" ## [3] "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:" ## [5] "2014-05-12T14:00:37.811-0700:" "2014-05-12T14:00:41.428-0700:" options("digits.secs"=3) times=as.POSIXct( g1gc.df[,1], format="%Y-%m-%dT%H:%M:%OS%z:") g1gc.z = zoo(g1gc.df[,-c(1)], order.by=times) head(g1gc.z) ## SecondsSinceLaunch IncrementalCount FullCount ## 2014-05-12 17:00:32.868 1.161 0 0 ## 2014-05-12 17:00:33.178 1.472 1 0 ## 2014-05-12 17:00:33.677 1.969 2 0 ## 2014-05-12 17:00:35.538 3.830 3 0 ## 2014-05-12 17:00:37.811 6.103 4 0 ## 2014-05-12 17:00:41.427 9.720 5 0 ## UserTime SysTime RealTime BeforeSize AfterSize ## 2014-05-12 17:00:32.868 0.11 0.04 0.02 8192 1400 ## 2014-05-12 17:00:33.178 0.05 0.01 0.02 5496 1672 ## 2014-05-12 17:00:33.677 0.04 0.01 0.01 5768 2557 ## 2014-05-12 17:00:35.538 0.21 0.05 0.04 22528 4907 ## 2014-05-12 17:00:37.811 0.08 0.01 0.02 24576 7072 ## 2014-05-12 17:00:41.427 0.26 0.06 0.04 43008 14336 ## TotalSize Delta ## 2014-05-12 17:00:32.868 9437184 6792 ## 2014-05-12 17:00:33.178 9437184 3824 ## 2014-05-12 17:00:33.677 9437184 3211 ## 2014-05-12 17:00:35.538 9437184 17621 ## 2014-05-12 17:00:37.811 9437184 17504 ## 2014-05-12 17:00:41.427 9437184 28672 Example of Two Benchmark Runs in One Log File The data in the following graph is from a different log file, not the one of primary interest to this article. I’m including this image because it is an example of idle periods followed by busy periods. It would be uninteresting to average the rate of garbage collection over the entire log file period. More interesting would be the rate of garbage collection in the two busy periods. Are they the same or different? Your production data may be similar, for example, bursts when employees return from lunch and idle times on weekend evenings, etc. Once the data is in an R Time Series, you can analyze isolated time windows. Clipping the Time Series data Flashing back to our test case… Viewing the data as a time series is interesting. You can see that the work intensive time period is between 9:00 PM and 3:00 AM. Let's clip the data to the interesting period: par(mfrow=c(2,1)) plot(g1gc.z$UserTime, type="h", main="User Time per GC\nTime: Complete Log File", xlab="Time of Day", ylab="CPU Seconds per GC", col="#1b9e77") clipped.g1gc.z=window(g1gc.z, start=as.POSIXct("2014-05-12 21:00:00"), end=as.POSIXct("2014-05-13 03:00:00")) plot(clipped.g1gc.z$UserTime, type="h", main="User Time per GC\nTime: Limited to Benchmark Execution", xlab="Time of Day", ylab="CPU Seconds per GC", col="#1b9e77") box(which = "outer", lty = "solid") Cumulative Incremental and Full GC count Here is the cumulative incremental and full GC count. When the line is very steep, it indicates that the GCs are repeating very quickly. Notice that the scale on the Y axis is different for full vs. incremental. plot(clipped.g1gc.z[,c(2:3)], main="Cumulative Incremental and Full GC count", xlab="Time of Day", col="#1b9e77") GC Analysis of Benchmark Execution using Time Series data In the following series of 3 graphs: The “After Size” shows the amount of heap space in use after each garbage collection. Many Java objects are still referenced, i.e. 
alive, during each garbage collection. This may indicate that the application has a memory leak, or may indicate that the application has a very large memory footprint. Typically, an application's memory footprint plateaus in the early stage of execution. One would expect this graph to have a flat top. The steep decline in the heap space may indicate that the application crashed after 2:00. The second graph shows that the outliers in real execution time, discussed above, occur near 2:00, when the Java heap seems to be quite full. The third graph shows that Full GCs are infrequent during the first few hours of execution. The rate of Full GCs (the slope of the cumulative Full GC line) changes near midnight. plot(clipped.g1gc.z[,c("AfterSize","RealTime","FullCount")], xlab="Time of Day", col=c("#1b9e77","red","#1b9e77")) GC Analysis of heap recovered Each GC trace includes the amount of heap space in use before and after the individual GC event. During garbage collection, unreferenced objects are identified, the space holding the unreferenced objects is freed, and thus, the difference in before and after usage indicates how much space has been freed. The following box plot and bar chart both demonstrate the same point - the amount of heap space freed per garbage collection is surprisingly low. par(mfrow=c(2,1)) boxplot(as.vector(clipped.g1gc.z$Delta), main="Amount of Heap Recovered per GC Pass", xlab="Size in KB", horizontal = TRUE, col="red") hist(as.vector(clipped.g1gc.z$Delta), main="Amount of Heap Recovered per GC Pass", xlab="Size in KB", breaks=100, col="red") box(which = "outer", lty = "solid") This graph is the most interesting. The dark blue area shows how much heap is occupied by referenced Java objects. This represents memory that holds live data. The red fringe at the top shows how much data was recovered after each garbage collection. barplot(clipped.g1gc.z[,c("AfterSize","Delta")], col=c("#7570b3","#e7298a"), xlab="Time of Day", border=NA) legend("topleft", c("Live Objects","Heap Recovered on GC"), fill=c("#7570b3","#e7298a")) box(which = "outer", lty = "solid") When I discuss the data in the log files with the customer, I will ask for an explanation for the large amount of referenced data resident in the Java heap. There are two possibilities: There is a memory leak and the amount of space required to hold referenced objects will continue to grow, limited only by the maximum heap size. After the maximum heap size is reached, the JVM will throw an “Out of Memory” exception every time that the application tries to allocate a new object. If this is the case, the application needs to be debugged to identify why old objects are referenced when they are no longer needed. The application has a legitimate requirement to keep a large amount of data in memory. The customer may want to further increase the maximum heap size. Another possible solution would be to partition the application across multiple cluster nodes, where each node has responsibility for managing a unique subset of the data. Conclusion In conclusion, R is a very powerful tool for the analysis of Java garbage collection log files. The primary difficulty is data cleansing so that information can be read into an R data frame. Once the data has been read into R, a rich set of tools may be used for thorough evaluation.

    Read the article

  • Partial Page Rendering in OAF Page

    - by PRajkumar
    Let us try to implement partial page rendering for a text item. If the value of TextItem1 is null then TextItem2 will not appear on the UI. If the value of TextItem1 is not null then TextItem2 will appear on the UI.   1. Create a New OA Workspace and Empty OA Project File> New > General> Workspace Configured for Oracle Applications File Name -- PPRProj Project Name – PPRDemoProj Default Package -- prajkumar.oracle.apps.fnd.pprdemo   2. Create Application Module AM PPRDemoProj right click > New > ADF Business Components > Application Module Name -- PPRAM Package -- prajkumar.oracle.apps.fnd.pprdemo.server   Check Application Module Class: PPRAMImpl Generate JavaFile(s)   3. Create a PPRVO View Object PPRDemoProj> New > ADF Business Components > View Objects Name – PPRVO Package – prajkumar.oracle.apps.fnd.pprdemo.server   In Attribute Page Click on New button and create transient primary key attribute with the following properties:   Attribute Property Name RowKey Type Number Updateable Always Key Attribute (Checked)   Click New button again and create transient attribute with the following properties:   Attribute Property Name TextItem2Render Type Boolean Updateable Always   Note – No Need to generate any JAVA files for PPRVO   4. Add Your View Object to Root UI Application Module Right click on PPRAM > Edit PPRAM > Data Model > Select PPRVO in Available View Objects list and shuttle to Data Model list   5. Create an OA Components Page PPRDemoProj right click > New > OA Components > Page Name – PPRPG Package -- prajkumar.oracle.apps.fnd.pprdemo.webui   6. Modify the Page Layout (Top-level) Region   Attribute Property ID PageLayoutRN Region Style pageLayout Form Property True Auto Footer True Window Title PPR Demo Window Title True Title PPR Demo Page Header AM Definition prajkumar.oracle.apps.fnd.pprdemo.server.PPRAM   7. Create the Second Region (Main Content Region) Right click on PageLayoutRN > New > Region   Attribute Property ID MainRN Region Style messageComponentLayout   8. Create Two Text Items   Create First messageTextItem -- Right click on MainRN > New > messageTextInput   Attribute Property ID TextItem1 Region Style messageTextInput Prompt Text Item1 Length 20 Disable Server Side Validation True Disable Client Side Validation True Action Type firePartialAction Event TextItem1Change Submit True   Note -- Disable Client Side Validation and Event property appears after you set the Action Type property to firePartialAction   Create Second messageTextItem -- Select MainRN right click > New > messageTextInput   Attribute Property ID TextItem2 Region Style messageTextInput Prompt Text Item2 Length 20 Rendered ${oa.PPRVO1.TextItem2Render}   9. Add Following code in PPRAMImpl.java   import oracle.apps.fnd.framework.OARow; import oracle.apps.fnd.framework.OAViewObject; import oracle.apps.fnd.framework.server.OAApplicationModuleImpl; import oracle.apps.fnd.framework.server.OAViewObjectImpl; public void handlePPRAction()  {   Number val = 1;  OAViewObject vo = (OAViewObject)findViewObject("PPRVO1");  if (vo != null)   {    if (vo.getFetchedRowCount() == 0)    {     vo.setMaxFetchSize(0);     vo.executeQuery();     vo.insertRow(vo.createRow());     OARow row = (OARow)vo.first();            row.setAttribute("RowKey", val);    row.setAttribute("TextItem2Render", Boolean.FALSE);      }  } }   10. 
Implement Controller for Page Select PageLayoutRN in Structure pane right click > Set New Controller Package Name -- prajkumar.oracle.apps.fnd.pprdemo.webui Class Name – PPRCO   Write the following code in the PPRCO controller (it overrides both processRequest and processFormRequest)   import oracle.apps.fnd.framework.OARow; import oracle.apps.fnd.framework.OAViewObject; public void processRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processRequest(pageContext, webBean);  PPRAMImpl am = (PPRAMImpl)pageContext.getApplicationModule(webBean);      am.invokeMethod("handlePPRAction"); } public void processFormRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processFormRequest(pageContext, webBean);        PPRAMImpl am = (PPRAMImpl)pageContext.getApplicationModule(webBean);  OAViewObject vo = (OAViewObject)am.findViewObject("PPRVO1");  OARow row = (OARow)vo.getCurrentRow();        if ("TextItem1Change".equals(pageContext.getParameter(EVENT_PARAM)))  {   if (pageContext.getParameter("TextItem1").equals(""))   {    row.setAttribute("TextItem2Render", Boolean.FALSE);   }   else   {    row.setAttribute("TextItem2Render", Boolean.TRUE);   }  } }   11. Congratulations, you have successfully finished. Run your PPRPG page and test your work.

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here. In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows. The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow. But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties. The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge, I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand. Alternatively, if the pile of cards are not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values have been seen before, but this one needs more information than the mere existence of a new set of values: it needs to consider how many of them there are. A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this. 
In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straightforward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply. But let’s take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things). A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation, that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already. 
As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.
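
    To make the card-pile analogy concrete, here is a small sketch (in Java, since the concept is engine-agnostic; the types and names are mine, not SQL Server's) contrasting the two physical strategies: the streaming version can emit a group's count the moment the key changes, while the hashing version cannot release anything until it has seen every row.

      import java.util.LinkedHashMap;
      import java.util.List;
      import java.util.Map;
      import java.util.function.BiConsumer;

      public class AggregateDemo {
          // Non-blocking: input must be sorted by key; a count is emitted as
          // soon as the key changes, before the rest of the input is read.
          static void streamAggregate(List<String> sortedKeys, BiConsumer<String, Integer> emit) {
              String current = null;
              int count = 0;
              for (String key : sortedKeys) {
                  if (!key.equals(current)) {
                      if (current != null) emit.accept(current, count); // group finished early
                      current = key;
                      count = 0;
                  }
                  count++;
              }
              if (current != null) emit.accept(current, count);
          }

          // Blocking: works on unsorted input, but nothing can be emitted
          // until every row has been placed into its pile.
          static void hashAggregate(List<String> keys, BiConsumer<String, Integer> emit) {
              Map<String, Integer> piles = new LinkedHashMap<>();
              for (String key : keys) piles.merge(key, 1, Integer::sum);
              piles.forEach(emit); // only now can any count be released
          }

          public static void main(String[] args) {
              List<String> hand = List.of("Hearts", "Hearts", "Spades", "Spades", "Spades");
              streamAggregate(hand, (k, v) -> System.out.println(k + ": " + v));
              hashAggregate(hand, (k, v) -> System.out.println(k + ": " + v));
          }
      }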

    Read the article

  • Do I need to store a generic rotation point/radius for rotating around a point other than the origin for object transforms?

    - by Casey
    I'm having trouble implementing a non-origin point rotation. I have a class Transform that stores each component separately in three 3D vectors for position, scale, and rotation. This is fine for local rotations based on the center of the object. The issue is how do I determine/concatenate non-origin rotations in addition to origin rotations. Normally this would be achieved as a Transform-Rotate-Transform for the center rotation followed by a Transform-Rotate-Transform for the non-origin point. The problem is because I am storing the individual components, the final Transform matrix is not calculated until needed by using the individual components to fill an appropriate Matrix. (See GetLocalTransform()) Do I need to store an additional rotation (and radius) for world rotations as well or is there a method of implementation that works while only using the single rotation value? Transform.h #ifndef A2DE_CTRANSFORM_H #define A2DE_CTRANSFORM_H #include "../a2de_vals.h" #include "CMatrix4x4.h" #include "CVector3D.h" #include <vector> A2DE_BEGIN class Transform { public: Transform(); Transform(Transform* parent); Transform(const Transform& other); Transform& operator=(const Transform& rhs); virtual ~Transform(); void SetParent(Transform* parent); void AddChild(Transform* child); void RemoveChild(Transform* child); Transform* FirstChild(); Transform* LastChild(); Transform* NextChild(); Transform* PreviousChild(); Transform* GetChild(std::size_t index); std::size_t GetChildCount() const; std::size_t GetChildCount(); void SetPosition(const a2de::Vector3D& position); const a2de::Vector3D& GetPosition() const; a2de::Vector3D& GetPosition(); void SetRotation(const a2de::Vector3D& rotation); const a2de::Vector3D& GetRotation() const; a2de::Vector3D& GetRotation(); void SetScale(const a2de::Vector3D& scale); const a2de::Vector3D& GetScale() const; a2de::Vector3D& GetScale(); a2de::Matrix4x4 GetLocalTransform() const; a2de::Matrix4x4 GetLocalTransform(); protected: private: a2de::Vector3D _position; a2de::Vector3D _scale; a2de::Vector3D _rotation; std::size_t _curChildIndex; Transform* _parent; std::vector<Transform*> _children; }; A2DE_END #endif Transform.cpp #include "CTransform.h" #include "CVector2D.h" #include "CVector4D.h" A2DE_BEGIN Transform::Transform() : _position(), _scale(1.0, 1.0), _rotation(), _curChildIndex(0), _parent(nullptr), _children() { /* DO NOTHING */ } Transform::Transform(Transform* parent) : _position(), _scale(1.0, 1.0), _rotation(), _curChildIndex(0), _parent(parent), _children() { /* DO NOTHING */ } Transform::Transform(const Transform& other) : _position(other._position), _scale(other._scale), _rotation(other._rotation), _curChildIndex(0), _parent(other._parent), _children(other._children) { /* DO NOTHING */ } Transform& Transform::operator=(const Transform& rhs) { if(this == &rhs) return *this; this->_position = rhs._position; this->_scale = rhs._scale; this->_rotation = rhs._rotation; this->_curChildIndex = 0; this->_parent = rhs._parent; this->_children = rhs._children; return *this; } Transform::~Transform() { _children.clear(); _parent = nullptr; } void Transform::SetParent(Transform* parent) { _parent = parent; } void Transform::AddChild(Transform* child) { if(child == nullptr) return; _children.push_back(child); } void Transform::RemoveChild(Transform* child) { if(_children.empty()) return; _children.erase(std::remove(_children.begin(), _children.end(), child), _children.end()); } Transform* Transform::FirstChild() { if(_children.empty()) return nullptr; return 
*(_children.begin()); } Transform* Transform::LastChild() { if(_children.empty()) return nullptr; return *(_children.end()); } Transform* Transform::NextChild() { if(_children.empty()) return nullptr; std::size_t s(_children.size()); if(_curChildIndex >= s) { _curChildIndex = s; return nullptr; } return _children[_curChildIndex++]; } Transform* Transform::PreviousChild() { if(_children.empty()) return nullptr; if(_curChildIndex == 0) { return nullptr; } return _children[_curChildIndex--]; } Transform* Transform::GetChild(std::size_t index) { if(_children.empty()) return nullptr; if(index > _children.size()) return nullptr; return _children[index]; } std::size_t Transform::GetChildCount() const { if(_children.empty()) return 0; return _children.size(); } std::size_t Transform::GetChildCount() { return static_cast<const Transform&>(*this).GetChildCount(); } void Transform::SetPosition(const a2de::Vector3D& position) { _position = position; } const a2de::Vector3D& Transform::GetPosition() const { return _position; } a2de::Vector3D& Transform::GetPosition() { return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetPosition()); } void Transform::SetRotation(const a2de::Vector3D& rotation) { _rotation = rotation; } const a2de::Vector3D& Transform::GetRotation() const { return _rotation; } a2de::Vector3D& Transform::GetRotation() { return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetRotation()); } void Transform::SetScale(const a2de::Vector3D& scale) { _scale = scale; } const a2de::Vector3D& Transform::GetScale() const { return _scale; } a2de::Vector3D& Transform::GetScale() { return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetScale()); } a2de::Matrix4x4 Transform::GetLocalTransform() const { Matrix4x4 p((_parent ? _parent->GetLocalTransform() : a2de::Matrix4x4::GetIdentity())); Matrix4x4 t(a2de::Matrix4x4::GetTranslationMatrix(_position)); Matrix4x4 r(a2de::Matrix4x4::GetRotationMatrix(_rotation)); Matrix4x4 s(a2de::Matrix4x4::GetScaleMatrix(_scale)); return (p * t * r * s); } a2de::Matrix4x4 Transform::GetLocalTransform() { return static_cast<const Transform&>(*this).GetLocalTransform(); } A2DE_END
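
    For what it's worth, the usual answer is that no extra stored rotation or radius is needed: a rotation about an arbitrary pivot p factors into translate(p) * rotate * translate(-p), which can be composed on demand from the components already stored. A rough sketch (in Java with bare 4x4 arrays in column-vector convention, standing in for the a2de C++ types purely for illustration):

      public final class PivotRotation {
          // m[row][col], column-vector convention: v' = M * v.
          static double[][] multiply(double[][] a, double[][] b) {
              double[][] r = new double[4][4];
              for (int i = 0; i < 4; i++)
                  for (int j = 0; j < 4; j++)
                      for (int k = 0; k < 4; k++)
                          r[i][j] += a[i][k] * b[k][j];
              return r;
          }

          static double[][] translation(double x, double y, double z) {
              return new double[][] {
                  {1, 0, 0, x}, {0, 1, 0, y}, {0, 0, 1, z}, {0, 0, 0, 1}
              };
          }

          static double[][] rotationZ(double radians) {
              double c = Math.cos(radians), s = Math.sin(radians);
              return new double[][] {
                  {c, -s, 0, 0}, {s, c, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}
              };
          }

          // Rotation about an arbitrary pivot: T(p) * R * T(-p).
          // The pivot is a parameter, not extra state on the transform.
          static double[][] rotateAbout(double px, double py, double pz, double radians) {
              return multiply(translation(px, py, pz),
                     multiply(rotationZ(radians), translation(-px, -py, -pz)));
          }
      }

    GetLocalTransform() can then stay as parent * translation * rotation * scale, with pivot rotations wrapped in at composition time rather than stored as a second rotation and radius.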

    Read the article

  • MySQL for Excel 1.1.3 has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.3, the latest addition to the MySQL Installer for Windows. MySQL for Excel is an application plug-in that enables data analysts to easily access and manipulate MySQL data within Microsoft Excel. It lets you work with a MySQL database directly from within Microsoft Excel, so you can easily perform tasks such as:

    - Importing MySQL data into Excel
    - Exporting Excel data directly into MySQL, to a new or existing table
    - Editing MySQL data directly within Excel

    MySQL for Excel is installed using the MySQL Installer for Windows. The MySQL Installer comes in 2 versions:

    - Full (150 MB), which includes a complete set of MySQL products with their binaries included in the download
    - Web (1.5 MB, a network install), which will just pull MySQL for Excel over the web and install it when run

    You can download MySQL Installer from our official Downloads page at http://dev.mysql.com/downloads/installer/. MySQL for Excel 1.1.3 introduces the following features:

    - Upon saving a Workbook containing Worksheets in Edit Mode, users are asked whether they want to exit Edit Mode on all Worksheets before the parent Workbook is saved, so the Worksheets are saved unprotected; otherwise the Worksheets remain protected and users can unprotect them later, retrieving the passkeys from the application log after closing MySQL for Excel.
    - Added background coloring to the column names header row of an Import Data operation so it has the same look as the one in an Edit Data operation (i.e. a gray-ish background).
    - Connection passwords can be stored securely, just like MySQL Workbench does, and these secured passwords are shared with Workbench in the same way connections are.
    - Changed the way the MySQL for Excel ribbon toggle button works: instead of just showing or hiding the add-in, it actually opens and closes it.
    - Added a connection test before any operation against the database (schema creation, data import, append, export or editing) so that the operation dialog is not shown and a friendlier error message is shown instead.

    This release also contains the following bug fixes:

    - Added a check on every connection test for an expired password; if the password has expired, a dialog is now shown to the user to reset it. Bug #17354118 - DON'T HANDLE EXPIRED PASSWORDS
    - Added code to escape text values imported into an Excel worksheet that start with an equals sign, so Excel does not treat those values as formulas that will fail evaluation. This option is turned on by default and can be turned off by users who wish imported values to be treated as Excel formulas. Bug #17354102 - ERROR IMPORTING TEXT VALUES TO EXCEL STARTING WITH AN EQUALS SIGN
    - Added code to properly check the reason for a failing connection; if it is a bad password, the user gets a dialog to retry the connection with a different password until the connection succeeds, a connection error not related to the password is thrown, or the user cancels. If the failing connection is not related to a bad password, an error message is shown indicating the reason for the failure. Bug #16239007 - CONNECTIONS TO MYSQL SERVICES NOT RUNNING DISPLAY A WRONG PASSWORD ERROR MESSAGE
    - Added a global options dialog, accessible from the Schema Selection and DB Object Selection panels, where the timeouts for the connection to the DB Server and for query commands can be changed from their default values (15 seconds for the connection timeout and 30 seconds for the query timeout). MySQL Bug #68732, Bug #17191646 - QUERY TIMEOUT CANNOT BE ADJUSTED IN MYSQL FOR EXCEL
    - Changed the Varchar(65,535) data type shown in the Export Data data type combo box to Text, since the maximum row size is 65,535 bytes and any autodetected column data type with a length greater than 4,000 should be set to Text for the table to be created successfully. MySQL Bug #69779, Bug #17191633 - EXPORT FAILS FOR EXCEL FILES CONTAINING > 4000 CHARACTERS OF TEXT PER CELL
    - Removed code that was replacing all spaces typed by the user in an overridden data type for a new column in an Export Data operation; also improved the data type detection code to flag as invalid any data type with empty parentheses, or with contents inside the parentheses that are not valid for the specific data type. Bug #17260260 - EXPORT DATA SET TYPE NOT WORKING WITH MEMBER VALUES CONTAINING SPACES
    - Added support for the Year data type with a length of 2 or 4, and a validation that valid values are integers between 1901-2155 (for 4-digit years) or between 0-99 (for 2-digit years). Bug #17259915 - EXPORT DATA YEAR DATA TYPE NOT RECOGNIZED IF DECLARED WITH A DISPLAY WIDTH
    - Fixed code for Export Data operations where users overrode the data type for columns by typing Text in the data type combo box, which is a valid data type but was not recognized as such. Bug #17259490 - EXPORT DATA TEXT DATA TYPE NOT RECOGNIZED AS A VALID DATA TYPE
    - Changed the registry location where the MySQL for Excel add-in is installed to HKEY_LOCAL_MACHINE instead of HKEY_CURRENT_USER, so the add-in is accessible by all users and not only the user that installed it. For this to work with Excel 2007 a hotfix may be required (see http://support.microsoft.com/kb/976477). MySQL Bug #68746, Bug #16675992 - EXCEL-ADD-IN IS ONLY INSTALLED FOR USER ACCOUNT THAT THE INSTALLATION RUNS UNDER
    - Added support for the Excel 2013 Single Document Interface; now that Excel 2013 creates one window per workbook, the Excel add-in maintains an independent custom task pane in each window. MySQL Bug #68792, Bug #17272087 - MYSQL FOR EXCEL SIDEBAR DOES NOT APPEAR IN EXCEL 2013 (WITH WORKAROUND)
    - Included the latest MySQL Utility with a code fix for the COM exception thrown when attempting to open Workbench in the Manage Connections window. Bug #17258966 - MYSQL WORKBENCH NOT OPENED BY CLICKING MANAGE CONNECTIONS HOTLABEL
    - Fixed code for Append Data operations that was not applying a calculated automatic mapping correctly when the source and target tables had a different number of columns, with some columns sharing the same name but lying at column indexes beyond the limit of the other source/target table. MySQL Bug #69220, Bug #17278349 - APPEND DOESN'T AUTOMATICALLY DETECT EXCEL COL HEADER WITH SAME NAME AS SQL FIELD
    - Fixed code for Edit Data operations that was escaping special characters twice (during editing in Excel and then upon sending the query to the MySQL server). MySQL Bug #68669, Bug #17271693 - A BACKSLASH IS INSERTED BEFORE AN APOSTROPHE EDITING TABLE WITH MYSQL FOR EXCEL
    - Upgraded the MySQL Utility to the latest version, which encapsulates dialog base classes and introduces more classes to handle Workbench connections, removing these from the Excel project. Bug #16500331 - CAN'T DELETE CONNECTIONS CREATED WITHIN ADDIN

    You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel.html. You can find our team's blog at http://blogs.oracle.com/MySQLOnWindows, and you can post questions on our MySQL for Excel forum at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • Given an XML representation of a graph, how do I apply the DFS algorithm to it? [on hold]

    - by winston smith
    Given the followin XML which is a directed graph: <?xml version="1.0" encoding="iso-8859-1" ?> <!DOCTYPE graph PUBLIC "-//FC//DTD red//EN" "../dtd/graph.dtd"> <graph direct="1"> <vertex label="V0"/> <vertex label="V1"/> <vertex label="V2"/> <vertex label="V3"/> <vertex label="V4"/> <vertex label="V5"/> <edge source="V0" target="V1" weight="1"/> <edge source="V0" target="V4" weight="1"/> <edge source="V5" target="V2" weight="1"/> <edge source="V5" target="V4" weight="1"/> <edge source="V1" target="V2" weight="1"/> <edge source="V1" target="V3" weight="1"/> <edge source="V1" target="V4" weight="1"/> <edge source="V2" target="V3" weight="1"/> </graph> With this classes i parsed the graph and give it an adjacency list representation: import java.io.IOException; import java.util.HashSet; import java.util.LinkedList; import java.util.Collection; import java.util.Iterator; import java.util.logging.Level; import java.util.logging.Logger; import practica3.util.Disc; public class ParsingXML { public static void main(String[] args) { try { // TODO code application logic here Collection<Vertex> sources = new HashSet<Vertex>(); LinkedList<String> lines = Disc.readFile("xml/directed.xml"); for (String lin : lines) { int i = Disc.find(lin, "source=\""); String data = ""; if (i > 0 && i < lin.length()) { while (lin.charAt(i + 1) != '"') { data += lin.charAt(i + 1); i++; } Vertex v = new Vertex(); v.setName(data); v.setAdy(new HashSet<Vertex>()); sources.add(v); } } Iterator it = sources.iterator(); while (it.hasNext()) { Vertex ver = (Vertex) it.next(); Collection<Vertex> adyacencias = ver.getAdy(); LinkedList<String> ls = Disc.readFile("xml/graphs.xml"); for (String lin : ls) { int i = Disc.find(lin, "target=\""); String data = ""; if (lin.contains("source=\""+ver.getName())) { Vertex v = new Vertex(); if (i > 0 && i < lin.length()) { while (lin.charAt(i + 1) != '"') { data += lin.charAt(i + 1); i++; } v.setName(data); } i = Disc.find(lin, "weight=\""); data = ""; if (i > 0 && i < lin.length()) { while (lin.charAt(i + 1) != '"') { data += lin.charAt(i + 1); i++; } v.setWeight(Integer.parseInt(data)); } if (v.getName() != null) { adyacencias.add(v); } } } } for (Vertex vert : sources) { System.out.println(vert); System.out.println("adyacencias: " + vert.getAdy()); } } catch (IOException ex) { Logger.getLogger(ParsingXML.class.getName()).log(Level.SEVERE, null, ex); } } } This is another class: import java.util.Collection; import java.util.Objects; public class Vertex { private String name; private int weight; private Collection ady; public Collection getAdy() { return ady; } public void setAdy(Collection adyacencias) { this.ady = adyacencias; } public String getName() { return name; } public void setName(String nombre) { this.name = nombre; } public int getWeight() { return weight; } public void setWeight(int weight) { this.weight = weight; } @Override public int hashCode() { int hash = 7; hash = 43 * hash + Objects.hashCode(this.name); hash = 43 * hash + this.weight; return hash; } @Override public boolean equals(Object obj) { if (obj == null) { return false; } if (getClass() != obj.getClass()) { return false; } final Vertex other = (Vertex) obj; if (!Objects.equals(this.name, other.name)) { return false; } if (this.weight != other.weight) { return false; } return true; } @Override public String toString() { return "Vertice{" + "name=" + name + ", weight=" + weight + '}'; } } And finally: /** * * @author user */ /* -*-jde-*- */ /* <Disc.java> Contains the main argument*/ import java.io.*; import 
java.util.LinkedList; /** * Lectura y escritura de archivos en listas de cadenas * Ideal para el uso de las clases para gráficas. * * @author Peralta Santa Anna Victor Miguel * @since Julio 2011 */ public class Disc { /** * Metodo para lectura de un archivo * * @param fileName archivo que se va a leer * @return El archivo en representacion de lista de cadenas */ public static LinkedList<String> readFile(String fileName) throws IOException { BufferedReader file = new BufferedReader(new FileReader(fileName)); LinkedList<String> textlist = new LinkedList<String>(); while (file.ready()) { textlist.add(file.readLine().trim()); } file.close(); /* for(String linea:textlist){ if(linea.contains("source")){ //String generado = linea.replaceAll("<\\w+\\s+\"", ""); //System.out.println(generado); } }*/ return textlist; }//readFile public static int find(String linea,String palabra){ int i,j; boolean found = false; for(i=0,j=0;i<linea.length();i++){ if(linea.charAt(i)==palabra.charAt(j)){ j++; if(j==palabra.length()){ found = true; return i; } }else{ continue; } } if(!found){ i= -1; } return i; } /** * Metodo para la escritura de un archivo * * @param fileName archivo que se va a escribir * @param tofile la lista de cadenas que quedaran en el archivo * @param append el bit que dira si se anexa el contenido o se empieza de cero */ public static void writeFile(String fileName, LinkedList<String> tofile, boolean append) throws IOException { FileWriter file = new FileWriter(fileName, append); for (int i = 0; i < tofile.size(); i++) { file.write(tofile.get(i) + "\n"); } file.close(); }//writeFile /** * Metodo para escritura de un archivo * @param msg archivo que se va a escribir * @param tofile la cadena que quedaran en el archivo * @param append el bit que dira si se anexa el contenido o se empieza de cero */ public static void writeFile(String msg, String tofile, boolean append) throws IOException { FileWriter file = new FileWriter(msg, append); file.write(tofile); file.close(); }//writeFile }// I'm stuck on what can be the best way to given an adjacency list representation of the graph how to apply it Depth-first search algorithm. Any idea of how to aproach to complete the task?
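    One straightforward approach, once the adjacency information has been parsed: run an iterative depth-first search from a chosen start vertex. The sketch below is self-contained Java and uses a plain Map<String, List<String>> adjacency list (rather than the Vertex class above) purely to keep it short; wiring it to the parsed getAdy() collections is a mechanical change.

        import java.util.*;

        public class GraphDFS {
            // Depth-first search over an adjacency-list graph, printing
            // vertices in the order they are first visited.
            static void dfs(Map<String, List<String>> adj, String start) {
                Set<String> visited = new HashSet<>();
                Deque<String> stack = new ArrayDeque<>();
                stack.push(start);
                while (!stack.isEmpty()) {
                    String v = stack.pop();
                    if (!visited.add(v)) continue; // already seen
                    System.out.println(v);
                    // Push neighbors in reverse so they pop in declaration order.
                    List<String> neighbors = adj.getOrDefault(v, Collections.emptyList());
                    for (int i = neighbors.size() - 1; i >= 0; i--) {
                        stack.push(neighbors.get(i));
                    }
                }
            }

            public static void main(String[] args) {
                // The directed graph from the XML in the question.
                Map<String, List<String>> adj = new HashMap<>();
                adj.put("V0", Arrays.asList("V1", "V4"));
                adj.put("V5", Arrays.asList("V2", "V4"));
                adj.put("V1", Arrays.asList("V2", "V3", "V4"));
                adj.put("V2", Arrays.asList("V3"));
                dfs(adj, "V0"); // prints V0 V1 V2 V3 V4
            }
        }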

    Read the article

  • PostSharp, Obfuscation, and IL

    - by Simon Cooper
    Aspect-oriented programming (AOP) is a relatively new programming paradigm. Originating at Xerox PARC in 1994, the paradigm was first made available for general-purpose development as an extension to Java in 2001. From there, it has quickly been adapted for use in all the common languages used today. In the .NET world, one of the primary AOP toolkits is PostSharp. Attributes and AOP Normally, attributes in .NET are entirely a metadata construct. Apart from a few special attributes in the .NET framework, they have no effect whatsoever on how a class or method executes within the CLR. Only by using reflection at runtime can you access any attributes declared on a type or type member. PostSharp changes this. By declaring a custom attribute that derives from PostSharp.Aspects.Aspect, applying it to types and type members, and running the resulting assembly through the PostSharp postprocessor, you can essentially declare 'clever' attributes that change the behaviour of whatever the aspect has been applied to at runtime. A simple example of this is logging. By declaring a TraceAttribute that derives from OnMethodBoundaryAspect, you can automatically log when a method has been executed: public class TraceAttribute : PostSharp.Aspects.OnMethodBoundaryAspect { public override void OnEntry(MethodExecutionArgs args) { MethodBase method = args.Method; System.Diagnostics.Trace.WriteLine( String.Format( "Entering {0}.{1}.", method.DeclaringType.FullName, method.Name)); } public override void OnExit(MethodExecutionArgs args) { MethodBase method = args.Method; System.Diagnostics.Trace.WriteLine( String.Format( "Leaving {0}.{1}.", method.DeclaringType.FullName, method.Name)); } } [Trace] public void MethodToLog() { ... } Now, whenever MethodToLog is executed, the aspect will automatically log entry and exit, without having to add the logging code to MethodToLog itself. PostSharp Performance Now this does introduce a performance overhead - as you can see, the aspect allows access to the MethodBase of the method the aspect has been applied to. If you were limited to C#, you would be forced to retrieve each MethodBase instance using Type.GetMethod(), matching on the method name and signature. This is slow. Fortunately, PostSharp is not limited to C#. It can use any instruction available in IL. And in IL, you can do some very neat things. Ldtoken C# allows you to get the Type object corresponding to a specific type name using the typeof operator: Type t = typeof(Random); The C# compiler compiles this operator to the following IL: ldtoken [mscorlib]System.Random call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle( valuetype [mscorlib]System.RuntimeTypeHandle) The ldtoken instruction obtains a special handle to a type called a RuntimeTypeHandle, and from that, the Type object can be obtained using GetTypeFromHandle. These are both relatively fast operations - no string lookup is required, only direct assembly and CLR constructs are used. 
However, a little-known feature is that ldtoken is not just limited to types; it can also get information on methods and fields, encapsulated in a RuntimeMethodHandle or RuntimeFieldHandle: // get a MethodBase for String.EndsWith(string) ldtoken method instance bool [mscorlib]System.String::EndsWith(string) call class [mscorlib]System.Reflection.MethodBase [mscorlib]System.Reflection.MethodBase::GetMethodFromHandle( valuetype [mscorlib]System.RuntimeMethodHandle) // get a FieldInfo for the String.Empty field ldtoken field string [mscorlib]System.String::Empty call class [mscorlib]System.Reflection.FieldInfo [mscorlib]System.Reflection.FieldInfo::GetFieldFromHandle( valuetype [mscorlib]System.RuntimeFieldHandle) These usages of ldtoken aren't usable from C# or VB, and aren't likely to be added anytime soon (Eric Lippert's done a blog post on the possibility of adding infoof, methodof or fieldof operators to C#). However, PostSharp deals directly with IL, and so can use ldtoken to get MethodBase objects quickly and cheaply, without having to resort to string lookups. The kicker However, there are problems. Because ldtoken for methods or fields isn't accessible from C# or VB, it hasn't been as well-tested as ldtoken for types. This has resulted in various obscure bugs in most versions of the CLR when dealing with ldtoken and methods, and specifically, generic methods and methods of generic types. This means that PostSharp was behaving incorrectly, or just plain crashing, when aspects were applied to methods that were generic in some way. So, PostSharp has to work around this. Without using the metadata tokens directly, the only way to get the MethodBase of generic methods is to use reflection: Type.GetMethod(), passing in the method name as a string along with information on the signature. Now, this works fine. It's slower than using ldtoken directly, but it works, and this only has to be done for generic methods. Unfortunately, this poses problems when the assembly is obfuscated. PostSharp and Obfuscation When using ldtoken, obfuscators don't affect how PostSharp operates. Because the ldtoken instruction directly references the type, method or field within the assembly, it is unaffected if the name of the object is changed by an obfuscator. However, the indirect loading used for generic methods was breaking, because that uses the name of the method when the assembly is put through the PostSharp postprocessor to lookup the MethodBase at runtime. If the name then changes, PostSharp can't find it anymore, and the assembly breaks. So, PostSharp needs to know about any changes an obfuscator does to an assembly. The way PostSharp does this is by adding another layer of indirection. When PostSharp obfuscation support is enabled, it includes an extra 'name table' resource in the assembly, consisting of a series of method & type names. When PostSharp needs to lookup a method using reflection, instead of encoding the method name directly, it looks up the method name at a fixed offset inside that name table: MethodBase genericMethod = typeof(ContainingClass).GetMethod(GetNameAtIndex(22)); PostSharp.NameTable resource: ... 20: get_Prop1 21: set_Prop1 22: DoFoo 23: GetWibble When the assembly is later processed by an obfuscator, the obfuscator can replace all the method and type names within the name table with their new name. 
That way, the reflection lookups performed by PostSharp will now use the new names, and everything will work as expected: MethodBase genericMethod = typeof(#kGy).GetMethod(GetNameAtIndex(22)); PostSharp.NameTable resource: ... 20: #kkA 21: #zAb 22: #EF5a 23: #2tg As you can see, this requires direct support by an obfuscator in order to perform these rewrites. Dotfuscator supports it, and now, starting with SmartAssembly 6.6.4, SmartAssembly does too. So, a relatively simple solution to a tricky problem, with some CLR bugs thrown in for good measure. You don't see those every day!
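    As an aside, the fragility of name-based lookups is easy to demonstrate in any reflective language. Purely as an analogy (PostSharp itself is .NET-only), here is the Java equivalent of the string-based lookup that an obfuscator silently breaks unless, as above, it also rewrites the stored names:

        import java.lang.reflect.Method;

        public class NameLookupDemo {
            public String doFoo(int x) { return "foo" + x; }

            public static void main(String[] args) throws Exception {
                // The method name travels as a string, so renaming doFoo
                // (e.g. by an obfuscator) breaks this call at runtime with a
                // NoSuchMethodException, whereas direct calls are rewritten
                // safely along with the declaration.
                Method m = NameLookupDemo.class.getDeclaredMethod("doFoo", int.class);
                System.out.println(m.invoke(new NameLookupDemo(), 42)); // prints foo42
            }
        }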

    Read the article

  • Superclass Sensitive Actions

    - by Geertjan
    I've created a small piece of functionality that enables you to create actions for Java classes in the IDE. When the user right-clicks on a Java class, they will see one or more actions depending on the superclass of the selected class. To explain this visually, here I have "BlaTopComponent.java". I right-click on its node in the Projects window and I see "This is a TopComponent": Indeed, when you look at the source code of "BlaTopComponent.java", you'll see that it implements the TopComponent class. Next, in the screenshot below, you see that I have right-click a different class. In this case, there's an action available because the selected class implements the ActionListener class. Then, take a look at this one. Here both TopComponent and ActionListener are superclasses of the current class, hence both the actions are available to be invoked: Finally, here's a class that subclasses neither TopComponent nor ActionListener, hence neither of the actions that I created for doing something that relates to TopComponents or ActionListeners is available, since those actions are irrelevant in this context: How does this work? Well, it's a combination of my blog entries "Generic Node Popup Registration Solution" and "Showing an Action on a TopComponent Node". The cool part is that the definition of the two actions that you see above is remarkably trivial: import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import javax.swing.JOptionPane; import org.openide.loaders.DataObject; import org.openide.util.Utilities; public class TopComponentSensitiveAction implements ActionListener { private final DataObject context; public TopComponentSensitiveAction() { context = Utilities.actionsGlobalContext().lookup(DataObject.class); } @Override public void actionPerformed(ActionEvent ev) { //Do something with the context: JOptionPane.showMessageDialog(null, "TopComponent: " + context.getNodeDelegate().getDisplayName()); } } The above is the action that will be available if you right-click a Java class that extends TopComponent. This, in turn, is the action that will be available if you right-click a Java class that implements ActionListener: import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import javax.swing.JOptionPane; import org.openide.loaders.DataObject; import org.openide.util.Utilities; public class ActionListenerSensitiveAction implements ActionListener { private final DataObject context; public ActionListenerSensitiveAction() { context = Utilities.actionsGlobalContext().lookup(DataObject.class); } @Override public void actionPerformed(ActionEvent ev) { //Do something with the context: JOptionPane.showMessageDialog(null, "ActionListener: " + context.getNodeDelegate().getDisplayName()); } } Indeed, the classes, at this stage are the same. But, depending on what I want to do with TopComponents or ActionListeners, I now have a starting point, which includes access to the DataObject, from where I can get down into the source code, as shown here. 
This is how the two ActionListeners that you see defined above are registered in the layer, which could ultimately be done via annotations on the ActionListeners, of course: <folder name="Actions"> <folder name="Tools"> <file name="org-netbeans-sbas-impl-TopComponentSensitiveAction.instance"> <attr stringvalue="This is a TopComponent" name="displayName"/> <attr name="instanceCreate" methodvalue="org.netbeans.sbas.SuperclassSensitiveAction.create"/> <attr name="type" stringvalue="org.openide.windows.TopComponent"/> <attr name="delegate" newvalue="org.netbeans.sbas.impl.TopComponentSensitiveAction"/> </file> <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance"> <attr stringvalue="This is an ActionListener" name="displayName"/> <attr name="instanceCreate" methodvalue="org.netbeans.sbas.SuperclassSensitiveAction.create"/> <attr name="type" stringvalue="java.awt.event.ActionListener"/> <attr name="delegate" newvalue="org.netbeans.sbas.impl.ActionListenerSensitiveAction"/> </file> </folder> </folder> <folder name="Loaders"> <folder name="text"> <folder name="x-java"> <folder name="Actions"> <file name="org-netbeans-sbas-impl-TopComponentSensitiveAction.shadow"> <attr name="originalFile" stringvalue="Actions/Tools/org-netbeans-sbas-impl-TopComponentSensitiveAction.instance"/> <attr intvalue="150" name="position"/> </file> <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.shadow"> <attr name="originalFile" stringvalue="Actions/Tools/org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance"/> <attr intvalue="160" name="position"/> </file> </folder> </folder> </folder> </folder> The most important parts of the layer registration are the lines that are highlighted above. Those lines connect the layer to the generic action that delegates back to the action listeners defined above, as follows: public final class SuperclassSensitiveAction extends AbstractAction implements ContextAwareAction { private final Map map; //This method is called from the layer, via "instanceCreate", //magically receiving a map, which contains all the attributes //that are defined in the layer for the file: static SuperclassSensitiveAction create(Map map) { return new SuperclassSensitiveAction(Utilities.actionsGlobalContext(), map); } public SuperclassSensitiveAction(Lookup context, Map m) { super(m.get("displayName").toString()); this.map = m; String superclass = m.get("type").toString(); //Enable the menu item only if //we're dealing with a class of type superclass: JavaSource javaSource = JavaSource.forFileObject( context.lookup(DataObject.class).getPrimaryFile()); try { javaSource.runUserActionTask(new ScanTask(this, superclass), true); } catch (IOException ex) { Exceptions.printStackTrace(ex); } //Hide the menu item if it isn't enabled: putValue(DynamicMenuContent.HIDE_WHEN_DISABLED, true); } @Override public void actionPerformed(ActionEvent ev) { ActionListener delegatedAction = (ActionListener)map.get("delegate"); delegatedAction.actionPerformed(ev); } @Override public Action createContextAwareInstance(Lookup actionContext) { return new SuperclassSensitiveAction(actionContext, map); } private class ScanTask implements Task<CompilationController> { private SuperclassSensitiveAction action = null; private String superclass; private ScanTask(SuperclassSensitiveAction action, String superclass) { this.action = action; this.superclass = superclass; } @Override public void run(final CompilationController info) throws Exception { info.toPhase(Phase.ELEMENTS_RESOLVED); new 
EnableIfGivenSuperclassMatches(info, action, superclass).scan( info.getCompilationUnit(), null); } } private static class EnableIfGivenSuperclassMatches extends TreePathScanner<Void, Void> { private CompilationInfo info; private final AbstractAction action; private final String superclassName; public EnableIfGivenSuperclassMatches(CompilationInfo info, AbstractAction action, String superclassName) { this.info = info; this.action = action; this.superclassName = superclassName; } @Override public Void visitClass(ClassTree t, Void v) { Element el = info.getTrees().getElement(getCurrentPath()); if (el != null) { TypeElement te = (TypeElement) el; List<? extends TypeMirror> interfaces = te.getInterfaces(); if (te.getSuperclass().toString().equals(superclassName)) { action.setEnabled(true); } else { action.setEnabled(false); } for (TypeMirror typeMirror : interfaces) { if (typeMirror.toString().equals(superclassName)){ action.setEnabled(true); } } } return null; } } } This is a pretty cool solution and, as you can see, very generic. Create a new ActionListener, register it in the layer so that it maps to the generic class above, and make sure to set the type attribute, which defines the superclass to which the action should be sensitive.
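    For instance, a hypothetical third action sensitive to classes implementing java.io.Serializable would need nothing more than another trivial delegate following exactly the same pattern as the two listeners above, plus a layer entry whose type attribute is set to java.io.Serializable:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import javax.swing.JOptionPane;
        import org.openide.loaders.DataObject;
        import org.openide.util.Utilities;

        public class SerializableSensitiveAction implements ActionListener {

            private final DataObject context;

            public SerializableSensitiveAction() {
                context = Utilities.actionsGlobalContext().lookup(DataObject.class);
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                // Do something with the context:
                JOptionPane.showMessageDialog(null,
                        "Serializable: " + context.getNodeDelegate().getDisplayName());
            }
        }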

    Read the article

  • Control-Break Style ADF Table - Comparing Values with Previous Row

    - by Steven Davelaar
    Sometimes you need to display data in an ADF Faces table in a control-break layout style, where rows should be "indented" when the break column has the same value as in the previous row. In the screen shot below, you see how the table breaks on both the RegionId column as well as the CountryId column. To implement this I didn't use fancy SQL statements. The table is based on a straightforward Locations ViewObject that is based on the Locations entity object and the Countries reference entity object, and the join query was automatically created by adding the reference EO. To get the indentation in the ADF Faces table, we simple use two rendered properties on the RegionId and CountryId outputText items:  <af:column sortProperty="RegionId" sortable="false"            headerText="#{bindings.LocationsView1.hints.RegionId.label}"            id="c5">   <af:outputText value="#{row.RegionId}" id="ot2"                  rendered="#{!CompareWithPreviousRowBean['RegionId']}">     <af:convertNumber groupingUsed="false"                       pattern="#{bindings.LocationsView1.hints.RegionId.format}"/>   </af:outputText> </af:column> <af:column sortProperty="CountryId" sortable="false"            headerText="#{bindings.LocationsView1.hints.CountryId.label}"            id="c1">   <af:outputText value="#{row.CountryId}" id="ot5"                  rendered="#{!CompareWithPreviousRowBean['CountryId']}"/> </af:column> The CompareWithPreviousRowBean managed bean is defined in request scope and is a generic bean that can be used for all the tables in your application that needs this layout style. As you can see the bean is a Map-style bean where we pass in the name of the attribute that should be compared with the previous row. The get method in the bean that is called returns boolean false when the attribute has the same value in the same row. Here is the code of the get method:  public Object get(Object key) {   String attrName = (String) key;   boolean isSame = false;   // get the currently processed row, using row expression #{row}   JUCtrlHierNodeBinding row = (JUCtrlHierNodeBinding) resolveExpression(getRowExpression());   JUCtrlHierBinding tableBinding = row.getHierBinding();   int rowRangeIndex = row.getViewObject().getRangeIndexOf(row.getRow());   Object currentAttrValue = row.getRow().getAttribute(attrName);   if (rowRangeIndex > 0)   {     Object previousAttrValue = tableBinding.getAttributeFromRow(rowRangeIndex - 1, attrName);     isSame = currentAttrValue != null && currentAttrValue.equals(previousAttrValue);   }   else if (tableBinding.getRangeStart() > 0)   {     // previous row is in previous range, we create separate rowset iterator,     // so we can change the range start without messing up the table rendering which uses     // the default rowset iterator     int absoluteIndexPreviousRow = tableBinding.getRangeStart() - 1;     RowSetIterator rsi = null;     try     {       rsi = tableBinding.getViewObject().getRowSet().createRowSetIterator(null);       rsi.setRangeStart(absoluteIndexPreviousRow);       Row previousRow = rsi.getRowAtRangeIndex(0);       Object previousAttrValue = previousRow.getAttribute(attrName);       isSame = currentAttrValue != null && currentAttrValue.equals(previousAttrValue);     }     finally     {       rsi.closeRowSetIterator();     }   }   return isSame; } The row expression defaults to #{row} but this can be changed through the rowExpression  managed property of the bean.  You can download the sample application here.
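    One note on the code above: the resolveExpression helper it calls is not shown in the snippet. A typical implementation (an assumption on my part; the downloadable sample ships its own) simply evaluates the EL expression, such as the default "#{row}", against the current FacesContext:

        import javax.faces.context.FacesContext;

        public class ElUtils {
            // Evaluate an EL expression like "#{row}" against the current
            // FacesContext and return the resulting object.
            public static Object resolveExpression(String expression) {
                FacesContext fc = FacesContext.getCurrentInstance();
                return fc.getApplication().evaluateExpressionGet(fc, expression, Object.class);
            }
        }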

    Read the article

  • Sorting a Linked List [closed]

    - by Mohit Sehgal
    I want to sort a linked list. Here Node is class representing a node in a Linked List I have written a code to bubble sort a linked list. Program does not finishes execution. Kindly point out the mistakes. class Node { public: int data; public: Node *next; Node() { data=0;next=0; } Node(int d) { data=d; } void setData(int d) { data=d; } void print() { cout<<data<<endl; } bool operator==(Node n) { return this->data==n.data; } bool operator >(Node d) { if((this->data) > (d.data)) return true; return false; } }; class LList { public: int noOfNodes; Node *start;/*Header Node*/ LList() { start=new Node; noOfNodes=0;start=0; } void addAtFront(Node* n) { n->next=(start); start=n; noOfNodes++; } void addAtLast(Node* n) { Node *cur=(start); n->next=NULL; if(start==NULL) { start=n; noOfNodes++; return; } while(cur->next!=NULL) { cur=cur->next; } cur->next=n; noOfNodes++; } void addAtPos(Node *n,int pos) { if(pos==1) { addAtFront(n);return; } Node *cur=(start); Node *prev=NULL; int curPos=0; n->next=NULL; while(cur!=NULL) { curPos++; if(pos==curPos+1) { prev=cur; } if(pos==curPos) { n->next=cur; prev->next=n; break; } cur=cur->next; } noOfNodes++; } void removeFirst() { Node *del=start; start=start->next; delete del; noOfNodes--; return; } void removeLast() { Node *cur=start,*prev=NULL; while(cur->next!=NULL) { prev=cur; cur=cur->next; } prev->next=NULL; Node *del=cur->next; delete del; noOfNodes--; return; } void removeNodeAt(int pos) { if(pos<1) return; if(pos==1) { removeFirst();return;} int curPos=1; Node* cur=start->next; Node* prev=start; Node* del=NULL; while(curPos<pos&&cur!=NULL) { curPos++; if(curPos==pos) { del=cur; prev->next=cur->next; cur->next=NULL; delete del; noOfNodes--; break; } prev=prev->next; cur=cur->next; } } void removeNode(Node *d) { Node *cur=start; if(*d==*cur) { removeFirst();return; } cur=start->next; Node *prev=start,*del=NULL; while(cur!=NULL) { if(*cur==*d) { del=cur; prev->next=cur->next; delete del; noOfNodes--; break; } prev=prev->next; cur=cur->next; } } int getPosition(Node data) { int pos=0; Node *cur=(start); while(cur!=NULL) { pos++; if(*cur==data) { return pos; } cur=cur->next; } return -1;//not found } Node getNode(int pos) { if(pos<1) return -1;// not a valid position else if(pos>noOfNodes) return -1; // not a valid position Node *cur=(start); int curPos=0; while(cur!=NULL) { if(++curPos==pos) return *cur; cur=cur->next; } } void reverseList()//reverse the list { Node* cur=start->next; Node* d=NULL; Node* prev=start; while(cur!=NULL) { d=cur->next; cur->next=start; start=cur; prev->next=d; cur=d; } } void sortBubble() { Node *i=start,*j=start,*prev=NULL,*temp=NULL,*after=NULL; int count=noOfNodes-1;int icount=0; while(i->next!=NULL) { j=start; after=j->next; icount=0; while(++icount!=count) { if((*j)>(*after)) { temp=after->next; after->next=j; prev->next=j->next; j->next=temp; prev=after; after=j->next; } else{ prev=j; j=after; after=after->next; } } i=i->next; count--; } } void traverse() { Node *cur=(start); int c=0; while(cur!=NULL) { // cout<<"start"<<start; c++; cur->print(); cur=cur->next; } noOfNodes=c; } ~LList() { delete start; } }; int main() { int n; cin>>n; int d; LList list; Node *node; Node *temp=new Node(2123); for(int i=0;i<n;i++) { cin>>d; node=new Node(d); list.addAtLast(node); } list.addAtPos(temp,1); cout<<"traverse\n"; list.traverse(); temp=new Node(12); list.removeNode(temp); cout<<"12 removed"; list.traverse(); list.reverseList(); cout<<"\nreversed\n"; list.traverse(); cout<<"bubble sort\n"; list.sortBubble(); list.traverse(); 
getch(); delete node; return 0; }
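    Several things in sortBubble() stand out: prev starts as NULL, so the first time a swap happens at the head of the list, prev->next=j->next dereferences a null pointer; prev is never reset when a new pass starts at j=start; and start is never updated when the first node is swapped away, so subsequent passes walk a corrupted list, which is why the program never finishes. Rather than relinking nodes, it is usually far less error-prone to bubble-sort a list by swapping the payloads. A minimal sketch (in Java, but the algorithm translates directly back to the C++ above):

        public class ListSort {
            static class Node {
                int data;
                Node next;
                Node(int d) { data = d; }
            }

            // Bubble sort by swapping node payloads instead of relinking
            // nodes; simpler than pointer surgery, at the cost of moving data.
            static void bubbleSort(Node head) {
                if (head == null) return;
                boolean swapped;
                do {
                    swapped = false;
                    for (Node cur = head; cur.next != null; cur = cur.next) {
                        if (cur.data > cur.next.data) {
                            int tmp = cur.data;
                            cur.data = cur.next.data;
                            cur.next.data = tmp;
                            swapped = true;
                        }
                    }
                } while (swapped); // stop once a full pass makes no swaps
            }

            public static void main(String[] args) {
                Node head = new Node(3);
                head.next = new Node(1);
                head.next.next = new Node(2);
                bubbleSort(head);
                for (Node n = head; n != null; n = n.next) {
                    System.out.println(n.data); // prints 1, 2, 3
                }
            }
        }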

    Read the article

  • ASP.NET MVC - dropdown list post handling problem

    - by ile
    I've had troubles for a few days already with handling form that contains dropdown list. I tried all that I've learned so far but nothing helps. This is my code: using System; using System.Collections.Generic; using System.Linq; using System.Web; using CMS; using CMS.Model; using System.ComponentModel.DataAnnotations; namespace Portal.Models { public class ArticleDisplay { public ArticleDisplay() { } public int CategoryID { set; get; } public string CategoryTitle { set; get; } public int ArticleID { set; get; } public string ArticleTitle { set; get; } public DateTime ArticleDate; public string ArticleContent { set; get; } } public class HomePageViewModel { public HomePageViewModel(IEnumerable<ArticleDisplay> summaries, Article article) { this.ArticleSummaries = summaries; this.NewArticle = article; } public IEnumerable<ArticleDisplay> ArticleSummaries { get; private set; } public Article NewArticle { get; private set; } } public class ArticleRepository { private DB db = new DB(); // // Query Methods public IQueryable<ArticleDisplay> FindAllArticles() { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID orderby article.Date descending select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } public IQueryable<ArticleDisplay> FindTodayArticles() { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID where article.Date == DateTime.Today select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } public Article GetArticle(int id) { return db.Articles.SingleOrDefault(d => d.ArticleID == id); } public IQueryable<ArticleDisplay> DetailsArticle(int id) { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID where id == article.ArticleID select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } // // Insert/Delete Methods public void Add(Article article) { db.Articles.InsertOnSubmit(article); } public void Delete(Article article) { db.Articles.DeleteOnSubmit(article); } // // Persistence public void Save() { db.SubmitChanges(); } } } using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using Portal.Models; using CMS.Model; namespace Portal.Areas.CMS.Controllers { public class ArticleController : Controller { ArticleRepository articleRepository = new ArticleRepository(); ArticleCategoryRepository articleCategoryRepository = new ArticleCategoryRepository(); // // GET: /Article/ public ActionResult Index() { ViewData["categories"] = new SelectList ( articleCategoryRepository.FindAllCategories().ToList(), "CategoryId", "Title" ); Article article = new Article() { Date = DateTime.Now, CategoryID = 1 }; HomePageViewModel homeData = new HomePageViewModel(articleRepository.FindAllArticles().ToList(), article); return View(homeData); } // // GET: /Article/Details/5 public ActionResult Details(int id) { var 
article = articleRepository.DetailsArticle(id).Single(); if (article == null) return View("NotFound"); return View(article); } // // GET: /Article/Create //public ActionResult Create() //{ // ViewData["categories"] = new SelectList // ( // articleCategoryRepository.FindAllCategories().ToList(), "CategoryId", "Title" // ); // Article article = new Article() // { // Date = DateTime.Now, // CategoryID = 1 // }; // return View(article); //} // // POST: /Article/Create [ValidateInput(false)] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(Article article) { if (ModelState.IsValid) { try { // TODO: Add insert logic here articleRepository.Add(article); articleRepository.Save(); return RedirectToAction("Index"); } catch { return View(article); } } else { return View(article); } } // // GET: /Article/Edit/5 public ActionResult Edit(int id) { ViewData["categories"] = new SelectList ( articleCategoryRepository.FindAllCategories().ToList(), "CategoryId", "Title" ); var article = articleRepository.GetArticle(id); return View(article); } // // POST: /Article/Edit/5 [ValidateInput(false)] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(int id, FormCollection collection) { Article article = articleRepository.GetArticle(id); try { // TODO: Add update logic here UpdateModel(article, collection.ToValueProvider()); articleRepository.Save(); return RedirectToAction("Details", new { id = article.ArticleID }); } catch { return View(article); } } // // HTTP GET: /Article/Delete/1 public ActionResult Delete(int id) { Article article = articleRepository.GetArticle(id); if (article == null) return View("NotFound"); else return View(article); } // // HTTP POST: /Article/Delete/1 [AcceptVerbs(HttpVerbs.Post)] public ActionResult Delete(int id, string confirmButton) { Article article = articleRepository.GetArticle(id); if (article == null) return View("NotFound"); articleRepository.Delete(article); articleRepository.Save(); return View("Deleted"); } [ValidateInput(false)] public ActionResult UpdateSettings(int id, string value, string field) { // This highly-specific example is from the original coder's blog system, // but you can substitute your own code here. I assume you can pick out // which text field it is from the id. Article article = articleRepository.GetArticle(id); if (article == null) return Content("Error"); if (field == "Title") { article.Title = value; UpdateModel(article, new[] { "Title" }); articleRepository.Save(); } if (field == "Content") { article.Content = value; UpdateModel(article, new[] { "Content" }); articleRepository.Save(); } if (field == "Date") { article.Date = Convert.ToDateTime(value); UpdateModel(article, new[] { "Date" }); articleRepository.Save(); } return Content(value); } } } and view: <%@ Page Title="" Language="C#" MasterPageFile="~/Areas/CMS/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Portal.Models.HomePageViewModel>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Index </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <div class="naslov_poglavlja_main">Articles Administration</div> <%= Html.ValidationSummary("Create was unsuccessful. 
Please correct the errors and try again.") %> <% using (Html.BeginForm("Create","Article")) {%> <div class="news_forma"> <label for="Title" class="news">Title:</label> <%= Html.TextBox("Title", "", new { @class = "news" })%> <%= Html.ValidationMessage("Title", "*") %> <label for="Content" class="news">Content:</label> <div class="textarea_okvir"> <%= Html.TextArea("Content", "", new { @class = "news" })%> <%= Html.ValidationMessage("Content", "*")%> </div> <label for="CategoryID" class="news">Category:</label> <%= Html.DropDownList("CategoryId", (IEnumerable<SelectListItem>)ViewData["categories"], new { @class = "news" })%> <p> <input type="submit" value="Publish" class="form_submit" /> </p> </div> <% } %> <div class="naslov_poglavlja_main"><%= Html.ActionLink("Write new article...", "Create") %></div> <div id="articles"> <% foreach (var item in Model.ArticleSummaries) { %> <div> <div class="naslov_vijesti" id="<%= item.ArticleID %>"><%= Html.Encode(item.ArticleTitle) %></div> <div class="okvir_vijesti"> <div class="sadrzaj_vijesti" id="<%= item.ArticleID %>"><%= item.ArticleContent %></div> <div class="datum_vijesti" id="<%= item.ArticleID %>"><%= Html.Encode(String.Format("{0:g}", item.ArticleDate)) %></div> <a class="news_delete" href="#" id="<%= item.ArticleID %>">Delete</a> </div> <div class="dno"></div> </div> <% } %> </div> </asp:Content> When trying to post new article I get following error: System.InvalidOperationException: The ViewData item that has the key 'CategoryId' is of type 'System.Int32' but must be of type 'IEnumerable'. I really don't know what to do cause I'm pretty new to .net and mvc Any help appreciated! Ile EDIT: I found where I made mistake. I didn't include date. If in view form I add this line I'm able to add article: <%=Html.Hidden("Date", String.Format("{0:g}", Model.NewArticle.Date)) %> But, if I enter wrong datetype or leave title and content empty then I get the same error. In this eample there is no need for date edit, but I will need it for some other forms and validation will be necessary. EDIT 2: Error happens when posting! Call stack: App_Web_of9beco9.dll!ASP.areas_cms_views_article_create_aspx.__RenderContent2(System.Web.UI.HtmlTextWriter __w = {System.Web.UI.HtmlTextWriter}, System.Web.UI.Control parameterContainer = {System.Web.UI.WebControls.ContentPlaceHolder}) Line 31 + 0x9f bytes C#

    Read the article

  • Bluetooth connection: problem with Sony Ericsson

    - by Hugi
I have a Bluetooth client and a server. When the client calls Connector.open it connects to the port, but in such a way that my server never sees the connection. With Nokia phones everything works fine, but with Sony Ericsson I have this problem. One port (COM 5) is opened on the Bluetooth adapter. Listings:

Client:

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
import java.util.Vector;
import javax.bluetooth.*;
import javax.microedition.midlet.*;
import javax.microedition.lcdui.*;
import javax.microedition.io.*;
import java.io.*;

/**
 * @author ?????????????
 */
public class Client extends MIDlet implements DiscoveryListener, CommandListener {

    private static Object lock = new Object();
    private static Vector vecDevices = new Vector();
    private ServiceRecord[] servRec = new ServiceRecord[INQUIRY_COMPLETED];
    private Form form = new Form("Search");
    private List voteList = new List("Vote list", List.IMPLICIT);
    private List vote = new List("", List.EXCLUSIVE);
    private RemoteDevice remoteDevice;
    private String connectionURL = null;
    protected int stopToken = 255;
    private Command select = null;

    public void startApp() {
        // view form
        Display.getDisplay(this).setCurrent(form);
        try {
            // device search
            print("Starting device inquiry...");
            getAgent().startInquiry(DiscoveryAgent.GIAC, this);
            try {
                synchronized (lock) {
                    lock.wait();
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // device count
            int deviceCount = vecDevices.size();
            if (deviceCount <= 0) {
                print("No Devices Found.");
            } else {
                remoteDevice = (RemoteDevice) vecDevices.elementAt(0);
                print("Server found");
                // create uuid
                UUID uuid = new UUID(0x1101);
                UUID uuids[] = new UUID[] { uuid };
                // search service
                print("Searching for service...");
                getAgent().searchServices(null, uuids, remoteDevice, this);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // if device discovered add to vecDevices
    public void deviceDiscovered(RemoteDevice btDevice, DeviceClass cod) {
        // add the device to the vector
        try {
            if (!vecDevices.contains(btDevice) && btDevice.getFriendlyName(true).equals("serverHugi")) {
                vecDevices.addElement(btDevice);
            }
        } catch (IOException e) {
        }
    }

    public synchronized void servicesDiscovered(int transID, ServiceRecord[] servRecord) {
        // for each service create connection
        if (servRecord != null && servRecord.length > 0) {
            print("Service found");
            connectionURL = servRecord[0].getConnectionURL(ServiceRecord.NOAUTHENTICATE_NOENCRYPT, false);
            //connectionURL = servRecord[0].getConnectionURL(ServiceRecord.AUTHENTICATE_NOENCRYPT, false);
        }
        if (connectionURL != null) {
            showVoteList();
        }
    }

    public void serviceSearchCompleted(int transID, int respCode) {
        //print("serviceSearchCompleted");
        synchronized (lock) {
            lock.notify();
        }
    }

    // This callback method will be called when the device discovery is completed.
    public void inquiryCompleted(int discType) {
        synchronized (lock) {
            lock.notify();
        }
        switch (discType) {
            case DiscoveryListener.INQUIRY_COMPLETED:
                print("INQUIRY_COMPLETED");
                break;
            case DiscoveryListener.INQUIRY_TERMINATED:
                print("INQUIRY_TERMINATED");
                break;
            case DiscoveryListener.INQUIRY_ERROR:
                print("INQUIRY_ERROR");
                break;
            default:
                print("Unknown Response Code");
                break;
        }
    }

    // add message at form
    public void print(String msg) {
        form.append(msg);
        form.append("\n\n");
    }

    public void pauseApp() {
    }

    public void destroyApp(boolean unconditional) {
    }

    // get agent :)))
    private DiscoveryAgent getAgent() {
        try {
            return LocalDevice.getLocalDevice().getDiscoveryAgent();
        } catch (BluetoothStateException e) {
            throw new Error(e.getMessage());
        }
    }

    private synchronized String getMessage(final String send) {
        StreamConnection stream = null;
        DataInputStream in = null;
        DataOutputStream out = null;
        String r = null;
        try {
            // open connection
            stream = (StreamConnection) Connector.open(connectionURL);
            in = stream.openDataInputStream();
            out = stream.openDataOutputStream();
            out.writeUTF(send);
            out.flush();
            r = in.readUTF();
            print(r);
            in.close();
            out.close();
            stream.close();
            return r;
        } catch (IOException e) {
        } finally {
            if (stream != null) {
                try {
                    stream.close();
                } catch (IOException e) {
                }
            }
            return r;
        }
    }

    private synchronized void showVoteList() {
        String votes = getMessage("c_getVotes");
        voteList.append(votes, null);
        select = new Command("Select", Command.OK, 4);
        voteList.addCommand(select);
        voteList.setCommandListener(this);
        Display.getDisplay(this).setCurrent(voteList);
    }

    private synchronized void showVote(int index) {
        String title = getMessage("c_getVote_" + index);
        vote.setTitle(title);
        vote.append("Yes", null);
        vote.append("No", null);
        vote.setCommandListener(this);
        Display.getDisplay(this).setCurrent(vote);
    }

    public void commandAction(Command c, Displayable d) {
        if (c == select && d == voteList) {
            int index = voteList.getSelectedIndex();
            print("" + index);
            showVote(index);
        }
    }
}

BlueCove is used in this program.

Server:

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package javaapplication4;

import java.io.*;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import javax.bluetooth.*;
import javax.microedition.io.*;
import javaapplication4.Connect;

/**
 * @author ?????????????
 */
public class SampleSPPServer {

    protected static int endToken = 255;
    private static Lock lock = new ReentrantLock();
    private static StreamConnection conn = null;
    private static StreamConnectionNotifier streamConnNotifier = null;

    private void startServer() throws IOException {
        // Create a UUID for SPP
        UUID uuid = new UUID("1101", true);
        // Create the service url
        String connectionString = "btspp://localhost:" + uuid + ";name=Sample SPP Server";
        // open server url
        StreamConnectionNotifier streamConnNotifier =
                (StreamConnectionNotifier) Connector.open(connectionString);
        while (true) {
            Connect ct = new Connect(streamConnNotifier.acceptAndOpen());
            ct.getMessage();
        }
    }

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        // display local device address and name
        try {
            LocalDevice localDevice = LocalDevice.getLocalDevice();
            localDevice.setDiscoverable(DiscoveryAgent.GIAC);
            System.out.println("Name: " + localDevice.getFriendlyName());
        } catch (Throwable e) {
            e.printStackTrace();
        }
        SampleSPPServer sampleSPPServer = new SampleSPPServer();
        try {
            // start server
            sampleSPPServer.startServer();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Connect:

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package javaapplication4;

import java.io.*;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import javax.bluetooth.*;
import javax.microedition.io.*;

/**
 * @author ?????????????
 */
public class Connect {

    private static DataInputStream in = null;
    private static DataOutputStream out = null;
    private static StreamConnection connection = null;
    private static Lock lock = new ReentrantLock();

    public Connect(StreamConnection conn) {
        connection = conn;
    }

    public synchronized void getMessage() {
        Thread t = new Thread() {
            public void run() {
                try {
                    in = connection.openDataInputStream();
                    out = connection.openDataOutputStream();
                    String r = in.readUTF();
                    System.out.println("read:" + r);
                    if (r.equals("c_getVotes")) {
                        out.writeUTF("vote1");
                        out.flush();
                    }
                    if (r.equals("c_getVote_0")) {
                        out.writeUTF("Vote1");
                        out.flush();
                    }
                    out.close();
                    in.close();
                } catch (Throwable e) {
                } finally {
                    if (in != null) {
                        try {
                            in.close();
                        } catch (IOException e) {
                        }
                    }
                    try {
                        connection.close();
                    } catch (IOException e) {
                    }
                }
            }
        };
        t.start();
    }
}
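
[Editor's note] One thing worth trying - this is a hedged sketch, not a confirmed fix: some Sony Ericsson JSR-82 stacks negotiate authentication/encryption on SPP links by default, which can leave the server's acceptAndOpen() waiting while the client believes it has connected. The client already requests NOAUTHENTICATE_NOENCRYPT; spelling the matching security parameters out in the server URL makes both ends agree. Only the connection string in startServer() changes:

// Hypothetical variant of startServer()'s URL; the parameters
// authenticate, encrypt and master are standard JSR-82 btspp options.
UUID uuid = new UUID("1101", true);
String connectionString = "btspp://localhost:" + uuid
        + ";name=Sample SPP Server"
        + ";authenticate=false;encrypt=false;master=false";
StreamConnectionNotifier streamConnNotifier =
        (StreamConnectionNotifier) Connector.open(connectionString);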

    Read the article

  • Android map performance with > 800 overlays of KML data

    - by span
I have a shape file which I have converted to a KML file; I want to read coordinates from it and then draw paths between the coordinates on a MapView. With the help of this great post: How to draw a path on a map using kml file? I have been able to read the KML into an ArrayList of "Placemarks". This blog post then showed how to take a list of GeoPoints and draw a path: http://djsolid.net/blog/android---draw-a-path-array-of-points-in-mapview

The example in that post only draws one path between some points, however, and since I have many more paths than that I am running into performance problems. I'm currently adding a new RouteOverlay for each separate path, which leaves me with over 800 overlays once they have all been added. This hurts performance and I would love some input on what I can do to improve it. Here are the options I have considered:

1. Add all the points to a list that can be passed into a single class extending Overlay, so that all the paths are added and drawn in one overlay layer (see the sketch after the code below). I'm not sure how to implement this, though, since the paths do not always intersect and they have different start and end points. At the moment I add each path, which has several points, to its own list and then wrap that in an Overlay; that results in over 700 overlays...

2. Simplify the KML or SHP. Instead of having over 700 different paths, perhaps there is some way to merge them into 100 paths or fewer? Since a lot of the paths intersect at some point, it should be possible to modify the original SHP file so that it merges all intersections. Since I have never worked with these kinds of files before, I have not found a way to do this in QGIS. If someone knows how, I would love some input on that as well.

Here is a link to the group of shape files if you are interested:

http://danielkvist.net/cprg_bef_cbana_polyline.shp
http://danielkvist.net/cprg_bef_cbana_polyline.shx
http://danielkvist.net/cprg_bef_cbana_polyline.dbf
http://danielkvist.net/cprg_bef_cbana_polyline.prj

Anyway, here is the code I'm using to add the overlays. Many thanks in advance.
RoutePathOverlay.java:

package net.danielkvist;

import java.util.List;

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.Point;
import android.graphics.RectF;

import com.google.android.maps.GeoPoint;
import com.google.android.maps.MapView;
import com.google.android.maps.Overlay;
import com.google.android.maps.Projection;

public class RoutePathOverlay extends Overlay {

    private int _pathColor;
    private final List<GeoPoint> _points;
    private boolean _drawStartEnd;

    public RoutePathOverlay(List<GeoPoint> points) {
        this(points, Color.RED, false);
    }

    public RoutePathOverlay(List<GeoPoint> points, int pathColor, boolean drawStartEnd) {
        _points = points;
        _pathColor = pathColor;
        _drawStartEnd = drawStartEnd;
    }

    private void drawOval(Canvas canvas, Paint paint, Point point) {
        Paint ovalPaint = new Paint(paint);
        ovalPaint.setStyle(Paint.Style.FILL_AND_STROKE);
        ovalPaint.setStrokeWidth(2);
        int _radius = 6;
        RectF oval = new RectF(point.x - _radius, point.y - _radius,
                point.x + _radius, point.y + _radius);
        canvas.drawOval(oval, ovalPaint);
    }

    public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) {
        Projection projection = mapView.getProjection();
        if (shadow == false && _points != null) {
            Point startPoint = null, endPoint = null;
            Path path = new Path();
            // We are creating the path
            for (int i = 0; i < _points.size(); i++) {
                GeoPoint gPointA = _points.get(i);
                Point pointA = new Point();
                projection.toPixels(gPointA, pointA);
                if (i == 0) { // This is the start point
                    startPoint = pointA;
                    path.moveTo(pointA.x, pointA.y);
                } else {
                    if (i == _points.size() - 1) // This is the end point
                        endPoint = pointA;
                    path.lineTo(pointA.x, pointA.y);
                }
            }
            Paint paint = new Paint();
            paint.setAntiAlias(true);
            paint.setColor(_pathColor);
            paint.setStyle(Paint.Style.STROKE);
            paint.setStrokeWidth(3);
            paint.setAlpha(90);
            if (getDrawStartEnd()) {
                if (startPoint != null) {
                    drawOval(canvas, paint, startPoint);
                }
                if (endPoint != null) {
                    drawOval(canvas, paint, endPoint);
                }
            }
            if (!path.isEmpty())
                canvas.drawPath(path, paint);
        }
        return super.draw(canvas, mapView, shadow, when);
    }

    public boolean getDrawStartEnd() {
        return _drawStartEnd;
    }

    public void setDrawStartEnd(boolean markStartEnd) {
        _drawStartEnd = markStartEnd;
    }
}

MyMapActivity.java:

package net.danielkvist;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;

import android.graphics.Color;
import android.os.Bundle;
import android.util.Log;

import com.google.android.maps.GeoPoint;
import com.google.android.maps.MapActivity;
import com.google.android.maps.MapView;

public class MyMapActivity extends MapActivity {

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        MapView mapView = (MapView) findViewById(R.id.mapview);
        mapView.setBuiltInZoomControls(true);
        String url = "http://danielkvist.net/cprg_bef_cbana_polyline_simp1600.kml";
        NavigationDataSet set = MapService.getNavigationDataSet(url);
        drawPath(set, Color.parseColor("#6C8715"), mapView);
    }

    /**
     * Does the actual drawing of the route, based on the geo points provided
     * in the nav set.
     *
     * @param navSet
     *            Navigation set bean that holds the route information, incl. geo pos
     * @param color
     *            Color in which to draw the lines
     * @param mMapView01
     *            Map view to draw onto
     */
    public void drawPath(NavigationDataSet navSet, int color, MapView mMapView01) {
        ArrayList<GeoPoint> geoPoints = new ArrayList<GeoPoint>();
        Collection overlaysToAddAgain = new ArrayList();
        for (Iterator iter = mMapView01.getOverlays().iterator(); iter.hasNext();) {
            Object o = iter.next();
            Log.d(BikeApp.APP, "overlay type: " + o.getClass().getName());
            if (!RouteOverlay.class.getName().equals(o.getClass().getName())) {
                overlaysToAddAgain.add(o);
            }
        }
        mMapView01.getOverlays().clear();
        mMapView01.getOverlays().addAll(overlaysToAddAgain);
        int totalNumberOfOverlaysAdded = 0;
        for (Placemark placemark : navSet.getPlacemarks()) {
            String path = placemark.getCoordinates();
            if (path != null && path.trim().length() > 0) {
                String[] pairs = path.trim().split(" ");
                String[] lngLat = pairs[0].split(",");
                // lngLat[0]=longitude, lngLat[1]=latitude, lngLat[2]=height
                try {
                    if (lngLat.length > 1 && !lngLat[0].equals("") && !lngLat[1].equals("")) {
                        GeoPoint startGP = new GeoPoint(
                                (int) (Double.parseDouble(lngLat[1]) * 1E6),
                                (int) (Double.parseDouble(lngLat[0]) * 1E6));
                        GeoPoint gp1;
                        GeoPoint gp2 = startGP;
                        geoPoints = new ArrayList<GeoPoint>();
                        geoPoints.add(startGP);
                        for (int i = 1; i < pairs.length; i++) {
                            lngLat = pairs[i].split(",");
                            gp1 = gp2;
                            if (lngLat.length >= 2
                                    && gp1.getLatitudeE6() > 0 && gp1.getLongitudeE6() > 0
                                    && gp2.getLatitudeE6() > 0 && gp2.getLongitudeE6() > 0) {
                                // for GeoPoint, first:latitude, second:longitude
                                gp2 = new GeoPoint(
                                        (int) (Double.parseDouble(lngLat[1]) * 1E6),
                                        (int) (Double.parseDouble(lngLat[0]) * 1E6));
                                if (gp2.getLatitudeE6() != 22200000) {
                                    geoPoints.add(gp2);
                                }
                            }
                        }
                        totalNumberOfOverlaysAdded++;
                        mMapView01.getOverlays().add(new RoutePathOverlay(geoPoints));
                    }
                } catch (NumberFormatException e) {
                    Log.e(BikeApp.APP, "Cannot draw route.", e);
                }
            }
        }
        Log.d(BikeApp.APP, "Total overlays: " + totalNumberOfOverlaysAdded);
        mMapView01.setEnabled(true);
    }

    @Override
    protected boolean isRouteDisplayed() {
        // TODO Auto-generated method stub
        return false;
    }
}

Edit: There are of course some more files I'm using that I have not posted. You can download the complete Eclipse project here: http://danielkvist.net/se.zip
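
[Editor's note] A minimal sketch of option 1 from the question: collapse every route into one Overlay whose draw() builds a single Path, issuing a moveTo per route so disjoint paths with different start and end points stay separate contours. The class name and constructor are invented for illustration; the Paint setup mirrors RoutePathOverlay:

package net.danielkvist;

import java.util.List;

import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.Point;

import com.google.android.maps.GeoPoint;
import com.google.android.maps.MapView;
import com.google.android.maps.Overlay;
import com.google.android.maps.Projection;

public class MultiRouteOverlay extends Overlay {

    private final List<List<GeoPoint>> routes; // one inner list per route
    private final Paint paint;

    public MultiRouteOverlay(List<List<GeoPoint>> routes, int color) {
        this.routes = routes;
        paint = new Paint();
        paint.setAntiAlias(true);
        paint.setColor(color);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(3);
        paint.setAlpha(90);
    }

    @Override
    public void draw(Canvas canvas, MapView mapView, boolean shadow) {
        if (shadow)
            return;
        Projection projection = mapView.getProjection();
        Path path = new Path();
        Point p = new Point();
        for (List<GeoPoint> route : routes) {
            for (int i = 0; i < route.size(); i++) {
                projection.toPixels(route.get(i), p);
                if (i == 0)
                    path.moveTo(p.x, p.y); // new contour: routes stay disjoint
                else
                    path.lineTo(p.x, p.y);
            }
        }
        canvas.drawPath(path, paint); // one Path, one draw call, one overlay
    }
}

drawPath would then collect each route's List<GeoPoint> into a single List<List<GeoPoint>> and call mMapView01.getOverlays().add(new MultiRouteOverlay(routes, color)) once, so the map composites one overlay instead of 800.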

    Read the article

  • Asterisk/FreeSWITCH in NAT/no-NAT setup

    - by pQd
Hi, my current setup: I use a bunch of SIP hard-phones around a few offices. All devices have two SIP accounts configured - one on an internal SIP proxy [for calls between the branches], the other at a 3rd-party VoIP provider [since the offices are in different countries, those are different providers, but that's irrelevant].

I was thinking about terminating SIP calls on something like an Asterisk/FreeSWITCH server and having all SIP devices log on just once to such a server [or servers] - mostly to provide things like voicemail, group calls, redirections etc. It seems perfectly doable, but there is one problem: I cannot find examples of how to prepare for NAT/no-NAT. For calls routed from/to the 3rd-party VoIP operators I'll need NAT/STUN handling, but for internal calls I do not want any NAT - all traffic should go via VPNs to the different branches. Can you give me some hints on how to configure this? Any tutorials? Thanks!
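
[Editor's note] Not a tutorial, but a hedged sketch of how this splits on the Asterisk side (sip.conf, 1.6-era syntax; section names and addresses are invented, and FreeSWITCH has equivalent per-profile parameters): NAT handling is decided per peer, so phones reached over the VPN can be left untouched while the provider trunks get the NAT treatment.

[general]
externip=203.0.113.10          ; public address presented to the VoIP providers
localnet=10.0.0.0/255.0.0.0    ; branch/VPN ranges: addresses here are never rewritten

[internal-phone](!)            ; template for phones reached over the VPN
type=friend
nat=no                         ; trust the addresses the phones send
qualify=yes

[provider-trunk](!)            ; template for the 3rd-party providers
type=peer
nat=yes                        ; assume NAT, reply to the source, symmetric RTP
qualify=yes

The externip/localnet pair rewrites SIP/SDP only for destinations outside localnet, which is what gives the "NAT toward the providers, no NAT inside the VPNs" split.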

    Read the article

  • What is needed to use anycast IPs?

    - by coredump
So, there are a bunch of questions on SF about the uses of anycast IPs and how cool they are. My angle is more practical: what specifically do I need in order to use one of those addresses? Do I need to be an AS (Autonomous System)? Is it possible to use an anycast IP on my internal network? Do I need anything special from a registrar or operator(s) to use it? Basically, if I want to use an anycast IP address, what exactly do I need, from the equipment to the configuration?
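
[Editor's note] Short version: for Internet-wide anycast you need your own ASN and a provider-independent prefix announced via BGP from each site; internally no registrar is involved, because anycast is just the same address advertised from several places, with the IGP routing to the nearest one. A hedged sketch for the internal case (Linux plus Quagga; the address is invented):

# on every node that should serve the anycast address:
ip addr add 10.99.0.53/32 dev lo

# quagga ospfd.conf on each node - advertise the /32 into the IGP,
# so routers deliver traffic to the topologically nearest instance:
router ospf
 network 10.99.0.53/32 area 0.0.0.0

Each node then runs the same service (a DNS resolver is the classic example) bound to 10.99.0.53, and a node that dies simply stops advertising the route.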

    Read the article

< Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >