Search Results

Search found 5920 results on 237 pages for 'hand drawn'.


  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study's results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. [Table 1 — Area of improvement and performance gains of 10 codes. Columns: code number; cache performance; redundant operations; loop structures; performance improvement. Data: 1: x x 15.5; 2: x 2.8; 3: x x 2.5; 4: x 2.1; 5: x x 2.0; 6: x 5.0; 7: x 5.8; 8: x 6.3; 9: 2.2; 10: x x 3.3.] The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
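[A generic illustration, not from the original article: a minimal, self-contained C sketch of what "cache order" means in practice. C stores a two-dimensional array row by row, so the rightmost index should vary fastest in the innermost loop; Fortran is the mirror image (column-major), which is why the article tells Fortran programmers to walk arrays in column order.]

    #include <stdio.h>

    #define N 1024

    static double a[N][N];   /* row-major: a[i][0..N-1] are contiguous */

    /* Unit-stride traversal: j (the rightmost index) varies fastest. */
    double sum_cache_order(void) {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Strided traversal: consecutive accesses are N doubles apart and
       touch a new cache line almost every time. */
    double sum_strided(void) {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void) {
        printf("%f %f\n", sum_cache_order(), sum_strided());
        return 0;
    }

The two functions compute the same sum; only the traversal order differs, which is exactly the kind of change discussed below.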
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing: do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals: do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above, the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is L1: for i / for l / for k / for j; L2: for i / for l / for j / for k; L3: for i / for j / for k / for l. So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into contiguous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes: do j = 1, GZ do i = 1, GZ T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4: do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
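[A generic illustration, not from the original article: the temporary-array fix described above for code 7 might look roughly like the following C sketch; the structure fields and the selection test are hypothetical, since the article does not show code 7's source.]

    #include <stddef.h>

    /* Hypothetical object layout; code 7's real data structures are not shown. */
    typedef struct { double mass, radius, charge; } Object;

    /* Instead of storing only the labels of selected objects (and later
       chasing them with irregular stride), copy the parameters the
       processing steps need into packed temporary arrays. */
    size_t select_objects(const Object *obj, size_t n,
                          double *mass_tmp, double *radius_tmp) {
        size_t m = 0;
        for (size_t i = 0; i < n; i++) {
            if (obj[i].charge > 0.0) {          /* hypothetical selection test */
                mass_tmp[m] = obj[i].mass;      /* gather into unit-stride copies */
                radius_tmp[m] = obj[i].radius;
                m++;
            }
        }
        return m;   /* processing loops now walk mass_tmp/radius_tmp with stride 1 */
    }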
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
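[An aside that is not in the original article: on compilers that support C99, the restrict qualifier is another way to assert that such pointers never alias, which lets the compiler hoist the load of delta[y] itself. A minimal sketch of the same loop, with parameter types that are my own assumption:]

    /* Sketch only: restrict promises the compiler that anything modified
       through dW is never accessed through delta or I1, so delta[y] can
       be kept in a register across the inner loops. */
    void accumulate(int NY, int NU, const int *NK,
                    double ***dW, const double *restrict delta,
                    const double *restrict I1) {
        for (int y = 0; y < NY; y++) {
            int i = 0;
            for (int u = 0; u < NU; u++)
                for (int k = 0; k < NK[u]; k++)
                    dW[y][u][k] += delta[y] * I1[i++];
        }
    }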
In reality, dW and delta do not overlap in memory, so I rewrote the loop as: for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays: #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i]. A similar problem appears in code 6, a Fortran program. The key loop in this program is: do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 ≤ i < N, 0 ≤ j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index while others are a function of the outer loop index but not the inner loop index: for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0)) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is: for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop. Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). 
Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers. Optimizing loop structures Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals or isolate them in their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet: MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as: MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200 thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays. 
I found such an occasion in code 10 where I split the loop: do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do into two disjoint loops: do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) end do end do do i = 1, n do j = 1, m C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do Conclusions Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. 
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding. About the Author John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • XNA Alpha Blending to make part of a texture transparent

    - by David
    What I am trying to do is use alpha blending in XNA to make part of a drawn texture transparent. So for instance, I clear the screen to some color, let's say blue. Then I draw a texture that is red. Finally I draw a texture that is just a radial gradient from completely transparent in the center to completely black at the edge. What I want is for the red texture drawn earlier to be transparent in the same places as the radial gradient texture, so you should be able to see the blue background through the red texture. I thought that this would work: GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(SpriteBlendMode.None); spriteBatch.Draw(bg, new Vector2(0, 0), Color.White); spriteBatch.End(); spriteBatch.Begin(SpriteBlendMode.None); GraphicsDevice.RenderState.AlphaBlendEnable = true; GraphicsDevice.RenderState.AlphaSourceBlend = Blend.One; GraphicsDevice.RenderState.AlphaDestinationBlend = Blend.Zero; GraphicsDevice.RenderState.SourceBlend = Blend.Zero; GraphicsDevice.RenderState.DestinationBlend = Blend.One; GraphicsDevice.RenderState.BlendFunction = BlendFunction.Add; spriteBatch.Draw(circle, new Vector2(0, 0), Color.White); spriteBatch.End(); GraphicsDevice.RenderState.AlphaBlendEnable = false; But it just seems to ignore all my RenderState settings. I also tried setting the SpriteBlendMode to AlphaBlend. It blends the textures, but that is not the effect I want. Any help would be appreciated.

    Read the article

  • Wpf: Why is WriteableBitmap getting slower?

    - by fritz
    There is a simple MSDN example about WriteableBitmap. It shows how to draw a freehand line with the cursor by updating just one pixel when the mouse is pressed and moving over a WPF Image control. writeableBitmap.Lock(); (...set the writeableBitmap.BackBuffer's pixel value...) writeableBitmap.AddDirtyRect(new Int32Rect(column, row, 1, 1)); writeableBitmap.Unlock(); Now I'm trying to understand the following behaviour when moving the mouse pointer very fast: If the image/bitmap size is relatively small, e.g. 800×600 pixels, then the last drawn pixel is always "synchronized" with the mouse pointer's position, i.e. there is no delay and the reaction to mouse movements is very fast. But if the bitmap gets larger, e.g. 1300×1050 pixels, you can notice a delay; the last drawn pixel always appears a bit behind the moving mouse pointer. Since in both cases only one pixel gets updated with "AddDirtyRect", the reaction speed should be independent of the bitmap size!? But it seems that WriteableBitmap gets slower as its size gets larger. Or does the whole bitmap somehow get transferred to the graphics device on every writeableBitmap.Unlock() call, and not only the rectangle area specified in the AddDirtyRect method? fritz

    Read the article

  • CALayer won't display

    - by Paul from Boston
    I'm trying to learn how to use CALayers for a project and am having trouble getting sublayers to display. I created a vanilla View-based iPhone app in XCode for these tests. The only real code is in the ViewController which sets up the layers and their delegates. There is a delegate, DelegateMainView, for the viewController's view layer and a second different one, DelegateStripeLayer, for an additional layer. The ViewController code is all in awakeFromNib, - (void)awakeFromNib { DelegateMainView *oknDelegate = [[DelegateMainView alloc] init]; self.view.layer.delegate = oknDelegate; CALayer *newLayer = [CALayer layer]; DelegateStripeLayer *sldDelegate = [[DelegateStripeLayer alloc] init]; newLayer.delegate = sldDelegate; [self.view.layer addSublayer:newLayer]; [newLayer setNeedsDisplay]; [self.view.layer setNeedsDisplay]; } The two different delegates are simply wrappers for the CALayer delegate method, drawLayer:inContext:, i.e., - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context { CGRect bounds = CGContextGetClipBoundingBox(context); ... do some stuff here ... CGContextStrokePath(context); } each a bit different. The layer, view.layer, is drawn properly but newLayer is never drawn. If I put breakpoints in the two delegates, the program stops in DelegateMainView but never reaches DelegateStripeLayer. What am I missing here? Thanks.

    Read the article

  • How to draw an overlay on a SurfaceView used by Camera on Android?

    - by Cristian Castiblanco
    I have a simple program that draws the preview of the Camera into a SurfaceView. What I'm trying to do is to use the onPreviewFrame method, which is invoked each time a new frame is drawn into the SurfaceView, in order to execute the invalidate method, which is supposed to invoke the onDraw method. In fact, the onDraw method is being invoked, but nothing there is being printed (I guess the camera preview is overwriting the text I'm trying to draw). This is a simplified version of the SurfaceView subclass I have: public class Superficie extends SurfaceView implements SurfaceHolder.Callback { SurfaceHolder mHolder; public Camera camera; Superficie(Context context) { super(context); mHolder = getHolder(); mHolder.addCallback(this); mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); } public void surfaceCreated(final SurfaceHolder holder) { camera = Camera.open(); try { camera.setPreviewDisplay(holder); camera.setPreviewCallback(new PreviewCallback() { public void onPreviewFrame(byte[] data, Camera arg1) { invalidar(); } }); } catch (IOException e) {} } public void invalidar(){ invalidate(); } public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { Camera.Parameters parameters = camera.getParameters(); parameters.setPreviewSize(w, h); camera.setParameters(parameters); camera.startPreview(); } @Override public void draw(Canvas canvas) { super.draw(canvas); // nothing gets drawn :( Paint p = new Paint(Color.RED); canvas.drawText("PREVIEW", canvas.getWidth() / 2, canvas.getHeight() / 2, p); } }

    Read the article

  • AndEngine VS Android's Canvas VS OpenGLES - For rendering a 2D indoor vector map

    - by Orchestrator
    This is a big issue that I've been trying to figure out for a long time already. I'm working on an application that should include a 2D vector indoor map. The map will be drawn from an .svg file that specifies all the data of the lines, curved lines (paths) and rectangles that should be drawn. My main requirements for the map are: support for touch events to detect exactly where the finger is touching; great image quality, especially when drawing curved and diagonal lines (anti-aliasing); and, optional but very nice to have, a built-in ability to zoom, pan and rotate. So far I tried AndEngine and Android's canvas. With AndEngine I had trouble implementing anti-aliasing for rendering smooth diagonal lines or drawing curved lines, and as far as I understand, this is not an easy thing to implement in AndEngine. Though I have to mention that AndEngine's ability to zoom in and pan with the camera instead of modifying the objects on the screen was really nice to have. I also have a little experience with Android's built-in Canvas, mainly with viewing simple bitmaps, but I'm not sure if it supports all of these things, and especially if it would provide smooth results. Last but not least, there's the option of just plain OpenGLES 1 or 2, which, as far as I understand, with enough work should be able to support all the features I require. However, it seems like something that would be hard to implement, and I've never programmed in OpenGL or anything like it, but I'm very much willing to learn. To sum it all up, I need a platform that would provide me with the ability to do the 3 things I mentioned before, but also, very importantly, allow me to implement this feature as fast as possible. Any kind of answer or suggestion would be very much welcomed as I'm very eager to solve this problem! Thanks!

    Read the article

  • Android: How to get a custom view to redraw partially?

    - by Peterdk
    I have a custom view that fills my entire screen (a piano keyboard). When a user touches a key, invalidate() is called and the whole keyboard gets redrawn to show the new state with a touched key. Currently the view is very simple, but I plan to add a bit more nice graphics. Since the whole keyboard is dynamically rendered, this will make redrawing the entire keyboard more expensive. So I thought, let's look into partial redrawing. Now I call invalidate(Rect dirty) with the correct dirty region. I set my onDraw(Canvas canvas) method to only draw the keys in the dirty region if I do indeed want a partial redraw. This results in those keys being drawn, but the rest of the keyboard is totally black/not drawn at all. Am I wrong in expecting that calling invalidate(Rect dirty) would "cache" the current canvas and only "allow" drawing in the dirty region? Is there any way I can achieve what I want? (A way to "cache" the canvas and only redraw the dirty area?)

    Read the article

  • Asynchronous Image loading in certain custom UITableViewCells

    - by Dan
    So, I'm sure that this issue has been brought up before, but I haven't quite seen a solution for my specific problem. What I'm doing is loading a group of custom UITableViewCells that are drawn using Loren Brichter's solution; each cell has some content, an icon (representing the user) that is asynchronously loaded into it, and a few other things. Eventually, I'm hoping to add support for images. Not every cell has an image, so in the creation of the cell, it determines whether an image is required to be loaded and drawn into the cell. My problem is that I don't know how I can do that while keeping the image loading asynchronous - since not every cell needs to load an image (as is needed for the icon), I'm not sure how to keep things in order when a cell is thrown off screen and re-rendered when the user scrolls over it. Each cell draws its content from an NSArray containing a custom object called FeedItem. All I'm really looking for is some sort of solution or idea to help me, because right now I am at a loss. Thanks guys, I will appreciate the help.

    Read the article

  • Can't create an OgreBullet Trimesh

    - by Nathan Baggs
    I'm using Ogre and Bullet for a project and I currently have a first person camera set up with a Capsule Collision Shape. I've created a model of a cave (which will serve as the main part of the level) and imported it into my game. I'm now trying to create an OgreBulletCollisions::TriangleMeshCollisionShape of the cave. The code I've got so far is below, but it isn't working: it compiles, but the Capsule shape passes straight through the cave shape, and although I have debug outlines on, none are being drawn around the cave mesh. Entity *cave = mSceneMgr->createEntity("Cave", "pCube1.mesh"); SceneNode *caveNode = mSceneMgr->getRootSceneNode()->createChildSceneNode(); caveNode->setPosition(0, 10, 250); caveNode->setScale(10, 10, 10); caveNode->rotate(Quaternion(0.5, 0.5, -0.5, 0.5)); caveNode->attachObject(cave); OgreBulletCollisions::StaticMeshToShapeConverter *smtsc = new OgreBulletCollisions::StaticMeshToShapeConverter(); smtsc->addEntity(cave); OgreBulletCollisions::TriangleMeshCollisionShape *tri = smtsc->createTrimesh(); OgreBulletDynamics::RigidBody *caveBody = new OgreBulletDynamics::RigidBody("cave", mWorld); caveBody->setStaticShape(tri, 0.1, 0.8); mShapes.push_back(tri); mBodies.push_back(caveBody); Any suggestions are welcome.

    Read the article

  • CssClass and default images in ServerContol

    - by Jeff Dege
    I'm writing a ServerControl in ASP.NET 3.5, and I'm exposing CssClass, so the user can manipulate the visual appearance of the control. My problem is that I want to establish reasonable defaults, so that the user doesn't have to configure CSS unless he wants to change the defaults. My specific problem is that my control is emitting html divs, that need to display background images. I want the user to be able to specify a different image in CSS, but I want to display a default background image, and I can't make that work. The entire server control is emitted as a div, with a class name set to the value the user provided in CssClass. The div that needs the background image is enclosed within this outer div, with a class name of its own. I am currently setting the background image in CSS on the page that contains the control: <style type="text/css"> .cssClass .innerDiv { background-image: url("http://...."); } </style> With this the proper image is drawn. But if it's not there, no image is drawn. What I want is for the ServerControl to emit some CSS that will define these image urls, that would be over-ridden by any css that was added by the user, and for that default CSS to include URLs to images embedded in the ServerControl's assembly. And I'm not sure of how to do either. Nor, for that matter, am I sure this is the best approach. Any ideas?

    Read the article

  • Rails : fighting long http response times with ajax. Is it a good idea? Please, help with implementa

    - by baranov
    Hi, everybody! I've googled some tutorials, browsed some SO answers, and was unable to find a recipe for my problem. I'm writing a web site which is supposed to display an almost realtime stock chart. Data is stored in a constantly updating MySQL database, and I wrote find_by_sql query code which fetches all the data I need to get my chart drawn. Everything is ok, except performance - it takes from one second to one minute for different queries to fetch all the data from the database, and this time includes necessary (My)SQL-server side calculations. This is simply unacceptable. I got the following idea: if the data is queried from the MySQL server one point at a time instead of the entire dataset, it takes only about 1-100ms to get an individual point. I imagine the data fetch process might be browser-driven. After the user presses the button in order to get a chart drawn, the controller makes one request to the database and renders, say, a progress bar, say 1% ready. When the browser gets the response, it immediately makes an (ajax) request, and the server fetches the next piece of data and renders "2%". And so on, until all the data is ready and the server displays the requested chart. Could this be implemented in Rails + JS? Is there a tutorial for solving a similar problem on the Web? I suppose if the thing is feasible at all, somebody should have already done this before. I have read several articles about ajax, and I believe I do understand the general principles, but I have never done nontrivial ajax programming myself. Thanks for your time!

    Read the article

  • Custom UIView using CALayers disappears after 180° rotation or navigation controller pop

    - by Steve Madsen
    I have created a custom UIView subclass that is exhibiting some strange behavior. It is a spinning wheel selector, and for performance reasons it is drawn entirely into two CALayer instances. The bottom layer is the wheel itself, which is rotated using setAffineTransform: according to touches. The top layer is eye candy. drawRect: is fairly simple. If the control hasn't been drawn yet (or it's been invalidated), it calls a method that creates the images and assigns them to the layer contents property. - (void) drawRect:(CGRect)rect { if (imageLayer == nil) { [self drawIntoImageLayer]; } [self updateWheelRotation]; } When the view controller using this view first appears, everything is fine. There are two situations, however, in which the view completely disappears: (1) if the device is rotated a full 180°, and (2) after a view controller is popped off the navigation stack and the view becomes visible again. drawRect: is not called in either case. Interestingly enough, it IS called after a 90° orientation change, and that causes the view to re-appear. How can I ensure that a custom view using CALayers is redrawn properly in these situations?

    Read the article

  • Can you have multiple clipping regions in an HTML Canvas?

    - by emh
    I have code that loads a bunch of images into hidden img elements and then a Javascript loop which places each image onto the canvas. However, I want to clip each image so that it is a circle when placed on the canvas. My loop looks like this: $$('#avatars img').each(function(avatar) { var canvas = $('canvas'); var context = canvas.getContext('2d'); var x = Math.floor(Math.random() * canvas.width); var y = Math.floor(Math.random() * canvas.height); context.beginPath(); context.arc(x+24, y+24, 20, 0, Math.PI * 2, 1); context.clip(); context.strokeStyle = "black"; context.drawImage(document.getElementById(avatar.id), x, y); context.stroke(); }); Problem is, only the first image is drawn (or is visible). If I remove the clipping logic: $$('#avatars img').each(function(avatar) { var canvas = $('canvas'); var context = canvas.getContext('2d'); var x = Math.floor(Math.random() * canvas.width); var y = Math.floor(Math.random() * canvas.height); context.drawImage(document.getElementById(avatar.id), x, y); }); Then all my images are drawn. Is there a way to get each image individually clipped? I tried resetting the clipping area to be the entire canvas between images but that didn't work.

    Read the article

  • After drawing circles on a C# form, how can I know which circle I clicked?

    - by SorinA.
    I have to represent graphically an oriented graph like in the image below. I have a C# form; when I click with the mouse on it, I have to draw a node. If I click somewhere on the form where there is not already a node drawn, it means I clicked with the intention of drawing a node; if there is a node there, I must select it and memorize it. On the next mouse click, if I touch a place where there is not already a node drawn, it means, like before, that I want to draw a new node; if there is a node where I clicked, I need to draw the line from the first memorized node to the selected one and add road cost details. I know how to draw the circles that represent the nodes of the graph when I click on the form. I'm using the following code: namespace RepGraficaAUnuiGraf { public partial class Form1 : Form { Graphics graphDrawingArea; Bitmap bmpDrawingArea; Graphics graph; public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { bmpDrawingArea = new Bitmap(Width, Height); graphDrawingArea = Graphics.FromImage(bmpDrawingArea); graph = Graphics.FromHwnd(this.Handle); } private void Form1_Click(object sender, EventArgs e) { DrawCentralCircle(((MouseEventArgs)e).X, ((MouseEventArgs)e).Y, 15); graph.DrawImage(bmpDrawingArea, 0, 0); } void DrawCentralCircle(int CenterX, int CenterY, int Radius) { int start = CenterX - Radius; int end = CenterY - Radius; int diam = Radius * 2; bmpDrawingArea = new Bitmap(Width, Height); graphDrawingArea = Graphics.FromImage(bmpDrawingArea); graphDrawingArea.DrawEllipse(new Pen(Color.Blue), start, end, diam, diam); graphDrawingArea.DrawString("1", new Font("Tahoma", 13), Brushes.Black, new PointF(CenterX - 8, CenterY - 10)); } } } My question is: how can I find out whether I drew a node at the coordinates (x,y) on my form, and which one it is? I thought of representing the nodes as buttons, having a tag or something similar as the node number (which in drawing should be 1 for Santa Barbara, 2 for Barstow etc.)

    Read the article

  • What is faster with PictureBox? Many small redraws or complete redraw.

    - by kornelijepetak
    I have a PictureBox (WinMobile 6 WinForm) on which I draw some images. There is a background image that goes in the background and it does not change. However, objects that are drawn on the PictureBox move during the application, so I need to refresh the background. Since items that are redrawn fill from 50% to 80% of the surface, the question is which of the two is faster: 1) Redraw only the parts of the background image that have changed (previous + next location of the moving object). 2) Redraw the complete background and then draw all the objects in their current positions. Now, the reason for asking is that I am not sure how much processor power is needed for a single DrawImage operation and what the time-consuming factors are. I am aware that if there is almost complete coverage of the background, it would be stupid to redraw portions of it, because by drawing portions I will have drawn the complete picture. But since sometimes only half of the image has changed (some objects remained in their old positions), it may (perhaps) be beneficial to redraw only those regions. But I need your insight on this... Thanks.

    Read the article

  • Scrollbar with Sprite and Rectangle won't move text, just the Rectangle it's painted on.

    - by WebDevHobo
    Warning: school assignment. For those of you still with me, I am tasked with making some scrollable content in Flash. Load in a TextFile using LoadURL(), then display it. To get the text, we've written our own class TextFieldExtended, which is basically just there to give the textfile location to the constructor and then have the class do the various steps of getting it and loading it for you. So I needed to get a Scrollbar, which I got here: http://kirupa.com/forum/showthread.php?t=245468 (all files in a zip linked at the end of this text) The thing is, it works with Sprites. After trying to get it to accept TextFieldExtended, I bumped into a block, since the scrollbar relied heavily on a Sprite property that TextFieldExtended didn't have and couldn't have. So I tried adding the TextFieldExtended instance to a Sprite instance using addChild(). A problem occurs here that I do not know how to handle. It seems that a Rectangle is drawn and the Text is drawn on that. I say this because the scrollbar moves the Rectangle up and down a bit, but the text doesn't scroll; the Rectangle it is positioned in moves, and the text then moves along with it. My question: can this be fixed, or does this implementation of scrollbars need a lot of adaptations before this is possible? If so, are there any scrollbars you can recommend, because it's too extended for me at this point. All files: http://www.mediafire.com/?q2ium22gmox This was made in Flash CS4 using ActionScript3. The Example class is the final implementation.

    Read the article

  • Draw a position from a 2d Array on respected canvas location

    - by Anon
    Background: I have two 2d arrays. Each index within each 2d array represents a tile which is drawn on a square canvas suitable for 8 x 8 tiles. The first 2d array represents the ground tiles and is looped and drawn on the canvas using the following code: //Draw the map from the land 2d array map = new Canvas(mainFrame, 20, 260, 281, 281); for(int i=0; i < world.length; i++){ for(int j=0; j < world[i].length; j++){ for(int x=0; x < 280; x=x+35){ for(int y=0; y < 280; y=y+35){ Point p = new Point(x,y); map.add(new RectangleObject(p,35,35,Colour.green)); } } } } This creates a grid of green tiles 8 x 8 across as intended. The second 2d array represents the position on the ground. This 2d array has everyone of its indexes as null apart from one which is comprised of a Person class. Problem I am unsure of how I can draw the position on the grid. I was thinking of a similar loop, so it draws over the previous 2d array another set of 64 tiles. Only this time they are all transparent but the one tile which isn't null. In other words, the tile where Person is located. I wanted to use a search throughout the loop using a comparative if statement along the lines of if(!(world[] == null)){ map.add(new RectangleObject(p,35,35,Colour.red));} However my knowledge is limited and I am confused on how to implement it.

    Read the article

  • wxpython: adding panel to wx.Frame disables/conflicts with wx.Frame's OnPaint?!

    - by sdaau
    Hi all, I just encountered this strange situation: I found an example, where wx.Frame's OnPaint is overridden, and a circle is drawn. Funnily, as soon as I add even a single panel to the frame, the circle is not drawn anymore - in fact, OnPaint is not called at all anymore ! Can anyone explain me if this is the expected behavior, and how to correctly handle a wx.Frame's OnPaint, if the wx.Frame has child panels ? Small code example is below.. Thanks in advance for any answers, Cheers! The code: #!/usr/bin/env python # http://www.linuxquestions.org/questions/programming-9/wxwidgets-wxpython-drawing-problems-with-onpaint-event-703946/ import wx class MainWindow(wx.Frame): def __init__(self, parent, title, size=wx.DefaultSize): wx.Frame.__init__(self, parent, wx.ID_ANY, title, wx.DefaultPosition, size) self.circles = list() self.displaceX = 30 self.displaceY = 30 circlePos = (self.displaceX, self.displaceY) self.circles.append(circlePos) ## uncommenting either only first, or both of ## the commands below, causes OnPaint *not* to be called anymore! #~ self.panel = wx.Panel(self, wx.ID_ANY) #~ self.mpanelA = wx.Panel(self.panel, -1, size=(200,50)) self.Bind(wx.EVT_PAINT, self.OnPaint) def OnPaint(self, e): print "OnPaint called" dc = wx.PaintDC(self) dc.SetPen(wx.Pen(wx.BLUE)) dc.SetBrush(wx.Brush(wx.BLUE)) # Go through the list of circles to draw all of them for circle in self.circles: dc.DrawCircle(circle[0], circle[1], 10) def main(): app = wx.App() win = MainWindow(None, "Draw delayed circles", size=(620,460)) win.Show() app.MainLoop() if __name__ == "__main__": main()

    Read the article

  • Obtaining FontMetrics before getting a Graphics instance

    - by Tom Castle
    Typically, I'd obtain a graphics instance something like this:

        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();

    However, in the current project I'm working on, the width and height variables above are dependent upon the size of a number of text fragments that will later be drawn onto the graphics instance. But, to obtain the dimensions of the font being used, I would usually use the FontMetrics that I get from the graphics object:

        FontMetrics metrics = g.getFontMetrics();

    So, I have a nasty little dependency cycle. I cannot create the graphics object until I know the size of the text, and I cannot know the size of the text until I have a graphics object. One solution is just to create another BufferedImage/Graphics pair first in order to get the FontMetrics instance I need, but this seems unnecessary. So, is there a nicer way? Or is it the case that the width, height etc. properties for a Font are somehow dependent upon what (graphics, component...) the text is to be drawn on?
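
    For what it's worth, one commonly used workaround is to measure the text through a FontRenderContext, which can be built without any Graphics object; the measurements should match the eventual Graphics2D as long as its anti-aliasing and fractional-metrics hints agree. A minimal sketch (the font name and sample string are made-up example values):

        import java.awt.Font;
        import java.awt.font.FontRenderContext;
        import java.awt.geom.Rectangle2D;

        public class TextSizeSketch {
            public static void main(String[] args) {
                Font font = new Font("SansSerif", Font.PLAIN, 14);
                // The two booleans are the anti-aliasing and fractional-metrics hints.
                FontRenderContext frc = new FontRenderContext(null, true, true);
                Rectangle2D bounds = font.getStringBounds("some text fragment", frc);
                int width = (int) Math.ceil(bounds.getWidth());
                int height = (int) Math.ceil(bounds.getHeight());
                System.out.println(width + " x " + height);
            }
        }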

    Read the article

  • Strange issue with fixed form border styles in Vista

    - by Nazgulled
    My previous post about this issue didn't get many answers, and it was somewhat specific and hard to understand. I think I've managed to understand the problem better, and I now believe it to be a Vista issue... The problem lies with all of the fixed border styles - FixedDialog, Fixed3D, FixedSingle and FixedToolWindow. It does not happen with the sizable ones. This problem, like I said, also happens only on Vista. Let's say you have a form with any of the fixed border styles and set the starting location to 0,0. What you want here is for the form to be snapped to the top left corner of the screen. This works just fine if the form border style is one of the sizable options; if it's fixed, the form will sit slightly outside of the screen's working area, both to the left and to the top. What's stranger is that the form location does not change - it is still 0,0 - yet a few pixels of the form are drawn outside of the working area. I tested this on XP and it didn't happen; the problem is Vista specific. On XP, the only difference was the border size, which changes a bit between the styles, but the form was always perfectly snapped to position 0,0. Is there a way to fix or work around this, preferably without measuring how many pixels are drawn outside of the working area and then adding that offset to the form location?

    Read the article

  • Generate random number from an arbitrary weighted list

    - by Fernando
    Here's what I need to do; I'll be doing this both in PHP and JavaScript. I have a list of numbers that will range from 1 to 300-500 (I haven't set the limit yet). I will be running a drawing where 10 numbers will be picked at random from the given range. Here's the tricky part: I want some numbers to be less likely to be drawn. A small set of those 300-500 will be flagged as "lucky numbers". For example, out of 100 drawings, most numbers have equal chances of being drawn, except for a few that will only be picked once every 30-50 drawings. Basically, I need to artificially set the probability of certain numbers being picked while maintaining an even distribution among the rest of the numbers. The only similar thing I've found so far is this question: Generate A Weighted Random Number. The problem is that my spec has considerably more numbers (up to 500), so the weights would get very small, and supposedly this could be a problem with that solution (rejection sampling). I'm still trying it, but I wonder if there are other solutions. Math is not my thing, so I appreciate any input. Thanks.
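
    One standard alternative to rejection sampling is the cumulative-weight (roulette-wheel) approach: give every number a weight, pick a random point on the summed scale, and walk the list until the running total passes it. It scales fine to a few hundred entries, because small weights are never "rejected", just rarely landed on. The sketch below is in Java purely to illustrate the idea (the question asks about PHP and JavaScript), and the weights and "lucky" numbers are made-up example values:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        public class WeightedDraw {
            public static void main(String[] args) {
                Random rng = new Random();
                int n = 300;
                double[] weight = new double[n + 1];          // index 0 unused; numbers are 1..n
                for (int i = 1; i <= n; i++) weight[i] = 1.0; // normal numbers
                weight[7] = 1.0 / 40;                         // example "lucky" numbers: drawn
                weight[42] = 1.0 / 40;                        // roughly 40 times less often

                List<Integer> picked = new ArrayList<>();
                for (int draw = 0; draw < 10; draw++) {
                    double total = 0;
                    for (int i = 1; i <= n; i++) total += weight[i];
                    double r = rng.nextDouble() * total;      // random point on the cumulative scale
                    int chosen = n;
                    for (int i = 1; i <= n; i++) {
                        r -= weight[i];
                        if (r <= 0) { chosen = i; break; }
                    }
                    picked.add(chosen);
                    weight[chosen] = 0;                       // no repeats within one drawing
                }
                System.out.println(picked);
            }
        }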

    Read the article

  • Creating collaborative whiteboard drawing application

    - by Steven Sproat
    I have my own drawing program in place, with a variety of "drawing tools" such as Pen, Eraser, Rectangle, Circle, Select, Text etc. It's made with Python and wxPython. Each tool mentioned above is a class, and they all have polymorphic methods such as left_down(), mouse_motion(), hit_test() etc. The program manages a list of all drawn shapes; when a user has drawn a shape, it's added to the list. This is used to manage undo/redo operations too. So, I have a decent codebase that I can hook collaborative drawing into. Each shape could be changed to know its owner - the user who drew it - and to only allow delete/move/rescale operations on shapes owned by that person. I'm just wondering the best way to develop this. One person in the "session" will have to act as the server; I have no money to offer free central servers. Somehow users will need a way to connect to servers, meaning some kind of "discover servers" browser... or something. How do I broadcast changes made to the application? Drawing in real time and broadcasting a message on each mouse motion event would be costly in terms of performance, and things get worse the more users there are at a given time. Any ideas are welcome; I'm not too sure where to begin with developing this (or even how to test it).
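
    A common pattern for this kind of session is a plain relay: whoever hosts accepts connections and re-broadcasts each serialized shape event (for example, one JSON line per finished shape) to every other client, and mouse-motion traffic is kept down by only sending a shape once it is completed, or at most every few tens of milliseconds. The sketch below shows the relay idea in Java only as an illustration of the pattern - the program in the question is Python, and the port number and line-per-event protocol are made-up assumptions:

        import java.io.*;
        import java.net.*;
        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        public class RelaySketch {
            private static final List<PrintWriter> clients = new CopyOnWriteArrayList<>();

            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(9000)) {   // assumed port
                    while (true) {
                        Socket socket = server.accept();
                        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                        clients.add(out);
                        new Thread(() -> relay(socket, out)).start();
                    }
                }
            }

            // Read one line per shape event and forward it to every other client.
            private static void relay(Socket socket, PrintWriter self) {
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        for (PrintWriter client : clients) {
                            if (client != self) client.println(line);
                        }
                    }
                } catch (IOException ignored) {
                } finally {
                    clients.remove(self);
                }
            }
        }

    For discovering a host on a LAN, a periodic UDP broadcast of "I'm hosting on port X" is usually enough; across the internet the host would have to share an address manually or through some lobby service.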

    Read the article

  • AvoidXferMode Tolerance

    - by kayahr
    I have a problem with the following code:

        protected void onDraw(Canvas canvas) {
            Paint paint = new Paint();

            // Draw a blue circle
            paint.setColor(Color.BLUE);
            canvas.drawCircle(100, 100, 50, paint);

            // Draw a red circle where it collides with the blue one
            paint.setXfermode(new AvoidXfermode(Color.BLUE, 0, Mode.TARGET));
            paint.setColor(Color.RED);
            canvas.drawCircle(50, 50, 50, paint);
        }

    According to the API documentation of AvoidXfermode, a tolerance value of 0 means that it looks for an EXACT color match. That should work here, because I specify the same color I used for drawing the first circle. But the result is that the red circle is not drawn at all. When I use a tolerance value of 255 instead, it works (the red circle is drawn where it collides with the blue one), but that sounds wrong, because with such a high tolerance I would expect it to draw the circle EVERYWHERE. So what's wrong here? The API documentation? Android? Me?
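
    One possible explanation - an assumption, not something confirmed by the documentation - is that the surface the circles land on is not an exact 32-bit ARGB buffer, so the blue pixels read back are not bit-for-bit equal to Color.BLUE and an exact match at tolerance 0 never fires. A way to test that theory is to draw into an ARGB_8888 offscreen bitmap first and then blit it to the view; a rough sketch (the bitmap is created inside onDraw only for brevity, in real code it would be created once):

        @Override
        protected void onDraw(Canvas canvas) {
            Bitmap buffer = Bitmap.createBitmap(getWidth(), getHeight(),
                    Bitmap.Config.ARGB_8888);
            Canvas offscreen = new Canvas(buffer);

            Paint paint = new Paint();
            paint.setColor(Color.BLUE);
            offscreen.drawCircle(100, 100, 50, paint);

            // Same AvoidXfermode as before, but now matching against exact ARGB_8888 pixels.
            paint.setXfermode(new AvoidXfermode(Color.BLUE, 0, AvoidXfermode.Mode.TARGET));
            paint.setColor(Color.RED);
            offscreen.drawCircle(50, 50, 50, paint);

            canvas.drawBitmap(buffer, 0, 0, null);
        }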

    Read the article

  • How to refresh/redraw the screen (not the program window)

    - by mohrphium
    I'm having a bit of a hard time figuring out how to remove a drawn ellipse after it has been drawn somewhere else. I need a circle to follow my mouse all the time, and this is all the program should do. I get the mouse positions and draw my circle, but how can I remove the last one?

        #include <Windows.h>
        #include <iostream>

        void drawRect(int a1, int a2) {
            HDC screenDC = ::GetDC(0);
            // Draw circle at mouse position
            ::Ellipse(screenDC, a1, a2 + 5, a1 + 9, a2 + 14);
            ::ReleaseDC(0, screenDC);
            //::InvalidateRect(0, NULL, TRUE); //<- I tried that but then everything flickers
            // Also, the refresh rate is not fast enough... still some circles left
        }

        int main(void)
        {
            int a1;
            int a2;
            bool exit = false;
            while (exit != true) {
                POINT cursorPos;
                GetCursorPos(&cursorPos);
                float x = 0;
                x = cursorPos.x;
                float y = 0;
                y = cursorPos.y;
                a1 = (int)cursorPos.x;
                a2 = (int)cursorPos.y;
                drawRect(a1, a2);
            }
        }

    I am working with graphics and all that stuff for the first time. I'm kind of stuck here... once again. Thanks.

    Read the article

  • own drawImage / drawLine in OpenGL

    - by Chrise
    I'm implementing some native 2D draw functions in my graphics engine for Android, but now another question has come up while observing the performance of my program. At the moment I'm implementing drawLine/drawImage functions. In summary, the following values differ for each drawn line/image: the color, the alpha value, the width of the line, rotation (only for images), size/scale (also for images), and the blending method (subtract, add, normal alpha). Now, when an imageLine is drawn, I put the CPU-calculated vertex positions and UV values for 6 vertices (2 triangles) into a FloatBuffer and draw it immediately with drawArrays, after passing the drawing information (color, alpha, etc.) to the shader via uniforms. When I draw an image, the pre-set VBO is drawn directly after passing that information. The first thing I noticed is that drawing images is of course much faster than imageLines (because of the VBOs), but also that I cannot pre-put vertex data into a VBO for imageLines, because imageLines have no static shape like normal images (the line length and width vary, and the vertex positions x1,y1 and x2,y2 change too often). That's why I use a normal FloatBuffer instead of a VBO. So my question is: what's the best way to manage images and other 2D graphics functions? It's important to me that a user of the engine can draw as many images/2D graphics as possible without losing too much performance. You can find the functions for drawing images, imagelines, rects, quads, etc. here: https://github.com/Chrise55/LLama3D/blob/master/Llama3DLibrary/src/com/llama3d/object/graphics/image/ImageBase.java Here is an example of how it looks with many images (testing artificial neural networks); it works fine, but is already a little slow with that many images... :(
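
    Since per-image uniforms force one draw call per sprite, the usual answer is batching: bake color/alpha into vertex attributes (or group sprites that share the same state), append all of their CPU-transformed vertices into one buffer per frame, and issue a single glDrawArrays for the whole batch. The sketch below is not taken from the linked engine; the class, attribute handles, and vertex layout are made-up assumptions, and it uses a client-side buffer for simplicity (the same data could instead be pushed into a GL_DYNAMIC_DRAW VBO with glBufferSubData):

        import android.opengl.GLES20;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        /** Collects many 2D quads and draws them with one glDrawArrays call per frame. */
        public class QuadBatch {
            private static final int FLOATS_PER_VERTEX = 4;   // x, y, u, v (interleaved)
            private static final int VERTS_PER_QUAD = 6;      // two triangles
            private final FloatBuffer buffer;
            private int quadCount = 0;

            public QuadBatch(int maxQuads) {
                buffer = ByteBuffer
                        .allocateDirect(maxQuads * VERTS_PER_QUAD * FLOATS_PER_VERTEX * 4)
                        .order(ByteOrder.nativeOrder())
                        .asFloatBuffer();
            }

            /** Append one quad whose rotation/scale were already applied on the CPU. */
            public void add(float[] sixVerticesXYUV) {
                buffer.put(sixVerticesXYUV);
                quadCount++;
            }

            /** Draw everything collected this frame in a single call, then reset. */
            public void flush(int positionAttrib, int texCoordAttrib) {
                buffer.position(0);
                GLES20.glEnableVertexAttribArray(positionAttrib);
                GLES20.glVertexAttribPointer(positionAttrib, 2, GLES20.GL_FLOAT, false,
                        FLOATS_PER_VERTEX * 4, buffer);
                buffer.position(2);
                GLES20.glEnableVertexAttribArray(texCoordAttrib);
                GLES20.glVertexAttribPointer(texCoordAttrib, 2, GLES20.GL_FLOAT, false,
                        FLOATS_PER_VERTEX * 4, buffer);
                GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, quadCount * VERTS_PER_QUAD);
                buffer.clear();
                quadCount = 0;
            }
        }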

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >