Search Results

Search found 6598 results on 264 pages for 'opcode cache'.


  • Is it possible to evaluate a JSP only once per session, and cache it after that?

    - by Bears will eat you
    My site has a nav menu that is dynamically built as a separate JSP, and included in most pages via <jsp:include />. The contents and styling of the menu are determined by which pages the user does and doesn't have access to. The set of accessible pages is retrieved from the database when a user logs in, and not during the course of a session. So, there's really no need to re-evaluate the nav menu code every time the user requests a page. Is there an easy way to generate the markup from the JSP only once per session, and cache/reuse it during the session?
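    One way to sketch this (the NavMenuRenderer helper and the navMenuHtml session attribute below are hypothetical names, not from the question): build the menu markup once at login, when the set of accessible pages is being loaded anyway, store the resulting string in the session, and have every page print that string instead of re-evaluating the include.

        // In the login handler, right after the accessible pages are read from the database.
        // NavMenuRenderer is an assumed helper that produces the same markup the nav JSP would.
        HttpSession session = request.getSession();
        if (session.getAttribute("navMenuHtml") == null) {
            String html = NavMenuRenderer.renderFor(user.getAccessiblePages());
            session.setAttribute("navMenuHtml", html);
        }

    Each page can then emit ${sessionScope.navMenuHtml} (plain EL output is not HTML-escaped) in place of the <jsp:include />, so the menu JSP is evaluated at most once per session. A caching tag library that supports session scope (OSCache ships one, for example) wrapped around the include is another way to get the same effect without hand-rolled code.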


  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.

The study's results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

    #    Areas of improvement    Performance improvement
    1    x  x                    15.5
    2    x                        2.8
    3    x  x                     2.5
    4    x                        2.1
    5    x  x                     2.0
    6    x                        5.0
    7    x                        5.8
    8    x                        6.3
    9                             2.2
    10   x  x                     3.3

    Table 1 — Area of improvement and performance gains of 10 codes
    (areas of improvement considered: cache performance, redundant operations, loop structures)

The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

Optimizing cache performance

Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
6 out of the 10 codes studied here benefited from such high level optimizations.

Array Accesses

The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing

    do I = 0, 1010, delta_x
      IM = I - delta_x
      IP = I + delta_x
      do J = 5, 995, delta_x
        JM = J - delta_x
        JP = J + delta_x
        T1 = CA1(IP, J) + CA1(I, JP)
        T2 = CA1(IM, J) + CA1(I, JM)
        S1 = T1 + T2 - 4 * CA1(I, J)
        CA(I, J) = CA1(I, J) + D * S1
      end do
    end do

In code 2, the culprit is conditionals

    do I = 1, N
      do J = 1, N
        If (IFLAG(I,J) .EQ. 0) then
          T1 = Value(I, J-1)
          T2 = Value(I-1, J)
          T3 = Value(I, J)
          T4 = Value(I+1, J)
          T5 = Value(I, J+1)
          Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
          Delta = ABS(T3 - Value(I,J))
          If (Delta .GT. MaxDelta) MaxDelta = Delta
        endif
      enddo
    enddo

I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

    L1: for i        L2: for i        L3: for i
          for l            for l            for j
            for k            for j            for k
              for j            for k            for l

So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

Array Strides

When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to the entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 = i < N, 0 = j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index while others are a function of the outer loop index but not the inner loop index

    for (j = 0; j < N; j++) {
      for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6]) ?
              high * R * R / (1.0 + pow(PRM[4]*r, 2.0)) :
              low * pow(R/PRM[6], PRM[5]);
        < rest of loop omitted >
      }}

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array.

    for (i = 0; i < M; i++) {
      r = i * hrmax;
      TEMP[i] = pow(r, PRM[3]);
    }

[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

    for (j = 0; j < N; j++) {
      R = rig[j] / 1000.;
      tmp1 = kcoeff * par[2] * beta[j] * par[4];
      tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
      tmp3 = 1.0 + (par[4] * par[4] * R * R);
      tmp4 = par[6] * par[6] / tmp2;
      tmp5 = R * R / tmp3;
      tmp6 = pow(R / par[6], par[5]);
      if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5;
      } else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6;
      } else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5;
      } else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
      }
      for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
      }
    }

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet

    MaxDelta = 0.0
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        if (Delta > MaxDelta) MaxDelta = Delta
      enddo
    enddo
    if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

    MaxDelta = .false.
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
      enddo
    enddo
    if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays.
I found such an occasion in code 10 where I split the loop

    do i = 1, n
      do j = 1, m
        A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
        C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

into two disjoint loops

    do i = 1, n
      do j = 1, m
        A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
      end do
    end do

    do i = 1, n
      do j = 1, m
        C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.


  • Best tool to understand source

    - by cache
    I have the source code for a project. I am working on porting it to another device, as the current source code is for a Linux environment. I am having some errors in the newly ported code, so I was thinking it would be best to once again understand the whole source code; this will help me localise the errors. The problem is that I tried using 'gdb' for Linux to debug the code but it does not help. So is there any tool that I can use to trace the program line by line? By doing so I can understand the program flow. Please help!
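    For plain line-by-line tracing, a minimal gdb session looks like the sketch below (it assumes the binary is built with debug symbols, i.e. compiled with -g; myprog is a placeholder name). If the target device cannot run gdb itself, the usual arrangement is gdbserver on the device plus a cross-gdb on the host.

        $ gdb ./myprog
        (gdb) break main        # stop at program entry
        (gdb) run
        (gdb) next              # execute one source line, stepping over calls
        (gdb) step              # like next, but steps into calls
        (gdb) print some_var    # inspect a variable at the current line
        (gdb) backtrace         # show the call stack when something looks wrong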


  • Win7 Bluescreen: IRQ_NOT_LESS_OR_EQUAL | athrxusb.sys

    - by wretrOvian
    Hi I'd left my system on last night, and found the bluescreen in the morning. This has been happening occasionally, over the past few days. Details: ================================================== Dump File : 022710-18236-01.dmp Crash Time : 2/27/2010 8:46:44 AM Bug Check String : DRIVER_IRQL_NOT_LESS_OR_EQUAL Bug Check Code : 0x000000d1 Parameter 1 : 00000000`00001001 Parameter 2 : 00000000`00000002 Parameter 3 : 00000000`00000000 Parameter 4 : fffff880`06b5c0e1 Caused By Driver : athrxusb.sys Caused By Address : athrxusb.sys+760e1 File Description : Product Name : Company : File Version : Processor : x64 Computer Name : Full Path : C:\Windows\minidump\022710-18236-01.dmp Processors Count : 2 Major Version : 15 Minor Version : 7600 ================================================== HiJackThis ("[...]" indicates removed text; full log posted to pastebin): Logfile of Trend Micro HijackThis v2.0.2 Scan saved at 8:49:15 AM, on 2/27/2010 Platform: Unknown Windows (WinNT 6.01.3504) MSIE: Internet Explorer v8.00 (8.00.7600.16385) Boot mode: Normal Running processes: C:\Windows\DAODx.exe C:\Program Files (x86)\ASUS\EPU\EPU.exe C:\Program Files\ASUS\TurboV\TurboV.exe C:\Program Files (x86)\PowerISO\PWRISOVM.EXE C:\Program Files (x86)\OpenOffice.org 3\program\soffice.exe C:\Program Files (x86)\OpenOffice.org 3\program\soffice.bin D:\Downloads\HijackThis.exe C:\Program Files (x86)\uTorrent\uTorrent.exe R1 - HKCU\Software\Microsoft\Internet Explorer\[...] [...] O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files (x86)\Java\jre6\bin\jp2ssv.dll O4 - HKLM\..\Run: [HDAudDeck] C:\Program Files (x86)\VIA\VIAudioi\VDeck\VDeck.exe -r O4 - HKLM\..\Run: [StartCCC] "C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static\CLIStart.exe" MSRun O4 - HKLM\..\Run: [TurboV] "C:\Program Files\ASUS\TurboV\TurboV.exe" O4 - HKLM\..\Run: [PWRISOVM.EXE] C:\Program Files (x86)\PowerISO\PWRISOVM.EXE O4 - HKLM\..\Run: [googletalk] C:\Program Files (x86)\Google\Google Talk\googletalk.exe /autostart O4 - HKLM\..\Run: [AdobeCS4ServiceManager] "C:\Program Files (x86)\Common Files\Adobe\CS4ServiceManager\CS4ServiceManager.exe" -launchedbylogin O4 - HKCU\..\Run: [uTorrent] "C:\Program Files (x86)\uTorrent\uTorrent.exe" O4 - HKUS\S-1-5-19\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-19\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-20\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'NETWORK SERVICE') O4 - HKUS\S-1-5-20\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'NETWORK SERVICE') O4 - Startup: OpenOffice.org 3.1.lnk = C:\Program Files (x86)\OpenOffice.org 3\program\quickstart.exe O13 - Gopher Prefix: O23 - Service: @%SystemRoot%\system32\Alg.exe,-112 (ALG) - Unknown owner - C:\Windows\System32\alg.exe (file missing) O23 - Service: AMD External Events Utility - Unknown owner - C:\Windows\system32\atiesrxx.exe (file missing) O23 - Service: ASUS System Control Service (AsSysCtrlService) - Unknown owner - C:\Program Files (x86)\ASUS\AsSysCtrlService\1.00.02\AsSysCtrlService.exe O23 - Service: DeviceVM Meta Data Export Service (DvmMDES) - DeviceVM - C:\ASUS.SYS\config\DVMExportService.exe O23 - Service: @%SystemRoot%\system32\efssvc.dll,-100 (EFS) - Unknown owner - C:\Windows\System32\lsass.exe (file missing) O23 - Service: ESET HTTP Server (EhttpSrv) - ESET - C:\Program Files\ESET\ESET NOD32 Antivirus\EHttpSrv.exe O23 - 
Service: ESET Service (ekrn) - ESET - C:\Program Files\ESET\ESET NOD32 Antivirus\x86\ekrn.exe O23 - Service: @%systemroot%\system32\fxsresm.dll,-118 (Fax) - Unknown owner - C:\Windows\system32\fxssvc.exe (file missing) O23 - Service: FLEXnet Licensing Service - Acresso Software Inc. - C:\Program Files (x86)\Common Files\Macrovision Shared\FLEXnet Publisher\FNPLicensingService.exe O23 - Service: FLEXnet Licensing Service 64 - Acresso Software Inc. - C:\Program Files\Common Files\Macrovision Shared\FLEXnet Publisher\FNPLicensingService64.exe O23 - Service: InstallDriver Table Manager (IDriverT) - Macrovision Corporation - C:\Program Files (x86)\Common Files\InstallShield\Driver\11\Intel 32\IDriverT.exe O23 - Service: @keyiso.dll,-100 (KeyIso) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @comres.dll,-2797 (MSDTC) - Unknown owner - C:\Windows\System32\msdtc.exe (file missing) O23 - Service: @%SystemRoot%\System32\netlogon.dll,-102 (Netlogon) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%systemroot%\system32\psbase.dll,-300 (ProtectedStorage) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: Protexis Licensing V2 (PSI_SVC_2) - Protexis Inc. - c:\Program Files (x86)\Common Files\Protexis\License Service\PsiService_2.exe O23 - Service: @%systemroot%\system32\Locator.exe,-2 (RpcLocator) - Unknown owner - C:\Windows\system32\locator.exe (file missing) O23 - Service: @%SystemRoot%\system32\samsrv.dll,-1 (SamSs) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%SystemRoot%\system32\snmptrap.exe,-3 (SNMPTRAP) - Unknown owner - C:\Windows\System32\snmptrap.exe (file missing) O23 - Service: @%systemroot%\system32\spoolsv.exe,-1 (Spooler) - Unknown owner - C:\Windows\System32\spoolsv.exe (file missing) O23 - Service: @%SystemRoot%\system32\sppsvc.exe,-101 (sppsvc) - Unknown owner - C:\Windows\system32\sppsvc.exe (file missing) O23 - Service: Steam Client Service - Valve Corporation - C:\Program Files (x86)\Common Files\Steam\SteamService.exe O23 - Service: @%SystemRoot%\system32\ui0detect.exe,-101 (UI0Detect) - Unknown owner - C:\Windows\system32\UI0Detect.exe (file missing) O23 - Service: @%SystemRoot%\system32\vaultsvc.dll,-1003 (VaultSvc) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%SystemRoot%\system32\vds.exe,-100 (vds) - Unknown owner - C:\Windows\System32\vds.exe (file missing) O23 - Service: @%systemroot%\system32\vssvc.exe,-102 (VSS) - Unknown owner - C:\Windows\system32\vssvc.exe (file missing) O23 - Service: @%systemroot%\system32\wbengine.exe,-104 (wbengine) - Unknown owner - C:\Windows\system32\wbengine.exe (file missing) O23 - Service: @%Systemroot%\system32\wbem\wmiapsrv.exe,-110 (wmiApSrv) - Unknown owner - C:\Windows\system32\wbem\WmiApSrv.exe (file missing) O23 - Service: @%PROGRAMFILES%\Windows Media Player\wmpnetwk.exe,-101 (WMPNetworkSvc) - Unknown owner - C:\Program Files (x86)\Windows Media Player\wmpnetwk.exe (file missing) -- End of file - 6800 bytes CPU-Z ("[...]" indicates removed text; see full log posted to pastebin): CPU-Z TXT Report ------------------------------------------------------------------------- Binaries ------------------------------------------------------------------------- CPU-Z version 1.53.1 Processors ------------------------------------------------------------------------- Number of processors 1 Number of threads 2 APICs ------------------------------------------------------------------------- 
Processor 0 -- Core 0 -- Thread 0 0 -- Core 1 -- Thread 0 1 Processors Information ------------------------------------------------------------------------- Processor 1 ID = 0 Number of cores 2 (max 2) Number of threads 2 (max 2) Name AMD Phenom II X2 550 Codename Callisto Specification AMD Phenom(tm) II X2 550 Processor Package Socket AM3 (938) CPUID F.4.2 Extended CPUID 10.4 Brand ID 29 Core Stepping RB-C2 Technology 45 nm Core Speed 3110.7 MHz Multiplier x FSB 15.5 x 200.7 MHz HT Link speed 2006.9 MHz Instructions sets MMX (+), 3DNow! (+), SSE, SSE2, SSE3, SSE4A, x86-64, AMD-V L1 Data cache 2 x 64 KBytes, 2-way set associative, 64-byte line size L1 Instruction cache 2 x 64 KBytes, 2-way set associative, 64-byte line size L2 cache 2 x 512 KBytes, 16-way set associative, 64-byte line size L3 cache 6 MBytes, 48-way set associative, 64-byte line size FID/VID Control yes Min FID 4.0x P-State FID 0xF - VID 0x10 P-State FID 0x8 - VID 0x18 P-State FID 0x3 - VID 0x20 P-State FID 0x100 - VID 0x2C Package Type 0x1 Model 50 String 1 0x7 String 2 0x6 Page 0x0 TDP Limit 79 Watts TDC Limit 66 Amps Attached device PCI device at bus 0, device 24, function 0 Attached device PCI device at bus 0, device 24, function 1 Attached device PCI device at bus 0, device 24, function 2 Attached device PCI device at bus 0, device 24, function 3 Attached device PCI device at bus 0, device 24, function 4 Thread dumps ------------------------------------------------------------------------- CPU Thread 0 APIC ID 0 Topology Processor ID 0, Core ID 0, Thread ID 0 Type 0200400Ah Max CPUID level 00000005h Max CPUID ext. level 8000001Bh Cache descriptor Level 1, I, 64 KB, 1 thread(s) Cache descriptor Level 1, D, 64 KB, 1 thread(s) Cache descriptor Level 2, U, 512 KB, 1 thread(s) Cache descriptor Level 3, U, 6 MB, 2 thread(s) CPUID 0x00000000 0x00000005 0x68747541 0x444D4163 0x69746E65 0x00000001 0x00100F42 0x00020800 0x00802009 0x178BFBFF 0x00000002 0x00000000 0x00000000 0x00000000 0x00000000 0x00000003 0x00000000 0x00000000 0x00000000 0x00000000 0x00000004 0x00000000 0x00000000 0x00000000 0x00000000 0x00000005 0x00000040 0x00000040 0x00000003 0x00000000 [...] CPU Thread 1 APIC ID 1 Topology Processor ID 0, Core ID 1, Thread ID 0 Type 0200400Ah Max CPUID level 00000005h Max CPUID ext. level 8000001Bh Cache descriptor Level 1, I, 64 KB, 1 thread(s) Cache descriptor Level 1, D, 64 KB, 1 thread(s) Cache descriptor Level 2, U, 512 KB, 1 thread(s) Cache descriptor Level 3, U, 6 MB, 2 thread(s) CPUID 0x00000000 0x00000005 0x68747541 0x444D4163 0x69746E65 0x00000001 0x00100F42 0x01020800 0x00802009 0x178BFBFF 0x00000002 0x00000000 0x00000000 0x00000000 0x00000000 0x00000003 0x00000000 0x00000000 0x00000000 0x00000000 0x00000004 0x00000000 0x00000000 0x00000000 0x00000000 0x00000005 0x00000040 0x00000040 0x00000003 0x00000000 [...] Chipset ------------------------------------------------------------------------- Northbridge AMD 790GX rev. 00 Southbridge ATI SB750 rev. 
00 Memory Type DDR3 Memory Size 4096 MBytes Channels Dual, (Unganged) Memory Frequency 669.0 MHz (3:10) CAS# latency (CL) 9.0 RAS# to CAS# delay (tRCD) 9 RAS# Precharge (tRP) 9 Cycle Time (tRAS) 24 Bank Cycle Time (tRC) 33 Command Rate (CR) 1T Uncore Frequency 2006.9 MHz Memory SPD ------------------------------------------------------------------------- DIMM # 1 SMBus address 0x50 Memory type DDR3 Module format UDIMM Manufacturer (ID) G.Skill (7F7F7F7FCD000000) Size 2048 MBytes Max bandwidth PC3-10700 (667 MHz) Part number F3-10600CL9-2GBNT Number of banks 8 Nominal Voltage 1.50 Volts EPP no XMP no JEDEC timings table CL-tRCD-tRP-tRAS-tRC @ frequency JEDEC #1 6.0-6-6-17-23 @ 457 MHz JEDEC #2 7.0-7-7-20-27 @ 533 MHz JEDEC #3 8.0-8-8-22-31 @ 609 MHz JEDEC #4 9.0-9-9-25-34 @ 685 MHz DIMM # 2 SMBus address 0x51 Memory type DDR3 Module format UDIMM Manufacturer (ID) G.Skill (7F7F7F7FCD000000) Size 2048 MBytes Max bandwidth PC3-10700 (667 MHz) Part number F3-10600CL9-2GBNT Number of banks 8 Nominal Voltage 1.50 Volts EPP no XMP no JEDEC timings table CL-tRCD-tRP-tRAS-tRC @ frequency JEDEC #1 6.0-6-6-17-23 @ 457 MHz JEDEC #2 7.0-7-7-20-27 @ 533 MHz JEDEC #3 8.0-8-8-22-31 @ 609 MHz JEDEC #4 9.0-9-9-25-34 @ 685 MHz DIMM # 1 SPD registers [...] DIMM # 2 SPD registers [...] Monitoring ------------------------------------------------------------------------- Mainboard Model M4A78T-E (0x000001F7 - 0x00A955E4) LPCIO ------------------------------------------------------------------------- LPCIO Vendor ITE LPCIO Model IT8720 LPCIO Vendor ID 0x90 LPCIO Chip ID 0x8720 LPCIO Revision ID 0x2 Config Mode I/O address 0x2E Config Mode LDN 0x4 Config Mode registers [...] Register space LPC, base address = 0x0290 Hardware Monitors ------------------------------------------------------------------------- Hardware monitor ITE IT87 Voltage 1 1.62 Volts [0x65] (VIN1) Voltage 2 1.15 Volts [0x48] (CPU VCORE) Voltage 3 5.03 Volts [0xBB] (+5V) Voltage 8 3.34 Volts [0xD1] (VBAT) Temperature 0 39°C (102°F) [0x27] (TMPIN0) Temperature 1 43°C (109°F) [0x2B] (TMPIN1) Fan 0 3096 RPM [0xDA] (FANIN0) Register space LPC, base address = 0x0290 [...] Hardware monitor AMD SB6xx/7xx Voltage 0 1.37 Volts [0x1D2] (CPU VCore) Voltage 1 3.50 Volts [0x27B] (CPU IO) Voltage 2 12.68 Volts [0x282] (+12V) Hardware monitor AMD Phenom II X2 550 Power 0 89.10 W (Processor) Temperature 0 35°C (94°F) [0x115] (Core #0) Temperature 1 35°C (94°F) [0x115] (Core #1)


  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302, 307 and Error pages are being cached. Need to force a refresh of the content. Long version: I've setup a very minimal squid instance running on a gateway which shouldn't not cache ANYTHING but needs to be solely used as a domain based web filter. I'm using another application which redirects un-authenticated users to the proxy which then uses the deny_info option redirects any non-whitelisted request to the login page. After the user has authenticated the firewall rule gets placed so they no longer get sent to the proxy. The problem is that when a user hits a website (xkcd.com) they are unauthenticated so they get redirected via the firewall: iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135 to the proxy at this point squid redirects the user to the login page using a 302 (i've also tried 307, and i've also make sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then when the user logs into the system they get firewall rule which no longer directs them to the squid proxy. But if they go to xkcd.com again they will have the original redirection page cached and will once again get the login page. Any idea how to force these redirects to NOT be cached by the browser? Perhaps this is a problem w/ the browsers and not squid, but not sure how to get around it. Full squid config below. # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 acl localnet src 192.168.182.0/23 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl https port 443 acl http port 80 acl CONNECT method CONNECT # # Disable Cache # cache deny all via off negative_ttl 0 seconds refresh_all_ims on #error_default_language en # Allow manager access only from localhost http_access allow manager localhost http_access deny manager # Deny access to anything other then http http_access deny !http # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !https visible_hostname gate.ovatn.net # Disable memory pooling memory_pools off # Never use neigh cache objects for cgi-bin scripts hierarchy_stoplist cgi-bin ? # # URL rewrite Test Settings # #acl whitelist dstdomain "/etc/squid/domains-pre.lst" #url_rewrite_program /usr/lib/squid/redirector #url_rewrite_access allow !whitelist #url_rewrite_children 5 startup=0 idle=1 concurrency=0 #http_access allow all # # Deny Info Error Test # acl whitelist dstdomain "/etc/squid/domains-pre.lst" deny_info http://login.domain.com/ whitelist #deny_info ERR_ACCESS_DENIED whitelist http_access deny !whitelist http_access allow whitelist http_port 39135 transparent ## Debug Values access_log /var/log/squid/access-pre.log cache_log /var/log/squid/cache-pre.log # Production Values #access_log /dev/null #cache_log /dev/null # Set PID file pid_filename /var/run/gatekeeper-pre.pid SOLUTION: I believe I might have found a solution to this. After days and days trying to figure it out, only through a random stumble I found client_persistent_connections off server_persistent_connections off This did the trick. So it wasn't so much cache as it was a single persistent connection messing things up. W000T!


  • Need Varnish configuration advice

    - by Patrick
    Hello fellows, I need some advice here for default.vcl. Here's the rules: Only cache pages with urls that contains '/c/', the rest will pass Set the cache expiry to 3 hours Only cache and serve from cache if cookie 'abc' and cookie 'xyz' is empty Thank you!
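    A sketch of what those rules could look like in default.vcl (Varnish 3.x syntax; the cookie regex and the assumption that only '/c/' URLs are ever cacheable are mine, so adjust as needed):

        sub vcl_recv {
            # Rule 1: only URLs containing /c/ are candidates for caching
            if (req.url !~ "/c/") {
                return (pass);
            }
            # Rule 3: skip the cache whenever an abc or xyz cookie carries a value
            if (req.http.Cookie ~ "(^|;\s*)(abc|xyz)=[^;]+") {
                return (pass);
            }
            unset req.http.Cookie;
            return (lookup);
        }

        sub vcl_fetch {
            # Rule 2: cache matching pages for 3 hours
            if (req.url ~ "/c/") {
                set beresp.ttl = 3h;
            }
            return (deliver);
        }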


  • Solaris: Is it OK to disable font services?

    - by cjavapro
    Is it OK to disable these services?

        # svcs -l '*font*'
        fmri         svc:/application/font/stfsloader:default
        name         Standard Type Services Framework (STSF) Font Server loader
        enabled      true
        state        online
        next_state   none
        state_time   Sun May 30 17:58:14 2010
        restarter    svc:/network/inetd:default

        fmri         svc:/application/font/fc-cache:default
        name         FontConfig Cache Builder
        enabled      true
        state        online
        next_state   none
        state_time   Sun May 30 17:58:15 2010
        logfile      /var/svc/log/application-font-fc-cache:default.log
        restarter    svc:/system/svc/restarter:default
        dependency   require_all/none svc:/system/filesystem/local (online)
        dependency   require_all/refresh file://localhost/etc/fonts/fonts.conf (online)
        dependency   require_all/none file://localhost/usr/bin/fc-cache (online)
        #
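    If you do decide to turn them off, the usual SMF way is svcadm (sketched below; re-enabling is the same command with enable):

        # svcadm disable svc:/application/font/stfsloader:default
        # svcadm disable svc:/application/font/fc-cache:default
        # svcs -l '*font*'    # confirm they now show state disabled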


  • Need Varnish configuration advice

    - by Patrick
    Hello fellows, I need some advice here for default.vcl. Here's the rules: 1) Only cache pages with urls that contains '/c/', the rest will pass 2) Set the cache expiry to 3 hours 3) Only cache and serve from cache if cookie 'abc' and cookie 'xyz' is empty Thank you!


  • Preventing 304 Not Modified Requests with nginx

    - by ustun
    I am running nginx, and have the following block for expiration: expires 52w; However when I use Google Chrome Developer Tools to observe network traffic, some of the assets are loaded from cache (200-from cache) while most of the assets are making a request to the server (304 Not Modified). I want to load all assets from cache without communicating with the server if possible. (200-from cache) What would be the required change in my nginx configuration?
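    A common shape for this in nginx is to scope the expires directive to the static assets and send an explicit Cache-Control alongside it; a sketch (the extension list is an assumption):

        location ~* \.(css|js|png|jpg|jpeg|gif|ico)$ {
            expires 52w;
            add_header Cache-Control "public";
        }

    Even with this in place, a browser that chooses to revalidate (for example on a reload) will still send conditional requests and receive 304s; Expires/max-age only lets it skip the request entirely during ordinary navigation within the freshness window.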


  • SQLServer:Namespaces preventing access to query data

    - by Brian
    Hi, a beginner's question, hopefully easily answered. I've got an xml file I want to load into SQLServer 2008 and extract the useful information. I'm starting simple and just trying to extract the name (\gpx\name). The code I have is:

        DECLARE @x xml;
        SELECT @x = xCol.BulkColumn
        FROM OPENROWSET (BULK 'C:\Data\EM.gpx', SINGLE_BLOB) AS xCol;

        -- confirm the xml data is in @x
        select @x as XML_Data

        -- try and get the name of the gpx section
        SELECT c.value('name[1]', 'varchar(200)') as Name
        from @x.nodes('gpx') x(c)

    Below is a heavily shortened version of the xml file:

        <?xml version="1.0" encoding="utf-8"?>
        <gpx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" version="1.0" creator="Groundspeak Pocket Query" xsi:schemaLocation="http://www.topografix.com/GPX/1/0 http://www.topografix.com/GPX/1/0/gpx.xsd http://www.groundspeak.com/cache/1/0 http://www.groundspeak.com/cache/1/0/cache.xsd" xmlns="http://www.topografix.com/GPX/1/0">
          <name>EM</name>
          <desc>Geocache file generated by Groundspeak</desc>
          <author>Groundspeak</author>
          <email>[email protected]</email>
          <time>2010-03-24T14:01:36.4931342Z</time>
          <keywords>cache, geocache, groundspeak</keywords>
          <wpt lat="51.2586" lon="-2.213067">
            <time>2008-03-30T07:00:00Z</time>
            <name>GC1APHM</name>
            <desc>Sandman's Noble Hoard by Sandman1973, Unknown Cache (2/3)</desc>
            <groundspeak:cache id="832000" available="True" archived="False" xmlns:groundspeak="http://www.groundspeak.com/cache/1/0">
              <groundspeak:name>Sandman's Noble Hoard</groundspeak:name>
              <groundspeak:placed_by>Sandman1973</groundspeak:placed_by>
            </groundspeak:cache>
          </wpt>
        </gpx>

    If the first two lines are replaced with just: <gpx> the above example works correctly, however I then can't access groundspeak:name (/gpx/wpt/groundspeak:cache/groundspeak:name), so my guess is it's a problem with the namespace. Any help would be appreciated.
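    For reference, SQL Server binds XML namespaces with WITH XMLNAMESPACES; a sketch using the names from the snippet above (the default namespace is the GPX one, and the groundspeak prefix covers the cache elements):

        ;WITH XMLNAMESPACES (
            'http://www.groundspeak.com/cache/1/0' AS groundspeak,
            DEFAULT 'http://www.topografix.com/GPX/1/0'
        )
        SELECT c.value('name[1]', 'varchar(200)') AS Name
        FROM @x.nodes('/gpx') x(c);

        -- and, with the same declaration in effect, the cache names:
        -- SELECT c.value('groundspeak:name[1]', 'varchar(200)')
        -- FROM @x.nodes('/gpx/wpt/groundspeak:cache') x(c);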


  • Python 3.1.1 Problem With Tuples

    - by Protean
    This piece of code is supposed to go through a list and perform some formatting on the items, such as removing quotations, and then save them to another list.

        class process:

            def rchr(string_i, asciivalue):
                string_o = ()
                for i in range(len(string_i)):
                    if ord(string_i[i]) != asciivalue:
                        string_o += string_i[i]
                return string_o

            def flist(self, list_i):
                cache = ()
                cache_list = []
                for line in list_i:
                    cache = line.split('\t')
                    cacbe[0] = process.rchr(str(cache[0]), 34)
                    cache_list.append(cache[0])
                    cache_list[index] = cache
                    index += 1
                cache_list.sort()
                return cache_list

        p = process()
        list1a = ['cow', 'dog', '"sheep"']
        list1 = p.flist(list1a)
        print (country_list)

    However, it chokes at 'string_o += string_i[i]' and gives the following error:

        Traceback (most recent call last):
          File "/Projects/Python/safafa.py", line 23, in <module>
            list1 = p.flist(list1a)
          File "/Projects/Python/safafa.py", line 14, in flist
            cacbe[0] = process.rchr(str(cache[0]), 34)
          File "/Projects/Python/safafa.py", line 7, in rchr
            string_o += string_i[i]
        TypeError: can only concatenate tuple (not "str") to tuple
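    The traceback is saying that string_o is a tuple while string_i[i] is a str, and += on a tuple only accepts another tuple. A minimal corrected sketch of rchr alone (the other issues in flist, such as the cacbe and index names, are separate):

        def rchr(string_i, asciivalue):
            string_o = ""                      # accumulate into a string, not a tuple
            for ch in string_i:
                if ord(ch) != asciivalue:
                    string_o += ch
            return string_o

        print(rchr('"sheep"', 34))             # -> sheep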


  • Google Chrome audit on caching

    - by Álvaro G. Vicario
    If I run an audit on my sites with Google Chrome, I get this message in the Leverage browser caching section: The following resources are missing a cache expiration. Resources that do not specify an expiration may not be cached by browsers: A list of all the pictures follows. I get a similar notice in Leverage proxy caching: Consider adding a "Cache-Control: public" header to the following resources: Apart from pictures, I also get a notice about HTML, CSS and JavaScript files: The following resources are explicitly non-cacheable. Consider making them cacheable if possible: It's funny because I've worked hard to cache all static contents (except for pictures, where I just left Apache's default settings). Firefox does indeed store all these items in cache. Is there anything I should improve in my HTTP headers? Here's the complete header set of some items as loaded after removing the browser cache. Pictures use default settings I didn't really check before, the rest should be cached for three hours. I can set headers with both .htaccess and PHP.

    PNG

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:46:14 GMT
        Server: Apache
        Last-Modified: Thu, 18 Mar 2010 21:40:54 GMT
        Etag: "c48024-230-4821a15d6c580"
        Accept-Ranges: bytes
        Content-Length: 560
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Content-Type: image/png

    HTML

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:46:13 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:46:13 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Wed, 24 Mar 2010 20:30:36 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=ISO-8859-15

    CSS

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:48:21 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:48:21 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Thu, 18 Mar 2010 21:40:12 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/css

    JavaScript

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:48:21 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:48:21 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Thu, 18 Mar 2010 21:40:12 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: application/x-javascript

    Update

    I've tested Jumby's suggestion and set my CSS's expire to 1 year:

        Cache-Control:max-age=31536000, s-maxage=31536000, must-revalidate, proxy-revalidate
        Connection:Keep-Alive
        Content-Encoding:gzip
        Content-Length:4198
        Content-Type:text/css
        Date:Mon, 02 Aug 2010 20:48:56 GMT
        Expires:Tue, 02 Aug 2011 20:48:56 GMT
        Keep-Alive:timeout=5, max=99
        Last-Modified:Thu, 18 Mar 2010 20:40:12 GMT
        Server:Apache/2.2.14 (Win32) PHP/5.3.1
        Vary:Accept-Encoding
        X-Powered-By:PHP/5.3.1

    However, Chrome still claims "explicitly non-cacheable".
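    Since the images are on Apache defaults, one way to give them the missing expiration is mod_expires plus mod_headers in .htaccess; a sketch (the one-year lifetime and the extension list are assumptions):

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png "access plus 1 year"
            ExpiresByType image/jpeg "access plus 1 year"
            ExpiresByType image/gif "access plus 1 year"
        </IfModule>
        <IfModule mod_headers.c>
            <FilesMatch "\.(png|jpe?g|gif)$">
                Header set Cache-Control "public, max-age=31536000"
            </FilesMatch>
        </IfModule>

    For the PHP-generated HTML/CSS/JS, one guess is that the audit treats must-revalidate/proxy-revalidate as non-cacheable even though a max-age is present, so testing whether a plain "Cache-Control: public, max-age=10800" changes the result may be worthwhile.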

    Read the article

  • Django Per-site caching using memcached

    - by Paul
    Hi. So I'm using per-site caching on a project and I've observed the following, which is kind of confusing. When I load a flat page in my browser, then change it through admin, and then do a refresh (within the cache timeout), there is no change in the page--as expected. However, when I start a new session in a different browser and load the page (still within the timeout), the app is hit instead of the cache. Isn't the cache key generated from the URL? It seems that the session state is getting in there somewhere, which is causing a cache miss. Any ideas? Thanks.

    MIDDLEWARE_CLASSES = (
        'django.middleware.cache.UpdateCacheMiddleware',
        'django.middleware.common.CommonMiddleware',
        'django.contrib.sessions.middleware.SessionMiddleware',
        'django.contrib.auth.middleware.AuthenticationMiddleware',
        'django.middleware.gzip.GZipMiddleware',
        'django.middleware.http.ConditionalGetMiddleware',
        'django.middleware.doc.XViewMiddleware',
        'ittybitty.middleware.IttyBittyURLMiddleware',
        'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware',
        'maintenancemode.middleware.MaintenanceModeMiddleware',
        'djangodblog.middleware.DBLogMiddleware',
        'SSL.middleware.SSLRedirect',  # SSL middleware to handle SSL
        'django.middleware.cache.FetchFromCacheMiddleware',
    )
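    For context, Django's per-site cache middleware keys cached pages not only on the URL but also on any request headers named in the response's Vary header; once session state is touched, SessionMiddleware adds Vary: Cookie, so two browsers with different session cookies produce different cache keys, which would explain the miss described above. A minimal settings sketch for per-site caching of that era follows; the setting names reflect older Django releases and the values are illustrative assumptions, not taken from the question:

    # settings.py -- illustrative per-site caching setup (pre-Django 1.3 style)
    CACHE_BACKEND = 'memcached://127.0.0.1:11211/'    # old-style cache backend string
    CACHE_MIDDLEWARE_SECONDS = 600                    # how long whole pages stay cached
    CACHE_MIDDLEWARE_KEY_PREFIX = 'mysite'
    CACHE_MIDDLEWARE_ANONYMOUS_ONLY = True            # don't cache pages for logged-in users

    MIDDLEWARE_CLASSES = (
        'django.middleware.cache.UpdateCacheMiddleware',      # must come first
        # ... session, auth, gzip, flatpages, etc. ...
        'django.middleware.cache.FetchFromCacheMiddleware',   # must come last
    )

    Keeping session access out of the cached views (or stripping Vary: Cookie from their responses) restores a single shared cache entry per URL.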

    Read the article

  • Fragment caching

    - by red5
    I would like to fragment cache part of a page. In the view I have:

    <% cache("saved_area") do %>
      .
    <% end -%>

    In the controller:

    def index
      read_fragment("saved_area")
    end

    In config/production:

    config.cache_store = :file_store, File.join(RAILS_ROOT, 'tmp', 'cache')

    The file was created in the tmp/cache directory. But I am not sure if the cache is being used in the request, since I presume there should be a line in the log stating that the cache is being used (and there is not).

    Read the article

  • How to configure Grails 1.2 with OSCache?

    - by wavelet
    I do this in DataSource.groovy:

    hibernate {
        cache.use_second_level_cache=true
        cache.use_query_cache=true
        cache.provider_class='com.opensymphony.oscache.hibernate.OSCacheProvider'
    }

    and in BuildConfig.groovy:

    inherits( "global" ) {
        // uncomment to disable ehcache
        excludes 'ehcache'
    }
    runtime ("opensymphony:oscache:2.4.1") {
        excludes 'jms', 'commons-logging', 'servlet-api'
    }

    but I only get this error:

    commons.DefaultGrailsApplication The class [com.ai.scenter.service.reschange.ResourceChangeAdapter] was not found when attempting to load Grails application. Skipping.
    commons.DefaultGrailsApplication The class [com.ai.scenter.service.reschange.ResourceChangeService] was not found when attempting to load Grails application. Skipping.
    hibernate.ConfigurableLocalSessionFactoryBean There was an error configuring the Hibernate second level cache: could not instantiate CacheProvider [com.opensymphony.oscache.hibernate.OSCacheProvide]
    hibernate.ConfigurableLocalSessionFactoryBean This is normally due to one of two reasons. Either you have incorrectly specified the cache provider class name in [DataSource.groovy] or you do not have the cache provider on your classpath (eg. runtime ("net.sf.ehcache:ehcache:1.6.1"))

    How can I do it?

    Read the article

  • Java memory mapped files and swap

    - by MarkS
    I'm looking at some memory-mapped files in Java. Let's say I have a heap size set to 2GB, and I memory-map a file that is 50GB - far more than the physical memory on the machine. The OS will cache parts of that 50GB file in the OS file cache; the Java process will have 2GB of heap space. What I'm curious about is how the OS decides how much of the 50GB file to cache. For instance, if I have another Java process, also with a 2GB heap size, will that 2GB be swapped out to allow the OS to cache parts of the memory-mapped file? Will parts of the heap space of the first process be swapped out to allow the OS to cache? Is there any way to tell the OS not to swap heap space for OS caching? If the OS doesn't swap out main processes, how does it determine how big its file cache should be?

    Read the article

  • assistance with classifying tests

    - by amateur
    I have a .NET C# library that I created, and I am currently writing unit tests for it. At present I am writing tests for a cache provider class I created. Being new to writing unit tests, I have two questions:

    1) My cache provider class is the abstraction layer to my distributed cache, AppFabric. Testing aspects of the class such as adding to the AppFabric cache, removing from the cache, etc. therefore involves communicating with AppFabric. Are such tests still categorised as unit tests, or are they integration tests?

    2) Because the methods above interact with AppFabric, I would like to time them; if they take longer than a specified benchmark, the tests fail. Again I ask: can this performance benchmark test be classified as a unit test?

    The way I have my tests set up, I want to group all unit tests together, all integration tests together, and so on, which is why I ask. I would appreciate any input.

    Read the article

  • change Magento to new server, images of existing products do not show in frontend

    - by forrest gump
    I have moved a Magento website to a new server to optimize speed. All images appear to be replaced by image placeholders. I tried setting the permissions on media to 777; when I delete the media cache folder and refresh, a new cache folder is created automatically, but only with placeholder images inside. I flushed the image cache and reindexed in the Magento backend, but still no hope. What should I do now? Some things that might be helpful:

    - It's fine to create a new product with images.
    - After flushing the cache, only placeholder images are created.
    - I tried using magento-cleanup.php, no hope.
    - I tried removing the .htaccess in the media folder.
    - The images of the products are there in the media folder, just not in the cache folder, and Magento only looks at the cache folder for images :(

    Read the article

  • Difference between redirecting to a page and coming to the same page after pressing back button

    - by Mac
    I have a page on which I disable caching with this code:

    HttpContext.Current.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));
    HttpContext.Current.Response.Cache.SetValidUntilExpires(false);
    HttpContext.Current.Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
    HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache);
    HttpContext.Current.Response.Cache.SetNoStore();

    Now I want to know: is there any difference between reaching this page through a proper link and coming back to it with the browser's back button, and is there any way to detect which happened?

    Read the article

  • Output caching in HTTP Handler and SetValidUntilExpires

    - by mayor
    I'm using output caching in my custom HTTP handler in the following way:

    public void ProcessRequest(HttpContext context)
    {
        TimeSpan freshness = new TimeSpan(0, 0, 0, 60);
        context.Response.Cache.SetExpires(DateTime.Now.Add(freshness));
        context.Response.Cache.SetMaxAge(freshness);
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetValidUntilExpires(true);
        ...
    }

    It works, but the problem is that refreshing the page with F5 leads to page regeneration (instead of cache usage) despite the last code line:

    context.Response.Cache.SetValidUntilExpires(true);

    Any suggestions?

    Read the article

  • How to set the monitor to its native resolution when xrandr approach isn't working?

    - by Krishna Kant Sharma
    I am trying to set up my Samsung SyncMaster B2030 monitor in Ubuntu 12.04. Its native resolution is 1600x900, which I am not getting in Ubuntu and which I am trying to get. I tried the xrandr approach described in these URLs:

    1) http://www.ubuntugeek.com/how-change-display-resolution-settings-using-xrandr.html
    2) How to set the monitor to its native resolution which is not listed in the resolutions list?

    S1) I used cvt 1600 900 60 to get the modeline. The output was:

    # 1600x900 59.95 Hz (CVT 1.44M9) hsync: 55.99 kHz; pclk: 118.25 MHz
    Modeline "1600x900_60.00"  118.25  1600 1696 1856 2112  900 903 908 934 -hsync +vsync

    S2) I then ran xrandr and the output was:

    Screen 0: minimum 8 x 8, current 1152 x 864, maximum 8192 x 8192
    DVI-I-0 disconnected (normal left inverted right x axis y axis)
    VGA-0 connected 1152x864+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
       1024x768       60.0 +
       1360x768       60.0     59.8
       1152x864       60.0*
       800x600        72.2     60.3     56.2
       680x384       119.9    119.6
       640x480        59.9
       512x384       120.0
       400x300       144.4
       320x240       120.1
    DVI-I-1 disconnected (normal left inverted right x axis y axis)
    HDMI-0 disconnected (normal left inverted right x axis y axis)

    which gave me "VGA-0".

    S3) Then I used:

    xrandr --newmode "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync

    But instead of adding the modeline it just threw an error:

    X Error of failed request:  BadName (named color or font does not exist)
      Major opcode of failed request:  153 (RANDR)
      Minor opcode of failed request:  16 (RRCreateMode)
      Serial number of failed request:  29
      Current serial number in output stream:  29

    My system details:

    1) Ubuntu 12.04 LTS
    2) Graphics card: GeForce 9400 GT/PCIe/SSE2 (the driver is successfully installed; I checked in System Settings > Details, and it shows that the driver is installed and is "GeForce 9400 GT/PCIe/SSE2")
    3) Monitor: Samsung SyncMaster B2030
    4) Resolutions I am getting:
       800x600
       1024x768
       1152x864 (I am currently using this one)
       1360x768 (this one isn't working properly)

    Does anyone know what I can do? Thanks in advance.

    UPDATE (1): Today I tried it again, and adding a modeline (using --newmode) worked. But when I used --addmode with:

    xrandr --addmode VGA-0 1600x900_60.00

    it gave this error:

    X Error of failed request:  BadMatch (invalid parameter attributes)
      Major opcode of failed request:  153 (RANDR)
      Minor opcode of failed request:  18 (RRAddOutputMode)
      Serial number of failed request:  29
      Current serial number in output stream:  30

    Read the article

  • Can't Run Assault Cube

    - by Debashis Pradhan
    I installed Assault Cube from the Software Centre and it just opens for half a second and closes. When I run it from the terminal, this is what I get:

    d@d-platform:~$ assaultcube
    Using home directory: /home/d/.assaultcube_v1.104
    current locale: en_IN
    init: sdl
    init: net
    init: world
    init: video: sdl
    init: video: mode
    X Error of failed request:  BadValue (integer parameter out of range for operation)
      Major opcode of failed request:  129 (XFree86-VidModeExtension)
      Minor opcode of failed request:  10 (XF86VidModeSwitchToMode)
      Value in failed request:  0xb3
      Serial number of failed request:  131
      Current serial number in output stream:  133

    Read the article

  • OpenGL and switchable graphics cards

    - by Orcun
    I use a laptop that has an AMD Radeon HD 6470M and an onboard graphics card. When I run fglrxinfo, I get this error:

    X Error of failed request:  BadRequest (invalid request code or no such operation)
      Major opcode of failed request:  136 (GLX)
      Minor opcode of failed request:  19 (X_GLXQueryServerString)
      Serial number of failed request:  12
      Current serial number in output stream:  12

    Is it a problem? I assume this is the reason I can't use OpenGL, because I can't run any OpenGL applications.

    Read the article

  • Error while running unity_support_test

    - by Tojo Chacko
    My unity session won't start and it always gives me a segmentation fault. So I tried running /usr/lib/unity_support_test but this too gave me some error:

    X Error of failed request:  BadRequest (invalid request code or no such operation)
      Major opcode of failed request:  136 (GLX)
      Minor opcode of failed request:  19 (X_GLXQueryServerString)
      Serial number of failed request:  22
      Current serial number in output stream:  22

    What does this mean? Doesn't my machine satisfy unity's requirements?

    Read the article

  • emachines e725 resolution problem

    - by Montasir
    I am new to Ubuntu. I cannot set my resolution to 1366x768; all the options in the graphical display settings go up to 1024x768 only. I tried cvt 1366 768 and then:

    xrandr --newmode "1368x768_60.00" 85.86 1368 1440 1584 1800 768 769 772 795 -HSync +Vsync

    and got:

    xrandr: Failed to get size of gamma for output default
    X Error of failed request:  BadName (named color or font does not exist)
      Major opcode of failed request:  149 (RANDR)
      Minor opcode of failed request:  16 (RRCreateMode)
      Serial number of failed request:  19
      Current serial number in output stream:  19

    What can I do?

    Read the article

< Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >