Search Results

Search found 23576 results on 944 pages for 'case study'.


  • Know Your Audience, And/Or Your Customer

    - by steve.diamond
    Yesterday I gave an internal presentation to about 20 Oracle employees on "messaging," not messaging technology, but embarking on the process of building messages. One of the elements I covered was the importance of really knowing and understanding your audience. As a humorous reference I included two side-by-side photos of Oakland A's fans and Oakland Raiders fans. The Oakland A's fans looked like happy-go-lucky drunk types. The Oakland Raiders fans looked like angry extras from a low-budget horror flick. I then asked my presentation attendees what these two groups had in common. Here's what I heard:
    -- They're human (at least I THINK they're human).
    -- They're from Oakland.
    -- They're sports fans.
    After that, it was anyone's guess. A few days earlier we were putting the finishing touches on a sales presentation for one of our product lines. We had included an upfront "lead-in" addressing how the economy is improving, yet that doesn't mean sales executives will have any more resources to add to their teams, invest in technology, etc. This "lead-in" included miscellaneous news article headlines and statistics validating the slowly improving economy. When we subjected this presentation to internal review two days ago, this upfront section in particular was scrutinized. "Is the economy really getting better? I (exclamation point) don't think it's really getting better. Haven't you seen the headlines coming out of Greece and Europe?" Then the question TO ME became, "Who will actually be in the audience that sees and hears this presentation? Will s/he be someone like me? Or will s/he be someone like the critic who didn't like our lead-in?" We took the safe route and removed that lead-in. After all, why start a "pitch" with a component that is arguably subjective? What if many of our audience members are individuals at organizations still facing a strong headwind? For reasons I won't go into here, it was the right decision to make. The moral of the story: Make sure you really know your audience. Harness the wisdom of the information your organization's CRM systems collect to get that fully informed "customer view." Conduct formal research. Conduct INFORMAL research. Ask lots of questions. Study industries and scenarios that have nothing to do with yours to see "how they do it." Stop strangers in coffee shops and on the street...seriously. Last week I caught up with an old friend from high school who recently retired from a 25-year career with the USMC. He said, "I can learn something from every single person I come into contact with." What a great way of approaching the world. Then, think about and write down what YOU like and dislike as a customer. But also remember that when it comes to your company's products, you are most likely NOT the customer, so don't go overboard in superimposing your own world view. Approaching the study of customers this way adds rhyme, reason and CONTEXT to lengthy blog posts like this one. Know your audience.

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving, so I'm republishing it here.

    How I Got 15x Improvement Without Really Trying
    John Feo, Sun Microsystems

    Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple, general optimization techniques.

    Introduction

    Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that monies supporting scientific research are used as effectively as possible.

    Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor to student or student to student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

    Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

    Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.
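    As an illustration of how small the change can be, here is a minimal OpenMP sketch in C (the routine, array names, and bounds are invented for illustration and are not taken from any of the ten codes): when the iterations of an outer loop are independent, a single directive distributes them across threads.

      /* Hypothetical relaxation sweep: rows are independent, so the outer
         loop can be shared among threads with one directive. */
      void sweep(int n, double a[n][n], double b[n][n])
      {
          #pragma omp parallel for
          for (int i = 1; i < n - 1; i++)
              for (int j = 1; j < n - 1; j++)
                  a[i][j] = 0.25 * (b[i-1][j] + b[i+1][j] + b[i][j-1] + b[i][j+1]);
      }

    Compiled with an OpenMP flag (for example, cc -fopenmp), the loop runs in parallel; without the flag the pragma is ignored and the code behaves exactly as before.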
    Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliations are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study's results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices, but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well-known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

    Table 1 — Areas of improvement and performance gains of the 10 codes. The areas of improvement were cache performance, redundant operations, and loop structures; single-processor gains were: code 1, 15.5x; code 2, 2.8x; code 3, 2.5x; code 4, 2.1x; code 5, 2.0x; code 6, 5.0x; code 7, 5.8x; code 8, 6.3x; code 9, 2.2x; code 10, 3.3x.

    The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

    Optimizing cache performance

    Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. Six of the ten codes studied here benefited from such high-level optimizations.
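    To make the cache-order point concrete, here is a small C sketch (array name and size invented for illustration) of the same reduction written in row order, which matches C's row-major layout, and in column order, which does not; for arrays much larger than cache, the second version typically runs several times slower.

      #define N 2048
      static double a[N][N];

      /* Row order: consecutive accesses are adjacent in memory. */
      double sum_row_order(void)
      {
          double s = 0.0;
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++)
                  s += a[i][j];
          return s;
      }

      /* Column order: consecutive accesses are N doubles apart, so nearly
         every access misses the cache. */
      double sum_col_order(void)
      {
          double s = 0.0;
          for (int j = 0; j < N; j++)
              for (int i = 0; i < N; i++)
                  s += a[i][j];
          return s;
      }

    The Fortran rule is the mirror image: the leftmost subscript should vary fastest, so the loop over it belongs innermost.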
    Array Accesses

    The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

      do I = 0, 1010, delta_x
        IM = I - delta_x
        IP = I + delta_x
        do J = 5, 995, delta_x
          JM = J - delta_x
          JP = J + delta_x
          T1 = CA1(IP, J) + CA1(I, JP)
          T2 = CA1(IM, J) + CA1(I, JM)
          S1 = T1 + T2 - 4 * CA1(I, J)
          CA(I, J) = CA1(I, J) + D * S1
        end do
      end do

    In code 2, the culprit is conditionals:

      do I = 1, N
        do J = 1, N
          If (IFLAG(I,J) .EQ. 0) then
            T1 = Value(I, J-1)
            T2 = Value(I-1, J)
            T3 = Value(I, J)
            T4 = Value(I+1, J)
            T5 = Value(I, J+1)
            Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
            Delta = ABS(T3 - Value(I,J))
            If (Delta .GT. MaxDelta) MaxDelta = Delta
          endif
        enddo
      enddo

    I fixed both programs by inverting the loops by hand.

    Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

    Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

      L1: for i        L2: for i        L3: for i
            for l            for l            for j
              for k            for j            for k
                for j            for k            for l

    So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses, but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.
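    What "inverting the loops by hand" amounts to in C is the mirror image of the Fortran fix above. A minimal C sketch (function and array names invented for illustration): the conditional stays in the body and only the loop order changes, so that the inner loop walks along a row of the row-major array.

      /* value and flag are n-by-n row-major arrays. */
      void relax(int n, double value[n][n], int flag[n][n])
      {
          /* Originally the j loop was outermost, so the inner i loop strode
             down a column. Interchanging the loops gives unit-stride access. */
          for (int i = 1; i < n - 1; i++)
              for (int j = 1; j < n - 1; j++)
                  if (flag[i][j] == 0)
                      value[i][j] = 0.25 * (value[i][j-1] + value[i-1][j]
                                          + value[i+1][j] + value[i][j+1]);
      }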
    Array Strides

    When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed with stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.

    Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into contiguous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit-stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

      do j = 1, GZ
        do i = 1, GZ
          T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
          T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
          S1 = T1 + T4 - 4 * CA1(i+0, j+0)
          CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
        enddo
      enddo

    where CA and CA1 are compressed arrays of size GZ.

    Code 7 traverses a list of objects, selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.
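    A sketch of the code 7 style of fix in C (the structure, field names, and selection criterion are invented for illustration): the selection pass gathers the fields the later steps will need into dense temporaries, so the processing pass reads with unit stride instead of chasing labels through a large array of objects.

      typedef struct { double mass; double radius; int selected; } object_t;

      /* Selection pass: gather needed fields into contiguous temporaries. */
      int select_objects(int n, const object_t *obj,
                         double *mass_tmp, double *radius_tmp)
      {
          int m = 0;
          for (int i = 0; i < n; i++) {
              if (obj[i].selected) {
                  mass_tmp[m]   = obj[i].mass;
                  radius_tmp[m] = obj[i].radius;
                  m++;
              }
          }
          return m;   /* number of objects selected */
      }

      /* Processing pass: runs over dense, unit-stride arrays. */
      void process(int m, const double *mass_tmp, const double *radius_tmp,
                   double *out)
      {
          for (int i = 0; i < m; i++)
              out[i] = mass_tmp[i] * radius_tmp[i] * radius_tmp[i];
      }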
    Data reuse

    In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

    In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4:

      do J = 1, GZ-2, 2
        do I = 1, GZ-2, 2
          T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
          T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
          T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
          T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
          T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
          T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
          T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
          S1 = T1 + T4 - 4 * CA1(i+0, j+0)
          S2 = T2 + T5 - 4 * CA1(i+1, j+0)
          S3 = T3 + T6 - 4 * CA1(i+0, j+1)
          S4 = T4 + T7 - 4 * CA1(i+1, j+1)
          CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
          CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
          CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
          CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
        enddo
      enddo

    The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.

    In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before

      for (k = 0; k < NK[u]; k++) {
          sum = 0.0;
          for (y = 0; y < NY; y++) {
              sum += W[y][u][k] * delta[y];
          }
          backprop[i++] = sum;
      }

    and after

      for (k = 0; k < KK - 8; k += 8) {
          sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
          sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
          for (y = 0; y < NY; y++) {
              sum0 += W[y][0][k+0] * delta[y];
              sum1 += W[y][0][k+1] * delta[y];
              sum2 += W[y][0][k+2] * delta[y];
              sum3 += W[y][0][k+3] * delta[y];
              sum4 += W[y][0][k+4] * delta[y];
              sum5 += W[y][0][k+5] * delta[y];
              sum6 += W[y][0][k+6] * delta[y];
              sum7 += W[y][0][k+7] * delta[y];
          }
          backprop[k+0] = sum0; backprop[k+1] = sum1;
          backprop[k+2] = sum2; backprop[k+3] = sum3;
          backprop[k+4] = sum4; backprop[k+5] = sum5;
          backprop[k+6] = sum6; backprop[k+7] = sum7;
      }

    for one of the loops unrolled 8 times.

    Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.
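    Loop fusion does not appear in any of the ten codes, so for completeness here is a generic C sketch of the idea (arrays and function name invented for illustration): the fused loop touches each a[i] once, while it is still in a register, instead of streaming the array through the cache twice.

      void fuse_example(int n, const double *a, double *b, double *c)
      {
          /* Unfused: a[] is streamed through the cache in both loops. */
          for (int i = 0; i < n; i++) b[i] = 2.0 * a[i];
          for (int i = 0; i < n; i++) c[i] = a[i] + b[i];

          /* Fused: each a[i] and the freshly computed b[i] are reused
             immediately, halving the memory traffic for a[] and b[]. */
          for (int i = 0; i < n; i++) {
              b[i] = 2.0 * a[i];
              c[i] = a[i] + b[i];
          }
      }

    (The sketch performs the work twice only to show both forms side by side; real code would keep just the fused loop.)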
    Reducing instruction count

    Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

    Memory operations

    The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions, since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because, in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

    Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

      for (y = 0; y < NY; y++) {
          i = 0;
          for (u = 0; u < NU; u++) {
              for (k = 0; k < NK[u]; k++) {
                  dW[y][u][k] += delta[y] * I1[i++];
              }
          }
      }

    Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. In reality, dW and delta do not overlap in memory, so I rewrote the loop as

      for (y = 0; y < NY; y++) {
          i = 0;
          Dy = delta[y];
          for (u = 0; u < NU; u++) {
              for (k = 0; k < NK[u]; k++) {
                  dW[y][u][k] += Dy * I1[i++];
              }
          }
      }

    Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

      #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + \
                                          (i)*(a)->strides[3] + (j)*(a)->strides[2] + \
                                          (k)*(a)->strides[1])

    The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

      a0 = MAT4D(a, q, 0, j, k)

    before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i].

    A similar problem appears in code 6, a Fortran program. The key loop in this program is

      do n1 = 1, nh
        nx1 = (n1 - 1) / nz + 1
        nz1 = n1 - nz * (nx1 - 1)
        do n2 = 1, nh
          nx2 = (n2 - 1) / nz + 1
          nz2 = n2 - nz * (nx2 - 1)
          ndx = nx2 - nx1
          ndy = nz2 - nz1
          gxx = grn(1,ndx,ndy)
          gyy = grn(2,ndx,ndy)
          gxy = grn(3,ndx,ndy)
          balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
          balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
        end do
      end do

    The programmer has written this loop well—there are no loop-invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.
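    Although the article relies on hand hoisting, it is worth noting that C99 added a standard way to make the same promise to the compiler: the restrict qualifier asserts that the pointers involved do not alias. A hedged sketch, with the arrays flattened to one dimension and the signature invented for illustration:

      /* restrict promises the compiler that dw, delta, and i1 never overlap,
         so delta[y] can legally be kept in a register across the inner loop. */
      void accumulate(int ny, int nk,
                      double *restrict dw,
                      const double *restrict delta,
                      const double *restrict i1)
      {
          for (int y = 0; y < ny; y++)
              for (int k = 0; k < nk; k++)
                  dw[y * nk + k] += delta[y] * i1[k];
      }

    Whether a given compiler actually exploits the qualifier varies, so the explicit scalar copy (Dy above) remains the portable fix.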
    Data operations

    Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

      for (i = 0; i < N; i += 8) {
          for (j = 0; j < M; j++) {
              sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
              sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
              for (k = 0; k < K; k++) {
                  sum0 += A[i+0][k] * B[j][k];
                  sum1 += A[i+1][k] * B[j][k];
                  sum2 += A[i+2][k] * B[j][k];
                  sum3 += A[i+3][k] * B[j][k];
                  sum4 += A[i+4][k] * B[j][k];
                  sum5 += A[i+5][k] * B[j][k];
                  sum6 += A[i+6][k] * B[j][k];
                  sum7 += A[i+7][k] * B[j][k];
              }
              C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3;
              C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7;
          }
      }

    This change requires knowledge of a typical run, i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

    In code 5, we have the data version of the index optimization in code 6. Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

    The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

      for (j = 0; j < N; j++) {
          for (i = 0; i < M; i++) {
              r = i * hrmax;
              R = A[j];
              temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
              high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
              low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
              kap = (R > PRM[6]) ?
                    high * R * R / (1.0 + pow(PRM[4] * r, 2.0)) :
                    low * pow(R / PRM[6], PRM[5]);
              < rest of loop omitted >
          }
      }

    Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array:

      for (i = 0; i < M; i++) {
          r = i * hrmax;
          TEMP[i] = pow(r, PRM[3]);
      }

    [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

      for (j = 0; j < N; j++) {
          R = rig[j] / 1000.;
          tmp1 = kcoeff * par[2] * beta[j] * par[4];
          tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
          tmp3 = 1.0 + (par[4] * par[4] * R * R);
          tmp4 = par[6] * par[6] / tmp2;
          tmp5 = R * R / tmp3;
          tmp6 = pow(R / par[6], par[5]);
          if ((par[3] == 0.0) && (R > par[6])) {
              for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5;
          } else if ((par[3] == 0.0) && (R <= par[6])) {
              for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6;
          } else if ((par[3] != 0.0) && (R > par[6])) {
              for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5;
          } else if ((par[3] != 0.0) && (R <= par[6])) {
              for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
          }
          for (i = 0; i < M; i++) {
              kap = KAP[i];
              r = i * hrmax;
              < rest of loop omitted >
          }
      }

    Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.
    Copy operations

    Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

    Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying, but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

    The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.
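    In C the pointer swap looks like the following sketch (names and the compute kernel are invented for illustration); the Fortran alternative described above plays the same trick with an extra array dimension and the two integer indices old and new.

      void compute(int n, double *dst, const double *src);   /* hypothetical kernel */

      void iterate(int n, int max_iter, double *buf0, double *buf1)
      {
          double *old_vals = buf0;
          double *new_vals = buf1;

          for (int iter = 0; iter < max_iter; iter++) {
              compute(n, new_vals, old_vals);   /* write new values from old */

              /* Swap the pointers instead of copying n values back. */
              double *tmp = old_vals;
              old_vals = new_vals;
              new_vals = tmp;
          }
      }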
    Optimizing loop structures

    Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

      MaxDelta = 0.0
      do J = 1, N
        do I = 1, M
          < code omitted >
          Delta = abs(OldValue - NewValue)
          if (Delta > MaxDelta) MaxDelta = Delta
        enddo
      enddo
      if (MaxDelta .gt. 0.001) goto 200

    Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

      MaxDelta = .false.
      do J = 1, N
        do I = 1, M
          < code omitted >
          Delta = abs(OldValue - NewValue)
          MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
        enddo
      enddo
      if (MaxDelta) goto 200

    thereby eliminating the conditional expression from the inner loop.

    A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays. I found such an occasion in code 10, where I split the loop

      do i = 1, n
        do j = 1, m
          A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
          B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
          A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
          B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
          C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
          D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
          C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
          D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
        end do
      end do

    into two disjoint loops

      do i = 1, n
        do j = 1, m
          A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
          B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
          A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
          B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
        end do
      end do

      do i = 1, n
        do j = 1, m
          C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
          D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
          C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
          D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
        end do
      end do
    Conclusions

    Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single-processor performance of all codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

    Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization.

    I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

    About the Author

    John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • FairWarning Privacy Monitoring Solutions Rely on MySQL to Secure Patient Data

    - by Rebecca Hansen
    FairWarning® solutions have audited well over 120 billion events, each of which was processed and stored in a MySQL database. FairWarning is the world's leading supplier of privacy monitoring solutions for electronic health records, relied on by over 1,200 hospitals and 5,000 clinics to keep their patients' data safe. In January 2014, FairWarning was awarded the highest commendation in healthcare IT as the first ever Category Leader for Patient Privacy Monitoring in the "2013 Best in KLAS: Software & Services" report[1]. FairWarning has used MySQL as their solutions' database from their start in 2005 to worldwide expansion and market leadership. FairWarning recently migrated their solutions from MyISAM to InnoDB and updated from MySQL 5.5 to 5.6. Following are some of the benefits they've had as a result of those changes, and reasons for their continued reliance on MySQL (from the FairWarning MySQL Case Study).

    Scalability to Handle Terabytes of Data
    FairWarning's customers have a lot of data: on average, FairWarning customers receive over 700,000 events to be processed daily. Over 25% of their customers receive over 30 million events per day, which equates to over 1 billion events and nearly one terabyte (TB) of new data each month. Databases range in size from a few hundred GBs to 10+ TBs for enterprise deployments (data are rolled off after 13 months).

    Low or Zero Admin = Few DBAs
    "MySQL has not required a lot of administration. After it's been tuned, configured, and optimized for size on initial setup, we have very low administrative costs. I can scale and add more customers without adding DBAs. This has had a big, positive impact on our business." - Chris Arnold, FairWarning Vice President of Product Management and Engineering

    Performance Schema
    As the size of FairWarning's customers has increased, so have their tables and data volumes. MySQL 5.6's new maintenance and management features have helped FairWarning keep up. In particular, the MySQL 5.6 performance schema's low-level metrics have provided critical insight into how the system is performing and why.

    Support for Multi-CPU Threads
    MySQL 5.6's support for multiple concurrent CPU threads, together with FairWarning's custom data loader, allows multiple files to load into a single table simultaneously rather than one at a time. As a result, their data load time has been reduced by 500%.

    MySQL Enterprise Hot Backup
    Because hospitals and clinics never stop, FairWarning solutions can't either. FairWarning changed from using mysqldump to MySQL Enterprise Hot Backup, which has reduced downtime, restore time, and storage requirements. For many of their larger customers, restore time has decreased by 80%.

    MySQL Enterprise Edition and Product Roadmap Provide Complete Solution
    "MySQL's product roadmap fully addresses our needs. We like the fact that MySQL Enterprise Edition has everything included; there's no need to purchase separate modules." - Chris Arnold

    Learn More >>
    FairWarning MySQL Case Study
    Why MySQL 5.6 is an Even Better Embedded Database for Your Products presentation
    Updating Your Products to MySQL 5.6, Best Practices for OEMs on-demand webinar (audio and/or slides + Q&A transcript)
    MyISAM to InnoDB – Why and How on-demand webinar (audio and/or slides + Q&A transcript)
    Top 10 Reasons to Use MySQL as an Embedded Database white paper

    [1] 2013 Best in KLAS: Software & Services report, January 2014. © 2014 KLAS Enterprises, LLC. All rights reserved.

    Read the article

  • Introducing Oracle User Productivity Kit (UPK) 12.1 Thursday 26th June 2014 – Oracle, Reading, Berkshire

    - by Kathryn Lustenberger
    Join Oracle UPK Product Management and Product Development, in conjunction with Larmer Brown. Register Now.

    UPK Client Event – Introducing v12.1
    Thursday 26th June 2014
    Oracle, Thames Valley Park, Reading, Berkshire

    Agenda
    10.00am  Registration and Coffee
    10.30am  Introductions and Objectives
    TWIN TRACK SESSION
    10.45am  Track 1: Introduction to UPK (Standard) Version 12.1 – Overview and demonstration for delegates new to UPK
             Track 2: Upgrading to UPK (Standard) Version 12.1 – Demonstration of the latest release, for delegates with experience of UPK
    12.25pm  Q&A – An opportunity for delegates to raise specific questions about the tool (Track 1) or about the latest release (Track 2)
    12.45pm  Lunch
    1.30pm   Larmer Brown Development Tracker – Larmer Brown's Development Tracker addresses the challenge of ensuring that a Content Development Project will meet agreed deadlines, identifying risks with sufficient notice to take action
    1.50pm   Case Study – How the Development Tracker addressed this client's requirement to track, monitor and report progress on a large-scale implementation project
    2.10pm   Larmer Brown Library Content for UPK – This session will showcase some of Larmer Brown's content library and consider how pre-built content can be used to your advantage
    2.30pm   Coffee Break
    2.45pm   Making the most of UPK Professional – This presentation and demonstration seeks to unlock the potential of UPK Professional for those that may not be fully utilising the tool
    3.20pm   Case Study – How this client has utilised the tracking and reporting features within UPK Professional
    3.40pm   Summary and Conclusions
    4.00pm   Close

    Read the article

  • Live Event: OTN Architect Day: Cloud Computing - Two weeks and counting

    - by Bob Rhubart
    In just two weeks, architects and others will gather at the Oracle Conference Center in Redwood Shores, CA for the first Oracle Technology Network Architect Day event of 2013. This event focuses on Cloud Computing, and features sessions built around real-world examples of cloud computing implementations.

    When: Tuesday, July 9, 2013, 8:30am - 12:30pm
    Where: Oracle Conference Center, 350 Oracle Pkwy, Redwood City, CA 94065

    Register now. It's free! Here's the agenda:

    8:30am - 9:00am  Registration and Continental Breakfast

    9:00am - 9:45am  Keynote: 21st Century IT | Dr. James Baty, VP, Global Enterprise Architecture Program, Oracle
    Imagine a time long, long ago. A time when servers were certified and dedicated to specific applications, when anything posted on an enterprise web site was from restricted, approved channels, and when we tried to limit the growth of 'dirty' data and storage. Today, applications are services running in the multi-tenant hybrid cloud. Companies beg their customers to tweet them, friend them, and publicly rate their products. And constantly analyzing a deluge of Internet, social and sensor data is the key to creating the next super-successful product, or capturing an evil terrorist. The old IT architecture was planned, dedicated, stable, controlled, with separate and well-defined roles. The new architecture is shared, dynamic, continuous, XaaS, DevOps. This keynote session describes the challenges and opportunities that the new business / IT paradigms present to the IT architecture and architects.

    9:45am - 10:30am  Technical Session: Oracle Cloud: A Case Study in Building a Cloud | Anbu Krishnaswami, Enterprise Architect, Oracle
    Building a Cloud can be challenging thanks to the complex requirements unique to Cloud computing and the massive scale typically associated with Cloud. Cloud providers can take an Infrastructure as a Service (IaaS) approach and build a cloud on virtualized commodity hardware, or they can take the Platform as a Service (PaaS) path, a service-oriented approach based on pre-configured, integrated, engineered systems. This presentation uses the Oracle Cloud itself as a case study in the use of engineered systems, demonstrating how the technical design of engineered systems is leveraged for building PaaS and SaaS Cloud services and a Cloud management infrastructure. The presentation will also explore the principles, patterns, best practices, and architecture views provided in Oracle's Cloud reference architecture.

    10:30am - 10:45am  Break

    10:45am - 11:30am  Technical Session: Database as a Service | Michael Timpanaro-Perrotta, Director, Product Management, Oracle Database Cloud
    New applications are now commonly built in a Cloud model, where the database is consumed as a service, and many established business processes are beginning to migrate to database as a service (DBaaS). This adoption of DBaaS is made possible by the availability of new capabilities in the database that enable resource pooling, dynamic resource management, model-based provisioning, metered use, and effective quality-of-service controls. This session will examine the catalog of database services at a large commercial bank to understand how these capabilities are enabling DBaaS for a wide range of needs within the enterprise.

    11:30am - 12:00pm  Panel Q&A
    Dr. James Baty, Anbu Krishnaswami, and Michael Timpanaro-Perrotta respond to audience questions.

    Registration is free, but seating is limited, so register now.

    Read the article

  • Spotlight on an office - Denmark

    - by jessica.ebbelaar(at)oracle.com
    Hi, my name is Michael. I work as an intern at the Danish office in Ballerup. My job is a part-time position alongside my bachelor studies in International Business at Copenhagen Business School. I joined Oracle at the end of February last year, and what a thrilling ride it has been! Last year, when I was offered the position, there was no doubt that I wanted to go for it. Back then, I had only a little idea about Oracle as a company and what kind of exciting assignments lay ahead of me.

    My main role is internal communications, i.e. being the editor of a monthly employee newsletter, Newszone. It is an interesting task, since it requires that I stay updated on the different activities that take place within the Oracle Denmark office. I try to bring interesting articles and news that are relevant to my colleagues, and the work allows me to interact with many different people at the office and to learn from their experience, which gives me great inspiration and ideas for the magazine. Besides being the editor of Newszone, I also make sure that other communication flows freely at the Oracle Denmark office. I do this through our LCD screen channels. I update the internal channel with the latest information and important messages for employees, and on the external channel I circulate marketing videos featuring Oracle products and customer reference stories. In addition to this, I have the responsibility of acting as content manager of the Local Communication Denmark site on MyOracle (UCM). These are more or less my usual work assignments. On top of these I take care of various ad hoc assignments, such as updating the GCM database, renewing newspaper subscriptions, etc.

    The Oracle Denmark office

    Being part of the local employees club, I also assist with arranging social events outside working hours – e.g. evenings at the theater or cinema – and with many of the sports activities, such as our running club, cycling club, food club and book club. These activities have indeed helped me grow my personal network within Oracle. The office is packed with engaging, high-paced and motivated people who still manage to take time off to spend a day attending Corporate Social Responsibility initiatives, one of them being GVD (Global Volunteer Day), with approximately 40 employees attending. This speaks to the socially responsible side of Oracle.

    I was positively surprised by how the office (named O-Zone) is designed. The office is divided into three distinct zones, namely the Call zone, the Project and Dialogue zone, and the Quiet zone, providing different working environments for different job roles. The other thing I like is that you do not have your own desk, which means you get to sit next to different people every day, getting new ideas and inspiration as well as getting to know more people in the organization you work in.

    To sum up: if you are considering pursuing an internship or a career after graduation at Oracle, do it! You will not regret it. It has given me many relevant practical experiences alongside my studies, and I am sure many great experiences await you too.

    Want to know more about the current vacancies in Denmark? Check http://campus.oracle.com for all of our vacancies.

    Read the article

  • 13 MORE Things from the Oracle Social Summit You Should Know

    - by Mike Stiles
    In our previous blog, we started giving those of you who couldn't make it just a sampling of the valuable takeaways from the first annual Oracle Social Summit, held Nov 14 and 15 in Las Vegas. And while yes, 13 items is a pretty healthy sampling, we wanted to go the extra mile and give you 13 more, an indication of just how much great information came out of it. Follow the arrow, and come on in as if you were there with us.

    1. Weber Shandwick takes a 70/20/10 approach when advising clients how to allocate resources to paid social opportunities: 70% of spend should go toward paid opportunities the agency and client both know work, 20% should go toward paid social the agency knows works, and 10% should go toward experimentation. (Matt Dickman – Weber Shandwick)
    2. By 2017, the technically competent CMO will spend more on IT than the CIO. (Gartner study)
    3. CIOs are focused on infrastructure. As the roles of the CMO and CIO continue coming together, those CIOs have to make a very conscious decision to get CMOs what they need.
    4. It's now harder for brands to differentiate based on product. The advantage will go to the brands that are successful in garnering customer trust.
    5. More and more, enterprise software is going to start looking like the software consumers are used to seeing and using.
    6. You will see brands prioritizing mobile and dropping investments in www, HTML, POS systems, etc.
    7. The social graph has to be added to brands' customer data for a more holistic view. Customers will give you the information you need if the reward is appropriate.
    8. Viacom did a study that showed viewers are most honest on social. Not so much on surveys or other feedback vehicles.
    9. How are you determining your influencers? Influence isn't about reach. It's about getting people to change behavior.
    10. A mix of skills is becoming critically important in a social staff. It should be a mixture of several disciplines, not just a bunch of "social experts."
    11. If senior management isn't engaged, the social team is forced into guessing what might be considered a "success" by the C-suite.
    12. Mobile customization will be getting big investments from brands in 2013. Brands need to provide shoppers utility, not just information. 75% will use mobile this holiday season to avoid in-store madness.
    13. Data becomes information, information becomes insight, and insight becomes actionable.

    The Oracle Social Summit brought together brands, agencies, Oracle social experts and industry thought leaders to take a serious look at where social stands today, and where it's headed in the near future. Given the speed of social's evolution, attending such events (or at least reading nifty summary blogs) is a good investment in making sure your enterprise isn't falling gradually behind.

    Read the article

  • How do you report out user research results?

    - by user12277104
    A couple weeks ago, one of my mentees asked to meet, because she wanted my advice on how to report out user research results. She had just conducted her first usability test for her new employer, and was getting to the point where she wanted to put together some slides, but she didn't want them to be boring. She wanted to talk with me about what to present and how best to present results to stakeholders. While I couldn't meet for another week, thanks to SlideShare, I could quickly point her in the direction that my in-person advice would have led her.

    First, I'd put together a panel for the February 2012 New Hampshire UPA monthly meeting that we then repeated for the 2012 Boston UPA annual conference. In this panel, I described my reporting techniques, as did six of my colleagues -- two of whom work for companies smaller than mine, and four of whom are independent consultants. Before taking questions, we each presented for 3 to 5 minutes on how we presented research results. The differences were really interesting. For example, when do you really NEED a long, written report (as opposed to an email, spreadsheet, or slide deck with callouts)? When you are reporting your test results to the FDA -- that makes sense. In this presentation, I describe two modes of reporting results that I use.

    Second, I'd been a participant in the CUE-9 study. CUE stands for Comparative Usability Evaluation, and this was the 9th of these studies that Rolf Molich had designed. Originally, the studies were designed to show the variability in evaluation methods practitioners use to evaluate websites and applications. Of course, using methods and tasks of their own choosing, the results were wildly different. However, in this 9th study, the tasks were the same, the participants were the same, and the problem severity scale was the same, so how would the results of the 19 practitioners compare? Still wildly variable. But for the purposes of this discussion, it gave me a work product that was not proprietary to the company I work for -- a usability test report that I could share publicly. This was the way I'd been reporting results since 2005, and pretty much what I still do, when time allows.

    That said, I have been continuing to evolve my methods and reporting techniques, and sometimes there is no time to create that kind of report -- the team can't wait the days that it takes to take screen shots, go through my notes, refer back to recordings, and write it all up. So in those cases, I use bullet points in email, talk through the findings with stakeholders in a 1-hour meeting, and then post the take-aways on a wiki page. There are other requirements for that kind of reporting to work -- for example, the stakeholders need to attend each of the sessions, and the sessions can't take more than a day to complete -- but you get the idea: there is no one "right" way to report out results. If the method of reporting you are using is giving your stakeholders the information they need, in a time frame in which it is useful, and in a format that meets their needs (FDA report or bullet points on a wiki), then that's the "right" way to report your results.

    Read the article

  • SQLAuthority News – Advantages of Distance Learning

    - by Pinal Dave
    Distance education is extremely popular – almost overnight, it seems. Almost everyone has taken an online course, or knows someone who has, or is considering joining an online school. There are many advantages and disadvantages to attending an online school – but the same can be said of attending a physical school! Let’s take a look at the top reasons to use distance education.
1) Flexibility. Physical universities are usually willing to make some concessions to students – like night classes, study hours, and online networks. However, nothing is going to beat the flexibility of distance education. You can attend classes and take notes anytime, anywhere, wearing anything you’d like!
2) Affordability. We don’t need to get into hard numbers to understand how expensive a university can be. Students are taking on more and more debt just to get an education. Many of these fees pay for room, board, and facilities. Distance education cuts out all these costs, and makes attending school much more affordable for the average student.
3) Try before you buy. Did you know that the average college student changes his or her major 10 times before graduating? You can imagine that this kind of indecision plays a huge part in WHEN you graduate – not being able to make up your mind can cost you big bucks if you have to stay in school for extra years! Distance education allows you to take different classes from a wide range of disciplines. Do you want to study forensic science or English literature? Now you don’t have to pay for classes you can’t afford just to find out.
4) Pace yourself. Some students struggle in a traditional classroom setting – classes can be taught too fast, too slow, or there are too many distractions. Distance education allows mature students to set the pace themselves. They can rewatch lectures they didn’t catch the first time, or go through classes quickly if they are already familiar with the material – cutting out the chance of burning out or getting bored.
5) Lifelong learning. Maybe you already have a degree, but would like to learn more about your field, or a related field, or maybe even about something completely unrelated – just because you are curious! Distance education allows you to learn whatever you want, whenever you want (and yes, wearing anything you’d like!).
6) Attend whatever college you want. Because of the popularity of distance education, physical campuses are getting in on the game by offering online courses – often just uploaded versions of classes already taught at their campus. Ever wanted to attend Harvard, but knew you couldn’t get in? Take a class online! Of course, you probably should not attempt to lie and say you have a Harvard degree, but Ivy League colleges are prestigious because they are the best in their field – take advantage of the best by taking an online course!
I am a big believer in continuing education, whether it is online courses, returning to school, or even taking informal classes online. Distance education can be a great way to accomplish these goals and become a lifelong learner. My friends at provides training through virtual classrooms for students who want to avoid travelling. Distance learning courses allow IT aspirants to connect with trainers using the internet. I encourage everyone to check it out!
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Microsoft Certifications &ndash; how to prep? and why?

    - by Kelly Jones
    I often get asked by my colleagues, “How do you prepare for Microsoft exams?” Well, the answer for me is a little complicated, so I thought I’d write up here what I do.
The first thing I do is go to Microsoft’s website to find the exam that I need to take. If you’re looking to get a particular certification, then their site lists the exam or exams that you’ll need to pass. If you’ve already taken an exam, you can log onto the MCP website and use their certification planner. This little tool tells you what tests you need, based on the exams you’ve already passed. It is very helpful with the certifications that require multiple tests, and especially ones that have electives.
Once you’ve identified the test, you can use Microsoft’s website to see the topics that it covers. This is a good outline to follow when you study. I’ll keep this handy to refer back to throughout my studying to make sure that I’m covering all the topics I need to know.
The next step is probably where I am a little different from others. IF the exam outline covers material that I’ve already been working with, then I’ll skip a lot of the studying and go directly to the practice tests. However, if I’m looking at the outline and wondering how in the world do you do that? – then it’s time to hit the books.
So, where to find study materials? Try typing the exam number into any search engine. You’ll typically find a ton of resources. If you’re lucky, you’ll find books that others recommend based on their studying and exam experience. As a Sogeti employee, I have access to three really good resources: an internal company list of all of the consultants who have passed particular tests (on our Connex website), Books 24x7, and Transcender practice exams.
Once my studying is done (either through books or experience), I’ll go through the practice exams. I find them really helpful in getting my knowledge lined up to the thinking process that the exam writers use. If I’m relying on my experience, then this really helps me to identify gaps in my knowledge that I’ll need to fill.
That’s about it. If I’m doing OK on the practice exams, then I’ll take the real thing. I’ve found that the practice exams are usually more difficult than the real thing.
Oh – one other thing I do related to Microsoft exams – I try to take any beta exams that Microsoft makes available that fall into my skill set. Microsoft has started a blog to announce these, and the seats usually fill up really quickly. The blog is at http://blogs.technet.com/betaexams/ . You don’t get your results instantly, like a normal exam; instead you have to wait for everyone to finish taking the beta exams and for Microsoft to determine which questions they are using and which they are dropping. So, be prepared to wait six to eight weeks for your results.

    Read the article

  • The Social Enterprise: Gangnam Style

    - by Mike Stiles
    Are only small and medium businesses able to put social strategies in place, generate consistent, compelling content for customers, and be nimble enough to listen and respond to the social communities they build? Or are enterprise organizations eagerly and effectively adopting social as well? It depends on whom inside the organization you ask. A study from Attensity looked at who “gets” social inside enterprise organizations. The results were unsurprising. Mostly, Generation X and Y employees who came of age with social as part of their lives and as a key communications vehicle understand it. Imagine being a 25-year-old at a company that bans employees from accessing Facebook at work. You may as well tell them they can’t use phones and must do all calculations on an abacus. To them, such policy is absent of real-world logic and signals to them the organization is destined to be the victim of an up-and-comer. After that, it’s senior management that gets social. You don’t get to be in senior management without reading a few things and paying attention. Most senior managers are well aware of the impact social has had and will have, though they may be unsure of what to do about it. The better ones will utilize those on the inside who do inherently know how to communicate and build virtual relationships using social. The very best will get the past out of the way for these social innovators, so the new communications can be enacted minus counterproductive dictums, double-clutching, meeting-creep, and all the other fading internal practices that water down content and impede change. Organizationally, the Attensity study found 81% of enterprise companies believe failing to embrace social will result in their being left behind. Yet our old friend fear still has many captive in its clutches. 79% feel overwhelmed by the volume of social data available, something a social technology partner with goal-oriented analytics expertise could go a long way toward alleviating. Then there’s the fear of social having a negative impact. This comes from a lack of belief in the product, the customer service, or both. The public uses social not to go out and slay brands. They’re using it to be honest. If the fear is that honesty will reflect badly on the brand, the brand has much bigger, broader problems than what happens on Facebook. Sadly, most enterprise organizations still see social as a megaphone, a one-way channel with which to hit people with ads. They either don’t understand social relationships, or don’t want any. The truly unenlightened manager will always say, “We help them by selling them our stuff.” “Brand affinity” is a term, it’s just not one assigned much value in enterprise organizations. Which brings us to Psy, the Korean performer whose Internet video phenom “Gangnam Style,” as of this writing, has been viewed 438,550,238 times on YouTube. It’s bigger than anything a brand will probably ever publish. Most brands would never have seen the point of making or publishing it. But a funny thing happened on the way to Internet success. The video literally doubled the stock price of Psy’s father’s software firm. NH Investment and Securities said, "The positive sentiment has attracted investors just because of the fact the company is owned by Psy's father and uncle.” The company wasn’t mentioned or seen in the video in any way, yet reaped tangible rewards just for being tangentially associated with it. 
Imagine your brand being visibly and directly responsible for such a smash and tell me it’s worthless. When enterprise organizations embrace the value of igniting passions, making people happier, solving their problems, informing them, helping them have fun, etc., then they will have fully embraced social, and will reap the brand affinity rewards of heightened awareness, brand loyalty and yes, sales.

    Read the article

  • fatal error C1014: too many include files : depth = 1024

    - by numerical25
    I have no idea what this means. But here is the code that it supposely is happening in. //======================================================================================= // d3dApp.cpp by Frank Luna (C) 2008 All Rights Reserved. //======================================================================================= #include "d3dApp.h" #include <stream> LRESULT CALLBACK MainWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) { static D3DApp* app = 0; switch( msg ) { case WM_CREATE: { // Get the 'this' pointer we passed to CreateWindow via the lpParam parameter. CREATESTRUCT* cs = (CREATESTRUCT*)lParam; app = (D3DApp*)cs->lpCreateParams; return 0; } } // Don't start processing messages until after WM_CREATE. if( app ) return app->msgProc(msg, wParam, lParam); else return DefWindowProc(hwnd, msg, wParam, lParam); } D3DApp::D3DApp(HINSTANCE hInstance) { mhAppInst = hInstance; mhMainWnd = 0; mAppPaused = false; mMinimized = false; mMaximized = false; mResizing = false; mFrameStats = L""; md3dDevice = 0; mSwapChain = 0; mDepthStencilBuffer = 0; mRenderTargetView = 0; mDepthStencilView = 0; mFont = 0; mMainWndCaption = L"D3D10 Application"; md3dDriverType = D3D10_DRIVER_TYPE_HARDWARE; mClearColor = D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f); mClientWidth = 800; mClientHeight = 600; } D3DApp::~D3DApp() { ReleaseCOM(mRenderTargetView); ReleaseCOM(mDepthStencilView); ReleaseCOM(mSwapChain); ReleaseCOM(mDepthStencilBuffer); ReleaseCOM(md3dDevice); ReleaseCOM(mFont); } HINSTANCE D3DApp::getAppInst() { return mhAppInst; } HWND D3DApp::getMainWnd() { return mhMainWnd; } int D3DApp::run() { MSG msg = {0}; mTimer.reset(); while(msg.message != WM_QUIT) { // If there are Window messages then process them. if(PeekMessage( &msg, 0, 0, 0, PM_REMOVE )) { TranslateMessage( &msg ); DispatchMessage( &msg ); } // Otherwise, do animation/game stuff. else { mTimer.tick(); if( !mAppPaused ) updateScene(mTimer.getDeltaTime()); else Sleep(50); drawScene(); } } return (int)msg.wParam; } void D3DApp::initApp() { initMainWindow(); initDirect3D(); D3DX10_FONT_DESC fontDesc; fontDesc.Height = 24; fontDesc.Width = 0; fontDesc.Weight = 0; fontDesc.MipLevels = 1; fontDesc.Italic = false; fontDesc.CharSet = DEFAULT_CHARSET; fontDesc.OutputPrecision = OUT_DEFAULT_PRECIS; fontDesc.Quality = DEFAULT_QUALITY; fontDesc.PitchAndFamily = DEFAULT_PITCH | FF_DONTCARE; wcscpy(fontDesc.FaceName, L"Times New Roman"); D3DX10CreateFontIndirect(md3dDevice, &fontDesc, &mFont); } void D3DApp::onResize() { // Release the old views, as they hold references to the buffers we // will be destroying. Also release the old depth/stencil buffer. ReleaseCOM(mRenderTargetView); ReleaseCOM(mDepthStencilView); ReleaseCOM(mDepthStencilBuffer); // Resize the swap chain and recreate the render target view. HR(mSwapChain->ResizeBuffers(1, mClientWidth, mClientHeight, DXGI_FORMAT_R8G8B8A8_UNORM, 0)); ID3D10Texture2D* backBuffer; HR(mSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), reinterpret_cast<void**>(&backBuffer))); HR(md3dDevice->CreateRenderTargetView(backBuffer, 0, &mRenderTargetView)); ReleaseCOM(backBuffer); // Create the depth/stencil buffer and view. D3D10_TEXTURE2D_DESC depthStencilDesc; depthStencilDesc.Width = mClientWidth; depthStencilDesc.Height = mClientHeight; depthStencilDesc.MipLevels = 1; depthStencilDesc.ArraySize = 1; depthStencilDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT; depthStencilDesc.SampleDesc.Count = 1; // multisampling must match depthStencilDesc.SampleDesc.Quality = 0; // swap chain values. 
depthStencilDesc.Usage = D3D10_USAGE_DEFAULT; depthStencilDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL; depthStencilDesc.CPUAccessFlags = 0; depthStencilDesc.MiscFlags = 0; HR(md3dDevice->CreateTexture2D(&depthStencilDesc, 0, &mDepthStencilBuffer)); HR(md3dDevice->CreateDepthStencilView(mDepthStencilBuffer, 0, &mDepthStencilView)); // Bind the render target view and depth/stencil view to the pipeline. md3dDevice->OMSetRenderTargets(1, &mRenderTargetView, mDepthStencilView); // Set the viewport transform. D3D10_VIEWPORT vp; vp.TopLeftX = 0; vp.TopLeftY = 0; vp.Width = mClientWidth; vp.Height = mClientHeight; vp.MinDepth = 0.0f; vp.MaxDepth = 1.0f; md3dDevice->RSSetViewports(1, &vp); } void D3DApp::updateScene(float dt) { // Code computes the average frames per second, and also the // average time it takes to render one frame. static int frameCnt = 0; static float t_base = 0.0f; frameCnt++; // Compute averages over one second period. if( (mTimer.getGameTime() - t_base) >= 1.0f ) { float fps = (float)frameCnt; // fps = frameCnt / 1 float mspf = 1000.0f / fps; std::wostringstream outs; outs.precision(6); outs << L"FPS: " << fps << L"\n" << "Milliseconds: Per Frame: " << mspf; mFrameStats = outs.str(); // Reset for next average. frameCnt = 0; t_base += 1.0f; } } void D3DApp::drawScene() { md3dDevice->ClearRenderTargetView(mRenderTargetView, mClearColor); md3dDevice->ClearDepthStencilView(mDepthStencilView, D3D10_CLEAR_DEPTH|D3D10_CLEAR_STENCIL, 1.0f, 0); } LRESULT D3DApp::msgProc(UINT msg, WPARAM wParam, LPARAM lParam) { switch( msg ) { // WM_ACTIVATE is sent when the window is activated or deactivated. // We pause the game when the window is deactivated and unpause it // when it becomes active. case WM_ACTIVATE: if( LOWORD(wParam) == WA_INACTIVE ) { mAppPaused = true; mTimer.stop(); } else { mAppPaused = false; mTimer.start(); } return 0; // WM_SIZE is sent when the user resizes the window. case WM_SIZE: // Save the new client area dimensions. mClientWidth = LOWORD(lParam); mClientHeight = HIWORD(lParam); if( md3dDevice ) { if( wParam == SIZE_MINIMIZED ) { mAppPaused = true; mMinimized = true; mMaximized = false; } else if( wParam == SIZE_MAXIMIZED ) { mAppPaused = false; mMinimized = false; mMaximized = true; onResize(); } else if( wParam == SIZE_RESTORED ) { // Restoring from minimized state? if( mMinimized ) { mAppPaused = false; mMinimized = false; onResize(); } // Restoring from maximized state? else if( mMaximized ) { mAppPaused = false; mMaximized = false; onResize(); } else if( mResizing ) { // If user is dragging the resize bars, we do not resize // the buffers here because as the user continuously // drags the resize bars, a stream of WM_SIZE messages are // sent to the window, and it would be pointless (and slow) // to resize for each WM_SIZE message received from dragging // the resize bars. So instead, we reset after the user is // done resizing the window and releases the resize bars, which // sends a WM_EXITSIZEMOVE message. } else // API call such as SetWindowPos or mSwapChain->SetFullscreenState. { onResize(); } } } return 0; // WM_EXITSIZEMOVE is sent when the user grabs the resize bars. case WM_ENTERSIZEMOVE: mAppPaused = true; mResizing = true; mTimer.stop(); return 0; // WM_EXITSIZEMOVE is sent when the user releases the resize bars. // Here we reset everything based on the new window dimensions. case WM_EXITSIZEMOVE: mAppPaused = false; mResizing = false; mTimer.start(); onResize(); return 0; // WM_DESTROY is sent when the window is being destroyed. 
case WM_DESTROY: PostQuitMessage(0); return 0; // The WM_MENUCHAR message is sent when a menu is active and the user presses // a key that does not correspond to any mnemonic or accelerator key. case WM_MENUCHAR: // Don't beep when we alt-enter. return MAKELRESULT(0, MNC_CLOSE); // Catch this message so to prevent the window from becoming too small. case WM_GETMINMAXINFO: ((MINMAXINFO*)lParam)->ptMinTrackSize.x = 200; ((MINMAXINFO*)lParam)->ptMinTrackSize.y = 200; return 0; } return DefWindowProc(mhMainWnd, msg, wParam, lParam); } void D3DApp::initMainWindow() { WNDCLASS wc; wc.style = CS_HREDRAW | CS_VREDRAW; wc.lpfnWndProc = MainWndProc; wc.cbClsExtra = 0; wc.cbWndExtra = 0; wc.hInstance = mhAppInst; wc.hIcon = LoadIcon(0, IDI_APPLICATION); wc.hCursor = LoadCursor(0, IDC_ARROW); wc.hbrBackground = (HBRUSH)GetStockObject(NULL_BRUSH); wc.lpszMenuName = 0; wc.lpszClassName = L"D3DWndClassName"; if( !RegisterClass(&wc) ) { MessageBox(0, L"RegisterClass FAILED", 0, 0); PostQuitMessage(0); } // Compute window rectangle dimensions based on requested client area dimensions. RECT R = { 0, 0, mClientWidth, mClientHeight }; AdjustWindowRect(&R, WS_OVERLAPPEDWINDOW, false); int width = R.right - R.left; int height = R.bottom - R.top; mhMainWnd = CreateWindow(L"D3DWndClassName", mMainWndCaption.c_str(), WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, width, height, 0, 0, mhAppInst, this); if( !mhMainWnd ) { MessageBox(0, L"CreateWindow FAILED", 0, 0); PostQuitMessage(0); } ShowWindow(mhMainWnd, SW_SHOW); UpdateWindow(mhMainWnd); } void D3DApp::initDirect3D() { // Fill out a DXGI_SWAP_CHAIN_DESC to describe our swap chain. DXGI_SWAP_CHAIN_DESC sd; sd.BufferDesc.Width = mClientWidth; sd.BufferDesc.Height = mClientHeight; sd.BufferDesc.RefreshRate.Numerator = 60; sd.BufferDesc.RefreshRate.Denominator = 1; sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; sd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED; sd.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED; // No multisampling. sd.SampleDesc.Count = 1; sd.SampleDesc.Quality = 0; sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; sd.BufferCount = 1; sd.OutputWindow = mhMainWnd; sd.Windowed = true; sd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD; sd.Flags = 0; // Create the device. UINT createDeviceFlags = 0; #if defined(DEBUG) || defined(_DEBUG) createDeviceFlags |= D3D10_CREATE_DEVICE_DEBUG; #endif HR( D3D10CreateDeviceAndSwapChain( 0, //default adapter md3dDriverType, 0, // no software device createDeviceFlags, D3D10_SDK_VERSION, &sd, &mSwapChain, &md3dDevice) ); // The remaining steps that need to be carried out for d3d creation // also need to be executed every time the window is resized. So // just call the onResize method here to avoid code duplication. onResize(); }
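For what it's worth, C1014 is the preprocessor reporting that it nested includes 1024 levels deep, which almost always means some header ends up including itself, either directly or through a cycle of headers, rather than anything in the .cpp shown above. (Separately, <stream> is not a standard header; since the code uses std::wostringstream, <sstream> was probably the intent.) A minimal sketch of the include-guard pattern that breaks such a cycle, using a hypothetical header name:

```cpp
// Hypothetical d3dUtil-style header: the guard stops the preprocessor from
// expanding the file again if it gets re-included, directly or via a cycle.
#ifndef D3DUTIL_H
#define D3DUTIL_H

#include <sstream>   // standard headers guard themselves, so repeats are harmless

// ... declarations ...

#endif // D3DUTIL_H
```

Every header the project pulls in (d3dApp.h and anything it includes) should carry a guard like this, or #pragma once on MSVC.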

    Read the article

  • Blank Mail from PHP application

    - by brettlwilliams
    Problem: Blank email from PHP web application. Confirmed: App works in Linux, has various problems in Windows server environment. Blank emails are the last remaining problem. PHP Version 5.2.6 on the server I'm a librarian implementing a PHP based web application to help students complete their assignments.I have installed this application before on a Linux based free web host and had no problems. Email is controlled by two files, email_functions.php and email.php. While email can be sent, all that is sent is a blank email. My IT department is an ASP only shop, so I can get little to no help there. I also cannot install additional libraries like PHPmail or Swiftmailer. You can see a functional copy at http://rpc.elm4you.org/ You can also download a copy from Sourceforge from the link there. Thanks in advance for any insight into this! email_functions.php <?php /********************************************************** Function: build_multipart_headers Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Creates email headers for a message of type multipart/mime This will include a plain text part and HTML. **********************************************************/ function build_multipart_headers($boundary_rand) { global $EMAIL_FROM_DISPLAY_NAME, $EMAIL_FROM_ADDRESS, $CALC_PATH, $CALC_TITLE, $SERVER_NAME; // Using \n instead of \r\n because qmail doubles up the \r and screws everything up! $crlf = "\n"; $message_date = date("r"); // Construct headers for multipart/mixed MIME email. It will have a plain text and HTML part $headers = "X-Calc-Name: $CALC_TITLE" . $crlf; $headers .= "X-Calc-Url: http://{$SERVER_NAME}/{$CALC_PATH}" . $crlf; $headers .= "MIME-Version: 1.0" . $crlf; $headers .= "Content-type: multipart/alternative;" . $crlf; $headers .= " boundary=__$boundary_rand" . $crlf; $headers .= "From: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Sender: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Reply-to: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Return-Path: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Date: $message_date" . $crlf; $headers .= "Message-Id: $boundary_rand@$SERVER_NAME" . $crlf; return $headers; } /********************************************************** Function: build_multipart_body Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Builds the email body content to go with the headers from build_multipart_headers() **********************************************************/ function build_multipart_body($plain_text_message, $html_message, $boundary_rand) { //$crlf = "\r\n"; $crlf = "\n"; $boundary = "__" . $boundary_rand; // Begin constructing the MIME multipart message $multipart_message = "This is a multipart message in MIME format." . $crlf . $crlf; $multipart_message .= "--{$boundary}{$crlf}Content-type: text/plain; charset=\"us-ascii\"{$crlf}Content-Transfer-Encoding: 7bit{$crlf}{$crlf}"; $multipart_message .= $plain_text_message . $crlf . $crlf; $multipart_message .= "--{$boundary}{$crlf}Content-type: text/html; charset=\"iso-8859-1\"{$crlf}Content-Transfer-Encoding: 7bit{$crlf}{$crlf}"; $multipart_message .= $html_message . $crlf . 
$crlf; $multipart_message .= "--{$boundary}--$crlf$crlf"; return $multipart_message; } /********************************************************** Function: build_step_email_body_text Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Returns a plain text version of the email body to be used for individually sent step reminders **********************************************************/ function build_step_email_body_text($stepnum, $arr_instructions, $dates, $query_string, $teacher_info ,$name, $class, $project_id) { global $CALC_PATH, $CALC_TITLE, $SERVER_NAME; $step_email_body =<<<BODY $CALC_TITLE Step $stepnum: {$arr_instructions["step$stepnum"]["title"]} Name: $name Class: $class BODY; $step_email_body .= build_text_single_step($stepnum, $arr_instructions, $dates, $query_string, $teacher_info); $step_email_body .= "\n\n"; $step_email_body .=<<<FOOTER The $CALC_TITLE offers suggestions, but be sure to check with your teacher to find out the best working schedule for your assignment! If you would like to stop receiving further reminders for this project, click the link below: http://$SERVER_NAME/$CALC_PATH/deleteproject.php?proj=$project_id FOOTER; // Wrap text to 78 chars per line // Convert any remaining HTML <br /> to \r\n // Strip out any remaining HTML tags. $step_email_body = strip_tags(linebreaks_html2text(wordwrap($step_email_body, 78, "\n"))); return $step_email_body; } /********************************************************** Function: build_step_email_body_html Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Same as above, but with HTML **********************************************************/ function build_step_email_body_html($stepnum, $arr_instructions, $dates, $query_string, $teacher_info, $name, $class, $project_id) { global $CALC_PATH, $CALC_TITLE, $SERVER_NAME; $styles = build_html_styles(); $step_email_body =<<<BODY <html> <head> <title> $CALC_TITLE </title> $styles </head> <body> <h1> $CALC_TITLE Schedule </h1> <strong>Name:</strong> $name <br /> <strong>Class:</strong> $class <br /> BODY; $step_email_body .= build_html_single_step($stepnum, $arr_instructions, $dates, $query_string, $teacher_info); $step_email_body .=<<<FOOTER <p> The $CALC_TITLE offers suggestions, but be sure to check with your teacher to find out the best working schedule for your assignment! 
</p> <p> If you would like to stop receiving further reminders for this project, <a href="http://{$SERVER_NAME}/$CALC_PATH/deleteproject.php?proj=$project_id">click this link.</a> </p> </body> </html> FOOTER; return $step_email_body; } /********************************************************** Function: build_html_styles Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Just returns a string of <style /> for the HTML message body **********************************************************/ function build_html_styles() { $styles =<<<STYLES <style type="text/css"> body { font-family: Arial, sans-serif; font-size: 85%; } h1 { font-size: 120%; } table { border: none; } tr { vertical-align: top; } img { display: none; } hr { border: 0; } </style> STYLES; return $styles; } /********************************************************** Function: linebreaks_html2text Author: Michael Berkowski Last Modified: October 2007 *********************************************************** Purpose: Convert <br /> html tags to \n line breaks **********************************************************/ function linebreaks_html2text($in_string) { $out_string = ""; $arr_br = array("<br>", "<br />", "<br/>"); $out_string = str_replace($arr_br, "\n", $in_string); return $out_string; } ?> email.php <?php require_once("include/config.php"); require_once("include/instructions.php"); require_once("dbase/dbfunctions.php"); require_once("include/email_functions.php"); ini_set("sendmail_from", "[email protected]"); ini_set("SMTP", "mail.qatar.net.qa"); // Verify that the email has not already been sent by checking for a cookie // whose value is generated each time the form is loaded freshly. if (!(isset($_COOKIE['rpc_transid']) && $_COOKIE['rpc_transid'] == $_POST['transid'])) { // Setup some preliminary variables for email. // The scanning of $_POST['email']already took place when this file was included... $to = $_POST['email']; $subject = $EMAIL_SUBJECT; $boundary_rand = md5(rand()); $mail_type = ""; switch ($_POST['reminder-type']) { case "progressive": $arr_dbase_dates = array(); $conn = rpc_connect(); if (!$conn) { $mail_success = FALSE; $mail_status_message = "Could not register address!"; break; } // Sanitize all the data that will be inserted into table... // We need to remove "CONTENT-TYPE:" from name/class to defang them. // Additionall, we can't allow any line-breaks in those fields to avoid // hacks to email headers. $ins_name = mysql_real_escape_string($name); $ins_name = eregi_replace("CONTENT-TYPE", "...Content_Type...", $ins_name); $ins_name = str_replace("\n", "", $ins_name); $ins_class = mysql_real_escape_string($class); $ins_class = eregi_replace("CONTENT-TYPE", "...Content_Type...", $ins_class); $ins_class = str_replace("\n", "", $ins_class); $ins_email = mysql_real_escape_string($email); $ins_teacher_info = $teacher_info ? "YES" : "NO"; switch ($format) { case "Slides": $ins_format = "SLIDES"; break; case "Video": $ins_format = "VIDEO"; break; case "Essay": default: $ins_format = "ESSAY"; break; } // The transid from the previous form will be used as a project identifier // Steps will be grouped by project identifier. $ins_project_id = mysql_real_escape_string($_POST['transid'] . md5(rand())); $arr_dbase_dates = dbase_dates($dates); $arr_past_dates = array(); // Iterate over the dates array and build a SQL statement for each one. 
$insert_success = TRUE; // $min_reminder_date = date("Ymd", mktime(0,0,0,date("m"),date("d")+$EMAIL_REMINDER_DAYS_AHEAD,date("Y"))); for ($date_index = 0; $date_index < sizeof($arr_dbase_dates); $date_index++) { // Make sure we're using the right keys... $ins_date_index = $date_index + 1; // The insert will only happen if the date of the event is in the future. // For dates today and earlier, no insert. // For dates today or after the reminder deadline, we'll send the email immediately after the inserts. if ($arr_dbase_dates[$date_index] > (int)$min_reminder_date) { $qry =<<<QRY INSERT INTO email_queue ( NOTIFICATION_ID, PROJECT_ID, EMAIL, NAME, CLASS, FORMAT, TEACHER_INFO, STEP, MESSAGE_DATE ) VALUES ( NULL, '$ins_project_id', '$ins_email', '$ins_name', '$ins_class', '$ins_format', '$ins_teacher_info', $ins_date_index, /*step number*/ {$arr_dbase_dates[$date_index]} /* Date in the integer format yyyymmdd */ ) QRY; // Attempt to do the insert... $result = mysql_query($qry); // If even one insert fails, bail out. if (!$result) { $mail_success = FALSE; $mail_status_message = "Could not register address!"; break; } } // For dates today or earlier, store the steps=>dates in an array so the mails can // be sent immediately. else { $arr_past_dates[$ins_date_index] = $arr_dbase_dates[$date_index]; } } // Close the connection resources. mysql_close($conn); // SEND OUT THE EMAILS THAT HAVE TO GO IMMEDIATELY... // This should only be step 1, but who knows... //var_dump($arr_past_dates); for ($stepnum=1; $stepnum<=sizeof($arr_past_dates); $stepnum++) { $email_teacher_info = ($teacher_info && $EMAIL_TEACHER_REMINDERS) ? TRUE : FALSE; $boundary = md5(rand()); $plain_text_body = build_step_email_body_text($stepnum, $arr_instructions, $dates, $query_string, $email_teacher_info ,$name, $class, $ins_project_id); $html_body = build_step_email_body_html($stepnum, $arr_instructions, $dates, $query_string, $email_teacher_info ,$name, $class, $ins_project_id); $multipart_headers = build_multipart_headers($boundary); $multipart_body = build_multipart_body($plain_text_body, $html_body, $boundary); mail($to, $subject . ": Step " . $stepnum, $multipart_body, $multipart_headers, "[email protected]"); } // Set appropriate flags and messages $mail_success = TRUE; $mail_status_message = "Email address registered!"; $mail_type = "progressive"; set_mail_success_cookie(); break; // Default to a single email message. case "single": default: // We don't want to send images in the message, so strip them out of the existing structure. // This big ugly regex strips the whole table cell containing the image out of the table. // Must find a better solution... 
//$email_table_html = eregi_replace("<td class=\"stepImageContainer\" width=\"161px\">[\s\r\n\t]*<img class=\"stepImage\" src=\"images/[_a-zA-Z0-9]*\.gif\" alt=\"Step [1-9]{1} logo\" />[\s\r\n\t]*</td>", "\n", $table_html); // Show more descriptive text based on the value of $format switch ($format) { case "Video": $format_display = "Video"; break; case "Slides": $format_display = "Presentation with electronic slides"; break; case "Essay": default: $format_display = "Essay"; break; } $days = (int)$days; $html_message = ""; $styles = build_html_styles(); $html_message =<<<HTMLMESSAGE <html> <head> <title> $CALC_TITLE </title> $styles </head> <body> <h1> $CALC_TITLE Schedule </h1> <strong>Name:</strong> $name <br /> <strong>Class:</strong> $class <br /> <strong>Email:</strong> $email <br /> <strong>Assignment type:</strong> $format_display <br /><br /> <strong>Starting on:</strong> $date1 <br /> <strong>Assignment due:</strong> $date2 <br /> <strong>You have $days days to finish.</strong><br /> <hr /> $email_table_html </body> </html> HTMLMESSAGE; // Create the plain text version of the message... $plain_text_message = strip_tags(linebreaks_html2text(build_text_all_steps($arr_instructions, $dates, $query_string, $teacher_info))); // Add the title, since it doesn't get built in by build_text_all_steps... $plain_text_message = $CALC_TITLE . " Schedule\n\n" . $plain_text_message; $plain_text_message = wordwrap($plain_text_message, 78, "\n"); $multipart_headers = build_multipart_headers($boundary_rand); $multipart_message = build_multipart_body($plain_text_message, $html_message, $boundary_rand); $mail_success = FALSE; if (mail($to, $subject, $multipart_message, $multipart_headers, "[email protected]")) { $mail_success = TRUE; $mail_status_message = "Email sent!"; $mail_type = "single"; set_mail_success_cookie(); } else { $mail_success = FALSE; $mail_status_message = "Could not send email!"; } break; } } function set_mail_success_cookie() { // Prevent the mail from being resent on page reload. Set a timestamp cookie. // Expires in 24 hours. setcookie("rpc_transid", $_POST['transid'], time() + 86400); } ?>

    Read the article

  • How to compile a C++ source code written for Linux/Unix on Windows Vista (code given)

    - by HTMZ
    I have a c++ source code that was written in linux/unix environment by some other author. It gives me errors when i compile it in windows vista environment. I am using Bloodshed Dev C++ v 4.9. please help. #include <iostream.h> #include <map> #include <vector> #include <string> #include <string.h> #include <strstream> #include <unistd.h> #include <stdlib.h> using namespace std; template <class T> class PrefixSpan { private: vector < vector <T> > transaction; vector < pair <T, unsigned int> > pattern; unsigned int minsup; unsigned int minpat; unsigned int maxpat; bool all; bool where; string delimiter; bool verbose; ostream *os; void report (vector <pair <unsigned int, int> > &projected) { if (minpat > pattern.size()) return; // print where & pattern if (where) { *os << "<pattern>" << endl; // what: if (all) { *os << "<freq>" << pattern[pattern.size()-1].second << "</freq>" << endl; *os << "<what>"; for (unsigned int i = 0; i < pattern.size(); i++) *os << (i ? " " : "") << pattern[i].first; } else { *os << "<what>"; for (unsigned int i = 0; i < pattern.size(); i++) *os << (i ? " " : "") << pattern[i].first << delimiter << pattern[i].second; } *os << "</what>" << endl; // where *os << "<where>"; for (unsigned int i = 0; i < projected.size(); i++) *os << (i ? " " : "") << projected[i].first; *os << "</where>" << endl; *os << "</pattern>" << endl; } else { // print found pattern only if (all) { *os << pattern[pattern.size()-1].second; for (unsigned int i = 0; i < pattern.size(); i++) *os << " " << pattern[i].first; } else { for (unsigned int i = 0; i < pattern.size(); i++) *os << (i ? " " : "") << pattern[i].first << delimiter << pattern[i].second; } *os << endl; } } void project (vector <pair <unsigned int, int> > &projected) { if (all) report(projected); map <T, vector <pair <unsigned int, int> > > counter; for (unsigned int i = 0; i < projected.size(); i++) { int pos = projected[i].second; unsigned int id = projected[i].first; unsigned int size = transaction[id].size(); map <T, int> tmp; for (unsigned int j = pos + 1; j < size; j++) { T item = transaction[id][j]; if (tmp.find (item) == tmp.end()) tmp[item] = j ; } for (map <T, int>::iterator k = tmp.begin(); k != tmp.end(); ++k) counter[k->first].push_back (make_pair <unsigned int, int> (id, k->second)); } for (map <T, vector <pair <unsigned int, int> > >::iterator l = counter.begin (); l != counter.end (); ) { if (l->second.size() < minsup) { map <T, vector <pair <unsigned int, int> > >::iterator tmp = l; tmp = l; ++tmp; counter.erase (l); l = tmp; } else { ++l; } } if (! 
all && counter.size () == 0) { report (projected); return; } for (map <T, vector <pair <unsigned int, int> > >::iterator l = counter.begin (); l != counter.end(); ++l) { if (pattern.size () < maxpat) { pattern.push_back (make_pair <T, unsigned int> (l->first, l->second.size())); project (l->second); pattern.erase (pattern.end()); } } } public: PrefixSpan (unsigned int _minsup = 1, unsigned int _minpat = 1, unsigned int _maxpat = 0xffffffff, bool _all = false, bool _where = false, string _delimiter = "/", bool _verbose = false): minsup(_minsup), minpat (_minpat), maxpat (_maxpat), all(_all), where(_where), delimiter (_delimiter), verbose (_verbose) {}; ~PrefixSpan () {}; istream& read (istream &is) { string line; vector <T> tmp; T item; while (getline (is, line)) { tmp.clear (); istrstream istrs ((char *)line.c_str()); while (istrs >> item) tmp.push_back (item); transaction.push_back (tmp); } return is; } ostream& run (ostream &_os) { os = &_os; if (verbose) *os << transaction.size() << endl; vector <pair <unsigned int, int> > root; for (unsigned int i = 0; i < transaction.size(); i++) root.push_back (make_pair (i, -1)); project (root); return *os; } void clear () { transaction.clear (); pattern.clear (); } }; int main (int argc, char **argv) { extern char *optarg; unsigned int minsup = 1; unsigned int minpat = 1; unsigned int maxpat = 0xffffffff; bool all = false; bool where = false; string delimiter = "/"; bool verbose = false; string type = "string"; int opt; while ((opt = getopt(argc, argv, "awvt:M:m:L:d:")) != -1) { switch(opt) { case 'a': all = true; break; case 'w': where = true; break; case 'v': verbose = true; break; case 'm': minsup = atoi (optarg); break; case 'M': minpat = atoi (optarg); break; case 'L': maxpat = atoi (optarg); break; case 't': type = string (optarg); break; case 'd': delimiter = string (optarg); break; default: cout << "Usage: " << argv[0] << " [-m minsup] [-M minpat] [-L maxpat] [-a] [-w] [-v] [-t type] [-d delimiter] < data .." << endl; return -1; } } if (type == "int") { PrefixSpan<unsigned int> prefixspan (minsup, minpat, maxpat, all, where, delimiter, verbose); prefixspan.read (cin); prefixspan.run (cout); }else if (type == "short") { PrefixSpan<unsigned short> prefixspan (minsup, minpat, maxpat, all, where, delimiter, verbose); prefixspan.read (cin); prefixspan.run (cout); } else if (type == "char") { PrefixSpan<unsigned char> prefixspan (minsup, minpat, maxpat, all, where, delimiter, verbose); prefixspan.read (cin); prefixspan.run (cout); } else if (type == "string") { PrefixSpan<string> prefixspan (minsup, minpat, maxpat, all, where, delimiter, verbose); prefixspan.read (cin); prefixspan.run (cout); } else { cerr << "Unknown Item Type: " << type << " : choose from [string|int|short|char]" << endl; return -1; } return 0; }
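The errors are almost certainly coming from the POSIX and pre-standard pieces of the source rather than from the algorithm itself: <iostream.h> is a pre-standard header, <strstream> is deprecated in favour of <sstream>, and <unistd.h>/getopt() are POSIX interfaces that a Windows toolchain may not provide. A rough sketch of the portable substitutions (only a sketch; parseArgs is a made-up helper, and the flag names simply mirror the original options):

```cpp
// Portable header choices; the algorithm itself is untouched.
#include <iostream>   // instead of the pre-standard <iostream.h>
#include <sstream>    // instead of the deprecated <strstream>
#include <cstdlib>
#include <string>

// <unistd.h> and getopt() are POSIX. One option is a hand-rolled flag loop:
static void parseArgs(int argc, char **argv,
                      unsigned int &minsup, std::string &type)
{
    for (int i = 1; i < argc; ++i) {
        std::string arg = argv[i];
        if (arg == "-m" && i + 1 < argc)      minsup = std::atoi(argv[++i]);
        else if (arg == "-t" && i + 1 < argc) type = argv[++i];
        // ... handle -M, -L, -d, -a, -w, -v the same way ...
    }
}

// GCC also requires 'typename' on the dependent iterator types inside the
// template, e.g.:
//   for (typename std::map<T, int>::iterator k = tmp.begin(); k != tmp.end(); ++k)
```

With istrstream swapped for std::istringstream(line) and the typename qualifiers added, those are the only non-portable pieces visible in the listing.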

    Read the article

  • file doesn't open, running outside of debugger results in seg fault (c++)

    - by misterich
    Hello (and thanks in advance) I'm in a bit of a quandry, I cant seem to figure out why I'm seg faulting. A couple of notes: It's for a course -- and sadly I am required to use use C-strings instead of std::string. Please dont fix my code (I wont learn that way and I will keep bugging you). please just point out the flaws in my logic and suggest a different function/way. platform: gcc version 4.4.1 on Suse Linux 11.2 (2.6.31 kernel) Here's the code main.cpp: // /////////////////////////////////////////////////////////////////////////////////// // INCLUDES (C/C++ Std Library) #include <cstdlib> /// EXIT_SUCCESS, EXIT_FAILURE #include <iostream> /// cin, cout, ifstream #include <cassert> /// assert // /////////////////////////////////////////////////////////////////////////////////// // DEPENDENCIES (custom header files) #include "dict.h" /// Header for the dictionary class // /////////////////////////////////////////////////////////////////////////////////// // PRE-PROCESSOR CONSTANTS #define ENTER '\n' /// Used to accept new lines, quit program. #define SPACE ' ' /// One way to end the program // /////////////////////////////////////////////////////////////////////////////////// // CUSTOM DATA TYPES /// File Namespace -- keep it local namespace { /// Possible program prompts to display for the user. enum FNS_Prompts { fileName_, /// prints out the name of the file noFile_, /// no file was passed to the program tooMany_, /// more than one file was passed to the program noMemory_, /// Not enough memory to use the program usage_, /// how to use the program word_, /// ask the user to define a word. notFound_, /// the word is not in the dictionary done_, /// the program is closing normally }; } // /////////////////////////////////////////////////////////////////////////////////// // Namespace using namespace std; /// Nothing special in the way of namespaces // /////////////////////////////////////////////////////////////////////////////////// // FUNCTIONS /** prompt() prompts the user to do something, uses enum Prompts for parameter. */ void prompt(FNS_Prompts msg /** determines the prompt to use*/) { switch(msg) { case fileName_ : { cout << ENTER << ENTER << "The file name is: "; break; } case noFile_ : { cout << ENTER << ENTER << "...Sorry, a dictionary file is needed. Try again." << endl; break; } case tooMany_ : { cout << ENTER << ENTER << "...Sorry, you can only specify one dictionary file. Try again." << endl; break; } case noMemory_ : { cout << ENTER << ENTER << "...Sorry, there isn't enough memory available to run this program." << endl; break; } case usage_ : { cout << "USAGE:" << endl << " lookup.exe [dictionary file name]" << endl << endl; break; } case done_ : { cout << ENTER << ENTER << "like Master P says, \"Word.\"" << ENTER << endl; break; } case word_ : { cout << ENTER << ENTER << "Enter a word in the dictionary to get it's definition." << ENTER << "Enter \"?\" to get a sorted list of all words in the dictionary." << ENTER << "... Press the Enter key to quit the program: "; break; } case notFound_ : { cout << ENTER << ENTER << "...Sorry, that word is not in the dictionary." << endl; break; } default : { cout << ENTER << ENTER << "something passed an invalid enum to prompt(). 
" << endl; assert(false); /// something passed in an invalid enum } } } /** useDictionary() uses the dictionary created by createDictionary * - prompts user to lookup a word * - ends when the user enters an empty word */ void useDictionary(Dictionary &d) { char *userEntry = new char; /// user's input on the command line if( !userEntry ) // check the pointer to the heap { cout << ENTER << MEM_ERR_MSG << endl; exit(EXIT_FAILURE); } do { prompt(word_); // test code cout << endl << "----------------------------------------" << endl << "Enter something: "; cin.getline(userEntry, INPUT_LINE_MAX_LEN, ENTER); cout << ENTER << userEntry << endl; }while ( userEntry[0] != NIL && userEntry[0] != SPACE ); // GARBAGE COLLECTION delete[] userEntry; } /** Program Entry * Reads in the required, single file from the command prompt. * - If there is no file, state such and error out. * - If there is more than one file, state such and error out. * - If there is a single file: * - Create the database object * - Populate the database object * - Prompt the user for entry * main() will return EXIT_SUCCESS upon termination. */ int main(int argc, /// the number of files being passed into the program char *argv[] /// pointer to the filename being passed into tthe program ) { // EXECUTE /* Testing code * / char tempFile[INPUT_LINE_MAX_LEN] = {NIL}; cout << "enter filename: "; cin.getline(tempFile, INPUT_LINE_MAX_LEN, '\n'); */ // uncomment after successful debugging if(argc <= 1) { prompt(noFile_); prompt(usage_); return EXIT_FAILURE; /// no file was passed to the program } else if(argc > 2) { prompt(tooMany_); prompt(usage_); return EXIT_FAILURE; /// more than one file was passed to the program } else { prompt(fileName_); cout << argv[1]; // print out name of dictionary file if( !argv[1] ) { prompt(noFile_); prompt(usage_); return EXIT_FAILURE; /// file does not exist } /* file.open( argv[1] ); // open file numEntries >> in.getline(file); // determine number of dictionary objects to create file.close(); // close file Dictionary[ numEntries ](argv[1]); // create the dictionary object */ // TEMPORARY FILE FOR TESTING!!!! //Dictionary scrabble(tempFile); Dictionary scrabble(argv[1]); // creaate the dicitonary object //*/ useDictionary(scrabble); // prompt the user, use the dictionary } // exit return EXIT_SUCCESS; /// terminate program. 
} Dict.h/.cpp #ifndef DICT_H #define DICT_H // /////////////////////////////////////////////////////////////////////////////////// // DEPENDENCIES (Custom header files) #include "entry.h" /// class for dictionary entries // /////////////////////////////////////////////////////////////////////////////////// // PRE-PROCESSOR MACROS #define INPUT_LINE_MAX_LEN 256 /// Maximum length of each line in the dictionary file class Dictionary { public : // // Do NOT modify the public section of this class // typedef void (*WordDefFunc)(const char *word, const char *definition); Dictionary( const char *filename ); ~Dictionary(); const char *lookupDefinition( const char *word ); void forEach( WordDefFunc func ); private : // // You get to provide the private members // // VARIABLES int m_numEntries; /// stores the number of entries in the dictionary Entry *m_DictEntry_ptr; /// points to an array of class Entry // Private Functions }; #endif ----------------------------------- // /////////////////////////////////////////////////////////////////////////////////// // INCLUDES (C/C++ Std Library) #include <iostream> /// cout, getline #include <fstream> // ifstream #include <cstring> /// strchr // /////////////////////////////////////////////////////////////////////////////////// // DEPENDENCIES (custom header files) #include "dict.h" /// Header file required by assignment //#include "entry.h" /// Dicitonary Entry Class // /////////////////////////////////////////////////////////////////////////////////// // PRE-PROCESSOR MACROS #define COMMA ',' /// Delimiter for file #define ENTER '\n' /// Carriage return character #define FILE_ERR_MSG "The data file could not be opened. Program will now terminate." #pragma warning(disable : 4996) /// turn off MS compiler warning about strcpy() // /////////////////////////////////////////////////////////////////////////////////// // Namespace reference using namespace std; // /////////////////////////////////////////////////////////////////////////////////// // PRIVATE MEMBER FUNCTIONS /** * Sorts the dictionary entries. */ /* static void sortDictionary(?) { // sort through the words using qsort } */ /** NO LONGER NEEDED?? * parses out the length of the first cell in a delimited cell * / int getWordLength(char *str /// string of data to parse ) { return strcspn(str, COMMA); } */ // /////////////////////////////////////////////////////////////////////////////////// // PUBLIC MEMBER FUNCTIONS /** constructor for the class * - opens/reads in file * - creates initializes the array of member vars * - creates pointers to entry objects * - stores pointers to entry objects in member var * - ? sort now or later? 
*/ Dictionary::Dictionary( const char *filename ) { // Create a filestream, open the file to be read in ifstream dataFile(filename, ios::in ); /* if( dataFile.fail() ) { cout << FILE_ERR_MSG << endl; exit(EXIT_FAILURE); } */ if( dataFile.is_open() ) { // read first line of data // TEST CODE in.getline(dataFile, INPUT_LINE_MAX_LEN) >> m_numEntries; // TEST CODE char temp[INPUT_LINE_MAX_LEN] = {NIL}; // TEST CODE dataFile.getline(temp,INPUT_LINE_MAX_LEN,'\n'); dataFile >> m_numEntries; /** Number of terms in the dictionary file * \todo find out how many lines in the file, subtract one, ingore first line */ //create the array of entries m_DictEntry_ptr = new Entry[m_numEntries]; // check for valid memory allocation if( !m_DictEntry_ptr ) { cout << MEM_ERR_MSG << endl; exit(EXIT_FAILURE); } // loop thru each line of the file, parsing words/def's and populating entry objects for(int EntryIdx = 0; EntryIdx < m_numEntries; ++EntryIdx) { // VARIABLES char *tempW_ptr; /// points to a temporary word char *tempD_ptr; /// points to a temporary def char *w_ptr; /// points to the word in the Entry object char *d_ptr; /// points to the definition in the Entry int tempWLen; /// length of the temp word string int tempDLen; /// length of the temp def string char tempLine[INPUT_LINE_MAX_LEN] = {NIL}; /// stores a single line from the file // EXECUTE // getline(dataFile, tempLine) // get a "word,def" line from the file dataFile.getline(tempLine, INPUT_LINE_MAX_LEN); // get a "word,def" line from the file // Parse the string tempW_ptr = tempLine; // point the temp word pointer at the first char in the line tempD_ptr = strchr(tempLine, COMMA); // point the def pointer at the comma *tempD_ptr = NIL; // replace the comma with a NIL ++tempD_ptr; // increment the temp def pointer // find the string lengths... +1 to account for terminator tempWLen = strlen(tempW_ptr) + 1; tempDLen = strlen(tempD_ptr) + 1; // Allocate heap memory for the term and defnition w_ptr = new char[ tempWLen ]; d_ptr = new char[ tempDLen ]; // check memory allocation if( !w_ptr && !d_ptr ) { cout << MEM_ERR_MSG << endl; exit(EXIT_FAILURE); } // copy the temp word, def into the newly allocated memory and terminate the strings strcpy(w_ptr,tempW_ptr); w_ptr[tempWLen] = NIL; strcpy(d_ptr,tempD_ptr); d_ptr[tempDLen] = NIL; // set the pointers for the entry objects m_DictEntry_ptr[ EntryIdx ].setWordPtr(w_ptr); m_DictEntry_ptr[ EntryIdx ].setDefPtr(d_ptr); } // close the file dataFile.close(); } else { cout << ENTER << FILE_ERR_MSG << endl; exit(EXIT_FAILURE); } } /** * cleans up dynamic memory */ Dictionary::~Dictionary() { delete[] m_DictEntry_ptr; /// thou shalt not have memory leaks. } /** * Looks up definition */ /* const char *lookupDefinition( const char *word ) { // print out the word ---- definition } */ /** * prints out the entire dictionary in sorted order */ /* void forEach( WordDefFunc func ) { // to sort before or now.... that is the question } */ Entry.h/cpp #ifndef ENTRY_H #define ENTRY_H // /////////////////////////////////////////////////////////////////////////////////// // INCLUDES (C++ Std lib) #include <cstdlib> /// EXIT_SUCCESS, NULL // /////////////////////////////////////////////////////////////////////////////////// // PRE-PROCESSOR MACROS #define NIL '\0' /// C-String terminator #define MEM_ERR_MSG "Memory allocation has failed. Program will now terminate." 
// /////////////////////////////////////////////////////////////////////////////////// // CLASS DEFINITION class Entry { public: Entry(void) : m_word_ptr(NULL), m_def_ptr(NULL) { /* default constructor */ }; void setWordPtr(char *w_ptr); /// sets the pointer to the word - only if the pointer is empty void setDefPtr(char *d_ptr); /// sets the ponter to the definition - only if the pointer is empty /// returns what is pointed to by the word pointer char getWord(void) const { return *m_word_ptr; } /// returns what is pointed to by the definition pointer char getDef(void) const { return *m_def_ptr; } private: char *m_word_ptr; /** points to a dictionary word */ char *m_def_ptr; /** points to a dictionary definition */ }; #endif -------------------------------------------------- // /////////////////////////////////////////////////////////////////////////////////// // DEPENDENCIES (custom header files) #include "entry.h" /// class header file // /////////////////////////////////////////////////////////////////////////////////// // PUBLIC FUNCTIONS /* * only change the word member var if it is in its initial state */ void Entry::setWordPtr(char *w_ptr) { if(m_word_ptr == NULL) { m_word_ptr = w_ptr; } } /* * only change the def member var if it is in its initial state */ void Entry::setDefPtr(char *d_ptr) { if(m_def_ptr == NULL) { m_word_ptr = d_ptr; } }
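Two out-of-bounds writes are visible in the listing itself and would explain the crash (pointed out as a sketch, not a full review): useDictionary() allocates a single char with new char but then lets cin.getline() write up to INPUT_LINE_MAX_LEN bytes into it, and the Dictionary constructor stores its terminators one element past each buffer, since strlen() + 1 already accounts for the '\0'. A corrected fragment of the input loop, reusing the program's own macros:

```cpp
// Corrected fragment of useDictionary(): the buffer must be as large as the
// longest line getline() may write, and delete[] then matches new[].
char *userEntry = new char[INPUT_LINE_MAX_LEN];   // was: new char (a single byte)
do
{
    prompt(word_);
    cin.getline(userEntry, INPUT_LINE_MAX_LEN, ENTER);
} while (userEntry[0] != NIL && userEntry[0] != SPACE);
delete[] userEntry;                               // now pairs with new[]
```

A plain stack array (char userEntry[INPUT_LINE_MAX_LEN]) would avoid the new/delete pairing altogether. Either way, the w_ptr[tempWLen] = NIL and d_ptr[tempDLen] = NIL writes in the constructor should simply be removed, because strcpy() has already terminated the strings and those indices are one past the end of each allocation.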

    Read the article

  • How to create sprites programmatically without using prefabs?

    - by DemonSOCKET
    I have different types of images for different sprites, and I am not certain how many different sprites (images) I will have to show. So I have to create the sprites and apply textures programmatically at runtime. Now, I definitely can't use prefabs, because that would restrict the number of different sprites I can use; also, changing the texture on one sprite prefab instance in game would change all instances of that prefab, which is not acceptable in my case. Is there a way I can create sprites without having to create a static prefab? Wherever I looked for a solution, I got the same answer, "create a prefab", which is exactly what cannot be done in my case.
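Assuming this is Unity (the mention of prefabs suggests it), a sprite can be assembled entirely in code from whatever Texture2D is on hand at runtime, with no prefab involved. A minimal sketch; how the textures are obtained (Resources.Load, a download, raw bytes) is left out, and the class and method names are made up for illustration:

```csharp
// A minimal sketch (assuming Unity): build a sprite at runtime from a
// Texture2D, one GameObject per sprite, no prefab involved.
using UnityEngine;

public static class RuntimeSpriteFactory
{
    public static SpriteRenderer CreateSpriteObject(Texture2D texture, Vector3 position)
    {
        var go = new GameObject("RuntimeSprite");
        go.transform.position = position;

        var renderer = go.AddComponent<SpriteRenderer>();
        renderer.sprite = Sprite.Create(
            texture,
            new Rect(0f, 0f, texture.width, texture.height),
            new Vector2(0.5f, 0.5f),   // pivot at the centre of the image
            100f);                     // pixels per unit

        return renderer;
    }
}
```

Each call produces an independent GameObject with its own SpriteRenderer, so swapping the texture or sprite on one of them later does not affect the others.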

    Read the article

  • Parallelism in .NET – Part 3, Imperative Data Parallelism: Early Termination

    - by Reed
    Although simple data parallelism allows us to easily parallelize many of our iteration statements, there are cases that it does not handle well. In my previous discussion, I focused on data parallelism with no shared state, and where every element is being processed exactly the same. Unfortunately, there are many common cases where this does not happen. If we are dealing with a loop that requires early termination, extra care is required when parallelizing. Often, while processing in a loop, once a certain condition is met, it is no longer necessary to continue processing. This may be a matter of finding a specific element within the collection, or reaching some error case. The important distinction here is that it is often impossible to know, until runtime, what set of elements needs to be processed. In my initial discussion of data parallelism, I mentioned that this technique is a candidate when you can decompose the problem based on the data involved, and you wish to apply a single operation concurrently on all of the elements of a collection. This covers many of the potential cases, but sometimes, after processing some of the elements, we need to stop processing. As an example, let’s go back to our previous Parallel.ForEach example with contacting a customer. However, this time, we’ll change the requirements slightly. In this case, we’ll add an extra condition – if the store is unable to email the customer, we will exit gracefully. The thinking here, of course, is that if the store is currently unable to email, the next time this operation runs, it will handle the same situation, so we can just skip our processing entirely. The original, serial case, with this extra condition, might look something like the following: foreach(var customer in customers) { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { // Exit gracefully if we fail to email, since this // entire process can be repeated later without issue. if (theStore.EmailCustomer(customer) == false) break; customer.LastEmailContact = DateTime.Now; } } Here, we’re processing our loop, but at any point, if we fail to send our email successfully, we just abandon this process, and assume that it will get handled correctly the next time our routine is run. If we try to parallelize this using Parallel.ForEach, as we did previously, we’ll run into an error almost immediately: the break statement we’re using is only valid when enclosed within an iteration statement, such as foreach. When we switch to Parallel.ForEach, we’re no longer within an iteration statement – we’re a delegate running in a method. This needs to be handled slightly differently when parallelized.
Instead of using the break statement, we need to utilize a new class in the Task Parallel Library: ParallelLoopState.  The ParallelLoopState class is intended to allow concurrently running loop bodies a way to interact with each other, and provides us with a way to break out of a loop.  In order to use this, we will use a different overload of Parallel.ForEach which takes an IEnumerable<T> and an Action<T, ParallelLoopState> instead of an Action<T>.  Using this, we can parallelize the above operation by doing: Parallel.ForEach(customers, (customer, parallelLoopState) => { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { // Exit gracefully if we fail to email, since this // entire process can be repeated later without issue. if (theStore.EmailCustomer(customer) == false) parallelLoopState.Break(); else customer.LastEmailContact = DateTime.Now; } }); There are a couple of important points here.  First, we didn’t actually instantiate the ParallelLoopState instance.  It was provided directly to us via the Parallel class.  All we needed to do was change our lambda expression to reflect that we want to use the loop state, and the Parallel class creates an instance for our use.  We also needed to change our logic slightly when we call Break().  Since Break() doesn’t stop the program flow within our block, we needed to add an else case to only set the property in customer when we succeeded.  This same technique can be used to break out of a Parallel.For loop. That being said, there is a huge difference between using ParallelLoopState to cause early termination and to use break in a standard iteration statement.  When dealing with a loop serially, break will immediately terminate the processing within the closest enclosing loop statement.  Calling ParallelLoopState.Break(), however, has a very different behavior. The issue is that, now, we’re no longer processing one element at a time.  If we break in one of our threads, there are other threads that will likely still be executing.  This leads to an important observation about termination of parallel code: Early termination in parallel routines is not immediate.  Code will continue to run after you request a termination. This may seem problematic at first, but it is something you just need to keep in mind while designing your routine.  ParallelLoopState.Break() should be thought of as a request.  We are telling the runtime that no elements that were in the collection past the element we’re currently processing need to be processed, and leaving it up to the runtime to decide how to handle this as gracefully as possible.  Although this may seem problematic at first, it is a good thing.  If the runtime tried to immediately stop processing, many of our elements would be partially processed.  It would be like putting a return statement in a random location throughout our loop body – which could have horrific consequences to our code’s maintainability. In order to understand and effectively write parallel routines, we, as developers, need a subtle, but profound shift in our thinking.  We can no longer think in terms of sequential processes, but rather need to think in terms of requests to the system that may be handled differently than we’d first expect.  
This is more natural to developers who have dealt with asynchronous models previously, but is an important distinction when moving to concurrent programming models. As an example, I’ll discuss the Break() method.  ParallelLoopState.Break() functions in a way that may be unexpected at first.  When you call Break() from a loop body, the runtime will continue to process all elements of the collection that were found prior to the element that was being processed when the Break() method was called.  This is done to keep the behavior of the Break() method as close to the behavior of the break statement as possible. We can see the behavior in this simple code: var collection = Enumerable.Range(0, 20); var pResult = Parallel.ForEach(collection, (element, state) => { if (element > 10) { Console.WriteLine("Breaking on {0}", element); state.Break(); } Console.WriteLine(element); }); If we run this, we get a result that may seem unexpected at first: 0 2 1 5 6 3 4 10 Breaking on 11 11 Breaking on 12 12 9 Breaking on 13 13 7 8 Breaking on 15 15 What is occurring here is that we loop until we find the first element where the element is greater than 10.  In this case, this was found, the first time, when one of our threads reached element 11.  It requested that the loop stop by calling Break() at this point.  However, the loop continued processing until all of the elements less than 11 were completed, then terminated.  This means that it will guarantee that elements 9, 7, and 8 are completed before it stops processing.  You can see our other threads that were running each tried to break as well, but since Break() was called on the element with a value of 11, it decides which elements (0-10) must be processed. If this behavior is not desirable, there is another option.  Instead of calling ParallelLoopState.Break(), you can call ParallelLoopState.Stop().  The Stop() method requests that the runtime terminate as soon as possible , without guaranteeing that any other elements are processed.  Stop() will not stop the processing within an element, so elements already being processed will continue to be processed.  It will prevent new elements, even ones found earlier in the collection, from being processed.  Also, when Stop() is called, the ParallelLoopState’s IsStopped property will return true.  This lets longer running processes poll for this value, and return after performing any necessary cleanup. The basic rule of thumb for choosing between Break() and Stop() is the following. Use ParallelLoopState.Stop() when possible, since it terminates more quickly.  This is particularly useful in situations where you are searching for an element or a condition in the collection.  Once you’ve found it, you do not need to do any other processing, so Stop() is more appropriate. Use ParallelLoopState.Break() if you need to more closely match the behavior of the C# break statement. Both methods behave differently than our C# break statement.  Unfortunately, when parallelizing a routine, more thought and care needs to be put into every aspect of your routine than you may otherwise expect.  This is due to my second observation: Parallelizing a routine will almost always change its behavior. This sounds crazy at first, but it’s a concept that’s so simple its easy to forget.  We’re purposely telling the system to process more than one thing at the same time, which means that the sequence in which things get processed is no longer deterministic.  
It is easy to change the behavior of your routine in very subtle ways by introducing parallelism.  Often, the changes are not avoidable, even if they don’t have any adverse side effects.  This leads to my final observation for this post: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
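To make the Stop() guidance above concrete, here is a small, self-contained sketch (not taken from the original post) of a parallel search that requests termination as soon as a match is found. The collection, the target value, and the IsStopped polling are illustrative assumptions:

    using System;
    using System.Threading.Tasks;

    class ParallelStopExample
    {
        static void Main()
        {
            // Illustrative data: we only want the first element matching a condition.
            int[] numbers = new int[100000];
            for (int i = 0; i < numbers.Length; i++)
                numbers[i] = i;

            int found = -1;

            Parallel.ForEach(numbers, (number, state) =>
            {
                // Poll IsStopped so longer-running bodies can bail out early
                // once another thread has already requested termination.
                if (state.IsStopped)
                    return;

                if (number == 73210)
                {
                    found = number;
                    state.Stop(); // request termination; other iterations may still finish
                }
            });

            Console.WriteLine("Found: {0}", found);
        }
    }

Because termination is only a request, the final Console.WriteLine may run after a handful of extra iterations have completed – exactly the behavior described above.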

    Read the article

  • Bug Triage

    In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things… if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).Feature Triage TeamA subset of the feature crew, the triage team (which has representations from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or other frequency) dependent on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug considering the evidence and make a decision of whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix this) or whether it goes from Active to Resolved (which means it gets assigned back to the requestor for closure or further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify bugs they take to additional higher-level triage teams.Bug Opened = Not Yet AssignedSomeone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, Build number that they observed the issue in, regression details if applicable, how it was found, if a test case exists or needs to be created etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned."Issue" versus "Fix for issue"The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling – still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution, rather than the problem).Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description. Behind this type of a bug usually hides a spec defect or a new feature request.The person opening the bug should focus on describing the issue, rather than the solution. The person indicates what the fix is in their opinion by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out in a separate part, but the triage team has the final say before assigning it. If the solution is lengthy/complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.Not Yet Assigned - Not Yet Assigned (on someone else's plate)If the bug is observed in our feature, but the cause is actually another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate including potentially changing the repro steps. 
In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team.Even though there is no action on a dev on the team, the bug still needs to be tracked. One way of doing this is to implement some notification system that informs the team when the tracked bug changed status; another way is to occasionally run a global query (against all area paths) for bugs that have been opened by a member of the team and follow up with the current owners for stale bugs.Not Yet Assigned - ResolvedThis state transition can only be made by the Feature Triage Team.0. Sometimes the bug description is not clear and in that case it gets Resolved as More Information Needed, so the original requestor can provide it.After understanding what the bug item is about, the first decision is to determine whether it needs to go to a dev.1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.2. If it is "By Design" it gets resolved as such, indicating that the triage team does not think this is a bug.3. If the bug does not repro on latest bits, it is resolved as "No Repro"4. The most painful: If it is decided that we cannot fix it for this release it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resources and time constraints, while the latter is due to deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, there are other factors that contribute to the decision such as: existence of a reasonable workaround, frequency we expect users to encounter the issue, dependencies on other team to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, is it a regression from a previous release, impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, does the fix impact performance goals, and last but not least, severity of bug (e.g. loss of customer data, security threat, crash, hang). The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.Not Yet Assigned - AssignedIf the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer that will do the work, or a Lead that can further assign it to one of his developer team based on a load balancing algorithm of their choosing.Sometimes, the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the checkin for the fix may be gated on code review by the triage team. In these cases, these instructions are provided in the comments section of the bug and when the developer is done they notify the triage team for final decision.Additionally, a Priority and Severity (from 0 to 4) has to be entered, e.g. a P0 means "drop anything you are doing and fix this now" whereas a P4 is something you get to after all P0,1,2,3 bugs are fixed.From a testing perspective, if the bug was found through ad-hoc testing or an external team, the decision is made whether test cases should be added to avoid future regressions. 
This is communicated to the QA team.Assigned - ResolvedWhen the developer receives the bug (they should be checking daily for new bugs on their plate looking at bugs in order of priority and from older to newer) they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else).The developer needs to ensure that when the status is changed to Resolved that it is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person that will verify the fix - the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.Resolved - ??In all cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If the person is not happy, then they change the state from Resolved to Active, thus sending it back to the developer. If the developer and QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is change the status back to Not Yet Assigned with appropriate comments so the triage team can re-review.It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate and after a quick review, the PM would re-assign to an SDET, which is the only role that can close bugs. One exception to this is if the person that filed the bug is external: in that case, we leave it Resolved and assigned to them and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed for verifying the bug fix (e.g. it was a refactoring suggestion bug typically not observable by the user) in which case it is fine to have a developer verify the fix, and ideally a different developer to the one that opened the bug.Other links on bug triageA quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.Your take?If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post welcome at the original blog.
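A small aside on the "global query for bugs opened by a team member" idea mentioned above: using the TFS client object model, such a query might be sketched roughly as follows. This is illustrative only and not part of the original post – the collection URL is a placeholder, and the field filters should be adjusted to your own process template:

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class StaleBugReport
    {
        static void Main()
        {
            // Placeholder server/collection URL – substitute your own.
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();

            // Bugs I opened, in any area path, that are not yet closed.
            const string wiql =
                "SELECT [System.Id], [System.Title], [System.State], [System.AreaPath] " +
                "FROM WorkItems " +
                "WHERE [System.WorkItemType] = 'Bug' " +
                "AND [System.CreatedBy] = @Me " +
                "AND [System.State] <> 'Closed' " +
                "ORDER BY [System.ChangedDate]";

            foreach (WorkItem bug in store.Query(wiql))
            {
                Console.WriteLine("{0,-8} {1,-12} {2}", bug.Id, bug.State, bug.Title);
            }
        }
    }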

    Read the article

  • Cleaner HTML Markup with ASP.NET 4 Web Forms - Client IDs (VS 2010 and .NET 4.0 Series)

    - by ScottGu
    This is the sixteenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post is the first of a few blog posts I’ll be doing that talk about some of the important changes we’ve made to make Web Forms in ASP.NET 4 generate clean, standards-compliant, CSS-friendly markup.  Today I’ll cover the work we are doing to provide better control over the “ID” attributes rendered by server controls to the client. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Clean, Standards-Based, CSS-Friendly Markup One of the common complaints developers have often had with ASP.NET Web Forms is that when using server controls they don’t have the ability to easily generate clean, CSS-friendly output and markup.  Some of the specific complaints with previous ASP.NET releases include: Auto-generated ID attributes within HTML make it hard to write JavaScript and style with CSS Use of tables instead of semantic markup for certain controls (in particular the asp:menu control) make styling ugly Some controls render inline style properties even if no style property on the control has been set ViewState can often be bigger than ideal ASP.NET 4 provides better support for building standards-compliant pages out of the box.  The built-in <asp:> server controls with ASP.NET 4 now generate cleaner markup and support CSS styling – and help address all of the above issues.  Markup Compatibility When Upgrading Existing ASP.NET Web Forms Applications A common question people often ask when hearing about the cleaner markup coming with ASP.NET 4 is “Great - but what about my existing applications?  Will these changes/improvements break things when I upgrade?” To help ensure that we don’t break assumptions around markup and styling with existing ASP.NET Web Forms applications, we’ve enabled a configuration flag – controlRenderingCompatbilityVersion – within web.config that let’s you decide if you want to use the new cleaner markup approach that is the default with new ASP.NET 4 applications, or for compatibility reasons render the same markup that previous versions of ASP.NET used:   When the controlRenderingCompatbilityVersion flag is set to “3.5” your application and server controls will by default render output using the same markup generation used with VS 2008 and .NET 3.5.  When the controlRenderingCompatbilityVersion flag is set to “4.0” your application and server controls will strictly adhere to the XHTML 1.1 specification, have cleaner client IDs, render with semantic correctness in mind, and have extraneous inline styles removed. This flag defaults to 4.0 for all new ASP.NET Web Forms applications built using ASP.NET 4. Any previous application that is upgraded using VS 2010 will have the controlRenderingCompatbilityVersion flag automatically set to 3.5 by the upgrade wizard to ensure backwards compatibility.  You can then optionally change it (either at the application level, or scope it within the web.config file to be on a per page or directory level) if you move your pages to use CSS and take advantage of the new markup rendering. Today’s Cleaner Markup Topic: Client IDs The ability to have clean, predictable, ID attributes on rendered HTML elements is something developers have long asked for with Web Forms (ID values like “ctl00_ContentPlaceholder1_ListView1_ctrl0_Label1” are not very popular).  
Having control over the ID values rendered helps make it much easier to write client-side JavaScript against the output, makes it easier to style elements using CSS, and on large pages can help reduce the overall size of the markup generated. New ClientIDMode Property on Controls ASP.NET 4 supports a new ClientIDMode property on the Control base class.  The ClientIDMode property indicates how controls should generate client ID values when they render.  The ClientIDMode property supports four possible values: AutoID—Renders the output as in .NET 3.5 (auto-generated IDs which will still render prefixes like ctrl00 for compatibility) Predictable (Default)— Trims any “ctl00” ID string and if a list/container control concatenates child ids (example: id=”ParentControl_ChildControl”) Static—Hands over full ID naming control to the developer – whatever they set as the ID of the control is what is rendered (example: id=”JustMyId”) Inherit—Tells the control to defer to the naming behavior mode of the parent container control The ClientIDMode property can be set directly on individual controls (or within container controls – in which case the controls within them will by default inherit the setting): Or it can be specified at a page or usercontrol level (using the <%@ Page %> or <%@ Control %> directives) – in which case controls within the pages/usercontrols inherit the setting (and can optionally override it): Or it can be set within the web.config file of an application – in which case pages within the application inherit the setting (and can optionally override it): This gives you the flexibility to customize/override the naming behavior however you want. Example: Using the ClientIDMode property to control the IDs of Non-List Controls Let’s take a look at how we can use the new ClientIDMode property to control the rendering of “ID” elements within a page.  To help illustrate this we can create a simple page called “SingleControlExample.aspx” that is based on a master-page called “Site.Master”, and which has a single <asp:label> control with an ID of “Message” that is contained with an <asp:content> container control called “MainContent”: Within our code-behind we’ll then add some simple code like below to dynamically populate the Label’s Text property at runtime:   If we were running this application using ASP.NET 3.5 (or had our ASP.NET 4 application configured to run using 3.5 rendering or ClientIDMode=AutoID), then the generated markup sent down to the client would look like below: This ID is unique (which is good) – but rather ugly because of the “ct100” prefix (which is bad). Markup Rendering when using ASP.NET 4 and the ClientIDMode is set to “Predictable” With ASP.NET 4, server controls by default now render their ID’s using ClientIDMode=”Predictable”.  This helps ensure that ID values are still unique and don’t conflict on a page, but at the same time it makes the IDs less verbose and more predictable.  This means that the generated markup of our <asp:label> control above will by default now look like below with ASP.NET 4: Notice that the “ct100” prefix is gone. Because the “Message” control is embedded within a “MainContent” container control, by default it’s ID will be prefixed “MainContent_Message” to avoid potential collisions with other controls elsewhere within the page. 
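Since the configuration screenshots from the original post are not reproduced in this excerpt, the settings being described look roughly like the following sketch. The namespace and file names are illustrative assumptions; the relevant pieces are the clientIDMode and controlRenderingCompatibilityVersion attributes:

    <!-- web.config: application-wide defaults -->
    <system.web>
      <pages controlRenderingCompatibilityVersion="4.0" clientIDMode="Predictable" />
    </system.web>

    <%-- SingleControlExample.aspx (sketch): a single label inside the MainContent placeholder --%>
    <%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master"
        CodeBehind="SingleControlExample.aspx.cs" Inherits="WebApplication1.SingleControlExample" %>
    <asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
        <asp:Label ID="Message" runat="server" />
    </asp:Content>

    <!-- With ClientIDMode=Predictable (the ASP.NET 4 default), the label renders as:
         <span id="MainContent_Message">...</span> -->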
Markup Rendering when using ASP.NET 4 and the ClientIDMode is set to “Static” Sometimes you don’t want your ID values to be nested hierarchically, though, and instead just want the ID rendered to be whatever value you set it as.  To enable this you can now use ClientIDMode=static, in which case the ID rendered will be exactly the same as what you set it on the server-side on your control.  This will cause the below markup to be rendered with ASP.NET 4: This option now gives you the ability to completely control the client ID values sent down by controls. Example: Using the ClientIDMode property to control the IDs of Data-Bound List Controls Data-bound list/grid controls have historically been the hardest to use/style when it comes to working with Web Form’s automatically generated IDs.  Let’s now take a look at a scenario where we’ll customize the ID’s rendered using a ListView control with ASP.NET 4. The code snippet below is an example of a ListView control that displays the contents of a data-bound collection — in this case, airports: We can then write code like below within our code-behind to dynamically databind a list of airports to the ListView above: At runtime this will then by default generate a <ul> list of airports like below.  Note that because the <ul> and <li> elements in the ListView’s template are not server controls, no IDs are rendered in our markup: Adding Client ID’s to Each Row Item Now, let’s say that we wanted to add client-ID’s to the output so that we can programmatically access each <li> via JavaScript.  We want these ID’s to be unique, predictable, and identifiable. A first approach would be to mark each <li> element within the template as being a server control (by giving it a runat=server attribute) and by giving each one an id of “airport”: By default ASP.NET 4 will now render clean IDs like below (no ctl001-like ids are rendered):   Using the ClientIDRowSuffix Property Our template above now generates unique ID’s for each <li> element – but if we are going to access them programmatically on the client using JavaScript we might want to instead have the ID’s contain the airport code within them to make them easier to reference.  The good news is that we can easily do this by taking advantage of the new ClientIDRowSuffix property on databound controls in ASP.NET 4 to better control the ID’s of our individual row elements. To do this, we’ll set the ClientIDRowSuffix property to “Code” on our ListView control.  This tells the ListView to use the databound “Code” property from our Airport class when generating the ID: And now instead of having row suffixes like “1”, “2”, and “3”, we’ll instead have the Airport.Code value embedded within the IDs (e.g: _CLE, _CAK, _PDX, etc): You can use this ClientIDRowSuffix approach with other databound controls like the GridView as well. It is useful anytime you want to program row elements on the client – and use clean/identified IDs to easily reference them from JavaScript code. Summary ASP.NET 4 enables you to generate much cleaner HTML markup from server controls and from within your Web Forms applications.  In today’s post I covered how you can now easily control the client ID values that are rendered by server controls.  In upcoming posts I’ll cover some of the other markup improvements that are also coming with the ASP.NET 4 release. Hope this helps, Scott
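Because the markup screenshots are also missing from this excerpt, here is a sketch of the ListView scenario described above. It assumes the post's Airport class with its Code property; the control and placeholder names are illustrative:

    <asp:ListView ID="airports" runat="server"
                  ClientIDMode="Predictable" ClientIDRowSuffix="Code">
        <LayoutTemplate>
            <ul>
                <li id="itemPlaceholder" runat="server"></li>
            </ul>
        </LayoutTemplate>
        <ItemTemplate>
            <%-- runat="server" plus an id gives each row a clean, predictable client ID --%>
            <li id="airport" runat="server"><%# Eval("Code") %></li>
        </ItemTemplate>
    </asp:ListView>

With ClientIDRowSuffix set to "Code", the rendered <li> IDs end in the airport code (_CLE, _CAK, and so on) rather than a positional row index, which makes them easy to target from JavaScript.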

    Read the article

  • ASP.NET MVC ‘Extendable-hooks’ – ControllerActionInvoker class

    - by nmarun
    There’s a class ControllerActionInvoker in ASP.NET MVC. This can be used as one of an hook-points to allow customization of your application. Watching Brad Wilsons’ Advanced MP3 from MVC Conf inspired me to write about this class. What MSDN says: “Represents a class that is responsible for invoking the action methods of a controller.” Well if MSDN says it, I think I can instill a fair amount of confidence into what the class does. But just to get to the details, I also looked into the source code for MVC. Seems like the base class Controller is where an IActionInvoker is initialized: 1: protected virtual IActionInvoker CreateActionInvoker() { 2: return new ControllerActionInvoker(); 3: } In the ControllerActionInvoker (the O-O-B behavior), there are different ‘versions’ of InvokeActionMethod() method that actually call the action method in question and return an instance of type ActionResult. 1: protected virtual ActionResult InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary<string, object> parameters) { 2: object returnValue = actionDescriptor.Execute(controllerContext, parameters); 3: ActionResult result = CreateActionResult(controllerContext, actionDescriptor, returnValue); 4: return result; 5: } I guess that’s enough on the ‘behind-the-screens’ of this class. Let’s see how we can use this class to hook-up extensions. Say I have a requirement that the user should be able to get different renderings of the same output, like html, xml, json, csv and so on. The user will type-in the output format in the url and should the get result accordingly. For example: http://site.com/RenderAs/ – renders the default way (the razor view) http://site.com/RenderAs/xml http://site.com/RenderAs/csv … and so on where RenderAs is my controller. There are many ways of doing this and I’m using a custom ControllerActionInvoker class (even though this might not be the best way to accomplish this). For this, my one and only route in the Global.asax.cs is: 1: routes.MapRoute("RenderAsRoute", "RenderAs/{outputType}", 2: new {controller = "RenderAs", action = "Index", outputType = ""}); Here the controller name is ‘RenderAsController’ and the action that’ll get called (always) is the Index action. The outputType parameter will map to the type of output requested by the user (xml, csv…). I intend to display a list of food items for this example. 1: public class Item 2: { 3: public int Id { get; set; } 4: public string Name { get; set; } 5: public Cuisine Cuisine { get; set; } 6: } 7:  8: public class Cuisine 9: { 10: public int CuisineId { get; set; } 11: public string Name { get; set; } 12: } Coming to my ‘RenderAsController’ class. I generate an IList<Item> to represent my model. 1: private static IList<Item> GetItems() 2: { 3: Cuisine cuisine = new Cuisine { CuisineId = 1, Name = "Italian" }; 4: Item item = new Item { Id = 1, Name = "Lasagna", Cuisine = cuisine }; 5: IList<Item> items = new List<Item> { item }; 6: item = new Item {Id = 2, Name = "Pasta", Cuisine = cuisine}; 7: items.Add(item); 8: //... 9: return items; 10: } My action method looks like 1: public IList<Item> Index(string outputType) 2: { 3: return GetItems(); 4: } There are two things that stand out in this action method. The first and the most obvious one being that the return type is not of type ActionResult (or one of its derivatives). Instead I’m passing the type of the model itself (IList<Item> in this case). 
We’ll convert this to some type of an ActionResult in our custom controller action invoker class later. The second thing (a little subtle) is that I’m not doing anything with the outputType value that is passed on to this action method. This value will be in the RouteData dictionary and we’ll use this in our custom invoker class as well. It’s time to hook up our invoker class. First, I’ll override the Initialize() method of my RenderAsController class. 1: protected override void Initialize(RequestContext requestContext) 2: { 3: base.Initialize(requestContext); 4: string outputType = string.Empty; 5:  6: // read the outputType from the RouteData dictionary 7: if (requestContext.RouteData.Values["outputType"] != null) 8: { 9: outputType = requestContext.RouteData.Values["outputType"].ToString(); 10: } 11:  12: // my custom invoker class 13: ActionInvoker = new ContentRendererActionInvoker(outputType); 14: } Coming to the main part of the discussion – the ContentRendererActionInvoker class: 1: public class ContentRendererActionInvoker : ControllerActionInvoker 2: { 3: private readonly string _outputType; 4:  5: public ContentRendererActionInvoker(string outputType) 6: { 7: _outputType = outputType.ToLower(); 8: } 9: //... 10: } So the outputType value that was read from the RouteData, which was passed in from the url, is being set here in  a private field. Moving to the crux of this article, I now override the CreateActionResult method. 1: protected override ActionResult CreateActionResult(ControllerContext controllerContext, ActionDescriptor actionDescriptor, object actionReturnValue) 2: { 3: if (actionReturnValue == null) 4: return new EmptyResult(); 5:  6: ActionResult result = actionReturnValue as ActionResult; 7: if (result != null) 8: return result; 9:  10: // This is where the magic happens 11: // Depending on the value in the _outputType field, 12: // return an appropriate ActionResult 13: switch (_outputType) 14: { 15: case "json": 16: { 17: JavaScriptSerializer serializer = new JavaScriptSerializer(); 18: string json = serializer.Serialize(actionReturnValue); 19: return new ContentResult { Content = json, ContentType = "application/json" }; 20: } 21: case "xml": 22: { 23: XmlSerializer serializer = new XmlSerializer(actionReturnValue.GetType()); 24: using (StringWriter writer = new StringWriter()) 25: { 26: serializer.Serialize(writer, actionReturnValue); 27: return new ContentResult { Content = writer.ToString(), ContentType = "text/xml" }; 28: } 29: } 30: case "csv": 31: controllerContext.HttpContext.Response.AddHeader("Content-Disposition", "attachment; filename=items.csv"); 32: return new ContentResult 33: { 34: Content = ToCsv(actionReturnValue as IList<Item>), 35: ContentType = "application/ms-excel" 36: }; 37: case "pdf": 38: string filePath = controllerContext.HttpContext.Server.MapPath("~/items.pdf"); 39: controllerContext.HttpContext.Response.AddHeader("content-disposition", 40: "attachment; filename=items.pdf"); 41: ToPdf(actionReturnValue as IList<Item>, filePath); 42: return new FileContentResult(StreamFile(filePath), "application/pdf"); 43:  44: default: 45: controllerContext.Controller.ViewData.Model = actionReturnValue; 46: return new ViewResult 47: { 48: TempData = controllerContext.Controller.TempData, 49: ViewData = controllerContext.Controller.ViewData 50: }; 51: } 52: } A big method there! The hook I was talking about kinda above actually is here. This is where different kinds / formats of output get returned based on the output type requested in the url. 
When the _outputType is not set (string.Empty, as set in the Global.asax.cs file), the razor view gets rendered (lines 45-50). This is the default behavior in most MVC applications, wherein a view (webform/razor) gets rendered on the browser. As you see here, this gets returned as a ViewResult. But then, for an outputType of json/xml/csv, a ContentResult gets returned, while for pdf, a FileContentResult is returned. Here is how the different kinds of output look: This is how we can leverage this feature of ASP.NET MVC to develop a better application. I’ve used the iTextSharp library to convert to PDF format. Mike gives quite a bit of detail regarding this library here. You can download the sample code here. (You’ll get an option to download once you open the link). Verdict: Hot chocolate: $3; Reebok shoes: $50; Your first car: $3000; Being able to extend a web application: Priceless.
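The custom invoker above calls a ToCsv helper that this excerpt never defines. Purely as an illustration (this is not the author's implementation, and it ignores CSV escaping concerns), such a helper for the post's Item/Cuisine model might look like:

    using System.Collections.Generic;
    using System.Text;

    public static class CsvHelper
    {
        // Hypothetical helper, shown only to complete the picture; it assumes
        // the Item and Cuisine classes defined earlier in the post.
        public static string ToCsv(IList<Item> items)
        {
            var csv = new StringBuilder();
            csv.AppendLine("Id,Name,Cuisine"); // header row

            if (items != null)
            {
                foreach (Item item in items)
                {
                    // A production version would also need to escape commas and quotes.
                    csv.AppendLine(string.Format("{0},{1},{2}",
                        item.Id,
                        item.Name,
                        item.Cuisine == null ? string.Empty : item.Cuisine.Name));
                }
            }

            return csv.ToString();
        }
    }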

    Read the article

  • SQL SERVER – Difference Between GETDATE and SYSDATETIME

    - by pinaldave
Sometimes something so simple slips our mind. I never knew the difference between GETDATE and SYSDATETIME. I just ran a simple query like the following and realized the difference.

    SELECT GETDATE() fn_GetDate, SYSDATETIME() fn_SysDateTime

In the case of GETDATE the precision is to milliseconds, and in the case of SYSDATETIME the precision is to nanoseconds. Now the question is for you – did you know this? Be honest and please share your views. I already admitted that I did not know this in the very first line. Reference: Pinal Dave (http://www.SQLAuthority.com), Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
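A quick follow-up to the query above: the precision difference comes from the return types – GETDATE() returns datetime while SYSDATETIME() returns datetime2(7). One way to confirm this yourself (plain T-SQL, not from the original post) is:

    SELECT
        SQL_VARIANT_PROPERTY(GETDATE(),     'BaseType') AS GetDateType,      -- datetime
        SQL_VARIANT_PROPERTY(GETDATE(),     'Scale')    AS GetDateScale,     -- 3 (milliseconds)
        SQL_VARIANT_PROPERTY(SYSDATETIME(), 'BaseType') AS SysDateTimeType,  -- datetime2
        SQL_VARIANT_PROPERTY(SYSDATETIME(), 'Scale')    AS SysDateTimeScale  -- 7 (100-nanosecond ticks)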

    Read the article

  • SQL SERVER – Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage

    - by pinaldave
    I have mentioned several times on this blog that the best part of blogging is the questions I receive from readers. They are often very interesting. The questions from readers give me a good idea what other readers might be thinking as well. After reading my earlier article Simple Example to Configure Resource Governor – Introduction to Resource Governor – I received an email from a reader and we exchanged a few emails. After exchanging emails we both figured out what is going on. It was indeed interesting and reader suggested to that I should blog about it.  I asked for permission to publish his name but he does not like the attention so we will just call him Jeff. I have converted our emails into chat for easy consumption. Jeff: Your script does not work at all. I think either there is a bug in SQL Server. Pinal: Would you please explain in detail? Jeff: Your code does not limit the CPU usage? Pinal: How did you measure it? Jeff: Well, we have third party tools for it but let us say I have limited the resources for Reporting Services and used your script described in your blog. After that I ran only reporting service workload the CPU is still used more than 100% and it is not limited to 30% as described in your script. Clearly something is wrong somewhere. Pinal: Did you say you ONLY ran reporting server load? Jeff: Yeah, to validate I ran ONLY reporting server load and CPU did not throttle at 30% as per your script. Pinal: Oh! I get it here is the answer - CAP_CPU_PERCENT = 30. Use it. Jeff: What is that, I think your earlier script says it will throttle the Reporting Service workload and Application/OLTP workload and balance it. Pinal: Exactly, that is correct. Jeff: You need to write more in email buddy! Just like your blogs, your answers do not make sense! No Offense! Pinal: Hmm…feedback well taken. Let me try again. In SQL Server 2012 there are a few enhancements with regards to SQL Server Resource Governor. One of the enhancement is how the resources are allocated. Let me explain you with examples. Configuration: [Read Earlier Post] Reporting Workload: MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30 Application/OLTP Workload: MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100 Example 1: If there is only Reporting Workload on the server: SQL Server will not limit usage of CPU to only 30% workload but SQL Server instance will use all available CPU (if needed). In another word in this scenario it will use more than 30% CPU. Example 2: If there is Reproting Workload and heavy Application/OLTP workload: SQL Server will allocate a maximum of 30% CPU resources to Reporting Workload and allocate remaining resources to heavy application/OLTP workload. The reason for this enhancement is for better utilization of the resources. Let us think, if there is only single workload, which we have limited to max CPU usage to 30%. The other unused available CPU resources is now wasted. In this situation SQL Server allows the workload to use more than 30% resources leading to overall improved/optimized performance. However, in the case of multiple workload where lots of resources are needed the limits specified in MAX_CPU_PERCENT are acknowledged. Example 3: If there is a situation where the max CPU workload has to be enforced: This is a very interesting scenario, in the case when the max CPU workload has to be enforced irrespective of the workload and enhanced algorithm, the keyword CAP_CPU_PERCENT is essential. It specifies a hard cap on the CPU bandwidth that all requests in the resource pool will receive. 
It will never let CPU usage for reporting workload to go over 30% in our case. You can use the key word as follows: -- Creating Resource Pool for Report Server CREATE RESOURCE POOL ReportServerPool WITH ( MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30, CAP_CPU_PERCENT=40, MIN_MEMORY_PERCENT=0, MAX_MEMORY_PERCENT=30) GO Notice that there is MAX_CPU_PERCENT=30 and CAP_CPU_PERCENT=40, what it means is that when SQL Server Instance is under heavy load under different workload it will use the maximum CPU at 30%. However, when the SQL Server instance is not under workload it will go over the 30% limit. However, as CAP_CPU_PERCENT is set to 40, it will not go over 40% in any case by limiting the usage of CPU. CAP_CPU_PERCENT puts a hard limit on the resources usage by workload. Jeff: Nice Pinal, you should blog about it. [A day passes by] Pinal: Jeff, it is done! Click here to read it. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Service Broker
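One completing note on the configuration above: a resource pool only takes effect once a workload group uses it and a classifier function routes sessions into that group. The sketch below is illustrative only – the group name, function name, and APP_NAME() filter are assumptions, not taken from the original article:

    -- Workload group that uses the pool created above
    CREATE WORKLOAD GROUP ReportServerGroup
    USING ReportServerPool;
    GO

    -- Classifier (created in master): route reporting sessions into the group.
    -- The APP_NAME() filter is a placeholder; match on whatever identifies
    -- the reporting workload in your environment.
    CREATE FUNCTION dbo.fnResourceClassifier()
    RETURNS SYSNAME
    WITH SCHEMABINDING
    AS
    BEGIN
        DECLARE @WorkloadGroup SYSNAME = N'default';
        IF APP_NAME() LIKE N'%Report Server%'
            SET @WorkloadGroup = N'ReportServerGroup';
        RETURN @WorkloadGroup;
    END
    GO

    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnResourceClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;
    GO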

    Read the article

  • SOA &amp; E2.0 Partner Community Forum XIII registration is open

    - by Jürgen Kress
    INVITATION TO THE ORACLE SOA AND E2.0 PARTNER COMMUNITY FORUM Do you want to learn about how to sell the value of Fusion Middleware by combining SOA and E2.0 Solutions? We would like to invite you to become updated and trained at our SOA and E2.0 Partner Community Forum March on 15th and 16th 2011 in Utrecht, The Netherlands. Keynote: Andrew Sutherland and Andrew Gilboy The Oracle SOA and E2.0 Partner Community Forum is a wonderful opportunity to: learn how to sell the value of Fusion Middleware bij combining SOA and E2.0 solutions meet with Oracle SOA and E2.0 Product management exchange knowledge learn from successful SOA, BPM, WebCenter and UCM implementations understand Oracle's Fusion Applications Strategy network within the Oracle SOA Partner Community and the Oracle E2.0 Partner Community During this highly informative event you can learn about partner success stories, participate in an array of break out sessions, exchange information with other partners and enjoy a vibrant panel discussion. Additionally to the SOA and E2.0 Partner Community Forum, you can participate in technical hands on workshops on March 17th and 18th. The goal of these workshops is to prepare you for customer implementations. Places are limited, so don't delay and register now by clicking here. Registration takes a few minutes and is free of charge, except in case of cancellation or no show (cancellation fee € 150). For more information, please visit our website. Best regards Jürgen Kress & Hans Blaas SOA & E2.0 Partner Adoption EMEA Agenda March 15th 2011 Welcome & Introduction Keynote Oracle Middleware Strategy and information on Application Grid and Exalogic Andrew Sutherland, SVP Middleware Sales EMEA, Oracle Keynote Managing Online Customer, Partner and Employee Engagement with Oracle E2.0 Solutions Andrew Gilboy, VP E2.0 Sales EMEA, Oracle Partner SOA/BPM Reference Case Partner WebCenter/UCM Reference Case SOA Suite PS3 David Shaffer, VP Product Management, Oracle Why Specialization is important for Partners Nick Kritikos, Hans Blaas & Jürgen Kress, Alliances & Channels, Oracle   Agenda March 16th 2011 Welcome & Introduction Day II Breakout round 1 - SOA Suite 11g PS3 & OSB - Importance of ADF & JDeveloper - SOA Security IDM - WebCenter PS3, Whats new - E2.0 Sales Plays Breakout round 2 - WebCenter PS3, Whats new - Application Management Enterprise manager and Amberpoint - ADF/WebCenter 11g integration with BPM Suite 11g - Importance of ADF & JDeveloper - JCAPS & OC4J migration opportunities for service business Breakout round 3 - BPM 11g: Whats new - Universal Content management 11g - SOA Security Management - E2.0 Surrounding Products: ATG, Documaker, Primavera - Middleware Industry Value Propositions & Sales Play Fusion Application SOA & E2.0 Summary & Closing For registration and additional information, please visit our website. For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Community,SOA,SOA Partner Community Forum,SOA Community Forum,OPN,Jürgen Kress

    Read the article

  • Time Warp

    - by Jesse
    It’s no secret that daylight savings time can wreak havoc on systems that rely heavily on dates. The system I work on is centered around recording dates and times, so naturally my co-workers and I have seen our fair share of date-related bugs. From time to time, however, we come across something that we haven’t seen before. A few weeks ago the following error message started showing up in our logs: “The supplied DateTime represents an invalid time. For example, when the clock is adjusted forward, any time in the period that is skipped is invalid.” This seemed very cryptic, especially since it was coming from areas of our application that are typically only concerned with capturing date-only (no explicit time component) from the user, like reports that take a “start date” and “end date” parameter. For these types of parameters we just leave off the time component when capturing the date values, so midnight is used as a “placeholder” time. How is midnight an “invalid time”? Globalization Is Hard Over the last couple of years our software has been rolled out to users in several countries outside of the United States, including Brazil. Brazil begins and ends daylight savings time at midnight on pre-determined days of the year. On October 16, 2011 at midnight many areas in Brazil began observing daylight savings time at which time their clocks were set forward one hour. This means that at the instant it became midnight on October 16, it actually became 1:00 AM, so any time between 12:00 AM and 12:59:59 AM never actually happened. Because we store all date values in the database in UTC, always adjust any “local” dates provided by a user to UTC before using them as filters in a query. The error we saw was thrown by .NET when trying to convert the Brazilian local time of 2011-10-16 12:00 AM to UTC since that local time never actually existed. We hadn’t experienced this same issue with any of our US customers because the daylight savings time changes in the US occur at 2:00 AM which doesn’t conflict with our “placeholder” time of midnight. Detecting Invalid Times In .NET you might use code similar to the following for converting a local time to UTC: var localDate = new DateTime(2011, 10, 16); //2011-10-16 @ midnight const string timeZoneId = "E. South America Standard Time"; //Windows system timezone Id for "Brasilia" timezone. var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId); var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone); The code above throws the “invalid time” exception referenced above. We could try to detect whether or not the local time is invalid with something like this: var localDate = new DateTime(2011, 10, 16); //2011-10-16 @ midnight const string timeZoneId = "E. South America Standard Time"; //Windows system timezone Id for "Brasilia" timezone. var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId); if (localTimeZone.IsInvalidTime(localDate)) localDate = localDate.AddHours(1); var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone); This code works in this particular scenario, but it hardly seems robust. It also does nothing to address the issue that can arise when dealing with the ambiguous times that fall around the end of daylight savings. When we roll the clocks back an hour they record the same hour on the same day twice in a row. To continue on with our Brazil example, on February 19, 2012 at 12:00 AM, it will immediately become February 18, 2012 at 11:00 PM all over again. 
In this scenario, how should we interpret February 18, 2011 11:30 PM? Enter Noda Time I heard about Noda Time, the .NET port of the Java library Joda Time, a little while back and filed it away in the back of my mind under the “sounds-like-it-might-be-useful-someday” category.  Let’s see how we might deal with the issue of invalid and ambiguous local times using Noda Time (note that as of this writing the samples below will only work using the latest code available from the Noda Time repo on Google Code. The NuGet package version 0.1.0 published 2011-08-19 will incorrectly report unambiguous times as being ambiguous) : var localDateTime = new LocalDateTime(2011, 10, 16, 0, 0); const string timeZoneId = "Brazil/East"; var timezone = DateTimeZone.ForId(timeZoneId); var localDateTimeMaping = timezone.MapLocalDateTime(localDateTime); ZonedDateTime unambiguousLocalDateTime; switch (localDateTimeMaping.Type) { case ZoneLocalMapping.ResultType.Unambiguous: unambiguousLocalDateTime = localDateTimeMaping.UnambiguousMapping; break; case ZoneLocalMapping.ResultType.Ambiguous: unambiguousLocalDateTime = localDateTimeMaping.EarlierMapping; break; case ZoneLocalMapping.ResultType.Skipped: unambiguousLocalDateTime = new ZonedDateTime( localDateTimeMaping.ZoneIntervalAfterTransition.Start, timezone); break; default: throw new InvalidOperationException(string.Format("Unexpected mapping result type: {0}", localDateTimeMaping.Type)); } var convertedDateTime = unambiguousLocalDateTime.ToInstant().ToDateTimeUtc(); Let’s break this sample down: I’m using the Noda Time ‘LocalDateTime’ object to represent the local date and time. I’ve provided the year, month, day, hour, and minute (zeros for the hour and minute here represent midnight). You can think of a ‘LocalDateTime’ as an “invalidated” date and time; there is no information available about the time zone that this date and time belong to, so Noda Time can’t make any guarantees about its ambiguity. The ‘timeZoneId’ in this sample is different than the ones above. In order to use the .NET TimeZoneInfo class we need to provide Windows time zone ids. Noda Time expects an Olson (tz / zoneinfo) time zone identifier and does not currently offer any means of mapping the Windows time zones to their Olson counterparts, though project owner Jon Skeet has said that some sort of mapping will be publicly accessible at some point in the future. I’m making use of the Noda Time ‘DateTimeZone.MapLocalDateTime’ method to disambiguate the original local date time value. This method returns an instance of the Noda Time object ‘ZoneLocalMapping’ containing information about the provided local date time maps to the provided time zone.  The disambiguated local date and time value will be stored in the ‘unambiguousLocalDateTime’ variable as an instance of the Noda Time ‘ZonedDateTime’ object. An instance of this object represents a completely unambiguous point in time and is comprised of a local date and time, a time zone, and an offset from UTC. Instances of ZonedDateTime can only be created from within the Noda Time assembly (the constructor is ‘internal’) to ensure to callers that each instance represents an unambiguous point in time. The value of the ‘unambiguousLocalDateTime’ might vary depending upon the ‘ResultType’ returned by the ‘MapLocalDateTime’ method. 
There are three possible outcomes: If the provided local date time is unambiguous in the provided time zone I can immediately set the ‘unambiguousLocalDateTime’ variable from the ‘Unambiguous Mapping’ property of the mapping returned by the ‘MapLocalDateTime’ method. If the provided local date time is ambiguous in the provided time zone (i.e. it falls in an hour that was repeated when moving clocks backward from Daylight Savings to Standard Time), I can use the ‘EarlierMapping’ property to get the earlier of the two possible local dates to define the unambiguous local date and time that I need. I could have also opted to use the ‘LaterMapping’ property in this case, or even returned an error and asked the user to specify the proper choice. The important thing to note here is that as the programmer I’ve been forced to deal with what appears to be an ambiguous date and time. If the provided local date time represents a skipped time (i.e. it falls in an hour that was skipped when moving clocks forward from Standard Time to Daylight Savings Time),  I have access to the time intervals that fell immediately before and immediately after the point in time that caused my date to be skipped. In this case I have opted to disambiguate my local date and time by moving it forward to the beginning of the interval immediately following the skipped period. Again, I could opt to use the end of the interval immediately preceding the skipped period, or raise an error depending on the needs of the application. The point of this code is to convert a local date and time to a UTC date and time for use in a SQL Server database, so the final ‘convertedDate’  variable (typed as a plain old .NET DateTime) has its value set from a Noda Time ‘Instant’. An 'Instant’ represents a number of ticks since 1970-01-01 at midnight (Unix epoch) and can easily be converted to a .NET DateTime in the UTC time zone using the ‘ToDateTimeUtc()’ method. This sample is admittedly contrived and could certainly use some refactoring, but I think it captures the general approach needed to take a local date and time and convert it to UTC with Noda Time. At first glance it might seem that Noda Time makes this “simple” code more complicated and verbose because it forces you to explicitly deal with the local date disambiguation, but I feel that the length and complexity of the Noda Time sample is proportionate to the complexity of the problem. Using TimeZoneInfo leaves you susceptible to overlooking ambiguous and skipped times that could result in run-time errors or (even worse) run-time data corruption in the form of a local date and time being adjusted to UTC incorrectly. I should point out that this research is my first look at Noda Time and I know that I’ve only scratched the surface of its full capabilities. I also think it’s safe to say that it’s still beta software for the time being so I’m not rushing out to use it production systems just yet, but I will definitely be tinkering with it more and keeping an eye on it as it progresses.
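One footnote on the TimeZoneInfo discussion earlier in the post: the BCL can at least detect the ambiguous fall-back case, although it still leaves the decision to the caller. A small illustrative check, not from the original post – whether the date below is actually ambiguous depends on the time zone data installed on the machine:

    var ambiguousLocal = new DateTime(2012, 2, 18, 23, 30, 0); // 11:30 PM on the night the clocks roll back
    var brazilTimeZone = TimeZoneInfo.FindSystemTimeZoneById("E. South America Standard Time");

    if (brazilTimeZone.IsAmbiguousTime(ambiguousLocal))
    {
        // Both the daylight and the standard offset are valid for this local time;
        // the caller still has to decide which one was intended.
        foreach (TimeSpan offset in brazilTimeZone.GetAmbiguousTimeOffsets(ambiguousLocal))
            Console.WriteLine("Possible UTC offset: {0}", offset);
    }
    else
    {
        Console.WriteLine("Not ambiguous according to this machine's time zone data.");
    }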

    Read the article
