Search Results

Search found 1308 results on 53 pages for 'dr dre'.

Page 31 of 53

  • WORK PLANS in 10 Minutes

    Most productive people use some form of a work plan to sort out what they need to do and to guide them when they are doing the work. People generally agree that they need a work plan for new work, b... [Author: Dr Neil Miller - Computers and Internet - August 24, 2009]

    Read the article

  • Integrating Oracle Argus Safety with other Clinical Systems Using Argus Interchange's E2B Functionality

    - by Roxana Babiciu
    Over the past few years, companies conducting clinical trials have increasingly been interested in integrating their pharmacovigilance systems with other clinical and safety solutions to streamline their processes. Please join BioPharm Systems’ Dr. Rodney Lemery, vice president of safety and pharmacovigilance, for a one-hour webinar in which he will discuss the ability to integrate Oracle’s Argus Safety with other applications using the safety system’s inherent extended E2B functionality. Read more here

    Read the article

  • Social Networking at Professional Events

    Dr. Masha Petrova compresses, into a small space, much good advice on networking with other professional people. She draws from her own experience as a technical expert to provide a detailed checklist of things you should and shouldn't do at conferences or tradeshows to be a successful 'networker'. As usual, she delivers sage advice with a dash of humour.

    Read the article

  • Proven Approach to Financial Progress Using Modern Best Practice

    - by Oracle Accelerate for Midsize Companies
    by Larry Simcox, Sr. Director, Oracle Midsize Programs. Top-performing organizations generate 25 percent higher profit margins and grow at twice the rate of their competitors. How do they do it? Recently, Dr. Stephen G. Timme, President of FinListics Solutions and Adjunct Professor at the Georgia Institute of Technology, joined me on a webcast to answer that question. I've known Dr. Timme since my days at G-log, when we worked together to help customers determine the ROI of transportation management solutions. We were also joined by Steve Cox, Vice President of Oracle Midsize Programs, who recently published an Oracle e-book, "Modern Best Practice Explained". In this webcast, Cox provides his perspective on how the best-performing companies are moving from best practice to modern best practice. Watch the webcast replay and you'll learn about an easy-to-follow, top-down approach to: identify processes that should be targeted for improvement; leverage a modern best practice maturity model to start a path to progress; link financial performance gaps to operational KPIs; improve cash flow by benchmarking key financial metrics; and develop intelligent estimates of achievable cash flow benefits. Click HERE to watch a replay of the webcast. You might also be interested in the following: Video: Modern Best Practices Defined; AppCast: Modern Best Practices for Growing Companies. Looking for more news and information about Oracle Solutions for Midsize Companies? Read the latest Oracle for Midsize Companies Newsletter and sign up to receive the latest communications from Oracle's industry leaders and experts. Larry Simcox, Senior Director, Oracle Midsize Programs, is responsible for supporting and creating marketing content, communications, sales and partner program support for Oracle's go-to-market activities for midsize companies. I have over 17 years of experience helping customers identify the value and ROI of their IT investments. I live in Charlotte, NC with my family and my dog Dingo. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • Amazon partners with Nokia to build its own mapping service; another major mobile player turns its back on Google

    Amazon partners with Nokia to build its own mapping service. Another major mobile player turns its back on Google Maps. After Apple, which will drop Google Maps for good with the imminent release of iOS 6, it is now Amazon's turn to launch its own mapping service on its Kindle Fire and Kindle Fire HD tablets. In a statement sent to the press, Nokia spokesman Dr Sebastian Kurme confirms that Amazon is partnering with Nokia and building on its NLP location platform to create a mapping service ...

    Read the article

  • Using PHP to place database rows into an array?

    - by Hamed Szilazi
    I was just wondering how I could perform an SQL query and then place each row into a new array. For example, let's say a table looked like the following: $people = mysql_query("SELECT * FROM friends") Output: | ID | Name | Age | --1----tom----32 --2----dan----22 --3----pat----52 --4----nik----32 --5----dre----65 How could I create a multidimensional array that works in the following way: the first row's second column could be accessed using $people[0][1], and the fifth row's third column could be accessed using $people[4][2]? How would I go about constructing this type of array? Sorry if this is a strange question; it's just that I am new to PHP and SQL and would like to know how to access data directly. Performance and speed are not an issue, as I am just writing small test scripts to get to grips with the language.
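
    A minimal PHP sketch of the pattern described above, using the same mysql_* extension as the question's snippet (newer code would use mysqli or PDO); the expected output values assume the sample table shown:

        $result = mysql_query("SELECT * FROM friends");   // run the query once

        $people = array();
        while ($row = mysql_fetch_row($result)) {
            // mysql_fetch_row() returns one row as a numerically indexed array,
            // so appending each row builds a 2-D array: $people[row][column]
            $people[] = $row;
        }

        echo $people[0][1];   // "tom"  (first row, second column: Name)
        echo $people[4][2];   // 65     (fifth row, third column: Age)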

    Read the article

  • How to allow writing to a mounted NFS partition

    - by Cerin
    How do you allow a specific user to write to an NFS partition? I've mounted an NFS share on my localhost (a Fedora install), and I can read and write as root, but I'm unable to write as the apache user, even though all the files and directories in the share on my localhost and remote host are owned by apache. For example, I've mounted it via this line in my /etc/fstab: remotehost:/data/media /data/media nfs _netdev,soft,intr,rw,bg 0 0 And both locations are owned by apache: [root@remotehost ~]# ls -la /data total 24 drwxr-xr-x. 6 root root 4096 Jan 6 2011 . dr-xr-xr-x. 28 root root 4096 Oct 31 2011 .. drwxr-xr-x 4 apache apache 4096 Jan 14 2011 media [root@localhost ~]# ls -la /data total 16 drwxr-xr-x 4 apache apache 4096 Dec 7 2011 . dr-xr-xr-x. 27 root root 4096 Jun 11 15:51 .. drwxrwxrwx 5 apache apache 4096 Jan 31 2011 media However, when I try to write as the apache user, I get a "Permission denied" error. [root@localhost ~]# sudo -u apache touch /data/media/test.txt touch: cannot touch `/data/media/test.txt': Permission denied But of course it works fine as root. What am I doing wrong?
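
    A hedged sketch of the usual suspects rather than a confirmed fix: NFS authorizes clients by numeric UID/GID, so "apache" must map to the same IDs on both hosts, and the export on the server must allow writes. The subnet and exports line below are hypothetical:

        # 1. Compare the numeric IDs on both machines; they must match.
        [root@localhost ~]# id apache
        [root@remotehost ~]# id apache

        # 2. Example /etc/exports entry on remotehost allowing read-write access
        #    (no_root_squash only if root on the client really needs root access).
        /data/media  192.168.1.0/24(rw,sync,no_root_squash)

        # 3. Re-export on the server and remount on the client after the change.
        [root@remotehost ~]# exportfs -ra
        [root@localhost ~]# mount -o remount /data/media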

    Read the article

  • Programmatically adding an object and selecting the corresponding row does not make it become the CurrentRow

    - by Robert
    I'm in a struggle with the DataGridView: I do have a BindingList of some simple objects that implement INotifyPropertyChanged. The DataGridView's datasource is set to this BindingList. Now I need to add an object to the list by hitting the "+" key. When an object is added, it should appear as a new row and it shall become the current row. As the CurrentRow-property is readonly, I iterate through all rows, check if its bound item is the newly created object, and if it is, I set this row to "Selected = true;" The problem: although the new object and thereby a new row gets inserted and selected in the DataGridView, it still is not the CurrentRow! It does not become the CurrentRow unless I do a mouse click into this new row. In this test program you can add new objects (and thereby rows) with the "+" key, and with the "i" key the data-bound object of the CurrentRow is shown in a MessageBox. How can I make a newly added object become the CurrentObject? Thanks for your help! Here's the sample: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Windows.Forms; namespace WindowsFormsApplication1 { public partial class Form1 : Form { BindingList<item> myItems; public Form1() { InitializeComponent(); myItems = new BindingList<item>(); for (int i = 1; i <= 10; i++) { myItems.Add(new item(i)); } dataGridView1.DataSource = myItems; } public void Form1_KeyDown(object sender, KeyEventArgs e) { if (e.KeyCode == Keys.Add) { addItem(); } } public void addItem() { item i = new item(myItems.Count + 1); myItems.Add(i); foreach (DataGridViewRow dr in dataGridView1.Rows) { if (dr.DataBoundItem == i) { dr.Selected = true; } } } private void btAdd_Click(object sender, EventArgs e) { addItem(); } private void dataGridView1_KeyDown(object sender, KeyEventArgs e) { if (e.KeyCode == Keys.Add) { addItem(); } if (e.KeyCode == Keys.I) { MessageBox.Show(((item)dataGridView1.CurrentRow.DataBoundItem).title); } } } public class item : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; private int _id; public int id { get { return _id; } set { this.title = "This is item number " + value.ToString(); _id = value; InvokePropertyChanged(new PropertyChangedEventArgs("id")); } } private string _title; public string title { get { return _title; } set { _title = value; InvokePropertyChanged(new PropertyChangedEventArgs("title")); } } public item(int id) { this.id = id; } #region Implementation of INotifyPropertyChanged public void InvokePropertyChanged(PropertyChangedEventArgs e) { PropertyChangedEventHandler handler = PropertyChanged; if (handler != null) handler(this, e); } #endregion } }
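
    A hedged sketch of one common fix (an assumption on my part, not something stated in the question): selecting a row only highlights it, while assigning the grid's CurrentCell is what actually moves CurrentRow. Reusing the names from the question's sample:

        public void addItem()
        {
            item i = new item(myItems.Count + 1);
            myItems.Add(i);

            foreach (DataGridViewRow dr in dataGridView1.Rows)
            {
                if (dr.DataBoundItem == i)
                {
                    dr.Selected = true;
                    // Setting CurrentCell is what makes the grid treat this row
                    // as the CurrentRow; Selected = true alone does not.
                    dataGridView1.CurrentCell = dr.Cells[0];
                }
            }
        }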

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process. Implementation: When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows: Requirements Analysis Design and Development Implementation Testing Deployment to Production Maintenance In an on-premise environment, this often equates to the following process map: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to on-premise Testing servers. If no server capacity available, more resources procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. performed. Deployment to Production Server team involved to select platform and environments with available capacity. If no server capacity available, standard budgeting and procurement process followed. If no server capacity available, systems built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place. Maintenance Code changes evaluated and altered according to need. In a distributed computing environment like Windows Azure, the process maps a bit differently: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to Azure. Manual and automated functional, load, security, etc. performed. Deployment to Production Code deployed to Azure. Point in time backup and recovery plans configured and put into place.(HA/DR and automated backups already present in Azure fabric) Maintenance Code changes evaluated and altered according to need. This means that several steps can be removed or expedited. 
It also means that the business function requesting the application can be held directly responsible for funding that request, speeding the process further since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the “Azure Marketplace”. In effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt into their current code, possibly saving development time. Resources: Whitepaper download - What is ALM?  http://go.microsoft.com/?linkid=9743693  Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690  LiveMeeting Recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
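
    A minimal C sketch, not from the original article, of the access-order point: the same reduction touches memory with stride 1 in one version and stride N in the other, and only the first is cache friendly.

        #include <stdio.h>

        #define N 1024

        static double a[N][N];

        /* C stores arrays in row-major order, so the inner loop should walk
           the last index (j) to touch consecutive memory locations. */
        double sum_row_order(void)
        {
            double sum = 0.0;
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    sum += a[i][j];      /* stride 1: cache friendly */
            return sum;
        }

        /* Swapping the loops walks the first index innermost, giving a stride
           of N doubles per access and far more cache misses. */
        double sum_column_order(void)
        {
            double sum = 0.0;
            for (int j = 0; j < N; j++)
                for (int i = 0; i < N; i++)
                    sum += a[i][j];      /* stride N: cache hostile */
            return sum;
        }

        int main(void)
        {
            printf("%f %f\n", sum_row_order(), sum_column_order());
            return 0;
        }
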
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is L1: for i L2: for i L3: for i for l for l for j for k for j for k for j for k for l So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, than the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to the entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 = i < N, 0 = j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index: for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0)) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is: for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop. Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
Then store the problem’s initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers. Optimizing loop structures Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet: MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as: MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200 thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays. 
I found such an occasion in code 10 where I split the loop do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do into two disjoint loops do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) end do end do do i = 1, n do j = 1, m C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do Conclusions Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in questions have not studied the books or manual available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. 
These agencies should require graduate schools to offer a course in optimization and parallel programming as a condition of funding. About the Author John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Have you ever wondered...?

    - by diana.gray
    I've often wondered why folks do the same thing over and over. For some of us, it's because we "don't get it" and there's an abundance of TV talk shows that will help us analyze the why of it. Dr. Phil is all too eager to ask "...and how's that working for you?". But I'm not referring to being stuck in a destructive pattern or denial. I'm really talking about doing something over and over because you have found a joy, a comfort, a boost of energy from an activity or event. For example, how many times have I planted bulbs in November or December only to be amazed by their reach, colors, and fragrance in early spring? Or baked fresh cookies and allowed the aroma to fill the house? Or kissed a sleeping baby held gently in my arms and been reminded of how tiny and fragile we all are. I've often wondered why it is that I get so much out of something I've done so many times. I think it's because I've changed. The activity may be the same but in the preceding days, months and years I've had new experiences, challenges, joys and sorrows that have shaped me. I'm different. The same is true about attending the Professional Businesswomen of California (PBWC) conference. Although the conference is an annual event held at San Francisco's Moscone Center, I still enjoy being with 3,000 other women like me. Yes, we work at different companies and in different industries, have different lifestyles and are at different stages in our professional careers and personal lives; but we are all alike in that we bring the NEW me each year that we attend. This year I can cheer when Safra Catz, President of Oracle, encourages us to trust our intuition; that "if something doesn't make sense, it doesn't make sense". And I can warmly introduce myself to Lisa Askins, Cheryl Melching's business partner at Center Stage Group, when I would have been too intimidated to do so last year. This year I can commit to new challenges such as "no whining, no excuses and no gossip" as suggested by Roxanne Emmerich, a goal that I would have wavered on last year. I can also embrace the suggestion given by Dr. Ian Smith to "spend one hour each day" on me - giving myself time to rejuvenate. A friend, when asked if she was attending PBWC this year, said "I've attended the conference several times and there's nothing new!" My perspective is that WE are what makes PBWC's annual conference new. We are far different in 2010 than we were in 2009. We are learning, growing, developing and shedding and that's what makes the conference fresh, vibrant, rewarding, and lasting. It is the diversity of women coming together that makes it new. By sharing our experiences, we discover. By meeting with one another professionally and personally, we connect. And by applying the wisdom learned, we shine. We are reNEW-ed. It shows in our fresh ideas, confident interactions, strategic decisions and successful businesses. This refreshed approach is what our companies want and need, our families depend on, our communities and nation look to for creative solutions to pressing concerns. Thanks Oracle for your continued support and thanks PBWC for providing an annual day to be reNEW-ed.

    Read the article

  • Live Event: OTN Architect Day: Cloud Computing - Two weeks and counting

    - by Bob Rhubart
    In just two weeks, architects and others will gather at the Oracle Conference Center in Redwood Shores, CA for the first Oracle Technology Network Architect Day event of 2013. This event focuses on Cloud Computing and features sessions built around real-world examples of cloud computing implementations. When: Tuesday, July 9, 2013, 8:30am - 12:30pm Where: Oracle Conference Center, 350 Oracle Pkwy, Redwood City, CA 94065 Register now. It's free! Here's the agenda: 8:30am - 9:00am Registration and Continental Breakfast 9:00am - 9:45am Keynote 21st Century IT | Dr. James Baty, VP, Global Enterprise Architecture Program, Oracle Imagine a time long, long ago. A time when servers were certified and dedicated to specific applications, when anything posted on an enterprise web site was from restricted, approved channels, and when we tried to limit the growth of 'dirty' data and storage. Today, applications are services running in the multi-tenant hybrid cloud. Companies beg their customers to tweet them, friend them, and publicly rate their products. And constantly analyzing a deluge of Internet, social and sensor data is the key to creating the next super-successful product, or capturing an evil terrorist. The old IT architecture was planned, dedicated, stable, controlled, with separate and well-defined roles. The new architecture is shared, dynamic, continuous, XaaS, DevOps. This keynote session describes the challenges and opportunities that the new business / IT paradigms present to the IT architecture and architects. 9:45am - 10:30am Technical Session Oracle Cloud: A Case Study in Building a Cloud | Anbu Krishnaswami, Enterprise Architect, Oracle Building a Cloud can be challenging thanks to the complex requirements unique to Cloud computing and the massive scale typically associated with Cloud. Cloud providers can take an Infrastructure as a Service (IaaS) approach and build a cloud on virtualized commodity hardware, or they can take the Platform as a Service (PaaS) path, a service-oriented approach based on pre-configured, integrated, engineered systems. This presentation uses the Oracle Cloud itself as a case study in the use of engineered systems, demonstrating how the technical design of engineered systems is leveraged for building PaaS and SaaS Cloud services and a Cloud management infrastructure. The presentation will also explore the principles, patterns, best practices, and architecture views provided in Oracle's Cloud reference architecture. 10:30am - 10:45am Break 10:45am - 11:30am Technical Session Database as a Service | Michael Timpanaro-Perrotta, Director, Product Management, Oracle Database Cloud New applications are now commonly built in a Cloud model, where the database is consumed as a service, and many established business processes are beginning to migrate to database as a service (DBaaS). This adoption of DBaaS is made possible by the availability of new capabilities in the database that enable resource pooling, dynamic resource management, model-based provisioning, metered use, and effective quality-of-service controls. This session will examine the catalog of database services at a large commercial bank to understand how these capabilities are enabling DBaaS for a wide range of needs within the enterprise. 11:30am - 12:00pm Panel Q&A Dr. James Baty, Anbu Krishnaswami, and Michael Timpanaro-Perrotta respond to audience questions. Registration is free, but seating is limited, so register now.

    Read the article

  • 65536% Autogrowth!

    - by Tara Kizer
    Twice a year, we move our production systems to our disaster recovery site. Last Saturday night was one of those days. There are about 50 SQL Server databases to be moved to the DR site, which is done via database mirroring. It takes only a few seconds to fail over, but some databases involve a bit more work, such as setting up replication. Everything went relatively smoothly, but we encountered a weird bug on our most mission-critical system. After everything was successfully failed over to the DR site, we noticed that mirroring was in a suspended state on one of the databases. We thought we had run into a SQL Server 2005 bug that we had been encountering and were working with Microsoft on a fix. Microsoft did fix it in both SQL Server 2005 Service Pack 3 Cumulative Update 13 and Service Pack 4 Cumulative Update 2; however, SP3 CU13 and SP4 both recently failed on this system, so we were not yet patched with the bug fix. As the suspended state was causing us issues with replication, we dropped mirroring. We then noticed we had 10MB of free disk space on the mount point where the principal's data files are stored. I knew something was amiss, as this system should have at least 150GB free on that mount point. I immediately checked the main database's data file and was shocked to see an autogrowth size of 65536%. The data file autogrew right before mirroring went into the suspended state. 65536%!
    I didn't have a lot of time to research whether this autogrowth problem was a known SQL Server bug, so I deferred that research to today. A quick Google search yielded no results, but emphasis on "quick". I checked our performance system, which was recently restored with a copy of the affected production database, and found the autogrowth setting to be 512MB. So this autogrowth bug was encountered sometime in the last two weeks. On February 26th, we had attempted to install SQL 2005 SP4 on production, however it had failed (PSS case open with Microsoft). I suspected that the SP4 failure was somehow related to this autogrowth bug, although that turned out not to be the case.
    I then tweeted (@TaraKizer) about this problem to see if the SQL Server community (#sqlhelp) had any insights. It seems several people have either heard of this bug or encountered it. Aaron Bertrand (blog|twitter) referred me to this Connect item. Our affected database originated on SQL Server 2000 and was upgraded to SQL Server 2005 in 2007. Back on SQL Server 2000, we were using the default file growth setting, which was a percentage. Sometime after the 2005 upgrade is when we changed it to 512MB. Our situation seemed to fit the bug Aaron referred to me, so now the question was whether Microsoft had fixed it yet.
    I received a reply to my tweet from Amit Banerjee (twitter) that it had been fixed in SP3 CU1 (KB958004). My affected system is SP3 CU8, so I was initially confused about why we had encountered the bug. Because I don't read things fully, I had missed that there are additional steps you have to follow after applying the bug fix. Amit set me straight. Although you can read this information in the KB article, I will also copy it here in case you are as lazy as me and miss the most important section of it (although if you are as lazy as me, you won't have read this far down my blog post):
    This hotfix will prevent only future occurrences of this problem. For example, if you restore a database from SQL Server 2000 to a SQL Server 2005 instance that contains this hotfix, this problem will not occur. However, if you already have a database that is affected by this problem, you must follow these steps to resolve this problem manually:
    1. Apply this hotfix.
    2. Set the file growth settings for the affected files to percentage settings, and then set the settings back to megabyte settings.
    3. Take the database offline, and then bring it back online.
    4. Verify that the values of the is_percent_growth column are correct in the sys.database_files system table and in the sys.master_files system table.
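    To make that last verification step concrete, here is a minimal C# sketch (the connection string and database name are assumptions, not part of Tara's post) that reads is_percent_growth from both sys.database_files and sys.master_files for the current database:

        using System;
        using System.Data.SqlClient;

        class AutogrowthCheck
        {
            static void Main()
            {
                // Assumed connection details; point this at the affected database.
                string connectionString = "Server=.;Database=MyMirroredDb;Integrated Security=true";

                const string sql = @"
                    SELECT name, is_percent_growth, growth
                    FROM sys.database_files
                    UNION ALL
                    SELECT name, is_percent_growth, growth
                    FROM sys.master_files
                    WHERE database_id = DB_ID()";

                using (SqlConnection cn = new SqlConnection(connectionString))
                using (SqlCommand cmd = new SqlCommand(sql, cn))
                {
                    cn.Open();
                    using (SqlDataReader dr = cmd.ExecuteReader())
                    {
                        while (dr.Read())
                        {
                            // growth is in 8 KB pages when is_percent_growth = 0, a percentage otherwise
                            Console.WriteLine("{0}: is_percent_growth={1}, growth={2}",
                                dr.GetString(0), dr.GetBoolean(1), dr.GetValue(2));
                        }
                    }
                }
            }
        }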

    Read the article

  • ArchBeat Link-o-Rama Top 10 for December 9-15, 2012

    - by Bob Rhubart
    You click, we listen. The following list reflects the Top 10 most popular items posted on the OTN ArchBeat Facebook page for the week of December 9-15, 2012.
    1. DevOps Basics II: What is Listening on Open Ports and Files – WebLogic Essentials | Dr. Frank Munz - "Can you easily find out which WebLogic servers are listening to which port numbers and addresses?" asks Dr. Frank Munz. The good doctor has an answer—and a tech tip.
    2. Using OBIEE against Transactional Schemas Part 4: Complex Dimensions | Stewart Bryson - "Another important entity for reporting in the Customer Tracking application is the Contact entity," says Stewart Bryson. "At first glance, it might seem that we should simply build another dimension called Dim – Contact, and use analyses to combine our Customer and Contact dimensions along with our Activity fact table to analyze Customer and Contact behavior."
    3. SOA 11g Technology Adapters – ECID Propagation | Greg Mally - "Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few," says Oracle Fusion Middleware A-Team member Greg Mally. "Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective." Greg's post focuses on technical tips for dealing with one of these challenges.
    4. Podcast: DevOps and Continuous Integration - In Part 1 of a 3-part program, panelists Tim Hall (Senior Director of product management for Oracle Enterprise Repository and Oracle's Application Integration Architecture), Robert Wunderlich (Principal Product Manager for Oracle's Application Integration Architecture Foundation Pack) and Peter Belknap (Director of product management for Oracle SOA Integration) discuss why DevOps matters and how it changes development methodologies and organizational structure.
    5. Good To Know - Conflicting View Objects and Shared Entity | Andrejus Baranovskis - Oracle ACE Director Andrejus Baranovskis shares his thoughts -- and a sample application -- dealing with an "interesting ADF behavior" encountered over the weekend.
    6. Cloud Deployment Models | B. R. Clouse - Looking out for the cloud newbies... "As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective," says B. R. Clouse. "This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes."
    7. Service governance morphs into cloud API management | David Linthicum - "When building and using clouds, the ability to manage APIs or services is the single most important item you can provide to ensure the success of the project," says David Linthicum. "But most organizations driving a cloud project for the first time have no experience handling a service-based architecture and don't see the need for API management until after deployment. By then, it's too late."
    8. Oracle Fusion Middleware Security: Password Policy in OAM 11g R2 | Rob Otto - Rob Otto continues the Oracle Fusion Middleware A-Team "Oracle Access Manager Academy" series with a detailed look at OAM's ability to support "a subset of password management processes without the need to use Oracle Identity Manager and LDAP Sync."
    9. Understanding the JSF Lifecycle and ADF Optimized Lifecycle | Steven Davelaar - Could you call that a surprise ending? Oracle WebCenter & ADF Architecture Team (A-Team) member Steven Davelaar learned a lot more than he expected while creating a UKOUG presentation entitled "What you need to know about JSF to be successful with ADF."
    10. Expanding on requestaudit - Tracing who is doing what...and for how long | Kyle Hatlestad - "One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing," says Oracle Fusion Middleware A-Team architect Kyle Hatlestad. Get up close and technical in his post.
    Thought for the Day: "There is no code so big, twisted, or complex that maintenance can't make it worse." — Gerald Weinberg (Source: SoftwareQuotes.com)

    Read the article

  • C# Excel file OLEDB read HTML IMPORT

    - by Michel van Engelen
    Hi, I have to automate something for the finance department. I've got an Excel file which I want to read using OleDb:

        string connectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=A_File.xls;Extended Properties=""HTML Import;IMEX=1;""";
        using (OleDbConnection connection = new OleDbConnection())
        {
            using (DbCommand command = connection.CreateCommand())
            {
                connection.ConnectionString = connectionString;
                connection.Open();

                DataTable dtSchema = connection.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, null);
                if ((null == dtSchema) || (dtSchema.Rows.Count <= 0))
                {
                    // raise exception if needed
                }

                command.CommandText = "SELECT * FROM [NameOfTheWorksheet$]";
                using (DbDataReader dr = command.ExecuteReader())
                {
                    while (dr.Read())
                    {
                        // do something with the data
                    }
                }
            }
        }

    Normally the connection string would have the extended property "Excel 8.0", but the file can't be read that way because it seems to be an HTML file renamed to .xls. When I copy the data from the xls to a new xls, I can read the new xls with the extended property set to "Excel 8.0". Yes, I can read the file by creating an instance of Excel, but I'd rather not. Any idea how I can read the xls using OleDb without making manual changes to the xls or playing with ranges in an instantiated Excel? Regards, Michel
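    One approach worth trying, sketched below with the assumptions marked: with the "HTML Import" driver the table names the Jet provider reports typically come from the HTML markup rather than from a "Sheet$" worksheet name, so instead of hard-coding [NameOfTheWorksheet$] you can ask the schema table which names actually exist and query the first one. Only the file name is taken from the question; the rest is a minimal sketch, not a tested solution.

        using System;
        using System.Data;
        using System.Data.OleDb;

        class HtmlXlsReader
        {
            static void Main()
            {
                string connectionString =
                    @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=A_File.xls;" +
                    @"Extended Properties=""HTML Import;IMEX=1;""";

                using (OleDbConnection connection = new OleDbConnection(connectionString))
                {
                    connection.Open();

                    // Ask the driver which "tables" it actually sees in the HTML file.
                    DataTable dtSchema = connection.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, null);
                    if (dtSchema == null || dtSchema.Rows.Count == 0)
                        throw new InvalidOperationException("No tables found in the file.");

                    // Use the first reported table name instead of guessing the worksheet name.
                    string tableName = dtSchema.Rows[0]["TABLE_NAME"].ToString();

                    using (OleDbCommand command = new OleDbCommand("SELECT * FROM [" + tableName + "]", connection))
                    using (OleDbDataReader dr = command.ExecuteReader())
                    {
                        while (dr.Read())
                        {
                            // Do something with the data.
                            Console.WriteLine(dr[0]);
                        }
                    }
                }
            }
        }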

    Read the article

  • DynamicJasper font encoding problem

    - by user266956
    Hello all, I'm new to DynamicJasper. I think it's a great project, but for a few days I have not been able to run a simple example that displays Polish fonts properly. The detail section is generated without any problems, automatically displaying Polish letters like 'ć', 'ż', 'ź', 'ą', etc., whereas the title and column headers contain strange letters instead. I was trying to do the following, but it doesn't work:

        FastReportBuilder drb = new FastReportBuilder();
        Font font = new Font(25, "SansSerif", "Helvetica", Font.PDF_ENCODING_CP1257_Baltic, true);
        Style titleStyle = new StyleBuilder(false).setFont(font).build();
        DynamicReport dr = drb.addColumn("State", "state", String.class.getName(), 30)
            .addColumn("Branch", "branch", String.class.getName(), 30)
            .addColumn("Product Line", "productLine", String.class.getName(), 50)
            .addGroups(2)
            .setTitle("November 2008 Zrebie aczc!")
            .setTitleStyle(titleStyle)
            .setSubtitle("This report was generated at " + new Date())
            .setPrintBackgroundOnOddRows(true)
            .setUseFullPageWidth(true)
            .build();
        JRDataSource ds = new JRBeanCollectionDataSource(this.simples);
        JasperPrint jp = DynamicJasperHelper.generateJasperPrint(dr, new ClassicLayoutManager(), ds);
        JasperViewer.viewReport(jp); // finally display the report

    Any help would be highly appreciated, as the DynamicJasper forums seem to be dead and constantly spammed. Thanks in advance! Kris

    Read the article

  • VB.net Debug sqldatareader - immediate window

    - by ScaryJones
    The scenario is this: I have a SqlDataReader query that seems to return nothing when I try to convert the reader to a DataTable using DataTable.Load. So I debug into it and grab the verbose SQL query before it goes into the SqlDataReader, just to make sure it's formatted correctly. I copy and paste this into SQL Server to run it and see if it returns anything. It does: one row. I go back to Visual Studio and let the program continue, I create a DataTable and try to load the SqlDataReader, but it just returns an empty reader. I'm baffled as to what's going on. I'll copy a version of the code (not the exact SQL query I'm using, but close) here:

        Dim cn As New SqlConnection
        cn.ConnectionString = <connection string details here>
        cn.Open()
        Dim sqlQuery As String = "select * from Products where productid = 5"
        Dim cm As New SqlCommand(sqlQuery, cn)
        Dim dr As SqlDataReader = cm.ExecuteReader()
        Dim dt As New DataTable
        dt.Load(dr)

    dt should have contents, but it's empty. If I copy that SQL query into SQL Server and run it, I get a row of results. Any ideas what I'm doing wrong?
    UPDATE: I've now noticed that it seems to be returning one less row than I get with each SQL query. So, if I run the SQL myself and get 1 row, the DataTable seems to have 0 rows. If the query returns 4 rows, the DataTable has 3!! Very strange, any ideas anyone?
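    One possible explanation, offered as a hedged guess rather than a confirmed diagnosis: DataTable.Load only consumes rows from the reader's current position onward, so anything that calls Read() before the Load (a stray line of code, a watch expression, or dr.Read() evaluated in the Immediate window while debugging) silently discards a row, which matches the "always one row short" symptom. A minimal C# sketch of the pitfall, with the connection string and table assumed:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class ReaderOffByOne
        {
            static void Main()
            {
                // Assumed connection details for illustration only.
                string connectionString = "Server=.;Database=Sandbox;Integrated Security=true";

                using (SqlConnection cn = new SqlConnection(connectionString))
                using (SqlCommand cm = new SqlCommand("SELECT * FROM Products WHERE productid = 5", cn))
                {
                    cn.Open();
                    using (SqlDataReader dr = cm.ExecuteReader())
                    {
                        // dr.Read();  // <-- anything like this (code, a watch, or the Immediate window)
                        //                 advances the reader, and Load() will never see that row.

                        DataTable dt = new DataTable();
                        dt.Load(dr);   // loads only the rows not yet consumed by Read()
                        Console.WriteLine(dt.Rows.Count);
                    }
                }
            }
        }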

    Read the article

  • Trouble pre-populating drop down and textarea from MySQL Database

    - by Tony
    I am able to successfully pre-populate my questions using the following code:

        First Name: <input type="text" name="first_name" size="30" maxlength="20" value="' . $row[2] . '" /><br />

    However, when I try to do the same for a drop-down box and a textarea box, nothing is pre-populated from the database, even though there is actual content in the database. This is the code I'm using for the drop-down and textarea, respectively:

        <?php
        echo '
        <form action="edit_contact.php" method="post">
        <div class="contactfirstcolumn">
        Prefix: <select name="prefix" value="' . $row[0] . '" />
            <option value="blank">--</option>
            <option value="Dr">Dr.</option>
            <option value="Mr">Mr.</option>
            <option value="Mrs">Mrs.</option>
            <option value="Ms">Ms.</option>
        </select><br />';
        ?>

    AND

        Contact Description: <textarea id="contactdesc" name="contactdesc" rows="3" cols="50" value="' . $row[20] . '" /></textarea><br /><br />

    It's important to note that I am not receiving any errors. The form loads fine, but without the data for the drop-down and textarea fields. Thanks! Tony

    Read the article

  • Catching MediaPlayer Exceptions from WPF MediaElement Control

    - by ScottCate
    I'm playing video in a MediaElement in WPF. It works thousands of times, over and over again. Once in a blue moon (roughly once a week), I get a Windows exception (the Dr. Watson crash dialog). The MediaElement doesn't expose an error; it just crashes and sits there with an ugly crash report on the screen. If you "view this report" you can see it is in fact MediaPlayer that has crashed. I know I can disable the crash reports from popping up, but I'm more interested in finding out what's going wrong. I'm not sure how to capture the results of the Dr. Watson report, but I have the dialog open now if someone has advice on a better way to capture it. Here is the opening line of data, which points to my application and then to wmvdecod.dll:

        AppName: ScottApp.exe  AppVer: 2.2009.2291.805  AppStamp: 4a36c812
        ModName: wmvdecod.dll  ModVer: 11.0.5721.5145  ModStamp: 453711a3
        fDebug: 0  Offset: 000cbc88

    And from the Windows Event Log (same information):

        Event Type: Error
        Event Source: .NET Runtime 2.0 Error Reporting
        Event Category: None
        Event ID: 1000
        Date: 7/13/2009
        Time: 10:20:27 AM
        User: N/A
        Computer: 28022
        Description: Faulting application ScottApp.exe, version 2.2009.2291.805, stamp 4a36c812, faulting module wmvdecod.dll, version 11.0.5721.5145, stamp 453711a3, debug? 0, fault address 0x000cbc88.
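    For what it's worth, here is a hedged sketch of the managed hooks that can at least log whatever the managed layer does surface. A native access violation inside wmvdecod.dll can still take the process down before any of these fire, and the class name and log path below are assumptions, not the original application's code:

        using System;
        using System.IO;
        using System.Windows;
        using System.Windows.Controls;

        public class PlayerWindow : Window
        {
            private readonly MediaElement player = new MediaElement();

            public PlayerWindow()
            {
                Content = player;

                // Raised when WPF's media pipeline reports an open or decode failure.
                player.MediaFailed += (s, e) => Log("MediaFailed: " + e.ErrorException);

                // Unhandled exceptions thrown on the UI thread.
                Application.Current.DispatcherUnhandledException += (s, e) => Log("Dispatcher: " + e.Exception);

                // Last-chance hook for exceptions on any managed thread.
                AppDomain.CurrentDomain.UnhandledException += (s, e) => Log("AppDomain: " + e.ExceptionObject);
            }

            private static void Log(string message)
            {
                // Append to a simple text log next to the executable (assumed location).
                File.AppendAllText("player-errors.log", DateTime.Now + " " + message + Environment.NewLine);
            }
        }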

    Read the article

  • AutoCompleteExtender control in AJAX (ASP.NET)?

    - by Surya sasidhar
    Hi, I am using the AutoCompleteExtender in my application, but it is not working. This is my code:

        <form id="form1" runat="server">
            <asp:ScriptManager ID="Scriptmanager1" runat="server"></asp:ScriptManager>
            <br />
            <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
            <div>
                <cc1:AutoCompleteExtender TargetControlID="TextBox1" MinimumPrefixLength="1"
                    ServiceMethod="GetVideoTitles" CompletionSetCount="10"
                    ServicePath="Myservices.asmx" ID="AutoCompleteExtender1" runat="server">
                </cc1:AutoCompleteExtender>
            </div>
        </form>

    This is the web service method:

        public string[] GetVideoTitles(string prefixText)
        {
            SqlConnection con = new SqlConnection(@"Data Source=SERVER5\SQLserver2005;Initial Catalog=tpvnew;User ID=xx;Password=525");
            con.Open();
            SqlCommand cmd = new SqlCommand("video_videotitles", con);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@prefixText", SqlDbType.VarChar, 50);
            cmd.Parameters["@prefixText"].Value = prefixText;
            SqlDataAdapter da = new SqlDataAdapter(cmd);
            DataTable dt = new DataTable();
            da.Fill(dt);
            string[] items = new string[dt.Rows.Count];
            int i = 0;
            foreach (DataRow dr in dt.Rows)
            {
                items.SetValue(dr["videotitle"].ToString(), i);
                i++;
            }
            return items;
        }
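    A hedged sketch of the service shape the AutoCompleteExtender generally expects, based on the toolkit's documented conventions: the .asmx class decorated with [ScriptService] so it can be called from script, and the service method accepting both prefixText and count. The connection string and stored procedure are simply copied from the question above; treat the rest as an illustration rather than a drop-in fix.

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Web.Script.Services;
        using System.Web.Services;

        [WebService(Namespace = "http://tempuri.org/")]
        [ScriptService]   // required so the extender can call the service from client script
        public class Myservices : WebService
        {
            [WebMethod]
            public string[] GetVideoTitles(string prefixText, int count)   // the 'count' parameter is expected by the extender
            {
                List<string> items = new List<string>();

                using (SqlConnection con = new SqlConnection(@"Data Source=SERVER5\SQLserver2005;Initial Catalog=tpvnew;User ID=xx;Password=525"))
                using (SqlCommand cmd = new SqlCommand("video_videotitles", con))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.Add("@prefixText", SqlDbType.VarChar, 50).Value = prefixText;

                    con.Open();
                    using (SqlDataReader dr = cmd.ExecuteReader())
                    {
                        while (dr.Read() && items.Count < count)
                        {
                            items.Add(dr["videotitle"].ToString());
                        }
                    }
                }

                return items.ToArray();
            }
        }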

    Read the article

  • How to understand the output of gcc -S *.cpp?

    - by user198729
    A sample is as follows:

        	.file	"hw.cpp"
        	.section .rdata,"dr"
        LC0:
        	.ascii "Oh shit really bad~!\15\12\0"
        	.text
        	.align 2
        	.globl __Z3badv
        	.def __Z3badv; .scl 2; .type 32; .endef
        __Z3badv:
        	pushl %ebp
        	movl %esp, %ebp
        	subl $8, %esp
        	movl $LC0, (%esp)
        	call _printf
        	leave
        	ret
        	.section .rdata,"dr"
        LC1:
        	.ascii "WOW\0"
        	.text
        	.align 2
        	.globl __Z3foov
        	.def __Z3foov; .scl 2; .type 32; .endef
        __Z3foov:
        	pushl %ebp
        	movl %esp, %ebp
        	subl $4, %esp
        	movl LC1, %eax
        	movl %eax, -4(%ebp)
        	movl $__Z3badv, 4(%ebp)
        	leave
        	ret
        	.def ___main; .scl 2; .type 32; .endef
        	.align 2
        	.globl _main
        	.def _main; .scl 2; .type 32; .endef
        _main:
        	pushl %ebp
        	movl %esp, %ebp
        	subl $8, %esp
        	andl $-16, %esp
        	movl $0, %eax
        	addl $15, %eax
        	addl $15, %eax
        	shrl $4, %eax
        	sall $4, %eax
        	movl %eax, -4(%ebp)
        	movl -4(%ebp), %eax
        	call __alloca
        	call ___main
        	call __Z3foov
        	movl $0, %eax
        	leave
        	ret
        	.def _printf; .scl 2; .type 32; .endef

    Has anyone tried to understand it?

    Read the article

  • using .NET web service (Dataset) from progress OpenEdge 4GL

    - by jasperdmx
    I created a web service in C# .NET with a DataSet input parameter. On the other side, I need to consume this web service from Progress OpenEdge 4GL 10.1 (not 10.2). The problem is that the dataset in OpenEdge can't be matched with the DataSet in .NET; the result is always 0. I'm a C# programmer, so I don't have deep knowledge of Progress. I did research on the Progress forums, but without a good result. Any help? Thanks in advance.

    The web service (C# .NET):

        [WebMethod]
        public int getResult(DataSet ds)
        {
            DataTable tbl = ds.Tables["datas"];
            int result = 0;
            foreach (DataRow dr in tbl.Rows) // only 1 record = 1 row
            {
                result = Convert.ToInt32(dr["field1"]);
            }
            return result;
        }

    Progress OpenEdge 10.1:

        --- create and fill temp-table : field1 = 30 and only 1 record
        --- create dataset and bind to temp-table
        --- connect to web service
        --- call webmethod :
        define variable result as integer no-undo.
        RUN getResult IN hPortType(INPUT dataset, OUTPUT result).
        message result view-as alert-box info button ok.
        --- RESULT ALWAYS 0

    Does anyone know how to "bridge" a dataset in Progress OpenEdge to a .NET DataSet? Note: this web service works well if called from .NET.
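    One workaround sketch, offered with the caveat that it changes the service contract and is an assumption rather than the original design: accept the data as an XML string on the .NET side and load it with DataSet.ReadXml, so the OpenEdge client only has to pass a string instead of mapping its dataset onto the ADO.NET DataSet schema.

        using System;
        using System.Data;
        using System.IO;
        using System.Web.Services;

        public class ResultService : WebService
        {
            [WebMethod]
            public int GetResultFromXml(string datasetXml)
            {
                // Parse whatever XML the client serialized its dataset into.
                DataSet ds = new DataSet();
                ds.ReadXml(new StringReader(datasetXml));

                DataTable tbl = ds.Tables["datas"];   // table name assumed, as in the question
                int result = 0;
                foreach (DataRow dr in tbl.Rows)      // only one row is expected
                {
                    result = Convert.ToInt32(dr["field1"]);
                }
                return result;
            }
        }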

    Read the article

  • ObjectDataSource.Select with Parameters Time Out

    - by MasterMax1313
    I'm using an ObjectDataSource with a 2008 ReportViewer control and LINQ to CSV. The ODS has two parameters (the SQL is spelled out in an XSD file with a table adapter). The ReportViewer takes a very long time to render the output after a button is clicked to generate the report. That's my first problem. Even though it works (most of the time), the processing time worries me, and subsequent requests don't seem to change the results shown on the screen. The next issue is that when I go to export the ODS to CSV, I get a timeout exception on the Select method of the ODS (shown below). This works for an ODS without parameters, but now that I've added parameters it doesn't want to cooperate. I'm fresh out of ideas; any thoughts?

        <asp:ObjectDataSource ID="obsGetDataAllCustomers" runat="server"
            SelectMethod="GetDataAllCustomers"
            TypeName="my.myAdapter.AllCustomers"
            OldValuesParameterFormatString="original_{0}">
            <SelectParameters>
                <asp:ControlParameter ControlID="StartDate" Name="Start" PropertyName="Text" Type="DateTime" />
                <asp:ControlParameter ControlID="EndDate" Name="_End" PropertyName="Text" Type="DateTime" />
            </SelectParameters>
        </asp:ObjectDataSource>

    After the button click to view the report:

        rvAllCustomers.LocalReport.Refresh()

    Export to CSV (adding the items returned to a list, which is then processed by working code):

        For Each dr As DataRow In CType(obs.Select(), DataView).Table.Rows
            l.Add(New FullOrderOutput(dr))
        Next
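    A hedged sketch of one way to take the ObjectDataSource out of the export path; every type and method name below is inferred from the markup above (TypeName="my.myAdapter.AllCustomers", SelectMethod="GetDataAllCustomers") and should be treated as an assumption, and C# is used for the sketch even though the page code-behind is VB. Calling the generated table adapter directly with real DateTime values avoids re-evaluating control parameters outside the page lifecycle, and, if the adapter is a designer-generated partial class, a partial-class extension can raise its command timeout beyond the 30-second default that commonly produces this kind of timeout.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        namespace my.myAdapter
        {
            // Assumes AllCustomers is a designer-generated table adapter, which exposes
            // a protected CommandCollection that a partial class can reach.
            public partial class AllCustomers
            {
                public void SetCommandTimeout(int seconds)
                {
                    foreach (SqlCommand cmd in CommandCollection)
                    {
                        cmd.CommandTimeout = seconds;
                    }
                }
            }
        }

        public static class CsvExportHelper
        {
            // Called from the export button handler instead of obs.Select().
            public static DataTable GetAllCustomers(DateTime start, DateTime end)
            {
                var adapter = new my.myAdapter.AllCustomers();
                adapter.SetCommandTimeout(300);                   // default is 30 seconds
                return adapter.GetDataAllCustomers(start, end);   // assumed generated method signature
            }
        }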

    Read the article

  • reading dbase file without standard .dbf extension.

    - by Nathan W
    I am trying to read a file which is just a dBase file but without the standard extension; the file is something like Test.Dat. I am using this block of code to try and read the file:

        string ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0; Data Source=C:\Temp;Extended Properties=dBase III";
        OleDbConnection dBaseConnection = new OleDbConnection(ConnectionString);
        dBaseConnection.Open();
        OleDbDataAdapter oDataAdapter = new OleDbDataAdapter("SELECT * FROM Test", ConnectionString);
        DataSet oDataSet = new DataSet();
        oDataAdapter.Fill(oDataSet); // I get the error right here...
        DataTable oDataTable = oDataSet.Tables[0];
        foreach (DataRow dr in oDataTable.Rows)
        {
            Console.WriteLine(dr["Name"]);
        }

    Of course this crashes because it can't find the dBase file called Test, but if I rename the file to Test.dbf it works fine. I can't really rename the file all the time because a third-party application uses it as its file format. Does anyone know a way to read a dBase file without the standard extension in C#? Thanks.
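    A hedged workaround sketch: the Jet dBase driver resolves table names from the file name and expects a .dbf extension, so one low-risk option is to copy the third-party file to a temporary .dbf and read the copy, leaving the original untouched. The paths and the column name "Name" are assumptions taken from the question.

        using System;
        using System.Data;
        using System.Data.OleDb;
        using System.IO;

        class DbfCopyReader
        {
            static void Main()
            {
                string sourcePath = @"C:\Temp\Test.Dat";
                string tempDir = Path.Combine(Path.GetTempPath(), "dbf-read");
                Directory.CreateDirectory(tempDir);
                string tempPath = Path.Combine(tempDir, "Test.dbf");

                // Copy the file under a .dbf name so the Jet driver will accept it.
                File.Copy(sourcePath, tempPath, true);
                try
                {
                    string connectionString =
                        @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + tempDir +
                        @";Extended Properties=dBase III";

                    using (OleDbDataAdapter adapter =
                        new OleDbDataAdapter("SELECT * FROM Test", connectionString))
                    {
                        DataTable table = new DataTable();
                        adapter.Fill(table);

                        foreach (DataRow dr in table.Rows)
                        {
                            Console.WriteLine(dr["Name"]);
                        }
                    }
                }
                finally
                {
                    File.Delete(tempPath);   // clean up the temporary copy
                }
            }
        }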

    Read the article

< Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >