Search Results

Search found 9830 results on 394 pages for 'percent complete'.

Page 27/394 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • Upcoming Carbon Tax in South Africa

    - by Evelyn Neumayr
    By Elena Avesani, Principal Product Strategy Manager, Oracle. In 2012, the South African National Treasury announced a plan to impose a carbon tax to cut the carbon emissions blamed for climate change. South Africa ranks among the top 20 countries by absolute carbon dioxide emissions, with per capita emissions in the region of 10 metric tons per annum and over 90% of South Africa's energy produced by burning fossil fuels. The top 40 largest companies in the country are responsible for 207 million tons of carbon dioxide, directly emitting 20 percent of South Africa's carbon output. The legislation, originally scheduled to take effect in January 2015 and run through 31 December 2019, has now been delayed to January 2016. It will levy a carbon tax of R120 (US$11) per ton of CO2, rising by 10 percent a year until 2020, while all sectors bar electricity will be able to claim additional relief of at least 10 percent. The South African treasury proposed a 60 percent tax-free threshold on emissions for all sectors, including electricity, petroleum, iron, steel and aluminum. Oracle Environmental Accounting and Reporting (EA&R) supports these needs and guarantees consistency across organizations in how data is collected, retained, controlled, consolidated and used in calculating and reporting emissions inventory. EA&R also enables companies to develop an enterprise-wide data view that includes all five key sustainability categories: carbon emissions, energy, water, materials and waste. Thanks to its native integration with Oracle E-Business Suite and JD Edwards EnterpriseOne ERP Financials and Inventory Systems, and its capability to capture environmental data across business silos, Oracle Environmental Accounting and Reporting is uniquely positioned to support a strategic approach to carbon management that drives business value. Sources: African Utility Week, BDlive

    Read the article

  • C# TextBox Autocomplete (Winforms) key events and customization?

    - by m0s
    Hi, I was wondering if it is possible to catch key events for the auto-complete list. For example, instead of the Enter key press for auto-complete, use, let's say, the Tab key. Also, is it possible to change the colors and add a background image for the auto-complete pop-up list? Currently I have my own implementation, which is a separate window (form) with a list-box, and it works OK, but I'd really like to use .NET's auto-complete if it can do what I need. Thanks for your attention.
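
    For reference, here is a minimal sketch of the built-in WinForms auto-complete the question refers to (the sample strings and form are mine). The suggestion dropdown is rendered by a native shell control, so as far as I know it exposes no key events, colors, or background-image hooks, which is why a custom ListBox form is the usual workaround when that level of control is needed:

      using System;
      using System.Windows.Forms;

      class AutoCompleteDemo : Form
      {
          public AutoCompleteDemo()
          {
              var box = new TextBox { Dock = DockStyle.Top };

              var source = new AutoCompleteStringCollection();
              source.AddRange(new[] { "alpha", "beta", "gamma" });

              // Built-in auto-complete: suggestions come from a custom string list.
              box.AutoCompleteCustomSource = source;
              box.AutoCompleteSource = AutoCompleteSource.CustomSource;
              box.AutoCompleteMode = AutoCompleteMode.SuggestAppend;

              Controls.Add(box);
          }

          [STAThread]
          static void Main()
          {
              Application.Run(new AutoCompleteDemo());
          }
      }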

    Read the article

  • SQL Server: What locale should be used to format numeric values into SQL Server format?

    - by Ian Boyd
    It seems that SQL Server does not accept numbers formatted using any particular locale. It also doesn't support locales that have digits other than 0-9. For example, if the current locale is Bengali, then the number 123456789 would come out as "?????????". And that's just the digits, never mind what the digit grouping would be. But the same problem happens for numbers in the Invariant locale, which formats numbers as "123,456,789", which SQL Server won't accept. Is there a culture that matches what SQL Server accepts for numeric values? Or will I have to create some custom "sql server" culture, generating rules for that culture myself from lower-level formatting routines? If I was in .NET (which I'm not), I could peruse the Standard Numeric Format strings. The format codes available in .NET are: c (Currency): $123.46; d (Decimal): 1234; e (Exponential): 1.052033E+003; f (Fixed Point): 1234.57; g (General): 123.456; n (Number): 1,234.57; p (Percent): 100.00 %; r (Round Trip): 123456789.12345678; x (Hexadecimal): FF. Only six of those (c, e, f, g, n, p) accept all numeric types. And of those, only two (f and g) generate string representations, in the en-US locale anyway, that would be accepted by SQL Server. Of those remaining two, Fixed Point is dependent on the locale's digits rather than on the number being used, leaving the General (g) format. And I can't even say for certain that the g format won't add digit groupings (e.g. 1,234). Is there a locale that formats numbers in the way SQL Server expects? Is there a .NET format code? A Java format code? A Delphi format code? A VB format code? A stdio format code?
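
    For the .NET side of the question, one approach that appears to fit is to format with CultureInfo.InvariantCulture and a format that never emits group separators ("G"/"R" rather than "N"). This is a sketch under the assumption that SQL Server accepts plain ASCII digits, an optional leading minus, and a period as the decimal separator:

      using System;
      using System.Globalization;

      class SqlNumberFormat
      {
          static string ForSql(int value)
          {
              // Default "G" for integral types: digits only, no group separators.
              return value.ToString(CultureInfo.InvariantCulture);       // "123456789"
          }

          static string ForSql(decimal value)
          {
              // Invariant culture uses '.' as the decimal separator.
              return value.ToString(CultureInfo.InvariantCulture);       // "1234.57"
          }

          static string ForSql(double value)
          {
              // "R" round-trips a double without digit grouping (very large or
              // very small values still fall back to E notation).
              return value.ToString("R", CultureInfo.InvariantCulture);  // "123456789.12345678"
          }

          static void Main()
          {
              Console.WriteLine(ForSql(123456789));
              Console.WriteLine(ForSql(1234.57m));
              Console.WriteLine(ForSql(123456789.12345678));
          }
      }

    Where possible, passing the value as a query parameter sidesteps the string-formatting question entirely.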

    Read the article

  • Separating Content (aspx) from Code (aspx.cs) in ASP.NET

    - by firedrawndagger
    I would like to know what the best practice is for separating the content of an aspx page (ASP.NET 3.5) from the code (I'm using C#). I have a form that users can type data in; for example, they are allowed to enter a percent. If they enter an invalid value, I would like to display one of the following error messages: <p id="errormsg" class="percenthigh">Please enter a percent below 100</p> <p id="errormsg" class="percentnegative">Percent cannot be below 0</p> <p id="errormsg" class="percentnot">This is not a percent</p> So in essence I'm hiding the error messages and showing one depending on what the user input is. I believe this is the best way to separate the content from the code-behind. However, how do I select elements and hide/unhide them depending on the user input? I'm aware I can do a runat="server" on the elements, but the problem is that I can't select by class and am limited only to IDs. What workarounds do you recommend, aside from putting the values in the code-behind, which is notoriously difficult to debug? Also, has this been "fixed" in ASP.NET 4?
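
    One workaround that seems to fit is to keep every message in the markup as a server-side element with its own ID and CSS class, and only toggle Visible from the code-behind, so the content itself never moves into C#. The distinct IDs, the PercentInput control and the Submit_Click handler below are hypothetical names of mine; the messages and classes are the question's own:

      <p id="errPercentHigh" runat="server" visible="false" class="percenthigh">Please enter a percent below 100</p>
      <p id="errPercentNegative" runat="server" visible="false" class="percentnegative">Percent cannot be below 0</p>
      <p id="errPercentNot" runat="server" visible="false" class="percentnot">This is not a percent</p>

      // Code-behind: only visibility is decided here; text and CSS stay in the .aspx.
      protected void Submit_Click(object sender, EventArgs e)
      {
          decimal percent;
          bool isNumber = decimal.TryParse(PercentInput.Text, out percent);

          errPercentNot.Visible = !isNumber;
          errPercentHigh.Visible = isNumber && percent > 100;
          errPercentNegative.Visible = isNumber && percent < 0;
      }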

    Read the article

  • Is There a Better Way to Feed Different Parameters into Functions with If-Statements?

    - by FlowofSoul
    I've been teaching myself Python for a little while now, and I've never programmed before. I just wrote a basic backup program that writes out the progress of each individual file while it is copying. I wrote a function that determines buffer size so that smaller files are copied with a smaller buffer, and bigger files are copied with a bigger buffer. The way I have the code set up now doesn't seem very efficient, as there is an if loop that then leads to another if loops, creating four options, and they all just call the same function with different parameters. import os import sys def smartcopy(filestocopy, dest_path, show_progress = False): """Determines what buffer size to use with copy() Setting show_progress to True calls back display_progress()""" #filestocopy is a list of dictionaries for the files needed to be copied #dictionaries are used as the fullpath, st_mtime, and size are needed if len(filestocopy.keys()) == 0: return None #Determines average file size for which buffer to use average_size = 0 for key in filestocopy.keys(): average_size += int(filestocopy[key]['size']) average_size = average_size/len(filestocopy.keys()) #Smaller buffer for smaller files if average_size < 1024*10000: #Buffer sizes determined by informal tests on my laptop if show_progress: for key in filestocopy.keys(): #dest_path+key is the destination path, as the key is the relative path #and the dest_path is the top level folder copy(filestocopy[key]['fullpath'], dest_path+key, callback = lambda pos, total: display_progress(pos, total, key)) else: for key in filestocopy.keys(): copy(filestocopy[key]['fullpath'], dest_path+key, callback = None) #Bigger buffer for bigger files else: if show_progress: for key in filestocopy.keys(): copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600, callback = lambda pos, total: display_progress(pos, total, key)) else: for key in filestocopy.keys(): copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600) def display_progress(pos, total, filename): percent = round(float(pos)/float(total)*100,2) if percent <= 100: sys.stdout.write(filename + ' - ' + str(percent)+'% \r') else: percent = 100 sys.stdout.write(filename + ' - Completed \n') Is there a better way to accomplish what I'm doing? Sorry if the code is commented poorly or hard to follow. I didn't want to ask someone to read through all 120 lines of my poorly written code, so I just isolated the two functions. Thanks for any help.

    Read the article

  • APC UPS replace battery light and apcupsd reporting "replace battery"

    - by mgjk
    We have an APC Smart UPS 1500. The "Replace Battery" light is on, and apcupsd reports: Emergency! Batteries have failed on UPS xxxx. Change them NOW However, from this article, http://sturgeon.apcc.com/kbasewb2.nsf/for+external/f39c4312fcaf7b948525679a005ebb78?OpenDocument it seems that it's not so clear that the UPS battery needs to be replaced. Stranger, according to the information on the UPS, an 11 minute runtime at 42.9% load running at 27.7V isn't so bad. Any thoughts about what to try next? We're a non-profit, money is an object. It would be a shame to replace a battery with a year or so left in it. # apcaccess status APC : 001,041,1017 DATE : Thu Mar 29 13:01:41 EDT 2012 HOSTNAME : oreilly2 VERSION : 3.14.6 (16 May 2009) debian UPSNAME : xxxx CABLE : Custom Cable Smart MODEL : Smart-UPS 1500 UPSMODE : Stand Alone STARTTIME: Thu Mar 29 12:57:30 EDT 2012 STATUS : ONLINE LINEV : 112.3 Volts LOADPCT : 42.9 Percent Load Capacity BCHARGE : 100.0 Percent TIMELEFT : 11.0 Minutes MBATTCHG : 5 Percent MINTIMEL : 3 Minutes MAXTIME : 0 Seconds OUTPUTV : 112.3 Volts SENSE : High DWAKE : -01 Seconds DSHUTD : 090 Seconds LOTRANS : 106.0 Volts HITRANS : 127.0 Volts RETPCT : 000.0 Percent ITEMP : 23.8 C Internal ALARMDEL : Always BATTV : 27.7 Volts LINEFREQ : 60.0 Hz LASTXFER : No transfers since turnon NUMXFERS : 0 TONBATT : 0 seconds CUMONBATT: 0 seconds XOFFBATT : N/A SELFTEST : NO STATFLAG : 0x07000008 Status Flag SERIALNO : AS0603298896 BATTDATE : 2006-01-14 NOMOUTV : 120 Volts NOMBATTV : 24.0 Volts FIRMWARE : 601.3.D USB FW:1.5 APCMODEL : Smart-UPS 1500 END APC : Thu Mar 29 13:02:12 EDT 2012 Error when running upstest You are using a SMART cable type, so I'm entering SMART test mode mode.type = USB_UPS Setting up the port ... Hello, this is the apcupsd Cable Test program. This part of apctest is for testing Smart UPSes. Please select the function you want to perform. 1) Query the UPS for all known values 2) Perform a Battery Runtime Calibration 3) Abort Battery Calibration 4) Monitor Battery Calibration progress 5) Program EEPROM 6) Enter TTY mode communicating with UPS 7) Quit Select function number: 2 First ensure that we have a good link and that the UPS is functionning normally. Simulating UPSlinkCheck ... YWrote: Y Got: getline failed. Apparently the link is not up. Giving up.

    Read the article

  • display values within stacked boxes of rowstacked histograms in gnuplot?

    - by gojira
    I am using gnuplot (Version 4.4 patchlevel 2) to generate rowstacked histograms, very similar to the example called "Stacked histograms by percent" from the gnuplot demo site at http://www.gnuplot.info/demo/histograms.html I want to display the values of each stacked box within it. I.e. I want to display the actual numerical value (in percent and/or the absolute number) of each box. How can I do that?

    Read the article

  • Possible reasons for high CPU load of taskmgr.exe process on VM?

    - by mjn
    On a VMware virtual machine which has severe performance problems, I can see a constant average of 20+ percent CPU load for the TASKMGR.EXE (Task Manager) process. The apps running on this server have lower load, around 4 to 10 percent on average. The VM is running Windows Server 2003 Standard with 3.75 GB of assigned RAM. I suspect that the Task Manager CPU load has something to do with other VM instances on the VMware server, but I could not see a similar value on internal ESXi systems (the problematic VM runs in the customer's IT environment).

    Read the article

  • Silverlight DataGrid set cell IsReadOnly programmatically

    - by Brandon Montgomery
    I am binding a data grid to a collection of Task objects. A particular column needs some special rules pertaining to editing: <!--Percent Complete--> <data:DataGridTextColumn Header="%" ElementStyle="{StaticResource RightAlignStyle}" Binding="{Binding PercentComplete, Mode=TwoWay, Converter={StaticResource PercentConverter}}" /> What I want to do is set the IsReadOnly property only for each task's percent complete cell based on a property on the actual Task object. I've tried this: <!--Percent Complete--> <data:DataGridTextColumn Header="%" ElementStyle="{StaticResource RightAlignStyle}" Binding="{Binding PercentComplete, Mode=TwoWay, Converter={StaticResource PercentConverter}}" IsReadOnly={Binding IsNotLocalID} /> but apparently you can't bind to the IsReadOnly property on a data grid column. What is the best way do to do what I am trying to do?
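
    Since IsReadOnly on a DataGrid column isn't bindable, one common workaround is to leave the column editable and cancel edits per row in the grid's BeginningEdit event, based on the bound item. Below is a code-behind sketch; the Task type and IsNotLocalID property come from the question, while the grid name and the column check by header ("%") are my assumptions. Wire it up with BeginningEdit="TaskGrid_BeginningEdit" on the DataGrid.

      // Cancel editing of the "%" column for rows whose Task is not locally editable.
      private void TaskGrid_BeginningEdit(object sender, DataGridBeginningEditEventArgs e)
      {
          var task = e.Row.DataContext as Task;   // the bound item for this row
          if (task == null)
              return;

          // Only guard the Percent Complete column; other columns edit normally.
          bool isPercentColumn = Equals(e.Column.Header, "%");

          if (isPercentColumn && task.IsNotLocalID)
          {
              e.Cancel = true;                    // this cell stays read-only for this row
          }
      }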

    Read the article

  • working with arrays

    - by user295189
    I currently do a query which goes through the records and forms an array. Running print_r on the query result gives me this: print_r($query) yields the following: Array ( [0] => ( [field1] => COMPLETE [field2] => UNKNOWN [field3] => Test comment ) [1] => ( [field1] => COMPLETE [field2] => UNKNOWN [field3] => comment here ) [2] => ( [field1] => COMPLETE [field2] => UNKNOWN [field3] => checking ) [3] => ( [field1] => COMPLETE [field2] => UNKNOWN [field3] => testing ) [4] => ( [field1] => COMPLETE [field2] => UNKNOWN [field3] => working ) ) Somehow I want to take this output and convert it back to PHP. So, for example, something like this: $myArray = array( ... ) so that print_r($myArray) yields the same thing as print_r($query) does. Thanks

    Read the article

  • Algorithm to aggregate values from child tree nodes

    - by user222164
    I have objects in a tree structure, and I would like to aggregate status information from children nodes and update each parent node with the aggregated status. Let's say node A has children B1 and B2, and B1 has C1, C2, C3 as children. Each of the nodes has a status attribute. Now if C1, C2, C3 are all complete, then I would like to mark B1 as complete. And if C4, C5, C6, C7 are complete, mark B2 as complete. When B1 and B2 are both complete, mark A as complete. I can go through these nodes in a brute-force manner and make updates; could someone suggest an efficient algorithm to do it? A { B1 { C1, C2, C3}, B2 { C4, C5, C6, C7} }
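
    Since a parent's status depends only on its children, a single post-order (bottom-up) traversal visits each node once instead of repeatedly rescanning the tree. A sketch under the assumption that each node exposes its children and a completion flag (the Node/IsComplete names are mine):

      using System.Collections.Generic;

      class Node
      {
          public string Name;
          public bool IsComplete;                          // leaves carry their own status
          public List<Node> Children = new List<Node>();
      }

      static class StatusAggregator
      {
          // Post-order aggregation: a node is complete when all of its children are.
          public static bool Aggregate(Node node)
          {
              if (node.Children.Count == 0)
                  return node.IsComplete;

              bool allChildrenComplete = true;
              foreach (var child in node.Children)
              {
                  if (!Aggregate(child))                   // evaluate children first
                      allChildrenComplete = false;         // keep going so deeper nodes still update
              }

              node.IsComplete = allChildrenComplete;
              return node.IsComplete;
          }
      }

    Each node is visited exactly once, so the cost is linear in the number of nodes; calling Aggregate on A updates B1, B2 and A in one pass.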

    Read the article

  • URL Encoding of Characters in a password field

    - by Alavoil
    I am trying to pass login credentials to a PHP script that I have in my iPhone app. When I pull a password with special characters the password is missing certain characters especially the percent sign. I am trying to encode the text but even before I send it, the percent sign is missing. //password_field is a UITextField holding the password: !@#$%^&*() NSString *tmpPass = [password_field.text stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]; NSLog(p_field.text); NSLog(tmpPass); This is what appears in the console: !@#$^&*() [email protected]&*() Is there any reason why it would be dropping the percent sign?

    Read the article

  • Is there a way to split the results of a select query into two equal halves?

    - by Matthias
    I'd like to have a query returning two ResultSets, each of which holds exactly half of all records matching a certain criterion. I tried using TOP 50 PERCENT in conjunction with an ORDER BY, but if the number of records in the table is odd, one record will show up in both resultsets. Example: I've got a simple table with TheID (PK) and TheValue fields (varchar(10)) and 5 records. Skip the where clause for now. SELECT TOP 50 PERCENT * FROM TheTable ORDER BY TheID asc results in the selected IDs 1, 2, 3. SELECT TOP 50 PERCENT * FROM TheTable ORDER BY TheID desc results in the selected IDs 3, 4, 5. 3 is a dup. In real life, of course, the queries are fairly complicated, with a ton of where clauses and subqueries.

    Read the article

  • Oracle Fusion Procurement Designed for User Productivity

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience Oracle Fusion Procurement Design Goals In Oracle Fusion Procurement, we set out to create a streamlined user experience based on the way users do their jobs. Oracle has spent hundreds of hours with customers to get to the heart of what users need to do their jobs. By designing a procurement application around user needs, Oracle has crafted a user experience that puts the tools that people need at their fingertips. In Oracle Fusion Procurement, the user experience is designed to provide the user with information that will drive navigation rather than requiring the user to find information. One of our design goals for Oracle Fusion Procurement was to reduce the number of screens and clicks that a user must go through to complete frequently performed tasks. The requisition process in Oracle Fusion Procurement (Figure 1) illustrates how we have streamlined workflows. Oracle Fusion Self-Service Procurement brings together billing metrics, descriptions of the order, justification for the order, a breakdown of the components of the order, and the amount—all in one place. Previous generations of procurement software required the user to navigate to several different pages to gather all of this information. With Oracle Fusion, everything is presented on one page. The result is that users can complete their tasks in less time. The focus is on completing the work, not finding the work. Figure 1. Creating a requisition in Oracle Fusion Self-Service Procurement is a consumer-like shopping experience. Will Oracle Fusion Procurement Increase Productivity? To answer this question, Oracle sought to model how two experts working head to head—one in an existing enterprise application and another in Oracle Fusion Procurement—would perform the same task. We compared Oracle Fusion designs to corresponding existing applications using the keystroke-level modeling (KLM) method. This method is based on years of research at universities such as Carnegie Mellon and research labs like Xerox Palo Alto Research Center. The KLM method breaks tasks into a sequence of operations and uses standardized models to evaluate all of the physical and cognitive actions that a person must take to complete a task: what a user would have to click, how long each click would take (not only the physical action of the click or typing of a letter, but also how long someone would have to think about the page when taking the action), and user interface changes that result from the click. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time is calculated. Task times from the model enable researchers to predict end-user productivity. For the study, we focused on modeling procurement business process task flows that were considered business or mission critical: high-frequency tasks and high-value tasks. The designs evaluated encompassed tasks that are currently performed by employees, professional buyers, suppliers, and sourcing professionals in advanced procurement applications. For each of these flows, we created detailed task scenarios that provided the context for each task, conducted task walk-throughs in both the Oracle Fusion design and the existing application, analyzed and documented the steps and actions required to complete each task, and applied standard time estimates to the operators in each task to estimate overall task completion times. 
The Results The KLM method predicted that the Oracle Fusion Procurement designs would result in productivity gains in each task, ranging from 13 percent to 38 percent, with an overall productivity gain of 22.5 percent. These performance gains can be attributed to a reduction in the number of clicks and screens needed to complete the tasks. For example, creating a requisition in Oracle Fusion Procurement takes a user through only two screens, while ordering the same item in a previous version requires six screens to complete the task. Modeling user productivity has resulted not only in advances in Oracle Fusion applications, but also in advances in other areas. We leveraged lessons learned from the KLM studies to establish products like Oracle E-Business Suite (EBS). New user experience features in EBS 12.1.3, such as navigational improvements to the main menu, a Google-type search using auto-suggest, embedded analytics, and an in-context list of values tool help to reduce clicks and improve efficiency. For more information about KLM, refer to the Measuring User Productivity blog.
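
    As a rough illustration of how a keystroke-level model estimate is built up, here is a generic sketch that sums the commonly cited KLM operator times over the steps of a task. The operator values and the example step are mine, not Oracle's study data:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class KlmEstimate
      {
          // Commonly cited KLM operator estimates, in seconds (approximate).
          static readonly Dictionary<char, double> OperatorTime = new Dictionary<char, double>
          {
              { 'K', 0.28 },  // keystroke (average typist)
              { 'P', 1.10 },  // point at a target with the mouse
              { 'B', 0.10 },  // mouse button press or release
              { 'H', 0.40 },  // move hands between keyboard and mouse
              { 'M', 1.35 },  // mental preparation before an action
          };

          static double Estimate(string operators)
          {
              return operators.Sum(op => OperatorTime[op]);
          }

          static void Main()
          {
              // Hypothetical step: think, reach for the mouse, point and click a field,
              // return to the keyboard, type a 6-character value.
              string fillOneField = "MHPBBH" + new string('K', 6);
              Console.WriteLine("Estimated time: {0:F2} s", Estimate(fillOneField));
          }
      }

    Comparing the summed estimates for two designs of the same task is what yields the percentage productivity deltas reported above.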

    Read the article

  • Async run for javascript by using listeners

    - by CharlieShi
    I have two functions, Function3 and Function4. Function3 sends a request to the server side to get JSON data via ajax, which takes about 3 seconds to complete. Function4 is a common function which should wait for Function3's result and then act. My code is below: function ajaxRequest(container) { $.ajax({ url: "Home/GetResult", type: "post", success: function (data) { container.append(data.message); } }); } var eventable = { on: function (event, cb) { $(this).on(event, cb); }, trigger: function (event) { $(this).trigger(event); } } var Function3 = { run: function () { var self = this; setTimeout(function () { ajaxRequest($(".container1")); self.trigger('done'); }, 500); } } var Function4 = { run: function () { var self = this; setTimeout(function () { $(".container1").append("Function4 complete"); self.trigger('done'); },500); } } $.extend(Function3, eventable); $.extend(Function4, eventable); Function3.on('done', function (event) { Function4.run(); }); Function4.on('done', function () { $(".container1").append("All done"); }); Function3.run(); But now the problem is that when I run the code, it always shows me the result in this order: "Function4 complete" appears first, then "All done" follows, and 3 seconds later "Function3 complete" appears. That's not what I expected: I expected "Function3 complete" to come first, "Function4 complete" second, and "All done" last. Can anyone help me with this? Thanks in advance. EDIT: I have included all the functions above now. Also, you can check the script in this JSFiddle: http://jsfiddle.net/sporto/FYBjc/light/ I have replaced the function in the JSFiddle from a plain array push action to an ajax request.

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
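
    To make the SELECT N+1 item above concrete, here is a minimal sketch of the pattern and its eager-loading fix. It uses Entity Framework's Include purely as an illustration (the article's own examples are LLBLGen Pro prefetch paths), and the entity names are mine:

      using System;
      using System.Data.Entity;
      using System.Linq;

      public class Customer
      {
          public int Id { get; set; }
          public string CompanyName { get; set; }
      }

      public class Order
      {
          public int Id { get; set; }
          public virtual Customer Customer { get; set; }   // virtual enables lazy loading
      }

      public class ShopContext : DbContext
      {
          public DbSet<Order> Orders { get; set; }
          public DbSet<Customer> Customers { get; set; }
      }

      static class Report
      {
          // SELECT N+1: one query for the orders, then one extra query per order
          // the moment Customer is touched through lazy loading.
          public static void NPlusOne(ShopContext ctx)
          {
              foreach (var order in ctx.Orders.ToList())
                  Console.WriteLine(order.Customer.CompanyName);
          }

          // Eager loading: a single joined query; an O/R mapper or RDBMS profiler
          // should show one statement instead of N+1.
          public static void EagerLoaded(ShopContext ctx)
          {
              foreach (var order in ctx.Orders.Include(o => o.Customer).ToList())
                  Console.WriteLine(order.Customer.CompanyName);
          }
      }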

    Read the article

  • Friday Fun: Factory Balls – Christmas Edition

    - by Asian Angel
    Your weekend is almost here, but until the work day is over we have another fun holiday game for you. This week your job is to correctly decorate/paint the ornaments that go on the Christmas tree. Simple you say? Maybe, but maybe not! Factory Balls – Christmas Edition The object of the game is to correctly decorate/paint each Christmas ornament exactly as shown in the “sample image” provided for each level. What starts off as simple will quickly have you working to figure out the correct combination or sequence to complete each ornament. Are you ready? The first level serves as a tutorial to help you become comfortable with how to decorate/paint the ornaments. To move an ornament to a paint bucket or cover part of it with one of the helper items simply drag the ornament towards that area. The ornament will automatically move back to its’ starting position when the action is complete. First, a nice coat of red paint followed by covering the middle area with a horizontal belt. Once the belt is on move the ornament to the bucket of yellow paint. Next, you will need to remove the belt, so move the ornament back to the belt’s original position. One ornament finished! As soon as you complete decorating/painting an ornament, you move on to the next level and will be shown the next “sample Image” in the upper right corner. Starting with a coat of orange paint sounds good… Pop the little serrated edge cap on top… Add some blue paint… Almost have it… Place the large serrated edge cap on top… Another dip in the orange paint… And the second ornament is finished. Level three looks a little bit tougher…just work out your pattern of helper items & colors and you will definitely get it! Have fun decorating/painting those ornaments! Note: Starting with level four you will need to start using a combination of two helper items combined at times to properly complete the ornaments. Play Factory Balls – Christmas Edition

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 26 (sys.dm_db_log_space_usage)

    - by Tamarick Hill
    The sys.dm_db_log_space_usage DMV is a new DMV for SQL Server 2012. It returns Total Size, Used Size, and Used Percent size for a transaction log file of a given database. To illustrate this DMV, I will query the DMV against my AdventureWorks2012 database. SELECT * FROM sys.dm_db_log_space_usage As mentioned above, the result set gives us the total size of the transaction log in bytes, the used size of the log in bytes, and the percent of the log that has been used. This is a very simplistic DMV but returns valuable information. Being able to detect when a transaction log is close to being full is always a valuable thing to alert on, and this DMV just provided an additional method for acquiring the necessary information. Follow me on Twitter @PrimeTimeDBA
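
    As a small illustration of the alerting idea, here is an ADO.NET sketch that polls the DMV and warns when the log crosses a threshold. The connection string and the 90 percent threshold are placeholders of mine; the column names are the ones the DMV returns:

      using System;
      using System.Data.SqlClient;

      class LogSpaceCheck
      {
          static void Main()
          {
              const double thresholdPercent = 90.0;   // arbitrary example threshold
              var connectionString = "Server=.;Database=AdventureWorks2012;Integrated Security=true";

              using (var conn = new SqlConnection(connectionString))
              using (var cmd = new SqlCommand(
                  "SELECT total_log_size_in_bytes, used_log_space_in_bytes, used_log_space_in_percent " +
                  "FROM sys.dm_db_log_space_usage", conn))
              {
                  conn.Open();
                  using (var reader = cmd.ExecuteReader())
                  {
                      if (reader.Read())
                      {
                          double usedPercent = Convert.ToDouble(reader["used_log_space_in_percent"]);
                          Console.WriteLine("Log used: {0:F2}%", usedPercent);

                          if (usedPercent >= thresholdPercent)
                              Console.WriteLine("WARNING: the transaction log is nearly full.");
                      }
                  }
              }
          }
      }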

    Read the article

  • Oracle VM Templates Enable Oracle Real Application Clusters Deployment Up to 10 Times Faster than VMware vSphere

    - by Angela Poth
    Oracle VM - Quantifying the Value of Application-Driven Deployment. The recently published report by the Evaluator Group, “Oracle VM – Quantifying The Value of Application-Driven Virtualization,” validates Oracle's ease of use and time savings over VMware vSphere 5. The report found that users can deploy Oracle Real Application Clusters up to 10 times faster with Oracle VM Templates than with VMware vSphere. Using Oracle VM Templates, users can deploy Oracle E-Business Suite nearly 7 times faster than with VMware vSphere. "Oracle VM’s application-driven architecture was built to enable rapid deployment of enterprise applications, with simplified integrated lifecycle management, to fully support Oracle applications. Our testing has demonstrated 90 percent improvement deploying Oracle Real Application Clusters 11g R2 using Oracle VM Templates and a similar improvement of 85 percent for the Oracle E-Business Suite 12.1.1. This clearly shows that Templates can provide real value to customers and should become an essential component for enabling rapid enterprise application deployment and management," said Leah Schoeb, Senior Partner, Evaluator Group. Read the Press Release. Get a copy of the Evaluator Group Report: Oracle VM – Quantifying The Value of Application-Driven Virtualization

    Read the article

  • Building Private IaaS with SPARC and Oracle Solaris

    - by ferhat
    A superior enterprise cloud infrastructure with high performing systems using built-in virtualization! We are happy to announce the expansion of Oracle Optimized Solution for Enterprise Cloud Infrastructure with Oracle's SPARC T-Series servers and Oracle Solaris.  Designed, tuned, tested and fully documented, the Oracle Optimized Solution for Enterprise Cloud Infrastructure now offers customers looking to upgrade, consolidate and virtualize their existing SPARC-based infrastructure a proven foundation for private cloud-based services which can lower TCO by up to 81 percent(1). Faster time to service, reduce deployment time from weeks to days, and can increase system utilization to 80 percent. The Oracle Optimized Solution for Enterprise Cloud Infrastructure can also be deployed at up to 50 percent lower cost over five years than comparable alternatives(2). The expanded solution announced today combines Oracle’s latest SPARC T-Series servers; Oracle Solaris 11, the first cloud OS; Oracle VM Server for SPARC, Oracle’s Sun ZFS Storage Appliance, and, Oracle Enterprise Manager Ops Center 12c, which manages all Oracle system technologies, streamlining cloud infrastructure management. Thank you to all who stopped by Oracle booth at the CloudExpo Conference in New York. We were also at Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC, discussing how this solution can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure design. Designed, tuned, and tested, Oracle Optimized Solution for Enterprise Cloud Infrastructure is a complete cloud infrastructure or any virtualized environment  using the proven documented best practices for deployment and optimization. The solution addresses each layer of the infrastructure stack using Oracle's powerful SPARC T-Series as well as x86 servers with storage, network, virtualization, and management configurations to provide a robust, flexible, and balanced foundation for your enterprise applications and databases.  For more information visit Oracle Optimized Solution for Enterprise Cloud Infrastructure. Solution Brief: Accelerating Enterprise Cloud Infrastructure Deployments White Paper: Reduce Complexity and Accelerate Enterprise Cloud Infrastructure Deployments Technical White Paper: Enterprise Cloud Infrastructure on SPARC (1) Comparison based on current SPARC server customers consolidating existing installations including Sun Fire E4900, Sun Fire V440 and SPARC Enterprise T5240 servers to latest generation SPARC T4 servers. Actual deployments and configurations will vary. (2) Comparison based on solution with SPARC T4-2 servers with Oracle Solaris and Oracle VM Server for SPARC versus HP ProLiant DL380 G7 with VMware and Red Hat Enterprise Linux and IBM Power 720 Express - Power 730 Express with IBM AIX Enterprise Edition and Power VM.

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >