Search Results

Search found 1848 results on 74 pages for 'significant'.

Page 7/74 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion.  That being said, the performance characteristics differ dramatically from native programming, and take some relearning if you’re used to doing performance optimization in most other languages, especially C, C++, and similar.  However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#.  I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be…

    The rules when optimizing code in C# are very different from those in C or C++.  Often, they’re exactly backwards.  For example, in C and C++, lifting a variable out of loops in order to avoid memory allocations can often have huge advantages.  If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine.  This can be a tricky bottleneck to track down, even with a profiler.  Looking at the memory allocation graph is usually the key for spotting this routine, as it’s often “hidden” deep in the call graph.  For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i]);
        }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data.  As such, any performance optimization we could achieve would be greatly appreciated by our users.  After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

        // Allocate some data required
        DataStructure* data = new DataStructure(num);

        // Call into a subroutine that passed around and manipulated this data heavily
        CallSubroutine(data);

        // Read and use some values from here
        double values = data->Foo;

        // Cleanup
        delete data;

        // ...
        return bar;

    Normally, if “DataStructure” were a simple data type, I could just allocate it on the stack.  However, its constructor internally allocated its own memory using new, so this wouldn’t eliminate the problem.  In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same “data” memory instead of allocating.  At the highest level, my code effectively changed to something like:

        DataStructure* data = new DataStructure(numberToProcess);
        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i], data);
        }
        delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn’t something I wanted to do unless there was a significant benefit.
    In this case, after profiling the new version, I found that it increased the overall performance dramatically – my main test case went from 35 minutes of runtime down to 21 minutes.  This was such a significant improvement that I felt it was worth the reduction in maintainability.

    In C and C++, it’s generally a good idea (for performance) to:

    - Reduce the number of memory allocations as much as possible,
    - Use fewer, larger memory allocations instead of many smaller ones, and
    - Allocate as high up the call stack as possible, and reuse memory.

    I’ve seen many people try to make similar optimizations in C# code.  For good or bad, this is typically not a good idea.  The garbage collector in .NET completely changes the rules here.

    In C#, reallocating memory in a loop is not always a bad idea.  In this scenario, for example, I may have been much better off leaving the original code alone.  The reason for this is the garbage collector.  The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages.  First and foremost, it tends to make the code more maintainable – passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code.  This is something that should be avoided unless there is a significant reason.  Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast.  Finally, and most critically, there is a large advantage to having short-lived objects.  If you lift a variable out of the loop and reuse the memory, it’s much more likely that the object will get promoted to Gen1 (or worse, Gen2).  This can require expensive compaction operations, and can also lead to (at least temporary) memory fragmentation as well as more costly collections later.

    As such, I’ve found that it’s often (though not always) faster to leave memory allocations where you’d naturally place them – deep inside of the call graph, inside of the loops.  This causes the objects to stay very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole.

    In C#, I tend to:

    - Keep variable declarations in the tightest scope possible
    - Declare and allocate objects at usage
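    As a rough illustration of these two guidelines, here is a minimal sketch contrasting the hoisted style with declare-at-usage; Transform and the buffer size are hypothetical, invented purely for the example:

        static class ScopeSketch
        {
            // Hypothetical work routine standing in for whatever the loop body does.
            static void Transform(int item, byte[] buffer) { buffer[0] = (byte)item; }

            // Hoisted allocation: the C/C++ habit.  The buffer stays rooted for
            // the whole loop, making promotion to Gen1/Gen2 far more likely.
            static void HoistedAllocation(int[] items)
            {
                byte[] buffer = new byte[4096];
                for (int i = 0; i < items.Length; i++)
                {
                    Transform(items[i], buffer);
                }
            }

            // Declare-at-usage: the style described above.  Each buffer dies
            // young, so it can usually be reclaimed by a cheap Gen0 collection.
            static void DeclareAtUsage(int[] items)
            {
                for (int i = 0; i < items.Length; i++)
                {
                    byte[] buffer = new byte[4096];
                    Transform(items[i], buffer);
                }
            }
        }

    The second form allocates more, but each allocation is short-lived – exactly the trade-off the guidelines aim for.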
    While this achieves some of the same goals (reducing unnecessary allocations, etc.), the aim here is a bit different – it’s about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0 or, worst case, Gen1.  It also has the huge advantage of keeping the code very maintainable – objects are used and “released” as soon as possible, which keeps the code very clean.  It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time.

    Now – nowhere here am I suggesting that these are hard and fast rules that are always true.  That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary.  In my current project, however, I ran across one of those nasty little pitfalls that’s something to keep in mind – interop changes the rules.

    In this case, I was dealing with an API that, internally, used some COM objects, and these COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph.  Even though I was writing nice, clean managed code, the normal managed code rules for performance no longer applied.  After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration.  This required going back to a “native programming” mindset for optimization.  Lifting these variables out of the loop and reusing them took a 1:10 routine down to 0:20 – again, a very worthwhile improvement.

    Overall, the lessons here are:

    - Always profile if you suspect a performance problem – don’t assume any rule is correct, or that any code is efficient, just because it looks like it should be.
    - Remember to check memory allocations when profiling, not just CPU cycles.
    - Interop scenarios often cause managed code to act very differently from “normal” managed code.
    - Native code can be hidden very cleverly inside of managed wrappers.
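    The post doesn’t show the offending inner loop, but the before-and-after pattern it describes might look roughly like the following sketch; ComBackedBuffer, Reset, and ProcessElement are hypothetical stand-ins for the unnamed COM-backed API, not the author’s actual code:

        using System;

        // Hypothetical stand-in for the unnamed COM-backed type; in the real
        // scenario the native allocation is hidden inside a managed wrapper.
        sealed class ComBackedBuffer : IDisposable
        {
            public ComBackedBuffer(int size) { /* hidden native allocation */ }
            public void Reset() { /* hypothetical: clear state between iterations */ }
            public void Dispose() { /* release the underlying native memory */ }
        }

        static class InteropLoopSketch
        {
            static void ProcessElement(int element, ComBackedBuffer buffer) { /* work */ }

            // Before: the natural managed style – one hidden native allocation
            // per iteration.
            static void AllocatePerIteration(int[] elements)
            {
                foreach (int element in elements)
                {
                    using (ComBackedBuffer buffer = new ComBackedBuffer(1024))
                    {
                        ProcessElement(element, buffer);
                    }
                }
            }

            // After: lift the buffer out of the loop and reuse it – the
            // "native mindset" fix described above.
            static void ReuseAcrossIterations(int[] elements)
            {
                using (ComBackedBuffer buffer = new ComBackedBuffer(1024))
                {
                    foreach (int element in elements)
                    {
                        buffer.Reset();
                        ProcessElement(element, buffer);
                    }
                }
            }
        }

    As the post stresses, the second form is the kind of change that profiling, not rules of thumb, should justify.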

    Read the article

  • SQLAuthority News SQL Server Modeling CTP Nov 2009 Release 2 (formerly Oslo)

    SQL Server Modeling (formerly code-named “Oslo”) is a set of future technologies that provide significant productivity gains across the lifecycle of .NET applications by enabling developers, architects, and IT professionals to work together more effectively with SQL Server at the center of the application lifecycle. SQL Server Modeling CTP Nov 2009 Release 2 is a [...]

    Read the article

  • Inauguration Of My Laptop

    - by Pawan_Mishra
    Today I received my new laptop, an Intel Core i5-2450M @ 2.50GHz machine with 4 GB RAM. The other laptop (office-provided), which I have used for the past two years for programming, is an Intel Core2 Duo T6570 @ 2.10GHz machine. The reason I am talking about the laptops I own is my interest in writing multi-threaded/parallel code using the new TPL API provided in the .NET 4.0 framework. I have spent a significant amount of time over the past year writing code using the Parallel API of .NET...(read more)

    Read the article

  • Are highly capable programmers paid more than their managers?

    - by Fun Mun Pieng
    I know a lot of programmers are paid less than their managers by significant amounts, as highlighted there. How often does a programmer get paid more than his manager? Or, phrased differently: how many programmers are paid more than their managers? Personally, I know of one case. I'm asking to see how common such cases are. When I say "manager", I mean anyone further up their organizational hierarchy.

    Read the article

  • Improve Performance of char.IsWhiteSpace for ASCII inputs in .NET 3.5

    - by Tanzim Saqib
    IsNullOrWhiteSpace is a new method introduced in the string class in .NET 4.0. While this is a very useful method for string-based processing, I attempted to implement it in .NET 3.5 using char.IsWhiteSpace(), and found a significant performance penalty using that method, which I later replaced with my own version. The following code takes about 20.6074219 seconds on my machine, whereas my implementation of char.IsWhiteSpace takes about a quarter less time, only 15.8271485 seconds. In many scenarios, e.g. string...(read more)
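    The excerpt cuts off before the author's code, but a .NET 3.5-compatible sketch of the idea – fast-pathing the six ASCII whitespace characters and falling back to char.IsWhiteSpace for non-ASCII input – might look like this (the names are assumptions, not the author's):

        public static class StringHelper
        {
            // The six whitespace characters in the ASCII range, checked without
            // the general-purpose Unicode classification machinery.
            private static bool IsAsciiWhiteSpace(char c)
            {
                return c == ' ' || c == '\t' || c == '\n' ||
                       c == '\v' || c == '\f' || c == '\r';
            }

            // .NET 3.5 stand-in for string.IsNullOrWhiteSpace (added in .NET 4.0).
            public static bool IsNullOrWhiteSpace(string value)
            {
                if (value == null)
                {
                    return true;
                }
                for (int i = 0; i < value.Length; i++)
                {
                    char c = value[i];
                    if (c < 128 ? !IsAsciiWhiteSpace(c) : !char.IsWhiteSpace(c))
                    {
                        return false;
                    }
                }
                return true;
            }
        }

    Whether the fast path pays off depends on the input mix, so it is worth timing against plain char.IsWhiteSpace, as the author did, before adopting it.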

    Read the article

  • Moodle: The free learning platform

    The H Open: "Moodle, the e-learning platform, is one of the most significant and successful projects in open source. Despite its success, with hundreds of thousands of people being taught by courses written in Moodle, as a product it is not well known."

    Read the article

  • IBM DB2 9.7, DBADM and my Rubik's Cube

    It's a challenge to adapt to change, but the changes in IBM DB2 9.7's Database Administrator authority bring significant database security benefits. Join Rebecca Bond as she shares some twists, some turns, and some clues regarding DB2 9.7's Database Administrator (DBADM) authority.

    Read the article

  • Welcome Back !!!

    - by sanket
    Well, it's been quite some time since I have been able to post anything significant. I have been quite busy with some personal stuff which required more immediate attention. Finally, I got the time today to blog about something. I have switched companies, I have been cussed about, and the world seems to have gone from awry to awesome for me in the meanwhile. Anyway, I am back to my blogging ways again, and soon I will be starting a series of blog posts about WCF and networking stuff. Till then, - Happy Coding!

    Read the article

  • Finding co-maintainers for open source projects

    - by Mike Samuel
    I have a number of open-source projects that have gotten significant usage, and I would like to find co-maintainers so that I am not a bottleneck for maintenance and support requests, and to get other perspectives on how the projects should evolve. Where should I look for co-maintainers, what should I look for in a co-maintainer, and how should I go about bringing them up to speed on the code and maintainer responsibilities?

    Read the article

  • An XEvent a Day (28 of 31) – Tracking Page Compression Operations

    - by Jonathan Kehayias
    The Database Compression feature in SQL Server 2008 Enterprise Edition can provide some significant reductions in storage requirements for SQL Server databases and, in the right implementations and scenarios, performance improvements as well.  There isn't really a whole lot of documented information about the operation of database compression available in the DMVs or SQL Trace.  Paul Randal pointed out on Twitter today that sys.dm_db_index_operational_stats() provides...(read more)

    Read the article

  • Does Ubuntu run on current Asus Transformer Prime?

    - by Ubuntu User
    I've read instructions about dual-booting Android and Ubuntu on the Transformer Prime (a significant factor in ordering one), but also that this does not work with the latest Transformer Prime (firmware/BIOS?), and that Ubuntu ARM support is imminent. Will I be able to run Ubuntu in a day or two when the Transformer arrives? Also, am I right to assume I can restore the Transformer to factory status if I break something in the attempt?

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble: This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore? As stated previously, if we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows – DW, reporting, aggregations, grouping, scans, etc. – SQL Server has never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they’re still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed, and they’re going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore and to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer: DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries. DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX: Conventional relational structures were unable to provide adequate performance for user interaction with the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries: Even though the fact table is relatively small – only 22 million rows & 33GB – the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details: Classic vs. columnstore before-&-after metrics are impressive.

        Scenario        Conventional Structures          Columnstore      Δ
        SSRS via SSAS   10 - 12 seconds                  1 second         >10x
        Ad Hoc          5 - 7 minutes (300 - 420 s)      1 - 2 seconds    >100x

    Here are two charts characterizing this data graphically (see the original post for the images).  The first is a linear representation of report duration (in seconds) for conventional structures vs. columnstore indexes.  As is so often the case when we chart such significant deltas, the linear scale doesn’t expose some of the dramatically improved values corresponding to the columnstore metrics.  Just to make it fair, here’s the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren’t visible.
    The Wins

    Performance: Even prior to the columnstore implementation, canned report performance against the SSAS cube was tolerable at 10 - 12 seconds. Yet the 1-second performance afterward is clearly better. As significant as that is, imagine the user experience for ad hoc interrogation. The difference between several minutes and one or two seconds is a game changer, literally changing the way users interact with their data – no mental context switching, no wondering when the results will appear, no preoccupation with the spinning, mind-numbing, hurry-up-&-wait indicators.  As we’ve commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.

    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: “What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways.”

    Summary: For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, and for which time-to-results for ad hoc queries dropped from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.

    Read the article

  • What could be the Java successor Oracle wants to invest in?

    - by deamon
    I've read that Oracle wants to invest in a language other than Java: "On the other hand, Oracle has been particularly supportive of alternative JVM languages. Adam Messinger (http://www.linkedin.com/in/adammessinger) was pretty blunt at the JVM Languages Summit this year about Java the language reaching its logical end and how Oracle is looking for a 'higher level' language to 'put significant investment into.'" But which language could be the one Oracle wants to invest in? Is there any candidate other than Scala?

    Read the article

  • Reflections on SQL Saturday #60 - Cleveland

    - by AaronBertrand
    Every time I attend a SQL Saturday, I leave with a rejuvenated and even further reinforced sense of community. Cleveland (SQL Saturday #60) was certainly no exception. Allen White (blog | twitter), Erin Stellato (blog | twitter), Cory Stevenson, Brian Davis (twitter), and all others involved put on a fantastic event that endured some crappy weather, parking problems, and significant delays and hardship for at least one speaker - sorry, Grant! (Grant wrote about his experience.) I was able to...(read more)

    Read the article

  • Exadata at Oracle Openworld - A guide to sessions

    - by Javier Puerta
    A large number of sessions focusing on Exadata will be taking place during the week of Oracle Openworld in San Francisco. To help you organize your schedule, I am including below a list of sessions and events around Exadata that you will find of interest.

    PARTNER SPECIFIC SESSIONS

    Sunday, Sep 30, 3:30 PM - 4:30 PM - Moscone South - 301
    Building a Winning Services Practice with Oracle’s Engineered Systems. This session kicks off a week of sessions on Oracle’s engineered systems, from Oracle Database Appliance to Oracle Exadata, Oracle Exalogic, Oracle Exalytics, Oracle Big Data Appliance, and Oracle SPARC SuperCluster. Hear about what is to come in the week ahead in terms of engineered systems. As an ideal consolidation platform for database workloads, Oracle Exadata generates significant services opportunities. This session reviews the range of partner-led services that support Oracle Exadata deployments.

    Monday, Oct 1, 3:30 PM - 6:00 PM PST - Grand Hyatt San Francisco, 345 Stockton Street, San Francisco (Conference Theater; a 15-minute walk from the OOW Moscone Center - see directions here)
    Exadata & Manageability EMEA Partner Community Forum. Listen to other partners share their experiences in selling and implementing Exadata and Manageability projects, and have a direct dialogue with some of the Oracle executives that are driving the strategy of the company in these areas. Agenda:
    - Welcome - Hans-Peter Kipfer, VP, Engineered Systems Oracle EMEA
    - Next challenges in building and managing clouds - Javier Cabrerizo, VP, Business Development for Exadata, Oracle Corp.
    - Partner Experiences: IT modernization, simplification and cost reduction: the case of a customer in Transportation & Logistics with custom applications and SAP - Francisco Bermudez, Country Leader Infrastructure Services, Capgemini, Spain
    - Nvision cloud project - Dmitry Krasilov, Head of Oracle Competence Center, Nvision Group, Russia
    - From Exadata Ready to Exadata Optimized: An ISV Experience - Miguel Alves, Product Business Solutions Manager, WeDo Technologies, Portugal
    To confirm your participation, send an email to [email protected]

    Wednesday, Oct 3, 11:45 AM - 12:45 PM - Marriott Marquis - Golden Gate B
    Building a Practice with Exadata Database Machine. As an ideal consolidation platform for database workloads, Oracle’s Exadata Database Machine generates significant services opportunities. In this session, learn about the range of partner-led services that support Exadata Database Machine deployments.

    For other Engineered Systems sessions for partners at the Oracle PartnerNetwork Exchange, click here.

    OOW CUSTOMER SESSIONS

    Download the Focus On Exadata guide for a full list of Exadata OOW sessions.

    Read the article

  • How to Get Your Online Business Noticed

    Getting your online business noticed can sometimes feel like an extremely slow and mundane process. Unfortunately, unless you have a significant amount of money to spend on AdWords, the likelihood of generating thousands of hits per month is slim, to say the least. So what can you do to improve your chances?

    Read the article

  • Intel Server Strategy Shift with Sandy Bridge EN & EP

    - by jchang
    The arrival of the Sandy Bridge EN and EP processors, expected in early 2012, will mark the completion of a significant shift in Intel server strategy. For the longest time (1995-2009), the strategy had been to focus on producing a premium processor designed for 4-way systems that might also be used in 8-way systems and higher. The objective for 2-way systems was to use the desktop processor, which later had a separate brand and a different package & socket, to leverage the low cost structure in driving...(read more)

    Read the article

  • Tweet count just shot up

    - by Tom Gullen
    On our homepage we have a tweet button and counter: http://www.scirra.com. This was around 600 until overnight it suddenly doubled to 1,200, and it has continued to rise at a normal rate since. Has Twitter changed what counts as a tweet for that counter? I've noticed competitors' counts have dropped significantly. We don't buy tweets or followers, and I haven't found any spam tweets about us, nor have we had any significant recent press.

    Read the article

  • Ubuntu's Lucid Lynx Linux OS Debuts With an Eye on ISVs

    Serverwatch: "What's really exciting is the ecosystem support that we've seen around this release," Canonical CEO Jane Silber said on a conference call announcing the release. "With over 80 vendors announcing support for about 100 applications, that's significant and a recognition of the long term support nature of this particular release."

    Read the article

  • Essence of Anchor Text

    Anchor text plays a significant role in improving search engine ranking. Anchor text is directly correlated with inbound links. If you are leaving comments on blogs or submitting articles with links, make use of anchor text and not just the URL.

    Read the article

  • APress Deal of the Day 26/Jul/2013 - Pro Windows Embedded Compact 7

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/07/26/apress-deal-of-the-day-26jul2013---pro-windows-embedded.aspx

    Today's $10 deal of the day from APress at http://www.apress.com/9781430241799 is Pro Windows Embedded Compact 7: "This book is the natural choice for developers who want to create sophisticated small-footprint devices for both consumer and enterprise use. After significant upgrades from Microsoft, Windows Embedded Compact 7 is more powerful than ever, as you'll discover in this carefully focused deep-dive."

    Read the article
