Search Results

Search found 1596 results on 64 pages for 'akshay deep lamba'.

Page 12/64

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCon London was about a subject that I’ve come across fairly recently, when I was building SilverUnit – a “pure” unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of “cogs in the machine” – when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be: your custom control running inside the Silverlight runtime in the browser, your plug-in running inside an IDE, your activity running inside a Windows Workflow, your code running inside a Java EE bean, your code inheriting from a COM+ (Enterprise Services) component, etc.

    Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system. For example, one of the biggest problems with testing objects like Silverlight controls is the way they depend on the Silverlight runtime – they don’t implement some Silverlight interface, they don’t just call external static methods against the framework runtime that surrounds them – they actually inherit parts of the framework: they all inherit (in this case) from the Silverlight DependencyObject.

    Wrapping it up? An inheritance dependency is uniquely challenging to bring under test, because “classic” methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects that get delegated all the possible actions from the dependencies. In Silverlight’s case, that would mean creating your own custom logic class that would be called directly from controls that inherit from Silverlight, and would be tested independently of these controls. The pro side is that you get the benefit of understanding the “contract” and the “roles” your system plays against your logic, but unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication.

    About perimeters: a perimeter is that invisible line that you draw around your pieces of logic during a test, that separates the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods, etc.) that are actually replaced for that test or for all the tests.

    Role-based perimeters: in the case of creating a wrapper around an object, one really creates a “role-based” perimeter around the logic that is being tested – that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. In the image below, we have the code we want to test represented as a star. No perimeter is drawn yet (we haven’t wrapped it up in anything yet). The next image shows what happens when you wrap your logic with a role-based wrapper – you get a role-based perimeter anywhere your code interacts with the system. There’s another way to bring that code under test – using isolation frameworks like Typemock, Rhino Mocks, and Moq (but if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).
    Ad-hoc isolation perimeters: the image below shows what I call an ad-hoc perimeter, which might be vastly different between different tests. This perimeter’s surface is much smaller, because for that specific test, that is all the “change” that is required to the host system behavior.

    The third way of isolating the code from the host system is the main “meat” of this post: subterranean perimeters. Subterranean perimeters are deep-rooted perimeters – “always on” seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can’t control, the only way I’ve found to control them is with runtime (not compile-time) interception of method calls on the system. One way to get such abilities is by using aspect-oriented frameworks – for example, in SilverUnit, I’ve used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the Silverlight runtime. The image below depicts an example of what such a perimeter could look like. As you can see, the actual seams can be very far away from the actual code under test, and as you’ll discover, that’s actually a very good thing.

    Here is only a partial list of examples of such deep-rooted seams: disabling the constructor of a base class five levels below the code under test (this.base.base.base.base); faking static methods of a type that’s being called several levels down the stack (method x() calls y() calls z() calls SomeType.StaticMethod()); replacing an async mechanism with a synchronous one (replacing all timers with your own timer behavior that always ticks immediately upon calls to start() on the same caller thread, for example); replacing event mechanisms with your own event mechanism (to allow “firing” system events); changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all DependencyProperty set and get calls with calls to an in-memory value store instead of using the one built into Silverlight, which threw exceptions without a browser).

    Several questions might jump to mind: How do you know what to fake? (How do you discover the perimeter?) How do you fake it? Wouldn’t this be problematic – to fake something you don’t own? It might change in the future.

    How do you discover the perimeter to fake? To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using “regular” test code. You invoke the thing that you’d like to do in a real unit test, to see what happens: Can I even create an instance of this object without getting an exception? Can I invoke this method on that instance without getting an exception? Can I verify that some call into the system happened? You make the invocation, get an exception (because there is a dependency), and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don’t get an exception, the perimeter is good for that invocation, so you can move on to trying out other methods on that object. A rough sketch of this probing loop appears below. In a future post I will show the process using CThru, and how you end up with something close to a domain-specific test framework after you’re done creating the perimeter you need.
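
    To make the probing loop concrete, here is a minimal C# sketch; MyHostedControl is a hypothetical stand-in for a class that inherits from a host framework, not anything from SilverUnit itself:

        using System;

        // Hypothetical control standing in for a class that inherits from a host
        // framework (e.g., a Silverlight DependencyObject). In a real hosted
        // control, the base constructor or Refresh() would throw outside the host.
        class MyHostedControl /* : DependencyObject */
        {
            public MyHostedControl() { /* base constructor may hit the runtime */ }
            public void Refresh() { /* may call back into the host */ }
        }

        class WishfulInvocationProbe
        {
            static void Main()
            {
                // Step 1: just try to do what a real unit test would do,
                // and let the failure tell you where the perimeter must go.
                try
                {
                    var control = new MyHostedControl();
                    control.Refresh();
                    Console.WriteLine("No exception - perimeter is good for this invocation.");
                }
                catch (Exception ex)
                {
                    // Step 2: read the stack trace, pick a frame, disable it
                    // (e.g., with an AOP/profiler-based interceptor), then retry.
                    Console.WriteLine(ex.StackTrace);
                }
            }
        }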

    Read the article

  • Google I/O 2012 - Beyond Paper: Google Cloud Print and the Future of Printing

    Presented by Akshay Kannan. Use Google Cloud Print's API to send documents to a printer (or anywhere else) quickly and easily. We're currently integrated with Chrome, ChromeOS, mobile Gmail/Docs, and most new printers, and that's just the start. We provide a configurable JavaScript API, an Android Intent, as well as HTTP and XMPP interfaces for sending documents and receiving them in virtually any format. Come learn how to enable printing from your web and mobile apps on any device to any printer in the world, with just a few lines of code! For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 01:06:43.

    Read the article

  • Advanced Java book along the lines of CLR via C# or C# in Depth?

    - by devoured elysium
    I want to learn how things work in depth in Java. Coming from a C# background, I found a couple of very good books that go really deep into C# (C# in Depth and CLR via C#, to name the most popular). Is there anything like that for Java? I searched on Amazon but nothing seemed to go as deep into Java as the two above go into C#. I don't want to know more about specific classes, or how to use this library or that other library; I want to learn how objects are created in memory, how they get created on the stack, heap, etc. A more fundamental knowledge, let's say. I've read some chapters of Effective Java and The Java Programming Language, but they don't go as deep as I'd want them to. Maybe there are people who know both C# and Java, have read any of the referred books, and know of any that might be useful? Thanks

    Read the article

  • This Week in Geek History: Morse Code, Mars Rovers, J.R.R. Tolkien’s Birthday

    - by Jason Fitzpatrick
    Every week we bring you interesting facts from the history of Geekdom. This week in Geek History witnessed the first successful demonstration of the electric telegraph, the safe landing of the Spirit rover on the surface of Mars, and the birth of famed fantasy author J.R.R. Tolkien.

    Read the article

  • Recover that Photo, Picture or File You Deleted Accidentally

    - by The Geek
    Have you ever accidentally deleted a photo on your camera, computer, USB drive, or anywhere else? What you might not know is that you can usually restore those pictures—even from your camera’s memory stick. Windows tries to prevent you from making a big mistake by providing the Recycle Bin, where deleted files hang around for a while—but unfortunately it doesn’t work for external USB drives, USB flash drives, memory sticks, or mapped drives. The great news is that this technique also works if you accidentally deleted the photo… from the camera itself. That’s what happened to me, and prompted writing this article.

    Restore that File or Photo using Recuva

    The first piece of software that you’ll want to try is called Recuva, and it’s extremely easy to use—just make sure when you are installing it that you don’t accidentally install that stupid Yahoo! toolbar that nobody wants. Now that you’ve installed the software, and avoided an awful toolbar installation, launch the Recuva wizard and let’s start through the process of recovering those pictures you shouldn’t have deleted. The first step on the wizard page will let you tell Recuva to only search for a specific type of file, which can save a lot of time while searching, and make it easier to find what you are looking for. Next you’ll need to specify where the file was, which will obviously be wherever you deleted it from. Since I deleted mine from my camera’s SD card, that’s where I’m looking for it. The next page will ask you whether you want to do a Deep Scan. My recommendation is to not select this for the first scan, because usually the quick scan can find it. You can always go back and run a deep scan a second time. And now, you’ll see all of the pictures deleted from your drive, memory stick, SD card, or wherever you searched. Looks like what happened in Vegas didn’t stay in Vegas after all… If there are a really large number of results, and you know exactly when the file was created or modified, you can switch to the advanced view, where you can sort by the last modified time. This can help speed up the process quite a bit, so you don’t have to look through quite as many files. At this point, you can right-click on any filename, choose to Recover it, and then save the files elsewhere on your drive. Awesome!

    Restore that File or Photo using DiskDigger

    If you don’t have any luck with Recuva, you can always try out DiskDigger, another excellent piece of software. I’ve tested both of these applications very thoroughly, and found that neither of them will always find the same files, so it’s best to have both of them in your toolkit. Note that DiskDigger doesn’t require installation, making it a really great tool to throw on your PC repair flash drive. Start off by choosing the drive you want to recover from… Now you can choose whether to do a deep scan, or a really deep scan. Just like with Recuva, you’ll probably want to select the first one first. I’ve also had much better luck with the regular scan, rather than the “dig deeper” one. If you do choose the “dig deeper” one, you’ll be able to select exactly which types of files you are looking for, though again, you should use the regular scan first. Once you’ve come up with the results, you can click on the items on the left-hand side, and see a preview on the right. You can select one or more files, and choose to restore them. It’s pretty simple!

    Download DiskDigger from dmitrybrant.com
    Download Recuva from piriform.com

    Good luck recovering your deleted files!
    And keep in mind, DiskDigger is totally free donationware from a single, helpful guy… so if his software helps you recover a photo you never thought you’d see again, you might want to think about throwing him a dollar or two.

    Read the article

  • The Best of CES (Consumer Electronics Show) in 2011

    - by Justin Garrison
    This year, How-To Geek’s own Justin was on-site at the Consumer Electronics Show in Las Vegas, where every gadget manufacturer shows off their latest creations, and he was able to sit down and get hands-on with most of them. Here’s the best of the bunch. Make sure to also check out our list of the Worst of CES 2011, where we covered the gadgets that just didn’t make the cut.

    Read the article

  • SQL SERVER – Finding Latch Statistics

    - by pinaldave
    Last month I wrote the SQL Server Wait Types and Queues series (SQL SERVER – Summary of Month – Wait Type – Day 28 of 28). I had great fun writing the series. I learned a lot, and I felt it created some deep interest in the subject with others. I recently received a very interesting question from one of my readers after reading SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28: can they find out what kind of latches are waiting, and what their count is? Absolutely! The SQL Server team has already provided a DMV which does just that:

        -- Latch
        SELECT *
        FROM sys.dm_os_latch_stats
        ORDER BY wait_time_ms DESC

    The above script will return details about how many latches were waiting, and for how long. After going over this script I feel like going deeper into the subject. I will post a blog post on the subject soon. Reference: Pinal Dave (http://blog.SQLAuthority.com)
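
    The post stops at SELECT *; as a hedged extension (assuming you have permission to clear server-wide statistics), you might narrow the output to latch classes that actually waited, and reset the counters before a controlled re-run:

        -- Top latch classes by cumulative wait time, ignoring classes that never waited
        SELECT TOP 10
            latch_class,
            waiting_requests_count,
            wait_time_ms,
            max_wait_time_ms
        FROM sys.dm_os_latch_stats
        WHERE waiting_requests_count > 0
        ORDER BY wait_time_ms DESC;

        -- Reset the counters before a controlled test run
        DBCC SQLPERF ('sys.dm_os_latch_stats', CLEAR);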

    Read the article

  • Is the abundance of Frameworks dumbing down programmers?

    - by Gratzy
    With all of the frameworks available these days – ORMs, DI/IoC, etc. – I find that many programmers are losing, or never develop, the problem-solving skills needed to resolve difficult issues. I've seen unexpected behaviour creep into applications many times, with the developers unable to really dig in and find the cause. It seems to me that a deep understanding of what's going on under the hood is being lost. Don't get me wrong, I'm not suggesting these frameworks aren't good or haven't moved the industry forward; I'm only asking whether, as an unintended consequence, developers aren't gaining the knowledge and skill needed for a deep understanding of systems.

    Read the article

  • Developer Webinar Today:"Publishing IPS Packages"

    - by user13333379
    Oracle's Solaris Organization is pleased to announce a Technical Webinar for Developers on Oracle Solaris 11: "Publishing IPS Packages" By Eric Reid (Principal Software Engineer) today June 19, 2012 9:00 AM PDT This bi-weekly webinar series (every other Tuesday @ 9 a.m. PT) is designed for ISVs, IHVs, and Application Developers who want a deep-dive overview about how they can deploy Oracle Solaris 11 into their application environments. This series will provide you the unique opportunity to learn directly from Oracle Solaris ISV Engineers and will include LIVE Q&A via chat with subject matter experts from each topic area. Any OTN member can register for this free webinar here.  Today's webinar is a deep dive into IPS. The attendees of the initial IPS webinar asked for more information around this topic. Eric Reid who worked with leading software vendors (ISVs) to migrate Solaris 10 System V packages to IPS will share his experience with us. 
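
    The excerpt itself contains no commands, but for context, here is a rough sketch of the basic IPS publishing flow on Solaris 11; the repository path, publisher prefix, and manifest name are illustrative assumptions:

        # Create a local file-based repository and set its publisher (paths/names illustrative)
        pkgrepo create /var/tmp/myrepo
        pkgrepo set -s /var/tmp/myrepo publisher/prefix=mypublisher

        # Publish a package from a prototype directory, described by a manifest
        pkgsend publish -s /var/tmp/myrepo -d /tmp/proto mypkg.p5m

        # Verify the package landed in the repository
        pkgrepo list -s /var/tmp/myrepo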

    Read the article

  • HTG Projects: How to Create Your Own Custom Papercraft Toy

    - by Eric Z Goodnight
    Graphics programs aren’t simply for editing your photos—they can have whatever fun application you can think of. For a fun, geeky project, here’s a simple papercraft toy you can make with a printer and simple household tools.

    Read the article

  • How to Setup Software RAID for a Simple File Server on Ubuntu

    - by Sysadmin Geek
    Do you need a file server on the cheap that is easy to set up and “rock solid” reliable, with email alerting? We will show you how to use Ubuntu, software RAID, and Samba to accomplish just that.

    Read the article

  • C-states and P-states: confounding factors for benchmarking

    - by Dave
    I was recently looking into a performance issue in the java.util.concurrent (JUC) fork-join pool framework related to particularly long latencies when trying to wake (unpark) threads in the pool. Eventually I tracked the issue down to the power & scaling governor and idle-state policies on x86. Briefly, P-states refer to the set of clock rates (speeds) at which a processor can run. C-states reflect the possible idle states. The deeper the C-state (higher numerical values), the less power the processor will draw, but the longer it takes the processor to respond and exit that sleep state on the next idle-to-non-idle transition. In some cases the latency can be worse than 100 microseconds. C0 is the normal execution state, and P0 is "full speed", with higher Pn values reflecting reduced clock rates. C-states and P-states are orthogonal, although P-states only have meaning at C0. You could also think of the states as occupying a spectrum as follows: P0, P1, P2, Pn, C1, C2, ... Cn, where all the P-states are at C0.

    Our fork-join framework was calling unpark() to wake a thread from the pool, and that thread was being dispatched onto a processor at a deep C-state, so we were observing rather impressive latencies between the time of the unpark and the time the thread actually resumed and was able to accept work. (I originally thought we were seeing situations where the wakee was preempting the waker, but that wasn't the case. I'll save that topic for a future blog entry). It's also worth pointing out that higher P-state values draw less power, and there's usually some latency in ramping up the clock (P-states) in response to offered load.

    The issue of C-states and P-states isn't new and has been described at length elsewhere, but it may be new to Java programmers, adding a new confounding factor to benchmarking methodologies and procedures. To get stable results I'd recommend running at C0 and P0, particularly for server-side applications. As appropriate, disabling "turbo" mode may also be prudent. But it also makes sense to run with the system defaults to understand if your application exhibits any performance sensitivity to power management policies.

    The operating system power management subsystem typically controls the P-states and C-states based on current and recent load. The scaling governor manages P-states. Operating systems often use adaptive policies that try to avoid deep C-states for some period if recent deep idle episodes proved to be very short and futile. This helps make the system more responsive under bursty or otherwise irregular load. But it also means the system is stateful and exhibits a memory effect, which can further complicate benchmarking. Forcing C0 + P0 should avoid this issue. A sketch of the kind of microbenchmark that surfaces this effect appears below.
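
    None of the measurement code appears in the post, but here is a minimal Java sketch of the kind of park/unpark ping that surfaces the effect. On a box with deep C-states enabled and the default governor, the printed latencies can be expected to swing with the power-management policy; forcing C0/P0 should flatten them:

        import java.util.concurrent.locks.LockSupport;

        // Minimal sketch: measure the delay between unpark() and the parked thread
        // actually resuming. park() may return spuriously, so treat individual
        // numbers as indicative rather than exact.
        public class UnparkLatency {
            static volatile long wakeRequestedAt;

            public static void main(String[] args) throws InterruptedException {
                Thread waiter = new Thread(() -> {
                    for (int i = 0; i < 200; i++) {
                        LockSupport.park();
                        long latencyNs = System.nanoTime() - wakeRequestedAt;
                        if (i % 20 == 0) {
                            System.out.println("wake latency: " + latencyNs + " ns");
                        }
                    }
                });
                waiter.start();
                while (waiter.isAlive()) {
                    Thread.sleep(5); // let the waiter park and give the CPU a chance to go idle
                    wakeRequestedAt = System.nanoTime();
                    LockSupport.unpark(waiter);
                }
                waiter.join();
            }
        }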

    Read the article

  • Introducing the EMEA Oracle Partner Days: Maximize your Potential

    - by Julien Haye
    The EMEA Oracle PartnerNetwork Days – which used to incorporate the Partner Executive Forums (local/regional live events delivering sales strategy to a partner executive audience) and Satellite Events (local/regional live events targeting sales and consultants, delivering Oracle strategy, engagement around specializations, executive keynotes, and deep-dive content-related breakout sessions) – are now made up of two distinct partner events in EMEA:

    Oracle Partner Days. These are similar to the Satellite Events from last year: local/regional live events targeting the key contacts in sales and consultancy, delivering Oracle strategy, engaging around the several perspectives of the Oracle portfolio, with executive keynotes and deep-dive business content-related breakout sessions. Learn more about the EMEA Oracle Partner Days at www.oracle.com/partners/goto/partnerdays-emea

    Oracle Partner Executive Forums, which are by invitation only. Please contact your local Alliances manager with any questions.

    Read the article

  • How To Delete Your Skype Call and Chat History

    - by Gopinath
    Just like every other modern application, Skype records all the communications we exchange using it. It records instant messages, calls, file transfers, SMS, etc., and makes them easy to view using the Conversation tab. If you ever feel like getting rid of this history information, you need to delete it. Skype provides a single-click option to clear all the history from your account, but the feature is buried deep under the Options menu. Really deep! To clear history, follow the menu Tools –> Options, switch to the Privacy Settings tab available on the left side, click on the Show advanced options button, and finally hit the Clear history button. Ah! You are almost done. Just confirm the popup it displays on screen, and your history has vanished from your account.

    Read the article

  • Duplicate page content and the Google index

    - by Kit Sunde
    I have static pages with dynamically expanding content that Google is indexing. I also have deep links into virtually duplicate pages, which will pre-expand the relevant section of content. It seems like Google is ignoring all my specialized pages and not putting them in the index, even after going through Webmaster Tools, crawling, and submitting them to the index manually. I also use the Google API for integrating search on the site, and the deep-linked pages won't show up. Is there a good solution for this?

    Read the article

  • Recommended Bean Utility Libraries for Java

    - by Jim Ferrans
    I'm looking for a good, well-supported, and efficient Java library that uses reflection to automate JavaBean operations. These include making a deep copy of an arbitrary bean hierarchy (with nested lists and maps of beans), comparing two bean hierarchies for deep equality, and "transmorphing" one bean to another of a different class. Some possibilities include Apache Commons BeanUtils, Spring's BeanUtils, and Java's Bean support. Which libraries would you recommend?
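
    Not part of the question, but a useful baseline when comparing the candidates: Apache Commons Lang (a sibling of the Commons BeanUtils library mentioned above) offers a serialization-based deep copy. A small sketch, assuming commons-lang3 is on the classpath and the whole bean graph is Serializable:

        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.List;
        import org.apache.commons.lang3.SerializationUtils;

        // Deep copy via serialization: slower than reflective copying, but it
        // handles nested lists/maps of beans with no extra configuration.
        public class DeepCopyExample {
            static class Customer implements Serializable {
                String name;
                List<String> orders = new ArrayList<>();
            }

            public static void main(String[] args) {
                Customer original = new Customer();
                original.name = "Acme";
                original.orders.add("order-1");

                Customer copy = SerializationUtils.clone(original);
                copy.orders.add("order-2");

                // Prints [order-1]: the nested list was copied, not shared
                System.out.println(original.orders);
            }
        }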

    Read the article

  • Problem in named destination

    - by Palanisamy
    Hi, I want to provide deep linking to a named destination. Is that possible? I am using this tool: http://flexpaper.devaldi.com/ This tool loads a PDF after converting it to SWF using PDF2SWF (www.swftools.org). Is there any way to get the named destinations and provide deep linking to a named destination? Please help me. Thanks in advance. Palanisamy

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion. That being said, the performance characteristics dramatically differ from native programming, and take some relearning if you’re used to doing performance optimization in most other languages, especially C, C++, and similar. However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be…

    The rules in C# when optimizing code are very different than in C or C++. Often, they’re exactly backwards. For example, in C and C++, lifting a variable out of loops in order to avoid memory allocations often can have huge advantages. If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler. Looking at the memory allocation graph is usually the key for spotting this routine, as it’s often “hidden” deep in the call graph. For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i]);
        }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data. As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

        // Allocate some data required
        DataStructure* data = new DataStructure(num);

        // Call into a subroutine that passed around and manipulated this data highly
        CallSubroutine(data);

        // Read and use some values from here
        double values = data->Foo;

        // Cleanup
        delete data;
        // ...
        return bar;

    Normally, if “DataStructure” was a simple data type, I could just allocate it on the stack. However, its constructor, internally, allocated its own memory using new, so this wouldn’t eliminate the problem. In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same “data” memory instead of allocating. At the highest level, my code effectively changed to something like:

        DataStructure* data = new DataStructure(numberToProcess);
        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i], data);
        }
        delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn’t something I wanted to do unless there was a significant benefit.
    In this case, after profiling the new version, I found that it increased the overall performance dramatically – my main test case went from 35 minutes runtime down to 21 minutes. This was such a significant improvement, I felt it was worth the reduction in maintainability.

    In C and C++, it’s generally a good idea (for performance) to: reduce the number of memory allocations as much as possible; use fewer, larger memory allocations instead of many smaller ones; and allocate as high up the call stack as possible, and reuse memory.

    I’ve seen many people try to make similar optimizations in C# code. For good or bad, this is typically not a good idea. The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea. In this scenario, for example, I may have been much better off leaving the original code alone. The reason for this is the garbage collector. The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages. First and foremost, it tends to make the code more maintainable – passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code. This is something that should be avoided unless there is a significant reason. Second, (unlike in C and C++) memory allocation of a single object in C# is normally cheap and fast. Finally, and most critically, there is a large advantage to having short-lived objects. If you lift a variable out of the loop and reuse the memory, it’s much more likely that the object will get promoted to Gen1 (or worse, Gen2). This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later.

    As such, I’ve found that it’s often (though not always) faster to leave memory allocations where you’d naturally place them – deep inside of the call graph, inside of the loops. This causes the objects to stay very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole.

    In C#, I tend to: keep variable declarations in the tightest scope possible, and declare and allocate objects at usage.

    While this tends to accomplish some of the same goals (reducing unnecessary allocations, etc.), the goal here is a bit different – it’s about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0, or worst case, Gen1. It also has the huge advantage of keeping the code very maintainable – objects are used and “released” as soon as possible, which keeps the code very clean. It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time.

    Now – nowhere here am I suggesting that these rules are hard, fast rules that are always true. That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary. In my current project, however, I ran across one of those nasty little pitfalls that’s something to keep in mind – interop changes the rules. In this case, I was dealing with an API that, internally, used some COM objects, and these COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph.
    Even though I was writing nice, clean managed code, the normal managed-code rules for performance no longer applied. After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration. This required going back to a “native programming” mindset for optimization. Lifting these variables and reusing them took a 1:10 routine down to 0:20 – again, a very worthwhile improvement.

    Overall, the lessons here are: always profile if you suspect a performance problem – don’t assume any rule is correct, or any code is efficient just because it looks like it should be; remember to check memory allocations when profiling, not just CPU cycles; interop scenarios often cause managed code to act very differently than “normal” managed code; and native code can be hidden very cleverly inside of managed wrappers. The sketch below contrasts the two allocation styles discussed above.
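
    A sketch (names like WorkBuffer and Process are hypothetical, not from the post) contrasting the two allocation placements discussed above:

        using System;

        class AllocationPlacementSketch
        {
            // Hypothetical stand-in for a buffer that might wrap managed or native resources.
            class WorkBuffer
            {
                public int[] Data;
                public WorkBuffer(int size) { Data = new int[size]; }
            }

            static int Process(int item, WorkBuffer buffer)
            {
                buffer.Data[0] = item;          // pretend to use the buffer
                return buffer.Data[0] * 2;
            }

            static void Main()
            {
                int[] items = { 1, 2, 3, 4, 5 };

                // Managed-code instinct: allocate inside the loop. The objects stay
                // short-lived, so they usually die cheaply in Gen0.
                foreach (int item in items)
                {
                    var buffer = new WorkBuffer(1024);
                    Process(item, buffer);
                }

                // Native/interop instinct: lift and reuse. Worth it when the buffer's
                // constructor triggers native allocations (e.g., behind a COM wrapper).
                var shared = new WorkBuffer(1024);
                foreach (int item in items)
                {
                    Process(item, shared);
                }
                Console.WriteLine("done");
            }
        }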

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found. After looking at discovery activities across a very wide range of industries, question types, business needs, and problem-solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context. We label these activities Modes: explore, compare, and comprehend are three of the nine recognizable modes. Modes describe *how* people go about realizing insights. (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity. When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities to which specific types of information visualizations better enable these scenarios for the types of data users will analyze. The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficulty of communicating using only verbs. So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics. Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into. We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery, as with the Modes. Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sensemaking needs and activity in a more specific, consistent, and complete fashion. In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals. For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject]’ clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides of the perspective divide. For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.
    The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can we innovate, by expanding our product portfolio to meet unmet needs?”. Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis. From the perspective of analysis, this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’? And how do these compare to each other?”. Translation also happens from the perspective of analysis to the perspective of data, in terms of availability, quality, completeness, format, volume, etc. By implication, we are proposing that most working organizations — small and large, for-profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects. This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one. (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity — but we’re not considering the recursive aspects of this exercise at the moment.) Compellingly, we can place the collection of Subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain. As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics. The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject. Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (data-centric and blue) or people (user-centric and orange). This axis is shown explicitly above the Spectrum. Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity. This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity. Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets. For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.
    Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ — what many enterprises understand as alerts. The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible. Accordingly, business analytics as a domain is structured around the fundamental assumption that sensemaking depends on analytical transformation of data. Analytical activity incrementally synthesizes more complex and larger-scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data and recombined with other Subjects. The end goal of ‘laddering’ successive transformations is to enable sensemaking from the business perspective, rather than the analytical perspective. Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods. Beginning with some motivating question, such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (a Plan), Analysts will iteratively wrangle and transform source data Records, Values, and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business. More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders. These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer. The fulfillment process will involve other types of Entities, such as the products or services the business provides. The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included. All of this may be interpreted through an understanding of the operational domain of the business's supply chain (a Domain). We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article

  • Lambdas for .NET made easy…

    - by mbcrump
    The purpose of my blog is to explain things for a beginner-to-intermediate C# programmer. I’ve seen several blog posts that use lambda expressions, always assuming the audience is familiar with them. The purpose of this post is to make them simple and easily understood. Let’s begin with a definition. A lambda expression is an anonymous function that can contain expressions and statements, and can be used to create delegates or expression tree types. So anonymous function… delegates or expression tree types? I don’t get it??? Confused yet?

    Let's break this into a few definitions and jump right into the code.

    Anonymous function – an "inline" statement or expression that can be used wherever a delegate type is expected.
    Delegate – a type that references a method. Once a delegate is assigned a method, it behaves exactly like that method. The delegate method can be used like any other method, with parameters and a return value.
    Expression trees – represent code in a tree-like data structure, where each node is an expression, for example, a method call or a binary operation such as x < y.

    Don’t worry if this still sounds confusing; let's jump right into the code with a simple three-line program. We are going to use a Func delegate (all you need to remember is that this delegate returns a value). Lambda expressions are used most commonly with the Func and Action delegates, so you will see an example of both of these.

    Lambda expression, 3 lines:

        using System;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Func<int, int> myFunc = x => x * x;
                    Console.WriteLine(myFunc(6).ToString());
                    Console.ReadLine();
                }
            }
        }

    Is equivalent to the old way of doing it:

        using System;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine(MyFunc(6).ToString());
                    Console.ReadLine();
                }

                static int MyFunc(int x)
                {
                    return x * x;
                }
            }
        }

    In the example, there is a single parameter, x, and the expression is x * x. I’m going to stop here to make sure you are still with me. A lambda expression is an unnamed method written in place of a delegate instance. In other words, the compiler converts the lambda expression to either: a delegate instance, or an expression tree.

    All lambdas have the following form: (parameters) => expression or statement block

    Now look back at the ones we have created. It should start to sink in. Don’t get stuck on the => form; use it as an identifier of a lambda. A lambda expression can also be written in the following form:

        using System;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Func<int, int> myFunc = x =>
                    {
                        return x * x;
                    };

                    Console.WriteLine(myFunc(6).ToString());
                    Console.ReadLine();
                }
            }
        }

    This form may be easier to read but consumes more space. Let's try an Action delegate – this delegate does not return a value. Action delegate example:
        using System;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Action<string> myAction = (string x) => { Console.WriteLine(x); };
                    myAction("michael has made this so easy");

                    Console.ReadLine();
                }
            }
        }

    Lambdas can also capture outer variables, as in the example below. A lambda expression can reference the local variables and parameters of the method in which it’s defined. Outer variables referenced by a lambda expression are called captured variables.

    Capturing outer variables:

        using System;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string mike = "Michael";
                    Action<string> myAction = (string x) =>
                    {
                        Console.WriteLine("{0}{1}", mike, x);
                    };
                    myAction(" has made this so easy");

                    Console.ReadLine();
                }
            }
        }

    Lambdas can also be used with a strongly typed list to loop through a collection:

        using System;
        using System.Collections.Generic;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    List<string> list = new List<string>() { "1", "2", "3", "4" };
                    list.ForEach(s => Console.WriteLine(s));
                    Console.ReadLine();
                }
            }
        }

    Outputs:
    1
    2
    3
    4

    One more sketch follows below, showing where you'll most often meet lambdas in practice: LINQ. I think this will get you started with lambdas; as always, consult the MSDN documentation for more information. Still confused? Hopefully you are not.
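
    As a postscript (not in the original post): the place most C# programmers meet lambdas daily is LINQ, where Where and Select both take Func<> delegates, so lambdas slot right in:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

                // Keep the even numbers, then square them.
                var evenSquares = numbers.Where(n => n % 2 == 0)
                                         .Select(n => n * n);

                foreach (var n in evenSquares)
                    Console.WriteLine(n);   // 4, 16, 36
            }
        }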

    Read the article

  • SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010 – Next Pune

    - by pinaldave
    My recent SQL Server Performance Tuning Seminar in Colombo was oversubscribed, with a total of 35 attendees. You can read the details here: SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4-5, 2010. I recently completed another seminar in Hyderabad, which was again a blazing success. We had 25 attendees and a wonderful time together. There is one thing very different between usual classroom training and this seminar series: in this seminar series we go 100% demo-oriented and deep into real-world scenarios. We do not do the usual theory talk. The goal of this seminar is to give anybody who attends a jump start and a deep dive on the performance tuning subject. I share many different examples and scenarios from my years of performance tuning experience. The beginning of the second day is always interesting, as I take an attendee's server as a live example for the talk, and together we attempt to identify the bottleneck and see if we can resolve it. So far I have received excellent feedback on this unique session, where we pick a database of the attendees and address the issues. I plan to do the same again in the next sessions. The next seminar is in Pune, and I am very excited about it. Date and time: December 4-5, 2010, 10 AM to 6 PM. The Pride Hotel, 05, University Road, Shivaji Nagar, Pune – 411 005. Tel: 020 255 34567. Click here for the agenda of the seminar. Instead of writing more details, I will let the photos do the talking for the latest Hyderabad seminar: Hotel Amrutha Castle, King Arthur's Court, Pinal presenting the seminar, seminar attendees, group photo of Hyderabad seminar attendees, and seminar support staff Nupur and Shaivi. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article
