Search Results


  • Silverlight Cream for December 05, 2010 -- #1003

    - by Dave Campbell
    In this (Almost) All-Submittal Issue: John Papa (x2), Jesse Liberty, Tim Heuer, Dan Wahlin, Markus Egger, Phil Middlemiss, Coding4Fun, Michael Washington, Gill Cleeren, MichaelD!, Colin Eberhardt, Kunal Chowdhury, and Rabeeh Abla.

    Above the Fold:
    - Silverlight: "Two-Way Binding on TreeView.SelectedItem" - Phil Middlemiss
    - WP7: "Taking Screen Shots of Windows Phone 7 Panorama Apps" - Markus Egger
    - Training: "Beginners Guide to Visual Studio LightSwitch (Part - 4)" - Kunal Chowdhury

    Shoutouts:
    - Don't let the fire go out... check out the Firestarter Labs.
    - Bart Czernicki discusses the need for 64-bit Silverlight: Why a 64-bit runtime for Silverlight 5 Matters.
    - Laurent Duveau is interviewed by the SilverlightShow folks about his WP7 app: Laurent Duveau on Morse Code Flash Light WP7 Application.

    From SilverlightCream.com:

    - Silverlight 5 Features: John Papa has a post up highlighting his take on what's cool in the new feature set for Silverlight 5, including an external link to the keynote.
    - Silverlight Firestarter Keynote and Sessions: John Papa has also posted links to all the individual session videos... what a great resource!
    - Yet Another Podcast #17 - Scott Guthrie: Jesse Liberty went big with his latest Yet Another Podcast... he interviews Scott Guthrie about the Firestarter, Silverlight, WP7, and more.
    - Silverlight 5 Plans Revealed: With this post from Tim Heuer, I find myself adding a Silverlight 5 tag... so bring on the fun! Unless you've been overloaded like I have since last Thursday, you've probably seen this, but what the heck...
    - Silverlight Firestarter Wrap Up and WCF RIA Services Talk Sample Code: Phoenix's own Dan Wahlin gave a great WCF RIA Services presentation at the Firestarter last week; his material and lots of other good links are up on his blog, and I'd say that even if he didn't have a couple of shoutouts to me in it :) Thanks Dan!
    - Taking Screen Shots of Windows Phone 7 Panorama Apps: Markus Egger helps us all out with a post on how to get screenshots of your WP7 Panorama app... in case you haven't tried it, it's not as easy as it sounds!
    - Two-Way Binding on TreeView.SelectedItem: Phil Middlemiss is back with a post taking some of the mystery out of a TreeView control bound to a data context and dealing with the SelectedItem property... oh yeah, and throwing all that into MVVM! A great tutorial as usual, a cool behavior, and all the source.
    - Native Extensions for Microsoft Silverlight: Alan Cobb pointed me to a quick post on the Coding4Fun site about NESL (Native Extensions for Silverlight) from Microsoft, which gives access to some cool features of Windows 7 from Silverlight... I added an NESL tag in case other posts appear on this subject.
    - Silverlight Simple Drag And Drop / Or Browse View Model / MVVM File Upload Control: Michael Washington has another great tutorial up at CodeProject that expands on his prior drag/drop file-upload work, integrating an updated browse/upload into ViewModel/MVVM projects, all of which is Blendable.
    - The validation story in Silverlight (Part 1): In good news for all of us, Gill Cleeren has started a tutorial series at SilverlightShow on Silverlight validation. The first one is up, discussing the basics.
    - The Common Framework: MichaelD! has a WPF/Silverlight framework up with Facebook authentication, XAML-driven IoC, T4 synchronous WCF proxies, and WP7 on the roadmap... source is on CodePlex; check it out and give him some feedback.
    - Exploring Reactive Extensions (Rx) through Twitter and Bing Maps Mashups: If you've been waiting around to learn Rx, Colin Eberhardt has the post up for you (and me)... a great tutorial on Twitter and Bing Maps mashups, with all the code - for the Twitter immediate app and also the UKSnow one we showed last week... check out the demo page and grab the source!
    - Beginners Guide to Visual Studio LightSwitch (Part - 4): Kunal Chowdhury has the 4th part of his LightSwitch tutorial series up at SilverlightShow. In this one, he shows how to integrate multiple tables into a screen.
    - Take Your Silverlight Application Full Screen & intercept all windows keys !!: Rabeeh Abla sent me a link to a blog describing a COM-exposed library that intercepts all keys when Silverlight is full screen. There are a few I hit when I'm going through blogs that Ctrl-W (FF) just won't take down, and that annoys me... so this might be a solution if you have that problem. Worth a look anyway!

    Stay in the 'Light!


  • What is Inversion of Control and why do we need it?

    - by Jalpesh P. Vadgama
    Most programmers need the Inversion of Control pattern in today's complex real-world applications, so I have decided to write a blog post about it. This post explains what Inversion of Control is and why we need it, using a simple real-world example so it is easier to understand.

    The problem - why do we need Inversion of Control?

    Before giving a definition of Inversion of Control, let's look at a simple example that shows why we need it. Please have a look at the following code.

        public class Class1
        {
            private Class2 _class2;

            public Class1()
            {
                _class2 = new Class2();
            }
        }

        public class Class2
        {
            // Some implementation of Class2
        }

    I have two classes, "Class1" and "Class2". As you can see, I create an instance of Class2 inside the Class1 constructor, so Class1 is dependent on Class2. That is a big issue in real-world scenarios: if we change Class2, we might need to change Class1 as well. This kind of dependency between two classes is called tight coupling. Tight coupling causes lots of problems in real-world applications, because things tend to change in the future, and then we have to change all the tightly coupled classes that depend on each other. To avoid this kind of issue we need Inversion of Control.

    What is Inversion of Control?

    According to Wikipedia, the definition of Inversion of Control is:

    "In software engineering, Inversion of Control (IoC) is an object-oriented programming practice where the object coupling is bound at run time by an assembler object and is typically not known at compile time using static analysis."

    If you read that carefully, it says that object coupling should be bound at run time, not at compile time, where it is already known which object will be created, which method will be called, and which features will be used. We need to write our classes in such a way that they are not tightly coupled to each other. There are multiple ways to implement Inversion of Control; you can refer to the Wikipedia link for an overview of them. In future posts we are going to see the different ways of implementing Inversion of Control.
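    To make the idea concrete ahead of those future posts, here is a minimal sketch of one common form of Inversion of Control - constructor-based dependency injection. The IService name is my own illustration, not part of the original example:

        // A minimal sketch of constructor injection, one way to invert the
        // tight coupling shown above. The IService abstraction is hypothetical.
        public interface IService
        {
            void DoWork();
        }

        public class Class2 : IService
        {
            public void DoWork() { /* some implementation of Class2 */ }
        }

        public class Class1
        {
            private readonly IService _service;

            // Class1 no longer creates its dependency; an external assembler
            // (hand-written wiring or an IoC container) supplies it at run time.
            public Class1(IService service)
            {
                _service = service;
            }

            public void Run()
            {
                _service.DoWork();
            }
        }

        // The "assembler" wires the object graph together at the application boundary:
        // var app = new Class1(new Class2());

    Class1 now depends only on the IService abstraction, so swapping Class2 for another implementation (or a test fake) no longer forces a change to Class1.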


  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a particular O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster at X, others at Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to make and others aren't. That's why the O/R mapper frameworks on the market today differ in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks - there always is - but it's no longer a matter of 'the slowness of the application is caused by the O/R mapper'. Perhaps query generation can be optimized a bit here, row materialization a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the real reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. If I solely look at the Linq query and the code consuming the resultset of 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule is based on the old saying "penny wise, pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if another part takes a much larger share of that same total time T. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you optimize that part away completely. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

    One warning up front: hunting for performance will always involve compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software as possible, you will inevitably face the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get and you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page takes several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end, and ignore the rest. The remaining steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze it and track the path it follows through your application. What does the code do along the way? Verify whether it's correct or not, and analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet - we're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, analyzing means we collect data about what could be wrong, for each participating part of the complete application.

    Reviewing the code you wrote is a good way to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and for example how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code - and thus they reveal where the hotspots are located: the areas where a solution can be found. Equally importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying. If a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

    As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. You navigate to the particular part which is slow, start profiling in the profiler, perform the actions which are considered slow in your application, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the O/R mapper and the database contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same way as LLBLGen Prof. For RDBMS profilers, check whether the RDBMS vendor ships one: for SQL Server, the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; and there are also 3rd party tools.

    Which tool you use isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on, and how long they took. Here the O/R mapper profilers have an advantage, as they measure the time it took to execute the query from the application's perspective, so they also include the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to transport across the network than a small resultset, and a database profiler usually doesn't take this into account. Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate both support it), is tracing: most O/R mappers offer some form of tracing or logging which you can use to collect the SQL generated and executed, and often other behind-the-scenes activity as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

    Interpret

    After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take part, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; the parts which contribute a lot to the total time taken are important to look at.

    We now look at the .NET profiler results first, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself, or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time the data is fetched from the database. If just one query was executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data. Interpreting means that you follow the path from begin to end through the collected data and determine where along the path the most time is contributed. It also means that you check whether this was expected or is totally unexpected. My earlier example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected that this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you have to decide whether further action in this area is necessary, based on what the analysis results show: if the results were unexpected, and there is room for improvement in the area contributing the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that, in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed.

    Fix

    After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you actively correct the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean applying caching inside your application (trading memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you redo the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, both .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
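    One cheap way to keep the before/after comparison honest while you iterate on a fix is to bracket the whole task with a timer and log the result. A minimal sketch (my own illustration, not from this article; FetchOrdersForCustomer is a hypothetical task entry point):

        // Coarse, first-pass timing of one isolated task. This only tells you
        // *that* the total changed, not *why* - the .NET and O/R mapper / RDBMS
        // profilers described above remain the tools for the analyze step.
        using System;
        using System.Diagnostics;

        public static class TaskTimer
        {
            public static T Time<T>(string taskName, Func<T> task)
            {
                var sw = Stopwatch.StartNew();
                T result = task();
                sw.Stop();
                Console.WriteLine("{0}: {1} ms", taskName, sw.ElapsedMilliseconds);
                return result;
            }
        }

        // Usage:
        // var orders = TaskTimer.Time("Fetch orders", () => FetchOrdersForCustomer(customerId));

    This doesn't replace the profilers, but it is a quick regression check between full analysis runs.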
    Performance tuning is about dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to leave the strict OO path and execute queries directly against the RDBMS - these are the choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers, or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too, and the compromises aren't that hard to live with. What is important is that you document why you made a choice or compromise: which analysis data and which interpretation led you to the choice made. This is key to good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an (incomplete) list of common performance problems related to data access / O/R mappers / RDBMS code. It will help you fix the hotspots you found in the interpretation step.

    - SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid, with a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid (indirectly) fetch the Customer row for each row. This means that for the single list you get not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading via a prefetch path to fetch the customers together with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

    - Prefetch paths using many path nodes, sorting or limiting: an eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client with its parent. This is fast, but it can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

    - Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, where all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to make.

    - Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so the executed query returns only the rows in the requested page). When using paging in a web application, be sure to switch server-side paging on in the datasource control used; paging on the grid alone is not enough, as that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform the paging on the data-reader, so it won't fetch all rows even if the query suggests it does.

    - Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the time spent on data transport across the network. Use this optimization if you see a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

    - Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data is processed on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in memory to calculate a value.

    - Using .ToList() constructs inside Linq queries: you may have a .ToList() somewhere in a Linq query which makes the query run partially in memory (see the sketch after this list). Example:

        var q = from c in metaData.Customers.ToList()
                where c.Country == "Norway"
                select c;

      This will actually fetch all customers into memory and do the filtering in memory, as the Linq query is defined on an IEnumerable<T>, not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query will run.

    - Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, this kind of operation is likely not supported, because it makes it impossible to track whether an entity has actually been removed from the DB and can thus be removed from the cache.

    - Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to make, as it works around the idea of manipulating an object graph in memory and instead makes the code fully aware there's an RDBMS somewhere.
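    To make the .ToList() pitfall above concrete, here is a minimal sketch of the two shapes side by side; it assumes metaData.Customers is an IQueryable<Customer> exposed by the O/R mapper's Linq provider (the names are illustrative, not tied to a specific framework):

        // Where .ToList() sits decides where the filter runs. Assumes
        // metaData.Customers is an IQueryable<Customer> from the O/R
        // mapper's Linq provider; Customer is a mapped entity.

        // In-memory filtering: ToList() materializes ALL customers first,
        // then the where clause runs on an IEnumerable<T> in .NET.
        var inMemory = from c in metaData.Customers.ToList()
                       where c.Country == "Norway"
                       select c;

        // Database filtering: the query stays an IQueryable<T>, so the
        // provider translates the where clause into SQL (WHERE Country = ...)
        // and only the matching rows cross the network.
        var onServer = (from c in metaData.Customers
                        where c.Country == "Norway"
                        select c).ToList();

    An O/R mapper or RDBMS profiler makes the difference obvious: the first shape shows a SELECT without a WHERE clause returning every customer row.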
    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.


  • SharePoint Saturday Michigan 2010 Recap, Slides, and Photos

    - by Brian Jackett
    This past weekend I attended SharePoint Saturday Michigan (SPSMI) in Ann Arbor, Michigan. For those unfamiliar, SharePoint Saturday is a community driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint. This was the third SharePoint Saturday I've attended and the second I've spoken at. I believe it was announced today that about 210 people total attended the event. I was very happy with the turnout, especially the ratio of male to female attendees. Typically with computer related conferences the ratio leans towards more males attending, but both Peter Serzo (one of the conference organizers) and I commented to each other that at the end of the day it appeared to be close to 40% women in the crowd. So here's my recap of the weekend.

    Arrival

    Friday afternoon I drove up from Columbus, OH to Ann Arbor, MI and arrived around 4pm, attempting to avoid the rush hour traffic and construction backups. That turned out to be a good idea, because other speakers coming up Friday got stuck on a highway which literally closed down in both directions due to a bad accident. I was talking my friend Sean McDonough through the highway closing, and this was the first time I had seen a solid black traffic line on Google Maps. Most of us are familiar with green, yellow, and red, but this line was black, if that tells you how bad it got.

    Speaker "Dinner"

    Fast forward a few hours and it was time for the speaker "dinner." I put "dinner" in quotes because with this night alone SPSMI set a new bar for the nicest and most extravagant speaker appreciation events for SharePoint Saturday. By tapping into some very influential contacts, the conference organizers were able to provide a truck limo (yep, you heard right) with refreshments, access to an underground suite at the Palace of Auburn Hills, and courtside tickets to see the Detroit Pistons play that night. Being a Michigan native I have to say that I was absolutely floored by this experience and very thankful to our conference organizers Peter, Sebastian, and Jesse, along with Trillium Teamologies.

    Sessions

    The actual conference started Saturday morning at 9am with the keynote by Rob Collie, who is the Microsoft program manager for PowerPivot. The day continued and I attended the following sessions:

    - Mike Watson (@mikewat) - "SharePoint 2010 Fight Night: Devs vs. Admins"
    - Karl Swedeberg (@kswedberg) - "A Walk on the Client Side with jQuery"
    - [my session] Brian Jackett (@briantjackett) - "Real World Deployment of SharePoint 2007 Solutions"
    - Jeff Willinger (@jwillie) - "Social Computing and Collaboration Inside and Outside the 4 Walls"
    - Paul Schaeflein (@paulschaeflein) - "PowerShell for the SharePoint Developer"

    My Presentation

    I had a great time presenting my session on Deploying SharePoint 2007 Solutions, but it wasn't without its fair share of technical issues. As my session was right after lunch, I came into my room 10 mins early to set up my laptop, slides, and demos. As a quick background note, a few months ago I got an upgraded laptop from my company Sogeti and have been dual booting it between XP (factory installed) and Windows Server 2008 R2 w/ Hyper-V. As such I had prepared all of my demo virtual machines to run under Hyper-V. About 3 minutes before my session was scheduled to start, though, it became apparent that I did not have the correct display drivers to connect Windows Server 2008 R2 to the projector... As you can imagine this was a slight cause for concern, as I was potentially going to be unable to give my presentation. Luckily for me I usually prepare for such unforeseen issues and had my presentation and some spare VMs that would run on XP on my external hard drive. Knowing this, I rebooted my machine into XP and began my presentation without slides until about 5 mins into the session, when everything was up and running on XP. Despite this being the first time I gave this presentation, I have to say it was one of my favorites I've given so far. The audience was very engaged in the session and I received some great, positive feedback afterwards. Thanks to all who attended my session, I appreciate it very much.

    Link to Presentation Files

    For those of you who attended my session and would like my slides or demo PowerShell scripts, they can be found on my SkyDrive at the link below. Also, if you have a few minutes and wouldn't mind rating my session, I have this session posted on SpeakerRate. As speakers we always appreciate any and all feedback attendees offer, so thank you if you are able to provide any.

    SkyDrive folder with session files
    Rate my SharePoint 2007 Solutions session

    Picture Albums

    For everyone else, here are my pictures from the weekend. The first link is to my Facebook album, which will have tagging (I recommend this one). The second is to my Live album if you care for higher resolution images.

    http://www.facebook.com/album.php?aid=2154482&id=21905041&l=a3fb72ee8c
    View Full Album

    Conclusion

    A big thank you goes out to all of the organizers, speakers, sponsors, and attendees of SPSMI. As I've said so many times, without each and every one of you these events wouldn't be possible. I thoroughly enjoyed this trip back to my home state and presenting a new session. For those interested in my upcoming schedule, I will be giving two sessions on PowerShell at SharePoint Saturday Charlotte in April, helping plan Stir Trek: Iron Man Edition in May, and I'm submitting sessions to Day of .Net Ann Arbor in May as well. Beyond that I haven't planned out any travels. Thanks for reading my recap. Look forward to more technical posts now that I have a short break in conferences.

    -Frog Out


  • To ORM or Not to ORM. That is the question…

    - by Patrick Liekhus
    UPDATE: Thanks for the feedback and comments. I have adjusted my table below with your recommendations; I had missed a point or two.

    I wanted to do a series on creating an entire project using the EDMX XAF code generation and the SpecFlow BDD Easy Test tools discussed in my earlier posts, but I thought it would be appropriate to start with a simple comparison and my reasoning for choosing these tools. Let's start by defining the term ORM, or Object-Relational Mapping. According to Wikipedia it is defined as follows:

    Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language.

    Why should you care? Basically it allows you to map the business objects in your code to the persistence layer behind them. And better yet, why would you want to do this? Let me outline it in the following points:

    - Development speed. No more need to repetitively map query results to object members. Once the map is created, the code is rendered for you.
    - Persistence portability. The ORM knows how to map SQL-specific syntax for the persistence engine you choose. It does not matter if it is SQL Server, Oracle or another database of your choosing.
    - Standard/boilerplate code is simplified. The basic CRUD operations are consistent and can use database metadata for basic operations.

    So how does this help? Well, let's compare some of the ORM tools that I have used and/or researched. I have been interested in ORM for some time now. My ORM of choice for a long time was NHibernate, and I still believe it has a strong case in some business situations. However, you have to take business considerations into account, along with the law of diminishing returns. Because of these two factors, my recent activity and experience has been around DevExpress eXpress Persistent Objects (XPO). The primary reason for this is that they have the DevExpress eXpress Application Framework (XAF) that sits on top of XPO. With this added value, the data model can be created (either database first or code first) and the Web and Windows clients can be created from these maps. While out of the box they provide some simple list and detail screens, you can very easily extend and modify these to your liking. DevExpress has done a tremendous job of providing enough framework while also staying out of the way when you need to extend it. That sounds worse than it really is. What I mean is that if you choose to follow the DevExpress coding style and recommendations, the hooks and extension points provided allow you to do some pretty heavy lifting while not worrying about the basics.

    I have put together a list of the top features that I have used to compare the limited list of ORMs that I have exposure to. Again, the biggest selling point in my opinion is that XPO is just as solid as any of the other ORMs, but with the added layer of XAF they become unstoppable. Couple that with the EDMX modeling tools and code generation, and it becomes a no-brainer.

    | Designer Features                       | Entity Framework | NHibernate | Fluent w/ NHibernate | Telerik OpenAccess | DevExpress XPO | DevExpress XPO/XAF plus Liekhus Tools |
    |-----------------------------------------|------------------|------------|----------------------|--------------------|----------------|---------------------------------------|
    | Uses XML to map relationships           | -                | Yes        | -                    | -                  | -              | -                                     |
    | Visual class designer interface         | Yes              | -          | -                    | -                  | -              | Yes                                   |
    | Management integrated w/ Visual Studio  | Yes              | -          | -                    | Yes                | -              | Yes                                   |
    | Supports schema first approach          | Yes              | -          | -                    | Yes                | -              | Yes                                   |
    | Supports model first approach           | Yes              | -          | -                    | Yes                | Yes            | Yes                                   |
    | Supports code first approach            | Yes              | Yes        | Yes                  | Yes                | Yes            | Yes                                   |
    | Attribute driven coding style           | Yes              | -          | Yes                  | -                  | Yes            | Yes                                   |

    I have a very small team and limited resources with a lot of responsibilities. In order to keep up with our customers, we must rely on tools like these. We use the EDMX tool so that we can create a visual representation of the application with our customers. Second, we rely on the code generation so that we can focus on the business problems at hand and not on whether a field is mapped correctly. This keeps us from requiring as many junior-level developers on our team. I have also worked on multiple teams that believed in writing their own "framework". In my experience and opinion this is not the route to take unless you have a team dedicated to supporting just the framework. Each time I have worked on a custom framework, the framework eventually becomes old, outdated and full of "performance" enhancements specific to one or two requirements. With an ORM, there are a lot of people smarter than me working on the bigger issues of persistence and performance. Again, my recommendation is to use an available framework and get to work on your business domain problems. If your coding is not making money for you, why are you working on it? Do you really need to be writing query-to-object-member code again and again? Thanks
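    As a concrete illustration of the "attribute driven coding style" and "code first" rows above, here is a minimal sketch using Entity Framework's data annotations (one of the compared tools); the Order entity and the Orders table name are hypothetical, and XPO and the other tools express the same idea with their own attributes and base classes:

        // Minimal code-first, attribute-driven mapping sketch using Entity
        // Framework data annotations. Entity and table names are hypothetical.
        using System.ComponentModel.DataAnnotations;
        using System.ComponentModel.DataAnnotations.Schema;

        [Table("Orders")]
        public class Order
        {
            [Key]
            public int OrderId { get; set; }

            [Required, MaxLength(100)]
            public string CustomerName { get; set; }

            public decimal Total { get; set; }
        }

        // The ORM reads these attributes to build the map between the Order
        // object and the Orders table, so no hand-written
        // query-result-to-object-member code is needed.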


  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
    A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn't because I didn't believe in the value of TDD; it was more a matter of not completely understanding how to incorporate "test first" into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren't exactly designed for testability, so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC, I quickly found myself fresh out of excuses, and it became instantly clear that it was time to get my head around red-green-refactor once and for all, or I would regretfully miss out on one of the biggest selling points the new framework had to offer. I have previously written about how I learned ASP.NET MVC. It was primarily hands-on learning, but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD, and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed. But then I got lucky - Jonathan McCracken contacted me and asked if I'd review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump. As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC, as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application, and then dedicates the better part of his book to testing first - first by explaining TDD and then by coding a full-featured Getting Organized application inspired by David Allen's popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken's style - it's fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD. The book continues with the test-first theme, but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory, and building a RESTful web service. What I most appreciated about this section was McCracken's use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and the ASP.NET SQL Server Setup Wizard. McCracken's emphasis on real-world, pragmatic development is clearly demonstrated in every tool choice, straightforward code block and developer tip. Whether one is already familiar with the tools/tips or not, McCracken's thought process is easily understood and appreciated. The final section of the book walks the reader through security and deployment - everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.
Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight?  Well, no.  I don’t think any book can make that claim.  If that were possible, I think book list prices would skyrocket!  That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC.  Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is it’s a great ASP.NET MVC primer – if you’re new to ASP.NET MVC it’s just what you need to get started.  Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development?  Well, I used to think that learning TDD required a lot of practice and, if you’re lucky enough, the guidance of a mentor or coach.  I used to think that one couldn’t learn TDD from a book alone. Well, I’m still no pro, but I’m testing first now and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen.  If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.


  • SQLAuthority News – Training and Consultancy and Travel – Story of the Last 30 Days

    - by pinaldave
    Today’s blog post is not technical as usual. Here, I present a real story, and I also invite you all to share your thoughts or opinions on this post. I am a professional SQL Server trainer; I also do consultation in the areas of performance tuning and query optimization. In any month, I like a mix of both in my schedule. I prefer to do training for one week, and then commit the next week to some consultation work. Due to advancements in technology, most consultation work requires no client location visit or first-time visit to understand the project. Usually, I conduct high-end or 400-level training sessions, and these sessions are very intensive most of the time. After completing a 400-level training for 5 days straight, I always make sure to take out some time to cool down and relax. During this time, I prefer to work on optimization projects. Consultancy is great, as it keeps me updated on what is going on in the real world. As we all know, trainers who have real-world experience are always considered the best trainers. My learning is immense during my consultations with real clients while resolving real problems. I share the same with my students the very next week when I go for training sessions. For the same reason, every class is different from the previous ones. An experienced trainer will tell you that a class is best when it is driven by students the way the instructor wants! That is the best scenario, but you won’t get the best scenario all the time. I was on the road for nearly 25 of the last 30 days, involved in various SQL Server-related trainings. Here is what I have done in the last 30 days. I have gathered the following details from my expenditure reports, which are maintained by my wife. A few points relate to my personal expenses and a few to business. I maintain a separate list for each of these expenses, but here I have aggregated them.

    Last 30 days:

    Training - 23 days
    - 4 two-day training classes - 8 days of training
    - 3 five-day training classes - 15 days of training
    - 1 one-day training class - 2 days of training

    Flights - 18
    - 8 - Kingfisher
    - 6 - SpiceJet
    - 2 - JetLite
    - 2 - Jet Konnect

    Stay in different cities
    - Hyderabad - 16 days
    - Chennai - 6 days
    - Bangalore - 2 days
    - Ahmedabad - 6 days (hometown)

    Meals - 54 (averaging less than 2 per day)
    - Room service - 16 times
    - Training campuses - 20 times
    - Restaurants - 6 times
    - Home - 12 times

    Taxis/cabs - 64 times (averaging more than 2 per day)
    - Hotel cab - 34 times
    - Meru Cab - 8 times
    - Easy Cabs - 10 times
    - Auto rickshaw - 2 times

    Looking at the above statistics, I can see that I have eaten less than I should have, which is not good, and traveled by taxi more than I should have. The temperatures in the different cities were also very different, not to mention the humidity. I missed my family, especially my little girl (9 months). When I was at home, I used to have a proper healthy meal every single day; however, when I was traveling, food was something I had to compromise on. I have previously written about my travel experiences with different airlines; my opinion about them is still the same. Well, I have a question for all of you road warriors: how do you manage your health and enthusiasm in situations like the one I am going through? I have had an upset stomach a couple of times, as well as a sore throat. I drink lots of water and do my best to keep up. Any ideas?
    Reference: Pinal Dave (http://blog.SQLAuthority.com)


  • Browsing Your ADF Application Module Pooling Params with WLST

    - by Duncan Mills
    In ADF 11g you can of course use Enterprise Manager (EM) to browse and configure the settings used by ADF Business Components Application Modules, as shown here for one of my sample deployed applications. You can access this screen from the EM homepage by pulling down the Application Deployment menu and then choosing ADF > Configure ADF Business Components. Then select the profile that you are actually using (hint: look in the DataBindings.cpx file to work this out - probably the "Local" version unless you've explicitly changed it). From this screen you can change the pooling parameters and the world is good. But what if you don't have EM installed? In that case you can use the WebLogic scripting capabilities to view (and update) the MBean properties.

    Explanation

    The pooling parameters, and many others, are handled through managed beans (MBeans) that are created for the deployed application in the server. In the case of the ADF BC pooling parameters, this MBean combines the configuration deployed as part of the application with any overrides defined as -D environment commands on the JVM startup for the application server instance.

    Using WLST to Browse the Bean Values

    For our purposes here I'm doing this interactively, although you can also write a script or write Java to achieve the same thing.

    Step 0: Before You Start

    You will need the following:
    - Access to the console on the machine that is running the server
    - The WebLogic admin username and password (I'll use weblogic/password as my example here - yours will be different)
    - The name of the deployed application (in this example FMWdh_application1)
    - The package path to the bc4j.xcfg file (in this example oracle.demo.fmwdh.model.service.common.bc4j.xcfg). This is based on the default path for your model project, so it should be fairly easy to work out.
    - The BC configuration your AM is actually running with (look in the DataBindings.cpx for that; in this example DealHelpServiceDeployed is the profile being used)

    Step 1: Start the WLST console

    To start at the beginning, you need to run the WLST command, but that needs a little setup:

    1. Change to the wlserver_10.3/server/bin directory, e.g. under your Fusion Middleware home:
       [oracle@mymachine] cd /home/oracle/FMW_R1/wlserver_10.3/server/bin
    2. Set your environment using the setWLSEnv script, e.g. on Oracle Enterprise Linux:
       [oracle@mymachine bin] source setWLSEnv.sh
    3. Start the WLST interactive console:
       [oracle@mymachine bin] java weblogic.WLST
       Initializing WebLogic Scripting Tool (WLST) ...
       Welcome to WebLogic Server Administration Scripting Shell
       Type help() for help on available commands
       wls:/offline>

    Step 2: Enter the WLST commands

    1. Connect to the server:
       wls:> connect('weblogic','password')
    2. Change to the Custom root; this is where the AM pooling MBeans are registered:
       wls:> custom()
    3. Change to the bc4j MBean directory:
       wls:> cd ('oracle.bc4j.mbean.config')
    4. Work out the correct directory for the AM configuration you need. This is the difficult bit, not because it's hard to do, but because the names are long. The structure here is such that every child MBean is displayed at the same level as the parent, so for each deployed application there will be many directories shown. In fact, do an ls() command here and you'll see what I mean. Each application will have one MBean for the app as a whole, and then for each deployed configuration in the .xcfg file you'll see one for the config entry itself, and then one each for Security, DB Connection and AM Pooling. So if you deploy an app with just one configuration you'll see 5 directories; if it has two configurations in the .xcfg you'll see 9, and so on. The directory you are looking for will contain the bits of information you gathered in Step 0, specifically the application name, the configuration you are using and the xcfg name:
       - First of all, narrow your list to just those directories returned from the ls() command that begin with oracle.bc4j.mbean.config:name=AMPool. These identify the AM pooling MBeans for all the deployed applications.
       - Now look for the correct application name, e.g. Application=FMWdh_application1
       - The config setting in that sub-list should already be correct and match what you expect, e.g. oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg
       - Finally look for the correct value for the AppModuleConfigType, e.g. oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed
    5. Now that you have identified the correct directory name, change to it (keep the name on one line of course - I've had to split it across lines here for clarity):
       wls:> cd ('oracle.bc4j.mbean.config:name=AMPool,
           type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,
           oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,
           Application=FMWdh_application1,
           oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')
    6. Now you can actually view the parameter values with a simple ls() command:
       wls:> ls()

    And here's the output, in which you can view the realtime values of the various pool settings:

       -rw- AmpoolConnectionstrategyclass oracle.jbo.common.ampool.DefaultConnectionStrategy
       -rw- AmpoolDoampooling true
       -rw- AmpoolDynamicjdbccredentials false
       -rw- AmpoolInitpoolsize 2
       -rw- AmpoolIsuseexclusive true
       -rw- AmpoolMaxavailablesize 40
       -rw- AmpoolMaxinactiveage 600000
       -rw- AmpoolMaxpoolsize 4096
       -rw- AmpoolMinavailablesize 2
       -rw- AmpoolMonitorsleepinterval 600000
       -rw- AmpoolResetnontransactionalstate true
       -rw- AmpoolSessioncookiefactoryclass oracle.jbo.common.ampool.DefaultSessionCookieFactory
       -rw- AmpoolTimetolive 3600000
       -rw- AmpoolWritecookietoclient false
       -r-- ConfigMBean true
       -rw- ConnectionPoolManager oracle.jbo.server.ConnectionPoolManagerImpl
       -rw- Doconnectionpooling false
       -rw- Dofailover false
       -rw- Initpoolsize 0
       -rw- Maxpoolcookieage -1
       -rw- Maxpoolsize 4096
       -rw- Poolmaxavailablesize 25
       -rw- Poolmaxinactiveage 600000
       -rw- Poolminavailablesize 5
       -rw- Poolmonitorsleepinterval 600000
       -rw- Poolrequesttimeout 30000
       -rw- Pooltimetolive -1
       -r-- ReadOnly false
       -rw- Recyclethreshold 10
       -r-- RestartNeeded false
       -r-- SystemMBean false
       -r-- eventProvider true
       -r-- eventTypes java.lang.String[jmx.attribute.change]
       -r-- objectName oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed
       -rw- poolClassName oracle.jbo.common.ampool.ApplicationPoolImpl

    Thanks to Brian Fry on the JDeveloper PM team, who did most of the work to put this sequence of steps together with me badgering him over his shoulder.

    Read the article

  • Ask the Readers: Do You Prefer Computers, Game Consoles, or Other Devices for Your Gaming Needs?

    - by Asian Angel
    Nearly everyone who has access to a computer will play games on it at some point, but many people also use a separate game platform as well. What we would like to know this week is whether you prefer using a computer, game consoles, or other devices for your gaming needs. Photo of Faith and Kate Connors from Mirror’s Edge by Tamahikari Tammas. Video games are a perfect way to relax and have fun at home (or at work if you can sneak in some game time!). The increasing variety of devices available with each passing year is making it easier to find a gaming platform to suit your needs or “darkest gaming desires”. For many people their computers are the perfect platform… they can play Flash-based games in their browsers, use the default set of games that come with their system, and install any extras that catch their eye. The added benefit is that when game time is over they can drop right into their browsing, e-mail, personal projects, or work without having to switch hardware. The convenience of the “all-in-one” platform is certainly appealing! Perhaps you prefer to use your computer for other activities outside of gaming and own one or more separate game consoles. You might have chosen an Xbox, Playstation, or Nintendo, for example. Maybe a hand-held is preferable for its size and portability. Then there are mobile phones and the iPad… With so many options it may feel hard to choose the right platform(s) without a good bit of research regarding display, availability of games for a particular platform, how long before the platform starts to become “obsolete”, and so on. What we would like to know this week is which gaming platform you prefer. Is there only one that you choose to use, or do you use multiple platforms for gaming? Is there a particular reason, such as convenience, for your choices? You may even be keeping an older platform around just for a certain game (or games) made for it. Are there any recommendations or advice that you would like to share with your fellow readers? Let us know in the comments!

    Read the article

  • Claims-based Identity Terminology

    - by kaleidoscope
    There are several terms commonly used to describe claims-based identity, and it is important to clearly define them.

    · Identity - In terms of Access Control, the term identity refers to a set of claims made by a trusted issuer about the user.

    · Claim - You can think of a claim as a bit of identity information, such as name, email address, age, and so on. The more claims your service receives, the more you’ll know about the user who is making the request.

    · Security Token - The user delivers a set of claims to your service piggybacked along with his or her request. In a REST Web service, these claims are carried in the Authorization header of the HTTP(S) request. Regardless of how they arrive, claims must somehow be serialized, and this is managed by security tokens. A security token is a serialized set of claims that is signed by the issuing authority.

    · Issuing Authority & Identity Provider - An issuing authority has two main features. The first and most obvious is that it issues security tokens. The second is the logic that determines which claims to issue, based on the user’s identity, the resource to which the request applies, and possibly other contextual data such as time of day. This type of logic is often referred to as policy[1]. There are many issuing authorities, including Windows Live ID, ADFS, PingFederate from Ping Identity (a product that exposes user identities from the Java world), Facebook Connect, and more. Their job is to validate some credential from the user and issue a token with an identifier for the user's account and possibly other identity attributes. These types of authorities are called identity providers (sometimes shortened to IdP). It’s ultimately their responsibility to answer the question, “who are you?” and ensure that the user knows his or her password, is in possession of a smart card, knows the PIN code, has a matching retinal scan, and so on.

    · Security Token Service (STS) - A security token service (STS) is a technical term for the Web interface in an issuing authority that allows clients to request and receive a security token according to interoperable protocols that are discussed in the following section. This term comes from the WS-Trust standard and is often used in the literature to refer to an issuing authority. From a developer's point of view, the STS indicates the URL to use to request a token from an issuer. For more details please refer to the link http://www.microsoft.com/windowsazure/developers/dotnetservices/

    Geeta, G
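    To make the "serialized set of claims signed by the issuing authority" idea concrete, here is a minimal illustrative sketch in Python. It is deliberately not a real token format (no SAML, SWT or WS-Trust wire format); the key, claim names and header scheme are invented for illustration. It shows only the essence: the issuer serializes claims and signs them, and the service verifies the signature before trusting the claims.

    # Illustrative only: a toy "security token" as signed, serialized claims.
    import base64, hashlib, hmac, json

    ISSUER_KEY = b'secret-shared-with-the-issuer'   # hypothetical signing key

    def issue_token(claims):
        # The issuing authority serializes the claims and signs them
        body = json.dumps(claims, sort_keys=True).encode('utf-8')
        sig = hmac.new(ISSUER_KEY, body, hashlib.sha256).digest()
        return base64.urlsafe_b64encode(body) + b'.' + base64.urlsafe_b64encode(sig)

    def verify_token(token):
        # The service checks the signature before trusting any claim
        body_b64, sig_b64 = token.split(b'.')
        body = base64.urlsafe_b64decode(body_b64)
        expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
            raise ValueError('token was not signed by a trusted issuer')
        return json.loads(body)

    claims = {'name': 'Alice', 'email': 'alice@example.com', 'age': 30}
    token = issue_token(claims)
    print('Authorization: ' + token.decode('ascii'))   # piggybacked on the request
    print(verify_token(token))                         # the service recovers the claims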

    Read the article

  • Silverlight Cream for March 05, 2010 -- #807

    - by Dave Campbell
    In this Issue: Phil Middlemiss(-2-, -3-), Pencho Popadiyn, John Papa(-2-, -3-), Jim Lynn, and SilverLaw(-2-).

    Shoutouts: Walt Ritscher has added more shaders and features: Shazzam 1.2 – Feature Overview. I hope you're getting as excited as I am about MIX10. You should be reading MIX10 News and checking out the sessions and the directory of attendees.

    From SilverlightCream.com:

    Watermarked TextBox Part I - Phil Middlemiss's Orb Radio Button hit number two in the Silverlight Cream Skim page in 2 days... now Phil has a very nice 3-part tutorial up on creating a Watermarked TextBox with lots of cool features. This is part 1 and starts the series off.

    Watermarked TextBox Part II - In Phil Middlemiss's Part II of the Watermarked TextBox tutorial, he's concentrating on the visual elements of the control begun in the last episode... you're paying attention, right? ... this is a cool control :)

    Watermarked Textbox Part III - In the final part of Phil Middlemiss's tutorial series, he's wiring all the pieces together in the UserControl. Go grab the control, then leave Phil some love on his blog!

    Using Reactive Extensions in Silverlight - Pencho Popadiyn has a great tutorial up on SilverlightShow about Rx... if you want to get your arms around this, this tutorial is a good place to begin.

    Silverlight TV 10: Silverlight Hyper Video Platform with Jesse Liberty - Running a little behind here, but check out John Papa and THE Silverlight Geek™ Jesse Liberty discussing Jesse's Hyper Video Platform on Silverlight TV.

    Silverlight TV 11: Dynamically Loading XAPs with MEF - In Silverlight TV episode 11, John Papa talks to Glenn Block about MEF and partitioning and dynamically loading XAPs... good stuff.

    Silverlight TV 12: The Best Blend 3 Video Ever! - And the latest Silverlight TV episode, number 12, has John Papa and Adam Kinney giving "The Best Blend 3 Video ever (or at least on Silverlight TV)"... check out the list of topics and you'll want to watch :)

    InvalidOperation_EnumFailedVersion when binding data to a Silverlight Chart - Read Jim Lynn's post about a problem found while deploying his app, the very confusing (long) error, and the workaround.

    Leather Stamped Style Series For Silverlight Controls - Part 1 - SilverLaw continued after his 'leather stamped' textbox and has added TextBlock, Button and some template bindings... check it out, then get it at the Expression Gallery.

    Circular Accordion Style Silverlight 3 - SilverLaw also built a Circular Accordion style... interesting idea, and once again it's in the Expression Gallery. He's also looking for feedback.

    Read the article

  • AS11 Oracle B2B Sync Support - Series 1

    - by sinkarbabu.kirubanithi
    Synchronous message support has been enabled in Oracle B2B 11G. This helps customers send a business message and receive the corresponding business response synchronously. We would like to keep this blog entry as a three-part series: the first one covers Oracle B2B configuration details, followed by 'how it can be consumed and utilized in an enterprise' using a composites-backed model. The last one will talk about more sophisticated seeded support built on the Oracle B2B platform (note: the last one is still in the description phase and an ETA hasn't been finalized yet).

    Details: In an effort to enable synchronous processing in Oracle B2B, we provided a platform using the existing 'callout' mechanism. In this case, we expect the 'callout' attached to the agreement to deliver the incoming business message (inbound) to the back-end application, get the corresponding business response from the back-end, and deliver it to Oracle B2B as its output. The output of the 'callout' is processed as the outbound message, and the same is attached as a response for the inbound message.

    Requirements to enable Sync Support:
    Outbound side:
    - Outbound Agreement - to send the business message request
    - Inbound Agreement - to receive the business message response
    Inbound side:
    - Inbound Agreement - to receive the business message request
    - Outbound Agreement - to send the business message response
    - Agreement Level Callout - to deliver the inbound request to the back-end and get the corresponding business response

    This feature is supported only for HTTP-based transport to exchange messages with Trading Partners. One may initiate the outbound message (enqueue) using any of the available transports in Oracle B2B.

    Configuration:
    Outbound side: Please add "syncresponse=true" as an "Additional Transport Header" parameter in the remote Trading Partner's HTTP delivery channel configuration. This enables Oracle B2B to process the HTTP response as an inbound message and deliver it to the back-end application. All other configuration related to Agreement and Document setup remains the same.
    Inbound side: There is no change in the Agreement and Document setup. To enable "Sync Support", you need to build a 'callout' that takes responsibility for delivering the inbound message to the back-end, getting the corresponding business response from the back-end, and attaching it as its output. Oracle B2B treats the output of the 'callout' as the outbound message and delivers it to the Trading Partner as the synchronous HTTP response. Requests that need to be processed synchronously should be received by the "syncreceiver" (http://:/b2b/syncreceiver) endpoint in Oracle B2B.

    Exception Handling: Existing Oracle B2B exception handling applies to this use case as well. Here's the sample callout, SampleSyncCallout.java

    We will get you the second part, which talks about a 'SOA composites' backed model to design the "Sync Support" use case from back-end to Trading Partners; stay tuned.
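    As a client-side illustration of the synchronous exchange, here is a hypothetical Python sketch that posts a business message to the syncreceiver endpoint and reads the business response from the same HTTP exchange. The host, port, payload file and content type are placeholders, and a real deployment would typically also require whatever transport headers your Trading Partner and document configuration expect.

    # Hypothetical client sketch: host, port, file and headers are placeholders.
    import urllib.request

    B2B_SYNC_URL = 'http://b2bhost:8001/b2b/syncreceiver'   # assumed host and port

    payload = open('purchase_order.xml', 'rb').read()        # the business message
    req = urllib.request.Request(B2B_SYNC_URL, data=payload,
                                 headers={'Content-Type': 'text/xml'})

    # Because the agreement-level callout bridges to the back-end synchronously,
    # the business response comes back in this same HTTP response.
    with urllib.request.urlopen(req, timeout=60) as resp:
        business_response = resp.read()
        print(resp.status, business_response[:200])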

    Read the article

  • Silverlight Cream for March 20, 2010 -- #815

    - by Dave Campbell
    In this Issue: Andy Beaulieu(-2-, -3-), Alex Golesh, Damian Schenkelman, Adam Kinney(-2-), Jeremy Likness, Laurent Bugnion, and John Papa.

    Shoutouts: Adam Kinney has a good summary up of where to go for all the tools and toys: Install checklist for Silverlight 4 RC, Blend 4 Beta and Windows Phone Developer tools from MIX10... tons of links. Laurent Bugnion had a few announcements at MIX10: MVVM Light V3 released at #MIX10, and he followed that with What’s new in MVVM Light V3... now for Windows Phone! Laurent Bugnion also announced Sample code for my #mix10 talk online.

    From SilverlightCream.com:

    Physics Games in Silverlight on Windows Phone 7 - Andy Beaulieu has the Physics Helper working for WP7 already... read his post, check out all the links, and get going on something fun... was great seeing you at MIX, too, Andy!

    Silverlight 4: GPU Accelerated PlaneProjection - Andy Beaulieu has a comparison up of PlaneProjection with and without the new GPU acceleration... be sure to read his notes section.

    Silverlight 4 PathListBox for Motion Path Animation - Have you heard of the PathListBox? Well, showing is better than telling, so check out Andy Beaulieu's post on it.

    Silverlight at Windows Phone 7 - Alex Golesh has a quick overview on developing a Windows Phone 7 app in Silverlight using the new toys and executing it in the emulator.

    Prism v2.1: Creating a Region Adapter for the Accordion control - Damian Schenkelman shows how to use the Accordion control from the toolkit as a region in a Prism app.

    Expression Blend 4 Beta Feature Overview available for download - Adam Kinney announced the presence of an Expression Blend whitepaper as well... you should go grab that too.

    .toolbox – Free online Silverlight and Expression Blend training - Want to improve your Silverlight chops or gain some Expression Blend chops? Check out the .toolbox post that Adam Kinney posted.

    Introducing the Visual State Aggregator - Jeremy Likness describes the basic panel A/panel B problem, describes ways he and other folks have flipped between them, then describes his Visual State Aggregator... and it's downloadable for you to give it a dance!

    Multithreading in Windows Phone 7 emulator: A bug - Laurent Bugnion found a bug with multi-threading on the Windows Phone emulator. He confirmed this with the team and has a workaround you'll be needing... thanks, Laurent.

    Silverlight Overview - Technical Whitepaper - John Papa has reiterated the existence of this Silverlight 4 whitepaper... it was updated this week, and we all should be aware of it.

    Read the article

  • Using Pandora in Boxee

    - by Mysticgeek
    Boxee is a very cool multimedia app that lets you access and stream your digital media in many different ways. There are also a lot of extra apps included with it, and today we take a look at the Pandora application in Boxee. Pandora has been a favorite free music streaming service for some time now. Though there are new services like Grooveshark and Spotify competing with it, Pandora is still a reliable choice. It’s now included in Boxee, and here we take a look at using it. Create a Pandora Account If you don’t already have a Pandora account, you can easily create one at their website (link below). Pandora in Boxee To start using Pandora from Boxee, launch Boxee and from the main menu select Apps. Now from the My Apps section select Pandora. When the Pandora app menu comes up, select Start. Now you need to log into your Pandora account. After signing in you can start listening to your stations and viewing artist info and cover art, all while enjoying some cool visuals in the background. From the controls at the top you can control playback, skip songs, control volume, get information on why a song was picked, and give a song a thumbs up or down. Of course you can also pull up your stations, switch between them, and add more. The same features you’ve come to expect from Pandora are available. One thing we noticed missing is the ability to click on the band or artist to get additional information about them, which you can do on the Pandora site and desktop app. But that isn’t a deal breaker by any means, and we’re hoping the feature will be added in the future. Then while you’re checking out other apps, shows, and settings within Boxee, the cool visuals continue and the songs from your stations keep playing. Conclusion Pandora is a great streaming music service and a welcome addition to Boxee. If you’re a fan of Pandora, now you can listen to it on your home theater system. If you’re new to Boxee, make sure to check out our article on getting started with Boxee. Create a Pandora Account Download Boxee

    Read the article

  • Good fix vs Quick fix [duplicate]

    - by Andrea Girardi
    This question already has an answer here: Does craftsmanship pay off? [duplicate] 16 answers Good design: How much hackyness is acceptable? [duplicate] 9 answers How do you balance between “do it right” and “do it ASAP” in your daily work? 14 answers

    Let's start from this principle: quality is a feature that you can't add to a project in the middle of the development process. This is the scenario: two weeks to go live with my project, and one of the developers added a specific method, used by only one web application, to our framework (our framework is a bunch of Java classes used to extract content from MongoDB, Alfresco and MySQL, and it's used by web applications). I'm the team leader and I told him to generalize the method to keep the framework reusable, but he said "no, I prefer not to do that because there are a lot of bugs that need to be fixed". The manager agrees with him, and of course I don't. Is it better to make the extra effort to keep a framework free from any specific implementation (probably used by only one web application), or to just add the methods because they work? So, my question is: is it correct to write code that merely works, or is it better to write code that works and doesn't suck (i.e. no embedded values, one-off methods, extra classes, extra database columns, and so on)? How is it possible to justify the extra time to management (to be honest, this kind of fix requires only 10 extra minutes to write good generic code)? How is it possible to argue to young developers and PMs that it's the right way to write code? In general: good fix or quick fix? Ah, and 10 minutes after I got the email from the PM, he asked me why, on a URL of application 2, the name of application 1 appeared during login. I like to quote Jeff Atwood: "Don't leave "broken windows" (bad designs, wrong decisions, or poor code) unrepaired. Fix each one as soon as it is discovered." Excerpt From: Hyperink. "How-To-Stop-Sucking-And-Be-Awesome-Instead." iBooks.

    Read the article

  • Automapper: Handling NULL members

    - by PSteele
    A question about null members came up on the Automapper mailing list. While the problem wasn’t with Automapper, investigating the issue led to an interesting feature in Automapper. Normally, Automapper ignores null members. After all, what is there really to do? Imagine these source classes:

    // The source and destination types, plus a child Address type
    public class Source
    {
        public int Data { get; set; }
        public Address Address { get; set; }
    }

    public class Destination
    {
        public string Data { get; set; }
        public Address Address { get; set; }
    }

    public class Address
    {
        public string AddressType { get; set; }
        public string Location { get; set; }
    }

    And imagine a simple mapping example with these classes:

    Mapper.CreateMap<Source, Destination>();

    var source = new Source
    {
        Data = 22,
        Address = new Address
        {
            AddressType = "Home",
            Location = "Michigan",
        },
    };

    var dest = Mapper.Map<Source, Destination>(source);

    The variable ‘dest’ would have a complete mapping of the Data member and the Address member. But what if the source had no address?

    Mapper.CreateMap<Source, Destination>();

    var source = new Source
    {
        Data = 22,
    };

    var dest = Mapper.Map<Source, Destination>(source);

    In that case, Automapper would just leave the Destination.Address member null as well. But what if we always wanted an Address defined – even if it’s just got some default data? Use the “NullSubstitute” option:

    // Substitute a default Address whenever the source member is null
    Mapper.CreateMap<Source, Destination>()
        .ForMember(d => d.Address, o => o.NullSubstitute(new Address
        {
            AddressType = "Unknown",
            Location = "Unknown",
        }));

    var source = new Source
    {
        Data = 22,
    };

    var dest = Mapper.Map<Source, Destination>(source);

    Now, the ‘dest’ variable will have an Address defined with a type and location of “Unknown”. Very handy!

    Read the article

  • Disable IE 8 Thumbnail Previews on Windows 7 Taskbar

    - by Asian Angel
    The Aero thumbnail previews are a great new feature, but if you are not a fan of the flashy eye-candy, you can get rid of them with a simple tweak. Here is how to do it. Before Here we are…Internet Explorer 8 with a lot of How-To Geek Network goodness ready to go. The Taskbar Thumbnail Previews look very nice, but perhaps they take up too much room for those of you who like to keep things simple. The Taskbar Icon has the classic “fanned edge” look just like any other software with Taskbar Thumbnail Previews active. Disabling the Thumbnail Previews If you want to deactivate the Taskbar Thumbnail Previews for Internet Explorer, it is quite easy and will only take you a few moments to complete. Open IE and go to Tools \ Internet Options. When the Internet Options Window opens you will already be on the General Tab. Under the Tabs Section, click on the Settings button. The Tabbed Browsing Settings window opens. Uncheck Show previews for individual tabs in the taskbar and click OK. When you are returned to the Internet Options Window, click OK once again to totally exit out. Note: A browser restart will be required for the changes to take effect. After you have restarted Internet Explorer, you will see the simple default Taskbar Thumbnail Preview and standard icon look. Conclusion If you have been looking to disable the Taskbar Thumbnail Previews for Internet Explorer, then you are only a few clicks away from satisfaction. If you want to change it back, it is as simple as re-enabling the Show previews for individual tabs in the taskbar setting.

    Read the article

  • Quickly Preview Songs in Windows Media Player 12 in Windows 7

    - by DigitalGeekery
    Do you ever wish you could quickly preview a song without having to play it? Today we look at a quick and easy way to do that in Windows Media Player 12. Open Windows Media Player in Library Mode and select your Music library. Hover your cursor over the Title of the song and a Preview pop-up window will appear after a few seconds. Click on the Preview in the pop-up window and the song will begin to play. As the preview begins to play, you will see the Skip link and a song timer. Click on Skip to jump ahead 15 seconds in the song. When you are finished previewing the song, simply move your mouse away from the preview window to stop playback. Automatically Preview Songs You can adjust settings in Windows Media Player to automatically preview songs when you hover your cursor over the title. Select Tools from the menu and click Options. On the Options window, select the Library tab and click on Automatically preview songs on title hover. Click OK. Now when you simply hover your cursor over the song title the preview window will appear and playback will begin automatically. This feature works just as well in Details view as it does in Expanded Tile view. Would you like to stream your music to other computers on your network? Check out our article on how to stream media to other Windows 7 computers.

    Read the article

  • Whitepaper list for the application framework

    - by Rick Finley
    We're reposting the list of technical whitepapers for the Oracle ETPM framework (called OUAF, Oracle Utilities Application Framework). These are available from My Oracle Support at the Doc Ids mentioned below. Some have been updated in the last few months to reflect new advice and new features. This is reposted from the OUAF blog: http://blogs.oracle.com/theshortenspot/entry/whitepaper_list_as_at_november

    559880.1 - ConfigLab Design Guidelines: This whitepaper outlines how to design and implement a data management solution using the ConfigLab facility. This whitepaper currently only applies to the following products: Oracle Utilities Customer Care And Billing, Oracle Enterprise Taxation Management, Oracle Enterprise Taxation and Policy Management.

    560367.1 - Technical Best Practices for Oracle Utilities Application Framework Based Products: Whitepaper summarizing common technical best practices used by partners, implementation teams and customers.

    560382.1 - Performance Troubleshooting Guideline Series: A set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are as follows:
    - Concepts - General concepts and performance troubleshooting processes
    - Client Troubleshooting - General troubleshooting of the browser client with common issues and resolutions
    - Network Troubleshooting - General troubleshooting of the network with common issues and resolutions
    - Web Application Server Troubleshooting - General troubleshooting of the Web Application Server with common issues and resolutions
    - Server Troubleshooting - General troubleshooting of the operating system with common issues and resolutions
    - Database Troubleshooting - General troubleshooting of the database with common issues and resolutions
    - Batch Troubleshooting - General troubleshooting of the background processing component of the product with common issues and resolutions

    560401.1 - Software Configuration Management Series: A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. The individual whitepapers are as follows:
    - Concepts - General concepts and introduction
    - Environment Management - Principles and techniques for creating and managing environments
    - Version Management - Integration of version control and version management of configuration items
    - Release Management - Packaging configuration items into a release
    - Distribution - Distribution and installation of releases across environments
    - Change Management - Generic change management processes for product implementations
    - Status Accounting - Status reporting techniques using product facilities
    - Defect Management - Generic defect management processes for product implementations
    - Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation
    - Implementing Service Packs - Discussion on service packs and how to use them in an implementation
    - Implementing Upgrades - Discussion on the upgrade process and common techniques for minimizing the impact of upgrades

    773473.1 - Oracle Utilities Application Framework Security Overview: A whitepaper summarizing the security facilities in the framework. Now includes references to other Oracle security products supported.

    774783.1 - LDAP Integration for Oracle Utilities Application Framework based products (Updated!): A generic whitepaper summarizing how to integrate an external LDAP based security repository with the framework.

    789060.1 - Oracle Utilities Application Framework Integration Overview: A whitepaper summarizing all the various common integration techniques used with the product (with case studies).

    799912.1 - Single Sign On Integration for Oracle Utilities Application Framework based products: A whitepaper outlining a generic process for integrating an SSO product with the framework.

    807068.1 - Oracle Utilities Application Framework Architecture Guidelines: This whitepaper outlines the different variations of architecture that can be considered. Each variation includes advice on configuration and other considerations.

    836362.1 - Batch Best Practices for Oracle Utilities Application Framework based products: This whitepaper outlines the common and best practices implemented by sites all over the world.

    856854.1 - Technical Best Practices V1 Addendum: Addendum to Technical Best Practices for Oracle Utilities Customer Care And Billing V1.x only.

    942074.1 - XAI Best Practices: This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework.

    970785.1 - Oracle Identity Manager Integration Overview: This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework based products and Oracle Identity Manager used to provision user and user group security information. FW4.x customers should use whitepaper 1375600.1 instead.

    1068958.1 - Production Environment Configuration Guidelines: A whitepaper outlining common production-level settings for the products based upon benchmarks and customer feedback.

    1177265.1 - What's New In Oracle Utilities Application Framework V4?: Whitepaper outlining the major changes to the framework since Oracle Utilities Application Framework V2.2.

    1290700.1 - Database Vault Integration: Whitepaper outlining the Database Vault integration solution provided with Oracle Utilities Application Framework V4.1.0 and above.

    1299732.1 - BI Publisher Guidelines for Oracle Utilities Application Framework: Whitepaper outlining the interface between BI Publisher and the Oracle Utilities Application Framework.

    1308161.1 - Oracle SOA Suite Integration with Oracle Utilities Application Framework based products: This whitepaper outlines common design patterns and guidelines for using Oracle SOA Suite with Oracle Utilities Application Framework based products.

    1308165.1 - MPL Best Practices Oracle Utilities Application Framework: A guidelines whitepaper for products shipping with the Multi-Purpose Listener. This whitepaper currently only applies to the following products: Oracle Utilities Customer Care And Billing, Oracle Enterprise Taxation Management, Oracle Enterprise Taxation and Policy Management.

    1308181.1 - Oracle WebLogic JMS Integration with the Oracle Utilities Application Framework: This whitepaper covers the native integration between Oracle WebLogic JMS and the Oracle Utilities Application Framework using the new Message Driven Bean functionality and real-time JMS adapters.

    1334558.1 - Oracle WebLogic Clustering for Oracle Utilities Application Framework (New!): This whitepaper covers the process for implementing clustering using Oracle WebLogic for Oracle Utilities Application Framework based products.

    1359369.1 - IBM WebSphere Clustering for Oracle Utilities Application Framework (New!): This whitepaper covers the process for implementing clustering using IBM WebSphere for Oracle Utilities Application Framework based products.

    1375600.1 - Oracle Identity Management Suite Integration with the Oracle Utilities Application Framework (New!): This whitepaper covers the integration between Oracle Utilities Application Framework and Oracle Identity Management Suite components such as Oracle Identity Manager, Oracle Access Manager, Oracle Adaptive Access Manager, Oracle Internet Directory and Oracle Virtual Directory.

    1375615.1 - Advanced Security for the Oracle Utilities Application Framework (New!): This whitepaper covers common security requirements and how to meet those requirements using Oracle Utilities Application Framework native security facilities, security provided with the J2EE Web Application, and/or facilities available in Oracle Identity Management Suite.

    Read the article

  • Multi-Threaded Application vs. Single Threaded Application

    Why would we use a multithreaded application versus a single-threaded application? First we must define multithreading. Multithreading is a feature of an operating system that allows programs to run subcomponents, or threads, in parallel. Typically most applications only need one thread because they do not perform time-consuming tasks. The use of multiple threads allows an application to distribute long-running tasks so that they can be executed in parallel. This gives the user the perceived appearance that the application is working faster because, while one thread is waiting on an IO process, the remaining tasks can make use of the available CPU. This allows working threads to execute in tandem so that they can be completed sooner.

    Multithreading benefits:
    - Improved responsiveness - Users usually report improved responsiveness compared to single-threaded applications.
    - Faster applications - Multiple threads can lead to improved application performance.
    - Prioritization - Threads can be assigned a priority, which allows higher-priority tasks to take precedence over lower-priority tasks.

    Single-threading benefits:
    - Programming and debugging - These activities are easier compared to multithreaded applications due to the reduced complexity.
    - Less overhead - Threads add overhead to an application.

    When developing multithreaded applications, the following must be considered. Deadlocks occur when two threads each hold a monitor that the other one requires. In essence each task is blocking the other, and both tasks are waiting for the other monitor to be released. This forces an application to hang, or deadlock. Resource allocation is used to prevent deadlocks: the system determines whether approving a resource request would leave the system in an unsafe state (an unsafe state could result in a deadlock) and only approves requests that lead to safe states. Thread synchronization is used when multiple threads use the same instance of an object. The object accessed by the threads can be locked and thereby synchronized, so that each task interacts with the shared object one at a time.
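    Because the argument above turns on threads overlapping their IO waits, a short example helps. This is a minimal Python sketch; the one-second sleep stands in for any IO wait, and the task count and timings are illustrative only.

    # Threads pay off for IO-bound work: while one thread sleeps (waits on IO),
    # the others run, so four 1-second waits overlap instead of accumulating.
    import threading, time

    def io_bound_task(task_id):
        time.sleep(1)   # simulate waiting on an IO process (network, disk, ...)
        print('task %d finished on %s' % (task_id, threading.current_thread().name))

    def single_threaded(n):
        for i in range(n):
            io_bound_task(i)          # each wait blocks the only thread: ~n seconds

    def multi_threaded(n):
        workers = [threading.Thread(target=io_bound_task, args=(i,)) for i in range(n)]
        for t in workers:
            t.start()                 # the waits overlap: ~1 second in total
        for t in workers:
            t.join()

    start = time.time()
    single_threaded(4)
    print('single-threaded: %.1fs' % (time.time() - start))

    start = time.time()
    multi_threaded(4)
    print('multi-threaded: %.1fs' % (time.time() - start))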

    Read the article

  • My History with Agile

    - by Robert May
    I’m going to write my history with Agile here. That way, in future posts, I can refer back to it instead of typing it out in a post that contains information you may actually want to read. Note that I’m actually a pretty senior developer and do lots of technical interviews. I’m an Agile fan because of the difference it makes in people’s lives and the improvement in quality it brings, and I’ll sacrifice my own technological advancement to help teams.

    Management History
    I started management pretty early in my career, starting with the first job that I ever had. I actually do NOT have a CS or similar degree; I have a Bachelor of Business Administration with an emphasis in Computer Information Systems.
    My first management gigs were around call center work and were very schedule oriented. I didn’t understand the true value of teams, and I’m ashamed to admit that I actually installed a fingerprint scanner as a time clock in this job. I shudder to think of the impact that I had on the team spirit. I didn’t even trust them enough to fill out their time cards correctly. How sad. I was managing nearly 100 people in this position, with the help of a great set of subordinates. I did try to come up with reward programs for the team, but again, I didn’t understand the concept of a team, so instead of letting the team determine how the rewards should work, I mandated from on high, which isn’t a good thing.
    I was told by people whom I respected a lot that I wasn’t the type to be a good manager. They said it because I was a computer geek, since they didn’t understand good management either, but in retrospect, they were right about me then. I was too green.
    After my first job, I went on to other jobs, and with the exception of one job, I’ve managed people at all of them. The rest of the management story is important for understanding Agile, so I’ll save it for my next post.

    Technical History
    I’ve been in software development for many, many years. I technically started programming on a Commodore 64 in BASIC. I didn’t know that I was programming, but I was sure having fun. That was followed by batch files, Gorilla hacking (I always had to win), WordPerfect macro programming and other things that taught me the basics.
    My first “real” job was with a telephone company, and that’s where I made my first database application in DataEase, wrote my first VBA app and started using real programming tools, like Turbo Pascal, VB3-VB5, and semi-real tools like RPG and VisualRPG. I wrote my first web page in 1994, and built my first data-driven web page in 1995 using perlDB. You really can do anything with Perl. At this time, I also started a Linux-based internet service provider that is still in operation today. One of the people I worked with is now a Microsoft employee building and designing frameworks you probably know well. Smart guy. I also built my first ASP applications connecting to SQL Server 6.5, set up Exchange 5.5 for the company, and did plenty of other system administration work. I’m a programmer by choice, mostly because I don’t really like PC support.
    From there, I went on to a large state agency. I got to see and maintain true waterfall projects. Five years of maintaining the 200 VB COM+ (MTS, actually) DLLs that were used to calculate a single number is a long time. That was all Microsoft DNA technologies. SQL Server and VB6 were the tools of choice, although .NET started to be a factor near the end of my employment.
    I did some heavy XML work at this job and even wrote an XSD parser and validator in VB6 that was a shim until MSXML 3.0 came out. Prior to 3.0, XSDs weren’t supported, and I didn’t want to write DTDs. Ironically, jobs after this were more generic. I pretty much settled in on the .NET framework and revisions of it. Lots of WPF, some Silverlight, lots of ASP.NET, some SQL Azure, lots of SQL Server, some Oracle, but I don’t think that I was as passionate about development and technologies. I was more into the management of development. I like people.

    Read the article

  • Link To Work Item – Visual Studio extension to link changeset(s) to work item directly from VS history window

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2014/08/11/link-to-work-item-ndash-visual-studio-extension-to-link.aspx

    By linking work items and other objects, you can track related work, dependencies, and changes made over time. As the following illustration shows, specific link types are used to track specific work items and actions. (– via MSDN)

    While making a check-in, Visual Studio 2013 provides a quick way to search for and assign a work item via the pending changes section in Team Explorer. However, if you forget to assign the work item during your check-in, things get cumbersome, as Visual Studio does not provide an easy way of assigning it afterwards. For example, you usually have to open the work item and then link the changeset, which involves approx. 7-8 mouse clicks. You will really feel the difficulty if you have to assign a work item to multiple changesets, because you have to repeat the same steps again. Hence, I decided to develop a small Visual Studio extension to make this action of linking a work item to a changeset a bit easier.

    How to use the extension? First, download and install the extension from VS Gallery (supports VS 2013 Professional and above). Once you install it, you will see a new "Link To Work Item" menu item when you right-click a changeset in the history window. Clicking the Link To Work Item menu item will open a new dialog with which you can search for a work item. As you can see in the screenshot below, this dialog displays the search results and also the type of each work item. You can also open a work item from this dialog by right-clicking it and clicking 'Open'. Finally, clicking the Save button will actually link the work item to the changeset. One feature which I think is helpful is that you can select multiple changesets from the history window and assign the work item to all of them.

    To summarize the features:
    - Directly assign work items to changesets from the history window
    - Assign a work item to multiple changesets
    - Know the type of the work item before assigning
    - Open the work item from the search results
    - It also supports all default Visual Studio themes

    Below is a small demo showcasing the working of this extension. Finally, if you like the extension, do not forget to rate and review it in VS Gallery. Also, do not hesitate to provide your suggestions, improvements and any issues you may encounter via GitHub.

    Read the article

  • How to Customize Your How-To Geek RSS Feeds (We’re Changing Things)

    - by The Geek
    If you’re an RSS subscriber, you’ll soon notice that we’re making a few changes. Why? It’s time to simplify our system, while providing you a little more control over which articles you want to see. The point, of course, is that people like different things, and that’s OK. What’s not so great is getting complaints—Linux users are always whining about Windows posts, and Windows users are whining when we write Linux posts. It’s also worth pointing out that if you aren’t interested in a post—you don’t have to click on it to read it. This is probably fairly obvious to reasonable people. The New Feeds Here’s the new set of feeds you can subscribe to. We’ll probably add more fine-grained feeds in the future, as we get some more things straightened out. Everything we publish (news, how-tos, features) Just the Feature Articles (the absolute best stuff) Just News (ETC) Posts Just Windows Articles Just Linux Articles Just Apple Articles Just Desktop Fun Articles You can obviously subscribe to one or many of them if you feel like it. The Once Daily Summary Feed! If you’d rather get all your How-To Geek in a single dose each day, you can subscribe to the summary feed, which is pretty much the same as our daily email newsletter. You can subscribe to this summary feed by clicking here. Note: we’re working on a lot of backend changes to hopefully make things a little better for you, the reader. One of the things we’ve consistently had feedback on is the comment system, which we’ll tackle a little later. Also, if you suddenly saw a barrage of posts earlier… oops! Our mistake.

    Read the article

  • Installing SQL Server 2008

    - by Jason Ulloa
    In this post I will try to explain the steps for installing SQL Server and configuring it afterwards.
    Step 1: Setup Support Rules. This will be the first installation screen we run into when trying to install SQL Server. Here we simply click Next.
    Step 2: Feature Selection. This is, in my opinion, the most important step of the SQL installation process, since it lets us select all the components the server will have later. The key items here are: Database Engine Services and the Management Tools. Everything else is a plus on top of the engine.
    Step 3: Instance Configuration. In this step we don't need to worry about anything; we just click Next.
    Step 4: Disk Space Requirements. Again, there is nothing for us to do here. This is just an informational SQL screen showing the current disk space and the space the SQL Server installation will consume. We click Next.
    Step 5: Server Configuration. This is one of the most important steps, since here we tell SQL which account it will use to authenticate and start each of the services we selected at the beginning. Generally, when working locally, the NT AUTHORITY\SYSTEM account is the best option. If we select an account with insufficient permissions in this step, SQL will give us an error. We click Next.
    Step 6: Database Engine Configuration. In this step we focus on the Account Provisioning tab, where we indicate the account the database engine will run under by default. The recommended choice is to click the Add Current User option, which adds the Windows user currently logged in. We can also select whether we want SQL authentication mode or Mixed Mode, which includes both SQL Server and Windows authentication. For our installation we will select SQL authentication mode only. Once we have added the user, we click Next.
    Step 7: Finish the configuration. After the previous steps, the remaining screens require nothing special. Just click Next and wait for the SQL installation to finish.

    Read the article

  • Talking JavaOne with Rock Star Martijn Verburg

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers at each JavaOne Conference. They are awarded by their peers, who, through conference surveys, recognize them for their outstanding sessions and speaking ability. Over the years many of the world’s leading Java developers have been so recognized. Martijn Verburg has, in recent years, established himself as an important mover and shaker in the Java community. His “Diabolical Developer” session at the JavaOne 2011 Conference got people’s attention by identifying some of the worst practices Java developers are prone to engage in. Among other things, he is co-leader and organizer of the thriving London Java User Group (JUG), which has more than 2,500 members, co-represents the London JUG on the Executive Committee of the Java Community Process, and leads the global effort for the Java User Group “Adopt a JSR” and “Adopt OpenJDK” programs. Career highlights include overhauling technology stacks and SDLC practices at Mizuho International, mentoring Oracle on technical community management, and running offshore development teams for AIG. He is currently CTO at jClarity, a start-up focusing on automating optimization for Java/JVM-related technologies, and Product Advisor at ZeroTurnaround. He co-authored, with Ben Evans, "The Well-Grounded Java Developer", published by Manning, and, as a leading authority on technical team optimization, he is in high demand at major software conferences.

    Verburg is participating in five sessions, a busy man indeed. Here they are:
    - CON6152 - Modern Software Development Antipatterns (with Ben Evans)
    - UGF10434 - JCP and OpenJDK: Using the JUGs’ “Adopt” Programs in Your Group (with Csaba Toth)
    - BOF4047 - OpenJDK Building and Testing: Case Study—Java User Group OpenJDK Bugathon (with Ben Evans and Cecilia Borg)
    - BOF6283 - 101 Ways to Improve Java: Why Developer Participation Matters (with Bruno Souza and Heather Vancura-Chilson)
    - HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Kirk Pepperdine, Ellen Kraffmiller and Henri Tremblay)

    When I asked Verburg about the biggest mistakes Java developers tend to make, he listed three:
    - A lack of communication -- Software development is far more a social activity than a technical one; most projects fail because of communication issues and social dynamics, not because of a bad technical decision. Sadly, many developers never learn this lesson.
    - No source control -- Developers simply storing code in local filesystems and emailing code in order to integrate.
    - Design-driven Design -- The need for some developers to cram every design pattern from the Gang of Four (GoF) book into their source code.

    All of which raises the question: If these practices are so bad, why do developers engage in them? “I've seen a wide gamut of reasons,” said Verburg, who lists them as:
    * They were never taught at high school/university that their bad habits were harmful.
    * They weren't mentored in their first professional roles.
    * They've lost passion for their craft.
    * They're being deliberately malicious!
    * They think software development is a technical activity and not a social one.
    * They think that they'll be able to tidy it up later.

    A couple of key confusions and misconceptions beset Java developers, according to Verburg. “With Java and the JVM in particular I've seen a couple of trends,” he remarked. “One is that developers think that the JVM is a magic box that will clean up their memory, make their code run fast, as well as make them cups of coffee. The JVM does help in a lot of cases, but bad code can and will still lead to terrible results! The other trend is to try and force Java (the language) to do something it's not very good at, such as rapid web development. So you get a proliferation of overly complex frameworks, libraries and techniques trying to get around the fact that Java is a monolithic, statically typed, compiled, OO environment. It's not a Golden Hammer!”

    I asked him about the keys to running a good Java User Group. “You need to have a ‘Why,’” he observed. “Many user groups know what they do (typically, events) and how they do it (the logistics), but what really drives users to join your group and to stay is to give them a purpose. For example, within the LJC we constantly talk about the ‘Why,’ which in our case is several whys:
    * Re-ignite the passion that developers have for their craft
    * Raise the bar of Java developers in London
    * We want developers to have a voice in deciding the future of Java
    * We want to inspire the next generation of tech leaders
    * To bring the disparate tech groups in London together
    * So we could learn from each other
    * We believe that the Java ecosystem forms a cornerstone of our society today -- we want to protect that for the future

    Looking ahead to Java 8, Verburg expressed excitement about Lambdas. “I cannot wait for Lambdas,” he enthused. “Brian Goetz and his group are doing a great job, especially given some of the backwards compatibility that they have to maintain. It's going to remove a lot of boilerplate and yet maintain readability, plus enable massive scaling.”

    Check out Martijn Verburg at JavaOne if you get a chance, and stay tuned for a longer interview yours truly did with Martijn, to be published on otn/java some time after JavaOne. Originally published on blogs.oracle.com/javaone.

    Read the article

< Previous Page | 312 313 314 315 316 317 318 319 320 321 322 323  | Next Page >