Search Results



  • Language Club – Battle of the Dynamic Languages

    - by Ben Griswold
    After dedicating the last eight weeks to learning Ruby, it's time to move on to another language. I really dig Ruby. I really enjoy its dynamism, expressiveness and always-openness, and it's been the highlight of our coding club for me so far. But that's just my take on the language. I know a lot of coders whose stomachs turn at the mere thought of Ruby. They say it's Ruby's openness which has them feeling uneasy. I'd say "write a bunch of tests and get over it," but I figure there must be more to it than always-open classes and possible method collisions.

    Yes, there's something else to it alright. The folks who didn't fall head over heels for Ruby are already in love with Python. You might remember that Python was the first language we tackled in our coding club. My time with Python was okay, but it didn't feel as natural to me as Ruby. But let's say we started with Ruby and then moved on to Python. Would I see Python in a different light right now? Might I even prefer Python over Ruby? I suppose it's possible, but it's pretty tough to test that theory – unless we visit Python for a second time.

    That's right: the language club is going to focus on Python again, and in my attempt to learn Python – yet again – in the open, I'll be posting my solutions here just as I did for Ruby. We don't always have second chances, so I'm going about this relearning with two primary goals in mind. First, I'm going to use IronPython and the IronPython Tools, which provide a Python code editor, a file-based project system, and an interactive Python interpreter, all inside Visual Studio 2010. As a note, the IronPython Tools are now part of the main IronPython installer, which is Version 2.7 Alpha 1 (not the latest stable version, 2.6.1), and I'd be crazy not to use them. Second, I'd like to make sure I'm still learning Python without a complete MS skew, so I'm going to run my code through Eclipse using the PyDev plugin as well. Heck, I might use IDLE too. I already have this set up on my machine, so it's no big deal.

    Okay, that's it for now. I worked on the first ten Euler problems last night and the solutions will be posted shortly. Wish me luck.

    Read the article

  • links for 2010-04-27

    - by Bob Rhubart
    @oracletechnet: Oracle Technology Network Newsletters Revisited – "You may find this hard to believe, but some analysts contend that email newsletters are still among the most preferred methods of "information awareness" by developers today. And in our experience, the numbers back it up: subscriptions to Oracle Technology Network newsletters grow organically by 15% every year, even after you take continual list cleanup into account." – Justin Kestelyn (tags: oracle otn newsletters developers architects)

    Sylvain Duloutre: Directory Services as a Web Service – Sylvain Duloutre shares a WSDL file he created to deal with issues involved in XML binding generation. (tags: oracle sun wsdl webservices DSEE netbeans jdeveloper)

    Nick Wooler: Iron-Clad Cloud: Secure Cloud Computing – "One solution to the security problem with cloud services can be overcome using Service Oriented Security. The Oracle approach to using Service Oriented Security allows developers to pull from a centralized, authoritative source of identity services. This allows developers to build security into every application from the inside-out. This is critical to ensuring this is done in a standardized manner and most importantly it allows developers to develop without being security experts." – Nick Wooler (tags: oracle sun security cloud saas)

    Andy Mulholland: A week of visits; Cisco, HP, Oracle, SAP and VMware (in alphabetical order!) – "I now am considering that we should be thinking about 'clouds' in virtual way, by which I mean that a succession of virtual 'clouds' will need to exist, each possessing specific characteristics that suit certain types of services. Really it's no different to what we see with servers today. Adding a hypervisor to a server adds new flexibility, but creating a virtualised environment means much more. What I suspect will happen is that we will start to use vendor specific approaches to building what I will term a physical cloud solution using their technology and approach to supporting a specific objective, but with time we will find these physical clouds will interoperate as a fully virtualised cloud environment." – Andy Mulholland (tags: entarch enterprisearchitecture cloudcomputing virtualization)

    @fteter: Highlights From The Bright Lights – Tuesday #c10 – Oracle Ace Director Floyd Teter of JPL with one last wrap-up of Collaborate 10. (tags: oracle otn collaborate2010 las vegas)

    Rittman Mead India – Call for very good Oracle BI Developers/Architects – "Now that we have an office in India and if you are interested in joining us, do drop us a line at [email protected], and we will be glad to have technical discussions with you. If you are also an Oracle BI, DW or EPM customer looking for help on projects in the Asia-Pacific region, again we'll be pleased to hear from you and to let you know how we can help." – Venkatakrishnan J (tags: otn oracle jobs india developers architects software)

    Read the article

  • Part 2: The Customization Lifecycle

    - by volker.eckardt(at)oracle.com
    To better understand the challenges when working with customizations, please allow me to explain my understanding of the Customization Lifecycle. The starting point is the functional GAP list. Any GAP can lead to a customization (but does not have to). The decision is driven by priority, gain, costs, future functionality, accepted workarounds, etc. Let's assume the customization has been accepted as such – including estimation. (Otherwise this blog would not have any value.)

    Now the customization lifecycle starts and could look like this:
    - Functional specification
    - Technical specification
    - Technical development
    - Functional setup
    - Module Test
    - System Test
    - Integration Test (if required)
    - Acceptance Test
    - Production mode
    - Usage
    - 10 x Rework
    - 10 x Retest
    - 2 x Upgrade
    - 2 x Upgrade Test
    - Usage
    - 10 x Rework
    - 10 x Retest
    - 1 x Upgrade
    - 1 x Upgrade Test
    - Usage
    - Review for Retirement
    - Accepted Retirement
    - De-installation

    What I would like to highlight herewith is that any material and documentation you create upfront or during the first phases will usually be used multiple times, partially or completely, and will be enhanced, reviewed and retested. The better the quality is right from the beginning, the better we can perform the next steps.

    What I see very often is the wish to remove a customization: our customers are upgrading and they would like to get at least some of the customizations replaced with standard functionality. To be able to support this process best, the customization documentation should contain at least the following key information:
    - What is/are the business process(es) where this customization is used or linked to?
    - Who was involved in the different customization phases?
    - What are the objects comprising the customization?
    - What is the setup necessary for the customization?
    - What setup comes with the customization, and what has to be done via other tools or manually?
    - What are the test steps and test results (in all test areas)?
    - What are linked customizations?
    - What is the customization complexity?
    - How is this customization classified?
    - Which technologies were used?
    - How many days were needed to create/test/upgrade the customization?
    - Etc.

    If all this is available, a replacement/retirement can be done much more efficiently and precisely, or an estimation and upgrade itself can be executed with much better support.

    In the following blog entries I will explain in more detail why we suggest tracking such information, by whom this task shall be done, and how.

    Volker Eckardt

    Read the article

  • Make Your Coworker’s Day in Ubuntu

    - by Trevor Bekolay
    It can be difficult to express your appreciation for your coworkers in person – what if they take it the wrong way, or think you're fishing for a compliment of your own? If you use Ubuntu in your office, here's a quick way to show your appreciation while avoiding the social pitfalls of face-to-face communication.

    Make sure their computer is locked
    An unlocked computer is a vulnerable computer. Vulnerable to malware, sure, but much more vulnerable to the local office prankster, who thinks it's hilarious to take a screenshot of your desktop, change your background to that screenshot, then hide all of your desktop icons. These incidents have taught us that you should lock your computer when taking a break. Hopefully your coworker has learned the same lesson, and pressed Ctrl+Alt+L before stepping out for a coffee.

    Leave a carefully worded message
    Now is your opportunity to leave your message of appreciation on your coworker's computer. Click on the Leave Message button and type away! Click on Save.

    Wait, possibly in the shadows
    If you sit near your coworker, then wait for them to return. If you sit farther away, then try to listen for their footsteps. Eventually they will return to their computer and enter their password to unlock it.

    Observe and smile
    Once they return to their desktop, they will be greeted with the message you left. Look to see if they appreciated the message, and if so, feel free to take credit. If they look annoyed, or press the Cancel button, continue on with your day like nothing happened. You may also try to slip into a conversation that you saw Jerry tinkering with their computer earlier.

    Conclusion
    Leaving your coworkers a nice message is easy and can brighten up their dull afternoon. We're pretty sure that this method can only be used for good and not evil, but if you have any other suggestions of messages to leave, let us know in the comments!

    Read the article

  • The softer side of BPM

    - by [email protected]
    BPM and RTD are great complementary technologies that together provide a much higher benefit than each of them separately. BPM covers the need for automating processes, making sure that there is uniformity, that rules and regulations are complied with, and that the process runs smoothly and quickly processes the units flowing through it. By nature, this automation and unification can lead to a stricter, less flexible process. To avoid this problem it is common to encounter process definitions that include multiple conditional branches and human input to help direct processing in the direction that best applies to the current situation. This is where RTD comes into play. The selection of branches and conditions and the optimization of decisions is better left in the hands of a system that can measure the results of its decisions in a closed-loop fashion and make decisions based on the empirical knowledge accumulated through observing the running of the process.

    When designing a business process there are key places in which it may be beneficial to introduce RTD decisions. These are:

    Thresholds – whenever a threshold is used to determine the processing of a unit, there may be an opportunity to make the threshold "softer" by introducing an RTD decision based on predicted results. For example, an insurance company process may have a total claim threshold to initiate an investigation. Instead of having that threshold, RTD could be used to help determine which claims to investigate based on the likelihood they are fraudulent, the cost of investigation, and the effect on processing time.

    Human decisions – sometimes a process will let the human participants make decisions of flow. For example, a call center process may leave the escalation decision to the agent. While this has flexibility, it may produce undesired results and asymmetry in customer treatment that is not based on objective functions but on subjective reasoning by the agent. Instead, an RTD decision may be introduced to recommend escalation or other kinds of treatments.

    Content Selection – a process may include the use of messaging with customers. The selection of the most appropriate message for the customer in a given context can be optimized with RTD.

    A/B Testing – a process may have optional paths for which it is not clear which populations they work better for. Rather than making an arbitrary selection, or a selection by committee, of the option deemed the best, RTD can be introduced to dynamically determine the best path for each unit.

    In summary, RTD can be used to make BPM-based process automation more dynamic and adaptable to the different situations encountered in processing – effectively making the automation softer, less rigid in its processing.

    Read the article

  • SQL SERVER – Detecting Leap Year in T-SQL using SQL Server 2012 – IIF, EOMONTH and CONCAT Function

    - by pinaldave
    Note: Tomorrow is February 29th. This blog post is dedicated to the coming tomorrow – a special day :)

    Subu: "How can I find leap year in using SQL Server 2012?"
    Pinal: "Are you asking me how to find whether the year 2012 is a leap year using T-SQL? Search online and you will find many examples of the same."
    Subu: "No. I am asking – How can I find leap year in using SQL Server 2012?"
    Pinal: "Oh, so you are asking – How can I find leap year in using SQL Server 2012?"
    Subu: "Yeah – How can I find leap year in using SQL Server 2012?"
    Pinal: "Let me do that for you – How can you find leap year in using SQL Server 2012?"

    Indeed a fun conversation. Honestly, the only reason I pasted our conversation here is that it was fun. What he was asking is how to do it using the new functions introduced in SQL Server 2012. Here are the articles I have written which introduce all the new functions in SQL Server 2012: Summary of All the Analytic Functions – MSDN and SQLAuthority, and 14 New Functions – A Quick Guide.

    There are many functions written to figure out whether any year is a leap year or not. I have written the same using a T-SQL function over here:

        CREATE FUNCTION dbo.IsLeapYear (@year INT)
        RETURNS INT
        AS
        BEGIN
            RETURN(IIF(DATEPART(dd,(EOMONTH(CONCAT(@year,'0201')))) = 29,1,0))
        END
        GO

    What I really like is that I was able to use three newly introduced SQL Server 2012 functions in the above script. You can read more about them here: IIF, EOMONTH and CONCAT. You can validate the above function by running the following script:

        SELECT dbo.IsLeapYear('2011') 'IsLeapYear';
        SELECT dbo.IsLeapYear('2012') 'IsLeapYear';
        GO

    You will get the result 1 if the year is a leap year and 0 if the year is not a leap year (see also the divisibility-rule sketch at the end of this entry).

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
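
    As a cross-check, here is the classic divisibility rule written as a T-SQL function: a year is a leap year if it is divisible by 4 and not by 100, unless it is also divisible by 400. This is a minimal sketch of my own, not from the original post, and the function name is hypothetical:

        -- classic rule: divisible by 4 and not by 100, or divisible by 400
        CREATE FUNCTION dbo.IsLeapYearClassic (@year INT)
        RETURNS INT
        AS
        BEGIN
            RETURN (CASE WHEN (@year % 4 = 0 AND @year % 100 <> 0)
                           OR (@year % 400 = 0)
                         THEN 1 ELSE 0 END)
        END
        GO
        SELECT dbo.IsLeapYearClassic(2011) 'IsLeapYear'; -- returns 0
        SELECT dbo.IsLeapYearClassic(2012) 'IsLeapYear'; -- returns 1
        GO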

    Read the article

  • Simple Hack To Install iOS 5 Beta On iPhone/iPad Without Developer Account

    - by Gopinath
    Apple released the iOS 5 beta a couple of days ago with loads of new features (watch the keynote video). This beta version of iOS is available to iPhone developers who pay Apple $99 per annum. What if you are not a developer, or not willing to pay $99 for a developer account, but desperately looking for ways to upgrade your iPhone to iOS 5? Here is a simple hack to install the iOS 5 beta without having a developer account.

    1. Download the iOS 5 installer with the help of Google or torrent sites.
    2. Manually update your iPhone with the help of iTunes: connect your phone to your computer, click on the Update button with the Shift (PC) or Option (Mac) key pressed, select the downloaded iOS 5 IPSW file, and let iTunes update your iPhone.
    3. After the update completes you will see the Activation screen.
    4. Triple-click the home button. This will activate VoiceOver. Again triple-click the home button and the Emergency Call option will appear.
    5. Click the Emergency Call option and, while it's opening, swipe your three fingers down to open Notification Center.
    6. Click on the Weather widget. The Weather application will load.
    7. Click on the home button. Bam!! Your iPhone is loaded with iOS 5.

    Note: Upgrade your iPhone to iOS 5 at your own risk. You can't blame us if something goes wrong. I know that you are aware of the side effects of using hacks, but just reminding you once again. via Gizmodo

    This article titled, Simple Hack To Install iOS 5 Beta On iPhone/iPad Without Developer Account, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • SQL SERVER – PREEMPTIVE and Non-PREEMPTIVE – Wait Type – Day 19 of 28

    - by pinaldave
    In this blog post, we are going to talk about a very interesting subject. I often get questions related to SQL Server 2008 Books Online about the various preemptive wait types. I got a few questions asking what these wait types are and how they should be interpreted. To get the current wait types of the system, you can read this article and run the script: SQL SERVER – DMV – sys.dm_os_waiting_tasks and sys.dm_exec_requests – Wait Type – Day 4 of 28. Before we continue, let us first study what PREEMPTIVE and Non-PREEMPTIVE waits in SQL Server mean.

    PREEMPTIVE: Simply put, this wait means non-cooperative. While SQL Server is executing a task, the Operating System (OS) interrupts it. This leads SQL Server to involuntarily give up the execution for other higher-priority tasks. This is not good for SQL Server, as an external process is making SQL Server yield. This kind of wait can reduce performance drastically and needs to be investigated properly.

    Non-PREEMPTIVE: In simple terms, this wait means cooperative. SQL Server manages the scheduling of the threads. When SQL Server manages the scheduling instead of the OS, it makes sure of its own priorities. In this case, SQL Server decides the priority, and one thread yields to another thread voluntarily.

    In earlier versions of SQL Server, no preemptive wait types were mentioned, and the associated task status was marked as suspended. In SQL Server 2005, preemptive wait types were not listed either, but their associated task status was marked as running. In SQL Server 2008, preemptive wait types are properly listed and their associated task status is also marked as running. Now, SQL Server runs in non-preemptive mode by default and it works fine. When CLR, extended stored procedures and other external components run, they run in preemptive mode, leading to the creation of these wait types. There is a wide variety of preemptive wait types. If you see consistently high values in the preemptive wait types, I strongly suggest that you look into the wait type and try to find the root cause (a quick query for this follows at the end of this entry). If you are still not sure, you can send me an email or leave a comment about it and I will do my best to help you reduce this wait type.

    Read all the posts in the Wait Types and Queue series.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
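
    To see whether preemptive waits are significant on your own instance, a quick look at the cumulative numbers can help. This is a minimal sketch of my own (not from the original post), using the documented sys.dm_os_wait_stats DMV:

        -- top preemptive wait types accumulated since the last restart
        SELECT TOP 10
            wait_type,
            waiting_tasks_count,
            wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type LIKE 'PREEMPTIVE%'
        ORDER BY wait_time_ms DESC
        GO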

    Read the article

  • SQL SERVER – DMV – sys.dm_exec_query_optimizer_info – Statistics of Optimizer

    - by pinaldave
    Incredibly, SQL Server has so much information to share with us. Every single day, I am amazed by this SQL Server technology. Sometimes I find interesting information just by querying a few of the DMVs, and when I present this info in front of my clients during performance tuning consultancy, they are surprised by my findings. Today, I am going to share one of the hidden gems of the DMVs with you, the one which I frequently use to understand what's going on under the hood of SQL Server.

    SQL Server keeps a record of most of the operations of the Query Optimizer. We can learn many interesting details about the optimizer which can be utilized to improve the performance of the server.

        SELECT *
        FROM sys.dm_exec_query_optimizer_info
        WHERE counter IN ('optimizations', 'elapsed time', 'final cost',
            'insert stmt', 'delete stmt', 'update stmt',
            'merge stmt', 'contains subquery', 'tables',
            'hints', 'order hint', 'join hint',
            'view reference', 'remote query', 'maximum DOP',
            'maximum recursion level', 'indexed views loaded',
            'indexed views matched', 'indexed views used',
            'indexed views updated', 'dynamic cursor request',
            'fast forward cursor request')

    All occurrence values are cumulative and are set to 0 at system restart. All values for the value fields are set to NULL at system restart. I have removed a few of the internal counters from the script above, and kept only documented details. Let us check the result of the above query.

    As you can see, there is so much vital information revealed by the above query. I can easily say how many times the optimizer was triggered and what the average time taken by it to optimize my queries was. Additionally, I can also determine how many times update, insert or delete statements were optimized. Using this dynamic management view, I was able to quickly figure out that my client was overusing query hints. If you have been reading my blog, I am sure you are aware of my series related to SQL Server views, SQL SERVER – The Limitations of the Views – Eleven and more…. With this, I can take a quick look and figure out how many times views were used in various solutions within the query.

    Moreover, you can easily learn what fraction of all optimizations involved a particular operation. For example, the following query tells me what fraction of total optimizations included a view reference. As this also includes system views and DMVs, the number is a bit higher on my machine (a small occurrence/value sketch follows at the end of this entry).

        SELECT (SELECT CAST (occurrence AS FLOAT)
                FROM sys.dm_exec_query_optimizer_info
                WHERE counter = 'view reference') /
               (SELECT CAST (occurrence AS FLOAT)
                FROM sys.dm_exec_query_optimizer_info
                WHERE counter = 'optimizations') AS ViewReferencedFraction

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
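
    As a quick illustration (a sketch of my own, not from the original post), the occurrence and value columns can be read together: for the 'elapsed time' counter, occurrence is the number of optimizations and value is the average elapsed time per optimization, so these two counters give a feel for how much time the optimizer is spending overall:

        -- occurrence = how many times; value = average per occurrence
        SELECT counter, occurrence, value
        FROM sys.dm_exec_query_optimizer_info
        WHERE counter IN ('optimizations', 'elapsed time')
        GO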

    Read the article

  • Emoti-phrases

    - by Tony Davis
    Surely the next radical step in the development of user-interface design is for applications to react appropriately to the rising tide of bewilderment, frustration or antipathy of their users. When an application understands that rapport is lost, it should respond accordingly. When we, for example, become confused by an unforgiving interface, shouldn't there be a way of signalling our bewilderment and having the application respond appropriately? There is surprisingly little in the current interface standards that would help.

    If we're getting frustrated with an unresponsive application, perhaps we could let it know of our increasing irritation by means of an "I'm getting angry and exasperated" slider. Although, by 'responding appropriately', I don't include playing a "we are experiencing unusually heavy traffic: your application usage is important to us" message, accompanied by calming muzak. When confronted with a tide of wizards, 'are you sure?' messages, or page-after-page of tiresome and barely-relevant options, how one yearns for a handy 'JFDI' (Just Flaming Do It) button. One click and the application miraculously desists with its annoying questions and just gets on with the job, using the defaults, or whatever we selected last time.

    Much more satisfying, and more direct for most developers and DBAs, however, would be the facility to communicate with the application via a twitter-style input field, or via parameters to command-line applications ("I don't want a wide-ranging debate with you; just open the bl**dy PDF!", or "Don't forget which of us has the close button"). Although, to avoid too much cultural dependence, perhaps we should take the lead from emoticons, and use a set of standardized emoti-phrases such as 'sez you', 'huh?', 'Pshaw!', or 'meh', which could be used to vent a range of feelings in any given application, whether it be SQL Server stubbornly refusing to give us the result we are expecting, or an online survey that is getting too personal. Or a 'Lingua Glaswegia' perhaps:

    'Atsabelter' ("very good")
    'Atspish' ("must try harder")
    'AnThenYerArsFellAff' ("I don't quite trust these results")
    'BileYerHeid' or 'ShutYerGub' ("please stop these inane questions")

    There would, of course, have to be an ANSI standards body to define the phrases that were acceptable. Presumably, there would be a tussle amongst the different international standards organizations. Meanwhile Oracle, Microsoft and Apple would each release non-standard extensions. Time then, surely, to plant emoti-phrases on the lot of them and develop a user-driven consensus. Send us your suggestions! The best one will win an iPod nano!

    Cheers!
    Tony.

    Read the article

  • Silverlight Cream for November 20, 2011 - 2 -- #1170

    - by Dave Campbell
    In this Issue: Michael Washington, Oliver Fuh, Jeremy Likness, Derik Whittaker, Jesse Liberty, Jeff Blankenburg (-2-), and Michael Crump.

    Above the Fold:
    Silverlight: "Handling Extremely Large Data Sets in Silverlight" – Jeremy Likness
    WP7: "31 Days of Mango | Day #8: Contacts API" – Jeff Blankenburg
    LightSwitch: "LightSwitch Chat Application Using A Data Source Extension" – Michael Washington

    Shoutouts:
    Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up. Check out Shawn Wildermuth's take on the AppStore and WP7 in general: 40,000 Apps – What Does It Mean? Be sure to check out Jesse Liberty & Paul Betts' new book: Programming Reactive Extensions and LINQ. I've just had a little time to look at mine, but don't let the size fool you... this is the good stuff!

    From SilverlightCream.com:

    LightSwitch Chat Application Using A Data Source Extension – In his latest LightSwitch post, Michael Washington gives up code that will enable two people using the same LightSwitch app to chat. Great detailed tutorial as usual!

    Handling AdControl Fetching Exception – WindowsPhoneGeek turns the blog reins over to Oliver Fuh for this post about using the AdControl in your WP7 app and handling a common exception you get with the Microsoft AdControl.

    Handling Extremely Large Data Sets in Silverlight – In this excerpt from his book, Jeremy Likness discusses reading *LARGE* data sets with Silverlight using 3 different patterns: OData, WCF RIA Services, and MVVM.

    Using MVVM with the AutoCompleteTextBox in Silverlight 4 – Derik Whittaker takes a break from WinRT to discuss the Silverlight 4 AutoCompleteTextBox and MVVM... including a custom Behavior to allow the backing property to be updated and a command to trigger background searches.

    Yet Another Podcast #52 – Peter Torr on Windows Phone Multitasking – Jesse Liberty scored Peter Torr on his latest Yet Another Podcast... talking about multitasking on Windows Phone including background agents, the backstack, and other Mango features.

    31 Days of Mango | Day #8: Contacts API – Jeff Blankenburg's Day 8 is about a new namespace on WP7: Microsoft.Phone.UserData... now giving us the ability to treat the user's contact list like a local database.

    31 Days of Mango | Day #9: Calendar API – On Day 9 in his series, Jeff Blankenburg revisits the Microsoft.Phone.UserData namespace and looks at another set of data: the calendar.

    Want to Decompile Silverlight XAP files? Try JustDecompile Beta! – Michael Crump has a post up about the new free developer productivity tool from Telerik that provides assembly browsing and decompiling: JustDecompile... Just download it!

    Stay in the 'Light!

    Read the article

  • 3 Day Level 400 SQL Tuning Workshop 15 March in London, early bird and referral offer

    - by sqlworkshops
    I want to inform you that we have organized the "3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop" in London, United Kingdom during March 15-17, 2011. This is a truly level 400 hands-on workshop, and you can find the agenda, prerequisites, goal of the workshop and registration information at www.sqlworkshops.com/ruk. Charges are GBP 1800 (VAT excl.), with an early bird discount of GBP 125 until 18 February. We are also introducing a new referral plan: if you refer someone who participates in the workshop, you will receive an Amazon gift voucher for GBP 125.

    Feedback from one of the participants who attended our November London workshop – Andrew, Senior SQL Server DBA from UBS, UK, www.ubs.com, November 26, 2010. Rating, on a scale of 1 to 5 (1=Poor, 5=Excellent):
    - Overall I was satisfied with the workshop: 5
    - Instructor maintained the focus of the course: 5
    - Mix of theory and practice was appropriate: 5
    - Instructor answered the questions asked: 5
    - The training facility met the requirement: 5
    - How confident are you with SQL Server 2008 performance tuning: 5

    Additional comments from Andrew: "The course was expertly delivered and backed up with practical examples. At the end of the course I felt my knowledge of SQL Server had been greatly enhanced and was eager to share with my colleagues. I felt there was one prerequisite missing from the course description, an open mind, since the course changed some of my core product beliefs." For additional workshop feedback refer to: www.sqlworkshops.com/feedbacks.

    I will be delivering the Level 300-400 1 Day Microsoft SQL Server 2008 Performance Monitoring and Tuning Seminar in Istanbul and Ankara, Turkey during March. This event is organized by Microsoft Turkey; let me know if you are in Turkey and would like to attend. During September 2010 I delivered this seminar in Zurich, Switzerland, organized by Microsoft Switzerland; the feedback was 4.85 out of 5, with about 100 participants. During November 2010, when I delivered the seminar in Lisbon, Portugal, organized by Microsoft Portugal, the feedback was 8.30 out of 9, with 130 participants.

    Our Mission: Empower customers to fully realize the performance potential of Microsoft SQL Server without increasing the total cost of ownership (TCO), and achieve high customer satisfaction in every consulting engagement and workshop delivery.

    Our Business Plan: Provide useful content in webcasts, articles and seminars to gain visibility for consulting engagements and workshop delivery opportunities. Help us by forwarding this email to your SQL Server friends and colleagues.

    Looking forward,
    R Meyyappan & Team @ www.SQLWorkshops.com
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • SQL SERVER – DMV – sys.dm_os_waiting_tasks and sys.dm_exec_requests – Wait Type – Day 4 of 28

    - by pinaldave
    Previously, we covered the DMV sys.dm_os_wait_stats, and also saw how it can be useful to identify the major resource bottleneck. However, at the same time, we discussed that this is only useful when we are looking at an instance-level picture. Quite often we want to know about the processes going on in our server at a given instant. Here is the query for that. This query is written taking the following into consideration: we want to analyze the queries that are currently running, or which have recently run and whose plan is still in the cache.

        SELECT dm_ws.wait_duration_ms,
            dm_ws.wait_type,
            dm_es.status,
            dm_t.TEXT,
            dm_qp.query_plan,
            dm_ws.session_ID,
            dm_es.cpu_time,
            dm_es.memory_usage,
            dm_es.logical_reads,
            dm_es.total_elapsed_time,
            dm_es.program_name,
            DB_NAME(dm_r.database_id) DatabaseName,
            -- Optional columns
            dm_ws.blocking_session_id,
            dm_r.wait_resource,
            dm_es.login_name,
            dm_r.command,
            dm_r.last_wait_type
        FROM sys.dm_os_waiting_tasks dm_ws
        INNER JOIN sys.dm_exec_requests dm_r ON dm_ws.session_id = dm_r.session_id
        INNER JOIN sys.dm_exec_sessions dm_es ON dm_es.session_id = dm_r.session_id
        CROSS APPLY sys.dm_exec_sql_text (dm_r.sql_handle) dm_t
        CROSS APPLY sys.dm_exec_query_plan (dm_r.plan_handle) dm_qp
        WHERE dm_es.is_user_process = 1
        GO

    You can change CROSS APPLY to OUTER APPLY if you want to see all the details that are otherwise omitted because the plan is no longer in the cache. Let us analyze the result of the above query and see how it can be helpful to identify the query and the kind of wait type it creates.

    The above query returns various columns, several of which provide very important details:
    - wait_duration_ms – indicates the current wait, in milliseconds, for the query executing at that point in time
    - wait_type – indicates the current wait type for the query
    - text – indicates the query text
    - query_plan – when clicked, displays the query plan

    Other important information, like cpu_time, memory_usage, and logical_reads, can be read from the query as well (a blocking-only variation follows at the end of this entry). In future posts in this series, we will see how, once we have identified the wait type, we can attempt to reduce it.

    Read all the posts in the Wait Types and Queue series.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
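
    If you are only interested in blocking, the same DMV can be queried on its own. This is a minimal sketch of my own (not from the original post) that lists just the tasks currently blocked by another session:

        -- show only the waiting tasks blocked by another session
        SELECT session_id,
            blocking_session_id,
            wait_type,
            wait_duration_ms,
            resource_description
        FROM sys.dm_os_waiting_tasks
        WHERE blocking_session_id IS NOT NULL
        GO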

    Read the article

  • SQL SERVER – Puzzle to Win Print Book – Explain Value of PERCENTILE_CONT() Using Simple Example

    - by pinaldave
    For the last several days I have been working on various Denali analytic functions, and it is indeed really fun to refresh concepts which I studied in school. Earlier I wrote an article where I explained how we can use PERCENTILE_CONT() to find the median, over here: SQL SERVER – Introduction to PERCENTILE_CONT() – Analytic Functions Introduced in SQL Server 2012. Today I am going to ask a question based on the same blog post. Again, just like last time, the intention of this puzzle is as follows:
    - Learn a new concept of SQL Server 2012
    - Learn a new concept of SQL Server 2012 even if you are on an earlier version of SQL Server

    On another note, SQL Server 2012 RC0 has been announced and is available to download: SQL SERVER – 2012 RC0 Various Resources and Downloads. Now let's have fun with the following query:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
            PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY ProductID)
            OVER (PARTITION BY SalesOrderID) AS MedianCont
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY SalesOrderID DESC
        GO

    The above query will give us the following result. The reason we get the median is because we are passing the value 0.5 to the PERCENTILE_CONT() function. Now read the puzzle.

    Puzzle: Run the following T-SQL code:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
            PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY ProductID)
            OVER (PARTITION BY SalesOrderID) AS MedianCont
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY SalesOrderID DESC
        GO

    Observe the result and you will notice that MedianCont has a different value than before; the reason is that the value 0.9 is passed to the PERCENTILE_CONT function. For the first four rows the value is 775.1. Now run the following T-SQL code:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
            PERCENTILE_CONT(0.1) WITHIN GROUP (ORDER BY ProductID)
            OVER (PARTITION BY SalesOrderID) AS MedianCont
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY SalesOrderID DESC
        GO

    Observe the result and you will notice that MedianCont again has a different value; the reason is that the value 0.1 is passed to the PERCENTILE_CONT function. For the first four rows the value is 709.3.

    In my earlier example I explained how the median is found using this function. You have to explain, using mathematics and in easy words, why the values in the last column are 709.3 and 775.1.

    Hint: SQL SERVER – Introduction to PERCENTILE_CONT() – Analytic Functions Introduced in SQL Server 2012 (and see the interpolation sketch at the end of this entry).

    Rules
    - Leave a comment with your detailed answer by Nov 25.
    - Open world-wide (where Amazon ships books).
    - If you blog about the puzzle's solution and you win, you win an additional surprise gift as well.

    Prizes
    Print copy of my new book SQL Server Interview Questions Amazon|Flipkart. If you already have this book, you can opt for any of my other books SQL Wait Stats [Amazon|Flipkart|Kindle] and SQL Programming [Amazon|Flipkart|Kindle].

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
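
    As a nudge toward the mathematics (this example is mine, not part of the puzzle), PERCENTILE_CONT interpolates linearly between the ordered values: for N rows it computes the row position 1 + p * (N - 1) and blends the two neighboring values. A tiny self-contained sketch with made-up numbers:

        -- with four ordered values, PERCENTILE_CONT(0.9) lands at row
        -- position 1 + 0.9 * (4 - 1) = 3.7, i.e. 70% of the way between
        -- the 3rd and 4th values: 30 + 0.7 * (40 - 30) = 37
        SELECT DISTINCT
            PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY v) OVER () AS p90
        FROM (VALUES (10), (20), (30), (40)) AS t(v)
        GO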

    Read the article

  • SQL Contest – Win USD 300 Worth Gift – Cartoon Contest is Back

    - by pinaldave
    There are two excellent contests, and we have lots of winning to do this year end.

    1) Win USD 25 Amazon Gift Cards (10 Units)
    This is very simple: you just have to download SQL Server DB Optimizer. That's it! There are only two conditions:
    - You must have a valid email address, as the USD 25 Amazon Gift Card will be sent to that address.
    - Download DB Optimizer between today and Dec 8, 2012. Link to Download DB Optimizer.
    Every day for the next 10 days, one winner will be notified of their winning USD 25 Amazon Gift Card.

    2) Win a Star Wars R2-D2 Inflatable R/C
    This is the coolest thing to win. I personally want one, but as I am running the contest, I can't participate. To get this cool remote-controlled device, you just have to answer the following cartoon contest: read the complete story and think about what answer the smart employee would give. There are only two conditions:
    - Leave your answer in the comment area of this blog post (every comment will be hidden till Dec 8, 2012).
    - Please leave your answer in the comment area between today and Dec 8, 2012.
    Remember, you can participate as many times as you want. Make sure that your answer is correct and creative; the most creative answer will be selected, and the decision of the contest owner will be final. We may have runner-up prizes, but for the moment let us try to win the R2-D2. Here is a cool video of the R2-D2.

    Now here is the cartoon story. Please follow the story and complete the very last cartoon template. Your answer should be correct and should be creative; however, the ideal answer will not be longer than one or two sentences. Hint: (Hint)

    Well, leave your answer in the comment area of this blog post. If you do not win the R2-D2, trust me, there are chances you may win a surprise gift from me. The last day to participate in both of the contests is Dec 8, 2012. We will announce the winners in the week of December 10.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • How to Code Faster (Without Sacrificing Quality)

    - by ashes999
    I've been a professional coder for several years. The comments about my code have generally been the same: writes great code, well-tested, but could be faster. So how do I become a faster coder, without sacrificing quality? For the sake of this question, I'm going to limit the scope to C#, since that's primarily what I code (for fun) – or Java, which is similar enough in many ways that matter.

    Things that I'm already doing:
    - Write the minimal solution that will get the job done
    - Write a slew of automated tests (prevents regressions)
    - Write (and use) reusable libraries for all kinds of things
    - Use well-known technologies where they work well (eg. Hibernate)
    - Use design patterns where they fit into place (eg. Singleton)

    These are all great, but I don't feel like my speed is increasing over time. I do care, because if I can do something to increase my productivity (even by 10%), that's 10% faster than my competitors. (Not that I have any.) Besides which, I've consistently gotten this feedback from my managers – whether it was small-scale Flash development or enterprise Java/C++ development.

    Edit: There seem to be a lot of questions about what I mean by fast, and how I know I'm slow. Let me clarify with some more details. I worked in small and medium-sized teams (5-50 people) at various companies on various projects with various technologies (Flash, ASP.NET, Java, C++). The observation of my managers (which they told me directly) is that I'm "slow." Part of this is because a significant number of my peers sacrificed quality for speed; they wrote code that was buggy, hard to read, hard to maintain, and difficult to write automated tests for. My code generally is well-documented, readable, and testable. At Oracle, I would consistently solve bugs more slowly than other team members. I know this because I would get comments to that effect; this means that other (yes, more senior and experienced) developers could do my work in less time than it took me, at nearly the same quality (readability, maintainability, and testability).

    Why? What am I missing? How can I get better at this? My end goal is simple: if I can make product X in 40 hours today, and I can improve myself somehow so that I can create the same product in 20, 30, or even 38 hours tomorrow, that's what I want to know – how do I get there? What process can I use to continually improve? I had thought it was about reusing code, but that's not enough, it seems.

    Read the article

  • Exadata support for ACFS (and thus, 10gR2) now available!

    - by Robert Freeman
    Really? Exadata, ACFS and 10gR2? If you work with Exadata you are probably aware that ACFS has not been supported – until now! ACFS is now supported on Exadata if you are running Grid Infrastructure version 12.1.0.2 or later. This new support is described in MOS note 1326938.1. Exadata support for ACFS is also mentioned in MOS note 888828.1, which is the king of all Exadata notes on MOS. The upshot is that you can now run Oracle Database 10gR2 on Exadata using ACFS as the storage for the Oracle Database.

    Don't Over React and just Throw Everything on ACFS!
    First, let's be clear that ACFS is not an alternative to running your Exadata databases on ASM. If you are running any production or non-production performance-sensitive Oracle databases on 11.2 or 12.1, then you should be running them on ASM disks that are associated with the storage cells. The use case for ACFS is generally limited to the following:
    - Running any Oracle 10gR2 databases on Exadata.
    - Running Oracle 11gR2 development or test databases that require rapid cloning, and that do not require the performance benefits of the Exadata storage cells.
    If you are running Oracle Database 12c and you need snapshot/clone kinds of capabilities, then you should be using Oracle MultiTenant and the features present in that option (remember, though, that MultiTenant is a licensed option).

    The Fine Print
    There are some requirements that you will need to meet if you are going to run ACFS on Exadata:
    - You have to use Oracle Linux.
    - You must use GI 12.1.0.2 or later.
    - If you wish to use HCC then you must apply the fix for bug 19136936 to your system. This bug, and its associated patch, do not appear on MOS (as of the time that I wrote this), so you will need to open an SR and get support to provide the patch for you.

    The Best Use Case for ACFS
    Even though Oracle Database 10gR2 is at end of life, it remains in use in a large number of places. This has caused problems when choosing to implement Exadata as a consolidation platform, or when choosing it during a hardware refresh process. Now that ACFS is supported, Exadata has become even more flexible and affords customers greater flexibility when migrating to Exadata and Engineered Systems. While all of the features of Exadata might not be available to a 10.2.0.4 database, certainly just the improved processing capabilities of Exadata, with its fast-as-heck InfiniBand network fabric, additional memory, reduced power requirements and a whole host of other features, justify moving these databases to Exadata now. This will also make it easier to upgrade these databases when the time comes!

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible. Many people might not realize how much of an undertaking this is, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind… In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time.

    I spend quite a bit of time when developing trying to determine the best way to implement a feature to accomplish the efficiency that I look to achieve. One program that I have recently come to learn about – the Red Gate ANTS Performance (CLR) and Memory profiler – gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against.

    Notice
    As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

    Introduction
    The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license. Since I look to make my code efficient, it was a no-brainer for me to try it out! One thing that I have to say took me by surprise is that upon downloading and installing the program, you fill out a form with your usual contact information. Sure enough, within 2 hours I received an email from a sales representative at Red Gate asking if she could help me achieve the most out of my trial time so it wouldn't go to waste. After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone who set up a demo session to give me a quick rundown of its features via an online meeting. After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gate's friendly and helpful representatives were a breath of fresh air, and something I was thankful for.

    ANTS CLR Profiler
    The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. The install was simple and took no time at all. It installed both the CLR and Memory profilers, as well as Visual Studio extensions to facilitate the usage of the profilers (click any images for full-size images):

    [Screenshot: The Visual Studio menu options (under the ANTS menu)]
    [Screenshot: Starting the CLR Performance Profiler from the start menu]

    If you follow the instructions after launching the program from the start menu (click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling:

    [Screenshot: The New Session dialog. Lots of options.]

    One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application. If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%. This is a problem I have seen in other applications as well that don't scale well to different DPI scales.
    The profiler options give you the ability to profile:
    - .NET executable
    - ASP.NET web application (hosted in IIS)
    - ASP.NET web application (hosted in IIS Express)
    - ASP.NET web application (hosted in Cassini Web Development Server)
    - SharePoint web application (hosted in IIS)
    - Silverlight 4+ application
    - Windows Service
    - COM+ server
    - XBAP (local XAML browser application)
    - Attach to an already running .NET 4 process

    Choosing each option provides a varying set of other variables/options that one can set, including options such as application arguments, operating path, recording I/O performance, which performance counters to record (43 counters in all!), etc… All in all, they give you the ability to profile many different .NET project types, and make it simple to do so. In most cases of my using this application, I would be using the built-in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options set up and start your program; however, Red Gate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/GUI to perform profiling after launching, they would instead provide a Visual Studio panel with the information, and integrate more of the profiling project information into Visual Studio. So, now that we have an idea of what options the profiler gives us, it's time to test its abilities and features.

    Horrendous Example Code – Prime Number Generator
    One of my interests besides development is physics and math – what I went to college for. I have especially always been interested in prime numbers, as they are something of a mystery… So I decided that, to test the abilities of the profiler, I would write a small program, website, and library to generate prime numbers in the quantity that you ask for. I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool.

    First off, the IPrimes interface (all code is downloadable at the end of the post):

        interface IPrimes
        {
            IEnumerable<int> GetPrimes(int retrieve);
        }

    Simple enough, right? Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument. Next, I am going to implement this interface in the most basic way:

        public class DumbPrimes : IPrimes
        {
            public IEnumerable<int> GetPrimes(int retrieve)
            {
                //store a list of primes already found
                var _foundPrimes = new List<int>() { 2, 3 };

                //if I ask for one or two primes, return what was asked for
                if (retrieve <= _foundPrimes.Count())
                    return _foundPrimes.Take(retrieve);

                //the next number to look at
                int _analyzing = 4;

                //since I already determined I don't have enough,
                //execute at least once, and until quantity is sufficed
                do
                {
                    //assume prime until otherwise determined
                    bool isPrime = true;

                    //start dividing at 2
                    //divide until number is reached, or determined not prime
                    for (int i = 2; i < _analyzing && isPrime; i++)
                    {
                        //if (i) goes into _analyzing without a remainder,
                        //_analyzing is NOT prime
                        if (_analyzing % i == 0)
                            isPrime = false;
                    }

                    //if it is prime, add to found list
                    if (isPrime)
                        _foundPrimes.Add(_analyzing);

                    //increment number to analyze next
                    _analyzing++;
                } while (_foundPrimes.Count() < retrieve);

                return _foundPrimes;
            }
        }

    This is the simplest way to get primes, in my opinion.
    It checks each number by the straight definition of a prime: is it divisible by anything besides 1 and itself? I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS. This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website. I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textboxes, and a button.

    Starting a new Profiling Session
    So, in Visual Studio, I have just completed my first stint developing the GUI and the DumbPrimes IPrimes class, so now I want to check my code's efficiency by profiling it. All I have to do is build the solution (surprised that initiating a profiling session doesn't do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance. I am then greeted by the profiler starting up and already monitoring my program live.

    You are provided with a real-time graph at the top, and a pane at the bottom giving you information on how to proceed. I am going to start by asking my program to show me the first 15000 primes. After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.

    One important thing to note is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc… The important thing to remember is that this itself takes a lot of time. When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds with it – almost a 1500 percent increase. While this may seem like a lot, remember that there is a trade-off. It may be WAY more inefficient; however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around.

    Analyzing the Profiling Session
    After clicking 'Stop Profiling', the process running my application stopped, and the entire execution time was automatically selected by ANTS, with the results shown. Now there are a number of interesting things going on here; I am going to cover each in a section of its own.

    Real Time Performance Counter Bar (top of screen)
    At the top of the screen is the real-time performance bar. As your application is running, this will constantly update with the status of the currently selected performance counters. A couple of cool things to note: you can drag a selection around specific time periods to drill down the detail views in the lower two panels to information pertaining only to that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after being reloaded at a later time. You can also zoom in, zoom out, or fit the graph to the space provided – useful for drilling down.

    It may be hard to see, but at the top of the processor time graph, below the time ticks but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the 'Call tree' panel is called. Very cool to be able to click on a method and see at what times it made an impact.

    As I said before, ANTS provides 43 different performance counters you can hook into.
    Clicking the arrow next to the Performance tab at the top will allow you to change between different counters if you have them selected.

    Method Call Tree, ADO.NET Database Calls, File IO – Detail Panel
    Red Gate really hit the mark here, I think. When you select a section of the run with the graph, the call tree populates with a hierarchical tree of method calls, with information regarding each of the methods. By default, methods are hidden where the source is not provided (framework-type code); however, Red Gate has integrated Reflector into ANTS, so even if you don't have source for something, you can select a method and get the source if you want. Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the time of the overall calling method's time; in other words, working on making them better is not where your efforts should be focused. Smart!

    Source Panel – Detail Panel
    The source panel is where you can see line-level information on your code, showing the code for the currently selected method from the method call tree. If the code is not available, Reflector takes care of it and shows the code anyway! As you can notice, there does seem to be a problem with how ANTS determines what line is the actual line that a call is completed on. I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly. In a method with comments, the problem is much more severe: apparently the most offending code in my base library was a comment – *gasp*! Removing the comments does help quite a bit; however, I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics. I did a small test just to demonstrate that the lines are correct without comments. For me, it isn't a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time.

    Feature – Suggest Method for Optimization
    A neat feature, to really help those in need of a pointer, is the menu option under Tools to automatically suggest methods to optimize/improve. Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization. I do wish that they would have made it more visible for those of us who aren't great on sight.

    Process Integration
    I do think that this could have a place in my process. After experimenting with the profiler, I do think it would be a great benefit to do some development and testing, and then, after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share. For example, with this program, I would have developed it, run it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in one method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I'm just going to make another implementation of IPrimes).
Using two pieces of knowledge about prime numbers can make this method MUCH more efficient: every prime greater than 3 falls into one of two buckets, 6k-1 or 6k+1, and a number is prime if it is not divisible by any prime before it:

public class SmartPrimes : IPrimes
{
    public IEnumerable<int> GetPrimes(int retrieve)
    {
        //store a list of primes already found
        var _foundPrimes = new List<int>() { 2, 3 };
        //if I ask for one or two primes, return what was asked for
        if (retrieve <= _foundPrimes.Count())
            return _foundPrimes.Take(retrieve);
        //the next number to look at
        int _k = 1;
        //since I already determined I don't have enough,
        //execute at least once, and until the quantity is sufficed
        do
        {
            //assume prime until otherwise determined
            bool isPrime = true;
            int potentialPrime;
            //analyze 6k-1
            //assign the value to potential
            potentialPrime = 6 * _k - 1;
            //if any found prime divides this, it is NOT a prime number
            //using PLINQ for a quick boost
            isPrime = !_foundPrimes.AsParallel()
                .Any(prime => potentialPrime % prime == 0);
            //if it is prime, add to found list
            if (isPrime)
                _foundPrimes.Add(potentialPrime);
            if (_foundPrimes.Count() == retrieve)
                break;
            //analyze 6k+1
            //assign the value to potential
            potentialPrime = 6 * _k + 1;
            //if any found prime divides this, it is NOT a prime number
            //using PLINQ for a quick boost
            isPrime = !_foundPrimes.AsParallel()
                .Any(prime => potentialPrime % prime == 0);
            //if it is prime, add to found list
            if (isPrime)
                _foundPrimes.Add(potentialPrime);
            //increment k to analyze next
            _k++;
        } while (_foundPrimes.Count() < retrieve);
        return _foundPrimes;
    }
}

Now there are definitely more things I can do to make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without (a simple Stopwatch harness for reproducing these unprofiled timings is sketched at the end of this post). One important thing I wanted to call out, though, is the performance graph now: Notice anything odd? The %Processor time is above 100%. This is because more than one core is now in the operation. A better label for the chart, in my mind, would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class, with line details in source even with comments. Odd. Profiling Web Applications The last thing that I wanted to cover, and one that means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler for profiling web applications. In my solution, I have a simple MVC4 application set up with one page and a single input form that will output prime values as my WPF app did. Launching the profiler from Visual Studio as before, nothing is really different in the profiler window; however, I did receive a UAC prompt, without prior notice, for a Red Gate helper app to integrate with the web server. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: As you can see, there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree: That's right – ANTS will actually group method calls by get/post operations, so it is easier to find out what action/page is giving the largest problems… Pretty cool in my mind! Overview Overall, I think that Red Gate ANTS CLR Profiler has a lot to offer; however, I think it also has a long way to go.
3 Biggest Pros:
Ability to easily drill down from the time graph, to method calls, to source code
Wide variety of counters to choose from when profiling your application
Excellent integration/grouping of methods being called from web applications by request – BRILLIANT!
3 Biggest Cons:
Issue regarding line details in the source view
Nit pick – Processor time vs. Core time
Nit pick – Lack of full integration with Visual Studio
Ratings
Ease of Use (7/10) – I marked down here because of the problems with the line-level details and the extra work that entails, and the lack of better integration with Visual Studio.
Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do. Its large variety of performance counters is a definite plus!
Features (9/10) – Besides the real-time performance monitoring and the drill-downs that I've shown here, ANTS also has great integration with ADO.Net, with the ability to show database queries run by your application in the profiler. These, along with the line-level details, the web request grouping, Reflector integration, and the various options to customize your profiling session, create a great set of features, I think!
Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good. Their people are friendly, helpful, and happy!
UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find. With a little bit of poking around, you'll be optimizing Hello World in no time flat!
Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel. I WOULD recommend trying the application to see if it fits into your process, BUT remember there are still some kinks in it that will hopefully be worked out.
My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead! Please, if you do try the product, drop me a message and let me know what you think! I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox
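As promised above, here is the Stopwatch harness for reproducing the unprofiled timings (5.18 seconds for the naive code, 1.43 for SmartPrimes). The harness itself is my own sketch, assuming the IPrimes shape from the listings in this post:

using System;
using System.Diagnostics;
using System.Linq;

class TimingHarness
{
    static void Main()
    {
        Time("DumbPrimes", new DumbPrimes());
        Time("SmartPrimes", new SmartPrimes());
    }

    static void Time(string name, IPrimes implementation)
    {
        var watch = Stopwatch.StartNew();
        //Count() forces full enumeration so the work actually happens
        int count = implementation.GetPrimes(15000).Count();
        watch.Stop();
        Console.WriteLine("{0}: {1} primes in {2:0.00}s",
            name, count, watch.Elapsed.TotalSeconds);
    }
}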

    Read the article

  • Three Principles to Fix Your Broken Organization

    - by Michael Snow
Everyone's organization is broken in some capacity. For some, this is painfully visible both inside and outside their organization. For others, there are cracks noticed only by the keenest trained eyes, used to looking for problems in the midst of perfection. We all know that there is often incredible hope in the despair of chaos, and recognizing your problems is the first step on the road to recovery. Let us help you on your path to recovery. Join our very own Christian Finn this Thursday (11/15) as he guides you through three important principles you can take back to the office to start the mending process. (Above Image Credits: the BEST site on the web to make fun of our organizations and ourselves: http://www.despair.com/ ) His three principles are NOT "TeamWork", "Ignorance" and "Tradition", but – before jumping lower on this blog post to click and register for the upcoming webcast – I thought it would be a good opportunity to give you a little taste of what we have to offer beyond our fabulous array of On-Demand webcasts from the Social Business Thought Leader Webcast Series featuring Christian as the host. Instead, here's a snippet from our marketing team friends across the pond in Europe, where they recently hosted a Social Business Forum and featured Christian in a segment. Simple. Powerful. Proven. Face it, your organization is broken. Customers are not the focus they should be. Processes are running amok. Your intranet is a ghost town. And colleagues wonder why it's easier to get things done on the Web than at work. What's the solution? Join us for this Webcast. Christian Finn will talk about three simple, powerful, and proven principles for improving your organization through collaboration. Each principle will be illustrated by real-world examples. Discover: How to dramatically improve workplace collaboration Why improved employee engagement creates better business results What's the value of a fully engaged customer Time to Fix What's Broken Register now for this Webcast – the tenth in the Oracle Social Business Thought Leaders Series. Register Now Thurs., Nov. 15, 2012 10 a.m. PT / 1 p.m. ET Presented by: Christian Finn Senior Director, Product Management, Oracle

    Read the article

  • SQL SERVER – Display Datetime in Specific Format – SQL in Sixty Seconds #033 – Video

    - by pinaldave
A very common requirement for developers is to format datetime values to their specific needs. Every geographic location has different date format requirements: some countries follow the mm/dd/yy standard and some follow dd/mm/yy, so the developer's needs change as the geographic location changes. SQL Server has various functions to aid this requirement. There is the CAST function, which developers have been using for a long time, as well as the CONVERT function, which is an enhanced version of CAST. In the latest version, SQL Server 2012, a new FORMAT function is introduced as well. In this SQL in Sixty Seconds video we cover two different methods to display datetime in a specific format: 1) the CONVERT function and 2) the FORMAT function. Let me know what you think of this video. Here is the script used in the video:

-- http://blog.SQLAuthority.com
-- SQL Server 2000/2005/2008/2012 onwards
-- Datetime
SELECT CONVERT(VARCHAR(30),GETDATE()) AS DateConvert;
SELECT CONVERT(VARCHAR(30),GETDATE(),10) AS DateConvert;
SELECT CONVERT(VARCHAR(30),GETDATE(),110) AS DateConvert;
SELECT CONVERT(VARCHAR(30),GETDATE(),5) AS DateConvert;
SELECT CONVERT(VARCHAR(30),GETDATE(),105) AS DateConvert;
SELECT CONVERT(VARCHAR(30),GETDATE(),113) AS DateConvert;
SELECT CONVERT(VARCHAR(30),GETDATE(),114) AS DateConvert;
GO
-- SQL Server 2012 onwards
-- Various formats of Datetime
SELECT CONVERT(VARCHAR(30),GETDATE(),113) AS DateConvert;
SELECT FORMAT ( GETDATE(), 'dd mon yyyy HH:m:ss:mmm', 'en-US' ) AS DateConvert;
SELECT CONVERT(VARCHAR(30),GETDATE(),114) AS DateConvert;
SELECT FORMAT ( GETDATE(), 'HH:m:ss:mmm', 'en-US' ) AS DateConvert;
GO
-- Specific usage of the FORMAT function
SELECT FORMAT(GETDATE(), N'"Current Time is "dddd MMMM dd, yyyy', 'en-US') AS CurrentTimeString;

This video discusses CONVERT and FORMAT in a simple manner, but the subject is much deeper and there is a lot of information to cover along with it. I strongly suggest that you go over the related blog posts in the next section, as there is a wealth of knowledge discussed there. Related Tips in SQL in Sixty Seconds: Get Date and Time From Current DateTime – SQL in Sixty Seconds #025 Retrieve – Select Only Date Part From DateTime – Best Practice Get Time in Hour:Minute Format from a Datetime – Get Date Part Only from Datetime DATE and TIME in SQL Server 2008 Function to Round Up Time to Nearest Minutes Interval Get Date Time in Any Format – UDF – User Defined Functions Retrieve – Select Only Date Part From DateTime – Best Practice – Part 2 Difference Between DATETIME and DATETIME2 Saturday Fun Puzzle with SQL Server DATETIME2 and CAST What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)
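One related note: SQL Server 2012's FORMAT function hands its format string and culture off to .NET formatting, so the same patterns behave like DateTime.ToString in client code. A small illustrative sketch of my own (not from the video):

using System;
using System.Globalization;

class DateFormatDemo
{
    static void Main()
    {
        DateTime now = DateTime.Now;
        //rough client-side equivalent of:
        //SELECT FORMAT(GETDATE(), 'dddd MMMM dd, yyyy', 'en-US')
        Console.WriteLine(now.ToString("dddd MMMM dd, yyyy",
            CultureInfo.GetCultureInfo("en-US")));
        //the same value rendered for two cultures with different
        //day/month conventions, echoing the mm/dd/yy vs dd/mm/yy point above
        Console.WriteLine(now.ToString("d", CultureInfo.GetCultureInfo("en-US"))); // M/d/yyyy style
        Console.WriteLine(now.ToString("d", CultureInfo.GetCultureInfo("en-GB"))); // dd/MM/yyyy style
    }
}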

    Read the article

  • MAXDOP in SQL Azure

    - by Herve Roggero
In my search for a better understanding of the scalability options of SQL Azure, I stumbled on an interesting aspect: Query Hints in SQL Azure. More specifically, the MAXDOP hint. A few years ago I did a lot of analysis on this query hint (see my article on SQL Server Central: http://www.sqlservercentral.com/articles/Configuring/managingmaxdegreeofparallelism/1029/). Here is a quick synopsis of MAXDOP: it is a query hint you use when issuing a SQL statement that gives you control over how many processors SQL Server will use to execute the query. For complex queries with lots of I/O requirements, more CPUs can mean faster parallel searches. However, the impact on other running threads/processes can be drastic. If your query takes all available processors at 100% for 5 minutes... guess what... nothing else works. The bottom line is that more is not always better. The use of MAXDOP is more art than science... and a whole lot of testing; it depends on two things: the underlying hardware architecture and the application design. So there isn't a magic number that will work for everyone... except 1... :) Let me explain. The rules of engagement are different. SQL Azure is about sharing. Yep... you are forced to play nice with your neighbors. To achieve this goal SQL Azure sets the MAXDOP to 1 by default, and ignores the use of the MAXDOP hint altogether. That means that all your queries will use one and only one processor. It really isn't such a bad thing, however. Keep in mind that in some of the largest SQL Server implementations MAXDOP is usually also set to 1. It is a well known configuration setting for large scale implementations. The reason is precisely to prevent rogue statements (like a SELECT * FROM HISTORY) from bringing down your systems (like a report that should have been running on a different server in the first place) and to avoid the overhead generated by executing too many parallel queries, which could cause internal memory management nightmares for the host Operating System. In summary, forcing the MAXDOP to 1 in SQL Azure makes sense; it ensures that your database will continue to function normally even if one of the other tenants on the same server is running massive queries that would otherwise bring you down. Last but not least, keep in mind as well that when you test your database code for performance on-premise, make sure to set the DOP to 1 on your SQL Server databases to simulate SQL Azure conditions.
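For readers who have not used it, the hint itself is just an OPTION clause appended to the statement. A minimal sketch of issuing it from ADO.NET – the connection string and table name here are placeholders of mine; on-premise SQL Server honors the hint, while SQL Azure ignores it as described above:

using System;
using System.Data.SqlClient;

class MaxDopDemo
{
    static void Main()
    {
        //placeholder connection string - point this at your own server
        using (var connection = new SqlConnection(
            "Server=myServer;Database=myDb;Integrated Security=true"))
        using (var command = new SqlCommand(
            //cap this statement at a single scheduler
            "SELECT COUNT(*) FROM dbo.History OPTION (MAXDOP 1);", connection))
        {
            connection.Open();
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}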

    Read the article

  • PowerShell PowerPack Download

    - by BuckWoody
I read Jeffery Hicks’ article in this month’s Redmond Magazine on a new add-in for Windows PowerShell 2.0. It’s called the PowerShell Pack and it has some great new features that I plan to put into place on my production systems as soon as I have finished learning and testing them. You can download the pack here if you have PowerShell 2.0. I’m having a lot of fun with it, and I’ll blog about what I’m learning here in the near future, but you should check it out. The only issue I have with it right now is that you have to load a module and then use get-help to find out what it does, because I haven’t found a lot of other documentation so far. The most interesting modules for me are the ones that can run a command elevated (in PSUserTools), the task scheduling commands (in TaskScheduler) and the file system checks and tools (in FileSystem). There’s also a way to create simple Graphical User Interface panels (in ). I plan to string all these together to install a management set of tools on my SQL Server Express Instances, giving the user “task buttons” to back up or restore a database, add or delete users, and so on. Yes, I’ll be careful, and yes, I’ll make sure the user is allowed to do that. For now, I’m testing the download, but I thought I would share what I’m up to. If you have PowerShell 2.0 and you download the pack, let me know how you use it. Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Chunking a List - .NET vs Python

    - by Abhijeet Patel
Chunking a List As I mentioned last time, I'm knee deep in Python these days. I come from a statically typed background, so it's definitely a mental adjustment. List comprehensions are BIG in Python, and having worked with a few of them I can see why. Let's say we need to chunk a list into sublists of a specified size. Here is how we'd do it in C#:

static class Extensions
{
    public static IEnumerable<List<T>> Chunk<T>(this List<T> l, int chunkSize)
    {
        if (chunkSize <= 0)
        {
            throw new ArgumentException("chunkSize must be positive", "chunkSize");
        }
        for (int i = 0; i < l.Count; i += chunkSize)
        {
            yield return new List<T>(l.Skip(i).Take(chunkSize));
        }
    }
}

static void Main(string[] args)
{
    var l = new List<string> { "a", "b", "c", "d", "e", "f", "g" };

    foreach (var list in l.Chunk(7))
    {
        string str = list.Aggregate((s1, s2) => s1 + "," + s2);
        Console.WriteLine(str);
    }
}

A little wordy, but still pretty concise thanks to LINQ. On each iteration we skip ahead by chunkSize elements and yield out a new List of up to chunkSize elements. The Python implementation is a bit more terse:

def chunkIterable(iter, chunkSize):
    '''Chunks an iterable
       object into a list of the specified chunkSize
    '''
    assert hasattr(iter, "__iter__"), "iter is not an iterable"
    for i in xrange(0, len(iter), chunkSize):
        yield iter[i:i + chunkSize]

if __name__ == '__main__':
    l = ['a', 'b', 'c', 'd', 'e', 'f']
    generator = chunkIterable(l, 2)
    try:
        while(1):
            print generator.next()
    except StopIteration:
        pass

xrange generates elements in the specified range, taking in a seed and returning a generator, which can be used in a for loop (much like using a C# iterator in a foreach loop). Since chunkIterable has a yield statement, it turns this method into a generator as well. iter[i:i + chunkSize] essentially slices the list based on the current iteration index and chunkSize and creates a new list that we yield out to the caller one at a time. A generator, much like an iterator, is a state machine, and each subsequent call to it remembers the state at which the last call left off and resumes execution from that point. The caveat to keep in mind is that since variables are not explicitly typed, we need to ensure that the object passed in is iterable, using hasattr(iter, "__iter__") (strictly speaking, the len() call and slicing also require a sequence, not just any iterable). This way we can perform chunking on any sequence-like object, very similar to accepting an IEnumerable in the .NET land.
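One design note on the C# version above: Skip(i) walks the list from the front on every chunk, so the extension does roughly O(n²/chunkSize) work overall. Since the receiver is already a List<T>, a variation using GetRange copies each slice directly. This is my own sketch, not from the original post:

using System;
using System.Collections.Generic;

static class ChunkExtensions
{
    public static IEnumerable<List<T>> ChunkFast<T>(this List<T> l, int chunkSize)
    {
        if (chunkSize <= 0)
        {
            throw new ArgumentException("chunkSize must be positive", "chunkSize");
        }
        for (int i = 0; i < l.Count; i += chunkSize)
        {
            //copy the slice directly instead of re-scanning from the front
            yield return l.GetRange(i, Math.Min(chunkSize, l.Count - i));
        }
    }
}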

    Read the article

  • SQL SERVER – Maximize Database Performance with DB Optimizer – SQL in Sixty Seconds #054

    - by Pinal Dave
Performance tuning is an interesting concept, and everybody evaluates it differently. Every developer and DBA has a different opinion about how one should do performance tuning. I personally believe performance tuning is a three-step process: Understanding the Query, Identifying the Bottleneck, and Implementing the Fix. When we are working with a large database application and it suddenly starts to slow down, we are all under stress about how we can get the database back to normal speed. Most of the time we do not have enough time to do a deep analysis of what is going wrong, as well as what will fix the problem. Our primary goal at that time is just to fix the database problem as fast as we can. However, one very important thing to keep in mind is that a quick fix should not create any further issues with other parts of the system. When time is of the essence and we still want a deep analysis of our system to give us the best solution, we often tend to make mistakes. Sometimes we make mistakes because we do not have the proper time to analyze the entire system. Here is what I do when I face such a situation – I take the help of DB Optimizer. It is a fantastic tool and does superlative performance tuning of the system. Every time I talk about a performance tuning tool, the initial reaction of people is that they do not want to try it, as they believe it requires a lot of learning before they can use it. That is absolutely not true in the case of DB Optimizer. It is a very easy to use and intuitive tool; one can get going with the product in no time. Here is a quick video I have built where I demonstrate how we can identify what index is missing for a query and how we can quickly create the index. All three steps of the query tuning are completed in less than 60 seconds. If you are into performance tuning and query optimization, you should download DB Optimizer and give it a go. Let us see the same concept in the following SQL in Sixty Seconds Video: You can download DB Optimizer and reproduce the same Sixty Seconds experience. Related Tips in SQL in Sixty Seconds: Performance Tuning – Part 1 of 2 – Getting Started and Configuration Performance Tuning – Part 2 of 2 – Analysis, Detection, Tuning and Optimizing What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)
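DB Optimizer does the detection for you, but as a rough, tool-free approximation of the "identify the missing index" step, SQL Server also exposes its own missing-index suggestions through DMVs. This is a sketch of my own, not from the video; the connection string is a placeholder, and DMV suggestions should always be sanity-checked before creating any index:

using System;
using System.Data.SqlClient;

class MissingIndexCheck
{
    static void Main()
    {
        //rank SQL Server's missing-index suggestions by rough expected benefit
        const string query = @"
SELECT TOP 5 d.statement, d.equality_columns, d.included_columns,
       s.user_seeks, s.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;";
        using (var conn = new SqlConnection("Server=myServer;Database=myDb;Integrated Security=true"))
        using (var cmd = new SqlCommand(query, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0} | {1} | seeks={2}", reader[0], reader[1], reader[3]);
            }
        }
    }
}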

    Read the article

  • ASP.NET MVC 2 Model Binding for a Collection

    - by nmarun
Yes, yet another of my posts on Model Binding (the previous one is here), but this one uses features presented in MVC 2. How did I get to writing this blog? Well, I'm on a project where we're doing some MVC things for a shopping cart. Let me show you what I was working with. Below are my model classes:

1: public class Product
2: {
3:     public int Id { get; set; }
4:     public string Name { get; set; }
5:     public int Quantity { get; set; }
6:     public decimal UnitPrice { get; set; }
7: }
8:
9: public class Totals
10: {
11:     public decimal SubTotal { get; set; }
12:     public decimal Tax { get; set; }
13:     public decimal Total { get; set; }
14: }
15:
16: public class Basket
17: {
18:     public List<Product> Products { get; set; }
19:     public Totals Totals { get; set; }
20: }

The view looks as below:

1: <h2>Shopping Cart</h2>
2:
3: <% using(Html.BeginForm()) { %>
4:
5: <h3>Products</h3>
6: <% for (int i = 0; i < Model.Products.Count; i++)
7:    { %>
8: <div style="width: 100px;float:left;">Id</div>
9: <div style="width: 100px;float:left;">
10:     <%= Html.TextBox("ID", Model.Products[i].Id) %>
11: </div>
12: <div style="clear:both;"></div>
13: <div style="width: 100px;float:left;">Name</div>
14: <div style="width: 100px;float:left;">
15:     <%= Html.TextBox("Name", Model.Products[i].Name) %>
16: </div>
17: <div style="clear:both;"></div>
18: <div style="width: 100px;float:left;">Quantity</div>
19: <div style="width: 100px;float:left;">
20:     <%= Html.TextBox("Quantity", Model.Products[i].Quantity)%>
21: </div>
22: <div style="clear:both;"></div>
23: <div style="width: 100px;float:left;">Unit Price</div>
24: <div style="width: 100px;float:left;">
25:     <%= Html.TextBox("UnitPrice", Model.Products[i].UnitPrice)%>
26: </div>
27: <div style="clear:both;"><hr /></div>
28: <% } %>
29:
30: <h3>Totals</h3>
31: <div style="width: 100px;float:left;">Sub Total</div>
32: <div style="width: 100px;float:left;">
33:     <%= Html.TextBox("SubTotal", Model.Totals.SubTotal)%>
34: </div>
35: <div style="clear:both;"></div>
36: <div style="width: 100px;float:left;">Tax</div>
37: <div style="width: 100px;float:left;">
38:     <%= Html.TextBox("Tax", Model.Totals.Tax)%>
39: </div>
40: <div style="clear:both;"></div>
41: <div style="width: 100px;float:left;">Total</div>
42: <div style="width: 100px;float:left;">
43:     <%= Html.TextBox("Total", Model.Totals.Total)%>
44: </div>
45: <div style="clear:both;"></div>
46: <p />
47: <input type="submit" name="Submit" value="Submit" />
48: <% } %>
Nothing fancy, just a bunch of divs containing textboxes and a submit button. Just make note that the textboxes have the same name as the property they are going to display. Yea, yea, I know. I'm displaying unit price as a textbox instead of a label, but that's beside the point (and trust me, this will not be how it'll look on the production site!!). The way my controller works is that initially two dummy products are added to the basket object, and the Totals are calculated based on what products were added in what quantities and their respective unit prices. So the page loads in edit mode, where the user can change the quantity and hit the submit button. In the 'post' version of the action method, the Totals get recalculated and the new total will be displayed on the screen. Here's the code:

1: public ActionResult Index()
2: {
3:     Product product1 = new Product
4:     {
5:         Id = 1,
6:         Name = "Product 1",
7:         Quantity = 2,
8:         UnitPrice = 200m
9:     };
10:
11:     Product product2 = new Product
12:     {
13:         Id = 2,
14:         Name = "Product 2",
15:         Quantity = 1,
16:         UnitPrice = 150m
17:     };
18:
19:     List<Product> products = new List<Product> { product1, product2 };
20:
21:     Basket basket = new Basket
22:     {
23:         Products = products,
24:         Totals = ComputeTotals(products)
25:     };
26:     return View(basket);
27: }
28:
29: [HttpPost]
30: public ActionResult Index(Basket basket)
31: {
32:     basket.Totals = ComputeTotals(basket.Products);
33:     return View(basket);
34: }

That's that. Now I run the app, and I see two products with the totals section below them. I look at the view source, and I see that the input controls have the right ID, the right name, and the right value as well:

1: <input id="ID" name="ID" type="text" value="1" />
2: <input id="Name" name="Name" type="text" value="Product 1" />
3: ...
4: <input id="ID" name="ID" type="text" value="2" />
5: <input id="Name" name="Name" type="text" value="Product 2" />

So, just as a regular user would do, I change the quantity value of one of the products and hit the submit button.
The ‘post’ version of the Index method gets called, and I had put a break-point on line 32 in the above snippet. When I hovered my mouse over the ‘basket’ object, happily assuming that the object would be all bound and ready for use, I was surprised to see that both basket.Products and basket.Totals were null. Huh? A little research and I found out that the reason the DefaultModelBinder could not do its job is a naming mismatch on the input controls. What I mean is that when you have to bind to a custom .NET type, you need more than just the property name: you need to pass a qualified name to the name attribute of the input control. I modified my view, and the emitted code looked as below:

1: <input id="Product_Name" name="Product.Name" type="text" value="Product 1" />
2: ...
3: <input id="Product_Name" name="Product.Name" type="text" value="Product 2" />
4: ...
5: <input id="Totals_SubTotal" name="Totals.SubTotal" type="text" value="550" />

Now, I update the quantity and hit the submit button, and I see that the Totals object is populated, but the Products list is still null. Once again I went: ‘Hmm… time for more research’. I found out that the way to do this is to provide the name as:

1: <%= Html.TextBox(string.Format("Products[{0}].ID", i), Model.Products[i].Id) %>
2: <!-- this will be rendered as -->
3: <input id="Products_0__ID" name="Products[0].ID" type="text" value="1" />

It was only now that I was able to see both the products and the totals being properly bound in the ‘post’ action method. Somehow, I feel this is a kinda ‘clunky’ way of doing things. It seems the people at MS felt the same way and offered us a much cleaner way to solve this issue. The simple solution is that instead of using a TextBox, we can use either a TextBoxFor or an EditorFor helper method. These directly spit out the name of the input property as ‘Products[0].ID’ and so on. Cool, right? I totally fell for this and changed my UI to use the EditorFor helper method. At this point, I ran the application, changed the quantity field and pressed the submit button. Of course, my basket object parameter in my action method was correctly bound after these changes. I let the app complete the rest of the lines in the action method.
When the page finally rendered, I did see that the quantity had changed to what I entered before the post. But wait a minute – the totals section did not reflect the changes and showed the old values. My status: COMPLETELY PUZZLED! Just to recap, this is what my ‘post’ Index method looked like:

1: [HttpPost]
2: public ActionResult Index(Basket basket)
3: {
4:     basket.Totals = ComputeTotals(basket.Products);
5:     return View(basket);
6: }

A careful debug confirmed that basket.Products[0].Quantity showed the updated value and that the ComputeTotals() method returned the correct totals. But still, when I passed this basket object, it ended up showing only the old totals values. I began playing a bit with the code, and my first guess was that the input controls got their values from the ModelState object. For those who don’t know, the ModelState is a temporary storage area that ASP.NET MVC uses to retain incoming attempted values plus binding and validation errors. Also relevant is the fact that input controls populate their values using data taken from:
1. Previously attempted values recorded in ModelState["name"].Value.AttemptedValue
2. Explicitly provided values (<%= Html.TextBox("name", "Some value") %>)
3. ViewData, by calling ViewData.Eval("name")
FYI: the ViewData dictionary takes precedence over ViewData’s Model properties – read more here. These two indicators led to my guess. It took me quite some time, but finally I hit this post where Brad brilliantly explains why this is the preferred behavior. My guess was right, and I accordingly modified my code as follows:

1: [HttpPost]
2: public ActionResult Index(Basket basket)
3: {
4:     // read the following posts to see why the ModelState
5:     // needs to be cleared before passing it to the view
6:     // http://forums.asp.net/t/1535846.aspx
7:     // http://forums.asp.net/p/1527149/3687407.aspx
8:     if (ModelState.IsValid)
9:     {
10:         ModelState.Clear();
11:     }
12:
13:     basket.Totals = ComputeTotals(basket.Products);
14:     return View(basket);
15: }

What this does is that, in the case where your ModelState IS valid, it clears the dictionary. This enables the values to be read from the model directly and not from the ModelState.
So the verdict is this: if you need to pass other parameters (like HTML attributes and the like) to your input control, use

1: <%= Html.TextBox(string.Format("Products[{0}].ID", i), Model.Products[i].Id) %>

since with EditorFor there is no direct and simple way of passing this information to the input control. If you don’t have to pass any such ‘extra’ piece of information to the control, then go the EditorFor way. The code used in the post can be found here.
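For completeness, the post describes switching the view to EditorFor but never shows that markup; here is a minimal sketch of my own reconstruction, assuming the same Basket model as above:

<% for (int i = 0; i < Model.Products.Count; i++)
   { %>
    <%-- EditorFor emits name="Products[0].Id", name="Products[0].Name", etc.
         automatically, so the DefaultModelBinder can rebuild the collection --%>
    <%= Html.EditorFor(m => m.Products[i].Id) %>
    <%= Html.EditorFor(m => m.Products[i].Name) %>
    <%= Html.EditorFor(m => m.Products[i].Quantity) %>
    <%= Html.EditorFor(m => m.Products[i].UnitPrice) %>
<% } %>
<%= Html.EditorFor(m => m.Totals.SubTotal) %>
<%= Html.EditorFor(m => m.Totals.Tax) %>
<%= Html.EditorFor(m => m.Totals.Total) %>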

    Read the article

< Previous Page | 472 473 474 475 476 477 478 479 480 481 482 483  | Next Page >