Search Results

Search found 4879 results on 196 pages for 'geeks'.


  • Using T4 to generate Configuration classes

    - by Justin Hoffman
    I wanted to try to use T4 to read a web.config and generate all of the appSettings and connectionStrings as properties of a class. I elected in this template only to output appSettings and connectionStrings, but you can see it would be easily adapted for app-specific settings, bindings, etc. This allows for quick access to config values as well as removing the potential for typos when accessing values from the ConfigurationManager. One caveat: a developer would need to remember to run the .tt file after adding an entry to the web.config. However, one would quickly notice when trying to access the property from the generated class (it wouldn't be there). Additionally, there are other options as noted here.

    The first step was to create the .tt file. Note that this is a basic example; it could be extended even further, I'm sure. In this example I just manually input the path to the web.config file.

        <#@ template debug="false" hostspecific="true" language="C#" #>
        <#@ output extension=".cs" #>
        <#@ assembly name="System.Configuration" #>
        <#@ assembly name="System.Xml" #>
        <#@ assembly name="System.Xml.Linq" #>
        <#@ assembly name="System.Net" #>
        <#@ assembly name="System" #>
        <#@ import namespace="System.Configuration" #>
        <#@ import namespace="System.Xml" #>
        <#@ import namespace="System.Net" #>
        <#@ import namespace="Microsoft.VisualStudio.TextTemplating" #>
        <#@ import namespace="System.Xml.Linq" #>
        using System;
        using System.Configuration;
        using System.Xml;
        using System.Xml.Linq;
        using System.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
        <#
            var xDocument = XDocument.Load(@"G:\MySolution\MyProject\Web.config");
            var results = xDocument.Descendants("appSettings");
            const string key = "key";
            const string name = "name";
            foreach (var xElement in results.Descendants())
            {#>
                public string <#= xElement.Attribute(key).Value #> { get { return ConfigurationManager.AppSettings[<#= string.Format("{0}{1}{2}", "\"", xElement.Attribute(key).Value, "\"") #>]; } }
        <#  }#>
        <#
            var connectionStrings = xDocument.Descendants("connectionStrings");
            foreach (var connString in connectionStrings.Descendants())
            {#>
                public string <#= connString.Attribute(name).Value #> { get { return ConfigurationManager.ConnectionStrings[<#= string.Format("{0}{1}{2}", "\"", connString.Attribute(name).Value, "\"") #>].ConnectionString; } }
        <#  }#>
            }
        }

    The resulting .cs file:

        using System;
        using System.Configuration;
        using System.Xml;
        using System.Xml.Linq;
        using System.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
                public string ClientValidationEnabled { get { return ConfigurationManager.AppSettings["ClientValidationEnabled"]; } }
                public string UnobtrusiveJavaScriptEnabled { get { return ConfigurationManager.AppSettings["UnobtrusiveJavaScriptEnabled"]; } }
                public string ServiceUri { get { return ConfigurationManager.AppSettings["ServiceUri"]; } }
                public string TestConnection { get { return ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString; } }
                public string SecondTestConnection { get { return ConfigurationManager.ConnectionStrings["SecondTestConnection"].ConnectionString; } }
            }
        }

    Next, I extended the partial class for easy access to the Configuration. However, you could just use the generated class file itself.

        using System;
        using System.Linq;
        using System.Xml.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
                private static readonly Configurator Instance = new Configurator();
                public static Configurator For { get { return Instance; } }
            }
        }

    Finally, in my example, I used the Configurator class like so:

        [TestMethod]
        public void Test_Web_Config()
        {
            var result = Configurator.For.ServiceUri;
            Assert.AreEqual(result, "http://localhost:30237/Service1/");
        }

  • ASP.NET Membership Password Hash -- .NET 3.5 to .NET 4 Upgrade Surprise!

    - by David Hoerster
    I'm in the process of evaluating how my team will upgrade our product from .NET 3.5 SP1 to .NET 4. I expected the upgrade to be pretty smooth with very few, if any, upgrade issues. To my delight, the upgrade wizard said that everything upgraded without a problem. I thought I was home free, until I decided to build and run the application. A big problem was staring me in the face: I couldn't log on.

    Our product is using a custom ASP.NET membership provider, but essentially it's a modified SqlMembershipProvider with some additional properties. And my login was failing during the OnAuthenticate event handler of my ASP.NET Login control, right where it was calling my provider's ValidateUser method. After a little digging, it turns out that the password hash that the membership provider was using to compare against the stored password hash in the membership database tables was different. I compared the password hash from the .NET 4 code line, and it was a different generated hash than my .NET 3.5 code line. (Tip: when upgrading, always keep a valid debug copy of your app handy in case you have to step through a lot of code.)

    So it was a strange situation, but at least I knew what the problem was. Now the question was, "Why was it happening?" Turns out that a breaking change in .NET 4 is that the default hash algorithm changed to SHA256. Hey, that's great: a stronger hashing algorithm. But what do I do with all the hashed passwords in my database that were created using SHA1? Well, you can make two quick changes to your app's web.config and everything will be OK. Basically, you need to override the default HashAlgorithmType property of your membership provider. Here are the two places to do that:

    1. At the beginning of your system.web element, add the following machineKey element:

        <system.web>
          <machineKey validation="SHA1" />
          ...
        </system.web>

    2. On your membership element under system.web, add the following hashAlgorithmType attribute:

        <system.web>
          <membership defaultProvider="myMembership" hashAlgorithmType="SHA1">
          ...
        </system.web>

    After that, you should be good to go! Hope this helps.

  • How Estimates Became Quotes

    - by Lee Brandt
    It's our fault. Well, not completely, but we haven't helped the situation any. All of what follows comes from my own experiences which, from talking to lots of other developers about it, seem to be pretty much par for the course.

    Where We Started

    When we first started estimating, we estimated pretty clearly. We would try to imagine something we'd done that was similar to the project being estimated, and we'd toss it about in our heads a bit and see how much bigger or smaller we thought this new thing was, and add or subtract accordingly. We wouldn't spend too much time on it, because we wanted to get to writing the software. Eventually, we'd come across some huge problem that there was no way we could have known about ahead of time. Either we didn't see this thing, or we didn't realize that this particular version of a problem would be so... problematic. We usually call this "not knowing what we don't know". It's unavoidable. We just can't know. Until we wade in and start putting some code together, there are just some things we won't know... and some things we don't even know that we don't know. Y'know?

    So what happens? We go over budget. Project managers scream and dance the dance of the stressed-out project manager, and there is nothing we can do (or could have done) about it. We didn't know. We thought about it for a bit and we didn't see this herculean task sitting in the middle of our nice quiet project, and it has bitten us in the rear end. We now know how to handle this in the future, though. We will take some more time to pick around the requirements and discover all those things we don't know. We'll do some prototyping, we'll read some blogs about similar projects, we'll really grill the customer with questions during the requirements-gathering phase. We'll keep asking "what else?" until they shove us down the stairs. We'll take our time and uncover it all.

    We Learned, But Good

    The next time comes, and you know what happens? We do it. We grill the customer for weeks and prototype and read and research, and we estimate everything down to the last button on the last form. Know what that gets us? It gets us three months of wasted time, and our estimate will still be off. Possibly off by a factor of four. WTF, mate? No way we could be surprised by something! We uncovered every particle. We turned every stone. How is it we still came across unknowns? Because we STILL didn't know what we didn't know. How could we? We didn't know to ask.

    The worst part is, we've now convinced the product owner that this is NOT an estimate. It is a solid number based on massive research and an endless number of questions that they answered. There is absolutely no way you don't know everything there is to know about this project now. No way there is anything you haven't uncovered. And their faith in that "Esti-Quote" goes through the roof. When the project goes over this time, they might even begin to question whether or not you know what you're doing. Who could blame them? You drilled them for weeks about every little thing, and when they complained about all the questions, you told them you wanted to uncover everything so there would be no surprises. So we set them up to fail.

    Guess, Then Plan

    We had a chance. At the beginning we could have just said, "That's just a gut-feeling estimate, based on my past experience with similar projects. There could still be surprises." If we spend SOME time doing SOME discovery and then bounce that against our own past experiences, we can come up with a fairly healthy estimate. We can then help the product owner understand that an estimate is a guess. Sure, it's an educated guess, but it is still a guess. If we get it right it will be almost completely luck. Then, we help them to plan the development by taking that guess (yes, they still need the guess for planning purposes) and start measuring early and often to see if we still think we are right. We should adjust the estimate and alert the product owner as soon as we see problems (bad news does not age well), and we should be able to see any problems immediately if we are constantly measuring our pace.

    In lean software, we start with that guess and begin measuring cycle times immediately. Then we can make projections based on those cycle times and compare them to our estimate. This constant feedback is the best way to ensure that there are no surprises at the END of the project. There will still be surprises, but we'll see them sooner and have a better understanding of how they will affect our overall timeline.

    What do you think?

  • Introducing Deep Fried Devcast

    - by Matt Christian
    I've been working on a new podcast for the game development community called the Deep Fried Devcast. Currently we are in pre-production but should have some episodes up in the near future. Here is a quick FAQ about the show:

    What is the Deep Fried Devcast?
    The Deep Fried Devcast is a bi-weekly show all about game development. The show will feature developer interviews, a focus on the technical aspects of game development (programming, technical design), the business of team game development (time management, project management), and other areas focused around the actual development of games.

    Wait, no game design? No game discussions?!
    Calm down, calm down. Although the focus of the podcast is on the technical aspects of game dev, there will be episodes and content focused on all areas of the gaming industry, including discussion on design, story, recent game releases, games we've been playing, etc. Anything could show up in the Deep Fried Devcast and nothing is off limits.

    How can I help?
    We're always looking for new content ideas, emails, and anything you want to send us (within some kind of reason!). You can even be a guest host if you want! Email us at: deepfrieddevcast [AT] gmail [DOT] com

    Where's the podcast?!
    We're still recording it! Don't worry, it will be up soon. Keep an eye on www.deepfrieddevcast.com for the latest updates (that will be up soon too!).

  • Fixing some Visual Studio RC install issues

    - by terje
    The Visual Studio RC has shown some install issues in some cases, particularly for those who upgrade from the VS 11 Beta. I have listed the fixes known now below, and will update if there are more issues. Note that a repair will not fix these issues, and a Windows restore and subsequent reinstall may not fix them either; the system seems to remember too much. That was the case for me, at least. The fixes below, however, cure these issues.

    1. The Team Explorer Build node doesn't work

    You get an error saying System.TypeLoadException. To solve this, do as follows:

    1. Open a command prompt as administrator.
    2. Go to your program files directory for VS 2012 and down to the extension folder, like: C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer
    3. Run: gacutil /if Microsoft.TeamFoundation.Build.Controls.dll

    2. The SQL Editor gives a loading error

    When you start up VS 2012 RC you get a loading error message. The same happens if you try to go from the menu to SQL/Transact-SQL Editor/New Query. To solve this, do as follows:

    1. Open Control Panel/Programs and Features.
    2. Locate the "Microsoft SQL Server 2012 Data-Tier App Framework" entries. (Note: you might find up to 4 such instances. The ones with version numbers ending in 55 are from the SQL 2012 RC, the ones ending in 60 are from the SQL 2012 RTM. There are two of each, one for x86 and one for x64; which is which, no one knows.)
    3. Right-click each of them, and select Repair. (It would be nice if someone with this issue tried only the latest RTM ones to see if that clears the error, and commented back to this post. I am out of non-functioning VSs.)

    3. Errors referring to some extension

    You get errors referring to some extension that can't be loaded or can't be found. Check the activity log (see below) and verify there. If you see yellow collision warnings there, the fix here should solve those too. To solve these:

    1. Open a Visual Studio 2012 command prompt.
    2. Run: devenv /resetsettings

    How to check for errors using the log

    Do as follows to get to the activity log for Visual Studio 2012 RC:

    1. Open a Visual Studio 2012 command prompt.
    2. Run: devenv /log (this starts up Visual Studio).
    3. Go to %appdata%\Microsoft\VisualStudio\11.0
    4. Double-click the file named ActivityLog.xml. It will open in your browser, formatted using the xslt in the same directory.
    5. Look for items marked in red.

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop I used the BizTalk Server Best Practices Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs, open SQL Server Management Studio, expand SQL Server Agent > Jobs and double-click on the appropriate job. Select Steps and then edit the appropriate entries.

    Backup BizTalk Server (BizTalkMgmtDb)

    This job is comprised of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory.

    BackupFull:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */

    The frequency here is set/left as daily, and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC). For example:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */, 0, 22

    MarkAndBackupLog:

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */

    You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time:

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */, 1

    ClearBackupHistory:

        exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7

    This will clear out the instances in the MarkLog table older than 7 days.

    DTA Purge and Archive (BizTalkDTADb)

    This job is comprised of a single step, Archive and Purge:

        exec dtasp_BackupAndPurgeTrackingDatabase
            0,    --@nLiveHours tinyint,
            1,    --@nLiveDays tinyint = 0,
            30,   --@nHardDeleteDays tinyint = 0,
            null, --@nvcFolder nvarchar(1024) = null,
            null, --@nvcValidatingServer sysname = null,
            0     --@fForceBackup int = 0

    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than the HardDeleteDays will be deleted; this means that those long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down, while allowing the instance to continue, thus preventing the DTA database from growing indefinitely. This should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if this is null the job will not run, failing with the error (visible in SQL Server Management Studio under Job Activity Monitor > View History, Archive and Purge step):

        DTA Purge and Archive (BizTalkDTADb) Job failed. The @nvcFolder parameter cannot be null.

    How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0

    On a live server you may want to adjust these figures:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

  • Why We Need UX Designers

    - by Tim Murphy
    Ok, so maybe this is really why I need UX designers. While I have always had an interest in photography and can appreciate a well-designed user interface, putting one together is an entirely different endeavor. Being color blind doesn't help, but coming up with ideas is probably the biggest portion of the issue. I can spot things that just don't look like they work right, but what will?

    UX design is an area where most companies do not spend much, if any, resources. As they say, you only get one chance to make a first impression, and a poorly designed site or application is a bad first impression. Given that, you would think that companies would invest more in appearance and usability.

    One of two things needs to be done to rectify this issue. Either we need to start educating our developers on user experience and design, or we need to start finding ways to subsidize putting full-time designers on our project teams. Maybe it should be a time-share type of situation, but something needs to be done. As architects we need to impress on our project stakeholders the importance of user experience and why it should be part of the budget. If they hear it often enough, eventually they may present it to you as their own idea.

    del.icio.us Tags: User Experience,UX,Application Design

  • New Horizons arrives at Neptune on a 25-year anniversary!

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/25/new-horizons-arrives-at-neptune-on-a-25-year-anniversary.aspx

    Today the New Horizons probe to the planet Pluto crosses the orbit of the planet Neptune. By a "cosmic coincidence", this is exactly 25 years since Voyager 2 took close-up pictures of Neptune and its satellite Triton. For more see http://pluto.jhuapl.edu/mission/passingplanets/passingPlanets_current.php

    New Horizons' current position is shown at http://pluto.jhuapl.edu/mission/whereis_nh.php

  • Installation experiences with NDepend under Win7/64 with restricted user permissions

    - by Marko Apfel
    Today Patrick gave me a new license for his static code analysis tool NDepend for my fresh machine with Win7/64. This platform is new for me, so some things are different from Win XP, and maybe some of them are not yet well enough understood by me, so I stepped into some traps. Here are my notes on getting NDepend running:

    - Downloaded NDepend Professional Edition from http://www.ndepend.com/NDependDownload.aspx
    - Extracted it to c:\program files (x86)\NDepend
    - Started NDepend.Install.VisualStudioAddin.exe; this failed with an error dialog. Okay - sounds plausible.
    - Copied NDependProLicense.xml to this folder.
    - The next try with NDepend.Install.VisualStudioAddin.exe opened the integration dialog.
    - Registering in Visual Studio failed with an error.
    - Manually unblocked the file as described (first solution hint).
    - And here comes my largest understanding problem: after unblocking this file and closing the dialog, the next opening showed the blocking again. Why? So the same error popped up during integration.
    - Okay - tried the second solution hint, copying the folders. Copied everything to a fully accessible folder under c:\temp\.
    - Now the installation worked and looked good.
    - Copied the folders back to c:\program files (x86)\NDepend.
    - Starting Visual Studio failed.
    - Okay - copied the folder to a private application folder, c:\users\apf\My Applications\NDepend.
    - Installed again.
    - Now Visual Studio runs and NDepend is integrated.

    Nevertheless, although my machine is only used by me, I prefer "all user" installations. The described way sadly works only for my account.

  • Thank you South Florida for a successful SPSouthFLA

    - by Leonard Mwangi
    I wanted to officially thank the organizers, speakers, volunteers and attendees of SharePoint Saturday South Florida. For the first event in South Florida, the reception was phenomenal, from the keynote by Joel Oleson to sessions by well-renowned speakers like John Holliday, Randy Drisgill, Richard Harbridge, Ameet Phadnis, Fabian Williams, Chris McNulty and Jaime Velez, to organizers like Michael Hinckley, amongst others. With my Business Intelligence (BI) presentation being on the last track of the day, I spent very quality time networking with these great guys and getting the insider scoop on the international SharePoint community from Joel and his son, which was mesmerizing. I had a very active audience, to the point where we couldn't accommodate all the content within the allocated hour, because they were very engaged and wanted a deep-dive session on new features like PowerPivot, enhancements to PerformancePoint, Excel Services and others, in order to understand the business value and how SharePoint 2010 is making self-service BI become a reality. These community events allow the attendees to experience technology first hand and network with MVPs, authors and experts, providing high-quality educational sessions, usually for free, which is a reason to attend. I have made the slides for my session available for download for those interested: http://goo.gl/VaH5x

  • Silverlight Cream Top Posted Authors July to December, 2010

    - by Dave Campbell
    It's past the first of January, and it's now time to recognize devs that have a large number of posts in Silverlight Cream.

    Ground Rules:
    - I pick what posts are on the blog.
    - Only posts that go in the database are included.
    - The author has to appear in SC at least 4 of the 6 months considered.
    - I averaged the monthly posts and am only showing authors with an average greater than 1.

    Here are the Top Posted Authors at Silverlight Cream for July 1, 2010 through December 31, 2010 (the list appeared as an image in the original post). It is my intention to post a new list sometime shortly after the 1st of every month to recognize the top posted in the previous 6 months, so next up is January 1!

    Some other metrics for Silverlight Cream:
    - At the time of this posting there are 7304 articles aggregated and searchable by partial author, partial title, keywords (in the synopsis), or partial URL. There are also 118 tags by which the articles can be searched. This is an increase of 317 posts over last month.
    - At the time of this posting there are 783 articles tagged wp7dev. This is an increase of 119 posts over last month, or over a third of the posts added.

    Stay in the 'Light!

  • Book Review: Brownfield Application Development in .NET

    - by DotNetBlues
    I recently finished reading the book Brownfield Application Development in .NET by Kyle Baley and Donald Belcham. The book is available from Manning. First off, let me say that I'm a huge fan of Manning as a publisher. I've found their books to be top-quality, overall. As a Kindle owner, I also appreciate getting an ebook copy along with the dead-tree copy. I find ebooks to be much more convenient to read, but hard copies are easier to reference.

    The book covers, surprisingly enough, working with brownfield applications. Which is well and good, if that term has meaning to you. It didn't for me. Without retreading a chunk of the first chapter, the authors break code bases into three broad categories: greenfield, brownfield, and legacy. Greenfield is, essentially, new development that hasn't had time to rust and is (hopefully) being approached with some discipline. Legacy applications are those that are more or less stable and functional, that do not expect to see a lot of work done to them, and are more likely to be replaced than reworked.

    Brownfield code is the gray (brown?) area between the two, and the authors argue, quite effectively, that it is the most likely state for an application to be in. Brownfield code has, in some way, been allowed to tarnish around the edges and can be difficult to work with. Although I hadn't realized it, most of the code I've worked on has been brownfield. Sometimes, there's talk of scrapping and starting over. Sometimes, the team dismisses increased discipline as ivory tower nonsense. And, sometimes, I've been the ignorant culprit vexing my future self.

    The book is broken into two major sections, plus an introduction chapter and an appendix. The first section covers what the authors refer to as "The Ecosystem", which consists of version control, build and integration, testing, metrics, and defect management. The second section is on actually writing code for brownfield applications and discusses object-oriented principles, architecture, external dependencies, and, of course, how to deal with these when coming into an existing code base.

    The ecosystem section is just shy of 140 pages long and brings some real meat to the matter. The focus on "pain points" immediately sets the tone as problem-solution, rather than academic. The authors also approach some of the topics from a different angle than some essays I've read on similar topics. For example, the chapter on automated testing is on just that -- automated testing. It's all well and good to criticize a project as conflating integration tests with unit tests, but it really doesn't make anyone's life better. The discussion on testing is more focused on the "right" level of testing for existing projects. Sometimes, an integration test is the best you can do without gutting a section of functional code. Even if you can sell other developers and/or management on doing so, it doesn't actually provide benefit to your customers to rewrite code that works. This isn't to say the authors encourage sloppy coding. Far from it. Just that they point out the wisdom of ignoring the sleeping bear until after you deal with the snarling wolf.

    The other sections take a similarly real-world, workable approach to the pain points they address. As the section moves from technical solutions like version control and continuous integration (CI) to the softer, process issues of metrics and defect tracking, the authors begin to gently suggest moving toward a zero defect count.
    While that really sounds like an unreasonable goal for a lot of ongoing projects, it's quite apparent that the authors have first-hand experience with taming some gruesome projects. The suggestions are grounded and workable, and the difficulty of some situations is explicitly acknowledged.

    I have to admit that I started getting bored by the end of the ecosystem section. No matter how valuable I think a good project manager or business analyst is to a successful ALM, at the end of the day, I'm a gear-head. Also, while I agreed with a lot of the ecosystem ideas, in theory, I didn't necessarily feel that a lot of the single-developer projects that I'm often involved in really needed that level of rigor. It's only after reading the sidebars and commentary in the coding section that I had the context for the arguments made in favor of a strong ecosystem supporting the development process. That isn't to say that I didn't support good product management -- indeed, I've probably pushed too hard, on occasion, for a strong ALM outside of just development. This book gave me deeper insight into why some corners shouldn't be cut and how damaging certain sins of omission can be.

    The code section, though, kept me engaged for its entirety. Many technical books can be used as reference material from day one. The authors were clear, however, that this book is not one of these. The first chapter of the section (chapter seven, overall) addresses object-oriented (OO) practices. I've read any number of definitions, discussions, and treatises on OO. None of the chapter was new to me, but it was a good review, and I'm of the opinion that it's good to review the foundations of what you do, from time to time, so I didn't mind.

    The remainder of the book is really just about how to apply OOP to existing code -- and, just because all your code exists in classes does not mean that it's object-oriented. That topic has the potential to be extremely condescending, but the authors miraculously managed to never once make me feel like a dolt or that they were wagging their finger at me for my prior sins. Instead, they continue the "pain points" and problem-solution presentation to give concrete examples of how to apply some pretty academic-sounding ideas. That's a point worth emphasizing, as my experience with most OO discussions is that they stay in the academic realm. This book gives some very, very good explanations of why things like the Liskov Substitution Principle exist and why a corporate programmer should even care. Even if you know, with absolute certainty, that you'll never have to work on an existing code base, I would recommend this book just for the clarity it provides on OOP.

    This book goes beyond just theory, or even real-world application. It presents some methods for fixing problems that any developer can, and probably will, encounter in the wild. First, the authors address refactoring application layers and internal dependencies. Then, they take you through those layers from the UI to the data access layer and external dependencies. Finally, they come full circle to tie it all back to the overall process. By the time the book is done, you're left with a lot of ideas, but also a reasonable plan to begin to improve an existing project structure.

    Throughout the book, it's apparent that the authors have their own preferred methodology (TDD and domain-driven design), as well as some preferred tools. The "Our .NET Toolbox" is something of a neon sign pointing to that latter point.
They do not beat the reader over the head with anything resembling a "One True Way" mentality.  Even for the most emphatic points, the tone is quite congenial and helpful.  With some of the near-theological divides that exist within the tech community, I found this to be one of the more remarkable characteristics of the book.  Although the authors favor tools that might be considered Alt.NET, there is no reason the advice and techniques given couldn't be quite successful in a pure Microsoft shop with Team Foundation Server.  For that matter, even though the book specifically addresses .NET, it could be applied to a Java and Oracle shop, as well.

  • How to: Add an HTTPS Endpoint to a Windows Azure Cloud Service

    - by kaleidoscope
    Technorati Tags: Ritesh, Windows Azure, Endpoints, https

    The process to add an HTTPS endpoint is a three-step process (a sketch of the relevant XML follows at the end of this post):

    1. Configure the endpoint.
    2. Upload the certificate to the cloud.
    3. Configure the SSL certificate (and then point the endpoint to that certificate).

    Reference: http://blogs.msdn.com/jnak/archive/2009/12/01/how-to-add-an-https-endpoint-to-a-windows-azure-cloud-service.aspx

    - Ritesh, D
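    For step 1, here is a minimal sketch of what the endpoint and certificate declarations can look like in the service model (my own illustration using the later SDK schema, not taken from the referenced post; the names HttpsIn and SampleCertificate are placeholders):

        <!-- ServiceDefinition.csdef (sketch) -->
        <WebRole name="WebRole1">
          <Endpoints>
            <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SampleCertificate" />
          </Endpoints>
          <Certificates>
            <Certificate name="SampleCertificate" storeLocation="LocalMachine" storeName="My" />
          </Certificates>
        </WebRole>

        <!-- ServiceConfiguration.cscfg (sketch): maps the certificate name to the thumbprint of the uploaded certificate -->
        <Certificates>
          <Certificate name="SampleCertificate" thumbprint="(your certificate thumbprint)" thumbprintAlgorithm="sha1" />
        </Certificates>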

  • Do you know the minimum builds to create on any branch?

    - by Martin Hinshelwood
    You should always have three builds on your team project. These should be set up and tested using an empty solution before you write any code at all.

    Figure: Three builds named in the format [TeamProject].[AreaPath]_[Branch].[Gate|CI|Nightly] for every branch (a concrete naming example follows at the end of this post).

    These builds should use the same XAML build workflow; however, you may set them up to run a different set of tests depending on the time it takes to run a full build.

    - Gate - Only needs to run the smallest set of tests, but should run most if not all of the unit tests. This is run before developers are allowed to check in.
    - CI - This should run all unit tests and all of the automated UI tests. It is run after a developer check-in.
    - Nightly - The nightly build should run all of the unit tests, all of the automated UI tests, and all of the load and performance tests. The nightly build is time consuming and will run but once a night. Packaging of your product for testing the next day may be done at this stage as well.

    Figure: You can control what tests are run and what data is collected while they are running.

    Note: We do not run all the tests every time because of the time-consuming nature of running some tests, but ALL tests should be run overnight.

    Note: If you had a really large project with thousands of tests, including long-running load tests, you might need to add a Weekly build to the mix.

    Figure: Bad example - you can't tell what these builds do if they are in a larger list.

    Figure: Good example - you know exactly what project, branch and type of build these are for.

    Technorati Tags: SSW, SSW Rules, VS2010, VS ALM, Team Build 2010, Team Build
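    To make the naming format concrete: a hypothetical team project Tailspin with area path Web and branch Main (names invented for illustration) would get three builds named Tailspin.Web_Main.Gate, Tailspin.Web_Main.CI and Tailspin.Web_Main.Nightly.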

  • Launch 2010 Technical Readiness Unofficial Q&A.

    - by mbcrump
    I had an email from one of my readers about the 2010 Technical Readiness Series. Please read below:

    "Hi Michael, I noticed you blogged a while back that you were going to attend the MS 2010 Launch event. I'm going to the session in Seattle on May 27; is it worth it to attend? Also, I'm wondering if they give any free software away, like VS2010 Pro? Any decent vendors? Looking forward to hearing back from you."

    I decided this information would probably benefit several people instead of just responding back to the reader. In case you are not aware, MS has a 2010 launch event showing VS2010, Office 2010, SharePoint 2010 and SQL Server 2008 R2. You can sign up for an event here: https://microsoft.crgevents.com/Register2010/Content/Event_Selection.aspx

    I've answered the questions asked below.

    Q: Is it worth going?
    A: It is if you have had little or no exposure to the latest 2010 products. Most people are familiar with VS2010, but have not seen SharePoint 2010, Office 2010 or Windows Phone 7. It is designed to get you up to speed very quickly. If you have watched most of the MIX videos and keep up with .NET in general, you will benefit more by having the ability to ask questions.

    Q: Did you get any free software?
    A: No, only demos, including:
    - VS2010/Office 2010: they give you a link to the URL where you can download the trial version.
    - Windows 7 Enterprise: you get a DVD with the trial version loaded.

    Q: Do they give away any cool swag?
    A: Just a Microsoft T-shirt (XL).

    Q: What about the vendor selection?
    A: At the event that I went to, most vendors were pushing SharePoint products. There wasn't a lot of variety in the selection. Most vendors were giving away the typical pens, buttons, stickers and trial software.

    If you have any other questions, feel free to contact me. I will answer and add to this unofficial FAQ.

  • Specs, Form and Function – What am I Missing?

    - by Barry Shulam
    Friday October 26th the Microsoft Surface RT arrived at the office. I was summoned to my boss's office for the grand unpacking. If I had planned ahead I could have used my iPhone 4 to film the event and post it on YouTube; however, the desire to hold the device and turn it ON was more inviting than becoming a proxy reviewer for Engadget's website.

    1980 was the first time we had a personal computer in our house. It was a Kaypro computer. It weighed 29 pounds, more than any person's lap could hold. Back then the term "portable computer" meant you could remove it from the building and take it elsewhere. Today I am typing this entry on a MacBook Air which weighs 2.38 pounds. This morning Amazon's front-page main title is: "Much More for Much Less".

    I was born at the right time, starting with the CP/M operating system on the Kaypro, thru the DOS, Windows, Linux, Mac OS X and mobile phone operating systems and languages. If you are not aware, technology is moving at a rapid pace. The new iPad (for those who are keeping score: iPad 4) is replacing a 7-month-old machine, the New iPad (iPad 3). I have used and owned many technology devices in my life. The main point that most readers in the USA overlook is the fact that we are in the USA. The devices we purchase have a great digital garden to support them. The Kaypro computer had a 7-inch screen. It was a TV tube with two colors, black and green. You could see the 80-column screen flicker with characters; have you ever played Pac-Man emulated on the screen with the ABC characters? Traveling across the world you will find that not all apps on your device will function as they did back home, because they are not offered outside of your country of origin.

    I think the main question a buyer of technology should be asking is function. The greatest specs without function limit you. The most beautiful form without function is the same as a crystal vase on your shelf: not a good cereal bowl in the morning.

    Microsoft Surface RT, Amazon Kindle Fire and Apple iPad are all great devices in their respective customers' hands. My advice for those looking to purchase this year: if the device is your only technology device, you buy what you WANT and LIKE. Consider this parallel universe if it's not your only device: ever go shopping for clothing, shoes and accessories with your wife, girlfriend, sister or mother? If you listen carefully you will hear the little voices coming out of their heads saying: "This goes well with that, and I can use it also with that outfit", "Do you think this clashes with that?", "Ohh I love how that combination looks on you". Portable devices such as tablets and computers can offer a whole lot more when they are combined with the digital ecosystem you have at home and the manufacturer offers online.

    Pros of each device:

    Microsoft Surface RT: There is new functionality named SmartGlass which will let you share the content off your tablet to your Xbox 360. Microsoft Office is loaded on the tablet. You can have more than one user profile on the tablet if you share it with others.

    Amazon Kindle or Kindle HD: If you are an Amazon consumer with an annual Amazon Prime service, you can consume videos and read books off the Amazon site. It's the cheapest device, and it's a step up from the Kindle reader in many ways.

    Apple iPad or iPad mini: Over 270 thousand applications. AirPlay permits you the ability to share to your TV screen. If you are a cord cutter (a person who gets their entertainment content over the web or air vs. cable providers) the AirPlay or SmartGlass features are a huge bonus.

    iPad mini or not: The mini will fit in a purse where the larger one will not. It's lighter, which makes it nice to hold for prolonged periods. It has an option for LTE wireless, which none of the other sub-9-inch tablets offer. The screen is non-retina, which means the applications are smaller. Speaking with individuals above 50 in age who wear glasses, the retina display does not make a difference for them; however, they prefer the larger iPad over the new mini.

    Happy shopping this Chanukah season.

    The Kosher Coder.

    Follow me on twitter @KosherCoder

  • Screen Aspect Ratio

    - by Bill Evjen
    Jeffrey Dean, Pixar

    Aspect ratio is very important to home video. What is aspect ratio? The ratio of the width to the height.
    - 2.35:1 means the image is 2.35 times as wide as it is high. Pixar uses this for half of our movies. This is called a widescreen image.
    - When modified to fit your television screen, they cut this to fit the box of your screen. When a comparison is made, huge chunks of picture are missing, and it is harder to tell what is going on when these pieces are missing. The whole is greater than the pieces themselves; if you are missing pieces, you are missing the movie. The soul and the mood are in the film shots. Cutting it to fit a screen, you are losing 30% of the movie.

    Why different aspect ratios?
    - Film before the 1950s: 1.33:1, the Academy Standard. There were all aspects of images, though; there was no standard.
    - Thomas Edison developed projecting images onto a wall/screen. He didn't patent it, as he saw no value in it.
    - Then 1.37:1 came about, to add a strip of sound. This is the same size as 35mm film.
    - Around 1952, TV comes along. NTSC television followed the Academy Standard (4x3).
    - Once TV came out, movie theater attendance plummeted. So film brought forth color to combat this. Also early 3D. Also widescreen (CinemaScope).
    - Studios at the time made movies bigger and bigger. There was a Napoleon movie that was actually 4x1... really wide.
    - 1.85:1 Academy Flat and 2.35:1 Anamorphic Scope (aka Panavision/CinemaScope): almost all movies are made in these two aspect ratios. Pixar has done half in one and half in the other.

    Why choose one over the other? Artist choice. It is part of the story the director wants to tell.

    Can we preserve the story outside of the theaters? TVs before 1998 were very square; now TVs are very wide. Historical options:
    - Toy Story was released as it was, and people cut it in a way that wasn't liked by the studio.
    - Pan and scan is another option: cut, and then scan left or right depending on where the action is.
    - Frame height: Pixar can go back and animate more picture to account for the bottom/top bars. You end up with more sky and more ground, and the characters seem to get lost in the picture. You lose what the director originally intended.
    - Re-staging: for animated movies, you can move characters around and restage the scene. It is a new, completely different version of the film. This is the best possible option that Pixar came up with. They have stopped doing this really, as the demand has pretty much dropped off.

    Why not 1.33 today? There has been an evolution of taste and demands.
    - VHS is a linear item. The focus was about portability and not about quality. Most was pan and scan, and the quality was so bad; but people didn't notice.
    - DVD was introduced in 1996. You could have more content: two versions of the film, the widescreen version and the 1.33 version. People realized that they were seeing more of the movie with the widescreen.
    - High-def televisions (16x9 monitors) were introduced in 2005. Blu-ray Disc was introduced in 2006. This is all widescreen. You cannot find a square TV anymore; TVs are roughly 1.85:1 aspect ratio.
    - There is a change in demand. Users are used to black bars and are used to widescreen. Users are educated now.

    What's next for in-flight entertainment?
    - High-def IFE
    - Personal electronic devices
    - 3D in-flight

  • Exception Handling And Other Contentious Political Topics

    - by Justin Jones
    So about three years ago, around the time of my last blog post, I promised a friend I would write this post. Keeping promises is a good thing, and this is my first step towards easing back into regular blogging. I fully expect him to return from Pennsylvania to buy me a beer over this. However, it's been an... ahem... eventful three years or so, and blogging, unfortunately, got pushed to the back burner on my priority list, along with a few other career-minded activities. Now that the personal drama of the past three years is more or less resolved, it's time to put a few things back on the front burner.

    What I consider to be proper exception handling practices is relatively well known these days. There are plenty of blog posts out there already on this topic which more or less echo my opinions. I'll try to include a few links at the bottom of the post.

    Several years ago I had an argument with a co-worker who posited that exceptions should be caught at every level and logged. This might seem like sanity on the surface, but the resulting error log looked something like this:

        Error: System.SomeException - followed by a small stack trace.
        Error: System.SomeException - followed by a slightly bigger stack trace.
        Error: System.SomeException - followed by a slightly bigger stack trace.
        Error: System.SomeException - followed by a slightly bigger stack trace.
        ...

    These were all the same exception. The problem with this approach is that the error log, if you run any kind of analytics on it, becomes skewed depending on how far up the stack trace your exception was thrown. To mitigate this problem, we came up with the concept of the "PreLoggedException". Basically, we would log the exception at the very top level and subsequently throw the exception back up the stack encapsulated in this pre-logged type, which our logging system knew to ignore. Now the error log looked like this:

        Error: System.SomeException - followed by a small stack trace.

    Much cleaner, right? Well, there's still a problem. When your exception happens in production and you go about trying to figure out what happened, you've lost more or less all context for where and how this exception was thrown, because all you really know is what method it was thrown in, but really nothing about who was calling the method or why. What gives you this clue is the entire stack trace, which we're losing here. I believe that was further mitigated by having the logging system pull a system stack trace and add it to the log entry, but what you're actually getting is the stack for how you got to the logging code. You're still losing context about the actual error. Not to mention you're executing a whole slew of catch blocks which are sloooooooowwwww... In other words, we started with a bad idea and kept band-aiding it until it didn't suck quite so bad.

    When I argued for not catching exceptions at every level but rather catching them following a certain set of rules, my co-worker warned me "do yourself a favor, never express that view in any future interviews." I suppose this is my ultimate dismissal of that advice, but I'm not too worried. My approach for exception handling follows three basic rules. Only catch an exception if:

    1. You can do something about it.
    2. You can add useful information to it.
    3. You're at an application boundary.

    Here's what that means (a short code sketch of rules 1 and 2 follows at the end of this post):

    1. Only catch an exception if you can do something about it. We'll start with a trivial example of a login system that uses a file. Please, never actually do this in production code; it's just a concocted example. So if our code goes to open a file and the file isn't there, we get a FileNotFound exception. If the calling code doesn't know what to do with this, it should bubble up. However, if we know how to create the file from scratch, we can create the file and continue on our merry way. When you run into situations like this, though, what should really run through your head is "How can I avoid handling an exception at all?" In this case, it's a trivial matter to simply check for the existence of the file before trying to open it. If we detect that the file isn't there, we can accomplish the same thing without having to handle it in a catch block.

    2. Only catch an exception if you can add useful information to it. Continuing with the poorly-thought-out file-based login system we contrived in part 1: if the code calls a Login(...) method and the FileNotFound exception is thrown higher up the stack, the code that calls Login must account for a FileNotFound exception. This is kind of counterintuitive, because the calling code should not need to know the internals of the Login method, and the data file is an implementation detail. What makes more sense, assuming that we didn't implement any of the good advice from step 1, is for Login to catch the FileNotFound exception and wrap it in a new exception. For argument's sake we'll say LoginSystemFailureException. (Sorry, couldn't think of anything better at the moment.) This gives us two stack traces, preserving the original stack trace in the inner exception, and is also much more informative to the calling code.

    3. Only catch an exception if you're at an application boundary. At some point we have to catch all the exceptions, even the ones we don't know what to do with. WinForms, ASP.NET, and most other UI technologies have some kind of built-in mechanism for catching unhandled exceptions without fatally terminating the application. It's still a good idea to somehow gracefully exit the application in this case if possible, though, because you can no longer be sure what state your application is in, but nothing annoys a user more than an application just exploding. These unhandled exceptions need to be logged, and this is a good place to catch them. Ideally you never want this option to be exercised, but code as though it will be. When you log these exceptions, give them a "Fatal" status (e.g. Log4Net) and make sure these bugs get handled in your next release.

    That's it in a nutshell. If you do it right, each exception will only get logged once and with the largest stack trace possible, which will make those 2am emergency severity-1 debugging sessions much shorter and less frustrating. Here are a few people who also have interesting things to say on this topic:

    http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx
    http://www.codeproject.com/Articles/9538/Exception-Handling-Best-Practices-in-NET

    I know there's more but I can't find them at the moment.
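    As promised above, here is a minimal sketch of rules 1 and 2 using the contrived file-based login example (the class shape and signatures are my own assumption for illustration; only the rule names come from the post):

        using System;
        using System.IO;

        // The wrapper exception named in rule 2 above.
        public class LoginSystemFailureException : Exception
        {
            public LoginSystemFailureException(string message, Exception inner)
                : base(message, inner) { }
        }

        public class FileLoginService
        {
            private readonly string path;

            public FileLoginService(string path) { this.path = path; }

            public bool Login(string userName, string password)
            {
                // Rule 1: avoid handling the exception at all by checking first;
                // we know how to create the file from scratch, so do that.
                if (!File.Exists(path))
                    File.WriteAllText(path, string.Empty);

                try
                {
                    foreach (var line in File.ReadAllLines(path))
                    {
                        // ... compare the line against userName/password ...
                    }
                    return false;
                }
                catch (IOException ex)
                {
                    // Rule 2: wrap with useful context; the original stack
                    // trace is preserved in the InnerException.
                    throw new LoginSystemFailureException(
                        "Login failed reading the credential store '" + path + "'.", ex);
                }
            }
        }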

  • Multiple instances of Intellitrace.exe process

    - by Vincent Grondin
    Not so long ago I was confronted with a very bizarre problem... I was using Visual Studio 2010, and whenever I opened up the Test Impact view I would suddenly see my PC's performance go down drastically... Investigating this problem, I found out that hundreds of "Intellitrace.exe" processes had been started on my system, and I could not close them, as they would restart as soon as I closed one. That was very weird. So I knew it had something to do with Test Impact, but how can this feature and Intellitrace.exe going crazy be related? After a bit of thinking, I remembered that a teammate (Etienne Tremblay, ALM MVP) had told me once that he had seen this issue before, just after installing a MOCKING FRAMEWORK that uses the .NET Profiler API... Apparently there's a conflict between the Test Impact features of Visual Studio and some mocking products using the .NET Profiler API... Maybe because VS 2010 also uses this feature for Test Impact purposes, I don't know...

    Anyways, here's the fix. Go to VS 2010 and click the "Test" menu, then go to "Edit Test Settings" and choose EACH test settings file (normally 2 files, "Local" and "TraceAndTestImpact"), applying the following actions:

    - Select the Data and Diagnostics option on the left.
    - Make sure that the ASP.NET Client Proxy for IntelliTrace and Test Impact option is NOT SELECTED.
    - Make sure that the Test Impact option is NOT SELECTED.
    - Save and close.

    Problem solved... For me, having to choose between the Test Impact features and the mocking framework was a no-brainer: bye bye, Test Impact... I did not investigate much on this subject, but I feel there might be a way to have them both working by enabling one after the other in a precise sequence... Feel free to leave a comment if you know how to make them both work at the same time!

    Hope this helps someone out there!

  • MVC & Windows Authentication

    - by TJ
    Changed the ValidateUser function:

        using System.DirectoryServices.Protocols;
        using System.Net;

        public bool ValidateUser(string userName, string password)
        {
            bool validation;
            try
            {
                // Bind to the domain controller with the supplied credentials;
                // a successful bind means the user has authenticated.
                LdapConnection ldc = new LdapConnection(new LdapDirectoryIdentifier((string)null, false, false));
                NetworkCredential nc = new NetworkCredential(userName, password, "domainname.com");
                ldc.Credential = nc;
                ldc.AuthType = AuthType.Negotiate;
                ldc.Bind(nc); // user has authenticated at this point, as the credentials were used to log in to the DC
                string myvar = ldc.SessionOptions.DomainName;
                validation = true;
            }
            catch (LdapException)
            {
                validation = false;
            }
            return validation;
        }
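    For context, a sketch of how this might be called from an MVC login action with forms authentication (the controller and the LdapValidator wrapper class are my own illustration, not from the post):

        using System.Web.Mvc;
        using System.Web.Security;

        public class AccountController : Controller
        {
            [HttpPost]
            public ActionResult Login(string userName, string password)
            {
                // Hypothetical wrapper exposing the ValidateUser method shown above.
                var validator = new LdapValidator();
                if (validator.ValidateUser(userName, password))
                {
                    // Issue the forms-authentication ticket once LDAP accepts the credentials.
                    FormsAuthentication.SetAuthCookie(userName, false);
                    return RedirectToAction("Index", "Home");
                }
                ModelState.AddModelError("", "Invalid user name or password.");
                return View();
            }
        }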

  • WIF-less claim extraction from ACS: SWT

    - by Elton Stoneman
    WIF with SAML is solid and flexible, but unless you need the power, it can be overkill for simple claim assertion, and in the REST world WIF doesn't have support for the latest token formats. Simple Web Token (SWT) may not be around forever, but while it's here it's a nice easy format which you can manipulate in .NET without having to go down the WIF route.

    Assuming you have set up a Relying Party in ACS, specifying SWT as the token format: when ACS redirects to your login page, it will POST the SWT in the first form variable. It comes through in the BinarySecurityToken element of a RequestSecurityTokenResponse XML payload; the SWT type is specified with a TokenType of http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0:

        <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
          <t:Lifetime>
            <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:31:18.655Z</wsu:Created>
            <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:11:18.655Z</wsu:Expires>
          </t:Lifetime>
          <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
            <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">
              <Address>http://localhost/x.y.z</Address>
            </EndpointReference>
          </wsp:AppliesTo>
          <t:RequestedSecurityToken>
            <wsse:BinarySecurityToken wsu:Id="uuid:fc8d3332-d501-4bb0-84ba-d31aa95a1a6c" ValueType="http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> [ base64string ] </wsse:BinarySecurityToken>
          </t:RequestedSecurityToken>
          <t:TokenType>http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0</t:TokenType>
          <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>
          <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType>
        </t:RequestSecurityTokenResponse>

    Reading the SWT is as simple as base64-decoding, then URL-decoding the element value:

        var wrappedToken = XDocument.Parse(HttpContext.Current.Request.Form[1]);
        var binaryToken = wrappedToken.Root.Descendants("{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}BinarySecurityToken").First();
        var tokenBytes = Convert.FromBase64String(binaryToken.Value);
        var token = Encoding.UTF8.GetString(tokenBytes);
        var tokenType = wrappedToken.Root.Descendants("{http://schemas.xmlsoap.org/ws/2005/02/trust}TokenType").First().Value;

    The decoded token contains the claims as key/value pairs, along with the issuer, audience (ACS realm), expiry date and an HMAC hash, which are in query string format.
    Separate them on the ampersand, and you can write out the claim values in your logged-in page:

        var decoded = HttpUtility.UrlDecode(token);
        foreach (var part in decoded.Split('&'))
        {
            Response.Write("<pre>" + part + "</pre><br/>");
        }

    which will produce something like this:

        http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant=2012-08-31T06:57:01.855Z
        http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod=http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows
        http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname=XYZ
        http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected]
        http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected]
        http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider=http://fs.svc.xyz.com/adfs/services/trust
        Audience=http://localhost/x.y.z
        ExpiresOn=1346402225
        Issuer=https://x-y-z.accesscontrol.windows.net/
        HMACSHA256=oDCeEDDAWEC8x+yBnTaCLnzp4L6jI0Z/xNK95PdZTts=

    The HMAC hash lets you validate the token to ensure it hasn't been tampered with. You'll need the token signing key from ACS; then you can re-sign the token and compare hashes. There's a full implementation of an SWT parser and validator here: How To Request SWT Token From ACS And How To Validate It At The REST WCF Service Hosted In Windows Azure, and a cut-down claim inspector on my github code gallery: ACS Claim Inspector. Interestingly, ACS lets you have a value for your logged-in page which has no relation to the realm for authentication, so you can put this code into a generic claim inspector page, and set that to be your logged-in page for any relying party where you want to check what's being sent through. Particularly handy with ADFS, when you're modifying the claims provided and want to quickly see the results.
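    For the re-sign-and-compare step just described, here is a minimal sketch (my own illustration of the approach, assuming the base64-encoded 256-bit token signing key from the ACS portal; the method and variable names are mine):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static bool IsSwtSignatureValid(string token, string base64SigningKey)
        {
            // The HMAC is computed over everything before the trailing HMACSHA256 pair.
            const string hmacPrefix = "&HMACSHA256=";
            var index = token.LastIndexOf(hmacPrefix, StringComparison.Ordinal);
            if (index < 0) return false;

            var unsignedToken = token.Substring(0, index);
            var signature = Uri.UnescapeDataString(token.Substring(index + hmacPrefix.Length));

            using (var hmac = new HMACSHA256(Convert.FromBase64String(base64SigningKey)))
            {
                var computed = Convert.ToBase64String(
                    hmac.ComputeHash(Encoding.UTF8.GetBytes(unsignedToken)));
                return computed == signature;
            }
        }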

  • New Enhanced Visual WebGui WINWEB and .NETHTML5 Versions

    - by Webgui
    After a long wait and huge anticipation from the Visual WebGui community, I am happy to announce the release of new versions for the WINWEB and .NETHTML5 branches. The new 6.4.0d release and 6.4.0 beta3 versions are available after extensive work on core capabilities of Visual WebGui, including the extension of existing controls and the addition of new controls such as the Strip controls, RibbonBar, DataGridView, ComboBox, PropertyGrid and RadioButton, as well as some major enhancements to both versions in terms of cross-browser support and performance. We apologize for the delay in the release of those most expected versions, but we believe that the extra time led to a more mature and complete product.

    As you can see, the changelog is pretty long and includes a list of enhancements, new features and bug fixes: http://visualwebgui.com/Developers/KB/tabid/654/article/w_changelogs/Default.aspx

    The new versions are available, with open sources, for Visual Studio 2005, 2008 and 2010. You are welcome to download the WINWEB free trial and the free .NETHTML5 beta on the downloads page.

  • Deleting a row from self-referencing table

    - by Jake Rutherford
    Came across this the other day and thought "this would be a great interview question!" I'd created a table with a self-referencing foreign key. The application was calling a stored procedure I'd created to delete a row, which caused, of course, a foreign key exception. You may say "why not just use the cascade delete option?" Good question, easy answer. With a typical foreign key relationship between different tables that would work. However, even SQL Server cannot do a cascade delete of a row on a table with a self-referencing foreign key. So, what do you do? In my case I re-wrote the stored procedure to take advantage of recursion:

        -- recursively deletes a Foo
        ALTER PROCEDURE [dbo].[usp_DeleteFoo]
            @ID int
            ,@Debug bit = 0
        AS
            SET NOCOUNT ON;

            BEGIN TRANSACTION
            BEGIN TRY
                DECLARE @ChildFoos TABLE
                (
                    ID int
                )

                DECLARE @ChildFooID int

                INSERT INTO @ChildFoos
                SELECT ID FROM Foo WHERE ParentFooID = @ID

                WHILE EXISTS (SELECT ID FROM @ChildFoos)
                BEGIN
                    SELECT TOP 1 @ChildFooID = ID FROM @ChildFoos

                    DELETE FROM @ChildFoos WHERE ID = @ChildFooID

                    EXEC usp_DeleteFoo @ChildFooID
                END

                DELETE FROM dbo.[Foo]
                WHERE [ID] = @ID

                IF @Debug = 1 PRINT 'DEBUG:usp_DeleteFoo, deleted - ID: ' + CONVERT(VARCHAR, @ID)

                COMMIT TRANSACTION
            END TRY
            BEGIN CATCH
                ROLLBACK TRANSACTION

                DECLARE @ErrorMessage VARCHAR(4000), @ErrorSeverity INT, @ErrorState INT
                SELECT @ErrorMessage = ERROR_MESSAGE(), @ErrorSeverity = ERROR_SEVERITY(), @ErrorState = ERROR_STATE()
                IF @ErrorState <= 0 SET @ErrorState = 1

                INSERT INTO ErrorLog(ErrorNumber, ErrorSeverity, ErrorState, ErrorProcedure, ErrorLine, ErrorMessage)
                VALUES(ERROR_NUMBER(), @ErrorSeverity, @ErrorState, ERROR_PROCEDURE(), ERROR_LINE(), @ErrorMessage)

                RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState)
            END CATCH

    This procedure will first determine any rows which have the row we wish to delete as their parent. It then simply iterates each child row, calling the procedure recursively in order to delete all descendants before eventually deleting the row we wish to delete.
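    For illustration, a table shape this procedure assumes might look like the following (the Foo DDL is my reconstruction from the column names used above, not from the post):

        -- Hypothetical table matching the procedure's assumptions.
        CREATE TABLE dbo.Foo
        (
            ID int IDENTITY(1,1) PRIMARY KEY,
            ParentFooID int NULL REFERENCES dbo.Foo(ID), -- self-referencing FK
            Name nvarchar(50) NOT NULL
        );

        -- Delete a row and its whole subtree, with debug output enabled.
        EXEC dbo.usp_DeleteFoo @ID = 42, @Debug = 1;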
