Search Results

Search found 18096 results on 724 pages for 'let'.


  • SQLAuthority News – Speaking Sessions at TechEd India – 3 Sessions – 1 Panel Discussion

    - by pinaldave
    Microsoft Tech-Ed India 2010 is considered the major technology event of the year for IT professionals and developers. The event features a comprehensive forum in which to learn, connect, explore, and evolve the technologies we have today. I would recommend this event to you since here you will learn about today’s cutting-edge trends, thereby enhancing your work profile and getting ahead of the rest. But the most important benefit of all might be the networking opportunity that you can gain by attending the forum. You can build personal connections with various Microsoft experts and peers that will last far beyond this event! It also feels good to let you know that I will be speaking at this year’s event! So, here are the sessions that await you in this mega-forum.
    Session 1: True Lies of SQL Server – SQL Myth Buster
    Date: April 12, 2010  Time: 11:15pm – 11:45pm
    In this 30-minute demo session, I am going to briefly demonstrate a few SQL Server myths and their resolutions, backed up with demos. This session is a must-attend for all developers and administrators who come to the event. It is going to be a very quick yet fun session.
    Session 2: Master Data Services in Microsoft SQL Server 2008 R2
    Date: April 12, 2010  Time: 2:30pm – 3:30pm
    SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage and propagate changes from a single master view of your business entities. MDS also includes the Master Data hub, a vital component that helps ensure reporting consistency across systems and deliver faster, more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise.
    Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing
    Date: April 14, 2010  Time: 5:00pm – 6:00pm
    Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. The session outlines how to use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high-performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, and extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.
    Panel Discussion: Harness the power of Web – SEO and Technical Blogging
    Date: April 12, 2010  Time: 5:00pm – 6:00pm
    Here you will learn lots of tricks and tips about SEO and technical blogging from various industry technical blogging experts.
    This event will surely be one of the most important tech conventions of 2010. TechEd is going to be a very busy time for tech developers and enthusiasts, since every evening there will be a fun session to attend. If you are interested in any of the above topics, I suggest you attend those sessions, as you will learn a great deal about each subject. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • MySQL – Learning MySQL Online in 6 Hours – MySQL Fundamentals in 320 Minutes

    - by Pinal Dave
    MySQL is one of the most popular databases and I have been working with it a lot recently. Data has no barriers and every database has its own place. I have been working with MySQL for quite a while and, just like with SQL Server, I often find lots of people asking me if I have a tutorial which can teach them MySQL from the beginning. Here is the good news: I have written two different courses on MySQL fundamentals, which are available online. The reason for writing two different courses was to keep the learning simple. Both courses are connected with each other, but they are designed so that you can watch either course independently and learn without dependencies. However, if you ask me, I will suggest that you watch the MySQL Fundamentals Part 1 course followed by the MySQL Fundamentals Part 2 course. Let us quickly explore the outline of the MySQL courses.
    MySQL Fundamentals – Part 1 (157 minutes)
    MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack. This course covers the fundamentals of MySQL, including how to install MySQL as well as how to write basic data retrieval and data modification queries.
    Introduction (duration 00:02:12)
    Installations and GUI Tools (duration 00:13:51)
    Fundamentals of RDBMS and Database Designs (duration 00:16:13)
    Introduction to MySQL Workbench (duration 00:31:51)
    Data Retrieval Techniques (duration 01:11:13)
    Data Modification Techniques (duration 00:20:41)
    Summary and Resources (duration 00:01:31)
    MySQL Fundamentals – Part 2 (163 minutes)
    MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack. In this course, which is part 2 of the Fundamentals of MySQL series, we explore more advanced topics such as stored procedures and user-defined functions, subqueries and joins, views, and events and triggers.
    Introduction (duration 00:02:09)
    Joins, Unions and Subqueries (duration 01:03:56)
    MySQL Functions (duration 00:36:55)
    MySQL Views (duration 00:19:19)
    Stored Procedures and Stored Functions (duration 00:25:23)
    Triggers and Events (duration 00:13:41)
    Summary and Resources (duration 00:02:18)
    Note: if you click on the link above and do not see the play button to watch the course, you will have to log in to the system to watch it. I would like to throw a challenge to you – can you watch both of the courses in a single day? If yes, once you are done watching the courses, on your Pluralsight profile page (here is my profile http://pluralsight.com/training/users/pinal-dave) you will get the following badges. If you have already watched MySQL Fundamentals Part 1, you can qualify by just watching MySQL Fundamentals Part 2. Just send me the link to your profile and I will publish your name on this blog. For the first five people who send me email at Pinal at sqlauthority.com, I might have something cool as a giveaway as well. Watch the teaser of the MySQL course. Reference: Pinal Dave (http://blog.sqlauthority.com)  Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Nepotism In The SQL Family

    - by Rob Farley
    There’s a bunch of sayings about nepotism. It’s unpopular, unless you’re the family member who is getting the opportunity. But of course, so much in life (and career) is about who you know. From the perspective of the person who doesn’t get promoted (when the family member is), nepotism is simply unfair; even more so when the promoted one seems less than qualified, or incompetent in some way. We definitely get a bit miffed about that.
    But let’s also look at it from the other side of the fence – the person who did the promoting. To them, their son/daughter/nephew/whoever is just another candidate, but one in whom they have more faith. They’ve spent longer getting to know that person. They know their weaknesses and their strengths, and have seen them in all kinds of situations. They expect them to stay around in the company longer. And yes, they may have plans for that person to inherit one day. Sure, they have a vested interest, because they’d like their family members to have strong careers, but it’s not just about that – it’s often best for the company as well. I’m not announcing that the next LobsterPot employee is one of my sons (although I wouldn’t be opposed to the idea of getting them involved), but I am admitting that almost all the LobsterPot employees are SQLFamily members… …which makes this post good for T-SQL Tuesday, this month hosted by Jeffrey Verheul (@DevJef).
    You see, SQLFamily is the concept that the people in the SQL Server community are close. We have something in common that goes beyond ordinary friendship. We might only see each other a few times a year, at events like the PASS Summit and SQLSaturdays, but the bonds that are formed are strong, going far beyond typical professional relationships. And these are the people that I am prepared to hire. People that I have got to know. I get to know their skill level, how well they explain things, how confident people are in their expertise, and what their values are. Of course there are people that I wouldn’t hire, but I’m a lot more comfortable hiring someone that I’ve already developed a feel for. I need to trust the LobsterPot brand to people, and that means they need to have a similar value system to me. They need to have a passion for helping people and doing what they can to make a difference. Above all, they need to have integrity.
    Therefore, I believe in nepotism. All the people I’ve hired so far are people from the SQL community. I don’t know whether I’ll always be able to hire that way, but I have no qualms admitting that the things I look for in an employee are things that I can recognise best in those that are referred to as SQLFamily. …like Ted Krueger (@onpnt), LobsterPot’s newest employee and the guy who is representing our brand in America. I’m completely proud of this guy. He’s everything I want in an employee. He’s an experienced consultant (even wrote a book on it!), loving husband and father, genuine expert, and incredibly respected by his peers. It’s not favouritism, it’s just choosing someone I’ve been interviewing for years. @rob_farley

    Read the article

  • ActionResult types in MVC2

    - by rajbk
    In ASP.NET MVC, incoming browser requests get mapped to controller action methods. The action method returns a type of ActionResult in response to the browser request. A basic example is shown below: public class HomeController : Controller { public ActionResult Index() { return View(); } } Here we have an action method called Index that returns an ActionResult. Inside the method we call the View() method on the base Controller. The View() method, as you will see shortly, is a method that returns a ViewResult. The ActionResult class is the base class for the different controller results. The following diagram shows the types derived from the ActionResult type. ASP.NET describes these types as follows:
    ContentResult – Represents a text result.
    EmptyResult – Represents no result.
    FileContentResult – Represents a downloadable file (with the binary content).
    FilePathResult – Represents a downloadable file (with a path).
    FileStreamResult – Represents a downloadable file (with a file stream).
    JavaScriptResult – Represents a JavaScript script.
    JsonResult – Represents a JavaScript Object Notation result that can be used in an AJAX application.
    PartialViewResult – Represents HTML and markup rendered by a partial view.
    RedirectResult – Represents a redirection to a new URL.
    RedirectToRouteResult – Represents a result that performs a redirection by using the specified route values dictionary.
    ViewResult – Represents HTML and markup rendered by a view.
    To return the types shown above, you call methods that are available in the Controller base class. A list of these methods is shown below.
    Methods without an ActionResult return type
    The MVC framework will translate action methods that do not return an ActionResult into one. Consider the HomeController below which has methods that do not return any ActionResult types. The methods defined return an int, an object and void respectively. public class HomeController : Controller { public int Add(int x, int y) { return x + y; }   public Employee GetEmployee() { return new Employee(); }   public void DoNothing() { } } When a request comes in, the Controller class internally uses a ControllerActionInvoker class which inspects the action parameters and invokes the correct action method. The CreateActionResult method in the ControllerActionInvoker class is used to return an ActionResult. This method is shown below. If the result of the action method is null, an EmptyResult instance is returned. If the result is not of type ActionResult, the result is converted to a string and returned as a ContentResult. protected virtual ActionResult CreateActionResult(ControllerContext controllerContext, ActionDescriptor actionDescriptor, object actionReturnValue) { if (actionReturnValue == null) { return new EmptyResult(); }   ActionResult actionResult = (actionReturnValue as ActionResult) ?? new ContentResult { Content = Convert.ToString(actionReturnValue, CultureInfo.InvariantCulture) }; return actionResult; }
    In the HomeController class above:
    the DoNothing() method will return an instance of EmptyResult – renders an empty webpage;
    the GetEmployee() method will return a ContentResult containing a string that represents the current object – renders the text “MyNameSpace.Controllers.Employee” without quotes;
    the Add method, for a request of /home/add?x=3&y=5, returns a ContentResult – renders the text “8” without quotes.
    Unit Testing
    The nice thing about the ActionResult types is in unit testing the controller. We can, without starting a web server, create an instance of the Controller, call the methods and verify that the type returned is the expected ActionResult type. We can then inspect the returned type’s properties and confirm that they contain the expected values. Enjoy! Sulley: Hey, Mike, this might sound crazy but I don't think that kid's dangerous. Mike: Really? Well, in that case, let's keep it. I always wanted a pet that could kill me.
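
    As a rough sketch of that idea (the test class name and NUnit-style assertions are illustrative only, not part of the original post), a controller test without a web server looks roughly like this:

      using System.Web.Mvc;
      using NUnit.Framework;

      [TestFixture]
      public class HomeControllerTests
      {
          [Test]
          public void Index_Returns_A_ViewResult()
          {
              // Arrange: instantiate the controller directly - no web server required
              var controller = new HomeController();

              // Act: invoke the action like any other method
              ActionResult result = controller.Index();

              // Assert: verify the ActionResult type and inspect its properties
              Assert.IsInstanceOf<ViewResult>(result);
              var viewResult = (ViewResult)result;
              Assert.AreEqual(string.Empty, viewResult.ViewName); // default view for the action
          }
      }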

    Read the article

  • NuGet package manager in Visual Studio 2012

    - by sreejukg
    NuGet is a package manager that helps developers automate the process of installing and upgrading packages in Visual Studio projects. It is free and open source. You can see the project on CodePlex at the link below. http://nuget.codeplex.com/ Nowadays developers need to work with several packages or libraries from various sources; a typical example is jQuery. You will hardly find a website that does not use jQuery. When you include such packages by manually copying the files, it is a difficult task to update those files as new versions get released. NuGet is a Visual Studio extension that manages such packages, and happy news for developers: it is shipped with Visual Studio 2012 by default. So by using NuGet, you can include new packages in your project as well as update existing ones with the latest versions. In this article, I am going to demonstrate how you can include jQuery (or anything similar) in a .NET project using the NuGet package manager. I have Visual Studio 2012, and I created an empty ASP.NET web application. In the solution explorer, the project looks like the following. Now I need to add jQuery to this project, and for this I am going to use NuGet. From solution explorer, right click the project and you will see “Manage NuGet Packages”. Click on the Manage NuGet Packages option and you will get the NuGet Package Manager dialog. Since there is no package installed in my project, you will see a “no packages installed” message. From the left menu, select the Online option, and in the search box (available in the top right corner) enter the name of the package you are looking for. In my case I just entered jQuery. Now the NuGet package manager will search online and bring back all the available packages that match my search criteria. You can select the right package and use the Install button just next to the package details. Also in the right pane it will show links to project information and license terms; you can see more details of the project you are looking for from the provided links. Now I have selected to install jQuery. Once it is installed successfully, you will find a green icon next to it that tells you the package has been installed successfully in your project. Now if you go to the Installed packages link from the left menu of the package manager, you can see that jQuery is installed, and you can uninstall it by just clicking on the Uninstall button. Now close the package manager dialog and let us examine the project in solution explorer. You can see some new entries in your project. One is the Scripts folder where jQuery got installed, and the other is a packages.config file. The packages.config file is an XML file that tells the NuGet package manager the id and version of each package you install. Based on this file the NuGet package manager will identify the installed packages and the corresponding versions. Installing packages using the NuGet package manager will save a lot of time for developers, and developers can get upgrades for the installed packages very easily.
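
    As an illustrative sketch only (the exact version number and targetFramework value will depend on what NuGet actually installs for you), a packages.config produced by a step like the one above typically looks something like this:

      <?xml version="1.0" encoding="utf-8"?>
      <packages>
        <!-- One entry per installed package: NuGet reads the id and version
             from here to know what is installed and what can be upgraded -->
        <package id="jQuery" version="1.8.2" targetFramework="net45" />
      </packages>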

    Read the article

  • Add New Features to WMP with Windows Media Player Plus

    - by DigitalGeekery
    Do you use Windows Media Player 11 or 12 as your default media player? Today, we’re going to show you how to add some handy new features and enhancements with the Windows Media Player Plus third party plug-in. Installation and Setup Download and install Media Player Plus! (link below). You’ll need to close out of Windows Media Player before you begin or you’ll receive the message below. The next time you open Media Player you’ll be presented with the Media Player Plus settings window. Some of the settings will be enabled by default, such as the Find as you type feature. Using Media Player Plus! Find as you type allows you to start typing a search term from anywhere in Media Player without having to be in the Search box. The search term will automatically fill in the search box and display the results.   You’ll also see Disable group headers in the Library Pane.   This setting will display library items in a continuous list similar to the functionality of Windows Media Player 10. Under User Interface you can enable displaying the currently playing artist and title in the title bar. This is enabled by default.   The Context Menu page allows you to enable context menu enhancements. The File menu enhancement allows you to add the Windows Context menu to Media Player on the library pane, list pane, or both. Right click on a Title, select File, and you’ll see the Windows Context Menu. Right-click on a title and select Tag Editor Plus. Tag Editor Plus allows you to quickly edit media tags.   The Advanced tab displays a number of tags that Media Player usually doesn’t show. Only the tags with the notepad and pencil icon are editable.   The Restore Plug-ins page allows you to configure which plug-ins should be automatically restored after a Media Player crash. The Restore Media at Startup page allows you to configure Media Player to resume playing the last playlist, track, and even whether it was playing or paused at the time the application was closed. So, if you close out in the middle of a song, it will begin playing from that point the next time you open Media Player. You can also set Media Player to rewind a certain number of seconds from where you left off. This is especially useful if you are in the middle of watching a movie. There’s also the option to have your currently playing song sent to Windows Live Messenger. You can access the settings at any time by going to Tools, Plug-in properties, and selecting Windows Media Player Plus. Windows Media Plus is a nice little free plug-in for WMP 11 and 12 that brings a lot of additional functionality to Windows Media Player. If you use Media Player 11 or WMP 12 in Windows 7 as your main player, you might want to give this a try. Download Windows Media Player Plus! 

    Read the article

  • TestDriven.Net 3.0 – All Systems Go

    - by Jamie Cansdale
    I’m pleased to announce that TestDriven.Net 3.0 is now available. Finally! I know many of you will already be using the Beta and RC versions, but if you look at the release notes you’ll see there have been many refinements since then, so I highly recommend you install the RTM version. Here is a quick summary of a few new features:
    Visual Studio 2010 supports targeting multiple versions of the .NET framework (multi-targeting). This means you can easily upgrade your Visual Studio 2005/2008 solutions without necessarily converting them to use .NET 4.0. TestDriven.Net will execute your tests using the .NET version your test project is targeting (see ‘Properties > Application > Target framework’).
    There is now first-class support for MSTest when using Visual Studio 2008 & 2010. Previous versions of TestDriven.Net had support for a limited number of MSTest attributes. This version supports virtually all MSTest unit testing related attributes, including support for deployment item and data driven test attributes. You should also find this test runner is quick. ;)
    There is a new ‘Go To Test/Code’ command on the code context menu. You can think of this as Ctrl-Tab for test driven developers; it will quickly flip back and forth between your tests and the code under test. I recommend assigning a keyboard shortcut to the ‘TestDriven.NET.GoToTestOrCode’ command.
    NCover can now be used for code coverage on .NET 4.0. This is only officially supported since NCover 3.2 (your mileage may vary if you’re using the 1.5.8 version).
    Rather than clutter the ‘Output’ window, ignored or skipped tests will be placed on the ‘Task List’. You can double-click on these items to navigate to the offending test (or assign a keyboard shortcut to ‘View.NextTask’).
    If you’re using a Team, Premium or Ultimate edition of Visual Studio 2005-2010, a new ‘Test With > Performance’ command will be available. This command will perform instrumented performance profiling on your target code.
    A particular focus of this version has been to make it more keyboard friendly. Here’s a list of commands you will probably want to assign keyboard shortcuts to (command – what it does – default shortcut – what I use):
    TestDriven.NET.RunTests – Run tests in context – no default – Alt + T
    TestDriven.NET.RerunTests – Repeat test run – no default – Alt + R
    TestDriven.NET.GoToTestOrCode – Flip between tests and code – no default – Alt + G
    TestDriven.NET.Debugger – Run tests with debugger – no default – Alt + D
    View.Output – Show the ‘Output’ window – default Ctrl + Alt + O
    Edit.BreakLine – Edit code in stack trace – default Enter
    View.NextError – Jump to next failed test – default Ctrl + Shift + F12
    View.NextTask – Jump to next skipped test – no default – Alt + S
    By default the ‘Output’ window will automatically activate when there is test output or a failed test (this is an option). The cursor will be positioned on the stack trace of the last failed test, ready for you to hit ‘Enter’ to jump to the fail point or ‘Esc’ to return to your source (assuming your ‘Output’ window is set to auto-hide). If your ‘Output’ window isn’t set to auto-hide, you’ll need to hit ‘Ctrl + Alt + O’ then ‘Enter’. Alternatively you can use ‘Ctrl + Shift + F12’ (View.NextError) to navigate between all failed tests. For more frequent updates or to give feedback, you can find me on twitter here. I hope you enjoy this version. Let me know how you get on. :)
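
    To make the MSTest support concrete, here is a minimal, hypothetical test fixture of the kind the new runner handles (the class and method names are invented for illustration, and the context-menu wording may differ slightly in your Visual Studio edition):

      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class CalculatorTests
      {
          [TestMethod]
          public void Adds_Two_Numbers()
          {
              // Right-click in this file and run the test in context -
              // TestDriven.Net detects the framework (MSTest here) automatically.
              Assert.AreEqual(8, 3 + 5);
          }

          // Attributes such as [DeploymentItem] and the data-driven test attributes
          // are also recognised by this version of the runner.
      }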

    Read the article

  • Visual Studio App.config XML Transformation

    - by João Angelo
    Visual Studio 2010 introduced a much-anticipated feature, web configuration transformations. This feature allows you to configure a web application project to transform the web.config file during deployment based on the current build configuration (Debug, Release, etc.). If you haven’t already tried it, there is a nice step-by-step introduction post to XML transformations on the Visual Web Developer Team Blog, and for a quick reference on the supported syntax you have this MSDN entry. Unfortunately there is some bad news: this new feature is specific to web application projects, since it resides in the Web Publishing Pipeline (WPP), and therefore is not officially supported in other project types such as Windows applications. The keyword here is officially, because Vishal Joshi has a nice blog post on how to extend its support to app.config transformations. However, the proposed workaround requires that the build action for the app.config file be changed to Content instead of the default None. Also, from the comments to the said post, it seems that the workaround will not work for a ClickOnce deployment. Working around this I tried to remove the build action change requirement and at the same time add ClickOnce support. This effort resulted in a single MSBuild project file (AppConfig.Transformation.targets) available for download from GitHub. It integrates itself into the build process, so in order to add app.config transformation support to an existing Windows application project you just need to import this targets file after all the other import directives that already exist in the *.csproj file.
    Before – Without App.config transformation support
    ... <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> <Target Name="BeforeBuild"> </Target> <Target Name="AfterBuild"> </Target> </Project>
    After – With App.config transformation support
    ... <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> <Import Project="C:\MyExtensions\AppConfig.Transformation.targets" /> <Target Name="BeforeBuild"> </Target> <Target Name="AfterBuild"> </Target> </Project>
    As a final disclaimer, the testing time was limited, so if you find any problem let me know. The MSBuild project invokes the mage tool, so the Framework SDK must be installed. Update: I finally had some spare time and was able to check the problem reported by Geoff Smith, and I believe the problem is solved. The Publish command inside Visual Studio triggers a build workflow different from the one triggered through the MSBuild command line, and this was causing problems. I posted a new version on GitHub that should now support ClickOnce deployment with app.config transformation from within Visual Studio and the MSBuild command line. Also, here is a link to the sample application used to test the new version using the Publish command, with the install location set to be from a CD-ROM or DVD-ROM and the option selected that the application will not check for updates. Thanks to Geoff for spotting the problem.
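
    For readers who haven’t seen the transform syntax the post refers to, here is a minimal, illustrative App.Release.config sketch (the appSettings key is invented for the example); it uses the same XDT syntax as web.config transforms:

      <?xml version="1.0"?>
      <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
        <appSettings>
          <!-- Replace the value of the matching key when building in Release -->
          <add key="Environment" value="Production"
               xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
        </appSettings>
      </configuration>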

    Read the article

  • Tulsa SharePoint Interest Group - How SharePoint 2010 Business Connectivity Services could change your life

    - by dmccollough
    Bio: Corey Roth is a consultant at Stonebridge specializing in SharePoint solutions in the Oil & Gas industry. He has ten-plus years of experience delivering solutions in the energy, travel, advertising and consumer electronics verticals. Corey has always focused on rapid adoption of new Microsoft technologies including Visual Studio 2010, SharePoint 2010, .NET Framework 4.0, LINQ, and Silverlight. He also contributed greatly to the beta phases of Visual Studio 2005. For his contributions, he was awarded the Microsoft Award for Customer Excellence (ACE). Corey is a graduate of Oklahoma State University. Corey is a member of the .NET Mafia (www.dotnetmafia.com) where he blogs about the latest technology and SharePoint.
    Abstract: How SharePoint 2010 Business Connectivity Services could change your life - The New BDC. How many hours have you wasted building simple ASP.NET applications to do nothing more than simple CRUD operations against a database? Many tools have made this easier, but now it's so easy, you'll be up and running in minutes. This session will show you how easy it is to get started integrating external data from your line-of-business systems in SharePoint 2010. You will learn how to register an external content type using SharePoint Designer based upon a database table or web service and then build an external list. With external lists, you will see how you can perform CRUD operations on your line-of-business data directly from SharePoint without ever having to do manual configuration in XML files. Finally, we will walk through how to create custom edit forms for your list using InfoPath 2010.
    Agenda:
    6:00 - 6:30 Pizza and Mingle - Sponsored by TekSystems
    6:30 - 6:45 Announcements
    6:45 - 7:45 Presentation!
    7:45 - 8:00 Drawings and Door Prizes
    Location: TCC (Tulsa Community College) Northeast Campus, 3727 East Apache, Tulsa, OK 74115, 918-594-8000. Campus Map | Live | Yahoo | Google | MapQuest
    Door Prizes: We will be giving away one of each of these:
    XBox 360 - Halo 3 ODST
    Telerik Premium Collection ($1300.00 value)
    ReSharper ($199.00 value)
    SQLSets ($149.00 value)
    64 bit Windows 7
    Introducing Windows 7 for Developers
    Developing Service-Oriented AJAX Applications on the Microsoft Platform
    Sponsors: Thanks to our sponsors:
    TekSystems - Thanks for purchasing the pizza for our meetings.
    ISOCentric - Thanks for providing us hosting for the group's web site.
    Tulsa Community College - Thanks for providing us a place to have our meetings.
    NEVRON - Thanks for providing us prizes to give away.
    INETA.org - For allowing us to be a Charter Member and providing awesome speakers!
    PERPETUUM Software - Thanks for providing us prizes to give away.
    Telerik - Thanks for providing us prizes to give away.
    GrapeCity - Thanks for providing us prizes to give away.
    SQLSets - Thanks for providing us prizes to give away.
    K2 - Thanks for providing us prizes to give away.
    Microsoft - For providing us with a lot of support and product giveaways!
    O'Reilly books - For providing us with books and discounts.
    Wrox books - For providing us with books and discounts.
    Have any special requests? Let us know at this link: http://tinyurl.com/lg5o38. RSVP for this month's meeting by responding to this thread: http://tinyurl.com/yafkzel (must be logged in to the site). Be SURE to RSVP no later than noon on April 12th and you will get an extra entry for the prize drawings! So, do it now, before you forget and miss out! Show up for the first time or bring a new buddy and you both get TWO extra entries!

    Read the article

  • The Mysterious ARR Server Farm to URL Rewrite link

    - by OWScott
    Application Request Routing (ARR) is a reverse proxy plug-in for IIS7+ that does many things, including functioning as a load balancer.  For this post, I’m assuming that you already have an understanding of ARR.  Today I wanted to find out how the mysterious link between ARR and URL Rewrite is maintained.  Let me explain… ARR is unique in that it doesn’t work by itself.  It sits on top of IIS7 and uses URL Rewrite.  As a result, ARR depends on URL Rewrite to ‘catch’ the traffic and redirect it to an ARR Server Farm. As the last step of creating a new Server Farm, ARR will prompt you with the following: If you accept the prompt, it will create a URL Rewrite rule for you.  If you say ‘No’, then you’re on your own to create a URL Rewrite rule. When you say ‘Yes’, the Server Farm’s checkbox for “Use URL Rewrite to inspect incoming requests” will be checked.  See the following screenshot. However, I’m not a fan of this auto-rule.  The problem is that if I make any changes to the URL Rewrite rule, which I always do, and then make the wrong change in ARR, it will blow away my settings.  So, I prefer to create my own rule and manage it myself. Since I had some old rules that were managed by ARR, I wanted to update them so that they were no longer managed that way.  I took a look at the config in applicationHost.config to try to find out what property would bind the two together.  I assumed that there would be a property on the ServerFarm called something like urlRewriteRuleName that would serve as the link between ARR and URL Rewrite.  I found no such property.  After a bit of testing, I found that the name of the URL Rewrite rule is the only link between ARR and URL Rewrite.  I wouldn’t have guessed.  The URL Rewrite rule needs to be named exactly ARR_{ServerFarm Name}_loadBalance, although it’s not case sensitive. Consider the following auto-created URL Rewrite rule: And, the link between ARR and URL Rewrite exists: Now, as soon as I rename that to anything else, for example, site.com ARR Binding, the link between ARR and URL Rewrite is broken. To be certain of the relationship, I renamed it back again and sure enough, the relationship was reestablished. Why is this important?  It’s only important if you want to decouple the relationship between ARR and the URL Rewrite rule, but if you want to do so, the best way to do that is to rename the URL Rewrite rule.  If you uncheck the “Use URL Rewrite to inspect incoming requests” checkbox, it will delete your rule for you without prompting.
    Conclusion
    The mysterious link between ARR and URL Rewrite only exists through the ARR rule name.  If you want to break the link, simply rename the URL Rewrite rule.  It’s completely safe to do so, and, in my opinion, this is a rule that you should manage yourself anyway.
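
    As a purely illustrative sketch (the farm name and wildcard pattern are invented here, and the real ARR-generated rule may carry extra conditions), the naming convention looks like this in a URL Rewrite rule inside applicationHost.config:

      <rewrite>
        <globalRules>
          <!-- The rule name "ARR_myfarm_loadBalance" is the only thing tying
               this rule to the server farm named "myfarm" -->
          <rule name="ARR_myfarm_loadBalance" patternSyntax="Wildcard" stopProcessing="true">
            <match url="*" />
            <action type="Rewrite" url="http://myfarm/{R:0}" />
          </rule>
        </globalRules>
      </rewrite>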

    Read the article

  • Looking into ASP.Net MVC 4.0 Mobile Development - part 1

    - by nikolaosk
    In this post I will be looking at how ASP.NET MVC 4.0 helps us to create web solutions that target mobile devices. We all experience the magic that is the World Wide Web through mobile devices: millions of people around the world use tablets and smartphones to view the contents of websites, e-shops and portals. ASP.NET MVC 4.0 includes a new mobile project template and the ability to render a different set of views for different types of devices. There is also a new feature called browser overriding which allows us to control exactly what a user is going to see from our web application regardless of what type of device they are using. In order to follow along with this post you must have Visual Studio 2012 and .NET Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. It will work fine on Windows 7 as well, so do not worry if you do not have the latest Microsoft operating system.
    1) Launch VS 2012 and create a new application by going to File -> New Project -> ASP.Net MVC 4 Web Application and then click OK. Have a look at the picture below.
    2) From the available templates select Mobile Application and then click OK. Have a look at the picture below.
    3) When I run the application I get the mobile view of the page. I would also like to show you what a typical ASP.Net MVC 4.0 application looks like, so I will create a new simple ASP.Net MVC 4.0 Web Application. When I run that application I get the normal page view. Have a look at the picture below: on the left is the mobile view and on the right the normal view. As you can see we have more or less the same content in our mobile application (log in, register) compared with the normal ASP.Net MVC 4.0 application, but it is optimised for mobile devices.
    4) Let me explain how and when the mobile view is selected and finally rendered. There is a feature in MVC 4.0 called Display Modes, and with this feature the runtime will select a view. If we have two views, e.g. contact.mobile.cshtml and contact.cshtml, in our application, the controller at some point will instruct the runtime to select and render a view named contact. The runtime will look at the browser making the request and determine whether it is a mobile browser or a desktop browser. So if there is a request from my iPhone Safari browser for a particular site, and a mobile view exists, MVC 4.0 will select it and render it. If there is no mobile view, the normal view will be rendered.
    5) In the ASP.Net MVC 4.0 (Internet Application) project I created earlier (not the first project, which was a mobile one), I can run it once more and see how it looks in the browser. If I want to view it with a mobile browser I must download an emulator like Opera Mobile. You can download Opera Mobile here. When I run the application I get the same view in both the desktop and the mobile browser. That was to be expected. Have a look at the picture below.
    6) Then I create a mobile version of the layout view, _Layout.mobile.cshtml, in the Shared folder. I simply copy and paste _Layout.cshtml into the same folder, rename it to _Layout.mobile.cshtml and then just alter its contents. When I run the application again I get a different view in the desktop browser and a different one in the Opera Mobile browser. Have a look at the picture below.
    The controller will instruct the ASP.NET runtime to select and render a view named _Layout.mobile.cshtml when the request comes from a mobile browser. The runtime knows that a browser is a mobile one through the ASP.NET browser capability provider. Hope it helps!!!
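
    To connect this to code, here is a small sketch of how a custom display mode can be registered in Application_Start (the "iPhone" suffix and the user-agent check are just an example of mine, not something from the original post); the built-in mode that powers the *.mobile.cshtml convention works along similar lines:

      using System;
      using System.Web.WebPages;

      public class MvcApplication : System.Web.HttpApplication
      {
          protected void Application_Start()
          {
              // Views named *.iPhone.cshtml are picked when the condition matches;
              // otherwise MVC falls back to *.mobile.cshtml and then the default view.
              DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("iPhone")
              {
                  ContextCondition = context =>
                      context.GetOverriddenUserAgent() != null &&
                      context.GetOverriddenUserAgent().IndexOf(
                          "iPhone", StringComparison.OrdinalIgnoreCase) >= 0
              });

              // ... the rest of the usual Application_Start registration code goes here
          }
      }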

    Read the article

  • Screen Scraping Twitter

    - by BRADINO
    I got an email today asking for help to scrape Twitter, in particular, to be able to log in. So I am going to show everyone, NOT to encourage anyone to violate Twitter's terms of use, but as an educational blog post about how PHP and cURL can be used to post variables and store cookies. Again, I am using the cScrape class I wrote, which you can download.
    Step 1: First go to twitter.com and look at the source code of the login form to get the form field names and the form post location. You will see that the form posts to https://twitter.com/session and that the username and password fields are session[username_or_email] and session[password] respectively.
    Step 2: Now you are ready to log in. Using the fetch function in the Scrape class, you create an associative array to contain the form values you want to post. The other thing you will need to do is uncomment the lines for CURLOPT_COOKIEFILE and CURLOPT_COOKIEJAR. Cookies will be required to stay logged in and scrape around. The paths to the cookie files need to be writable by your app. Also you will need to uncomment the line about CURLOPT_FOLLOWLOCATION. $data = array('session[username_or_email]' => "bradino", 'session[password]' => "secret"); $scrape->fetch('https://twitter.com/sessions',$data);
    Step 1.5: Oops, that didn't work. All I got back was 403 Forbidden: The server understood the request, but is refusing to fulfill it. Ahhh, I see another variable called authenticity_token; I bet Twitter was looking for that. So let's back up and first hit twitter.com to get the authenticity_token variable, and then make the login post request with that variable included in our array of parameters. $scrape->fetch('https://twitter.com'); $data = array('session[username_or_email]' => "bradino", 'session[password]' => "secret"); $data['authenticity_token'] = $scrape->fetchBetween('name="authenticity_token" type="hidden" value="','"',$scrape->result); $scrape->fetch('https://twitter.com/sessions',$data); echo $scrape->result;
    So that's basically it. Now you are logged in and can scrape around and request other pages as you normally would. Sorry it wasn't a longer post. I really do enjoy this kind of stuff, so if anyone has a request, hit me up.
    Errors?
    1) Make sure that you are properly parsing the token variable.
    2) Make sure that you uncommented the lines about CURLOPT_COOKIEFILE and CURLOPT_COOKIEJAR; those options need to be enabled, and be sure the path set is writable by your application.
    3) Make sure that the path to the cookie file is writable and that it is getting data written to it.
    4) If you get a message about being redirected you need to uncomment the line about CURLOPT_FOLLOWLOCATION; that option needs to be set to true.

    Read the article

  • PowerShell Control over Nikon D3000 Camera

    My wife got me a Nikon D3000 camera for Christmas last year, and I'm loving it but still trying to wrap my head around some of its features.  For instance, when you plug it into a computer via USB, it doesn't show up as a drive like most cameras I'm used to, but rather it shows up as Computer\D3000.  After a bit of research, I've learned that this is because it implements the MTP/PTP protocol, and thus doesn't actually let Windows mount the camera's storage as a drive letter.  Nikon describes the use of the MTP and PTP protocols in their cameras here. What I'm really trying to do is gain access to the camera's file system via PowerShell.  I've been using a very handy PowerShell script to pull pictures off of my cameras and organize them into folders by date.  I'd love to be able to do the same thing with my Nikon D3000, but so far I haven't been able to figure out how to get access to the files in PowerShell.  If you know, I'd appreciate any links/tips you can provide.  All I could find is a shareware product called PTPdrive, which I'm not prepared to shell out money for (yet).  (And yes, you can do much the same thing with Windows 7's Import Pictures and Videos wizard, which is pretty good too.) However, in my searching, I did find some really cool stuff you can do with PowerShell and one of these cameras, like actually taking pictures via PowerShell commands.  Credit for this goes to James O'Neill and Mark Wilson.  Here's what I was able to do:
    Taking Pictures via PowerShell with the D3000
    First, connect your camera, turn it on, and launch PowerShell.  Execute the following commands to see what commands your device supports.
    $dialog = New-Object -ComObject "WIA.CommonDialog"
    $device = $dialog.ShowSelectDevice()
    $device.Commands
    You should see something like this: Now, to take a picture, simply point your camera at something and then execute this command:
    $device.ExecuteCommand("{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}")
    Imagine my surprise when this actually took a picture (with auto-focus). Imagine what you could do with a camera completely under the control of your computer.  Time-lapse photography would be pretty simple, for instance, with a very simple loop that takes a picture and then sleeps for a minute (or whatever time period) – see the sketch below.  Hooked up to a laptop for portability (and an A/C power supply), this would be pretty trivial to implement.  I may have to give it a shot and report back.
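
    Here is a rough sketch of such a time-lapse loop, reusing the $device object obtained above (the shot count, the interval, and the assumption that the same WIA command GUID keeps working between shots are untested guesses on my part):

      # Take 60 shots, one per minute - adjust to taste
      $takePicture = "{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}"   # WIA 'take picture' command GUID from above
      for ($i = 1; $i -le 60; $i++) {
          $device.ExecuteCommand($takePicture) | Out-Null       # snap a photo
          Write-Host "Took picture $i of 60"
          Start-Sleep -Seconds 60                               # wait a minute before the next one
      }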

    Read the article

  • How to prepare for a telephone interview: ‘Develop an Interview Cheat Sheet’

    - by Maria Sandu
    At Oracle we often do telephone interviews at different stages of the process with candidates, due to the fact that we hire native speakers into other countries. On this blog we already have an article with tips and tricks for phone interviews that can help you during telephone interviews. To help you prepare even better for a telephone interview, we would like to introduce you to the basics of developing a cheat sheet. The benefit of a telephone interview is that you will be sitting at home, at your table or desk, during the interview, and not in front of someone. So use this to your advantage. The Monster website has some useful and interesting tips and tricks for developing a cheat sheet. Carole Martin, who wrote this article, says that a cheat sheet will help you feel more prepared and confident when speaking to managers over the phone. It is important to keep in mind that you shouldn't memorise what's on the sheet or check it off during the interview. Only use your cheat sheet to remind you of key facts. Here are some suggestions to include on it:
    • Divide a piece of paper in two by drawing a line. On one side of the paper, write a list of the requirements mentioned in the job description. On the other side, list your qualities that fulfill the requirements of the employer. This will help you in answering questions about why you are the best candidate for the job and how you fit the role.
    • Do research on the company, the industry sector and the competitors, so you will get a feeling for the company’s business and can ask more in-depth questions.
    • Be prepared for the most used introduction question: “Tell me a bit about yourself”. Prepare a 60-second personal statement or pitch in which you summarise who you are and what you can offer, so you will be able to sell yourself from the very beginning.
    • Write down a minimum of five good examples to answer behavioural interview questions ("Tell me about a time when..." or "Give me an example of a time..."). These questions are used by interviewers to see how you deal with situations similar to those you might encounter in the job. Interviewers use this type of question because past behaviour is scientifically proven to be the best predictor of future behaviour.
    • List five questions to ask the interviewer about the job, the company and the industry to help you get a good understanding of whether the role and company really fit your needs and wants. To get some inspiration check this article on inc.com.
    • Find out how much you are worth on the job market and determine your needs based on your living expenses, especially when moving abroad.
    • Ask for permission from the people you plan to use as references. Also make sure you have your CV at hand and an overview of your grades.
    Feel free to comment on this article and let us know what your experience is with developing a cheat sheet for a telephone interview. Good luck with the preparation of your sheet.

    Read the article

  • ASP.NET JavaScript Routing for ASP.NET MVC–Constraints

    - by zowens
    If you haven’t had a look at my previous post about ASP.NET routing, go ahead and check it out before you read this post: http://weblogs.asp.net/zowens/archive/2010/12/20/asp-net-mvc-javascript-routing.aspx And the code is here: https://github.com/zowens/ASP.NET-MVC-JavaScript-Routing
    Anyway, this post is about routing constraints. A routing constraint is essentially a way for the routing engine to filter out route patterns based on the data from the URL. For example, if I have a route where all the parameters are required, I could use a constraint on the required parameters to say that the parameter is non-empty. Here’s what the constraint would look like: Notice that this is a class that inherits from IRouteConstraint, which is an interface provided by System.Web.Routing. The Match method returns true if the value is a match (and can be further processed by the routing rules) or false if it does not match (and the route will be matched further along the route collection). Because routing constraints are so essential to the route matching process, it was important that they be part of my JavaScript routing engine. But the problem is that we need to somehow represent the constraint in JavaScript. I made a design decision early on that you MUST put this constraint into JavaScript to match a route. I didn’t want to have server interaction for the URL generation, like I’ve seen in so many applications; while that approach is easy to implement, it causes maintenance issues in my opinion. The way constraints work in JavaScript is that the constraint, as an object type definition, is set on the route manager. When a route is created, a new instance of the constraint is created with the specific parameter. In its current form the constraint function MUST return a function that takes the route data and will return true or false. You will see the NotEmpty constraint in a bit. Another piece of the puzzle is that the JavaScript can exist as a string in your application that is pulled in when the routing JavaScript code is generated. There is a simple interface, IJavaScriptAddition, that I have added that will be used to output custom JavaScript. Let’s put it all together. Here is the NotEmpty constraint. There are a few things at work here. The constraint is called “notEmpty” in JavaScript. When you add the constraint to a parameter in your C# code, the route manager generator will look for the JsConstraint attribute to find the name of the constraint type and fall back to the class name. For example, if I didn’t apply the “JsConstraint” attribute, the constraint would be called “NotEmpty”. The JavaScript code essentially adds a function to the “constraintTypeDefs” object on the “notEmpty” property (this is how constraints are added to routes). The function returns another function that will be invoked with the routing data. Here’s how you would use the NotEmpty constraint in C#, and it will work with the JavaScript routing generator. The only catch to using route constraints currently is that the following is not supported: the constraint will work in C# but is not supported by my JavaScript routing engine. (I take pull requests, so if you’d like this… go ahead and implement it.)
    I just wanted to take this post to explain a little bit about the background on constraints. I am looking at expanding the current functionality, but for now this is a good start. Thanks for all the support with the JavaScript router. Keep the feedback coming!
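
    The post’s own NotEmpty code was shown as an image in the original; as a stand-in sketch only (the method body and naming here are my reconstruction, not the author’s exact code), a constraint implementing IRouteConstraint looks roughly like this:

      using System.Web;
      using System.Web.Routing;

      // Checks that a route parameter has a non-empty value.
      public class NotEmptyConstraint : IRouteConstraint
      {
          public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                            RouteValueDictionary values, RouteDirection routeDirection)
          {
              object value;
              if (!values.TryGetValue(parameterName, out value) || value == null)
                  return false;

              // A non-empty string counts as a match
              return !string.IsNullOrEmpty(value.ToString());
          }
      }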

    Read the article

  • SQLAuthority News – Author Visit – SQL Server 2008 R2 Launch

    - by pinaldave
    June 11, 2010 was a wonderful day because I attended the very first SQL Server 2008 R2 launch event held by Microsoft in Mumbai. I traveled to Mumbai from my home town, Ahmedabad. The event was located at one of the best hotels in Mumbai, “The Leela”. The SQL Server R2 launch was an evening event that had a few interesting talks. SQL PASS is associated with this event as one of the partners, and its goal is to increase the awareness of the community about SQL Server. I met many interesting people and had a great networking opportunity at the event. The event kicked off with an awesome laser show and a “Welcome” video, which was followed by a Microsoft executive session in which there were several interesting demos. The very first demo was about PowerPivot. I knew beforehand that there would be PowerPivot demos because it is a very popular subject; however, I was really hoping to see other interesting demos from SQL Server 2008 R2. And believe me, I was happier to see the later demos. There were demos of SQL Server Utility Control Point, as well as an integration of Bing Maps with Reporting Services. I really enjoyed the interactive and informative session by Shivaram Venkatesh. He had excellent presentation skills as well as ample technical knowledge to keep the audience attentive. I really liked his presentation style: he did not read the whole slide deck; rather, he picked one point and, using that point, told the story of the whole slide deck. I also enjoyed my conversation with Afaq Choonawala, who is one of the “gem guys” in Microsoft. I also want to acknowledge Ashwin Kini and Mohit Panchal for their excellent support for this event. Mumbai IT Pro is a user group which you can really count on for any kind of help. After the excellent demos and a vibrant start to the event, all the audience was jazzed up. There were two vendor sessions right after the first session. Intel had 15 minutes to present; however, Intel’s representative, who had good knowledge of the subject, had nearly 30+ slides in his presentation, so he had to rush a bit to cover the whole slide deck. The Intel presentation was followed by another vendor presentation from NetApp. I had previously heard about this tool. After I saw the demo, which did not work the first time the NetApp presenter demonstrated it, I started to have doubts about this product. I personally went to the demo booth to clarify my doubts after the presentation was over, but I realized the NetApp presenter, or booth owner, had absolutely POOR KNOWLEDGE of SQL Server and even of their own NetApp product. The NetApp people tried to misguide us, and when we argued, they started to say different things against what they had said earlier. At one point in their presentation, they claimed their application does something very fast, which did not really happen in front of all the audience. They blamed the SQL Server R2 DBCC CHECKDB command for their product’s failed demonstration. I know that NetApp has many great products; however, this one was not conveyed clearly and even created a negative impression on all of us. Well, let us not judge the potential, fun, education and enigma of the launch event through a small glitch. This event was jam-packed and extremely well received by everybody who attended it. As I said, the demos and good presentations by the Microsoft folks were really something to cheer about.
    Any launch event is considered successful if it achieves its goal of exciting users with its cutting-edge technology, just like this event, which left a very deep impression on me. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology Tagged: PASS, SQLPASS

    Read the article

  • Surface Review from Canadian Guy Who Didn’t Go To Build

    - by D'Arcy Lussier
    I didn’t go to Build last week, opting to stay home and go trick-or-treating with my daughters instead. I had many friends who did go, however, and I was able to catch up with James Chambers last night to hear about the conference and play with his Surface RT and Nokia 920 WP8 devices. I’ve been using Windows 8 for a while now, so I’m not going to comment on OS features – lots of posts out there on that already. Let me instead comment on the hardware itself.
    Size and Weight
    The size of the tablet was awesome. The Windows 8 tablets I’m comparing it against are the Samsung model we received at Build 2011 and my iPad. The Surface RT was taller and slightly heavier than the iPad, but smaller and lighter than the Samsung Win 8 tablet. I still don’t prefer the default wide-screen format, but the Surface RT is much more usable than the Samsung even when holding it by the long edge.
    Build Quality
    No issues with the build quality; it seemed very solid. But… y’know, people have been going on about how the Surface RT materials are so much better than the plastic-feeling models Samsung and others put out. I didn’t really notice *that* much difference in that regard with the Surface RT. Interesting feature I didn’t expect – the Windows button on the device is touch-sensitive, not a mechanical one. I didn’t try video or anything, so I can’t comment on the media experience. The kickstand is a great feature, and the way the Surface RT connects to the combo case/keyboard touch cover is very slick while being incredibly simple.
    What About That Touch Cover Keyboard?
    So first, kudos to Microsoft on the Touch Cover! This thing was insanely responsive (including the trackpad) and really delivered on the thinness I was expecting. With that said, and remember this is with very limited use, I would probably go with the Type Cover instead of the Touch Cover. The difference is buttons. The Touch Cover doesn’t actually have “buttons” on the keyboard – hence why it’s a “touch” cover. You tap on a key to type it. James tells me that after a while you get used to it and you can type very fast. For me, I just prefer the tactile feel of a button being pressed and released. But still – typing on the Touch Cover worked very well.
    Would I Buy One?
    So after playing with it, did I cry out in envy and rage that I wasn’t able to get one of these machines? Did I curse my decision to collect Halloween candy with my kids instead of being at Build getting hardware? Well – no. Even with the keyboard, the Surface RT is not a business laptop replacement device. While Office does come included, you can’t install any applications other than Windows Store apps. This might be limiting depending on what other applications you need to have available on your computer. Surface RT is a great personal computing device, as long as you’re not already invested in a competing ecosystem. I’ve heard people say they’re going to replace all the iPads in their homes with Surface tablets. In my home, that’s not feasible – my wife and daughters have amassed quite a collection of games via iTunes. We also buy all our music via iTunes, so even with the Xbox streaming music service now available we’re still tied quite tightly to iTunes. So who is the Surface RT for? In my mind, if you’re looking for a solid, compact device that provides basic business functionality (read: email), or if you have someone who needs a very simple-to-use computer for email, web browsing, etc., then Surface RT is a great option.
    For me, I’m waiting on the Samsung Ativ Smart PC Pro and am curious to see what changes the Surface Pro will bring.

    Read the article

  • Redaction in AutoVue

    - by [email protected]
    As the trend to digitize all paper assets continues, so does the push to digitize all the processes around those assets. One such process is redaction – removing sensitive or classified information from documents. While for some this may conjure up thoughts of old CIA documents filled with nothing but blacked-out pages, there are actually many uses for redaction today beyond military and government. Many companies need to remove names, phone numbers, social security numbers, credit card numbers, etc. from documents that are being scanned in and/or released to the public or to less privileged users – insurance companies, banks and legal firms are a few examples. The process of digital redaction actually isn't that far from the old paper method:
    Step 1. Find a folder with a big red stamp on it labeled "TOP SECRET"
    Step 2. Make a copy of that document, since some folks still need to access the original contents
    Step 3. Black out the text or pages you want to hide
    Step 4. Release or distribute this new 'redacted' copy
    So where does a solution like AutoVue come in? Well, we've really been doing all of these things for years!
    1. With AutoVue's VueLink integration and iSDK, we can integrate to virtually any content management system and view documents of almost any format with a single click. Finding the document and opening it in AutoVue: CHECK!
    2. With AutoVue's markup capabilities, adding filled boxes (or other shapes) around certain text is a no-brainer. You can even leverage AutoVue's APIs to automate the addition of markups over certain text or pre-defined regions. Black out the text you want to hide: CHECK!
    3. With AutoVue's conversion capabilities, you can 'burn in' the comments into a new file, either as a TIFF, JPEG or PDF document. Burning in the redactions avoids slip-ups like the recent (well-publicized) TSA one. Through our tight integrations, the newly created copies can be checked directly into the content management system with no manual intervention. Make a copy of that document: CHECK!
    4. Again, leveraging AutoVue's integrations, we can now define rules in the system based on a user's privileges. An 'authorized' user wishing to view the document from the repository will get exactly that – no redactions. An 'unauthorized' user, when requesting to view that same document, can be redirected to open the redacted copy of the same document. Release or distribute the new 'redacted' copy: CHECK!
    See this movie (WMV format, 2 mins 20 secs, no audio) for a quick illustration of AutoVue's redaction capabilities. It shows how redactions can be added based on text searches, manual input or pre-defined templates/regions. Let us know what you think in the comments. And remember – this is all in our flagship AutoVue product – no additional software required!

    Read the article

  • What are developers' problems with helpful error messages?

    - by Moo-Juice
    It continues to astound me that, in this day and age, products that have years of use under their belt, built by teams of professionals, still fail to provide helpful error messages to the user. In some cases, the addition of just a little piece of extra information could save a user hours of trouble.
    A program that generates an error generated it for a reason. It has everything at its disposal to tell the user as much as it can about why something failed. And yet it seems that providing information to aid the user is a low priority. I think this is a huge failing.
    One example is from SQL Server. When you try to restore a database that is in use, it quite rightly won't let you. SQL Server knows which processes and applications are accessing it. Why can't it include information about the process(es) that are using the database? I know not everyone passes an Application_Name attribute on their connection string, but even a hint about the machine in question could be helpful.
    Another candidate, also SQL Server (and MySQL), is the lovely "string or binary data would be truncated" error message and its equivalents. A lot of the time, a simple perusal of the generated SQL statement and the table shows which column is the culprit. This isn't always the case, and if the database engine picked up on the error, why can't it save us that time and just tell us which damned column it was? On this example, you could argue that there may be a performance hit to checking it and that this would slow down the write. Fine, I'll buy that. How about this: once the database engine knows there is an error, it does a quick after-the-fact comparison between the values that were going to be stored and the column lengths, then displays that to the user.
    ASP.NET's horrid Table Adapters are also guilty. Queries can be executed and one can be given an error message saying that a constraint somewhere is being violated. Thanks for that. Time to compare my data model against the database, because the developers were too lazy to provide even a row number or example data. (For the record, I'd never use this data-access method by choice; it's just a project I have inherited!)
    Whenever I throw an exception from my C# or C++ code, I provide everything I have at hand to the user. The decision has been made to throw it, so the more information I can give, the better. Why did my function throw an exception? What was passed in, and what was expected? It takes me just a little longer to put something meaningful in the body of an exception message. Hell, it does nothing but help me whilst I develop, because I know my code throws things that are meaningful.
    One could argue that complicated exception messages should not be displayed to the user. Whilst I disagree with that, it is an argument that can easily be appeased by having a different level of verbosity depending on your build. Even then, the users of ASP.NET and SQL Server are not your typical users; they would prefer something full of verbosity and yummy information because they can track down their problems faster.
    Why do developers think it is okay, in this day and age, to provide the bare minimum amount of information when an error occurs? It's 2011, guys. Come on.
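    To make the point concrete, here is a minimal C# sketch of the kind of message being argued for. The validation scenario, column name and length limit are invented for illustration only; the point is simply that the code already knows the details, so the exception should carry them.
        using System;

        public static class LengthGuard
        {
            // Hypothetical helper: 'columnName' and 'maxLength' are illustrative, not from any real schema.
            public static void EnsureFits(string columnName, string value, int maxLength)
            {
                if (value != null && value.Length > maxLength)
                {
                    // Name the column, the actual length, the limit and a preview of the value,
                    // instead of a bare "string or binary data would be truncated".
                    throw new ArgumentException(string.Format(
                        "Value for column '{0}' is {1} characters long but the column only allows {2}. " +
                        "Offending value starts with: '{3}'",
                        columnName, value.Length, maxLength,
                        value.Substring(0, Math.Min(20, value.Length))));
                }
            }
        }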

    Read the article

  • Distinctly LINQ – Getting a Distinct List of Objects

    - by David Totzke
    Let’s say that you have a list of objects that contains duplicate items and you want to extract a subset of distinct items. This is pretty straightforward in the trivial case where the duplicate objects are considered the same, such as in the following example:
    List<int> ages = new List<int> { 21, 46, 46, 55, 17, 21, 55, 55 };
    IEnumerable<int> distinctAges = ages.Distinct();
    Console.WriteLine("Distinct ages:");
    foreach (int age in distinctAges)
    {
        Console.WriteLine(age);
    }
    /* This code produces the following output:
       Distinct ages:
       21
       46
       55
       17
    */
    What if you are working with reference types instead? Imagine a list of search results where items in the results, while unique in and of themselves, also point to a parent. We’d like to be able to select a bunch of items in the list but then see only a distinct list of parents. Distinct isn’t going to help us much on its own, as all of the items are distinct already. Perhaps we can create a class with just the information we are interested in, like the Id and Name of the parents.
    public class SelectedItem
    {
        public int ItemID { get; set; }
        public string DisplayName { get; set; }
    }
    We can then use LINQ to populate a list containing objects with just the information we are interested in and then get rid of the duplicates.
    IEnumerable<SelectedItem> list =
        (from item in ResultView.SelectedRows.OfType<Contract.ReceiptSelectResults>()
         select new SelectedItem { ItemID = item.ParentId, DisplayName = item.ParentName })
        .Distinct();
    Most of you will have guessed that this didn’t work. Even though some of our objects are now duplicates, because we are working with reference types it doesn’t matter that their properties are the same; they’re still considered unique. What we need is a way to define equality for the Distinct() extension method.
    IEqualityComparer<T>
    Looking at the Distinct method, we see that there is an overload that accepts an IEqualityComparer<T>. We can simply create a class that implements this interface, which allows us to define equality for our SelectedItem class.
    public class SelectedItemComparer : IEqualityComparer<SelectedItem>
    {
        public bool Equals(SelectedItem abc, SelectedItem def)
        {
            return abc.ItemID == def.ItemID && abc.DisplayName == def.DisplayName;
        }
        public int GetHashCode(SelectedItem obj)
        {
            string code = obj.DisplayName + obj.ItemID.ToString();
            return code.GetHashCode();
        }
    }
    In the Equals method we simply do whatever comparisons are necessary to determine equality and then return true or false. Take note of the implementation of the GetHashCode method. GetHashCode must return the same value for two different objects if our Equals method says they are equal. Get this wrong and your comparer won’t work. Even though the Equals method returns true, mismatched hash codes will cause the comparison to fail. For our example, we simply build a string from the properties of the object and then call GetHashCode() on that.
    Now all we have to do is pass an instance of our IEqualityComparer<T> to Distinct and all will be well:
    IEnumerable<SelectedItem> list =
        (from item in ResultView.SelectedRows.OfType<Contract.ReceiptSelectResults>()
         select new SelectedItem { ItemID = item.dahfkp, DisplayName = item.document_code })
        .Distinct(new SelectedItemComparer());
    Enjoy. Dave
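    One footnote not in the original post, but a guarantee of the C# language worth knowing: anonymous types get value-based Equals and GetHashCode generated by the compiler. So if all you need are the distinct (ItemID, DisplayName) pairs, you can project to an anonymous type, call Distinct() with no comparer, and project back. A sketch, where 'source' is a stand-in for the ResultView.SelectedRows query from the article and System.Linq is assumed to be in scope:
        // 'source' is assumed to be IEnumerable<Contract.ReceiptSelectResults>.
        var list = source
            .Select(item => new { item.ParentId, item.ParentName }) // anonymous type: compiler-generated value equality
            .Distinct()                                             // no IEqualityComparer<T> needed
            .Select(x => new SelectedItem { ItemID = x.ParentId, DisplayName = x.ParentName })
            .ToList();
    Which approach to prefer is a design choice: the comparer keeps the equality rules reusable across queries, while the anonymous-type projection keeps everything local to one query.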

    Read the article

  • Update Metadata and Cover Art in Windows Media Player 12

    - by DigitalGeekery
    If you use Windows Media Player 12 in Windows 7, you may notice some of your media is missing information when displayed in the library. Today we look at how to edit and update metadata and cover art in WMP 12.
    By default, Windows Media Player will pull metadata, such as the title, artist, album, and cover art, from the Internet. If you did not accept that default option during setup, we’ll need to turn the feature on first. Select Tools > Options from the top Menu bar. On the Library tab, ensure that Retrieve additional information from the Internet is checked. Click OK.
    Editing Metadata
    Now we’re ready to update some files. Find a media file with incorrect details or cover art. Right-click on the title and select Find album info. This will bring up the Find album information window. Here you’ll see the existing information that Windows Media Player interpreted as correct on the left side. The results of WMP’s search for the media information are on the right. Click on Artists, Albums, or Tracks to scroll through the search results and try to find a match. You can also type new keywords in the Search box and hit Enter (or click the Search button) to perform a new search.
    If you find a correct match for your media file, click to select it and click Next. You’ll be prompted to confirm your selection, then click Finish. You should now see your media file displayed properly in Windows Media Player.
    Manually Entering Metadata
    If your search for the correct media information comes up empty, you can always enter the information yourself. On the Find album information window, click Edit under Existing Information. You can edit the existing information in the text boxes or the Genre dropdown box. There are a couple of hidden text boxes below. Click next to Contributing Artist or Composer to enter that information.
    Choosing Your Own Cover Art
    If your media file doesn’t pull the proper cover art, or if you simply wish to use a different image, you can add your own. Search online for a suitable image. An ideal size would be around 300 x 300 pixels, give or take. Right-click on the image and copy it. You’ll need to switch to Expanded tile view (if you haven’t already) to paste the image. Paste your new image by right-clicking on the current image and selecting Paste album art. Note: If the image is not a suitable size or type, the Paste album art option will not be available. Your new cover art will appear in Windows Media Player.
    Even though it is pulled from the Internet, cover art is cached on your computer and will still be available when you are disconnected from the Internet. Are you new to Windows Media Player? If so, check out our article on how to manage your music with Windows Media Player.

    Read the article

  • WIF, ADFS 2 and WCF – Part 4: Service Client (using Service Metadata)

    - by Your DisplayName here!
    See parts 1, 2 and 3 first. In this part we will finally build a client for our federated service. There are basically two ways to accomplish this. You can use the WCF built-in tooling to generate client and configuration via the service metadata (aka ‘Add Service Reference’). This requires no WIF on the client side. Another approach would be to use WIF’s WSTrustChannelFactory to manually talk to the ADFS 2 WS-Trust endpoints. This option gives you more flexibility, but is slightly more code to write. You also need WIF on the client which implies that you need to run on a WIF supported operating system – this rules out e.g. Windows XP clients. We’ll start with the metadata way. You simply create a new client project (e.g. a console app) – call ‘Add Service Reference’ and point the dialog to your service endpoint. What will happen then is, that VS will contact your service and read its metadata. Inside there is also a link to the metadata endpoint of ADFS 2. This one will be contacted next to find out which WS-Trust endpoints are available. The end result will be a client side proxy and a configuration file. Let’s first write some code to call the service and then have a closer look at the config file. var proxy = new ServiceClient(); proxy.GetClaims().ForEach(c =>     Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",         c.ClaimType,         c.Value,         c.Issuer,         c.OriginalIssuer)); That’s all. The magic is happening in the configuration file. When you in inspect app.config, you can see the following general configuration hierarchy: <client /> element with service endpoint information federation binding and configuration containing ADFS 2 endpoint 1 (with binding and configuration) ADFS 2 endpoint n (with binding and configuration) (where ADFS 2 endpoint 1…n are the endpoints I talked about in part 1) You will see a number of <issuer /> elements in the binding configuration where simply the first endpoint from the ADFS 2 metadata becomes the default endpoint and all other endpoints and their configuration are commented out. You now need to find the endpoint you want to use (based on trust version, credential type and security mode) and replace that with the default endpoint. That’s it. When you call the WCF proxy, it will inspect configuration, then first contact the selected ADFS 2 endpoint to request a token. This token will then be used to authenticate against the service. In the next post I will show you the more manual approach using the WIF APIs.
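    For contrast, the manual approach mentioned above would look roughly like the sketch below. This is not the code from the follow-up post: the endpoint URLs, the realm and the use of username/password credentials are assumptions, and it requires WIF (Microsoft.IdentityModel) on the client.
        // Sketch only: belongs inside a method. Namespaces involved:
        //   System.ServiceModel, System.ServiceModel.Security,
        //   Microsoft.IdentityModel.Protocols.WSTrust, Microsoft.IdentityModel.Protocols.WSTrust.Bindings
        var factory = new WSTrustChannelFactory(
            new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
            new EndpointAddress("https://adfs.example.com/adfs/services/trust/13/usernamemixed")); // placeholder STS endpoint
        factory.TrustVersion = TrustVersion.WSTrust13;
        factory.Credentials.UserName.UserName = "bob";      // placeholder credentials
        factory.Credentials.UserName.Password = "secret";

        var rst = new RequestSecurityToken
        {
            RequestType = RequestTypes.Issue,
            KeyType     = KeyTypes.Symmetric,
            AppliesTo   = new EndpointAddress("https://service.example.com/service.svc") // placeholder service realm
        };

        var token = factory.CreateChannel().Issue(rst);
        // The issued token can then be used to create a channel to the service
        // (e.g. via CreateChannelWithIssuedToken on a regular channel factory).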

    Read the article

  • Silverlight Cream for January 30, 2011 -- #1037

    - by Dave Campbell
    In this Issue: Ollie Riches, Colin Eberhardt, Andrej Tozon, Arik Poznanski, Deborah Kurata(-2-), Jay Kimble, Yochay Kiriaty, Peter Kuhn, Mike Ormond, WindowsPhoneGeek(-2-), and Matthias Shapiro.
    Above the Fold:
    Silverlight: "Missing Chart Legend" Deborah Kurata
    WP7: "XNA for Silverlight developers: Part 2 - Text rendering" Peter Kuhn
    Shoutouts: Timmy Kokke has a post up discussing What’s new in the Expression Design January 2011 preview?
    From SilverlightCream.com:
    WP7Contrib: Thread safe ObservableCollection<T> – Ollie Riches, one of the two originators of WP7Contrib, has a post up on the WP7C ObservableCollection... what and why.
    Windows Phone 7 DeferredLoadContentControl – Colin Eberhardt's latest is one we should all take notice of... a content control that defers rendering to provide a better user experience... source code is available, as are some good external links.
    Andrej Tozon on Hey weigh! WP7 application – SilverlightShow interviews WP7 dev Andrej Tozon and gets his take on his app, challenges, tips, and the future of WP7.
    A ProgressBar With Text For Windows Phone 7 – Arik Poznanski demonstrates putting text up on the progress bar to let your users know what you're up to... and it looks great in the screenshots.
    Charting in a Silverlight Application using MVVM – Deborah Kurata is checking out the Charting control this time around... using the charting control from the toolbox in the MVVM app she built in the last post... C# and VB code as always.
    Missing Chart Legend – Deborah Kurata's latest in the world of Charting and MVVM involves using a custom theme and having your chart legend disappear... never fear, she's gonna tell you how to fix that!
    Silverlight/WP7 tip: Detecting when in VS Design Mode – Jay Kimble has a post up that not only resolves a question you may need answered during development (are you in VS design mode?), but also helps resolve a class of problem that Jay explains.
    Windows Phone GPS Emulator – Yochay Kiriaty points out that while part of the issue of building a GPS-driven app for WP7 is getting your head around the tools, the next hurdle is testing... and that's what he's really discussing... "Windows Phone GPS Emulator"... if you're playing with the GPS, you'll want this.
    XNA for Silverlight developers: Part 2 - Text rendering – Peter Kuhn's latest tutorial in his XNA series for Silverlight developers is up at SilverlightShow... in this tutorial, Peter discusses text... it's a vastly different game displaying text in XNA as compared to Silverlight... check it out and see.
    OData and Windows Phone 7 – Mike Ormond starts you off using OData on your WP7 by showing where to download the libraries, and doesn't stop until he has an app running that reads an OData feed; plus he plans on continuing the quest in future posts.
    WP7 ProgressOverlay control in depth: features and customization – WindowsPhoneGeek has a couple of new posts up. The first one is an in-depth look at the ProgressOverlay control in the Coding4Fun Toolkit... pretty cool to be able to put your logo or app logo up.
    On Testing Windows Phone 7 Applications – Part II: Dealing with the WP7 Application Model – WindowsPhoneGeek also has 5 more WP7 testing tips... these are a little more technical than the first set, and include some good external links. Topics include: Tombstoning, Usability, Navigation, Capabilities, and Memory consumption.
    Fun Theme-Friendly Windows Phone Icon – Matthias Shapiro explains how to have your WP7 icon change based on the theme your user has chosen... great examples, and XAML included.
    Stay in the 'Light!

    Read the article

  • XNA Notes 008

    - by George Clingerman
    This week has been a rough one. I’ve been sick and then in some kind of slump for my afternoon coding sessions. It could be from the cold, could be that I’m still tired from writing that Windows Phone 7 game development book (which is out now!), or it could just be that I’m tired of winter and want some sunshine. All I know is that even while I’m sick, the XNA world keeps going along at its whirlwind pace. Below are the things I caught in between my coughing fits.
    Time Critical XNA News:
    The 2011 MVP summit is almost here, so pass along your feelings and thoughts so the MVPs can take them and share them with the team in person http://forums.create.msdn.com/forums/p/76317/464136.aspx#464136
    Dream Build Play - there’s no new announcement yet, but you can’t get much closer to the end of February than this! http://www.dreambuildplay.com/Main/Home.aspx
    XNA Team:
    Dean Johnson from the XNA team shares an excellent way of handling Guide.IsTrialMode on WP7 http://blogs.msdn.com/b/dejohn/archive/2011/02/21/calling-guide-istrialmode-on-windows-phone-7.aspx
    Nick Gravelyn tries a new tactic in deciding if there’s enough interest to develop a sequel or not. Don’t YOU want Pixel Man 2 to come out? http://nickgravelyn.com/pixelman2/
    XNA MVPs:
    Andy “The ZMan” Dunn finally shares what he’s been secretly working on these past 4 months http://twitter.com/#!/The_Zman/status/40590269392887808 http://www.youtube.com/watch?v=Rg8Z0ZdYbvg&feature=youtu.be
    Joel Martinez lets developers around NYC know they should sign up for Game Hack Day http://twitter.com/joelmartinez/statuses/41118590862102528 http://gamehackday.org/71fdk
    XNA Developers:
    Michael McLaughlin shares an XNA RenderTarget2D sample http://geekswithblogs.net/mikebmcl/archive/2011/02/18/xna-rendertarget2d-sample.aspx
    Martin Caine starts a new series on Deferred Rendering in XNA 4.0 http://twitter.com/#!/MartinCaine/status/39735221339291648 http://martincaine.com/xna/deferred_rendering_in_xna_4_introduction
    ElemenyCy posts about his fun time with the IntermediateSerializer http://www.ubergamermonkey.com/xna/holy-bloated-xml-batman/
    Ben Kane releases a narrated dev diary video for Project Splice. Let him know if you’d like to see more! (I know I do!) http://twitter.com/#!/benkane/status/39846959498002432 http://www.youtube.com/watch?v=1EmziXZUo08&feature=youtu.be
    Jason Swearingen (of Novaleaf) posts part 1 of his Spatial Partitioning solutions http://altdevblogaday.org/2011/02/21/spatial-partitioning-part-1-survey-of-spatial-partitioning-solutions/
    Brian Lawson of Dark Flow Studios shares what he’s been up to lately, with lots of pretty screenshots and hints of announcements from Microsoft... http://www.darkflowstudios.com/entry/short-and-sweet-part-1
    Luke Avery starts a new blog where he plans on making XNA tutorials for beginners (and he’s got a few started already!) http://programmingwithovery.wordpress.com/
    Xbox LIVE Indie Games (XBLIG):
    GameMarx Episode 10 http://www.gamemarx.com/video/the-show/24/ep-10-february-18-2010.aspx
    Minecraft clone FortressCraft coming to XBLIG http://www.eurogamer.net/articles/2011-02-23-minecraft-clone-fortresscraft-hits-xblig
    ezMuze+ starts an IndieGoGo fundraiser campaign to help fund their second game and get it onto even more devices! http://www.indiegogo.com/ezmuze
    Gamergeddon XBLIG round up http://www.gamergeddon.com/2011/02/20/xbox-indie-game-round-up-february-20th/?utm_campaign=twitter&utm_medium=twitter&utm_source=twitter
    JForce Games loses their Ego http://jforcegames.com/blog/index.php?itemid=121&catid=4
    XNA Game Development:
    @BallerIndustry reminds all XNA developers that the Maths are important ;) http://twitter.com/#!/BallerIndustry/status/39317618280243200 http://www.youtube.com/watch?v=MjV3XDFsjP4&feature=player_embedded#at=106
    @suhinini stumbles on an older but extremely useful post on XNA Content Pipeline debugging http://twitter.com/#!/suhinini/status/39270189476352000 http://badcorporatelogo.wordpress.com/2010/10/31/xna-content-pipeline-debugging-4-0/
    XNA Game Development Workshops at Singapore Universities http://innovativesingapore.com/2011/02/xna-game-development-workshops-at-singapore-universities/
    Indiefreaks announces that IGF v0.3 is out with Xbox 360 support, SunBurn 2.0.12, and it’s now Open Source! http://twitter.com/#!/indiefreaks/status/39391953971982336
    @liortal53 announces a new series on properly designing a game http://twitter.com/#!/liortal53/status/39466905081217024 http://liortalblog.wordpress.com/2011/02/20/hello-cosmos/
    Indies and XNA at CodeStock 2011 http://www.gamemarx.com/news/2011/02/20/indies-and-xna-at-codestock-2011.aspx
    Train Frontier Express posts about XNA Content Hotloading http://trainfrontierexpress.blogspot.com/2011/02/xna-content-hotloading-overview.html
    Slyprid announces a new character editor in Transmute http://twitter.com/#!/slyprid/status/40146992818696192 http://www.youtube.com/watch?v=OKhFAc78LDs&feature=youtu.be
    The XNA 2D from the ground up tutorial series http://xna-uk.net/blogs/darkgenesis/archive/2011/02/23/recap-the-xna-2d-from-the-ground-up-tutorial-series.aspx
    Sgt.Conker posts a “Clingerman” (hey that’s me!) to stay relevant http://www.sgtconker.com/2011/02/posting-a-clingerman-to-stay-relevant/

    Read the article

  • Apply Skins to Add Some Flair to Windows Media Player 12

    - by DigitalGeekery
    Tired of the same look and feel of Windows Media Player in Windows 7? We’ll show you how to inject new life into your media experience by applying skins in WMP 12.
    Adding Skins
    In Library view, click on View from the Menu and select Skin Chooser. By default, WMP 12 comes with only a couple of modest skins. When you select a skin from the left pane, a preview will be displayed to the right. To apply one of the skins, simply select it from the pane on the left and click Apply Skin. You can also switch to the currently selected skin in the Skin chooser by selecting Skin from the View menu, or by pressing Ctrl + 2. Media Player will open in Now Playing mode. Click on the Switch to Library button at the top left to return to Library view.
    Ok, so the included skins are a little boring. You can find additional skins by selecting Tools > Download > Skins, or by clicking on More Skins from within the Skin chooser. You will be taken to the Microsoft website, where you can choose from dozens of skins to download and install. Select a skin you’d like to try and click the link to download. If prompted with a warning message about files containing scripts that access your library, click Yes. Note: These warning boxes may look a bit different depending on your browser. We are using Chrome for this example. Click on View Now. Your new skin will be on display. To get back to Library mode, find and click the Return to Full Mode button. Some skins may launch video in a separate window.
    If you want to delete one of the skins, select it from the list within the Skin chooser and click the red “X.” You can also press the Delete key on your keyboard. Then click Yes to confirm.
    Conclusion
    Using skins is a quick and easy way to add some style to Windows Media Player, and switching back and forth between skins is a breeze. Regardless of your interests, you are sure to find a skin that fits your tastes. You may find WMP skins on other sites, but sticking with Microsoft’s website will ensure maximum compatibility.
    Skins for Windows Media Player

    Read the article
