Search Results

Search found 2788 results on 112 pages for 'ben martin'.

  • 12.04 Unity 3D Not working whereas Unity 2D works fine

    - by Stephen Martin
    I updated from 11.10 to 12.04 using the distribution upgrade. Since then I can't use the Unity 3D desktop: after logging in I either never get Unity's launcher, or I get the launcher but as soon as I try to do anything the windows lose their decoration and nothing responds. If I use Unity 2D it works fine and in fact I'm using it to type this. I managed to get some information out of dmesg that looks like it's the root of what's happening.

    dmesg:
    [ 109.160165] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung
    [ 109.160180] [drm] capturing error event; look for more information in /debug/dri/0/i915_error_state
    [ 109.167587] [drm:i915_wait_request] *ERROR* i915_wait_request returns -11 (awaiting 1226 at 1218, next 1227)
    [ 109.672273] [drm:i915_reset] *ERROR* Failed to reset chip.

    Output of lspci | grep vga:
    00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)

    Output of /usr/lib/nux/unity_support_test -p in 12.04:
    OpenGL vendor string: VMware, Inc.
    OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x300)
    OpenGL version string: 2.1 Mesa 8.0.2
    Not software rendered: no
    Not blacklisted: yes
    GLX fbconfig: yes
    GLX texture from pixmap: yes
    GL npot or rect textures: yes
    GL vertex program: yes
    GL fragment program: yes
    GL vertex buffer object: yes
    GL framebuffer object: yes
    GL version is 1.4+: yes
    Unity 3D supported: no

    The same command in 11.10:
    stephenm@mcr-ubu1:~$ /usr/lib/nux/unity_support_test -p
    OpenGL vendor string: Tungsten Graphics, Inc
    OpenGL renderer string: Mesa DRI Mobile Intel® GM45 Express Chipset
    OpenGL version string: 2.1 Mesa 7.11
    Not software rendered: yes
    Not blacklisted: yes
    GLX fbconfig: yes
    GLX texture from pixmap: yes
    GL npot or rect textures: yes
    GL vertex program: yes
    GL fragment program: yes
    GL vertex buffer object: yes
    GL framebuffer object: yes
    GL version is 1.4+: yes
    Unity 3D supported: yes
    stephenm@mcr-ubu1:~$

    Output of /var/log/Xorg.0.log:
    [ 11.971] (II) intel(0): EDID vendor "LPL", prod id 307
    [ 11.971] (II) intel(0): Printing DDC gathered Modelines:
    [ 11.971] (II) intel(0): Modeline "1280x800"x0.0 69.30 1280 1328 1360 1405 800 803 809 822 -hsync -vsync (49.3 kHz)
    [ 12.770] (II) intel(0): Allocated new frame buffer 2176x800 stride 8704, tiled
    [ 15.087] (II) intel(0): EDID vendor "LPL", prod id 307
    [ 15.087] (II) intel(0): Printing DDC gathered Modelines:
    [ 15.087] (II) intel(0): Modeline "1280x800"x0.0 69.30 1280 1328 1360 1405 800 803 809 822 -hsync -vsync (49.3 kHz)
    [ 33.310] (II) XKB: reuse xkmfile /var/lib/xkb/server-93A39E9580D1D5B855D779F4595485C2CC66E0CF.xkm
    [ 34.900] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.900] (WW) intel(0): Page flip failed: Invalid argument
    [ 34.900] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.900] (WW) intel(0): Page flip failed: Invalid argument
    [ 34.913] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.913] (WW) intel(0): Page flip failed: Invalid argument
    [ 34.913] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.913] (WW) intel(0): Page flip failed: Invalid argument
    [ 34.926] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.926] (WW) intel(0): Page flip failed: Invalid argument
    [ 34.926] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.926] (WW) intel(0): Page flip failed: Invalid argument
    [ 35.501] (WW) intel(0): flip queue failed: Invalid argument
    [ 35.501] (WW) intel(0): Page flip failed: Invalid argument
    [ 35.501] (WW) intel(0): flip queue failed: Invalid argument
    [ 35.501] (WW) intel(0): Page flip failed: Invalid argument
    [ 41.519] [mi] Increasing EQ size to 512 to prevent dropped events.
    [ 42.079] (EE) intel(0): Detected a hung GPU, disabling acceleration.
    [ 42.079] (EE) intel(0): When reporting this, please include i915_error_state from debugfs and the full dmesg.
    [ 42.598] (II) intel(0): EDID vendor "LPL", prod id 307
    [ 42.598] (II) intel(0): Printing DDC gathered Modelines:
    [ 42.598] (II) intel(0): Modeline "1280x800"x0.0 69.30 1280 1328 1360 1405 800 803 809 822 -hsync -vsync (49.3 kHz)
    [ 51.052] (II) AIGLX: Suspending AIGLX clients for VT switch

    I know I'm using the beta version so I'm not expecting it to work perfectly, but does anyone know what the problem may be, or why the Unity compatibility test describes my video card as VMware when it's an Intel i915 and Ubuntu is running on the metal, not virtualised? Unity 3D worked fine in 11.10.

    Read the article

  • GLSL - one-pass gaussian blur

    - by martin pilch
    Is it possible to implement a fragment shader that does a one-pass Gaussian blur? I have found lots of implementations of two-pass blur (Gaussian and box blur): http://callumhay.blogspot.com/2010/09/gaussian-blur-shader-glsl.html http://www.gamerendering.com/2008/10/11/gaussian-blur-filter-shader/ http://www.geeks3d.com/20100909/shader-library-gaussian-blur-post-processing-filter-in-glsl/ and so on. I have been thinking of implementing Gaussian blur as a true 2D convolution (in fact, it is a convolution; the examples above are just approximations): http://en.wikipedia.org/wiki/Gaussian_blur

    Read the article

  • ASP.NET MVC Custom Profile Provider

    - by Ben Griswold
    It's been a long while since I last used the ASP.NET Profile provider. It's a shame, too, because it just works with very little development effort: Membership tables installed? Check. Profile enabled in web.config? Check. SqlProfileProvider connection string set? Check. Profile properties defined in said web.config file? Check. Write code to set value, read value, build and test. Check. Check. Check. Yep, I thought the built-in Profile stuff was pure gold until I noticed how the user-based information is persisted to the database. It's stored as XML and, well, that was going to be trouble if I ever wanted to query the profile data. So, I have avoided the super-easy-to-use ASP.NET Profile provider ever since, until this week, when I decided I could use it to store user-specific properties which I am 99% positive I'll never need to query against ever. I opened up my ASP.NET MVC application, completed steps 1-4 (above) in about 3 minutes, started writing my profile get/set code and that's where the plan broke down. Oh yeah. That's right. Visual Studio auto-generates a strongly-typed Profile reference for web site projects but not for ASP.NET MVC or Web Applications. Bummer. So, I went through the steps of getting a custom profile provider working in my ASP.NET MVC application. First, I defined a CurrentUser routine and my profile properties in a custom Profile class like so:

    using System.Web.Profile;
    using System.Web.Security;
    using Project.Core;

    namespace Project.Web.Context
    {
        public class MemberPreferencesProfile : ProfileBase
        {
            static public MemberPreferencesProfile CurrentUser
            {
                get
                {
                    return (MemberPreferencesProfile)
                        Create(Membership.GetUser().UserName);
                }
            }

            public Enums.PresenceViewModes? ViewMode
            {
                get { return ((Enums.PresenceViewModes)
                        (base["ViewMode"] ?? Enums.PresenceViewModes.Category)); }
                set { base["ViewMode"] = value; Save(); }
            }
        }
    }

    And then I replaced the existing profile configuration in web.config with the following:

    <profile enabled="true" defaultProvider="MvcSqlProfileProvider"
             inherits="Project.Web.Context.MemberPreferencesProfile">
      <providers>
        <clear/>
        <add name="MvcSqlProfileProvider"
             type="System.Web.Profile.SqlProfileProvider, System.Web,
             Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
             connectionStringName="ApplicationServices" applicationName="/"/>
      </providers>
    </profile>

    Notice that profile is enabled, I've defined the defaultProvider and profile is now inheriting from my custom MemberPreferencesProfile class. Finally, I am now able to set and get profile property values nearly the same way as I did with website projects:

    viewMode = MemberPreferencesProfile.CurrentUser.ViewMode;
    MemberPreferencesProfile.CurrentUser.ViewMode = viewMode;
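    To round the post out, here is a minimal sketch of how the profile might be consumed from an MVC controller. This is not from the original article; the controller, action names and view usage are hypothetical, and only MemberPreferencesProfile, its ViewMode property and the Enums.PresenceViewModes type come from the code above.

    using System.Web.Mvc;
    using Project.Core;
    using Project.Web.Context;

    namespace Project.Web.Controllers
    {
        // Hypothetical controller, used only to illustrate reading and writing the profile.
        [Authorize] // Membership.GetUser() assumes an authenticated user
        public class PresenceController : Controller
        {
            public ActionResult Index()
            {
                // Read the current user's persisted preference.
                Enums.PresenceViewModes? viewMode = MemberPreferencesProfile.CurrentUser.ViewMode;
                ViewData["ViewMode"] = viewMode;
                return View();
            }

            [HttpPost]
            public ActionResult SetViewMode(Enums.PresenceViewModes viewMode)
            {
                // Write the preference; the property setter calls Save() itself.
                MemberPreferencesProfile.CurrentUser.ViewMode = viewMode;
                return RedirectToAction("Index");
            }
        }
    }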

    Read the article

  • Issue Creating SQL Login for AppPoolIdentity on Windows Server 2008

    - by Ben Griswold
    IIS7 introduced the option to run your application pool as AppPoolIdentity. With the release of IIS7.5, AppPoolIdentity was promoted to the default option. You see this change if you're running Windows 7 or Windows Server 2008 R2. On my Windows 7 machine, I'm able to define my Application Pool Identity and then create an associated database login via the SQL Server Management Studio interface. No problem. However, I ran into trouble when I recently installed my web application onto a Windows Server 2008 R2 64-bit machine. Strangely, the same approach failed because SSMS couldn't find the AppPoolIdentity user. Instead of using the tools, I created and executed the login via script and it worked fine. Here's the script, based on the DefaultAppPool identity, if the same happens to you:

    CREATE LOGIN [IIS APPPOOL\DefaultAppPool] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
    USE [Chinook]
    CREATE USER [IIS APPPOOL\DefaultAppPool] FOR LOGIN [IIS APPPOOL\DefaultAppPool]
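    As an aside (not part of the original post), if you need to apply the same fix on several servers you can run the script from a small console tool rather than SSMS. A rough sketch follows; the connection string, the Chinook database name taken from the script above and the app pool name are assumptions to adjust for your environment.

    using System;
    using System.Data.SqlClient;

    class CreateAppPoolLogin
    {
        static void Main()
        {
            const string appPoolUser = @"IIS APPPOOL\DefaultAppPool"; // change for a custom app pool
            const string connectionString = @"Data Source=.\SQLEXPRESS;Integrated Security=SSPI"; // assumed
            const string database = "Chinook"; // the application database used in the script above

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Server-level login for the app pool identity.
                Execute(connection, "CREATE LOGIN [" + appPoolUser + "] FROM WINDOWS WITH DEFAULT_DATABASE=[master]");

                // Database user mapped to that login.
                connection.ChangeDatabase(database);
                Execute(connection, "CREATE USER [" + appPoolUser + "] FOR LOGIN [" + appPoolUser + "]");

                Console.WriteLine("Login and user created for " + appPoolUser);
            }
        }

        static void Execute(SqlConnection connection, string sql)
        {
            using (var command = new SqlCommand(sql, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }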

    Read the article

  • OpenGL ES 2/3 vs OpenGL 3 (and 4)

    - by Martin Perry
    I have migrated my code from OpenGL ES 2/3 to OpenGL 3 (I added a bunch of defines and abstract classes to encapsulate both versions, so I have both in one project and compile only one or the other). All I needed to change was context initialization and glClearDepth. I don't have any errors, which was kind of strange to me. Even the shaders work correctly (some of them are GL ES 3, with #version 300 es in their header). Is this a good solution, or should I rewrite something more before I start adding further functionality like geometry shaders, performance tools, etc.?

    Read the article

  • Microsoft, please help me diagnose TFS Administration permission issues!

    - by Martin Hinshelwood
    I recently had a fun time trying to debug a permission issue I ran into using TFS 2010's TfsConfig. Update 5th March 2010 – In its style of true excellence my company has added this rant to its "Suggestions for Better TFS". <rant> I was trying to run the TfsConfig tool and I kept getting the message: "TF55038: You don't have sufficient privileges to run this tool. Contact your Team Foundation system administrator." This message made me think that it was something to do with the install permissions, as it is always recommended to use a single account to do every install of TFS. I did not install the original TFS on our network and my account was not used to do the TFS 2010 install, but I did do the upgrade from 2010 Beta 2 to 2010 RC with my current account. So I proceeded to do some checking:

    Am I in the administrators group on the server? Figure: Yes, I am in the administrators group on the server.
    Am I in the Administration Console users list? Figure: Yes, I am in the Administration Console users list.
    Have I reapplied the permissions in the Administration Console users list, ticking all the options? Figure: Make sure you check all of the boxes if you want to have all the admin options. Figure: Yes, I have made sure that all my options are correct.
    Am I in the Team Foundation administrators group? Figure: Yes, I am in the Team Foundation Administrators group.
    Is my account explicitly SysAdmin on the database server? Figure: Yes, I do have explicit SysAdmin on the database.

    Can you guess what the problem was? The command line window was not running as the administrator! As with most other applications there should be an explicit error message that states: "You are not currently running in administrator mode; please restart the command line with elevated privileges!" This would have saved me 30 minutes, although I agree that I should change my name to Muppet and just be done with it. </rant>   Technorati Tags: Visual Studio ALM,Administration,Team Foundation Server Admin Console,TFS Admin Console
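    For what it is worth, the explicit error message asked for above is cheap to implement. The following is a minimal C# sketch (my own illustration, not how TfsConfig actually performs the check) of how a command-line tool can detect that it is not elevated and say so up front.

    using System;
    using System.Security.Principal;

    class ElevationCheck
    {
        static void Main()
        {
            // Ask Windows whether the current process token is in the Administrators role.
            // With UAC enabled, an admin running a non-elevated prompt still gets "false" here.
            var identity = WindowsIdentity.GetCurrent();
            var principal = new WindowsPrincipal(identity);

            if (!principal.IsInRole(WindowsBuiltInRole.Administrator))
            {
                Console.Error.WriteLine(
                    "You are not currently running in administrator mode; " +
                    "please restart the command line with elevated privileges!");
                Environment.Exit(1);
            }

            Console.WriteLine("Running elevated - continuing...");
            // ... the rest of the tool's work would go here.
        }
    }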

    Read the article

  • Deploy ASP.NET Web Applications with Web Deployment Projects

    - by Ben Griswold
    One may quickly build and deploy an ASP.NET web application via the Publish option in Visual Studio.  This option works great for most simple deployment scenarios but it won’t always cut it.  Let’s say you need to automate your deployments. Or you have environment-specific configuration settings. Or you need to execute pre/post build operations when you do your builds.  If so, you should consider using Web Deployment Projects. The Web Deployment Project type doesn’t come out-of-the-box with Visual Studio 2008.  You’ll need to Download Visual Studio® 2008 Web Deployment Projects – RTW and install if you want to follow along with this tutorial. I’ve created a shiny new ASP.NET MVC project.  Web Deployment Projects work with websites, web applications and MVC projects so feel free to go with any web project type you’d like.  Once your web application is in place, it’s time to add the Web Deployment project.  You can hunt and peck around the File > New > New Project… dialogue as long as you’d like, but you aren’t going to find what you need.  Instead, select the web project and then choose the “Add Web Deployment Project…” hiding behind the Build menu option. I prefer to name my projects based on the environment in which I plan to deploy.  In this case, I’ll be rolling to the QA machine. Don’t expect too much to happen at this point.  A seemingly empty project with a funny icon will be added to your solution.  That’s it. I want to take a minute and talk about configuration settings before we continue.  Some of the common settings which might change from environment to environment are appSettings, connectionStrings and mailSettings.  Here’s a look at my updated web.config: <appSettings>   <add key="MvcApplication293.Url" value="http://localhost:50596/" />     </appSettings> <connectionStrings>   <add name="ApplicationServices"        connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"        providerName="System.Data.SqlClient"/> </connectionStrings>   <system.net>   <mailSettings>     <smtp from="[email protected]">         <network host="server.com" userName="username" password="password" port="587" defaultCredentials="false"/>     </smtp>   </mailSettings> </system.net> I want to update these values prior to deploying to the QA environment.  There are variations to this approach, but I like to maintain environment-specific settings for each of the web.config sections in the Config/[Environment] project folders.  I’ve provided a screenshot of the QA environment settings below. It may be obvious what one should include in each of the three files.  Basically, it is a copy of the associated web.config section with updated setting values.  For example, the AppSettings.config file may include a reference to the QA web url, the DB.config would include the QA database server and login information and the StmpSettings.config would include a QA Stmp server and user information. 
<?xml version="1.0" encoding="utf-8" ?> <appSettings>   <add key="MvcApplication293.Url" value="http://qa.MvcApplicatinon293.com/" /> </appSettings> AppSettings.config  <?xml version="1.0" encoding="utf-8" ?> <connectionStrings>   <add name="ApplicationServices"        connectionString="server=QAServer;integrated security=SSPI;database=MvcApplication293"        providerName="System.Data.SqlClient"/>   </connectionStrings> Db.config  <?xml version="1.0" encoding="utf-8" ?> <smtp from="[email protected]">     <network host="qaserver.com" userName="qausername" password="qapassword" port="587" defaultCredentials="false"/> </smtp> SmtpSettings.config  I think our web project is ready to deploy.  Now, it’s time to concentrate on the Web Deployment Project itself.  Right-click on the project file and open the Property Pages. The first thing to call out is the Configuration dropdown.  I only deploy a project which is built in Release Mode so I only setup the Web Deployment Project for this mode.  (This is when you change the Configuration selection to “Release.”)  I typically keep the Output Folder default value – .\Release\.  When the application is built, all artifacts will be dropped in the .\Release\ folder relative to the Web Deployment Project root.  The final option may be up for some debate.  I like to roll out updatable websites so I select the “Allow this precompiled site to be updatable” option.  I really do like to follow standard SDLC processes when I release my software but there are those times when you just have to make a hotfix to production and I like to keep this option open if need be.  If you are strongly opposed to this idea, please, by all means, don’t check the box. The next tab is boring.  I don’t like to deploy a crazy number of DLLs so I merge all outputs to a single assembly.  Again, you may have another option and feel free to change this selection if you so wish. If you follow my lead, take care when choosing a single assembly name.  The Assembly Name can not be the same as the website or any other project in your solution otherwise you’ll receive a circular reference build error.  In other words, I can’t name the assembly MvcApplication293 or my output window would start yelling at me. Remember when we called out our QA configuration files?  Click on the Deployment tab and you’ll see how where going to use them.  Notice the Web.config file section replacements value.  All this does is swap called out web.config sections with the content of the Config\QA\* files.  You can reduce or extend this list as you deem fit.  Did you see the “Use external configuration source file” option?  You know how you can point any of your web.config sections to an external file via the configSource attribute?  This option allows you to leverage that technique and instead of replacing the content of the sections, you will replace the configSource attribute value instead. <appSettings configSource="Config\QA\AppSettings.config" /> Go ahead and Apply your changes.  I’d like to take a look at the project file we just updated.  Right-click on the Web Deployment Project and select “Open Project File.” One of the first configuration blocks reflects core Release build settings.  There are a couple of points I’d like to call out here: DebugSymbols=false ensures the compilation debug attribute in your web.config is flipped to false as part of build process.  There’s some crumby (more likely old) documentation which implies you need a ToggleDebugCompilation task to make this happen.  Nope. 
Just make sure DebugSymbols is set to false. EnableUpdateable implies a single dll for the web application rather than a dll for each object and an empty view file. I think updatable applications are cleaner and include the benefit (or risk, based on your perspective) that portions of the application can be updated directly on the server. I called this out earlier but I wanted to reiterate.

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <DebugSymbols>false</DebugSymbols>
  <OutputPath>.\Release</OutputPath>
  <EnableUpdateable>true</EnableUpdateable>
  <UseMerge>true</UseMerge>
  <SingleAssemblyName>MvcApplication293</SingleAssemblyName>
  <DeleteAppCodeCompiledFiles>true</DeleteAppCodeCompiledFiles>
  <UseWebConfigReplacement>true</UseWebConfigReplacement>
  <ValidateWebConfigReplacement>true</ValidateWebConfigReplacement>
  <DeleteAppDataFolder>true</DeleteAppDataFolder>
</PropertyGroup>

The next section is self-explanatory. The content merely reflects the replacement values you provided via the Property Pages.

<ItemGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">
  <WebConfigReplacementFiles Include="Config\QA\AppSettings.config">
    <Section>appSettings</Section>
  </WebConfigReplacementFiles>
  <WebConfigReplacementFiles Include="Config\QA\Db.config">
    <Section>connectionStrings</Section>
  </WebConfigReplacementFiles>
  <WebConfigReplacementFiles Include="Config\QA\SmtpSettings.config">
    <Section>system.net/mailSettings/smtp</Section>
  </WebConfigReplacementFiles>
</ItemGroup>

You'll want to extend the ItemGroup section to include the files you wish to exclude from the build. The sample ExcludeFromBuild nodes exclude all obj, svn, csproj, user and pdb artifacts from the build. Even though these files aren't included in your web project, you'll need to exclude them or they'll show up along with the required deployment artifacts.

<ItemGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">
  <WebConfigReplacementFiles Include="Config\QA\AppSettings.config">
    <Section>appSettings</Section>
  </WebConfigReplacementFiles>
  <WebConfigReplacementFiles Include="Config\QA\Db.config">
    <Section>connectionStrings</Section>
  </WebConfigReplacementFiles>
  <WebConfigReplacementFiles Include="Config\QA\SmtpSettings.config">
    <Section>system.net/mailSettings/smtp</Section>
  </WebConfigReplacementFiles>
  <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\obj\**\*.*" />
  <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\.svn\**\*.*" />
  <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\.svn\**\*" />
  <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\*.csproj" />
  <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\*.user" />
  <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\bin\*.pdb" />
  <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\Notes.txt" />
</ItemGroup>

Pre/post build and pre/post merge tasks are added to the final code block. By default, your project file should look like the following – a completely commented out section.

<!-- To modify your build process, add your task inside one of
     the targets below and uncomment it. Other similar extension
     points exist, see Microsoft.WebDeployment.targets.

  <Target Name="BeforeBuild">
  </Target>
  <Target Name="BeforeMerge">
  </Target>
  <Target Name="AfterMerge">
  </Target>
  <Target Name="AfterBuild">
  </Target>
-->

Update the section to remove all temporary Config folders and files after the build.

<!-- To modify your build process, add your task inside one of
     the targets below and uncomment it. Other similar extension
     points exist, see Microsoft.WebDeployment.targets.

  <Target Name="BeforeMerge">
  </Target>
  <Target Name="AfterMerge">
  </Target>
  <Target Name="BeforeBuild">
  </Target>
-->
  <Target Name="AfterBuild">
    <!-- WebConfigReplacement requires the Config files. Remove after build. -->
    <RemoveDir Directories="$(OutputPath)\Config" />
  </Target>

That's it for setup. Save the project file, flip the solution to Release Mode and build. If there's an issue, consult the Output window for details. If all went well, you will find your deployment artifacts in your Web Deployment Project folder like so. Both the code source and published application will be there. Inside the Release folder you will find your "published files" and you'll notice the Config folder is nowhere to be found. In the Source folder, all project files are found with the exception of the items which were excluded from the build. I'll wrap up this tutorial by calling out a little Web Deployment pet peeve of mine: there doesn't appear to be a way to add an existing web deployment project to a solution. The best I can come up with is to create a new web deployment project and then copy and paste the contents of the existing project file into the new project file. It's not a big deal but it bugs me. Download the Solution

    Read the article

  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends…. If you are a small company that creates a finite number of internal projects then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and Version Control. But what if you are not… Update 9th March 2010 Michael Fourie gave me some feedback which I have integrated. Ed Blankenship via @edblankenship offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald’s company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered. What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer or 10. This is the situation that most consultancies find themselves in and thus they need a more sustainable and maintainable option. What I am advocating is that we should have 1 “Team Project” per customer, and use areas to create “sub projects” within that single “Team Project”. "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." - Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server   "We tend to use areas to segregate multiple projects in the same team project and it works well." - Tiago Pascoal, Visual Studio ALM MVP   "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." - Michael Fourie, Visual Studio ALM MVP   “@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations.” Ed Blankenship, Visual Studio ALM MVP   This would mean that SSW would have a single Team Project called “SSW” that contains all of our internal projects and consequently all of the Areas and Iteration move down one hierarchy to accommodate this. Where we would have had “\SSW\Sprint 1” we now have “\SSW\SqlDeploy\Sprint1” with “SqlDeploy” being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But, there are implications as TFS does not provide this model “out-of-the-box”. These implications stretch across Areas, Iterations, Queries, Project Portal and Version Control. Michael made a good comment, he said: I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it’ll either be easy to convince them or there is a valid reason for having a different template - Michael Fourie, Visual Studio ALM MVP   At SSW we have a standard template that we use and this is applied across the board, to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW then this approach may benefit you as well. Implications around Areas Areas should be used for topological classification/isolation of work items. 
You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top level item that represents the Project / Product that we want to chop our Team Project into. Figure: Creating a sub area to represent a product/project is easy. <teamproject> <teamproject>\<Functional Area/module whatever> Becomes: <teamproject> <teamproject>\<ProjectName>\ <teamproject>\<ProjectName>\<Functional Area/module whatever> Implications around Iterations Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines and really depends on the logical flow of your project or projects. Due to the new level in Area we need to add the same level to Iteration. This is primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects. Figure: Adding the same Area value to Iteration as the top level item adds flexibility to Iteration. <teamproject>\Sprint 1 Or <teamproject>\Release 1\Sprint 1 Becomes: <teamproject>\<ProjectName>\Sprint 1 Or <teamproject>\<ProjectName>\Release 1\Sprint 1 Implications around Queries Queries are used to filter your work items based on a specified level of granularity. There are a number of queries that are built into a project created using the MSF Agile 5.0 template, but we now have multiple projects and it would be a pain to have to edit all of the work items every time we changed project, and that would only allow one team to work on one project at a time.   Figure: The Queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs. In order for project contributors to be able to query based on their project we need a couple of things. The first thing I did was to create an “_Area Template” folder that has a copy of the project layout with all the queries setup to filter based on the “_Area Template” Area and the “_Sprint template” you can see in the Area and Iteration views. Figure: The template is currently easily drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool. I then created an “Areas” folder to hold all of the area specific queries. So, when you go to create a new TFS Sub-Project you just drag “_Area Template” while holding “Ctrl” and drop it onto “Areas”. There is a little setup here. That said I managed it in around 10 minutes which is not so bad, and I can imagine it being quite easy to build a tool to create these queries Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well. Version Control What about your source code? Well, that is the easiest of the lot. Just create a sub folder for each of your projects/products.   Figure: Creating sub folders in source control is easy as “Right click | Create new folder”. <teamproject>\DEV\Main\ Becomes: <teamproject>\<ProjectName>\DEV\Main\ Conclusion I think it is up to each company to make a call on how you want to configure your Team Projects and it depends completely on how many projects/products you are going to have for each customer including yourself. If we decide to utilise this route it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help. 
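As a starting point for such a tool, here is a rough sketch that creates the extra "<ProjectName>" level under both Area and Iteration using the TFS 2010 client object model. It is my own illustration rather than anything from the post: the collection URL and node names are placeholders, and it assumes the ICommonStructureService calls (GetNodeFromPath, CreateNode) behave as I recall them, so treat it as a sketch to verify rather than a finished utility.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Server;

class CreateSubProject
{
    static void Main()
    {
        // Placeholder values - substitute your own collection, team project and sub-project name.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        const string teamProject = "SSW";
        const string subProject = "SqlDeploy";

        var css = collection.GetService<ICommonStructureService>();

        // Add the sub-project node beneath the team project's Area root.
        var areaRoot = css.GetNodeFromPath("\\" + teamProject + "\\Area");
        var newArea = css.CreateNode(subProject, areaRoot.Uri);
        Console.WriteLine("Created area node: " + newArea);

        // The same call against the Iteration root gives the matching iteration level.
        var iterationRoot = css.GetNodeFromPath("\\" + teamProject + "\\Iteration");
        css.CreateNode(subProject, iterationRoot.Uri);
    }
}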
Pros You only have one project to upgrade when a process template changes – After going through an upgrade of over 170 project prior to the changes in the RC I can tell you that that many projects is no fun. Standardises your Process Template – You will always have the same Process implementation across projects/products without exception You get tighter control over the permissions – Yes, you can do this on a standard Team Project, but it gets a lot easier with practice. You can “move” work items from one “product” to another – Have we not always wanted to do that. You can rename your projects – Wahoo: everyone wants to do this, now you can. One set of Reporting Services reports to manage – You set an area and iteration to run reports anyway, so you may as well set both. Simplified Check-In Policies– There is only one set of check-in policies per client. This simplifies administration of policies. Simplified Alerts – As alerts are applied across multiple projects this simplifies your alert rules as per client. Cons All of these cons could be mitigated by a custom tool that helps automate creation of “Sub-projects” within Team Projects. This custom tool could create areas, Iteration, permissions, SharePoint and queries. It just does not exist yet :) You need to configure the Areas and Iterations You need to configure the permissions You may need to configure sub sites for SharePoint (depends on your requirement) – If you have two projects/products in the same Team Project then you will not see the burn down for each one out-of-the-box, but rather a cumulative for the Team Project. This is not really that much of a problem as you would have to configure your burndown graphs for your current iteration anyway. note: When you create a sub site to a TFS linked portal it will inherit the settings of its parent site :) This is fantastic as it means that you can easily create sub sites and then set the Area and Iteration path in each of the reports to be the correct one. Every team wants their own customization (via Ewald Hofman) - small teams of 2 persons against teams of 30 – or even outsourcing – need their own process, you cannot allow that because everybody gets the same work item types. note: Luckily at SSW this is not a problem as our template is standardised across all projects and customers. Large list of builds (via Ewald Hofman) – As the build list in Team Explorer is just a flat list it can get very cluttered. note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean the list. Feedback Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to support you?   Technorati Tags: Visual Studio ALM,TFS Administration,TFS,Team Foundation Server,Project Planning,TFS Customisation

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams and I will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

    Acknowledgements:
    Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010.
    Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010.
    Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance.
    Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

    Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will be double the price to recover costs; the work is approved by the guys with budget and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches.

    As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists, I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.

    As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the "Main" or "Trunk" line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don't you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity.

    Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch, by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA). - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs: You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note   Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new base lines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. -Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Main line, this is where your stable code lives and where any build has known entities, always passes and has a happy test that passes as well? Many development projects consist of, a single “Main” line of source and artifacts. This is good; at least there is source control . There are however a couple of issues that need to be considered. What happens if: you and your team are working on a new set of features and the customer wants a change to his current version? you are working on two features and the customer decides to abandon one of them? you have two teams working on different feature sets and their changes start interfering with each other? I just use labels instead of branches? That's a lot of “what if’s”, but there is a simple way of preventing this. Branching… In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Creating a single feature branch means you can isolate the development work on that branch.   Its standard practice for large projects with lots of developers to use Feature branching and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has the permissions changes so contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2 as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.   Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line. There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developer’s work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development: The Team completes their work according to their definition of done. Merge from “Main” into “Sprint1” (Forward Integration) Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (we will talk about release branches later) Merge from “Sprint1” into “Main” to commit your changes. 
(Reverse Integration) Check-in Delete the Sprint1 branch Note: The Sprint 1 branch is no longer required as its useful life has been concluded. Check-in Done But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet, we only have finished code.   Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80’s brain washing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: If you find a deviation from the expected result you fix it on the Release branch. If during your final testing or your “Test Please” you find there are issues or bugs then you should fix them on the release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product owner. Make sure you leave plenty of time between your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features into a Release branch instead that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product. And you create a whole new backlog to work from.   Figure: When the Team decides it is happy with the product you can create a RTM branch. Once you have fixed all the bugs you can, and added any you can’t to the Product Backlog, and you Team is happy with the result you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an Audit trail branch that is optional, but is good practice. You could use a Label, but Labels are not Auditable and if a dispute was raised by the customer you can produce a verifiable version of the source code for an independent party to check. 
Rare I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch the RTM branch should never be deleted, or only deleted according to your companies legal policy, which in the UK is usually 7 years.   Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web test and all the other good engineering practices that help produce reliable software.     Figure: After the Sprint Planning meeting the process begins again. Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2 you can create your new Branch to develop in. How do we handle a bug(s) in production that can’t wait? Although in Scrum the only work done should be on the backlog there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI’s from main to sprint 2, I guess. Your “Figure: Merging print 2 back into Main.” covers, what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost): Backlog: Add the bug to the backlog and fix it in the next Sprint Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal. Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s) Note: This can be a very costly if the current sprint has already had a lot of work completed as it will be lost. The choice will depend on the complexity and severity of the bug(s) and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3 as they are uncomplicated but severe bugs. Figure: Real world issue where a bug needs fixed in the current release. 
If the bug(s) is urgent enough then then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: After you have fixed the bug you need to ship again. You then need to again create an RTM branch to hold the version of the code you released in escrow.   Figure: Main is now out of sync with your Release. We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch, are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main and Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?   Figure: Getting the changes in Main into Sprint 2 is very important. The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2, maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of Sync with Main. Note: Avoid the “Big bang merge” at all costs.   Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates. Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.   Figure: The logical conclusion. This then allows the creation of the next release. By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it as I find these exploratory posts coupled with real world experience really help harden my understanding.  Branching is a tool; it is not a silver bullet. Don’t over use it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated I hope showing you how it is formed simplifies it as much as possible.   Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010

    Read the article

  • Learn Lean Software Development and Kanban Systems

    - by Ben Griswold
I did an in-house presentation on Lean Software Development (LSD) and Kanban Systems this week.  Beyond what I had previously learned from various podcasts, I knew little about either topic prior to compiling my slide deck.  In the process of building my presentation, I learned a ton.  I found the concepts weren't very difficult to grok; however, I found little detailed information was available online. Hence this post, which is merely a list of valuable resources.

Principles of Lean Thinking, Mary Poppendieck
Lean Software Development, Mary Poppendieck
Lean Programming, Mary Poppendieck
Lean Software Development, Wikipedia
Implementing Lean Software Development: From Concept to Cash, Poppendieck
Lean Software Development Overview, Darrell Norton
Lean Thinking: Banish Waste and Create Wealth in Your Corporation
The Goal: A Process of Ongoing Improvement
The Toyota Way
Extreme Toyota: Radical Contradictions That Drive Success at the World's Best Manufacturer
Elegant Code Cast 17 – David Laribee on Lean / Kanban
Herding Code Episode 42: Scott Bellware on BDD and Lean Development
Seven Principles of Lean Software Development, Przemysław Bielicki
Kanban Boards for Agile Project Management with Zen Author Nate Kohari
Herding Code 55: Nate Kohari brings Your Moment of Zen
James Shore on Kanban Systems
Agile Zen Product Site
A Leaner Form of Agile, David Laribee
Kanban as Alternative Agile Implementation, Mark Levison
Lean Software Development, Dr. Christoph Steindl
Glossary of Lean Manufacturing Terms
Why Pull? Why Kanban?, Corey Ladas

    Read the article

  • Installing Visual Studio Team Foundation Server Service Pack 1

    - by Martin Hinshelwood
As has become customary when the product team releases a new patch, SP or version, I like to document the install. Although I had no errors on my main computer, my netbook did have problems. Although I am not ready to call it a Service Pack problem just yet!

Update 2011-03-10 – Running the Team Foundation Server 2010 Service Pack 1 install a second time worked.

As per Brian's post I am installing the Team Foundation Server Service Pack first, and indeed as this is a single server local deployment I need to install both. If I only install one it will leave the other product broken. This however does not affect you if you are running Visual Studio and Team Foundation Server on separate computers, as is normal in a production deployment.

Main workhorse

I will be installing the service pack first on my main computer as I want to actually use it here.

Figure: My main workhorse

I will also be installing this on my netbook, which is obviously of significantly lower spec, but I will do that one after. Although, as always, I had my fingers crossed, I was not really worried.

Figure: KB2182621

Compared to Visual Studio there are not really a lot of components to update.

Figure: TFS 2010 and SQL 2008 are the main things to update

There is no "web" installer for the Team Foundation Server 2010 Service Pack, but that is OK as most people will be installing it on a production server and will want to have everything local. I would have liked a web installer, but the added complexity for the product team is not worth it for a 500mb patch.

Figure: There is currently no way to roll SP1 and RTM together

Figure: No problems with the file verification, phew

Figure: Although the install took a while, it progressed smoothly

Figure: I always like a success screen

Well, as far as the install is concerned everything is OK, but what about TFS? Can I still connect and can I still administer it?

Figure: Service Pack 1 is reflected correctly in the Administration Console

I am confident that there are no major problems with TFS on my system and that it has been updated to SP1. I can do all of the things that I used to do before with ease, and with the new features detailed by Brian I think I will be happy.

Netbook

The great god Murphy has struck, and my poor wee laptop spat the Team Foundation Server 2010 Service Pack 1 out so fast it hit me on the back of the head. That will teach me for not looking…

Figure: "Installation did not succeed" I am pretty sure should not be all caps!

On examining the file I found that everything worked, except the actual Team Foundation Server 2010 servicing step.

Action: System Requirement Checks...
Action complete Action: Downloading and/or Verifying Items c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp: Verifying signature for VS10-KB2182621.msp c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp Signature verified successfully for VS10-KB2182621.msp c:\757fe6efe9f065130d4838081911\DACFramework_enu.msi: Verifying signature for DACFramework_enu.msi c:\757fe6efe9f065130d4838081911\DACFramework_enu.msi Signature verified successfully for DACFramework_enu.msi c:\757fe6efe9f065130d4838081911\DACProjectSystemSetup_enu.msi: Verifying signature for DACProjectSystemSetup_enu.msi Exists: evaluating Exists evaluated to false c:\757fe6efe9f065130d4838081911\DACProjectSystemSetup_enu.msi Signature verified successfully for DACProjectSystemSetup_enu.msi c:\757fe6efe9f065130d4838081911\TSqlLanguageService_enu.msi: Verifying signature for TSqlLanguageService_enu.msi c:\757fe6efe9f065130d4838081911\TSqlLanguageService_enu.msi Signature verified successfully for TSqlLanguageService_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_x86_enu.msi: Verifying signature for SharedManagementObjects_x86_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_x86_enu.msi Signature verified successfully for SharedManagementObjects_x86_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_amd64_enu.msi: Verifying signature for SharedManagementObjects_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_amd64_enu.msi Signature verified successfully for SharedManagementObjects_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_x86_enu.msi: Verifying signature for SQLSysClrTypes_x86_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_x86_enu.msi Signature verified successfully for SQLSysClrTypes_x86_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_amd64_enu.msi: Verifying signature for SQLSysClrTypes_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_amd64_enu.msi Signature verified successfully for SQLSysClrTypes_amd64_enu.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.cab: Verifying signature for vcruntime\Vc_runtime_x86.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.cab Signature verified successfully for vcruntime\Vc_runtime_x86.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.msi: Verifying signature for vcruntime\Vc_runtime_x86.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.msi Signature verified successfully for vcruntime\Vc_runtime_x86.msi c:\757fe6efe9f065130d4838081911\SetupUtility.exe: Verifying signature for SetupUtility.exe c:\757fe6efe9f065130d4838081911\SetupUtility.exe Signature verified successfully for SetupUtility.exe c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.cab: Verifying signature for vcruntime\Vc_runtime_x64.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.cab Signature verified successfully for vcruntime\Vc_runtime_x64.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.msi: Verifying signature for vcruntime\Vc_runtime_x64.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.msi Signature verified successfully for vcruntime\Vc_runtime_x64.msi c:\757fe6efe9f065130d4838081911\NDP40-KB2468871.exe: Verifying signature for NDP40-KB2468871.exe c:\757fe6efe9f065130d4838081911\NDP40-KB2468871.exe Signature verified successfully for NDP40-KB2468871.exe Action complete Action: Performing actions on all Items Entering Function: BaseMspInstallerT >::PerformAction Action: Performing Install on MSP: 
c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp targetting Product: Microsoft Team Foundation Server 2010 - ENU Returning IDOK. INSTALLMESSAGE_ERROR [Error 1935.An error occurred during the installation of assembly 'Microsoft.TeamFoundation.WebAccess.WorkItemTracking,version="10.0.0.0",publicKeyToken="b03f5f7f11d50a3a",processorArchitecture="MSIL",fileVersion="10.0.40219.1",culture="neutral"'. Please refer to Help and Support for more information. HRESULT: 0x80070005. ] Returning IDOK. INSTALLMESSAGE_ERROR [Error 1712.One or more of the files required to restore your computer to its previous state could not be found. Restoration will not be possible.] Patch (c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp) Install failed on product (Microsoft Team Foundation Server 2010 - ENU). Msi Log: MSI returned 0x643 Entering Function: MspInstallerT >::Rollback Action Rollback changes PerformMsiOperation returned 0x643 PerformMsiOperation returned 0x643 OnFailureBehavior for this item is to Rollback. Action complete Final Result: Installation failed with error code: (0x80070643), "Fatal error during installation. " (Elapsed time: 0 00:14:09).

Figure: Error log for Team Foundation Server 2010 install shows a failure

As there is really no information in this log as to why the installation failed, I checked the event log on that box.

Figure: There are hundreds of errors and it actually looks like there are more problems than a failed Service Pack

I am going to just run it again and see if it was because the netbook was slow to catch on to the update. Here's hoping, but even if it fails, I would question the installation of Windows (PDC laptop original install) before I question the Service Pack.

Figure: Second run through was successful

I don't know if the laptop was just slow, or what… Did you get this error? If you did, I will push this to the product team as a problem, but unless more people have this sort of error, I will just look to write this off as a corrupted install of Windows and reinstall.

    Read the article

  • Kaiden and the Arachnoid Cyst

    - by Martin Hinshelwood
Some of you may remember when my son Kaiden was born I posted pictures of him and his sister. Kaiden is now 15 months old and is progressing perfectly in every area except one: we had been worried that he was not walking yet. We were only really concerned as his sister was walking at 8 months.

Figure: Kai as his usual self

Jadie and I were concerned about that, and that he had a rather large head (noggin), so we talked to various GPs and our health visitor, who immediately dismissed our concerns every time. That was until about two months ago, when we happened to get a GP whose daughter had Hyper Mobility and she recognised the symptoms immediately. We were referred to the Southbank clinic, who were lovely, and the paediatrician confirmed that he had Hyper Mobility after testing all of his faculties. This just means that his joints are overly mobile and he would need a little physiotherapy to help him out. At the end the paediatrician remarked offhand that he has a rather large head and wanted to measure it. Sure enough he was a good margin above the highest percentile mark for his height and weight. The paediatrician showed the measurements to a paediatric consultant who, as a precautionary measure, referred us for an MRI at Yorkhill Children's hospital.

Now, Yorkhill has always been fantastic to us, and this was no exception. You know we have NEVER had a correct diagnosis for the kids (with the exception of the above) from a GP, and indeed twice have been prescribed incorrect medication that made the kids sicker! We now always go straight to Yorkhill to save them having to fix GP mistakes as well.

Monday 24th May, 7pm

The scan went fantastically, with Kaiden sleeping in the MRI machine for all but 5 minutes at the end, where he waited patiently for it to finish. We were not expecting anything to be wrong as this was just a precautionary scan to make sure that nothing in his head was affecting his gross motor skills. After the scan we were told to expect a call towards the end of the week…

Tuesday 25th May, 12pm

The very next day we got a call from Southbank who said that they had found an Arachnoid Cyst, and could we come in the next day to see a Consultant, and that Kai would need an operation.

Wednesday 26th May, 12:30pm

We went into the Southbank clinic and spoke to the paediatric consultant, who assured us that it was operable but that it was taking up considerable space in Kai's head. Cerebrospinal fluid is building up as a cyst is blocking the channels it uses to drain. Thankfully they told us that prospects were good and that Kai could expect to make a full recovery, before showing us the MRI pictures.

Figure: Normal brain MRI cross section. This normal scan shows the spaces in the middle of the brain that contain and produce the Cerebrospinal fluid.

Figure: Normal Cerebrospinal Flow. This fluid is needed by the brain but is drained in the middle down the spinal column.

Figure: Kai's cyst blocking the four channels.

I do not think that I need to explain the difference between the healthy picture and Kai's picture. However you can see in this first picture the faint outline of the cyst in the middle that is blocking the four channels from draining. After seeing the scans a Neurosurgeon has decided that he is not acute, but needs an operation to unblock the flow.

Figure: OMFG!

You can see in the second picture the effect of the build-up of fluid. If I was not horrified by the first picture I was seriously horrified by this one.

What next?
Kai is not presenting the symptoms of vomiting or listlessness that would show an immediate problem, and as such we will get an appointment to see the Paediatric Neurosurgeon at the Southern General hospital in about 4 weeks. This timescale is based on the Neurosurgeon seeing the scans. After that Kai will need an operation to release the pressure and either remove the cyst completely or put in a permanent shunt (tube from brain to stomach) to bypass the blockage. We have updated his notes for the referral with additional recent information on top of the scan that the consultant thinks will help improve the timescales, but that is just a guess.

All we can do now is wait and see, and be watchful for tell-tale signs of listlessness, eye problems and vomiting that would signify a worsening of his condition.

Technorati Tags: Personal

    Read the article

  • Render MVCContrib Grid with No Header Row

    - by Ben Griswold
The MVCContrib Grid allows for the easy construction of HTML tables for displaying data from a collection of Model objects. I add this component to all of my ASP.NET MVC projects.  If you aren't familiar with what the grid has to offer, it's worth looking into.  What you may notice in the busy example below is the fact that I render my column headers independent of the grid contents.  This allows me to keep my headers fixed while the user searches through the table content which is displayed in a scrollable div*.  Thus, I needed a way to render my grid without headers. That's where Grid Renderers come into play.

<table border="0" cellspacing="0" cellpadding="0" class="projectHeaderTable">
    <tr>
        <td class="memberTableMemberHeader">
            <%= Html.GridColumnHeader("Member", "Index", "MemberFullName")%>
        </td>
        <td class="memberTableRoleHeader">
            <%= Html.GridColumnHeader("Role", "Index", "ProjectRoleTypeName")%>
        </td>
        <td class="memberTableActionHeader">
            Action
        </td>
    </tr>
</table>
<div class="scrollContentWrapper">
<% Html.Grid(Model)
    .Columns(column =>
            {
                column.For(c => c.MemberFullName).Attributes(@class => "memberTableMemberCol");
                column.For(c => c.ProjectRoleTypeName).Attributes(@class => "memberTableRoleCol");
                column.For(x => Html.ActionLink("View", "Details", new { Id = x.ProjectMemberId }) + " | " +
                                Html.ActionLink("Edit", "Edit", new { Id = x.ProjectMemberId }) + " | " +
                                Html.ActionLink("Remove", "Delete", new { Id = x.ProjectMemberId }))
                    .Attributes(@class => "memberTableActionCol").DoNotEncode();
            })
    .Empty("There are no members associated with this project.")
    .Attributes(@class => "lbContent")
    .RenderUsing(new GridNoHeaderRenderer<ProjectMemberDetailsViewModel>())
    .Render(); %>
</div>
<div class="scrollContentBottom">
    <!-- -->
</div>
<%=Html.Pager(Model) %>

Maybe you noticed the reference to the GridNoHeaderRenderer class above?  Yep, rendering the grid with no header is straightforward.

public class GridNoHeaderRenderer<T> :
    HtmlTableGridRenderer<T> where T : class
{
    protected override bool RenderHeader()
    {
        // Explicitly returning true would suppress the header
        // just fine, however, Render() will always assume that
        // items exist in collection and RenderEmpty() will
        // never be called.

        // In other words, return ShouldRenderHeader() if you
        // want to maintain the Empty text when no items exist.
        return ShouldRenderHeader();
    }
}

Well, if you read through the comments, there is one catch.  You might be tempted to have the RenderHeader method always return true.  This would work just fine, but you should return the result of ShouldRenderHeader() instead so the Empty text will continue to display if there are no items in the collection. The GridRenderer feature found in the MVCContrib Grid is so well put together, I just had to share.

* Though you can find countless alternatives to the fixed headers problem online, this is the only solution that I've ever found to reliably work across browsers. If you know something I don't, please share.
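As an aside, the Html.GridColumnHeader calls in the header table above are a custom helper from the author's project and are not shown in the post. A minimal sketch of what such an extension might look like follows; the parameter meanings (header text, list action, sort column) and the sortBy route value are assumptions based on the usage above, not the author's actual code.

using System.Web.Mvc;
using System.Web.Mvc.Html;

public static class GridHeaderExtensions
{
    // Hypothetical helper: renders a grid column header as a link back to the
    // list action, carrying the column to sort by.
    public static MvcHtmlString GridColumnHeader(this HtmlHelper helper,
        string headerText, string actionName, string sortColumn)
    {
        return helper.ActionLink(headerText, actionName, new { sortBy = sortColumn });
    }
}

The corresponding controller action would then read the sortBy parameter and order the model before handing it to the grid.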

    Read the article

  • Introduction to Lean Software Development and Kanban Systems – Eliminate Waste

    - by Ben Griswold
    In this post, we’ll continue the Lean Software Development and Kanban Systems series by concentrating on Principle #1: Eliminate Waste.   “Muda” is Waste in Japanese. In the next part of the series, we’ll dive into Principle #2: Create Knowledge / Amplify Learning. And I am going to be a little obnoxious about listing my Lean and Kanban references with every series post.  The references are great and they deserve this sort of attention. 

    Read the article

  • Protecting a WebCenter app with OAM 11g - the Webcenter side

    - by Martin Deh
Recently, there was a customer requirement to enable a WebCenter custom portal application to have multiple login-type pages and have the authentication handled through Oracle Access Manager (OAM). As my security colleagues would tell me, this is fully supported through OAM.  Basically, all that would have to be done is to define in OAM the individual resources (directories, URLs, etc.) that needed to be secured. Once that was done, OAM would handle the rest and the user would typically then be prompted by a login page, which was provided by OAM.  I am not going to talk about OAM security in this blog.  In addition, my colleague Chris Johnson (ATEAM security) has already blogged his side of the story here:  http://fusionsecurity.blogspot.com/2012/06/protecting-webcenter-app-with-oam-11g.html .  What I am going to cover is what was done on the WebCenter/ADF side of things.

In the test application, the structure of pages defined in the pages.xml is as follows:  In this screenshot, notice that "Delegated Security" has been selected, and the absence of the anonymous-role for the "secured" page (A - B is the same).  This essentially, in the WebCenter world, means that each of these pages is protected, and only accessible by those defined by the application's "role".  For more information on how WebCenter handles security, which by the way extends from ADF security, please refer to the documentation.  The (default) navigation model was configured.  You can see that with this set up, a user will be able to view the "links", where the links define navigation to the respective page:   Note from this dialog, you could also set some security on each link via the "visible" property.  However, the recommended best practice is to set the permissions through the page hierarchy (pages.xml).  Now, based on this set up, the expected behavior is that I could see the link for the secured A page only if I was already authenticated (logged in).  But this is not the use case of the requirement, since any user (anonymous) should be able to view (and click on) the link.  So how is this accomplished?  There is now a patch that enables this.  In addition, the portal application's web.xml will need an additional context parameter:

<context-param>
    <param-name>oracle.webcenter.navigationframework.SECURITY_LEVEL</param-name>
    <param-value>public</param-value>
</context-param>

As Chris mentions in his part of the blog, the code that is responsible for displaying the "links" is based upon the retrieval of the navigation model "node" prettyURL.  The prettyURL is a generated URL that also includes the adf.ctrl-state token, which is very important to the ADF framework runtime.  URLs that are void of this token get new tokens from the ADF runtime.  This can lead to potential memory issues.

<af:forEach var="node" varStatus="vs"
   items="#{navigationContext.defaultNavigationModel.listModel['startNode=/,includeStartNode=false']}">
                <af:spacer width="10" height="10" id="s1"/>
                <af:panelGroupLayout id="pgl2" layout="vertical"
                                     inlineStyle="border:blue solid 1px">
                  <af:goLink id="pt_gl1" text="#{node.title}"
                             destination="#{node.goLinkPrettyUrl}"
                             targetFrame="#{node.attributes['Target']}"
                             inlineStyle="font-size:large;#{node.selected ? 'font-weight:bold;' : ''}"/>
                  <af:spacer width="10" height="10" id="s2"/>
                  <af:outputText value="#{node.goLinkPrettyUrl}" id="ot2"
                                 inlineStyle="font-size:medium; font-weight:bold;"/>
                </af:panelGroupLayout>
              </af:forEach>

So now that the links are visible to all, clicking on a secure link will be intercepted by OAM.  Since OAM can also be configured in the Authentication Scheme, the challenge URL (the login page(s)) can come from anywhere.  In this case each login page has been defined in the custom portal application.  This was another requirement as well, since this login page also needed to have ADF based content.  This would not be possible if the login page came from OAM.  The following is the example login page:

<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
          xmlns:f="http://java.sun.com/jsf/core"
          xmlns:h="http://java.sun.com/jsf/html"
          xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <jsp:directive.page contentType="text/html;charset=UTF-8"/>
  <f:view>
    <af:document title="Settings" id="d1">
      <af:panelGroupLayout id="pgl1" layout="vertical"/>
      <af:outputText value="LOGIN FORM FOR A" id="ot1"/>
      <form id="loginform" name="loginform" method="POST"
            action="XXXXXXXX:14100/oam/server/auth_cred_submit">
        <table>
          <tr>
            <td align="right">username:</td>
            <td align="left">
              <input name="username" type="text"/>
            </td>
          </tr>
          <tr>
            <td align="right">password:</td>
            <td align="left">
              <input name="password" type="password"/>
            </td>
          </tr>
          <tr>
            <td colspan="2" align="center">
              <input value=" login " type="submit"/>
            </td>
          </tr>
        </table>
        <input name="request_id" type="hidden" value="${param['request_id']}"
               id="itsss"/>
      </form>
    </af:document>
  </f:view>
</jsp:root>

As you can see the code is pretty straightforward.  The most important section is the form tag, where the submit is a POST to the OAM server.  This example page is mostly HTML; however, it is valid to have ADF tags mixed in as well.  As a side note, this solution is really tailored to a specific requirement.  Normally, there would be only one login page (or dialog/popup), and the OAM challenge resource would be /adfAuthentication.  This maps to the adfAuthentication servlet.  Please see the documentation for more about ADF security here.

    Read the article

  • Getting Started with ASP.NET Membership, Profile and RoleManager

    - by Ben Griswold
    A new ASP.NET MVC project includes preconfigured Membership, Profile and RoleManager providers right out of the box.  Try it yourself – create a ASP.NET MVC application, crack open the web.config file and have a look.  First, you’ll find the ApplicationServices database connection: <connectionStrings>   <add name="ApplicationServices"        connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"        providerName="System.Data.SqlClient"/> </connectionStrings>   Notice the connection string is referencing the aspnetdb.mdf database hosted by SQL Express and it’s using integrated security so it’ll just work for you without having to call out a specific database login or anything. Scroll down the file a bit and you’ll find each of the three noted sections: <membership>   <providers>     <clear/>     <add name="AspNetSqlMembershipProvider"          type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"          connectionStringName="ApplicationServices"          enablePasswordRetrieval="false"          enablePasswordReset="true"          requiresQuestionAndAnswer="false"          requiresUniqueEmail="false"          passwordFormat="Hashed"          maxInvalidPasswordAttempts="5"          minRequiredPasswordLength="6"          minRequiredNonalphanumericCharacters="0"          passwordAttemptWindow="10"          passwordStrengthRegularExpression=""          applicationName="/"             />   </providers> </membership>   <profile>   <providers>     <clear/>     <add name="AspNetSqlProfileProvider"          type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"          connectionStringName="ApplicationServices"          applicationName="/"             />   </providers> </profile>   <roleManager enabled="false">   <providers>     <clear />     <add connectionStringName="ApplicationServices" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />     <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />   </providers> </roleManager> Really. It’s all there. Still don’t believe me.  Run the application, walk through the registration process and finally login and logout.  Completely functional – and you didn’t have to do a thing! What else?  Well, you can manage your users via the Configuration Manager which is hiding in Visual Studio behind Projects > ASP.NET Configuration. The ASP.NET Web Site Administration Tool isn’t MVC-specific (neither is the Membership, Profile or RoleManager stuff) but it’s neat and I hardly ever see anyone using it.  Here you can set up and edit users, roles, and set access permissions for your site. You can manage application settings, establish your SMTP settings, configure debugging and tracing, define default error page and even take your application offline.  The UI is rather plain-Jane but it works great. And here’s the best of all.  Let’s say you, like most of us, don’t want to run your application on top of the aspnetdb.mdf database.  Let’s suppose you want to use your own database and you’d like to add the membership stuff to it.  Well, that’s easy enough. 
Take a look inside your [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\ folder.  Here you'll find a bunch of files.  If you were to run the InstallCommon.sql, InstallMembership.sql, InstallRoles.sql and InstallProfile.sql files against the database of your choice, you'd be installing the same membership, profile and role artifacts which are found in aspnetdb.mdf into your own database.  Too much trouble?  Okay. Run [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\aspnet_regsql.exe from the command line instead.  This will launch the ASP.NET SQL Server Setup Wizard, which walks you through the installation of those same database objects into the new or existing database of your choice. You may not always have the luxury of using this tool on your destination server, but you should use it whenever you can.  Last tip: don't forget to update the ApplicationServices connection string to point to your custom database after the setup is complete. At the risk of sounding like a smarty, everything I've mentioned in this post has been around for quite a while. The thing is that not everyone has had the opportunity to use it.  And it makes sense. I know I've worked on projects which used custom membership services.  Why bother with the out-of-the-box stuff, right?   And the .NET framework is so massive, who can know it all? Well, eventually you might have a chance to architect your own solution using any implementation you'd like, or you will have the time to play around with another aspect of the framework.  When you do, think back to this post.
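Once the provider tables and the web.config sections above are in place, the Membership and Roles APIs work against them directly from code. Here's a minimal sketch; the user name, password, email and role name are made-up values, and roleManager must be enabled for the role calls to succeed.

using System.Web.Security;

public static class MembershipSetupDemo
{
    public static void CreateUserAndAssignRole()
    {
        // Creates a user through the configured AspNetSqlMembershipProvider.
        MembershipCreateStatus status;
        MembershipUser user = Membership.CreateUser(
            "jdoe", "P@ssw0rd!", "jdoe@example.com",
            null, null, true, out status);

        if (status != MembershipCreateStatus.Success)
            return; // e.g. DuplicateUserName, InvalidPassword, ...

        // Role calls require <roleManager enabled="true"> in web.config.
        if (!Roles.RoleExists("Administrators"))
            Roles.CreateRole("Administrators");
        Roles.AddUserToRole(user.UserName, "Administrators");

        // Validates credentials the same way the built-in login page does.
        bool isValid = Membership.ValidateUser("jdoe", "P@ssw0rd!");
    }
}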

    Read the article

  • Scott Guthrie in Glasgow

    - by Martin Hinshelwood
Last week Scott Guthrie was in Glasgow for his new Guathon tour, which was a roaring success. Scott did talks on the new features in Visual Studio 2010, Silverlight 4, ASP.NET MVC 2 and Windows Phone 7. Scott talked from 10am till 4pm, so this can only contain what I remember, and I am sure lots of things he discussed just went in one ear and out the other; however, I have tried to capture at least all of my Ohh's and Ahh's.

Visual Studio 2010

Right now you can download and install the Visual Studio 2010 Release Candidate, but soon we will have the final product in our hands. With it there are some amazing improvements, and not just in the IDE. New versions of VB and C# come out of the box, as well as Silverlight 4 and SharePoint 2010 integration. The new Intellisense features allow inline support for Types and Dictionaries, as well as being able to type just part of a name and have the list filter accordingly. Even better, and my personal favourite, is one that Scott did not mention: it is not case sensitive, so I can actually find things in C# despite its reasonless case sensitivity (Scott, can we please have an option to turn that off?). Another nice feature is that the Routing engine that was created for ASP.NET MVC is now available for WebForms, which is good news for all those that just imported the MVC DLLs to get at it anyway. Another fantastic feature that will need some exploring is the ability to add validation rules to your entities and have them validated automatically on the front end. This removes the need to add your own validators and means that you can control an object's validation rules from a single location, the object. A simple command "GridView.EnableDynamicData(gettype(product))" will enable this feature on controls. What was not clear was whether there would be support for this in WPF and WinForms as well. If there is, we can write our validation rules once and use them everywhere. I was disappointed to hear that there would be no inbuilt support for the Dynamic Language Runtime (DLR) with VS2010, but I think it will be there for .vNext. Because I have been concentrating on the Visual Studio ALM enhancements to VS2010 I found this section invaluable, as I now know at least some of what I missed.

Silverlight 4

I am not a big fan of Silverlight. There, I said it, and I will probably get lynched for it. My big problem with Silverlight is that most of the really useful things I learned from WPF do not work. I am only going to mention one thing and that is "x:Type". If you are a WPF developer you will know how much power these 6 little letters provide; the ability to target templates at object types being the most magical and useful. But, and this is a massive but, if you are developing applications that MUST run on platforms other than Windows then Silverlight is your only choice (well, that and Flash, but let's just not go there). And Silverlight has a huge install base as well: 60% of all internet-connected devices have Silverlight. Can Adobe say that? Even though I am not a fan of it, my current project is a Silverlight one. If you start your XAML experience with Silverlight you will not be disappointed, and neither will the users of the applications you build. Scott showed us a fantastic application called "Silverface" that is a Silverlight 4 Out of Browser application. I have looked for a link and can't find one, but true to form, here is a fantastic WPF version called Fish Bowl from Microsoft.
ASP.NET MVC 2

ASP.NET MVC is something I have played with but never used in anger. It is definitely the way forward, but WebForms is not dead yet. There are still circumstances when WebForms are better. If you are starting from greenfield and you are using TDD, then MVC is ultimately the only way you can go. New in version 2 are Dynamic Scaffolding helpers that let you control how data is presented in the UI from the Entities. Adding validation rules and other options that make sense there can help improve the overall ease of developing the UI. Also, the Microsoft team have heard the cries for help from the larger site builders and provided "Areas", which allow a level of categorisation to your Controllers and Views. These work just like add-ins and have their own folder, but also have sub Controllers and Views. Areas are totally pluggable and can be dropped onto existing sites, giving the ability to have boxed products in MVC, although what you do with all of those views is anyone's guess. They have been listening to everyone again with the new option to encapsulate UI using Html.Action or Html.RenderAction. This uses the existing .ascx functionality in ASP.NET to render partial views to the screen in certain areas. While this was possible before, it makes the method official, thereby opening it up to the masses and making it a standard. At the end of the session Scott pulled out some IIS goodies including the IIS SEO Toolkit, which can be used to verify your own site is "good" for search engine consumption. Better yet, he suggested that you run it against your friends' sites and shame them with how bad they are. Note: make sure you have fixed yours first.

Windows Phone 7 Series

I had already seen the new UI for WP7 and heard about the developer story, but Scott brought that home by building a twitter application in about 20 minutes using the emulator. Scott's only mistake was loading @plip's tweets into the app… And guess what, it was written in Silverlight. When Windows Phone 7 launches you will be able to use about 90% of the codebase of your existing Silverlight application and use it on the phone! There are two downsides to the new WP7 architecture: No, your existing application WILL NOT work without being converted to either a Silverlight or XNA UI. NO, you will not be able to get your applications onto the phone any other way but through the Marketplace. Do I think these are problems? No, not even slightly. This phone is aimed at consumers who have probably never tried to install an application directly onto a device. There will be support for enterprise apps in the future, but for now enterprises should stay on Windows Phone 6.5.x devices.

Post Event drinks

At the after-event drinks gathering Scott was checking out my HTC HD2 (released to the US this month on T-Mobile) and liked the Windows Phone 6.5.5 build I have on it. We discussed why Microsoft were not going to allow Windows Phone 7 Series onto it, with my understanding being that it had 5 buttons and not 3, while Scott was sure that there was more to it from a hardware standpoint. I think he is right, and although the HTC HD2 has a DX9 compatible processor, it was never built with WP7 in mind. However, as if by magic, Saturday brought fantastic news for all those that have already bought an HD2: Yes, this appears to be Windows Phone 7 running on a HTC HD2. The HD2 itself won't be getting an official upgrade to Windows Phone 7 Series, so all eyes are on the ROM chefs at the moment.
The rather massive photos have been posted by Tom Codon on HTCPedia and they've apparently got WiFi, GPS, Bluetooth and other bits working. The ROM isn't online yet but according to the post there's a beta version coming soon. Leigh Geary - http://www.coolsmartphone.com/news5648.html  What was Scott working on on his flight back to the US?   Technorati Tags: VS2010,MVC2,WP7S,WP7 Follow: @CAMURPHY, @ColinMackay, @plip and of course @ScottGu

    Read the article

  • Website Vulnerabilities

    - by Ben Griswold
    The folks at the Open Web Application Security Project publish a list of the top 10 vulnerabilities. In a recent CodeBrew I provided a quick overview of them all and spent a good amount of time focusing on the most prevalent vulnerability, Cross Site Scripting (XSS).  I gave an overview of XSS, stepped through a quick demo (sorry vulnerable site), reviewed the three XSS variations and talked a bit about how to protect one’s site.  References and reading materials were also included in the presentation and, look at that, they are provided here too. Open Web Application Security Project The OWASP Top Ten Vulnerabilities (pdf) OWASP List of Vulnerabilities The 56 Geeks Project by Scott Johnson ha.ckers.org OWASP XSS Prevention Cheat Sheet Wikipedia Is XSS Solvable?, Don Ankney The Anatomy of Cross Site Scripting, Gavin Zuchlinski
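To make the protection side a little more concrete, the usual first line of defence against XSS is encoding untrusted output before it reaches the page. A tiny sketch follows; the comment value is a stand-in for any user-supplied input.

using System.Web;

public static class CommentFormatter
{
    // Encodes untrusted input so markup renders as text instead of executing.
    // For example, <script>alert('xss')</script> becomes &lt;script&gt;alert('xss')&lt;/script&gt;.
    public static string ToSafeHtml(string userComment)
    {
        return HttpUtility.HtmlEncode(userComment);
    }
}

In ASP.NET MVC views the same idea is a one-liner, <%= Html.Encode(Model.Comment) %> (or the <%: %> syntax in ASP.NET 4; Model.Comment here is a hypothetical property). This addresses the reflected and stored XSS variations; DOM-based XSS needs equivalent care in client-side script.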

    Read the article

  • Learn Cloud Computing – It’s Time

    - by Ben Griswold
    Last week, I gave an in-house presentation on cloud computing.  I walked through an overview of cloud computing – characteristics (on demand, elastic, fully managed by provider), why are we interested (virtualization, distributed computing, increased access to high-speed internet, weak economy), various types (public, private, virtual private cloud) and services models (IaaS, PaaS, SaaS.)  Though numerous providers have emerged in the cloud computing space, the presentation focused on Amazon, Google and Microsoft offerings and provided an overview of their platforms, costs, data tier technologies, management and security.  One of the biggest talking points was why developers should consider the cloud as part of their deployment strategy: You only have to pay for what you consume You will be well-positioned for one time event provisioning You will reap the benefits of automated growth and scalable technologies For the record: having deployed dozens of applications on various platforms over the years, pricing tends to be the biggest customer concern.  Yes, scalability is a customer consideration, too, but it comes in distant second.  Boy do I hope you’re still reading… You may be thinking, “Cloud computing is well and good and it sounds catchy, but should I bother?  After all, it’s just another technology bundle which I’m supposed to ramp up on because it’s the latest thing, right?”  Well, my clients used to be 100% reliant upon me to find adequate hosting for them.  Now I find they are often aware of cloud services and some come to me with the “possibility” that deploying to the cloud is the best solution for them.  It’s like the patient who walks into the doctor’s office with their diagnosis and treatment already in mind thanks to the handful of Internet searches they performed earlier that day.  You know what?  The customer may be correct about the cloud. It may be a perfect fit for their app.  But maybe not…  I don’t think there’s a need to learn about every technical thing under the sun, but if you are responsible for identifying hosting solutions for your customers, it is time to get up to speed on cloud computing and the various offerings (if you haven’t already.)  Here are a few references to get you going: DZone Refcardz #82 Getting Started with Cloud Computing by Daniel Rubio Wikipedia Cloud Computing – What is it? Amazon Machine Images (AMI) Google App Engine SDK Azure SDK EC2 Spot Pricing Google App Engine Team Blog Amazon EC2 Team Blog Microsoft Azure Team Blog Amazon EC2 – Cost Calculator Google App Engine – Cost and Billing Resources Microsoft Azure – Cost Calculator Larry Ellison has stated that cloud computing has been defined as "everything that we currently do" and that it will have no effect except to "change the wording on some of our ads" Oracle launches worldwide cloud-computing tour NoSQL Movement  

    Read the article

  • Learn Domain-Driven Design

    - by Ben Griswold
    I just wrote about how I like to present on unfamiliar topics. With this said, Domain-Driven Design (DDD) is no exception. This is yet another area I knew enough about to be dangerous but I certainly was no expert.  As it turns out, researching this topic wasn’t easy. I could be wrong, but it is as if DDD is a secret to which few are privy. If you search the Interwebs, you will likely find little information about DDD until you start rolling over rocks to find that one great write-up, a handful of podcasts and videos and the Readers’ Digest version of the Blue Book which apparently you must read if you really want to get the complete, unabridged skinny on DDD.  Even Wikipedia’s write-up is skimpy which I didn’t know was possible…   Here’s a list of valuable resources.  If you, too, are interested in DDD, this is a good starting place.  Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans Domain-Driven Design Quickly, by Abel Avram & Floyd Marinescu An Introduction to Domain-Driven Design by David Laribee Talking Domain-Driven Design with David Laribee Part 1, Deep Fried Bytes Talking Domain-Driven Design with David Laribee Part 2, Deep Fried Bytes Eric Evans on Domain Driven Design, .NET Rocks Domain-Driven Design Community Eric Evans on Domain Driven Design Jimmy Nilsson on Domain Driven Design Domain-Driven Design Wikipedia What I’ve Learned About DDD Since the Book, Eric Evans Domain Driven Design, Alt.Net Podcast Applying Domain-Driven Design and Patterns: With Examples in C# and .NET, Jimmy Nilsson Domain-Driven Design Discussion Group DDD: Putting the Model to Work by Eric Evans The Official DDD Site

    Read the article

  • LPR printing in Ubuntu

    - by Ben
I'm trying to set up printing to a networked LPR printer, but I can't seem to make it work. The connection information that IT gives is:

Print Spooler: printing.domain.edu
Printer Name: PrinterName
Print port: 515
Bidirectional: Disabled

It's an HP LaserJet p4015x, and I'm using the "HP LaserJet p4015x, hpcups 3.10.6 (color, 2-sided printing)" driver, according to CUPS. I have tried installing via the CUPS admin interface as lpd://printing.domain.edu:515/PrinterName, lpd://printing.domain.edu/PrinterName, and http://printing.domain.edu:515/PrinterName, and none of these has worked (i.e. test pages show up as "stopped").

    Read the article

  • Objective-C or C++ for iOS games?

    - by Martin Wickman
    I'm pretty confident programming in Objective-C and C++, but I find Objective-C to be somewhat easier to use and more flexible and dynamic in nature. What would be the pros and cons when using C++ instead of Obj-C for writing games in iOS? Or rather, are there any known problems with using Obj-C as compared to C++? For instance, I suspect there might be performance issues with Obj-C compared to code written in C/C++.

    Read the article

  • ASP.NET MVC HandleError Attribute

    - by Ben Griswold
Last Wednesday, I took a whopping 15 minutes out of my day and added ELMAH (Error Logging Modules and Handlers) to my ASP.NET MVC application.  If you haven't heard the news (I hadn't until recently), ELMAH does a killer job of logging and reporting nearly all unhandled exceptions.  As for handled exceptions, I've been using NLog, but since I was already playing with the ELMAH bits I thought I'd see if I couldn't replace it. Atif Aziz provided a quick solution in his answer to a Stack Overflow question.  I'll let you consult his answer to see how one can subclass the HandleErrorAttribute and override the OnException method in order to get the bits working.  I pretty much rolled the recommended logic into my application and it worked like a charm.  Along the way, I did uncover a few HandleError facts to which I wasn't already privy.  Most of my learning came from Steven Sanderson's book, Pro ASP.NET MVC Framework.  I've flipped through a bunch of the book and spent time on specific sections.  It's a really good read if you're looking to pick up an ASP.NET MVC reference. Anyway, my notes are found as comments in the following code snippet.  I hope my notes clarify a few things for you too.

public class LogAndHandleErrorAttribute : HandleErrorAttribute
{
    public override void OnException(ExceptionContext context)
    {
        // A word from our sponsors:
        //      http://stackoverflow.com/questions/766610/how-to-get-elmah-to-work-with-asp-net-mvc-handleerror-attribute
        //      and Pro ASP.NET MVC Framework by Steven Sanderson
        //
        // Invoke the base implementation first. This should mark context.ExceptionHandled = true
        // which stops ASP.NET from producing a "yellow screen of death." This also sets the
        // Http StatusCode to 500 (internal server error.)
        //
        // Assuming Custom Errors aren't off, the base implementation will trigger the application
        // to ultimately render the "Error" view from one of the following locations:
        //
        //      1. ~/Views/Controller/Error.aspx
        //      2. ~/Views/Controller/Error.ascx
        //      3. ~/Views/Shared/Error.aspx
        //      4. ~/Views/Shared/Error.ascx
        //
        // "Error" is the default view, however, a specific view may be provided as an Attribute property.
        // A notable point is the Custom Errors defaultRedirect is not considered in the redirection plan.
        base.OnException(context);

        var e = context.Exception;

        // If the exception is unhandled, simply return and let Elmah handle the unhandled exception.
        // Otherwise, try to use error signaling (which involves the fully configured pipeline like logging,
        // mailing, filtering and what have you). Failing that, see if the error should be filtered.
        // If not, simply log the exception.
        if (!context.ExceptionHandled
               || RaiseErrorSignal(e)
               || IsFiltered(context))
            return;

        LogException(e); // FYI. Simple Elmah logging doesn't handle mail notifications.
    }
}
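The snippet above calls three helpers (RaiseErrorSignal, IsFiltered and LogException) which come from Atif's answer and aren't reproduced here. A rough sketch of the first and last, based on the Elmah API, looks something like the following; treat these as approximations rather than the original code, and note that IsFiltered consults Elmah's ErrorFilterConfiguration and is best lifted from the answer verbatim.

// Requires: using System; using System.Web; using Elmah;
// These would sit inside the LogAndHandleErrorAttribute class above.

private static bool RaiseErrorSignal(Exception e)
{
    var context = HttpContext.Current;
    if (context == null)
        return false;

    // Signaling pushes the exception through the full Elmah pipeline:
    // logging, mail notifications, filtering and so on.
    ErrorSignal.FromContext(context).Raise(e);
    return true;
}

private static void LogException(Exception e)
{
    var context = HttpContext.Current;
    // Plain logging only - no mail notifications, as the comment above notes.
    ErrorLog.GetDefault(context).Log(new Error(e, context));
}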

    Read the article

  • ASP.NET Membership Provider Setup

    - by Ben Griswold
    In this screencast, Noah and I show you how to quickly get started with the ASP.NET Membership Provider.  We’ll take you through basic features and setup and walk you through membership table creation with the ASP.NET SQL Server Wizard. I’ve written about the ASP.NET Membership Provider and setup before.  If you missed the post, this introductory video may be for you.     This is one of our first screencasts.  If you have feedback, I’d love to hear it.

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >