Search Results

Search found 2949 results on 118 pages for 'msdn'.


  • Monitoring Visual Studio 2010 Performance Problems

    - by TATWORTH
    At http://visualstudiogallery.msdn.microsoft.com/fa85b17d-3df2-49b1-bee6-71527ffef441, Microsoft has provided a Visual Studio extension that reports on Visual Studio 2010 performance problems. Its use is discussed at http://blogs.msdn.com/b/visualstudio/archive/2011/05/02/perfwatson.aspx as follows: "Would you like Visual Studio 2010 to be even faster? Would you like any performance issue you see to be reported automatically without any hassle? Well now you can, with the new Visual Studio PerfWatson extension! Install this extension and help us deliver a faster Visual Studio experience. We're constantly working to improve the performance of Visual Studio and take feedback about it very seriously. Our investigations into these issues have found that there are a variety of scenarios where a long running task can cause the UI thread to hang or become unresponsive. Visual Studio PerfWatson is a low overhead telemetry system that helps us capture these instances of UI unresponsiveness and report them back to Microsoft automatically and anonymously. We then use this data to drive performance improvements that make Visual Studio faster." Now, instead of complaining, you too can help Microsoft locate and fix performance problems in Visual Studio 2010. The requirements are: "Following are the pre-requisites for installing Visual Studio PerfWatson: Windows Vista/2008/2008 R2/7 (Note: PerfWatson is not supported for Windows XP); Visual Studio 2010 SP1 (Professional, Premium, or Ultimate)"

    Read the article

  • Happy Day! VS2010 SP1, Project Server Integration, Load Test Feature Pack

    - by Aaron Kowall
    Microsoft released a PILE of Visual Studio goodness today: Visual Studio 2010 SP1 (including TFS SP1). Finally, we're done remembering which GDR packs, KB patches, etc. need to be installed with a new VS/TFS 2010 deployment. Just grab SP1. It's available today for MSDN subscribers and on March 10th for public download. TFS-Project Server Integration Feature Pack: MSDN subscribers got another little treat today with the TFS-Project Server Integration Feature Pack. We can now get project rollups and portfolio-level management with Project Server and still have the tight developer interaction with TFS. Finally we can make the PMO happy without duplicate entry or MS Project gymnastics. Visual Studio Load Test Feature Pack: This is a new benefit for Visual Studio 2010 Ultimate subscribers. Previously, Ultimate load testing was limited to 250 virtual users; if you needed more, you had to buy virtual user license packs. No more. Your Visual Studio Ultimate license now allows you to simulate as many virtual users as you need!! This is HUGE in improving adoption of regular load testing for development projects. All the details are available from Soma's blog. Technorati Tags: VS2010,TFS,Load Test

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added features we didn't often take the old ones away. So now, instead of having no option to audit user actions, you might face the opposite problem: too many ways to audit! You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to give you a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so make sure you're only auditing what you need to.
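    As a rough illustration of the SQL Server Audit feature mentioned above, here is a minimal T-SQL sketch (not from the original post; the audit names, file path and database are hypothetical):

    -- Create a server audit that writes to a file target (hypothetical path)
    USE master;
    CREATE SERVER AUDIT MyAppAudit
        TO FILE (FILEPATH = 'C:\AuditLogs\');
    ALTER SERVER AUDIT MyAppAudit WITH (STATE = ON);

    -- Track SELECT statements against the dbo schema in one database
    USE MyDatabase;
    CREATE DATABASE AUDIT SPECIFICATION MySelectAudit
        FOR SERVER AUDIT MyAppAudit
        ADD (SELECT ON SCHEMA::dbo BY public)
        WITH (STATE = ON);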

    Read the article

  • The last word on C++ AMP...

    - by Daniel Moth
    Well, not the last word, but the last blog post I plan to do here on that topic. Over the last 12 months, I have published 45 blog posts related to C++ AMP on the Parallel Programming in Native Code blog, and the rest of the team has published even more. Occasionally I'll link to some of them from my own blog here, but today I decided to stop doing that, so if you relied on my personal blog pointing you to C++ AMP content, it is time you subscribed to the msdn blog. I will continue to blog about other topics here of course, so stay tuned. So, for the last time, I encourage you to read the latest two blog posts I published on the team blog, bringing together essential reading material on C++ AMP: "Learn C++ AMP", a collection of links to take you from zero to hero, and "Present on C++ AMP", a walkthrough on how to give a presentation, including slides. Got questions on C++ AMP? Hit the msdn forum! Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge, and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, then manually put them together to create the test rig. So, let's get down and dirty and start setting up the Test Rig.

    What components of Azure will I be using for building the Test Rig in the cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.

    Azure Connect: The Test Controller is on premise and the Test Agents are in the cloud, so how will they talk? To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network and roles running in Windows Azure. With this you can join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to the overview of Windows Azure Connect, and to a very useful video explaining everything you wanted to know about Windows Azure Connect.

    Azure Compute: Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents (a very nice blog post discusses the difference between the three role types). Developers are free to use the .NET Framework or other software that runs on Windows with the Worker role or Web role, and can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

    Virtual Machine Size | CPU Cores | Memory   | Cost Per Hour
    Extra Small          | Shared    | 768 MB   | $0.04
    Small                | 1         | 1.75 GB  | $0.12
    Medium               | 2         | 3.50 GB  | $0.24
    Large                | 4         | 7.00 GB  | $0.48
    Extra Large          | 8         | 14.00 GB | $0.96

    You might want to review the Windows Azure Pricing FAQ.

    Let's get started building the Test Rig…

    Machine | Role                               | Location
    VM – 1  | Domain Controller for Playpit.com  | On Premise
    VM – 2  | TFS, Test Controller               | On Premise
    VM – 3  | Test Agent                         | Cloud

    In this blog post I assume that you already have the domain, Team Foundation Server and Test Controller installed and set up. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

    I. Let's start building VM – 3: The Test Agent. Download the Windows Azure SDK and Tools, open Visual Studio, and create a new Windows Azure Project using the Cloud template. Choose the Worker Role for the reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (installed as part of the Windows Azure Toolkit) on your local machine.
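    For orientation, here is roughly what the template-generated WorkerRole.cs looks like (a sketch from memory of the VS2010 template, not copied from the original post; no changes to it are needed):

    using System.Net;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    namespace WorkerRole1
    {
        public class WorkerRole : RoleEntryPoint
        {
            // The template's Run() simply loops forever, keeping the
            // role instance alive so services on the VM keep running.
            public override void Run()
            {
                while (true)
                {
                    Thread.Sleep(10000);
                }
            }

            public override bool OnStart()
            {
                // The template raises the default outbound connection limit.
                ServicePointManager.DefaultConnectionLimit = 12;
                return base.OnStart();
            }
        }
    }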
    We will only be making changes to the Windows Azure project. Open ServiceDefinition.csdef and ensure that the vmsize is Small (remember the cost chart above). Import the "Connect" module; I am importing the Connect module because I need to join the Worker role VM to the Playpit domain.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WorkerRole name="WorkerRole1" vmsize="Small">
        <Imports>
          <Import moduleName="Diagnostics" />
          <Import moduleName="Connect" />
        </Imports>
      </WorkerRole>
    </ServiceDefinition>

    Go to ServiceConfiguration.Cloud.cscfg and note that settings with the key prefix 'Microsoft.WindowsAzure.Plugins.Connect.' have been added to the configuration file. This is because you imported the Connect module. See the config below.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="WorkerRole1">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    Let's go step by step and understand all the highlighted parameters and where you can find the values for them.

    osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The advantage of using osFamily="2" is that you get PowerShell 2.0 rather than PowerShell 1.0. In PowerShell 2.0 you can simply use "powershell -ExecutionPolicy Unrestricted ./myscript.ps1" and it will work, while in PowerShell 1.0 you first have to change the registry key by including the following in your command file before you can execute any PowerShell: "reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f". The other reason you might want to move to osFamily 2 is if you want IIS 7.5.

    ActivationToken – To enable communication between the on-premise machine and the Windows Azure Worker role VM, both need to have the same token. Log on to the Windows Azure Management Portal, click Connect, then click Get Activation Token; copy the activation token to the clipboard and paste it into the configuration file.
    Note – Later in the blog I'll be showing you how to install Connect on the on-premise machine.

    EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure Worker role VM to the domain.

    DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I extracted this information from 'Server Manager' and 'Active Directory Users and Computers'. I also created a new domain OU named 'CloudInstances' so that all my cloud instances joined to my domain show up there; this is optional. You can encrypt the DomainPassword – refer to the instructions here – or hold fire; I'll be covering that when I come to certificates and encryption in the coming section.

    Once you have filled all this information in, the configuration file should look something like below:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
      <Role name="WorkerRole1">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    Next we will enable the Remote Desktop module in ServiceDefinition.csdef. We could make the changes manually, or we could let a beautiful wizard help us; I prefer the second option. So right click on the Windows Azure project and choose Publish.

    Once you get the publish wizard, if you haven't already, you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key xml. Once you have done that, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'.

    As soon as you do that, you get another pop-up asking for the details of the user you will be logging in with (make sure you enter a reasonable expiry date; you do not want the user account to expire today). Notice the 'more information' tag at the bottom; click it to get access to the certificate section (see the screen shot in the original post).
    From the drop-down select the option to create a new certificate. In the pop-up window enter a friendly name for your certificate; in my case I entered 'WAC – Test Rig'. Click OK and a new certificate is created for you. Click the View button to see the certificate details. See the Thumbprint? This is the value that will go in the config file (very important). Now click the Copy to File button to export the certificate; we will need to import the certificate into the Windows Azure Management Portal later, so make sure you save it in a safe location.

    Click Finish and enter the details of the user you would like to create with permissions for remote desktop access. Once you have entered the details on the 'Remote Desktop Configuration' screen, click OK. On the Publish Windows Azure Wizard screen press Cancel, then Yes: Cancel because we don't want to publish the role just yet, and Yes because we want to save all the changes in the config file.

    Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder modules have been imported for you.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WorkerRole name="WorkerRole1" vmsize="Small">
        <Imports>
          <Import moduleName="Diagnostics" />
          <Import moduleName="Connect" />
          <Import moduleName="RemoteAccess" />
          <Import moduleName="RemoteForwarder" />
        </Imports>
      </WorkerRole>
    </ServiceDefinition>

    Now go to the ServiceConfiguration.Cloud.cscfg file and you will see a whole bunch of 'Microsoft.WindowsAzure.Plugins.RemoteAccess.' settings added for you.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
      <Role name="WorkerRole1">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4aeADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWSAKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccgvzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaMCp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" />
          <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" />
          <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
        </ConfigurationSettings>
        <Certificates>
          <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" />
        </Certificates>
      </Role>
    </ServiceConfiguration>

    Okay, let's look at them one at a time.

    Enabled – Yes, we would like to enable Remote Access.

    AccountUsername – This is the user name you entered while you were on the publish Windows Azure role screen, as detailed above.

    AccountEncryptedPassword – Try and decode that: the certificate is used to encrypt the password you specified for the user account. Remember earlier I said either use the instructions or wait and I'll be showing you encryption; the user account I am using for RDP has the same password as my domain password, so I can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.

    AccountExpiration – This is the expiration you specified in the wizard earlier; make sure your account does not expire today.

    RemoteForwarder – Check out the documentation; below is how I understand it. One role in an application that implements a remote desktop connection must import the RemoteForwarder module; the two modules work together to enable the remote desktop connections to role instances. If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.

    Certificate – Remember the certificate thumbprint from the wizard: the on-premise machine and the Windows Azure role machine that need to speak to each other must have the same thumbprint. More on that when we install the Windows Azure Connect endpoints on the on-premise machine.

    As I said earlier, in this blog post I'll be showing you the manual process, so I won't be scripting any start-up tasks to install the test agent or register the test agent with the TFS Server. I'll be showing you all that cool stuff in the next blog post; it's important to understand the manual side of it first, as it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected to the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server.

    II. Windows Azure Connect: Setting up Connect on VM – 2, i.e. TFS & Test Controller. Glad you made it so far. Now, to enable communication between the on-premise TFS/Test Controller and the Azure-ed Test Agent, we need to set up Connect on the on-premise side as well.
    We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on-premise machines. Let's have a look at how we can do this.

    Log on to VM – 2 running the TFS Server and Test Controller, then log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the screen from the original post's screen shot; if not, you will be asked to complete the subscription first.

    Click on Install Local Endpoints at the top left of the panel and you get a URL with a token id appended. Remember the token I showed you earlier: in theory, the token you get here should match the token you added to the Test Agent config file.

    Copy the URL to the clipboard and paste it into Internet Explorer (important: the installation at present only works out of IE, and you need to have cookies enabled in order to complete the installation). As stated in the pop-up, you can NOT download and run the software later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.

    Right click the Azure Connect icon, choose Diagnostics, and refer to this link for diagnostic detail terminology.

    NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray; a bit of binging with Google revealed that the Azure Connect icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled. You can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer to MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)

    Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint).

    Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy', you will notice that the disconnected status of the icon changes to ready for connection.

    III. Importing the Certificate into the Windows Azure Management Portal. Before going further, you need to import the certificate you created in Step I into the Windows Azure Management Portal. Log on to the portal, click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificates. Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot). You should now be able to see the imported certificate; make sure the thumbprint of the certificate matches the one you inserted in the config files.

    IV. Publish the Windows Azure Worker Role aka Test Agent. Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
    From the View menu select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of a new Worker Role.

    Once the deployment is complete you should be able to RDP (go to the Run prompt, type mstsc, and enter the machine name in the pop-up) into the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure: you will be able to log in with the user name and password you specified in the config file for the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (just enter the password unencrypted), then fix it or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

    So, log in to the Test Agent Worker Role VM with the Playpit domain administrator and verify that you can log in, the machine is connected to the domain, and the Connect service is successfully running. If yes, give yourself a pat on the back: you are 80% mission accomplished!

    Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, select the Test Rig group you created earlier and click Edit Group. In the 'Connect to' section, click Add to select the worker role you have just deployed. Also check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and the test agents, and between test agents. Click Save.

    Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

    V. Configuring VM – 3: Installing the Test Agent and Associating the Test Agent with the Controller. Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso'), extract the ISO and navigate to where you extracted it. In my case, I extracted the ISO to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click setup.exe. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

    Once you have successfully installed the Test Agent software you will need to configure it. Right click the test agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (UAC might block some things otherwise).

    In the run options, you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, I would never do that; I would create a separate test user service account for this purpose.
    But for the blog post, we are using the most powerful user so that no policies or restrictions block you.

    Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permission issue or a firewall blocking communication.

    And now the moment of truth! Go to VM – 2, open Visual Studio and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured.

    VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig. I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.

    Blog posts:
    - Part 1 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 2 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 3 – Performance Testing using Visual Studio 2010 Ultimate

    Videos:
    - Test Tools Configuration & Settings in Visual Studio
    - Why & How to Record Web Performance Tests in Visual Studio Ultimate
    - Goal Driven Load Testing using Visual Studio Ultimate

    Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure test rig: create a new test settings file, change the test execution method to 'Remote Execution' and select the test controller you configured the Worker Role Test Agent against, in our case VM – 2. So go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.

    Review and what's next? A quick recap of the benefits of running the Test Rig in the cloud and what I will be covering in the next blog post. And I would love to hear your feedback!

    Advantages:
    - Utilizing the power of Azure compute to run a heavy virtual user load.
    - Benefiting from the Azure flexibility: destroy Test Agents when not in use; it takes less than 25 minutes to spin up a new Test Agent.
    - Most important, testing network latency (network latency and speed of connection are two different things; network latency is usually very hard to test). By placing the Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.

    Next steps: The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software, and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • C# HashSet<T>

    - by Ben Griswold
    I hadn't done much (read: anything) with the C# generic HashSet until I recently needed to produce a distinct collection. As it turns out, HashSet<T> was the perfect tool. As the following snippet demonstrates, this collection type offers a lot:

    // Using HashSet<T>:
    // http://www.albahari.com/nutshell/ch07.aspx
    var letters = new HashSet<char>("the quick brown fox");

    Console.WriteLine(letters.Contains('t')); // true
    Console.WriteLine(letters.Contains('j')); // false

    foreach (char c in letters) Console.Write(c); // the quickbrownfx
    Console.WriteLine();

    letters = new HashSet<char>("the quick brown fox");
    letters.IntersectWith("aeiou");
    foreach (char c in letters) Console.Write(c); // euio
    Console.WriteLine();

    letters = new HashSet<char>("the quick brown fox");
    letters.ExceptWith("aeiou");
    foreach (char c in letters) Console.Write(c); // th qckbrwnfx
    Console.WriteLine();

    letters = new HashSet<char>("the quick brown fox");
    letters.SymmetricExceptWith("the lazy brown fox");
    foreach (char c in letters) Console.Write(c); // quicklazy
    Console.WriteLine();

    The MSDN documentation is a bit light on HashSet<T>, but if you search hard enough you can find some interesting information and benchmarks. But back to that distinct list I needed…

    // MSDN Add
    // http://msdn.microsoft.com/en-us/library/bb353005.aspx
    var employeeA = new Employee {Id = 1, Name = "Employee A"};
    var employeeB = new Employee {Id = 2, Name = "Employee B"};
    var employeeC = new Employee {Id = 3, Name = "Employee C"};
    var employeeD = new Employee {Id = 4, Name = "Employee D"};

    var naughty = new List<Employee> {employeeA};
    var nice = new List<Employee> {employeeB, employeeC};

    var employees = new HashSet<Employee>();
    naughty.ForEach(x => employees.Add(x));
    nice.ForEach(x => employees.Add(x));

    foreach (Employee e in employees)
        Console.WriteLine(e);
    // Returns
    // Employee A
    // Employee B
    // Employee C

    The Add method returns true on success and, you guessed it, false if the item couldn't be added to the collection. I'm using List<T>'s ForEach method to add all valid items to the employees HashSet. It works great. This is just a rough sample, but you may have noticed I'm using Employee, a reference type. Most samples demonstrate the power of the HashSet with a collection of integers, which is kind of cheating. With value types you don't have to worry about defining your own equality members; with reference types, you do.

    internal class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }

        public override string ToString()
        {
            return Name;
        }

        public bool Equals(Employee other)
        {
            if (ReferenceEquals(null, other)) return false;
            if (ReferenceEquals(this, other)) return true;
            return other.Id == Id;
        }

        public override bool Equals(object obj)
        {
            if (ReferenceEquals(null, obj)) return false;
            if (ReferenceEquals(this, obj)) return true;
            if (obj.GetType() != typeof (Employee)) return false;
            return Equals((Employee) obj);
        }

        public override int GetHashCode()
        {
            return Id;
        }

        public static bool operator ==(Employee left, Employee right)
        {
            return Equals(left, right);
        }

        public static bool operator !=(Employee left, Employee right)
        {
            return !Equals(left, right);
        }
    }

    Fortunately, with ReSharper, it's a snap: click on the class name, press ALT+INS, and follow the handy dialogues. That's it. Try out HashSet<T>. It's good stuff.

    Read the article

  • XNA Notes 009

    - by George Clingerman
    This past week the MVPs (myself included) were on Microsoft campus for the MVP Summit. So I apologize in advance if you did something cool or heard of something cool happening with XNA and XBLIGs and it's not in my notes. I did my best to stay on top of things, but honestly this community is fast and furious with what it's doing and creating. I really can't keep up and that's fantastic! But here's what I *did* notice while I was there on Microsoft campus (and I did make sure to point out several of these very cool happenings to the XNA team while I had their ears).

    Time Critical XNA News:
    The XNA team wants you to know that Dream Build Play registration is now open! http://blogs.msdn.com/b/xna/archive/2011/02/28/registration-now-open-for-dream-build-play-2011-challenge.aspx
    Join the XNA-UK crew on March 24, 2011 at the Microsoft Tech Days Conference http://xna-uk.net/blogs/darkgenesis/archive/2011/02/27/join-the-xna-uk-crew-at-the-microsoft-tech-days-conference-on-24th-march-2011.aspx

    XNA Team:
    Shawn Hargreaves shares one of the coolest things that's happened in the XNA community http://blogs.msdn.com/b/shawnhar/archive/2011/03/02/xbox-indies-pivot-view.aspx
    Nick Gravelyn continues his unique marketing/work prioritization strategy as he tries to get to 5,000 Pixel Man users before he makes Pixel Man 2 (and he's almost there!) http://nickgravelyn.com/pixelman2/

    XNA MVPs:
    A lot of the XNA MVPs were at the Microsoft MVP Summit 2011. Due to NDAs, most things can't be shared, but I'm sure if you're curious you could ask them about the general vibe and feeling they got from the team and the future of XNA/XBLIG and more.
    Catalin Zima and team release the free WP7 game Chickens Can Dream http://twitter.com/CatalinZima/statuses/41174062923390976 http://www.amusedsloth.com/2011/02/chickens-can-dream-is-live/
    Charles Humphrey (NemoKrad) posts his March talk source and PowerPoint http://xna-uk.net/blogs/randomchaos/archive/2011/03/04/march-2011-talk-post-processing-framework.aspx

    XNA Developers:
    Michael B. McLaughlin posts about ANTS Memory Profiler and creates a CheckMemoryAllocationGame sample (extremely useful if you're looking to see how much memory some operation allocates!) http://geekswithblogs.net/mikebmcl/archive/2011/02/28/ants-memory-profiler-7.0-review.aspx http://geekswithblogs.net/mikebmcl/archive/2011/03/01/checkmemoryallocationgame-sample.aspx
    Andy Schatz (2009 IGF winner for Monaco) talking XNA at GDC 2011 http://www.gamasutra.com/view/news/33313/GDC_2011_Andy_Schatz_Ill_Make_My_Last_Game_When_I_Die.php

    Xbox LIVE Indie Games (XBLIG):
    Clover: A Curious Tale by BinaryTweed is coming as a Deal of the Week during St. Patrick's Day http://majornelson.com/archive/2011/03/03/comingsoontothexboxlivemarketplacemarchthird.aspx
    Ska Studios is away at GDC but still very post-happy as always http://www.ska-studios.com/2011/03/02/swamped-picture-pack/ http://www.ska-studios.com/2011/02/28/the-february-showcase/ http://www.ska-studios.com/2011/02/25/good-morning-gato-51-smelling-the-roses/
    Just Press Start interviews Matthew Mikuszewski of Darkwind Media about Blocks Indie http://justpressstart.net/?p=516
    Gamergeddon Xbox Indie Game Round Up - February 27th http://www.gamergeddon.com/2011/02/27/xbox-indie-game-round-up-february-27th/ http://www.gamergeddon.com/category/xbox-360/indie-games/
    GameMarx does a round up of all the Xbox LIVE Indie Game podcasts that are currently available http://www.gamemarx.com/news/2011/02/27/xbox-live-indie-game-podcasts.aspx
    GameMarx episode 11 http://www.gamemarx.com/video/the-show/26/ep-11-february-25-2011.aspx
    In perhaps what I feel is the most exciting news I've heard all week, Michael C. Neel (ViNull of GameMarx fame) re-launched XboxIndies.com! http://www.gamemarx.com/news/2011/03/01/the-relaunch-of-xboxindies-com.aspx http://xboxindies.com/
    Armless Octopus shares a little of what they heard from Luke Schneider of Radiangames during his GDC 2011 talk http://www.armlessoctopus.com/2011/03/02/gdc-2011-luke-schneider-offers-insight-into-radiangames-success/
    VVGindiecast Episode 1 with guests Derek Strickland (Mr_Deeke), Kris Steele (Kriswd40 from FunInfused Games) and Dave Voyles (from armlessoctopus.com) http://vvgtv.com/2011/02/25/vvgindiecast-xblig-podcast/
    If you're doing Xbox LIVE Indie Game reviews, get in touch with XboxIndies.com to get into their aggregated feed http://forums.create.msdn.com/forums/p/76931/467189.aspx#467189
    B.U.T.T.O.N and Flotilla represented XNA very well at the Independent Games Festival (are there any more games entered that were created using XNA? Stand up and be heard!) http://www.igf.com/php-bin/entry2011.php?id=374
    Armless Octopus interview at GDC 2011 with Soulcaster creator Ian Stocker http://www.armlessoctopus.com/2011/03/04/gdc-2011-interview-with-soulcaster-creator-ian-stocker/
    MommysBestGames gets a nod in the DarkBasic newsletter, which features the Explosionade editor (just do a search for Explosionade to get to the interesting bits!) http://www.thegamecreators.com/pages/newsletters/newsletter_issue_98.html
    You may be hearing cries that FortressCraft (coming soon to XBLIG) is so wrong for stealing its idea from Minecraft. But did you know that the game Minecraft started from was an XNA game called Infiniminer? XNA is getting its fingers into EVERYTHING! http://www.minecraftwiki.net/wiki/Infiniminer

    XNA Development:
    TorqueX is NOT dead, thanks to the tremendous efforts of the XNA community working on the CEV (special thanks to @PinoEire for all his hard work on making that happen!) http://www.garagegames.com/community/blogs/view/20878 http://torquecev.com/
    Dave Henry has posted XNA 3.x adding the platformer starter kit to the network game state management sample on his new site http://twitter.com/#!/mort8088/status/43407715908853760 http://mort8088.com/2011/03/03/xna-3-x-adding-platformer-starter-kit-to-network-game-state-management/
    Mark Bamford releases XNAViewer 4.0, great for running XNA games inside of a Windows Form (for building level editors, etc.) http://twitter.com/#!/xzodia04/status/43466830412660736 http://xnaviewer.codeplex.com/
    Unit testing an XNA game with ReSharper and NUnit http://smnbss.wordpress.com/2011/02/28/planetx-unit-testing-an-xna-game-with-resharper-and-nunit-wp7-xbox-xna/
    XNA for Silverlight developers: Part 5 - Input (touch + gestures) http://ht.ly/1bxwUE
    Mike McLaughlin shares a link he stumbled across for those looking to understand vector and matrix math http://twitter.com/#!/mikebmcl/status/42587074725036032 http://chortle.ccsu.edu/VectorLessons/vectorIndex.html
    DigitalRune Resource Pooling in XNA (Part 1) http://www.digitalrune.com/Support/Blog/tabid/719/EntryId/84/DigitalRune-Helper-Library-Resource-Pooling-in-XNA-Part-1.aspx
    JohnK "bobthecbuilder" released a new SunBurn update that lowers the requirements for Windows games http://twitter.com/#!/bobthecbuilder/status/43457306578522112 http://www.synapsegaming.com/blogs/johnk/archive/2011/03/03/sunburn-update-windows-redistributable.aspx
    Quick update on the Indiefreaks Game Framework v0.4 development status http://indiefreaks.com/2011/03/04/quick-update-on-igf-v0-4-development/

    Read the article

  • Forcing an External Activation with Service Broker

    - by Davide Mauri
    In these last days I've been working quite a lot with Service Broker, a technology I'm really happy to work with, since it can give a lot of satisfaction. The scale-out solution one can easily build is simply astonishing. I'm helping a company to build a very scalable, yet almost inexpensive, invoicing system that has to be able to scale out using commodity hardware. To offload the work from the main server to satellite "compute nodes" (yes, I've borrowed this term from PDW) we're using Service Broker and the External Activator application available in the SQL Server Feature Pack. For those who are not used to working with SSB, External Activation is a feature that allows you to intercept the arrival of a message in a queue right from your application code: http://msdn.microsoft.com/en-us/library/ms171617.aspx (look for "Event-Based Activation"). To make life even easier, Microsoft released the External Activator application, which saves you from writing even this code: http://blogs.msdn.com/b/sql_service_broker/archive/tags/external+activator/ The External Activator application can be configured to execute your own application, so that each time a message (an invoice in my case) arrives in the target queue, the invoking application is executed and the invoice is calculated. The very nice feature of the External Activator is that it can automatically execute as many instances of the configured application as needed, in order to process as many messages as your system can handle. This also makes it a lot easier to create a scale-out solution, leaving the developer only a fraction of the problems that usually come with asynchronous programming. Developers are also shielded from Service Broker, since everything can be encapsulated in Stored Procedures, so that for them developing such a scale-out asynchronous solution is not much more complex than executing a bunch of Stored Procedures. Now, if everything works correctly, you don't have to bother with anything else: you put messages in the queue and your application, invoked by the External Activator, processes them. But what happens if, for some reason, your application fails to process the messages; for example, it crashes? The message is safe in the queue, so you just need to process it again. But your application is invoked by the External Activator application, so now the question is: how do you wake up that app? Service Broker will engage the activation process only if certain conditions are met: http://msdn.microsoft.com/en-us/library/ms171601.aspx But how can we invoke the activation process manually, without having to wait for another message to arrive (the arrival of a new message is one condition that can fire the activation process)?
    The "trick" is to do manually what the activation process does: send a system message to the queue in charge of handling External Activation messages:

    declare @conversationHandle uniqueidentifier;
    declare @n xml = N'
    <EVENT_INSTANCE>
      <EventType>QUEUE_ACTIVATION</EventType>
      <PostTime>' + CONVERT(CHAR(24), GETDATE(), 126) + '</PostTime>
      <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
      <ServerName>[your_server_name]</ServerName>
      <LoginName>[your_login_name]</LoginName>
      <UserName>[your_user_name]</UserName>
      <DatabaseName>[your_database_name]</DatabaseName>
      <SchemaName>[your_queue_schema_name]</SchemaName>
      <ObjectName>[your_queue_name]</ObjectName>
      <ObjectType>QUEUE</ObjectType>
    </EVENT_INSTANCE>';

    begin dialog conversation @conversationHandle
        from service [<your_initiator_service_name>]
        to service '<your_event_notification_service>'
        on contract [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
        with encryption = off, lifetime = 6000;

    send on conversation @conversationHandle
        message type [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

    end conversation @conversationHandle;

    That's it! Put the code in a Stored Procedure and you can add a button to your application that says "Force Queue Processing" (or something similar) in order to start the activation process whenever you need it (which should not happen too frequently, but it may happen).

    PS: I know that the "fire-and-forget" technique (ending the conversation without waiting for an answer) is not a best practice, but in this case I don't see how it can hurt, so I decided to stay very close to the KISS principle.

    Read the article

  • Taking advantage of Windows Azure CDN and Dynamic Pages in ASP.NET - Caching content from hosted services

    - by Shawn Cicoria
    With the updates to Windows Azure CDN announced this week [1], I wanted to help illustrate the capability with a working sample that serves up dynamic content from an ASP.NET site hosted in a WebRole. First, to get a good overview of the capability, you can read the Overview of the Windows Azure CDN [2] content on MSDN. When you set up the ability to cache content from a hosted service, the requirement is to provide a path to your role's DNS endpoint that ends in the path "/cdn", and you then map the CDN to that service. What Windows Azure CDN does is allow you to map requests through the CDN to your host; the CDN will then make a request to your host on your client's behalf. The requirement is still that your client, and any URLs that are to be serviced through the CDN, have to use the CDN DNS name and not your host, no different from what the CDN does for Blob storage. The following two URLs are samples of how the client needs to issue the requests.

    Windows Azure hosted service URL: http://myHostedService.cloudapp.net/cdn/music.aspx - for regular "dynamic" content
    Windows Azure CDN URL: http://<identifier>.vo.msecnd.net/music.aspx - for CDN "cacheable" content

    The first URL paths the request directly to your host in the Azure datacenter. The second URL paths the request through the CDN infrastructure, where the CDN makes the determination whether to request the content on behalf of the client from the Azure datacenter and your host on the /cdn path. The big advantage here is that you can apply logic to your content creation. What's important is emitting CDN-friendly headers that allow the CDN to request and re-request only when you designate, based upon its rules of "staleness" as described in the overview page.

    With IIS 7.5 (which you get when running under OS Family "2" in your service configuration) there is an underlying issue when the managed module "OutputCache" is enabled: in order to emit a good header for your content, you'll need to remove it, which in my sample helps provide CDN-friendly headers. By default, when the OutputCache managed module is loaded, if you use HttpResponse.CachePolicy to set the HTTP headers for "max-age" when the HttpCacheability is "Public", you will NOT get "max-age" emitted as part of the "Cache-Control:" header; instead, the OutputCache module will remove "max-age" and just emit "public". It works OK when cacheability is set to "private". To work around the issue and ensure that code like the following emits the full max-age along with the public option, you need to remove the module as shown:

    <system.webServer>
      <modules runAllManagedModulesForAllRequests="true">
        <remove name="OutputCache"/>
      </modules>
    </system.webServer>

    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetMaxAge(TimeSpan.FromMinutes(rv));

    In the attached solution, the way I approached it was to have a VirtualApplication under the root site with its own web.config. This VirtualApplication is the /cdn of the site, and when deployed to Azure as a Web Role it surfaces as a distinct IIS application, along with a separate AppDomain. The CDN sample is a simple Web Forms site whose /default landing page contains 3 IFrames to host:

    1. Content direct from the host @ http://xxxx.cloudapp.net/cdn
    2. Content via the CDN @ http://azxxx.vo.msecnd.net
    3. A simple list of recent requests, showing where each request came from.
    When you run the sample, the first time you hit the page both the host and the CDN cause two initial requests to hit the host. You won't see the first requests in the list because of timing, but if you refresh you'll see that the list shows the two initial requests: one sourced direct from the browser to the HOST, and one sourced via the CDN. The picture above shows the call-outs of each of those requests: green rows show requests coming direct to the HOST, yellow shows the CDN request. The IP addresses of the green items are direct from the client, whereas the CDN's is from the CDN data center. As you refresh the page (hit Ctrl+F5 to force a full refresh and avoid a "304 Not Modified") you'll see that the request to the HOST gets processed directly, but the request to the CDN endpoint is serviced from the CDN and doesn't incur any additional request back to the HOST.

    The following are the headers from the CDN response:

    (Status-Line)    HTTP/1.1 200 OK
    Age              13
    Cache-Control    public, max-age=300
    Connection       keep-alive
    Content-Length   6212
    Content-Type     image/jpeg; charset=utf-8
    Date             Fri, 11 Mar 2011 20:47:14 GMT
    Expires          Fri, 11 Mar 2011 20:52:01 GMT
    Last-Modified    Fri, 11 Mar 2011 20:47:02 GMT
    Server           Microsoft-IIS/7.5
    X-AspNet-Version 4.0.30319
    X-Powered-By     ASP.NET

    The following are the headers from the HOST response:

    (Status-Line)    HTTP/1.1 200 OK
    Cache-Control    public, max-age=300
    Content-Length   6189
    Content-Type     image/jpeg; charset=utf-8
    Date             Fri, 11 Mar 2011 20:47:15 GMT
    Last-Modified    Fri, 11 Mar 2011 20:47:02 GMT
    Server           Microsoft-IIS/7.5
    X-AspNet-Version 4.0.30319
    X-Powered-By     ASP.NET

    You can see that with the CDN request, the countdown (Age) starts for aging the content. The full sample is located here: CDNSampleSite.zip

    [1] http://blogs.msdn.com/b/windowsazure/archive/2011/03/09/now-available-updated-windows-azure-sdk-and-windows-azure-management-portal.aspx
    [2] http://msdn.microsoft.com/en-us/library/ff919703.aspx
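    For completeness, here is a hedged sketch of a Web Forms code-behind that emits the CDN-friendly headers discussed above (a minimal example, not the attached sample's actual code; the page name and the 5-minute lifetime are illustrative):

    using System;
    using System.Web;
    using System.Web.UI;

    // Hypothetical page served under the /cdn virtual application.
    public partial class Music : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Emits "Cache-Control: public, max-age=300", letting the CDN
            // serve the response for 5 minutes before re-requesting it
            // (requires the OutputCache module removal shown earlier).
            Response.Cache.SetCacheability(HttpCacheability.Public);
            Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
            Response.Cache.SetLastModified(DateTime.UtcNow);

            // Dynamic content: each origin hit produces a fresh timestamp.
            Response.Write("Generated at " + DateTime.UtcNow.ToString("o"));
        }
    }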

    Read the article

  • SQL Server service accounts and SPNs

    - by simonsabin
    Service Principal Names (SPNs) are a must for Kerberos authentication, which is a must when using SharePoint, Reporting Services or SQL Server in scenarios where you access one server that then needs to access another resource; this is called the double hop. The reason this is a complex problem is that the second hop has to be done with impersonation/delegation, and for this to work there needs to be a way for the security system to make sure that the service in the middle is allowed to impersonate you; after all, you are not giving the service your password. To do this you need to be using Kerberos.

    The following is my simple interpretation of how Kerberos works. I find the Kerberos documentation ridiculously complex, so the following might be slightly wrong, but I think it's close enough. Kerberos works on a ticketing system: the principle is that you get a security token from AD and then you can pass that to the service in the middle, which can then use that token to impersonate you. For that to work, AD has to be able to identify who is allowed to use the token, in this case the service account. But how do you as a client know what service account the service in the middle is configured with? The answer is SPNs. The SPN is the mapping between your logical connection and the service account. One type of SPN is for the DNS name of the server and the port, i.e. MySQL.mydomain.com and 1433. You can see how this maps to SQL Server on that server, but how does it map to the account? It can be done in two ways: either you can have a mapping defined in AD, or AD can use a default mapping (this is something I didn't know about). To map the SPN in AD you have to add the SPN to the user account; this is documented in the first link below, either directly or using a tool called SetSPN.

    You might say that is complex. Well, it is, and that's why SQL Server tries to do it for you: at start-up it tries to connect to AD and set the SPN on the account it is running as. Clearly that can only happen IF SQL is running as a domain account AND, importantly, it has permission to do so. By default a normal domain user account doesn't have the correct permission, and that is why so many people have this problem. If the account is a domain admin then it will have permission, but none of us run SQL using domain admin accounts, do we?

    You might also note that the SPN contains the port number (this isn't a requirement any more in SQL 2008, but I won't go into that), so if you set it manually and you are using dynamic ports (the default for a named instance), what do you do? Every time the port changes you need to change the SPN allocated to the account. That's why it's advised to let SQL Server register the SPN itself. You may also have thought: what happens if I change my service account? Won't that lead to two accounts with the same SPN? Possibly, and having two accounts with the same SPN is definitely a problem. Why? Because if there are two accounts, Kerberos can't identify the exact account that the service is running as; it could be either account, and so your security falls back to NTLM. SETSPN is useful for finding duplicate SPNs.

    Reading this you will probably be thinking: oh my goodness, this is really difficult. It is. However, I've found today, while investigating something else, that there is an easy option: use Network Service as your service account. Network Service is a special account that is tied to the computer, and it appears that Network Service has the rights in AD to set an SPN mapping for the computer account.
This then allows the SPN mapping to work. I believe this also works for the local system account. To get all the SPNs in your AD, run the following (it could produce a large file, so you might want to restrict it to a specific OU or CN):

ldifde -d "DC=<domain>" -l servicePrincipalName -F spn.txt

You will read in the links below that you need SQL Server to register the SPN; how this is done is covered in:
How to use Kerberos authentication in SQL Server - http://support.microsoft.com/kb/319723
Using Kerberos with SQL Server - http://blogs.msdn.com/sql_protocols/archive/2005/10/12/479871.aspx
Understanding Kerberos and NTLM authentication in SQL Server Connections - http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx

Summary: The only reason I personally know to use a domain account is when you can't get Kerberos to work and you want to do BULK INSERT or use another network service that requires access to a remote server. In this case you have to resort to using SQL authentication, and SQL Server uses its service account to access the remote service, and thus you need a domain account. You might need this if using some forms of replication. I've always found Kerberos awkward to set up and so fallen back to this domain account approach. So, in summary, to get Kerberos to work, try using the Network Service or Local System accounts. For a great post from Adam Saxton of the SQL Server support team go to http://blogs.msdn.com/psssql/archive/2010/03/09/what-spn-do-i-use-and-how-does-it-get-there.aspx 
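A quick way to verify which path a given connection actually took is to ask SQL Server itself. Below is a minimal C# sketch; the connection string is a placeholder, and the sys.dm_exec_connections DMV it queries requires SQL Server 2005 or later. If auth_scheme comes back as NTLM rather than KERBEROS, the SPN registration still isn't right.

using System;
using System.Data.SqlClient;

class KerberosCheck
{
    static void Main()
    {
        // Placeholder connection string - point this at the instance you are testing.
        const string connectionString =
            "Data Source=MySQL.mydomain.com;Initial Catalog=master;Integrated Security=SSPI;";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT auth_scheme FROM sys.dm_exec_connections WHERE session_id = @@SPID",
            connection))
        {
            connection.Open();
            // Prints KERBEROS when the SPN is registered correctly, NTLM otherwise.
            Console.WriteLine("Authentication scheme: " + command.ExecuteScalar());
        }
    }
}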

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx
Description: Organizations see the need for computing infrastructures that they can "rent" or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment:
- Private data (do not want to send or store sensitive data off-site)
- High dollar investment in current infrastructure
- Applications currently running well, but which may need additional periodic capacity
- Current applications not designed in a stateless fashion
In these situations, a "hybrid" approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping the majority of the computing function local to the organization while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A "hybrid" architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.
Implementation: There are multiple methods for implementing a hybrid architecture, on a spectrum from very little interaction from the local infrastructure to Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed.
1. Client-Centric Hybrid Patterns
In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The "client" in this case might be a web-based codeset actually stored on another system (which acts as the client, the user's device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it's the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client, such as in the web scenario.
2. System-Centric Hybrid Patterns
Another approach is to create a distributed architecture by turning on-site systems into "services" that can be called from Windows Azure using the Service Bus or the Access Control Service (ACS) capabilities. In this pattern you move the "client" interface out of a series of in-process client applications and into the server application logic. If you do not wish to change the application itself, you can "layer" the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Description Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and it has the advantage of de-coupling your computing architecture. If each system offers a "service" of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract. 
There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application.
Connection resiliency - Applications on-premise normally enjoy low latency and good connection properties, something you're not always guaranteed in a distributed, hybrid application. Whether the client is centralized or distributed, the code should be able to handle extended retry logic (a minimal sketch follows below).
Authorization and Access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Access Control Service (ACS) feature of Windows Azure AppFabric to make the Azure application aware of ADFS as an identity provider. However, a claims-based authentication structure is often a superior choice.
Consistency and Concurrency - When you have a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a Service Architecture, you need to plan for sequential message handling and lifecycle.
Resources:
How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
General Architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx 
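To make the connection resiliency point concrete, here is a minimal C# retry sketch; the retry count, the back-off delay, and which exceptions count as transient are all assumptions you would tune for your own workload:

using System;
using System.Threading;

static class ResilientCall
{
    // Retries an operation a few times before giving up - a common pattern when
    // part of a workload runs across a hybrid, higher-latency connection.
    public static T Execute<T>(Func<T> operation, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (TimeoutException)
            {
                if (attempt >= maxAttempts) throw;
                // Simple linear back-off; production code would also test for other
                // transient failures and likely use an exponential delay instead.
                Thread.Sleep(TimeSpan.FromSeconds(attempt));
            }
        }
    }
}

Calling it is a one-liner, for example: var result = ResilientCall.Execute(() => CallRemoteService(), 3); where CallRemoteService is a stand-in for whatever remote call your hybrid application makes.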

    Read the article

  • Parallelism in .NET – Part 17, Think Continuations, not Callbacks

    - by Reed
In traditional asynchronous programming, we'd often use a callback to handle notification of a background task's completion.  The Task class in the Task Parallel Library introduces a cleaner alternative to the traditional callback: continuation tasks. Asynchronous programming methods typically required callback functions.  For example, MSDN's Asynchronous Delegates Programming Sample shows a class that factorizes a number.  The original method in the example has the following signature:

public static bool Factorize(int number, ref int primefactor1, ref int primefactor2)
{
    //...
}

However, calling this is quite "tricky", even if we modernize the sample to use lambda expressions via C# 3.0.  Normally, we could call this method like so:

int primeFactor1 = 0;
int primeFactor2 = 0;
bool answer = Factorize(10298312, ref primeFactor1, ref primeFactor2);
Console.WriteLine("{0}/{1} [Succeeded {2}]", primeFactor1, primeFactor2, answer);

If we want to make this operation run in the background, and report to the console via a callback, things get trickier.  First, we need a delegate definition:

public delegate bool AsyncFactorCaller(int number, ref int primefactor1, ref int primefactor2);

Then we need to use BeginInvoke to run this method asynchronously:

int primeFactor1 = 0;
int primeFactor2 = 0;
AsyncFactorCaller caller = new AsyncFactorCaller(Factorize);
caller.BeginInvoke(10298312, ref primeFactor1, ref primeFactor2,
    result =>
    {
        int factor1 = 0;
        int factor2 = 0;
        bool answer = caller.EndInvoke(ref factor1, ref factor2, result);
        Console.WriteLine("{0}/{1} [Succeeded {2}]", factor1, factor2, answer);
    }, null);

This works, but is quite difficult to understand from a conceptual standpoint.  To combat this, the framework added the Event-based Asynchronous Pattern, but it isn't much easier to understand or author. Using .NET 4's new Task<T> class and a continuation, we can dramatically simplify the implementation of the above code, as well as make it much more understandable.  We do this via the Task.ContinueWith method.  This method will schedule a new Task upon completion of the original task, and provide the original Task (including its Result if it's a Task<T>) as an argument.  Using Task, we can eliminate the delegate, and rewrite this code like so:

var background = Task.Factory.StartNew(() =>
{
    int primeFactor1 = 0;
    int primeFactor2 = 0;
    bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2);
    return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 };
});
background.ContinueWith(task =>
    Console.WriteLine("{0}/{1} [Succeeded {2}]",
        task.Result.Factor1, task.Result.Factor2, task.Result.Result));

This is much simpler to understand, in my opinion.  Here, we're explicitly asking to start a new task, then continue that task with a second task that consumes its result.  
In our case, our method used ref parameters (this was from the MSDN sample), so there is a little bit of extra boilerplate involved, but the code is at least easy to understand. That being said, this isn't dramatically shorter when compared with our C# 3 port of the MSDN code above.  However, if we were to extend our requirements a bit, we can start to see more advantages to the Task-based approach.  For example, suppose we need to report the results in a user interface control instead of reporting them to the console.  This would be a common operation, but now we have to think about marshaling our calls back to the user interface.  This is probably going to require calling Control.Invoke or Dispatcher.Invoke within our callback, forcing us to specify a delegate within the delegate.  The maintainability and ease of understanding drops.  However, just as a standard Task can be created with a TaskScheduler that uses the UI synchronization context, so too can we continue a task with a specific context.  There are Task.ContinueWith method overloads which allow you to provide a TaskScheduler.  This means you can schedule the continuation to run on the UI thread by simply doing:

Task.Factory.StartNew(() =>
{
    int primeFactor1 = 0;
    int primeFactor2 = 0;
    bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2);
    return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 };
}).ContinueWith(task =>
    textBox1.Text = string.Format("{0}/{1} [Succeeded {2}]",
        task.Result.Factor1, task.Result.Factor2, task.Result.Result),
    TaskScheduler.FromCurrentSynchronizationContext());

This is far more understandable than the alternative.  By using Task.ContinueWith in conjunction with TaskScheduler.FromCurrentSynchronizationContext(), we get a simple way to push any work onto a background thread and update the user interface on the proper UI thread.  This technique works with Windows Presentation Foundation as well as Windows Forms, with no change in methodology.

    Read the article

  • XNA Notes 005

    - by George Clingerman
Another week and another crazy amount of activity going on in the XNA community. I'm fairly certain I missed over half of it. In fact, if I am missing things, make sure to email me and I'll try and make sure I catch it next week! ([email protected]) Also, if you've got any advice, things you like/don't like about the way these XNA Notes are going, let me know. I always appreciate feedback (currently spammers are leaving me the nicest comments so you guys have work to do!). Without further ado, here are this week's notes!
Time Critical XNA News
The XNA Team Blog reminds us that February 7th is the last day to submit XNA 3.1 games to peer review! http://blogs.msdn.com/b/xna/archive/2011/01/31/7-days-left-to-submit-xna-gs-3-1-games-on-app-hub.aspx
XNA MVPs
Chris Williams kicks off the marketing campaign for our book http://geekswithblogs.net/cwilliams/archive/2011/01/28/143680.aspx
Catalin Zima posts the comparison cheat sheet for why Angry Birds is different than Chickens Can't Fly http://www.amusedsloth.com/2011/02/comparison-cheat-sheet-for-chickens-cant-fly-and-angry-birds/
Jim Perry congratulates the developers selected by Game Developer Magazine for Best Xbox LIVE Indie Games of 2010 http://machxgames.com/blog/?p=24
@NemoKrad posts his XNAKUUG talks for all to enjoy http://twitter.com/NemoKrad/statuses/33142362502864896 http://xna-uk.net/blogs/randomchaos/archive/2011/02/03/xblig-uk-2011-january-amp-february-talk.aspx
George (that's me!) preps for his XNA talk coming up on the 8th http://twitter.com/clingermangw/statuses/32669550554124288 http://www.portlandsilverlight.net/Meetings/Details/15
XNA Developers
FireFly posts the last tutorial in his XNA Tower Defense tutorial series http://forums.create.msdn.com/forums/p/26442/451460.aspx#451460 http://xnatd.blogspot.com/2011/01/tutorial-14-polishing-game.html
@fredericmy posts the main differences when porting a game from Windows Phone 7 to Xbox 360 http://fairyengine.blogspot.com/2011/01/main-differences-when-porting-game-from.html
@ElementCy creates a pretty rad video of a MineCraft-type terrain created using XNA http://www.youtube.com/watch?v=Waw1f7wnl9I
Andrew Russel gets the first XNA badge on gamedev.stackexchange http://twitter.com/_AndrewRussell/statuses/32322877004972032 http://gamedev.stackexchange.com/badges?tab=tags
And his funding for ExEn has passed $7000, only $3000 to go http://twitter.com/_AndrewRussell/statuses/33042412804771840
Subodh Pushpak blogs about his Windows Phone 7 XNA talk http://geekswithblogs.net/subodhnpushpak/archive/2011/02/01/windows-phone-7-silverlight--xna-development-talk.aspx
Slyprid releases the latest version of Transmute and needs more people to test http://twitter.com/slyprid/statuses/32452488418299904 http://forgottenstarstudios.com/
SpynDoctorGames celebrates the 1 year anniversary of Your Doodles Are Bugged! Congrats! http://twitter.com/SpynDoctorGames/statuses/32511689068908544
Noogy (creator of Dust the Elysian Tail) prepares his conversion to XNA 4.0 http://twitter.com/NoogyTweet/statuses/32522008449253376
@philippedasilva posts about the Indiefreaks Game Framework v0.2.0.0 input management system http://twitter.com/philippedasilva/statuses/32763393957957632 http://indiefreaks.com/2011/02/02/behind-smart-input-system-feature/
Mommy's Best Games debates what to do about High Scores with their new update http://mommysbest.blogspot.com/2011/02/high-score-shake-up.html
@BinaryTweedDeej wants to know if there's anything the community needs to make XNA games for the PC. Give him some feedback! 
http://twitter.com/BinaryTweedDeej/status/32895453863354368
@mikebmcl promises to write us all a book (I can't wait to read it!) http://twitter.com/mikebmcl/statuses/33206499102687233
@werezompire is going to live, LIVE, thanks to all the generosity and support from the community! http://twitter.com/werezompire/statuses/32840147644977153
Xbox LIVE Indie Games (XBLIG)
Making money in Xbox 360 indie game development. Is it possible? http://www.bitmob.com/articles/making-money-in-xbox-360-indie-game-development-is-it-possible
@AlejandroDaJ posts some thoughts about the bitmob article http://twitter.com/AlejandroDaJ/statuses/31068552165330944 http://www.apathyworks.com/blog/view.php?id=00215
Kobun gets my respect as an XBLIG champion. I'm not sure who Kobun is, but if you've ever read through the comment sections any time Kotaku writes about XBLIGs, you'll see a lot of confusion and disinformation in there. Kobun has been waging a secret war battling that lack of knowledge and he does it well. Also he's running a pretty kick ass site for Xbox LIVE Indie Game reviews http://xboxindies.teamkobun.com/
@radiangames releases his last Xbox LIVE Indie Game...for now http://bit.ly/gMK6lE
Playing Avaglide with the Kinect controller http://www.youtube.com/watch?v=UqAYbHww53o http://www.joystiq.com/2011/01/30/kinect-hacks-take-to-the-skies-with-avaglide/
Luke Schneider of Radiangames interviewed in Edge magazine http://www.next-gen.biz/features/radiangames-venture
Digital Quarters posts thoughts on why XBLIG's online requirement kills certain genres http://digitalquarters.blogspot.com/2011/02/thoughts-why-xbligs-online-requirement.html
Mommy's Best Games shares the news that several XBLIGs were featured in the March 2011 issue of Famitsu 360 http://forums.create.msdn.com/forums/p/33455/451487.aspx#451487
NaviFairy continues with his Indie-Game-A-Day http://gaygamer.net/2011/02/indie_game_a_day_epic_dungeon.html http://gaygamer.net/2011/02/indie_game_a_day_break_limit_r.html and more every day...that's kind of the point! Keep your eye on this series!
VVGTV continues with its awesome reviews/promotions for XBLIGs http://vvgtv.com/ http://vvgtv.com/2011/02/03/iredia-atrams-secret-xblig-review-2/ http://vvgtv.com/2011/02/02/poopocalypse-coming-soon-to-xblig/ ...and even more, you get the point.
Magicka is an indie game doing really well on Steam AND it's made using XNA http://www.magickagame.com/ http://twitter.com/Magickagame/statuses/32712762580799488
GameMarx reviews Antipole http://www.gamemarx.com/reviews/73/antipole-is-vvvvvvery-good.aspx
Armless Octopus reviews Alpha Squad http://www.armlessoctopus.com/2011/01/28/xbox-indie-review-alpha-squad/
An interesting article about Kodu that Jim Perry found http://twitter.com/MachXGames/statuses/32848044105924608 http://www.develop-online.net/news/36915/10-year-old-Jordan-makes-games-The-UK-needs-more-like-her
XNA Game Development
Sgt. Conker posts about the Natur beta, a new book, and whether you can make money with XBLIG http://www.sgtconker.com/ http://www.sgtconker.com/2011/01/a-new-book-on-the-block-and-a-new-natur-beta/ http://www.sgtconker.com/2011/01/making-money-in-xbox-360-indie-game-development-is-it-possible/
Tips for setting up SVN http://bit.ly/fKxgFh
@bsimser found tons of royalty-free music and sound effects for your XNA games http://twitter.com/bsimser/statuses/31426632933711872
Post on the new features in the next Sunburn Editor http://www.synapsegaming.com/blogs/fivesidedbarrel/archive/2011/01/28/new-editor-features-prefabs-components-and-more.aspx
@jasons_novaleaf posts source code for light pre-pass optimizations for #xna http://twitter.com/jasons_novaleaf/statuses/33348855403642880 http://jcoluna.wordpress.com/2011/02/01/xna-4-0-light-pre-pass-optimization-round-one/
I've been learning about doing an A.I. for turn-based games and this article was a great resource. http://www.gamasutra.com/view/feature/1535/designing_ai_algorithms_for_.php?print=1

    Read the article

  • Forcing an External Activation with Service Broker

    - by Davide Mauri
In these last days I've been working quite a lot with Service Broker, a technology I'm really happy to work with, since it can give a lot of satisfaction. The scale-out solution one can easily build is simply astonishing. I'm helping a company to build a very scalable and yet almost inexpensive invoicing system that has to be able to scale out using commodity hardware. To offload the work from the main server to satellite "compute nodes" (yes, I've borrowed this term from PDW) we're using Service Broker and the External Activator application available in the SQL Server Feature Pack. For those who are not used to working with SSB, External Activation is a feature that allows you to intercept the arrival of a message in a queue right from your application code. http://msdn.microsoft.com/en-us/library/ms171617.aspx (Look for "Event-Based Activation") To make life even easier, Microsoft released the External Activator application, which saves you from writing even this code. http://blogs.msdn.com/b/sql_service_broker/archive/tags/external+activator/ The External Activator application can be configured to execute your own application so that each time a message (an invoice in my case) arrives in the target queue, the invoking application is executed and the invoice is calculated. A very nice feature of External Activator is that it can automatically execute as many instances of the configured application as needed, in order to process as many messages as your system can handle. This also makes it a lot easier to create a scale-out solution, leaving to the developer only a fraction of the problems that usually come with asynchronous programming. Developers are also shielded from Service Broker, since everything can be encapsulated in stored procedures, so that, for them, developing such a scale-out asynchronous solution is not much more complex than just executing a bunch of stored procedures. Now, if everything works correctly, you don't have to bother with anything else. You put messages in the queue and your application, invoked by the External Activator, processes them. But what happens if for some reason your application fails to process the messages? For example, what if it crashes? The message is safe in the queue, so you just need to process it again. But your application is invoked by the External Activator application, so now the question is: how do you wake up that app? Service Broker will engage the activation process only if certain conditions are met: http://msdn.microsoft.com/en-us/library/ms171601.aspx But how can we invoke the activation process manually, without having to wait for another message to arrive (the arrival of a new message is a condition that can fire the activation process)? 
The "trick" is to do manually what the activation process does: send a system message to a queue in charge of handling External Activation messages:

declare @conversationHandle uniqueidentifier;
declare @n xml = N'
<EVENT_INSTANCE>
  <EventType>QUEUE_ACTIVATION</EventType>
  <PostTime>' + CONVERT(CHAR(24),GETDATE(),126) + '</PostTime>
  <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
  <ServerName>[your_server_name]</ServerName>
  <LoginName>[your_login_name]</LoginName>
  <UserName>[your_user_name]</UserName>
  <DatabaseName>[your_database_name]</DatabaseName>
  <SchemaName>[your_queue_schema_name]</SchemaName>
  <ObjectName>[your_queue_name]</ObjectName>
  <ObjectType>QUEUE</ObjectType>
</EVENT_INSTANCE>'

begin dialog conversation @conversationHandle
    from service [<your_initiator_service_name>]
    to service '<your_event_notification_service>'
    on contract [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
    with encryption = off, lifetime = 6000;

send on conversation @conversationHandle
    message type [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

end conversation @conversationHandle;

That's it! Put the code in a stored procedure and you can add a button to your application that says "Force Queue Processing" (or something similar) in order to start the activation process whenever you need it (which should not be too frequently, but it may happen). PS: I know that the "fire-and-forget" (ending the conversation without waiting for an answer) technique is not a best practice, but in this case I don't see how it can hurt, so I decided to stay very close to the KISS principle.
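If you wrap the code above in a stored procedure (the name dbo.ForceQueueActivation below is just an illustrative placeholder, as is the connection string), the "Force Queue Processing" button handler reduces to a plain ADO.NET call. A minimal C# sketch:

using System.Data;
using System.Data.SqlClient;

static class QueueActivation
{
    public static void ForceQueueProcessing(string connectionString)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("dbo.ForceQueueActivation", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            command.ExecuteNonQuery(); // sends the QUEUE_ACTIVATION event notification
        }
    }
}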

    Read the article

  • Execute TSQL statement with ExecuteStoreQuery in entity framework 4.0

    - by Jalpesh P. Vadgama
I was playing with Entity Framework in recent days and I was looking for a way to execute a T-SQL statement through it. I found one great way to do that, with the Entity Framework 'ExecuteStoreQuery' method. It executes a T-SQL statement against the data source for a given Entity Framework context and returns a strongly typed result. You can find more information about ExecuteStoreQuery at the following link: http://msdn.microsoft.com/en-us/library/dd487208.aspx So let's examine how it works. First, let's create a table against which we are going to execute the T-SQL statement. I have added a SQL Express database as follows. Once we are done adding a database, let's add a table called Client as follows. Here you can see the Client table above is very simple. There are only two fields, ClientId and ClientName, where ClientId is the primary key and ClientName is the field where we are going to store the client name. Now it's time to add some data to the table. I have added some test data as follows. Now it's time to add the Entity Framework model classes. So right-click the project -> Add new item and select ADO.NET Entity Data Model as follows. After clicking the Add button a wizard will start; it will ask whether we want to generate model classes from the database or not, but since we already have our Client table ready, I have selected 'Generate from database' as follows. As we proceed further in the wizard, we will be presented with a screen where we can select our table, as follows. Once you click Finish, it will create the model classes for us. Now we need a GridView control to display the data. So in the Default.aspx page I have added a grid control as follows:

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="EntityFramework._Default" %>
<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
    <h2>
        Welcome to ASP.NET!
    </h2>
    <p>
        To learn more about ASP.NET visit <a href="http://www.asp.net" title="ASP.NET Website">www.asp.net</a>.
    </p>
    <p>
        You can also find <a href="http://go.microsoft.com/fwlink/?LinkID=152368&amp;clcid=0x409" title="MSDN ASP.NET Docs">documentation on ASP.NET at MSDN</a>.
        <asp:GridView ID="grdClient" runat="server">
        </asp:GridView>
    </p>
</asp:Content>

Once we are done adding the GridView, it's time to write the server-side code. I have written the following code in the Page_Load event of the Default.aspx page:

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        using (var context = new EntityFramework.TestEntities())
        {
            ObjectResult<Client> result = context.ExecuteStoreQuery<Client>("Select * from Client");
            grdClient.DataSource = result;
            grdClient.DataBind();
        }
    }
}

Here in the above code you can see that I have created an object of our entity model, and then with the help of the ExecuteStoreQuery method I have executed a simple SELECT T-SQL statement, which returns an object result. I have bound that object result to the GridView to display the data. So now we are done with the coding; let's run the application in the browser. The output is as expected. That's it. Hope you like it. Stay tuned for more.. Till then, happy programming.
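One follow-up worth noting: ExecuteStoreQuery also accepts parameters, so you are not limited to constant T-SQL strings. A small sketch reusing the Client entity from above (in EF 4 the numbered placeholders are bound as DbParameters rather than concatenated into the SQL, which keeps the query safe from SQL injection):

using (var context = new EntityFramework.TestEntities())
{
    // {0} is substituted as a parameter, String.Format-style.
    ObjectResult<Client> result = context.ExecuteStoreQuery<Client>(
        "SELECT * FROM Client WHERE ClientId = {0}", 1);
    grdClient.DataSource = result;
    grdClient.DataBind();
}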

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
The first post was the high-level overview. Now it is time for the details of what was done to the existing CMMI project, which was based on CMMI v4.2. The first step was to go into Visual Studio, then to the Team Project Collection Settings, and then to the Process Template Manager. Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a location I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will refer to the path as <toolpath>; however, the actual path to reference in a 64-bit environment is "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE". As I mentioned in the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs. If you did not apply any changes to the names and such, then you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse.

Now, the work needed with the witadmin tool. That includes uploading the structures that differ from v4.2 to v5.0. There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of "Area ID", whereas TFS 2008 would have had it as "AreaID". So, this will need to be corrected. Some posts will have you go through this after the errors pop up; I would recommend doing it prior to executing the importwitd process.

witadmin listfields /collection:<path to collection> > c:\ListFields.txt

Review the following fields: for AreaID, review the Name property and validate whether it states "AreaID"; if so, you will need to rename the Name field to reflect "Area ID". ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID would be the other fields to check. To correct the issue, execute the following:

witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count"

Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID. Once this is done, proceed with the commands below.

witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml"
witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml"
witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"

Modifications to the Bug Definition: the first step is to export the existing definition.

witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

Make modifications to the recently exported MyBug.xml file.  
Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask Once the changes are done, proceed with the import command:

witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

Repeat the process for the Scenario or Requirement Type Definition:

witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

Make modifications to the recently exported MyRequirement.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask Once the changes are done, proceed with the import command:

witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

Provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping

tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>

    Read the article

  • Null Values And The T-SQL IN Operator

    - by Jesse
I came across some unexpected behavior while troubleshooting a failing test the other day that took me long enough to figure out that I thought it was worth sharing here. I finally traced the failing test back to a SELECT statement in a stored procedure that was using the IN T-SQL operator to exclude a certain set of values. Here's a very simple example table to illustrate the issue:

Customers
  CustomerId INT NOT NULL PRIMARY KEY
  CustomerName NVARCHAR(100) NOT NULL
  SalesRegionId INT NULL

The 'SalesRegionId' column contains a number representing the sales region that the customer belongs to. This column is nullable because new customers get created all the time, but assigning them to sales regions is a process that is handled by a regional manager on a periodic basis. For the purposes of this example, the Customers table currently has the following rows:

CustomerId | CustomerName | SalesRegionId
1          | Customer A   | 1
2          | Customer B   | NULL
3          | Customer C   | 4
4          | Customer D   | 2
5          | Customer E   | 3

How could we write a query against this table for all customers that are NOT in sales regions 2 or 4? You might try something like this:

SELECT
    CustomerId,
    CustomerName,
    SalesRegionId
FROM Customers
WHERE SalesRegionId NOT IN (2,4)

Will this work? In short, no; at least not in the way that you might expect. Here's what this query will return given the example data we're working with:

CustomerId | CustomerName | SalesRegionId
1          | Customer A   | 1
5          | Customer E   | 3

I was expecting that this query would also return 'Customer B', since that customer has a NULL SalesRegionId. In my mind, having a customer with no sales region should be included in a set of customers that are not in sales regions 2 or 4. When I first started troubleshooting my issue I made note of the fact that this query should probably be re-written without the NOT IN clause, but I didn't suspect that the NOT IN clause was actually the source of the issue. This particular query was only one minor piece in a much larger process that was being exercised via an automated integration test, and I simply made a poor assumption that the NOT IN would work the way that I thought it should. So why doesn't this work the way that I thought it should? From the MSDN documentation on the T-SQL IN operator:

If the value of test_expression is equal to any value returned by subquery or is equal to any expression from the comma-separated list, the result value is TRUE; otherwise, the result value is FALSE. Using NOT IN negates the subquery value or expression.

The key phrase out of that quote is, "... is equal to any expression from the comma-separated list...". The NULL SalesRegionId isn't included in the NOT IN because of how NULL values are handled in equality comparisons. From the MSDN documentation on ANSI_NULLS:

The SQL-92 standard requires that an equals (=) or not equal to (<>) comparison against a null value evaluates to FALSE. When SET ANSI_NULLS is ON, a SELECT statement using WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement using WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name.

In fact, the MSDN documentation on the IN operator includes the following blurb about using NULL values in sub-queries or expressions that are used with the IN operator:

Any null values returned by subquery or expression that are compared to test_expression using IN or NOT IN return UNKNOWN. Using null values in together with IN or NOT IN can produce unexpected results. 
If I were to include a 'SET ANSI_NULLS OFF' command right above my SELECT statement I would get 'Customer B' returned in the results, but that's definitely not the right way to deal with this. We could re-write the query to explicitly include the NULL value in the WHERE clause:

SELECT
    CustomerId,
    CustomerName,
    SalesRegionId
FROM Customers
WHERE (SalesRegionId NOT IN (2,4) OR SalesRegionId IS NULL)

This query works and properly includes 'Customer B' in the results, but I ultimately opted to re-write the query using a LEFT OUTER JOIN against a table variable containing all of the values that I wanted to exclude because, in my case, there could potentially be several hundred values to be excluded. If we were to apply the same refactoring to our simple sales region example we'd end up with:

DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT)
INSERT @regionsToIgnore VALUES (2),(4)

SELECT
    c.CustomerId,
    c.CustomerName,
    c.SalesRegionId
FROM Customers c
LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId
WHERE r.IgnoredRegionId IS NULL

By performing a LEFT OUTER JOIN from Customers to the @regionsToIgnore table variable we can simply exclude any rows where the IgnoredRegionId is null, as those represent customers that DO NOT appear in the ignored regions list. This approach will likely perform better if the number of sales regions to ignore gets very large, and it also will correctly include any customers that do not yet have a sales region.

    Read the article

  • CodePlex Daily Summary for Tuesday, November 23, 2010

CodePlex Daily Summary for Tuesday, November 23, 2010

Popular Releases

Silverlight and WP7 Exception Handling and Logging building block: first version: Zipped full source code.

Wii Backup Fusion: Wii Backup Fusion 0.8.2 Beta: New in this release: - Update titles after language change - Tool tips for name/title - Transfer DVD to a specific image file - Download titles from wiitdb.com - Save Settings geometry - Titles and Cover language global in settings - Convert Files (images) to another format - Format WBFS partition - Create WBFS file - WIT path configurable in settings - Save last path in Files/Load - Sort game lists - Save column width - Sequenz of columns changeable - Set indicated columns in settings - Bus...

TweetSharp: TweetSharp v2.0.0.0 - Preview 2: Preview 2 changes: Introduced support for .NET Framework 2.0. Supported platforms: .NET 2.0, .NET 3.5 SP1, .NET 4.0 Client Profile, Windows Phone 7. Twitter API coverage: accounts, blocks, direct messages, favorites, friendships, notifications, saved searches, timelines, tweets, users. Preview 1 changes: Introduced support for OAuth Echo. Supported platforms: .NET 4.0, Windows Phone 7. Twitter API coverage: timelines, tweets. Version 2.0 Overview / Roadmap: Major rewrite for simplic...

SQL Monitor: SQL Monitor 1.3: 1. change sys.sysprocesses to DMV: http://msdn.microsoft.com/en-us/library/ms187997.aspx select * from sys.dm_exec_connections select * from sys.dm_exec_requests select * from sys.dm_exec_sessions 2. adjust columns to fit without scrolling

MiniTwitter: 1.60: MiniTwitter 1.60 ???? ?? Twitter ?????????????????????????????? ??????????????????、?????????????????? ?????????????????????????????? ???????????????????????????????????????

Minemapper: Minemapper v0.1.1: Fixed 'Generate entire world image', wasn't working. Improved Java detection for biome support. Biomes button is automatically unchecked if Java cannot be found in either the path environment variable or the Windows Registry. Fixed problem where Height + and - buttons would change the height more than one each click. Added keyboard shortcuts for navigation controls. You can now press and hold navigation buttons to continuously pan/zoom.

VFPX: FoxBarcode v.0.11: FoxBarcode v.0.11 - Released 2010.11.22. FoxBarcode is a 100% Visual FoxPro class that provides a tool for generating images with different bar code symbologies to be used in VFP forms and reports, or exported to other applications. Its use and distribution is free for the whole Visual FoxPro community. What is new? Added a third parameter to the BarcodeImage() method. Fixed some minor bugs. History: FoxBarcode v.0.10 - Released 2010.11.19 - 85 Downloads. Project page: FoxBarcode

DotNetAge - a lightweight MVC jQuery CMS: DotNetAge 1.1.0.5: What is new in DotNetAge 1.1.0.5? Document Library features and template added. Resolved issues with templates. Improved publishing service performance. OPML support added. What is new in DotNetAge 1.1? D.N.A Core updates: improved runtime performance, more stable. The DNA core objects model added. Personalization features added that allow users to create a personal website, manage their resources, and store personal data. DynamicUI: fixed the bug where the PageManager could not move page nodes. ...

ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.3.1 and demos: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form and Pager. Tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6.

DotSpatial: DotSpatial 11-21-2010: This release introduces the following: Fixed bugs related to dispose, which caused issues when reordering layers in the legend. Fixed bugs related to assigning categories where NULL values are in the fields. New fast-acting resize using a bitmap "prediction" of what the final resize content will look like. ImageData.ReadBlock, ImageData.WriteBlock: these allow direct file access for reading or writing a rectangular window; bitmaps are used for holding the values. Removed the need to stor...

Sincorde Silverlight Library: 2010 - November: Silverlight 4. Visual Studio 2010. Microsoft Expression dependencies removed. Design-time bug fixed.

MDownloader: MDownloader-0.15.24.6966: Fixed Updater; fixed minor bugs.

WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.1: Version: 2.0.0.1 (Milestone 1): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Remark: The sample applications use Microsoft's IoC container MEF. However, the WPF Application Framework (WAF) doesn't force you to use the same IoC container in your application. You can use ....

.NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.01: Added new extensions for object: object.CountLoopsToNull. Added new extensions for DateTime: DateTime.IsWeekend, DateTime.AddWeeks. Added new extensions for string: string.Repeat, string.IsNumeric, string.ExtractDigits, string.ConcatWith, string.ToGuid, string.ToGuidSave. Added new extensions for Exception: Exception.GetOriginalException. Added new extensions for Stream: Stream.Write (overload). And other new methods ... Release as of dotnetpro 01/2011.

Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-19: Code samples for Visual Studio 2010.

Prism Training Kit: Prism Training Kit 4.0: Release notes: This is an updated version of the Prism Training Kit that targets Prism 4.0 and adds labs for some of the new features of Prism 4.0. This release consists of a Training Kit with labs on the following topics: Modularity, Dependency Injection, Bootstrapper, UI Composition, Communication, MEF, Navigation. Note: take into account that this is a beta version; if you find any bugs please report them in the Issue Tracker. Prerequisites: Visual Studio 2010, Microsoft Word 2...

Free language translator and file converter: Free Language Translator 2.2: Starting with version 2.0, the translator encountered a major redesign that uses MEF-based plugins and .NET 4.0. I've also fixed some bugs and added support for translating subtitles that can show up in video media players. Version 2.1 shows the context menu 'Translate' in Windows Explorer on right click. Version 2.2 has links to start the media file with its associated subtitle. Download the zip file and expand it in a temporary location on your local disk. At a minimum, you should uninstal...

Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.4 Released: Hi, today we are releasing Visifire 3.6.4 with a few bug fixes: Multi-line labels were getting clipped while exploding the last DataPoint in Funnel and Pyramid charts. The ClosestPlotDistance property in Axis was not behaving as expected. In DateTime Axis, the chart threw an exception on mouse click over the PlotArea if there were no DataPoints present in the chart. The ToolTip was not disappearing while changing the DataSource property of the DataSeries at real-time. The chart threw an exception ...

Microsoft SQL Server Product Samples: Database: AdventureWorks 2008R2 SR1: Sample databases for Microsoft SQL Server 2008R2 (SR1). This release is dedicated to the sample databases that ship for Microsoft SQL Server 2008R2. See Database Prerequisites for SQL Server 2008R2 for feature configurations required for installing the sample databases. See Installing SQL Server 2008R2 Databases for step by step installation instructions. The SR1 release contains minor bug fixes to the installer used to create the sample databases. There are no changes to the databases them...

VidCoder: 0.7.2: Fixed duplicated subtitles when running multiple encodes off of the same title.

New Projects

Alien Terminator: Alien Terminator

ArchWeb 2.0: ArchWeb is an application that, together with its web site, lets you manage company documents and preset emails. ArchWeb supports files of any extension (with sizes up to 1.5 GB) and all document types from the Microsoft Office and Adobe suites.

Azure Table Sync Lib: Developing custom sync providers for Windows Azure Table Storage to support the following sync scenarios: 1. Azure Table <-> SQL Server 2. Azure Table <-> SQL CE 3. Azure Table <-> SQL Azure

ColecaoFilmes: Proof of concept using DDD, ASP.NET MVC 3, NHibernate, Fluent NHibernate and Ninject.

Coleotrack: Coleotrack: ASP.NET MVC Issue Tracking.

dmgbois: Personal repo.

DNN Social Bookmark: DotNetNuke plugin for adding social media components.

docNET: docNET grew out of my own battles creating system documentation from inline comments in code files. It uses the XML documentation file as the base to generate documentation in different formats. As I'm kinda biased towards my own needs, I'm starting from C#/ASP.NET => web output.

Event Trivia: A Silverlight trivia application which can be played in between sessions at an event. It has been used at PDC09, PDC10 and MIX10 so far. Written by Pete Brown at Microsoft.

F# MathProvider: F# Math Provider wraps native BLAS+LAPACK runtimes for F# users to perform linear algebra easily.

Frets Terminator: Frets Terminator

Gurfing projects: Few summary

Hexa.xText: Hexa.xText is a .NET command line tool to extract text from source files for later translation, just like GNU xgettext does. Extracted text is placed inside .po files that must be compiled into satellite assemblies through the included PO2Assembly MSBuild task.

Hold Popup for Touch: Hold Popup offers an optional solution to get an event while a mouse button is pressed or a touchdown is held over time. Instead of handling everything yourself (such as a timer for holding or handling events), Hold Popup handles it all itself. Developed in .NET 4.0 for WPF.

HumanResourceManagementSystem: Project NIIT MMS901 - Quarter 5 Human Resource Management System, People Management Module

Live Messenger: Live Messenger is a Windows Live Messenger module for DNN

MtechSystem: MultipointMouse Technology Education Chain

NerdOnRails.ActiveQuery: an object-relational mapper for URL parameters, written in .NET for .NET

PHMS: Publishing House Management System

RabbitEars: Thread safe, event driven message notification for RabbitMQ using the .NET API, implemented in VB.NET. The project is a Visual Studio 2010 solution with two projects: the RabbitEars project and a demo Windows Forms app to allow users to test drive RabbitMQ.

Screen Snapshot Manager: Screen Snapshot Manager is a desktop WinForms application which runs in the background and keeps taking snapshots of the current screen every 30 seconds for tracking purposes. :)

SharePoint 2010 Workflow Actions IntelliSense: This project hosts a SharePoint 2010 Workflow Actions schema file to assist developers in building Workflow Actions Files for SharePoint Designer.

Show Reader for MSDN Shows: Show Reader is a viewer for MSDN Shows based on the .NET Framework. In a single window you can view your favorite shows with transcripts. Moreover, in the same window you can browse Microsoft web sites where you can find shows about .NET technologies, like the .NET Show, Channel9 and MSDN TV. ShowReader is available in two editions: a Windows Forms edition, written in Visual Basic 2005, and a Windows Presentation Foundation edition, written in Visual Basic 2008. It was developed by the Italian developer ...

Subtitles Matcher: WPF application using Prism and MEF to automatically find and download subtitles for your media files

TweakSP2010: "Because SharePoint 2010 needs some Tweak'n" SharePoint Administration and Development extensions.

TweetsSaver: ?????????

zlibnet - c# zlib wrapper library: zlibnet - c# zlib wrapper library. Features: -zip -unzip -compression/decompression stream -fast, since using the unmanaged zlib library -64bit support

ZupSch: ZupSch is a little CMS for students

    Read the article

  • WPF DataBinding, CollectionViewSource, INotifyPropertyChanged

    - by plotnick
The first time I tried to do something in WPF, I was puzzled by WPF DataBinding. Then I studied thoroughly the following example on MSDN: http://msdn.microsoft.com/en-us/library/ms771319(v=VS.90).aspx Now I quite understand how to use the Master-Detail paradigm for a form which takes data from one source (one table), both for the master and detail parts. I mean, for example, I have a grid with data, and below the grid I have a few fields with detailed data for the current row. But how do you do it if the detailed data comes from a different but related table? For example: you have a table 'Users' with columns 'id' and 'Name'. You also have another table 'Pictures' with columns like 'id', 'Filename', 'UserId'. And now, using the master-detail paradigm, you have to build a form. Every time you choose a row in the Master, you should get all the associated pictures in the Details. What is the right way to do it? Could you please show me an example?
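One common answer, assuming each User object exposes a Pictures collection (for example, populated from the Pictures table by UserId): keep the master grid synchronized with the view's current item and hand the detail control the current User's Pictures. A rough C# code-behind sketch, with illustrative control and type names:

// In the window's constructor, after InitializeComponent():
// (ICollectionView is in System.ComponentModel, CollectionViewSource in System.Windows.Data)
ICollectionView usersView = CollectionViewSource.GetDefaultView(users);
masterGrid.ItemsSource = usersView;
masterGrid.IsSynchronizedWithCurrentItem = true;

usersView.CurrentChanged += delegate
{
    User current = usersView.CurrentItem as User;
    // The detail control shows the related rows for the selected master row.
    detailsList.ItemsSource = (current == null) ? null : current.Pictures;
};

Alternatively, if Pictures is a property of User and the window's DataContext is set to the users view, a plain ItemsSource="{Binding Pictures}" on the detail control achieves the same thing declaratively, resolving the path through the view's current item.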

    Read the article

  • Active Directory Services: PrincipalContext -- What is the DN of a "container" object?

    - by Ranger Pretzel
I'm currently trying to authenticate via Active Directory Services using the PrincipalContext class. I would like to have my application authenticate to the Domain using Sealed and SSL contexts. In order to do this, I have to use the following constructor of PrincipalContext (link to MSDN page):

public PrincipalContext(
    ContextType contextType,
    string name,
    string container,
    ContextOptions options
)

Specifically, I'm using the constructor like so:

PrincipalContext domainContext = new PrincipalContext(
    ContextType.Domain,
    domain,
    container,
    ContextOptions.Sealing | ContextOptions.SecureSocketLayer);

MSDN says about "container":

The container on the store to use as the root of the context. All queries are performed under this root, and all inserts are performed into this container. For Domain and ApplicationDirectory context types, this parameter is the distinguished name (DN) of a container object.

What is the DN of a container object? How do I find out what my container object is? Can I query the Active Directory (or LDAP) server for this?
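For a typical domain, the container DN is simply the LDAP distinguished name of the node you want queries rooted at: either the domain root itself or a specific container beneath it. A short example with placeholder names (contoso.com stands in for your domain):

using System.DirectoryServices.AccountManagement;

// Root the context at the entire domain:
PrincipalContext domainRoot = new PrincipalContext(
    ContextType.Domain,
    "contoso.com",
    "DC=contoso,DC=com",
    ContextOptions.Sealing | ContextOptions.SecureSocketLayer);

// Or root it at the built-in Users container:
PrincipalContext usersContainer = new PrincipalContext(
    ContextType.Domain,
    "contoso.com",
    "CN=Users,DC=contoso,DC=com",
    ContextOptions.Sealing | ContextOptions.SecureSocketLayer);

And yes, you can ask the directory for it: reading the root DSE's defaultNamingContext attribute returns the domain's DN, e.g. via new DirectoryEntry("LDAP://RootDSE").Properties["defaultNamingContext"] in System.DirectoryServices.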

    Read the article

  • Windows Vista/Win7 Privilege Problem: SeDebugPrivilege & OpenProcess

    - by KevenK
Everything I've been able to find about escalating to the appropriate privileges for my needs has agreed with my current methods, but the problem exists. I'm hoping maybe someone has some Windows Vista/Win7 internals experience that might shine some light where there is only darkness. I'm sure this will get long, but please bear with me.

Context: I'm working on an app that requires accessing the memory of other processes on the current machine. This, obviously, requires administrator rights. It also requires SeDebugPrivilege, which I believe I am acquiring correctly, although I question whether more privileges are necessary and whether their absence is the cause of my problems.

Process: The program will always be run with administrator rights. This can be assumed throughout this post.
1. Escalate the current process's access token to include SeDebugPrivilege.
2. Use EnumProcesses to create a list of current PIDs on the system.
3. Open a handle using OpenProcess with PROCESS_ALL_ACCESS access rights.
4. Use ReadProcessMemory to read the memory of the other process.

Problem: Everything has been working fine during development and my personal testing (including Windows XP 32 & 64, Windows Vista 32, and Windows 7 x64). However, during a test deployment onto both Windows Vista (32-bit) and Windows 7 (64-bit) machines of a colleague, there seems to be a privilege/rights problem, with OpenProcess failing with a generic Access Denied error. This occurs both when running as a limited user (as would be expected) and also when run explicitly as Administrator (right-click Run as Administrator, and when run from an administrator-level command prompt). However, this problem has been unreproducible for me in my test environment. The only difference that I can discern between the actual environment and my test environment is that the actual error occurs when using a Domain Administrator account at the UAC prompt, whereas my tests (which work with no errors) use a local administrator account at the UAC prompt. It appears that although the credentials being used allow UAC to 'run as administrator', the process is still not obtaining the correct rights to be able to OpenProcess on another process. I am not familiar enough with the internals of Vista/Win7 to know what this might be, and I am hoping someone has an idea of what could be the cause.

The Kicker: The person who has reported this error, and whose environment can regularly reproduce this bug, has a small bootstrap application named along the lines of RunWithDebugEnabled, which appears to escalate its own privileges and then launch the executable passed to it (which thus inherits the escalated privileges). When run with this program, using the same Domain Administrator credentials at the UAC prompt, the program works correctly, is able to successfully call OpenProcess, and operates as intended. So this is definitely a problem with acquiring the correct privileges, and it is known that the Domain Administrator account is an administrator account that should be able to obtain the correct rights. (Obviously obtaining this source code would be great, but I wouldn't be here if that were possible.)

Notes: As noted, the errors reported by the failed OpenProcess attempts are Access Denied. 
According to the MSDN documentation of OpenProcess:

If the caller has enabled the SeDebugPrivilege privilege, the requested access is granted regardless of the contents of the security descriptor.

This leads me to believe that perhaps there is a problem under these conditions either with (1) obtaining SeDebugPrivilege, or (2) requiring other privileges which have not been mentioned in any MSDN documentation, and which might differ between a Domain Administrator account and a Local Administrator account.

Sample Code:

void sample()
{
    /////////////////////////////////////////////////////////
    // Note: Enabling SeDebugPrivilege adapted from sample
    // MSDN @ http://msdn.microsoft.com/en-us/library/aa446619%28VS.85%29.aspx

    // Enable SeDebugPrivilege
    HANDLE hToken = NULL;
    TOKEN_PRIVILEGES tokenPriv;
    LUID luidDebug;
    if(OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken) != FALSE)
    {
        if(LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &luidDebug) != FALSE)
        {
            tokenPriv.PrivilegeCount = 1;
            tokenPriv.Privileges[0].Luid = luidDebug;
            tokenPriv.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
            if(AdjustTokenPrivileges(hToken, FALSE, &tokenPriv, 0, NULL, NULL) != FALSE)
            {
                // Always successful, even in the cases which lead to OpenProcess failure
                cout << "SUCCESSFULLY CHANGED TOKEN PRIVILEGES" << endl;
            }
            else
            {
                cout << "FAILED TO CHANGE TOKEN PRIVILEGES, CODE: " << GetLastError() << endl;
            }
        }
    }
    CloseHandle(hToken);
    // Enable SeDebugPrivilege
    /////////////////////////////////////////////////////////

    vector<DWORD> pidList = getPIDs(); // Method that simply enumerates all current process IDs

    /////////////////////////////////////////////////////////
    // Attempt to open processes
    for(int i = 0; i < pidList.size(); ++i)
    {
        HANDLE hProcess = NULL;
        hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pidList[i]);
        if(hProcess == NULL)
        {
            // Error is occurring here under the given conditions
            cout << "Error opening process PID(" << pidList[i] << "): " << GetLastError() << endl;
        }
        CloseHandle(hProcess);
    }
    // Attempt to open processes
    /////////////////////////////////////////////////////////
}

    Read the article

  • ASP.NET Routing : RouteCollection class missing

    - by Shyju
    I am developing a website (Web Forms, not MVC) in VS 2008 with SP1, and I am trying to incorporate ASP.NET Routing. I am following the MSDN tutorial at http://msdn.microsoft.com/en-us/library/cc668201.aspx and have added the items below to my global.asax.cs file as per the tutorial:

        protected void Application_Start(object sender, EventArgs e)
        {
            RegisterRoutes(RouteTable.Routes);
        }

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.Add(new Route("Category/{action}/{categoryName}",
                                 new CategoryRouteHandler()));
        }

    When I try to build, I get: "The type or namespace name 'RouteCollection' could not be found (are you missing a using directive or an assembly reference?)". I have System.Web imported in my global.asax file. Any idea how to get rid of this error?
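
    A likely fix, assuming the project targets .NET 3.5 SP1: RouteCollection and Route live in the System.Web.Routing namespace, which ships in its own System.Web.Routing assembly, so the project needs both a reference to System.Web.Routing.dll and a matching using directive; importing System.Web alone is not enough. A minimal sketch of the relevant directives (CategoryRouteHandler is the asker's own handler type from the tutorial):

        using System;
        using System.Web;
        using System.Web.Routing; // requires a project reference to System.Web.Routing.dll

        public class Global : HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                RegisterRoutes(RouteTable.Routes);
            }

            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.Add(new Route("Category/{action}/{categoryName}",
                                     new CategoryRouteHandler()));
            }
        }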

    Read the article

  • Outlook 2003 View Control with C#

    - by Eleasar
    I want to embed a custom C# Windows Forms (or WPF) user control in an Outlook view. I am using Outlook 2003 and Visual Studio 2008. I downloaded an example for Outlook 2007 here: http://blogs.msdn.com/e2eblog/archive/2008/01/09/outlook-folder-homepage-hosting-wpf-activex-and-windows-forms-controls.aspx and also here: http://msdn.microsoft.com/en-us/library/aa479345.aspx. I tested it, and under Outlook 2007 it works, but under 2003 I get the following error when I try to open the view: "Could not complete the operation due to error 80131509". I can start it from Visual Studio; it registers the folder just fine, debugging works, and all that. It creates an HTML page that contains my type as an object parameter, but the Initialize method that should be called is either not present (not shown via JS) or has some errors. The breakpoints in RegisterSafeForScripting are also never hit; maybe that is related. Thanks in advance!
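
    Error 80131509 appears to be COR_E_INVALIDOPERATION, a managed InvalidOperationException surfacing through the COM host. Since the breakpoints in RegisterSafeForScripting never fire, one thing worth verifying is that the control really gets marked safe for scripting and initialization, which the folder home page's browser host requires. Below is a hedged sketch of the usual registry-based registration; the class name and CLSID are placeholders, while the two category GUIDs are the standard CATID_SafeForScripting and CATID_SafeForInitializing values. Registering the assembly with regasm /codebase should invoke these functions and create the keys, at which point a breakpoint here becomes reachable.

        using System;
        using System.Runtime.InteropServices;
        using Microsoft.Win32;

        [ComVisible(true)]
        [Guid("00000000-0000-0000-0000-000000000000")] // placeholder: use the control's real CLSID
        public class MyOutlookViewControl : System.Windows.Forms.UserControl
        {
            // Component category IDs that mark a control safe for scripting /
            // initialization in a browser host such as an Outlook folder home page.
            private const string SafeForScripting = "{7DD95801-9882-11CF-9FA9-00AA006C42C4}";
            private const string SafeForInitializing = "{7DD95802-9882-11CF-9FA9-00AA006C42C4}";

            [ComRegisterFunction]
            public static void RegisterSafeForScripting(Type t)
            {
                string baseKey = @"CLSID\{" + t.GUID + @"}\Implemented Categories\";
                Registry.ClassesRoot.CreateSubKey(baseKey + SafeForScripting).Close();
                Registry.ClassesRoot.CreateSubKey(baseKey + SafeForInitializing).Close();
            }

            [ComUnregisterFunction]
            public static void UnregisterSafeForScripting(Type t)
            {
                string baseKey = @"CLSID\{" + t.GUID + @"}\Implemented Categories\";
                Registry.ClassesRoot.DeleteSubKey(baseKey + SafeForScripting, false);
                Registry.ClassesRoot.DeleteSubKey(baseKey + SafeForInitializing, false);
            }
        }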

    Read the article

  • SmtpClient.SendAsync() not working on a 64-bit computer

    - by j-t-s
    Hi all. Only a couple of weeks ago I had an old 32-bit computer, but I had to buy a new one, which happens to be 64-bit. I've rewritten the method that sends e-mails asynchronously, and it just does not work, and it does NOT throw any exceptions either. What gives? I have also tried the async method code found in MSDN, the BlackWasp tutorial code, and MANY others. They're all the same! The only difference here seems to be the fact that I'm now on a 64-bit computer. How can I get around this? Google/MSDN has been of zero help. Thanks
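
    For what it's worth, SendAsync failing silently is usually not a bitness issue. Two common causes are the process exiting before the asynchronous send completes, and the error being delivered to the SendCompleted event's e.Error property rather than thrown. A minimal sketch that surfaces both, assuming a console host; the SMTP host and addresses are placeholders:

        using System;
        using System.ComponentModel;
        using System.Net.Mail;
        using System.Threading;

        class Program
        {
            static readonly ManualResetEvent done = new ManualResetEvent(false);

            static void Main()
            {
                SmtpClient client = new SmtpClient("smtp.example.com"); // placeholder host
                MailMessage message = new MailMessage("from@example.com", "to@example.com",
                                                      "Test subject", "Test body");

                client.SendCompleted += OnSendCompleted;
                client.SendAsync(message, message); // pass the message along as the user token

                // Keep the process alive until the async send finishes; if Main returns
                // first, the send quietly never completes and no exception is observed.
                done.WaitOne();
            }

            static void OnSendCompleted(object sender, AsyncCompletedEventArgs e)
            {
                MailMessage message = (MailMessage)e.UserState;
                if (e.Error != null)
                    Console.WriteLine("Send failed: " + e.Error); // the "missing" exception shows up here
                else if (e.Cancelled)
                    Console.WriteLine("Send cancelled.");
                else
                    Console.WriteLine("Send succeeded.");
                message.Dispose();
                done.Set();
            }
        }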

    Read the article

  • When constructing a Bitmap with Bitmap.FromHbitmap(), how soon can the original bitmap handle be deleted?

    - by GBegen
    From the documentation of Image.FromHbitmap() at http://msdn.microsoft.com/en-us/library/k061we7x%28VS.80%29.aspx :

        The FromHbitmap method makes a copy of the GDI bitmap; so you can release the incoming GDI bitmap using the GDI DeleteObject method immediately after creating the new Image.

    This pretty explicitly states that the bitmap handle can be immediately deleted with DeleteObject as soon as the Bitmap instance is created. Looking at the implementation of Image.FromHbitmap() with Reflector, however, shows that it is a pretty thin wrapper around the GDI+ function GdipCreateBitmapFromHBITMAP(). There is pretty scant documentation on the GDI+ flat functions, but http://msdn.microsoft.com/en-us/library/ms533971%28VS.85%29.aspx says that GdipCreateBitmapFromHBITMAP() corresponds to the Bitmap::Bitmap() constructor that takes an HBITMAP and an HPALETTE as parameters. The documentation for this version of the Bitmap::Bitmap() constructor at http://msdn.microsoft.com/en-us/library/ms536314%28VS.85%29.aspx has this to say:

        You are responsible for deleting the GDI bitmap and the GDI palette. However, you should not delete the GDI bitmap or the GDI palette until after the GDI+ Bitmap::Bitmap object is deleted or goes out of scope. Do not pass to the GDI+ Bitmap::Bitmap constructor a GDI bitmap or a GDI palette that is currently (or was previously) selected into a device context.

    Furthermore, one can see in the source code for the C++ portion of GDI+, in GdiPlusBitmap.h, that the Bitmap::Bitmap() constructor in question is itself a wrapper for the GdipCreateBitmapFromHBITMAP() function from the flat API:

        inline Bitmap::Bitmap(
            IN HBITMAP hbm,
            IN HPALETTE hpal
            )
        {
            GpBitmap *bitmap = NULL;
            lastResult = DllExports::GdipCreateBitmapFromHBITMAP(hbm, hpal, &bitmap);
            SetNativeImage(bitmap);
        }

    What I can't easily see is the implementation of GdipCreateBitmapFromHBITMAP(), which is the core of this functionality, and the two remarks in the documentation seem to be contradictory. The .NET documentation says I can delete the bitmap handle immediately, and the GDI+ documentation says the bitmap handle must be kept until the wrapping object is deleted, but both are based on the same GDI+ function. Furthermore, the GDI+ documentation warns against using a source HBITMAP that is currently or was previously selected into a device context. While I can understand why the bitmap should not currently be selected into a device context, I do not understand the warning against using a bitmap that was previously selected into one. That would seem to prevent the use of GDI+ bitmaps that had been created in memory using standard GDI.

    So, in summary:

    1. Does the original bitmap handle need to be kept around until the .NET Bitmap object is disposed?
    2. Does the GDI+ function GdipCreateBitmapFromHBITMAP() make a copy of the source bitmap, or does it merely hold onto the handle to the original?
    3. Why should I not use an HBITMAP that was previously selected into a device context?
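
    To at least exercise the managed contract, here is a small illustrative C# sketch. It demonstrates the behavior the .NET documentation claims (delete the handle immediately after FromHbitmap) rather than settling the underlying GDI+ question; the cautious alternative is simply to keep the handle alive until the managed Bitmap is disposed:

        using System;
        using System.Drawing;
        using System.Runtime.InteropServices;

        class FromHbitmapDemo
        {
            [DllImport("gdi32.dll")]
            static extern bool DeleteObject(IntPtr hObject);

            static void Main()
            {
                // Produce an HBITMAP to hand to FromHbitmap (taken from a managed
                // Bitmap purely for demonstration; any GDI bitmap handle works).
                IntPtr hBitmap;
                using (Bitmap source = new Bitmap(100, 100))
                {
                    hBitmap = source.GetHbitmap();
                }

                // Per the Image.FromHbitmap documentation, this makes a copy of the
                // GDI bitmap, so the handle can be released right away...
                Bitmap copy = Image.FromHbitmap(hBitmap);
                DeleteObject(hBitmap);

                // ...and the managed copy remains usable afterwards.
                Console.WriteLine(copy.Size);
                copy.Dispose();
            }
        }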

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >