Search Results

Search found 26146 results on 1046 pages for 'white box testing'.


  • An introductory presentation about testing with MSTest, Visual Studio, and Team Foundation Server 2010

    - by Thomas Weller
    While it was very quiet here on my blog during the last months, this was not at all true for the rest of my professional life. The simple story is that I was too busy to find the time for authoring blog posts (and you might see from my previous ones that they’re usually not of the ‘Hey, I’m currently reading X’ or ‘I’m currently thinking about Y’ kind…). Anyway. Among the things I did during the last months was setting up a TFS 2010 environment and introducing a development team to the MSTest framework (a.k.a. Visual Studio Unit Testing), some additional tools (e.g. Moq, Moles, White), how all of this is supported in Visual Studio, and how it integrates into the broader context of the then-new TFS environment. After removing all the material that was directly related to my former customer and reviewing/extending the speaker notes, I thought I’d share this presentation (via Slideshare) with the rest of the world. Hopefully it can be useful to someone else out there… Introduction to testing with MSTest, Visual Studio, and Team Foundation Server 2010. Be sure to also check out the slide notes (either by viewing the presentation directly on Slideshare or – even better – by downloading it). They contain quite a bit of additional information, hints, and (in my opinion) best practices.

    Read the article

  • creating bounding box list

    - by Christian Frantz
    I'm trying to create a list of bounding boxes for each cube drawn, so I can use the boxes to intersect with a ray that my mouse position is casting, but I have no idea how. I've created a list that stores the boxes, but how am I getting the values from each box?

        for (int x = 0; x < mapHeight; x++)
        {
            for (int z = 0; z < mapWidth; z++)
            {
                cubes.Add(new Vector3(x, map[x, z], z), Matrix.Identity, grass);
                boxList.Add(something here);
            }
        }

        public Cube(GraphicsDevice graphicsDevice)
        {
            device = graphicsDevice;
            var vertices = new List<VertexPositionTexture>();
            BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1));
            BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1));
            BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0));
            BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0));
            BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1));
            BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0));

            cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration, vertices.Count, BufferUsage.WriteOnly);
            cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
        }

    There aren't any clearly defined variables for the bounds of each cube created, so where do I create the bounding box from?
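    In XNA, a BoundingBox is defined by its minimum and maximum corner points, so for unit-sized cubes the box can be derived directly from each cube's position. A minimal sketch of one way to fill the list, assuming the cubes are one unit on each side and boxList is a List<BoundingBox> (both assumptions, not stated in the question):

        for (int x = 0; x < mapHeight; x++)
        {
            for (int z = 0; z < mapWidth; z++)
            {
                Vector3 position = new Vector3(x, map[x, z], z);
                cubes.Add(position, Matrix.Identity, grass);

                // Min corner is the cube's position; max corner is one unit away on each axis.
                boxList.Add(new BoundingBox(position, position + Vector3.One));
            }
        }

    A picking ray built from the mouse position can then be tested against each box with ray.Intersects(box), which returns the distance along the ray to the hit, or null if there is no intersection.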

    Read the article

  • Beta Testing Begins for New MySQL 5.6 Developer and DBA Certification Exams

    - by Brandye Barrington
    Be among the first to earn one of Oracle's new MySQL certifications. Exams for the Oracle Certified Professional, MySQL 5.6 Developer (OCP) and Oracle Certified Professional, MySQL 5.6 Database Administrator (OCP) certifications are now in beta testing, and are thus available at a greatly discounted rate of $50 USD. Explore the Oracle Certification exam pages below, which share a wealth of details, including preparation steps, exam objectives, number of questions, time allotments, and pricing. MySQL 5.6 Developer (exam 1Z1-882) MySQL 5.6 Database Administrator (exam 1Z1-883) START TODAY. Exam appointments are available now. Easily register online by taking the following steps: STEP 1: Go to pearsonvue.com/oracle. STEP 2: Select exam 1Z1-882 (for developers) or exam 1Z1-883 (for DBAs). These new OCP credentials raise the bar for MySQL Certified Developers and Database Administrators. Start today and be among the first to be awarded the new Oracle MySQL 5.6 certifications. QUICK LINKS Oracle Certified Professional, MySQL 5.6 Developer - certification track | exam | VIDEO (2:54) Oracle Certified Professional, MySQL 5.6 Database Administrator - certification track | exam | VIDEO (3:00) Oracle MySQL 5.6 Certification Launch Learn More: Beta Testing Registration for exam: Pearson VUE

    Read the article

  • ubuntu box just redisplaying login screen after update

    - by David M. Karr
    My Ubuntu 12.04 box has been working fine. A recent update may have messed something up. I normally run remote windows on it, and I noticed that my windows were failing to start up. I then tried logging into it directly from the GUI console, and I'm seeing that after I press enter on the (valid) password, the page just redisplays. It's not a password error, as that would give me an inline error. I see some messages appear and disappear quickly between the login screen going away and then redisplaying, but they go away too quickly to read. I was able to run the non-gui login, and I did an update and upgrade, and then rebooted, but it's doing the same thing. I have a Samba connection from my Windows box, and that's still working. If it matters, here's my uname output (somewhat elided): Linux ... 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux What can I do to troubleshoot this? Note that when I select "Guest Session", it lets me log in and displays the window manager. This seems significant to me. Does this mean that something specific to my login is causing it to fail? Note: If it matters, here's the output from /var/log/dmesg. The line about gdm seems interesting: [ 9.815883] Bluetooth: RFCOMM TTY layer initialized [ 9.815887] Bluetooth: RFCOMM socket layer initialized [ 9.815888] Bluetooth: RFCOMM ver 1.11 [ 9.879088] [PCSPP,TRISTATE] [ 9.879092] parport0: irq 7 detected [ 9.883935] type=1400 audit(1341871177.871:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=845 comm="apparmor_parser" [ 9.884365] type=1400 audit(1341871177.871:11): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/ntpd" pid=851 comm="apparmor_parser" [ 9.950397] e1000e 0000:00:19.0: irq 42 for MSI/MSI-X [ 9.961160] init: gdm main process (907) killed by TERM signal [ 9.966358] lp0: using parport0 (polling).

    Read the article

  • A Brand-new Automated Testing Tool is the Result of Telerik and ArtOfTest Merger

    I'm sure you've already heard the great news about Telerik's expansion and the new Telerik Automated Testing Tools division. I am excited to share what we worked on and produced over the last couple of months. New Release: The latest Telerik release that went live this week added a completely new tool to Telerik's automated testing product line. The new QA Edition is tailored for QA professionals. The QA Edition is a standalone tool that allows QAs to freely create, execute and maintain their tests without having to install Visual Studio. If you are a developer and you want something much faster and more lightweight than VS, then the standalone tool is worth trying. New IDE: The QA Edition is a WPF application with an interface built on top of the latest and greatest RadControls for WPF. This allowed us to configure and build an intuitive and easy-to-use UI. Additionally, the rich ...

    Read the article

  • SQLAuthority News – Weekend Experiment with NuoDB – Points to Ponder and Whitepaper

    - by pinaldave
    This weekend I downloaded the latest beta version of NuoDB. I found it much improved, with a better UI. I was very impressed, as the installation was very smooth and I was up and running with the product in less than 5 minutes. The tools related to the administration of NuoDB seem to have received a makeover in this beta release. They now claim to support the Solaris platform and to have improved the native MacOS installation. I have neither a Mac nor Solaris – otherwise I would have experimented with those as well. I would appreciate it if anyone out there can confirm how the installation goes on these platforms. I have previously blogged about my experiments with NuoDB here: SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database SQL SERVER – Beginning NuoDB – Who will Benefit and How to Start SQL SERVER – Follow up on Beginning NuoDB – Who will Benefit and How to Start – Part 2 I am very impressed with the product so far and I have decided to understand it in more depth. Here are a few of the questions I am going to try to find answers to with regard to NuoDB. Just so it is clear – NuoDB is not NoSQL; as a matter of fact, it follows all the ACID properties of a database. If ACID properties are crucial, why do many NoSQL products not adhere to them? (A few out there do follow ACID, but not all.) I understand database scalability, but is elasticity crucial for a database, and if so, how? (Elasticity is for when the workload on the database fluctuates heavily and more than a single database server is needed.) How has NuoDB built a scalable, elastic and 100% ACID-compliant database that supports multiple platforms? How does NoSQL compare to NuoDB’s new architecture? In the coming weeks, I am going to explore the above concepts and dive deeper into understanding them. Meanwhile I have read the following white paper written by experts at the University of California at Santa Barbara. It is a very interesting read and a great starter on the subject: Database Scalability, Elasticity, and Autonomy in the Cloud. Additionally, since my questions also touch on NoSQL, this weekend I started to learn about NoSQL from Pluralsight‘s online learning library. I will share my experience very soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology Tagged: NuoDB

    Read the article

  • Ray-Box Intersection during Scene traversal with matrix transforms

    - by Myx
    Hello: There are a few ways that I'm testing my ray-box intersections:

    1. Using the ComputeIntersectionBox(...) method, which takes a ray and a box as arguments and computes the closest intersection of the ray and the box. This method works by forming a plane from each face of the box and finding an intersection with each of the planes. Once an intersection is found, a check is made whether or not the point is on the surface of the box, by checking that the intersection point lies between the corner points. When I look at rays after running this algorithm on two different boxes, I obtain the correct intersections.

    2. Using the ComputeIntersectionScene(...) method WITHOUT the matrix transformations, on a scene that has two spheres, a dodecahedron (a triangular mesh), and two boxes. ComputeIntersectionScene(...) recursively traverses all of the nodes of the scene graph and computes the closest intersection with the given ray. This test in particular does not apply any transformations that parent nodes may have that also need to be applied to their children. With this test, I also obtain the correct intersections.

    3. Using the ComputeIntersectionScene(...) method WITH the matrix transformations. This test works like the one above, except that before finding an intersection between the ray and a node in the scene, the ray is transformed into the node's coordinate frame using the inverse of the node's transformation matrix, and after the intersection has been computed, this intersection is transformed back into world coordinates by applying the transformation matrix to the intersection point.

    When testing with the third method on the same scene file as described in 2, using 4 rays (so that one ray intersects one sphere, one ray the other sphere, one ray one box, and one ray the other box), only the two spheres get intersected and the two boxes do not. When I debug into my ComputeIntersectionBox(...) method, it actually tells me that the ray intersects every plane of the box, but each intersection point does not lie on the box. This seems to be strange behavior, since when using test 2 without transformations I obtain the correct box intersections (thus, I believe my ray-box intersection to be correct), and when using test 3 WITH transformations I obtain the correct sphere intersections (thus, I believe my transformed ray should be OK). Any suggestions where I could be going wrong? Thank you in advance.
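    One thing worth double-checking in method 3 is how the ray direction is transformed: if the direction is transformed as a point, it picks up the node's translation and the local-space ray ends up skewed, which matches the symptom of every face plane being hit while no intersection point lies on the box. A rough C# sketch of the usual approach, using XNA-style math types; node.Transform, node.Box and ComputeIntersectionBox stand in for the asker's own members and are not taken from the post:

        // Transform the ray into the node's local coordinate frame.
        Matrix worldToLocal = Matrix.Invert(node.Transform);

        // The origin is a point, so it is affected by translation...
        Vector3 localOrigin = Vector3.Transform(ray.Position, worldToLocal);
        // ...but the direction is a vector and must NOT pick up the translation.
        Vector3 localDirection = Vector3.TransformNormal(ray.Direction, worldToLocal);
        localDirection.Normalize();

        Ray localRay = new Ray(localOrigin, localDirection);
        // The asker's method; assumed here to return the hit point in local space, or null.
        Vector3? localHit = ComputeIntersectionBox(localRay, node.Box);

        if (localHit.HasValue)
        {
            // Bring the intersection point back into world coordinates.
            Vector3 worldHit = Vector3.Transform(localHit.Value, node.Transform);
        }

    If hit distances from different nodes are compared to pick the closest intersection, they should also be compared in a common (world) space, since scaling in a node's transform changes distances measured in its local frame.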

    Read the article

  • Count white spaces to the left of a line in a text file using C++

    - by insertable
    Hi all, I am just trying to count the number of white spaces to the LEFT of a line from a text file. I have been using count( line.begin(), line.end(), ' ' ); but obviously that counts ALL of the white spaces in the line – to the left, in between words, and to the right. So basically I want it to stop counting white spaces once it hits a non-space character. Thanks everyone.

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge, and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So… let's get down and dirty and start setting up the Test Rig.

What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and Test Agents we'll make use of Windows Azure Connect.

Azure Connect: The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. With this you can join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure Connect. A very useful video explains everything you wanted to know about Windows Azure Connect.

Azure Compute: Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

Virtual Machine Size | CPU Cores | Memory   | Cost Per Hour
Extra Small          | Shared    | 768 MB   | $0.04
Small                | 1         | 1.75 GB  | $0.12
Medium               | 2         | 3.50 GB  | $0.24
Large                | 4         | 7.00 GB  | $0.48
Extra Large          | 8         | 14.00 GB | $0.96

You might want to review the Windows Azure Pricing FAQ.

Let's Get Started building the Test Rig…

Configuration:

Machine | Role                              | Comments
VM – 1  | Domain Controller for Playpit.com | On Premise
VM – 2  | TFS, Test Controller              | On Premise
VM – 3  | Test Agent                        | Cloud

In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

I. Let's start building VM – 3: The Test Agent

- Download the Windows Azure SDK and Tools.
- Open Visual Studio and create a new Windows Azure Project using the Cloud template.
- Choose the Worker Role, for reasons explained in the earlier post.
- The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required (a sketch of the generated file follows below). You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
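For reference, the WorkerRole.cs generated by the Cloud project template looks roughly like this; a simplified sketch, not code taken from the original post:

    using System.Net;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            // The worker role has nothing to do yet; it simply stays alive.
            while (true)
            {
                Thread.Sleep(10000);
            }
        }

        public override bool OnStart()
        {
            // Set the maximum number of concurrent connections.
            ServicePointManager.DefaultConnectionLimit = 12;
            return base.OnStart();
        }
    }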
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure connect module in the Test Agent configuration, now the connect end points need to be enabled on the on premise machines, let’s have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller Log on to the Windows Azure Management Portal and click on Virtual Network Click on Virtual Network, if you already have a subscription you should see the below screen shot, if not, you would be asked to complete the subscription first        Click on Install Local Endpoints from the top left on the panel and you get a url appended with a token id in it, remember the token i showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file.        Copy the url to the clip board and paste it in IE explorer (important, the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later, you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure connect icon in the system tray.                         Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure connect icon in the system tray, a bit of binging with Google revealed that the azure connect icon is only shown when the ‘Windows Azure Connect Endpoint’ Service is started. So go to services.msc and make sure that the service is started, if not start it, unfortunately again, the service did not start for me on a manual start and i realised that one of the dependant services was disabled, you can look at the service dependencies and start them and then start windows azure connect. Bottom line, you need to start Windows Azure connect service before you can proceed. Please refer here on MSDN for more on Troubleshooting Windows Azure connect. (Follow the next step as well)   Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, lets call it ‘Test Rig’. Make sure you add the VM – 2 (the TFS Server VM where you just installed the endpoint).       Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’ you will notice that the disconnected status of the icon should change to ready for connection. III. Importing Certificate in to Windows Azure Management Portal But before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on ‘Hosted Services, Storage Accounts & CDN’ and then ‘Management Certificates’ followed by Add Certificates as shown in the screen shot below        Browse to the location where you saved the certificate earlier, remember… Refer to Step I in case you forgot.        Now you should be able to see the imported certificate here, make sure the thumbprint of the certificate matches the one you inserted in the config files        IV. Publish Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio and right click the Windows Azure project and select Publish. Verify the infomration in the wizard, from the advanced settings tab, you can also enabled capture of intellitrace or profiling information.         Click Next and Click Publish! 
From the View menu select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of a new Worker Role. Once the deployment is complete you should be able to RDP (go to the run prompt, type mstsc and enter the machine name in the pop up) into the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is that, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure: you will be able to log in with the user name and password you specified in the config file for the keys ‘RemoteAccess.AccountUsername, RemoteAccess.EncryptedPassword’ (just enter the password unencrypted), then fix it or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step. So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, the machine is connected to the domain and the Connect service is running successfully. If yes, give yourself a pat on the back – you are 80% mission accomplished! Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, click on Test Rig, then click Edit Group to edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check ‘Allow connections between endpoints in the group’; with this you enable communication between the test controller and test agents, and between test agents. Click Save. Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller. V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN, ‘en_visual_studio_agents_2010_x86_x64_dvd_509679.iso’, extract the iso and navigate to where you have extracted it. In my case, I extracted the iso to “C:\Resources\Temp\VsAgentSetup”. Open the Test Agent folder and double click on setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN. Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block some things otherwise). In the run options, you can select ‘service’; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, I would never do that; I would create a separate test user service account for this purpose.
But for the blog post, we are using the most powerful user so that no policies or restrictions block you. Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permission issue or a firewall blocking communication. And now the moment of truth! Go to VM – 2, open up Visual Studio and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured here. VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig I have various blog posts on Performance Testing with Visual Studio Ultimate; you can follow the links and videos below.
Blog Posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate
Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate
Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig: create a new test settings file, change the Test Execution method to ‘Remote Execution’ and select the test controller you have configured the Worker Role Test Agent against – in our case VM – 2. So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig. Review and What’s next? A quick recap of the benefits of running the Test Rig in the cloud and what I will be covering in the next blog post – and I would love to hear your feedback!
Advantages:
- Utilizing the power of Azure compute to run a heavy virtual user load.
- Benefiting from Azure's flexibility: destroy Test Agents when not in use; it takes < 25 minutes to spin up a new Test Agent.
- Most importantly, test network latency (network latency and speed of connection are two different things – usually network latency is very hard to test): by placing the Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes, not because of a slow connection but because the page has been requested from the other side of the globe.
Next Steps: The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to make the role deployment, unattended install of the test agent software and registration of the test agent to the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • How to utilize Varnish for A/B Testing and Feature Rollout?

    - by Ken
    Hi all, wasn't really sure if this should go here or on Stack Overflow - admins, please move it if I'm mistaken (and sorry). Today we have our web layer exposed to the world. We would like to add Varnish in front of our web layer to accelerate the site and reduce calls to the backend. However, we have some concerns and I was wondering how most people approach them: A/B Testing - How do you test two "versions" of each page and compare them? I mean, how does Varnish know which page to serve up? If and how do you save separate versions of each page? Feature rollout - how would you set up a simple feature rollout mechanism? Let's say I want to open a new feature/page to just 10% of the traffic, and then later increase that to 20%. How do you handle code deployments? Do you purge your entire Varnish cache on every deployment? (We have deployments on a daily basis.) Or do you just let it slowly expire (using TTL)? Any ideas and examples regarding these issues are greatly appreciated! Thanks in advance. Ken.

    Read the article

  • Is it worth hiring a hacker to perform some penetration testing on my servers?

    - by Brann
    I'm working in a small IT company with paranoid clients, so security has always been an important consideration for us. In the past, we've already commissioned two penetration tests from independent companies specialized in this area (Dionach and GSS). We've also run some automated penetration tests using Nessus. Those two auditors were given a lot of insider information, and found almost nothing* ... While it feels comfortable to think our system is perfectly secure (and it was surely comfortable to show those reports to our clients when they performed their due diligence work), I have a hard time believing that we've achieved a perfectly secure system, especially considering that we have no security specialist in our company (security has always been a concern, and we're completely paranoid, which helps, but that's as far as it goes!). If hackers can hack into companies that probably employ at least a few people whose sole task is to ensure their data stays private, surely they could hack into our small business, right? Does someone have any experience in hiring an "ethical hacker"? How do you find one? How much would it cost? *The only recommendation they made was to upgrade our remote desktop protocols on two Windows servers, which they were able to access because we gave them the correct non-standard port and whitelisted their IP.

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process:
- Putting your database in to a source control system, and,
- Running a continuous integration process, each time changes are checked in.
However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:
- Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?),
- Deploying the database changes with a managed, automated approach,
- Monitoring what you’ve just put live, to make sure you haven’t broken anything.
This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice! Background on Continuous Delivery: For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage; Versioning Databases – Branching and Merging, by Scott Allen. In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:

    -- Basic email check test
    ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
    AS
    BEGIN
        SET NOCOUNT ON

        DECLARE @Output VARCHAR(MAX)
        SET @Output = ''

        SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
        FROM dbo.Email
        WHERE Email NOT LIKE '%_@__%.__%'

        IF @Output > ''
        BEGIN
            SET @Output = CHAR(13) + CHAR(10) + @Output
            EXEC tSQLt.Fail @Output
        END
    END;

Once this script is entered, hit execute to add the stored procedure to the database. Before committing the test to source control, it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format. Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client). Checking that the Tests are Running as Integration Tests: After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for RedGateAppCI, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

    <!-- tSQLt tests -->
    <!-- Optional -->
    <!-- To run tSQLt tests in source control for the database, enter true. -->
    <enableTsqlt>true</enableTsqlt>

Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence a different address and build number): superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:

21-Jun-2013 11:35:19 Build FAILED.
21-Jun-2013 11:35:19
21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
21-Jun-2013 11:35:19 (sqlCI target) ->
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build: it worked!

Summary

This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?

    Read the article

  • Unit Testing using InternalsVisibleToAttribute requires compiling with /out:filename.ext?

    - by Will Marcouiller
In my most recent question: Unit Testing Best Practice? / C# InternalsVisibleTo() attribute for VBNET 2.0 while testing?, I was asking about InternalsVisibleToAttribute. I have read the documentation on how to use it, and everything is fine and understood. However, I can't instantiate my class Groupe from my Testing project. I want to be able to instantiate my internal class in my wrapper assembly, from my testing assembly. Any help is appreciated!

EDIT #1
Here's the compile-time error I get when I do try to instantiate my type:

Erreur 2 'Carra.Exemples.Blocs.ActiveDirectory.Groupe' n'est pas accessible dans ce contexte, car il est 'Private'. C:\Open\Projects\Exemples\Src\Carra.Exemples.Blocs.ActiveDirectory\Carra.Exemples.Blocs.ActiveDirectory.Tests\GroupeTests.vb 9 18 Carra.Exemples.Blocs.ActiveDirectory.Tests

(This says that my type is not accessible in this context, because it is private.) But it's Friend (internal)!

EDIT #2
Here's a piece of code, as suggested, for the Groupe class implementing the public interface IGroupe:

#Region "Importations"
Imports System.DirectoryServices
Imports System.Runtime.CompilerServices
#End Region

<Assembly: InternalsVisibleTo("Carra.Exemples.Blocs.ActiveDirectory.Tests")>

Friend Class Groupe
    Implements IGroupe

#Region "Membres privés"
    Private _classe As String = "group"
    Private _domaine As String
    Private _membres As CustomSet(Of IUtilisateur)
    Private _groupeNatif As DirectoryEntry
#End Region

#Region "Constructeurs"
    Friend Sub New()
        _membres = New CustomSet(Of IUtilisateur)()
        _groupeNatif = New DirectoryEntry()
    End Sub

    Friend Sub New(ByVal domaine As String)
        If (String.IsNullOrEmpty(domaine)) Then Throw New ArgumentNullException()

        _domaine = domaine
        _membres = New CustomSet(Of IUtilisateur)()
        _groupeNatif = New DirectoryEntry(domaine)
    End Sub

    Friend Sub New(ByVal groupeNatif As DirectoryEntry)
        _groupeNatif = groupeNatif
        _domaine = _groupeNatif.Path
        _membres = New CustomSet(Of IUtilisateur)()
    End Sub
#End Region

And the code trying to use it:

#Region "Importations"
Imports NUnit.Framework
Imports Carra.Exemples.Blocs.ActiveDirectory.Tests
#End Region

<TestFixture()> _
Public Class GroupeTests

    <Test()> _
    Public Sub CreerDefaut()
        Dim g As Groupe = New Groupe()

        Assert.IsNotNull(g)
        Assert.IsInstanceOf(Groupe, g)
    End Sub

End Class

EDIT #3
Damn! I have just noticed that I wasn't importing the assembly in my importation region. Nope, didn't solve anything =( Thanks!

    Read the article

  • Any suggestions for good automated web load testing tool?

    - by fmunkert
    What are some good automated tools for load testing (stress testing) web applications, that do not use record and replay of HTTP network packets? I am aware that there are numerous load testing tools on the market that record and replay HTTP network packets. But these are unsuitable for my purpose, because of this: The HTTP packet format changes very often in our application (e.g. when we optimize an AJAX call). We do not want to adapt all test scripts just because there is a slight change in HTTP packet format. Our test team shall not need to know any internals about our application to write their test scripts. A tool that replays HTTP packets, however, requires the team to know the format of HTTP requests and responses, such that they can adapt details of the replayed HTTP packets (e.g. user name). The automated load testing tool I am looking for should be able to let the test team write "black box" test scripts such as: Invoke web page at URL http://... . First, enter XXX into text field XXX. Then, press button XXX. Wait until response has been received from web server. Verify that text field XXX now contains the text XXX. The tool should be able to simulate up to several 1000 users, and it should be compatible with web applications using ASP.NET and AJAX.
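For illustration of the kind of “black box” script described above (this sketch is an editorial addition, not part of the original question): browser-automation APIs such as Selenium WebDriver let a tester script against page elements rather than HTTP packets, and a load tool can then run many such scripts in parallel as virtual users. The URL and element IDs below are made-up placeholders:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

class LoginScript
{
    static void Main()
    {
        // Drives a real browser, so the script is unaffected by changes to the underlying AJAX/HTTP traffic.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://example.com/login");            // hypothetical page

            driver.FindElement(By.Id("UserName")).SendKeys("testuser01");     // enter text into a field
            driver.FindElement(By.Id("LoginButton")).Click();                 // press a button

            // Wait until the server has responded and the expected element is shown.
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            wait.Until(d => d.FindElement(By.Id("WelcomeMessage")).Displayed);

            // Verify the resulting text, black-box style.
            string text = driver.FindElement(By.Id("WelcomeMessage")).Text;
            if (!text.Contains("testuser01"))
                throw new Exception("Verification failed: welcome text not found");
        }
    }
}

A load-testing tool would then be responsible for scaling a script like this out to hundreds or thousands of concurrent virtual users; the script itself stays free of HTTP packet details.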

    Read the article

  • After Adding "readonly" attribute on text box not able to remove it in one event

    - by Sreedhar K
Steps to repro (use Internet Explorer):
1. Check the unlimited check box.
2. Click on the text box (it will remove the tick/check from the check box).
3. Try to enter text in the text box – we cannot enter text in the text box.
4. Click again on the text box – now we will be able to enter text in the text box.

We tried:
1. Setting the readOnly attribute to false, i.e. $('#myinput').attr('readOnly', false);
2. Calling $('#myinput').click();

Below is the HTML code:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Make input read only</title>
    <script type="text/javascript" src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.4.min.js"></script>
</head>
<body>
    <input id="myinput" type="text" />
    <input id="mycheck" type="checkbox" />

    <script type="text/javascript">
        /* on check box click */
        $('#mycheck').click(function () {
            if ($(this).attr('checked')) {
                $('#myinput').attr('readOnly', 'readOnly');
            } else {
                $('#myinput').removeAttr('readOnly');
                /* also tried
                 * $('#myinput').attr('readOnly', false);
                 * $('#myinput').attr('readOnly', ''); */
            }
        });

        /* on text box click */
        $('#myinput').click(function () {
            $('#mycheck').removeAttr('checked');
            $('#myinput').removeAttr('readOnly');
            /* also tried
             * $('#myinput').attr('readOnly', false);
             * $('#myinput').attr('readOnly', ''); */
        });
    </script>
</body>
</html>

Live copy

    Read the article

  • Don’t just P2V that server for Testing!

    - by Jonathan Kehayias
    If you use virtualization in your company, at some point in time you might be tempted to perform a Physical-To-Virtual conversion, also known as P2V.  The ability to create a complete working copy of a physical server as a virtual machine is really useful for migrating to a virtualized datacenter, but it can also wreak havoc in your environment if you use it to generate a copy of a server for testing and the only change you make is the name of the server.  The Problem: Consider that you...(read more)

    Read the article

  • Can the authentication box appear earlier after startup?

    - by John
I'd expected the authentication "Enter your password" box to appear and need to be answered before any applications would run after startup – rather like Windows demands a password first – but it seems to be quite an unrushed process. Can it be persuaded to get going more briskly? (I'm usually about to respond to a new email before it interrupts me after starting Linux from scratch.) (I'm quite new to Linux: I'm on Ubuntu 12.04 LTS.)

    Read the article

  • Mutation Testing

    You may have a twinge of doubt when your code passes all its unit tests. They might say that the code is OK, but if the code is definitely incorrect, will the unit tests fail? Mutation Testing is a relatively simple, but ingenious, way of checking that your tests will spot the fact that your code is malfunctioning. It is definitely something that every developer should be aware of.
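To make the idea concrete (a hypothetical illustration added here, not taken from the article): a mutation testing tool makes one small change to your code – a "mutant", such as flipping >= to > – and re-runs your tests. If no test fails, the mutant "survives" and you know the suite never really checked that behaviour. A minimal C#/MSTest sketch, with made-up names and values:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Production code under test (hypothetical).
public static class Pricing
{
    public static decimal ApplyDiscount(decimal price, int quantity)
    {
        // A mutation tool might change ">=" to ">" in a generated mutant.
        if (quantity >= 10)
            return price * 0.9m;   // bulk orders get 10% off
        return price;
    }
}

[TestClass]
public class PricingTests
{
    [TestMethod]
    public void BulkOrderOfExactlyTenIsDiscounted()
    {
        // This test exercises the boundary, so it fails against the ">" mutant and "kills" it.
        Assert.AreEqual(90m, Pricing.ApplyDiscount(100m, 10));
    }
}

If the suite only ever tested quantities like 20 or 5, the >= to > mutant would survive – exactly the gap in coverage that mutation testing is designed to expose.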

    Read the article

  • Scriptable user-interfaces/frameworks for automated UI testing

    - by AareP
I'm planning on using scripting for automated UI testing. The main application is written in C#, and I want it to be scriptable, so that I can do everything an end-user can do, but programmatically. What do you think of software that provides an interface for scripting, like VBA macros in Excel? Can this be the future of all programming, big and small? What is the best way to build such an interface for your own application – DLL-based, or by parsing your own scripting language?
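One possible shape for the DLL-based option (a rough sketch under assumptions, added here rather than taken from the question): the application exposes a small automation API object, and a hosted script engine – here Roslyn's Microsoft.CodeAnalysis.CSharp.Scripting package, which is newer than the original question – runs test scripts against it. The class and member names below are invented for illustration:

using System;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;
using Microsoft.CodeAnalysis.Scripting;

// The automation surface the application chooses to expose to scripts.
// In a real application these members would drive the actual UI or its view-models.
public class AppAutomationApi
{
    public void ClickButton(string name) => Console.WriteLine($"Clicked {name}");
    public void EnterText(string field, string text) => Console.WriteLine($"Typed '{text}' into {field}");
    public string ReadField(string field) => "OK";   // placeholder result
}

public static class ScriptHost
{
    public static async Task Main()
    {
        var api = new AppAutomationApi();
        var options = ScriptOptions.Default.WithImports("System");

        // A test script written against the exposed API, not against pixels or HTTP traffic.
        const string script = @"
            EnterText(""UserName"", ""testuser"");
            ClickButton(""Login"");
            if (ReadField(""Status"") != ""OK"")
                throw new Exception(""UI verification failed"");";

        await CSharpScript.RunAsync(script, options, globals: api);
    }
}

The same pattern works with any embeddable engine (IronPython, Lua, a home-grown parser); the key design decision is keeping the exposed API small and stable so that test scripts don't break every time the UI layout changes.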

    Read the article

  • Using IIS Logs for Performance Testing with Visual Studio

    - by Tarun Arora
In this blog post I’ll show you how you can play back the IIS Logs in Visual Studio to automatically generate the web performance tests. You can also download the sample solution I am demo-ing in the blog post.

Introduction
Performance testing is as important for new websites as it is for evolving websites. If you already have your website running in production, you could mine the information available in IIS logs to analyse the dense zones (most used pages) and performance test those pages, rather than wasting time testing & tuning the least used pages in your application.

What are IIS Logs
To help with server use and analysis, IIS is integrated with several types of log files. These log file formats provide information on a range of websites and specific statistics, including Internet Protocol (IP) addresses, user information and site visits as well as dates, times and queries. If you are using IIS 7 and above you will find the log files in the following directory: C:\inetpub\logs\LogFiles

Walkthrough
1. Download and install Log Parser from the Microsoft Download Centre. You should see LogParser.dll in the install folder; the default install location is C:\Program Files (x86)\Log Parser 2.2. LogParser.dll gives us a library to query the IIS log files programmatically. By the way, if you haven’t used Log Parser in the past, it is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. More details…
2. Create a new test project in Visual Studio. Let’s call it IISLogsToWebPerfTestDemo.
3. Delete the UnitTest1.cs class that gets created by default. Right click the solution and add a project of type class library, name it IISLogsToWebPerfTestEngine. Delete the default class Program.cs that gets created with the project.
4. Under the IISLogsToWebPerfTestEngine project add references to:
   Microsoft.VisualStudio.QualityTools.WebTestFramework – c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.WebTestFramework.dll
   LogParser (also called MSUtil) - c:\users\tarora\documents\visual studio 2010\Projects\IisLogsToWebPerfTest\IisLogsToWebPerfTestEngine\obj\Debug\Interop.MSUtil.dll
5. Right click the IISLogsToWebPerfTestEngine project and add a new class – IISLogReader.cs. The IISLogReader class queries the IIS logs using Log Parser.
using System;
using System.Collections.Generic;
using System.Text;
using MSUtil;
using LogQuery = MSUtil.LogQueryClassClass;
using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass;
using LogRecordSet = MSUtil.ILogRecordset;
using Microsoft.VisualStudio.TestTools.WebTesting;
using System.Diagnostics;

namespace IisLogsToWebPerfTestEngine
{
    // By making use of log parser it is possible to query the iis log using select queries
    public class IISLogReader
    {
        private string _iisLogPath;

        public IISLogReader(string iisLogPath)
        {
            _iisLogPath = iisLogPath;
        }

        public IEnumerable<WebTestRequest> GetRequests()
        {
            LogQuery logQuery = new LogQuery();
            IISLogInputFormat iisInputFormat = new IISLogInputFormat();

            // currently these columns give us sufficient information to construct the web test requests
            string query = @"SELECT s-ip, s-port, cs-method, cs-uri-stem, cs-uri-query FROM " + _iisLogPath;
            LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat);

            // Apply a bit of transformation
            while (!recordSet.atEnd())
            {
                ILogRecord record = recordSet.getRecord();
                if (record.getValueEx("cs-method").ToString() == "GET")
                {
                    string server = record.getValueEx("s-ip").ToString();
                    string path = record.getValueEx("cs-uri-stem").ToString();
                    string querystring = record.getValueEx("cs-uri-query").ToString();

                    StringBuilder urlBuilder = new StringBuilder();
                    urlBuilder.Append("http://");
                    urlBuilder.Append(server);
                    urlBuilder.Append(path);

                    if (!String.IsNullOrEmpty(querystring))
                    {
                        urlBuilder.Append("?");
                        urlBuilder.Append(querystring);
                    }

                    // You could make substitutions by introducing parameterized web tests.
                    WebTestRequest request = new WebTestRequest(urlBuilder.ToString());
                    Debug.WriteLine(request.UrlWithQueryString);
                    yield return request;
                }
                recordSet.moveNext();
            }

            Console.WriteLine(" That's it! Closing the reader");
            recordSet.close();
        }
    }
}

6. Connect the dots by adding the project reference ‘IisLogsToWebPerfTestEngine’ to ‘IisLogsToWebPerfTest’. Right click the ‘IisLogsToWebPerfTest’ project and add a new class ‘WebTest1Coded.cs’. The WebTest1Coded class inherits from the WebTest class. By overriding the GetRequestEnumerator method we can inject the log files into the IISLogReader class, which uses Log Parser to query the log file and extract the web requests, generating the web test requests which are yielded back for playback when the test is run.

namespace IisLogsToWebPerfTest
{
    using System;
    using System.Collections.Generic;
    using System.Text;
    using Microsoft.VisualStudio.TestTools.WebTesting;
    using Microsoft.VisualStudio.TestTools.WebTesting.Rules;
    using IisLogsToWebPerfTestEngine;

    // This class is a coded web performance test implementation, that simply passes
    // the path of the iis logs to the IISLogReader class which does the heavy
    // lifting of reading the contents of the log file and converting them to tests.
    // You could have multiple such classes that inherit from WebTest and implement
    // the GetRequestEnumerator method and pass different log files for different tests.
    public class WebTest1Coded : WebTest
    {
        public WebTest1Coded()
        {
            this.PreAuthenticate = true;
        }

        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // substitute the highlighted path with the path of the iis log file
            IISLogReader reader = new IISLogReader(@"C:\Demo\iisLog1.log");
            foreach (WebTestRequest request in reader.GetRequests())
            {
                yield return request;
            }
        }
    }
}

7. It’s time to fire the test off and see the IIS log played back as a web performance test.
From the Test menu choose Test View; in the Test View window you should be able to see the WebTest1Coded test show up. Highlight the test and press Run Selection (you can also debug the test in case you face any failures during test execution).
8. Optionally, you can create a Load Test by keeping ‘WebTest1Coded’ as the base test.

Conclusion
You have just helped your testing team – you have now become the coolest developer in your organization! Jokes apart, Log Parser and web performance tests together allow you to save a lot of time by not having to worry about what to test, or even worrying about how to record the test. If you haven’t already, download the solution from here. You can take this to the next level by using Log Parser to extract the log files as part of an end of day batch to a database. See the usage trends by using this solution over a longer term, and have your tests consume the web requests now stored in the database to generate the web performance tests. If you like the post, don’t forget to share … Keep RocKiNg!
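(An editorial footnote, not part of the original post: the “dense zones” analysis mentioned in the introduction can be done with the same MSUtil interop that IISLogReader uses – a grouped Log Parser query returning hits per page. The log path and TOP 20 cut-off below are placeholder assumptions:)

using System;
using MSUtil;
using LogQuery = MSUtil.LogQueryClassClass;
using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass;

class DenseZoneReport
{
    static void Main()
    {
        var logQuery = new LogQuery();
        var inputFormat = new IISLogInputFormat();

        // Hits per page, busiest first - the "dense zones" worth performance testing.
        string query = @"SELECT TOP 20 cs-uri-stem, COUNT(*) AS Hits " +
                       @"FROM C:\Demo\iisLog1.log " +          // placeholder log path
                       @"GROUP BY cs-uri-stem ORDER BY Hits DESC";

        ILogRecordset recordSet = logQuery.Execute(query, inputFormat);
        while (!recordSet.atEnd())
        {
            ILogRecord record = recordSet.getRecord();
            Console.WriteLine("{0,8}  {1}", record.getValueEx("Hits"), record.getValueEx("cs-uri-stem"));
            recordSet.moveNext();
        }
        recordSet.close();
    }
}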

    Read the article

< Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >