Search Results

Search found 4085 results on 164 pages for 'jamie love'.


  • Part 2&ndash;Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge, and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So, let's get down and dirty and start setting up the Test Rig.

    What components of Azure will I be using for building the Test Rig in the cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.

    Azure Connect: The Test Controller is on premise and the Test Agents are in the cloud, so how will they talk? To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization's network and roles running in Windows Azure. With this you can join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to the overview of Windows Azure Connect, and there is a very useful video explaining everything you wanted to know about Windows Azure Connect.

    Azure Compute: Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles'. Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the three role types. Developers are free to use the .NET Framework or other software that runs on Windows with the Worker role or Web role, and can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

    Virtual Machine Size    CPU Cores    Memory      Cost Per Hour
    Extra Small             Shared       768 MB      $0.04
    Small                   1            1.75 GB     $0.12
    Medium                  2            3.50 GB     $0.24
    Large                   4            7.00 GB     $0.48
    Extra Large             8            14.00 GB    $0.96

    You might want to review the Windows Azure Pricing FAQ.

    Let's get started building the Test Rig. Configuration:

    Machine    Role                                   Comments
    VM – 1     Domain Controller for Playpit.com      On Premise
    VM – 2     TFS, Test Controller                   On Premise
    VM – 3     Test Agent                             Cloud

    In this blog post I assume that you already have the domain, Team Foundation Server and Test Controller installed and set up. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

    I. Let's start building VM – 3: The Test Agent
    Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud template. Choose the Worker Role for the reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
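    For orientation, this is roughly what the generated WorkerRole.cs looks like in the Azure SDK worker role template of that vintage; the namespace and class names depend on what you called the project, and it is shown here only so you know what "no code changes required" refers to, not as code you need to write:

        using System.Diagnostics;
        using System.Net;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        namespace WorkerRole1
        {
            public class WorkerRole : RoleEntryPoint
            {
                // The role simply idles; the Test Agent software we install later does the real work.
                public override void Run()
                {
                    Trace.WriteLine("WorkerRole1 entry point called", "Information");
                    while (true)
                    {
                        Thread.Sleep(10000);
                        Trace.WriteLine("Working", "Information");
                    }
                }

                public override bool OnStart()
                {
                    // Default outbound connection limit, left as generated by the template.
                    ServicePointManager.DefaultConnectionLimit = 12;
                    return base.OnStart();
                }
            }
        }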
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure connect module in the Test Agent configuration, now the connect end points need to be enabled on the on premise machines, let’s have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller Log on to the Windows Azure Management Portal and click on Virtual Network Click on Virtual Network, if you already have a subscription you should see the below screen shot, if not, you would be asked to complete the subscription first        Click on Install Local Endpoints from the top left on the panel and you get a url appended with a token id in it, remember the token i showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file.        Copy the url to the clip board and paste it in IE explorer (important, the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later, you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure connect icon in the system tray.                         Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure connect icon in the system tray, a bit of binging with Google revealed that the azure connect icon is only shown when the ‘Windows Azure Connect Endpoint’ Service is started. So go to services.msc and make sure that the service is started, if not start it, unfortunately again, the service did not start for me on a manual start and i realised that one of the dependant services was disabled, you can look at the service dependencies and start them and then start windows azure connect. Bottom line, you need to start Windows Azure connect service before you can proceed. Please refer here on MSDN for more on Troubleshooting Windows Azure connect. (Follow the next step as well)   Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, lets call it ‘Test Rig’. Make sure you add the VM – 2 (the TFS Server VM where you just installed the endpoint).       Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’ you will notice that the disconnected status of the icon should change to ready for connection. III. Importing Certificate in to Windows Azure Management Portal But before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on ‘Hosted Services, Storage Accounts & CDN’ and then ‘Management Certificates’ followed by Add Certificates as shown in the screen shot below        Browse to the location where you saved the certificate earlier, remember… Refer to Step I in case you forgot.        Now you should be able to see the imported certificate here, make sure the thumbprint of the certificate matches the one you inserted in the config files        IV. Publish Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio and right click the Windows Azure project and select Publish. Verify the infomration in the wizard, from the advanced settings tab, you can also enabled capture of intellitrace or profiling information.         Click Next and Click Publish! 
    From the View menu bar, select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of the new Worker Role.

    Once the deployment is complete you should be able to RDP (go to the run prompt, type mstsc and enter the machine name in the pop up) into the Test Agent Worker Role VM from the Playpit network using the domain admin user account. If you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed. The good news is that, because you imported the Connect module, you can still connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure: log in with the user name and password you specified in the config file for the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (just enter the password unencrypted), then fix the issue or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

    So, log in to the Test Agent Worker Role VM with the Playpit domain administrator and verify that you can log in, that the machine is connected to the domain, and that the Connect service is running successfully. If yes, give yourself a pat on the back – you are 80% mission accomplished!

    Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, select Test Rig and click Edit Group to edit the Test Rig group you created earlier. In the Connect to section, click Add to select the worker role you have just deployed. Also, check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and test agents, and between the test agents themselves. Click Save.

    Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

    V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller
    Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso'), extract the iso and navigate to where you have extracted it. In my case, I have extracted the iso to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

    Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the Test Agent Configuration Tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block something otherwise). In the run options, you can select 'service'; you do not need to run the agent interactively unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, I would never do that; I would create a separate test user service account for this purpose.
    But for the blog post, we are using the most powerful user so that no policies or restrictions block you. Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permission issue or a firewall blocking communication.

    And now the moment of truth! Go to VM – 2, open up Visual Studio and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured here.

    VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig
    I have various blog posts on Performance Testing with Visual Studio Ultimate; you can follow the links and videos below.
    Blog Posts:
    - Part 1 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 2 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 3 – Performance Testing using Visual Studio 2010 Ultimate
    Videos:
    - Test Tools Configuration & Settings in Visual Studio
    - Why & How to Record Web Performance Tests in Visual Studio Ultimate
    - Goal Driven Load Testing using Visual Studio Ultimate
    Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure Test Rig: create a new Test Settings file, change the test execution method to 'Remote Execution' and select the test controller you have configured the Worker Role Test Agent against – in our case VM – 2. So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.

    Review and What's Next?
    A quick recap of the benefits of running the Test Rig in the cloud and what I will be covering in the next blog post – and I would love to hear your feedback!
    Advantages:
    - Utilizing the power of Azure compute to run a heavy virtual user load.
    - Benefiting from Azure's flexibility: destroy Test Agents when not in use; it takes less than 25 minutes to spin up a new Test Agent.
    - Most importantly, testing network latency (network latency and speed of connection are two different things – network latency is usually very hard to test): by placing Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.
    Next Steps:
    The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to make the role deployment, the unattended install of the test agent software and the registration of the test agent with the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • Search Alternative Search Engines from within Bing’s Search Page

    - by Asian Angel
    So you love using Bing Search but may still be curious to see what another search engine would return. Now you can search using another search engine from within the Bing Search page and enjoy numbered results, using two simple user scripts. Note: these user scripts may also be added to other browsers (i.e. Iron, Opera, etc.).

    Before: Bing Search does nicely on searches, but what if you would like to try the same search with another search engine? Having to manually open a new tab, navigate to the appropriate website, and then start a new search is not too convenient. Another possible frustration for some people may be knowing just how many search results they have looked through. Well, both of these small problems are easy to fix with two wonderful user scripts.

    Installing the Scripts: The first script that we installed (you may do either one first) was for adding alternative search engine links. Click "Install" to get started… Note: for our example we had the Greasemonkey extension installed. When the confirmation window pops up, click on "Install" to finish adding the user script to Firefox. Repeating the same procedure as above, add your second script to Firefox. Confirm the second user script installation and you are ready to enjoy nicer Bing Search results.

    After: As you can see, there are two small, unobtrusive differences in our search results. The alternative search engine links are conveniently located at the top of the page, and now you can easily tell just how many search results you have looked through. Here are the results when we transferred the same search over to Yahoo, and then to Ask Search. The alternative search links can be very helpful if Bing is not providing the kind of search results you are hoping for. Still going very nicely past the 100 mark…

    Conclusion: If you have been wanting a small booster for searching with Bing, then these two scripts will get you on your way. Using the Opera browser? See our how-to for adding user scripts to Opera here.

    Links: Install the Bing (Alternate Search Engine Links) User Script | Install the Bing Numbered Search Results User Script | Download the Greasemonkey extension for Firefox (Mozilla Add-ons) | Download the Stylish extension for Firefox (Mozilla Add-ons)

    Read the article

  • Blogging: MacJournal & Windows Live Writer

    - by Jeff Julian
    One thing I have learned about using a Mac is that Apple does not produce very many free applications. The ones they do are typically not full featured and to get the full feature you need to buy their upgraded version. For example, when it comes to Photo editing and cataloging, iPhoto is not a solution for large files or RAW processing, you need Aperture which is a couple hundred dollars. I am not complaining because I like it when an application has a product team who generates revenue with it, because the chance of them being around longer seems to be higher. What is my point in all of this? Apple does not produce a product for blogging/journaling like Microsoft does with Windows Live Writer. I love Windows Live Writer. If you are on a Windows box, it is a required tool in your toolbox if you publish to a blog. The cleanness of the interface, integration with most blog APIs and ability to Save Local or Publish as a Draft make capturing your thoughts for publishing now or later a very easy task. My hope is that Microsoft will port it to the Mac, but I don’t believe that will ever happen as it is not a revenue generating product and Microsoft doesn’t often port to a Mac besides Remote Desktop Connection and MSN Messenger. For my configuration I used to use only Boot Camp on my two MacBook Pros I have owned in the past three years because I’m a PC, but after four different rebuilds (not typically due to Windows, but Boot Camp or Parallels) I decided to move off the Boot Camp platform and to VMWare Fusion. This is a complete separate blog post that I should spec out in MacJournal, but I now always boot into the Mac OS and use Fusion for my AJI Software VM or my client’s VMs. It just seems to work better for me and I have a very nice way to backup my Windows environments with VMWare.Needless to say, there was need in my new laptop configuration for a blogging tool that worked natively on a Mac. I don’t like to power up my machine for writing a document or working on an image and need to boot up a VM just so I can use Windows. Some would say why not just use a Windows laptop and put the MBP on eBay? It is just a preference and right now, I like the Mac OS for day to day work. So in comes MacJournal, part of the current MacHeist package for $19.95 (MacJournal is normally $39.95). This product is definitely not WLW, but WLW is missing some features I like in MacJournal. I hope the price point comes down on MacJournal cause I could see paying $19.95 for it, but it is always hard for me to buy a piece of software for $39.95 when I can use something else. But I am a cheapskate when it comes to software packages. I suggest if you are using a Mac to drop what you are doing pick up the MacHeist bundle today before it is over, but if you are reading this later, than download the trial and see if MacJournal is a solution for you. If you have any other suggestions that are as nice or cheaper, please comment.Product LinksMacJournal by Mariners Software $39.95 (part of MacHeist bundle for $19.95 with only one day left)Windows Live Writer by MicrosoftThis post was created using MacJournal.[Update: The joys of formatting. Make sure if you are a Geekswithblogs.net member that you use this configuration to setup the Metablog formatting of paragraphs correctly]

    Read the article

  • PowerShell Control over Nikon D3000 Camera

    My wife got me a Nikon D3000 camera for Christmas last year, and I'm loving it but still trying to wrap my head around some of its features. For instance, when you plug it into a computer via USB, it doesn't show up as a drive like most cameras I'm used to, but rather it shows up as Computer\D3000. After a bit of research, I've learned that this is because it implements the MTP/PTP protocol, and thus doesn't actually let Windows mount the camera's storage as a drive letter. Nikon describes the use of the MTP and PTP protocols in their cameras here.

    What I'm really trying to do is gain access to the camera's file system via PowerShell. I've been using a very handy PowerShell script to pull pictures off of my cameras and organize them into folders by date. I'd love to be able to do the same thing with my Nikon D3000, but so far I haven't been able to figure out how to get access to the files in PowerShell. If you know how, I'd appreciate any links/tips you can provide. All I could find is a shareware product called PTPdrive, which I'm not prepared to shell out money for (yet). (And yes, you can do much the same thing with Windows 7's Import Pictures and Videos wizard, which is pretty good too.)

    However, in my searching, I did find some really cool stuff you can do with PowerShell and one of these cameras, like actually taking pictures via PowerShell commands. Credit for this goes to James O'Neill and Mark Wilson. Here's what I was able to do:

    Taking Pictures via PowerShell with the D3000
    First, connect your camera, turn it on, and launch PowerShell. Execute the following commands to see what commands your device supports:

        $dialog = New-Object -ComObject "WIA.CommonDialog"
        $device = $dialog.ShowSelectDevice()
        $device.Commands

    You should see something like this: Now, to take a picture, simply point your camera at something and then execute this command:

        $device.ExecuteCommand("{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}")

    Imagine my surprise when this actually took a picture (with auto-focus)! Imagine what you could do with a camera completely under the control of your computer. Time-lapse photography would be pretty simple, for instance, with a very simple loop that takes a picture and then sleeps for a minute (or whatever time period). Hooked up to a laptop for portability (and an A/C power supply), this would be pretty trivial to implement. I may have to give it a shot and report back.
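    As a rough sketch of that time-lapse idea (an illustration, not from the original post), the same WIA calls shown above can simply be dropped into a loop; the command GUID is the take-picture command used earlier, and the shot count and interval are arbitrary assumptions:

        # Hypothetical time-lapse: one shot per minute for an hour.
        $dialog = New-Object -ComObject "WIA.CommonDialog"
        $device = $dialog.ShowSelectDevice()
        $takePicture = "{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}"   # same WIA command ID as above
        for ($i = 0; $i -lt 60; $i++) {
            $device.ExecuteCommand($takePicture) | Out-Null       # trigger the shutter
            Start-Sleep -Seconds 60                                # wait before the next frame
        }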

    Read the article

  • Pigs in Socks?

    - by MightyZot
    My wonderful wife Annie surprised me with a cruise to Cozumel for my fortieth birthday. I love to travel. Every trip is ripe with adventure, crazy things to see and experience. For example, on the way to Mobile Alabama to catch our boat, some dude hauling a mobile home lost a window and we drove through a cloud of busting glass going 80 miles per hour! The night before the cruise, we stayed in the Malaga Inn and I crawled UNDER the hotel to look at an old civil war bunker. WOAH! Then, on the way to and from Cozumel, the boat plowed through two beautiful and slightly violent storms. But, the adventures you have while travelling often pale in comparison to the cult of personalities you meet along the way.  :) We met many cool people during our travels and we made some new friends. Todd and Andrea are in the publishing business (www.myneworleans.com) and teaching, respectively. Erika is a teacher too and Matt has a pig on his foot. This story is about the pig. Without that pig on Matt’s foot, we probably would have hit a buoy and drowned. Alright, so…this pig on Matt’s foot…this is no henna tatt, this is a man’s tattoo. Apparently, getting tattoos on your feet is very painful because there is very little muscle and fat and lots of nifty nerves to tell you that you might be doing something stupid. Pig and rooster tattoos carry special meaning for sailors of old. According to some sources, having a tattoo of a pig or rooster on one foot or the other will keep you from drowning. There are many great musings as to why a pig and a rooster might save your life. The most plausible in my opinion is that pigs and roosters were common livestock tagging along with the crew. Since they were shipped in wooden crates, pigs and roosters were often counted amongst the survivors when ships succumbed to Davy Jones’ Locker. I didn’t spend a whole lot of time researching the pig and the rooster, so consider these musings as you would a grain of salt. And, I was not able to find a lot of what you might consider credible history regarding the tradition. What I did find was a comfort, or solace, in the maritime tradition. Seems like raw traditions like the pig and the rooster are in danger of getting lost in a sea of non-permanence. I mean, what traditions are us old programmers and techies leaving behind for future generations? Makes me wonder what Ward Christensen has tattooed on his left foot.  I guess my choice would have to be a Commodore 64.   (I met Ward, by the way, in an elevator after he received his Dvorak awards in 1992. He was a very non-assuming individual sporting business casual and was very much a “sailor” of an old-school programmer. I can’t remember his exact words, but I think they were essentially that he felt it odd that he was getting an award for just doing his work. I’m sure that Ward doesn’t know this…he couldn’t have set a more positive example for a young 22 year old programmer. Thanks Ward!)

    Read the article

  • This Week in Geek History: Gmail Goes Public, Deep Blue Wins at Chess, and the Birth of Thomas Edison

    - by Jason Fitzpatrick
    Every week we bring you a snapshot of the week in Geek History. This week we’re taking a peek at the public release of Gmail, the first time a computer won against a chess champion, and the birth of prolific inventor Thomas Edison. Gmail Goes Public It’s hard to believe that Gmail has only been around for seven years and that for the first three years of its life it was invite only. In 2007 Gmail dropped the invite only requirement (although they would hold onto the “beta” tag for another two years) and opened its doors for anyone to grab a username @gmail. For what seemed like an entire epoch in internet history Gmail had the slickest web-based email around with constant innovations and features rolling out from Gmail Labs. Only in the last year or so have major overhauls at competitors like Hotmail and Yahoo! Mail brought other services up to speed. Can’t stand reading a Week in Geek History entry without a random fact? Here you go: gmail.com was originally owned by the Garfield franchise and ran a service that delivered Garfield comics to your email inbox. No, we’re not kidding. Deep Blue Proves Itself a Chess Master Deep Blue was a super computer constructed by IBM with the sole purpose of winning chess matches. In 2011 with the all seeing eye of Google and the amazing computational abilities of engines like Wolfram Alpha we simply take powerful computers immersed in our daily lives for granted. The 1996 match against reigning world chest champion Garry Kasparov where in Deep Blue held its own, but ultimately lost, in a  4-2 match shook a lot of people up. What did it mean if something that was considered such an elegant and quintessentially human endeavor such as chess was so easy for a machine? A series of upgrades helped Deep Blue outright win a match against Kasparov in 1997 (seen in the photo above). After the win Deep Blue was retired and disassembled. Parts of Deep Blue are housed in the National Museum of History and the Computer History Museum. Birth of Thomas Edison Thomas Alva Edison was one of the most prolific inventors in history and holds an astounding 1,093 US Patents. He is responsible for outright inventing or greatly refining major innovations in the history of world culture including the phonograph, the movie camera, the carbon microphone used in nearly every telephone well into the 1980s, batteries for electric cars (a notion we’d take over a century to take seriously), voting machines, and of course his enormous contribution to electric distribution systems. Despite the role of scientist and inventor being largely unglamorous, Thomas Edison and his tumultuous relationship with fellow inventor Nikola Tesla have been fodder for everything from books, to comics, to movies, and video games. Other Notable Moments from This Week in Geek History Although we only shine the spotlight on three interesting facts a week in our Geek History column, that doesn’t mean we don’t have space to highlight a few more in passing. This week in Geek History: 1971 – Apollo 14 returns to Earth after third Lunar mission. 1974 – Birth of Robot Chicken creator Seth Green. 1986 – Death of Dune creator Frank Herbert. Goodnight Dune. 1997 – Simpsons becomes longest running animated show on television. Have an interesting bit of geek trivia to share? Shoot us an email to [email protected] with “history” in the subject line and we’ll be sure to add it to our list of trivia. 

    Read the article

  • Book Review: Middleware Management with Oracle Enterprise Manager Grid Control 10g R5

    - by olaf.heimburger
    When you are familiar with the Oracle Database and Middleware stack, chances are that you have come across Enterprise Manager. It comes in many versions for the database or the middleware and differs in its features. If you meet someone who talks about Enterprise Manager, it is quite possible that this person is talking about something completely different - Enterprise Manager Grid Control. Enterprise Manager Grid Control is the Oracle product for the data center that monitors all databases - and middleware components as well as operating systems. Since the database part is taken for granted, it needs some additional steps to get into the world of centralized middleware management. That's what this book is for - bringing you into the world of middleware management.

    The Authors
    This book is written by Debu Panda, former Product Management Director of the Oracle Fusion Middleware Management development team, and Arvind Maheshwari, Senior Software Development Manager of the Oracle Enterprise Manager development team.

    The Book
    Oracle Enterprise Manager conceptually works for many different management areas. As a user you often think of managing databases with it. This is a wide area and deserves another book. The least known area is middleware management, and that's what the book aims for. The first 3 chapters cover the key features of Enterprise Manager Grid Control, Installing Enterprise Manager Grid Control, and Enterprise Manager Key Concepts and Subsystems - the foundation you need to understand the whole software and the following chapters. Read them in order and you are well prepared for the next 10 chapters on managing the various bits and pieces in your data center. The list of bits and pieces is always a surprise, no matter how often you open the book. You can manage Oracle WebLogic Server, Oracle Application Server, Oracle Forms and Reports Services, SOA Suite 10g, Oracle Service Bus 10g, Oracle Internet Directory, Oracle Virtual Directory, Oracle Access Manager, Oracle Identity Manager, Oracle Identity Federation, Oracle Coherence Cluster, non-Oracle middleware like Apache, Tomcat, JBoss, IBM WebSphere and much, much more. The chapters for these components can be read in any order you like; you only need the foundation chapters, then continue with the parts relevant to your data center. Once you are done with them, don't forget to read the last chapter, Best Practices for Managing Middleware Components using Enterprise Manager. Read it, understand it, and implement it in your organization. This will save you valuable time and budget.

    Recommendation
    This book is mainly written for Enterprise Manager newbies and saves you a lot of time compared with going through the standard product documentation. All chapters are considerably short and tell you exactly what you need to know to get started. Nothing more and nothing less. That's the beauty of it and why I love it. Due to its limited scope it will not cover everything you'd like to know, but it gets you started and interested in more insights. That, however, is the job of the product documentation.

    The Details
    Title: Middleware Management with Oracle Enterprise Manager Grid Control 10g R5
    Authors: Debu Panda and Arvind Maheshwari
    Paperback: 310 pages
    ISBN 13: 978-1-847198-34-1

    Read the article

  • Silverlight Cream for March 24, 2010 -- #819

    - by Dave Campbell
    In this Issue: Nokola, Tim Heuer, Christian Schormann, Brad Abrams, David Kelley, Phil Middlemiss, Michael Klucher, Brandon Watson, Kunal Chowdhury, Jacek Ciereszko, and Unni. Shoutouts: Michael Klucher has a short post up For Love of the Game (Development)…, where he's looking for some input from the developer community. Shawn Hargreaves has a link post up of all the Windows Phone MIX10 presentations Chris Cavanagh has a Soft-Body Physics for Windows Phone 7 post up that goes along with one he did 1-1/2 years ago! Jeff Weber posted An Open Letter To Microsoft Regarding The Silverlight Game Development Community Pete Brown posted his MIX10 Recap ... lots of information, and discussion of what he was up to ... I liked the Trivia app Pete... glad to hear that was yours :) I've changed my mind and added a WP7 tag to SilverlightCream. I'll straighten out all the Mobile plus Silverlight links to point at the WP7 tab hopefully tonight. From SilverlightCream.com: EasyPainter Source Pack 3: Adorners, Mouse Cursors and Frames Nokola has been busy with EasyPainter adding in Custom, Extensible Mouse Cursors and Customizable Adorners with extensible adorner frames, and best of all... all with source code! Simulate Geo Location in Silverlight Windows Phone 7 emulator Among the things we don't have in our WP7 emulators is Geo Location... Tim Heuer comes to the rescue with a simulator for it... too cool, Tim! Blend 4: About Path Layout, Part II Christian Schormann is back with Part 2 of his tutorial sequence on the new Path Layout. Really good info and definitely cool presentations of the control. Silverlight 4 + RIA Services - Ready for Business: Exposing OData Services Brad Abrams continues his series with a post on exposing OData services. This looks like a great tutorial on the topic... will probably resolve some questions I've been having :) No Silverlight and Preloader Experience(ish) - in 10 seconds... David Kelley exposes the code he uses on his site, designed to be friendly to Silverlight and non-Silverlight users alike. Merged Dictionaries of Style Resources and Blend Phil Middlemiss has a nice article up on Merged Dictionaries and using multiple resource dictionaries that the app chooses, but also be compatible with Prism and Blend while not eating your system resources out of house and home. XNA Game Studio and Windows Phone Emulator Compatibility Michael Klucher has a definitive post up about getting your XNA and system up-to-speed for WP7... a must-read if you've been running any of the other XNA drops. Windows Phone 7 301 Redirect Bug Brandon Watson reports a 301 Redirect bug on WP7 ... see the code and how he got it, then follow along as he explains all the debug paths he took and what the resolution (?) really is :) Silverlight 4: How to use the new Printing API? Kunal Chowdhury has a tutorial up on printing with Silverlight 4 RC... from the project layout to printing and then printing a smaller section... all good Printing problem in Silverlight 4.0 RC - loading images in code behind Jacek Ciereszko also is writing about printing, and in his case he had problems with loading an image dynamically and printing it... plus he provides a solution to the 'blank page' problem. ToolboxExampleAttribute - a new extension point in Blend 4 (and a few other extensibility related changes) Unni has an article up about Expression Blend 4's new ToolboxExampleAttribute which allow you to have multiple examples of the same type resulting in different XAML produced. Stay in the 'Light! 

    Read the article

  • Social Media Talk: Facebook, Really?? How Has It Become This Popular??

    - by david.talamelli
    If you have read some of my previous posts over the past few years either here or on my personal blog David's Journal on Tap you will know I am a Social Media enthusiast. I use various social media sites everday in both my work and personal life. I was surprised to read today on Mashable.com that Facebook now Commands 41% of Social Media Trafic. When I think of the Social Media sites I use most, the sites that jump into my mind first are LinkedIn, Blogging and Twitter. I do use Facebook in both work and in my personal life but on the list of sites I use it probably ranks closer to the bottom of the list rather than the top. I know Facebook is engrained in everything these days - but really I am not a huge Facebook fan - and I am finding that over the past 3-6 months my interest in Facebook is going down rather than up. From a work perspective - SM sites let me connect with candidates and communities and they help me talk about the things that I am doing here at Oracle. From a personal perspective SM sites let me keep in touch with friends and family both here and overseas in a really simple and easy way. Sites like LinkedIn give me a great way to proactively talk to both active and passive candidates. Twitter is fantastic to keep in touch with industry trends and keep up to date on the latest trending topics as well as follow conversations about whatever keyword you want to follow. Blogging lets me share my thoughts and ideas with others and while FB does have some great benefits I don't think the benefits outweigh the negatives of using FB. I use TweetDeck to keep track of my twitter feeds, the latest LinkedIn updates and Facebook updates. Tweetdeck is a great tool as it consolidates these 3 SM sites for me and I can quickly scan to see the latest news on any of them. From what I have seen from Facebook it looks like 70%-80% of people are using FB to grow their farm on farmville, start a mafia war on mafiawars or read their horoscope, check their love percentage, etc...... In between all these "updates" every now and again you do see a real update from someone who actually has something to say but there is so much "white noise" on FB from all the games and apps that is hard to see the real messages from all the 'games' information. I don't like having to scroll through what seems likes pages of farmville updates only to get one real piece of information. For me this is where FB's value really drops off. While I use SM everyday I try to use SM effectively. Sifting through so much noise is not effective and really I am not all that interested in Farmville, MafiaWars or any similar game/app. But what about Groups and Facebook Ads?? Groups are ok, but I am not sure I would call them SM game changers - yes there is a group for everything out there, but a group whether it is on FB or not is only as good as the community that supports and participates in it. Many of the Groups on FB (and elsewhere) are set up and never used or promoted by the moderator. I have heard that FB ads do have an impact, and I have not really looked at them - the question of cost jumps and return on investment comes to my mind though. FB does have some benefits, it is a great way to keep in touch with people and a great way to talk to others. I think it would have been interesting to see a different statistic measuring how effective that 41% of Social Media Traffic via FB really is or is it just a case of more people jumping online to play games. 
To me FB does not equal SM effectiveness, at the moment it is a tool that I sometimes need to use as opposed to want to use. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • ASP.NET Web API - Screencast series with downloadable sample code - Part 1

    - by Jon Galloway
    There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in original announcement post, but we've since added the sample code so I thought it was worth pointing the series out specifically. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also if you're following along with the samples that, since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches with sample code demo 3. Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up. Part 1: Your First Web API [Video and code on the ASP.NET site] This screencast starts with an overview of why you'd want to use ASP.NET Web API: Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.) Scale (who doesn't love the cloud?!) Embrace HTTP (a focus on HTTP both on client and server really simplifies and focuses service interactions) Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project. using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace NewProject_Mvc4BetaWebApi.Controllers { public class ValuesController : ApiController { // GET /api/values public IEnumerable<string> Get() { return new string[] { "value1", "value2" }; } // GET /api/values/5 public string Get(int id) { return "value"; } // POST /api/values public void Post(string value) { } // PUT /api/values/5 public void Put(int id, string value) { } // DELETE /api/values/5 public void Delete(int id) { } } } Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and built-in developer tools available in all modern browsers. For simplicity I used Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like. A few important things to note: This class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. They're similar in places where API's and HTML returning controller uses are similar, and different where API and HTML use differ. A good example of where those things are different is in the routing conventions. In an HTTP controller, there's no need for an "action" to be specified, since the HTTP verbs are the actions. 
We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it'll automatically be handled by the Delete method in an ApiController. The comments above the API methods show sample URLs and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL. That sample action returns a list of values. To get just one value back, we'd browse to /api/values/5. That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application.
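If you'd rather poke at these URLs from code than from the F12 tools, here is a minimal console client of my own (it isn't part of the screencast samples) that assumes the template project is running locally; the port 12345 is a placeholder, so match the base address to whatever port Visual Studio assigns your project.
    using System;
    using System.Net.Http;
    class ApiSmokeTest
    {
        static void Main()
        {
            // Hypothetical local address - adjust to your project's port.
            using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:12345/") })
            {
                // GET /api/values -> the hard-coded ["value1","value2"] from the template controller.
                var list = client.GetAsync("api/values").Result;
                Console.WriteLine(list.Content.ReadAsStringAsync().Result);
                // GET /api/values/5 -> "value"
                var single = client.GetAsync("api/values/5").Result;
                Console.WriteLine(single.Content.ReadAsStringAsync().Result);
                // DELETE /api/values/5 -> routed to Delete(int id) by verb alone, no action name in the URL.
                var delete = client.DeleteAsync("api/values/5").Result;
                Console.WriteLine((int)delete.StatusCode);
            }
        }
    }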

    Read the article

  • SQL SERVER – What is a Technology Evangelist?

    - by pinaldave
    When you hear that someone is an “evangelist,” the first thing that might pop into your mind is the Christian church.  In fact, the term did come from Christianity, and basically means someone who spreads the news about their faith.  In the technology world, the same definition is true. Technology evangelists are individuals who, professionally or in their spare time, spread the news about the latest new products.  Sounds like a salesperson, right?  No, they are absolutely different. Salespeople also keep up to date with a large number of people, and like to convince others to buy their product – and some will go to any lengths to sell!  An evangelist, on the other hand, is brutally honest about the product, even if sometimes it means not making a sale.  An evangelist is out there to tell the TRUTH.  A salesperson needs to make sales. An Evangelist offers a Solution independent of the Technology used – a Salesperson offers a Particular Technology. With this definition in mind, you can probably think of a few technology evangelists you already know.  Maybe it’s a relative or a neighbor, someone who loves keeping up with the latest trends and is always willing to tell you about them if you ask even the simplest question.  And, in fact, they probably are evangelists and don’t even know it.  For a long time, the work of technology evangelism was in the hands of community and community technology leaders. Luckily, various organizations have now understood the importance of the community and of helping the community reach its goals. This has led them to create the role of “Technology Evangelist”. Let me talk about one of the most famous Evangelists of SQL Server technology. A Technology Evangelist belongs only to technology, above any country, race, location or any other thing. They are dedicated to the technology. Vinod Kumar is such a man, who has given a lot to the community. For years he was a Technology Evangelist for Microsoft, and maintained a blog that was dedicated to spreading his enthusiasm for his favorite products.  He is one of the most respected Evangelists in the field, and has done a lot of work to define the job for other professionals. Vinod’s career has since progressed to the Microsoft Technology Center (read his post), but he is continuing to be a strong presence in the evangelism community.  I have a lot of respect for Vinod.  He has done a lot for the community and technology evangelism.  Everybody dreams of serving the community the way he does, and he is a great role model for evangelists everywhere. On his blog, Vinod created one of the best descriptions of a Technology Evangelist.  It defined the position and also made the distinction between evangelist and salesperson extremely clear.  I will include the highlights of that list here, because no one can say it better than Vinod:
    - Bundle of energy – Passion is their middle name
    - Wonderful Story tellers
    - Empathy, Trust, Loyalty, Openness, Accessibility and Warmth
    - Technology Enthusiast – Doers
    - Love people, people and more people – Community oriented
    - Unique Style and Leadership qualities !!!
    - Self-Confident, Self-Motivated but a student
    (To read the full list, see: Evangelism Beyond Borders with Evangelists) His blog is a must-read for anyone interested in technology evangelism as a career or simply a hobby.  His advice about how to gain an audience and become a trusted advisor is the best in the business. I think there is an evangelist in everyone. I, too, consider myself a technology evangelist.
Regular readers of this blog will recognize that I am dedicated to bringing information to the masses, and that I pride myself on being brutally honest and giving every product fair consideration. I think there is no better way to close than this: “Once an Evangelist – Always an Evangelist!” Reference: Pinal Dave (http://blog.SQLAuthority.com)     Filed under: About Me, Database, MVP, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Evangelist

    Read the article

  • Windows Phone–A beautiful phone which I admire but I don’t recommend to friends and family

    - by Gopinath
    Microsoft’s Windows Phones are the most beautiful phones I’ve seen. Look at the photo which Microsoft shared on their Facebook page today. It’s gorgeous. Windows Phones come in vibrant colors and the user interface is very lively. When you put an iPhone, an Android phone & a Windows Phone on a table, the Windows Phone definitely stands out. Android and iOS interfaces are routine – a bunch of app icons arranged in rows and multiple screens. Windows Phone is very different; the live tiles concept mesmerizes us. I love Windows Phone, but I neither buy one nor recommend it to family and friends! Why? Because it does not have all the apps I need. Microsoft advertises that Windows Phone has 100K apps on its Windows Market Place. It’s true, there are 100K+ apps available for Windows Phone, but not many of them are really useful and most of the popular apps I use on Android are not available. When I say this to my friends at Microsoft, they don’t agree, and one of them asked me to list the apps that are not available. For him, today I spent an hour quickly scanning through the apps installed on my Google Nexus and searched for the same apps on Windows Market Place. As expected, many of them are not available. Here is the list of my favorite Android apps that are not available for Windows Phone:
    Mint – I use this app more than any of the banking apps I’ve installed on my mobile. It’s one app to keep a tab on all my expenses and income, the best money management and tracking app.
    Google Chrome – The web without Google Chrome is too boring, either on the desktop or on mobile. IE is too heavy and Firefox is losing its grip. Chrome is the new darling of the web.
    Pulse, Flipboard – Flipboard and Pulse are among the best apps for reading news and following the content of favorite blogs.
    Dropbox – Syncs content across devices and provides access to your content on any device. It really does not matter what your gadget is – mobile, tablet or computer; Dropbox lets you access your content.
    GMail, Google Maps – Should I even say how important these two apps are in our day to day life!!
    Vonage Extension – For around 30 bucks a month, Vonage provides landline service in the USA + unlimited calls to India and many other countries + the Vonage Extension app that lets an Android/iOS mobile make unlimited international calls for free. Without the Vonage Extension app, I’m almost cut off from my family and friends back home in India.
    Instagram – The most popular camera app, used by everyone from the common man to celebrities.
    Raaga, Dhingana – Music is part and parcel of life and these two are the most popular apps for listening to Indian music.
    Quora – Quora is the place where most of the sensible discussions happen on the web.
    Google Analytics, Google Adsense – I’m a blogger and these two apps mean a lot to me.
    The list goes on and on! There are many useful apps that are not available on Windows Phone – TuneIn, MyTWC, Chrome To Phone, Google Voice, etc. Without all these apps, Windows Phone is just another old Nokia phone. Even though Windows Phone is the most beautiful phone, it needs apps to attract customers. Without apps, a smartphone is more or less a dumb feature phone which we loved to use before the release of the iPhone. I wish that in a year or two the beautiful Windows Phone will have all the missing apps. When that happens, I’ll buy one for myself and recommend it to my family & friends. But till then I prefer to stay away.

    Read the article

  • What makes them click?

    - by Piet
    The other day (well, actually some weeks ago while relaxing at the beach in Kos) I read ‘Neuro Web Design - What makes them click?’ by Susan Weinschenk. (http://neurowebbook.com) The book is a fast and easy read (no unnecessary filler) and a good introduction on how your site’s visitors can be steered in the direction you want them to go. The Obvious The book handles some of the more known/proven techniques, like for example that ratings/testimonials of other people can help sell your product or service. Another well known technique it talks about is inducing a sense of scarcity/urgency in the visitor. Only 2 seats left! Buy now and get 33% off! It’s not because these are known techniques that they stop working. Luckily 2/3rd of the book handles less obvious techniques, otherwise it wouldn’t be worth buying. The Not So Obvious A less known influencing technique is reciprocity. And then I’m not talking about swapping links with another website, but the fact that someone is more likely to do something for you after you did something for them first. The book cites some studies (I always love the facts and figures) and gives some actual examples of how to implement this in your site’s design, which is less obvious when you think about it. Want to know more ? Buy the book! Other interesting sources For a more general introduction to the same principles, I’d suggest ‘Yes! 50 Secrets from the Science of Persuasion’. ‘Yes!…’ cites some of the same studies (it seems there’s a rather limited pool of studies covering this subject), but of course doesn’t show how to implement these techniques in your site’s design. I read ‘Yes!…’ last year, making ‘Neuro Web Design’ just a little bit less interesting. !!!Always make sure you’re able to measure your changes. If you haven’t yet, check out the advanced segmentation in Google Analytics (don’t be afraid because it says ‘beta’, it works just fine) and Google Website Optimizer. Worth Buying? Can I recommend it ? Sure, why not. I think it can be useful for anyone who ever had to think about the design or content of a site. You don’t have to be a marketing guy to want a site you’re involved with to be successful. The content/filler ratio is excellent too: you don’t need to wade through dozens of pages to filter out the interesting bits. (unlike ‘The Design of Sites’, which contains too much useless info and because it’s in dead-tree format, you can’t google it) If you like it, you might also check out ‘Yes! 50 Secrets from the Science of Persuasion’. Tip for people living in Europe: check Amazon UK for your book buying needs. Because of the low UK Pound exchange rate, it’s usually considerably cheaper and faster to get a book delivered to your doorstep by Amazon UK compared to having to order it at the local book store or web-shop.

    Read the article

  • Pet Store Loyalty Programs: I'm Not Loyal Yet!

    - by ruth.donohue
    After two years of constantly being asked (aka "pestered") by my now eight-year-old daughter for a dog (or any pet that is more interactive than a goldfish), I've finally compromised with a hamster, purely by chance. Friends of ours had recently brought home a female hamster, and (surprise, surprise) two weeks later, they were looking for homes for 11 baby hamster pups. Since the pups were not yet ready to be weaned from their mother, my daughter and I had several weeks to get ready -- and we spent that extra time visiting a number of local pet stores and purchasing an assortment of hamster books, toys, exercise equipment, food, bedding, and a cage -- not cheap! Now, I'm usually an online shopper (i.e. I love reading user reviews and comparing prices), but for kids, there is absolutely no online substitute for actually walking into a store and physically picking out something you want. We have two competing pet shops within close proximity to where we live, and I signed up for their rewards programs to get discounts on select items. I'm sure it takes a while to get my data into the system (after all, I did fill out a form the old-fashioned way), but as it has been more than two weeks for one store and over a week for the other, the window of opportunity is getting smaller, as by now we pretty much have most of what we think we need. Everything I've purchased has been purely hamster or small animal related, so in an ideal world, the stores would have me easily figured out as a hamster owner. Here is what I would be expecting of a loyalty rewards program:
    Point me to some useful links, either information provided by the company or external websites where I can learn more. Any value-add a business can provide to make my life easier makes me a much more loyal customer. What things can I expect as a new pet owner? Any hamster communities? Any hamster-related events? Any vets that specialize in small animals in the vicinity?
    Send me an email with other related products I may be interested in. Upsell and cross-sell to me. We've got the basics and a couple of luxuries, but at this point, I'm pretty excited (surprisingly) about the hamster, and my daughter is footing the bill with her birthday and Christmas money. She and I would be more than happy to spend her money!
    Get this information to me faster. As I mentioned, my window of opportunity is getting smaller, as either my daughter's money will run out on other things or we'll start losing the thrill of buying new hamster toys and treats.
    I realize this is easier said than done, and undoubtedly, the stores are getting value knowing my basic customer information and purchase history. But they could really benefit by delivering a loyalty program that really earned my loyalty. "Goldeen" needs a new water bottle, yogurt chips, and chew toys as he doesn't seem to like the ones we bought. So for now, I'll just go to whichever store is the most convenient. Oh, and just for fun (not related to this post), here are a couple of videos my daughter really got a kick out of watching: Hamster on a Piano, Tic in a Spin-Dryer

    Read the article

  • Change Desktop Resolution With a Keyboard Shortcut

    - by Matthew Guay
    Do you find yourself changing your monitor resolution several times a day?  If so, you might like this handy way to set a keyboard shortcut for your most-used resolutions. Most users rarely have to change their screen resolution often, as LCD monitors usually only look best at their native resolution.  But netbooks present a unique situation, as their native resolution is usually only 1024×600.  Some newer netbooks offer higher resolutions which may not looks as crisp as the native resolution but can be handy for using a program that expects a higher resolution.  This is the perfect situation for a keyboard shortcut to help you change the resolution without having to hassle with dialogs and menus each time, and HRC – HotKey Resolution Changer makes it easy to do. Create Keyboard Shortcuts Download the HRC – HotKey Resolution Changer (link below), unzip, and then run HRC.exe in the folder. This will start a tray icon, and will not automatically open the HRC window.  You don’t have to install HRC.  Double-click the tray icon to open it.  Note: Windows 7 automatically hides new tray icons, so if you can’t see it, click the arrow to see the hidden tray icons. By default, HRC will show two entries with your default resolutions, color depth, and refresh rate. Add a keyboard shortcut by clicking the Change button over the resolution.  Press the keyboard shortcut you want to press to switch to that resolution; we entered Ctrl+Alt+1 for our default resolution.  Make sure not to use a keyboard shortcut you use in another application, as this will override it.  Click Set when you’ve entered the hotkey(s) you want. Now, on the second entry, select the resolution you want for your alternate resolution.  The drop-down list will only show your monitor’s supported resolutions, so you don’t have to worry about choosing an incorrect resolution.  You can also set a different color depth or refresh rate for this resolution.  Now add a keyboard shortcut for this resolution as well. You can set keyboard shortcuts for up to 9 different resolutions with HRC.  Click the Select number of HotKeys button on the left, and choose the number of resolutions you want to set.  Here we have unique keyboard shortcuts for our three most-used resolutions on our netbook. HRC must be kept running to use the keyboard shortcuts, so click the Minimize to tray icon which is the second icon to the right.  This will keep it running in the tray. If you want to be able to change your resolution anytime, you’ll want HRC to automatically start with Windows.  Create a shortcut to HRC, and paste it into your Windows startup folder.  You can easily open this folder by entering the following in the Run command or in the address bar in Explorer: %appdata%\Microsoft\Windows\Start Menu\Programs\Startup   Conclusion HRC- HotKey Resolution Changer gives you a great way to quickly change your screen resolution with a keyboard shortcut.  Whether or not you love keyboard shortcuts, this is still a much easier way to switch between your most commonly used resolutions. 
Download HRC – HotKey Resolution Changer

    Read the article

  • gnome3 and unity bug in 11.10

    - by LinuxNoob
    Yes I am new to Linux but am learning stuff daily as I finally migrated my work systems to Ubuntu 11.10 and they works pretty decent given the fact that some of the application I have to use to do my job are not supported by Linux ... yet. Anyways, I was working with Unity, not even knowing there was a Gnome3 environment until I ran across a post while searching for a fix on another unrelated issue. Lo and behold, I was able after much searching to log in Gnome 3 after installing the required packages. I don't know how many articles I had to read for someone to tell me how to log into Gnome3 or into Unity by selecting the little wheel on the login screen to get the drop down for the different environments ...(I know... I'm such a Noob but I'm sure we all were at one time or another). Again, after using Unity, I was hooked on Linux and decided if I can get it to work with all the Apps I need at work and play a little WOW every now and again, I'm good to go. But again to the point. I logged into Gnome3. No icons but just a cool experience to see what the heck this thing does. I eventually figured out how to use it and I love Gnome 3 but it is as buggy as the C++ programs I first learned to write in college. I don't know if it's because i installed the packages after I was up and running in Unity or what. The issues I have are that it locks the desktop up randomly..just locks it up. Usually when i have a lot of screens opened. Never fails that it does it when i put the mouse up in the top/left corner to swap Apps. Then I can't get to the windows in the back to shut them down. A few times i just have to turn the power off on my PC ( sigh, what a POS I say to myself every time I decide "oh it was an isolated incidence"...try it again..it's so cool and designed the way I would design a desktop .. intuitive...just cool man cool) but it fails me again and again. If I had to guess I would say it's a memory problem..i do have 2gigs but nevertheless it just stops working. The mouse doesn't respond ...eventually the keyboard doesn't respond. Just the Power Off button is the only way out. Never had an issue like this in Unity. Using the same Apps in the same way and I bet the system will crash in 45 minutes to an hour. Usually, I have 7-10 windows opened. I wish I knew enough to get the correct logs, I'm sure it's a bug. I'm not doing anything fancy..just running email, word docs, jave apps, vpn, pidgin, maybe some SQL stuff or communication Apps, etc. Anyways I've decided to wait for another major release to try it again... until then I'm sticking with Unity which is also very cool.

    Read the article

  • JavaScript – Content delivery networks (CDN) can bite you in the butt.

    - by Ryan Ternier
    As much as I love the new CDNs that Google, Microsoft and a few others have publicly released, there are some strong gotchas that could come up and bite you in the ass if you’re not careful. But before we jump into that, for those that are not 100% sure what a CDN is (besides Canadian): Content Delivery Network. A way of distributing your static content across various servers in different physical locations. Because this static content is stored on many servers around the world, whenever a user needs to access this content, they are given the closest server to their location for this data. Already you can probably see the immediate bonuses to a system like this:
    - Lower bandwidth: even small script files downloaded thousands of times will start to take a noticeable hit on your bandwidth meter.
    - Fewer connections/hits to your web server, which gives better latency.
    - If you manage many servers, you don’t need to manually update each server with scripts.
    - A user will download a script for each website they visit. If a user is redirected to many domains/sub-domains within your web site, they might download many copies of the same file. When a system sees multiple requests from the same domain, it will ignore the download.
    Those are just a handful of the many bonuses a CDN will give you. And for the average website, a CDN is a great choice. Check out the following CDN links for their solutions:
    Google AJAX Library: http://code.google.com/apis/ajaxlibs/
    Microsoft Ajax library: http://www.asp.net/ajaxlibrary/cdn.ashx
    The Gotcha
    There is always a catch. Here are some issues I found with using CDNs that hopefully can help you make your decision.
    HTTP / HTTPS: If you are running a website behind SSL, make sure that when you reference your CDN data you use https:// vs. http://. If you forget this, users will get a very nice message telling them that their secure connection is trying to access unsecure data. For a developer this is fairly simple, but general users will get a bit anxious when seeing this.
    Trusted Sites: Internet Explorer has this really nifty feature that allows users to specify what sites they trust, and by some defaults IE7 only allows trusted sites to be viewed. No problem, they set your website as trusted. But what about your CDN? If a user sets your website as trusted, but not the CDN, they will not download those static files. This has the potential to totally break your web site.
    Pedantic Network Admins: This alone is sometimes the killer of projects. However, always be careful when you are going to use a CDN for a professional project. If a network / security admin sees that you’re referencing an outside source, or that a call from a website might hit an outside domain... panties will be bunched, emails will be spewed out and well, no one wants that.
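One way around the HTTP / HTTPS gotcha is to pick the scheme when the page is rendered. The helper below is only a sketch of that idea for an ASP.NET MVC view (the CdnScript method name and the jQuery path are my own inventions, not something the CDNs ship); a protocol-relative reference such as //ajax.aspnetcdn.com/... is another common way to get the same effect.
    using System.Web.Mvc;
    public static class CdnHelpers
    {
        // Hypothetical helper: emits a script tag whose scheme matches the current request,
        // so pages served over SSL reference the CDN over https and avoid mixed-content warnings.
        public static MvcHtmlString CdnScript(this HtmlHelper html, string path)
        {
            var request = html.ViewContext.HttpContext.Request;
            var scheme = request.IsSecureConnection ? "https" : "http";
            var url = string.Format("{0}://ajax.aspnetcdn.com/{1}", scheme, path);
            return MvcHtmlString.Create("<script src=\"" + url + "\" type=\"text/javascript\"></script>");
        }
    }
    // In a view: @Html.CdnScript("ajax/jQuery/jquery-1.7.2.min.js")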

    Read the article

  • A Few of My Favorite HTML5 and CSS3 Online Tools

    - by dwahlin
    I really enjoy coding up HTML5, CSS3, and JavaScript applications but there are some things that I’m better off writing with the help of a development tool. For example, CSS3 gradients aren’t exactly the most fun thing to write by hand and the same could be said for animations, transforms, or styles that require various vendor extensions. There are a lot of online tools that can simplify building HTML5/CSS3 sites and increase productivity in the process so I thought I’d put together a post on a few of my favorites tools. HTML5 Boilerplate HTML5 Boilerplate provides a great way to get started building HTML5 sites. It includes many best practices out of the box and even includes a few tricks that many people don’t even know about. The custom download option allows you to pick the features that you want to include in the files that’s generated. You can read more about it here.   Initializr Although HTML5 Boilerplate provides a great foundation for starting HTML5 sites, it focuses on providing a starting shell structure (namely an html page, JavaScript files, and a CSS stylesheet) and doesn’t include much in the way of page content to get started with. Initializer builds on HTML5 Boilerplate and provides an initial test page that can be tweaked to meet your needs. It also provides several different customization options to include/exclude features. CSS3 Maker CSS3 provides a lot of great features ranging from gradient support to rounded corners. Although many of the features are fairly straightforward there are some that are pretty involved such as gradients, animations, and really any styles that require custom vendor extensions to use across browsers. Sure, you can type everything by hand, but sites such as CSS3 Maker provide a visual way to generate CSS3 styles. CSS3, Please! CSS3, Please! is a code generation tool that can be used to generate cross-browser CSS3 styles quickly and easily. All of the main things you can do with CSS3 are available including a clever way to visually generate CSS3 transform styles.       Ultimate CSS Gradient Generator CSS3 Maker (above) has a gradient generator built-in but my favorite tool for creating CSS3 gradients is the Ultimate CSS Gradient Generator. If you’ve created gradients in tools like Photoshop then you’ll love what this tool has to offer especially since it makes it extremely straightforward to work with different gradient stops. @font-face Fonts Although @font-face has been available for awhile, I think fonts are cool and wanted to mention a site that provides a lot of font choices. When used correctly fonts can really enhance a page and when used incorrectly (think Comic Sans) they can absolutely ruin a page. Several sites exist that provide fonts that can be used with @font-face definitions in CSS style sheets. One of my favorites is Font Squirrel.   HTML5 & CSS3 Support and Tests Interested in knowing what HTML5 and CSS3 features a given browser supports? Want to know how various browsers stack up with each other as far as HTML5/CSS3 support. Look no further than the HTML5 & CSS3 Support page or the HTML5 Test page.   CSS3 Easing Animation Tool CSS3 animations aren’t widely supported across browsers right now (I’m not really using them at this point) but they do offer a lot of promise. Creating easings for animations can definitely be a challenge but they’re something that are critical for adding that “professional touch” to your animations. 
Fortunately you can use the Ceaser CSS Easing Animation Tool to simplify the process and handle animation easing with…...ease.   There are several other online tools that I like but these are some of the ones I find myself using the most. If you have any favorite online tools that simplify working with HTML5 or CSS3 let me know.     For more information about onsite or online training, mentoring and consulting solutions for HTML5, jQuery, .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.

    Read the article

  • SEO: Nested List vs List, Split Over Divs vs Definition List

    - by Jon P
    From an SEO perspective, which, if any, is better:
    Option 1: Nested lists with h2 tags
    <ul id="mainpoints">
      <li><h2>Powerful Analysis</h2>
        <ul>
          <li>Charting and indicators</li>
          <li>Daily trading signals</li>
          <li>Company health checks</li>
        </ul>
      </li>
      <li><h2>World Market Data</h2>
        <ul>
          [List Items removed for brevity]
        </ul>
      </li>
      <li><h2>Daily Market Data</h2>
        <ul>
          [List Items removed for brevity]
        </ul>
      </li>
    </ul>
    Option 2: Divs with h2 and lists
    <div id="mainpoints">
      <div>
        <h2>Powerful Analysis</h2>
        <ul>
          <li>Charting and indicators</li>
          <li>Daily trading signals</li>
          <li>Company health checks</li>
        </ul>
      </div>
      <div>
        <h2>World Market Data</h2>
        <ul>
          [List Items removed for brevity]
        </ul>
      </div>
      <div>
        <h2>Daily Market Data</h2>
        <ul>
          [List Items removed for brevity]
        </ul>
      </div>
    </div>
    Option 3: Definition List
    <dl id="mainpoints">
      <dt>Powerful Analysis</dt>
      <dd>- Charting and indicators</dd>
      <dd>- Daily trading signals</dd>
      <dd>- Company health checks</dd>
      <dt>World Market Data</dt>
      [List Items removed for brevity]
      <dt>Daily Market Data</dt>
      [List Items removed for brevity]
    </dl>
    My instincts tell me that semantically the pure list options (1 & 3) are the best, and that h2 may be more SEO friendly (1 & 2), which would point to option 1 as being the best option. I do love the lean makeup of the definition list, but will I take an SEO hit by losing the h2 tags? Before anyone asks, h2 is not valid markup in a dt tag. Are my instincts right that a nested list is the way to go?

    Read the article

  • Do We Indeed Have a Future? George Takei on Star Wars.

    - by Bil Simser
    George Takei (rhymes with Okay), probably best known for playing Hikaru Sulu on the original Star Trek, has always had deep concerns for the present and the future. Whether on Earth or among the stars, he has the welfare of humanity very much at heart. I was digging through my old copies of Famous Monsters of Filmland, a great publication on monster and films that I grew up with, and came across this. This was his reaction to STAR WARS from issue 139 of Famous Monsters of Filmland and was written June 6, 1977. It is reprinted here without permission but I hope since the message is still valid to this day and has never been reprinted anywhere, nobody will mind me sharing it. STAR WARS is the most pre-posterously diverting galactic escape and at the same time the most hideously credible portent of the future yet.While I thrilled to the exploits that reminded me of the heroics of Errol Flynn as Robin Hood, Burt Lancaster as the Crimson Pirate and Buster Crabbe as Flash Gordon, I was at the same time aghast at the phantasmagoric violence technology can place at our disposal. STAR WARS raised in my mind the question - do we indeed have a future?It seems to me what George Lucas has done is to masterfully guide us on a journey through space and time and bring us back face to face with today's reality. STAR WARS is more than science fiction, I think it is science fictitious reality.Just yesterday, June 7, 1977, I read that the United States will embark on the production of a neutron bomb - a bomb that will kill people on a gigantic scale but will not destroy buildings. A few days before that, I read that the Pentagon is fearful that the Soviets may have developed a warhead that could neutralize ours that have a capacity for that irrational concept overkill to the nth power. Already, it seems we have the technology to realize the awesome special effects simulations that we saw in the film.The political scene of STAR WARS is that of government by force and power, of revolutions based on some unfathomable grievance, survival through a combination of cunning and luck and success by the harnessing of technology -  a picture not very much at variance from the political headlines that we read today.And most of all, look at the people; both the heroes in the film and the reaction of the audience. First, the heroes; Luke Skywalker is a pretty but easily led youth. Without any real philosophy to guide him, he easily falls under the influence of a mystical old man believed previously to be an eccentric hermit. Recognize a 1960's hippie or a 1970's moonie? Han Solo has a philosophy coupled with courage and skill. His philosophy is money. His proficiency comes for a price - the highest. Solo is a thoroughly avaricious mercenary. And the Princess, a decisive, strong, self-confident and chilly woman. The audience cheered when she wielded a gun. In all three, I missed qualities that could be called humane - love, kindness, yes, I missed sensuality. I also missed a sense of ideals and faith. In this regard the machines seemed more human. They demonstrated real affection for each other and an occasional poutiness. They exhibited a sense of fidelity and constancy. The machines were humanized and the humans conversely seemed mechanical.As a member of the audience, I was swept up by the sheer romantic escapsim of it all. The deering-dos, the rope swing escape across the pit, the ray gun battles and especially the swash buckle with the ray swords. 
Great fun!But I just hope that we weren't too intoxicated by the escapism to be able to focus on the recognizable. I hope the beauty of the effects didn't narcotize our sensitivity to violence. I hope the people see through the fantastically well done futuristic mirrors to the disquieting reflection of our own society. I hope they enjoy STAR WARS without being "purely entertained".

    Read the article

  • How to make software development decisions based on facts

    - by Laila
    We love to hear stories about the many and varied ways our customers use the tools that we develop, but in our earnest search for stories and feedback, we'd rather forgotten that some of our keenest users are fellow RedGaters, in the same building. It was almost by chance that we discovered how the SQL Source Control team were using SmartAssembly. As it happens, there is a separate account (here on Simple-Talk) of how SmartAssembly was used to support the Early Access program; by providing answers to specific questions about how the SQL Source Control product was used. But what really got us all grinning was how valuable the SQL Source Control team found the reports that SmartAssembly was quickly and painlessly providing. So gather round, my friends, and I'll tell you the Tale Of The Framework Upgrade . <strange mirage effect to denote a flashback. A subtle background string of music starts playing in minor key> Kevin and his team were undecided. They weren't sure whether they could move their software product from .NET 2 to .NET 3.5 , let alone to .NET 4. You see, they were faced with having to guess what version of .NET was already installed on the average user's machine, which I'm sure you'll agree is no easy task. Upgrading their code to .NET 3.5 might put a barrier to people trying the tool, which was the last thing Kevin wanted: "what if our users have to download X, Y, and Z before being able to open the application?" he asked. That fear of users having to do half an hour of downloads (.followed by at least ten minutes of installation. followed by a five minute restart) meant that Kevin's team couldn't take advantage of WCF (Windows Communication Foundation). This made them sad, because WCF would have allowed them to write their code in a much simpler way, and in hours instead of days (as was the case with .NET 2). Oh sure, they had a gut feeling that this probably wasn't the case, 3.5 had been out for so many years, but they weren't sure. <background music switches to major key> SmartAssembly Feature Usage Reporting gave Kevin and his team exactly what they needed: hard data on their users' systems, both hardware and software. I was there, I saw it happen, and that's not the sort of thing a woman quickly forgets. I'll always remember his last words (before he went to lunch): "You get lots of free information by just checking a box in SmartAssembly" is what he said. For example, they could see how many CPU cores their customers were using, and found out that they should be making use of parallelism to take advantage of available cores. But crucially, (and this is the moral of my tale, dear reader), Kevin saw that 99% of SQL Source Control's users were on .NET 3.5 or above.   So he knew that they could make the switch and that is was safe to do so. With this reassurance, they could use WCF to not only make development easier, but to also give them a really nice way to do inter-process communication between the Source Control and the SQL Compare products. To have done that on .NET 2.0 was certainly possible <knowing chuckle>, but Microsoft have made it a lot easier with WCF. <strange mirage effect to denote end of flashback> So you see, with Feature Usage Reporting, they finally got the hard evidence they needed to safely make the switch to .NET 3.5, knowing it would not inconvenience their users. And that, my friends, is just the sort of thing we like to hear.
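For readers wondering what the move to .NET 3.5 actually bought the team, the fragment below is a rough sketch of the kind of WCF contract that replaces the hand-rolled plumbing .NET 2.0 required for inter-process communication. The contract name, operation and endpoint address are invented for illustration; they are not the actual SQL Source Control interfaces.
    using System;
    using System.ServiceModel;
    // Hypothetical contract: these attributes are essentially all WCF needs to expose an
    // inter-process endpoint that another product (say, SQL Compare) could call.
    [ServiceContract]
    public interface ISourceControlStatus
    {
        [OperationContract]
        string GetStatus(string databaseName);
    }
    public class SourceControlStatusService : ISourceControlStatus
    {
        public string GetStatus(string databaseName)
        {
            return "Linked"; // a real implementation would look up the linked database's state
        }
    }
    class HostSketch
    {
        static void Main()
        {
            // Hosting over a named pipe is a few lines rather than pages of socket/remoting code.
            using (var host = new ServiceHost(typeof(SourceControlStatusService),
                   new Uri("net.pipe://localhost/sourcecontrol")))
            {
                host.AddServiceEndpoint(typeof(ISourceControlStatus), new NetNamedPipeBinding(), "status");
                host.Open();
                Console.WriteLine("Listening... press Enter to stop.");
                Console.ReadLine();
            }
        }
    }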

    Read the article

  • A Letter for Your CEO About Social Marketing’s Future

    - by Mike Stiles
    We’ll leave it to you to decide if or how to sneak this in front of them. Dear Chief: This social marketing thing looks serious. It’s gone beyond having a Facebook page and putting our info and a few promotions on it. It’s seriously disrupting how we’ve always done marketing. And its implications reach well beyond marketing. My concern is that we stay positioned ahead of these changes and are prepared to embrace, adapt and capitalize on these new capabilities as opposed to spending valuable time and money trying to shoehorn social into “the way we’ve always done things.” I’m also concerned about what happens if our competition executes on this before we do. The days of being able to impose our ad messaging on the masses to great effect are numbered. The public now has the tech tools and ability to filter out things that are irrelevant to them. And frankly, spending ad dollars to reach unlikely prospects isn’t the most efficient path for us either. Today, our customers have to genuinely love what we do. That starts with a renewed, customer-centric focus on the quality and usability of our product. If their experience with it is bad, they now have very connected, loud voices that will testify against us. We can’t afford that. Next, their customer service experience, before and after the sale, has to be a pleasant surprise. That requires truly knowing our customers and listening to them. Lip service won’t cut it. We have to get and use as much data on the customer as possible, interact with them wherever they want to interact with us, and commit to impressing them. If we do, they’ll get out there and advertise for us. Since peer-to-peer recommendation is the most effective marketing, that’s money in the bank. Social marketing is about forming relationships, same as how individuals use social. We want them to know us, trust us, and get real value from knowing us. That requires honesty and transparency that before now might have been uncomfortable. I propose that if we clearly make everything we do about our customers’ wants and needs, we’ll have nothing to hide. It will solidify customer loyalty, retention, and thus, revenue. These things can’t happen without certain tools and structural changes in the organization. There are social cloud platforms that integrate social management into all of the necessary areas: CRM, customer service, sales, marketing automation, content marketing, ecommerce, etc. This is will give us a real-time, complete view of the customer so their every interaction with us is attentive, personalized, accurate, relevant, and satisfying. Without it, we’re just a collage of disjointed systems, each gathering data that informs only its own departmental silo. The customer is voluntarily giving us everything we need to know about them to win them over, but we have to start listening and putting the pieces together. There’s still time. Brands are coming to terms with this transition to the socially enabled enterprise, but so far they aren’t moving very fast. Like us, they’re dealing with long-entrenched technologies and processes. CMO’s and CIO’s have to form new partnerships. Content operations have to be initiated and properly staffed and funded. Various departments must be able to utilize interconnected big data. What will separate the winners from the losers? Well chief, that’s why I’m writing you. It’s in your hands. These initiatives won’t get the kind of priority and seriousness that inspire actual deadlines & action unless they come from your desk. 
You have to be the champion of customer centricity. You have to be our change agent. You have to be our innovator. Otherwise, it’s going to be business as usual, and that puts us in a very vulnerable place. Sincerely, Your Team
@mikestiles
Photo: Gary Scott, stock.xchng

    Read the article

  • Type Casting variables in PHP: Is there a practical example?

    - by Stephen
    PHP, as most of us know, has weak typing. For those who don't, PHP.net says: PHP does not require (or support) explicit type definition in variable declaration; a variable's type is determined by the context in which the variable is used. Love it or hate it, PHP re-casts variables on-the-fly. So, the following code is valid: $var = "10"; $value = 10 + $var; var_dump($value); // int(20) PHP also alows you to explicitly cast a variable, like so: $var = "10"; $value = 10 + $var; $value = (string)$value; var_dump($value); // string(2) "20" That's all cool... but, for the life of me, I cannot conceive of a practical reason for doing this. I don't have a problem with strong typing in languages that support it, like Java. That's fine, and I completely understand it. Also, I'm aware of—and fully understand the usefulness of—type hinting in function parameters. The problem I have with type casting is explained by the above quote. If PHP can swap types at-will, it can do so even after you force cast a type; and it can do so on-the-fly when you need a certain type in an operation. That makes the following valid: $var = "10"; $value = (int)$var; $value = $value . ' TaDa!'; var_dump($value); // string(8) "10 TaDa!" So what's the point? Can anyone show me a practical application or example of type casting—one that would fail if type casting were not involved? I ask this here instead of SO because I figure practicality is too subjective. Edit in response to Chris' comment Take this theoretical example of a world where user-defined type casting makes sense in PHP: You force cast variable $foo as int -- (int)$foo. You attempt to store a string value in the variable $foo. PHP throws an exception!! <--- That would make sense. Suddenly the reason for user defined type casting exists! The fact that PHP will switch things around as needed makes the point of user defined type casting vague. For example, the following two code samples are equivalent: // example 1 $foo = 0; $foo = (string)$foo; $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo; // example 2 $foo = 0; $foo = (int)$foo; $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo; UPDATE Guess who found himself using typecasting in a practical environment? Yours Truly. The requirement was to display money values on a website for a restaurant menu. The design of the site required that trailing zeros be trimmed, so that the display looked something like the following: Menu Item 1 .............. $ 4 Menu Item 2 .............. $ 7.5 Menu Item 3 .............. $ 3 The best way I found to do that wast to cast the variable as a float: $price = '7.50'; // a string from the database layer. echo 'Menu Item 2 .............. $ ' . (float)$price; PHP trims the float's trailing zeros, and then recasts the float as a string for concatenation.

    Read the article

  • Your Job Search Should be More Than Just a New Year's Resolution

    - by david.talamelli
    I love the beginning of a new year, it is a great chance to refocus and either re-evaluate goals you are working to or even set new ones. I don't have any statistics to measure this but I am sure that one of the more popular new year's resolutions in the general workforce is to either get a new job or work to further develop one's career. I think this is a good idea, in today's competitive work force people should have a plan of what they want to do, what role they are after and how to get there. One common mistake I think many people make though is that a career plan shouldn't be a once a year thought. When people finish with the holiday season with their new year's resolution to find a new job fresh in their mind, you can see the enthusiasm and motivation a person has to make something happen. Emails are sent, calls are made, applications are made, networking is happening, etc..... Finding the right role that you are after however can be difficult, while it would be great if that dream role was available just at the time you happened to be looking for it - in reality this is not always the case. Job Seekers need to keep reminding themselves that while sometimes that dream job they are after is available at the same time they are looking, that also a Job search can be a difficult and long process. Many people who set out with the best of intentions in January to find a new job can soon lose interest in a job search if they do not immediately find a role. Just like the Christmas decorations are put away and the photos from New Year's are stored away - a Job Seeker's motivation may slowly decrease until that person finds themselves 12 months later in the same situation in same role and looking for that new opportunity again. Rather than just "going for it" and looking for a role in the month of January, a person's job search or career plan should be an ongoing activity and thought process that is constantly updated and evaluated over the course of the year. It can be hard to stay motivated over an extended period of time, especially when you are newly motivated and ready for that new role and the results are not immediate. Rather than letting your job search fall down the priority list and into the "too hard basket" a few ideas that may keep your enthusiasm fresh Update your resume every 6 months, even if you are not looking for a job - it is easy to forget what you have accomplished if you don't keep your details updated. Also it is good to be prepared and have a resume ready to go in case you do get an unexpected phone call for that 'dream job' you have been hoping for. Work out what you want out of your next role before you begin your job search - rather than aimlessly searching job ads or talking to people - think of the organisations or type of role you would like before you search. If you know what you are looking for it will be much easier to work out how to get there than if you do not know what you want. Don't expect immediate results once you decide to look for another job, things don't always fall into place. Timing and delivery can be important pieces of being selected for a role, companies don't hire every role in January. Have an open mind - people you meet or talk to may not result in immediate results for your job search but every connection may help you get a bit closer to what you are after . These actions will not guarantee a positive result, but in today's competitive work force every little of extra preparation and planning helps. 
All the best for 2011, and I hope your career plan, whatever it may be, is a success.

    Read the article

  • A SharePoint Developer’s Toolchest

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). When we develop for SharePoint, we end up using many tools, third party or Microsoft, to facilitate our development. What are some of your favorite tools? Mine are as below -
    1. Reflector: When I saw Reflector, I was pretty convinced that a tool better and more useful than it doesn’t exist. Well I was wrong! Redgate took over Reflector and they still offer it as a free version, but they have a paid version called Reflector Pro. It lets you debug third party source code, as if you had the source code. Brilliant! Who needs documentation anymore when you have real code?
    2. ULS Viewer: It is no secret, reading ULS logs is a pain in the rear. Well, not so with ULS Viewer, which does work with SharePoint 2007 as well. But it’s just way cooler with SharePoint 2010. You know when you get an error in SharePoint 2010 it shows you an error like the one below: Well, the ULS Viewer will allow you to set filtering criteria, allowing you to immediately zero in on an error, across multiple WFEs even. Also there are numerous other facilities built into the tool, such as advanced filtering, critical error notifications, etc. A must have! You can read the documentation of the ULS Viewer here.
    3. SPDisposeCheck: Did you know that the MySite object is strange? What is strange about it? That you have to dispose it even if you didn’t create it!? Well who the hell remembers all that! Honestly I do! And you should too. But there is a tool to help you sanitize your code. And that is SPDisposeCheck. You run it against your DLL or EXE, and it will give you suggestions on where you might have missed calling Dispose on an object. You still have to use your head, but having this tool helps.
    4. DebugView: Debugging for SharePoint can be difficult sometimes. Sometimes your breakpoints don’t get hit. And while you can try and make them hit, it is sometimes easier to just write a bunch of Debug.WriteLines, and catch them from an external application such as DebugView. You simply use your code, and DebugView will catch all the Debug.WriteLine’s in your code like this -
    5. BGInfo: One annoying thing about SharePoint projects, it causes the number of servers to multiply like bunnies. As I’m RDP’ing into many computers trying to diagnose a crazy issue, sometimes it becomes hard to remember which machine is which. BGInfo puts all that on the wallpaper, along with a bunch of other useful info. A bit like this -
    6. WSPBuilder: SharePoint 2007 only, but I think there may be a version for SP2010 coming later. I think the VS2010 tools for SP2010 development are quite nice, so WSPBuilder, well so far I don’t miss it. But let's see what WSPBuilder for 2010 brings – I haven’t seen it yet. However, I want to confidently assert that WSPBuilder for SP2007 is simply awesome.
    7. SharePoint Manager: The SharePoint Manager 2010 is a SharePoint object model explorer. It enables you to browse every site on the local farm and view every property. It also enables you to change the properties. The VS2010 dev tools now include a server explorer, which shows you a subset of properties, read-only. I would LOVE to see SharePoint Manager like functionality built into VS2010. SharePoint Manager, a total must-have. Comment on the article ....
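To make the SPDisposeCheck point concrete, here is a small sketch (my own, with a placeholder URL) of the two dispose rules it most often flags: dispose the SPSite/SPWeb objects you create yourself, and never dispose the ones SharePoint hands you, such as SPContext.Current.Web.
    using System;
    using Microsoft.SharePoint;
    class DisposalExamples
    {
        static void Main()
        {
            // Objects you new up or open yourself must be disposed - wrap them in using blocks.
            using (SPSite site = new SPSite("http://server/sites/demo")) // placeholder URL
            using (SPWeb web = site.OpenWeb())
            {
                Console.WriteLine(web.Title);
            }
            // Objects handed to you by the framework must NOT be disposed;
            // SPContext.Current.Web is owned by SharePoint, not by your code:
            // SPWeb contextWeb = SPContext.Current.Web;  // use it, but no using block and no Dispose()
        }
    }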

    Read the article

< Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >