Search Results

Search found 6079 results on 244 pages for 'power law'.


  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don’t miss the Contest: Participate in the 5th Anniversary Contest.   Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago today I started this blog. Intention – my personal web blog. I wrote this blog for me, and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things – history and databases.  And what better way to cover these two topics than to talk about the history of databases. If you want to be technical, databases as we know them today only date back to the late 1960’s and early 1970’s, when computers began to keep records and store memories.  But the idea of memory storage didn’t just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn’t decide they suddenly wanted to read novels, they needed a way to keep track of the harvest, of their flocks, and of the tributes paid to the local lord.  And that is how writing and the database began.  You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens from the ancient Sumerians in 8,000 BC to be the first instances of record keeping – and thus databases. If you prefer, you can consider the advent of written language to be the first database.  Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years. Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive.  A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria.  This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice-scribes copied important documents.  Further archaeological digs hope to uncover the palace library, and thus an even larger database. Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt.  It was created by Ptolemy I around 300 BC, and Julius Caesar effectively erased the hard drives when he accidentally set fire to it in 48 BC.  As any programmer knows who has forgotten to hit “save” or has experienced a sudden power outage, thousands of hours of work were lost in a single instant. Databases existed in very similar conditions up until recently.  Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press.  Someday the databases we rely on so much today will become another chapter in the history of record keeping.  Who knows what the databases of tomorrow will look like! Reference:  Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, let’s start by understanding the components of Azure we’ll be making use of, followed by manually putting them together to create the test rig, so… let’s get down and dirty and start setting up the Test Rig.  What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we’ll make use of Windows Azure Compute, and to enable communication between the Test Controller and Test Agents we’ll make use of Windows Azure Connect.  Azure Connect: The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we’ll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure Connect, and a very useful video explaining everything you wanted to know about Windows Azure Connect.  Azure Compute: Windows Azure Compute provides developers a platform to host and manage applications in Microsoft’s data centres across the globe. A Windows Azure application is built from one or more components called ‘roles.’ Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we’ll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

        Virtual Machine Size   CPU Cores   Memory     Cost Per Hour
        Extra Small            Shared      768 MB     $0.04
        Small                  1           1.75 GB    $0.12
        Medium                 2           3.50 GB    $0.24
        Large                  4           7.00 GB    $0.48
        Extra Large            8           14.00 GB   $0.96

    You might want to review the Windows Azure Pricing FAQ. Let’s Get Started building the Test Rig… Configuration:

        Machine   Role                                 Comments
        VM – 1    Domain Controller for Playpit.com    On Premise
        VM – 2    TFS, Test Controller                 On Premise
        VM – 3    Test Agent                           Cloud

    In this blog post I would assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore. I. Let’s start building VM – 3: The Test Agent. Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud Template. Choose the Worker Role for reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods, no code changes required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
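
    For reference, the generated WorkerRole.cs looks roughly like the sketch below (the exact boilerplate can differ slightly between SDK versions); the point is simply that it needs no changes:

        using System.Diagnostics;
        using System.Net;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        namespace WorkerRole1
        {
            public class WorkerRole : RoleEntryPoint
            {
                public override void Run()
                {
                    // The default template just loops and traces; the test agent software
                    // we install later does the real work on this VM.
                    Trace.WriteLine("WorkerRole1 entry point called", "Information");
                    while (true)
                    {
                        Thread.Sleep(10000);
                        Trace.WriteLine("Working", "Information");
                    }
                }

                public override bool OnStart()
                {
                    // Default limit for concurrent outgoing HTTP connections.
                    ServicePointManager.DefaultConnectionLimit = 12;
                    return base.OnStart();
                }
            }
        }
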
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure connect module in the Test Agent configuration, now the connect end points need to be enabled on the on premise machines, let’s have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller Log on to the Windows Azure Management Portal and click on Virtual Network Click on Virtual Network, if you already have a subscription you should see the below screen shot, if not, you would be asked to complete the subscription first        Click on Install Local Endpoints from the top left on the panel and you get a url appended with a token id in it, remember the token i showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file.        Copy the url to the clip board and paste it in IE explorer (important, the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later, you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure connect icon in the system tray.                         Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure connect icon in the system tray, a bit of binging with Google revealed that the azure connect icon is only shown when the ‘Windows Azure Connect Endpoint’ Service is started. So go to services.msc and make sure that the service is started, if not start it, unfortunately again, the service did not start for me on a manual start and i realised that one of the dependant services was disabled, you can look at the service dependencies and start them and then start windows azure connect. Bottom line, you need to start Windows Azure connect service before you can proceed. Please refer here on MSDN for more on Troubleshooting Windows Azure connect. (Follow the next step as well)   Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, lets call it ‘Test Rig’. Make sure you add the VM – 2 (the TFS Server VM where you just installed the endpoint).       Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’ you will notice that the disconnected status of the icon should change to ready for connection. III. Importing Certificate in to Windows Azure Management Portal But before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on ‘Hosted Services, Storage Accounts & CDN’ and then ‘Management Certificates’ followed by Add Certificates as shown in the screen shot below        Browse to the location where you saved the certificate earlier, remember… Refer to Step I in case you forgot.        Now you should be able to see the imported certificate here, make sure the thumbprint of the certificate matches the one you inserted in the config files        IV. Publish Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio and right click the Windows Azure project and select Publish. Verify the infomration in the wizard, from the advanced settings tab, you can also enabled capture of intellitrace or profiling information.         Click Next and Click Publish! 
    From the view menu bar select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of creation of a new Worker Role. Once the deployment is complete you should be able to RDP (go to the Run prompt, type mstsc and enter the machine name in the pop up) into the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure; you will be able to log in with the user name and password you specified in the config file for the keys ‘RemoteAccess.AccountUsername’ and ‘RemoteAccess.AccountEncryptedPassword’ (just enter the password unencrypted), fix it or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step. So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, that the machine is connected to the domain and that the Connect service is running successfully. If yes, give yourself a pat on the back, you are 80% mission accomplished! Go to the Windows Azure Management Portal and click on Virtual Network, click on Groups and Roles, click on Test Rig and click Edit Group to edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check ‘Allow connections between endpoints in the group’; this enables communication between the test controller and the test agents, and between the test agents themselves. Click Save. Now, you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller. V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller. Log in to the Worker Role Test Agent VM that you have just successfully deployed, making sure you log in with the domain administrator account. Download the All Agents software from MSDN, ‘en_visual_studio_agents_2010_x86_x64_dvd_509679.iso’, extract the iso and navigate to where you have extracted it. In my case, I have extracted the iso to “C:\Resources\Temp\VsAgentSetup”. Open the Test Agent folder and double click on setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN. Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block some things otherwise). In the run options, you can select ‘service’; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, I would never do that; I would create a separate test user service account for this purpose.
But for the blog post, we are using the most powerful user so that any policies or restrictions don’t block you.        Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve and proceed. As per my experience, you may run in to either a permission or a firewall blocking communication issue.        And now the moment of truth! Go to VM –2 open up Visual Studio and from the Test Menu select Manage Test Controller       Mission Accomplished! You should be able to see the Test Agent that you have just configured here,         VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig I have various blog posts on Performance Testing with Visual Studio Ultimate, you can follow the links and videos below, Blog Posts: - Part 1 – Performance Testing using Visual Studio 2010 Ultimate - Part 2 – Performance Testing using Visual Studio 2010 Ultimate - Part 3 – Performance Testing using Visual Studio 2010 Ultimate Videos: - Test Tools Configuration & Settings in Visual Studio - Why & How to Record Web Performance Tests in Visual Studio Ultimate - Goal Driven Load Testing using Visual Studio Ultimate Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig, create a new Test settings file, and change the Test Execution method to ‘Remote Execution’ and select the test controller you have configured the Worker Role Test Agent against in our case VM – 2 So, go on, fire off a test run and see the results of the test being executed on the Azur-ed Test Rig. Review and What’s next? A quick recap of the benefits of running the Test Rig in the cloud and what i will be covering in the next blog post AND I would love to hear your feedback! Advantages Utilizing the power of Azure compute to run a heavy virtual user load. Benefiting from the Azure flexibility, destroy Test Agents when not in use, takes < 25 minutes to spin up a new Test Agent. Most important test Network Latency, (network latency and speed of connection are two different things – usually network latency is very hard to test), by placing the Test Agents in Microsoft Data centres around the globe, one can actually test the lag in transferring the bytes not because of a slow connection but because the page has been requested from the other side of the globe. Next Steps The process of spinning up the Test Agents in windows Azure is not 100% automated. I am working on the Worker process and power shell scripts to make the role deployment, unattended install of test agent software and registration of the test agent to the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. See you in Part III.   Share this post : CodeProject

    Read the article

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest Editorial for Simple-Talk Newsletter... in which Phil Factor reacts with some exasperation when coming across a report that a majority of companies were still using financial and personal data for both developing and testing database applications. If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible, and at worst, risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have happened from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US (£1.9 million in the UK) once notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn are added up. 70% of data breaches come from within the organisation. Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to ensure that a high priority is given to the detection and prevention of any data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed. It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US states have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead. The developer is likely to suffer investigation if a data breach occurs, even if the company manages to stay in business. Ironically, the use of copies of live data isn't usually the most effective way to develop or test your database. Data is usually time-specific and isn't usually current by the time it is used for testing. Existing data doesn't help much for new functionality, and every time the data is refreshed from production, any test data is likely to be overwritten. Also, it is not always going to test all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data. Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. However, this is no longer the case. Real data can be obfuscated, or it can be created entirely from scratch. The latter process used to be impractical, but there are now plenty of third-party tools to choose from. The process of obfuscation isn't risk-free. The process must access the live data, and the success of the obfuscation process has to be carefully monitored.
    Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when the tools exist to create large, realistic database test data that can be better for several aspects of testing.
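
    As a purely illustrative sketch of the "obfuscate a copy" idea (the table, columns and connection string below are invented, and a real masking pass has to cover every sensitive column and be verified afterwards), the core of it is no more than overwriting personal values in the development copy:

        using System.Data.SqlClient;

        static class Masking
        {
            // Illustrative only: dbo.Customer and its columns are hypothetical.
            public static void MaskCustomers(string devCopyConnectionString)
            {
                const string sql =
                    @"UPDATE dbo.Customer
                         SET FirstName   = 'Test',
                             LastName    = 'User' + CAST(CustomerID AS varchar(10)),
                             Email       = 'user' + CAST(CustomerID AS varchar(10)) + '@example.test',
                             DateOfBirth = '1970-01-01',
                             CardNumber  = NULL;";

                using (var conn = new SqlConnection(devCopyConnectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }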

    Read the article

  • #altnetseattle – REST Services

    - by GeekAgilistMercenary
    Below are the notes I made in the REST Architecture Session I helped kick off with Andrew. RSS, ATOM, and such needed for better discovery.  i.e. there still is a need for some type of discovery. Difficult is modeling behaviors in a RESTful way.  ??  Invoking some type of state against an object.  For instance in the case of a POST vs. a GET.  The GET is easy, comes back as is, but what about a POST, which often changes some state or something. Challenge is doing multiple workflows with stateful workflows.  How does batch work.  Maybe model the batch as a resource. Frameworks aren’t particularly part of REST, REST is REST.  But point argued that REST is modeled, or part of modeling a state machine of some sort… ? Nothing is 100% reliable w/ REST – comparisons drawn with TCP/IP.  Sufficient probability is made however for the communications, but the idea of a possible failure has to be built into the usage model of REST. Ruby on Rails / RESTfully, and others used.  What were their issues, what do they do.  ATOM feeds, object serialized, using LINQ to XML w/ this.  No state machine libraries. Idempotent areas around REST and single change POST changes are inherent in the architecture. REST – one of the constrained languages is for the interaction w/ the system.  Limiting what can be done on the resources.  - disagreement, there is no agreed upon REST verbs. Sam Ruby – RESTful services.  Expanded the verbs within REST/HTTP pushes you off the web.  Of the existing verbs POST leaves the most up for debate. Robert Reem used Factory to deal with the POST to handle the new state.  The POST identifying what it just did by the return. Different states are put into POST, so that new prospective verbs, without creating verbs for REST/HTTP can be used to advantage without breaking universal clients. Biggest issue with REST services is their lack of state, yet it is also one of their biggest strengths.  What happens is that the client takes up the often onerous task of handling all state, state machines, and other extraneous resource management.  All the GETs, POSTs, DELETEs, INSERTs get all pushed into abstraction.  My 2 cents is that this in a way ends up pushing a huge proprietary burden onto the REST services often removing the point of REST to be simple and to the point. WADL does provide discovery and some state control (sort of?) Statement made, "WADL" isn't needed.  The JSON, XML, or other client side returned data handles this. I then applied the law of 2 feet rule for myself and headed to finish up these notes, post to the Wiki, and figure out what I was going to do next.  For the original Wiki entry check it out here. I will be adding more to this post with a subsequent post.  Please do feel free to post your thoughts and ideas about this, as I am sure everyone in the session will have more for elaboration.
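
    As a purely illustrative sketch of the "model the batch as a resource" idea from the notes (the host, the /batches URI and the payload shape are invented), a client might POST to create the batch resource and then GET its state, keeping GET safe and idempotent:

        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class BatchClient
        {
            static async Task Main()
            {
                using (var http = new HttpClient { BaseAddress = new Uri("https://example.test/") })
                {
                    // POST creates a new batch resource; the server returns its URI in Location.
                    var response = await http.PostAsync("batches",
                        new StringContent("{\"orders\":[1,2,3]}", Encoding.UTF8, "application/json"));
                    Uri batchUri = response.Headers.Location;

                    // GET stays safe and idempotent: poll the batch resource for its state.
                    string state = await http.GetStringAsync(batchUri);
                    Console.WriteLine(state);
                }
            }
        }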

    Read the article

  • Get to Know a Candidate (8 of 25): Rocky Anderson – Justice Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced for Wikipedia. Ross Carl “Rocky” Anderson served two terms as the 33rd mayor of Salt Lake City, Utah, between 2000 and 2008.  He is the Executive Director of High Road for Human Rights.  Prior to serving as Mayor, he practiced law for 21 years in Salt Lake City, during which time he was listed in Best Lawyers in America, was rated A-V (highest rating) by Martindale-Hubbell, served as Chair of the Utah State Bar Litigation Section[4] and was Editor-in-Chief of, and a contributor to, Voir Dire legal journal. As mayor, Anderson rose to nationwide prominence as a champion of several national and international causes, including climate protection, immigration reform, restorative criminal justice, LGBT rights, and an end to the "war on drugs". Before and after the invasion by the U.S. of Iraq in 2003, Anderson was a leading opponent of the invasion and occupation of Iraq and related human rights abuses. Anderson was the only mayor of a major U.S. city who advocated for the impeachment of President George W. Bush, which he did in many venues throughout the United States. Anderson's work and advocacy led to local, national, and international recognition in numerous spheres, including being named by Business Week as one of the top twenty activists in the world on climate change,serving on the Newsweek Global Environmental Leadership Advisory Board, and being recognized by the Human Rights Campaign as one of the top ten straight advocates in the United States for LGBT equality. He has also received numerous awards for his work, including the EPA Climate Protection Award, the Sierra Club Distinguished Service Award, the Respect the Earth Planet Defender Award, the National Association of Hispanic Publications Presidential Award, The Drug Policy Alliance Richard J. Dennis Drugpeace Award, the Progressive Democrats of America Spine Award, the League of United Latin American Citizens Profile in Courage Award, the Bill of Rights Defense Committee Patriot Award, the Code Pink (Salt Lake City) Pink Star honor, the Morehouse University Gandhi, King, Ikeda Award, and the World Leadership Award for environmental programs. Formerly a member of the Democratic Party, Anderson expressed his disappointment with that Party in 2011, stating, “The Constitution has been eviscerated while Democrats have stood by with nary a whimper. It is a gutless, unprincipled party, bought and paid for by the same interests that buy and pay for the Republican Party." Anderson announced his intention to run for President in 2012 as a candidate for the newly-formed Justice Party. Although founded by Rocky Anderson of Utah, the Justice Party was first recognized by Mississippi and describes itself as advocating economic justice through measures such as green jobs and a right to organize, environment justice through enforcing employee safeguards in trade agreements, and social and civic justice through universal health care. In its first press release, the Utah Justice Party set forth its goals for justice in the economic, environmental, social and civic realms, along with a call to rid the corrupting influence of big money from government, to reverse the erosion of rights guaranteed by the Constitution, and to stop draining American resources to support illegal wars of aggression. 
    Its press release says its grassroots supporters believe that now is the time for all to "shed their skeptical view that their voices don't matter", that "our 2-party system is a 'duopoly' controlled by the same corporate and military interests", and that the people must act to ensure "that our nation will achieve a brighter, sustainable future.” Anderson has ballot access in CO, CT, FL, ID, LA, MI, MN, MS, NJ, NM, OR, RI, TN, UT, VT, WA (152 electoral votes) and has write-in access in AL, AK, DE, GA, IL, IA, KS, MD, MO, NE, NH, NY, PA, TX. Learn more about Rocky Anderson and the Justice Party on Wikipedia.

    Read the article

  • Feynman's inbox

    - by user12607414
    Here is Richard Feynman writing on the ease of criticizing theories, and the difficulty of forming them: The problem is not just to say something might be wrong, but to replace it by something — and that is not so easy. As soon as any really definite idea is substituted it becomes almost immediately apparent that it does not work. The second difficulty is that there is an infinite number of possibilities of these simple types. It is something like this. You are sitting working very hard, you have worked for a long time trying to open a safe. Then some Joe comes along who knows nothing about what you are doing, except that you are trying to open the safe. He says ‘Why don’t you try the combination 10:20:30?’ Because you are busy, you have tried a lot of things, maybe you have already tried 10:20:30. Maybe you know already that the middle number is 32 not 20. Maybe you know as a matter of fact that it is a five digit combination… So please do not send me any letters trying to tell me how the thing is going to work. I read them — I always read them to make sure that I have not already thought of what is suggested — but it takes too long to answer them, because they are usually in the class ‘try 10:20:30’. (“Seeking New Laws”, page 161 in The Character of Physical Law.) As a sometime designer (and longtime critic) of widely used computer systems, I have seen similar difficulties appear when anyone undertakes to publicly design a piece of software that may be used by many thousands of customers. (I have been on both sides of the fence, of course.) The design possibilities are endless, but the deep design problems are usually hidden beneath a mass of superfluous detail. The sheer numbers can be daunting. Even if only one customer out of a thousand feels a need to express a passionately held idea, it can take a long time to read all the mail. And it is a fact of life that many of those strong suggestions are only weakly supported by reason or evidence. Opinions are plentiful, but substantive research is time-consuming, and hence rare. A related phenomenon commonly seen with software is bike-shedding, where interlocutors focus on surface details like naming and syntax… or (come to think of it) like lock combinations. On the other hand, software is easier than quantum physics, and the population of people able to make substantial suggestions about software systems is several orders of magnitude bigger than Feynman’s circle of colleagues. My own work would be poorer without contributions — sometimes unsolicited, sometimes passionately urged on me — from the open source community. If a Nobel prize winner thought it was worthwhile to read his mail on the faint chance of learning a good idea, I am certainly not going to throw mine away. (In case anyone is still reading this, and is wondering what provoked a meditation on the quality of one’s inbox contents, I’ll simply point out that the volume has been very high, for many months, on the Lambda-Dev mailing list, where the next version of the Java language is being discussed. Bravo to those of my colleagues who are surfing that wave.) I started this note thinking there was an odd parallel between the life of the physicist and that of a software designer. On second thought, I’ll bet that is the story for anybody who works in public on something requiring special training. (And that would be pretty much anything worth doing.) In any case, Feynman saw it clearly and said it well.

    Read the article

  • Customer owes me half my payment. Should I take ownership of his AWS account for charging? How?

    - by Cawas
    Background They paid me my first half (back in April 15th) before even we could get into an agreement. Very nice of him! Then I've finished the 2 weeks job of setting up the servers, using his AWS credentials he had just bought. I waited for another 2 weeks for everything settling up, and it was all running fine. He did what he needed with his sftp account, everyone were happy. Now, it has been almost 2 months since I've finished the job and I still didn't get the 2nd half. I must assume, it's not much money (about U$400, converted), but it would help me pay the bills at least. Heck, the Amazon bills they are paying are little less than that (for now). Measures I'm wondering how I can go to charge him now. First thought, of course, would be taking everything down and say "pay now, or be doomed". If that's not good enough, then I lost it. I have no contracts and I doubt I could get a law suit in this country for such a low value based only on emails. And I don't really want to get too agressive here - there might be a business chance in the future and I don't want to ruin it. Second though would be just changing the password. But then he probably could gain access again by some recovery means. That's where my question may mainly relay. How can I do it and not leaving any room for recovery from his side? I even got the first AWS "your account was created" mail from himself, showing me I could begin my job, back then. Lastly, do you have any other idea on what I can and what I should do in this case? Responding to Answers Please, consider reading the current answers and comments. This is not a very simple case. I've considered many, many options (including all lawful ones) before considering this ones I've listed here, and I am willing to take the loss and all that. That's not the point. The point is being practical here. I will call him again and talk about it. I will do terrorism on getting lawyers and getting contract. I am ready to go all forth while I have time and energy for it. But, in practice, there is this extra thing I can do to assure myself of the work I've done. I can basically take it back and delete everything! I'd only take his password because I can find no other way to do it within Amazon. Maybe, contacting Amazon and explaining the situation? I don't know. Give me ideas on this technical side! And thank everyone for the attention and helping me clarifying the issue so far! :)

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, all the growth with virtual machines, hosted machines, hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone put it to me once, if you were starting a business right now, would you bother setting up an Exchange server to manage your email or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy. With all this momentum to having external services manage more and more of the infrastructure that’s not business unique, why would you burn up a server and license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That’s what we’re working on here at Red Gate. Right now it’s a private test, but we’re growing it and developing it and it’ll be going to a public beta, probably (hopefully) this year. I’m running it on my machines right now. The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It’s actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that’s a standard part of alerting. Your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting, keeping your data online and available, providing you with access to the information about how your servers are behaving, everything. Maybe it’s just me, but I’m really excited by this. I think we’re getting to a place where we can really help the small and medium sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins finally can set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you’re ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

  • GDB Not Skipping Functions without Debug Info

    - by Alan Lue
    I compiled GDB 7 on a Mac OS X Leopard system. When stepping through a C program, GDB fails to step through 'printf()' statements, which probably don't have associated debug information, and starts printing "Cannot find bounds of current function." Here's some output: $ /usr/local/bin/gdb try1 GNU gdb (GDB) 7.1 Copyright (C) 2010 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-apple-darwin10". (gdb) list 1 #include <stdio.h> 2 static void display(int i, int *ptr); 3 4 int main(void) { 5 int x = 5; 6 int *xptr = &x; 7 printf("In main():\n"); 8 printf(" x is %d and is stored at %p.\n", x, &x); 9 printf(" xptr holds %p and points to %d.\n", xptr, *xptr); 10 display(x, xptr); (gdb) b 6 Breakpoint 1 at 0x1e8e: file try1.c, line 6. (gdb) r Starting program: /tmp/try1 Breakpoint 1, main () at try1.c:6 6 int *xptr = &x; (gdb) n 7 printf("In main():\n"); (gdb) n 0x0000300a in ?? () (gdb) n Cannot find bounds of current function (gdb) n Cannot find bounds of current function Any idea what's going on? Alan

    Read the article

  • VS2008 EF and non crud SP usage.

    - by SteveO
    Using an edmx version of EF. My returned data is a join between tables that has a COMPOUND filter on the primary table. In essence this query is going to return a SEGMENT of law codes and descriptions that a user can tie to a Sex Offender report. I have a complex SP because Linq2SQL cannot pass in a between statement, or at least that is how I understand the error. The code itself is broken up by '-' marks, e.g. 39-13-504 "Aggravated Sexual Battery". The user wants to have a query with 4 params: 39, 13, 500, 599. Get all codes from Title 39 and Chapter 13 with parts between 500 and 599. I have the SP in place to do the work; is there a way to consume the SP within the EF? I find many blogs about SPs that only cover CRUD operations. That doesn't fit this need at all. I do not have a single table but a join to the "prior selections" table that maps the key for the code. Any pointers on how to get a READ with an SP? TIA
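
    Not an EF-specific answer, but for illustration: a ranged read like the one described can always be pulled through plain ADO.NET alongside the model, which sidesteps the designer's limits on non-CRUD procedures. The procedure, parameter and column names below are hypothetical:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class LawCodeLookup
        {
            // Hypothetical procedure and column names; adjust to the real schema.
            public static void PrintRange(string connectionString, int title, int chapter, int partFrom, int partTo)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.GetLawCodeRange", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@Title", title);
                    cmd.Parameters.AddWithValue("@Chapter", chapter);
                    cmd.Parameters.AddWithValue("@PartFrom", partFrom);
                    cmd.Parameters.AddWithValue("@PartTo", partTo);

                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}-{1}-{2}  {3}",
                                reader["Title"], reader["Chapter"], reader["Part"], reader["Description"]);
                        }
                    }
                }
            }
        }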

    Read the article

  • Debugging MinGW program with gdb on Windows, not terminating at assert failure

    - by devil
    How do I set up gdb on window so that it does not allow a program with assertion failure to terminate? I intend to check the stack trace and variables in the program. For example, running this test.cpp program compiled with MinGW 'g++ -g test.cpp -o test' in gdb: #include <cassert> int main(int argc, char ** argv) { assert(1==2); return 0; } Gives: $ gdb test.exe GNU gdb 6.8 Copyright (C) 2008 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "i686-pc-mingw32"... (gdb) r Starting program: f:\code/test.exe [New thread 4616.0x1200] Error: dll starting at 0x77030000 not found. Error: dll starting at 0x75f80000 not found. Error: dll starting at 0x77030000 not found. Error: dll starting at 0x76f30000 not found. Assertion failed: 1==2, file test.cpp, line 2 This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. Program exited with code 03. (gdb) I would like to be able to stop the program from terminating immediately, like how Visual Studio's debugger and gdb on Linux does it. I have done a search and found some stuff on trapping signals but I can't seem to find a good post on how to set up gdb to do this.

    Read the article

  • Processing a log to fix a malformed IP address ?.?.?.x

    - by skymook
    I would like to replace the first character 'x' with the number '7' on every line of a log file using a shell script. Example of the log file: 216.129.119.x [01/Mar/2010:00:25:20 +0100] "GET /etc/.... 74.131.77.x [01/Mar/2010:00:25:37 +0100] "GET /etc/.... 222.168.17.x [01/Mar/2010:00:27:10 +0100] "GET /etc/.... My humble beginnings... #!/bin/bash echo Starting script... cd /Users/me/logs/ gzip -d /Users/me/logs/access.log.gz echo Files unzipped... echo I'm totally lost here to process the log file and save it back to hd... exit 0 Why is the log file IP malformed like this? My web provider (1and1) has decide not to store IP address, so they have replaced the last number with the character 'x'. They told me it was a new requirement by 'law'. I personally think that is bs, but that would take us off topic. I want to process these log files with AWstats, so I need an IP address that is not malformed. I want to replace the x with a 7, like so: 216.129.119.7 [01/Mar/2010:00:25:20 +0100] "GET /etc/.... 74.131.77.7 [01/Mar/2010:00:25:37 +0100] "GET /etc/.... 222.168.17.7 [01/Mar/2010:00:27:10 +0100] "GET /etc/.... Not perfect I know, but least I can process the files, and I can still gain a lot of useful information like country, number of visitors, etc. The log files are 200MB each, so I thought that a shell script is the way to go because I can do that rapidly on my Macbook Pro locally. Unfortunately, I know very little about shell scripting, and my javascript skills are not going to cut it this time. I appreciate your help.

    Read the article

  • Obj-C: ++variable is increasing by two instead of one

    - by Eli Garfinkel
    I am writing a program that asks users yes/no questions to help them decide how to vote in an election. I have a variable representing the question number called questionnumber. Each time I go through the switch-break loop, I add 1 to the questionnumber variable so that the next question will be displayed. This works fine for the first two questions. But then it skips the third question and moves on to the fourth. When I have more questions in the list, it skips every other question. Somewhere, for some reason, the questionnumber variable is increasing when I don't want it to. Please look at the code below and tell me what I'm doing wrong. Thank you! Eli #import "MainView.h" #import @implementation MainView @synthesize Question; @synthesize mispar; int conservative = 0; int liberal = 0; int questionnumber = 1; - (IBAction)agreebutton:(id)sender { ++liberal; } - (IBAction)disagreebutton:(id)sender { ++conservative; } - (IBAction)nextbutton:(id)sender { ++questionnumber; switch (questionnumber) { case 2: Question.text = @"Congress should pass a law that would ban Americans from earning more than one hundred million dollars in any given year."; break; case 3: Question.text = @"It is not fair to admit people to a university or employ them on the basis of merit alone. Factors such as race, gender, class, and sexual orientation must also be considered."; break; case 4: Question.text = @"There are two Americas - one for the rich and one for the poor."; break; case 5: Question.text = @"Top quality health care should be free for all."; break; default: break; } } @end

    Read the article

  • c# web extracting programming, which libraries, examples samples please

    - by user287745
    I have just started programming and have made a few small applications in C and C#. My understanding is that programming for the web and things related to the web is nowadays a very easy task. Please note this is for personal learning, not for Rent a Coder or any money making. I want an application that can run on any Windows platform, even Windows 98. The application should start automatically at a scheduled time and do the following: connect to a site which displays a stock price summary (high, low, current, open); capture the data (excluding the other things on the site); and save it to disk (an SQL database). Please note: an Internet connection is assumed to always be there, and I do not want to know how to make the database schema or database. The stock exchange has no law prohibiting the use of the data provided on its site (I do not want to mention the name in case I am wrong), and it's for personal private use only. The price summary data is arranged in a table such that when copied and pasted into MS Excel it automatically forms a table. I need guidance on the steps, please, plus examples and libraries.

    Read the article

  • Audio output from Silverlight

    - by leecarter
    I'm looking to develop a Silverlight application which will take a stream of data (not an audio stream as such) from a web server. The data stream would then be manipulated to give audio of a certain format (G.711 a-Law, for example), which would then be converted into PCM so that additional effects can be applied (such as boosting the volume). I'm OK up to this point. I've got my data and converted the G.711 into PCM, but my problem is outputting this PCM audio to the sound card. I'm basing a solution on some C# code intended for a .NET application, but in Silverlight there is a problem with trying to take a copy of a delegate (function pointer), which will be the topic of a separate question once I've produced a simple code sample. So, the question is... How can I output the PCM audio that I have held in a data structure (currently an array) in my Silverlight application to the user? (Please don't say write the byte values to a text box.) If it were an MP3 or WMA file I would play it using a MediaElement, but I don't want to have to make it into a file, as this would put a crimp on applying dynamic effects to the audio. I've seen a few posts from people saying low-level audio support is poor/non-existent in Silverlight, so I'm open to any suggestions/ideas people may have.

    Read the article

  • gdb: Cannot find new threads: generic error

    - by Alexander Gladysh
    When I run GDB against a program which loads a .so that is linked to pthreads, GDB reports the error "Cannot find new threads: generic error". I am probably missing something in my Ubuntu configuration (it was installed from a minimal install). Any clues? $ gdb --args lua -lluarocks.require GNU gdb (GDB) 7.0-ubuntu Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>... Reading symbols from /usr/bin/lua...(no debugging symbols found)...done. (gdb) run Starting program: /usr/bin/lua -lluarocks.require Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio require 'ev' [Thread debugging using libthread_db enabled] Cannot find new threads: generic error (gdb) q A debugging session is active. Inferior 1 [process 4986] will be killed. Quit anyway? (y or n) y This function gets called on require 'ev': http://github.com/brimworks/lua-ev/blob/master/lua_ev.c#L25-65 Additional information about my system: $ uname -a Linux localhost 2.6.31-20-generic #58-Ubuntu SMP Fri Mar 12 04:38:19 UTC 2010 x86_64 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 9.10 Release: 9.10 Codename: karmic

    Read the article

  • Backdoor in OpenBSD how is it that no developer saw it ? And what about other Linux ? [closed]

    - by user310291
    It has been claimed that a backdoor was implanted in OpenBSD http://www.infoworld.com/d/developer-world/software-security-honesty-the-best-policy-285 OpenBSD is open source, so how is it that no developer in the community saw it in the source code? And how, then, can one trust all the other "open source" systems such as Linux? Of course OpenBSD is only one case; the point is not about OpenBSD, it is about open source in general. My question is not about OpenBSD per se, it's about OS source code inspection, especially of C/C++, since most of these systems are written in those languages. Also, once the source is compiled, how can one be sure that the binary really reflects the source code? If a law required a backdoor to be implanted and obliged people to deny that kind of action under the guise of security, how could you be sure that the system has not been corrupted by some tool? As said, there is a "nondisclosure agreement". My guess is that 99.99% of developers in the world are simply incapable of understanding OS source code and won't even bother to look at it. And above all, nobody wonders why the government would want such a massive backdoor, and that of course it would pressure the media to deny it.

    Read the article

  • Python's asyncore to periodically send data using a variable timeout. Is there a better way?

    - by Nick Sonneveld
    I wanted to write a server that a client could connect to and receive periodic updates without having to poll. The problem I have experienced with asyncore is that if you do not return true when dispatcher.writable() is called, you have to wait until after the asyncore.loop has timed out (default is 30s). The two ways I have tried to work around this is 1) reduce timeout to a low value or 2) query connections for when they will next update and generate an adequate timeout value. However if you refer to 'Select Law' in 'man 2 select_tut', it states, "You should always try to use select() without a timeout." Is there a better way to do this? Twisted maybe? I wanted to try and avoid extra threads. I'll include the variable timeout example here: #!/usr/bin/python import time import socket import asyncore # in seconds UPDATE_PERIOD = 4.0 class Channel(asyncore.dispatcher): def __init__(self, sock, sck_map): asyncore.dispatcher.__init__(self, sock=sock, map=sck_map) self.last_update = 0.0 # should update immediately self.send_buf = '' self.recv_buf = '' def writable(self): return len(self.send_buf) > 0 def handle_write(self): nbytes = self.send(self.send_buf) self.send_buf = self.send_buf[nbytes:] def handle_read(self): print 'read' print 'recv:', self.recv(4096) def handle_close(self): print 'close' self.close() # added for variable timeout def update(self): if time.time() >= self.next_update(): self.send_buf += 'hello %f\n'%(time.time()) self.last_update = time.time() def next_update(self): return self.last_update + UPDATE_PERIOD class Server(asyncore.dispatcher): def __init__(self, port, sck_map): asyncore.dispatcher.__init__(self, map=sck_map) self.port = port self.sck_map = sck_map self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.bind( ("", port)) self.listen(16) print "listening on port", self.port def handle_accept(self): (conn, addr) = self.accept() Channel(sock=conn, sck_map=self.sck_map) # added for variable timeout def update(self): pass def next_update(self): return None sck_map = {} server = Server(9090, sck_map) while True: next_update = time.time() + 30.0 for c in sck_map.values(): c.update() # <-- fill write buffers n = c.next_update() #print 'n:',n if n is not None: next_update = min(next_update, n) _timeout = max(0.1, next_update - time.time()) asyncore.loop(timeout=_timeout, count=1, map=sck_map)
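    Since the asker mentions Twisted: a hedged sketch of how the same periodic push could look there, with no polling, no timeout juggling, and no extra threads (Python 2 style to match the question; the class and variable names are made up):

        import time
        from twisted.internet import reactor, protocol
        from twisted.internet.task import LoopingCall

        class UpdateProtocol(protocol.Protocol):
            def connectionMade(self):
                self.factory.clients.append(self)

            def connectionLost(self, reason):
                self.factory.clients.remove(self)

        class UpdateFactory(protocol.Factory):
            protocol = UpdateProtocol

            def __init__(self):
                self.clients = []

        factory = UpdateFactory()

        def push_updates():
            # Called by the reactor every UPDATE_PERIOD seconds.
            for client in factory.clients:
                client.transport.write('hello %f\n' % time.time())

        LoopingCall(push_updates).start(4.0)
        reactor.listenTCP(9090, factory)
        reactor.run()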

    Read the article

  • Screenscraping and reverse engineering health based web tool

    - by ArbInv
    Hi. There is a publicly available free tool which has been built to help people understand the impact of various risk factors on their health / life expectancy. I am interested in understanding the data that sits behind the tool. Getting it out would require putting in a range of different socio-demographic factors and analyzing the resulting outputs, and this would need to be done across many thousands of different individual profiles. The tool was probably built on some standard BI platform. I have no interest in how the tool was built, but I do want to get to the data within it. The site has a Terms of Use Agreement which includes: not copying, distributing, adapting, creating derivative works of, translating, or otherwise modifying the said tool; not decompiling, disassembling, reverse assembling, or otherwise reverse engineering the tool. The said institution retains all rights, title and interest in and to the Tool, and any and all modifications thereof, including all copyright, copyright registrations, trade secrets, trademarks, goodwill and confidential and proprietary information related thereto. Would I, in effect, be breaking the law if I were to point a screen-scraping tool at the site and download the data that sits behind the tool in question? Any advice welcomed. Thanks!

    Read the article

  • Why does setting the margin on a div not affect the position of child content?

    - by DanM
    I'd like to understand a little more clearly how CSS margins work with divs and child content. If I try this... <div style="clear: both; margin-top: 2em;"> <input type="submit" value="Save" /> </div> ...the Save button is right up against the User Role (margin fail): If I change it to this... <div style="clear: both;"> <input style="margin-top: 2em;" type="submit" value="Save" /> </div> ...there is a gap between the Save button and the User Role (margin win): Questions: Can someone explain what I'm observing? Why doesn't putting a margin on the div cause the input to move down? Why must I put the margin on the input itself? There must be some fundamental law of CSS I am not grasping.

    Read the article

  • Python: need to get energies of charge pairs.

    - by Two786
    I am new to Python. I have to make a program for a project that takes a PDB-format file as input and returns a list of all the intra-chain and inter-chain charge pairs and their energies (using Coulomb's law, assuming a dielectric constant (ε) of 40.0). For simplicity, the charged residues for this program are just Arg (CZ), Lys (NZ), Asp (CG) and Glu (CD), with the charge-bearing atoms for each indicated in parentheses. The program should report any attractive or repulsive interactions within 8.0 Å. Here is some additional information needed for the program: Eij = energy of interaction between atoms i and j in kilocalories/mole (kcals/mol); qi = charge for atom i (+1 for Lys or Arg, -1 for Glu or Asp); rij = distance between atoms i and j in angstroms, using the distance formula. The output should adhere to the following format, where the first row holds the labels and the second row the corresponding values: First residue : Second residue Distance Energy Lys 10 Chain A: ASP 46 Chain A D= 4.76 ang E= -2.32 kcals/mol I really have no idea how to tackle this problem; any and all help is greatly appreciated. I hope this is the right place to ask. Thank you in advance. Using Python 2.5.
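    A minimal sketch of the core calculation only: the PDB parsing is left out, the coordinates below are invented sample data, and the ~332 kcal·Å/(mol·e²) conversion factor commonly used for Coulomb's law in these units is an assumption to check against the course notes:

        import math

        DIELECTRIC = 40.0
        COULOMB_K = 332.0   # approx. conversion factor, kcal*Angstrom/(mol*e^2) -- assumption
        CUTOFF = 8.0        # Angstroms

        # (residue, number, chain, charge, x, y, z) -- invented sample data for two charged atoms
        atoms = [
            ('LYS', 10, 'A', +1.0, 12.1, 4.3, 7.8),
            ('ASP', 46, 'A', -1.0, 14.0, 6.1, 9.9),
        ]

        for i in range(len(atoms)):
            for j in range(i + 1, len(atoms)):
                ni, nri, ci, qi, xi, yi, zi = atoms[i]
                nj, nrj, cj, qj, xj, yj, zj = atoms[j]
                d = math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2)
                if d <= CUTOFF:
                    e = COULOMB_K * qi * qj / (DIELECTRIC * d)
                    print('%s %d Chain %s : %s %d Chain %s  D= %.2f ang  E= %.2f kcals/mol'
                          % (ni, nri, ci, nj, nrj, cj, d, e))

    Filling the atoms list from the PDB file (keeping only the CZ/NZ/CG/CD atoms of the four residue types) is then the remaining work.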

    Read the article

  • c++ and visual studio 08, how to develop the following web extracting application. follow up of las

    - by user287745
    The purpose is to use C++ in a useful way. I have just started programming and have made a few small applications in C and C#. My understanding is that programming for the web and things related to the web is nowadays a very easy task. Please note this is for personal learning, not for Rent a Coder or any money making. I want an application that can run on any Windows platform, even Win98. The application should start automatically at a scheduled time and do the following: connect to a site which displays a stock price summary (high, low, current, open); capture the data (excluding the other things on the site); and save it to disk (a SQL database). Please note: an Internet connection is assumed to always be there, and I do not want to know how to make the database schema or database. The stock exchange has no law prohibiting the use of the data provided on its site (I do not want to mention the name in case I am wrong), and it's for personal private use only. The price summary data is arranged in a table such that when copied and pasted into MS Excel it automatically forms a table. Guidance needed. Thank you.

    Read the article

  • Interval arithmetic to correctly deal with end of month - Oracle SQL

    - by user2003974
    I need a function which will do interval arithmetic, dealing "correctly" with the different number of days in a month. For my version of "correctly" - see below! First try: select to_date('31-May-2014') + interval '1' months from dual This returns an error, because there is no 31st of June. I understand that this behaviour is expected due to the ANSI standard. Second try: select add_months(to_date('31-May-2014'),1) from dual This correctly (in my use case) returns 30th June 2014, which is great. BUT select add_months(to_date('28-Feb-2014'),1) from dual returns 31st March 2014, when I want 28th March 2014. Background: This has to do with legal deadlines. The deadlines are expressed in law as a number of months (say, 3) from a base date. If the base date is the last day of the month and three months later the month is longer, then the deadline does NOT extend to the end of the longer month (as per the add_months function). However, if the base date is the last day of the month and three months later the month is shorter, then the deadline expires on the last day of the shorter month. Question: Is there a function that does what I need? I have intervals (year to month) stored in a table, so preferably the function would look like: add_interval_correctly(basedate DATE, intervaltoadd INTERVAL YEAR TO MONTH)
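    Not an Oracle answer, but a sketch of the rule being described, written in Python only to pin the semantics down: add the months and keep the base day-of-month, clamping it only when the target month is shorter (the same logic could be wrapped in a small PL/SQL function under the asker's proposed name):

        import calendar
        import datetime

        def add_interval_correctly(base, months):
            # Keep the original day-of-month; clamp only if the target month is shorter.
            total = base.month - 1 + months
            year = base.year + total // 12
            month = total % 12 + 1
            day = min(base.day, calendar.monthrange(year, month)[1])
            return datetime.date(year, month, day)

        print(add_interval_correctly(datetime.date(2014, 5, 31), 1))   # 2014-06-30 (clamped to shorter month)
        print(add_interval_correctly(datetime.date(2014, 2, 28), 1))   # 2014-03-28 (no snap to month end)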

    Read the article

  • Installing Lubuntu 14.04.1 forcepae fails

    - by Rantanplan
    I tried to install Lubuntu 14.04.1 from a CD. First, I chose Try Lubuntu without installing, which gave: ERROR: PAE is disabled on this Pentium M (PAE can potentially be enabled with kernel parameter "forcepae" ... Following the description on https://help.ubuntu.com/community/PAE, I used forcepae and tried Try Lubuntu without installing again. That worked fine. dmesg | grep -i pae showed: [ 0.000000] Kernel command line: file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- forcepae [ 0.008118] PAE forced! In the live-CD session, I tried installing Lubuntu by double-clicking the install button on the desktop. Here, the CD starts running but then stops, and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen, using forcepae again. After a while, I received the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. For what errors should I search? And how? Finally, I rebooted once more and tried Check disc for defects with the forcepae option; no errors were found. Now I am wondering how to find the error, or whether it would be better to follow advice c in https://help.ubuntu.com/community/PAE: "Move the hard disk to a computer on which the processor has PAE capability and PAE flag (that is, almost everything else than a Banias). Install the system as usual but don't add restricted drivers. After the install move the disk back." Thanks for some hints! Perhaps some of the following can help: On Lubuntu 12.04: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2 bogomips : 1284.76 clflush size : 64 cache_alignment : 64 address sizes : 32 bits physical, 32 bits virtual power management: uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux lsb_release -a No LSB modules are available.
Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise cpuid eax in eax ebx ecx edx 00000000 00000002 756e6547 6c65746e 49656e69 00000001 000006d6 00000816 00000180 afe9f9bf 00000002 02b3b001 000000f0 00000000 2c04307d 80000000 80000004 00000000 00000000 00000000 80000001 00000000 00000000 00000000 00000000 80000002 20202020 20202020 65746e49 2952286c 80000003 6e655020 6d756974 20295228 7270204d 80000004 7365636f 20726f73 30352e31 007a4847 Vendor ID: "GenuineIntel"; CPUID level 2 Intel-specific functions: Version 000006d6: Type 0 - Original OEM Family 6 - Pentium Pro Model 13 - Stepping 6 Reserved 0 Brand index: 22 [not in table] Extended brand string: " Intel(R) Pentium(R) M processor 1.50GHz" CLFLUSH instruction cache line size: 8 Feature flags afe9f9bf: FPU Floating Point Unit VME Virtual 8086 Mode Enhancements DE Debugging Extensions PSE Page Size Extensions TSC Time Stamp Counter MSR Model Specific Registers MCE Machine Check Exception CX8 COMPXCHG8B Instruction SEP Fast System Call MTRR Memory Type Range Registers PGE PTE Global Flag MCA Machine Check Architecture CMOV Conditional Move and Compare Instructions FGPAT Page Attribute Table CLFSH CFLUSH instruction DS Debug store ACPI Thermal Monitor and Clock Ctrl MMX MMX instruction set FXSR Fast FP/MMX Streaming SIMD Extensions save/restore SSE Streaming SIMD Extensions instruction set SSE2 SSE2 extensions SS Self Snoop TM Thermal monitor 31 reserved TLB and cache info: b0: unknown TLB/cache descriptor b3: unknown TLB/cache descriptor 02: Instruction TLB: 4MB pages, 4-way set assoc, 2 entries f0: unknown TLB/cache descriptor 7d: unknown TLB/cache descriptor 30: unknown TLB/cache descriptor 04: Data TLB: 4MB pages, 4-way set assoc, 8 entries 2c: unknown TLB/cache descriptor On Lubuntu 14.04.1 live-CD with forcepae: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB physical id : 0 siblings : 1 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe bts est tm2 bogomips : 1284.68 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 32 bits virtual power management: uname -a Linux lubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.1 LTS Release: 14.04 Codename: trusty cpuid CPU 0: vendor_id = "GenuineIntel" version information (1/eax): processor type = primary processor (0) family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6) model = 0xd (13) stepping id = 0x6 (6) extended family = 0x0 (0) extended model = 0x0 (0) (simple synth) = Intel Pentium M (Dothan B1) / Celeron M (Dothan B1), 90nm miscellaneous (1/ebx): process local APIC physical ID = 0x0 (0) cpu count = 0x0 (0) CLFLUSH line size = 0x8 (8) brand index = 0x16 (22) brand id = 0x16 (22): Intel Pentium M, .13um feature information (1/edx): x87 FPU on chip = true virtual-8086 mode enhancement = true debugging extensions = true page size extensions = true time stamp counter = true RDMSR and WRMSR support = true physical address extensions = false machine check exception = true CMPXCHG8B inst. 
= true APIC on chip = false SYSENTER and SYSEXIT = true memory type range registers = true PTE global bit = true machine check architecture = true conditional move/compare instruction = true page attribute table = true page size extension = false processor serial number = false CLFLUSH instruction = true debug store = true thermal monitor and clock ctrl = true MMX Technology = true FXSAVE/FXRSTOR = true SSE extensions = true SSE2 extensions = true self snoop = true hyper-threading / multi-core supported = false therm. monitor = true IA64 = false pending break event = true feature information (1/ecx): PNI/SSE3: Prescott New Instructions = false PCLMULDQ instruction = false 64-bit debug store = false MONITOR/MWAIT = false CPL-qualified debug store = false VMX: virtual machine extensions = false SMX: safer mode extensions = false Enhanced Intel SpeedStep Technology = true thermal monitor 2 = true SSSE3 extensions = false context ID: adaptive or shared L1 data = false FMA instruction = false CMPXCHG16B instruction = false xTPR disable = false perfmon and debug = false process context identifiers = false direct cache access = false SSE4.1 extensions = false SSE4.2 extensions = false extended xAPIC support = false MOVBE instruction = false POPCNT instruction = false time stamp counter deadline = false AES instruction = false XSAVE/XSTOR states = false OS-enabled XSAVE/XSTOR = false AVX: advanced vector extensions = false F16C half-precision convert instruction = false RDRAND instruction = false hypervisor guest status = false cache and TLB information (2): 0xb0: instruction TLB: 4K, 4-way, 128 entries 0xb3: data TLB: 4K, 4-way, 128 entries 0x02: instruction TLB: 4M pages, 4-way, 2 entries 0xf0: 64 byte prefetching 0x7d: L2 cache: 2M, 8-way, sectored, 64 byte lines 0x30: L1 cache: 32K, 8-way, 64 byte lines 0x04: data TLB: 4M pages, 4-way, 8 entries 0x2c: L1 data cache: 32K, 8-way, 64 byte lines extended feature flags (0x80000001/edx): SYSCALL and SYSRET instructions = false execution disable = false 1-GB large page support = false RDTSCP = false 64-bit extensions technology available = false Intel feature flags (0x80000001/ecx): LAHF/SAHF supported in 64-bit mode = false LZCNT advanced bit manipulation = false 3DNow! PREFETCH/PREFETCHW instructions = false brand = " Intel(R) Pentium(R) M processor 1.50GHz" (multi-processing synth): none (multi-processing method): Intel leaf 1 (synth) = Intel Pentium M (Dothan B1), 90nm

    Read the article

< Previous Page | 132 133 134 135 136 137 138 139 140 141 142 143  | Next Page >