Search Results

Search found 11778 results on 472 pages for 'mark ms smith'.


  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to setup, there are several big disadvantages to this approach to executing JavaScript unit tests: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test -- you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: · Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwam – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Window Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx Creating the JavaScript Code to Test To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together: MvcApplication1\Scripts\Math.js function addNumbers(a, b) { return 5; } Notice that the addNumbers() method always returns the value 5. Right-now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (Save the file in your actual MVC application and not your MVC test application). Creating the JavaScript Test Helper Class To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods: LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests. ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test. Here’s the code for the JavaScriptTestHelper class: JavaScriptTestHelper.cs   using System; using System.IO; using Microsoft.VisualStudio.TestTools.UnitTesting; using MSScriptControl; namespace MvcApplication1.Tests { public class JavaScriptTestHelper : IDisposable { private ScriptControl _sc; private TestContext _context; /// <summary> /// You need to use this helper with Unit Tests and not /// Basic Unit Tests because you need a Test Context /// </summary> /// <param name="testContext">Unit Test Test Context</param> public JavaScriptTestHelper(TestContext testContext) { if (testContext == null) { throw new ArgumentNullException("TestContext"); } _context = testContext; _sc = new ScriptControl(); _sc.Language = "JScript"; _sc.AllowUI = false; } /// <summary> /// Load the contents of a JavaScript file into the /// Script Engine. /// </summary> /// <param name="path">Path to JavaScript file</param> public void LoadFile(string path) { var fileContents = File.ReadAllText(path); _sc.AddCode(fileContents); } /// <summary> /// Pass the path of the test that you want to execute. 
/// </summary> /// <param name="testMethodName">JavaScript function name</param> public void ExecuteTest(string testMethodName) { dynamic result = null; try { result = _sc.Run(testMethodName, new object[] { }); } catch { var error = ((IScriptControl)_sc).Error; if (error != null) { var description = error.Description; var line = error.Line; var column = error.Column; var text = error.Text; var source = error.Source; if (_context != null) { var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column); _context.WriteLine(details); } } throw new AssertFailedException(error.Description); } } public void Dispose() { _sc = null; } } }     Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (These are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests). Creating the JavaScript Unit Test Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder: MvcApplication1.Tests\JavaScriptTests\MathTest.js /// <reference path="JavaScriptUnitTestFramework.js"/> function testAddNumbers() { // Act var result = addNumbers(1, 3); // Assert assert.areEqual(4, result, "addNumbers did not return right value!"); }   The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptnitTestFramework.js to the same folder as the MathTest.js file: MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js var assert = { areEqual: function (expected, actual, message) { if (expected !== actual) { throw new Error("Expected value " + expected + " is not equal to " + actual + ". " + message); } } }; There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests. Deploying the JavaScript Test Files This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests: You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. 
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTest folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog. Creating the Visual Studio Unit Test The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (Do not select the Basic Unit Test project item): The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test: [TestMethod] public void TestAddNumbers() { var jsHelper = new JavaScriptTestHelper(this.TestContext); // Load JavaScript files jsHelper.LoadFile("JavaScriptUnitTestFramework.js"); jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js"); jsHelper.LoadFile("MathTest.js"); // Execute JavaScript Test jsHelper.ExecuteTest("testAddNumbers"); } This code uses the JavaScriptTestHelper to load three files: JavaScripUnitTestFramework.js – Contains the assert functions. Math.js – Contains the addNumbers() function from your MVC application which is being tested. MathTest.js – Contains the JavaScript unit test function. Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function. Running the Visual Studio JavaScript Unit Test After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests: (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.) The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution then you will get the following results: Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed: Summary The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window. 
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac
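To see the core mechanism in isolation, here is a minimal console sketch of the AddCode()/Run() flow described above, outside of MS Test. It assumes the same COM reference to Microsoft Script Control 1.0 (MSScriptControl); the program and the inline script are illustrative only and are not part of the download above.

using System;
using MSScriptControl;

class ScriptControlSketch
{
    static void Main()
    {
        // Create the JScript engine the same way the JavaScriptTestHelper does.
        var sc = new ScriptControl();
        sc.Language = "JScript";
        sc.AllowUI = false;

        // AddCode() loads JavaScript source into the engine.
        sc.AddCode("function addNumbers(a, b) { return a + b; }");
        sc.AddCode("function testAddNumbers() { return addNumbers(1, 3) === 4; }");

        // Run() executes a loaded function by name and returns its result.
        object passed = sc.Run("testAddNumbers", new object[] { });
        Console.WriteLine("testAddNumbers passed: {0}", passed);
    }
}

The JavaScriptTestHelper class shown earlier wraps exactly this sequence and adds error reporting through the Test Context.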

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
    I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store!  Now, some people get all excited about the IDE and it’s new features or about changes to WPF and Silver Light and yes, those are all very fine and grand.  But me, I get all excited about things that tend to affect my life on the backside of development.  That’s why when I heard there were going to be concurrent container implementations in the latest version of .NET I was salivating like Pavlov’s dog at the dinner bell. They seem so simple, really, that one could easily overlook them.  Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have either been optimized with very efficient, limited, or no locking but are still completely thread safe -- and I just had to see what kind of an improvement that would translate into. Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing.  Of course, they also rolled out a whole parallel development framework which I won’t get into in this post but will cover bits and pieces of as time goes by. This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism.  So I set about to run a processing test with a series of producers and consumers that would be either processing a traditional System.Collections.Generic.Queue or a System.Collection.Concurrent.ConcurrentQueue. Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer.  The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we’re testing in a thread-safe manner: 1: internal class Producer 2: { 3: public int Iterations { get; set; } 4: public Action<string> ProduceDelegate { get; set; } 5: 6: public void Produce() 7: { 8: for (int i = 0; i < Iterations; i++) 9: { 10: ProduceDelegate(“Hello”); 11: } 12: } 13: } Then likewise, I created a consumer that took a Func<string> that would read from whichever queue we’re testing and return either the string if data exists or null if not.  Then, if the item doesn’t exist, it will do a 10 ms wait before testing again.  Once all the producers are done and join the main thread, a flag will be set in each of the consumers to tell them once the queue is empty they can shut down since no other data is coming: 1: internal class Consumer 2: { 3: public Func<string> ConsumeDelegate { get; set; } 4: public bool HaltWhenEmpty { get; set; } 5: 6: public void Consume() 7: { 8: bool processing = true; 9: 10: while (processing) 11: { 12: string result = ConsumeDelegate(); 13: 14: if(result == null) 15: { 16: if (HaltWhenEmpty) 17: { 18: processing = false; 19: } 20: else 21: { 22: Thread.Sleep(TimeSpan.FromMilliseconds(10)); 23: } 24: } 25: else 26: { 27: DoWork(); // do something non-trivial so consumers lag behind a bit 28: } 29: } 30: } 31: } Okay, now that we’ve done that, we can launch threads of varying numbers using lambdas for each different method of production/consumption.  
First let's look at the lambdas for a typical System.Collections.Generics.Queue with locking: 1: // lambda for putting to typical Queue with locking... 2: var productionDelegate = s => 3: { 4: lock (_mutex) 5: { 6: _mutexQueue.Enqueue(s); 7: } 8: }; 9:  10: // and lambda for typical getting from Queue with locking... 11: var consumptionDelegate = () => 12: { 13: lock (_mutex) 14: { 15: if (_mutexQueue.Count > 0) 16: { 17: return _mutexQueue.Dequeue(); 18: } 19: } 20: return null; 21: }; Nothing new or interesting here.  Just typical locks on an internal object instance.  Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library: 1: // lambda for putting to a ConcurrentQueue, notice it needs no locking! 2: var productionDelegate = s => 3: { 4: _concurrentQueue.Enqueue(s); 5: }; 6:  7: // lambda for getting from a ConcurrentQueue, once again, no locking required. 8: var consumptionDelegate = () => 9: { 10: string s; 11: return _concurrentQueue.TryDequeue(out s) ? s : null; 12: }; So I pass each of these lambdas and the number of producer and consumers threads to launch and take a look at the timing results.  Basically I’m timing from the time all threads start and begin producing/consuming to the time that all threads rejoin.  I won't bore you with the test code, basically it just launches code that creates the producers and consumers and launches them in their own threads, then waits for them all to rejoin.  The following are the timings from the start of all threads to the Join() on all threads completing.  The producers create 10,000,000 items evenly between themselves and then when all producers are done they trigger the consumers to stop once the queue is empty. These are the results in milliseconds from the ordinary Queue with locking: 1: Consumers Producers 1 2 3 Time (ms) 2: ---------- ---------- ------ ------ ------ --------- 3: 1 1 4284 5153 4226 4554.33 4: 10 10 4044 3831 5010 4295.00 5: 100 100 5497 5378 5612 5495.67 6: 1000 1000 24234 25409 27160 25601.00 And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary: 1: Consumers Producers 1 2 3 Time (ms) 2: ---------- ---------- ------ ------ ------ --------- 3: 1 1 3647 3643 3718 3669.33 4: 10 10 2311 2136 2142 2196.33 5: 100 100 2480 2416 2190 2362.00 6: 1000 1000 7289 6897 7061 7082.33 Note that even though obviously 2000 threads is quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly. I love the new concurrent collections, they look so much simpler without littering your code with the locking logic, and they perform much better.  All in all, a great new toy to add to your arsenal of multi-threaded processing!
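The harness code is omitted above, so the following is only a rough sketch of what such a driver might look like, reusing the Producer and Consumer classes shown earlier. The class name, the even iteration split and the Stopwatch timing are assumptions for illustration, not the author's actual test code.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

internal static class QueueBenchmark
{
    public static TimeSpan Run(int producerCount, int consumerCount, int totalItems,
                               Action<string> produce, Func<string> consume)
    {
        // Build the producers and consumers around the supplied delegates.
        var producers = new List<Producer>();
        for (int i = 0; i < producerCount; i++)
        {
            producers.Add(new Producer
            {
                Iterations = totalItems / producerCount,
                ProduceDelegate = produce
            });
        }

        var consumers = new List<Consumer>();
        for (int i = 0; i < consumerCount; i++)
        {
            consumers.Add(new Consumer { ConsumeDelegate = consume });
        }

        var watch = Stopwatch.StartNew();

        // Start consumers first so they are draining while producers fill the queue.
        var consumerThreads = new List<Thread>();
        foreach (var c in consumers)
        {
            var t = new Thread(c.Consume);
            consumerThreads.Add(t);
            t.Start();
        }

        var producerThreads = new List<Thread>();
        foreach (var p in producers)
        {
            var t = new Thread(p.Produce);
            producerThreads.Add(t);
            t.Start();
        }

        // Wait for the producers, then tell the consumers to stop once the queue drains.
        foreach (var t in producerThreads) t.Join();
        foreach (var c in consumers) c.HaltWhenEmpty = true;
        foreach (var t in consumerThreads) t.Join();

        watch.Stop();
        return watch.Elapsed;
    }
}

For example, QueueBenchmark.Run(1000, 1000, 10000000, productionDelegate, consumptionDelegate) would roughly correspond to the last row of each results table.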

    Read the article

  • SSIS - Access Denied with UNC paths - The file name is a device or contains invalid characters

    - by simonsabin
    I spent another day tearing my hair out yesterday trying to resolve an issue with SSIS packages running in SQL Agent (not got much hair left at the moment, maybe I should contact the SSIS team for a wig). My situation was that I am deploying packages to a development server, and to provide isolation I was running jobs with a proxy account that only had access to the development servers. Proxies are an awesome feature and mean that you should never have to "just run the job as sysadmin". The issue I was facing was that the job step was failing. The job step was a simple execution of the package. The following errors appeared in my log file. I always check the "Log step output in history" option for a job step; this ensures you get all the output from the command that you run. I'll blog about this later. If you look at the output in sysdtslog90 then you will have an entry with datacode -1073573533 and the error message File or directory "<filename>" represented by connection "<connection>" does not exist. Not exactly helpful. If you get the output from the console then you will also get these errors: 0xC0202070 "The file name property is not valid. The file name is a device or contains invalid characters." 0xC001401E "specified in the connection was not valid." It appears this error is due to the use of a UNC path and the account running the package not having access to all the folders in the path. Solution To solve this you need to ensure that the proxy account has access to ALL folders in the path you are accessing. To check this, log on as the relevant proxy user, or run a command window as that user. Then try net use \\server\share and then do a dir for each folder in the path and check you have access. If these work and you still have the problem then you have some other problem, sorry. The following posts on Experts Exchange also discuss this: http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SSIS/Q_24056047.html and http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SSIS/Q_23968903.html This blog had a post about it being a 64-bit issue. That definitely wasn't the issue for me as I was on a 32-bit server: http://blogs.perkinsconsulting.com/post/64-bit-SQL-Server-2005-SSIS-and-UNC-paths-Part-2.aspx
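If you end up doing this check often, a small console sketch like the one below (not from the original post) automates it: run it while logged on as the proxy account and it walks each folder level of a UNC path, reporting where access stops. The example path is hypothetical.

using System;
using System.IO;

class UncAccessCheck
{
    static void Main(string[] args)
    {
        // Hypothetical example path - substitute the folder path your package uses.
        string path = args.Length > 0 ? args[0] : @"\\server\share\folder\sub";

        string[] parts = path.TrimStart('\\').Split('\\');

        // Start from \\server\share and add one segment at a time.
        string current = @"\\" + parts[0] + @"\" + parts[1];
        CheckFolder(current);

        for (int i = 2; i < parts.Length; i++)
        {
            current = Path.Combine(current, parts[i]);
            CheckFolder(current);
        }
    }

    static void CheckFolder(string folder)
    {
        try
        {
            Directory.GetDirectories(folder); // forces an access check on the folder
            Console.WriteLine("OK     : " + folder);
        }
        catch (Exception ex)
        {
            Console.WriteLine("DENIED : {0} ({1})", folder, ex.Message);
        }
    }
}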

    Read the article

  • Example WLST Script to Obtain JDBC and JTA MBean Values

    - by Daniel Mortimer
    Introduction Following on from the blog entry "Get an Offline or Online WebLogic Domain Summary Using WLST!", I have had a request to create a smaller example which only collects a selection of JDBC (System Resource) and JTA configuration and runtime MBean values. So, here it is. Download Sample Script You can grab the sample script by clicking here. Instructions to Run: 1. After download, extract the zip to the machine hosting the WebLogic environment. You should have three directories along with a readme.txt: output, Sample_Output and scripts. 2. In the scripts directory, find the start wrapper script startWLSTJDBCSummarizer.sh (Unix) or startWLSTJDBCSummarizer.cmd (MS Windows). Open the appropriate file in an editor and change the environment variable settings to suit your system. Example - startWLSTJDBCSummarizer.cmd set WL_HOME=D:\product\FMW11g\wlserver_10.3 set DOMAIN_HOME=D:\product\FMW11g\user_projects\domains\MyDomain set WLST_OUTPUT_PATH=D:\WLSTDomainSummarizer\output\ set WLST_OUTPUT_FILE=WLST_JDBC_Summary_Via_MBeans.html call "%WL_HOME%\common\bin\wlst.cmd" WLS_JDBC_Summary_Online.py Note: The WLST_OUTPUT_PATH directory value must have a trailing slash. If there is no trailing slash, the script will error and not continue. 3. Run the shell / command line wrapper script. It should launch WLST and kick off "WLS_JDBC_Summary_Online.py". This will present you with some prompts, e.g. Is your domain Admin Server up and running and do you have the connection details? (Y/N): Y Enter connection URL to Admin Server e.g t3://mymachine.acme.com:7001 : t3://localhost:7001 Enter weblogic username: weblogic Enter weblogic username password (function prompt 1): welcome1 (Note: the value typed in for the password will not be echoed back to the console). 4. If the script runs successfully, you should get an HTML summary in the specified output directory. See example screenshots below: Screenshot 1 - JDBC System Resource Tab Page Screenshot 2 - JTA Tab Page 5. For the HTML to render correctly, ensure the .js and .css files provided (review the output directory created by the zip file extraction) are accessible. For example, to view the HTML locally (without using a web server), place the HTML output, jquery-ui.js, spry.js and wlstsummarizer.css in the same directory. Disclaimer This is a sample script. I have tested it against WebLogic Server 10.3.6 domains on MS Windows and Unix. I cannot guarantee that the script will run error free or produce the expected output on your system. If you have any feedback add a comment to the blog. I will endeavour to fix any problems with my WLST code. Credits JQuery: http://jquery.com/ Spry (Adobe): https://github.com/adobe/Spry Cool headings with pseudo-elements: http://www.red-team-design.com/cool-headings-with-pseudo-elements

    Read the article

  • Michael Crump's notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5

    - by mbcrump
    TIME TO GO PRO! This is my notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5 I created it using several resources (various certification web sites, msdn, official ms 70-548 book). The reason that I created this review is because a) I am taking the exam. b) MS did not create a book for this exam. Use the(MS 70-548)book. c) To make sure I am familiar with each before the exam. I hope that it provides a good start for your own notes. I hope that someone finds this useful. At least, it will give you a starting point of what to expect to know on the PRO exam. Also, for those wondering, the PRO exam does contains very little code. It is basically all theory. 1. Validation Controls – How to prevent users from entering invalid data on forms. (MaskedTextBox control and RegEx) 2. ServiceController – used to start and control the behavior of existing services. 3. User Feedback (know winforms Status Bar, Tool Tips, Color, Error Provider, Context-Sensitive and Accessibility) 4. Specific (derived) exceptions must be handled before general (base class) exceptions. By moving the exception handling for the base type Exception to after exception handling of ArgumentNullException, all ArgumentNullException thrown by the Helper method will be caught and logged correctly. 5. A heartbeat method is a method exposed by a Web service that allows external applications to check on the status of the service. 6. New users must master key tasks quickly. Giving these tasks context and appropriate detail will help. However, advanced users will demand quicker paths. Shortcuts, accelerators, or toolbar buttons will speed things along for the advanced user. 7. MSBuild uses project files to instruct the build engine what to build and how to build it. MSBuild project files are XML files that adhere to the MSBuild XML schema. The MSBuild project files contain complete file, build action, and dependency information for each individual projects. 8. Evaluating whether or not to fix a bug involves a triage process. You must identify the bug's impact, set the priority, categorize it, and assign a developer. Many times the person doing the triage work will assign the bug to a developer for further investigation. In fact, the workflow for the bug work item inside of Team System supports this step. Developers are often asked to assess the impact of a given bug. This assessment helps the person doing the triage make a decision on how to proceed. When assessing the impact of a bug, you should consider time and resources to fix it, bug risk, and impacts of the bug. 9. In large projects it is generally impossible and unfeasible to fix all bugs because of the impact on schedule and budget. 10. Code reviews should be conducted by a technical lead or a technical peer. 11. Testing Applications 12. WCF Services – application state 13. SQL Server 2005 / 2008 Express Edition – reliable storage of data / Microsoft SQL Server 3.5 Compact Database– used for client computers to retrieve and save data from a shared location. 14. SQL Server 2008 Compact Edition – used for minimum possible memory and can synchronize data with a corporate SQL Server 2008 Database. Supports offline user and minimum dependency on external components. 15. MDI and SDI Forms (specifically IsMDIContainer) 16. GUID – in the case of data warehousing, it is important to define unique keys. 17. Encrypting / Security Data 18. Understanding of Isolated Storage/Proper location to store items 19. LINQ to SQL 20. Multithreaded access 21. 
ADO.NET Entity Framework model 22. Marshal.ReleaseComObject 23. Common User Interface Layout (ComboBox, ListBox, ListView, MaskedTextBox, TextBox, RichTextBox, SplitContainer, TableLayoutPanel, TabControl) 24. DataSet class - http://msdn.microsoft.com/en-us/library/system.data.dataset%28VS.71%29.aspx 25. SQL Server 2008 Reporting Services (SSRS) 26. SystemIcons.Shield (Vista UAC) 27. Leveraging stored procedures to perform data manipulation for a database schema that can change. 28. DataContext 29. Microsoft Windows Installer packages, ClickOnce (bootstrapping features), XCopy. 30. Client Application Services – will authenticate users by using the same data source as an ASP.NET web application. 31. SQL Server 2008 Caching 32. StringBuilder 33. Accessibility Guidelines for Windows Applications http://msdn.microsoft.com/en-us/library/ms228004.aspx 34. Logging errors 35. Testing performance related issues. 36. Role Based Security, GenericIdentity and GenericPrincipal (see the sketch after these notes) 37. System.Net.CookieContainer will store session data for web apps (see isolated storage for WinForms) 38. .NET CLR Profiler tool will identify objects that cause performance issues. 39. ADO.NET Synchronization (SyncGroup) 40. Globalization - CultureInfo 41. IDisposable Interface - expect several questions relating to this. 42. Adding timestamps to determine whether data has changed or not. 43. Converting applications to .NET Framework 3.5 44. MicrosoftReportViewer 45. Composite Controls 46. Windows Vista KNOWN folders. 47. Microsoft Sync Framework 48. TypeConverter - Provides a unified way of converting types of values to other types, as well as for accessing standard values and sub properties. http://msdn.microsoft.com/en-us/library/system.componentmodel.typeconverter.aspx 49. Concurrency control mechanisms. The main categories of concurrency control mechanisms are: Optimistic - Delay the checking of whether a transaction meets the isolation rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort the transaction if the desired rules are violated. Pessimistic - Block operations of a transaction if they may cause violation of the rules. Semi-optimistic - Block operations in some situations, and do not block in other situations, while delaying rules checking to the transaction's end, as done with optimistic. 50. AutoResetEvent 51. Microsoft Message Queuing (MSMQ) 4.0 52. Bulk imports 53. KeyDown event of controls 54. WPF UI components 55. UI process layer 56. GAC (installing, removing and queuing) 57. Use a local database cache to reduce the network bandwidth used by applications. 58. Sound can easily be annoying and distracting to users, so use it judiciously. Always give users the option to turn sound off. Because a user might have sound off, never convey important information through sound alone.
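For topic 36, here is a minimal refresher sketch of role-based security with GenericIdentity and GenericPrincipal; the user name and role names are made up for illustration.

using System;
using System.Security.Principal;
using System.Threading;

class RoleBasedSecuritySketch
{
    static void Main()
    {
        // Wrap an authenticated user name in a GenericIdentity, then attach
        // a GenericPrincipal (identity + roles) to the current thread.
        var identity = new GenericIdentity("jsmith");
        var principal = new GenericPrincipal(identity, new[] { "Clerk", "Manager" });
        Thread.CurrentPrincipal = principal;

        // Imperative role check anywhere downstream in the application.
        if (Thread.CurrentPrincipal.IsInRole("Manager"))
        {
            Console.WriteLine("{0} may approve invoices.", identity.Name);
        }
    }
}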

    Read the article

  • Product Naming Conventions - Does it make sense

    - by NeilHambly
    Maybe it’s just me, but with several MS products being released in 2010 carrying “2010” in their product names, does the name chosen for the latest SQL Server release make sense? The release that is just about to ship is called “SQL Server 2008 R2”. My question is: do you think this product name is good, bad or just plain confusing? IMHO we would have been better placed if it had been named “SQL Server 2010”...(read more)

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads so that often it's not that critical to test load performance. This post is not meant to make a point  or even come to a conclusion which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API ASP.NET MVC WebForms ASP.NET WebPages ASMX AJAX Services  (couldn't get AJAX/JSON to run on IIS8 ) WCF Rest Raw ASP.NET HttpHandlers It's quite a mixed bag, of course and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity more than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did is test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simple does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression:<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
Here's the simple HelloWorld.cshtml page:Hello World @DateTime.Now WCF REST WCF REST was the token REST implementation for ASP.NET before WebAPI and the inbetween step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class: [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } } WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client. ASP.NET AJAX (ASMX Services) I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012  removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box). ASP.NET MVC Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller: public class MvcPerformanceController : Controller { public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } } ASP.NET WebAPI Next up is WebAPI which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response: public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } } Testing Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line  looks like this: ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this:   It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of varience. You might also want to do an IISRESET to clear the Web Server. 
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test: 100,000 iterations Load factor of 20 concurrent connections IISReset before starting A short warm up run for API and MVC to make sure startup cost is mitigated Here is the batch file I used for the test: IISRESET REM make sure you add REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin REM to your path so ab.exe can be found REM Warm up ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJsonab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests but the values used here are the medians. Part of this has to do with the fact I ran the tests on my local machine - result would probably more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop which is a Dell XPS with quad core Sandibridge I7-2720QM @ 2.20ghz and a fast SSD drive on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly IIS 7 and 8 on client OSs don't throttle so the performance here should be Results Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performances of the data engines. All it really does is give you an idea of the raw throughput for the technology from time of request to reaching the endpoint and returning minimal text data back to the client which indicates full round trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on it, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling. 
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served. Another test - JSON Data Service Results The second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since that doesn't make a ton of sense to return data from the them (OTOH, returning text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this: public class Person { public Person() { Id = 10; Name = "Rick"; Entered = DateTime.Now; } public int Id { get; set; } public string Name { get; set; } public DateTime Entered { get; set; } } Here are the updated handler classes that use Person: Handler public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { var action = context.Request.QueryString["action"]; if (action == "json") JsonRequest(context); else TextRequest(context); } public void TextRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public void JsonRequest(HttpContext context) { var json = JsonConvert.SerializeObject(new Person(), Formatting.None); context.Response.ContentType = "application/json"; context.Response.Write(json); } public bool IsReusable { get { return true; } } } This code adds a little logic to check for a action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response. WCF REST   [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } [OperationContract] [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)] public Person HelloWorldJson() { // Add your operation implementation here return new Person(); } } For WCF REST all I have to do is add a method with the Person result type.   ASP.NET MVC public class MvcPerformanceController : Controller { // // GET: /MvcPerformance/ public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } public JsonResult HelloWorldJson() { return Json(new Person(), JsonRequestBehavior.AllowGet); } } For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer. ASP.NET WebAPI public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. 
Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } [HttpGet] public Person HelloWorldJson() { return new Person(); } [HttpGet] public HttpResponseMessage HelloWorldJson2() { var response = new HttpResponseMessage(HttpStatusCode.OK); response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter); return response; } } Testing and Results To run these data requests I used the following ab.exe commands:REM JSON RESPONSES ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results:   The performance for each technology drops a little bit except for WebAPI which is up quite a bit! From this test it appears that WebAPI is actually significantly better performing returning a JSON response, rather than a plain string response. Snag with Apache Benchmark and 'Length Failures' I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows performance improved significantly from with JSON results from 5580 to 6530 or so which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Check out this report: Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached and runiisning the ab.exe test by using the -X switch: ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson which showed that indeed all requests where returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example: these two results: {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"} {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"} are different in length for the number which results in 68 and 69 bytes respectively. The same URL produces different result lengths which is what ab.exe reports. I didn't notice at first bit the same is happening when running the ASHX handler with JSON.NET result since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though. Another interesting Side Note: Perf drops over Time As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabalize somewhere around 5900 req/sec occasionally jumping lower. 
For these tests this is why I did the IISRESET and warm up for individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - and I'm curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again… Summary As I stated at the outset, these were informal tests to satiate my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST which was by far the slowest and the raw handler which was by far the highest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance but also about the adequacy for the job and the ease of implementation. The strengths of each technology will make up for any minor performance difference we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Often times old stuff that's been optimized and designed for a time of less horse power can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0 which is great. To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my Tweets late last week regarding Web API's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts with serialized JSON and XML, Web API actually performs better. But I do wonder what would make generic string content slower than serialized output? This stresses another point: Don't take a single test as the final gospel and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediate that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff… Resources Source Code on GitHub Apache HTTP Server Project (ab.exe is part of the binary distribution) © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Web API
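As a small follow-up to the testing process above (not part of Rick's project): a helper along these lines can pull the "Requests per second" figure out of each ab.exe output file in a folder and list them for comparison. The folder layout and file naming used here are assumptions.

using System;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class AbResultSummary
{
    static void Main(string[] args)
    {
        // Folder containing the .txt files written by the ab.exe batch file.
        string folder = args.Length > 0 ? args[0] : ".";

        // ab.exe writes a line such as: "Requests per second:    4524.31 [#/sec] (mean)"
        var regex = new Regex(@"Requests per second:\s+([\d.]+)");

        var results = Directory.GetFiles(folder, "*.txt")
            .Select(file => new
            {
                Name = Path.GetFileNameWithoutExtension(file),
                Match = regex.Match(File.ReadAllText(file))
            })
            .Where(x => x.Match.Success)
            .Select(x => new
            {
                x.Name,
                Rps = double.Parse(x.Match.Groups[1].Value, CultureInfo.InvariantCulture)
            });

        foreach (var result in results.OrderByDescending(r => r.Rps))
            Console.WriteLine("{0,-20} {1,10:N1} req/sec", result.Name, result.Rps);
    }
}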

    Read the article

  • Challenges in multi-player Android Game Server with RESTful Nature

    - by Kush
    I'm working on an Android game based on Contract Bridge as part of my college summer internship project. The game will be multi-player, with 4 Android devices playing together, so there's no bot or CPU player to be developed. When I got the project, I realized that most of the students had already worked on it, but none of their work is reusable now (for a variety of reasons: undocumented code and design architecture, different platform implementations). I have experience working on several open source projects, so I want to approach this project in a way that makes the components I build as reusable as possible. Since the game is multi-player and the entire game progress will be handled on the server, I'm currently working on the server's design. Because I want the game server to be reusable by any client platform, I was previously torn between Sockets and REST for the server's design, but finally settled on REST APIs. Now, since I have to keep all players in sync while they make moves in the game, I plan to use a database on the server to keep all players' progress, specific to each table (in Bridge, 4 players play at a single table, and the server will handle many such game tables). I don't know if it's an appropriate decision to use a database as the shared medium to track the progress of each game table (let me know if there's a better option). Obviously, when the game at a table is completed, the data for that table in the server's database is discarded. The problem is that access to a REST service is an HTTP call, so as long as a client doesn't make a request the server remains idle. Consider a situation where a player has played a card on his device, and the device asks the server to apply this change. I now need to let the other three devices know that the player has played a card, and also update the view on their devices. AFAIK, REST cannot provide a push-notification system of sorts, since the connection to the server is not persistent. One solution I thought of is to make each device constantly poll the server for changes (say every 56 ms) and, when changes are found, reflect them on the device, but I feel this is not an elegant way, as every HTTP request is expensive. (And I chose REST to make the gameplay experience robust, since a mobile device tends to get disconnected from the Internet, and with a socket-like persistent connection the entire game progress would be subject to loss. Also, portability on the client end is important.) Imagining a situation where 10 game tables are in progress and 40 players are playing, the server must be able to handle a flood of HTTP requests from all the devices hitting it every 56 ms, and I wonder whether that would look like a DoS attack. So, given the situation, am I on the right track for the server design? I want to be sure before I proceed much further with the code.
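    A common way around the constant-polling problem described above is long polling, where the client's request simply blocks on the server until something changes or a timeout expires. The sketch below is purely illustrative; the question doesn't name a server stack, so it uses C#/ASP.NET Web API as an assumed example, and the controller, in-memory store, and timeout values are all hypothetical.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Http;

// Hypothetical long-polling controller: a device's GET is held open (up to a
// timeout) until another player posts a move for the same table, instead of
// every device polling the server every few milliseconds.
public class TableEventsController : ApiController
{
    // One pending "something happened" signal per game table.
    // In a real server this would live in a proper shared store.
    private static readonly ConcurrentDictionary<int, TaskCompletionSource<string>> Pending =
        new ConcurrentDictionary<int, TaskCompletionSource<string>>();

    [HttpGet]
    public async Task<string> GetNextMove(int tableId)
    {
        var tcs = Pending.GetOrAdd(tableId, _ => new TaskCompletionSource<string>());

        // Hold the request open for up to 30 seconds waiting for a move.
        var finished = await Task.WhenAny(tcs.Task, Task.Delay(TimeSpan.FromSeconds(30)));
        return finished == tcs.Task ? tcs.Task.Result : "no-change";
    }

    [HttpPost]
    public void PostMove(int tableId, [FromBody] string move)
    {
        TaskCompletionSource<string> tcs;
        if (Pending.TryRemove(tableId, out tcs))
        {
            tcs.TrySetResult(move);   // wake up every device waiting on this table
        }
    }
}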

    Read the article

  • Announcing Oracle Enterprise Content Management Suite 11g

    - by [email protected]
    Today Oracle announced Oracle Enterprise Content Management Suite 11g. This is a major release for us, and reinforces our three key themes at Oracle:

Complete
New in this release - Oracle ECM Suite 11g is built on a single, unified repository. Every piece of content - documents, HTML pages, digital assets, scanned images - is stored and accessible directly from the repository, whether you are working on websites, creating brand logos, processing accounts payable invoices, or running records and retention functions. It makes complete, end-to-end management of content possible, from the point it enters the organization, through its entire lifecycle. Also new in this release, the installation, access, monitoring and administration of Oracle ECM Suite 11g is centralized. As a complete system, organizations can lower the costs of training and usage by having a centralized source of information that is easily administered. As part of this new unified repository release, Oracle has released a benchmarking white paper that shows the extreme performance and scalability of Oracle ECM Suite. When tested on a two-node UCM Server running on Sun Oracle DB Machine Half Rack hardware with an Exadata storage server, Oracle ECM Suite 11g is able to ingest over 178 million documents per day.

Open
Oracle ECM Suite 11g is built on a service-oriented architecture. All functions are available through standards-based service calls in Web Services or Java. In this release Oracle unveils Open Web Content Management, a revolutionary approach to web content management that decouples the content management process from the process of creating web applications. One piece of this approach is our one-click web content management: with one click, a web application builder can drag content services into their application, enabling their users to also edit content with just one click. Open Web Content Management is also open because it enables Web developers to add Web content management to new and existing JavaServer Pages (JSP), JavaServer Faces (JSF) and Oracle Application Development Framework (ADF) Faces applications. Open content distribution - Oracle ECM Suite 11g offers flexible deployment options with a built-in smart cache, so organizations can deliver Web sites or Web applications without requiring Oracle ECM Suite as part of the delivery system.

Integrated
Oracle ECM Suite 11g also offers a series of next generation desktop integrations, such as:
New MS Office integration with menus to access managed content, insert managed links, and compare managed documents using standard MS Office reviewing tools
Automatic identity tagging of documents on download - to help users understand which versions they are viewing and prevent duplicate content items in the content repository
New "smart productivity folders" to show a user's workflow inbox, saved searches and checked-out content directly from Windows Explorer
Drag and drop metadata pop-ups
Check-in and check-out for all file formats with any standard WebDAV server
As part of Oracle's Enterprise Application Documents initiative, Oracle Content Management 11g also provides certified application integrations with solution templates.

You can read the press release here. You can see more assets at the launch center here. You can sign up for the announcement webinar and hear more about the new features here. You can read the benchmarking study here.

    Read the article

  • HTML5/JS - Choppy Game Loop

    - by Rikonator
    I have been experimenting with HTML5/JS, trying to create a simple game, when I hit a wall. My choice of game loop is too choppy to actually be of any use in a game. I'm trying for a fixed time step loop, rendering only when required. I simply use requestAnimationFrame to run Game.update, which finds the elapsed time since the last update and calls State.update to update and render the current state.

State.prototype.update = function(ms) {
    // Accumulate elapsed time and run fixed-size update steps.
    this.ticks += ms;
    var updates = 0;
    while(this.ticks >= State.DELTA_TIME && updates < State.MAX_UPDATES) {
        this.updateState();
        this.updateFrameTicks += State.DELTA_TIME;
        this.updateFrames++;
        if(this.updateFrameTicks >= 1000) {
            this.ups = this.updateFrames;
            this.updateFrames = 0;
            this.updateFrameTicks -= 1000;
        }
        this.ticks -= State.DELTA_TIME;
        updates++;
    }
    // Render only if at least one update actually ran.
    if(updates > 0) {
        this.renderFrameTicks += updates*State.DELTA_TIME;
        this.renderFrames++;
        if(this.renderFrameTicks >= 1000) {
            this.rps = this.renderFrames;
            this.renderFrames = 0;
            this.renderFrameTicks -= 1000;
        }
        this.renderState(updates*State.DELTA_TIME);
    }
};

But this strategy does not work very well. This is the result: http://jsbin.com/ukosuc/1 (Edit). As is apparent, the 'game' has fits of lag, and when you tab out for a long period and come back, the 'game' behaves unexpectedly - it updates faster than intended. This is either a problem with something about game loops that I don't quite understand yet, or an implementation problem I can't pinpoint. I haven't been able to solve this despite attempting several variations using setTimeout and requestAnimationFrame. (One such example is http://jsbin.com/eyarod/1/edit). Some help and insight would really be appreciated!

    Read the article

  • What's new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on Visual Studio Team System (VSTS) ALM-related parts:

Visual Studio Ultimate 2010
NEW: IntelliTrace® (aka the historical debugger)
NEW: Architecture Tools - New Project Type: Modeling Project; UML Diagrams (UML Use Case Diagram, UML Class Diagram, UML Sequence Diagram (supports reverse engineering), UML Activity Diagram, UML Component Diagram); Layer Diagram (with Team Build integration for layer validation); Architecture Explorer; Dependency visualization; DGML
Web & Load Tests

Visual Studio Premium 2010
NEW: Architecture Tools - Read-only model viewer
Development Tools - Code Analysis (New Rules like SQL Injection detection, Rule Sets); Code Profiler (Multi-Tier Profiling, JScript Profiling, Profiling applications on virtual machines in sampling mode); Code Metrics
Test Tools - Code Coverage; NEW: Test Impact Analysis; NEW: Coded UI Test
Database Tools (DB schema versioning & deployment)

Visual Studio Professional 2010
Debugger - Mixed Mode Debugging for 64-bit Applications; Export/Import of Breakpoints and data tips

Visual Studio Test Professional 2010
Microsoft Test Manager (MTM, formerly known as "Camano")
Fast Forward Testing

Visual Studio Team Foundation Server 2010
Work Item Tracking and Project Management - New MSF templates for Agile and CMMI (V 5.0); Hierarchical Work Items; Custom Work Item Link Types; Ready-to-use Excel agile project management workbooks for managing your backlogs (including capacity planning); Convert Work Item query to an Excel report
MS Excel integration - Support for Work Item hierarchies; Formatting is preserved after doing a 'Refresh'
MS Project integration - Hierarchy and successor/predecessor info is now synchronized
NEW: Test Case Management
Version Control - Public Workspaces; Branch & Merge Visualization; Tracking of Changesets & Work Items; Gated Check-In
Team Build - Build Controllers and Agents; Workflow 4-based build process
NEW: Lab Management (only a pre-release is available at the moment!)
Project Portal & Reporting - Dashboards (on SharePoint Portal); Burndown Chart; TFS Web Parts (to show data from TFS)
Administration & Operations - Topology enhancements (Application tier network load balancing (NLB), SQL Server scale out, Improved SharePoint flexibility, Report Server flexibility, Zone support, Kerberos support, Separation of TFS and SQL administration); Setup (Separate install from configure, Improved installation wizards, Optional components, Simplified account requirements, Improved Reporting Services configuration, Setup consolidation, Upgrading from previous TFS versions, Improved IIS flexibility); Administration (Consolidation of command line tools, User rename support, Project Collections, Archive/restore individual project collections, Move Team Project Collections, Server consolidation, Team Project Collection Split, Team Project Collection Isolation, Server request cancellation)
Licensing: TFS server license included in MSDN subscriptions

Removed features (former features not part of Visual Studio 2010):
Debug » Start With Application Verifier
Object Test Bench
IntelliSense for C++ / CLI
Debugging support for SQL 2000

    Read the article

  • How Microsoft listens

    - by Stacy Vicknair
    This being my freshman year as an MVP, I had a realization that I perhaps should be embarrassed hasn’t happened sooner. The realization comes much like the iconic M&Ms commercial where the M&Ms run into Santa and exclaim, “He does exist!” My personal realization arguably has a greater implication: Microsoft does listen. This is the most important lesson that I received this year attending the MVP Summit. My hope is that I can convince you that we are empowered to make a difference. Instead of using “Man I hate how this works / doesn’t work!” as cooler conversation, we can use it as true interaction with Microsoft. We as customers to Microsoft need to stop asking the question “Will this work for me?” and instead ask “How can this work for me?” There are three quick resources that the average developer has access to today that they can use to be heard by the product teams, and by no means should you think twice if you have a concern that you’d like a real response on. MVPs MVPs are members of your community who have a deep relationship with Microsoft and will have connections to their associated product group. Don’t think of them as just a resource for answers, but also as your ambassador for getting your experiences heard. You can find your local MVPs by browsing the directory at: https://mvp.support.microsoft.com/communities/mvp.aspx Evangelists Evangelists are employees of Microsoft who work to foster and grow communities in their assigned region. They are first-class citizens of Microsoft and are often deeply involved with the product groups. As a result, they will be more than glad to direct your questions or concerns to those who can answer them most expertly. With that said, evangelists are also very busy people (who do amazing things for the community) and might not be able to get you that conversation as quickly as a local MVP. You can find your local evangelist at the following website: http://msdn.microsoft.com/en-us/bb905078.aspx Microsoft Connect This is one of the resources that I haven’t used enough, but it cannot be understated. Connect is the starting point of the social conversation that happens between Microsoft and the community daily. Connect acts as a portal where you can provide new feedback as well as comment and rate the feedback provided by others. Power is in numbers when it comes to Connect, so the exposure that your feedback can get not only lets you know that you aren’t the only one who wants change, but also lets Microsoft know the same. https://connect.microsoft.com   Technorati Tags: Microsoft,MVP,Feedback,Connect

    Read the article

  • [News] Interview with Don Syme, the creator of F#

    We have had several occasions to mention Don Syme on DNG, notably for his work on generics in C#. Head of R&D at MS Research in Cambridge, he now contributes to F# and here answers an interview by Richard Morris. Very interesting.

    Read the article

  • C#/.NET Little Wonders: The Timeout static class

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. When I started the “Little Wonders” series, I really wanted to pay homage to parts of the .NET Framework that are often small but can help in big ways.  The item I have to discuss today really is a very small item in the .NET BCL, but once again I feel it can help make the intention of code much clearer and thus is worthy of note. The Problem - Magic numbers aren’t very readable or maintainable In my first Little Wonders Post (Five Little Wonders That Make Code Better) I mention the TimeSpan factory methods which, I feel, really help the readability of constructed TimeSpan instances. Just to quickly recap that discussion, ask yourself what the TimeSpan specified in each case below is 1: // Five minutes? Five Seconds? 2: var fiveWhat1 = new TimeSpan(0, 0, 5); 3: var fiveWhat2 = new TimeSpan(0, 0, 5, 0); 4: var fiveWhat3 = new TimeSpan(0, 0, 5, 0, 0); You’d think they’d all be the same unit of time, right?  After all, most overloads tend to tack additional arguments on the end.  But this is not the case with TimeSpan, where the constructor forms are:     TimeSpan(int hours, int minutes, int seconds);     TimeSpan(int days, int hours, int minutes, int seconds);     TimeSpan(int days, int hours, int minutes, int seconds, int milliseconds); Notice how in the 4 and 5 parameter version we suddenly have the parameter days slipping in front of hours?  This can make reading constructors like those above much harder.  Fortunately, there are TimeSpan factory methods to help make your intention crystal clear: 1: // Ah! Much clearer! 2: var fiveSeconds = TimeSpan.FromSeconds(5); These are great because they remove all ambiguity from the reader!  So in short, magic numbers in constructors and methods can be ambiguous, and anything we can do to clean up the intention of the developer will make the code much easier to read and maintain. Timeout – Readable identifiers for infinite timeout values In a similar way to TimeSpan, let’s consider specifying timeouts for some of .NET’s (or our own) many methods that allow you to specify timeout periods. For example, in the TPL Task class, there is a family of Wait() methods that can take TimeSpan or int for timeouts.  Typically, if you want to specify an infinite timeout, you’d just call the version that doesn’t take a timeout parameter at all: 1: myTask.Wait(); // infinite wait But there are versions that take the int or TimeSpan for timeout as well: 1: // Wait for 100 ms 2: myTask.Wait(100); 3:  4: // Wait for 5 seconds 5: myTask.Wait(TimeSpan.FromSeconds(5); Now, if we want to specify an infinite timeout to wait on the Task, we could pass –1 (or a TimeSpan set to –1 ms), which what the .NET BCL methods with timeouts use to represent an infinite timeout: 1: // Also infinite timeouts, but harder to read/maintain 2: myTask.Wait(-1); 3: myTask.Wait(TimeSpan.FromMilliseconds(-1)); However, these are not as readable or maintainable.  If you were writing this code, you might make the mistake of thinking 0 or int.MaxValue was an infinite timeout, and you’d be incorrect.  Also, reading the code above it isn’t as clear that –1 is infinite unless you happen to know that is the specified behavior. 
To make the code like this easier to read and maintain, there is a static class called Timeout in the System.Threading namespace which contains definition for infinite timeouts specified as both int and TimeSpan forms: Timeout.Infinite An integer constant with a value of –1 Timeout.InfiniteTimeSpan A static readonly TimeSpan which represents –1 ms (only available in .NET 4.5+) This makes our calls to Task.Wait() (or any other calls with timeouts) much more clear: 1: // intention to wait indefinitely is quite clear now 2: myTask.Wait(Timeout.Infinite); 3: myTask.Wait(Timeout.InfiniteTimeSpan); But wait, you may say, why would we care at all?  Why not use the version of Wait() that takes no arguments?  Good question!  When you’re directly calling the method with an infinite timeout that’s what you’d most likely do, but what if you are just passing along a timeout specified by a caller from higher up?  Or perhaps storing a timeout value from a configuration file, and want to default it to infinite? For example, perhaps you are designing a communications module and want to be able to shutdown gracefully, but if you can’t gracefully finish in a specified amount of time you want to force the connection closed.  You could create a Shutdown() method in your class, and take a TimeSpan or an int for the amount of time to wait for a clean shutdown – perhaps waiting for client to acknowledge – before terminating the connection.  So, assume we had a pub/sub system with a class to broadcast messages: 1: // Some class to broadcast messages to connected clients 2: public class Broadcaster 3: { 4: // ... 5:  6: // Shutdown connection to clients, wait for ack back from clients 7: // until all acks received or timeout, whichever happens first 8: public void Shutdown(int timeout) 9: { 10: // Kick off a task here to send shutdown request to clients and wait 11: // for the task to finish below for the specified time... 12:  13: if (!shutdownTask.Wait(timeout)) 14: { 15: // If Wait() returns false, we timed out and task 16: // did not join in time. 17: } 18: } 19: } We could even add an overload to allow us to use TimeSpan instead of int, to give our callers the flexibility to specify timeouts either way: 1: // overload to allow them to specify Timeout in TimeSpan, would 2: // just call the int version passing in the TotalMilliseconds... 3: public void Shutdown(TimeSpan timeout) 4: { 5: Shutdown(timeout.TotalMilliseconds); 6: } Notice in case of this class, we don’t assume the caller wants infinite timeouts, we choose to rely on them to tell us how long to wait.  So now, if they choose an infinite timeout, they could use the –1, which is more cryptic, or use Timeout class to make the intention clear: 1: // shutdown the broadcaster, waiting until all clients ack back 2: // without timing out. 3: myBroadcaster.Shutdown(Timeout.Infinite); We could even add a default argument using the int parameter version so that specifying no arguments to Shutdown() assumes an infinite timeout: 1: // Modified original Shutdown() method to add a default of 2: // Timeout.Infinite, works because Timeout.Infinite is a compile 3: // time constant. 4: public void Shutdown(int timeout = Timeout.Infinite) 5: { 6: // same code as before 7: } Note that you can’t default the ShutDown(TimeSpan) overload with Timeout.InfiniteTimeSpan since it is not a compile-time constant.  The only acceptable default for a TimeSpan parameter would be default(TimeSpan) which is zero milliseconds, which specified no wait, not infinite wait. 
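To tie the pieces above together, here is a small, self-contained sketch of my own (not from the original post) that shows Timeout.Infinite working both as an explicit argument and as a default parameter value:

using System;
using System.Threading;
using System.Threading.Tasks;

class TimeoutDemo
{
    // Timeout.Infinite is a compile-time constant (-1), so it can be a default value.
    static void Shutdown(Task worker, int timeout = Timeout.Infinite)
    {
        if (!worker.Wait(timeout))
        {
            Console.WriteLine("Timed out waiting for shutdown.");
        }
        else
        {
            Console.WriteLine("Shut down cleanly.");
        }
    }

    static void Main()
    {
        var worker = Task.Run(() => Thread.Sleep(500));

        // Wait at most 100 ms -- this one times out.
        Shutdown(worker, 100);

        // No argument -- waits indefinitely, and the intent is obvious.
        Shutdown(worker);
    }
}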
Summary While Timeout.Infinite and Timeout.InfiniteTimeSpan are not earth-shattering classes in terms of functionality, they do give you very handy and readable constant values that you can use in your programs to help increase readability and maintainability when specifying infinite timeouts for various timeouts in the BCL and your own applications. Technorati Tags: C#,CSharp,.NET,Little Wonders,Timeout,Task

    Read the article

  • Of C# Iterators and Performance

    - by James Michael Hare
    Some of you reading this will be wondering, "what is an iterator" and think I'm locked in the world of C++.  Nope, I'm talking C# iterators.  No, not enumerators, iterators.   So, for those of you who do not know what iterators are in C#, I will explain it in summary, and for those of you who know what iterators are but are curious of the performance impacts, I will explore that as well.   Iterators have been around for a bit now, and there are still a bunch of people who don't know what they are or what they do.  I don't know how many times at work I've had a code review on my code and have someone ask me, "what's that yield word do?"   Basically, this post came to me as I was writing some extension methods to extend IEnumerable<T> -- I'll post some of the fun ones in a later post.  Since I was filtering the resulting list down, I was using the standard C# iterator concept; but that got me wondering: what are the performance implications of using an iterator versus returning a new enumeration?   So, to begin, let's look at a couple of methods.  This is a new (albeit contrived) method called Every(...).  The goal of this method is to access and enumeration and return every nth item in the enumeration (including the first).  So Every(2) would return items 0, 2, 4, 6, etc.   Now, if you wanted to write this in the traditional way, you may come up with something like this:       public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)     {         List<T> newList = new List<T>();         int count = 0;           foreach (var i in list)         {             if ((count++ % interval) == 0)             {                 newList.Add(i);             }         }           return newList;     }     So basically this method takes any IEnumerable<T> and returns a new IEnumerable<T> that contains every nth item.  Pretty straight forward.   The problem?  Well, Every<T>(...) will construct a list containing every nth item whether or not you care.  What happens if you were searching this result for a certain item and find that item after five tries?  You would have generated the rest of the list for nothing.   Enter iterators.  This C# construct uses the yield keyword to effectively defer evaluation of the next item until it is asked for.  This can be very handy if the evaluation itself is expensive or if there's a fair chance you'll never want to fully evaluate a list.   We see this all the time in Linq, where many expressions are chained together to do complex processing on a list.  This would be very expensive if each of these expressions evaluated their entire possible result set on call.    Let's look at the same example function, this time using an iterator:       public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)     {         int count = 0;         foreach (var i in list)         {             if ((count++ % interval) == 0)             {                 yield return i;             }         }     }   Notice it does not create a new return value explicitly, the only evidence of a return is the "yield return" statement.  What this means is that when an item is requested from the enumeration, it will enter this method and evaluate until it either hits a yield return (in which case that item is returned) or until it exits the method or hits a yield break (in which case the iteration ends.   
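As a quick aside of my own (not from the original article), here is a tiny, self-contained demo of that deferred behavior; the ChattyRange helper is made up purely to make visible when items actually get pulled:

using System;
using System.Collections.Generic;
using System.Linq;

static class IteratorDemo
{
    // The iterator form of Every<T> described above.
    static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
    {
        int count = 0;
        foreach (var i in list)
        {
            if ((count++ % interval) == 0)
            {
                yield return i;
            }
        }
    }

    // A source that announces every item it yields, so laziness is visible.
    static IEnumerable<int> ChattyRange(int start, int count)
    {
        for (int i = start; i < start + count; i++)
        {
            Console.WriteLine("evaluating " + i);
            yield return i;
        }
    }

    static void Main()
    {
        // Take every 10th value until one exceeds 10: only items 1..11 are
        // ever pulled from the source; the remaining 89 are never evaluated.
        var result = ChattyRange(1, 100).Every(10).TakeWhile(n => n <= 10).ToList();
        Console.WriteLine("result: " + string.Join(", ", result));   // result: 1
    }
}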
Behind the scenes, this is all done with a class that the CLR creates behind the scenes that keeps track of the state of the iteration, so that every time the next item is asked for, it finds that item and then updates the current position so it knows where to start at next time.   It doesn't seem like a big deal, does it?  But keep in mind the key point here: it only returns items as they are requested. Thus if there's a good chance you will only process a portion of the return list and/or if the evaluation of each item is expensive, an iterator may be of benefit.   This is especially true if you intend your methods to be chainable similar to the way Linq methods can be chained.    For example, perhaps you have a List<int> and you want to take every tenth one until you find one greater than 10.  We could write that as:       List<int> someList = new List<int>();         // fill list here         someList.Every(10).TakeWhile(i => i <= 10);     Now is the difference more apparent?  If we use the first form of Every that makes a copy of the list.  It's going to copy the entire list whether we will need those items or not, that can be costly!    With the iterator version, however, it will only take items from the list until it finds one that is > 10, at which point no further items in the list are evaluated.   So, sounds neat eh?  But what's the cost is what you're probably wondering.  So I ran some tests using the two forms of Every above on lists varying from 5 to 500,000 integers and tried various things.    Now, iteration isn't free.  If you are more likely than not to iterate the entire collection every time, iterator has some very slight overhead:   Copy vs Iterator on 100% of Collection (10,000 iterations) Collection Size Num Iterated Type Total ms 5 5 Copy 5 5 5 Iterator 5 50 50 Copy 28 50 50 Iterator 27 500 500 Copy 227 500 500 Iterator 247 5000 5000 Copy 2266 5000 5000 Iterator 2444 50,000 50,000 Copy 24,443 50,000 50,000 Iterator 24,719 500,000 500,000 Copy 250,024 500,000 500,000 Iterator 251,521   Notice that when iterating over the entire produced list, the times for the iterator are a little better for smaller lists, then getting just a slight bit worse for larger lists.  In reality, given the number of items and iterations, the result is near negligible, but just to show that iterators come at a price.  However, it should also be noted that the form of Every that returns a copy will have a left-over collection to garbage collect.   However, if we only partially evaluate less and less through the list, the savings start to show and make it well worth the overhead.  Let's look at what happens if you stop looking after 80% of the list:   Copy vs Iterator on 80% of Collection (10,000 iterations) Collection Size Num Iterated Type Total ms 5 4 Copy 5 5 4 Iterator 5 50 40 Copy 27 50 40 Iterator 23 500 400 Copy 215 500 400 Iterator 200 5000 4000 Copy 2099 5000 4000 Iterator 1962 50,000 40,000 Copy 22,385 50,000 40,000 Iterator 19,599 500,000 400,000 Copy 236,427 500,000 400,000 Iterator 196,010       Notice that the iterator form is now operating quite a bit faster.  
But the savings really add up if you stop on average at 50% (which most searches would typically do):     Copy vs Iterator on 50% of Collection (10,000 iterations) Collection Size Num Iterated Type Total ms 5 2 Copy 5 5 2 Iterator 4 50 25 Copy 25 50 25 Iterator 16 500 250 Copy 188 500 250 Iterator 126 5000 2500 Copy 1854 5000 2500 Iterator 1226 50,000 25,000 Copy 19,839 50,000 25,000 Iterator 12,233 500,000 250,000 Copy 208,667 500,000 250,000 Iterator 122,336   Now we see that if we only expect to go on average 50% into the results, we tend to shave off around 40% of the time.  And this is only for one level deep.  If we are using this in a chain of query expressions it only adds to the savings.   So my recommendation?  If you have a resonable expectation that someone may only want to partially consume your enumerable result, I would always tend to favor an iterator.  The cost if they iterate the whole thing does not add much at all -- and if they consume only partially, you reap some really good performance gains.   Next time I'll discuss some of my favorite extensions I've created to make development life a little easier and maintainability a little better.

    Read the article

  • Microsoft Desktop Player is a Valuable Tool for IT Pro’s

    - by Mysticgeek
    If you are an IT professional, the MS Desktop Player is a new education tool introduced by Microsoft. Today we take a look at what it has to offer, from webcasts and white papers to training videos and more.

Microsoft Desktop Player

You can run the player from the website (shown here) or download the application for use on your local machine (link below). It allows you to easily access MS training and information in a central interface. To get the desktop version, download the .msi file from the site… and run through the installer… When you first start out, enter whether you're an IT Pro or Developer and your role. Then you can decide on the resources you're looking for, such as Exchange Server, SharePoint, Windows 7, Security…etc. Here is an example of checking out a podcast on Office 2007 setup and configuration from TechNet Radio. Under Settings you can customize your search results and local resources. This helps you narrow down pertinent information for your needs. If you find something you really like, hover the pointer over the screen and you can add it to your library, share it, send feedback, and check for additional resources. If you don't need items in your library they can be easily deleted. Under the News tab you get previews of Microsoft news items; clicking on one will open the full article in a separate browser. While you're watching a presentation you can show or hide the details related to it.

Conclusion

Microsoft Desktop Player is currently in beta, but has a lot of cool features to offer for your learning needs. You can easily find podcasts, webcasts, and more without having to browse all over the place. In our experience we didn't notice any bugs, and what it offers so far works well. If you're a geek who's constantly browsing TechNet and other Microsoft learning sites, this helps keep everything consolidated in one app.

Download Microsoft Desktop Player

    Read the article

  • Career path to get into computer science research

    - by srinathhs
    I thought this question would be appropriate to ask here. I am currently a software engineer working mainly on Java stuff, along with some Android. My question: I want to be a researcher in "computer science" 6-7 years down the line; what path do you folks suggest I take to reach it? Constraints: I cannot do a formal MS or PhD, I simply can't afford it. I can dedicate a certain amount of time per day to study and research.

    Read the article

  • Content Locking network with Microsoft Web Development tools? [closed]

    - by Jose Garcia
    I want my team to develop a content locking network for a client, but we don't know what is needed to develop such a network. We need to code it from scratch. Please give us some advice. Content locking networks: http://www.blamads.com/ adworkmedia.com These two networks appear to be built with MS tools. FAQ http://support.blamads.com/index.php?pg=kb.book&id=5 https://www.adworkmedia.com/cpa-network-features.php

    Read the article

  • View the Real Links Behind Shortened URLs in Chrome

    - by Asian Angel
    When you encounter shortened URLs there is always that worry in the back of your mind about where they really lead. Now you can get a "sneak peek" at the real links behind those URLs with the View Thru extension for Google Chrome. The URL shortening services officially supported at this time are: bit.ly, cli.gs, ff.im, goo.gl, is.gd, nyti.ms, ow.ly, post.ly, su.pr, & tinyurl.com.

Before

When you encounter a shortened URL you are pretty much on your own in deciding whether to trust that link or not. It would really be nice if you could just hover your mouse over those links and know where they will lead ahead of time.

After

Once you have the extension installed you are ready to access that link-viewing goodness. Please note that you will need to reload any pages that were open prior to installing the extension. For our first example we chose a shortened URL from bit.ly. As you can see, the entire link behind the shortened URL is displayed very nicely…no hidden surprises there! Note: There are no options to worry with for the extension. Another perfect result for the goo.gl URL shown below. View Thru will certainly remove a lot of the stress related to clicking on shortened URLs.

Bonus Find

Just out of curiosity we looked for a shortened URL not listed as being officially supported at this time. We found one with the http://nyti.ms/ domain and View Thru showed the link perfectly…so be sure to give it a try on other services too.

Conclusion

If you worry about where a shortened URL will really lead you, then the View Thru extension can help alleviate that stress.

Links

Download the View Thru extension (Google Chrome Extensions)

    Read the article

  • SB Timmy

    - by csharp-source.net
    SB Timmy is IMAP mail client for WAP/WML devices. It's written in C#/ASP.NET (works both with MS .NET Framework and Mono). Timmy handles all types of MIME (base64, quoted-printable encoded; multipart messages). It can send mail through SMTP. It's possible to download message attachments to your mobile device (like JPEG photos). Timmy is multi-language (currently english and lithuanian translations).

    Read the article

  • Live Mail folder and Thunderbird

    - by Umair Mustafa
    My friends, hello! Guys, I'm facing a small issue: I created a new account (MS Live email account) in Thunderbird and set the incoming protocol to IMAP with "pop3.live.com", port 995, SSL set to SSL/TLS, and the outgoing protocol to SMTP with "smtp.live.com", port 25, SSL set to STARTTLS. What I want is this: there are subfolders in my web-based Live email account which are not appearing when I create the account in Thunderbird. Please tell me how to get this done.

    Read the article

  • Launch 2010 Technical Readiness Unofficial Q&A.

    - by mbcrump
    I had an email from one of my readers about the 2010 Technical Readiness Series. Please read below: Hi Michael, I noticed you blogged a while back that you were going to attend MS 2010 Launch event. I’m going to the session in Seattle on May 27, is it worth it to attend? Also, I’m wondering if they give any free software away like VS2010 Pro? Any decent vendors? Looking forward to hearing back from you. I decided this information would probably benefit several instead of just responding back to the reader. In case you are not aware, MS has a 2010 launch event showing VS2010, Office 2010, SharePoint 2010 and SQL Server 2008R2.  You can sign up for an event here: https://microsoft.crgevents.com/Register2010/Content/Event_Selection.aspx I’ve answered the questions asked below. Q: Is it worth going? A: It is if you have had little or no exposure to the latest 2010 products. Most people are familiar with VS2010, but have not seen SharePoint 2010 Office 2010 or Windows Phone 7. It is designed to get you up to speed very quickly. If you have watched most of the MIX videos and keep up with .NET in general, you will benefit more by having the ability to ask questions.    Q: Did you get any free software? A: No, only demos including: VS2010/Office 2010 – They give you a link to the url where you can download the trial version. Windows 7 Enterprise – You get a DVD with the trial version loaded. Q: Do they give away any cool swag? A: Just a Microsoft T-Shirt (XL). Q: What about the vendor selection? A: At the event that I went to, most vendors were pushing SharePoint products. There wasn’t a lot of variety in the selection. Most vendors were giving away the typical pens, buttons and stickers and trial software. If you have any other questions, feel free to contact me. I will answer and add to this un-official FAQ.

    Read the article

  • What's wrong with my htaccess ? (500 Error)

    - by Dany Khalife
    I've written a small htaccess file to redirect Internet Explorer users to a specific page. Here are the contents:

# MS Internet Explorer - Mozilla v4
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ^Mozilla/4(.*)MSIE
RewriteRule ^index\.php$ /sorry.php [L]

# All other browsers
#RewriteRule ^index\.html$ /index.32.html [L]

Any clue why this would give a 500 Internal Server Error? I have used mod_rewrite before, so I have the module loaded there...

    Read the article

  • How to shoot yourself in the foot (DO NOT Read in the office)

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/21/how-to-shoot-yourself-in-the-foot-do-not-read.aspxLet me make it absolutely clear - the following is:merely collated by your Geek from http://www.codeproject.com/Lounge.aspx?msg=3917012#xx3917012xxvery, very very funny so you read it in the presence of others at your own riskso here is the list - you have been warned!C You shoot yourself in the foot.   C++ You accidently create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical assistance is impossible since you can't tell which are bitwise copies and which are just pointing at others and saying "That's me, over there."   FORTRAN You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue anyway because you have no exception-handling facility.   Modula-2 After realizing that you can't actually accomplish anything in this language, you shoot yourself in the head.   COBOL USEing a COLT 45 HANDGUN, AIM gun at LEG.FOOT, THEN place ARM.HAND.FINGER on HANDGUN.TRIGGER and SQUEEZE. THEN return HANDGUN to HOLSTER. CHECK whether shoelace needs to be retied.   Lisp You shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds...   BASIC Shoot yourself in the foot with a water pistol. On big systems, continue until entire lower body is waterlogged.   Forth Foot yourself in the shoot.   APL You shoot yourself in the foot; then spend all day figuring out how to do it in fewer characters.   Pascal The compiler won't let you shoot yourself in the foot.   Snobol If you succeed, shoot yourself in the left foot. If you fail, shoot yourself in the right foot.   HyperTalk Put the first bullet of the gun into foot left of leg of you. Answer the result.   Prolog You tell your program you want to be shot in the foot. The program figures out how to do it, but the syntax doesn't allow it to explain.   370 JCL You send your foot down to MIS with a 4000-page document explaining how you want it to be shot. Three years later, your foot comes back deep-fried.   FORTRAN-77 You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue anyway because you still can't do exception-processing.   Modula-2 (alternative) You perform a shooting on what might be currently a foot with what might be currently a bullet shot by what might currently be a gun.   BASIC (compiled) You shoot yourself in the foot with a BB using a SCUD missile launcher.   Visual Basic You'll really only appear to have shot yourself in the foot, but you'll have so much fun doing it that you won't care.   Forth (alternative) BULLET DUP3 * GUN LOAD FOOT AIM TRIGGER PULL BANG! EMIT DEAD IF DROP ROT THEN (This takes about five bytes of memory, executes in two to ten clock cycles on any processor and can be used to replace any existing function of the language as well as in any future words). (Welcome to bottom up programming - where you, too, can perform compiler pre-processing instead of writing code)   APL (alternative) You hear a gunshot and there's a hole in your foot, but you don't remember enough linear algebra to understand what happened. or @#&^$%&%^ foot   Pascal (alternative) Same as Modula-2 except that the bullet is not the right type for the gun and your hand is blown off.   
Snobol (alternative) You grab your foot with your hand, then rewrite your hand to be a bullet. The act of shooting the original foot then changes your hand/bullet into yet another foot (a left foot).   Prolog (alternative) You attempt to shoot yourself in the foot, but the bullet, failing to find its mark, backtracks to the gun, which then explodes in your face.   COMAL You attempt to shoot yourself in the foot with a water pistol, but the bore is clogged, and the pressure build-up blows apart both the pistol and your hand. or draw_pistol aim_at_foot(left) pull_trigger hop(swearing)   Scheme As Lisp, but none of the other appendages are aware of this happening.   Algol You shoot yourself in the foot with a musket. The musket is aesthetically fascinating and the wound baffles the adolescent medic in the emergency room.   Ada If you are dumb enough to actually use this language, the United States Department of Defense will kidnap you, stand you up in front of a firing squad and tell the soldiers, "Shoot at the feet." or The Department of Defense shoots you in the foot after offering you a blindfold and a last cigarette. or After correctly packaging your foot, you attempt to concurrently load the gun, pull the trigger, scream and shoot yourself in the foot. When you try, however, you discover that your foot is of the wrong type. or After correctly packing your foot, you attempt to concurrently load the gun, pull the trigger, scream, and confidently aim at your foot knowing it is safe. However the cordite in the round does an Unchecked Conversion, fires and shoots you in the foot anyway.   Eiffel   You create a GUN object, two FOOT objects and a BULLET object. The GUN passes both the FOOT objects a reference to the BULLET. The FOOT objects increment their hole counts and forget about the BULLET. A little demon then drives a garbage truck over your feet and grabs the bullet (both of it) on the way. Smalltalk You spend so much time playing with the graphics and windowing system that your boss shoots you in the foot, takes away your workstation and makes you develop in COBOL on a character terminal. or You send the message shoot to gun, with selectors bullet and myFoot. A window pops up saying Gunpowder doesNotUnderstand: spark. After several fruitless hours spent browsing the methods for Trigger, FiringPin and IdealGas, you take the easy way out and create ShotFoot, a subclass of Foot with an additional instance variable bulletHole. Object Oriented Pascal You perform a shooting on what might currently be a foot with what might currently be a bullet fired from what might currently be a gun.   PL/I You consume all available system resources, including all the offline bullets. The Data Processing & Payroll Department doubles its size, triples its budget, acquires four new mainframes and drops the original one on your foot. Postscript foot bullets 6 locate loadgun aim gun shoot showpage or It takes the bullet ten minutes to travel from the gun to your foot, by which time you're long since gone out to lunch. The text comes out great, though.   PERL You stab yourself in the foot repeatedly with an incredibly large and very heavy Swiss Army knife. or You pick up the gun and begin to load it. The gun and your foot begin to grow to huge proportions and the world around you slows down, until the gun fires. It makes a tiny hole, which you don't feel. Assembly Language You crash the OS and overwrite the root disk. The system administrator arrives and shoots you in the foot. 
After a moment of contemplation, the administrator shoots himself in the foot and then hops around the room rabidly shooting at everyone in sight. or You try to shoot yourself in the foot only to discover you must first reinvent the gun, the bullet, and your foot.or The bullet travels to your foot instantly, but it took you three weeks to load the round and aim the gun.   BCPL You shoot yourself somewhere in the leg -- you can't get any finer resolution than that. Concurrent Euclid You shoot yourself in somebody else's foot.   Motif You spend days writing a UIL description of your foot, the trajectory, the bullet and the intricate scrollwork on the ivory handles of the gun. When you finally get around to pulling the trigger, the gun jams.   Powerbuilder While attempting to load the gun you discover that the LoadGun system function is buggy; as a work around you tape the bullet to the outside of the gun and unsuccessfully attempt to fire it with a nail. In frustration you club your foot with the butt of the gun and explain to your client that this approximates the functionality of shooting yourself in the foot and that the next version of Powerbuilder will fix it.   Standard ML By the time you get your code to typecheck, you're using a shoot to foot yourself in the gun.   MUMPS You shoot 583149 AK-47 teflon-tipped, hollow-point, armour-piercing bullets into even-numbered toes on odd-numbered feet of everyone in the building -- with one line of code. Three weeks later you shoot yourself in the head rather than try to modify that line.   Java You locate the Gun class, but discover that the Bullet class is abstract, so you extend it and write the missing part of the implementation. Then you implement the ShootAble interface for your foot, and recompile the Foot class. The interface lets the bullet call the doDamage method on the Foot, so the Foot can damage itself in the most effective way. Now you run the program, and call the doShoot method on the instance of the Gun class. First the Gun creates an instance of Bullet, which calls the doFire method on the Gun. The Gun calls the hit(Bullet) method on the Foot, and the instance of Bullet is passed to the Foot. But this causes an IllegalHitByBullet exception to be thrown, and you die.   Unix You shoot yourself in the foot or % ls foot.c foot.h foot.o toe.c toe.o % rm * .o rm: .o: No such file or directory % ls %   370 JCL (alternative) You shoot yourself in the head just thinking about it.   DOS JCL You first find the building you're in in the phone book, then find your office number in the corporate phone book. Then you have to write this down, then describe, in cubits, your exact location, in relation to the door (right hand side thereof). Then you need to write down the location of the gun (loading it is a proprietary utility), then you load it, and the COBOL program, and run them, and, with luck, it may be run tonight.   VMS   $ MOUNT/DENSITY=.45/LABEL=BULLET/MESSAGE="BYE" BULLET::BULLET$GUN SYS$BULLET $ SET GUN/LOAD/SAFETY=OFF/SIGHT=NONE/HAND=LEFT/CHAMBER=1/ACTION=AUTOMATIC/ LOG/ALL/FULL SYS$GUN_3$DUA3:[000000]GUN.GNU $ SHOOT/LOG/AUTO SYS$GUN SYS$SYSTEM:[FOOT]FOOT.FOOT   %DCL-W-ACTIMAGE, error activating image GUN -CLI-E-IMGNAME, image file $3$DUA240:[GUN]GUN.EXE;1 -IMGACT-F-NOTNATIVE, image is not an OpenVMS Alpha AXP image or %SYS-F-FTSHT, foot shot (fifty lines of traceback omitted) sh,csh, etc You can't remember the syntax for anything, so you spend five hours reading manual pages, then your foot falls asleep. 
You shoot the computer and switch to C.

Apple System 7
Double click the gun icon and a window opens giving a selection of guns and target areas, plus balloon help with medical remedies and assorted sound effects. Click the "shoot" button and a small bomb appears with the note "Error of Type 1 has occurred."

Windows 3.1
Double click the gun icon and wait. Eventually a window opens giving a selection of guns and target areas, plus balloon help with medical remedies and assorted sound effects. Click the "shoot" button and a small box appears with the note "Unable to open Shoot.dll, check that path is correct."

Windows 95
Your gun is not compatible with this OS, and you must buy an upgrade and install it before you can continue. Then you will be informed that you don't have enough memory.

CP/M
I remember when shooting yourself in the foot with a BB gun was a big deal.

DOS
You finally found the gun, but can't locate the file with the foot for the life of you.

MS-DOS
You shoot yourself in the foot, but can unshoot yourself with add-on software.

Access
You try to point the gun at your foot, but it shoots holes in all your Borland distribution diskettes instead.

Paradox
Not only can you shoot yourself in the foot, your users can too.

dBase
You squeeze the trigger, but the bullet moves so slowly that by the time your foot feels the pain, you've forgotten why you shot yourself anyway.
or
You buy a gun. Bullets are only available from another company and are promised to work, so you buy them. Then you find out that the next version of the gun is the one scheduled to actually shoot bullets.

dBase IV, v1.0
You pull the trigger, but it turns out that the gun was a poorly designed hand grenade and the whole building blows up.

SQL
You cut your foot off, send it out to a service bureau, and when it returns, it has a hole in it but will no longer fit the attachment at the end of your leg.
or
Insert into Foot Select Bullet From Gun.Hand Where Chamber = 'LOADED' And Trigger = 'PULLED'

Clipper
You grab a bullet, get ready to insert it in the gun so that you can shoot yourself in the foot, and discover that the gun that the bullet fits has not yet been built, but should be arriving in the mail _REAL_SOON_NOW_.

Oracle
The menus for coding foot_shooting have not been implemented yet, and you can't do foot shooting in SQL.

English
You put your foot in your mouth, then bite it off. (For those who don't know, English is a McDonnell Douglas/PICK query language which allegedly requires 110% of system resources to run happily.)

Revelation [an implementation of the PICK operating system]
You'll be able to shoot yourself in the foot just as soon as you figure out what all these bullets are for.

FlagShip
Starting at the top of your head, you aim the gun at yourself repeatedly until, half an hour later, the gun is finally pointing at your foot and you pull the trigger. A new foot with a hole in it appears, but you can't work out how to get rid of the old one, and your gun doesn't work anymore.

FidoNet
You put your foot in your mouth, then echo it internationally.

PicoSpan [a UNIX-based computer conferencing system]
You can't shoot yourself in the foot because you're not a host.
or (host variation)
Whenever you shoot yourself in the foot, someone opens a topic in policy about it.

Internet
You put your foot in your mouth, shoot it, then spam the bullet so that everybody gets shot in the foot.

troff
rmtroff -ms -Hdrwp | lpr -Pwp2 &
.*place bullet in footer
.B
.NR FT +3i
.in 4
.bu
Shoot!
.br
.sp
.in -4
.br
.bp
NR HD -2i
.*

Genetic Algorithms
You create 10,000 strings describing the best way to shoot yourself in the foot. By the time the program produces the optimal solution, humans have evolved wings and the problem is moot.

CSP (Communicating Sequential Processes)
You only fail to shoot everything that isn't your foot.

MS-SQL Server
MS-SQL Server's gun comes pre-loaded with an unlimited supply of Teflon-coated bullets, and it only has two discernible features: the muzzle and the trigger. If that wasn't enough, MS-SQL Server also puts the gun in your hand, applies local anesthetic to the skin of your forefinger, and stitches it to the gun's trigger. Meanwhile, another process has set up a spinal block to numb your lower body. It will then proceed to surgically remove your foot, cryogenically freeze it for preservation, and attach it to the muzzle of the gun so that no matter where you aim, you will shoot your foot. In order to avoid shooting yourself in the foot, you need to unstitch your trigger finger, remove your foot from the muzzle of the gun, and have it surgically reattached. Then you probably want to get some crutches and go out to buy a book on SQL Server performance tuning.

Sybase
Sybase's gun requires assembly, and you need to go out and purchase your own clip and bullets to load the gun. Assembly is complicated by the fact that Sybase has hidden the gun behind a big stack of reference manuals, but it hasn't told you where that stack is. While you were off finding the gun, assembling it, buying bullets, and so on, Sybase was also busy surgically removing your foot and cryogenically freezing it for preservation. Instead of attaching it to the muzzle of the gun, though, it packed your foot on dry ice and sent it UPS Ground to an unnamed hookah bar somewhere in the Middle East. In order to shoot your foot, you must modify your gun with a GPS system for targeting and hire some guy named "Indy" to find the hookah bar and wire the coordinates back to you. By this time, you've probably become so daunted by the tasks that stand between you and shooting your foot that you hire a guy who's read all the books on Sybase to help you shoot your foot. If you're lucky, he'll be smart enough both to find your foot and to stop you from shooting it.

Magic Software
You spend a week looking up the correct syntax for GUN. When you find it, you realise that GUN will not let you shoot yourself in the foot; it will allow you to shoot almost anything but your foot. You then decide to build your own gun. You can't use the standard barrel, since that will only allow for standard bullets, which will not fire if the barrel is pointed at your foot. After four weeks, you have created your own custom gun. It blows up in your hand without warning, because you failed to initialise the safety catch and it doesn't know whether the initial state is "0", 0, NULL, "ZERO", 0.0, 0,0, "0.0", or "0,00". You fix the problem with your remaining hand by nesting 12 safety catches, then decide to build the gun without a safety catch. You then shoot the management and retire to a happy life where you code in languages that will let you shoot yourself in the foot in under 10 days.

Firefox
Lets you shoot yourself in as many feet as you'd like, while using multiple great add-ons!

IE
A moving target in terms of standard ammunition size, and it doesn't always work properly with non-Microsoft ammunition, so sometimes you shoot something other than your foot. However, it's the corporate world's standard foot-shooting apparatus. Hackers seem to enjoy rigging websites up to trigger cascading foot-shooting failures.

Windows 98
About the same as Windows 95 in terms of overall bullet capacity and triggering mechanisms. Includes the updated DirectShot API. A new version, Windows 98 SE, was released later on to support USB guns.

WPF
You get your baseball glove and a ball and you head out to your backyard, where you throw balls to your pitchback. Then your unkempt-haired, cargo-shorts-and-sandals-with-white-socks-wearing neighbor uses XAML to sculpt your arm into a gun, the ball into a bullet, and the pitchback into your foot. By now, however, only the neighbor can get it to work, and he's only around from 6:30 PM to 3:30 AM.

LOGO
You very carefully lay out the trajectory of the bullet. Then you start the gun, which fires very slowly. You walk precisely to the point where the bullet will travel and wait, but just before it gets to you, your class time is up and one of the other kids has already used the system to hack into Sony's PS3 network.

Flash
Someone has designed a beautiful-looking gun that anyone can use to shoot themselves in the feet for free. It weighs six hundred pounds. All kinds of people are shooting themselves in the feet and sending the link to everyone else so that they can too. That is, except for the criminals, who are all stealing iOS devices that the gun won't work with.

APL
It's (mostly) all Greek to me.

Lisp
Place ((gun in ((hand sight (foot then shoot))))) (Lots of Insipid Stupid Parentheses)

Apple OS X and iOS
Once a year, Steve Jobs returns from sick leave to tell millions of unwavering fans how they will be able to shoot themselves in the foot differently this year. They retweet and blog about it ad nauseam, and wait in line to be the first to experience "shoot different".

Windows ME
Usually fails, even at shooting you in the foot. Yo dawg, I heard you like shooting yourself in the foot. So I put a gun in your gun, so you can shoot yourself in the foot while you shoot yourself in the foot. (Okay, I'm not especially proud of this joke.)

Windows 2000
Now you really do have to log in before you are allowed to shoot yourself in the foot.

Windows XP
You thought you learned your lesson: don't use Windows ME. Then along came this new creature, built on top of Windows NT! So you spend the next couple of days installing antivirus software, patches, and service packs, just so you can get that driver to install, and then proceed to shoot yourself in the foot.

Windows Vista
Newer! Glossier! Shootier!

Windows 7
The bullets come out a lot smoother.

Active Directory
Each bullet now has an attached Bullet Identifier and can be uniquely identified. Policies can be applied to dictate fragmentation, and the gun will occasionally have a confusing delay after the trigger has been pulled.

Python
You try to use import foot; foot.shoot(), only to realize that it's only available in 3.0, to which you can't yet upgrade from 2.7 because of all those extension libs lacking support.

Solaris
Shoots best when used on SPARC hardware, but still runs the trigger GUI under Java. After weeks of learning the appropriate STOP command to prevent the trigger from automatically being pressed on boot, you think you've got it under control. Then the one time you ever use dtrace, it hits a bug that fires the gun.

MySQL
The feature that allows you to shoot yourself in the foot has been in development for about six years, and they are adding it to the next version, which is coming out REAL SOON NOW, promise! But you can always check it out of source control and try it yourself (just not in any environment where data integrity is important, because it will probably explode).

PostgreSQL
Allows you to have a smug look on your face while you shoot yourself in the foot, because those MySQL guys STILL don't have that feature.

NoSQL
Barrel? Who needs a barrel? Just put the bullet on your foot and strike it with a hammer. See? It's so much simpler and more efficient that way. You can even strike multiple bullets in one swing if you swing with a good enough arc, because hammers are easy to use. Getting them to synchronize is a little difficult, though.

Eclipse
There are about a dozen different packages for shooting yourself in the foot, with weird interdependencies on outdated components. Once you finally navigate the morass and get one installed, you then have something to look at while you shoot yourself in the foot with that package: you can watch the screen redraw.

Outlook
Makes it really easy to let everyone know you shot yourself in the foot!

Shooting yourself in the foot using delegates
You really need to shoot yourself in the foot, but you hate firearms (you don't want any dependency on the specifics of shooting), so you delegate it to somebody else. You don't care how it is done as long as it shoots your foot. You can do it asynchronously in case you know you may faint, so you are called back/slapped in the face by your shooter/friend (or background worker) when everything is done. (A minimal C# sketch of this appears after the list.)

C#
You prepare the gun and the bullet, carefully modeling all of the physics of a bullet traveling through a foot. Just before you're about to pull the trigger, you stumble on System.Windows.BodyParts.Foot.ShootAt(System.Windows.Firearms.IGun gun) in the extended framework, realize you just wasted the entire afternoon, and shoot yourself in the head.

PHP
<?php
require("foot_safety_check.php");
?>
<!DOCTYPE HTML>
<html>
<head> <!--Lower!-->
<title>Shooting me in the foot</title>
</head>
<body> <!--LOWER!!!-->
<leg> <!--OK, I made this one up...-->
<footer>
<?php echo (dungSift($_SERVER['HTTP_USER_AGENT'], "ie"))?("Your foot is safe, but you might want to wear a hard hat!"):("<div class=\"shot\">BANG!</div>");
?>
</footer>
</leg>
</body>
</html>
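For what it's worth, the delegates joke maps fairly directly onto real C# code. The following is only a minimal sketch: the ShootFoot delegate, the shooter lambda, and the console messages are invented for illustration and are not part of any real API; Task.Run and ContinueWith stand in for the "background worker" that slaps you awake when the shooting is done.

using System;
using System.Threading.Tasks;

class FootShooter
{
    // The "specifics of shooting" hide behind a delegate type:
    // you only care that something shoots the foot you hand it.
    delegate void ShootFoot(string foot);

    static void Main()
    {
        // Delegate the dirty work to somebody else (here, a lambda).
        ShootFoot shooter = foot => Console.WriteLine($"Bang! {foot} has been shot.");

        // Synchronous: you watch it happen.
        shooter("left foot");

        // Asynchronous: in case you faint, the continuation "slaps you
        // in the face" (calls you back) once the shooter is finished.
        Task.Run(() => shooter("right foot"))
            .ContinueWith(_ => Console.WriteLine("Wake up, it's over."))
            .Wait();
    }
}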

