Search Results

Search found 10515 results on 421 pages for 'automatically'.


  • Help with understanding why UAC dialog pops up on Win7 for our application

    - by Tim
    We have a C++ unmanaged application that appears to cause a UAC prompt. It seems to happen on Win7 and NOT on Vista. Unfortunately the UAC dialog is system modal, so I can't attach a debugger to check in the code where it is, and running under msdev (we're using 2008) runs in elevated mode. We put a message box at the start of our program/winmain but it doesn't even get that far, so apparently this is in the startup code. What can cause a UAC notification so early, and what other things can I do to track down the cause?

    EDIT: Apparently the manifest is an important issue here, but it seems not to be helping me - or perhaps I am not configuring the manifest file correctly. Can someone provide a sample manifest? Also, does the linker/UAC magic figure out that the program "might" write to the registry and set its UAC requirements based on that? There are code paths that might trigger UAC, but we are not even at that point when the UAC dialog comes up. An additional oddity is that this does not seem to happen on Vista with UAC turned on.

    Here is a manifest (that I think is/was generated automatically):

        <?xml version='1.0' encoding='UTF-8' standalone='yes'?>
        <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'>
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level='asInvoker' uiAccess='false' />
              </requestedPrivileges>
            </security>
          </trustInfo>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' processorArchitecture='*' publicKeyToken='6595b64144ccf1df' language='*' />
            </dependentAssembly>
          </dependency>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' processorArchitecture='x86' publicKeyToken='6595b64144ccf1df' language='*' />
            </dependentAssembly>
          </dependency>
        </assembly>

    And then this one was added to the manifest list to see if it would help:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <assemblyIdentity version="1.0.0.0" processorArchitecture="x86" name="[removed for anonymity]" type="win32" />
          <description>[removed for anonymity]</description>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="6.0.0.0" processorArchitecture="x86" publicKeyToken="6595b64144ccf1df" language="*" />
            </dependentAssembly>
          </dependency>
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
              </requestedPrivileges>
            </security>
          </trustInfo>
        </assembly>

    The following is from the actual EXE using the ManifestViewer tool:

        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <assemblyIdentity version="1.0.0.0" processorArchitecture="x86" name="[removed]" type="win32" />
          <description>[removed]</description>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="6.0.0.0" processorArchitecture="x86" publicKeyToken="6595b64144ccf1df" language="*" />
            </dependentAssembly>
          </dependency>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="6.0.0.0" processorArchitecture="*" publicKeyToken="6595b64144ccf1df" language="*" />
            </dependentAssembly>
          </dependency>
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="false" />
              </requestedPrivileges>
            </security>
          </trustInfo>
        </assembly>

    It appears that it might be due to the XP compatibility setting on our app. I'll have to test that. (We set that in the installer, I found out, because some sound drivers don't work correctly on Win7.)
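    One way to confirm which manifest actually ends up in the binary is the mt.exe tool from the Windows SDK; the file names here are placeholders:

        mt.exe -inputresource:MyApp.exe;#1 -out:extracted.manifest
        mt.exe -manifest MyApp.exe.manifest -outputresource:MyApp.exe;#1

    The first command extracts the embedded manifest so it can be inspected; the second re-embeds a corrected manifest into the EXE.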

    Read the article

  • Form Submitting Incorrect Information to MySQL Database

    - by ThatMacLad
    I've created a form that submits data to a MySQL database, but the Date, Time, Year and Month fields constantly revert to the exact same date (1st January 1970), despite the fact that when I submit the information to the database the form displays the current date, time etc. to me. I've already set it so that the time and date fields automatically display the current time and date. Could someone please help me with this? Form:

        <html>
        <head>
        <title>Blog | New Post</title>
        <link rel="stylesheet" href="css/newposts.css" type="text/css" />
        </head>
        <body>
        <div class="new-form">
        <div class="header">
        <a href="edit.php"><img src="images/edit-home-button.png"></a>
        </div>
        <div class="form-bg">
        <?php
        if (isset($_POST['submit'])) {
            $month = htmlspecialchars(strip_tags($_POST['month']));
            $date = htmlspecialchars(strip_tags($_POST['date']));
            $year = htmlspecialchars(strip_tags($_POST['year']));
            $time = htmlspecialchars(strip_tags($_POST['time']));
            $title = htmlspecialchars(strip_tags($_POST['title']));
            $entry = $_POST['entry'];
            $timestamp = strtotime($month . " " . $date . " " . $year . " " . $time);
            $entry = nl2br($entry);
            if (!get_magic_quotes_gpc()) {
                $title = addslashes($title);
                $entry = addslashes($entry);
            }
            mysql_connect ('localhost', 'root', 'root');
            mysql_select_db ('tmlblog');
            $sql = "INSERT INTO php_blog (timestamp,title,entry) VALUES ('$timestamp','$title','$entry')";
            $result = mysql_query($sql) or print("Can't insert into table php_blog.<br />" . $sql . "<br />" . mysql_error());
            if ($result != false) {
                print "<p class=\"success\">Your entry has successfully been entered into the blog. </p>";
            }
            mysql_close();
        }
        ?>
        <?php
        $current_month = date("F");
        $current_date = date("d");
        $current_year = date("Y");
        $current_time = date("H:i");
        ?>
        <form method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>">
        <input class="field" type="text" name="date" id="date" size="2" value="<?php echo $current_month; ?>" />
        <input class="field" type="text" name="date" id="date" size="2" value="<?php echo $current_date; ?>" />
        <input class="field" type="text" name="date" id="date" size="2" value="<?php echo $current_year; ?>" />
        <input type="text" name="time" id="time" size="5" value="<?php echo $current_time; ?>" />
        <input class="field2" type="text" id="title" value="Title Goes Here." name="title" size="40" />
        <textarea class="textarea" cols="80" rows="20" name="entry" id="entry" class="field2"></textarea>
        <input class="field" type="submit" name="submit" id="submit" value="Submit">
        </form>
        </div>
        </div>
        </div>
        <div class="bottom"></div>
        </body>
        </html>

    Read the article

  • How does OCaml decide precedence for user-defined operators?

    - by forefinger
    I want nice operators for complex arithmetic to make my code more readable. OCaml has a Complex module, so I just want to add operators that call those functions. The most intuitive way for me is to make a new complex operator from each of the usual operators by appending '&' to the operator symbol. Thus +& and *& will be complex addition and multiplication. I would also like ~& to be complex conjugation. If I'm going to use these operators, I want them to associate the same way that normal arithmetic associates. Based on the following sessions, they are automatically behaving the way I want, but I would like to understand why, so that I don't get horrible bugs when I introduce more operators. My current guess is that their precedence is determined lexically from the operator symbols, according to an ordering that is consistent with normal arithmetic precedence. But I cannot confirm this.

    Session one:

        # open Complex;;
        # let (+&) a b = add a b;;
        val ( +& ) : Complex.t -> Complex.t -> Complex.t = <fun>
        # let ( *&) a b = mul a b;;
        val ( *& ) : Complex.t -> Complex.t -> Complex.t = <fun>
        # one +& zero *& one +& zero *& one;;
        - : Complex.t = {re = 1.; im = 0.}
        # zero +& one *& zero +& one *& zero;;
        - : Complex.t = {re = 0.; im = 0.}
        # i +& i *& i +& i *& i *& i;;
        - : Complex.t = {re = -1.; im = 0.}

    Session two:

        # open Complex;;
        # let ( *&) a b = mul a b;;
        val ( *& ) : Complex.t -> Complex.t -> Complex.t = <fun>
        # let (+&) a b = add a b;;
        val ( +& ) : Complex.t -> Complex.t -> Complex.t = <fun>
        # one +& zero *& one +& zero *& one;;
        - : Complex.t = {re = 1.; im = 0.}
        # zero +& one *& zero +& one *& zero;;
        - : Complex.t = {re = 0.; im = 0.}
        # i +& i *& i +& i *& i *& i;;
        - : Complex.t = {re = -1.; im = 0.}
        # let (~&) a = conj a;;
        val ( ~& ) : Complex.t -> Complex.t = <fun>
        # (one +& i) *& ~& (one +& i);;
        - : Complex.t = {re = 2.; im = 0.}

    Read the article

  • Objects in Java ArrayList don't get updated.

    - by Sbm007
    This is going to be a very long post; hopefully you can understand what I'm talking about, and I appreciate any help. Thanks.

    Basically, I've created a personal, non-commercial project (which I don't plan to release) that can read ZIP and RAR files. It can only read the contents in the archive: the folders inside, the files inside the folders and their properties (such as last modified date, last modified time, CRC checksum, uncompressed size, compressed size and file name). It can't extract files either, so it's really a ZIP/RAR viewer if you may. Anyway, that's slightly irrelevant to my problem, but I thought I'd give you some background info.

    Now for my problem: I can successfully list all the folders and files inside a ZIP archive, so now I want to take that raw input and link it together in some useful way. I made 2 classes: ArchiveFile (represents a file inside a ZIP) and ArchiveFolder (represents a folder inside a ZIP). They both have some useful methods such as getLastModifiedDate, getName, getPath and so on. But the difference is that ArchiveFolder can hold an ArrayList of ArchiveFile's and additional ArchiveFolder's (think of this as files and folders inside a folder). Now I want to populate my raw input into one root ArchiveFolder, which will have all the files in the root dir of the ZIP in the ArchiveFile ArrayList and any additional folders in the root dir of the ZIP in the ArchiveFolder ArrayList (and this process can continue on like a chain reaction: more files/folders in that ArchiveFolder, etc.). So I came up with the following code:

        while (archive.hasMore()) {
            String path = "";
            ArchiveFolder current = root;
            String[] contents = archive.getName().split("/");
            for (int x = 0; x < contents.length; ++x) {
                if (x == (contents.length - 1) && !archive.getName().endsWith("/")) { // If on last item and item is a file
                    path += contents[x]; // Update final path
                    ArchiveFile file = new ArchiveFile(path, contents[x], archive.getUncompressedSize(),
                        archive.getCompressedSize(), archive.getModifiedTime(), archive.getModifiedDate(), archive.getCRC());
                    current.addFile(file); // Create and add the file to the current ArchiveFolder
                } else if (x == (contents.length - 1)) { // Else if we are on last item and it is a folder
                    path += contents[x] + "/"; // Update final path
                    ArchiveFolder folder = new ArchiveFolder(path, contents[x], archive.getModifiedTime(), archive.getModifiedDate());
                    current.addFolder(folder); // Create and add this folder to the current ArchiveFolder
                } else { // Else if we are still traversing through the path
                    path += contents[x] + "/"; // Update path
                    ArchiveFolder folder = new ArchiveFolder(path, contents[x]);
                    current.addFolder(folder); // Create and add folder (we only know the path/name here, not the modified date/time)
                    current = folder; // Update current ArchiveFolder to the newly created one for the next iteration of the for loop
                }
            }
            archive.getNext();
        }

    Assume that root is the root ArchiveFolder (initially empty), and that archive.getName() returns the name of the current file OR folder in the following fashion: file.txt or folder1/file2.txt or folder4/folder2/ (this is an empty folder), etc. So basically the relative path from the root of the ZIP archive. Please read through the comments in the above code to familiarize yourself with it.

    Also assume that the addFolder method in an ArchiveFolder only adds the folder if it doesn't exist already (so there are no duplicate folders), and that it also updates the time and date of an existing folder if they are blank (i.e. it was an intermediate folder we only knew the name of, but now we know its details). The code for addFolder is (pretty self-explanatory):

        public void addFolder(ArchiveFolder folder) {
            int loc = folders.indexOf(folder); // folders is the ArrayList containing ArchiveFolder's
            if (loc == -1) {
                folders.add(folder);
            } else {
                ArchiveFolder real = folders.get(loc);
                if (real.time == null) {
                    real.setTime(folder.getTime());
                    real.setDate(folder.getDate());
                }
            }
        }

    So I can't see anything wrong with the code. It works, and after finishing, the root ArchiveFolder contains all the files in the root of the ZIP as I want it to, and it contains all the directories in the root folder as I want it to. So you'd think it works as expected, but no: the ArchiveFolder's in the root folder don't contain the data inside those 'child' folders; each is just a blank folder with no additional files and folders (while it does really contain some more files/folders when viewed in WinZip). After debugging using Eclipse, the for loop does iterate through all the files (even those not included above), so this led me to believe that there is a problem with this line of the code:

        current = folder;

    What it does is update the current folder (used as an intermediate by the loop) to the newly added folder. I thought Java passed by reference and thus all new operations and new additions in future ArchiveFile's and ArchiveFolder's are automatically updated, and parent ArchiveFolder's will be updated accordingly. But that does not appear to be the case. I know this is a long ass post and I really hope anyone can help me out with this. Thanks in advance.

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code.

    When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enables you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements – without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#.

    There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has the Dojo Objective Harness (DOH).

    The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide "out-of-the-box" support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own.

    The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests, in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry.

    Rejected Approach: Browser Launchers

    One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers().
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
            <title>Using QUnit</title>
            <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" />
        </head>
        <body>
            <h1 id="qunit-header">QUnit example</h1>
            <h2 id="qunit-banner"></h2>
            <div id="qunit-testrunner-toolbar"></div>
            <h2 id="qunit-userAgent"></h2>
            <ol id="qunit-tests"></ol>
            <div id="qunit-fixture">test markup, will be hidden</div>
            <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
            <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script>
            <script type="text/javascript">
                // The function to test
                function addNumbers(a, b) {
                    return a + b;
                }
                // The unit test
                test("Test of addNumbers", function () {
                    equals(4, addNumbers(1, 3), "1+3 should be 4");
                });
            </script>
        </body>
        </html>

    This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you know very quickly whenever you have broken your JavaScript code.

    While easy to set up, there are several big disadvantages to this approach to executing JavaScript unit tests:

    You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don't appear in a single location, you are more likely to introduce bugs into your code without noticing it.

    Because your unit tests are not integrated with Visual Studio – in particular, MS Test – you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build.

    A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach:

    Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/.

    LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501

    jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/

    TestSwarm – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server.
    Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki

    Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/

    All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a "living and breathing" browser.

    If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test – versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure.

    Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code.

    Using Server-Side JavaScript for JavaScript Unit Tests

    A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser:

    Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/

    V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/

    JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages.

    Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests.

    Using the Microsoft Script Control

    There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET.
    There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Windows Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    After you download the Microsoft Script Control, you need to add a reference to it in your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0.

    Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx

    Creating the JavaScript Code to Test

    To keep things simple, let's imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together:

    MvcApplication1\Scripts\Math.js

        function addNumbers(a, b) {
            return 5;
        }

    Notice that the addNumbers() method always returns the value 5. Right now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project's Scripts folder (save the file in your actual MVC application and not your MVC test application).

    Creating the JavaScript Test Helper Class

    To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods:

    LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests.

    ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test.

    Here's the code for the JavaScriptTestHelper class:

    JavaScriptTestHelper.cs

        using System;
        using System.IO;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using MSScriptControl;

        namespace MvcApplication1.Tests
        {
            public class JavaScriptTestHelper : IDisposable
            {
                private ScriptControl _sc;
                private TestContext _context;

                /// <summary>
                /// You need to use this helper with Unit Tests and not
                /// Basic Unit Tests because you need a Test Context
                /// </summary>
                /// <param name="testContext">Unit Test Test Context</param>
                public JavaScriptTestHelper(TestContext testContext)
                {
                    if (testContext == null)
                    {
                        throw new ArgumentNullException("TestContext");
                    }
                    _context = testContext;

                    _sc = new ScriptControl();
                    _sc.Language = "JScript";
                    _sc.AllowUI = false;
                }

                /// <summary>
                /// Load the contents of a JavaScript file into the
                /// Script Engine.
                /// </summary>
                /// <param name="path">Path to JavaScript file</param>
                public void LoadFile(string path)
                {
                    var fileContents = File.ReadAllText(path);
                    _sc.AddCode(fileContents);
                }

                /// <summary>
                /// Pass the path of the test that you want to execute.
                /// </summary>
                /// <param name="testMethodName">JavaScript function name</param>
                public void ExecuteTest(string testMethodName)
                {
                    dynamic result = null;
                    try
                    {
                        result = _sc.Run(testMethodName, new object[] { });
                    }
                    catch
                    {
                        var error = ((IScriptControl)_sc).Error;
                        if (error != null)
                        {
                            var description = error.Description;
                            var line = error.Line;
                            var column = error.Column;
                            var text = error.Text;
                            var source = error.Source;
                            if (_context != null)
                            {
                                var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column);
                                _context.WriteLine(details);
                            }
                        }
                        throw new AssertFailedException(error.Description);
                    }
                }

                public void Dispose()
                {
                    _sc = null;
                }
            }
        }

    Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (these are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests).

    Creating the JavaScript Unit Test

    Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder:

    MvcApplication1.Tests\JavaScriptTests\MathTest.js

        /// <reference path="JavaScriptUnitTestFramework.js"/>
        function testAddNumbers() {
            // Act
            var result = addNumbers(1, 3);
            // Assert
            assert.areEqual(4, result, "addNumbers did not return right value!");
        }

    The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js to the same folder as the MathTest.js file:

    MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js

        var assert = {
            areEqual: function (expected, actual, message) {
                if (expected !== actual) {
                    throw new Error("Expected value " + expected +
                        " is not equal to " + actual + ". " + message);
                }
            }
        };

    There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests.

    Deploying the JavaScript Test Files

    This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder, or Visual Studio won't be able to find your JavaScript files when you execute your unit tests and you will get a file-not-found error.

    You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run.
    Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project's JavaScriptTests folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog.

    Creating the Visual Studio Unit Test

    The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add, New Item and selecting the Unit Test project item (do not select the Basic Unit Test project item). The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test:

        [TestMethod]
        public void TestAddNumbers()
        {
            var jsHelper = new JavaScriptTestHelper(this.TestContext);

            // Load JavaScript files
            jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
            jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
            jsHelper.LoadFile("MathTest.js");

            // Execute JavaScript Test
            jsHelper.ExecuteTest("testAddNumbers");
        }

    This code uses the JavaScriptTestHelper to load three files:

    JavaScriptUnitTestFramework.js – contains the assert functions.

    Math.js – contains the addNumbers() function from your MVC application which is being tested.

    MathTest.js – contains the JavaScript unit test function.

    Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function.

    Running the Visual Studio JavaScript Unit Test

    After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests. (Unfortunately, the Run All Impacted Tests button won't work correctly because Visual Studio won't detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.)

    The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. Notice that the TestAddNumbers() JavaScript test has failed. That is good, because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed.

    Summary

    The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window.
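    Adding further JavaScript tests follows the same pattern; here is a minimal sketch, assuming a hypothetical testSubtractNumbers() function had been added to MathTest.js, and taking advantage of the helper's IDisposable support:

        [TestMethod]
        public void TestSubtractNumbers()
        {
            // Hypothetical second test; assumes MathTest.js defines testSubtractNumbers()
            using (var jsHelper = new JavaScriptTestHelper(this.TestContext))
            {
                jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
                jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
                jsHelper.LoadFile("MathTest.js");
                jsHelper.ExecuteTest("testSubtractNumbers");
            }
        }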
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    Read the article

  • ASP.Net MVC 2 Auto Complete Textbox With Custom View Model Attribute & EditorTemplate

    - by SeanMcAlinden
    In this post I'm going to show how to create a generic, ajax driven AutoComplete text box using the new MVC 2 templates and the jQuery UI library. The template will be automatically displayed when a property is decorated with a custom attribute within the view model.

    The first thing to do is visit my previous blog post to put the custom model metadata provider in place; this is necessary when using custom attributes on the view model. http://weblogs.asp.net/seanmcalinden/archive/2010/06/11/custom-asp-net-mvc-2-modelmetadataprovider-for-using-custom-view-model-attributes.aspx

    Once this is in place, make sure you visit the jQuery UI site and download the latest stable release – in this example I'm using version 1.8.2. Add the jQuery scripts and css theme to your project and add references to them in your master page. It should look something like the following:

    Site.Master

        <head runat="server">
            <title><asp:ContentPlaceHolder ID="TitleContent" runat="server" /></title>
            <link href="../../Content/Site.css" rel="stylesheet" type="text/css" />
            <link href="../../css/ui-lightness/jquery-ui-1.8.2.custom.css" rel="stylesheet" type="text/css" />
            <script src="../../Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>
            <script src="../../Scripts/jquery-ui-1.8.2.custom.min.js" type="text/javascript"></script>
        </head>

    Once this is in place we can get started.

    Creating the AutoComplete Custom Attribute

    The auto complete attribute will derive from the abstract MetadataAttribute created in my previous post. It will look like the following:

    AutoCompleteAttribute

        using System.Collections.Generic;
        using System.Web.Mvc;
        using System.Web.Routing;

        namespace Mvc2Templates.Attributes
        {
            public class AutoCompleteAttribute : MetadataAttribute
            {
                public RouteValueDictionary RouteValueDictionary;

                public AutoCompleteAttribute(string controller, string action, string parameterName)
                {
                    this.RouteValueDictionary = new RouteValueDictionary();
                    this.RouteValueDictionary.Add("Controller", controller);
                    this.RouteValueDictionary.Add("Action", action);
                    this.RouteValueDictionary.Add(parameterName, string.Empty);
                }

                public override void Process(ModelMetadata modelMetaData)
                {
                    modelMetaData.AdditionalValues.Add("AutoCompleteUrlData", this.RouteValueDictionary);
                    modelMetaData.TemplateHint = "AutoComplete";
                }
            }
        }

    As you can see, the constructor takes in strings for the controller, action and parameter name. The parameter name will be used for passing the search text within the auto complete text box. The constructor then creates a new RouteValueDictionary, which we will use later to construct the url for getting the auto complete results via ajax.

    The main interesting method is the override called Process. Within the Process method, the route value dictionary is added to the modelMetaData AdditionalValues collection. The TemplateHint is also set to AutoComplete; this means that when the view model is parsed for display, the MVC 2 framework will look for a view user control template called AutoComplete and, if it finds one, will use that template to display the property.

    The View Model

    To show you how the attribute will look, this is the view model I have used in my example, which can be downloaded at the end of this post.
    View Model

        using System.ComponentModel;
        using Mvc2Templates.Attributes;

        namespace Mvc2Templates.Models
        {
            public class TemplateDemoViewModel
            {
                [AutoComplete("Home", "AutoCompleteResult", "searchText")]
                [DisplayName("European Country Search")]
                public string SearchText { get; set; }
            }
        }

    As you can see, the auto complete attribute is called with the controller name, action name and the name of the action parameter that the search text will be passed into.

    The AutoComplete Template

    Now all of this is in place, it's time to create the AutoComplete template. Create a ViewUserControl called AutoComplete.ascx at the following location within your application – Views/Shared/EditorTemplates/AutoComplete.ascx – and add the following code:

    AutoComplete.ascx

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%
            var propertyName = ViewData.ModelMetadata.PropertyName;
            var propertyValue = ViewData.ModelMetadata.Model;
            var id = Guid.NewGuid().ToString();
            RouteValueDictionary urlData =
                (RouteValueDictionary)ViewData.ModelMetadata.AdditionalValues.Where(x => x.Key == "AutoCompleteUrlData").Single().Value;
            var url = Mvc2Templates.Views.Shared.Helpers.RouteHelper.GetUrl(this.ViewContext.RequestContext, urlData);
        %>
        <input type="text" name="<%= propertyName %>" value="<%= propertyValue %>" id="<%= id %>" class="autoComplete" />
        <script type="text/javascript">
            $(function () {
                $("#<%= id %>").autocomplete({
                    source: function (request, response) {
                        $.ajax({
                            url: "<%= url %>" + request.term,
                            dataType: "json",
                            success: function (data) {
                                response(data);
                            }
                        });
                    },
                    minLength: 2
                });
            });
        </script>

    There is a lot going on in here, but when you break it down it's quite simple. Firstly, the property name and property value are retrieved through the model metadata. These are required to ensure that the text box input has the correct name and data to allow for model binding; you can see them being used in the text box input creation. The interesting bit is the retrieval of the route value dictionary that we added into the model metadata via the custom attribute. The url is then created using a quick helper class I wrote, which looks like the code below titled RouteHelper. The last bit of script is the code to initialise the jQuery UI AutoComplete control with the correct url for calling back to our controller action.
    RouteHelper

        using System.Web.Mvc;
        using System.Web.Routing;

        namespace Mvc2Templates.Views.Shared.Helpers
        {
            public static class RouteHelper
            {
                const string Controller = "Controller";
                const string Action = "Action";
                const string ReplaceFormatString = "REPLACE{0}";

                public static string GetUrl(RequestContext requestContext, RouteValueDictionary routeValueDictionary)
                {
                    RouteValueDictionary urlData = new RouteValueDictionary();
                    UrlHelper urlHelper = new UrlHelper(requestContext);

                    int i = 0;
                    foreach (var item in routeValueDictionary)
                    {
                        if (item.Value == string.Empty)
                        {
                            i++;
                            urlData.Add(item.Key, string.Format(ReplaceFormatString, i.ToString()));
                        }
                        else
                        {
                            urlData.Add(item.Key, item.Value);
                        }
                    }

                    var url = urlHelper.RouteUrl(urlData);

                    for (int index = 1; index <= i; index++)
                    {
                        url = url.Replace(string.Format(ReplaceFormatString, index.ToString()), string.Empty);
                    }

                    return url;
                }
            }
        }

    See it in action

    All you need to do to see it in action is pass a view model from your controller with the new AutoComplete attribute attached and call the following within your view:

        <%= this.Html.EditorForModel() %>

    NOTE: The jQuery UI auto complete control expects a JSON string returned from your controller action method. As you can't use the JsonResult to perform GET requests, use a normal action result, convert your data into JSON and return it as a string via a ContentResult. If you download the solution it will be very clear how to handle the controller and action for this demo.

    The full source code for this post can be downloaded here. It has been developed using MVC 2 and Visual Studio 2010. As always, I hope this has been interesting/useful. Kind Regards, Sean McAlinden.
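    For a rough idea of the shape of that action, here is a minimal sketch, assuming a hard-coded country list and JavaScriptSerializer for producing the JSON string (the downloadable demo may differ):

        using System;
        using System.Linq;
        using System.Web.Mvc;
        using System.Web.Script.Serialization;

        namespace Mvc2Templates.Controllers
        {
            public class HomeController : Controller
            {
                // Hypothetical data source; the real demo presumably uses its own list
                private static readonly string[] Countries =
                    { "France", "Finland", "Germany", "Greece", "Hungary", "Ireland", "Italy" };

                // Matches the attribute usage: [AutoComplete("Home", "AutoCompleteResult", "searchText")]
                public ActionResult AutoCompleteResult(string searchText)
                {
                    var matches = Countries
                        .Where(c => c.StartsWith(searchText ?? string.Empty, StringComparison.OrdinalIgnoreCase))
                        .ToArray();

                    // jQuery UI autocomplete accepts a plain JSON array of strings;
                    // serialize manually and return it via ContentResult as the post suggests
                    var json = new JavaScriptSerializer().Serialize(matches);
                    return Content(json, "application/json");
                }
            }
        }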

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free form text input. When I set up the new site, my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps, and it works very well with very little development effort involved.

    Then the client sprung another note: oh, by the way, we have a bunch of customers (real estate agents) who need to post complete HTML documents. Oh uh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway.

    There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically, we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work and so on. While the 'legit' HTML posted by these agents is basic in nature, it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution.

    I thought the best solution would be to use an HTML parser – in this case the Html Agility Pack – and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists make sense in simple scenarios where you might allow manual HTML input; when you need to allow a larger array of HTML functionality, a blacklist is probably easier to manage, as the vast majority of elements and attributes can be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases.

    The Microsoft Web Protection Library (AntiXSS)

    My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML encoding and sanitation library in the Microsoft Web Protection Library (formerly the AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitizer class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically, the Sanitizer class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
    To give you an example of what didn't work for me with the library, here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:

        var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " +
                    "<a href='http://west-wind.com'>West Wind</a> site. " +
                    "<img src='http://west-wind.com/images/new.gif' /> ";

    and the code to sanitize it with the AntiXSS Sanitizer class:

        @Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value))

    This produced a not so useful sanitized string:

        Here we go. Visit the <a>West Wind</a> site.

    While it removed the <script> tag (good), it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the whitelist, nor is there code available in this 'open source' library on CodePlex.

    Using Html Agility Pack for HTML Parsing

    The WPL library wasn't going to cut it. After doing a bit of research, I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the Html Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object, which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model for full HTML documents or HTML fragments that lets you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before, you'll feel right at home with the Agility Pack.

    Blacklist based HTML Parsing to strip XSS Code

    For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument about what's better: a whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration, and in the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so the blacklist includes the obvious <script> tag plus any tag that allows loading of external content, including <iframe>, <object>, <embed> and <link> etc. <form> is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude attributes whose values include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to.
    Here's my HtmlSanitizer implementation:

        using System.Collections.Generic;
        using System.IO;
        using System.Xml;
        using HtmlAgilityPack;

        namespace Westwind.Web.Utilities
        {
            public class HtmlSanitizer
            {
                public HashSet<string> BlackList = new HashSet<string>()
                {
                    { "script" },
                    { "iframe" },
                    { "form" },
                    { "object" },
                    { "embed" },
                    { "link" },
                    { "head" },
                    { "meta" }
                };

                /// <summary>
                /// Cleans up an HTML string and removes HTML tags in blacklist
                /// </summary>
                /// <param name="html"></param>
                /// <returns></returns>
                public static string SanitizeHtml(string html, params string[] blackList)
                {
                    var sanitizer = new HtmlSanitizer();
                    if (blackList != null && blackList.Length > 0)
                    {
                        sanitizer.BlackList.Clear();
                        foreach (string item in blackList)
                            sanitizer.BlackList.Add(item);
                    }
                    return sanitizer.Sanitize(html);
                }

                /// <summary>
                /// Cleans up an HTML string by removing elements
                /// on the blacklist and all elements that start
                /// with onXXX.
                /// </summary>
                /// <param name="html"></param>
                /// <returns></returns>
                public string Sanitize(string html)
                {
                    var doc = new HtmlDocument();
                    doc.LoadHtml(html);
                    SanitizeHtmlNode(doc.DocumentNode);

                    //return doc.DocumentNode.WriteTo();

                    string output = null;

                    // Use an XmlTextWriter to create self-closing tags
                    using (StringWriter sw = new StringWriter())
                    {
                        XmlWriter writer = new XmlTextWriter(sw);
                        doc.DocumentNode.WriteTo(writer);
                        output = sw.ToString();

                        // strip off XML doc header
                        if (!string.IsNullOrEmpty(output))
                        {
                            int at = output.IndexOf("?>");
                            output = output.Substring(at + 2);
                        }

                        writer.Close();
                    }
                    doc = null;

                    return output;
                }

                private void SanitizeHtmlNode(HtmlNode node)
                {
                    if (node.NodeType == HtmlNodeType.Element)
                    {
                        // check for blacklist items and remove
                        if (BlackList.Contains(node.Name))
                        {
                            node.Remove();
                            return;
                        }

                        // remove CSS Expressions and embedded script links
                        if (node.Name == "style")
                        {
                            if (string.IsNullOrEmpty(node.InnerText))
                            {
                                if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                                    node.ParentNode.RemoveChild(node);
                            }
                        }

                        // remove script attributes
                        if (node.HasAttributes)
                        {
                            for (int i = node.Attributes.Count - 1; i >= 0; i--)
                            {
                                HtmlAttribute currentAttribute = node.Attributes[i];

                                var attr = currentAttribute.Name.ToLower();
                                var val = currentAttribute.Value.ToLower();

                                // remove event handlers
                                if (attr.StartsWith("on"))
                                    node.Attributes.Remove(currentAttribute);

                                // remove script links
                                else if (
                                         //(attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                                         val != null &&
                                         val.Contains("javascript:"))
                                    node.Attributes.Remove(currentAttribute);

                                // Remove CSS Expressions
                                else if (attr == "style" &&
                                         val != null &&
                                         (val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")))
                                    node.Attributes.Remove(currentAttribute);
                            }
                        }
                    }

                    // Look through child nodes recursively
                    if (node.HasChildNodes)
                    {
                        for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                        {
                            SanitizeHtmlNode(node.ChildNodes[i]);
                        }
                    }
                }
            }
        }

    Please note: use this as a starting point only for your own parsing, and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those – lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a whitelist approach if you want to go that route.
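    For reference, a small usage sketch of the static entry point shown above (the HTML string here is made up):

        // Default blacklist strips the <script> element; the onclick handler is
        // removed by the attribute scan, while the <b> tag is left intact
        string html = "<div onclick=\"alert('xss')\"><script>alert('xss')</script><b>Hello</b></div>";
        string clean = HtmlSanitizer.SanitizeHtml(html);

        // Passing tag names replaces the default blacklist entirely
        string custom = HtmlSanitizer.SanitizeHtml(html, "script", "iframe");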
    The code above is semi-generic, allowing full featured HTML fragments and disallowing only script related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output – this is done to preserve XHTML style self-closing tags, which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable, either in the instance class as a property or in the static method via the string parameter list. Additionally, the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) – many of these glaring holes (like CSS expressions – WTF, IE?) have been removed in modern browsers.

    What a Pain

    To be honest, this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice, and I'm sharing the code here merely as a baseline to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck – this is really something that should come from Microsoft as the systems vendor, or possibly from a third party that provides security tools.

    Luckily for my application, we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small – this is not a wide open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like Markdown or even a good HTML edit control that can provide some limits on what types of input are allowed. Alas, in this case I was overridden and we had to go forward and allow *any* raw HTML posted.

    Sometimes I really feel sad that it's come this far – how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
    So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species – no other species manages to waste as much time, effort and resources as we humans do :-)

    Resources

    Code on GitHub
    Html Agility Pack
    XSS Cheat Sheet
    XSS Prevention Cheat Sheet
    Microsoft Web Protection Library (AntiXss)
    StackOverflow links:
    http://stackoverflow.com/questions/341872/html-sanitizer-for-net
    http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
    http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Security, HTML, ASP.NET, JavaScript.

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScriptSerializer. The DataContractJsonSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization – like values of type object, or anonymous types, which are quite common these days. The JavaScriptSerializer that came before it does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript, and its overall model of extensibility was pretty limited (the JavaScriptSerializer is what MVC uses for JSON responses).

    JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, XML to JSON conversion, LINQ to JSON and many, many more features than either of the built-in serializers. ASP.NET Web API now uses JSON.NET as its default serializer, and it is pulled in as a NuGet dependency into Web API projects, which is great.

    Dynamic JSON Parsing

    One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically – that is, without mapping the JSON captured directly into a .NET type as the DataContractJsonSerializer or the JavaScriptSerializer do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc.), and other times you simply don't have the structures in place, or don't want to create them, to actually import the data.

    If this topic sounds familiar – you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types.

    Why Dynamic JSON?

    So, why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull only a small amount of data out of a large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time, and it makes little sense to map the entire service structure in your application.

    For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's) location. The Google API returns a ton of information that my application had no interest in – all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store them on my business entities that needed to receive the data – no need to map the entire Maps API structure.
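    To make that concrete, here is a minimal sketch of pulling a couple of values out of a larger response; the JSON shape here is hypothetical rather than the real Places API schema:

        // assumes: using Newtonsoft.Json.Linq;
        // Hypothetical response shape; only the two values of interest get pulled out
        string jsonFromService = @"{""results"":[{""name"":""Joe's Diner"",""vicinity"":""123 Main St""}]}";

        dynamic response = JObject.Parse(jsonFromService);

        string name = response.results[0].name;        // dynamic member access, no mapped .NET type
        string address = response.results[0].vicinity;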
Getting JSON.NET
The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with:

PM> Install-Package Newtonsoft.Json

from the Package Manager Console, or by using Manage NuGet Packages on your project's References node. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be automatically added to your project. Alternatively you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/

Creating JSON on the fly with JObject and JArray
Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaObjectProvider and so uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs, using JObject for the base object and JArray for the actual collection of songs:

[TestMethod]
public void JObjectOutputTest()
{
    // strongly typed instance
    var jsonObject = new JObject();

    // you can explicitly add values here using the class interface
    jsonObject.Add("Entered", DateTime.Now);

    // or cast to dynamic to dynamically add/read properties
    dynamic album = jsonObject;

    album.AlbumName = "Dirty Deeds Done Dirt Cheap";
    album.Artist = "AC/DC";
    album.YearReleased = 1976;

    album.Songs = new JArray() as dynamic;

    dynamic song = new JObject();
    song.SongName = "Dirty Deeds Done Dirt Cheap";
    song.SongLength = "4:11";
    album.Songs.Add(song);

    song = new JObject();
    song.SongName = "Love at First Feel";
    song.SongLength = "3:10";
    album.Songs.Add(song);

    Console.WriteLine(album.ToString());
}

This produces a complete JSON structure:

{
  "Entered": "2012-08-18T13:26:37.7137482-10:00",
  "AlbumName": "Dirty Deeds Done Dirt Cheap",
  "Artist": "AC/DC",
  "YearReleased": 1976,
  "Songs": [
    {
      "SongName": "Dirty Deeds Done Dirt Cheap",
      "SongLength": "4:11"
    },
    {
      "SongName": "Love at First Feel",
      "SongLength": "3:10"
    }
  ]
}

Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override them with the JsonSerializerSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather, the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime, without compile-time constraints on the structure of the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject is similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key/value pairs that are exposed as properties through the IDynamicMetaObjectProvider interface implemented by JSON.NET's JToken base class. For simple typed values the syntax is very clean - you just assign them as properties. For objects and arrays you have to explicitly create a new JObject or JArray, cast them to dynamic, and then add properties and items to them.
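As a brief aside on the settings point above: the generation defaults can be overridden per call. Here's a minimal sketch - Formatting and NullValueHandling are real JsonSerializerSettings options, and jsonObject is the instance built in the example above:

using Newtonsoft.Json;

var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,             // pretty-print the output
    NullValueHandling = NullValueHandling.Ignore  // leave null properties out entirely
};

string json = JsonConvert.SerializeObject(jsonObject, settings);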
Always remember, though, that these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:

// strongly typed instance
var jsonObject = new JObject();

// you can explicitly add values here
jsonObject.Add("Entered", DateTime.Now);

// expando style instance - you can just 'use' properties
dynamic album = jsonObject;
album.AlbumName = "Dirty Deeds Done Dirt Cheap";

JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

foreach (var item in jsonObject)
{
    Console.WriteLine(item.Key + " " + item.Value.ToString());
}

The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you have used that before, you're already familiar with how the dynamic interfaces to the JSON objects work.

Importing JSON with JObject.Parse() and JArray.Parse()
The JValue class supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JToken objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

public void JValueParsingTest()
{
    var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                        ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

    dynamic json = JValue.Parse(jsonString);

    // values require casting
    string name = json.Name;
    string company = json.Company;
    DateTime entered = json.Entered;

    Assert.AreEqual(name, "Rick");
    Assert.AreEqual(company, "West Wind");
}

The JSON string represents an object with three properties, which is parsed into a JObject class and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types first before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way that dynamic typing works: the type can't be determined based on the method signature of the Assert.AreEqual(object, object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ((string)json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
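When the incoming structure varies from request to request, it can pay to probe for tokens before casting. A small defensive-access sketch, reusing the jsonString from the example above:

JObject json = JObject.Parse(jsonString);

// the indexer returns null rather than throwing when a property is missing
string company = json["Company"] != null
    ? json.Value<string>("Company")
    : "(unknown)";

// explicit casts convert JToken values to CLR types
DateTime entered = (DateTime)json["Entered"];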
Here's another example of an array of albums serialized to JSON and then parsed with JArray.Parse():

[TestMethod]
public void JsonArrayParsingTest()
{
    var jsonString = @"[
    {
        ""Id"": ""b3ec4e5c"",
        ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
        ""Artist"": ""AC/DC"",
        ""YearReleased"": 1976,
        ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"",
        ""Songs"": [
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" }
        ]
    },
    {
        ""Id"": ""7b919432"",
        ""AlbumName"": ""End of the Silence"",
        ""Artist"": ""Henry Rollins Band"",
        ""YearReleased"": 1992,
        ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
        ""Songs"": [
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" },
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" }
        ]
    }
]";

    JArray jsonVal = JArray.Parse(jsonString);
    dynamic albums = jsonVal;

    foreach (dynamic album in albums)
    {
        Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
        foreach (dynamic song in album.Songs)
        {
            Console.WriteLine("\t" + song.SongName);
        }
    }

    Console.WriteLine(albums[0].AlbumName);
    Console.WriteLine(albums[0].Songs[1].SongName);
}

JObject and JArray in ASP.NET Web API
Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters of these types or return them from your controller methods. The following contrived example receives dynamic JSON input, and then creates and returns a new dynamic JSON object based on data from the first:

[HttpPost]
public JObject PostAlbumJObject(JObject jAlbum)
{
    // dynamic input from inbound JSON
    dynamic album = jAlbum;

    // create a new JSON object to write out
    dynamic newAlbum = new JObject();

    // Create properties on the new instance
    // with values from the first
    newAlbum.AlbumName = album.AlbumName + " New";
    newAlbum.NewProperty = "something new";
    newAlbum.Songs = new JArray();

    foreach (dynamic song in album.Songs)
    {
        song.SongName = song.SongName + " New";
        newAlbum.Songs.Add(song);
    }

    return newAlbum;
}

The raw POST request to the server looks something like this:

POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
User-Agent: Fiddler
Content-type: application/json
Host: localhost
Content-Length: 88

{AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]}

and the output that comes back looks like this:

{
  "AlbumName": "Dirty Deeds New",
  "NewProperty": "something new",
  "Songs": [
    {
      "SongName": "Problem Child New"
    },
    {
      "SongName": "Squealer New"
    }
  ]
}

The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON - so effectively the parameter and result types explicitly determine the input and output format, which is nice.

Dynamic to Strong Type Mapping
You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes the array of albums, picks out a single album and casts that album to a static Album instance.

[TestMethod]
public void JsonParseToStrongTypeTest()
{
    JArray albums = JArray.Parse(jsonString) as JArray;

    // pick out one album
    JObject jalbum = albums[0] as JObject;

    // Copy to a static Album instance
    Album album = jalbum.ToObject<Album>();

    Assert.IsNotNull(album);
    Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
    Assert.IsTrue(album.Songs.Count > 0);
}

This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

Strongly typed JSON Parsing
With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

[TestMethod]
public void StronglyTypedSerializationTest()
{
    // create an album to round-trip through the serializer
    var album = new Album()
    {
        AlbumName = "Dirty Deeds Done Dirt Cheap",
        Artist = "AC/DC",
        Entered = DateTime.Now,
        YearReleased = 1976,
        Songs = new List<Song>()
        {
            new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" },
            new Song() { SongName = "Love at First Feel", SongLength = "3:10" }
        }
    };

    // serialize to string
    string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
    Console.WriteLine(json2);

    // make sure we can deserialize back
    var album2 = JsonConvert.DeserializeObject<Album>(json2);

    Assert.IsNotNull(album2);
    Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
    Assert.IsTrue(album2.Songs.Count == 2);
}

JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize and parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with IntelliSense, and that's good because there's not a lot in the way of documentation that's actually useful.

Summary
JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
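To round out the JsonSerializer mention above, here's a minimal stream-based sketch. The Album type is the one from the examples above; the file name is made up and error handling is omitted:

using System.IO;
using Newtonsoft.Json;

var serializer = new JsonSerializer { Formatting = Formatting.Indented };

// serialize directly to a file stream
using (var writer = new StreamWriter("album.json"))
{
    serializer.Serialize(writer, album);
}

// read it back through a JsonTextReader
using (var reader = new StreamReader("album.json"))
using (var jsonReader = new JsonTextReader(reader))
{
    var album2 = serializer.Deserialize<Album>(jsonReader);
}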
Resources
Sample Test Project
http://json.codeplex.com/

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  Web Api  AJAX

    Read the article

  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
WCF makes it fairly easy to access WS-* Web Services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make service clients work; however, the proper interfaces to make that happen are not easy to discover and are for the most part undocumented, unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the Password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically, I had a need to create a WCF message on the client that includes a WS-Security header that looks like this, from the service's spec document:

<soapenv:Header>
  <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
    <wsse:UsernameToken wsu:Id="UsernameToken-8" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <wsse:Username>TeStUsErNaMe1</wsse:Username>
      <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">TeStPaSsWoRd1</wsse:Password>
      <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce>
      <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created>
    </wsse:UsernameToken>
  </wsse:Security>
</soapenv:Header>

Specifically, the Nonce and Created keys are what WCF doesn't create or have a built-in format for.

Why is there a nonce?
My first thought here was WTF? The username and password are there in clear text, so what does the Nonce accomplish? The Nonce and Created keys are part of the WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request, which the server can store and check before running another request, thus ensuring that a request is not replayed with exactly the same values.

Basic ServiceUrl Import - not much Luck
The first thing I did was simply import the service as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropriately adds the WS-Security configuration to the basicHttpBinding in the config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="RealTimeOnlineSoapBinding">
          <security mode="Transport" />
        </binding>
        <binding name="RealTimeOnlineSoapBinding1" />
      </basicHttpBinding>
    </bindings>
    <client>
      <endpoint address="https://notarealurl.com:443/services/RealTimeOnline"
                binding="basicHttpBinding"
                bindingConfiguration="RealTimeOnlineSoapBinding"
                contract="RealTimeOnline.RealTimeOnline"
                name="RealTimeOnline" />
    </client>
  </system.serviceModel>
</configuration>

If I run this as-is, using code like this:

var client = new RealTimeOnlineClient();
client.ClientCredentials.UserName.UserName = "TheUsername";
client.ClientCredentials.UserName.Password = "ThePassword";

… I get nothing in terms of WS-Security headers. The request is sent, but the binding expects transport-level security to be applied, rather than message-level security.
To fix this, so that a WS-Security message header is sent, the security mode can be changed to:

<security mode="TransportWithMessageCredential" />

Now if I re-run I at least get a WS-Security header, which looks like this:

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <s:Header>
    <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <u:Timestamp u:Id="_0">
        <u:Created>2012-11-24T02:55:18.011Z</u:Created>
        <u:Expires>2012-11-24T03:00:18.011Z</u:Expires>
      </u:Timestamp>
      <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1">
        <o:Username>TheUserName</o:Username>
        <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
      </o:UsernameToken>
    </o:Security>
  </s:Header>

Closer! Now the WS-Security header is there, along with a timestamp field (which might not be accepted by some services expecting WS-Security), but there's no Nonce or Created timestamp as required by my original service.

Using a CustomBinding instead
My next try was to go with a CustomBinding instead of basicHttpBinding, as it allows a bit more control over the protocol and transport configurations for the binding. Specifically, I can explicitly specify the message protocol(s) used. Using configuration file settings, here's what the config file looks like:

<?xml version="1.0"?>
<configuration>
  <system.serviceModel>
    <bindings>
      <customBinding>
        <binding name="CustomSoapBinding">
          <security includeTimestamp="false"
                    authenticationMode="UserNameOverTransport"
                    defaultAlgorithmSuite="Basic256"
                    requireDerivedKeys="false"
                    messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10">
          </security>
          <textMessageEncoding messageVersion="Soap11"></textMessageEncoding>
          <httpsTransport maxReceivedMessageSize="2000000000"/>
        </binding>
      </customBinding>
    </bindings>
    <client>
      <endpoint address="https://notrealurl.com:443/services/RealTimeOnline"
                binding="customBinding"
                bindingConfiguration="CustomSoapBinding"
                contract="RealTimeOnline.RealTimeOnline"
                name="RealTimeOnline" />
    </client>
  </system.serviceModel>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
  </startup>
</configuration>

This ends up creating a cleaner header that's missing the timestamp field, which can cause problems for some services. The WS-Security header output generated with the above looks like this:

<s:Header>
  <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
    <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1">
      <o:Username>TheUsername</o:Username>
      <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
    </o:UsernameToken>
  </o:Security>
</s:Header>

This is closer, as it includes only the username and password. The key here is the WS-Security protocol version:

messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"

which explicitly specifies the protocol version. There are several variants of this specification, but none of them seem to support the nonce, unfortunately.
This protocol does allow the Nonce and Created timestamp to be omitted (which effectively makes those keys optional). Some services I tried that requested a Nonce actually worked with just this protocol, where the default basicHttpBinding failed to connect, so this is a possible solution for getting access to some services. Unfortunately, for my target service that was not an option: the nonce has to be there.

Creating Custom ClientCredentials
As it turns out, WCF doesn't have support for the digest Nonce as part of WS-Security, and as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research trying to find workarounds for this, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of them are particularly clear, and I ended up using bits and pieces of several of them to arrive at a working solution in the end.

http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken
http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee

The latter forum message is the more useful of the two (the last message on the thread in particular) and it has most of the information required to make this work. But it took some experimentation for me to get it right, so I'll recount the process here maybe a bit more comprehensively. In order for this to work, a number of classes have to be overridden:

ClientCredentials
ClientCredentialsSecurityTokenManager
WSSecurityTokenSerializer

The idea is that we need to create a custom ClientCredentials class to hold the custom properties, so they can be set from the UI or via configuration settings. The token manager and token serializer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization.
Here are the three classes required and their full implementations:

public class CustomCredentials : ClientCredentials
{
    public CustomCredentials() { }

    protected CustomCredentials(CustomCredentials cc) : base(cc) { }

    public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager()
    {
        return new CustomSecurityTokenManager(this);
    }

    protected override ClientCredentials CloneCore()
    {
        return new CustomCredentials(this);
    }
}

public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager
{
    public CustomSecurityTokenManager(CustomCredentials cred) : base(cred) { }

    public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version)
    {
        return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11);
    }
}

public class CustomTokenSerializer : WSSecurityTokenSerializer
{
    public CustomTokenSerializer(SecurityVersion sv) : base(sv) { }

    protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token)
    {
        UserNameSecurityToken userToken = token as UserNameSecurityToken;

        string tokennamespace = "o";

        DateTime created = DateTime.Now;
        string createdStr = created.ToString("yyyy-MM-ddTHH:mm:ss.fffZ");

        // unique Nonce value - hash with SHA-1 for 'randomness'
        // in theory the nonce could just be the GUID by itself
        string phrase = Guid.NewGuid().ToString();
        var nonce = GetSHA1String(phrase);

        // in this case the password is plain text
        // for digest mode the password needs to be encoded as:
        //     PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password))
        // and the Password Type attribute needs to change to the #Digest profile:
        //string password = GetSHA1String(nonce + createdStr + userToken.Password);
        string password = userToken.Password;

        writer.WriteRaw(string.Format(
            "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
            "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
            "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</{0}:Password>" +
            "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" +
            "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace));
    }

    protected string GetSHA1String(string phrase)
    {
        SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider();
        byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase));
        return Convert.ToBase64String(hashedDataBytes);
    }
}

Realistically, only the CustomTokenSerializer has any significant code in it. The code there deals with actually serializing the custom credentials using low-level XML semantics, by writing output into an XML writer. I can't take credit for this code - most of it comes from the MSDN forum post mentioned earlier. I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation. Per spec, the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique, and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can potentially be guessed are highly exaggerated, to say the least, IMHO.
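As a side note: if you'd rather not derive the nonce from a GUID at all, a crypto-random byte sequence works just as well. This is a small sketch of my own, not code from the original forum post:

using System;
using System.Security.Cryptography;

// 16 random bytes from the system CSPRNG, Base64-encoded
// so they fit the Base64Binary encoding the Nonce element declares
byte[] nonceBytes = new byte[16];
using (var rng = new RNGCryptoServiceProvider())
{
    rng.GetBytes(nonceBytes);
}
string nonce = Convert.ToBase64String(nonceBytes);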
Still, to satisfy even that aspect, I added the SHA-1 hashing of the GUID to give a value that would be impossible to 'guess'. The original example from the forum post used another level of string encoding and decoding in between - but that really didn't accomplish anything except extra overhead. The header output generated from this looks like this:

<s:Header>
  <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
    <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <o:Username>TheUsername</o:Username>
      <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
      <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce>
      <u:Created>2012-11-23T07:10:04.670Z</u:Created>
    </o:UsernameToken>
  </o:Security>
</s:Header>

which is exactly as it should be.

Password Digest?
In my case the password is passed in plain text over an SSL connection, so no digest was required and I was done with the code above. Since I don't have a service handy that requires a password digest, I had no way of testing the digest implementation, but here is how it is likely to work. If you need to pass a digest-encoded password, things are a little bit trickier. The password Type namespace needs to change to:

http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest

and then the password value needs to be encoded. The format for password digest encoding is this:

Base64(SHA-1(Nonce + Created + Password))

and it can be handled in the code above with this line (which is commented in the snippet above):

string password = GetSHA1String(nonce + createdStr + userToken.Password);

The entire WriteTokenCore method for the digest version looks like this:

protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token)
{
    UserNameSecurityToken userToken = token as UserNameSecurityToken;

    string tokennamespace = "o";

    DateTime created = DateTime.Now;
    string createdStr = created.ToString("yyyy-MM-ddTHH:mm:ss.fffZ");

    // unique Nonce value - hash with SHA-1 for 'randomness'
    // in theory the nonce could just be the GUID by itself
    string phrase = Guid.NewGuid().ToString();
    var nonce = GetSHA1String(phrase);

    string password = GetSHA1String(nonce + createdStr + userToken.Password);

    writer.WriteRaw(string.Format(
        "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
        "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
        "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" + password + "</{0}:Password>" +
        "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" +
        "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace));
}

I had no service to connect to in order to try out digest auth - if you end up needing it and get it to work, please drop a comment…

How to use the custom Credentials
The easiest way to use the custom credentials is to create the client in code.
Here's a factory method I use to create an instance of my service client:

public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url, string username, string password)
{
    if (string.IsNullOrEmpty(url))
        url = "https://notrealurl.com:443/cows/services/RealTimeOnline";

    CustomBinding binding = new CustomBinding();

    var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement();
    security.IncludeTimestamp = false;
    security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256;
    security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10;

    var encoding = new TextMessageEncodingBindingElement();
    encoding.MessageVersion = MessageVersion.Soap11;

    var transport = new HttpsTransportBindingElement();
    transport.MaxReceivedMessageSize = 20000000; // 20 megs

    binding.Elements.Add(security);
    binding.Elements.Add(encoding);
    binding.Elements.Add(transport);

    RealTimeOnlineClient client = new RealTimeOnlineClient(binding, new EndpointAddress(url));

    // swap in the full client credential with Nonce support:
    // (it looks like this might not be required - the service seems to work without it)
    client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>();
    client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials());

    client.ClientCredentials.UserName.UserName = username;
    client.ClientCredentials.UserName.Password = password;

    return client;
}

This returns a service client that's ready to call service methods. The key item in this code is the ChannelFactory endpoint behavior modification that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read-only, and this is the way it has to be replaced.

Summary
It's a bummer that WCF doesn't support WSE Security authentication with nonce values out of the box. From reading the comments in posts and articles while trying to find a solution, I gather that this feature was omitted by design, as this protocol is considered insecure. While I agree that plain text passwords are rarely a good idea, even if they go over a secured SSL connection as WSE Security does, there are unfortunately quite a few services (mostly Java services, I suspect) that use this protocol. I've run into this twice now, and from trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with it. There are about a dozen questions about this on StackOverflow, all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place. Again I marvel at WCF and the breadth of protocol features it supports in a single tool. And even when it can't handle something, there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean, there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team, or at least somebody very, very intricately involved in the innards of WCF, to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online, and I was able to cobble together a solution from the existing content.
I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man, am I glad I'm not dealing with SOAP services much anymore. REST service security - even when using some sort of federation - is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell…

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in WCF  Web Services

    Read the article

  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I've been doing a lot of Silverlight development lately, I decided to satisfy my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along:

Silverlight 3
WCF RIA Services
Bing Maps Silverlight Control *
Managed Extensibility Framework (optional)
MVVM Light Toolkit (optional)
log4net (optional)

* If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application.

Getting Started
We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below.

RIA Services
Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out, because each scenario I just mentioned could potentially use a different Domain Service base class (e.g. LinqToSqlDomainService<TDataContext>). Now we can start adding query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
entry.Links.Select(l => Path.Combine(l.BaseUri.OriginalString,                         l.Uri.OriginalString)).FirstOrDefault(),                     Age = entry.Categories.Where(c => c.Label == "Age")                         .Select(c => c.Name).FirstOrDefault(),                     Magnitude = magnitude.GetValueOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault(),                     DepthInKm = depthKm.GetValueOrDefault(),                     DepthDesc = depthDesc,                     UtcTime = utcTime,                     LocalTime = localTime                 };             }               return quake;         }           private T GetElementValue<T>(SyndicationItem entry, String name)         {             var el = entry.ElementExtensions.Where(e => e.OuterName == name).FirstOrDefault();             T value = default(T);               if ( el != null )             {                 value = el.GetObject<T>();             }               return value;         }           private double? TryParse(String value)         {             double d;             if ( Double.TryParse(value, out d) )             {                 return d;             }             return null;         }           /// <summary>         /// Gets the feed at the specified URL.         /// </summary>         /// <param name="url">The URL.</param>         /// <returns>A <see cref="SyndicationFeed"/> object.</returns>         public static SyndicationFeed GetFeed(String url)         {             SyndicationFeed feed = null;               try             {                 log.Debug("Loading RSS feed: " + url);                   using ( var reader = XmlReader.Create(url) )                 {                     feed = SyndicationFeed.Load(reader);                 }             }             catch ( Exception ex )             {                 log.Error("Error occurred while loading RSS feed: " + url, ex);             }               return feed;         }     } }   The only method that will be generated in the client side proxy class, EarthquakeContext, will be the GetEarthquakes() method. The reason being that it is the only public instance method and it returns an IQueryable<Earthquake> collection that can be consumed by the client application. GetEarthquakes() calls the static GetFeed(String) method, which utilizes the built in SyndicationFeed API to load the external data feed. You will need to add a reference to the System.ServiceModel.Web library in order to take advantage of the RSS/Atom reader. The API will also allow you to create your own feeds to serve up in your applications. Model I have also created a Model folder and added a new class, Earthquake.cs. The Earthquake object will hold the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which is required by RIA Services to uniquely identify the entity. using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ComponentModel.DataAnnotations;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     [DataContract]     public class Earthquake     {         /// <summary>         /// Gets or sets the id.         
/// </summary>         /// <value>The id.</value>         [Key]         [DataMember]         public string Id { get; set; }           /// <summary>         /// Gets or sets the title.         /// </summary>         /// <value>The title.</value>         [DataMember]         public string Title { get; set; }           /// <summary>         /// Gets or sets the summary.         /// </summary>         /// <value>The summary.</value>         [DataMember]         public string Summary { get; set; }           // additional properties omitted     } }   View Model The recent trend to use the MVVM pattern for WPF and Silverlight provides a great way to separate the data and behavior logic out of the user interface layer of your client applications. I have chosen to use the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I went ahead and created a ViewModel folder in the Silverlight project and added a EarthquakeViewModel class that derives from ViewModelBase. Here is the code: using System; using System.Collections.ObjectModel; using System.ComponentModel.Composition; using System.ComponentModel.Composition.Hosting; using Microsoft.Maps.MapControl; using GalaSoft.MvvmLight; using EarthquakeLocator.Web.Model; using EarthquakeLocator.Web.Services;   namespace EarthquakeLocator.ViewModel {     /// <summary>     /// Provides data for views displaying earthquake information.     /// </summary>     public class EarthquakeViewModel : ViewModelBase     {         [Import]         public EarthquakeContext Context;           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         public EarthquakeViewModel()         {             var catalog = new AssemblyCatalog(GetType().Assembly);             var container = new CompositionContainer(catalog);             container.ComposeParts(this);             Initialize();         }           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         /// <param name="context">The context.</param>         public EarthquakeViewModel(EarthquakeContext context)         {             Context = context;             Initialize();         }           private void Initialize()         {             MapCenter = new Location(20, -170);             ZoomLevel = 2;         }           #region Private Methods           private void OnAutoLoadDataChanged()         {             LoadEarthquakes();         }           private void LoadEarthquakes()         {             var query = Context.GetEarthquakesQuery();             Context.Earthquakes.Clear();               Context.Load(query, (op) =>             {                 if ( !op.HasError )                 {                     foreach ( var item in op.Entities )                     {                         Earthquakes.Add(item);                     }                 }             }, null);         }           #endregion Private Methods           #region Properties           private bool autoLoadData;         /// <summary>         /// Gets or sets a value indicating whether to auto load data.         
/// </summary>         /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>         public bool AutoLoadData         {             get { return autoLoadData; }             set             {                 if ( autoLoadData != value )                 {                     autoLoadData = value;                     RaisePropertyChanged("AutoLoadData");                     OnAutoLoadDataChanged();                 }             }         }           private ObservableCollection<Earthquake> earthquakes;         /// <summary>         /// Gets the collection of earthquakes to display.         /// </summary>         /// <value>The collection of earthquakes.</value>         public ObservableCollection<Earthquake> Earthquakes         {             get             {                 if ( earthquakes == null )                 {                     earthquakes = new ObservableCollection<Earthquake>();                 }                   return earthquakes;             }         }           private Location mapCenter;         /// <summary>         /// Gets or sets the map center.         /// </summary>         /// <value>The map center.</value>         public Location MapCenter         {             get { return mapCenter; }             set             {                 if ( mapCenter != value )                 {                     mapCenter = value;                     RaisePropertyChanged("MapCenter");                 }             }         }           private double zoomLevel;         /// <summary>         /// Gets or sets the zoom level.         /// </summary>         /// <value>The zoom level.</value>         public double ZoomLevel         {             get { return zoomLevel; }             set             {                 if ( zoomLevel != value )                 {                     zoomLevel = value;                     RaisePropertyChanged("ZoomLevel");                 }             }         }           #endregion Properties     } }   The EarthquakeViewModel class contains all of the properties that will be bound to by the various controls in our views. Be sure to read through the LoadEarthquakes() method, which handles calling the GetEarthquakes() method in our EarthquakeService via the EarthquakeContext proxy, and also transfers the loaded entities into the view model’s Earthquakes collection. Another thing to notice is what’s going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute: using System; using System.ComponentModel.Composition;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// The client side proxy for the EarthquakeService class.     /// </summary>     [Export]     public partial class EarthquakeContext     {     } }   One last piece I wanted to point out before moving on to the user interface, I added a client side partial class for the Earthquake entity that contains helper properties that we will bind to later: using System;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     public partial class Earthquake     {         /// <summary>         /// Gets the location based on the current Latitude/Longitude.         
/// </summary>         /// <value>The location.</value>         public string Location         {             get { return String.Format("{0},{1}", Latitude, Longitude); }         }           /// <summary>         /// Gets the size based on the Magnitude.         /// </summary>         /// <value>The size.</value>         public double Size         {             get { return (Magnitude * 3); }         }     } }   View Now the fun part! Usually, I would create a Views folder to place all of my View controls in, but I took the easy way out and added the following XAML code to the default MainPage.xaml file. Be sure to add the bing prefix associating the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I am setting the AutoLoadData property to true as a way to trigger the LoadEarthquakes() method call. The MapItemsControl found within the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get the automatic two way data binding via the INotifyCollectionChanged interface. <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">             <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />         </bing:Map>       </Grid> </UserControl>   The EarthquakeTemplate defines the Ellipse that will represent each earthquake, the Width and Height that are determined by the Magnitude, the Position on the map, and also the tooltip that will appear when we mouse over each data point. 
Running the application will give us the following result (shown with a tooltip example). That concludes this portion of our show, but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy!

Additional Resources
USGS Earthquake Data Feeds
Brad Abrams shows how RIA Services and MVVM can work together
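One closing aside: the SyndicationFeed API used in the service above can produce feeds as well as consume them. Here's a minimal sketch of publishing earthquake data back out as Atom - the titles and URLs are made up for illustration:

using System;
using System.Collections.Generic;
using System.ServiceModel.Syndication;
using System.Xml;

var feed = new SyndicationFeed(
    "Recent Earthquakes",                        // feed title
    "Seismic events from the last seven days",   // feed description
    new Uri("http://example.com/earthquakes"));  // alternate link

feed.Items = new List<SyndicationItem>
{
    new SyndicationItem(
        "M 4.2 - Southern California",
        "Depth: 10 km",
        new Uri("http://example.com/earthquakes/1"))
};

// write the feed out as Atom 1.0
using (var writer = XmlWriter.Create("earthquakes.atom"))
{
    new Atom10FeedFormatter(feed).WriteTo(writer);
}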

    Read the article

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
I’m excited to announce the release today of several products:

ASP.NET MVC 3
NuGet
IIS Express 7.5
SQL Server Compact Edition 4
Web Deploy and Web Farm Framework 2.0
Orchard 1.0
WebMatrix 1.0

The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack.

ASP.NET MVC 3
Today we are shipping the final release of ASP.NET MVC 3.  You can download and install ASP.NET MVC 3 here.  The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here. ASP.NET MVC 3 is a significant update that brings with it a bunch of great features.  Some of the improvements include:

Razor
ASP.NET MVC 3 ships with a new view-engine option called "Razor" (in addition to continuing to support/enhance the existing .aspx view engine).  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.  You can learn more about Razor from some of the blog posts I've done about it over the last 6 months:

Introducing Razor
New @model keyword in Razor
Layouts with Razor
Server-Side Comments with Razor
Razor's @: and <text> syntax
Implicit and Explicit code nuggets with Razor
Layouts and Sections with Razor

Today's release includes full code IntelliSense support for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express.

JavaScript Improvements
ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an unobtrusive JavaScript based approach.  Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML5 "data-" attribute convention (which conveniently works on older browsers as well - including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries.  ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server.  This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends.  We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project).  Previous releases of ASP.NET MVC included the core jQuery library.  ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios).  We are also now shipping jQuery UI by default (which provides a rich set of client-side JavaScript UI widgets for you to use within projects).

Improved Validation
ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an unobtrusive JavaScript implementation).
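To ground the validation discussion, here's a minimal model sketch using the DataAnnotations attributes, including the IValidatableObject interface covered next. The Album type and its rules are illustrative, not taken from the MVC 3 project templates:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Album : IValidatableObject
{
    [Required(ErrorMessage = "An album name is required.")]
    [StringLength(100)]
    public string AlbumName { get; set; }

    [Range(1900, 2100)]
    public int YearReleased { get; set; }

    // model-level validation via the IValidatableObject interface
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (YearReleased > DateTime.Now.Year)
        {
            yield return new ValidationResult(
                "Release year cannot be in the future.",
                new[] { "YearReleased" });
        }
    }
}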
    Today’s release also includes built-in support for Remote Validation - which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client. The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3. This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model. ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4. ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated. This enables richer scenarios where you can validate the current value based on another property of the model. We’ve shipped a built-in [Compare] validation attribute with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values.

    You can use any data access API or technology with ASP.NET MVC. This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications. These two posts of mine cover the latest EF Code First preview and demonstrate how to use it with ASP.NET MVC 3 to enable easy editing of data (with end-to-end client+server validation support). The final release of EF Code First will ship in the next few weeks. Today we are also publishing the first preview of a new MvcScaffolding project. It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code First (and is pluggable to support other data providers). You can learn more about it – and install it via NuGet today – from Steve Sanderson’s MvcScaffolding blog post.

    Output Caching

    Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC 3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing. This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server. The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site. It supports the ability to cache the content either on the web server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I’ll post some tutorials on my blog in the future that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios.

    Better Dependency Injection

    ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers. You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects.
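    As a rough sketch of the registration model just described (illustrative code, not from the release notes: SimpleResolver is an invented stand-in for a real container adapter such as one for Unity, Ninject or StructureMap):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        // A deliberately tiny IDependencyResolver implementation.
        public class SimpleResolver : IDependencyResolver
        {
            private readonly Dictionary<Type, Func<object>> _registrations =
                new Dictionary<Type, Func<object>>();

            public void Register<T>(Func<object> factory)
            {
                _registrations[typeof(T)] = factory;
            }

            // MVC 3 calls GetService when it needs controllers, filters, model
            // binders, etc. Returning null tells MVC to use its default behavior.
            public object GetService(Type serviceType)
            {
                Func<object> factory;
                return _registrations.TryGetValue(serviceType, out factory) ? factory() : null;
            }

            public IEnumerable<object> GetServices(Type serviceType)
            {
                var service = GetService(serviceType);
                return service != null ? new[] { service } : Enumerable.Empty<object>();
            }
        }

        // Typically wired up once in Application_Start:
        //   var resolver = new SimpleResolver();
        //   resolver.Register<IMovieRepository>(() => new MovieRepository()); // invented types
        //   DependencyResolver.SetResolver(resolver);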
    Other Goodies

    ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner. Here are just a few examples:

    - Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates.
    - Improved Add->View Scaffolding support that enables the generation of even cleaner view templates.
    - New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views.
    - Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app.
    - New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models.
    - Sessionless controller support that allows fine-grained control over whether SessionState is enabled on a Controller.
    - New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios.
    - New Html.Raw() helper to indicate that output should not be HTML encoded.
    - New Crypto helpers for salting and hashing passwords.
    - And much, much more…

    Learn More about ASP.NET MVC 3

    We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead. Below are two good ASP.NET MVC 3 tutorials available on the site today:

    - Build your First ASP.NET MVC 3 Application: VB and C#
    - Building the ASP.NET MVC 3 Music Store

    We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published.

    How to Upgrade Existing Projects

    ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3. The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release. MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones. You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes. Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your existing projects.

    Localized Builds

    Today’s ASP.NET MVC 3 release is available in English. We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days. I’ll blog pointers to the localized downloads once they are available.

    NuGet

    Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries). You can download and install it here. NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc.) to package up their libraries and register them with an online gallery/catalog that is searchable.
    The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – it does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.

    NuGet Gallery

    This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet. The site also now allows developers to optionally submit new packages that they wish to share with others. You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today. We hope to have thousands there in the future.

    IIS Express 7.5

    Today we are also shipping IIS Express 7.5. IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios. It works for both ASP.NET Web Forms and ASP.NET MVC project types. We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built into Visual Studio today with the full power of IIS. Specifically:

    - It’s lightweight and easy to install (less than 5MB download and a quick install)
    - It does not require an administrator account to run/debug applications from Visual Studio
    - It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules
    - It supports and enables the same extensibility model and web.config file settings that IIS 7.x supports
    - It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all)
    - It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms

    IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk. It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios. You can also optionally redistribute IIS Express with your own applications if you want a lightweight web server. The standard IIS Express EULA now includes redistributable rights. Visual Studio 2010 SP1 adds support for IIS Express. Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.

    SQL Server Compact Edition 4

    Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4). SQL CE is a free, embedded database engine that enables easy database storage.

    No Database Installation Required

    SQL CE does not require you to run a setup or install a database server in order to use it. You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine. No setup or extra security permissions are required for it to run.
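    As a quick, hedged sketch of what this xcopy-style usage looks like from code (the Movies.sdf file name and the Movies table are invented for this example), a SQL CE 4 database is opened through the standard ADO.NET provider that ships with the binaries:

        using System.Data.SqlServerCe;

        public class MovieCounter
        {
            // |DataDirectory| resolves to the App_Data folder of an ASP.NET
            // application; Movies.sdf is a made-up database file name.
            private const string ConnectionString = @"Data Source=|DataDirectory|\Movies.sdf";

            public int CountMovies()
            {
                using (var connection = new SqlCeConnection(ConnectionString))
                using (var command = new SqlCeCommand("SELECT COUNT(*) FROM Movies", connection))
                {
                    connection.Open();
                    return (int)command.ExecuteScalar();
                }
            }
        }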
    You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application, starts up when you first access a SQL CE database, and automatically shuts down when your application is unloaded. SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

    Works with Existing Data APIs

    SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax. This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE. This enables you to use the same data programming skills and data APIs you know today.

    Supports Development, Testing and Production Scenarios

    SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios. With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET). This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments. Starting with SQL CE 4 you can use it in a web server as well. There are no license restrictions with SQL CE. It is also totally free.

    Tooling Support with VS 2010 SP1

    Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET projects. Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.

    Web Deploy and Web Farm Framework 2.0

    Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2. These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them:

    - Introducing the Microsoft Web Farm Framework
    - Automating Deployment with Microsoft Web Deploy

    Visit the http://iis.net website to learn more and install them. Both are free.

    Orchard 1.0

    Today we are also releasing Orchard v1.0. Orchard is a free, open source, community based project. It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built into Orchard). Read these tutorials to learn more about how you can set up and manage your own Orchard site. Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage). Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.

    WebMatrix 1.0

    WebMatrix is a new, free web development tool from Microsoft that provides a suite of technologies that make website development easier. It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla). Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.
    WebMatrix includes IIS Express, SQL CE 4, and ASP.NET – providing an integrated web server, database and programming framework combination. It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers. You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer. Visit http://microsoft.com/web to download and install it today.

    Summary

    I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better. A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them!

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
    Recently, I was involved in a customer scenario where they were modifying the Content Server's contributor data files directly through Content Server. This operation, of course, is completely supported. However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page, which also used the same data file, would not get the updates immediately. This was due to two reasons. The first reason is that the Spaces page was using Content Presenter to display the contents of the data file. The second reason is that the Spaces application was using the "cached" version of the data file. Fortunately, there is a way to configure the cache so backdoor changes can be picked up more quickly and automatically.

    First, a brief overview of Content Presenter. The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application. With Content Presenter, you can select a single item of content, contents under a folder, a list of items, or query for content, and then select a Content Presenter based template to render the content on a page in a Spaces application. In addition to displaying the folders and the files in a Content Server, Content Presenter integrates with Oracle Site Studio to allow you to create, access, edit, and display Site Studio contributor data files (Content Server documents) in either a Site Studio region template or in a custom Content Presenter display template. More information about creating a Content Presenter display template can be found in the OFM Developers Guide for WebCenter Portal.

    The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection setting through Enterprise Manager. From here, under the Cache Details, there is a section to set the Cache Invalidation Interval. Basically, this enables the cache to be monitored by the cache "sweeper" utility. The cache sweeper queries for changes in the Content Server, and then "marks" the object in cache as "dirty". This causes the application in turn to get a new copy of the document from the Content Server that replaces the cached version. By default the initial value for the Cache Invalidation Interval is set to 0 (minutes). This basically means that the sweeper is OFF. To turn the sweeper ON, just set a value (in minutes). The minimal value that can be set is 2 (minutes).

    Just a note: in some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and the interval value cannot be set back to 0. The good news is that this value can also be updated through a WLST command. The WLST command to run is as follows:

        setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort, keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword, webContextRoot, clientSecurityPolicy, cacheInvalidationInterval, binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout, isPrimary, server, applicationVersion])

    One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter', verbose=true) command.
    For example, this is the sample output from the execution:

        ------------------ UCM ------------------
        Connection Name: UCM
        Connection Type: JCR
        External Application ID:
        Timeout: (not set)
        CIS Socket Type: socket
        CIS Server Hostname: webcenter.oracle.local
        CIS Server Port: 4444
        CIS Keystore Location:
        CIS Private Key Alias:
        CIS Web URL:
        Web Server Context Root: /cs
        Client Security Policy:
        Admin User Name: sysadmin
        Cache Invalidation Interval: 2
        Binary Cache Maximum Entry Size: 1024
        The Documents primary connection is "UCM"

    From this information, the completed setJCRContentServerConnection would be:

        setJCRContentServerConnection(appName='webcenter', name='UCM', socketType='socket', serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs', cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024', adminUsername='sysadmin', isPrimary=1)

    Note: The Spaces managed server must be restarted for the change to take effect. More information about using WLST for WebCenter can be found here.

    Once the sweeper is turned ON, only cache objects that have been changed will be invalidated. To test this out, I will go through a simple scenario. The first thing to do is configure the Content Server so it can monitor and report on events. Log into the Content Server console application, and under the Administration menu item, select System Audit Information. Note: If your console is using the left menu display option, the Administration link will be located there.

    Under the Tracing Sections Information, add only "system" and "requestaudit" to the Active Sections. Check Full Verbose Tracing, check Save, then click the Update button. Once this is done, select the View Server Output menu option. This will change the browser view to display the log. This is all that is needed to configure the Content Server.

    For example, the following is the View Server Output with the cache invalidation interval set to 2 (minutes). Note the time stamps:

        requestaudit/6 08.30 09:52:26.001  IdcServer-68    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs)
        requestaudit/6 08.30 09:52:26.010  IdcServer-69    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs)
        requestaudit/6 08.30 09:52:26.014  IdcServer-70    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs)
        ... other trace info ...
        requestaudit/6 08.30 09:54:26.002  IdcServer-71    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs)
        requestaudit/6 08.30 09:54:26.011  IdcServer-72    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs)
        requestaudit/6 08.30 09:54:26.017  IdcServer-73    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs)
        requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs)
        requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs)
        requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs)
        requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs)
        requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs)
        requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs)
        requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs)
        requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs)
        requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs)
        requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs)
        requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs)
        requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs)
        requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs)
        requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs)
        requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs)
        requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs)

    The highlighted sections show where the 2 data files DF_UCMCACHETESTER (P1 page) and 113_ES (P2 page) were called by the (Spaces) VCR connection to the Content Server. The most important line to notice is the VCR_GET_DOCUMENT_BY_NAME invocation. On subsequent refreshes of these 2 pages, you will notice (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations. This is because the pages are getting the documents from the cache.

    The next step is to go through the "backdoor" and change one of the documents through the Content Server console. This operation can be done by first locating the data file document, and from the Content Information page, selecting the Edit Data File menu option. This invokes the Site Studio Contributor, where the modifications can be made. Refreshing the Content Server View Server Output, the tracing displays the operations performed on the document.
        requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs)
        requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs)
        requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs)
        requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs)
        requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs)
        requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs)
        (internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        (internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs)
        requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs)
        requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs)

    After a few moments and refreshing the P1 page, the updates have been applied. Note: The refresh time may vary, since the Cache Invalidation Interval (set to 2 minutes) is not determined by when changes happened. The sweeper just runs every 2 minutes. Refreshing the Content Server View Server Output, the tracing displays the important information:

        requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs)
        requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs)
        requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs)
        requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs)

    After the specified interval the sweeper is invoked, which is noted by the GET_ ... calls. Since the history has noted the change, the next call is to VCR_GET_DOCUMENT_BY_NAME to retrieve the new version of the (modified) data file. Navigating back to the P2 page, and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME calls to retrieve the data file. This simply means that this data file was just retrieved from the cache.
    Upon further review of the server output, we can see that there was only 1 request for the VCR_GET_DOCUMENT_BY_NAME:

        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****

    Read the article

  • Using jQuery to Insert a New Database Record

    - by Stephen Walther
    The goal of this blog entry is to explore the easiest way of inserting a new record into a database using jQuery and .NET. I’m going to explore two approaches: using Generic Handlers and using a WCF service (in a future blog entry I’ll take a look at OData and WCF Data Services).

    Create the ASP.NET Project

    I’ll start by creating a new empty ASP.NET application with Visual Studio 2010. Select the menu option File, New Project and select the ASP.NET Empty Web Application project template.

    Setup the Database and Data Model

    I’ll use my standard MoviesDB.mdf movies database. This database contains one table named Movies. I’ll use the ADO.NET Entity Framework to represent my database data:

    - Select the menu option Project, Add New Item and select the ADO.NET Entity Data Model project item. Name the data model MoviesDB.edmx and click the Add button.
    - In the Choose Model Contents step, select Generate from database and click the Next button.
    - In the Choose Your Data Connection step, leave all of the defaults and click the Next button.
    - In the Choose Your Data Objects step, select the Movies table and click the Finish button.

    Unfortunately, Visual Studio 2010 cannot spell movie correctly :) You need to click on Movy and change the name of the class to Movie. In the Properties window, change the Entity Set Name to Movies.

    Using a Generic Handler

    In this section, we’ll use jQuery with an ASP.NET generic handler to insert a new record into the database. A generic handler is similar to an ASP.NET page, but it does not have any of the overhead. It consists of one method named ProcessRequest(). Select the menu option Project, Add New Item and select the Generic Handler project item. Name your new generic handler InsertMovie.ashx and click the Add button. Modify your handler so it looks like Listing 1:

    Listing 1 – InsertMovie.ashx

        using System.Web;

        namespace WebApplication1
        {
            /// <summary>
            /// Inserts a new movie into the database
            /// </summary>
            public class InsertMovie : IHttpHandler
            {
                private MoviesDBEntities _dataContext = new MoviesDBEntities();

                public void ProcessRequest(HttpContext context)
                {
                    context.Response.ContentType = "text/plain";

                    // Extract form fields
                    var title = context.Request["title"];
                    var director = context.Request["director"];

                    // Create movie to insert
                    var movieToInsert = new Movie { Title = title, Director = director };

                    // Save new movie to DB
                    _dataContext.AddToMovies(movieToInsert);
                    _dataContext.SaveChanges();

                    // Return success
                    context.Response.Write("success");
                }

                public bool IsReusable
                {
                    get { return true; }
                }
            }
        }

    In Listing 1, the ProcessRequest() method is used to retrieve a title and director from form parameters. Next, a new Movie is created with the form values. Finally, the new movie is saved to the database and the string “success” is returned.

    Using jQuery with the Generic Handler

    We can call the InsertMovie.ashx generic handler from jQuery by using the standard jQuery post() method.
    The following HTML page illustrates how you can retrieve form field values and post the values to the generic handler:

    Listing 2 – Default.htm

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Add Movie</title>
            <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
        </head>
        <body>
            <form>
                <label>Title:</label>
                <input name="title" />
                <br />
                <label>Director:</label>
                <input name="director" />
            </form>
            <button id="btnAdd">Add Movie</button>
            <script type="text/javascript">
                $("#btnAdd").click(function () {
                    $.post("InsertMovie.ashx", $("form").serialize(), insertCallback);
                });

                function insertCallback(result) {
                    if (result == "success") {
                        alert("Movie added!");
                    } else {
                        alert("Could not add movie!");
                    }
                }
            </script>
        </body>
        </html>

    When you open the page in Listing 2 in a web browser, you get a simple HTML form. Notice that the page in Listing 2 includes the jQuery library. The jQuery library is included with the following SCRIPT tag:

        <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>

    The jQuery library is hosted on the Microsoft Ajax CDN so you can always easily include the jQuery library in your applications. You can learn more about the CDN at this website: http://www.asp.net/ajaxLibrary/cdn.ashx

    When you click the Add Movie button, the jQuery post() method is called to post the form data to the InsertMovie.ashx generic handler. Notice that the form values are serialized into a URL encoded string by calling the jQuery serialize() method. The serialize() method uses the name attribute of form fields and not the id attribute.

    Notes on this Approach

    This is a very low-level approach to interacting with .NET through jQuery – but it is simple and it works! And, you don’t need to use any JavaScript libraries in addition to the jQuery library to use this approach. The signature for the jQuery post() callback method looks like this:

        callback(data, textStatus, XmlHttpRequest)

    The second parameter, textStatus, returns the HTTP status code from the server. I tried returning different status codes from the generic handler with an eye towards implementing server validation by returning a status code such as 400 Bad Request when validation fails (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html ). I finally figured out that the callback is not invoked when the textStatus has any value other than “success”.

    Using a WCF Service

    As an alternative to posting to a generic handler, you can create a WCF service. You create a new WCF service by selecting the menu option Project, Add New Item and selecting the Ajax-enabled WCF Service project item. Name your WCF service InsertMovie.svc and click the Add button.
    Modify the WCF service so that it looks like Listing 3:

    Listing 3 – InsertMovie.svc

        using System.ServiceModel;
        using System.ServiceModel.Activation;

        namespace WebApplication1
        {
            [ServiceBehavior(IncludeExceptionDetailInFaults=true)]
            [ServiceContract(Namespace = "")]
            [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
            public class MovieService
            {
                private MoviesDBEntities _dataContext = new MoviesDBEntities();

                [OperationContract]
                public bool Insert(string title, string director)
                {
                    // Create movie to insert
                    var movieToInsert = new Movie { Title = title, Director = director };

                    // Save new movie to DB
                    _dataContext.AddToMovies(movieToInsert);
                    _dataContext.SaveChanges();

                    // Return success
                    return true;
                }
            }
        }

    The WCF service in Listing 3 uses the Entity Framework to insert a record into the Movies database table. The service always returns the value true. Notice that the service in Listing 3 includes the following attribute:

        [ServiceBehavior(IncludeExceptionDetailInFaults=true)]

    You need to include this attribute if you want to get detailed error information back to the client. When you are building an application, you should always include this attribute. When you are ready to release your application, you should remove this attribute for security reasons.

    Using jQuery with the WCF Service

    Calling a WCF service from jQuery requires a little more work than calling a generic handler from jQuery. Here are some good blog posts on some of the issues with using jQuery with WCF:

    - http://encosia.com/2008/06/05/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax/
    - http://encosia.com/2008/03/27/using-jquery-to-consume-aspnet-json-web-services/
    - http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx
    - http://www.west-wind.com/Weblog/posts/896411.aspx
    - http://www.west-wind.com/weblog/posts/324917.aspx
    - http://professionalaspnet.com/archive/tags/WCF/default.aspx

    The primary requirement when calling WCF from jQuery is that the request use JSON:

    - The request must include a content-type:application/json header.
    - Any parameters included with the request must be JSON encoded.

    Unfortunately, jQuery does not include a method for serializing JSON (although, oddly, jQuery does include a parseJSON() method for deserializing JSON). Therefore, we need to use an additional library to handle the JSON serialization. The page in Listing 4 illustrates how you can call a WCF service from jQuery.
    Listing 4 – Default2.aspx

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Add Movie</title>
            <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
            <script src="Scripts/json2.js" type="text/javascript"></script>
        </head>
        <body>
            <form>
                <label>Title:</label>
                <input id="title" />
                <br />
                <label>Director:</label>
                <input id="director" />
            </form>
            <button id="btnAdd">Add Movie</button>
            <script type="text/javascript">
                $("#btnAdd").click(function () {
                    // Convert the form into an object
                    var data = { title: $("#title").val(), director: $("#director").val() };

                    // JSONify the data
                    data = JSON.stringify(data);

                    // Post it
                    $.ajax({
                        type: "POST",
                        contentType: "application/json; charset=utf-8",
                        url: "MovieService.svc/Insert",
                        data: data,
                        dataType: "json",
                        success: insertCallback
                    });
                });

                function insertCallback(result) {
                    // unwrap result
                    result = result["d"];
                    if (result === true) {
                        alert("Movie added!");
                    } else {
                        alert("Could not add movie!");
                    }
                }
            </script>
        </body>
        </html>

    There are several things to notice about Listing 4. First, notice that the page includes both the jQuery library and Douglas Crockford’s JSON2 library:

        <script src="Scripts/json2.js" type="text/javascript"></script>

    You need to include the JSON2 library to serialize the form values into JSON. You can download the JSON2 library from the following location: http://www.json.org/js.html

    When you click the button to submit the form, the form data is converted into a JavaScript object:

        // Convert the form into an object
        var data = { title: $("#title").val(), director: $("#director").val() };

    Next, the data is serialized into JSON using the JSON2 library:

        // JSONify the data
        data = JSON.stringify(data);

    Finally, the form data is posted to the WCF service by calling the jQuery ajax() method:

        // Post it
        $.ajax({
            type: "POST",
            contentType: "application/json; charset=utf-8",
            url: "MovieService.svc/Insert",
            data: data,
            dataType: "json",
            success: insertCallback
        });

    You can’t use the standard jQuery post() method because you must set the content-type of the request to application/json. Otherwise, the WCF service will reject the request for security reasons. For details, see this Scott Guthrie blog post: http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx

    The insertCallback() method is called when the WCF service returns a response. This method looks like this:

        function insertCallback(result) {
            // unwrap result
            result = result["d"];
            if (result === true) {
                alert("Movie added!");
            } else {
                alert("Could not add movie!");
            }
        }

    When we called the jQuery ajax() method, we set the dataType to JSON. That causes the jQuery ajax() method to deserialize the response from the WCF service from JSON into a JavaScript object automatically. The following value is passed to the insertCallback method:

        {"d":true}

    For security reasons, a WCF service always returns a response with a “d” wrapper.
    The following line of code removes the “d” wrapper:

        // unwrap result
        result = result["d"];

    To learn more about the “d” wrapper, I recommend that you read the following blog posts:

    - http://encosia.com/2009/02/10/a-breaking-change-between-versions-of-aspnet-ajax/
    - http://encosia.com/2009/06/29/never-worry-about-asp-net-ajaxs-d-again/

    Summary

    In this blog entry, I explored two methods of inserting a database record using jQuery and .NET. First, we created a generic handler and called the handler from jQuery. This is a very low-level approach. However, it is a simple approach that works. Next, we looked at how you can call a WCF service using jQuery. This approach required a little more work because you need to serialize objects into JSON. We used the JSON2 library to perform the serialization. In the next blog post, I want to explore how you can use jQuery with OData and WCF Data Services.

    Read the article

  • CodePlex Daily Summary for Friday, March 12, 2010

    CodePlex Daily Summary for Friday, March 12, 2010

    New Projects

    - .NET DEPENDENCY INJECTION: Abel Perez Enterprise Framework
    - Autodocs - WCF REST Automatic API Documentation Generator: Autodocs is an automatic API documentation generator for .NET applications that use Windows Communication Foundation (WCF) to establish REST API's.
    - BlockBlock: Block Block is a free game. You know Lumines and you will like BlockBlock.
    - C4F XNA ASCII Post-Processing: This is the source code for the Coding4Fun article "XNA Effects – ASCII Art in 3D"
    - ChequePrinter: this is ChequePrinter
    - Compiladores MSIL usando Phoenix (PLP 2008.1 - CIn/UFPE): This project was created to explore the Microsoft Phoenix platform for building MSIL compilers for two languages...
    - CRM External View: CRM External View enables more robust control over exposing Microsoft CRM data (in a form of views) for external parties. The solution uses web ser...
    - CS Project2: This is for the project
    - DotNetNuke IM Module of Facebook Like Messenger: Helps you integrate 123 Web Messenger into DotNetNuke, and add a powerful 1-to-1 IM Software named "Facebook Messenger Style Web Chat Bar" at the bo...
    - DotNetNuke® RadPanelBar: DNNRadPanelBar makes it easy to add telerik RadPanelBar functionality to your module or skin. Licensing permits anyone to use the components (incl...
    - DotNetNuke® Skin Blocks: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Armand Datema of Schwingsoft. This skin uses a bit of jQu...
    - Drilltrough and filtering on SSAS-cubes in SSRS: We will describe a technique to create Reporting services (SSRS) reports that use Analysis services (SSAS) cubes as data sources, have a very intu...
    - Ecosystem Diagnosis & Treatment: The Ecosystem DIagnosis & Treatment community provides tools, analyses and applications of the medical model to natural resource problems. EDT sof...
    - ExIf 35: A utility for use by film photographers for keeping track of critical facts about images taken on a roll of film, just as digital cameras do automa...
    - FabricadeTI: Development of the FabricadeTI framework.
    - Find and Replace word in the sentences: This program uses the Java Development Kit 6.0 and the HighLighter class. It is complete code with source code that everybody can use in...
    - Flash Nut: Flash Nut is a flash card program. You can build and review decks of flash cards. The project is a vs2008 wpf application.
    - Free DotNetNuke Chat Module (Popup Mode): This free DotNetNuke Chat Module (Popup Mode) helps you integrate DotNetNuke with 123 Flash Chat seamlessly, and add a popup mode...
    - Free DotNetNuke IM of 123 Web Messenger -- Web-based Friend List: With this FREE application, you could integrate DNN website Database with 123 Web Messenger seamlessly and embed a web-based Friends List into anyw...
    - Free DotNetNuke Live Help Module: With DotNetNuke Live Help Module, integrate 123 Live Help into DotNetNuke website and add Live Chat Button anywhere you like. Let visitors to chat ...
    - G52GRP Videowall: Nottingham
    - Happy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: The Happy Turtle project creates plugins for the Build Version Increment Add-In for Visual Studio (BVI). The focus is to automatically version asse...
    - Hasher: Hasher can generate MD5 and SHA hashes of texts of up to 100,000 characters, and of files. It also lets you compare two hashes to verify...
    - Infragistics Silverlight Extended Controls: This project is a group of controls that extend or add functionality to the Infragistics Silverlight control suite. This control requires Infragis...
    - Insert Video Jnr: This is a baby version of my Video plugin, it is intended for Hosted Wordpress blogs only and shouldn't be used with other blog providers.
    - jccc .NET smart framework: jccc .NET smart framework allows the creation of fast connections to MSSQL or MYSQL databases, and data manipulation using C# classes tha...
    - LytScript: A functional scripting language.
    - Microsoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): DDD NLayered App .NET 4.0 Example by Microsoft Spain. Domain Driven Design NLayered App .NET 4.0 implementation example of our local Arc...
    - mimiKit: Lightweight ASP.NET MVC / JavaScript framework for creating mobile applications
    - PHPWord: With PHPWord you can easily create a Word document with PHP. PHPWord creates docx files that can include all major Word functions like TextElements...
    - Protocol Transition with BizTalk: An example solution that shows how to do Protocol Transition with BizTalk. This also shows you how to create a WCF extension to allow this to happen.
    - Raid Runner: Raid Runner makes it easier to run and manage raids in World of Warcraft. It is a Silverlight application developed in C#
    - SQL Server Authentication Troubleshooter: SQL Server Authentication Troubleshooter is a tool to help investigate the root cause of a ‘Login Failed’ error in SQL Server. There could be number of...
    - SuperviseObjects: SuperviseObjects consists of a collection which is derived from ObservableCollection<T>. This collection fires ItemPropertyChanging and ItemPropert...
    - Viuto: The Viuto.NET project aims to create a full track and trace application. It is developed in: Java & C (firmware), C# (parser), Asp.net (tracki...)
    - Zealand IT MSBuild Tasks: Zealand IT MSBuild Tasks is a collection that you cannot do without if you are serious about continuous integration. Ever wish you could specify an...

    New Releases

    - ASP.NET: ASP.NET MVC 2 RTM: This release contains the source code for ASP.NET MVC 2 RTM as well as the ASP.NET MVC Futures project. The futures project contains features that ...
    - C#Mail: Higuchi.Mail.dll (2010.3.11 ver): Higuchi.Mail.dll at 2010-3-11 version.
    - C#Mail: Higuchi.MailServer.dll (2010.3.11 ver): Higuchi.MailServer.dll at 2010.3.11 version.
    - C4F XNA ASCII Post-Processing: XNA ASCII FPS v1 - Full Version: This is the full, complete example of the XNA ASCII FPS.
    - C4F XNA ASCII Post-Processing: XNA ASCII FPS v1.0 - Base Project: This is the base project to be used by those who plan to follow along with the Coding4Fun article.
    - CRM External View: 1.0: Release 1.0
    - DevTreks - social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 3c: Alpha 3c upgrades custom/virtual uris (devpacks), temp uris, and zip packages. This is believed to be the first fully functional/performant release.
    - DotNetNuke® RadPanelBar: DNNRadPanelBar 1.0.0: DNNRadPanelBar makes it easy to add telerik RadPanelBar functionality to your module or skin. Licensing permits anyone to use the components (inclu...
    - Drilltrough and filtering on SSAS-cubes in SSRS: Release 1: Release 1
    - ExIf 35: ExIf 35: Daily build of ExIf 35
    - Family Tree Analyzer: Version 1.0.3.0: Added options to check for updates on load and on the help menu. Disabled use of US census for now until dealt with years being differen...
    - Family Tree Analyzer: Version 1.0.4.0: Added support for display of Ahnentafel numbers. Added filter to hide individuals from the Lost Cousins report that have been flagged a...
    - Flash Nut: Flash Nut 1.0 Setup: Flash Nut Setup
    - Fluent Validation for .NET: 1.2 RC: This is the release candidate for FluentValidation 1.2. If no bugs are found within the next couple of weeks, then this will become the 1.2 Final b...
    - Free DotNetNuke Chat Module (Popup Mode): Download DNN Chat Module (Popup Mode) + Source Code: Feel free to download DotNetNuke Chat Module (Popup Mode), integrating DotNetNuke with 123 Flash Chat Software, and add a free popup mode flash cha...
    - Free DotNetNuke Live Help Module: Download DNN Live Support Module and Source Code: The Readme file contains a detailed Installation and Integration Manual. This module is compatible with DotNetNuke v5.x.
    - Happy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: Happy Turtle 1.0.44927: This is the first release of the SVN based version incrementor. How To Install: Make sure that Build Version Increment v2.2.10065.1524 or newer is i...
    - Hasher: 1.0: Initial version of the application: MD5 and SHA hash generation. Real-time encoding of texts of up to 100,000 characters. Encoding ...
    - Jamolina: PhotosynthDemo: PhotosynthDemo
    - MapWindow GIS: MapWindow 6.0 msi (March 11): This fixes a PixelToProj problem for the Extended Buffer case, as well as adding fixes to the WKBFeatureReader to fix an X,Y reversal and some ext...
    - Math.NET Numerics: 2010.3.11.291 Build: Latest alpha build
    - Microsoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): V0.5 - N-Layer DDD Sample App: Required Software (Microsoft Base Software needed for Development environment): Unity Application Block 1.2 - October 2008 http://www.microsoft.com/...
    - MiniTwitter: 1.09.2: MiniTwitter 1.09.2 changes. Fixes: fixed a bug where deleting a timeline caused a crash; fixed a bug where the timeline occasionally could not be scrolled.
    - Nestoria.NET: Nestoria.NET 0.8: Provides access to the Nestoria API. Documentation contains a basic getting started guide. Please visit Darren Edge's blog for ongoing developmen...
    - Pod Thrower: Version 1.0: Here is version 1.0. It has all the features I was looking to do in it. Please let me know if you use this and if you would like any changes.
    - SharePoint Ad Rotator: SPAdRotator 2.0 Beta: This new release of the Ad Rotator contains many new features. One major new feature is that jQuery has been added to do image rotation without hav...
    - SharePoint Objects: Democode Ton Stegeman: This download contains sample code for some SharePoint 2007 blog posts: TST.Themes_Build20100311.zip contains a feature receiver that registers Sh...
    - SharePoint Taxonomy Extensions: SharePoint Taxonomy Extensions 1.2: Make Taxonomy Extensions useable in every list type, not only in document libraries.
    - SharePoint Video Player Web Part & SharePoint Video Library: Version 3.0.0: Absolutely killer feature - installing multiple players on a page without any loss of performance.
    - SilverLight Interface for Mapserver: SLMapViewer v. 1.0: SLMapviewer sample application version 1.0. This new release includes the following enhancements: Silverlight 3.0 native; added a new init parame...
    - Spark View Engine: Spark v1.1: Changes since RC1: Built against ASP.NET MVC 2 RTM
    - SPSS .NET interop library: 2.0: This new version supports SPSS 15, and includes spssio32.dll and other native .dll dependencies so that it works out of the box without SPSS being ...
    - stefvanhooijdonk.com: SharePoint2010.ProfilePicturesLoader: So, with the help of Reflector, I wrote a small tool that would import all our profile pictures and update the user profiles. http://wp.me/pMnlQ-6G
    - SuperviseObjects: SuperviseObjects 1.0: First release
    - TortoiseSVN Addin for Visual Studio: TortoiseSVN Addin 1.0.5: Feature: Visual Studio/SVN action synchronization on items in Solution Explorer, like add, move, delete and rename. Note: The move action does not rememb...
    - VCC: Latest build, v2.1.30311.0: Automatic drop of latest build
    - VivoSocial: VivoSocial 7.0.4: Business Management: This release fixes a "Could not load type" error on the main view of the module. Groups: Group requests were failing in some i...
    - WikiPlex – a Regex Wiki Engine: WikiPlex 1.3: Info: Official Version: 1.3.0.215 | Full Release Notes. Documentation: this new documentation includes a Full Markup Guide with Examples, Articles ...
    - Zealand IT MSBuild Tasks: Zealand IT MSBuild Tasks: Initial beta release of Zealand IT MSBuild Tasks. Contains the following tasks: RunAs - same as the Exec task, but provides parameters for impersonat...
    - ZoomBarPlus: V1 (Beta): This is the initial release. It should be considered a beta test version as it has not been tested for very long on my device.

    Most Popular Projects

    - MetaSharp
    - WBFS Manager
    - Rawr
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - ASP.NET Ajax Library
    - ASP.NET
    - Microsoft SQL Server Community & Samples

    Most Active Projects

    - Umbraco CMS
    - Rawr
    - N2 CMS
    - BlogEngine.NET
    - Fasterflect - A Fast and Simple Reflection API
    - jQuery Library for SharePoint Web Services
    - patterns & practices – Enterprise Library
    - Farseer Physics Engine
    - Caliburn: An Application Framework for WPF and Silverlight
    - SharePoint Team-Mailer

    Read the article

  • Making WCF Output a single WSDL file for interop purposes.

    - by Glav
    By default, when WCF emits a WSDL definition for your services, it can often contain many links to other related schemas that need to be imported. For the most part, this is fine. WCF clients understand this type of schema without issue, and it conforms to the requisite standards as far as WSDL definitions go. However, some non-Microsoft stacks will only work with a single WSDL file and require that all definitions for the service(s) (port types, messages, operations etc.) are contained within that single file. In other words, no external imports are supported. Some Java clients (to my working knowledge) have this limitation. This obviously presents a problem when trying to create services exposed for consumption and interop by these clients. Note: You can download the full source code for this sample from here.

    To illustrate this point, let's say we have a simple service that looks like:

    Service Contract

        public interface IService1
        {
            [OperationContract]
            [FaultContract(typeof(DataFault))]
            string GetData(DataModel1 model);

            [OperationContract]
            [FaultContract(typeof(DataFault))]
            string GetMoreData(DataModel2 model);
        }

    Service Implementation/Behaviour

        public class Service1 : IService1
        {
            public string GetData(DataModel1 model)
            {
                return string.Format("Some Field was: {0} and another field was {1}", model.SomeField, model.AnotherField);
            }

            public string GetMoreData(DataModel2 model)
            {
                return string.Format("Name: {0}, age: {1}", model.Name, model.Age);
            }
        }

    Configuration File

        <system.serviceModel>
          <services>
            <service name="SingleWSDL_WcfService.Service1" behaviorConfiguration="SingleWSDL_WcfService.Service1Behavior">
              <!-- ...std/default data omitted for brevity..... -->
              <endpoint address="" binding="wsHttpBinding" contract="SingleWSDL_WcfService.IService1">
              .......
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="SingleWSDL_WcfService.Service1Behavior">
                ........
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    When WCF is asked to produce a WSDL for this service, it will produce a file that looks something like this (note: some sections omitted for brevity):

        <?xml version="1.0" encoding="utf-8" ?>
        <wsdl:definitions name="Service1" targetNamespace="http://tempuri.org/"
            xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
            xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
            ...... namespace definitions omitted for brevity
          <wsp:Policy wsu:Id="WSHttpBinding_IService1_policy">
            ... multiple policy items omitted for brevity
          </wsp:Policy>
          <wsdl:types>
            <xsd:schema targetNamespace="http://tempuri.org/Imports">
              <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd0" namespace="http://tempuri.org/" />
              <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd3" namespace="Http://SingleWSDL/Fault" />
              <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd1" namespace="http://schemas.microsoft.com/2003/10/Serialization/" />
              <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd2" namespace="http://SingleWSDL/Model1" />
              <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd4" namespace="http://SingleWSDL/Model2" />
            </xsd:schema>
          </wsdl:types>
          <wsdl:message name="IService1_GetData_InputMessage"> .... </wsdl:message>
          <wsdl:operation name="GetData"> ..... </wsdl:operation>
          <wsdl:service name="Service1"> ....... </wsdl:service>
        </wsdl:definitions>

    The above snippet from the WSDL shows the external links and references that are generated by WCF for a relatively simple service. Note the xsd:import statements that reference external XSD definitions which are also generated by WCF. In order to get WCF to produce a single WSDL file, we first need to follow some good practices when it comes to WCF service definitions.

    Step 1: Define a namespace for your service contract.

        [ServiceContract(Namespace = "http://SingleWSDL/Service1")]
        public interface IService1
        {
            ......
        }

    Normally you would not use a literal string; you might instead define a constant to use within your application for the namespace.
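    For instance, a minimal sketch of that constant approach might look like the following. The ServiceNamespaces class is hypothetical and not part of the downloadable sample; the point is simply that attribute arguments accept const strings, so the contract, behaviour and binding can all share one definition:

        // Hypothetical helper: one place to define the service namespace so the
        // contract, behaviour and binding namespaces can never drift apart.
        internal static class ServiceNamespaces
        {
            public const string Service1 = "http://SingleWSDL/Service1";
        }

        [ServiceContract(Namespace = ServiceNamespaces.Service1)]
        public interface IService1
        {
            // ...
        }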
    When this is applied and we generate the WSDL, we get the following statement inserted into the document:

        <wsdl:import namespace="http://SingleWSDL/Service1" location="http://localhost:2370/HostingSite/Service-default.svc?wsdl=wsdl0" />

    All the previous imports have gone. If we follow this link, we will see that the XSD imports are now in this external WSDL file. Not really any benefit for our purposes.

    Step 2: Define a namespace for your service behaviour

        [ServiceBehavior(Namespace = "http://SingleWSDL/Service1")]
        public class Service1 : IService1
        {
            ......
        }

    As you can see, the namespace of the service behaviour should be the same as that of the service contract interface it implements. Failure to do this will cause WCF to emit its default http://tempuri.org namespace all over the place and still generate import statements. The same applies if the namespaces of the contract and behaviour differ: if you define one and not the other, the defaults kick in and you'll find extra imports generated. While neither of the previous two steps reduces the number of import statements generated, you will notice that the namespace definitions within the WSDL now have identical, well-defined names.

    Step 3: Define a binding namespace

    In the configuration file, modify the endpoint configuration line item to include a bindingNamespace attribute which is the same as that defined on the service behaviour and service contract:

        <endpoint address="" binding="wsHttpBinding" contract="SingleWSDL_WcfService.IService1" bindingNamespace="http://SingleWSDL/Service1">

    However, this does not completely solve the issue.
    What this will do is remove WSDL import statements like this one from the generated WSDL:

        <wsdl:import namespace="http://SingleWSDL/Service1" location="http://localhost:2370/HostingSite/Service-default.svc?wsdl" />

    Finally… the magic.

    Step 4: Use a custom endpoint behaviour to read in external imports and include them in the main WSDL output.

    In order to force WCF to output a single WSDL with all the required definitions, we need to define a custom WSDL export extension that can be applied to any endpoint. This requires implementing the IWsdlExportExtension and IEndpointBehavior interfaces, reading in any imported schemas, and adding that output to the main, flattened WSDL. Sounds like fun, right? Hmmm, well, maybe not. This step sounds a little hairy, but it's actually quite easy thanks to some kind individuals who have already done it for us (a minimal sketch of what these extensions do internally appears at the end of this post). As far as I know, there are two readily available implementations that perform the import and "WSDL flattening": WCFExtras, which is on CodePlex, and FlatWsdl by Thinktecture. Both implementations do exactly the same thing with the imports and provide an endpoint behaviour; however, FlatWsdl does a little more work for us by providing a ServiceHostFactory which automatically attaches the requisite behaviour to our endpoints. To use this in an IIS-hosted service, we can modify the .svc file to specify this new factory like so:

        <%@ ServiceHost Language="C#" Debug="true" Service="SingleWSDL_WcfService.Service1" Factory="Thinktecture.ServiceModel.Extensions.Description.FlatWsdlServiceHostFactory" %>

    Within a service application or another form of executable such as a console app, we can simply create an instance of the custom service host and open it as we normally would, as shown here:

        FlatWsdlServiceHost host = new FlatWsdlServiceHost(typeof(Service1));
        host.Open();

    And we are done. WCF will now generate one single WSDL file that contains all the WSDL imports and data/XSD imports. You can download the full source code for this sample from here. Hope this has helped you. Note: Please note that I have not extensively tested this in a number of different scenarios, so no guarantees there.
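    For the curious, here is a minimal sketch of the general shape such an export extension takes. This is illustrative only: the class name FlatWsdlBehavior is mine rather than taken from either library, and the real implementations handle details this sketch skips (such as filtering out the empty wrapper schema and de-duplicating schemas before inlining them):

        using System.ServiceModel.Channels;
        using System.ServiceModel.Description;
        using System.ServiceModel.Dispatcher;
        using System.Xml.Schema;
        using WsdlDescription = System.Web.Services.Description.ServiceDescription;

        // Illustrative sketch: inlines the generated XSDs into the single
        // generated WSDL so that no schemaLocation-based imports remain.
        public class FlatWsdlBehavior : IWsdlExportExtension, IEndpointBehavior
        {
            public void ExportContract(WsdlExporter exporter, WsdlContractConversionContext context) { }

            public void ExportEndpoint(WsdlExporter exporter, WsdlEndpointConversionContext context)
            {
                // The first generated document is the WSDL for the exported service.
                WsdlDescription wsdl = exporter.GeneratedWsdlDocuments[0];
                wsdl.Types.Schemas.Clear();

                // Copy every generated schema into <wsdl:types>, clearing the
                // schemaLocation on each import since the schema is now inline.
                foreach (XmlSchema schema in exporter.GeneratedXmlSchemas.Schemas())
                {
                    foreach (XmlSchemaObject include in schema.Includes)
                    {
                        XmlSchemaExternal external = include as XmlSchemaExternal;
                        if (external != null)
                        {
                            external.SchemaLocation = null;
                        }
                    }
                    wsdl.Types.Schemas.Add(schema);
                }
            }

            // The IEndpointBehavior members are no-ops; the interface simply
            // allows the extension to be attached to an endpoint.
            public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
            public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { }
            public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
            public void Validate(ServiceEndpoint endpoint) { }
        }

    Attaching an instance to each endpoint (endpoint.Behaviors.Add(new FlatWsdlBehavior())) is precisely the step the FlatWsdlServiceHostFactory mentioned above automates for you.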

    Read the article


  • CodePlex Daily Summary for Wednesday, February 17, 2010

    CodePlex Daily Summary for Wednesday, February 17, 2010

    New Projects
    - Academic Success Accounting System: The system is intended for use by school teachers to set marks for students and estimate their academic success and possibilities. The client applicat...
    - Access.PowerTools: Access PowerTools is currently a sample MS Access add-in project to try & test features of Add-in Express™ 2009 for Microsoft® Office and .net (ht...
    - AntoonCms: AntoonCms makes it easy to maintain a simple website with its built-in administration pages. It's developed in C# on target Framework 2.0. The CMS...
    - ASP.NET MVC Mehr Lib: Mehr Lib makes it easier for ASP.NET MVC developers to develop projects. It's developed in C#. This version currently includes Ajax master-detail...
    - BCryptTool: Developer tool that calculates BCrypt hash codes for strings. BCrypt is an implementation of the Blowfish cipher and a computationally-expensive ha...
    - Coronasoft Cryostasis scripting engine: A scripting engine that allows you to dynamically load plugins from just about any supported .NET language. It's written in C#. Languages supported ...
    - Critical Point Search: Critical Point Search
    - critical points: critical points
    - Font Family Name Retrieval: This library helps developers retrieve the font family name from TTF, OTF and TTC font files, so that they can display the font without ...
    - jQuery Form Input Hints Plugin: Automatically display hints on input textboxes in your forms using this jQuery plugin. I wrote this code to be as simple and as easy to use as pos...
    - Kojax: kojax project
    - KronRetro: KronRetro! Making a Habbo Retro just got easier! Powered by PHP & MySQL, you can make a Habbo Retro site fast!
    - MVVM Wrapper Kit: MVVM Wrapper Kit makes it easier for View Model programmers to wrap their business objects and collections while preserving change notification and...
    - ObjectCartographer: ObjectCartographer is an object-to-object mapper and object factory. It's developed in C#.
    - PE-file Reader Writer API (PERWAPI): PERWAPI is a reader/writer module for .NET program executables. It has been used as a back-end for programming language compilers such as Gardens Poi...
    - Pinger: A simple Pinger; pings an address until you press a button.
    - QPV: 0.1: QPV, aka "Que pelicula", is an application for building a powerful database of movies, reviews and information so that you can filter movi...
    - SIMD Detector: This SIMD class helps developers detect the types of SIMD instructions available on the user's processor. It supports Intel and AMD CPUs. It is writt...
    - StackOverflow Test Project: Following Andrew Siemer's StackOverflow Knowledge Exchange Project.
    - WeBlog: A blogging platform built on the MVC framework. The project will showcase current technologies such as MVC 2, Silverlight 4 and jQuery 1.4. Data pro...
    - Webmedia: this is my webmedia project
    - Windows Azure RSS Reader: This is an online RSS reader based on the Windows Azure platform.
    - WordEditor. A Word Editor for Windows, and an extended RichTextBox control.: This is a word editor that can be used as a stand-alone word processor, or added to an existing project.
    - Домашняя Бухгалтерия: A program for keeping household financial accounts.

    New Releases
    - Access.PowerTools: Access PowerTools Add-In Community Edition v.0.0.1: A sample MS Access add-in project to try & test Add-in Express™ 2009 for Microsoft® Office an...
    - Active CSS: ActiveCSS-0.1.1: Revision for version 0.1.
    - ASP.NET: Microsoft Ajax Minifier 4.0: The Microsoft Ajax Minifier enables you to improve the performance of your Ajax applications by reducing the size of your Cascading Style Sheet and...
    - ASP.NET MVC Mehr Lib: V1.0: Mehr Lib V1.0. This version currently includes Ajax master-detail combo facilities.
    - ASP.net Ribbon: Version 1.2: New controls: expandable gallery, color picker, multi color, file menu. Some JS modifications. Some CSS modifications. Includes some functionna...
    - ASP.NET Web Forms Model-View-Presenter (MVP) Contrib: WebForms MVP Contrib CTP6: This is a release of the WebForms MVP Contrib project for WebForms MVP CTP6. The release includes the WebForms MVP Contrib framework and the Ninject IoC container.
    - AwesomiumDotNet: AwesomiumDotNet 1.2.1: Added Awesomium 1.5 features: URL filtering, header rewrite rules, SetOpensExternalLinksInCallingFrame. Numerous fixes and improvements.
    - BCryptTool: BCryptTool v0.1: The Microsoft .NET Framework 3.5 (SP1) is needed to run this program.
    - Buzz Dot Net: Buzz Dot Net v.1.10216: Features: parse Google Buzz feed to objects, partial MVVM implementation, partial optimizations.
    - Canvas VSDOC Intellisense: v1.0.0.0a: canvas-vsdoc.js and canvas-utils.js JavaScript intellisense for the HTML5 Canvas element.
    - CheckHeader: CheckHeader v0.8.5: The Microsoft .NET Framework 3.5 (SP1) is needed to run this program.
    - Claymore MVP: Claymore 1.0.2.0: Changelog: added ASP.NET WebForm support via the ClaymoreHttpModule class; added an xsd schema for Visual Studio Intellisense within App.config and Web....
    - Dam Gd - URL Shortner: Dam.gd Version 1.1: This is the latest instalment in our URL shortner. It uses The Easy API http://theeasyapi.com to access data that is used for the back-end analyti...
    - D-AMPS: D-AMPS 0.9.1: Initial version.
    - easySMS: easySMS 1.0 Source code: easySMS 1.0 source code.
    - Font Family Name Retrieval: 1st Release: Version 1.0.0.
    - Free Silverlight & WPF Chart Control - Visifire: Visifire Now Supports DataBinding: Hi, today we are releasing the much-awaited DataBinding feature in Visifire 3.0.3 beta 3. Now you can bind any DataSource at the Series level so t...
    - GenerateTypedBamApi: Version 2.0: Changes in this release: NEW: export functionality no longer requires Excel to be installed (uses OLE DB vs. Excel Automation; also enables usage i...
    - Gmail Notifier 2: GmailNotifier2 1.2.1: Fixes issues #9652, #9653.
    - iTuner - The iTunes Companion: iTuner 1.1.3699: This includes the first pass of the iTuner Librarian, including management of dead tracks, duplicates, and empty directories... While I promised a ...
    - jQuery Form Input Hints Plugin: jQuery.InputHints v1.0: Includes standard & minified source, a demo HTML file, and a VS2008 solution.
    - LibWowArmory: LibWowArmory 0.2.3 beta: This release of the LibWowArmory source code matches the WoW Armory as of version 3.3.2. Changes since version 0.2.2: update...
    - Managed Extensibility Framework: MEF Preview 9: We have merged the .net 3.5 and Silverlight 3 versions into a single zip. The bin folder contains the binaries for .net 3.5, whereas bin\SL contains the bina...
    - MDX Parser, Builder, DOM and OLAP visual controls with Writeback for Silverlight: Ranet.UILibrary.Olap-1.3.3.0-6571.msi: February 16, 2010. MdxDesigner: fix for the issue where, when an element is clicked, the mouse wheel stops working until the cursor leaves and r...
    - MEFGeneric: MEFGeneric Preview 9: MEFGeneric Preview 9 release.
    - Mesopotamia Experiment: Mesopotamia 1.2.26: Bug fixes: mud map, progress window, recycle app domains on robotics engine crashes (in command prompt and visual, major work), fixed roomba h...
    - Microsoft Solution Framework for Business Intelligence in Media: Release 1.0: This is the public release of the Microsoft Solution Framework for Business Intelligence in Media (Release 1.0).
    - MVVM Wrapper Kit: MVVM Wrapper Beta: A simple test project is included to get you up and running, and wrapping those business objects.
    - nBayes - Bayesian Filtering in C#: nBayes v0.2: nBayes' indexing system is factored in such a way that you can easily replace the index with a custom implementation. This release introduces an ad...
    - NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.5: 3.6.0.5, 16 February 2010. Fix: SqlAzManSid class: "Equals" matches the object signature instead of the IAzManSid signature. When a real null object is pas...
    - ObjectCartographer: ObjectCartographer Code 1.0: This is the first release and contains code to help with object-to-object mapping (including mapping from one object to multiple objects), object f...
    - Office Apps: 0.8.6: Bug fixes; added Calendar.
    - OI - Open Internet: OI HTML and .XAP files (OI offline): This is the HTML code and the XAP file. Please right-click the app at http://bit.ly/openinternet and select "install openinternet application to th...
    - PE-file Reader Writer API (PERWAPI): PERWAPI-1.1.3: Perwapi version 1.1.3 is the complete distribution package. It contains binary files, pdb files and xml files for the PERWAPI and SymbolRW compone...
    - Pinger: Pinger 1.0.0.0 Binary: The latest binary.
    - RNA Comparative Analysis Software Tools: RNA Comparative Analysis Software Tools 2.0: RNA Comparative Analysis Software Tools Version 2.0. Note: the RNA Comparative Analysis Software Tools are provided as is, without any warranty. No...
    - SAL - Self Artificial Learning: Artificial Learning working proof of concept: This is a working proof of concept. It includes the dev version (in .zip format) and the consumer version (in .exe format).
    - SharePoint Management PowerShell scripts: SharePoint 2010 PowerShell Scripts: All the SharePoint 2010 PowerShell scripts. The first file is an Excel 2010 file allowing you to find quickly and easily the new cmdlets available wi...
    - SIMD Detector: 1st Release: Version 1.3. Supports MMX/MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, SSE4a, SSE5, 3DNow.
    - Terminals: Terminals 1.9 Beta Release: This is a beta release so the new features being added to Terminals can be tested properly. The major change in this release is that Terminals has...
    - Text Designer Outline Text Library: 9th minor release: Added the ability to select the brush, such as a gradient brush or texture brush, for the text body. Added a C# library, TextDesignerCSLibrary. Manage...
    - VivoSocial: VivoSocial 7.0.2: This release has several updated modules. See the Support Forums for more details. Since we update modules very often, we will be changing how we d...
    - WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.6.00: Changes: CKEditor upgrade to Version 3.2 SVN 5132. File Browser: after file upload, the file will be auto-selected. File Browser: icons are corrected ...
    - WordEditor. A Word Editor for Windows, and an extended RichTextBox control.: WordEditor Source Code: This contains the latest solution file, with all project files included.
    - Домашняя Бухгалтерия: Alpha Release: Your suggestions on the program's design and functionality are welcome.

    Most Popular Projects: Rawr, WBFS Manager, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), Image Resizer Powertoy Clone for Windows, Microsoft SQL Server Community & Samples, ASP.NET, LiveUpload to Facebook

    Most Active Projects: DinnerNow.net, Rawr, BlogEngine.NET, Simple Savant, NB_Store - Free DotNetNuke Ecommerce Catalog Module, patterns & practices – Enterprise Library, PHPExcel, Sharpy, jQuery Library for SharePoint Web Services, Fluent Validation for .NET

    Read the article

  • Quick guide to Oracle IRM 11g: Server configuration

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    Welcome to the second article in this quick guide to Oracle IRM 11g. Hopefully you've just finished the first article, which takes you through deploying the software onto a Linux server. This article walks you through the configuration of this new service. It contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information.

    Contents
    - Introduction
    - Create IRM WebLogic Domain
    - Starting the Admin Server and initial configuration

    Introduction

    In the previous article the database was prepared, the WebLogic Application Server installed, and the files required for an IRM server installed. But we don't actually have a configured system yet. We now need to create a WebLogic domain in which the IRM server will run, then configure some of the settings and cryptography so that we can create a context and be ready to seal some content and test that it all works. This article doesn't cover the configuration of SSL communication from client to server. This is quite a big topic and a separate article has been dedicated to this area. In these articles I also use the hostname irm.company.internal to reference the IRM server, and later on use the hostname irm.company.com in reference to the public-facing service.

    Create IRM WebLogic Domain

    The first step is creating the WebLogic domain. In a console, switch to the newly created IRM installation folder as shown below and run the domain configuration wizard:

        [oracle@irm /]$ cd /oracle/middleware/Oracle_IRM/common/bin
        [oracle@irm bin]$ ./config.sh

    The first thing the wizard will ask is whether you wish to create a new domain or extend an existing one. This guide is creating a standalone system, so you should select to create a new domain. The next step is to choose which technologies from the Oracle ECM Suite you wish this domain to host. You are only interested in selecting the option "Oracle Information Rights Management". When you select this check box you will notice that it also selects "Oracle Enterprise Manager" and "Oracle JRF", as these are dependencies of the IRM server. You then need to specify where you wish to place the domain files. I usually just change the domain name from base_domain to irm_domain and leave the others with their defaults. The domain will have a single user initially, and by default this user is called "weblogic". I usually change this account name to "sysadmin" or "administrator", but in this guide let's just accept the default. With respect to the next dialog, again for eval or dev purposes, leave the server startup mode as development. The JDK should also be automatically detected. We now need to provide details of the database. This guide is using the Oracle 11gR2 database, and the settings I used can be seen in the image to the right. There is a lot of configuration that can now be done for the admin server, any managed servers, and where the deployments reside. In this guide I am leaving all of these at their defaults, so do not check any of the boxes. However, I will later detail on this blog how you can go back and set up things such as automated startup of an IRM server, which requires changes to these default settings. But for now, let's leave it all alone and just click next. Now we are ready to install.
    Note that from this dialog you can scroll the left window and see that two servers will be created from the defaults: the AdminServer, which is where you modify settings for the WebLogic Server and which also hosts the Oracle Enterprise Manager for IRM, allowing you to monitor IRM service performance and make service-related settings (which we shortly do below); and IRM_server1, which hosts the actual IRM services themselves. So go right ahead and hit Create; the process is pretty quick and usually takes under 10 minutes. When the domain creation ends, it will give you the URL to the admin server. It's worth noting this down; the URL is usually:

        http://irm.company.internal:7001

    Starting the Admin Server and initial configuration

    The first thing to do is start the WebLogic Admin server and review the initial IRM server settings. In this guide we are going to run the Admin server and IRM server in console windows; in another article I will discuss running these as background services. So for now, start a console and run the Admin server by doing the following:

        cd /oracle/middleware/user_projects/domains/irm_domain/
        ./startWebLogic.sh

    Wait for the server to start; you are looking for the following line to be reported in the console window:

        <BEA-00360><Server started in RUNNING mode>

    The first step is configuring the IRM service via Enterprise Manager. Now that the Admin server is running, you can point a browser at http://irm.company.internal:7001/em. Login with the username and password you supplied when you created the domain. In Enterprise Manager the IRM service administrator is able to make server-wide configuration; however, finding the pages with these settings can be a bit of a challenge. After logging in, on the left you'll see a tree containing elements of the Enterprise Manager farm Farm_irm_domain. Open up Content Management, then Information Rights Management, and finally select the IRM node. On the right, select the IRM menu item and navigate to the Administration section. We now have four options; for now, we are just going to look at General Settings. The image on the right proves that a picture is worth a thousand words (or 113 in this case). The General Settings page allows you to set the cryptographic algorithms used for protecting sealed content. Unless you have a burning need to increase the key lengths, or you need to comply with a regulation or government mandate, AES192 is a good start. You can change this later without worry. The most important setting we need to make here is the Server URL. In this blog article I explain why this URL is so important: basically, every single piece of content you protect with Oracle IRM is going to have this URL embedded in it, so if it's wrong or unresolvable, nobody can open the secured documents. Note that in our environment we have yet to do any SSL configuration of the service. If you intend to build a server without SSL, then use http as the protocol instead of https. But I would recommend using SSL, and setting this up is described in the next article. I would also probably up the device count from 1 to 3. This means that any user can retrieve rights to access content on 3 computers at any one time. The default of 1 doesn't really make sense in development, evaluation or even production environments, and my experience is that 3 is a better number. The next step is to create the keystore for the IRM server.
    When a classification (called a context) is created, Oracle IRM generates a unique set of symmetric keys which are used to secure the content itself. These keys are then encrypted with a set of "wrapper" asymmetric cryptography keys which are stored externally to the server, either in a Java Key Store or an HSM. These keys need to be generated, and the following shows my commands and the resulting output. I have greyed out the responses from the commands so you can see the input a little easier.

        [oracle@irmsrv ~]$ cd /oracle/middleware/wlserver_10.3/server/bin/
        [oracle@irmsrv bin]$ ./setWLSEnv.sh
        CLASSPATH=/oracle/middleware/patch_wls1033/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/oracle/middleware/patch_ocp353/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_18/lib/tools.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic.jar:/oracle/middleware/modules/features/weblogic.server.modules_10.3.3.0.jar:/oracle/middleware/wlserver_10.3/server/lib/webservices.jar:/oracle/middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/oracle/middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:
        PATH=/oracle/middleware/wlserver_10.3/server/bin:/oracle/middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.6.0_18/jre/bin:/usr/java/jdk1.6.0_18/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin
        Your environment has been set.
        [oracle@irmsrv bin]$ cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
        [oracle@irmsrv fmwconfig]$ keytool -genkeypair -alias oracle.irm.wrap -keyalg RSA -keysize 2048 -keystore irm.jks
        Enter keystore password:
        Re-enter new password:
        What is your first and last name? [Unknown]: Simon Thorpe
        What is the name of your organizational unit? [Unknown]: Oracle
        What is the name of your organization? [Unknown]: Oracle
        What is the name of your City or Locality? [Unknown]: San Francisco
        What is the name of your State or Province? [Unknown]: CA
        What is the two-letter country code for this unit? [Unknown]: US
        Is CN=Simon Thorpe, OU=Oracle, O=Oracle, L=San Francisco, ST=CA, C=US correct? [no]: yes
        Enter key password for <oracle.irm.wrap> (RETURN if same as keystore password):

    At this point we now have an irm.jks in the directory /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig. The reason we store it here is that this folder is backed up as part of a domain backup. As with any cryptographic technology, DO NOT LOSE THESE KEYS OR THIS KEY STORE. Once you've sealed content against a context, the content keys are wrapped with these keys; lose them, and you can't get access to any secured content. Pretty important. Now that we've got the keys created, we need to go back to the IRM Enterprise Manager and set the location of the key store. Going back to the General Settings page in Enterprise Manager, scroll down to Keystore Settings. Leave the type as JKS but change the location to:

        /oracle/Middleware/user_projects/domains/irm_domain/config/fmwconfig/irm.jks

    and hit Apply. The final step with regard to the key store is to tell the server the password for the Java Key Store so that it can be opened and the keys accessed. Once more, fire up a console window and run these commands (again, I've greyed out the clutter so the commands are easier to see). You will see "dummy" passed into the commands; this is because the command asks for a username, but in this instance we don't use one, hence the value dummy is passed and it isn't used.
        [oracle@irmsrv fmwconfig]$ cd /oracle/middleware/Oracle_IRM/common/bin/
        [oracle@irmsrv bin]$ ./wlst.sh
        ... lots of settings fly by...
        Welcome to WebLogic Server Administration Scripting Shell
        Type help() for help on available commands
        wls:/offline> connect('weblogic','password','t3://irmsrv.us.oracle.com:7001')
        Connecting to t3://irmsrv.us.oracle.com:7001 with userid weblogic ...
        Successfully connected to Admin Server 'AdminServer' that belongs to domain 'irm_domain'.
        Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.
        wls:/irm_domain/serverConfig> createCred("IRM","keystore:irm.jks","dummy","password")
        Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root. For more help, use help(domainRuntime)
        wls:/irm_domain/serverConfig> createCred("IRM","key:irm.jks:oracle.irm.wrap","dummy","password")
        Already in Domain Runtime Tree
        wls:/irm_domain/serverConfig>

    At last we are now ready to fire up the IRM server itself. The domain creation created a managed server called IRM_server1, and we need to start this. Use the following commands in a new console window:

        cd /oracle/middleware/user_projects/domains/irm_domain/bin/
        ./startManagedWebLogic.sh IRM_server1

    This will start up the server in the console. Unlike the Admin server, you need to provide the username and password for the service to start, so enter your weblogic username and password when prompted. You can change this behavior by putting the password into a boot.properties file; read more about this in the WebLogic Server documentation. Once running, wait until you see the line:

        <Notice><WebLogicServer><BEA-000360><Server started in RUNNING mode>

    At this point we can now login to the Oracle IRM Management Website at the URL:

        http://irm.company.internal:1600/irm_rights/

    The server is configured for HTTP only at the moment; no SSL is involved. We just want to ensure we can get a working system up and running. You should now see a login like the image on the right, and you can login using your weblogic username and password. The next article in this guide covers adding SSL, and then testing your server by actually adding a few users, sealing some content and opening this content as a user.
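    As a quick illustration of the boot.properties mechanism mentioned above (treat this as a sketch of the conventional approach rather than a step from this guide; the password value is a placeholder), the file lives under the managed server's security folder, and WebLogic rewrites both values in encrypted form the first time the server boots:

        # /oracle/middleware/user_projects/domains/irm_domain/servers/IRM_server1/security/boot.properties
        # Plain text on first boot; WebLogic replaces both values with encrypted forms.
        username=weblogic
        password=yourpassword

    With this file in place, startManagedWebLogic.sh no longer prompts for credentials, which is what makes unattended or scripted startup possible.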

    Read the article

  • Metro: Declarative Data Binding

    - by Stephen.Walther
    The goal of this blog post is to describe how declarative data binding works in the WinJS library. In particular, you learn how to use both the data-win-bind and data-win-bindsource attributes. You also learn how to use calculated properties and converters to format the value of a property automatically when performing data binding. By taking advantage of WinJS data binding, you can use the Model-View-ViewModel (MVVM) pattern when building Metro style applications with JavaScript. By using the MVVM pattern, you can prevent your JavaScript code from spinning into chaos. The MVVM pattern provides you with a standard pattern for organizing your JavaScript code, which results in a more maintainable application.

    Using Declarative Bindings

    You can use the data-win-bind attribute with any HTML element in a page. The data-win-bind attribute enables you to bind (associate) an attribute of an HTML element to the value of a property. Imagine, for example, that you want to create a product details page which shows a product object. In that case, you can create the following HTML page to display the product details:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8">
            <title>Application1</title>
            <!-- WinJS references -->
            <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
            <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
            <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>
            <!-- Application1 references -->
            <link href="/css/default.css" rel="stylesheet">
            <script src="/js/default.js"></script>
        </head>
        <body>
            <h1>Product Details</h1>
            <div class="field">
                Product Name: <span data-win-bind="innerText:name"></span>
            </div>
            <div class="field">
                Product Price: <span data-win-bind="innerText:price"></span>
            </div>
            <div class="field">
                Product Picture:
                <br />
                <img data-win-bind="src:photo;alt:name" />
            </div>
        </body>
        </html>

    The HTML page above contains three data-win-bind attributes, one for each product property displayed. You use the data-win-bind attribute to set properties of the HTML element associated with the attribute. The data-win-bind attribute takes a semicolon-delimited list of element property names and data source property names:

        data-win-bind="elementPropertyName:datasourcePropertyName; elementPropertyName:datasourcePropertyName; ..."

    In the HTML page above, the first two data-win-bind attributes are used to set the values of the innerText property of the SPAN elements. The last data-win-bind attribute is used to set the values of the IMG element's src and alt attributes. By the way, using data-win-bind attributes is perfectly valid HTML5. The HTML5 standard enables you to add custom attributes to an HTML document as long as the custom attributes start with the prefix data-. So you can add custom attributes to an HTML5 document with names like data-stephen, data-funky, or data-rover-dog-is-hungry and your document will validate. The product object displayed in the page above with the data-win-bind attributes is created in the default.js file:

        (function () {
            "use strict";

            var app = WinJS.Application;

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    var product = {
                        name: "Tesla",
                        price: 80000,
                        photo: "/images/TeslaPhoto.png"
                    };
                    WinJS.Binding.processAll(null, product);
                }
            };

            app.start();
        })();

    In the code above, a product object is created with a name, price, and photo property.
    The WinJS.Binding.processAll() method is called to perform the actual binding (don't confuse WinJS.Binding.processAll() with WinJS.UI.processAll() – these are different methods). The first parameter passed to the processAll() method represents the root element for the binding; in other words, binding happens on this element and its child elements. If you provide the value null, then binding happens on the entire body of the document (document.body). The second parameter represents the data context. This is the object that has the properties which are displayed with the data-win-bind attributes. In the code above, the product object is passed as the data context parameter. Another word for data context is view model.

    Creating Complex View Models

    In the previous section, we used the data-win-bind attribute to display the properties of a simple object: a single product. However, you can use binding with more complex view models, including view models which represent multiple objects. For example, the view model in the following default.js file represents both a customer and a product object. Furthermore, the customer object has a nested address object:

        (function () {
            "use strict";

            var app = WinJS.Application;

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    var viewModel = {
                        customer: {
                            firstName: "Fred",
                            lastName: "Flintstone",
                            address: {
                                street: "1 Rocky Way",
                                city: "Bedrock",
                                country: "USA"
                            }
                        },
                        product: {
                            name: "Bowling Ball",
                            price: 34.55
                        }
                    };
                    WinJS.Binding.processAll(null, viewModel);
                }
            };

            app.start();
        })();

    The following page displays the customer (including the customer address) and the product. Notice that you can use dot notation to refer to child objects in a view model, such as customer.address.street:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8">
            <title>Application1</title>
            <!-- WinJS references -->
            <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
            <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
            <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>
            <!-- Application1 references -->
            <link href="/css/default.css" rel="stylesheet">
            <script src="/js/default.js"></script>
        </head>
        <body>
            <h1>Customer Details</h1>
            <div class="field">
                First Name: <span data-win-bind="innerText:customer.firstName"></span>
            </div>
            <div class="field">
                Last Name: <span data-win-bind="innerText:customer.lastName"></span>
            </div>
            <div class="field">
                Address:
                <address>
                    <span data-win-bind="innerText:customer.address.street"></span> <br />
                    <span data-win-bind="innerText:customer.address.city"></span> <br />
                    <span data-win-bind="innerText:customer.address.country"></span>
                </address>
            </div>
            <h1>Product</h1>
            <div class="field">
                Name: <span data-win-bind="innerText:product.name"></span>
            </div>
            <div class="field">
                Price: <span data-win-bind="innerText:product.price"></span>
            </div>
        </body>
        </html>

    A view model can be as complicated as you need, and you can bind the view model to a view (an HTML document) by using declarative bindings. A short note on scoping binding to just part of a page follows below.
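    As an aside, and purely as an illustrative sketch (the productSection id is hypothetical and not part of the samples above), the root-element parameter of processAll() mentioned earlier can be used to bind a single subtree of the page rather than the whole document:

        // Hypothetical: only elements inside #productSection are processed;
        // data-win-bind attributes elsewhere in the document are left alone.
        var root = document.getElementById("productSection");
        WinJS.Binding.processAll(root, product);

    This can be handy when different regions of a page need to be bound to different view models at different times.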
    Creating Calculated Properties

    You might want to modify a property before displaying it. For example, you might want to format the product price property before displaying it: you don't want to display the raw product price "80000", but the formatted price "$80,000". You also might need to combine multiple properties. For example, you might need to display the customer full name by combining the values of the customer first and last name properties. In these situations, it is tempting to call a function when performing binding. For example, you could create a function named fullName() which concatenates the customer first and last name. Unfortunately, the WinJS library does not support the following syntax:

        <span data-win-bind="innerText:fullName()"></span>

    Instead, in these situations, you should create a new property in your view model that has a getter. For example, the customer object in the following default.js file includes a property named fullName which combines the values of the firstName and lastName properties:

        (function () {
            "use strict";

            var app = WinJS.Application;

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    var customer = {
                        firstName: "Fred",
                        lastName: "Flintstone",
                        get fullName() {
                            return this.firstName + " " + this.lastName;
                        }
                    };
                    WinJS.Binding.processAll(null, customer);
                }
            };

            app.start();
        })();

    The customer object has a firstName, lastName, and fullName property. Notice that the fullName property is defined with a getter function: when you read the fullName property, the values of the firstName and lastName properties are concatenated and returned. The following HTML page displays the fullName property in an H1 element. You can use the fullName property in a data-win-bind attribute in exactly the same way as any other property:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8">
            <title>Application1</title>
            <!-- WinJS references -->
            <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
            <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
            <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>
            <!-- Application1 references -->
            <link href="/css/default.css" rel="stylesheet">
            <script src="/js/default.js"></script>
        </head>
        <body>
            <h1 data-win-bind="innerText:fullName"></h1>
            <div class="field">
                First Name: <span data-win-bind="innerText:firstName"></span>
            </div>
            <div class="field">
                Last Name: <span data-win-bind="innerText:lastName"></span>
            </div>
        </body>
        </html>

    Creating a Converter

    In the previous section, you learned how to format the value of a property by creating a property with a getter. This approach makes sense when the formatting logic is specific to a particular view model. If, on the other hand, you need to perform the same type of formatting for multiple view models, then it makes more sense to create a converter function. A converter function is a function which you can apply whenever you are using the data-win-bind attribute. Imagine, for example, that you want to create a general function for displaying dates, and you always want to display dates using a short format such as 12/25/1988. The following JavaScript file – named converters.js – contains a shortDate() converter:

        (function (WinJS) {

            var shortDate = WinJS.Binding.converter(function (date) {
                return date.getMonth() + 1 + "/" + date.getDate() + "/" + date.getFullYear();
            });

            // Export shortDate
            WinJS.Namespace.define("MyApp.Converters", {
                shortDate: shortDate
            });

        })(WinJS);

    The file above uses the Module Pattern, a pattern which is used throughout the WinJS library.
To learn more about the Module Pattern, see my blog entry on namespaces and modules: http://stephenwalther.com/blog/archive/2012/02/22/windows-web-applications-namespaces-and-modules.aspx The file contains the definition for a converter function named shortDate(). This function converts a JavaScript date object into a short date string such as 12/1/1988. The converter function is created with the help of the WinJS.Binding.converter() method. This method takes a normal function and converts it into a converter function. Finally, the shortDate() converter is added to the MyApp.Converters namespace, so you can call it as MyApp.Converters.shortDate(). The default.js file contains the customer object that we want to bind. Notice that the customer object has a firstName, lastName, and birthday property. We will use our new shortDate() converter when displaying the customer birthday property: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var customer = { firstName: "Fred", lastName: "Flintstone", birthday: new Date("12/1/1988") }; WinJS.Binding.processAll(null, customer); } }; app.start(); })(); We actually use our shortDate converter in the HTML document. The following HTML document displays all of the customer properties: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script type="text/javascript" src="js/converters.js"></script> </head> <body> <h1>Customer Details</h1> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> <div class="field"> Birthday: <span data-win-bind="innerText:birthday MyApp.Converters.shortDate"></span> </div> </body> </html> Notice the data-win-bind attribute used to display the birthday property. It looks like this: <span data-win-bind="innerText:birthday MyApp.Converters.shortDate"></span> The shortDate converter is applied to the birthday property when the birthday property is bound to the SPAN element's innerText property. Using data-win-bindsource Normally, you supply the view model (the data context) used by the data-win-bind attributes in a page by passing it to the WinJS.Binding.processAll() method like this: WinJS.Binding.processAll(null, viewModel); As an alternative, you can specify the view model declaratively in your markup by using the data-win-bindsource attribute.
For example, the following default.js script exposes a view model with the fully-qualified name of MyWinWebApp.viewModel: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { // Create view model var viewModel = { customer: { firstName: "Fred", lastName: "Flintstone" }, product: { name: "Bowling Ball", price: 12.99 } }; // Export view model to be seen by universe WinJS.Namespace.define("MyWinWebApp", { viewModel: viewModel }); // Process data-win-bind attributes WinJS.Binding.processAll(); } }; app.start(); })(); In the code above, a view model which represents a customer and a product is exposed as MyWinWebApp.viewModel. The following HTML page illustrates how you can use the data-win-bindsource attribute to bind to this view model: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Customer Details</h1> <div data-win-bindsource="MyWinWebApp.viewModel.customer"> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> </div> <h1>Product</h1> <div data-win-bindsource="MyWinWebApp.viewModel.product"> <div class="field"> Name: <span data-win-bind="innerText:name"></span> </div> <div class="field"> Price: <span data-win-bind="innerText:price"></span> </div> </div> </body> </html> The data-win-bindsource attribute is used twice in the page above: it is used with the DIV element which contains the customer details and it is used with the DIV element which contains the product details. If an element has a data-win-bindsource attribute then all of the child elements of that element are affected. The data-win-bind attributes of all of the child elements are bound to the data source represented by the data-win-bindsource attribute. Summary The focus of this blog entry was data binding using the WinJS library. You learned how to use the data-win-bind attribute to bind the properties of an HTML element to a view model. We also discussed several advanced features of data binding. We examined how to create calculated properties by including a property with a getter in your view model. We also discussed how you can create a converter function to format the value of a view model property when binding the property. Finally, you learned how to use the data-win-bindsource attribute to specify a view model declaratively.

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
A simple strategy for mapping classes to database tables might be “one table for every entity persistent class.” This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both “is a” and “has a” relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don't support type inheritance—and even when it's available, it's usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy: Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information. Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships. Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema. I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario. Inheritance Mapping with Entity Framework Code First All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I'm a big fan of the EF Code First approach, and I'm pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, Code First not only fully supports all the strategies but also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4. A Note For Those Who Follow Other Entity Framework Approaches If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First specific, the explanations around each of the strategies apply perfectly to all approaches, be it Code First or others. A Note For Those Who are New to Entity Framework and Code-First If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and also that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First like Complex Types and Shared Primary Key Associations. A Top Down Development Scenario These posts take a top-down approach; they assume that you're starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C# and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you're working bottom up, starting with existing database tables.
I'll show some tricks along the way that help you deal with imperfect table layouts. Let's start with the mapping of entity inheritance. -- The Domain Model In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We do allow various billing types and represent them as subclasses of the BillingDetail class. For now, we support CreditCard and BankAccount. Implement the Object Model with Code First As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention. public abstract class BillingDetail { public int BillingDetailId { get; set; } public string Owner { get; set; } public string Number { get; set; } } public class BankAccount : BillingDetail { public string BankName { get; set; } public string Swift { get; set; } } public class CreditCard : BillingDetail { public int CardType { get; set; } public string ExpiryMonth { get; set; } public string ExpiryYear { get; set; } } public class InheritanceMappingContext : DbContext { public DbSet<BillingDetail> BillingDetails { get; set; } } This object model is all that is needed to enable inheritance with Code First. If you put this in your application you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries. Polymorphic Queries LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query: IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b; List<BillingDetail> billingDetails = linqQuery.ToList(); Or the same query in EntitySQL: string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b"; ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext.CreateQuery<BillingDetail>(eSqlQuery); List<BillingDetail> billingDetails = objectQuery.ToList(); linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail. BillingDetail is an abstract class, but the actual concrete objects in the list are of its subtypes: CreditCard and BankAccount. Non-polymorphic Queries All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which only return instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
For example, the following query returns only instances of BankAccount: IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b; EntitySQL has an OFTYPE operator that does the same thing: string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b"; In fact, the above query with the OFTYPE operator is shorthand for the following query expression, which uses the TREAT and IS OF operators: string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount) FROM BillingDetails AS b WHERE b IS OF(Model.BankAccount)"; (Note that in the above queries, Model.BankAccount is the fully qualified name for the BankAccount class. You need to replace "Model" with your own namespace name.) Table per Class Hierarchy (TPH) An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don't have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It's the best-performing way to represent polymorphism—both polymorphic and nonpolymorphic queries perform well—and it's even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward. Discriminator Column As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn't a property of the persistent class in our object model; it's used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names—in this case, "BankAccount" or "CreditCard". EF Code First automatically sets and retrieves the discriminator values. TPH Requires Properties in SubClasses to be Nullable in the Database TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won't have a CardType property, the CardType field must be NULL for such rows, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity. TPH Violates the Third Normal Form Another important issue is normalization. We've created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
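To make the polymorphic behavior described above concrete, here is a minimal usage sketch based on the BillingDetail model from this post. The console output and the as-cast branching are my own illustration of consuming a TPH hierarchy, not something Code First requires:

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        using (var context = new InheritanceMappingContext())
        {
            // The polymorphic query returns BillingDetail references, but each
            // row materializes as its concrete subtype, chosen by the value of
            // the discriminator column.
            foreach (BillingDetail detail in context.BillingDetails.ToList())
            {
                var card = detail as CreditCard;
                if (card != null)
                {
                    Console.WriteLine("Credit card {0} expires {1}/{2}",
                        card.Number, card.ExpiryMonth, card.ExpiryYear);
                    continue;
                }

                var account = detail as BankAccount;
                if (account != null)
                {
                    Console.WriteLine("Bank account {0} at {1}",
                        account.Number, account.BankName);
                }
            }
        }
    }
}

Note that adding a third subclass later leaves this loop untouched; under TPH only the table changes, gaining extra nullable columns for the new subclass's properties.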
Generated SQL Query Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement: SELECT [Extent1].[Discriminator] AS [Discriminator], [Extent1].[BillingDetailId] AS [BillingDetailId], [Extent1].[Owner] AS [Owner], [Extent1].[Number] AS [Number], [Extent1].[BankName] AS [BankName], [Extent1].[Swift] AS [Swift], [Extent1].[CardType] AS [CardType], [Extent1].[ExpiryMonth] AS [ExpiryMonth], [Extent1].[ExpiryYear] AS [ExpiryYear] FROM [dbo].[BillingDetails] AS [Extent1] WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard') Or the non-polymorphic query for the BankAccount subclass generates this SQL statement: SELECT [Extent1].[BillingDetailId] AS [BillingDetailId], [Extent1].[Owner] AS [Owner], [Extent1].[Number] AS [Number], [Extent1].[BankName] AS [BankName], [Extent1].[Swift] AS [Swift] FROM [dbo].[BillingDetails] AS [Extent1] WHERE [Extent1].[Discriminator] = 'BankAccount' Note how Code First adds a restriction on the discriminator column and also how it only selects those columns that belong to the BankAccount entity. Change Discriminator Column Data Type and Values With Fluent API Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively: protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder) { modelBuilder.Entity<BillingDetail>() .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA")) .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC")); } Also, changing the data type of the discriminator column is interesting. In the above code, we passed strings to the HasValue method, but this method is defined to accept a parameter of type object: public void HasValue(object value); Therefore, if we pass a value of type int to it, Code First not only uses our desired values (i.e. 1 and 2) in the discriminator column but also changes the column type to (INT, NOT NULL): modelBuilder.Entity<BillingDetail>() .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1)) .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2)); Summary In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem. References: the ADO.NET team blog and the book Java Persistence with Hibernate.

    Read the article

  • CodePlex Daily Summary for Saturday, March 13, 2010

    CodePlex Daily Summary for Saturday, March 13, 2010New Projects[Experiment] vczh DBUI: EXPERIMENT. Auto UI adaptor for a specified data base[SAMPLE PROJECT] Branch and Patch with GIT: I'll paste more here later...BlogPress: This web application provides a content management system. Coot: Facebook Photos Screensaver.DotNetNuke® JDMenu: dnnJDMenu makes it easy to use the open source JDMenu component in your DotNetNuke skin.DotNetNuke® RadRotator: dnnRadRotator makes it easy to add telerik RadRotator functionality to your site or custom module. Licensing permits anyone to use the components ...DotNetNuke® RadTabStrip: DNNRadTabStrip makes it easy to add telerik RadTabStrip functionality to your module or skin. Licensing permits anyone to use the components (incl...DotNetNuke® RadTreeView: DNNRadTreeView makes it easy to add telerik RadTreeView functionality to your module or skin. Licensing permits anyone to use the components (incl...DotNetNuke® Skin Garden: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Mark Allan of DnnGarden. Concise and semantic HTML 5 stru...ElmasFC: Elmas Facebook CheckerFront Callback Protocol: Front Callback Protocol make it easier for developers to create a network application in the .Net Language. It provide the flexibility of using TCP...Glass UI: A Windows Forms Control Library built specifically for Aero Glass. This will consist of existing controls, modified to render on glass, as well as ...Guitarist Tools: Some tools for guitarists.HomeUX: HomeUX is home control software featuring a Silverlight-based touch screen user interface.Homework Helper: Homework Helper help students to train simpler homework like "What's the name of the capital of France?" - Question - Answear based. It's made i...IGNOU Mini Project Allocation: A mini project allocation system.Images Compiler: Images Compiler is pretty small wpf application for users who want to resize, rename, disable colors... for too much images in very short time. It...InstitutionManagementSystem: Institution Management System prototypeManagedCv: ManagedCv is the library for C#/VB/ to use the OpenCV librarymanagement joint ownership: Application pour la gestion des actions réalisées au sein d’une copropriété. Application for management actions within a joint ownershipMapWindow6: MapWindow 6.0 is an open source geographic information system (GIS) and library of geospatial software development tools for .NET, written in C#. T...Morris Auto: Morris Auto is a social application build templateMouse control with finger detection: The aim of this project is to design an application for recognition of the movement of a finger through a webcam and then control the mouse. The pr...ORAYLIS BI.Quality: ORAYLIS BI.Quality makes it easier to develop BI Solutions in an agile environment. It adapts Unit Tests to BI Development. Propositional Framework: Different sales propositions are presented to the user using data collected from marketing intelligence, browser history or user behaviour as they ...ServStop: ServStop is a .NET application that makes it easy to stop several system services at once. 
Now you don't have to change startup types or stop them ...shatkotha web portal: source code of web portal shatkothaSurvey and Registration tools: Two Silverlight tools for add Survey and Registration in your websiteTable Storage Backup & Restore for Windows Azure: Allows various backup & restore options for Windows Azure Table Storage accounts: - Backup from one storage account to another - Backup a storag...Topsy Lib: This project provides wrapper methods to call Topsy's web services from C#.twNowplaying: twNowplaying is a small and handy utility that allows you to tweet your currently playing track from Spotify to Twitter. Followed by the #twNowplay...XOM: XOM: Xml Object Mapper Automatically populate your objects with data from XML via an XML map. Never do this again: if(e.GetElementsByTagName("us...YS Utils: The library contains set of useful classes which may solve common tasks for different applications.ZabViewer: CS ProjectNew ReleasesAppFabric Caching Admin Tool: AppFabric Caching Admin Tool (0.8): System Requirements:.NET 4.0 RC AppFabric Caching Beta2 Test On:Win 7 (64x)ASP.Net Routing Configuration: mal.Web.Routing v0.9.2.0: mal.Web.Routing v0.9.2.0DotNetNuke IM Module of Facebook Like Messenger: DNN IM Module of Facebook Like Messenger: Empower DNN Website with a 1-to-1 Chat Solution named Free Facebook Messenger Style Web Chat Bar of 123 Web Messenger. 1. If you need to use the c...DotNetNuke® Form and List (formerly User Defined Table): 05.01.02: Form and List 05.01.02 (Release Candidate 2)5.1.2 will be the next stabilzation release. Major Highlights fixed Cancel action in a form, it don't ...DotNetNuke® JDMenu: DNN jdMenu 1.0.0: dnnJDMenu makes it easy to use the open source JDMenu component in your DotNetNuke skin. Many thanks to Jonathan Sharp of Outwest Media for creati...DotNetNuke® RadTabStrip: DNNRadTabstrip 1.0.0: DNNRadTabStrip makes it easy to add telerik RadTabStrip functionality to your module or skin. Licensing permits anyone to use the components (inclu...DotNetNuke® RadTreeView: DNNRadTreeView 1.0.0: DNNRadTreeView makes it easy to create skins which use the Telerik RadTreeview functionality. Licensing permits anyone (including designers) to use...DotNetNuke® Skin Garden: Garden Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Mark Allan of DnnGarden. Concise and semantic HTML 5 struc...ElmasFC: Elmas FC: This porgram is checking your facebook account automaticly.Free DotNetNuke IM of 123 Web Messenger -- Web-based Friend List: Free Download DNN IM Module and Source Code: 123 Web Messenger offers free DotNetNuke Instant Messaging to help you embed a IM Software into DNN Website by integrating 123 Web Messenger with D...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.4 Released: Hi, Today we have released the final version of Visifire v3.0.4 which contains the following major features: * Zooming * Step Line chart ...Home Access Plus+: v3.1.0.0: Version 3.1.0.0 Release Problem with this release, get Version 3.1.1.1 Change Log: Fixed ampersand issue Added Unzip Features Added Help Desk ...Home Access Plus+: v3.1.1.1: Version 3.1.1.1 Release Change Log: Fixed the Help Desk File Changes: ~/App_Data/Tickets.xml ~/bin/CHS Extranet.dll ~/bin/CHS Extranet.pdbHomework Helper: Homework Helper 1.0: The first release of Homework Helper. It got basic functionality. 
Check the Documentation or press F1 while using the program.Homework Helper: Homework Helper v1.0: This is the latest version of Homework Helper. Just a few updates since the latest one (a couple of minutes ago). Now it saves your settings! Instr...IBCSharp: IBCSharp 1.02: What IBCSharp 1.02.zip unzips to: http://i39.tinypic.com/28hz8m1.png Note: The above solution has MSTest, Typemock Isolator, and Microsoft CHESS c...InfoService: InfoService v1.5 RC 1: InfoService Release Canidate Please note this is a RC. It should be stable, but i can't guarantee that! So use it on your own risk. Please read Pl...IntX: IntX 0.9.3.3: Fixed issue #7100IronRuby: 1.0 RC3: The IronRuby team is pleased to announce version 1.0 RC3! As IronRuby approaches the final 1.0, these RCs will contain crucial bug fixes and enhanc...MAISGestão: LayerDiagram (pdf): This is the final version of the layer diagramMapWindow6: MapWindow 6.0 msi (March 12): After some significant trouble with svn, this site represents a test of using Mercurial for our version control. Hopefully the choice of this new ...MSBuild Mercurial Tasks: 0.2.0 Beta: This release realises the Scenario 2 and provides two MSBuild tasks: HgCommit and HgPush. This task allows to create a new changeset in the current...NLog - Advanced .NET Logging: NLog 2.0 preview 1: This is very experimental first build of NLog from 2.0 branch is available for download. It’s not alpha, beta or even gamma, just a preview of upco...Open NFe: Fonte DANFE v1.9.5: Código-fonte para a versão 1.9.5 do DANFEOpiConsole (XNA): OpiConsole 0.3: OpiConsole 0.3 OpiConsole includes: * PC & Xbox support. * Default commands (cvar, quit etc.) * Ability to add custom commands. *...ORAYLIS BI.Quality: Release 1.0.0: Release 1.0.0Pcap.Net: Pcap.Net 0.5.0 (38141): Pcap.Net - March 2010 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and include...PhysX.Net: PhysX.Net 0.12.0.0 Beta 1: This is beta release of 0.12.0.0 for people to test and provide feedback on. It targets 2.8.3.21 of PhysXPólya: Pólya 2010 03 13 alpha: Pólya is a collection of generic data structures; as generic as C#/.Net allows them to be. In this first release the focus has been on some fundam...Prolog.NET: Prolog.NET 1.0 Beta 1.3: Installer includes: primary Prolog.NET assembly Prolog.NET Workbench Prolog.NET Scheduler sample application PrologTest console applicatio...Propositional Framework: USP: The initial code drop was based on the Autocomplete demo and the neural network approach used by Jeff Heaton (@ http://www.heatonresearch.com/) - e...Protocol Transition with BizTalk: Source: Stable buildRoTwee: RoTwee (7.1.0.0): Now picture of your friend is shown in RoTwee. Place mouse cursor to the tweet !ServStop: 0.5.0.0: Initial Release Contents of ServStop0500.zip ServStop.exe 0.5.0.0 Initial Release. 32,768 bytes. Other files None. Source code available on "...SkeinLibManaged: Release 1.0.0.0 (Beta): This is the compiled DLL with XML documentation, so there should be plenty of context sensitive help and Intellisense. 
This is the Release version...sPWadmin: pwAdmin v0.9a: Added: LiveChat Plugin Shows the latest 150 chat messages from "/Your PW Server/logservice/logs/world2.chat" Refresh chat every 5 seconds Allow...sPWadmin: pwAdmin v0.9b: Some minor fixes to style, layout & different browser support Merged the forms in server configuration tab to a single form that can now save mul...Topsy Lib: Initial release: Initial Debug and Release versions.twNowplaying: twNowplaying: Press the Twitter icon to get started, don't forget to submit bugs to the issue tracker. What's new This release has some minor UI fixes.twNowplaying: twNowplaying Alpha 1.0: Press the Twitter icon to get started, don't forget to submit bugs to the issue tracker. Thank you , and enjoy! =)VCC: Latest build, v2.1.30312.0: Automatic drop of latest buildWPF Dialogs: Version 0.1.3: The FolderBrowseDialog / FolderBrowseDialog - Deutsch was improved and extended.XOM: XOM 0.1A: Just a release of the code I have so far. In order to get your objects to work, you need to create an XML file to define your objects. Here is a ...Xpress - ASP.NET MVC 个人博客程序: xpress2.1.1.0312.beta: 最新beta版,注意:此版本和2.1.0不兼容 更改内容: 将主题文件发放在 Views 文件夹下 主题文件支持强类型Model 主题资源文件放在Resouces目录下YS Utils: V 1.0.0.0: This is first release. The YSUtils library contains first set of classes. ZIP files contains documentation.Most Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NET Ajax LibraryASP.NETMicrosoft SQL Server Community & SamplesMost Active ProjectsRawrN2 CMSBlogEngine.NETFasterflect - A Fast and Simple Reflection APIpatterns & practices – Enterprise LibraryFarseer Physics EngineSharePoint Team-MailerCaliburn: An Application Framework for WPF and SilverlightCalcium: A modular application toolset leveraging PrismjQuery Library for SharePoint Web Services

    Read the article

  • Compat Wireless Drivers Centrino N-2230

    - by user2699451
    So I am using linux and am having trouble installing the Compat Wireless drivers Hardware: Intel Centrino N-2230 OS: Linux Mint 64bit (kernel 13.08-generic) I followed this link http://www.mathyvanhoef.com/2012/09/compat-wireless-injection-patch-for.html Output: apt-get install linux-headers-$(uname -r) Reading package lists... Done Building dependency tree Reading state information... Done linux-headers-3.8.0-19-generic is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 19 not upgraded. charles-W55xEU compat-wireless-2010-10-16 # cd ~ charles-W55xEU ~ # dir adt-bundle-linux-x86_64-20130917.zip Desktop known_hosts_backup charles-W55xEU ~ # wget http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.6/compat-wireless-3.6.2-1-snp.tar.bz2 --2013-10-29 10:28:23-- http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.6/compat-wireless-3.6.2-1-snp.tar.bz2 Resolving www.orbit-lab.org (www.orbit-lab.org)... 128.6.192.131 Connecting to www.orbit-lab.org (www.orbit-lab.org)|128.6.192.131|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 4443700 (4,2M) [application/x-bzip2] Saving to: ‘compat-wireless-3.6.2-1-snp.tar.bz2’ 100%[======================================>] 4 443 700 13,5KB/s in 11m 3s 2013-10-29 10:39:27 (6,55 KB/s) - ‘compat-wireless-3.6.2-1-snp.tar.bz2’ saved [4443700/4443700] charles-W55xEU ~ # tar -xf compat-wireless-3.6.2-1-snp.tar.bz2 charles-W55xEU ~ # cd compat-wireless-3.6-rc6-1 bash: cd: compat-wireless-3.6-rc6-1: No such file or directory charles-W55xEU ~ # dir adt-bundle-linux-x86_64-20130917.zip Desktop compat-wireless-3.6.2-1-snp known_hosts_backup compat-wireless-3.6.2-1-snp.tar.bz2 charles-W55xEU ~ # cd compat-wireless-3.6.2-1-snp/ charles-W55xEU compat-wireless-3.6.2-1-snp # dir code-metrics.txt defconfigs linux-next-pending pending-stable compat drivers MAINTAINERS README config.mk enable-older-kernels Makefile scripts COPYRIGHT include net udev crap linux-next-cherry-picks patches charles-W55xEU compat-wireless-3.6.2-1-snp # wget http://patches.aircrack-ng.org/mac80211.compat08082009.wl_frag+ack_v1.patch --2013-10-29 10:40:52-- http://patches.aircrack-ng.org/mac80211.compat08082009.wl_frag+ack_v1.patch Resolving patches.aircrack-ng.org (patches.aircrack-ng.org)... 213.186.33.2, 2001:41d0:1:1b00:213:186:33:2 Connecting to patches.aircrack-ng.org (patches.aircrack-ng.org)|213.186.33.2|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 1049 (1,0K) [text/plain] Saving to: ‘mac80211.compat08082009.wl_frag+ack_v1.patch’ 100%[======================================>] 1 049 --.-K/s in 0s 2013-10-29 10:40:56 (180 MB/s) - ‘mac80211.compat08082009.wl_frag+ack_v1.patch’ saved [1049/1049] charles-W55xEU compat-wireless-3.6.2-1-snp # patch -p1 < mac80211.compat08082009.wl_frag+ack_v1.patch patching file net/mac80211/tx.c Hunk #1 succeeded at 792 (offset 115 lines). charles-W55xEU compat-wireless-3.6.2-1-snp # wget -Ocompatwireless_chan_qos_frag.patch http://pastie.textmate.org/pastes/4882675/download --2013-10-29 10:43:18-- http://pastie.textmate.org/pastes/4882675/download Resolving pastie.textmate.org (pastie.textmate.org)... 178.79.137.125 Connecting to pastie.textmate.org (pastie.textmate.org)|178.79.137.125|:80... connected. HTTP request sent, awaiting response... 301 Moved Permanently Location: http://pastie.org/pastes/4882675/download [following] --2013-10-29 10:43:20-- http://pastie.org/pastes/4882675/download Resolving pastie.org (pastie.org)... 
96.126.119.119 Connecting to pastie.org (pastie.org)|96.126.119.119|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 2036 (2,0K) [application/octet-stream] Saving to: ‘compatwireless_chan_qos_frag.patch’ 100%[======================================>] 2 036 --.-K/s in 0,001s 2013-10-29 10:43:21 (3,35 MB/s) - ‘compatwireless_chan_qos_frag.patch’ saved [2036/2036] charles-W55xEU compat-wireless-3.6.2-1-snp # patch -p1 < compatwireless_chan_qos_frag.patch patching file drivers/net/wireless/rtl818x/rtl8187/dev.c patching file net/mac80211/tx.c Hunk #1 succeeded at 1495 (offset 8 lines). patching file net/wireless/chan.c charles-W55xEU compat-wireless-3.6.2-1-snp # make ./scripts/gen-compat-autoconf.sh /root/compat-wireless-3.6.2-1-snp/.config /root/compat-wireless-3.6.2-1-snp/config.mk > include/linux/compat_autoconf.h make -C /lib/modules/3.8.0-19-generic/build M=/root/compat-wireless-3.6.2-1-snp modules make[1]: Entering directory `/usr/src/linux-headers-3.8.0-19-generic' CC [M] /root/compat-wireless-3.6.2-1-snp/compat/main.o LD [M] /root/compat-wireless-3.6.2-1-snp/compat/compat.o CC [M] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:8:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_pci.h:217:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_pci_init’ In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:10:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_gmac_cmn.h:95:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_gmac_cmn_init’ In file included from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:0: /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:25:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:152:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:17:21: warning: ‘bcma_bus_next_num’ defined but not used [-Wunused-variable] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:93:12: warning: ‘bcma_register_cores’ defined but not used [-Wunused-function] make[3]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o] Error 1 make[2]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma] Error 2 make[1]: *** [_module_/root/compat-wireless-3.6.2-1-snp] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-19-generic' make: *** [modules] Error 2 charles-W55xEU compat-wireless-3.6.2-1-snp # make install Warning: You may or may not need to update your initframfs, you should if any of the modules installed are part of your initramfs. To add support for your distribution to do this automatically send a patch against ./scripts/update-initramfs. If your distribution does not require this send a patch against the '/usr/bin/lsb_release -i -s': LinuxMint tag for your distribution to avoid this warning. 
make -C /lib/modules/3.8.0-19-generic/build M=/root/compat-wireless-3.6.2-1-snp modules make[1]: Entering directory `/usr/src/linux-headers-3.8.0-19-generic' CC [M] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:8:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_pci.h:217:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_pci_init’ In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:10:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_gmac_cmn.h:95:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_gmac_cmn_init’ In file included from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:0: /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:25:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:152:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:17:21: warning: ‘bcma_bus_next_num’ defined but not used [-Wunused-variable] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:93:12: warning: ‘bcma_register_cores’ defined but not used [-Wunused-function] make[3]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o] Error 1 make[2]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma] Error 2 make[1]: *** [_module_/root/compat-wireless-3.6.2-1-snp] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-19-generic' make: *** [modules] Error 2 charles-W55xEU compat-wireless-3.6.2-1-snp # It keeps giving errors, same with other sites, I get the same errors??? I am lost, help needed

    Read the article

  • Visual Studio 2010 and .NET 4 Released

    - by ScottGu
The final release of Visual Studio 2010 and .NET 4 is now available. Download and Install Today MSDN subscribers, as well as WebsiteSpark/BizSpark/DreamSpark members, can now download the final releases of Visual Studio 2010 and TFS 2010 through the MSDN subscribers download center. If you are not an MSDN Subscriber, you can download free 90-day trial editions of Visual Studio 2010. Or you can download the free Visual Studio express editions of Visual Web Developer 2010, Visual Basic 2010, Visual C# 2010 and Visual C++. These express editions are available completely for free (and never time out). If you are looking for an easy way to set up a new machine for web development you can automate installing ASP.NET 4, ASP.NET MVC 2, IIS, SQL Server Express and Visual Web Developer 2010 Express really quickly with the Microsoft Web Platform Installer (just click the install button on the page). What is new with VS 2010 and .NET 4 Today's release is a big one – and brings with it a ton of new features and capabilities. One of the things we tried hard to focus on with this release was to invest heavily in making existing applications, projects and developer experiences better. What this means is that you don't need to read 1000+ page books or spend time learning major new concepts in order to take advantage of the release. There are literally thousands of improvements (both big and small) that make you more productive and successful without having to learn big new concepts in order to start using them. Below is just a small sampling of some of the improvements with this release: Visual Studio 2010 IDE Visual Studio 2010 now supports multiple monitors (enabling much better use of screen real-estate). It has new code Intellisense support that makes it easier to find and use classes and methods. It has improved code navigation support for searching code-bases and seeing how code is called and used. It has new code visualization support that allows you to see the relationships across projects and classes within projects, as well as to automatically generate sequence diagrams to chart execution flow. The editor now has HTML and JavaScript snippet support as well as improved JavaScript intellisense. The VS 2010 Debugger and Profiling support is now much, much richer and enables new features like Intellitrace (aka Historical Debugging), debugging of Crash/Dump files, and better parallel debugging. VS 2010's multi-targeting support is now much richer, and enables you to use VS 2010 to target .NET 2, .NET 3, .NET 3.5 and .NET 4 applications. And the infamous Add Reference dialog now loads much faster. TFS 2010 is now easy to set up (you can now install the server in under 10 minutes) and enables great source-control, bug/work-item tracking, and continuous integration support. Testing (both automated and manual) is now much, much richer. And VS 2010 Premium and Ultimate provide much richer architecture and design tooling support. VB and C# Language Features VB and C# in VS 2010 both contain a bunch of new features and capabilities. VB adds new support for automatic properties, collection initializers, and implicit line continuation, among many other features. C# adds support for optional parameters and named arguments, a new dynamic keyword, and co-variance and contra-variance, among many other features. ASP.NET 4 and ASP.NET MVC 2 With ASP.NET 4, Web Forms controls now render clean, semantically correct, and CSS friendly HTML markup.
Built-in URL routing functionality allows you to expose clean, search engine friendly URLs and increase the traffic to your Website. ViewState within applications can now be more easily controlled and made smaller. ASP.NET Dynamic Data support has been expanded. More controls, including rich charting and data controls, are now built into ASP.NET 4 and enable you to build applications even faster. New starter project templates now make it easier to get going with new projects. SEO enhancements make it easier to drive traffic to your public facing sites. And web.config files are now clean and simple. ASP.NET MVC 2 is now built into VS 2010 and ASP.NET 4, and provides a great way to build web sites and applications using a model-view-controller based pattern. ASP.NET MVC 2 adds features that make it easy to enable client and server validation logic, and provides new strongly-typed HTML and UI-scaffolding helper methods. It also enables more modular/reusable applications. The new <%: %> syntax in ASP.NET makes it easier to HTML encode output. Visual Studio 2010 also now includes better tooling support for unit testing and TDD. In particular, "Consume first intellisense" and "generate from usage" support within VS 2010 make it easier to write your unit tests first, and then drive your implementation from them. Deploying ASP.NET applications gets a lot easier with this release. You can now publish your Websites and applications to a staging or production server from within Visual Studio itself. Visual Studio 2010 makes it easy to transfer all your files, code, configuration, database schema and data in one complete package. VS 2010 also makes it easy to manage separate web.config configuration settings depending upon whether you are in debug, release, staging or production modes. WPF 4 and Silverlight 4 WPF 4 includes a ton of new improvements and capabilities including more built-in controls, richer graphics features (cached composition, pixel shader 3 support, layout rounding, and animation easing functions), and a much improved text stack (with crisper text rendering, custom dictionary support, and selection and caret brush options). WPF 4 also includes a bunch of support to enable you to take advantage of new Windows 7 features – including multi-touch and Windows 7 shell integration. Silverlight 4 will launch this week as well. You can watch my Silverlight 4 launch keynote streamed live Tuesday (April 13th) at 8am Pacific Time. Silverlight 4 includes a ton of new capabilities – including a bunch for making it possible to build great business applications and out of the browser applications. I'll be doing a separate blog post later this week (once it is live on the web) that talks more about its capabilities. Visual Studio 2010 now includes great tooling support for both WPF and Silverlight. The new VS 2010 WPF and Silverlight designer makes it much easier to build client applications and great line-of-business solutions, as well as to integrate and bind with data. Tooling support for Silverlight 4 with the final release of Visual Studio 2010 will be available when Silverlight 4 releases to the web this week. SharePoint and Azure Visual Studio 2010 now includes built-in support for building SharePoint applications. You can now create, edit, build, and debug SharePoint applications directly within Visual Studio 2010. You can also now use SharePoint with TFS 2010.
Support for creating Azure-hosted applications is also now included with VS 2010 – allowing you to build ASP.NET and WCF based applications and host them within the cloud. Data Access Data access has a lot of improvements coming to it with .NET 4. Entity Framework 4 includes a ton of new features and capabilities – including support for model first and POCO development, default support for lazy loading, built-in support for pluralization/singularization of table/property names within the VS 2010 designer, full support for all the LINQ operators, the ability to optionally expose foreign keys on model objects (useful for some stateless web scenarios), disconnected API support to better handle N-Tier and stateless web scenarios, and T4 template customization support within VS 2010 to allow you to customize and automate how code is generated for you by the data designer. In addition to improvements with the Entity Framework, LINQ to SQL with .NET 4 also includes a bunch of nice improvements. WCF and Workflow WCF includes a bunch of great new capabilities – including better REST, activation and configuration support. WCF Data Services (formerly known as Astoria) and WCF RIA Services also now enable you to easily expose and work with data from remote clients. Windows Workflow is now much faster, includes flowchart services, and now makes it easier to create custom services than before. More details can be found here. CLR and Core .NET Library Improvements .NET 4 includes the new CLR 4 engine – which includes a lot of nice performance and feature improvements. The CLR 4 engine now runs side-by-side in-process with older versions of the CLR – allowing you to use two different versions of .NET within the same process. It also includes improved COM interop support. The .NET 4 base class libraries (BCL) include a bunch of nice additions and refinements. In particular, the .NET 4 BCL now includes new parallel programming support that makes it much easier to build applications that take advantage of multiple CPUs and cores on a computer. This work dovetails nicely with the new VS 2010 parallel debugger (making it much easier to debug parallel applications), as well as the new F# functional language support now included in the VS 2010 IDE. .NET 4 now also has the Dynamic Language Runtime (DLR) library built-in – which makes it easier to use dynamic language functionality with .NET. MEF – a really cool library that enables rich extensibility – is also now built into .NET 4 and included as part of the base class libraries. .NET 4 Client Profile The download size of the .NET 4 redist is now much smaller than it was before (the x86 full .NET 4 package is about 36MB). We also now have a .NET 4 Client Profile package which is a pure sub-set of the full .NET that can be used to streamline client application installs. C++ VS 2010 includes a bunch of great improvements for C++ development. This includes better C++ Intellisense support, MSBuild support for projects, improved parallel debugging and profiler support, MFC improvements, and a number of language features and compiler optimizations. My VS 2010 and .NET 4 Blog Series I've been cranking away on a blog series the last few months that highlights many of the new VS 2010 and .NET 4 improvements. The good news is that I have about 20 in-depth posts already written. The bad news (for me) is that I have about 200 more to go until I'm done!
I'm going to try and keep adding a few more each week over the next few months to discuss the new improvements and how best to take advantage of them. Below is a list of the already written ones that you can check out today: Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenario with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 Stay tuned to my blog as I post more. Also check out this page which links to a bunch of great articles and videos done by others. VS 2010 Installation Notes If you have installed a previous version of VS 2010 on your machine (either the beta or the RC) you must first uninstall it before installing the final VS 2010 release. I also recommend uninstalling the .NET 4 betas (including both the client and full .NET 4 installs) as well as the other installs that come with VS 2010 (e.g. ASP.NET MVC 2 preview builds, etc). The uninstalls of the betas/RCs will clean up all the old state on your machine – after which you can install the final VS 2010 version and should have everything just work (this is what I've done on all of my machines and I haven't had any problems). The VS 2010 and .NET 4 installs add a bunch of new managed assemblies to your machine. Some of these will be "NGEN'd" to native code during the actual install process (making them run fast). To avoid adding too much time to VS setup, though, we don't NGEN all assemblies immediately – and instead will NGEN the rest in the background when your machine is idle. Until it finishes NGENing the assemblies they will be JIT'd to native code the first time they are used in a process – which for large assemblies can sometimes cause a slight performance hit. If you run into this you can manually force all assemblies to be NGEN'd to native code immediately (and not just wait till the machine is idle) by launching the Visual Studio command line prompt from the Windows Start Menu (Microsoft Visual Studio 2010->Visual Studio Tools->Visual Studio Command Prompt). Within the command prompt type "Ngen executequeueditems" – this will cause everything to be NGEN'd immediately. How to Buy Visual Studio 2010 You can download and use the free Visual Studio express editions of Visual Web Developer 2010, Visual Basic 2010, Visual C# 2010 and Visual C++. These express editions are available completely for free (and never time out). You can buy a new copy of VS 2010 Professional that includes a 1 year subscription to MSDN Essentials for $799. MSDN Essentials includes a developer license of Windows 7 Ultimate, Windows Server 2008 R2 Enterprise, SQL Server 2008 DataCenter R2, and 20 hours of Azure hosting time. Subscribers also have access to MSDN's Online Concierge, and Priority Support in MSDN Forums. Upgrade prices from previous releases of Visual Studio are also available.
Existing Visual Studio 2005/2008 Standard customers can upgrade to Visual Studio 2010 Professional for a special $299 retail price until October. You can take advantage of this VS Standard->Professional upgrade promotion here. Web developers who build applications for others, and who are either independent developers or who work for companies with fewer than 10 employees, can also optionally take advantage of the Microsoft WebSiteSpark program. This program gives you three copies of Visual Studio 2010 Professional, 1 copy of Expression Studio, and 4 CPU licenses of both Windows 2008 R2 Web Server and SQL 2008 Web Edition that you can use to develop and deploy applications at no cost for 3 years. At the end of the 3 years there is no obligation to buy anything. You can sign up for WebSiteSpark today in under 5 minutes – and immediately have access to the products to download. Summary Today's release is a big one – and has a bunch of improvements for pretty much every developer. Thank you to everyone who provided feedback, suggestions and bug reports throughout the development process – we couldn't have delivered it without you. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back, and the completely new authentication scheme is described in a documentation topic that isn't directly linked, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtless leaving many people to give up in frustration. In this post I'll try to hit all the points needed to set up and use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site… Signing Up The first step required is to create a Windows Azure MarketPlace account. Go to: https://datamarket.azure.com/ Sign in with your Windows Live Id If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration. Once you're signed in you can start adding services. Click on the Data Link on the main page Select Microsoft Translator from the list This adds the Microsoft Bing Translator to your services. Pricing The page shows the pricing matrix and the free service which provides 2 megabytes of translations a month for free. Prices go up steeply from there. Pricing is determined by the actual bytes of translated output. A single translation maxes out at 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site. Register your Application Once you've created the Data association with Translator the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!) Provide your name The client secret is auto-created and becomes your 'password' For the redirect url provide any https url: https://microsoft.com works Give this application a description of your choice so you can identify it in the list of apps Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys.
To find them you can go to: https://datamarket.azure.com/developer/applications

    You can come back here to look at your registered applications and pick up the ClientId and ClientSecret. Fun, eh? But we're now ready to actually call the API and do some translating.

    Using the Bing Translate API

    The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll actually need to use two services: you have to call an authentication API first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency.

    Authentication

    The first step is authentication. The service uses OAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the OAuth token that you can then use with the translate API call. The bearer token has a short 10-minute lifetime, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you will typically have to authenticate each time, unless you build a more elaborate caching scheme that takes the timeout into account - perhaps using the ASP.NET Cache object; a sketch follows at the end of this section. For low-volume operations you can probably get away with simply calling the auth API for every translation you do. To call the authentication API use code like this:

    /// <summary>
    /// Retrieves an OAuth authentication token to be used on the translate
    /// API request. The result string needs to be passed as a bearer token
    /// to the translate API.
    ///
    /// You can find the client ID and secret (or register a new one) at:
    /// https://datamarket.azure.com/developer/applications/
    /// </summary>
    /// <param name="clientId">The client ID of your application</param>
    /// <param name="clientSecret">The client secret or password</param>
    /// <returns>The access token string, or null on failure</returns>
    public string GetBingAuthToken(string clientId = null, string clientSecret = null)
    {
        string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13";

        if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret))
        {
            ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided;
            return null;
        }

        var postData = string.Format("grant_type=client_credentials&client_id={0}" +
                                     "&client_secret={1}" +
                                     "&scope=http://api.microsofttranslator.com",
                                     HttpUtility.UrlEncode(clientId),
                                     HttpUtility.UrlEncode(clientSecret));

        // POST auth data to the OAuth API
        string res, token;
        try
        {
            var web = new WebClient();
            web.Encoding = Encoding.UTF8;
            res = web.UploadString(authBaseUrl, postData);
        }
        catch (Exception ex)
        {
            ErrorMessage = ex.GetBaseException().Message;
            return null;
        }

        var ser = new JavaScriptSerializer();
        var auth = ser.Deserialize<BingAuth>(res);
        if (auth == null)
            return null;

        token = auth.access_token;
        return token;
    }

    private class BingAuth
    {
        public string token_type { get; set; }
        public string access_token { get; set; }
    }

    This code basically takes the client id and secret and POSTs them to the OAuth endpoint, which returns a JSON string. Here I use the JavaScriptSerializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app, in which case you don't need the extra type; in my library that houses this component I don't, so I just rely on the built-in serializer. The auth method returns a long base64-encoded string which can be used as a bearer token in the translate API call.
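    Since the token is good for roughly ten minutes, a simple cache can spare you an auth round-trip on every translation. Below is a minimal sketch of such a scheme, assuming the GetBingAuthToken() method above; the TokenCache helper name, the nine-minute safety margin, and the use of System.Runtime.Caching.MemoryCache (rather than the ASP.NET Cache object) are my own choices for illustration, not part of the original library:

    using System;
    using System.Runtime.Caching;

    // Hypothetical helper: caches the bearer token for slightly less than
    // its 10-minute lifetime so a stale token is never handed out.
    public static class TokenCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public static string GetToken(TranslationServices translate,
                                      string clientId, string clientSecret)
        {
            var token = Cache.Get("BingAuthToken") as string;
            if (token != null)
                return token; // still fresh - reuse it

            token = translate.GetBingAuthToken(clientId, clientSecret);
            if (token != null)
            {
                // expire after 9 minutes, one minute before the token itself does
                Cache.Set("BingAuthToken", token, DateTimeOffset.UtcNow.AddMinutes(9));
            }
            return token;
        }
    }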
Translation

    Once you have the authentication token you can pass it to the translate API. The auth token is passed as an Authorization header, with the value prefixed with 'Bearer '. Here's what the simple Translate API call looks like:

    /// <summary>
    /// Uses the Bing API service to perform translation.
    /// Bing can translate up to 1000 characters.
    ///
    /// Requires that you provide a ClientId and ClientSecret,
    /// or set the configuration values for these two.
    ///
    /// More info on setup:
    /// http://www.west-wind.com/weblog/
    /// </summary>
    /// <param name="text">Text to translate</param>
    /// <param name="fromCulture">Two letter culture name</param>
    /// <param name="toCulture">Two letter culture name</param>
    /// <param name="accessToken">Pass an access token retrieved with GetBingAuthToken.
    /// If not passed, the default keys from the .config file are used, if any.</param>
    /// <returns>The translated string, or null on failure</returns>
    public string TranslateBing(string text, string fromCulture, string toCulture,
                                string accessToken = null)
    {
        string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate";

        if (accessToken == null)
        {
            accessToken = GetBingAuthToken();
            if (accessToken == null)
                return null;
        }

        string res;
        try
        {
            var web = new WebClient();
            web.Headers.Add("Authorization", "Bearer " + accessToken);

            string ct = "text/plain";
            string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}",
                                            HttpUtility.UrlEncode(text),
                                            fromCulture, toCulture,
                                            HttpUtility.UrlEncode(ct));

            web.Encoding = Encoding.UTF8;
            res = web.DownloadString(serviceUrl + postData);
        }
        catch (Exception e)
        {
            ErrorMessage = e.GetBaseException().Message;
            return null;
        }

        // result is a single XML element fragment
        var doc = new XmlDocument();
        doc.LoadXml(res);
        return doc.DocumentElement.InnerText;
    }

    The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method retrieve the auth token on its own. In my case I'm using this inside of a Web application, and in that situation I simply re-authenticate every time, as there's no convenient way to manage the lifetime of the auth token. The token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes, and a result content type are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string.

    Using the Wrapper Methods

    It should be pretty obvious how to use these two methods, but here are a couple of test methods that demonstrate the two usage scenarios:

    [TestMethod]
    public void TranslateBingWithAuthTest()
    {
        var translate = new TranslationServices();

        string clientId = DbResourceConfiguration.Current.BingClientId;
        string clientSecret = DbResourceConfiguration.Current.BingClientSecret;

        string auth = translate.GetBingAuthToken(clientId, clientSecret);
        Assert.IsNotNull(auth);

        string text = translate.TranslateBing("Hello World we're back home!", "en", "de", auth);
        Assert.IsNotNull(text, translate.ErrorMessage);
        Console.WriteLine(text);
    }

    [TestMethod]
    public void TranslateBingIntegratedTest()
    {
        var translate = new TranslationServices();
        string text = translate.TranslateBing("Hello World we're back home!", "en", "de");
        Assert.IsNotNull(text, translate.ErrorMessage);
        Console.WriteLine(text);
    }
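    Outside of a test harness, the same two methods combine naturally into a loop that authenticates once and reuses the token for several calls within its lifetime. A quick sketch, assuming clientId and clientSecret hold the keys registered earlier; the phrases and language codes are arbitrary examples:

    var translate = new TranslationServices();

    // authenticate once, then reuse the token for several translations
    string token = translate.GetBingAuthToken(clientId, clientSecret);
    if (token == null)
        throw new InvalidOperationException(translate.ErrorMessage);

    foreach (var phrase in new[] { "Hello World", "Goodbye" })
    {
        // passing the token explicitly avoids re-authenticating per call
        string result = translate.TranslateBing(phrase, "en", "de", token);
        Console.WriteLine(result ?? translate.ErrorMessage);
    }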
Other API Methods

    The Translate API has a number of methods available; this one is the simplest, but it is probably also the most common, translating a single string. You can find the additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx

    SOAP and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx

    These links will be your starting points for calling other methods in this API.

    Dual Interface

    I've talked about my database-driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both the Google and Bing APIs. As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field.

    Lost in Translation

    There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy - at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating…

    Related Posts
    - Translation method Source on Github
    - Translating with Google Translate without Google API Keys
    - Creating a data-driven ASP.NET Resource Provider

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in Localization | ASP.NET | .NET

    Read the article

  • Adding Client Validation To DataAnnotations DataType Attribute

    - by srkirkland
The System.ComponentModel.DataAnnotations namespace contains a validation attribute called DataTypeAttribute, which takes an enum specifying what data type the given property conforms to. Here are a few quick examples:

    public class DataTypeEntity
    {
        [DataType(DataType.Date)]
        public DateTime DateTime { get; set; }

        [DataType(DataType.EmailAddress)]
        public string EmailAddress { get; set; }
    }

    This attribute comes in handy when using ASP.NET MVC, because the type you specify will determine what "template" MVC uses. Thus, for the DateTime property, if you create a partial in Views/[loc]/EditorTemplates/Date.ascx (or .cshtml for Razor), that view will be used to render the property when using any of the Html.EditorFor() methods (see the template sketch below). One thing that the DataType() validation attribute does not do is any actual validation. To see this, let's take a look at the EmailAddress property above. It turns out that regardless of the value you provide, the entity will be considered valid:

    //valid
    new DataTypeEntity { EmailAddress = "Foo" };

    Hmmm. Since DataType() doesn't validate, that leaves us with two options: (1) create our own attribute for each data type to validate, like [Date], or (2) add validation into the DataType attribute directly. In this post, I will show you how to hook up client-side validation to the existing DataType() attribute for a desired type. From there, adding server-side validation would be a breeze, and even writing a custom validation attribute would be simple (more on that in future posts).

    Validation All The Way Down

    Our goal will be to leave our DataTypeEntity class (from above) untouched, requiring no reference to System.Web.Mvc. Then we will make an ASP.NET MVC project that allows us to create a new DataTypeEntity and hook up automatic client-side date validation using the suggested "out-of-the-box" jquery.validate bits that are included with ASP.NET MVC 3. For simplicity I'm going to focus on only the DateTime field, but the concept is generally the same for any other DataType.
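    Before moving on to validation: the Date editor template mentioned above can be as small as a single line. Here's a minimal sketch of what such a template might contain, assuming a Views/Shared/EditorTemplates/Date.cshtml location - the folder choice and markup are my own illustration, not code from this post:

    @* Hypothetical Date.cshtml editor template: renders the DateTime
       property as a short date in a plain text box *@
    @model DateTime
    @Html.TextBox("", Model.ToShortDateString(),
                  new { @class = "text-box single-line" })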
Building a DataTypeAttribute Adapter

    To start, we will need to build a new validation adapter that we can register using ASP.NET MVC's DataAnnotationsModelValidatorProvider.RegisterAdapter() method. This method takes two Type parameters: the first is the attribute we are looking to validate with, and the second is an adapter that should subclass System.Web.Mvc.ModelValidator. Since we are extending DataAnnotations, we can use the subclass of ModelValidator called DataAnnotationsModelValidator<>. This takes a generic argument of type DataAnnotations.ValidationAttribute, which, lucky for us, means the DataTypeAttribute will fit in nicely. So, starting from there and implementing the required constructor, we get:

    public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute>
    {
        public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context,
                                        DataTypeAttribute attribute)
            : base(metadata, context, attribute)
        {
        }
    }

    Now you have a full-fledged validation adapter, although it doesn't do anything yet. There are two methods you can override to add functionality: IEnumerable<ModelValidationResult> Validate(object container) and IEnumerable<ModelClientValidationRule> GetClientValidationRules(). Adding logic to the server-side Validate() method is pretty straightforward, and for this post I'm going to focus on GetClientValidationRules().

    Adding a Client Validation Rule

    Adding client validation is now incredibly easy, because jquery.validate is very powerful and already comes with a ton of validators (including date and regular expressions for our email example). Teamed with the new unobtrusive validation javascript support, we can make short work of our ModelClientValidationDateRule:

    public class ModelClientValidationDateRule : ModelClientValidationRule
    {
        public ModelClientValidationDateRule(string errorMessage)
        {
            ErrorMessage = errorMessage;
            ValidationType = "date";
        }
    }

    If your validation has additional parameters, you can use the ValidationParameters IDictionary<string, object> to include them (a concrete sketch follows below). There is a little bit of convention magic going on here, but the distilled version is that we are defining a "date" validation type, which will be included as html5 data-* attributes (specifically data-val-date). Then jquery.validate.unobtrusive takes this attribute and basically passes it along to jquery.validate, which knows how to handle date validation.
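    To make the ValidationParameters idea concrete, here's a hedged sketch of what a parameterized rule might look like - say, a regex-backed email rule. The class is my own illustration, not code from this post, but it leans on the "regex" adapter that jquery.validate.unobtrusive already ships with:

    public class ModelClientValidationEmailRule : ModelClientValidationRule
    {
        public ModelClientValidationEmailRule(string errorMessage, string pattern)
        {
            ErrorMessage = errorMessage;
            ValidationType = "regex";                     // emitted as data-val-regex
            ValidationParameters.Add("pattern", pattern); // emitted as data-val-regex-pattern
        }
    }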
Finishing our DataTypeAttribute Adapter

    Now that we have a model client validation rule, we can return it in the GetClientValidationRules() method of the DataTypeAttributeAdapter created above. Basically I want to say: if DataType.Date was provided, return the date rule with a given error message (using ValidationAttribute.FormatErrorMessage()). The entire adapter is below:

    public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute>
    {
        public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context,
                                        DataTypeAttribute attribute)
            : base(metadata, context, attribute)
        {
        }

        public override System.Collections.Generic.IEnumerable<ModelClientValidationRule> GetClientValidationRules()
        {
            if (Attribute.DataType == DataType.Date)
            {
                return new[]
                {
                    new ModelClientValidationDateRule(
                        Attribute.FormatErrorMessage(Metadata.GetDisplayName()))
                };
            }

            return base.GetClientValidationRules();
        }
    }

    Putting it all together

    Now that we have an adapter for the DataTypeAttribute, we just need to tell ASP.NET MVC to use it. The easiest way to do this is to use the built-in DataAnnotationsModelValidatorProvider by calling RegisterAdapter() in your global.asax startup method:

    DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute),
                                                          typeof(DataTypeAttributeAdapter));

    Show and Tell

    Let's see this in action using a clean ASP.NET MVC 3 project. First, make sure to reference the jquery, jquery.validate, and jquery.validate.unobtrusive scripts that you will need for client validation. Next, let's make a model class (note we are using the same built-in DataType() attribute that comes with System.ComponentModel.DataAnnotations).
public class DataTypeEntity
    {
        [DataType(DataType.Date, ErrorMessage = "Please enter a valid date (ex: 2/14/2011)")]
        public DateTime DateTime { get; set; }
    }

    Then we make a create page with a strongly-typed DataTypeEntity model; the form section is shown below (notice we are just using EditorForModel):

    @using (Html.BeginForm())
    {
        @Html.ValidationSummary(true)
        <fieldset>
            <legend>Fields</legend>

            @Html.EditorForModel()

            <p>
                <input type="submit" value="Create" />
            </p>
        </fieldset>
    }

    The final step is to register the adapter in our global.asax file:

    DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute),
                                                          typeof(DataTypeAttributeAdapter));

    Now we are ready to run the page. Looking at the datetime field's html, we see that our adapter added some data-* validation attributes:

    <input type="text" value="1/1/0001" name="DateTime" id="DateTime"
           data-val-required="The DateTime field is required."
           data-val-date="Please enter a valid date (ex: 2/14/2011)"
           data-val="true" class="text-box single-line valid">

    Here data-val-required was added automatically because DateTime is non-nullable, and data-val-date was added by our validation adapter. Now if we try to enter an invalid date, our custom error message is displayed via client-side validation as soon as we tab out of the box. If we hadn't included a custom validation message, the default DataTypeAttribute message "The field {0} is invalid" would have been shown (of course, we can change the default as well). Note we did not specify server-side validation, but in this case we don't have to, because an invalid date will cause a server-side error during model binding.
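    If you did want explicit server-side validation to mirror the client rule, the Validate() override mentioned earlier is the place for it. Here's a minimal, hedged sketch of what that might look like inside the DataTypeAttributeAdapter above - my own illustration, not code from this post, and only meaningful when the bound model value is still a string:

    // Hypothetical server-side counterpart; goes inside DataTypeAttributeAdapter.
    public override IEnumerable<ModelValidationResult> Validate(object container)
    {
        // a real DateTime property has already survived model binding,
        // so only string-typed models have anything left to check here
        if (Attribute.DataType == DataType.Date && Metadata.Model is string)
        {
            DateTime parsed;
            if (!DateTime.TryParse((string)Metadata.Model, out parsed))
            {
                yield return new ModelValidationResult
                {
                    Message = Attribute.FormatErrorMessage(Metadata.GetDisplayName())
                };
            }
        }
    }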
Conclusion

    I really like how easy it is to register new data annotations model validators, whether they are your own or, as in this post, supplements to existing validation attributes. I'm still debating whether adding the validation directly to the DataType attribute is the right place for it, versus creating a dedicated "Date" validation attribute, but it's nice to know either option is available and, as we've seen, simple to implement. I'm also working through the nascent stages of an open source project that will create validation attribute extensions to the existing data annotations providers using similar techniques to those seen above (examples: Email, Url, EqualTo, Min, Max, CreditCard, etc.). Keep an eye on this blog and subscribe to my twitter feed (@srkirkland) if you are interested in announcements.

    Read the article
