Search Results

Search found 941 results on 38 pages for 'tyler treat'.

Page 36/38

  • Using Selenium-IDE with a rich Javascript application?

    - by Darien
    Problem: At my workplace, we're trying to find the best way to create automated tests for an almost wholly JavaScript-driven intranet application. Right now we're stuck trying to find a good tradeoff between:
    - Application code in reusable and nestable GUI components
    - Tests which are easily created by the testing team
    - Tests which can be recorded once and then automated
    - Tests which do not break after small cosmetic changes to the site
    XPath expressions (or other possible expressions, like jQuery selectors) naively generated from Selenium-IDE are often non-repeatable and very fragile. Conversely, having the JS code generate special unique ID values for every important DOM element on the page... well, that is its own headache, complicated by reusable GUI components and IDs needing to be consistent when the test is re-run. What successes have other people had with this kind of thing? How do you do automated application-level testing of a rich JS interface?
    Limitations:
    - We are using JavascriptMVC 2.0, hopefully 3.0 soon so that we can upgrade to jQuery 1.4.x.
    - The test-making folks are mostly trained to use Selenium IDE to directly record things.
    - The test leads would prefer a page-unique HTML ID on each clickable element on the page... Training the testers to write or alter special expressions (such as telling them which HTML class names are important branching points) is a no-go.
    - We try to make reusable JavaScript components, but this means very few GUI components can treat themselves (or what they contain) as unique.
    - Some of our components already use HTML ID values in their operation. I'd like to avoid doing this anyway, but it complicates the idea of ID-based testing.
    - It may be possible to add custom facilities (like a locator-builder or new locator method) to the Selenium-IDE installation testers use.
    - Almost everything that goes on occurs within a single "page load" from a conventional browser perspective, even when items are saved.
    Current thoughts: I'm considering a system where a custom locator-builder (JavaScript code) for Selenium-IDE talks with our application code as the tester is recording. In this way, our application becomes partially responsible for generating a mostly flexible expression (XPath or jQuery) for any given DOM element. While this can avoid requiring more training for testers, I worry it may be over-thinking things.
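    One way to picture the "application helps build the locator" idea: the app exposes a small helper that walks up from the clicked element to the nearest component wrappers and emits a selector based on component metadata rather than document structure. A rough sketch follows; the MyApp namespace and the data-component/data-name attributes are assumptions for illustration, not part of JavascriptMVC or Selenium-IDE:

      // Application-side helper that a custom Selenium-IDE locator-builder could call
      // while the tester records. It returns a CSS selector that survives cosmetic
      // markup changes because it depends only on component metadata.
      window.MyApp = window.MyApp || {};
      MyApp.getStableSelector = function (element) {
          var parts = [];
          for (var node = element; node && node.nodeType === 1; node = node.parentNode) {
              var comp = node.getAttribute('data-component'); // e.g. "grid", "toolbar"
              var name = node.getAttribute('data-name');      // instance name within its parent
              if (comp) {
                  parts.unshift('[data-component="' + comp + '"]' +
                                (name ? '[data-name="' + name + '"]' : ''));
              }
          }
          // If the clicked element isn't itself a component root, anchor on its tag name.
          if (!element.getAttribute('data-component')) {
              parts.push(element.tagName.toLowerCase());
          }
          return parts.join(' ');
      };

    A recorded click can then be stored as a css= locator built from this value, which keeps the test repeatable without needing a page-unique HTML ID on every clickable element.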

    Read the article

  • Unable to find assembly, C#

    - by PlasmaCube
    So, here's the deal. I've got two ASP.NET applications, both of which use SQLServer Session State management. They also both use the same server. I've got a custom session class in an external DLL, which fully implements serialization, and which both applications have referenced. Each application, in turn, has a class which inherits from the DLL class, and both applications use their own respective classes for their session state. Now, what I was trying to accomplish was that if you wanted to go to the other application, it could look in the session (they all use the same session key) and treat the existing object there as the base (the one from the DLL), extract whatever login info you need, then overwrite the session object with your own. Unfortunately, when the second application attempts to read the session, it seems that it looks for the DLL of the first application, and when it can't find it, it throws an exception. Is there a flaw in my logic? Here's an example: // Global.asax of the 1st app protected void Session_Start(object sender, EventArgs e) { Session.Add( "UserSessionKey", new FirstUserSession()); // FirstUserSession inherits from BaseUserSession } Now the second application: // Global.asax of 2nd app protected void Session_Start(object sender, EventArgs e) { if (Session["UserSessionKey"] != null) { BaseUserSession existing = (BaseUserSession)Session["UserSessionKey"]; SecondUserSession session = new SecondUserSession(); // This also inherits from BaseUserSession session.Authenticated = existing.Authenticated; session.Id = existing.Id; session.Role = existing.Role; Session.Add("UserSessionKey", session); } else { Session.Add("UserSessionKey", new SecondUserSession()); } } Here's the exception stack trace. In this case, "MyCBC" is the real name of the first app, and "ASPTesting" is the second app. [SerializationException: Unable to find assembly 'MyCBC, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.] 
System.Runtime.Serialization.Formatters.Binary.BinaryAssemblyInfo.GetAssembly() +1871092 System.Runtime.Serialization.Formatters.Binary.ObjectReader.GetType(BinaryAssemblyInfo assemblyInfo, String name) +7545734 System.Runtime.Serialization.Formatters.Binary.ObjectMap..ctor(String objectName, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable) +120 System.Runtime.Serialization.Formatters.Binary.ObjectMap.Create(String name, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable) +52 System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryObjectWithMapTyped record) +190 System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryHeaderEnum binaryHeaderEnum) +61 System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run() +253 System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +168 System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +203 System.Web.Util.AltSerialization.ReadValueFromStream(BinaryReader reader) +788 System.Web.SessionState.SessionStateItemCollection.ReadValueFromStreamWithAssert() +55 System.Web.SessionState.SessionStateItemCollection.DeserializeItem(String name, Boolean check) +281 System.Web.SessionState.SessionStateItemCollection.get_Item(String name) +19 System.Web.SessionState.HttpSessionStateContainer.get_Item(String name) +13 System.Web.SessionState.HttpSessionState.get_Item(String name) +13 ASPTesting._Default.Page_Load(Object sender, EventArgs e) in C:\Documents and Settings\sarsstu\My Documents\Projects\Testing\ASPTesting\ASPTesting\Default.aspx.cs:20 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 Thanks to everyone in advance.
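    The stack trace points at the underlying issue: BinaryFormatter records the concrete type (FirstUserSession, which lives in the MyCBC assembly) when it serializes the session, so ASPTesting has to be able to load MyCBC.dll before it can read the value back. One common way out, sketched below on the assumption that both sites already reference the shared DLL, is to make the type that actually goes into session live in that shared assembly, so neither site's private assembly is ever named in the serialized stream (class and property names are illustrative):

      // In the shared DLL referenced by both applications
      using System;

      [Serializable]
      public class BaseUserSession
      {
          public int Id { get; set; }
          public bool Authenticated { get; set; }
          public string Role { get; set; }
      }

      // Global.asax of either application: store the shared type itself. Anything
      // app-specific (FirstUserSession / SecondUserSession) stays out of session
      // and is rebuilt locally from these shared values when needed.
      public class Global : System.Web.HttpApplication
      {
          protected void Session_Start(object sender, EventArgs e)
          {
              Session.Add("UserSessionKey", new BaseUserSession());
          }
      }

    Alternatively, deploying each app's session assembly where the other app can load it (for example into both bin folders) lets the formatter resolve the original types, at the cost of coupling the two deployments.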

    Read the article

  • Authenticating users in iPhone app

    - by Myron
    I'm developing an HTTP api for our web application. Initially, the primary consumer of the API will be an iPhone app we're developing, but I'm designing this with future uses in mind (such as mobile apps for other platforms). I'm trying to decide on the best way to authenticate users so they can access their accounts from the iPhone. I've got a design that I think works well, but I'm no security expert, so I figured it would be good to ask for feedback here. The design of the user authentication has 3 primary goals: Good user experience: We want to allow users to enter their credentials once, and remain logged in indefinitely, until they explicitly log out. I would have considered OAuth if not for the fact that the experience from an iPhone app is pretty awful, from what I've heard (i.e. it launches the login form in Safari, then tells the user to return to the app when authentication succeeds). No need to store the user creds with the app: I always hate the idea of having the user's password stored in either plain text or symmetrically encrypted anywhere, so I don't want the app to have to store the password to pass it to the API for future API requests. Security: We definitely don't need the intense security of a banking app, but I'd obviously like this to be secure. Overall, the API is REST-inspired (i.e. treating URLs as resources, and using the HTTP methods and status codes semantically). Each request to the API must include two custom HTTP headers: an API Key (unique to each client app) and a unique device ID. The API requires all requests to be made using HTTPS, so that the headers and body are encrypted. My plan is to have an api_sessions table in my database. It has a unique constraint on the API key and unique device ID (so that a device may only be logged into a single user account through a given app) as well as a foreign key to the users table. The API will have a login endpoint, which receives the username/password and, if they match an account, logs the user in, creating an api_sessions record for the given API key and device id. Future API requests will look up the api_session using the API key and device id, and, if a record is found, treat the request as being logged in under the user account referenced by the api_session record. There will also be a logout API endpoint, which deletes the record from the api_sessions table. Does anyone see any obvious security holes in this?
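    The scheme is essentially a per-device API session keyed by the (API key, device ID) pair. To make the moving parts concrete, here is a minimal server-side sketch of the lookup logic described above; the language, type names, and store interface are illustrative assumptions rather than details from the post:

      // Everything hinges on the (api key, device id) pair sent with every request.
      public class ApiSession
      {
          public string ApiKey { get; set; }    // unique per client application
          public string DeviceId { get; set; }  // unique per physical device
          public int UserId { get; set; }       // foreign key to the users table
      }

      public interface IApiSessionStore
      {
          ApiSession Find(string apiKey, string deviceId);
          void Save(ApiSession session);                    // unique on (ApiKey, DeviceId)
          void Delete(string apiKey, string deviceId);
      }

      public class ApiAuthenticator
      {
          private readonly IApiSessionStore _store;
          public ApiAuthenticator(IApiSessionStore store) { _store = store; }

          // The login endpoint calls this once the username/password check has passed.
          public void LogIn(string apiKey, string deviceId, int userId)
          {
              _store.Save(new ApiSession { ApiKey = apiKey, DeviceId = deviceId, UserId = userId });
          }

          // Every other endpoint calls this; a non-null result means "logged in as that user".
          public int? ResolveUser(string apiKey, string deviceId)
          {
              var session = _store.Find(apiKey, deviceId);
              return session == null ? (int?)null : session.UserId;
          }

          // The logout endpoint simply removes the record.
          public void LogOut(string apiKey, string deviceId)
          {
              _store.Delete(apiKey, deviceId);
          }
      }

    Since the stored (API key, device ID) pair effectively becomes a long-lived credential, it is the thing to protect; requiring HTTPS on every request, as the design already does, covers it in transit.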

    Read the article

  • PHP - trying to automate/loop a save function

    - by Homer_J
    EDIT: the difference between the pages is simply that $q1, $q2, $q3 become, for example, $q13, $q14, $q15 - they are questions in the form. Hi all, I have the following code:
    $rguid = $_POST["r"];
    $ip = substr($_SERVER['REMOTE_ADDR'], 0, 50);
    $browser = substr($_SERVER['HTTP_USER_AGENT'], 0, 255);
    $q1 = $_POST["q1"];
    $q2 = $_POST["q2"];
    $q3 = $_POST["q3"];
    $q4 = $_POST["q4"];
    $q5 = $_POST["q5"];
    $q6 = $_POST["q6"];
    $q7 = $_POST["q7"];
    $q8 = $_POST["q8"];
    $respondent_id = decode_respondent_guid($rguid);
    $rcount = respondent_status($respondent_id);
    if ($rcount == 0) {
        $proc = mysqli_prepare($link, "INSERT INTO tresults (respondent_id, ip, browser, q1, q2, q3, q4, q5, q6, q7, q8) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?);");
        mysqli_stmt_bind_param($proc, "issiiiiiiii", $respondent_id, $ip, $browser, $q1, $q2, $q3, $q4, $q5, $q6, $q7, $q8);
        mysqli_stmt_execute($proc);
        $mysql_error = mysqli_error($link);
        if ($mysql_error != "") {
            printf("Unexpected database error: %s\n", $mysql_error);
            mysqli_stmt_close($proc);
            mysqli_clean_connection($link);
            exit();
        } else {
            mysqli_stmt_close($proc);
            mysqli_clean_connection($link);
            update_completion_status($respondent_id, 'Started');
            header("Location: page2.php?r=".$rguid);
        }
    }
    This code works a treat - no problem. At the moment, that code appears on a 'save' page that relates to a 'form' page (i.e. if I have 5 .php pages with a form on each, I need 5 save.php pages to save each page's data to the database). What I would like is to have a single save.php page that will work with as many 'form' pages as I have, so instead of manually having to set up and change each save page, it is automated. I've no idea of the coding to do this, though I suspect something like a foreach or loop etc. Any suggestions? Thanks, Homer.
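    A single generic save.php can do that with a loop: gather whichever q<number> fields the posting page sent, then build the column list, the placeholders, and the bind-parameter type string from what was found. A sketch of the idea (the usual error handling, the update_completion_status() call, and the redirect from the code above would stay as they are; the only remaining per-page difference, the redirect target, could be posted as a hidden field):

      $rguid   = $_POST["r"];
      $ip      = substr($_SERVER['REMOTE_ADDR'], 0, 50);
      $browser = substr($_SERVER['HTTP_USER_AGENT'], 0, 255);

      // Collect every posted q<number> field, whichever page sent them (q1..q8, q13..q15, ...).
      $columns = array();
      $values  = array();
      foreach ($_POST as $key => $value) {
          if (preg_match('/^q\d+$/', $key)) {   // whitelist: only "q" followed by digits
              $columns[] = $key;
              $values[]  = (int)$value;         // question answers are bound as integers
          }
      }

      $respondent_id = decode_respondent_guid($rguid);

      // Build "INSERT INTO tresults (respondent_id, ip, browser, q13, q14, ...) VALUES (?, ?, ...)"
      $cols         = implode(', ', $columns);
      $placeholders = implode(', ', array_fill(0, count($columns) + 3, '?'));
      $proc = mysqli_prepare($link, "INSERT INTO tresults (respondent_id, ip, browser, $cols) VALUES ($placeholders)");

      // mysqli_stmt_bind_param() needs its variables by reference when called dynamically.
      $types  = 'iss' . str_repeat('i', count($values));
      $params = array_merge(array($respondent_id, $ip, $browser), $values);
      $args   = array($proc, $types);
      foreach ($params as $i => $v) {
          $args[] = &$params[$i];
      }
      call_user_func_array('mysqli_stmt_bind_param', $args);
      mysqli_stmt_execute($proc);

    Each form page then posts to the same save.php regardless of whether its questions are q1-q8 or q13-q15.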

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller. To send data through SPI, we’ll just need  to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before: public class ShiftRegister74HC595 { protected SPI Spi; public ShiftRegister74HC595(Cpu.Pin latchPin) : this(latchPin, SPI.SPI_module.SPI1) { } public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule) { var spiConfig = new SPI.Configuration( SPI_mod: spiModule, ChipSelect_Port: latchPin, ChipSelect_ActiveState: false, ChipSelect_SetupTime: 0, ChipSelect_HoldTime: 0, Clock_IdleState: false, Clock_Edge: true, Clock_RateKHz: 1000 ); Spi = new SPI(spiConfig); } public void Write(byte buffer) { Spi.Write(new[] {buffer}); } } All we have to do here is configure SPI. The write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher. The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image: foreach (var bitmap in matrix.MatrixBitmap) { matrix.OnRow(row, bitmap, true); matrix.OnRow(row, bitmap, false); row++; } Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on. public LedMatrix Initialize() { DisplayThread = new Thread(() => DoDisplay(this)); DisplayThread.Start(); return this; } I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method. 
For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a Lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters. Of note as well is the return value of Initialize, a common technique to add some fluency to the API and enabling the matrix to be instantiated and initialized in a single line: using (var matrix = new LedMS88SR74HC595().Initialize()) The “using” in the previous line is because we have implemented IDisposable so that the matrix kills the thread and clears the display when the user code is done with it: public void Dispose() { Clear(); DisplayThread.Abort(); } Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model: matrix.Set(someimage); while (button.Read()) { Thread.Sleep(10); } Here, the call into Set returns immediately and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh? Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display. All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button: var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled); using (var matrix = new LedMS88SR74HC595().Initialize()) { // Oh, prototype is so sad! var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 }; DisplayAndWait(sad, matrix, button); // Let's make it smile! var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 }; DisplayAndWait(smile, matrix, button); } And here is a video of the prototype running: The prototype in action I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my reader guess where we’re going with all that? 
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip
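    One piece the final snippet uses but the post never shows is the DisplayAndWait helper. From the description (set the bitmap, then block until the board's button is pressed) it is presumably something close to this sketch; the debounce pause at the end is an assumption:

      // Show an image and block until the on-board button is pressed. The background
      // thread started by Initialize() keeps refreshing the matrix the whole time,
      // so this method only has to set the bitmap once.
      private static void DisplayAndWait(byte[] image, LedMatrix matrix, InputPort button)
      {
          matrix.Set(image);
          while (button.Read())
          {
              Thread.Sleep(10);
          }
          Thread.Sleep(200); // crude debounce so one press doesn't skip past several images
      }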

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (Dec 15th) Implicit and Explicit code nuggets with Razor (today) In today’s post I’m going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walkthrough some code examples of each of them.  Fluid Coding with Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post. Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages: When run, the above code generates output like below: Notice above how we were able to embed two code nuggets within the content of the foreach loop.  One of them outputs the name of the Product, and the other embeds the ProductID within a hyperlink.  Notice that we didn’t have to explicitly wrap these code-nuggets - Razor was instead smart enough to implicitly identify where the code began and ended in both of these situations.  How Razor Enables Implicit Code Nuggets Razor does not define its own language.  Instead, the code you write within Razor code nuggets is standard C# or VB.  This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar. The Razor parser has smarts built into it so that whenever possible you do not need to explicitly mark the end of C#/VB code nuggets you write.  This makes coding more fluid and productive, and enables a nice, clean, concise template syntax.  Below are a few scenarios that Razor supports where you can avoid having to explicitly mark the beginning/end of a code nugget, and instead have Razor implicitly identify the code nugget scope for you: Property Access Razor allows you to output a variable value, or a sub-property on a variable that is referenced via “dot” notation: You can also use “dot” notation to access sub-properties multiple levels deep: Array/Collection Indexing: Razor allows you to index into collections or arrays: Calling Methods: Razor also allows you to invoke methods: Notice how for all of the scenarios above how we did not have to explicitly end the code nugget.  Razor was able to implicitly identify the end of the code block for us. 
    Razor’s Parsing Algorithm for Code Nuggets
    The below algorithm captures the core parsing logic we use to support “@” expressions within Razor, and to enable the implicit code nugget scenarios above:
    1. Parse an identifier - As soon as we see a character that isn't valid in a C# or VB identifier, we stop and move to step 2
    2. Check for brackets - If we see "(" or "[", go to step 2.1, otherwise, go to step 3
       2.1. Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" we see in strings or comments)
       2.2. Go back to step 2
    3. Check for a "." - If we see one, go to step 3.1, otherwise, DO NOT ACCEPT THE "." as code, and go to step 4
       3.1. If the character AFTER the "." is a valid identifier, accept the "." and go back to step 1, otherwise, go to step 4
    4. Done!
    Differentiating between code and content
    Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement, and when it should instead be treated as static content: Notice how in the snippet above we have ? and ! characters at the end of our code nuggets.  These are both legal C# identifiers – but Razor is able to implicitly identify that they should be treated as static string content as opposed to being part of the code expression because there is whitespace after them.  This is pretty cool and saves us keystrokes.
    Explicit Code Nuggets in Razor
    Razor is smart enough to implicitly identify a lot of code nugget scenarios.  But there are still times when you want/need to be more explicit in how you scope the code nugget expression.  The @(expression) syntax allows you to do this: You can write any C#/VB code statement you want within the @() syntax.  Razor will treat the wrapping () characters as the explicit scope of the code nugget statement.  Below are a few scenarios where we could use the explicit code nugget feature:
    Perform Arithmetic Calculation/Modification: You can perform arithmetic calculations within an explicit code nugget:
    Appending Text to a Code Expression Result: You can use the explicit expression syntax to append static text at the end of a code nugget without having to worry about it being incorrectly parsed as code: Above we have embedded a code nugget within an <img> element’s src attribute.  It allows us to link to images with URLs like “/Images/Beverages.jpg”.  Without the explicit parenthesis, Razor would have looked for a “.jpg” property on the CategoryName (and raised an error).  By being explicit we can clearly denote where the code ends and the text begins.
    Using Generics and Lambdas: Explicit expressions also allow us to use generic types and generic methods within code expressions – and enable us to avoid the <> characters in generics from being ambiguous with tag elements.
    One More Thing….Intellisense within Attributes
    We have used code nuggets within HTML attributes in several of the examples above.  One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this. Below is an example of C# code intellisense when using an implicit code nugget within an <a> href=”” attribute: Below is an example of C# code intellisense when using an explicit code nugget embedded in the middle of a <img> src=”” attribute: Notice how we are getting full code intellisense for both scenarios – despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn’t support).
This makes writing code even easier, and ensures that you can take advantage of intellisense everywhere. Summary Razor enables a clean and concise templating syntax that enables a very fluid coding workflow.  Razor’s ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using a @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
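    The code screenshots from the original post do not survive in this text-only copy, so here is a small reconstruction of the two kinds of nugget described above, with illustrative model and property names (the product list, the arithmetic, and the image URL mirror the scenarios the text walks through):

      @* Implicit nuggets: Razor infers where the code ends and the markup resumes. *@
      <ul>
          @foreach (var product in Model.Products)
          {
              <li><a href="/Products/Details/@product.ProductID">@product.Name</a></li>
          }
      </ul>

      @* Explicit nuggets: @( ) marks the exact scope of the expression. *@
      <p>Price with tax: @(item.Price * 1.08)</p>
      <img src="/Images/@(item.CategoryName).jpg" alt="@item.CategoryName" />

    Without the parentheses in the last line, Razor would try to read .jpg as a property of CategoryName, which is exactly the case the post describes.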

    Read the article

  • RSS feeds in Orchard

    - by Bertrand Le Roy
    When we added RSS to Orchard, we wanted to make it easy for any module to expose any contents as a feed. We also wanted the rendering of the feed to be handled by Orchard in order to minimize the amount of work from the module developer. A typical example of such feed exposition is of course blog feeds. We have an IFeedManager interface for which you can get the built-in implementation through dependency injection. Look at the BlogController constructor for an example: public BlogController( IOrchardServices services, IBlogService blogService, IBlogSlugConstraint blogSlugConstraint, IFeedManager feedManager, RouteCollection routeCollection) { If you look a little further in that same controller, in the Item action, you’ll see a call to the Register method of the feed manager: _feedManager.Register(blog); This in reality is a call into an extension method that is specialized for blogs, but we could have made the two calls to the actual generic Register directly in the action instead, that is just an implementation detail: feedManager.Register(blog.Name, "rss", new RouteValueDictionary { { "containerid", blog.Id } }); feedManager.Register(blog.Name + " - Comments", "rss", new RouteValueDictionary { { "commentedoncontainer", blog.Id } }); What those two effective calls are doing is to register two feeds: one for the blog itself and one for the comments on the blog. For each call, the name of the feed is provided, then we have the type of feed (“rss”) and some values to be injected into the generic RSS route that will be used later to route the feed to the right providers. This is all you have to do to expose a new feed. If you’re only interested in exposing feeds, you can stop right there. If on the other hand you want to know what happens after that under the hood, carry on. What happens after that is that the feedmanager will take care of formatting the link tag for the feed (see FeedManager.GetRegisteredLinks). The GetRegisteredLinks method itself will be called from a specialized filter, FeedFilter. FeedFilter is an MVC filter and the event we’re interested in hooking into is OnResultExecuting, which happens after the controller action has returned an ActionResult and just before MVC executes that action result. In other words, our feed registration has already been called but the view is not yet rendered. Here’s the code for OnResultExecuting: model.Zones.AddAction("head:after", html => html.ViewContext.Writer.Write( _feedManager.GetRegisteredLinks(html))); This is another piece of code whose execution is differed. It is saying that whenever comes time to render the “head” zone, this code should be called right after. The code itself is rendering the link tags. As a result of all that, here’s what can be found in an Orchard blog’s head section: <link rel="alternate" type="application/rss+xml"     title="Tales from the Evil Empire"     href="/rss?containerid=5" /> <link rel="alternate" type="application/rss+xml"     title="Tales from the Evil Empire - Comments"     href="/rss?commentedoncontainer=5" /> The generic action that these two feeds point to is Index on FeedController. That controller has three important dependencies: an IFeedBuilderProvider, an IFeedQueryProvider and an IFeedItemProvider. Different implementations of these interfaces can provide different formats of feeds, such as RSS and Atom. The Match method enables each of the competing providers to provide a priority for themselves based on arbitrary criteria that can be found on the FeedContext. 
This means that a provider can be selected based not only on the desired format, but also on the nature of the objects being exposed as a feed or on something even more arbitrary such as the destination device (you could imagine for example giving shorter text only excerpts of posts on mobile devices, and full HTML on desktop). The key here is extensibility and dynamic competition and collaboration from unknown and loosely coupled parts. You’ll find this pattern pretty much everywhere in the Orchard architecture. The RssFeedBuilder implementation of IFeedBuilderProvider is also a regular controller with a Process action that builds a RssResult, which is itself a thin ActionResult wrapper around an XDocument. Let’s get back to the FeedController’s Index action. After having called into each known feed builder to get its priority on the currently requested feed, it will select the one with the highest priority. The next thing it needs to do is to actually fetch the data for the feed. This again is a collaborative effort from a priori unknown providers, the implementations of IFeedQueryProvider. There are several implementations by default in Orchard, the choice of which is again done through a Match method. ContainerFeedQuery for example chimes in when a “containerid” parameter is found in the context (see URL in the link tag above): public FeedQueryMatch Match(FeedContext context) { var containerIdValue = context.ValueProvider.GetValue("containerid"); if (containerIdValue == null) return null; return new FeedQueryMatch { FeedQuery = this, Priority = -5 }; } The actual work is done in the Execute method, which finds the right container content item in the Orchard database and adds elements for each of them. In other words, the feed query provider knows how to retrieve the list of content items to add to the feed. The last step is to translate each of the content items into feed entries, which is done by implementations of IFeedItemBuilder. There is no Match method this time. Instead, all providers are called with the collection of items (or more accurately with the FeedContext, but this contains the list of items, which is what’s relevant in most cases). Each provider can then choose to pick those items that it knows how to treat and transform them into the format requested. This enables the construction of heterogeneous feeds that expose content items of various types into a single feed. That will be extremely important when you’ll want to expose a single feed for all your site. So here are feeds in Orchard in a nutshell. The main point here is that there is a fair number of components involved, with some complexity in implementation in order to allow for extreme flexibility, but the part that you use to expose a new feed is extremely simple and light: declare that you want your content exposed as a feed and you’re done. There are cases where you’ll have to dive in and provide new implementations for some or all of the interfaces involved, but that requirement will only arise as needed. For example, you might need to create a new feed item builder to include your custom content type but that effort will be extremely focused on the specialized task at hand. The rest of the system won’t need to change. So what do you think?
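    To make the "declare it and you're done" point concrete from the consuming side, here is a sketch of what another module's controller might do, reusing the generic Register overload shown above; the gallery controller and its route values are made up for illustration:

      using System.Web.Mvc;
      using System.Web.Routing;

      public class GalleryController : Controller
      {
          private readonly IFeedManager _feedManager;

          public GalleryController(IFeedManager feedManager)
          {
              _feedManager = feedManager;
          }

          public ActionResult Item(int galleryId, string galleryName)
          {
              // Declare the feed; FeedFilter and the feed providers take it from here,
              // emitting the <link rel="alternate"> tag and serving /rss?containerid=...
              _feedManager.Register(galleryName, "rss",
                  new RouteValueDictionary { { "containerid", galleryId } });

              return View();
          }
      }

    Because the route values include "containerid", the existing ContainerFeedQuery will match the request, so no new query or item providers are needed for the common case.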

    Read the article

  • Using SQL Source Control with Fortress or Vault &ndash; Part 2

    - by AjarnMark
    In Part 1, I started talking about using Red-Gate’s newest version of SQL Source Control and how I really like it as a viable method to source control your database development.  It looks like this is going to turn into a little series where I will explain how we have done things in the past, and how life is different with SQL Source Control.  I will also explain some of my philosophy and methodology around deployment with these tools.  But for now, let’s talk about some of the good and the bad of the tool itself. More Kudos and Features I mentioned previously how impressed I was with the responsiveness of Red-Gate’s team.  I have been having an ongoing email conversation with Gyorgy Pocsi, and as I have run into problems or requested things behave a little differently, it has not been more than a day or two before a new Build is ready for me to download and test.  Quite impressive! I’m sure much of the requests I put in were already in the plans, so I can’t really take credit for them, but throughout this conversation, Red-Gate has implemented several features that were not in the first Early Access version.  Those include: Honoring the Fortress configuration option to require Work Item (Bug) IDs on check-ins. Adding the check-in comment text as a comment to the Work Item. Adding the list of checked-in files, along with the Fortress links for automatic History and DIFF view Updating the status of a Work Item on check-in (e.g. setting the item to Complete or, in our case “Dev-Complete”) Support for the Fortress 2.0 API, and not just the Vault Pro 5.1 API.  (See later notes regarding support for Fortress 2.0). These were all features that I felt we really needed to have in-place before I could honestly consider converting my team to using SQL Source Control on a regular basis.  Now that I have those, my only excuse is not wanting to switch boats on the team mid-stream.  So when we wrap up our current release in a few weeks, we will make the jump.  In the meantime, I will continue to bang on it to make sure it is stable.  It passed one test for stability when I did a test load of one of our larger database schemas into Fortress with SQL Source Control.  That database has about 150 tables, 200 User-Defined Functions and nearly 900 Stored Procedures.  The initial load to source control went smoothly and took just a brief amount of time. Warnings Remember that this IS still in pre-release stage and while I have not had any problems after that first hiccup I wrote about last time, you still need to treat it with a healthy respect.  As I understand it, the RTM is targeted for February.  There are a couple more features that I hope make it into the final release version, but if not, they’ll probably be coming soon thereafter.  Those are: A Browse feature to let me lookup the Work Item ID instead of having to remember it or look back in my Item details.  This is just a matter of convenience. I normally have my Work Item list open anyway, so I can easily look it up, but hey, why not make it even easier. A multi-line comment area.  The current space for writing check-in comments is a single-line text box.  I would like to have a multi-line space as I sometimes write lengthy commentary.  But I recognize that it is a struggle to get most developers to put in more than the word “fixed” as their comment, so this meets the need of the majority as-is, and it’s not a show-stopper for us. Merge.  SQL Source Control currently does not have a Merge feature.  
If two or more people make changes to the same database object, you will get a warning of the conflict and have to choose which one wins (and then manually edit to include the others’ changes).  I think it unlikely you will run into actual conflicts in Stored Procedures and Functions, but you might with Views or Tables.  This will be nice to have, but I’m not losing any sleep over it.  And I have multiple tools at my disposal to do merges manually, so really not a show-stopper for us. Automation has its limits.  As cool as this automation is, it has its limits and there are some changes that you will be better off scripting yourself.  For example, if you are refactoring table definitions, and want to change a column name, you can write that as a quick sp_rename command and preserve the data within that column.  But because this tool is looking just at a before and after picture, it cannot tell that you just renamed a column.  To the tool, it looks like you dropped one column and added another.  This is not a knock against Red-Gate.  All automated scripting tools have this issue, unless the are actively monitoring your every step to know exactly what you are doing.  This means that when you go to Deploy your changes, SQL Compare will script the change as a column drop and add, or will attempt to rebuild the entire table.  Unfortunately, neither of these approaches will preserve the existing data in that column the way an sp_rename will, and so you are better off scripting that change yourself.  Thankfully, SQL Compare will produce warnings about the potential loss of data before it does the actual synchronization and give you a chance to intercept the script and do it yourself. Also, please note that the current official word is that SQL Source Control supports Vault Professional 5.1 and later.  Vault Professional is the new name for what was previously known as Fortress.  (You can read about the name change on SourceGear’s site.)  The last version of Fortress was 2.x, and the API for Fortress 2.x is different from the API for Vault Pro.  At my company, we are currently running Fortress 2.0, with plans to upgrade to Vault Pro early next year.  Gyorgy was able to come up with a work-around for me to be able to use SQL Source Control with Fortress 2.0, even though it is not officially supported.  If you are using Fortress 2.0 and want to use SQL Source Control, be aware that this is not officially supported, but it is working for us, and you can probably get the work-around instructions from Red-Gate if you’re really, really nice to them. Upcoming Topics Some of the other topics I will likely cover in this series over the next few weeks are: How we used to do source control back in the old days (a few weeks ago) before SQL Source Control was available to Vault users What happens when you restore a database that is linked to source control Handling multiple development branches of source code Concurrent Development practices and handling Conflicts Deployment Tips and Best Practices A recap after using the tool for a while
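    On the column-rename caveat above, the difference is easy to see in script form; table and column names here are only for illustration:

      -- Hand-written rename: a metadata-only change, the column's data is preserved.
      EXEC sp_rename 'dbo.Customer.CustName', 'CustomerName', 'COLUMN';

      -- What a compare tool that only sees before/after snapshots will tend to script:
      ALTER TABLE dbo.Customer ADD CustomerName varchar(100) NULL;
      ALTER TABLE dbo.Customer DROP COLUMN CustName;   -- the data in CustName is lost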

    Read the article

  • MySQL for Excel 1.1.3 has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.3, the  latest addition to the MySQL Installer for Windows. MySQL for Excel is an application plug-in enabling data analysts to very easily access and manipulate MySQL data within Microsoft Excel. It enables you to directly work with a MySQL database from within Microsoft Excel so you can easily do tasks such as: Importing MySQL Data into Excel Exporting Excel data directly into MySQL to a new or existing table Editing MySQL data directly within Excel MySQL for Excel is installed using the MySQL Installer for Windows. The MySQL installer comes in 2 versions   Full (150 MB) which includes a complete set of MySQL products with their binaries included in the download Web (1.5 MB - a network install) which will just pull MySQL for Excel over the web and install it when run.   You can download MySQL Installer from our official Downloads page at http://dev.mysql.com/downloads/installer/. MySQL for Excel 1.1.3 introduces the following features:   Upon saving a Workbook containing Worksheets in Edit Mode, the user is asked if he wants to exit the Edit Mode on all Worksheets before their parent Workbook is saved so the Worksheets are saved unprotected, otherwise the Worksheets will remain protected and the users will be able to unprotect them later retrieving the passkeys from the application log after closing MySQL for Excel. Added background coloring to the column names header row of an Import Data operation to have the same look as the one in an Edit Data operation (i.e. gray-ish background). Connection passwords can be stored securely just like MySQL Workbench does and these secured passwords are shared with Workbench in the same way connections are. Changed the way the MySQL for Excel ribbon toggle button works, instead of just showing or hiding the add-in it actually opens and closes it. Added a connection test before any operation against the database (schema creation, data import, append, export or edition) so the operation dialog is not shown and a friendlier error message is shown.   Also this release contains the following bug fixes:   Added a check on every connection test for an expired password, if the password has been expired a dialog is now shown to the user to reset the password. Bug #17354118 - DON'T HANDLE EXPIRED PASSWORDS Added code to escape text values to be imported to an Excel worksheet that start with an equals sign so Excel does not treat those values as formulas that will fail evaluation. This is an option turned on by default that can be turned off by users if they wish to import values to be treated as Excel formulas. Bug #17354102 - ERROR IMPORTING TEXT VALUES TO EXCEL STARTING WITH AN EQUALS SIGN Added code to properly check the reason for a failing connection, if it's a failing password the user gets a dialog to retry the connection with a different password until the connection succeeds, a connection error not related to the password is thrown or the user cancels. If the failing connection is not related to a bad password an error message is shown to the users indicating the reason of the failure. 
Bug #16239007 - CONNECTIONS TO MYSQL SERVICES NOT RUNNING DISPLAY A WRONG PASSWORD ERROR MESSAGE Added global options dialog that can be accessed from the Schema Selection and DB Object Selection panels where the timeouts for the connection to the DB Server and for the query commands can be changed from their default values (15 seconds for the connection timeout and 30 seconds for the query timeout). MySQL Bug #68732, Bug #17191646 - QUERY TIMEOUT CANNOT BE ADJUSTED IN MYSQL FOR EXCEL Changed the Varchar(65,535) data type shown in the Export Data data type combo box to Text since the maximum row size is 65,535 bytes and any autodetected column data type with a length greater than 4,000 should be set to Text actually for the table to be created successfully. MySQL Bug #69779, Bug #17191633 - EXPORT FAILS FOR EXCEL FILES CONTAINING > 4000 CHARACTERS OF TEXT PER CELL Removed code that was replacing all spaces typed by the user in an overriden data type for a new column in an Export Data operation, also improved the data type detection code to flag as invalid data types with parenthesis but without any text inside or where the contents inside the parenthesis are not valid for the specific data type. Bug #17260260 - EXPORT DATA SET TYPE NOT WORKING WITH MEMBER VALUES CONTAINING SPACES Added support for the year data type with a length of 2 or 4 and a validation that valid values are integers between 1901-2155 (for 4-digit years) or between 0-99 (for 2-digit years). Bug #17259915 - EXPORT DATA YEAR DATA TYPE NOT RECOGNIZED IF DECLARED WITH A DISPLAY WIDTH) Fixed code for Export Data operations where users overrode the data type for columns typing Text in the data type combobox, which is a valid data type but was not recognized as such. Bug #17259490 - EXPORT DATA TEXT DATA TYPE NOT RECOGNIZED AS A VALID DATA TYPE Changed the location of the registry where the MySQL for Excel add-in is installed to HKEY_LOCAL_MACHINE instead of HKEY_CURRENT_USER so the add-in is accessible by all users and not only to the user that installed it. For this to work with Excel 2007 a hotfix may be required (see http://support.microsoft.com/kb/976477). MySQL Bug #68746, Bug #16675992 - EXCEL-ADD-IN IS ONLY INSTALLED FOR USER ACCOUNT THAT THE INSTALLATION RUNS UNDER Added support for Excel 2013 Single Document Interface, now that Excel 2013 creates 1 window per workbook also the Excel Add-In maintains an independent custom task pane in each window. MySQL Bug #68792, Bug #17272087 - MYSQL FOR EXCEL SIDEBAR DOES NOT APPEAR IN EXCEL 2013 (WITH WORKAROUND) Included the latest MySQL Utility with a code fix for the COM exception thrown when attempting to open Workbench in the Manage Connections window. Bug #17258966 - MYSQL WORKBENCH NOT OPENED BY CLICKING MANAGE CONNECTIONS HOTLABEL Fixed code for Append Data operations that was not applying a calculated automatic mapping correctly when the source and target tables had different number of columns, some columns with the same name but some of those lying on column indexes beyond the limit of the other source/target table. MySQL Bug #69220, Bug #17278349 - APPEND DOESN'T AUTOMATICALLY DETECT EXCEL COL HEADER WITH SAME NAME AS SQL FIELD Fixed some code for Edit Data operations that was escaping special characters twice (during edition in Excel and then upon sending the query to the MySQL server). 
MySQL Bug #68669, Bug #17271693 - A BACKSLASH IS INSERTED BEFORE AN APOSTROPHE EDITING TABLE WITH MYSQL FOR EXCEL Upgraded MySQL Utility with latest version that encapsulates dialog base classes and introduces more classes to handle Workbench connections, and removed these from the Excel project. Bug #16500331 - CAN'T DELETE CONNECTIONS CREATED WITHIN ADDIN You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel.html You can find our team’s blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • Is Financial Inclusion an Obligation or an Opportunity for Banks?

    - by tushar.chitra
    Why should banks care about financial inclusion? First, the statistics, I think this will set the tone for this blog post. There are close to 2.5 billion people who are excluded from the banking stream and out of this, 2.2 billion people are from the continents of Africa, Latin America and Asia (McKinsey on Society: Global Financial Inclusion). However, this is not just a third-world phenomenon. According to Federal Deposit Insurance Corp (FDIC), in the US, post 2008 financial crisis, one family out of five has either opted out of the banking system or has been moved out (American Banker). Moving this huge unbanked population into mainstream banking is both an opportunity and a challenge for banks. An obvious opportunity is the significant untapped customer base that banks can target, so is the positive brand equity a bank can build by fulfilling its social responsibilities. Also, as banks target the cost-conscious unbanked customer, they will be forced to look at ways to offer cost-effective products and services, necessitating technology upgrades and innovations. However, cost is not the only hurdle in increasing the adoption of banking services. The potential users need to be convinced of the benefits of banking and banks will also face stiff competition from unorganized players. Finally, the banks will have to believe in the viability of this business opportunity, and not treat financial inclusion as an obligation. In what ways can banks target the unbanked For financial inclusion to be a success, banks should adopt innovative business models to develop products that address the stated and unstated needs of the unbanked population and also design delivery channels that are cost effective and viable in the long run. Through business correspondents and facilitators In rural and remote areas, one of the major hurdles in increasing banking penetration is connectivity and accessibility to banking services, which makes last mile inclusion a daunting challenge. To address this, banks can avail the services of business correspondents or facilitators. This model allows banks to establish greater connectivity through a trusted and reliable intermediary. In India, for instance, banks can leverage the local Kirana stores (the mom & pop stores) to service rural and remote areas. With a supportive nudge from the central bank, the commercial banks can enlist these shop owners as business correspondents to increase their reach. Since these neighborhood stores are acquainted with the local population, they can help banks manage the KYC norms, besides serving as a conduit for remittance. Banks also have an opportunity over a period of time to cross-sell other financial products such as micro insurance, mutual funds and pension products through these correspondents. To exercise greater operational control over the business correspondents, banks can also adopt a combination of branch and business correspondent models to deliver financial inclusion. Through mobile devices According to a 2012 world bank report on financial inclusion, out of a world population of 7 billion, over 5 billion or 70% have mobile phones and only 2 billion or 30% have a bank account. What this means for banks is that there is scope for them to leverage this phenomenal growth in mobile usage to serve the unbanked population. Banks can use mobile technology to service the basic banking requirements of their customers with no frills accounts, effectively bringing down the cost per transaction. 
As I had discussed in my earlier post on mobile payments, though non-traditional players have taken the lead in P2P mobile payments, banks still hold an edge in terms of infrastructure and reliability. Through crowd-funding According to the Crowdfunding Industry Report by Massolution, the global crowdfunding industry raised $2.7 billion in 2012, and is projected to grow to $5.1 billion in 2013. With credit policies becoming tighter and banks becoming more circumspect in terms of loan disbursals, crowdfunding has emerged as an alternative channel for lending. Typically, these initiatives target the unbanked population by offering small loans that are unviable for larger banks. Though a significant proportion of crowdfunding initiatives globally are run by non-banking institutions, banks are also venturing into this space. The next step towards inclusive finance Banks by themselves cannot make financial inclusion a success. There is a need for a whole ecosystem that is supportive of this mission. The policy makers, that include the regulators and government bodies, must be in sync, the IT solution providers must put on their thinking caps to come out with innovative products and solutions, communication channels such as internet and mobile need to expand their reach, and the media and the public need to play an active part. The other challenge for financial inclusion is from the banks themselves. While it is true that financial inclusion will unleash a hitherto hugely untapped market, the normal banking model may be found wanting because of issues such as flexibility, convenience and reliability. The business will be viable only when there is a focus on increasing the usage of existing infrastructure and that is possible when the banks can offer the entire range of products and services to the large number of users of essential banking services. Apart from these challenges, banks will also have to quickly master and replicate the business model to extend their reach to the remotest regions in their respective geographies. They will need to ensure that the transactions deliver a viable business benefit to the bank. For tapping cross-sell opportunities, banks will have to quickly roll-out customized and segment-specific products. The bank staff should be brought in sync with the business plan by convincing them of the viability of the business model and the need for a business correspondent delivery model. Banks, in collaboration with the government and NGOs, will have to run an extensive financial literacy program to educate the unbanked about the benefits of banking. Finally, with the growing importance of retail banking and with many unconventional players eyeing the opportunity in payments and other lucrative areas of banking, banks need to understand the importance of micro and small branches. These micro and small branches can help banks increase their presence without a huge cost burden, provide bankers an opportunity to cross sell micro products and offer a window of opportunity for the large non-banked population to transact without any interference from intermediaries. These branches can also help diminish the role of the unorganized financial sector, such as local moneylenders and unregistered credit societies. This will also help banks build a brand awareness and loyalty among the users, which by itself has a cascading effect on the business operations, especially among the rural and un-banked centers. 
In conclusion, with the increasingly competitive banking sector facing frequent slowdowns and downturns, the unbanked population presents a huge opportunity for banks to enhance their customer base and fulfill their social responsibility.

    Read the article

  • Full-text Indexing Books Online

    - by Most Valuable Yak (Rob Volk)
    While preparing for a recent SQL Saturday presentation, I was struck by a crazy idea (shocking, I know): Could someone import the content of SQL Server Books Online into a database and apply full-text indexing to it?  The answer is yes, and it's really quite easy to do. The first step is finding the installed help files.  If you have SQL Server 2012, BOL is installed under the Microsoft Help Library.  You can find the install location by opening SQL Server Books Online and clicking the gear icon for the Help Library Manager.  When the new window pops up click the Settings link, you'll get the following: You'll see the path under Library Location. Once you navigate to that path you'll have to drill down a little further, to C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store.  This is where the help file content is kept if you downloaded it for offline use. Depending on which products you've downloaded help for, you may see a few hundred files.  Fortunately they're named well and you can easily find the "SQL_Server_Denali_Books_Online_" files.  We are interested in the .MSHC files only, and can skip the Installation and Developer Reference files. Despite the .MHSC extension, these files are compressed with the standard Zip format, so your favorite archive utility (WinZip, 7Zip, WinRar, etc.) can open them.  When you do, you'll see a few thousand files in the archive.  We are only interested in the .htm files, but there's no harm in extracting all of them to a folder.  7zip provides a command-line utility and the following will extract to a D:\SQLHelp folder previously created: 7z e –oD:\SQLHelp "C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store\SQL_Server_Denali_Books_Online_B780_SQL_110_en-us_1.2.mshc" *.htm Well that's great Rob, but how do I put all those files into a full-text index? I'll tell you in a second, but first we have to set up a few things on the database side.  I'll be using a database named Explore (you can certainly change that) and the following setup is a fragment of the script I used in my presentation: USE Explore; GO CREATE SCHEMA help AUTHORIZATION dbo; GO -- Create default fulltext catalog for later FT indexes CREATE FULLTEXT CATALOG FTC AS DEFAULT; GO CREATE TABLE help.files(file_id int not null IDENTITY(1,1) CONSTRAINT PK_help_files PRIMARY KEY, path varchar(256) not null CONSTRAINT UNQ_help_files_path UNIQUE, doc_type varchar(6) DEFAULT('.xml'), content varbinary(max) not null); CREATE FULLTEXT INDEX ON help.files(content TYPE COLUMN doc_type LANGUAGE 1033) KEY INDEX PK_help_files; This will give you a table, default full-text catalog, and full-text index on that table for the content you're going to insert.  I'll be using the command line again for this, it's the easiest method I know: for %a in (D:\SQLHelp\*.htm) do sqlcmd -S. -E -d Explore -Q"set nocount on;insert help.files(path,content) select '%a', cast(c as varbinary(max)) from openrowset(bulk '%a', SINGLE_CLOB) as c(c)" You'll need to copy and run that as one line in a command prompt.  I'll explain what this does while you run it and watch several thousand files get imported: The "for" command allows you to loop over a collection of items.  In this case we want all the .htm files in the D:\SQLHelp folder.  For each file it finds, it will assign the full path and file name to the %a variable.  In the "do" clause, we'll specify another command to be run for each iteration of the loop.  I make a call to "sqlcmd" in order to run a SQL statement.  
I pass in the name of the server (-S.), where "." represents the local default instance. I specify -d Explore as the database, and -E for trusted connection.  I then use -Q to run a query that I enclose in double quotes. The query uses OPENROWSET(BULK…SINGLE_CLOB) to open the file as a data source, and to treat it as a single character large object.  In order for full-text indexing to work properly, I have to convert the text content to varbinary. I then INSERT these contents along with the full path of the file into the help.files table created earlier.  This process continues for each file in the folder, creating one new row in the table. And that's it! 5 SQL Statements and 2 command line statements to unzip and import SQL Server Books Online!  In case you're wondering why I didn't use FILESTREAM or FILETABLE, it's simply because I haven't learned them…yet. I may return to this blog after I figure that out and update it with the steps to do so.  I believe that will make it even easier. In the spirit of exploration, I'll leave you to work on some fulltext queries of this content.  I also recommend playing around with the sys.dm_fts_xxxx DMVs (I particularly like sys.dm_fts_index_keywords, it's pretty interesting).  There are additional example queries in the download material for my presentation linked above. Many thanks to Kevin Boles (t) for his advice on (re)checking the content of the help files.  Don't let that .htm extension fool you! The 2012 help files are actually XML, and you'd need to specify '.xml' in your document type column in order to extract the full-text keywords.  (You probably noticed this in the default definition for the doc_type column.)  You can query sys.fulltext_document_types to get a complete list of the types that can be full-text indexed. I also need to thank Hilary Cotter for giving me the original idea. I believe he used MSDN content in a full-text index for an article from waaaaaaaaaaay back, that I can't find now, and had forgotten about until just a few days ago.  He is also co-author of Pro Full-Text Search in SQL Server 2008, which I highly recommend.  He also has some FTS articles on Simple Talk: http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features/ http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features,-part-2/
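In the spirit of that invitation, here is a small, hedged starting point (not from the original post) for querying the imported content from C#; the connection string, search terms and output format are assumptions you would adjust to your own environment:

using System;
using System.Data.SqlClient;

class BolSearch
{
    static void Main()
    {
        // Assumed connection details: local default instance, trusted connection,
        // and the Explore database created above.
        const string connectionString = "Server=.;Database=Explore;Integrated Security=true";

        // CONTAINS uses the full-text index on help.files(content);
        // the search terms here are just examples.
        const string sql =
            "SELECT f.file_id, f.path " +
            "FROM help.files AS f " +
            "WHERE CONTAINS(f.content, '\"FILESTREAM\" OR \"FILETABLE\"');";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}\t{1}", reader.GetInt32(0), reader.GetString(1));
            }
        }
    }
}

Running the same CONTAINS predicate directly in Management Studio works just as well; the C# wrapper is only there to show the query embedded in application code.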

    Read the article

  • The Great Divorce

    - by BlackRabbitCoder
    I have a confession to make: I've been in an abusive relationship for more than 17 years now.  Yes, I am not ashamed to admit it, but I'm finally doing something about it. I met her in college, she was new and sexy and amazingly fast -- and I'd never met anything like her before.  Her style and her power captivated me and I couldn't wait to learn more about her.  I took a chance on her, and though I learned a lot from her -- and will always be grateful for my time with her -- I think it's time to move on. Her name was C++, and she so outshone my previous love, C, that any thoughts of going back evaporated in the heat of this new romance.  She promised me she'd be gentle and not hurt me the way C did.  She promised me she'd clean-up after herself better than C did.  She promised me she'd be less enigmatic and easier to keep happy than C was.  But I was deceived.  Oh sure, as far as truth goes, it wasn't a complete lie.  To some extent she was more fun, more powerful, safer, and easier to maintain.  But it just wasn't good enough -- or at least it's not good enough now. I loved C++, some part of me still does, it's my first-love of programming languages and I recognize its raw power, its blazing speed, and its improvements over its predecessor.  But with today's hardware, at speeds we could only dream to conceive of twenty years ago, that need for speed -- at the cost of all else -- has died, and that has left my feelings for C++ moribund. If I ever need to write an operating system or a device driver, then I might need that speed.  But 99% of the time I don't.  I'm a business-type programmer and chances are 90% of you are too, and even the ones who need speed at all costs may be surprised by how much you sacrifice for that.   That's not to say that I don't want my software to perform, and it's not to say that in the business world we don't care about speed or that our job is somehow less difficult or technical.  There's many times we write programs to handle millions of real-time updates or handle thousands of financial transactions or tracking trading algorithms where every second counts.  But if I choose to write my code in C++ purely for speed chances are I'll never notice the speed increase -- and equally true chances are it will be far more prone to crash and far less easy to maintain.  Nearly without fail, it's the macro-optimizations you need, not the micro-optimizations.  If I choose to write a O(n2) algorithm when I could have used a O(n) algorithm -- that can kill me.  If I choose to go to the database to load a piece of unchanging data every time instead of caching it on first load -- that too can kill me.  And if I cross the network multiple times for pieces of data instead of getting it all at once -- yes that can also kill me.  But choosing an overly powerful and dangerous mid-level language to squeeze out every last drop of performance will realistically not make stock orders process any faster, and more likely than not open up the system to more risk of crashes and resource leaks. And that's when my love for C++ began to die.  When I noticed that I didn't need that speed anymore.  That that speed was really kind of a lie.  Sure, I can be super efficient and pack bits in a byte instead of using separate boolean values.  Sure, I can use an unsigned char instead of an int.  But in the grand scheme of things it doesn't matter as much as you think it does.  The key is maintainability, and that's where C++ failed me.  
I like to tell the other developers I work with that there are two levels of correctness in coding: Is it immediately correct? Will it stay correct? That is, you can hack together any piece of code and make it correct to satisfy a task at hand, but if a new developer can't come in tomorrow and make a fairly significant change to it without jeopardizing that correctness, it won't stay correct. Some people laugh at me when I say I now prefer maintainability over speed.  But that is exactly the point.  If you focus solely on speed you tend to produce code that is much harder to maintain over the long haul, and that's a load of technical debt most shops can't afford to carry, and they end up completely scrapping code before its time.  When good code is written well for maintainability, though, it can be correct both now and in the future. And you know the best part is?  My new love is nearly as fast as C++, and in some cases even faster -- and better than that, I know C# will treat me right.  Her creators have poured hundreds of thousands of hours of time into making her the sexy beast she is today.  They made her easy to understand and not an enigmatic mess.  They made her consistent and not moody and amorphous.  And they made her perform as fast as I care to go by optimizing her both at compile time and at run-time. Her code is so elegant and easy on the eyes that I'm not worried where she will run to or what she'll pull behind my back.  She is powerful enough to handle all my tasks, fast enough to execute them with blazing speed, maintainable enough so that I can rely on even fairly new peers to modify my work, and rich enough to allow me to satisfy any need.  C# doesn't ask me to clean up her messes!  She cleans up after herself and she tries to make my life easier for me by taking on most of those optimization tasks C++ asked me to take upon myself.  Now, there are many of you who would say that I am the cause of my own grief, that it was my fault C++ didn't behave because I didn't pay enough attention to her.  That I alone caused the pain she inflicted on me.  And to some extent, you have a point.  But she was so high maintenance, requiring me to know every twist and turn of her vast and unrestrained power, that any wrong term or bout of forgetfulness was met with painful reminders that she wasn't going to watch my back when I made a mistake.  But C#, she loves me when I'm good, and she loves me when I'm bad, and together we make beautiful code that is both fast and safe. So that's why I'm leaving C++ behind.  She says she's changing for me, but I have no interest in what C++0x may bring.  Oh, I'll still keep in touch, and maybe I'll see her now and again when she brings her problems to my door and asks for some attention -- for I always have a soft spot for her, you see.  But she's out of my house now.  I have three kids and a dog and a cat, and all require me to clean up after them, why should I have to clean up after my programming language as well?

    Read the article

  • How-to delete a tree node using the context menu

    - by frank.nimphius
Hierarchical trees in Oracle ADF make use of View Accessors, which means that only the top level node needs to be exposed as a View Object instance on the ADF Business Components Data Model. This also means that only the top level node has a representation in the PageDef file as a tree binding and iterator binding reference. Detail nodes are accessed through tree rule definitions that use the accessor mentioned above (or nested collections in the case of POJO or EJB business services). The tree component is configured for single node selection, which however can be declaratively changed for users to press the ctrl key and select multiple nodes. In the following, I explain how to create a context menu on the tree for users to delete the selected tree nodes. For this, the context menu item will access a managed bean, which then determines the selected node(s), the internal ADF node bindings and the rows they represent. As mentioned, the ADF Business Components Data Model only needs to expose the top level node data sources, which in this example is an instance of the Locations View Object. For the tree to work, you need to have associations defined between entities, which usually is done for you by Oracle JDeveloper if the database tables have foreign keys defined. Note: As a general hint of best practices and to simplify your life: Make sure your database schema is well defined and designed before starting your development project. Don't treat the database as something organic that grows and changes with the requirements as you proceed in your project. Business service refactoring in response to database changes is possible, but should be treated as an exception, not the rule. Good database design is a necessity – even for application developers – and nothing evil. To create the tree component, expand the Data Controls panel and drag the View Object collection to the view. From the context menu, select the tree component entry and continue with defining the tree rules that make up the hierarchical structure. As you see, when pressing the green plus icon in the Edit Tree Binding dialog, the data structure, Locations – Departments – Employees in my sample, shows without you having created a View Object instance for each of the nodes in the ADF Business Components Data Model. After you configured the tree structure in the Edit Tree Binding dialog, you press OK and the tree is created. Select the tree in the page editor and open the Structure Window (ctrl+shift+S). In the Structure window, expand the tree node to access the contextMenu facet. Use the right mouse button to insert a Popup into the facet. Repeat the same steps to insert a Menu and a Menu Item into the Popup you created. The Menu Item text should be changed to something meaningful like "Delete". Note that the custom menu item later is added to the context menu together with the default context menu options like expand and expand all. To define the action that is executed when the menu item is clicked on, you select the Action Listener property in the Property Inspector and click the arrow icon followed by the Edit menu option. Create or select a managed bean and define a method name for the action handler. Next, select the tree component and browse to its binding property in the Property Inspector. Again, use the arrow icon | Edit option to create a component binding in the same managed bean that has the action listener defined.
The tree handle is used in the action listener code, which is shown below:

public void onTreeNodeDelete(ActionEvent actionEvent) {
  //access the tree from the JSF component reference created
  //using the af:tree "binding" property. The "binding" property
  //creates a pair of set/get methods to access the RichTree instance
  RichTree tree = this.getTreeHandler();

  //get the list of selected row keys
  RowKeySet rks = tree.getSelectedRowKeys();

  //access the iterator to loop over selected nodes
  Iterator rksIterator = rks.iterator();

  //The CollectionModel represents the tree model and is
  //accessed from the tree "value" property
  CollectionModel model = (CollectionModel) tree.getValue();

  //The CollectionModel is a wrapper for the ADF tree binding
  //class, which is JUCtrlHierBinding
  JUCtrlHierBinding treeBinding =
                 (JUCtrlHierBinding) model.getWrappedData();

  //loop over the selected nodes and delete the rows they
  //represent
  while(rksIterator.hasNext()){
    List nodeKey = (List) rksIterator.next();
    //find the ADF node binding using the node key
    JUCtrlHierNodeBinding node =
                      treeBinding.findNodeByKeyPath(nodeKey);
    //delete the row.
    Row rw = node.getRow();
    rw.remove();
  }

  //only refresh the tree if tree nodes have been selected
  if(rks.size() > 0){
    AdfFacesContext adfFacesContext =
                         AdfFacesContext.getCurrentInstance();
    adfFacesContext.addPartialTarget(tree);
  }
}

Note: To enable multi node selection for a tree, select the tree and change the row selection setting from "single" to "multiple". Note: a fully pictured version of this post will become available at the end of the month in a PDF summary on ADF Code Corner: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part,  I was sharing with you how a broad-based transformation makes cloud more than a technology initiative, I will describe in this section how it requires people (organizational) and process changes as well, and these changes are as critical as is the choice of right tools and technology. People: Most IT organizations have a fairly complex organizational structure. There are different groups, managing different pieces of the puzzle, and yet, they don't always work together. Provisioning a new application therefore may require a request to float endlessly through system administrators, DBAs and middleware admin worlds – resulting in long delays and constant finger pointing.  Cloud users expect end-to-end automation - which requires these silos to be greatly simplified, if not completely eliminated.  Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages back will not only impede cloud adoption,  it also risks making the IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.   Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging on to outdated manual approval processes .  While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only this causes delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies, quotas that the administrators can define upfront . For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources. Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it  enables a much higher degree of automation - thereby providing users the required agility while minimizing operational costs.  Obviously, automation is the key to cloud. Unfortunately it hasn’t received as much attention within enterprises as it should have.  Many organizations are just now waking up to the criticality of automation and it still often gets relegated to back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of the cloud management software extremely critical.  For a cloud management software to be effective in an enterprise environment, it must meet the following qualifications: Broad and Deep Solution It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about.  Its footprint must cover physical and virtual systems, as well as infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. 
While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise. Run and Manage Efficiently Your Mission Critical Applications It should not only be able to run your mission critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications - and not the other way around. Automated, Integrated Set of Cloud Management Capabilities At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you. At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private, hybrid, infrastructure, platform and application clouds. It is no coincidence therefore that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud enabling technologies. I will dedicate the third and final part of the blog "Clouds, Clouds Everywhere But not a Drop of Rain" to Oracle Cloud Technologies Building Blocks and how they map into our vision of Enterprise Cloud. Stay tuned.

    Read the article

  • Azure - Part 4 - Table Storage Service in Windows Azure

    - by Shaun
In the Windows Azure platform there are 3 storage services we can use to save our data in the cloud: the Table, Blob and Queue. Before the Chinese New Year Microsoft announced that Azure SDK 1.1 had been released and that it supports a new type of storage – Drive – which allows us to operate on NTFS files in the cloud. I will cover it in the coming few posts, but now I would like to talk a bit about the Table Storage.   Concept of Table Storage Service The most common development scenario is to retrieve, create, update and remove data from the data storage. Normally we communicate with a database for this. When we attempt to move our application over to the cloud, the most common requirement is to have a storage service. Windows Azure provides a built-in service that allows us to store structured data, which is called the Windows Azure Table Storage Service. The data stored in the table service is organized as collections of entities, and the entities are similar to rows or records in a traditional database. An entity has a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key and the timestamp as the identifier for solving the concurrency problem. Unlike a table in a database, the table service does not enforce a schema for tables, which means you can have 2 entities in the same table with different property sets. The partition key is used by the Azure OS for load balancing and for entity group transactions. As you know, in the cloud you will never know which machine is hosting your application and your data. They could be moved based on the transaction weight and the number of requests. If the Azure OS finds that there are many requests connecting to your Book entities with the partition key "Novel", it will move them to another idle machine to increase performance. So when choosing the partition key for your entities you need to make sure it indicates the category or group information, so that the Azure OS can perform the load balancing as you wish.   Consuming the Table Although the table service looks like a database, you cannot access it the way you are used to, with ADO.NET or ODBC. The table service exposes itself through the ADO.NET Data Services protocol, which allows you to consume it in a RESTful style via HTTP requests. The Azure SDK provides a set of classes for us to connect to it. There are 2 classes we might need: TableServiceContext and TableServiceEntity. The TableServiceContext inherits from DataServiceContext, which represents the runtime context of the ADO.NET data service. It provides 4 methods mainly used by us: CreateQuery: creates an IQueryable instance from a given type of entity. AddObject: adds the specified entity into the Table Service. UpdateObject: updates an existing entity in the Table Service. DeleteObject: deletes an entity from the Table Service. Before you operate on the table service you need to provide valid account information. It's something like the connection string of a database, but with the account name and account key you received when you created the storage service on the Windows Azure Development Portal. After getting the CloudStorageAccount you can create the CloudTableClient instance, which provides a set of methods for using the table service. A very useful method is CreateTableIfNotExist. It will create the table container for you if it does not exist yet.
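As a hedged illustration of the classes just mentioned (the connection value and table name are my assumptions for a local development sketch, not the post's exact code), the account and table-client setup could look roughly like this:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// In development we can point at the local storage emulator;
// in production this would be the account name/key connection string.
CloudStorageAccount storageAccount =
    CloudStorageAccount.Parse("UseDevelopmentStorage=true");

// CloudTableClient exposes the table-level operations.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

// Idempotent: creates the "Book" table container only if it is missing.
tableClient.CreateTableIfNotExist("Book");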
And then you can operate on the entities in that table through the methods I mentioned above. Let me explain a bit more through an example; we always like code rather than sentences.   Straightforward Access to the Table Here I would like to build a WCF service on the Windows Azure platform, with for now just one requirement: it should allow the client to create an account entity on the table service. The WCF service would have a method named Register that accepts the account information the client wants to create. After performing some validation it will add the entity to the table service. So the first thing I should do is to create a Cloud Application in my Visual Studio 2010 RC. (The Azure SDK 1.1 only supports VS2008 and VS2010 RC.) The solution should be like this below. Then I added a configuration item for the storage account through the Settings section under the cloud project. (Double click the Services file under the Roles folder and navigate to the Settings section.) This setting will be used to retrieve my storage account information. Since for now I am just in the development phase I will select "UseDevelopmentStorage=true". And then I navigated to the WebRole.cs file under my WCF project. If you have read my previous posts you would know that this file defines what happens when the application starts and terminates on the cloud. What I need to do, when the application starts, is set the configuration publisher to load my config file with the config name I specified. So the code would be like below. I removed the original service and contract created by the VS template and added my IAccountService contract and its implementation class - AccountService. And I added the service method Register with the parameters email and password; it returns a boolean value to indicate the result, which is very simple. At this moment if I press F5 the application will be deployed on my local development fabric and I can see my service runs well through the browser. Let's implement the service method Register, which adds a new entity to the table service. As I said before, the entities you want to store in the table service must have 3 properties: partition key, row key and timestamp. You can create a class with these 3 properties. The Azure SDK provides us a base class for that named TableServiceEntity, in the Microsoft.WindowsAzure.StorageClient namespace. So what we need to do is simple: create a class named Account and let it derive from TableServiceEntity. And I need to add my own properties: Email, Password, DateCreated and DateDeleted. The DateDeleted is a nullable date time value to indicate whether this entity has been deleted and when. Do you notice that I missed something here? Yes, it's the partition key and row key I didn't assign. The TableServiceEntity base class defines 2 constructors: one is a parameter-less constructor which will be used to fill values into the properties from the table service when retrieving data. The other takes 2 parameters: partition key and row key. As I said before, the partition key may affect the load balancing and the row key must be unique, so here I would like to use the email as the partition key and the email plus a Guid as the row key. OK, now we have finished the entity class we need to store in the table service. The next step is to create a data access class for us to add it. The Azure SDK gives us a base class for it named TableServiceContext, as I mentioned before. So let's create a class to operate on the Account entities.
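For reference, here is a minimal sketch of the Account entity just described; it follows the post's naming, but the exact member shapes are my assumptions:

using System;
using Microsoft.WindowsAzure.StorageClient;

public class Account : TableServiceEntity
{
    // Parameter-less constructor: used by the storage client when materializing query results.
    public Account() { }

    // Partition key = the email (grouping); row key = the email plus a Guid (uniqueness).
    public Account(string email, string password)
        : base(email, email + "_" + Guid.NewGuid().ToString())
    {
        Email = email;
        Password = password;
        DateCreated = DateTime.UtcNow;
        DateDeleted = null;
    }

    public string Email { get; set; }
    public string Password { get; set; }
    public DateTime DateCreated { get; set; }
    public DateTime? DateDeleted { get; set; }
}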
The TableServiceContext needs the storage account information for its constructor. It's the combination of the storage service URI that we created on the Windows Azure platform, and the relevant account name and key. The TableServiceContext will use this information to find the related address and verify the account to operate on the storage entities. Hence in my AccountDataContext class I need to override this constructor and pass the storage account into it. All entities will be saved in the table storage in one or many tables, which we call "table containers". Before we operate on an entity we need to make sure that the table container has been created on the storage. There's a method we can use for that: CloudTableClient.CreateTableIfNotExist. So in the constructor I will call it first, to make sure all methods will be invoked after the table has been created. Notice that I passed the storage account endpoint URI and the credentials to specify where my storage is located and who I am. Another piece of advice: make your entity class name the same as the table name when you create the table. It will increase performance when you operate on it over the cloud, especially when querying. Since the Register WCF method will add a new account into the table service, here I will create a relevant method to add the account entity. Before implementing it, I should add a reference - System.Data.Services.Client - to the project. This reference provides some common methods within the ADO.NET Data Services library which can be used with the Windows Azure Table Service. I will use its AddObject method to create my account entity. Since the table service does not fully implement the ADO.NET Data Services protocol, there are some methods in System.Data.Services.Client that TableServiceContext doesn't support, such as AddLinks, etc. Then I implemented the service method to add the account entity through the AccountDataContext. You can see that in the service implementation I load the storage account information through my configuration file and create the account table entity from the parameters. Then I create the AccountDataContext. If it's the first time this method is invoked, the constructor of the AccountDataContext will create the table container for me. Then I use the Add method to add the account entity into the table. Next, let's create a fairly simple client application to test this service. I created a Windows console application and added a service reference to my WCF service. The metadata information of the WCF service cannot be retrieved if it's deployed on Windows Azure, even though <serviceMetadata httpGetEnabled="true"/> has been set. If we need to get its metadata we can deploy it on the local development fabric and then change the endpoint to the address on the cloud. In the client side app.config file I specified the endpoint to the local development fabric address. And then I just implemented the client to let me input an email and a password, then invoke the WCF service to add my account. Let's run my application and see the result. Of course it should return TRUE to me. And in the local SQL Express I can see the data has been saved in the table.   Summary In this post I explained more about the Windows Azure Table Storage Service. I also created a small application for demonstration of how to connect to and consume it through the ADO.NET Data Services Managed Library provided within the Azure SDK. I only showed how to create an entity in the storage service.
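To tie the pieces described above together, here is a hedged sketch of the AccountDataContext and the Register flow; the table name, configuration setting name and the omitted validation are assumptions on my part, not the post's exact code:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class AccountDataContext : TableServiceContext
{
    // Keep the table name identical to the entity class name, as advised above.
    private const string TableName = "Account";

    public AccountDataContext(CloudStorageAccount storageAccount)
        : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
    {
        // Make sure the table container exists before any entity operation.
        storageAccount.CreateCloudTableClient().CreateTableIfNotExist(TableName);
    }

    public void Add(Account account)
    {
        AddObject(TableName, account); // queue the insert
        SaveChanges();                 // send it to the table service
    }
}

public class AccountService : IAccountService
{
    public bool Register(string email, string password)
    {
        // The setting name is an assumption; in development it maps to "UseDevelopmentStorage=true".
        var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var context = new AccountDataContext(storageAccount);
        context.Add(new Account(email, password));
        return true;
    }
}

In a real service you would validate the email and password before touching storage; the sketch leaves that out to keep the storage flow visible.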
In the next post I would like to explain how to query the entities with conditions through LINQ. I also would like to refactor my AccountDataContext class to make it dynamic for any kind of entity.   Hope this helps, Shaun   All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization. The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

double min = double.MaxValue;
foreach(var item in collection)
{
    double value = item.PerformComputation();
    min = System.Math.Min(min, value);
}

This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition, and get the wrong answer. Let's look at what happens here:

// Buggy code! Do not use!
double min = double.MaxValue;
Parallel.ForEach(collection, item =>
{
    double value = item.PerformComputation();
    min = System.Math.Min(min, value);
});

This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time, and set the wrong value for min. Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first, and then element 2 sets the min variable, we'll detect a min value of 2 instead of 1. This can lead to wrong answers. Unfortunately, fixing this, with the Parallel.ForEach call we're using, would require adding locking. We would need to rewrite this like:

// Safe, but slow
double min = double.MaxValue;
// Make a "lock" object
object syncObject = new object();
Parallel.ForEach(collection, item =>
{
    double value = item.PerformComputation();
    lock(syncObject)
        min = System.Math.Min(min, value);
});

This will potentially add a huge amount of overhead to our calculation. Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation. The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation.
This leads to two observations I'll make: When parallelizing a routine, try to avoid locks. That being said: Always add any and all required synchronization to avoid race conditions. These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time. However, this isn't the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE). Processing Element is the standard term for a hardware element which can process and execute instructions. This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger "chunks" which get processed on different threads via the ThreadPool. This helps reduce the threading overhead, and helps the overall speed. In general, the Parallel class will only use one thread per PE in the system. Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design. We can parallelize our algorithm more effectively by approaching it differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform this in a given order. We knew this to be true already – otherwise, we wouldn't have been able to parallelize this routine in the first place. With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, "merge" together the results. This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results. To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as:

public static ParallelLoopResult ForEach<TSource, TLocal>(
    IEnumerable<TSource> source,
    Func<TLocal> localInit,
    Func<TSource, ParallelLoopState, TLocal, TLocal> body,
    Action<TLocal> localFinally
)

The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread's operations. The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed. The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it's been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation. Now that I've explained how this works, let's look at the code:

// Safe, and fast!
double min = double.MaxValue;
// Make a "lock" object
object syncObject = new object();
Parallel.ForEach(
    collection,
    // First, we provide a local state initialization delegate.
    () => double.MaxValue,
    // Next, we supply the body, which takes the original item, loop state,
    // and local state, and returns a new local state
    (item, loopState, localState) =>
    {
        double value = item.PerformComputation();
        return System.Math.Min(localState, value);
    },
    // Finally, we provide an Action<TLocal>, to "merge" results together
    localState =>
    {
        // This requires locking, but it's only once per used thread
        lock(syncObject)
            min = System.Math.Min(min, localState);
    }
);

Although this is a bit more complicated than the previous version, it is now both thread-safe and has minimal locking. This same approach can be used by Parallel.For, although now, it's Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results. Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add. By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
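As a hedged illustration of that Interlocked approach (this example is mine, not from the original post), a count-style aggregation can merge per-thread results without any lock at all; collection and PerformComputation are reused from the earlier snippets, and threshold is an assumed variable:

using System.Threading;
using System.Threading.Tasks;

// Count the items whose computed value exceeds a threshold.
long count = 0;

Parallel.ForEach(
    collection,
    // Local state: this thread's private count.
    () => 0L,
    // Body: update only the thread-local count, so no shared state is touched here.
    (item, loopState, localCount) =>
    {
        double value = item.PerformComputation();
        return value > threshold ? localCount + 1 : localCount;
    },
    // Merge: one atomic add per thread instead of one lock per element.
    localCount => Interlocked.Add(ref count, localCount));

Because the merge is a single atomic addition, this avoids the lock entirely; for aggregations like Min over doubles, where no single Interlocked method applies, the lock-per-thread version above remains the simpler choice.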

    Read the article

  • NINE Questions with Michelle Juett

    - by NINEQuestions
    Michelle Juett is one of the more interesting people I know, even though we’ve never met face to face. She’s part artist, part techie and all cool. We “met” via my good buddy George Clingerman and have plotting to take over the world, errr… I mean “collaborating” ever since. If you happen to live in the Seattle area, you can catch her and her work at Sakura Con on April 2-4, 2010 and various other gamer and art cons throughout the year. You can also find her on Twitter as @Shelldragon. Now that you know a little bit, I’ll let her tell you the rest of the story in these NINE Questions: 1. Where are you from? I was born in Clearwater, Florida. I like to tell people I'm from the Bermuda Triangle, it just makes explaining myself so much easier. My family moved to Washington when I was 5 and I've been in the Pacific Northwest ever since. We like to QQ about the rain but we really love the green trees and clean water. 2. What do you do? I fight evil by moonlight and win love by daylight.. or something like that.  I’ve been in quality assurance for games during the day since January 2008 and an artist for life. I currently work in QA for a really awesome game company in Bellevue.  At home, I work on personal digital art, making game assets as well as other random freelance projects as they pop up. 3. How did you get to where you are now? I'm still not where I want to be but I'm getting closer. The biggest piece of advice I can give is to work hard and never settle for the minimum required. I tend to overwork myself but I've never regretted it. You can want something really bad but if you aren't willing to work for it, then you can't expect it to just happen. I've always drawn and had an unhealthy love for video games that I was told I’d grow out of.  I knew I would not ‘grow out’ of games and that real adults make them and I could too. After I graduated, in searching for jobs, I discovered game testing. I figured this would be a good way to get my foot in the door and start networking. I’ve worked with consoles, websites and now, PC games.  I stuck with my journey, although it has been a rocky one, daylighting as a tester and moonlighting as an artist. I'm still on that journey but I wouldn't have it any other way. Test has given me a perspective that is difficult, if not impossible, to obtain any other way. It gives an unconditional respect for other hard working testers and an insight into creative problem solving. 4. So video game testing probably sounds WAY cooler than the reality. What's it like? What's a given day for you? Game testers don't get a lot of respect because of their stigmas and the fact most people don't actually know what we do.  People hear about the opening and closing disc trays all day. Many places do treat their testers like numbers. It all depends on where you work and how awesome your company is. I've had to deal with a lot of bad work situations to get to a really good one. QA exists to ensure the game is as flawless and enjoyable as it can be by the time it has to leave the nest and go out into the world. This includes everything obvious: “can I beat the level and save the princess?” to the more obscure: ‘What happens when I lose internet connection while trying to save right before falling into a pit to my death while holding the jump key then my cat pulls out my memory card and hides it in her litter box?” On the dev side, for developers, testers can be very scary people. Especially when the test team is not in house and you can’t see each other’s faces.  
I've seen both sides. We don't mean to hurt your feelings. We really DO love you and want your game to be the best it can be! It can be some serious tough love. 5. You are also an accomplished artist. Got any major projects right now you'd like to talk about? LOL, I don't know if I’d say I'm an accomplished artist just yet. I’m still a long way from where I want to be. I figure that’s what makes you grow though: the desire to never stop improving. I like QA but I want to be a full time artist. I was lucky enough to register for a table at Sakura Con in the 11 second window that the tables sold out. As such, I’ll be selling my wares in the Artist Alley April 2-4th. Part of preparing for this is actually making the art to be sold there. Anime is a fun pass time but I don’t draw a whole lot of it so I’m making up for lost time. As I seem to enjoy burying myself in work, I’m an art lead for a secret project that’s so secret I might be killed tonight for even mentioning it. I also take on various freelance projects and do what I can to help out indie games. I discovered the XNA community a year and a half ago and developed a love for Indies when I was writing a weekly newsletter on XBLA news. I’m a little late to the party but I find myself in a unique position where I am an artist and also have technical skills in games. While not programmer myself, I have a lot of game sense and experience. I hope to make some awesome happen. Lastly, I have an ongoing web comic Shell’s Angels) that tends to get neglected when I get busy. I still love drawing comics and keep a little book with me to sketch down ideas as they pop into my head. I may pick it back up again as a larger project sometime in the future. 6. Can you talk about any of the other freelance projects you're doing or are you sworn to secrecy on those too? We wouldn't want a team of game developer ninjas to take you out or anything. All my projects are currently 2d. I have personal projects such as the ongoing comic as well as a graphic novel I've been picking at here and there. My main focus until April is Sakura Con, Sakura Con, Sakura Con.  I see it as a great way to get exposure and convention experience. I found out I love conventions a couple years ago and I want to get more involved in them. 7. As an artist, what is your weapon of choice? What do you use to get most of your stuff done? I am a Photoshop Hero and I have the hoodie to prove it. (http://www.pennyarcademerch.com/pah090011.html) I've dabbled in other paint programs but I always gravitate back to Photoshop. She is my one true love. I'd like to learn programs like Flash or Anime Studio when I get a bit more time because of their animation abilities. I've worked on frame by frame animation forever but I would love to learn 2d rigging. Still, nothing can compare to a simple sketchpad and a pencil. I always have one on me in case I come across or think of something interesting and can't get to a computer. If the Courier ever comes to exist it will be an ideal weapon for me. 8. You did some videos too, depicting the art creation process. What was the motivation behind those? The creative process is just as important as the final product, if not more so.  I've always loved watching speed paint videos and wanted to try it out myself. Turns out it's a lot of work and time but it's definitely fun to go back and rewatch them. Art isn't always the end result and is more often the process itself. 9. Got any interesting tattoos? Designed any for yourself or other people? 
Not yet, but not for lack of desire. I've toiled over what and where for years. Last year, I finally decided the back of my shoulders would be the place. Like anything permanent, I want it to have meaning. I thought of somehow incorporating games but I couldn't find something I felt would stand the test of time even with all the classic sprite games. I'm very picky so we'll see if I can get something solid decided. Come see me at Sakura Con April 2 -4!!!

    Read the article

  • CodePlex Daily Summary for Wednesday, March 17, 2010

    CodePlex Daily Summary for Wednesday, March 17, 2010New Projectschaosreader: A simple RSS reader.CRM 4.0 Customization GUID Update: The CRM 4.0 customization GUID update is an open source C# console application that automatically replaces GUID values in your exported workflow cu...DotNetNuke® Skin Bright: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Noel Jerke of SiteToolset. This simple and clean business...DotNetNuke® Skin Go: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by DnnGo Corporation. The skin uses web standard DIV+CSS tec...DotNetNuke® Skin J10blend: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Timthy Maler of 2M Studio Design. J10-Black v01.00.00 inc...DotNetNuke® Skin Recipe: A DotNetNuke Design Challenge skin package submitted to the "Standards" category by dnnprofis.at. For mobile devices the skin changes to a mobile...DotNetNuke® Skin SpaceSmurfs: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Eric Johnson of Personify Design. This fun personal skin was ins...ERDOS6 - Web: A Web Project about ERDOS 6Flickrlight: Flickrlight is a personal fun project out of love of Flickr and Silverlight. You can experience it here: http://www.flickrlight.net.GsGrid: Extracting data from Gaussian grid file and grid file calculationiLocator: iLocator is a collaborative educational mapping game for children developed on Microsoft Surface. This game encourages players to collaborate with ...Javascript CallObject SOAP AJAX Helper: CallObject is a Javascript based AJAX helper, it facilitates wrapping of basic soap calls (as long as simple data types are used), asynchronous ret...kbTrainer: kbTrainer is a simple to use HTML application for typing speed training. A lot of features completed in basic. 2 learning keywords layouts -- engli...Laboratório de Engenharia de Software - Projeto: Criado para estudar e aplicar novas tecnologias web.Maxilds Powershell Scripts: Repo of my powershell scriptsNamespacifier: Namespacifier is a C#.NET library and console application to fix XML documents containing multiple default namespaces. It gives prefixes to defaul...OData SDK for Objective-C: This is a CTP of the OData SDK for Objective-C. The library targets iPhone devices and Mac OS X and it is designed to facilitate the connection wit...Open Data App Framework (ODAF): The Open Data Application Framework (ODAF) is a framework that allows cities to easily map existing civic Open Data landmarks, and allow users to r...QuickieB2B: QuickieB2B is web application which main target is to provide quick info about products. It's designed for small companies who have a big number of...RayView: Rayview is an easy-to-use Raytracing-Framework based on Microsoft XNA.Robotics Studio application to navigate Lego Mindstorms robot through labyrinth: A project for Software Systems Analysis and Design Tools subject at the Kaunas University of Technology. The main point of the project is to code L...SharePoint Icon Integration: SharePoint Icon Integration makes it easier for SharePoint Administrators / Developers to add a icon (pdf) to the SharePoint farm. 
You will no long...TestVersion: Testing VersionieringTimecard: SoftSource Timecard project.T-shirt Cannon: So the Coding4Fun team had two weeks to build two robots able to drive, aim, and shoot t-shirts with a Windows Phone during a MIX10 Keynote demo of...USTF: This project is a bit secretive right now.Windows Azure Command-line Tools for PHP Developers: The “Windows Azure Command-line tools for PHP” provide a command-line experience to developers who wish to develop, package, and deploy PHP applic...New ReleasesCaramel Engine: CaramelEngine Alpha Build 0.0.0.1a: This is an early alpha release of the Engine and it's functionality. Be sure to have the using CaramelEngine statement. This release is for people...Coot: Preview: Basic preview On the first use you have to click Create New Session and Login. After this you can just click Screen Saver each time. Settings sho...CycleMania Starter Kit EAP - ASP.NET 4.0 Problem - Design - Solution: Cyclemania 0.08.32: The latest alpha release.DeepZoomContainer, Expanded DeepZoom for Silverlight & Windows Phone 7 Series: Release ver. 1.20 for Windows Phone 7 Series: SolutionMerge PathAnimation solution into one MouseWheel elimination PathAnimationWP7 Port DeepZoomContainerProject rebuilt for WP7 support De...Desktop Google Reader: 1.3 (the social release): NewsSharing Liking Mail item Labels / Tags Send to Twitter Read It Later http://readitlaterlist.com/ Instapaper http://www.instapaper.com/ Favicons...DotNetNuke® Blog: 04.00.00 RC 2: PLEASE NOTE: Please do not upgrade previous version of the Beta releases - please start from 03.05.01 This is a RELEASE CANDIDATE, and as such ...DotNetNuke® Community Edition: 05.03.00: New FeaturesTemplated User Profiles - User profile pages are now publicly viewable Photo field in User Profile - Users can upload a photo to thei...DotNetNuke® Skin Bright: Bright Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Noel Jerke of SiteToolset. This simple and clean business...DotNetNuke® Skin Go: Go Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by DnnGo Corporation. The skin uses web standard DIV+CSS tec...DotNetNuke® Skin J10blend: J10 Blend Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Timthy Maler of 2M Studio Design. J10-Black v01.00.00 incl...DotNetNuke® Skin Recipe: Recipe Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Standards" category by dnnprofis.at. For mobile devices the skin changes to a mobile f...DotNetNuke® Skin SpaceSmurfs: Space Smurfs Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Eric Johnson of Personify Design. This fun personal skin was ins...Dynamo: Dynamo v0.1 Beta: The following is included: Dynamo dlls Antlr dlls Hello world Simple Plugin example Application Dependency injection Singleton Managment ...ExtremeML: ExtremeML v1.0 Beta 1: Timed to accompany the RTM release of the OpenXML SDK v2.0, this is the first Beta release of ExtremeML (it was previously classified as a preview ...Family Tree Analyzer: Version 1.1.1.1: Version 1.1.1.1 Lots of Gedcom parsing fixes it should crash a whole lot less often and tolerate more "interesting" or "quirky" Gedcom entries. 
Add...Family Tree Analyzer: Version 1.2.0.1: Version 1.2.0.1 Added option to treat residence facts as Census Facts IGI Search now permits default country selection ie: what to use if it doesn...Flickrlight: Flickrlight: Current release is for idea sharing. There are not many design patterns being used. Please bare with the mess. :-) In order to run the applicat...Gherkin editor: Alpha 0.1: Most of the code at this point is the same as the Avalon.Sample from code project, just changed the name, removed extra languages and added syntax ...GsGrid: gsgrid1.6.4: gsgrid1.6.4GsGrid: gsgrid1.6.4-src: gsgrid1.6.4-srcHTML Template Repeater Module: Version 01.00.02: GeneralThe HTML Template Repeater Module is a direct replacement for the Core DotNetNuke Text/HTML module. Use it where you need to repeat the form...Images Compiler: Release 0.1: Last alpha buildJavascript CallObject SOAP AJAX Helper: Beta Release, 0.2.1: Beta Release, 0.2.1 Contains only core objectskbTrainer: kbtrainer 1.25u: kbTrainer is a simple to use speed typing training HTML application. A lot of features. All ither info availiable on http://code.google.com/p/kbtr...MapWindow6: MapWindow 6.0 msi (March 16): This version fixes a bug where selected points were not drawing correctly.Mesopotamia Experiment: Mesopotamia 1.2.43: Release Notes New Features - Scenario Name on title bar - Show organisms in Scnearios with simple stats Bug Fixes - Removed app domain recyling an...MFCMAPI: March 2010 Release: If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. Build: 6.0.0.1018 The 64 bit bu...MVVM Light Toolkit: MVVM Light Toolkit V3: Download the Zip file and extract it to a local folder. Then, follow the instructions on the Installation page http://www.galasoft.ch/mvvm/installi...NETXPF: 1.0.2: Changes: - Added a class "IOUtils" with methods for reading streams and GZip-compressing HTTP responses - Fixed a bug in the size formatter (excep...OData SDK for Objective-C: OData SDK for Objective-C CTP: The current release supports read-only operations only and it has been tested on a limited set of scenarios. The download include a sample iPhone a...Open Data App Framework (ODAF): ODAF 1.0: Initial beta release.Selection Maker: Selection Maker 1.1: New FeaturesContext Menu for ListView added Bug FixesFixed: If the users press Copy/Cut Button when no item is selected in ListView the ListView cl...Selection Maker: Selection Maker 1.2: Bug Fixes:a minor bug fixedSimple.NET: Simple.Mocking 1.0.0.5: Initial version of a new mocking framework for .NET Revision 1: Expect.AnyInocationOn<T>(T target) changed to Expect.AnyInocationOn(object target...SQL Server Extended Properties Quick Editor: New release 1.5.4: Whats new: Move preferences to application settings and add a form to edit preferences. Support to add, modify and delete operations could be made ...SuperModel - A Dynamic View-Model Generator: 1.0.0.0 - Tyra: The final 1.0 release, now less intrusive! If you don't want to implement ISuperModel, simply implement INotifyPropertyChanged.Timecard: Timecard Initial Release: The zipped version of the Initial Checkin.Transparent Persistence.Net: TP.Net 0.1.0: This is the initial alpha release. 
It's working for a small set of use-cases (basic access to Cassandra).VCC: Latest build, v2.1.30316.0: Automatic drop of latest buildVFPnfe: Projeto Ajuda PAF-ECF: This project aims to help developers with PAF-ECF certification, under the GNU/GPL public license; for more details watch the vi...Visual Studio DSite: Gif Animator: This program will make an animated gif. (Program written in Vb.Net 2008)

    Read the article

  • Looking Back at MIX10

    - by WeigeltRo
It’s the sad truth of my life that even though I’m fascinated by airplanes and flight in general since my childhood days, my body doesn’t like flying. Even the ridiculously short flights inside Germany are taking their toll on me each time. Now combine this with sitting in the cramped space of economy class for many hours on a transatlantic flight from Germany to Las Vegas and back, and factor in some heavy dose of jet lag (especially on my way eastwards), and you get an idea why after coming back home I had this question on my mind: Was it really worth it to attend MIX10? This of course is a question that will also be asked by my boss at Comma Soft (for other reasons, obviously), who decided to send me and my colleague Jens Schaller to the MIX10 conference. (A note to my German readers: Comma Soft is still looking for Silverlight developers and/or UI designers for its Bonn office – please send detailed applications to [email protected]) To keep things short: My answer is yes. Before I go into detail, let me ask the heretical question of whether tech conferences in general still make sense. There was a time when actually being at a tech conference gave you a head-start in regard to learning about new technologies. Nowadays this is no longer true, now that every bit of information and every detail is immediately twittered, blogged and whatevered to death. In the case of MIX10 you can even download the video-taped sessions shortly after. So: Does visiting a conference still make sense? It depends on what you expect from a conference. It should be clear to everybody that you’ll neither get exclusive information, nor receive training in a small group. What a conference does offer that sitting in front of your computer does not can be summarized as follows: Focus Being away from work and home will help you to focus on the presented information. Of course there are always the poor guys who are haunted by their work (with mails and short text messages reporting the latest showstopper problem), but in general being out of your office makes a huge difference. Inspiration With the focus comes the emotional involvement. I find it much easier to absorb information if I feel that certain vibe when sitting in a session. This still means that I have to put work into reviewing the information later, but it’s a better starting point. And all the impressions collected at a (good) conference combined lead to a higher motivation – be it by the buzz (“this is gonna be sooo cool!”) or by the fear of falling behind (“man, we’ll have to work on this, or else…”). People At a conference it’s pretty easy to get into contact with other people during breakfast, lunch and other breaks. This is a good opportunity to get a feel for what other development teams are doing (on a very general level of course, nobody will tell you about their secret formula) and what they are thinking about specific technologies. So MIX10 did offer focus, inspiration and people, but that would have meant nothing without valuable content. When I (being a frontend developer with a strong interest in UI/UX) planned my visit to MIX10, I made the decision to focus on the "soft" topics of design, interaction and user experience. I figured that I would be bombarded with all the technical details about Silverlight 4 anyway in the weeks and months to come. 
Actually, I would have liked to catch a few technical sessions, but the agenda wasn’t exactly in favor of people interested in any kind of Silverlight and UI/UX/Design topics. That’s one of my few complaints about the conference – I would have liked one more day and/or more sessions per day. Overall, the quality of the workshops and sessions was pretty high. In fact, looking back at my collection of conferences I’ve visited in the past I’d say that MIX10 ranks somewhere near the top spot. Here’s an overview of the workshops/sessions I attended (I’ll leave out the keynotes): Day 0 (Workshops on Sunday) Design Fundamentals for Developers Robby Ingebretsen is the man! Great workshop in three parts with the perfect mix of examples, well-structured definition of terminology and the right dose of humor. Robby was part of the WPF team before founding his own company so he not only has a strong interest in design (and the skillz!) but also the technical background.   Design Tools and Techniques Originally announced to be held by Arturo Toledo, the Rosso brothers from ArcheType filled in for the first two parts, and Corrina Black had a pretty general part about the Windows Phone UI. The first two thirds were a mixed bag; the two guys definitely knew what they were talking about, and the demos were great, but the talk lacked the preparation and polish of a truly great presentation. Corrina was not allowed to go into too much detail before the keynote on Monday, but the session was still very interesting as it showed how much thought went into the Windows Phone UI (and there’s always a lot to learn when people talk about their thought process). Day 1 (Monday) Designing Rich Experiences for Data-Centric Applications I wonder whether there was ever a test-run for this session, but what Ken Azuma and Yoshihiro Saito delivered in the first 15 minutes of a 30-minutes-session made me walk out. A commercial for a product (just great: a video showing a SharePoint plug-in in an all-Japanese UI) combined with the most generic blah blah one could imagine. EPIC FAIL.   Great User Experiences: Seamlessly Blending Technology & Design I switched to this session from the one above but I guess I missed the interesting part – what I did catch was what looked like a “look at the cool stuff we did” without being helpful. Or maybe I was just in a bad mood after the other session.   The Art, Technology and Science of Reading This talk by Kevin Larson was very interesting, but was more a presentation of what Microsoft is doing in research (pretty impressive) and in the end lacked a bit the helpful advice one could have hoped for.   10 Ways to Attack a Design Problem and Come Out Winning Robby Ingebretsen again, and again a great mix of theory and practice. The clean and simple, yet effective, UI of the reader app resulted in a simultaneous “wow” of Jens and me. If you’d watch only one session video, this should be it. Microsoft has to bring Robby back next year! Day 2 (Tuesday) Touch in Public: Multi-touch Interaction Design for Kiosks & Architectural Experiences Very interesting session by Jason Brush, a great inspiration with many details to look out for in the examples. Exactly what I was hoping for – and then some!   Designing Bing: Heart and Science How hard can it be to design the UI for a search engine? An input field and a list of results, that should be it, right? Well, not so fast! The talk by Paul Ray showed the many iterations to finally get it right (up to the choice of a specific blue for the links). 
And yes, I want an eye-tracking device to play around with!   The Elephant in the Room When Nishant Kothary presented a long list of what his session was not about, I told to myself (not having the description text present) “Am I in the wrong talk? Should I leave?”. Boy, was I wrong. A great talk about human factors in the process of designing stuff.   An Hour with Bill Buxton Having seen Bill Buxton’s presentation in the keynote, I just had to see this man again – even though I didn’t know what to expect. Being more or less unplanned and intended to be more of a conversation, the session didn’t provide a wealth of immediately useful information. Nevertheless Bill Buxton was impressive with his huge knowledge of seemingly everything. But this could/should have been a session some when in the evening and not in parallel to at least two other interesting talks. Day 3 (Wednesday) Design the Ordinary, Like the Fixie This session by DL Byron and Kevin Tamura started really well and brought across the message to keep things simple. But towards the end the talk lost some of its steam. And, as a member of the audience pointed out, they kind of ignored their own advice when they used a fancy presentation software other then PowerPoint that sometimes got in the way of showing things.   Developing Natural User Interfaces Speaking of alternative presentation software, Joshua Blake definitely had the most remarkable alternative to PowerPoint, a self-written program called NaturalShow that was controlled using multi-touch on a touch screen. Not a PowerPoint-killer, but impressive nevertheless. The (excellent) talk itself was kind of eye-opening in regard to what “multi-touch support” on various platforms (WPF, Silverlight, Windows Phone) actually means.   Treat your Content Right The talk by Tiffani Jones Brown wasn’t even on my planned schedule, but somehow I ended up in that session – and it was great. And even for people who don’t necessarily have to write content for websites, some points made by Tiffani are valid in many places, notably wherever you put texts with more than a single word into your UI. Creating Effective Info Viz in Microsoft Silverlight The last session of MIX10 I attended was kind of disappointing. At first things were very promising, with Matthias Shapiro giving a brief but well-structured introduction to info graphics and interactive visualizations. Then the live-coding began and while the result was interesting, too much time was spend on wrestling to get the code working. Ending earlier than planned, the talk was a bit light on actual content, but at least it included a nice list of resources. Conclusion It could be felt all across MIX10, UIs will take a huge leap forward; in fact, there are enough examples that have already. People who both have the technical know-how and at least a basic understanding of design (“literacy” as Bill Buxton called it) are in high demand. The concept of the MIX conference and initiatives like design.toolbox shows that Microsoft understands very well that frontend developers have to acquire new knowledge besides knowing how to hack code and putting buttons on a form. There are extremely exciting times before us, with lots of opportunity for those who are eager to develop their skills, that is for sure.

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data..) an English setter could be seen as the link to the right data. Data, Data, Data: we are living in a world where data technologies based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on have a predominant role in our life. More and more technologies are used to analyze/track our behavior, try to detect patterns, to propose to us "the best/right user experience" from the Google Ad services, to Telco companies or large consumer sites (like Amazon:) ). The more we use all these technologies, the more we generate data, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large scale distributed processing was an emerging infrastructure need and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products are using the distributed / aggregation pattern for data calculation (Coherence, NoSQL, TimesTen) so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop, MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab like" implementation of this concept is done on a single Linux X64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0, and a single node Apache hadoop-1.2.1 HDFS cluster, using the SQL connector for HDFS. The whole setup is fairly simple: Install on a Linux x64 server (or virtual box appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen. Check the java version of your Linux server with the command: java -version java version "1.7.0_40" Java(TM) SE Runtime Environment (build 1.7.0_40-b43) Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode) Decompress the hadoop hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1 Modify your .bash_profile export HADOOP_HOME=/u01/hadoop-1.2.1 export PATH=$PATH:$HADOOP_HOME/bin export HIVE_HOME=/u01/hive-0.11.0 export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin (also see my sample .bash_profile) Set up ssh trust for the Hadoop process; this is a mandatory step, and in our case we have to establish a "local trust" as we are using a single node configuration. Copy the new public keys to the list of authorized keys, then connect and test the ssh setup to your localhost: We will run a "pseudo-Hadoop cluster", in what is called "local standalone mode", where all the Hadoop java components run in one Java process; this is enough for our demo purposes. 
We need to "fine tune" some Hadoop configuration files, we have to go at our $HADOOP_HOME/conf, and modify the files: core-site.xml hdfs-site.xml mapred-site.xml check that the hadoop binaries are referenced correctly from the command line by executing: hadoop -version As Hadoop is managing our "clustered HDFS" file system we have to create "the mount point" and format it , the mount point will be declared to core-site.xml as: The layout under the /u01/hadoop-1.2.1/data will be created and used by other hadoop components (MapReduce = /mapred/...) HDFS is using the /dfs/... layout structure format the HDFS hadoop file system: Start the java components for the HDFS system As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configurations: Once our HDFS Hadoop setup is done you can use the HDFS file system to store data ( big data : )), and plug them back and forth to Oracle Databases by the means of the Big Data Connectors ( which is the next configuration step). You can create / use a Hive db, but in our case we will make a simple integration of "raw data" , through the creation of an External Table to a local Oracle instance ( on the same Linux box, we run the Hadoop HDFS one node cluster and one Oracle DB). Download some public "big data", I use the site: http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file: Download the Big Data Connector from the OTN (oraosch-2.2.0.zip), unzip it to your local file system (see picture below) Modify your environment in order to access the connector libraries , and make the following test: [oracle@dg1 bin]$./hdfs_stream Usage: hdfs_stream locationFile [oracle@dg1 bin]$ Load the data to the Hadoop hdfs file system: hadoop fs -mkdir bgtest_data hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt Check the content of the HDFS with the browser UI: Start the Oracle database, and run the following script in order to create the Oracle database user, the Oracle directories for the Oracle Big Data Connector (dg1 it’s my own db id replace accordingly yours): #!/bin/bash export ORAENV_ASK=NO export ORACLE_SID=dg1 . 
oraenv sqlplus /nolog <<EOF CONNECT / AS sysdba; CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin'; CREATE USER BGUSER IDENTIFIED BY oracle; GRANT CREATE SESSION, CREATE TABLE TO BGUSER; GRANT EXECUTE ON sys.utl_file TO BGUSER; GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER; CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs'; GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER; CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data'; GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER; EOF Put the following in a file named t3.sh and make it executable, hadoop jar $OSCH_HOME/jlib/orahdfs.jar \ oracle.hadoop.exttab.ExternalTable \ -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \ -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \ -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \ -D oracle.hadoop.exttab.columnCount=7 \ -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \ -D oracle.hadoop.connection.user=BGUSER \ -D oracle.hadoop.exttab.printStackTrace=true \ -createTable --noexecute then test the creation fo the external table with it: [oracle@dg1 samples]$ ./t3.sh ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory Oracle SQL Connector for HDFS Release 2.2.0 - Production Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved. Enter Database Password:] The create table command was not executed. The following table would be created. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081035-74-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files would be created. osch-20131022081035-74-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results: The create table command succeeded. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081719-3239-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files were created. 
osch-20131022081719-3239-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt This is the view from the SQL Developer: and finally the number of lines in the oracle table, imported from our Hadoop HDFS cluster SQL select count(*) from "BGUSER"."BGTEST_DP_XTAB"; COUNT(*) ---------- 1151 In a next post we will integrate data from a Hive database, and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a step to show you how these unstructured data world can be integrated to Oracle infrastructure. Hadoop, BigData, NoSql are great technologies, they are widely used and Oracle is offering a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the Oracle related technologies: NoSQL: Introduction to Oracle NoSQL Database Using Oracle NoSQL Database Big Data: Introduction to Big Data Oracle Big Data Essentials Oracle Big Data Overview Oracle Data Integrator: Oracle Data Integrator 12c: New Features Oracle Data Integrator 11g: Integration and Administration Oracle Data Integrator: Administration and Development Oracle Data Integrator 11g: Advanced Integration and Development Oracle Coherence 12c: Oracle Coherence 12c: New Features Oracle Coherence 12c: Share and Manage Data in Clusters Oracle Coherence 12c: Oracle GoldenGate 11g: Fundamentals for Oracle Oracle GoldenGate 11g: Fundamentals for SQL Server Oracle GoldenGate 11g Fundamentals for Oracle Oracle GoldenGate 11g Fundamentals for DB2 Oracle GoldenGate 11g Fundamentals for Teradata Oracle GoldenGate 11g Fundamentals for HP NonStop Oracle GoldenGate 11g Management Pack: Overview Oracle GoldenGate 11g Troubleshooting and Tuning Oracle GoldenGate 11g: Advanced Configuration for Oracle Other Resources: Apache Hadoop : http://hadoop.apache.org/ is the homepage for these technologies. "Hadoop Definitive Guide 3rdEdition" by Tom White is a classical lecture for people who want to know more about Hadoop , and some active "googling " will also give you some more references. About the author: Eugene Simos is based in France and joined Oracle through the BEA-Weblogic Acquisition, where he worked for the Professional Service, Support, end Education for major accounts across the EMEA Region. He worked in the banking sector, ATT, Telco companies giving him extensive experience on production environments. Eugen currently specializes in Oracle Fusion Middleware teaching an array of courses on Weblogic/Webcenter, Content,BPM /SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite) throughout the EMEA region.

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of its long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. 
Coordinator jobs can take all the same actions of Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the hive parameters Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-defaults.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run. 
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath pieces is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • Going Paperless

    - by Jesse
    One year ago I came to work for a company where the entire development team is 100% “remote”; we’re spread over 3 time zones and each of us works from home. This seems to be an increasingly popular way for people to work and there are many articles and blog posts out there enumerating the advantages and disadvantages of working this way. I had read a lot about telecommuting before accepting this job and felt as if I had a pretty decent idea of what I was getting into, but I’ve encountered a few things over the past year that I did not expect. Among the most surprising by-products of working from home for me has been a dramatic reduction in the amount of paper that I use on a weekly basis. Hoarding In The Workplace Prior to my current telecommute job I worked in what most would consider pretty traditional office environments. I sat in cubicles furnished with an enormous plastic(ish) modular desks, had a mediocre (at best) PC workstation, and had ready access to a seemingly endless supply of legal pads, pens, staplers and paper clips. The ready access to paper, countless conference room meetings, and abundance of available surface area on my desk and in drawers created a perfect storm for wasting paper. I brought a pad of paper with me to every meeting I ever attended, scrawled some brief notes, and then tore that sheet off to keep next to my keyboard to follow up on any needed action items. Once my immediate need for the notes was fulfilled, that sheet would get shuffled off into a corner of my desk or filed away in a drawer “just in case”. I would guess that for all of the notes that I ever filed away, I might have actually had to dig up and refer to 2% of them (and that’s probably being very generous). That said, on those rare occasions that I did have to dig something up from old notes, it was usually pretty important and I ended up being very glad that I saved them. It was only when I would leave a job or move desks that I would finally gather all those notes together and take them to shredding bin to be disposed of. When I left my last job the amount of paper I had accumulated over my three years there was absurd, and I knew coworkers who had substance-abuse caliber paper wasting addictions that made my bad habit look like nail-biting in comparison. A Product Of My Environment I always hated using all of this paper, but simply couldn’t bring myself to stop. It would look bad if I showed up to an important conference room meeting without a pad of paper. What if someone said something profound! Plus, everyone else always brought paper with them. If you saw someone walking down the hallway with a pad of paper in hand you knew they must be on their way to a conference room meeting. Some people even had fancy looking portfolio notebook sheaths that gave their legal pads all the prestige of a briefcase. No one ever worried about running out of fresh paper because there was an endless supply, and there certainly was no shortage of places to store and file used paper. In short, the traditional office was setup for using tons and tons of paper; it’s baked into the culture there. For that reason, it didn’t take long for me to kick the paper habit once I started working from home. In my home office, desk and drawer space are at a premium. I don’t have the budget (or the tolerance) for huge modular office furniture in my spare bedroom. I also no longer have access to a bottomless pit of office supplies stock piled in cabinets and closets. 
If I want to use some paper, I have to go out and buy it. Finally (and most importantly), all of the meetings that I have to attend these days are “virtual”. We use instant messaging, VOIP, video conferencing, and e-mail to communicate with each other. All I need to take notes during a meeting is my computer, which I happen to be sitting right in front of all day. I don’t have any hard numbers for this, but my gut feeling is that I actually take a lot more notes now than I ever did when I worked in an office. The big difference is I don’t have to use any paper to do so. This makes it far easier to keep important information safe and organized. The Right Tool For The Job When I first started working from home I tried to find a single application that would fill the gap left by the pen and paper that I always had at my desk when I worked in an office. Well, there are no silver bullets and I’ve evolved my approach over time to try and find the best tool for the job at hand. Here’s a quick summary of how I take notes and keep everything organized. Notepad++ – This is the first application I turn to when I feel like there’s some bit of information that I need to write down and save. I use Launchy, so opening Notepad++ and creating a new file only takes a few keystrokes. If I find that the information I’m trying to get down requires a more sophisticated application I escalate as needed. The Desktop – By default, I save every file or other bit of information to the desktop. Anyone who has ever had to fix their parents computer before knows that this is a dangerous game (any file my mother has ever worked on is saved directly to the desktop and rarely moves anywhere else). I agree that storing things on the desktop isn’t a great long term approach to keeping organized, which is why I treat my desktop a bit like my e-mail inbox. I strive to keep both empty (or as close to empty as I possibly can). If something is on my desktop, it means that it’s something relevant to a task or project that I’m currently working on. About once a week I take things that I’m not longer working on and put them into my ‘Notes’ folder. The ‘Notes’ Folder – As I work on a task, I tend to accumulate multiple files associated with that task. For example, I might have a bit of SQL that I’m working on to gather data for a new report, a quick C# method that I came up with but am not yet ready to commit to source control, a bulleted list of to-do items in a .txt file, etc. If the desktop starts to get too cluttered, I create a new sub-folder in my ‘Notes’ folder. Each sub-folder’s name is the current date followed by a brief description of the task or project. Then all files related to that task or project go into that sub folder. By using the date as the first part of the folder name, these folders are automatically sorted in reverse chronological order. This means that things I worked on recently will generally be near the top of the list. Using the built-in Windows search functionality I now have a pretty quick and easy way to try and find something that I worked on a week ago or six months ago. Dropbox – Dropbox is a free service that lets you store up to 2GB of files “in the cloud” and have those files synced to all of the different computers that you use. My ‘Notes’ folder lives in Dropbox, meaning that it’s contents are constantly backed up and are always available to me regardless of which computer I’m using. 
They also have a pretty decent iPhone application that lets you browse and view all of the files that you have stored there. The free 2GB edition is probably enough for just storing notes, but I also pay $99/year for the 50GB storage upgrade and keep all of my music, e-books, pictures, and documents in Dropbox. It’s a fantastic service and I highly recommend it. Evernote – I use Evernote mostly to organize information that I access on a fairly regular basis. For example, my Evernote account has a running grocery shopping list, recipes that my wife and I use a lot, and contact information for people I contact infrequently enough that I don’t want to keep them in my phone. I know some people that keep nearly everything in Evernote, but there’s something about it that I find a bit clunky, so I tend to use it sparingly. Google Tasks – One of my biggest paper wasting habits was keeping a running task-list next to my computer at work. Every morning I would sit down, look at my task list, cross off what was done and add new tasks that I thought of during my morning commute. This usually resulted in having to re-copy the task list onto a fresh sheet of paper when I was done. I still keep a running task list at my desk, but I’ve started using Google Tasks instead. This is a dead-simple web-based application for quickly adding, deleting, and organizing tasks in a simple checklist style. You can quickly move tasks up and down on the list (which I use for prioritizing), and even create sub-tasks for breaking down larger tasks into smaller pieces. Balsamiq Mockups – This is a simple and lightweight tool for creating drawings of user interfaces. It’s great for sketching out a new feature, brainstorm the layout of a interface, or even draw up a quick sequence diagram. I’m terrible at drawing, so Balsamiq Mockups not only lets me create sketches that other people can actually understand, but it’s also handy because you can upload a sketch to a common location for other team members to access. I can honestly say that using these tools (and having limited resources at home) have lead me to cut my paper usage down to virtually none. If I ever were to return to a traditional office workplace (hopefully never!) I’d try to employ as many of these applications and techniques as I could to keep paper usage low. I feel far less cluttered and far better organized now.

    Read the article

  • SQL Cruise Alaska 2011

    - by Grant Fritchey
I had the extreme good fortune to get sent on the last SQL Cruise to Alaska. I love my job. In case you don't know what this is, SQL Cruise is a trip on a cruise ship during which you get to attend classes while on the boat, learning all about SQL Server and related topics as well as network with the instructors and the other Cruisers. Frankly, it's amazing. Classes ran from Monday, 5/30, to Saturday, 6/4. The networking was constant, between classes, at night on the cruise ship, out on excursions in Alaskan rainforests and while snorkeling in ocean waters. Here's a rundown of the experience from my point of view. Because I couldn't travel out 2 days early, I missed the BBQ that occurred the day before the cruise when many of the Cruisers received their swag bags. Some of that swag came from Red Gate. I researched what was useful on a cruise like this and purchased small flashlights and binoculars for all the Cruisers. The flashlights were because, depending on your cabin, ships can be very dark. The binoculars were so that the cruisers could watch all the beautiful landscape as it flowed by. I would have liked to have been there when the bags were opened, but I heard from several people that they appreciated the gifts. Cruisers "In" the hot tub. Pictured: Marjory Woody, Michele Grondin, Kyle Brandt, Grant Fritchey, John Halunen. Sunday I went to board the ship with my wife. We had a bit of an adventure because I messed up our documents. It all worked out and we got on board to meet up at the back of the boat at one of the outdoor bars with the other Cruisers, thanks to tweets letting everyone know where to go. That was the end of electronic coordination on the trip (connectivity in Alaska was horrible for everyone except AT&T). The Cruisers were a great bunch of people and it was a real honor to meet them and get to spend time with them. After everyone settled into their cabins, our very first activity was a contest, sponsored by Red Gate. The Cruisers, in an effort to get to know each other and the ship, were required to go all over taking various photographs, some of them hilarious. The winning team of three would all win prizes. Some of the significant others helped out and I tagged along with a team that tied for first but lost the coin toss. The winning team consisted of Christina Leo (blog|twitter), Ryan Malcom (twitter), Neil Hambly (blog|twitter). They then had to do math and identify the cabin with the lowest prime number, oh, and get a picture of it and be the first to get back up to the bar where we were waiting. Christina came in first and very happily carried home an iPad 2. Ryan won a 1TB portable hard drive and Neil won a wireless mouse (picture below, note my special SQL Server Central Friday Shirt. Thanks Steve (blog|twitter)). Winners: Christina Leo, Neil Hambly, Ryan Malcolm. Just Lucky: Grant Fritchey. Monday morning classes started. Buck Woody (blog|twitter) was a special guest speaker on this cruise. His theme was "Three C's on the High Seas: Career, Communication and Cloud." The first session was all on Career. I'm not going to type out all my notes from the session, but let's just say, if you get the chance to hear Buck talk about how to manage your career, I suggest you attend. I have a ton of blog posts that I'll be putting together over the next several months (yes, months) both here and over on ScaryDBA. I also have a bunch of work I'm going to be doing to get my career performance bumped up a notch or two (and let's face it, that won't be easy). 
Later on Monday, Tim Ford (blog|twitter) did a session on DMOs. Specifically the session was on Tim's Period Table of DMOs that he has put together, and how to use some of the more interesting DMOs in your day to day job. It was a great session, packed with good information. Next, Brent Ozar (blog|twitter) did a session on how to monitor and guide SAN configuration for the DBA that doesn't have access to the SAN. That was some seriously useful information. Tuesday morning we only had a single class. Kendra Little (blog|twitter) taught us all about "No Lock for Yes Fun".  It was all about the different transaction isolation levels and how they work. There is so often confusion in this area and Kendra does a great job in clarifying the information. Also, she tosses in her excellent drawings to liven up the presentation. Then it was excursion time in Juneau. My wife and I, along with several other Cruisers, took a hike up around the Mendenhall Glacier. It was absolutely beautiful weather and walking through the Alaskan rain forest was a treat. Our guide, Jason, was a great guy and it was a good day of hiking. Wednesday was an all day excursion in Skagway. My wife and I took the "Ghost and Good Time Girls" walking tour that ended up at a bar that used to be a brothel, the Red Onion. It was a great history of the town. We went back out and hit a few museums and exhibits. We also hiked up the side of the mountain to see the Dewey Lake and some great views of the town. Finally we hiked out to the far side of town to see the Gold Rush cemetery. Hiking done we went back to the boat and had a quiet dinner on our own. Thursday we cruised through Glacier Bay and saw at least four different glaciers including sitting next to the Marjory Glacier for  about an hour. It was amazing. Then it got better. We went into class with Buck again, this time to talk about Communication. Again, I've got pages of notes that I'm going to be referring back to for some time to come. This was an excellent opportunity to learn. Snorkelers: Nicole Bertrand, Aaron Bertrand, Grant Fritchey, Neil Hambly, Christina Leo, John Robel, Yanni Robel, Tim Ford Friday we pulled into Ketchikan. A bunch of us went snorkeling. Yes, snorkeling. Yes, in Alaska. Yes, snorkeling in the ocean in Alaska. It was fantastic. They had us put on 7mm thick wet suits (an adventure all by itself) so it was basically warm the entire time we were in the water (except for the occasional squirt of cold water down my back). Before we got in the water a bald eagle flew up and landed about 15 feet in front of us, which was just an incredible event. Then our guide pointed out about 14 other eagles in the area, hanging out in the trees. Wow! The water was pretty clear and there was a ton of things to see. That was absolutely a blast. Back on the boat I presented a session called Execution Plans: The Deep Dive (note the nautical theme). It seemed to go over well and I had several good questions come out of the session that will lead to new blog posts. After I presented, it was Aaron Bertrand's (blog|twitter) turn. He did a session on "What's New in Denali" that provided a lot of great information. He was able to incorporate new things straight out of Tech-Ed, so this was expanded beyond his usual presentation. The man really knows what he's talking about and communicates it well. Saturday we were travelling so there was time for a bunch of classes. 
Jeremiah Peschka (blog|twitter) did a great overview of some of the NoSQL databases and what they should be used for. The session was called "The Database is Dead" but it was really about how there are specific uses for these databases that SQL Server doesn't fill, but also that these databases can't replace SQL Server in other areas. Again, good material. Brent Ozar presented again with a session on Defensive Indexing. It was an overview of how indexes work and a deep dive into how to apply them appropriately in your databases to better support access. A good session, as you would expect. Then we pulled into Victoria, BC, in Canada and had a nice dinner with several of the Cruisers, including Denny Cherry (blog|twitter). After that it was back to Seattle on Sunday. By the way, the Science Fiction Museum in Seattle isn't a Science Fiction Museum any more. I was very disappointed to discover this. Overall, it was a great experience. I'm extremely appreciative of Red Gate for sending me and for Tim, Brent, Kendra and Jeremiah for having me. The other Cruisers were all amazing people and it was an honor & privilege to meet them and spend time with them. While this was a seriously fun time, it was also a very serious training opportunity with solid information coming from seasoned industry pros.

    Read the article

  • C#/.NET Little Wonders: The Nullable static class

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Today we’re going to look at an interesting Little Wonder that can be used to mitigate what could be considered a Little Pitfall.  The Little Wonder we’ll be examining is the System.Nullable static class.  No, not the System.Nullable<T> class, but a static helper class that has one useful method in particular that we will examine… but first, let’s look at the Little Pitfall that makes this wonder so useful. Little Pitfall: Comparing nullable value types using <, >, <=, >= Examine this piece of code, without examining it too deeply, what’s your gut reaction as to the result? 1: int? x = null; 2:  3: if (x < 100) 4: { 5: Console.WriteLine("True, {0} is less than 100.", 6: x.HasValue ? x.ToString() : "null"); 7: } 8: else 9: { 10: Console.WriteLine("False, {0} is NOT less than 100.", 11: x.HasValue ? x.ToString() : "null"); 12: } Your gut would be to say true right?  It would seem to make sense that a null integer is less than the integer constant 100.  But the result is actually false!  The null value is not less than 100 according to the less-than operator. It looks even more outrageous when you consider this also evaluates to false: 1: int? x = null; 2:  3: if (x < int.MaxValue) 4: { 5: // ... 6: } So, are we saying that null is less than every valid int value?  If that were true, null should be less than int.MinValue, right?  Well… no: 1: int? x = null; 2:  3: // um... hold on here, x is NOT less than min value? 4: if (x < int.MinValue) 5: { 6: // ... 7: } So what’s going on here?  If we use greater than instead of less than, we see the same little dilemma: 1: int? x = null; 2:  3: // once again, null is not greater than anything either... 4: if (x > int.MinValue) 5: { 6: // ... 7: } It turns out that four of the comparison operators (<, <=, >, >=) are designed to return false anytime at least one of the arguments is null when comparing System.Nullable wrapped types that expose the comparison operators (short, int, float, double, DateTime, TimeSpan, etc.).  What’s even odder is that even though the two equality operators (== and !=) work correctly, >= and <= have the same issue as < and > and return false if both System.Nullable wrapped operator comparable types are null! 1: DateTime? x = null; 2: DateTime? y = null; 3:  4: if (x <= y) 5: { 6: Console.WriteLine("You'd think this is true, since both are null, but it's not."); 7: } 8: else 9: { 10: Console.WriteLine("It's false because <=, <, >, >= don't work on null."); 11: } To make matters even more confusing, take for example your usual check to see if something is less than, greater to, or equal: 1: int? x = null; 2: int? y = 100; 3:  4: if (x < y) 5: { 6: Console.WriteLine("X is less than Y"); 7: } 8: else if (x > y) 9: { 10: Console.WriteLine("X is greater than Y"); 11: } 12: else 13: { 14: // We fall into the "equals" assumption, but clearly null != 100! 15: Console.WriteLine("X is equal to Y"); 16: } Yes, this code outputs “X is equal to Y” because both the less-than and greater-than operators return false when a Nullable wrapped operator comparable type is null.  This violates a lot of our assumptions because we assume is something is not less than something, and it’s not greater than something, it must be equal.  
So keep in mind that the only two comparison operators that work on Nullable wrapped types where at least one is null are the equals (==) and not equals (!=) operators: 1: int? x = null; 2: int? y = 100; 3:  4: if (x == y) 5: { 6: Console.WriteLine("False, x is null, y is not."); 7: } 8:  9: if (x != y) 10: { 11: Console.WriteLine("True, x is null, y is not."); 12: } Solution: The Nullable static class So we’ve seen that <, <=, >, and >= have some interesting and perhaps unexpected behaviors that can trip up a novice developer who isn’t expecting the kinks that System.Nullable<T> types with comparison operators can throw. How can we easily mitigate this? Well, obviously, you could do null checks before each check, but that starts to get ugly: 1: if (x.HasValue) 2: { 3: if (y.HasValue) 4: { 5: if (x < y) 6: { 7: Console.WriteLine("x < y"); 8: } 9: else if (x > y) 10: { 11: Console.WriteLine("x > y"); 12: } 13: else 14: { 15: Console.WriteLine("x == y"); 16: } 17: } 18: else 19: { 20: Console.WriteLine("x > y because y is null and x isn't"); 21: } 22: } 23: else if (y.HasValue) 24: { 25: Console.WriteLine("x < y because x is null and y isn't"); 26: } 27: else 28: { 29: Console.WriteLine("x == y because both are null"); 30: } Yes, we could probably simplify this logic a bit, but it’s still horrendous! So what do we do if we want to consider null less than everything and be able to properly compare Nullable<T> wrapped value types? The key is the System.Nullable static class. This class is a companion class to the System.Nullable<T> class and allows you to use a few helper methods for Nullable<T> wrapped types, including a static Compare<T>() method. What’s so big about the static Compare<T>() method? It implements an IComparer compatible comparison on Nullable<T> types. Why do we care? Well, if you look at the MSDN description for how IComparer works, you’ll read: Comparing null with any type is allowed and does not generate an exception when using IComparable. When sorting, null is considered to be less than any other object. This is what we probably want! We want null to be less than everything! So now we can change our logic to use the Nullable.Compare<T>() static method: 1: int? x = null; 2: int? y = 100; 3:  4: if (Nullable.Compare(x, y) < 0) 5: { 6: // Yes! x is null, y is not, so x is less than y according to Compare(). 7: Console.WriteLine("x < y"); 8: } 9: else if (Nullable.Compare(x, y) > 0) 10: { 11: Console.WriteLine("x > y"); 12: } 13: else 14: { 15: Console.WriteLine("x == y"); 16: } Summary So, when doing math comparisons between two numeric values where one of them may be a null Nullable<T>, consider using the System.Nullable.Compare<T>() method instead of the comparison operators. It will treat null as less than any value, and will avoid logic consistency problems when relying on < returning false to indicate >= is true and so on. Tweet   Technorati Tags: C#,C-Sharp,.NET,Little Wonders,Little Pitfalls,Nullable
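Because Nullable.Compare<T>() is IComparer-compatible, it also plugs straight into sorting. Here is a minimal sketch (a small hypothetical console program; the class name is purely illustrative) showing that null values end up ahead of every real value when sorting a List<int?>:

using System;
using System.Collections.Generic;

class NullableCompareSortSketch
{
    static void Main()
    {
        // a few values with nulls mixed in
        var values = new List<int?> { 3, null, 42, null, 7 };

        // Nullable.Compare<int> matches the Comparison<int?> delegate,
        // so it can be handed directly to Sort(); nulls compare as less
        // than any value and therefore land at the front of the list.
        values.Sort(Nullable.Compare<int>);

        // prints: (null), (null), 3, 7, 42
        Console.WriteLine(string.Join(", ",
            values.ConvertAll(v => v.HasValue ? v.ToString() : "(null)")));
    }
}

The output shows the two nulls first, which is exactly the IComparer rule quoted above: when sorting, null is considered less than any other object.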

    Read the article

  • C#/.NET Little Wonders: Use Cast() and OfType() to Change Sequence Type

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    We've seen how the Select() extension method lets you project a sequence from one type to a new type, which is handy for getting just parts of items, or for building new items. But what happens when the items in the sequence are already the type you want, but the sequence itself is typed to an interface or super-type instead of the sub-type you need?

    For example, you may have a sequence of Rectangle stored in an IEnumerable<Shape> and want to consider it an IEnumerable<Rectangle> sequence instead. Today we'll look at two handy extension methods, Cast<TResult>() and OfType<TResult>(), which help you with this task.

    Cast<TResult>() – Attempt to cast all items to type TResult

    So, the first thing we can do would be to attempt to create a sequence of TResult from every item in the source sequence. Typically we'd do this if we had an IEnumerable<T> where we knew that every item was actually a TResult, where TResult inherits from or implements T.

    For example, assume the typical Shape example classes:

        // abstract base class
        public abstract class Shape { }

        // a basic rectangle
        public class Rectangle : Shape
        {
            public int Width { get; set; }
            public int Height { get; set; }
        }

    And let's assume we have a sequence of Shape where every Shape is a Rectangle…

        var shapes = new List<Shape>
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            // ...
        };

    To get the sequence of Shape as a sequence of Rectangle, of course, we could use a Select() clause, such as:

        // select each Shape, cast it to Rectangle
        var rectangles = shapes
            .Select(s => (Rectangle)s)
            .ToList();

    But that's a bit verbose, and fortunately there is already a facility built in and ready to use in the form of the Cast<TResult>() extension method:

        // cast each item to Rectangle and store in a List<Rectangle>
        var rectangles = shapes
            .Cast<Rectangle>()
            .ToList();

    However, we should note that if anything in the list cannot be cast to a Rectangle, you will get an InvalidCastException thrown at runtime. Thus, if our Shape sequence had a Circle in it, the call to Cast<Rectangle>() would have failed. As such, you should only do this when you are reasonably sure of what the sequence actually contains (or are willing to handle an exception if you're wrong).

    Another handy use of Cast<TResult>() is using it to convert an IEnumerable to an IEnumerable<T>. If you look at the signature, you'll see that the Cast<TResult>() extension method actually extends the older, object-based IEnumerable interface instead of the newer, generic IEnumerable<T>. This is your gateway method for being able to use LINQ on older, non-generic sequences. For example, consider the following:

        // the older, non-generic collections are a sequence of object
        var shapes = new ArrayList
        {
            new Rectangle { Width = 3, Height = 13 },
            new Rectangle { Width = 10, Height = 20 },
            // ...
        };

    Since this is an older, object-based collection, we cannot use the LINQ extension methods on it directly.
    For example, if I wanted to query the Shape sequence for only those Rectangles whose Width is > 5, I can't do this:

        // compiler error, Where() operates on IEnumerable<T>, not IEnumerable
        var bigRectangles = shapes.Where(r => r.Width > 5);

    However, I can use Cast<Rectangle>() to treat my ArrayList as an IEnumerable<Rectangle> and then do the query!

        // ah, that's better!
        var bigRectangles = shapes.Cast<Rectangle>().Where(r => r.Width > 5);

    Or, if you prefer, in LINQ query expression syntax:

        var bigRectangles = from s in shapes.Cast<Rectangle>()
                            where s.Width > 5
                            select s;

    One quick warning: Cast<TResult>() only attempts to cast, it won't perform a cast conversion. That is, consider this:

        var intList = new List<int> { 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };

        // casting ints to longs, this should work, right?
        var asLong = intList.Cast<long>().ToList();

    Will the code above work? No, you'll get an InvalidCastException. Remember that Cast<TResult>() is an extension of IEnumerable, so it operates on a sequence of object, which means that it will box every int as an object as it enumerates over it, and there is no cast conversion from object to long, so the cast fails. In other words, a cast from int to long will succeed because there is a conversion from int to long. But a cast from int to object to long will not, because you can only unbox an item by casting it to its exact type.

    For more information on why cast-converting boxed values doesn't work, see this post on The Dangers of Casting Boxed Values (here).

    OfType<TResult>() – Filter sequence to only items of type TResult

    So, we've seen how we can use Cast<TResult>() to change the type of our sequence when we expect all the items of the sequence to be of a specific type. But what do we do when a sequence contains many different types, and we are only concerned with a subset of a given type?

    For example, what if a sequence of Shape contains Rectangle and Circle instances, and we just want to select all of the Rectangle instances? Well, let's say we had this sequence of Shape:

        var shapes = new List<Shape>
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            new Circle { Radius = 10 },
            new Square { Side = 13 },
            // ...
        };

    Well, we could filter out the Rectangles with a Where() clause, like:

        var onlyRectangles = shapes.Where(s => s is Rectangle).ToList();

    But fortunately, an easier way has already been written for us in the form of the OfType<T>() extension method:

        // returns only a sequence of the shapes that are Rectangles
        var onlyRectangles = shapes.OfType<Rectangle>().ToList();

    Now that we have a sequence of only the Rectangles in the original sequence, we can also use it to chain other queries that depend on Rectangles, such as:

        // select only Rectangles, then filter to only those more than
        // 5 units wide...
        var onlyBigRectangles = shapes.OfType<Rectangle>()
            .Where(r => r.Width > 5)
            .ToList();

    The OfType<Rectangle>() call will filter the sequence to only the items that are of type Rectangle (or a subclass of it), and that results in an IEnumerable<Rectangle> to which we can then apply the other LINQ extension methods to query the sequence further.

    Just as Cast<TResult>() is an extension method on IEnumerable (and not IEnumerable<T>), the same is true for OfType<T>(). This means that you can use OfType<TResult>() on object-based collections as well.
    For example, given an ArrayList containing Shapes, as below:

        // object-based collections are a sequence of object
        var shapes = new ArrayList
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            new Circle { Radius = 10 },
            new Square { Side = 13 },
            // ...
        };

    We can use OfType<Rectangle>() to filter the sequence to only Rectangle items (and subclasses), and then chain other LINQ expressions, since we will then have an IEnumerable<Rectangle>:

        // OfType() converts the sequence of object to a new sequence
        // containing only Rectangle or sub-types of Rectangle.
        var onlyBigRectangles = shapes.OfType<Rectangle>()
            .Where(r => r.Width > 5)
            .ToList();

    Summary

    So now we've seen two different ways to get a sequence typed as a superclass or interface down to a more specific sequence of a subclass or implementation. The Cast<TResult>() method casts every item in the source sequence to type TResult, and the OfType<TResult>() method selects only those items in the source sequence that are of type TResult. You can use these to downcast sequences, or to adapt older types and sequences that only implement IEnumerable (such as DataTable, ArrayList, etc.).

    Technorati Tags: C#,CSharp,.NET,LINQ,Little Wonders,OfType,Cast,IEnumerable<T>
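    One footnote to the cast-conversion warning above: if what you actually need is a numeric conversion (such as int to long) rather than a downcast, a Select() with an explicit cast is a simple way around the boxing problem. This is just a sketch of that idea (the variable names are mine):

        using System.Collections.Generic;
        using System.Linq;

        var intList = new List<int> { 1, 1, 2, 3, 5, 8, 13, 21 };

        // Cast<long>() would throw InvalidCastException here, because each int is
        // boxed as object and cannot be unboxed as long. Select() instead performs
        // a real int-to-long conversion on each unboxed value.
        List<long> asLongs = intList.Select(i => (long)i).ToList();

    Select() enumerates the list as IEnumerable<int>, so no boxing is involved and the compiler applies the ordinary numeric conversion to each element.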

    Read the article
