Search Results

Search found 4745 results on 190 pages for 'spare parts'.


  • SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database

    - by pinaldave
    During a recent training session, a student asked me whether I knew a place where he could download spatial files for all the countries around the world, and whether there is a way to upload shapefiles to a database. Here is a quick tutorial for it.

    VDS Technologies has spatial files for every location, available for free. You can download the spatial file from here. If you cannot find the spatial file you are looking for, please leave a comment here, and I will send you the necessary details. Unzip the file to a folder and it will have the following content.

    Then, download the Shape2SQL tool from SharpGIS. This is one of the best tools available to convert shapefiles to SQL tables. Afterwards, run the .exe file. When the file is run for the first time, it will ask for the database properties. Provide your database details.

    Select the appropriate shapefiles and the tool will fill in the essential details automatically. If you do not want to create the index on a column, uncheck the box beside it. The screenshot below explains the procedure. You also have to be careful about whether your data is GEOMETRY or GEOGRAPHY. In this example, it is GEOMETRY data.

    Click "Upload to Database". It will show you the upload progress. Once the shapefile is uploaded, close the application and open SQL Server Management Studio (SSMS). Run the following code in the SSMS Query Editor:

        USE Spatial
        GO
        SELECT *
        FROM dbo.world
        GO

    This will show the complete map of the world after you click on Spatial Results in the Spatial tab. The Zoom feature is available in the Spatial Results view. From the "Select label column" dropdown, choose the country name in order to show the country names overlaid on the country borders.

    Let me know if this tutorial is helpful enough. I am planning to write a few more posts about this later.

    Note: Please note that the images displayed here do not reflect the original political boundaries. The data is pretty old and can draw incorrect maps as well. I have personally spotted several parts of the map where some countries are placed a little inaccurately.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Spatial, SQL Tips and Tricks, SQL Utility, T SQL, Technology
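    Once the data is in the table, you can go beyond SELECT * and query the shapes directly with the spatial methods. A small hedged example (the column names CNTRY_NAME and geom are assumptions based on typical shapefile imports; check the actual schema Shape2SQL generated for you):

        USE Spatial
        GO
        -- Return one country's shape and its area; column names are assumed
        SELECT CNTRY_NAME,
               geom.STArea() AS Area,
               geom
        FROM dbo.world
        WHERE CNTRY_NAME = 'India'
        GO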

    Read the article

  • Creating Custom Ajax Control Toolkit Controls

    - by Stephen Walther
    The goal of this blog entry is to explain how you can extend the Ajax Control Toolkit with custom Ajax Control Toolkit controls. I describe how you can create the two halves of an Ajax Control Toolkit control: the server-side control extender and the client-side control behavior. Finally, I explain how you can use the new Ajax Control Toolkit control in a Web Forms page.

    At the end of this blog entry, there is a link to download a Visual Studio 2010 solution which contains the code for two Ajax Control Toolkit controls: SampleExtender and PopupHelpExtender. The SampleExtender contains the minimum skeleton for creating a new Ajax Control Toolkit control. You can use the SampleExtender as a starting point for your custom Ajax Control Toolkit controls. The PopupHelpExtender control is a super simple custom Ajax Control Toolkit control. This control extender displays a help message when you start typing into a TextBox control. The animated GIF below demonstrates what happens when you click into a TextBox which has been extended with the PopupHelp extender.

    Here's a sample of a Web Forms page which uses the control:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowPopupHelp.aspx.cs" Inherits="MyACTControls.Web.Default" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head runat="server">
            <title>Show Popup Help</title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <act:ToolkitScriptManager ID="tsm" runat="server" />

                <%-- Social Security Number --%>
                <asp:Label ID="lblSSN" Text="SSN:" AssociatedControlID="txtSSN" runat="server" />
                <asp:TextBox ID="txtSSN" runat="server" />
                <act:PopupHelpExtender id="ph1"
                    TargetControlID="txtSSN"
                    HelpText="Please enter your social security number."
                    runat="server" />

                <%-- Phone Number --%>
                <asp:Label ID="lblPhone" Text="Phone Number:" AssociatedControlID="txtPhone" runat="server" />
                <asp:TextBox ID="txtPhone" runat="server" />
                <act:PopupHelpExtender id="ph2"
                    TargetControlID="txtPhone"
                    HelpText="Please enter your phone number."
                    runat="server" />
            </div>
            </form>
        </body>
        </html>

    In the page above, the PopupHelp extender is used to extend the functionality of the two TextBox controls. When focus is given to a TextBox control, the popup help message is displayed.

    An Ajax Control Toolkit control extender consists of two parts: a server-side control extender and a client-side behavior. For example, the PopupHelp extender consists of a server-side PopupHelpExtender control (PopupHelpExtender.cs) and a client-side PopupHelp behavior JavaScript script (PopupHelpBehavior.js). Over the course of this blog entry, I describe how you can create both the server-side extender and the client-side behavior.

    Writing the Server-Side Code

    Creating a Control Extender

    You create a control extender by creating a class that inherits from the abstract ExtenderControlBase class. For example, the PopupHelpExtender control is declared like this:

        public class PopupHelpExtender : ExtenderControlBase
        {
        }

    The ExtenderControlBase class is part of the Ajax Control Toolkit. This base class contains all of the common server properties and methods of every Ajax Control Toolkit extender control. The ExtenderControlBase class inherits from the ExtenderControl class. The ExtenderControl class is a standard class in the ASP.NET framework located in the System.Web.UI namespace. This class is responsible for generating a client-side behavior.
    The class generates a call to the Microsoft Ajax Library $create() method which looks like this:

        <script type="text/javascript">
        $create(MyACTControls.PopupHelpBehavior,
            {"HelpText":"Please enter your social security number.","id":"ph1"},
            null, null, $get("txtSSN"));
        </script>

    The JavaScript $create() method is part of the Microsoft Ajax Library. The reference for this method can be found here: http://msdn.microsoft.com/en-us/library/bb397487.aspx

    This method accepts the following parameters:

    - type – The type of client behavior to create. The $create() method above creates a client PopupHelpBehavior.
    - Properties – Enables you to pass initial values for the properties of the client behavior, for example the initial value of the HelpText property. This is how server property values are passed to the client.
    - Events – Enables you to pass client-side event handlers to the client behavior.
    - References – Enables you to pass references to other client components.
    - Element – The DOM element associated with the client behavior. This will be the DOM element associated with the control being extended, such as the txtSSN TextBox.

    The $create() method is generated for you automatically. You just need to focus on writing the server-side control extender class.

    Specifying the Target Control

    All Ajax Control Toolkit extenders inherit a TargetControlID property from the ExtenderControlBase class. This property points at the control that the extender extends. For example, the Ajax Control Toolkit TextBoxWatermark control extends a TextBox, the ConfirmButton control extends a Button, and the Calendar control extends a TextBox.

    You must indicate the type of control which your extender is extending. You indicate the type of control by adding a [TargetControlType] attribute to your control. For example, the PopupHelp extender is declared like this:

        [TargetControlType(typeof(TextBox))]
        public class PopupHelpExtender : ExtenderControlBase
        {
        }

    The PopupHelp extender can be used to extend a TextBox control. If you try to use the PopupHelp extender with another type of control then an exception is thrown. If you want to create an extender control which can be used with any type of ASP.NET control (Button, DataView, TextBox, or whatever) then use the following attribute:

        [TargetControlType(typeof(Control))]

    Decorating Properties with Attributes

    If you decorate a server-side property with the [ExtenderControlProperty] attribute then the value of the property gets passed to the control's client-side behavior. The value of the property gets passed to the client through the $create() method discussed above. The PopupHelp control contains the following HelpText property:

        [ExtenderControlProperty]
        [RequiredProperty]
        public string HelpText
        {
            get { return GetPropertyValue("HelpText", "Help Text"); }
            set { SetPropertyValue("HelpText", value); }
        }

    The HelpText property determines the help text which pops up when you start typing into a TextBox control. Because the HelpText property is decorated with the [ExtenderControlProperty] attribute, any value assigned to this property on the server is passed to the client automatically. For example, if you declare the PopupHelp extender in a Web Forms page like this:

        <asp:TextBox ID="txtSSN" runat="server" />
        <act:PopupHelpExtender id="ph1"
            TargetControlID="txtSSN"
            HelpText="Please enter your social security number."
            runat="server" />

    then the PopupHelpExtender renders a call to the following Microsoft Ajax Library $create() method:

        $create(MyACTControls.PopupHelpBehavior,
            {"HelpText":"Please enter your social security number.","id":"ph1"},
            null, null, $get("txtSSN"));

    You can see this call to the JavaScript $create() method by selecting View Source in your browser. This call to the $create() method calls a method named set_HelpText() automatically and passes the value "Please enter your social security number".

    There are several attributes which you can use to decorate server-side properties, including:

    - ExtenderControlProperty – When a property is marked with this attribute, the value of the property is passed to the client automatically.
    - ExtenderControlEvent – When a property is marked with this attribute, the property represents a client event handler.
    - Required – When a value is not assigned to this property on the server, an error is displayed.
    - DefaultValue – The default value of the property passed to the client.
    - ClientPropertyName – The name of the corresponding property in the JavaScript behavior. For example, the server-side property is named ID (upper-case) and the client-side property is named id (lower-case).
    - IDReferenceProperty – Applied to properties which refer to the IDs of other controls.
    - URLProperty – Calls ResolveClientURL() to convert from a server-side URL to a URL which can be used on the client.
    - ElementReference – Returns a reference to a DOM element by performing a client $get().

    The WebResource, ClientResource, and RequiredScript Attributes

    The PopupHelp extender uses three embedded resources named PopupHelpBehavior.js, PopupHelpBehavior.debug.js, and PopupHelpBehavior.css. The first two files are JavaScript files and the final file is a Cascading Style Sheet file. These files are compiled as embedded resources. You don't need to mark them as embedded resources in your Visual Studio solution because they get added to the assembly when the assembly is compiled by a build task. You can see that these files get embedded into the MyACTControls assembly by using Red Gate's .NET Reflector tool.

    In order to use these files with the PopupHelp extender, you need to work with both the WebResource and the ClientScriptResource attributes. The PopupHelp extender includes the following three WebResource attributes:

        [assembly: WebResource("PopupHelp.PopupHelpBehavior.js", "text/javascript")]
        [assembly: WebResource("PopupHelp.PopupHelpBehavior.debug.js", "text/javascript")]
        [assembly: WebResource("PopupHelp.PopupHelpBehavior.css", "text/css", PerformSubstitution = true)]

    These WebResource attributes expose the embedded resources from the assembly so that they can be accessed by using the ScriptResource.axd or WebResource.axd handlers. The first parameter passed to the WebResource attribute is the name of the embedded resource and the second parameter is the content type of the embedded resource.

    The PopupHelp extender also includes the following ClientScriptResource and ClientCssResource attributes:

        [ClientScriptResource("MyACTControls.PopupHelpBehavior", "PopupHelp.PopupHelpBehavior.js")]
        [ClientCssResource("PopupHelp.PopupHelpBehavior.css")]

    Including these attributes causes the PopupHelp extender to request these resources when you add the PopupHelp extender to a page.
    If you open View Source in a browser on a page which uses the PopupHelp extender then you will see the following link for the Cascading Style Sheet file:

        <link href="/WebResource.axd?d=0uONMsWXUuEDG-pbJHAC1kuKiIMteQFkYLmZdkgv7X54TObqYoqVzU4mxvaa4zpn5H9ch0RDwRYKwtO8zM5mKgO6C4WbrbkWWidKR07LD1d4n4i_uNB1mHEvXdZu2Ae5mDdVNDV53znnBojzCzwvSw2&amp;t=634417392021676003" type="text/css" rel="stylesheet" />

    You will also see the following script include for the JavaScript file:

        <script src="/ScriptResource.axd?d=pIS7xcGaqvNLFBvExMBQSp_0xR3mpDfS0QVmmyu1aqDUjF06TrW1jVDyXNDMtBHxpRggLYDvgFTWOsrszflZEDqAcQCg-hDXjun7ON0Ol7EXPQIdOe1GLMceIDv3OeX658-tTq2LGdwXhC1-dE7_6g2&amp;t=ffffffff88a33b59" type="text/javascript"></script>

    The JavaScript file returned by this request to ScriptResource.axd contains the combined scripts for any and all Ajax Control Toolkit controls in a page. By default, the Ajax Control Toolkit combines all of the JavaScript files required by a page into a single JavaScript file. Combining files in this way really speeds up delivery of the JavaScript files from the web server to the browser. So, by default, there will be only one ScriptResource.axd include for all of the JavaScript files required by a page. If you want to disable script combining, and create separate links, then disable it like this:

        <act:ToolkitScriptManager ID="tsm" runat="server" CombineScripts="false" />

    There is one more important attribute used by Ajax Control Toolkit extenders. The PopupHelp behavior uses the following two RequiredScript attributes to load the JavaScript files which are required by the PopupHelp behavior:

        [RequiredScript(typeof(CommonToolkitScripts), 0)]
        [RequiredScript(typeof(PopupExtender), 1)]

    The first parameter of the RequiredScript attribute represents either the string name of a JavaScript file or the type of an Ajax Control Toolkit control. The second parameter represents the order in which the JavaScript files are loaded (this second parameter is needed because .NET attributes are intrinsically unordered). In this case, the RequiredScript attributes will load the JavaScript files associated with the CommonToolkitScripts type and the JavaScript files associated with the PopupExtender, in that order. The PopupHelp behavior depends on these JavaScript files.

    Writing the Client-Side Code

    The PopupHelp extender uses a client-side behavior written with the Microsoft Ajax Library.
    Here is the complete code for the client-side behavior:

        (function () {
            // The unique name of the script registered with the
            // client script loader
            var scriptName = "PopupHelpBehavior";

            function execute() {

                Type.registerNamespace('MyACTControls');

                MyACTControls.PopupHelpBehavior = function (element) {
                    /// <summary>
                    /// A behavior which displays popup help for a textbox
                    /// </summary>
                    /// <param name="element" type="Sys.UI.DomElement">The element to attach to</param>
                    MyACTControls.PopupHelpBehavior.initializeBase(this, [element]);

                    this._textbox = Sys.Extended.UI.TextBoxWrapper.get_Wrapper(element);
                    this._cssClass = "ajax__popupHelp";
                    this._popupBehavior = null;
                    this._popupPosition = Sys.Extended.UI.PositioningMode.BottomLeft;
                    this._popupDiv = null;
                    this._helpText = "Help Text";
                    this._element$delegates = {
                        focus: Function.createDelegate(this, this._element_onfocus),
                        blur: Function.createDelegate(this, this._element_onblur)
                    };
                }

                MyACTControls.PopupHelpBehavior.prototype = {

                    initialize: function () {
                        MyACTControls.PopupHelpBehavior.callBaseMethod(this, 'initialize');
                        // Add event handlers for focus and blur
                        var element = this.get_element();
                        $addHandlers(element, this._element$delegates);
                    },

                    _ensurePopup: function () {
                        if (!this._popupDiv) {
                            var element = this.get_element();
                            var id = this.get_id();
                            this._popupDiv = $common.createElementFromTemplate({
                                nodeName: "div",
                                properties: { id: id + "_popupDiv" },
                                cssClasses: ["ajax__popupHelp"]
                            }, element.parentNode);
                            this._popupBehavior = new $create(Sys.Extended.UI.PopupBehavior,
                                { parentElement: element }, {}, {}, this._popupDiv);
                            this._popupBehavior.set_positioningMode(this._popupPosition);
                        }
                    },

                    get_HelpText: function () {
                        return this._helpText;
                    },

                    set_HelpText: function (value) {
                        if (this._HelpText != value) {
                            this._helpText = value;
                            this._ensurePopup();
                            this._popupDiv.innerHTML = value;
                            this.raisePropertyChanged("Text")
                        }
                    },

                    _element_onfocus: function (e) {
                        this.show();
                    },

                    _element_onblur: function (e) {
                        this.hide();
                    },

                    show: function () {
                        this._popupBehavior.show();
                    },

                    hide: function () {
                        if (this._popupBehavior) {
                            this._popupBehavior.hide();
                        }
                    },

                    dispose: function () {
                        var element = this.get_element();
                        $clearHandlers(element);
                        if (this._popupBehavior) {
                            this._popupBehavior.dispose();
                            this._popupBehavior = null;
                        }
                    }
                };

                MyACTControls.PopupHelpBehavior.registerClass('MyACTControls.PopupHelpBehavior', Sys.Extended.UI.BehaviorBase);
                Sys.registerComponent(MyACTControls.PopupHelpBehavior, { name: "popupHelp" });

            } // execute

            if (window.Sys && Sys.loader) {
                Sys.loader.registerScript(scriptName, ["ExtendedBase", "ExtendedCommon"], execute);
            } else {
                execute();
            }

        })();

    In the following sections, we'll discuss how this client-side behavior works.

    Wrapping the Behavior for the Script Loader

    The behavior is wrapped with the following script:

        (function () {
            // The unique name of the script registered with the
            // client script loader
            var scriptName = "PopupHelpBehavior";

            function execute() {
                // Behavior Content
            } // execute

            if (window.Sys && Sys.loader) {
                Sys.loader.registerScript(scriptName, ["ExtendedBase", "ExtendedCommon"], execute);
            } else {
                execute();
            }
        })();

    This code is required by the Microsoft Ajax Library script loader. You need this code if you plan to use a behavior directly from client-side code and you want to use the script loader. If you plan to use your code only in the context of the Ajax Control Toolkit then you can leave out this code.
    Registering a JavaScript Namespace

    The PopupHelp behavior is declared within a namespace named MyACTControls. In the code above, this namespace is created with the following registerNamespace() method:

        Type.registerNamespace('MyACTControls');

    JavaScript does not have any built-in way of creating namespaces to prevent naming conflicts. The Microsoft Ajax Library extends JavaScript with support for namespaces. You can learn more about the registerNamespace() method here: http://msdn.microsoft.com/en-us/library/bb397723.aspx

    Creating the Behavior

    The actual PopupHelp behavior is created with the code shown in full above, and it has two parts. The first part of the code defines the constructor function for the PopupHelp behavior. This is a factory method which returns an instance of a PopupHelp behavior:

        MyACTControls.PopupHelpBehavior = function (element) {
            ...
        }

    The second part of the code modifies the prototype for the PopupHelp behavior:

        MyACTControls.PopupHelpBehavior.prototype = {
            ...
        };

    Any code which is particular to a single instance of the PopupHelp behavior should be placed in the constructor function. For example, the default value of the _helpText field is assigned in the constructor function:

        this._helpText = "Help Text";

    Any code which is shared among all instances of the PopupHelp behavior should be added to the PopupHelp behavior's prototype.
    For example, the public HelpText property is added to the prototype:

        get_HelpText: function () {
            return this._helpText;
        },

        set_HelpText: function (value) {
            if (this._HelpText != value) {
                this._helpText = value;
                this._ensurePopup();
                this._popupDiv.innerHTML = value;
                this.raisePropertyChanged("Text")
            }
        },

    Registering a JavaScript Class

    After you create the PopupHelp behavior, you must register the behavior as a class by using the Microsoft Ajax registerClass() method like this:

        MyACTControls.PopupHelpBehavior.registerClass('MyACTControls.PopupHelpBehavior', Sys.Extended.UI.BehaviorBase);

    This call to registerClass() registers the PopupHelp behavior as a class which derives from the base Sys.Extended.UI.BehaviorBase class. Like the ExtenderControlBase class on the server side, the BehaviorBase class on the client side contains methods used by every behavior. The documentation for the BehaviorBase class can be found here: http://msdn.microsoft.com/en-us/library/bb311020.aspx

    The most important methods and properties of the BehaviorBase class are the following:

    - dispose() – Use this method to clean up all resources used by your behavior. In the case of the PopupHelp behavior, the dispose() method is used to remove the event handlers created by the behavior and to dispose of the Popup behavior.
    - get_element() – Use this property to get the DOM element associated with the behavior; in other words, the DOM element which the behavior extends.
    - get_id() – Use this property to get the ID of the current behavior.
    - initialize() – Use this method to initialize the behavior. This method is called after all of the properties are set by the $create() method.

    Creating Debug and Release Scripts

    You might have noticed that the PopupHelp behavior uses two scripts named PopupHelpBehavior.js and PopupHelpBehavior.debug.js. However, you never create these two scripts. Instead, you only create a single script named PopupHelpBehavior.pre.js. The "pre" in PopupHelpBehavior.pre.js stands for preprocessor. When you build the Ajax Control Toolkit (or the sample Visual Studio solution at the end of this blog entry), a build task named JSBuild generates the PopupHelpBehavior.js release script and the PopupHelpBehavior.debug.js debug script automatically.

    The JSBuild preprocessor supports the following directives: #IF, #ELSE, #ENDIF, #INCLUDE, #LOCALIZE, #DEFINE, and #UNDEFINE. The preprocessor directives are used to mark code which should only appear in the debug version of the script. The directives are used extensively in the Microsoft Ajax Library. For example, the Microsoft Ajax Library Array.contains() method is created like this:

        $type.contains = function Array$contains(array, item) {
            //#if DEBUG
            var e = Function._validateParams(arguments, [
                { name: "array", type: Array, elementMayBeNull: true },
                { name: "item", mayBeNull: true }
            ]);
            if (e) throw e;
            //#endif
            return (indexOf(array, item) >= 0);
        }

    Notice that you add each of the preprocessor directives inside a JavaScript comment. The comment prevents Visual Studio from getting confused with its Intellisense. The release version, but not the debug version, of the PopupHelpBehavior script is also minified automatically by the Microsoft Ajax Minifier. The minifier is invoked by a build step in the project file.

    Conclusion

    The goal of this blog entry was to explain how you can create custom Ajax Control Toolkit controls. In the first part of this blog entry, you learned how to create the server-side portion of an Ajax Control Toolkit control.
You learned how to derive a new control from the ExtenderControlBase class and decorate its properties with the necessary attributes. Next, in the second part of this blog entry, you learned how to create the client-side portion of an Ajax Control Toolkit control by creating a client-side behavior with JavaScript. You learned how to use the methods of the Microsoft Ajax Library to extend your client behavior from the BehaviorBase class. Download the Custom ACT Starter Solution
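    As a point of reference for starting your own extender, here is a minimal server-side skeleton along the lines of the SampleExtender described above. This is a hedged sketch, not the actual starter-kit code (which is in the download); the class, property, and resource names are illustrative:

        using System.ComponentModel;
        using System.Web.UI.WebControls;
        using AjaxControlToolkit;

        // Minimal Ajax Control Toolkit extender: extends a TextBox and pushes
        // one property value to a client-side behavior named in the attribute.
        [TargetControlType(typeof(TextBox))]
        [ClientScriptResource("MyACTControls.SampleBehavior", "MyACTControls.SampleBehavior.js")]
        public class SampleExtender : ExtenderControlBase
        {
            // Passed to the client behavior through the generated $create() call
            [ExtenderControlProperty]
            [DefaultValue("")]
            public string Message
            {
                get { return GetPropertyValue("Message", ""); }
                set { SetPropertyValue("Message", value); }
            }
        }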

    Read the article

  • Customize Team Build 2010 – Part 11: Speed up opening my build process template

    In the series the following parts have been published:

    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    When you open the build process template, it takes 15 to 30 seconds until it opens. When you are in the process of creating your custom build process template, this can be very frustrating. Thanks to Ed Blankenship, who found a little trick to speed up the opening of the template: it now only takes a few seconds.

    Create a file called empty.xaml and place the following text in it:

        <Activity xmlns="http://schemas.microsoft.com/netfx/2009/xaml/activities">
        </Activity>

    Open this file in Visual Studio. Then:

    1. In the toolbox panel, add a new tab called "Team Foundation Build Activities". Note that it is important to get the tab name correct, because if it is not correct the activities will be reloaded.
    2. Inside the new tab, right-click and select "Choose Items".
    3. Click the Browse button.
    4. Load the file C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.TeamFoundation.Build.Workflow\v4.0_10.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.Build.Workflow.dll
    5. Click OK to add the toolbox items to the tab.
    6. Create another new tab called "Team Foundation LabManagement Activities".
    7. Inside the new tab, right-click and select "Choose Items".
    8. Click the Browse button.
    9. Load the file C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.TeamFoundation.Lab.Workflow.Activities\v4.0_10.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.Lab.Workflow.Activities.dll
    10. Click OK to add the toolbox items to the tab.

    You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings with it more complexity.

    Why? Well, by its very nature, there are more "moving parts" to monitor and manage: physical, virtual and logical hosts, fault detection and failover software, redundant networking components; the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great.

    When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to administer properly.

    Over the past 9 months, monitoring and management has been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available.

    In the webinar, the team will cover:

    * NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of proactively monitoring and optimizing database performance and availability.

    * MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!

    * MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best-practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.

    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout.

    You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software).

    While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!
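    To give a flavor of the NDBINFO feature mentioned above: once connected to a SQL node of a MySQL Cluster 7.1 (or later) deployment, the status tables can be queried like ordinary tables. A small sketch (the memoryusage table is part of the 7.1 ndbinfo schema; exact columns can vary by version):

        -- Data and index memory usage, reported per data node
        SELECT node_id, memory_type, used, total
        FROM ndbinfo.memoryusage;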

    Read the article

  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together.

    In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal.

    Installing Durandal

    First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal

    The Wiki, located at the GitHub project, contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started.

    Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console:

        Install-Package Durandal.StarterKit

    As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies, including:

    · jQuery
    · Knockout
    · Sammy
    · Twitter Bootstrap

    The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic, like this:

        bundles.Add(
            new ScriptBundle("~/scripts/vendor")
                .Include("~/Scripts/jquery-{version}.js")
                .Include("~/Scripts/knockout-{version}.js")
                .Include("~/Scripts/sammy-{version}.js")
                // .Include("~/Scripts/jquery-1.9.0.min.js")
                // .Include("~/Scripts/knockout-2.2.1.js")
                // .Include("~/Scripts/sammy-0.7.4.min.js")
                .Include("~/Scripts/bootstrap.min.js")
        );

    The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files:

    · durandal – This folder contains the actual Durandal JavaScript library.
    · viewmodels – This folder contains all of your application's view models.
    · views – This folder contains all of your application's views.
    · main.js – This file contains all of the JavaScript startup code for your app, including the client-side routing configuration.
    · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS).
    For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder which contains the Durandal library.

    Creating the ASP.NET MVC Controller and View

    A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App: when you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view.

    For the Movies app, I created the following ASP.NET MVC Home controller:

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }
        }

    There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app:

        @{
            Layout = null;
        }
        <!DOCTYPE html>
        <html>
        <head>
            <title>Index</title>
        </head>
        <body>
            <div id="applicationHost">
                Loading app....
            </div>
            @Scripts.Render("~/scripts/vendor")
            <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>
        </body>
        </html>

    Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and the opening and closing <html> tags twice.

    Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text "Loading app…".

    Next, notice that the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library, such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs file as described above or Durandal will attempt to load an old version of jQuery, throw a JavaScript exception, and stop working.

    Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view:

        <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>

    The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick off your Durandal app.

    Creating the Durandal Main.js File

    The Durandal main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here's what the main.js file looks like in the case of the Movies app:

        require.config({
            paths: { 'text': 'durandal/amd/text' }
        });

        define(function (require) {
            var app = require('durandal/app'),
                viewLocator = require('durandal/viewLocator'),
                system = require('durandal/system'),
                router = require('durandal/plugins/router');

            //>>excludeStart("build", true);
            system.debug(true);
            //>>excludeEnd("build");

            app.start().then(function () {
                //Replace 'viewmodels' in the moduleId with 'views' to locate the view.
                //Look for partial views in a 'views' folder in the root.
                viewLocator.useConvention();

                //configure routing
                router.useConvention();
                router.mapNav("movies/show");
                router.mapNav("movies/add");
                router.mapNav("movies/details/:id");

                app.adaptToDevice();

                //Show the app by setting the root view model for our application with a transition.
                app.setRoot('viewmodels/shell', 'entrance');
            });
        });

    There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging, which looks like this:

        //>>excludeStart("build", true);
        system.debug(true);
        //>>excludeEnd("build");

    This code enables debugging for your Durandal app, which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes. (The funny looking //>> symbols around the system.debug() call are RequireJS optimizer pragmas.)

    The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three pages: the movies show, add, and details pages:

        //configure routing
        router.useConvention();
        router.mapNav("movies/show");
        router.mapNav("movies/add");
        router.mapNav("movies/details/:id");

    The route for movie details includes a route parameter named id. Later, we will use the id parameter to look up and display the details for the right movie.

    Finally, the main.js file above contains the following line of code:

        //Show the app by setting the root view model for our application with a transition.
        app.setRoot('viewmodels/shell', 'entrance');

    This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I'll discuss the shell in the next section.

    Creating the Durandal Shell

    You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links.

    The Durandal shell is composed of two parts: a JavaScript file and an HTML file. Here's what the HTML file looks like for the Movies app:

        <h1>Movies App</h1>
        <div class="container-fluid page-host">
            <!--ko compose: {
                model: router.activeItem, //wiring the router
                afterCompose: router.afterCompose, //wiring the router
                transition: 'entrance', //use the 'entrance' transition when switching views
                cacheViews: true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models)
            }--><!--/ko-->
        </div>

    And here is what the JavaScript file looks like:

        define(function (require) {
            var router = require('durandal/plugins/router');
            return {
                router: router,
                activate: function () {
                    return router.activate('movies/show');
                }
            };
        });

    The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want a different default Durandal page, then pass the name of a different page to the router.activate() method.

    Creating the Movies Show Page

    Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view.
    The view contains all of the HTML markup for rendering the view model. Let's start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this:

        define(function (require) {
            var moviesRepository = require("repositories/moviesRepository");

            return {
                movies: ko.observable(),
                activate: function () {
                    this.movies(moviesRepository.listMovies());
                }
            };
        });

    You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts. At the top of your module, you retrieve all of the modules which your module requires. The code above depends on another RequireJS module named repositories/moviesRepository (a sketch of this module appears at the end of this post). Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate.

    The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model's movies property.

    The HTML for the movies show page looks like this:

        <table>
            <thead>
                <tr>
                    <th>Title</th><th>Director</th>
                </tr>
            </thead>
            <tbody data-bind="foreach: movies">
                <tr>
                    <td data-bind="text: title"></td>
                    <td data-bind="text: director"></td>
                    <td><a data-bind="attr: { href: '#/movies/details/' + id }">Details</a></td>
                </tr>
            </tbody>
        </table>
        <a href="#/movies/add">Add Movie</a>

    Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file, which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (to learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table.

    Notice that the page includes a link to a page for adding a new movie. The link uses the following URL, which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually.

    Creating the Movies Add Page

    The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here's the view model for the add page:

        define(function (require) {
            var app = require('durandal/app');
            var router = require('durandal/plugins/router');
            var moviesRepository = require("repositories/moviesRepository");

            return {
                movieToAdd: {
                    title: ko.observable(),
                    director: ko.observable()
                },
                activate: function () {
                    this.movieToAdd.title("");
                    this.movieToAdd.director("");
                    this._movieAdded = false;
                },
                canDeactivate: function () {
                    if (this._movieAdded == false) {
                        return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']);
                    } else {
                        return true;
                    }
                },
                addMovie: function () {
                    // Add movie to db
                    moviesRepository.addMovie(ko.toJS(this.movieToAdd));
                    // flag new movie
                    this._movieAdded = true;
                    // return to list of movies
                    router.navigateTo("#/movies/show");
                }
            };
        });

    The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods:
    1. activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties.
    2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false then navigation is cancelled.
    3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository.

    I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form.

    The view for the add movie page looks like this:

        <form data-bind="submit: addMovie">
            <fieldset>
                <legend>Add Movie</legend>
                <div>
                    <label>
                        Title:
                        <input data-bind="value: movieToAdd.title" required />
                    </label>
                </div>
                <div>
                    <label>
                        Director:
                        <input data-bind="value: movieToAdd.director" required />
                    </label>
                </div>
                <div>
                    <input type="submit" value="Add" />
                    <a href="#/movies/show">Cancel</a>
                </div>
            </fieldset>
        </form>

    I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted.

    Creating the Movies Details Page

    You navigate to the movies details page by clicking the Details link which appears next to each movie in the movies show page. The Details links pass the movie ids to the details page:

        #/movies/details/0
        #/movies/details/1
        #/movies/details/2

    Here's what the view model for the movies details page looks like:

        define(function (require) {
            var router = require('durandal/plugins/router');
            var moviesRepository = require("repositories/moviesRepository");

            return {
                movieToShow: {
                    title: ko.observable(),
                    director: ko.observable()
                },
                activate: function (context) {
                    // Grab movie from repository
                    var movie = moviesRepository.getMovie(context.id);
                    // Add to view model
                    this.movieToShow.title(movie.title);
                    this.movieToShow.director(movie.director);
                }
            };
        });

    Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository, and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings:

        <div>
            <h2 data-bind="text: movieToShow.title"></h2>
            directed by <span data-bind="text: movieToShow.director"></span>
        </div>

    Summary

    The goal of this blog entry was to walk through building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done.
    I also appreciate the fact that Durandal did not attempt to re-invent the wheel and that it leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These are powerful libraries and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use these libraries together without losing any of their power.

    Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app's code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing widgets from a controller and view.

    You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App
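    For reference, the repositories/moviesRepository module that the view models above depend on is not listed in this walkthrough (the real one ships with the download). A minimal in-memory sketch of what such a module might look like; the actual implementation may differ:

        define(function (require) {
            // In-memory movie store; a real app would call an ASP.NET Web API service
            var movies = [
                { id: 0, title: "Star Wars", director: "Lucas" },
                { id: 1, title: "Jaws", director: "Spielberg" }
            ];

            return {
                listMovies: function () {
                    return movies;
                },
                getMovie: function (id) {
                    // Route parameters arrive as strings, so compare loosely
                    return ko.utils.arrayFirst(movies, function (m) {
                        return m.id == id;
                    });
                },
                addMovie: function (movie) {
                    movie.id = movies.length;
                    movies.push(movie);
                }
            };
        });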

    Read the article

  • USB packets - receive wrong data

    - by regorianer
    I have a little Python script which shows me the packets of an EnOcean device and does some events depending on the packet type. Unfortunately it doesn't work, because I'm getting wrong packets.

    Parts of the Python script (using pySerial):

        ser = serial.Serial('/dev/ttyUSB1', 57600, bytesize=serial.EIGHTBITS,
                            timeout=1, parity=serial.PARITY_NONE, rtscts=0)
        print 'clearing buffer'
        s = ser.read(10000)
        print 'start read'
        while 1:
            s = ser.read(1)
            for character in s:
                sys.stdout.write(" %s" % character.encode('hex'))
        print 'end'
        ser.close()

    Output at baud rate 57600:

        e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00
        e0 e0 e0 e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00 e0 e0 e0

    Output at baud rate 9600:

        a5 5a 0b 05 10 00 00 00 00 15 c4 56 20 6f
        a5 5a 0b 05 00 00 00 00 00 15 c4 56 20 5f

    Linux terminal at baud rate 57600:

        $ stty -F /dev/ttyUSB1 57600
        $ stty < /dev/ttyUSB1
        speed 57600 baud; line = 0; eof = ^A; min = 0; time = 0;
        -brkint -icrnl -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke
        $ while (true) do cat -A /dev/ttyUSB1 ; done > myfile
        $ hexdump -C myfile
        00000000 4d 2d 60 4d 2d 60 5e 40 4d 2d 60 5e 40 4d 2d 60 |M-M-^@M-^@M-|
        00000010 4d 2d 60 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 4d 2d |M-M-M-M-^@M-|
        00000020 60 4d 2d 60 5e 40 5e 40 5e 40 5e 40 5e 40 5e 40 |M-^@^@^@^@^@^@|
        00000030 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 5e 40 5e |^@M-M-M-`^@^@^|
        00000040 40 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 |@^@M-M-M-`|
        0000004c

    Linux terminal at baud rate 9600:

        $ hexdump -C myfile2
        00000000 5e 40 5e 55 4d 2d 44 56 30 4d 2d 3f 5e 40 5e 40 |^@^UM-DV0M-?^@^@|
        00000010 5e 55 4d 2d 44 56 20 5f |^UM-DV _|
        00000018

    The specification says the packet structure is:

        0x55    sync byte
        0xNNNN  data length (2 bytes)
        0x07    optional data length byte
        0x01    packet type byte
        CRC, data, optional data, and another CRC

    But I'm not getting this packet structure, and the output of the Python script differs from the one I get via the terminal. I also rewrote the Python part in C, but the output is the same as with Python.

    The USB receiver is a BSC-BoR USB receiver/sender. The EnOcean device is a simple button.
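    For what it's worth, here is a minimal parser for the packet layout quoted from the specification above (0x55 sync, two length bytes, optional length, type, CRCs). This is only a sketch derived from that quoted structure; it is untested against the device and does not verify the CRCs:

        import serial

        def read_packet(ser):
            # Wait for the 0x55 sync byte described in the spec
            # (a timeout makes read() return '', so this loops until sync arrives)
            while ser.read(1) != '\x55':
                pass
            header = ser.read(4)  # 2 bytes data length, 1 byte optional length, 1 byte type
            data_len = (ord(header[0]) << 8) | ord(header[1])
            opt_len = ord(header[2])
            pkt_type = ord(header[3])
            ser.read(1)  # header CRC (not checked here)
            payload = ser.read(data_len + opt_len)
            ser.read(1)  # data CRC (not checked here)
            return pkt_type, payload

        ser = serial.Serial('/dev/ttyUSB1', 57600, timeout=1)
        print read_packet(ser)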

    Read the article

  • Oracle WebCenter Portal: Pagelet Producer – What’s New in 11.1.1.6.0 Release

    - by kellsey.ruppel
    Igor Plyakov, Sr. Principal Product Marketing Manager, is back to share what's new in Oracle WebCenter Portal: Pagelet Producer. In February 2012 Oracle released 11g Release 1 (11.1.1.6.0) for WebCenter Portal. The Pagelet Producer (aka Ensemble) that came out with this release added support for several new capabilities that are described in this post.

    As of the 11.1.1.5.0 release, the Pagelet Producer can expose WSRP and JPDK portlets as pagelets that can then be consumed in any portal or any third-party application that does not have a WSRP consumer. The Pagelet Producer team is now working on simplifying the use of pagelets in WebCenter Sites.

    To expose WSRP portlets, a new producer should be registered with the Pagelet Producer, which can be done using Enterprise Manager, WLST, or the Pagelet Producer Administration Console (for details see Section 25.9 of the Administrator's Guide for Oracle WebCenter Portal; a WLST sketch appears at the end of this post). If the producer requires authentication, the Pagelet Producer allows you to select and use one of the standard WSS token profiles. After registration is finished, a new resource is created and automatically populated with pagelets that represent the portlets associated with the WSRP endpoint. For the 11.1.1.6.0 release we completed extensive testing of consuming all WebCenter Services that are exposed as WSRP portlets by the E2.0 Producer and delivering them as pagelets to the WebCenter Interaction portal.

    In the Pagelet Producer 11.1.1.6.0 release we added an OpenSocial container that allows consuming gadgets from other OpenSocial containers, e.g. iGoogle, and exposing them as pagelets. You can also use the Pagelet Producer to host OpenSocial gadgets that leverage the OpenSocial APIs it supports: People, Activities, Appdata, and Pub-Sub. Note that People and Activities expose the People Connections and Activity Stream from WebCenter Portal, i.e. to use these features the Pagelet Producer requires a connection to the WebCenter Portal schema. Pub-Sub allows leveraging the OpenAJAX Hub API for inter-gadget communication.

    In addition to these major new additions, in the Pagelet Producer 11.1.1.6.0 release we also extended several functional modules:

    The Clipping module was extended to support clipping of multiple regions on a web resource page and then re-assembly of these separately clipped regions into a single pagelet.

    The auto-login feature can now be applied to web resources protected with Kerberos authentication; you will find this new functionality handy for consuming SharePoint web parts.

    The logging module now supports logging the full HTTP traffic between the Pagelet Producer and the proxied web resource.

    Finally, like the rest of the WebCenter Portal stack, the Pagelet Producer 11.1.1.6.0 can run on IBM WebSphere Application Server.
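    As mentioned above, producer registration can also be scripted with WLST. A hedged sketch follows; the registerWSRPProducer command is documented for WebCenter WLST, but the connection details and argument values below are placeholders, so check the Administrator's Guide for your release:

        # WLST (Jython). Connect to the Administration Server first;
        # host, port, and credentials are placeholders.
        connect('weblogic', '<password>', 't3://adminhost:7001')

        # Register a WSRP producer; the appName and WSDL URL are example values
        registerWSRPProducer(appName='webcenter', name='MyWSRPProducer',
                             url='http://producer-host:port/portlets/wsrp2?WSDL')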

    Read the article

  • Talks Submitted for Ann Arbor Day of .NET 2010

    - by PSteele
    Just submitted my session abstracts for Ann Arbor's Day of .NET 2010.

    Getting up to speed with .NET 3.5 – Just in time for 4.0!

    Yes, C# 4.0 is just around the corner. But if you haven't had the chance to use C# 3.5 extensively, this session will start from the ground up with the new features of 3.5. We'll assume everyone is coming from C# 2.0. This session will show you the details of extension methods, partial methods and more. We'll also show you how LINQ – Language Integrated Query – can help decrease your development time and increase your code's readability. If time permits, we'll look at some .NET 4.0 features, but the goal is to get you up to speed on .NET 3.5.

    Go Ahead and Mock Me!

    When testing specific parts of your application, there can be a lot of external dependencies required to make your tests work. Writing fake or mock objects that act as stand-ins for the real dependencies can waste a lot of time. This is where mocking frameworks come in. In this session, Patrick Steele will introduce you to Rhino Mocks, a popular mocking framework for .NET. You'll see how a mocking framework can make writing unit tests easier and lead to less brittle unit tests.

    Inversion of Control: Who's got control and why is it being inverted?

    No doubt you've heard of "Inversion of Control". If not, maybe you've heard the term "Dependency Injection"? The two usually go hand-in-hand. Inversion of Control (IoC) along with Dependency Injection (DI) helps simplify the connections and lifetime of all of the dependent objects in the software you write. In this session, Patrick Steele will introduce you to the concepts of IoC and DI and will show you how to use a popular IoC container (Castle Windsor) to help simplify the way you build software and how your objects interact with each other.

    If you're interested in speaking, hurry up and get your submissions in! The deadline is Monday, April 5th!

    Technorati Tags: .NET, Ann Arbor, Day of .NET
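    As a small taste of the 3.5 features the first session targets (extension methods plus LINQ), here's an illustrative sketch of my own; it is not taken from the session materials:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class StringExtensions
        {
            // An extension method: callable as if it were declared on string itself
            public static bool IsLongerThan(this string s, int length)
            {
                return s != null && s.Length > length;
            }
        }

        class Program
        {
            static void Main()
            {
                var talks = new List<string> { "LINQ", "Mocks", "IoC", "Extension Methods" };

                // A LINQ query that uses the extension method above
                var longTitles = talks.Where(t => t.IsLongerThan(3))
                                      .OrderBy(t => t);

                foreach (var title in longTitles)
                    Console.WriteLine(title);
            }
        }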

    Read the article

  • The broken Promise of the Mobile Web

    - by Rick Strahl
    High-end mobile devices have been with us now for almost 7 years and they have utterly transformed the way we access information. Mobile phones and smartphones that have access to the Internet and host smart applications are in the hands of a large percentage of the population of the world. In many places, even very remote ones, cell phones and even smart phones are a common sight. I'll never forget when I was in India in 2011, up in the Southern Indian mountains, riding an elephant out of a tiny local village, with an elephant herder riding atop the elephant in front of us. He was dressed in traditional garb with the loin wrap and head cloth/turban, as were quite a few of the locals in this small, out of the way and not so touristy village. So we're slowly trundling along in the forest and he's lazily using his stick to guide the elephant and… 10 minutes in he pulls out his cell phone from his sash and starts texting. In the middle of texting, a huge pig jumps out from the side of the trail and he takes a picture of it running across our path in the jungle! So yeah, mobile technology is very pervasive and it's reached into even very buried and unexpected parts of this world.

Apps are still King

Apps currently rule the roost when it comes to mobile devices and the applications that run on them. If there's something that you need on your mobile device, your first step usually is to look for an app, not use your browser. But native app development remains a pain in the butt, with the requirement to support 2 or 3 completely separate platforms. There are solutions that try to bridge that gap. Xamarin is on a tear at the moment, providing their cross-device toolkit to build applications using C#. While Xamarin tools are impressive – and also *very* expensive – they only address part of the development madness that is app development. There are still specific device integration issues, dealing with the different developer programs, security and certificate setups and all that other noise that surrounds app development. There's also PhoneGap/Cordova, which provides a hybrid solution that involves creating local HTML/CSS/JavaScript based applications, and then packaging them to run in a specialized App container that can run on most mobile device platforms using a WebView interface. This allows for the use of HTML technology, but it also still requires all the setup, configuration of APIs, security keys, and the certification, submission and deployment process just like native applications – you actually lose many of the benefits that Web based apps bring. The big selling point of Cordova is that you get to use HTML and have the ability to build your UI once for all platforms and run across all of them – but the rest of the app process remains in place. Apps can be a big pain to create and manage, especially when we are talking about specialized or vertical business applications that aren't geared at the mainstream market and that don't fit the 'store' model. If you're building a small intra-department application you don't want to deal with multiple device platforms and certification etc. for various public or corporate app stores. That model is simply not a good fit, both from the development and the deployment perspective.
Even for commercial, big ticket apps, HTML as a UI platform offers many advantages over native: from write-once run-anywhere, to remote maintenance and a single point of management, to having full control over the application as opposed to having the app store overlords censor you. In a lot of ways, Web based HTML/CSS/JavaScript applications have so much potential for building better solutions based on existing Web technologies, for the very same reasons a lot of content moved off the desktop to the Web years ago. To me the Web as a mobile platform makes perfect sense, but the reality of today's Mobile Web unfortunately looks a little different…

Where's the Love for the Mobile Web?

Yet here we are in the middle of 2014, nearly 7 years after the first iPhone was released and brought the promise of rich interactive information at your fingertips, and we still don't really have a solid mobile Web platform. I know what you're thinking: "But we have lots of HTML/JavaScript/CSS features that allow us to build nice mobile interfaces". I agree to a point – it's actually quite possible to build nice looking, rich and capable Web UI today. We have media queries to deal with varied display sizes, CSS transforms for smooth animations and transitions, tons of CSS improvements in CSS 3 that facilitate rich layout, a host of APIs geared towards mobile device features and lately even a number of JavaScript framework choices that facilitate development of multi-screen apps in a consistent manner. Personally I've been working a lot with AngularJs and heavily modified Bootstrap themes to build mobile first UIs, and that's been working very well to provide highly usable and attractive UI for typical mobile business applications. From the pure UI perspective things actually look very good.

Not just about the UI

But it's not just about the UI – it's also about integration with the mobile device. When it comes to putting all those pieces together into what amounts to a consolidated platform to build mobile Web applications, I think we still have a ways to go… there are a lot of missing pieces to make it all work together and integrate with the device more smoothly, and more importantly to make it work uniformly across the majority of devices. I think there are a number of reasons for this.

Slow Standards Adoption

HTML standards implementation and ratification have been dreadfully slow, and browser vendors all seem to pick and choose different pieces of the technology they implement. The end result is that we have a capable UI platform that's missing some of the infrastructure pieces to make it whole on mobile devices. There's lots of potential, but it's lacking that final 10% needed to build truly compelling mobile applications that can compete favorably with native applications. Some of it is the fragmentation of browsers and the slow evolution of the mobile specific HTML APIs. A host of mobile standards exist, but many of them are in the early review stage, have been stuck there for long periods of time, and seem to move at a glacial pace. Browser vendors seem even slower to implement them, and for good reason – non-ratified standards mean that implementations may change, and vendor implementations tend to be experimental and likely have to be changed later. Neither vendors nor developers are keen on changing standards. This is the typical chicken and egg scenario, but without some forward momentum from some party we end up stuck in the mud.
It seems that either the standards bodies or the vendors need to carry the torch forward, and that doesn't seem to be happening quickly enough.

Mobile Device Integration just isn't good enough

Current standards are not far reaching enough to address a number of the use case scenarios necessary for many mobile applications. While not every application needs to have access to all mobile device features, almost every mobile application could benefit from some integration with other parts of the mobile device platform. Integration with GPS, phone, media, messaging, notifications, linking and the contacts system are benefits that are unique to mobile applications and could be widely used, but are mostly (with the exception of GPS) inaccessible for Web based applications today. Unfortunately, trying to do most of this today with only a mobile Web browser is a losing battle. Aside from PhoneGap/Cordova's app centric model with its own custom API accessing mobile device features, and the token exception of the GeoLocation API, most device integration features are not widely supported by the current crop of mobile browsers. For example, there's no usable messaging API that allows access to SMS or contacts from HTML. Even obvious components like the Media Capture API are only partially implemented by mobile devices. There are alternatives and workarounds for some of these interfaces using browser specific code, but that's mighty ugly and something that I thought we were trying to leave behind with newer browser standards. But it's not quite working out that way. It's utterly perplexing to me that mobile standards like Media Capture and Streams, Media Gallery Access, Responsive Images, Messaging API, and Contacts Manager API have only minimal or no traction at all today. Keep in mind we've had mobile browsers for nearly 7 years now, and yet we still have to think about how to get access to an image from the image gallery or the camera on some devices? Heck, Windows Phone IE Mobile just gained the ability to upload images recently in the Windows 8.1 Update – that's a feature that HTML has had for 20 years! These are simple concepts and common problems that should have been solved a long time ago. It's extremely frustrating to build 90% of a mobile Web app with relative ease and then hit a brick wall for the remaining 10%, which often can be show stoppers. The remaining 10% has to do with platform integration, browser differences and working around the limitations that browsers and 'pinned' applications impose on HTML applications. The maddening part is that these limitations seem arbitrary, as they could easily work on all mobile platforms. For example, SMS has a URL Moniker interface that sort of works on Android, works badly with iOS (only if the address is already in the contact list) and not at all on Windows Phone. There's no reason this shouldn't work universally using the same interface – after all, all phones have supported SMS since before the year 2000!

But, it doesn't have to be this way

Change can happen very quickly. Take the GeoLocation API for example. Geolocation took off at the very beginning of the mobile device era, and today it works well, provides the necessary security (a big concern for many mobile APIs), and is supported by just about all major mobile and even desktop browsers. It handles security concerns via prompts to avoid unwanted access, which is a model that would work for most other device APIs in a similar fashion.
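As a concrete illustration of how small that API surface is, here is a minimal sketch of the standard GeoLocation call; the callback names and option values are mine, not from the article:

    // Minimal GeoLocation usage: the browser prompts the user for access
    // before the success callback ever fires.
    navigator.geolocation.getCurrentPosition(
        function success(pos) {
            console.log('lat: ' + pos.coords.latitude +
                        ', lng: ' + pos.coords.longitude);
        },
        function failure(err) {
            console.log('geolocation failed: ' + err.message);
        },
        { enableHighAccuracy: true, timeout: 10000 }
    );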
The model is one-time approval, with occasional re-approval if code changes or caches expire – simple and only slightly intrusive. It all works well, even though GeoLocation actually has some physical limitations, such as representing the current location when no GPS device is present. Yet this is a solved problem, where other APIs that are conceptually much simpler to implement have failed to gain any traction at all. Technically none of these APIs should be a problem to implement, but it appears that the momentum is just not there.

Inadequate Web Application Linking and Activation

Another important piece of the puzzle missing is the integration of HTML based Web applications. Today HTML based applications are not first class citizens on mobile operating systems. When talking about HTML based content there's a big difference between content and applications. Content is great for search engine discovery and plain browser usage. Content is usually accessed intermittently, and permanent linking is not so critical for this type of content. But applications have different needs. Applications need to be started up quickly and must be easily switchable to support a multi-tasking user workflow. Therefore, it's pretty crucial that mobile Web apps are integrated into the underlying mobile OS and work with the standard task management features. Unfortunately this integration is not as smooth as it should be. It starts with actually trying to find mobile Web applications, and then 'installing' them onto a phone in an easily accessible manner in a prominent position. The experience of discovering a Mobile Web 'App' and making it sticky is by no means as easy or satisfying as it is with a native app. Today the way you'd go about this is:

Open the browser
Search for a Web site in the browser with your search engine of choice
Hope that you find the right site
Hope that you actually find a site that works for your mobile device
Click on the link and run the app in a fully chrome'd browser instance (read: tiny surface area)
Pin the app to the home screen (with all the limitations outlined above)
Hope you pointed at the right URL when you pinned

Even for you and me as developers, there are a few steps in there that are painful and annoying, but think about the average user. First figuring out how to search for a specific site or URL? And then pinning the app, and hopefully from the right location? You've probably lost more than half of your audience at that point. This experience sucks. For developers too this process is painful, since app developers can't control the shortcut creation directly. This problem often gets solved by crazy coding schemes, with pop-ups that try to get people to create shortcuts via fancy animations that are both annoying and add overhead to each and every application that implements this sort of thing differently. And that's not the end of it – getting the link onto the home screen with an application icon varies quite a bit between browsers. Apple's non-standard meta tags are prominent and they work with iOS and Android (only more recent versions), but not on Windows Phone. Windows Phone instead requires you to capture an actual screen (or rather a partial screen) for a shortcut in the tile manager. Who had that brilliant idea, I wonder? Surprisingly, Chrome on recent Android versions seems to actually get it right – icons use PNGs, pinning is easy, and pinned applications properly behave like standalone apps and retain the browser's active page state and content.
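For reference, the de facto pinning markup in question looks roughly like this – a hedged sketch of the Apple-specific tags that other browsers have chosen to honor; the icon path and size are placeholders:

    <!-- Home screen icon; several non-Apple browsers also look for this tag. -->
    <link rel="apple-touch-icon" sizes="114x114" href="/img/touch-icon.png" />
    <!-- Run full screen when launched from the home screen (iOS). -->
    <meta name="apple-mobile-web-app-capable" content="yes" />
    <!-- Status bar styling for the pinned app (iOS). -->
    <meta name="apple-mobile-web-app-status-bar-style" content="black" />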
Each of the platforms has a different way to specify icons (WP doesn't allow you to use an icon image at all), and the most widely used interface today is a bunch of Apple specific meta tags that other browsers choose to support. The question is: Why is there no standard implementation for installing shortcuts across mobile platforms using an official format rather than a proprietary one? Then there's iOS and the way it treats home screen linked URLs, using a crazy hybrid format that is neither as capable as a Web app running in Safari nor as a WebView hosted application. Moving off the Web 'app' link when switching to another app actually causes the browser to 'blank out' the Web application and its preview in the Task View (see screenshot on the right). Then, when the 'app' is reactivated it ends up completely restarting the browser with the original link. This is crazy behavior that you can't easily work around. In some situations you might be able to store the application state and restore it using LocalStorage, but for many scenarios that involve complex data sources (like say Google Maps) that's not a possibility. The only reason for this screwed up behavior I can think of is that it is deliberate, to make Web apps a pain in the butt to use and force users through the App Store/PhoneGap/Cordova route. App linking and management is a very basic problem – something that we essentially have solved in every desktop browser – yet on mobile devices, where it arguably matters a lot more to have easy access to web content, we have to jump through hoops to have even a remotely decent linking/activation experience across browsers.

Where's the Money?

It's not surprising that device home screen integration and Mobile Web support in general are in such dismal shape – the mobile OS vendors benefit financially from App store sales and have little to gain from Web based applications that bypass the App store and the cash cow that it presents. On top of that, platform specific vendor lock-in of both end users and developers who have invested in hardware, apps and consumables is something that mobile platform vendors actually aspire to. Web based interfaces that are cross-platform are the antithesis of that, and so again it's no surprise that the mobile Web is on a struggling path. But – that may be changing. More and more we're seeing operations shifting to services that are subscription based or otherwise collect money for usage, and that may drive more progress in the Web direction in the end. Nothing like the almighty dollar to drive innovation forward.

Do we need a Mobile Web App Store?

As much as I dislike moderated experiences in today's massive App Stores, they do at least provide one single place to look for apps for your device. I think we could really use some sort of registry that could provide something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry like structure. Something like apt-get/chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications. Coupled to that could be a native application on each platform that would allow searching and browsing of the registry, and then also handle installation in the form of providing the home screen linking, plus maybe an initial security configuration that determines what features the app is allowed to access.
I'm not holding my breath. In order for this sort of thing to take off and gain widespread appeal, a lot of coordination would be required. And in order to get enough traction it would have to come from a well known entity – a mobile Web app store from a no name source is unlikely to gain high enough usage numbers to make a difference. In a way this would eliminate some of the freedom of the Web, but of course this would also be an optional search path in addition to the standard open Web search mechanisms to find and access content today.

Security

Security is a big deal, and one of the reasons why so many IT professionals appear to be willing to go back to the walled garden of deployed apps is that apps are perceived as safe due to the official review and curation of the App stores. Curated stores are supposed to protect you from malware, illegal and misleading content. It doesn't always work out that way, and all the major vendors have had issues with security and the review process at some time or another. Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run completely externally in the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device specific features. And as discussed earlier, security for device interaction can be granted the same way for Web applications as it is for native applications, either via explicit policies loaded from the Web, or via prompting as GeoLocation does today. Security is important, but it's certainly a solvable problem for Web applications, even those that need to access device hardware. Security shouldn't be a reason to keep Web apps from being an equal player in mobile applications.

Apps are winning, but haven't we been here before?

So now we're finding ourselves back in an era of installed apps, rather than Web based and managed apps. Only it's even worse today than with Desktop applications, in that the apps are going through a gatekeeper that charges a toll and censors what you can and can't do in your apps. Frankly it's a mystery to me why anybody would buy into this model and why it's lasted this long when we've already been through this process. It's crazy… It's really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, but we're basically held back by what seems little more than bureaucracy, partisan bickering and self interest of the major parties involved. Back in the day of the desktop it was Internet Explorer's 98+% market share holding back the Web from improvements for many years – now it's the combined mobile OS market in control of the mobile browsers. If mobile Web apps were allowed to be treated the same as native apps, with simple ways to install and run them consistently and persistently, that would go a long way to making mobile Web applications much more usable and seriously viable alternatives to native apps. But as it is, mobile Web apps have a severe disadvantage in placement and operation. There are a few bright spots in all of this. Mozilla's Firefox OS is embracing the Web for its mobile OS by essentially building every app out of HTML and JavaScript based content.
It supports both packaged and certified package modes (that can be put into the app store), and Open Web apps that are loaded and run completely off the Web and can also cache locally for offline operation using a manifest. Open Web apps are treated as first class citizens in Firefox OS and run using the same mechanism as installed apps. Unfortunately Firefox OS is off to a slow start, with minimal device support and a specific focus on the low-end market. We can hope that this approach will change and catch on with other vendors, but that's also an uphill battle given the conflict of interest with the platform lock-in that it represents. Recent versions of Android also seem to be working reasonably well with mobile application integration onto the desktop and activation out of the box. Although it still uses the Apple meta tags to find icons and behavior settings, everything at least works as you would expect – icons to the desktop on pinning, WebView based full screen activation, and reliable application persistence as the browser/app is treated like a real application. Hopefully iOS will at some point provide this same level of rudimentary Web app support. What's also interesting to me is that Microsoft hasn't picked up on the obvious need for a solid Web App platform. Being a distant third in the mobile OS war, Microsoft certainly has nothing to lose and everything to gain by using fresh ideas and expanding into areas that the other major vendors are neglecting. But instead Microsoft is trying to beat the market leaders at their own game, fighting on their adversaries' terms instead of taking a new tack. Providing a kick ass mobile Web platform that takes the lead on some of the proposed mobile APIs would be something positive that Microsoft could do to improve its miserable position in the mobile device market.

Where are we at with Mobile Web?

It sure sounds like I'm really down on the Mobile Web, right? I've built a number of mobile apps in the last year, and while the overall result and response have been very positive to what we were able to accomplish in terms of UI, getting that final 10% that required device integration dialed in was an absolute nightmare on every single one of them. Big compromises had to be made, and some features were left out or had to be modified for some devices. In two cases we opted to go the Cordova route in order to get the integration we needed, along with the extra pain involved in that process. Unless you're not integrating with device features and you don't care deeply about a smooth integration with the mobile desktop, mobile Web development is fraught with frustration. So, yes, I'm frustrated! But it's not for lack of wanting the mobile Web to succeed. I am still a firm believer that we will eventually arrive at a much more functional mobile Web platform that allows access to the most common device features in a sensible way. It wouldn't be difficult for device platform vendors to make Web based applications first class citizens on mobile devices. But unfortunately it looks like it will still be some time before this happens. So, what's your experience building mobile Web apps? Are you finding similar issues? Just giving up on raw Web applications and building PhoneGap apps instead? Completely skipping the Web and going native? Leave a comment for discussion.
Resources: Rick Strahl on DotNet Rocks talking about Mobile Web

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in HTML5  Mobile

    Read the article

  • Podcast Show Notes: The Red Room Interview &ndash; Part 1

    - by Bob Rhubart
      The latest OTN Arch2Arch podcast is Part 1 of a three-part series featuring a discussion of a broad range of SOA issues with three members of the small army of contributors to The Red Room Blog, now part of the OJam.biz site, the Australia-New Zealand outpost of the global Oracle community. The panelists for this program are:

Sean Boiling - Sales Consulting Manager for Oracle Fusion Middleware LinkedIn | Twitter | Blog
Richard Ward - SOA Channel Development Manager at Oracle LinkedIn | Blog
Mervin Chiang - Consulting Principal at Leonardo Consulting LinkedIn | Twitter | Blog

(You can also follow the Red Room itself on Twitter: @OracleRedRoom.) The genesis of this interview goes back to 2009, and the original Red Room blog, on which Sean, Richard, Mervin, and other Red Roomers published a 10-part series of posts that, taken together, form a kind of SOA best-practices guide, presented in an irreverent style that is rare in a lot of technical writing. It was on the basis of their expertise and irreverence that I wanted to get a few of the Red Room bloggers on an Arch2Arch podcast. Easier said than done. Trying to schedule a group interview with very busy people on the other side of the world (they're actually 15 hours in the future, relative to my location) is not a simple process. The conversations about getting some of the Red Room people on the program began in the summer of 2009. The interview finally happened at 5:30 PM EDT on Tuesday March 30, 2010, which for the panelists, located in Australia, was 8:30 AM on Wednesday March 31, 2010. I was waiting for dinner, and Sean, Richard, and Mervin were waiting for breakfast. But the call went off without a hitch, and the panelists carried on a great discussion of SOA issues. Listen to Part 1. Many thanks to Gareth Llewellyn for his help in putting this together.

SOA Best Practices

Here's a complete list of the posts in the original 10-part Red Room series:

SOA is Dead. Long Live SOA by Sean Boiling
Are you doing SOP's instead of SOA? by Saul Cunningham
All The President's SOA by Sean Boiling
SOA – Pay Now or Pay Dearly by Richard Ward
SOA where are the skills? by Richard Ward
Project Management Pitfalls within SOA by Anton Gouws
Viewing SOA as a project instead of an architecture by Saul Cunningham
Kiss and Tell by Sean Boiling
Failure to implement and adhere to SOA Governance by Mervin Chiang
Ten Out Of Ten by Sean Boiling

Part 2 of the Red Room interview will be available next week, followed by Part 3, so stay tuned: RSS

Change in the Wind

Beginning with next week's program, the OTN Arch2Arch Podcast will be rechristened as the OTN ArchBeat Podcast, to better align with this blog. The transformation will be painless – you won't feel a thing. del.icio.us Tags: otn,oracle,Archbeat,Arch2Arch,soa,service oriented architecture,podcast Technorati Tags: otn,oracle,Archbeat,Arch2Arch,soa,service oriented architecture,podcast

    Read the article

  • Blender mesh mirroring screws up normals when importing in Unity

    - by Shivan Dragon
    My issue is as follows: I've modeled a robot in Blender 2.6. It's a mech-like biped or, if you prefer, it kind of looks like a chicken. Since it's symmetrical on the XZ plane, I've decided to mirror some of its parts instead of re-modeling them. Problem is, those mirrored meshes look fine in Blender (faces all show up properly and light falls on them as it should) but in Unity the faces and lighting on those very same mirrored meshes are wrong. What also stumps me is the fact that even if I flip normals in Blender, I still get bad results in Unity for those meshes (though now I get different bad results than before). Here are the details:

Here's a Blender screenshot of the robot. I've taken 2 pictures and slightly rotated the camera around so the geometry in question can be clearly seen: Now, the selected cog-wheel-like piece is the mirrored mesh obtained from mirroring the other cog-wheel on the other (far) side of the robot torso. Back-face culling is turned off here, so it's actually showing the faces as dictated by their normals. As you can see it looks ok; faces are oriented correctly and light falls on it ok (as it does on the original cog-wheel from which it was mirrored). Now if I export this as fbx using the following settings: and then import it into Unity, it looks all screwy: It looks like the normals are in the wrong direction. This is already very strange, because, while in Blender the original cog-wheel and its mirrored counterpart both had normals facing one way, when importing this into Unity, the original cog-wheel still looks ok (like in Blender) but the mirrored one now has inverted normals.

The first thing I tried was to go "ok, so I'll flip normals in Blender for the mirrored cog-wheel and then it'll display ok in Unity and that's that". So I went back to Blender, flipped the normals on that mesh, so now it looks bad in Blender: and then re-exported as fbx with the same settings as before, and re-imported into Unity. Sure enough the cog-wheel now looks ok in Unity, in the sense that the faces show up properly, but if you look closely you'll notice that light and shadows are now wrong: In Unity, even though the light comes from the back of the robot, the cog-wheel in question acts as if light was coming from somewhere else; its faces which should be in shadow are lit up, and those that should be lit up are dark.

Here are some things I've tried which didn't do anything: in Blender I tried mirroring the mesh in 2 ways, first by using the scale to -1 trick, then by using the mirroring tool (select mesh, hit ctrl-m, select mirror axis); both ways yield the exact same result in Unity. I've tried playing around with the prefab import settings like "normals: import/calculate", "tangents: import/calculate". I've also tried not exporting as fbx manually from Blender, but just dropping the .blend file in the assets folder inside the Unity project.

So, my question is: is there a way to actually mirror a mesh in Blender and then have it imported in Unity so that it displays properly (as it does in Blender)? If yes, how? Thank you, and please excuse the TL;DR style.
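Editor's note, hedged, for readers hitting the same wall: a commonly suggested fix is to apply the negative scale before export and then recalculate normals, since a -1 scale flips face winding and exporters/importers may resolve that differently from Blender's viewport. A minimal sketch in Blender's Python console, assuming the mirrored object is the active object:

    import bpy

    # Bake the mirror transform into the mesh data so no negative scale
    # is left for the FBX exporter to interpret.
    bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)

    # Recalculate normals so they point consistently outward again.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.normals_make_consistent(inside=False)
    bpy.ops.object.mode_set(mode='OBJECT')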

    Read the article

  • SOA’s People Problem by Bob Rhubart

    - by JuergenKress
    Are reluctant passengers slowing down your SOA train? Based on my conversations with various experts in service-oriented architecture (SOA), the consensus is that SOA tools and technology have achieved a high level of maturity. Some even use the term industrialization to describe the current state of SOA. Given that scenario, one might assume that SOA has been wildly successful for every organization that has adopted its principles. Obviously SOA could not have achieved its current level of maturity and industrialization without having reached a tipping point in the volume of success stories to drive continued adoption. But some organizations continue to struggle with SOA. The problem, according to some experts, has little to do with tools or technologies. "One of the greatest challenges to implementing SOA has nothing to do with the intrinsic complexity behind a SOA technology platform," says Oracle ACE Luis Augusto Weir, senior Oracle solution director at HCL AXON. "The real difficulty lies in dealing with people and processes from different parts of the business and aligning them to deliver enterprisewide solutions." What can an organization do to meet that challenge? "Staff the right people," says Weir. "For example, the role of a SOA architect should be as much about integrating people as it is about integrating systems. Dealing with people from different departments, backgrounds, and agendas is a huge challenge. The SOA architect role requires someone that not only has a sound architectural and technological background but also has charisma and human skills, and can communicate equally well to the business and technical teams." The SOA architect's communication skills are instrumental in establishing service orientation as the guiding principle across the organization. "A consistent architecture comprising both business services and IT services can comprehensively redefine the role of IT at the process level," says Danilo Schmiedel, solution architect at Opitz Consulting. That helps to shift the focus from silos to services and get SOA on track. To that end, Oracle ACE Director Lonneke Dikmans, a managing partner at Vennster, stresses the importance of replacing individual, uncoordinated projects with a focused program that promotes communication, cooperation, and service reuse. "Having support among lead developers and architects helps, as does having sponsors that see the business case and understand the strategic value," she says. Read the complete article here.

SOA & BPM Partner Community

For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Bob Rhubard,OTN,Lonneke Dikmans,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • How to Quickly Cut a Clip From a Video File with Avidemux

    - by Trevor Bekolay
    Whether you're cutting out the boring parts of your vacation video or getting a hilarious scene for an animated GIF, Avidemux provides a quick and easy way to cut clips from any video file. It's overkill to use a full-featured video editing program if you just want to cut a few clips from a video file. Even programs that are designed to be small can have confusing interfaces when dealing with video. We've found that a great free program, Avidemux, makes the job of cutting clips extremely simple. Note: While the screenshots in this guide are taken from the Windows version, Avidemux runs on all of the major platforms – Windows, Mac OS X and Linux (GTK). Image by Keith Williamson.

Cutting Clips from a Video File

Open up Avidemux, and load the video file that you want to work with. If you get a prompt like this one: we recommend clicking Yes to use the safer mode. Find the portion of the video that you'd like to isolate. Get as close as you can to the start of the clip you want to cut. Once you find the start of your clip, look at the "Frame Type" of the current frame. You want it to read I; if it isn't frame type I, then use the single left and right arrow buttons to go forward or backward one frame until you find an appropriate I frame. Once you've found the right starting frame, click the button with the A over a red bar. This will set the start of the clip. Advance to where you want your clip to end. Click on the button with a B when you've found the appropriate frame. This frame can be of any type. You can now save the clip, either by going to File –> Save –> Save Video… or by pressing Ctrl+S. Give the file a name, and Avidemux will prepare your clip. And that's it! You should now have a movie file that contains only the portion of the original file that you want. Download Avidemux free for all platforms

    Read the article

  • SQL SERVER – Automated Type Conversion using Expressor Studio

    - by pinaldave
    Recently I had an interesting situation during my consultation project. Let me share with you how I solved the problem using Expressor Studio. Consider a situation in which you need to read a field, such as customer_identifier, from a text file and pass that field into a database table. In the source file's metadata structure, customer_identifier is described as a string; however, in the target database table, customer_identifier is described as an integer. Legitimately, all the source values for customer_identifier are valid numbers, such as "109380". To implement this in an ETL application, you probably would have hard-coded a type conversion function call, such as:

output.customer_identifier=stringToInteger(input.customer_identifier)

That wasn't so bad, was it? For this instance, programming this hard-coded type conversion function call was relatively easy. However, hard-coding, whether type conversion code or other business rule code, almost always means that the application containing hard-coded fields, function calls, and values is: a) specific to an instance of use; b) difficult to adapt to new situations; and c) lacking in reusable sub-parts. Therefore, in the long run, applications with hard-coded type conversion function calls don't scale well. In addition, they increase the overall level of effort and degree of difficulty of writing and maintaining the ETL applications. To get around the trappings of hard-coded type conversion function calls, developers need access to smarter typing systems. The Expressor Studio product offers exactly this feature, by providing developers with a type conversion automation engine based on type abstraction. The theory behind the engine is quite simple. A user specifies abstract data fields in the engine, and then writes applications against the abstractions (whereas in most ETL software, developers develop applications against the physical model). When a Studio-built application is run, Studio's engine automatically converts the source type to the abstracted data field's type and converts the abstracted data field's type to the target type. The engine can do this because it has a couple of built-in rules for type conversions. So, using the example above, a developer could specify customer_identifier as an abstract data field with a type of integer when using Expressor Studio. Upon reading the string value from the text file, Studio's type conversion engine automatically converts the source field from the type specified in the source's metadata structure to the abstract field's type. At the time of writing the data value to the target database, the engine doesn't have any work to do because the abstract data type and the target data type are the same. Had they been different, the engine would have automatically provided the conversion. (For comparison, a hand-coded SQL version of this conversion is sketched below.)

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SSIS
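Here is that hand-coded equivalent for context – a minimal T-SQL sketch of the explicit conversion the engine automates; the staging and target table names are hypothetical, not from the article:

    -- Hard-coded type conversion in plain T-SQL: the staging column is
    -- varchar, the target column is int, and the CAST is spelled out by hand.
    INSERT INTO dbo.Customer (customer_identifier)
    SELECT CAST(s.customer_identifier AS INT)
    FROM staging.CustomerFile AS s;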

    Read the article

  • Nginx and PHP Fundamentals

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/08/01/nginx-and-php-fundamentals.aspx

Hot on the heels of my .NET caching course, I've had my first "fundamentals" course released on Pluralsight: Nginx and PHP Fundamentals. It's a practical look at two of the biggest technologies on the web – Nginx, which is the fastest growing HTTP server around (currently hosting 100+ million sites), and PHP, which powers more websites than any other server-side framework (currently 240+ million sites). The two technologies work well together; both are open-source and cross-platform, and both are lightweight and easy to get started with – you just need to download and unzip the runtimes, and with a text editor you can create and host dynamic websites. I've used PHP as a second (sometimes third) language since 2005, when I was brought cold into an established codebase to help improve performance, and Nginx to host tier 2 apps for the last couple of years. As with any training course, you learn new things as you produce it, and it was good to focus on a different stack from my commercial .NET world.

In the course I start with a website in two parts – one which is just static content, and one which processes a user registration form using ASP.NET MVC, both running in IIS. Over four modules I migrate the app to Nginx and PHP:

Hosting Static Content in Nginx – how to deploy and configure Nginx for a basic website;
PHP Part 1: Basic Web Forms – installing PHP and an IDE, and building a simple form with server-side validation;
PHP Part 2: Packages and Integration – using PECL and Composer for packages to connect to Azure, AWS, Mongo and reCAPTCHA;
Hosting PHP in Nginx – configuring Nginx to host our PHP site.

Along the way I run some performance stats with JMeter, and the headlines are that Nginx running on Linux outperforms IIS on Windows for static content, by 800 requests per second over 1000 concurrent requests; and Linux+Nginx+PHP outperforms Windows+IIS+ASP.NET MVC by 700 requests per second with the same load. Of course, the headline stats don't tell the whole story, and when you add OpCode caching for PHP and the ASP.NET Output Cache, the results are very different. As Web architecture moves away from heavy server-side processing, to Single Page Apps with client-side frameworks like AngularJS and Knockout, I think there's an increasing need for high-performance, low-cost server technologies, and the combination of Nginx and PHP makes a compelling case.
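To give a flavour of the first and last modules, here is a minimal sketch of an Nginx server block that serves static content and hands .php requests to PHP-FPM; the paths, port and FastCGI address are assumptions for illustration, not values from the course:

    # Static content plus PHP via FastCGI; adjust root and upstream to taste.
    server {
        listen       80;
        server_name  example.com;
        root         /var/www/site;
        index        index.php index.html;

        # Serve files directly; 404 if nothing matches.
        location / {
            try_files $uri $uri/ =404;
        }

        # Pass PHP scripts to a local PHP-FPM process.
        location ~ \.php$ {
            include        fastcgi_params;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        }
    }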

    Read the article

  • Silverlight User Group of Switzerland (SLUGS)

    - by Laurent Bugnion
    Last Thursday, the Silverlight Firestarter event took place in Redmond, and was streamed live to a large audience worldwide (around 20'000 people). Approximately 30 of them were in Wallisellen near Zurich, in Microsoft Switzerland's offices. This was not only a great occasion to learn more about the future of Silverlight and to see great demos, but also the very first meeting of the Silverlight User Group of Switzerland (SLUGS). Having 30 people for a first meeting was a great success, especially if we consider that it was REALLY cold that night, and that it had snowed 20 cm the night before! We all had a good time, and 3 lucky winners went back home with a prize: one LG Optimus 7 Windows Phone and two copies of Silverlight 4 Unleashed. Congratulations to the winners! After the keynote (which went in a whirlwind, shortest 90 minutes ever!), we all had pizza and beverages generously sponsored by the Swiss DPE team, of which no less than 5 guys came to the event! Thanks to Stefano, Ronnie, Sascha, Big Mike and Ken for attending! We decided to have meetings every month. Stay tuned for announcements on when and where the events will take place. We are also in the process of creating various groups online where the attendees can find more information. For instance, I created a group on Flickr where the pictures taken at events will be published. The group is public, and the pictures of the first event are already online! We also have the already known page at http://www.slugs.ch/, check it out.

A national group

Even though the first event was in Zurich, and 3 of the founding members live nearby, we would like to try and be a national group. That means having events sometimes in other parts of Switzerland, collaborating with other local user groups, etc. Stay tuned for more!

Join! We want you, we need you

If you are doing Silverlight, for a living or as a hobby, if you are interested in user experience, XAML, Expression Blend and many more topics, you should consider joining! This is a great occasion to exchange experiences, to learn from Silverlight experts, to hear sessions about various topics related to Silverlight, etc.

If you want to talk about a topic that is of interest to you,
If you want to propose a topic of discussion,
Or if you just want to hang out,
then go to http://www.slugs.ch and register!

Cheers, Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Windows Azure Use Case: Fast Acquisitions

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

Description: Many organizations absorb, take over or merge with other organizations. In these cases, one of the most difficult parts of the process is the merging or changing of the IT systems that the employees use to do their work, process payments, and even get paid. Normally this means that the two companies have disparate systems, and several approaches can be used to share technology between the two organizations. An organization may choose to retain both systems and manage them separately. The advantage here is speed, and keeping the profit/loss sheets separate. Another choice is to slowly "sunset" or stop using one organization's system, cutting over to the other system immediately or at a later date. Although a popular choice, one of the most difficult methods is to extract data and processes from one system and import them into the other. Employees on the transitioning system have to be trained on the new one, the data must be examined and cleansed, and there is inevitable disruption when this happens. Still another option is to integrate the systems. This may prove to be as much work as a transitional strategy, but may have less impact on the users or the balance sheet.

Implementation: A distributed computing paradigm can be a good strategic solution for most of these strategies. Retaining both systems is made simpler by allowing the users at the second organization immediate access to the new system, because security accounts can be created quickly inside an application. There is no need to set up a VPN or any other connection than just to the Internet. Having the users stop using one system and start with the other is also simple in Windows Azure for the same reason. Extracting data to Azure holds the same limitations as an on-premises system, and may even be more problematic because of the large data transfers that might be required. In a distributed environment, you pay for the data transfer, so a mixed migration strategy is not recommended. However, if the data is slowly migrated over time with a defined cutover, this can be an effective strategy. If done properly, an integration strategy works very well for a distributed computing environment like Windows Azure. If the Azure code is architected as a series of services, then endpoints can expose the services into and out of not only the Azure platform, but internally as well. This is a form of the Hybrid Application use-case documented here.

References: Designing for Cloud Optimized Architecture: http://blogs.msdn.com/b/dachou/archive/2011/01/23/designing-for-cloud-optimized-architecture.aspx 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0

    Read the article

  • Silverlight Cream for April 04, 2010 -- #830

    - by Dave Campbell
    In this Issue: Michael Washington, Hassan, David Anson, Jeff Wilcox, UK Application Development Consulting, Davide Zordan, Victor Gaudioso, Anoop Madhusudanan, Phil Middlemiss, and Laurent Bugnion. Shoutouts: Josh Smith has a good-read post up: Design-time data is still data Shawn Hargreaves reported his MIX demo released From SilverlightCream.com: Silverlight MVVM: Enabling Design-Time Data in Expression Blend When Using Web Services Michael Washington has a tutorial up on MVVM and using a web service to get design-time data that works in Blend also... lots of information and screenshots. WP7 Transition Animation Hassan has a new WP7 tutorial up that demonstrates playing media and adding transition animation between pages. Tip: For a truly read-only custom DependencyProperty in Silverlight, use a read-only CLR property instead David Anson's latest tip is in response to comments on his previous post and details one by Dr. WPF who points out that a read-only DependencyProperty doesn't actually need to be a DependencyProperty as long as the class implements INotifyPropertyChanged. Template parts and custom controls (quick tip) Jeff Wilcox has posted a set of tips and recommendations to use when developing control development in Silverlight ... this is a post to bookmark. Flexible Data Template Support in Silverlight The UK Application Development Consulting details a 'problem' in Silverlight that doesn't exist in WPF and that is data templates that vary by type... and discusses a way around it. Multi-Touch enabling Silverlight Simon using Blend behaviors and the Surface sample for Silverlight Davide Zordan brought Multi-Touch to the Silverlight Simon game on CodePlex using Blend Behaviors. New Video Tutorial: How to Use a Behavior to Fire Methods from Objects in Styles Victor Gaudioso has a video tutorial up responding to a question from a developer. He demonstrates development of a Behavior that can be attached to objects in or out of Styles that allows you to specify what Method they need to fire. Creating a Silverlight Client for @shanselman ’s Nerd Dinner, using oData and Bing Maps Anoop Madhusudanan took Scott Hanselman's post on an OData API for StackOverflow, and has created a Silverlight client for Nerd Dinner, including BingMaps. A Chrome and Glass Theme - Part 2 Phil Middlemiss has the next part of his Chrome and Glass Theme up. In this one he creates a very nice chrome-look button with visual state changes. MVVM Light Toolkit V3 SP1 for Windows Phone 7 Laurent Bugnion has released a new version of MVVM Light for WP7. Included is an installation manual and information about what was changed. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Vodacom Call Center Management on the NetBeans Platform

    - by Geertjan
    If you live in South Africa, you know about Vodacom. Vodacom is one of the dominant mobile communication companies in South Africa, and beyond, providing voice, messaging, data, and similar mobile services. Inside Vodacom there's an application named Helios, which is a call centre application that had its inception in 2009 and consists of two parts. Firstly, a web-based front-end that allows a call centre agent to service subscribers using a Google-like search on a knowledge base structured as a collection of FAQs. The web-based front-end uses plain-old HTML + CSS + a good helping of JQuery and JQueryUI. This is delivered via JSR-168 portlets running on a cluster of IBM Portal 6 servers. In turn, the portlets communicate via RMI with several back-end EJB's containing the business logic. These EJB's are deployed on a cluster of Weblogic Application Servers, version 10.3.6. The second part is a NetBeans Platform application used for maintaining and constructing the knowledge base, i.e., the back-end of the web-based front-end. Helios is also used for a number of other maintenance functions, such as access permissions, user maintenance, and news bulletins. Below, in the web-based front-end, call centre agents can enter search terms and are presented with a number of FAQs from the knowledge base. Upon selecting a FAQ article, the agent is presented with the article text, the process to guide the subscriber, system checks that display information specific to the subscriber, and links to related applications and articles: Below, you can see that applications are searchable and can be accessed using the same web-based front-end as shown above. And, as can be seen below, knowledge base FAQs are maintained using the Helios Maintenance Application, which is the Vodacom application built on the NetBeans Platform: Several thousand call centre agent user accounts are administered using the Helios Maintenance Application. Below the main FAQ page is shown, together with the About dialog: Vodacom is happy with the back-end NetBeans Platform application. However, the front-end stack runs on quite old technology. Ideally Vodacom would like to migrate the portlets to Oracle Weblogic Portal or Oracle WebCenter, but this hasn't been accomplished yet. Migrating makes sense as the rest of the application server environment consists entirely of Oracle products.

    Read the article

  • SQL SERVER – Automation Process Good or Ugly

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by SQL Server Insane Asylum. The idea of this post really caught my attention. Automation – something getting itself done after the initial programming – is my understanding of the subject. The very next thought was: is it good or evil? The reality is there is no right answer. However, let us quickly note a few things, and then I would like to request your help to complete this post. We will start with the positive parts of SQL Server where automation happens.

The Good

If I start thinking of SQL Server and automation, the very first thing that comes to my mind is SQL Agent, which runs various jobs. Once I configure any task or job, it runs fine (till something goes wrong!). Well, automation has its own advantages. We all have used SQL Agent for so many things – backups, various validation jobs, maintenance jobs and numerous other things. What other kinds of automation tasks do you run on your database server?

The Ugly

This part is very interesting, because it can get really ugly(!). During my career I have found so many bad automated agent jobs. One client had an agent job where he was dropping the clean buffers every hour. Another client was using Database Mail to send regular emails instead of only the necessary alert-related emails. The best one – a client used the Missing Index and Unused Index scripts in a SQL Agent job to follow their suggestions 100%. Believe me, I have never seen such a badly performing and hard to optimize database. (I ended up dropping all non-clustered indexes on the development server, ran the production workload on the development server again, and then configured it with optimal indexes.) Shrinking a database is a performance killer. It should never be automated. SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance The one I hate the most is AutoShrink Database. It has given me a hard time in my career quite a few times. SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server (A quick, hedged check-and-fix script follows below.) Automation is necessary, but common sense is a must when creating automation.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
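As a small illustration of the AutoShrink point, here is a T-SQL sketch that lists which databases have the option on and turns it off; the database name in the ALTER statement is a placeholder:

    -- Find databases with AUTO_SHRINK enabled.
    SELECT name, is_auto_shrink_on
    FROM sys.databases
    WHERE is_auto_shrink_on = 1;

    -- Turn it off for an offending database (name is hypothetical).
    ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF;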

    Read the article

  • Tron: Legacy, 3D goggles, and embedded UA

    - by Roger Hart
    The 3D edition of Tron: Legacy opens with embedded user assistance. The film starts with an iconic white-on-black command-prompt message exhorting viewers to keep their 3D glasses on throughout. I can't quote it verbatim, and at the time of writing neither could anybody findable with 5 minutes of googling. But it was something like: "Although parts of the movie are 2D, it was shot in 3D, and glasses should be worn at all times. This is how it was intended to be viewed" Yeah - "intended". That part is verbatim. Wow. Now, I appreciate that even out of the small sub-set of readers who care a rat's ass for critical theory, few will be quite so gung-ho for the whole "death of the author" shtick as I tend to be. And yes, this is ergonomic rather than interpretive, but really - telling an audience how you expect them to watch a movie? That's up there with Big Steve's "you're holding it wrong". Even if it solves the problem, it's pretty arrogant. If anything, it's worse than RTFM. And if enough people are doing it wrong that you have to include the announcement, then maybe - just maybe - you've got a UX and/or design problem. Plus, current 3D glasses are like sitting in a darkened room, cosplaying the lovechild of Spider Jerusalem and Jarvis Cocker. Ok, so that observation was weirder than it was helpful; but seriously, nobody wants to wear the glasses if they don't have to. They ruin the visual experience of the non-3D sections, and personally, I find them pretty disruptive to the suspension of disbelief. This is an old, old problem, and I'm carping on about it because Tron is enjoyable mass-market slush. It's easier for me to say "no, I can't just put some text on it. It's fundamentally broken, redesign it." in the middle of a small-ish, agile software project than it would be for some beleaguered production assistant at the end of editing a $200 million movie. But lots of folks in software don't even get to do that. Way more people are going to see Tron, and be annoyed by this, than will ever read a technical communication blog. So hopefully, after two hours of being mildly annoyed, wanting to turn the brightness up, and slowly getting a headache, they'll realise something very, very important: you just can't document your way out of a shoddy UI.

    Read the article

  • SQL SERVER – Maximize Database Performance with DB Optimizer – SQL in Sixty Seconds #054

    - by Pinal Dave
Performance tuning is an interesting concept, and everybody evaluates it differently. Every developer and DBA has a different opinion about how to do performance tuning. I personally believe performance tuning is a three-step process:

1. Understanding the Query
2. Identifying the Bottleneck
3. Implementing the Fix

When we are working with a large database application and it suddenly starts to slow down, we are all under stress about how to get the database back to normal speed. Most of the time we do not have enough time to do a deep analysis of what is going wrong, or of what will fix the problem; our primary goal at that moment is simply to fix the problem as fast as we can. However, one very important thing to keep in mind is that a quick fix should not create any further issues with other parts of the system. When time is of the essence and we still want a deep analysis of the system to find the best solution, we often tend to make mistakes, simply because we do not have the time to analyze the entire system properly.

Here is what I do when I face such a situation – I take the help of DB Optimizer. It is a fantastic tool that does superlative performance tuning of the system. Every time I talk about a performance tuning tool, the initial reaction of people is that they do not want to try it, as they believe it requires a lot of learning before they can use it. That is absolutely not the case with DB Optimizer: it is an easy-to-use, self-intuitive tool, and one can get going with the product in no time.

Here is a quick video I have built where I demonstrate how we can identify which index is missing for a query, and how we can quickly create that index (a minimal missing-index query sketch also follows below). All three steps of query tuning are completed in less than 60 seconds. If you are into performance tuning and query optimization, you should download DB Optimizer and give it a go. Let us see the same concept in the following SQL in Sixty Seconds video (embedded in the original post). You can download DB Optimizer and reproduce the same Sixty Seconds experience.

Related Tips in SQL in Sixty Seconds: Performance Tuning – Part 1 of 2 – Getting Started and Configuration Performance Tuning – Part 2 of 2 – Analysis, Detection, Tuning and Optimizing What would you like to see in the next SQL in Sixty Seconds video?

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Interview Questions and Answers, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Identity
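As a rough illustration of the "Identifying the Bottleneck" step, here is a minimal T-SQL sketch – my own addition, not from the post and not part of DB Optimizer – that asks SQL Server's missing-index DMVs for their top suggestions. As the first post above warns, treat such suggestions as input for analysis, never as something to apply blindly from an agent job.

-- Top missing-index suggestions, ordered by a rough impact estimate
SELECT TOP (10)
    d.statement AS full_table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.user_seeks,
    s.avg_user_impact
FROM sys.dm_db_missing_index_details d
INNER JOIN sys.dm_db_missing_index_groups g
    ON d.index_handle = g.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats s
    ON g.index_group_handle = s.group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC
GO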

    Read the article

  • ORM Profiler v1.1 has been released!

    - by FransBouma
We've released ORM Profiler v1.1, which has the following new features:

Real time profiling
A real time viewer (RTV) has been added, which gives insight into the activity as it is received by the client, in two views: a chronological connection overview and an activity graph overview. The RTV allows the user to record directly to a snapshot using record buttons, pause the view, mark a range to create a snapshot from that range, and view graphs of the number of connection open actions and the number of commands per second. The RTV has a 'range' in which it keeps live data, and it auto-cleans data that is older than this range. (Screenshot of the activity graphs part of the real-time viewer omitted here.)

Low-level activity tab
A new tab has been added to the application tabs: the Low-level activity tab. This tab shows the main activity as it has been received over the named pipe. It can help to get insight into the chronological activity without the grouping over connections, so multiple connections at the same time per thread are easier to spot. Clicking a command will sync the rest of the application tabs; clicking a row will show the details below the splitter bar, as is done with the other application tabs as well.

Default application name in interceptor
When an empty string or null is passed as the application name to the Initialize method of the interceptor, the AppDomain's friendly name is used instead.

Copy call stack to clipboard
A call stack viewed in a grid in various parts of the UI can now be copied to the clipboard by clicking a button.

Enable/Disable interceptor from the config file
It's now possible to enable or disable the interceptor's initialization from the application's config file, using:

<appSettings>
    <add key="ORMProfilerEnabled" value="true"/>
</appSettings>

If the value is true, the interceptor's Initialize method will proceed. If the value is false, the interceptor's Initialize method will not proceed and no initialization will be performed, meaning no interception will take place. If the setting is absent or misconfigured, the Initialize method will proceed as normal and perform the initialization.

Stored procedure calls for selected databases are now properly displayed as a call
For the databases SQL Server, Oracle, DB2, Sybase ASA, Sybase ASE and Informix, a stored procedure call is displayed as an execute/call statement, and copy to clipboard works as-is.

I'm especially happy with the new real-time profiling feature, which is the flagship feature for this release: it offers a completely new way to use the profiler, namely directly during debugging, where you can immediately see what's going on without needing a snapshot. The activity graph feature, combined with the auto-cleanup of older data, allows you to keep the profiler open for a long period of time and see any spike of activity in the profiled application.

    Read the article

  • Regular Expressions Reference Tables Updated

    - by Jan Goyvaerts
The regular expressions reference on the Regular-Expressions.info website was completely overhauled with the big update of that site last month.

In the past, the reference section consisted of two parts. One part was a summary of the regex features commonly found in Perl-style regex flavors, with short descriptions and examples. This part of the reference ignored differences between regex flavors and omitted most features that don't have wide support. The other part was a regular expression flavor comparison that listed many more regex features, along with YES/no indicators for many regex flavors, but without any explanations of the features.

When reworking the site, I wanted to make the reference section more detailed, with descriptions and examples of all the syntax supported by the flavors discussed on the site. Doing that resulted in a reference that lists many features that are only supported by a few regex flavors. For such a reference to be usable, it needs to indicate which flavors support each feature.

My original design for the new reference tables used two rows for each feature. The first row had 4 columns with a label, syntax, description, and example, similar to the old reference tables. The second row had 20 columns indicating which versions of which flavors support these features. While the double-row design allowed all the information to fit within the table without requiring horizontal scrolling, it made it more difficult to quickly scan the tables for the feature you're looking for.

To make the new reference tables easier to read, they now have only a single row for each feature. The first 4 columns are the same as before. The remaining two columns show which versions of two regular expression flavors support the feature. You can use the drop-down lists above the table to choose the flavors the table should indicate. The site uses cookies to allow the flavor choices to persist while you navigate the reference.

The result of this latest update is that the new regex tables are now just as easy to read as the ten-year-old tables on the old site were, while still covering all the features, big and small, of all the flavors discussed on the site.

    Read the article

• Book Review: Inside Windows Communication Foundation by Justin Smith

    - by Sam Abraham
In gearing up for a new major project, I have taken it upon myself to research and review various aspects of our Microsoft stack of choice, seeking new, creative ways to leverage it in our upcoming state-of-the-art solution, which is projected to position us ahead of the competition. While I am a big supporter of search engines and online articles as a quick and usually reliable source of information, I have opted in my investigative quest to actually "hit the books". I have also made it a habit to provide quick reviews of the material I go over, hoping they can be of help to someone looking for references others have had success with.

I started a few months ago by investigating better ways of implementing, profiling and troubleshooting SQL Server 2008. My reference of choice was Itzik Ben-Gan et al's "Inside Microsoft SQL Server 2008" series. While it has been a month since my last book review, that by no means meant I was sitting idle. It has been pretty challenging to balance research with the continuous flow of projects and deadlines, all while balancing that with my family duties which, of course, always come first.

In this post, I will be providing a quick review of my latest reading: Inside Windows Communication Foundation by Justin Smith. This book has been on my reading list for a very long time, and I am proud to have finally tackled it. Justin's book presents great coverage of WCF internals. His simple, concise and well-worded style has simplified the relatively complex internals of WCF and made them comprehensible.

Justin opted to organize the book into three parts: an introduction to WCF, coverage of the Channel Layer, and a look at WCF internals at the ServiceModel layer. Part I introduced the concepts and made the case for WCF while covering a simplified version of WCF's message patterns, endpoints and contracts. In Part II, Justin provided thorough coverage of the internals of Messages, Channels and Channel Managers. Part III concluded this nice read with coverage of Bindings, Contracts, Dispatchers and Clients.

While one would not likely need to extend WCF at that low a level of the API, an understanding of the inner workings of WCF is a must to avoid pitfalls mainly caused by misinformation or erroneous assumptions. Problems can quickly arise in high-traffic hosted solutions, but most can be easily avoided with some minimal time investment and education. My next goal is to take a closer look at WCF from the programmer's API perspective, now that I have acquired a better understanding of its inner workings.

Many thanks to the O'Reilly User Group Program and its support of our West Palm Beach Developers' Group.

Stay tuned for more… All the best, --Sam

    Read the article

< Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >