Search Results

Search found 5162 results on 207 pages for 'bill white'.


  • SQL Server and Hyper-V Dynamic Memory Part 3

    - by SQLOS Team
In parts 1 and 2 of this series we looked at the basics of Hyper-V Dynamic Memory and SQL Server memory management. In this part Serdar looks at configuration guidelines for SQL Server memory management.

Part 3: Configuration Guidelines for Hyper-V Dynamic Memory and SQL Server

Now that we understand SQL Server memory management and Hyper-V Dynamic Memory basics, let's take a look at general configuration guidelines for taking advantage of Hyper-V Dynamic Memory in your SQL Server VMs.

Requirements

Host Operating System Requirements
The Hyper-V Dynamic Memory feature was introduced with Windows Server 2008 R2 SP1. Therefore, in order to use Dynamic Memory for your virtual machines, your Hyper-V host needs to run Windows Server 2008 R2 SP1 or Microsoft Hyper-V Server 2008 R2 SP1.

Guest Operating System Requirements
In addition, Dynamic Memory is only supported in the Standard, Web, Enterprise and Datacenter editions of Windows running inside VMs. Make sure that your VM is running one of these editions. For additional requirements on each operating system see "Dynamic Memory Configuration Guidelines" here.

SQL Server Requirements
All versions of SQL Server support Hyper-V Dynamic Memory. However, only certain editions of SQL Server are aware of dynamically changing system memory. To have a truly dynamic environment for your SQL Server VMs, make sure that you are running one of the SQL Server editions listed below:

- SQL Server 2005 Enterprise
- SQL Server 2008 Enterprise / Datacenter editions
- SQL Server 2008 R2 Enterprise / Datacenter editions

Configuration guidelines for other versions of SQL Server are covered below in the FAQ section.

Guidelines for Configuring Dynamic Memory Parameters
Here is how to configure Dynamic Memory for your SQL VMs in a nutshell:

Hyper-V Dynamic Memory Parameter | Recommendation
Startup RAM                      | 1 GB + SQL Min Server Memory
Maximum RAM                      | > SQL Max Server Memory
Memory Buffer %                  | 5
Memory Weight                    | Based on performance needs

Startup RAM
To ensure that your SQL Server VMs can start correctly, make sure that Startup RAM is higher than the configured SQL Min Server Memory for your VMs. Otherwise the SQL Server service will need to page in order to start, since it will not see enough memory during startup. Also note that Startup RAM is always reserved for your VMs. This guarantees a certain level of performance for your SQL Servers; however, setting it too high will limit the consolidation benefits you get out of your virtualization environment.

Maximum RAM
This one is obvious. If you've configured SQL Max Server Memory for your SQL Server, make sure that the Dynamic Memory Maximum RAM setting is higher than this value. Otherwise your SQL Server will not grow beyond the value configured for Dynamic Memory.

Memory Buffer %
The memory buffer setting is used to provision file cache to virtual machines in order to improve performance. Because SQL Server manages its own buffer pool, the Memory Buffer setting should be configured to the lowest value possible, 5%. Configuring a higher memory buffer will prevent low-resource notifications from the Windows Memory Manager and will prevent memory from being reclaimed from SQL Server VMs.

Memory Weight
The memory weight setting defines the importance of memory to a VM. Configure higher values for the VMs that have higher performance requirements. VMs with higher memory weight will be given more memory under high memory pressure conditions on your host.
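As a hedged illustration of the SQL Server side of the table above, here is a minimal C# sketch that applies SQL Min/Max Server Memory through sp_configure. The connection string and the 1 GB / 6 GB values are placeholders for illustration; choose values consistent with the Startup RAM and Maximum RAM recommendations for your own VM.

    using System.Data.SqlClient;

    class DynamicMemoryConfigSketch
    {
        static void Main()
        {
            // Placeholder connection string - point this at the SQL Server instance in the VM.
            const string connectionString = "Server=.;Database=master;Integrated Security=true";

            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // sp_configure takes values in MB. Keep 'min server memory' below the VM's
                // Startup RAM and 'max server memory' below the Dynamic Memory Maximum RAM.
                Run(conn, "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;");
                Run(conn, "EXEC sp_configure 'min server memory (MB)', 1024; RECONFIGURE;");
                Run(conn, "EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;");
            }
        }

        static void Run(SqlConnection conn, string sql)
        {
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }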
Questions and Answers

Q1 – Which SQL Server memory model is best for Dynamic Memory?
The best SQL Server memory model for Dynamic Memory is the "Locked Page Memory Model". This memory model ensures that SQL Server memory is never paged out, and it is also adaptive to dynamically changing memory in the system. This is extremely useful when Dynamic Memory is attempting to remove memory from SQL Server VMs, since it ensures no SQL Server memory is paged out. You can find instructions on configuring the "Locked Page Memory Model" for your SQL Servers here.

Q2 – What about other SQL Server editions, how should I configure Dynamic Memory for them?
Other editions of SQL Server do not adapt to dynamically changing environments. They determine how much memory to allocate during startup and don't change this value afterwards. Therefore, make sure that you configure a higher startup memory for your VM, because that will be all the memory that SQL Server utilizes. Tune Maximum Memory and Memory Buffer based on the other workloads running on the system. If there are no other workloads, consider using Static Memory for these editions.

Q3 – What if I have multiple SQL Server instances in a VM?
Having multiple SQL Server instances in a VM is not a general recommendation for predictable performance, manageability and isolation. In order to achieve predictable behavior, make sure that you configure SQL Min Server Memory and SQL Max Server Memory for each instance in the VM, and make sure that:

- Dynamic Memory Startup Memory is greater than the sum of the SQL Min Server Memory values for the instances in the VM
- Dynamic Memory Maximum Memory is greater than the sum of the SQL Max Server Memory values for the instances in the VM

For example, two instances configured with 2 GB of SQL Min Server Memory each need a Startup Memory above 4 GB.

Q4 – I'm using the Large Page Memory Model for my SQL Server. Can I still use Dynamic Memory?
The short answer is no. SQL Server does not dynamically change its memory size when configured with the Large Page Memory Model. In virtualized environments Hyper-V provides large page support by default, so most of the time the Large Page Memory Model doesn't bring any benefits to a SQL Server running in a virtualized environment.

Q5 – How do I monitor SQL performance when I'm trying Dynamic Memory on my VMs?
Use the performance counters below to monitor memory performance for SQL Server:

- Process – Working Set: This counter is available in the VM via the process performance counters. It represents the actual amount of physical memory being used by the SQL Server process in the VM.
- SQL Server – Buffer Cache Hit Ratio: This counter is available in the VM via the SQL Server counters. It represents the paging being done by SQL Server. A rate of 90% or higher is desirable.
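To poll these counters from code, here is a minimal C# sketch using System.Diagnostics.PerformanceCounter. The category and instance names ("Process"/"sqlservr" and "SQLServer:Buffer Manager") assume a default SQL Server instance; named instances expose "MSSQL$<InstanceName>" categories instead, so treat these names as assumptions to adjust.

    using System;
    using System.Diagnostics;

    class SqlMemoryCounterSketch
    {
        static void Main()
        {
            // "sqlservr" is the process name of a default SQL Server instance.
            using (var workingSet = new PerformanceCounter("Process", "Working Set", "sqlservr"))
            using (var hitRatio = new PerformanceCounter("SQLServer:Buffer Manager", "Buffer cache hit ratio"))
            {
                Console.WriteLine("Working Set (MB): {0:N0}", workingSet.NextValue() / (1024 * 1024));

                // For fraction counters NextValue() factors in the hidden base counter,
                // so this prints a percentage; 90% or higher is the target from Q5.
                Console.WriteLine("Buffer Cache Hit Ratio: {0:N1}%", hitRatio.NextValue());
            }
        }
    }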
Conclusion

These blog posts are a quick start to a story that will be developing more in the near future. We're still continuing our testing and investigations to provide more detailed configuration guidelines, with example performance numbers, in a white paper in the upcoming months. Now it's time to give SQL Server and Hyper-V Dynamic Memory a try. Use these guidelines to kick-start your environment, see what you think, and let us know of your experiences.

- Serdar Sutay

Originally posted at http://blogs.msdn.com/b/sqlosteam/

  • What’s New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
In the last installment I talked about the core changes in the ASP.NET runtime that I've been taking advantage of. In this column, I'll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier.

WebForms

The WebForms engine is the area that has received the most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and ViewState on WebForm pages.

Take Control of Your ClientIDs

Unique ClientID generation has been one of the most complained about "features" in ASP.NET. Although there's a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these generated ids often get in the way of client-side JavaScript development and CSS styling, as it's often inconvenient and fragile to work with the long, generated ClientIDs. In ASP.NET 4.0 you can now specify an explicit client id mode on each control, or on each naming container parent control, to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page or a User Control, for example).

The key to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page.

    <%@ Page Language="C#"
        MasterPageFile="~/Site.Master"
        CodeBehind="WebForm2.aspx.cs"
        Inherits="WebApplication1.WebForm2" %>

    <asp:Content ID="content" ContentPlaceHolderID="content"
                 runat="server"
                 ClientIDMode="Static">
        <asp:TextBox runat="server" ID="txtName" />
    </asp:Content>

The four available ClientIDMode values are:

AutoID
This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place:

    <input name="ctl00$content$txtName" type="text"
           id="ctl00_content_txtName" />

This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks.

Static
This option is the most deterministic setting; it forces the control's ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids:

    <input name="ctl00$content$txtName"
           type="text" id="txtName" />

Note that the name property, which is used for postback variables to the server, is still munged, but the ClientID property is rendered simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on the fact that it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control, for example) there can easily be a client id naming conflict.
Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique ids for child controls, you'll probably want to use Predictable rather than Static. I'll write more on this a little later when I discuss ClientIDRowSuffix.

Predictable
The previous two values are pretty self-explanatory. Predictable, however, requires some explanation. To me at least it's not in the least bit predictable. MSDN defines this value as follows:

This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_).

The key that makes this value a bit confusing is that it relies on the parent NamingContainer's ClientID to build its own ClientID value. This effectively means that the value is not predictable at all, but rather very tightly coupled to the parent naming container's ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (the Page in this case) is set to "Predictable" you'll get this:

    <input name="ctl00$content$txtName" type="text"
           id="content_txtName" />

which gives an id that is based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi-unique name that's guaranteed unique only for the naming container. If, on the other hand, the Page is set to "AutoID" you get the following with Predictable on txtName:

    <input name="ctl00$content$txtName" type="text"
           id="ctl00_content_txtName" />

The latter is effectively the same as if you had specified AutoID, because it inherits the AutoID naming from the Page and the Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID-generated value, because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static, you'd end up effectively with an id that starts with that NamingContainer's id rather than the whole ctl00_content munging.

The most common use for Predictable is likely to be for data-bound controls, which results in each data-bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can't set ClientIDMode. Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you're right back to munged ids that are pretty unpredictable.
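One defensive pattern worth noting (my own illustration, not from the article): when emitting script from code-behind, formatting the control's ClientID into the script keeps it working under any ClientIDMode. A minimal C# sketch, assuming the txtName TextBox from the examples above:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public class ClientIdScriptSketch : Page
    {
        // Stand-in for the designer-generated field for the txtName TextBox.
        protected TextBox txtName;

        protected void Page_Load(object sender, EventArgs e)
        {
            // ClientID reflects whatever ClientIDMode is in effect,
            // so the emitted script never goes stale.
            string script = string.Format(
                "document.getElementById('{0}').focus();", txtName.ClientID);

            ClientScript.RegisterStartupScript(
                GetType(), "focusTxtName", script, true /* addScriptTags */);
        }
    }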
Another property, ClientIDRowSuffix, can be used in combination with a ClientIDMode of Predictable to force a suffix onto list client controls. For example:

    <asp:GridView runat="server" ID="gvItems"
                  AutoGenerateColumns="false"
                  ClientIDMode="Static"
                  ClientIDRowSuffix="Id">
        <Columns>
            <asp:TemplateField>
                <ItemTemplate>
                    <asp:Label runat="server" id="txtName"
                               Text='<%# Eval("Name") %>'
                               ClientIDMode="Predictable"/>
                </ItemTemplate>
            </asp:TemplateField>
            <asp:TemplateField>
                <ItemTemplate>
                    <asp:Label runat="server" id="txtId"
                               Text='<%# Eval("Id") %>'
                               ClientIDMode="Predictable" />
                </ItemTemplate>
            </asp:TemplateField>
        </Columns>
    </asp:GridView>

generates client ids inside of a column in the master page described earlier:

    <td>
        <span id="txtName_0">Rick</span>
    </td>

where the value after the underscore is the ClientIDRowSuffix field - in this case the "Id" of the item data-bound to the control. Note that all of the child controls require ClientIDMode="Predictable" in order for the ClientIDRowSuffix to be applied, and the parent GridView control needs to be set to Static, either explicitly or via naming container inheritance, to give these simple names. It's a bummer that ClientIDRowSuffix doesn't work with Static to produce this automatically.

Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with a Static ListView and Predictable child controls:

    <span id="ctrl0_txtName_0">Rick</span>

I couldn't even figure out a way using ClientIDMode to get a simple ID that also uses a suffix, short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls, using <%= %> ids for the ListView might not be a bad idea anyway.

Inherit
The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container's ClientIDMode setting. For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx.

ClientID Enhancements Summary
The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature, as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of web.config (in the Pages section), which applies the setting to the entire page - which is my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls.
ViewStateMode
Another area of large criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server. It's a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation, as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a postback after client-side updates.

Over the years in my own development, I've often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality with non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs avoids ViewState problems altogether.

In ASP.NET 3.x and prior, it wasn't easy to control ViewState - you could turn it on or off, and if you turned it off at the page or web.config level, you couldn't turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally either at the page or web.config level and then turn it back on for specific controls that might need it.

ViewStateMode only works when EnableViewState="true" on the page or web.config level (which is the default). You can then use a ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you're shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to Disabled on the Page or web.config level and only turn it back on for particular controls:

    <%@ Page Language="C#"
        CodeBehind="WebForm2.aspx.cs"
        Inherits="Westwind.WebStore.WebForm2"
        ClientIDMode="Static"
        ViewStateMode="Disabled"
        EnableViewState="true" %>

    <!-- this control has viewstate -->
    <asp:TextBox runat="server" ID="txtName" ViewStateMode="Enabled" />

    <!-- this control has no viewstate - it inherits from its parent container -->
    <asp:TextBox runat="server" ID="txtAddress" />

Note that the EnableViewState="true" at the Page level isn't required since it's the default, but it's important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination, as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third-party controls - often don't work well without ViewState enabled, and now it's much easier to selectively enable controls, rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn't want ViewState on. Both of these page-level settings can also be applied from code-behind, as shown in the sketch below.
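Here is a small C# sketch of that (my own illustration; ClientIDMode and ViewStateMode are real ASP.NET 4.0 Page properties, while the OnPreInit timing is my assumption, chosen because ViewStateMode needs to be set before ViewState is loaded to have any effect):

    using System;
    using System.Web.UI;

    public class PageDefaultsSketch : Page
    {
        protected override void OnPreInit(EventArgs e)
        {
            base.OnPreInit(e);

            // Clean, deterministic client ids for the whole page.
            ClientIDMode = ClientIDMode.Static;

            // Opt out of ViewState page-wide; individual controls can opt
            // back in by setting their own ViewStateMode to Enabled.
            ViewStateMode = ViewStateMode.Disabled;
        }
    }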
Inline HTML Encoding
HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML-encoded content, ASP.NET 4.0 introduces a new expression syntax using <%: %> to encode string values. The encoding expression syntax looks like this:

    <%: "<script type='text/javascript'>" +
        "alert('Really?');</script>" %>

which produces properly encoded HTML:

    &lt;script type=&#39;text/javascript&#39;&gt;alert(&#39;Really?&#39;);&lt;/script&gt;

Effectively this is a shortcut to:

    <%= HttpUtility.HtmlEncode(
        "<script type='text/javascript'>" +
        "alert('Really?');</script>") %>

Of course the <%: %> syntax can also evaluate expressions just like <%= %>, so the more common scenario applies this expression syntax to data your application is displaying. Here's an example displaying some data model values:

    <%: Model.Address.Street %>

This snippet shows displaying data from your application's data store or, more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition that helps avoid potential cross-site scripting attacks. Although I listed inline HTML encoding here under WebForms, anything that uses the WebForms rendering engine, including ASP.NET MVC, benefits from this feature.

ScriptManager Enhancements
The ASP.NET ScriptManager control has in the past introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading, and it addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading.

The first is the AjaxFrameworkMode property, which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn't load any ASP.NET AJAX libraries, but there's also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of the ASP.NET AJAX script included, if you are using the library.

There's also a new EnableCdn property that forces any script that has a WebResource attribute with a CdnPath property set to be served from the CDN-supplied URL. If the script has this attribute property set to a non-null/empty value and EnableCdn is enabled on the ScriptManager, that script will be served from the specified CdnPath:

    [assembly: WebResource(
        "Westwind.Web.Resources.ww.jquery.js",
        "application/x-javascript",
        CdnPath = "http://mysite.com/scripts/ww.jquery.min.js")]

Cool, but a little too static for my taste, since this value can't be changed at runtime to point at a debug script as needed, for example.

Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which makes it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config:

    <asp:ScriptManager runat="server" id="Id"
                       EnableCdn="true"
                       AjaxFrameworkMode="disabled">
        <Scripts>
            <asp:ScriptReference
                Name="Westwind.Web.Resources.ww.jquery.js"
                Assembly="Westwind.Web" />
        </Scripts>
    </asp:ScriptManager>

The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: you can specify the URL that the combined script is served with.
Check out the following ScriptManager markup that combines several static file scripts and a script resource into a single ASP.NET-served resource from a static URL (allscripts.js):

    <asp:ScriptManager runat="server" id="Id"
                       EnableCdn="true"
                       AjaxFrameworkMode="disabled">
        <CompositeScript Path="~/scripts/allscripts.js">
            <Scripts>
                <asp:ScriptReference Path="~/scripts/jquery.js" />
                <asp:ScriptReference Path="~/scripts/ww.jquery.js" />
                <asp:ScriptReference
                    Name="Westwind.Web.Resources.editors.js"
                    Assembly="Westwind.Web" />
            </Scripts>
        </CompositeScript>
    </asp:ScriptManager>

When you render this into HTML, you'll see a single script reference in the page:

    <script src="scripts/allscripts.debug.js"
            type="text/javascript"></script>

All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty, but the files have to be there. This is pretty cool, but you want to be real careful to use unique URLs for each combination of scripts you combine, or else browser and server caching will easily screw you up royally.

The ScriptManager also now allows you to override native ASP.NET AJAX scripts, as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior, or you want to fix a possible bug in the core libraries that are normally loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy.

Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point, so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don't have a ScriptManager. The CdnPath is static and compiled in, which is very restrictive. And finally, there's still no control over where scripts get loaded in the page - ScriptManager still injects scripts into the middle of the HTML markup rather than in the header or, optionally, the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers.

MetaDescription, MetaKeywords Page Properties
There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page.
You can now just set these properties programmatically on the Page object in any Control-derived class on the page, or on the Page itself:

    Page.MetaKeywords = "ASP.NET,4.0,New Features";
    Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0";

Note that there's no corresponding ASP.NET tag for the HTML meta element, so the only way to specify these values in markup and access them is via the @Page tag:

    <%@ Page Language="C#"
        CodeBehind="WebForm2.aspx.cs"
        Inherits="Westwind.WebStore.WebForm2"
        ClientIDMode="Static"
        MetaDescription="Article that discusses what's new in ASP.NET 4.0"
        MetaKeywords="ASP.NET,4.0,New Features" %>

Nothing earth shattering, but quite convenient.

Visual Studio 2010 Enhancements for Web Development
For Web development there is also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific, but they are useful for Web developers in general.

Text Editors
Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF, which provides some interesting new features for the various code editors, including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor, and although Visual Studio 2010 doesn't yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides for editing.

On the negative side, I've noticed that occasionally the code editor - and especially the HTML and JavaScript editors - will lose the ability to use various navigation keys like the arrow, backspace and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented, so I suspect it will be addressed soon with a hotfix or within the first service pack. Overall though, the code editors work very well, especially given that they were rewritten completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors.

Multi-Targeting
Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2.0, 3.0 and 3.5 applications, which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It's nice to have one tool to work in for all the different versions.

Multi-Monitor Support
One cool feature of Visual Studio 2010 is the ability to drag windows out of the Visual Studio environment and onto the desktop, including onto another monitor, easily. Since Web development often involves working with a host of designers at the same time - visual designer, HTML markup window, code-behind and JavaScript editor - it's really nice to be able to have a little more screen real estate to work on each of these editors. Microsoft made a welcome change in the environment.

IntelliSense Snippets for HTML and JavaScript Editors
The HTML and JavaScript editors now finally support IntelliSense snippets to create macro-based template expansions, which have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or real templates that can have replaceable values embedded into the expanded text.
The XML syntax for these snippets is straightforward, and it's pretty easy to create custom snippets manually. You can easily create snippets using XML and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: you may have to manually add the Visual Studio 2010 user-specific snippet folders to this tool to see any existing snippets you've created.

Code snippets are some of the biggest time savers, and HTML editing more than anything deals with lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven't done so already, I encourage you to spend a little time examining your coding patterns, find the repetitive code that you write, and convert it into snippets. I've been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets.

jQuery Integration Is Now Native
jQuery is a popular JavaScript library, and Microsoft has recently stated that it will become the primary client-side scripting technology to drive higher-level script functionality in the various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project, including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008, which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery.

Summary
ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental changes that don't compromise backwards compatibility, and they allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine-tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on that feedback. If you haven't gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there's no reason not to give it a shot now - the ASP.NET 4.0 platform is solid, and Visual Studio 2010 works very well for a brand-new release. Check it out.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET


  • CodePlex Daily Summary for Thursday, March 11, 2010

New Projects

- ASP.NET Wiki Control: This ASP.NET user control allows you to embed a very useful wiki directly into your already existing ASP.NET website taking advantage of the popula...
- BabyLog: Log baby daily activity.
- buddyHome: buddyHome is a project that can make your home smarter, as good as your buddy.
- Cloud Community: Cloud Community makes it easier for organizations to have a simple to use community platform. Our mission is to create an easy to use community pl...
- Community Connectors for Microsoft CRM 4.0: Community Connectors for Microsoft CRM 4.0 allows Microsoft CRM 4.0 customers and partners to monitor and analyze customers’ interaction from their...
- Console Highlighter: Highlights the Microsoft Windows Command prompt (cmd.exe) by outputting ANSI VT100 control sequences to color the output. These sequences are not hand...
- Cornell Store: This is IN NO WAY officially affiliated or related to the Cornell University store. Instead, this is a project that I am doing for a class. Ther...
- DevUtilities: This project is for creating some utility tools, and they will be useful during development.
- DotNetNuke® Skin Maple: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by DyNNamite.co.uk. The package includes 4 color variations and sev...
- HRNet: HRNet
- IIS Web Site Monitoring: Software for monitoring a particular web site on IIS, even if its IP is shared between different web sites.
- Iowa Code Camp: The source code for the Iowa Code Camp website.
- Leonidas: Leonidas is a virtual tutor.
- Lunch 'n Learn: The Lunch 'n Learn web application is an open source ASP.NET MVC application that allows you to set up lunch 'n learn presentations for your team, c...
- MNT Cryptography: A very simple cryptography class.
- MooiNooi MVC2LINQ2SQL Web Databinder: mvc2linq2sql is a databinder for ASP.NET MVC that enables developers to cleanly bind objects from HTML forms to LINQ entities. Even 1 to N relations ...
- MoqBot: MoqBot is an auto mocking library for Moq and Ninject.
- mtExperience1: hoi
- MvcPager: MvcPager is a free paging component for ASP.NET MVC web applications; it exposes a series of extension methods for use in ASP.NET MVC applications...
- OCal: OCal is based on object calisthenics to identify code smells.
- Pex Custom Arithmetic Solver: Pex Custom Arithmetic Solver contains a collection of meta-heuristic search algorithms. The goal is to improve Pex's code coverage for code involvi...
- SetControls: Extended controls for ASP.NET applications. Full information closer to the release...
- shadowrage1597: CTC 195 Game Design class.
- SharePoint Team-Mailer: A SharePoint 2007 solution that defines a generic CustomList for sending e-mails to SharePoint Groups.
- Sql Share: SQL Share is a collaboration tool used within the sciences to allow database engineers to work tightly with domain scientists.
- TechCalendar: Tech Events Calendar ASP.NET project.
- ZLYScript: A very simple script language compiler.

New Releases

- ALGLIB: ALGLIB 2.4.0: The new ALGLIB release contains improved versions of several linear algebra algorithms: QR decomposition, matrix inversion, condition number estimatio...
- AmiBroker Plug-Ins with C#: AmiBroker Plug-Ins v0.0.2: Source code and a binary.
- AppFabric Caching UI Admin Tool: AppFabric Caching Beta 2 UI Admin Tool: System requirements: .NET 4.0 RC, AppFabric Caching Beta 2. Tested on: Win 7 (x64).
- Autodocs - WCF REST Automatic API Documentation Generator: Autodocs.ServiceModel.Web: This archive contains the reference DLL, instructions and license.
- Compact Plugs & Compact Injection: Compact Injection and Compact Plugs 1.1 Beta: First release of Compact Plugs (CP). The solution includes a simple example project of CP, called "TestCompactPlugs1". Also some fixes were made ...
- Console Highlighter: Console Highlighter 0.9 (preview release): Preliminary release.
- Encrypted Notes: Encrypted Notes 1.3: This is the latest version of Encrypted Notes (1.3). It has an installer - it will create a directory 'CPascoe' in My Documents. The last one was ...
- Family Tree Analyzer: Version 1.0.2: Family Tree Analyzer Version 1.0.2. This early beta version implements loading a GEDCOM file and displaying some basic reports. These reports inclu...
- FRC1103 - FRC Dashboard viewer: 2010 Documentation v0.1: This is my current version of the control system documentation for 2010. It isn't complete, but it has the information required for a custom dashbo...
- jQuery.cssLess: jQuery.cssLess 0.5 (Even less release): NEW - support for nested special CSS classes (like :hover). MAIN RELEASE. This release, code "Even less", is the one that will interpret cssLess wit...
- MooiNooi MVC2LINQ2SQL Web Databinder: MooiNooi MVC2LINQ2SQL DataBinder: I didn't try this... I just took it off from my project. Please tell me about any problem implementing it in your own development and I'll be pleased to h...
- MvcPager: MvcPager 1.2 for ASP.NET MVC 1.0: MvcPager 1.2 for ASP.NET MVC 1.0.
- Mytrip.Mvc: Mytrip 1.0 preview 1: Article Manager, Blog Manager, L2S Membership (.NET Framework 3.5), EF Membership (.NET Framework 4), User Manager, File Manager, Localization, Captcha ...
- NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.117: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's new: this version adds ...
- Pex Custom Arithmetic Solver: PexCustomArithmeticSolver: This is the alpha release containing the Alternating Variable Method and Evolution Strategies to try and solve constraints over floating-point vari...
- Scrum Sprint Monitor: v1.0.0.44877: What is new in this release? Major performance increase in animations (up to 50 fps from 2 fps) by replacing the DropShadow effect with PNG bitmaps; ...
- sELedit: sELedit v1.0b: + Added support for empty strings / wstrings. + Fixed: critical bug in configuration files (list 53).
- sPWadmin: pwAdmin v0.9_nightly: + Fixed: XML editor can now open and save character templates. + Added: PWI item name database. + Added: plugin support.
- TechCalendar: Events Calendar v.1.0: Initial release.
- The Silverlight Hyper Video Player [http://slhvp.com]: Beta 2: Beta 2.0. Some fixes from Beta 1, and a couple of small enhancements. Intensive testing continues, and I will continue to update the code at least ever...
- ThreadSafe.Caching: 2010.03.10.1: Updates to the scavenging behaviour since the last release. Scavenging will now occur every 30 seconds by default and all objects in the cache will be ...
- VCC: Latest build, v2.1.30310.0: Automatic drop of latest build.
- Visual Studio DSite: Email Sender (C++): The same Email Sender program that I made, but in Visual C++ 2008 instead of Visual Basic 2008.
- Web Forms MVP: Web Forms MVP CTP7: The release can be considered stable, and is in use behind several high-traffic, public websites. It has been marked as a CTP release as it is not ...
- White Tiger: 0.0.3.1: Now you can load or create files with whatever root element you want. *Check for sets file permissions.

Most Popular Projects
- MetaSharp
- WBFS Manager
- Rawr
- AJAX Control Toolkit
- Microsoft SQL Server Product Samples: Database
- Silverlight Toolkit
- Windows Presentation Foundation (WPF)
- ASP.NET
- Microsoft SQL Server Community & Samples
- ASP.NET Ajax Library

Most Active Projects
- Umbraco CMS
- Rawr
- SDS: Scientific DataSet library and tools
- N2 CMS
- Fasterflect - A Fast and Simple Reflection API
- jQuery Library for SharePoint Web Services
- BlogEngine.NET
- Farseer Physics Engine
- patterns & practices – Enterprise Library
- Caliburn: An Application Framework for WPF and Silverlight


  • Routing Issue in ASP.NET MVC 3 RC 2

    - by imran_ku07
Introduction

Two weeks ago, the ASP.NET MVC team shipped the ASP.NET MVC 3 RC 2 release. This release includes some new features and some performance optimizations. It also fixes most known bugs, but some minor issues are still present. Some of these issues are already discussed by Scott Guthrie in Update on ASP.NET MVC 3 RC2 (and a workaround for a bug in it). In addition to those issues, I have found another issue in this release regarding routing. In this article, I will show you the issue and a simple workaround for it.

Description

The easiest way to understand an issue is to reproduce it in an application. So create an MVC 2 application and an MVC 3 RC 2 application. Then, in both applications, open the global.asax file and update the default route as below:

    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default",                           // Route name
        "{controller}/{action}/{id1}/{id2}", // URL with parameters
        new { controller = "Home", action = "Index",
              id1 = UrlParameter.Optional, id2 = UrlParameter.Optional } // Parameter defaults
    );

Then open the Index view and add the following lines:

    <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %>

    <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
        Home Page
    </asp:Content>

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <% Html.RenderAction("About"); %>
    </asp:Content>

The above view will issue a child request to the About action method. Now run both applications. The ASP.NET MVC 2 application will run just fine, but the ASP.NET MVC 3 RC 2 application will throw an exception.

You may think that this is a routing issue, but that is not the case here, as both the ASP.NET MVC 2 and ASP.NET MVC 3 RC 2 applications (created above) are built with .NET Framework 4.0 and both use the same routing defined in System.Web. Something is wrong in ASP.NET MVC 3 RC 2. After digging into the ASP.NET MVC source code, I found that the UrlParameter class in ASP.NET MVC 3 RC 2 overrides the ToString method to simply return an empty string:

    public sealed class UrlParameter
    {
        public static readonly UrlParameter Optional = new UrlParameter();

        private UrlParameter()
        {
        }

        public override string ToString()
        {
            return string.Empty;
        }
    }

In MVC 2 the ToString method was not overridden. So, to quickly fix the above problem, either replace the UrlParameter.Optional default value with a different value other than null or empty (for example, a single white space), or replace it with an object of a new class containing the same code as the UrlParameter class except that ToString is not overridden (or with an overridden ToString method that returns a string value other than null or empty). But by doing this you will lose the benefit of ASP.NET MVC 2's optional URL parameters. There may be many different ways to fix the above problem without losing the benefit of optional parameters. Here I will create a new class, MyUrlParameter, with the same code as the UrlParameter class except that ToString is not overridden. Then I will create a base controller class whose constructor removes all MyUrlParameter route data parameters, just like ASP.NET MVC does with UrlParameter route data parameters early in the request:
    public class BaseController : Controller
    {
        public BaseController()
        {
            if (System.Web.HttpContext.Current.CurrentHandler is MvcHandler)
            {
                RouteValueDictionary rvd = ((MvcHandler)System.Web.HttpContext.Current.CurrentHandler)
                    .RequestContext.RouteData.Values;

                string[] matchingKeys = (from entry in rvd
                                         where entry.Value == MyUrlParameter.Optional
                                         select entry.Key).ToArray();

                foreach (string key in matchingKeys)
                {
                    rvd.Remove(key);
                }
            }
        }
    }

    public class HomeController : BaseController
    {
        public ActionResult Index(string id1)
        {
            ViewBag.Message = "Welcome to ASP.NET MVC!";
            return View();
        }

        public ActionResult About()
        {
            return Content("Child Request Contents");
        }
    }

    public sealed class MyUrlParameter
    {
        public static readonly MyUrlParameter Optional = new MyUrlParameter();

        private MyUrlParameter()
        {
        }
    }

    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default",                           // Route name
        "{controller}/{action}/{id1}/{id2}", // URL with parameters
        new { controller = "Home", action = "Index",
              id1 = MyUrlParameter.Optional, id2 = MyUrlParameter.Optional } // Parameter defaults
    );

MyUrlParameter is a copy of the UrlParameter class except that it does not override the ToString method. Note that the default route is modified to use MyUrlParameter.Optional instead of UrlParameter.Optional. Also note that the BaseController constructor removes MyUrlParameter parameters from the current request's route data so that the model binder will not bind these parameters to action method parameters. Now run the ASP.NET MVC 3 RC 2 application again, and you will find that it runs just fine.

In case you are curious why the ASP.NET MVC 3 RC 2 application throws an exception when the UrlParameter class contains a ToString method that returns an empty string, you need to know something about how routing generates URLs. During URL generation, routing internally calls the ParsedRoute.Bind method. This method includes logic to match the route and build the URL. While building the URL, ParsedRoute.Bind calls the ToString method of the route values (in our case, the UrlParameter.ToString method) and appends the returned value to the URL. It also includes logic so that if two consecutive returned values are empty, the current route is not matched; otherwise an incorrect URL would be generated. Here is the snippet from ParsedRoute.Bind that shows this:

    if ((builder2.Length > 0) && (builder2[builder2.Length - 1] == '/'))
    {
        return null;
    }
    builder2.Append("/");
    ...........................................................
    ...........................................................
    if (RoutePartsEqual(obj3, obj4))
    {
        builder2.Append(UrlEncode(Convert.ToString(obj3, CultureInfo.InvariantCulture)));
        continue;
    }

In the above example, both the id1 and id2 parameters' default values are set to a UrlParameter object, and the UrlParameter class includes a ToString method that returns an empty string. That's why this route is not matched.

Summary

In this article I showed you a routing-related issue in ASP.NET MVC 3 RC 2 and how to work around it, by reproducing it in both an ASP.NET MVC 2 and an ASP.NET MVC 3 RC 2 application. I also explained the underlying reason for the issue.
Hopefully you will enjoy this article too.


  • Launching a WPF Window in a Separate Thread, Part 1

    - by Reed
Typically, I strongly recommend keeping the user interface within an application's main thread, and using multiple threads to move the actual "work" into background threads. However, there are rare times when creating a separate, dedicated thread for a Window can be beneficial. This is even acknowledged in the MSDN samples, such as the Multiple Windows, Multiple Threads sample. However, doing this correctly is difficult. Even the referenced MSDN sample has major flaws, and will fail horribly in certain scenarios. To ease this, I wrote a small class that alleviates some of the difficulties involved.

The MSDN Multiple Windows, Multiple Threads sample shows how to launch a new thread with a WPF Window, and will work in most cases. The sample code (commented and slightly modified) works out to the following:

    // Create a thread
    Thread newWindowThread = new Thread(new ThreadStart(() =>
    {
        // Create and show the Window
        Window1 tempWindow = new Window1();
        tempWindow.Show();

        // Start the Dispatcher Processing
        System.Windows.Threading.Dispatcher.Run();
    }));

    // Set the apartment state
    newWindowThread.SetApartmentState(ApartmentState.STA);

    // Make the thread a background thread
    newWindowThread.IsBackground = true;

    // Start the thread
    newWindowThread.Start();

This sample creates a thread, marks it as single-threaded apartment state, and starts the Dispatcher on that thread. That is the minimum required to get a Window displaying and handling messages correctly but, unfortunately, it has some serious flaws.

The first issue: given the code in the sample, the created thread will run continuously until the application shuts down. The problem is that the ThreadStart delegate used ends by running the Dispatcher, but nothing ever stops the Dispatcher processing. The thread was created as a background thread, which prevents it from keeping the application alive, but the Dispatcher will continue to pump dispatcher frames until the application shuts down.

In order to fix this, we need to call Dispatcher.InvokeShutdown after the Window is closed. This requires modifying the above sample to subscribe to the Window's Closed event and, at that point, shut down the Dispatcher:

    // Create a thread
    Thread newWindowThread = new Thread(new ThreadStart(() =>
    {
        Window1 tempWindow = new Window1();

        // When the window closes, shut down the dispatcher
        tempWindow.Closed += (s, e) =>
            Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background);

        tempWindow.Show();

        // Start the Dispatcher Processing
        System.Windows.Threading.Dispatcher.Run();
    }));

    // Setup and start thread as before

This eliminates the first issue. Now, when the Window is closed, the new thread's Dispatcher will shut itself down, which in turn will cause the thread to complete. The above code will work correctly for most situations.
However, there is still a potential problem which could arise depending on the content of the Window1 class. This is particularly nasty, as the code could easily work for most windows, but fail on others. The problem is that, at the point where the Window is constructed, there is no active SynchronizationContext. This is unlikely to be a problem in most cases, but it is an absolute requirement if there is code within the constructor of Window1 which relies on a context being in place.

While this sounds like an edge case, it's fairly common. For example, if a BackgroundWorker is started within the constructor, or a TaskScheduler is built using TaskScheduler.FromCurrentSynchronizationContext() with the expectation of synchronizing work to the UI thread, an exception will be raised at some point. Both of these classes rely on the existence of a proper context being installed to SynchronizationContext.Current, which happens automatically, but not until Dispatcher.Run is called. In the above case, SynchronizationContext.Current will return null during the Window's construction, which can cause exceptions or unexpected behavior.

Luckily, this is fairly easy to correct. We need to do three things, in order, prior to creating our Window:

1. Create and initialize the Dispatcher for the new thread manually
2. Create a synchronization context for the thread which uses the Dispatcher
3. Install the synchronization context

Creating the Dispatcher is quite simple - the Dispatcher.CurrentDispatcher property gets the current thread's Dispatcher and "creates a new Dispatcher if one is not already associated with the thread." Once we have the correct Dispatcher, we can create a SynchronizationContext which uses the dispatcher by creating a DispatcherSynchronizationContext. Finally, this synchronization context can be installed as the current thread's context via SynchronizationContext.SetSynchronizationContext. These three steps can easily be added to the above via a single line of code:

    // Create a thread
    Thread newWindowThread = new Thread(new ThreadStart(() =>
    {
        // Create our context, and install it:
        SynchronizationContext.SetSynchronizationContext(
            new DispatcherSynchronizationContext(
                Dispatcher.CurrentDispatcher));

        Window1 tempWindow = new Window1();

        // When the window closes, shut down the dispatcher
        tempWindow.Closed += (s, e) =>
            Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background);

        tempWindow.Show();

        // Start the Dispatcher Processing
        System.Windows.Threading.Dispatcher.Run();
    }));

    // Setup and start thread as before

This now forces the synchronization context to be in place before the Window is created, and correctly shuts down the Dispatcher when the window closes. However, there are quite a few steps. In my next post, I'll show how to make this operation more reusable by creating a class with a far simpler API…
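Until that follow-up post, here is a hedged sketch (my own, not the class Reed describes) of how the pieces assembled above can be wrapped into a single reusable helper. The WindowThread name and the Func<Window> factory parameter are illustrative choices:

    using System;
    using System.Threading;
    using System.Windows;
    using System.Windows.Threading;

    public static class WindowThread
    {
        // Launches the window produced by 'windowFactory' on its own STA thread,
        // with a Dispatcher and SynchronizationContext installed before construction.
        public static Thread Launch(Func<Window> windowFactory)
        {
            var thread = new Thread(() =>
            {
                // Install a context bound to this thread's (newly created) Dispatcher,
                // so code in the Window's constructor sees SynchronizationContext.Current.
                SynchronizationContext.SetSynchronizationContext(
                    new DispatcherSynchronizationContext(Dispatcher.CurrentDispatcher));

                Window window = windowFactory();

                // Shut down this thread's Dispatcher once the window closes,
                // allowing the thread to complete.
                window.Closed += (s, e) =>
                    Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background);

                window.Show();
                Dispatcher.Run();
            });

            thread.SetApartmentState(ApartmentState.STA);
            thread.IsBackground = true;
            thread.Start();
            return thread;
        }
    }

    // Usage (hypothetical): WindowThread.Launch(() => new Window1());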


  • How Oracle Data Integration Customers Differentiate Their Business in Competitive Markets

    - by Irem Radzik
With data being a central force in driving innovation and competing effectively, data integration has become a key IT approach for removing silos and ensuring work with consistent and trusted data. Especially with the release of the 12c versions, Oracle Data Integrator and Oracle GoldenGate offer easy-to-use and high-performance solutions that help companies with their critical data initiatives, including big data analytics, moving to cloud architectures, modernizing and connecting transactional systems, and more.

In a recent press release we announced the great momentum and analyst recognition Oracle Data Integration products have achieved in the data integration and replication market. In this press release we described some of the key new features of Oracle Data Integrator 12c and Oracle GoldenGate 12c. In addition, a few of our 4500+ customers explained how Oracle's data integration platform helped them achieve their business goals. In this blog post I would like to go over what these customers shared about their experience.

Land O'Lakes is one of America's premier member-owned cooperatives, and offers an extensive line of agricultural supplies, as well as production and business services. Rich Bellefeuille, manager, ETL & data warehouse for Land O'Lakes, told us how GoldenGate helped them modernize their critical ERP system without impacting service, and how they are moving on to new projects with Oracle Data Integrator 12c:

"With Oracle GoldenGate 11g, we've been able to migrate our enterprise-wide implementation of Oracle's JD Edwards EnterpriseOne ERP system to a new database and application server platform with minimal downtime to our business. Using Oracle GoldenGate 11g we reduced database migration time from nearly 30 hours to less than 30 minutes. Given our quick success, we are considering expansion of our Oracle GoldenGate 12c footprint. We are also in the midst of deploying a solution leveraging Oracle Data Integrator 12c to manage our pricing data to handle orders more effectively and provide a better relationship with our clients. We feel we are gaining higher productivity and flexibility with Oracle's data integration products."

ICON, a global provider of outsourced development services to the pharmaceutical, biotechnology and medical device industries, highlighted the competitive advantage that a solid data integration foundation brings. Diarmaid O'Reilly, enterprise data warehouse manager, ICON plc, said: "Oracle Data Integrator enables us to align clinical trials intelligence with the information needs of our sponsors. It helps differentiate ICON's services in an increasingly competitive drug-development industry." You can find more info on ICON's implementation here.

A popular use case for Oracle GoldenGate's real-time data integration is offloading operational reporting from critical transaction processing systems. SolarWorld, one of the world's largest solar-technology producers and the largest U.S. solar panel manufacturer, implemented Oracle GoldenGate for real-time data integration of manufacturing data for fast analysis. Russ Toyama, U.S.
senior database administrator for SolarWorld told us real-time data helps their operations and GoldenGate’s solution supports high performance of their manufacturing systems: “We use Oracle GoldenGate for real-time data integration into our decision support system, which performs real-time analysis for manufacturing operations to continuously improve product quality, yield and efficiency. With reliable and low-impact data movement capabilities, Oracle GoldenGate also helps ensure that our critical manufacturing systems are stable and operate with high performance."  You can watch the full interview with SolarWorld's Russ Toyama here. Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} Starwood Hotels and Resorts is one of the many customers that found out how well Oracle Data Integration products work with Oracle Exadata. Gordon Light, senior director of information technology for StarWood Hotels, says they had notable performance gain in loading Oracle Exadata reporting environment: “We leverage Oracle GoldenGate to replicate data from our central reservations systems and other OLTP databases – significantly decreasing the overall ETL duration. Moving forward, we plan to use Oracle GoldenGate to help the company achieve near-real-time reporting.”You can listen about Starwood Hotels' implementation here. Many companies combine the power of Oracle GoldenGate with Oracle Data Integrator to have a single, integrated data integration platform for variety of use cases across the enterprise. Ufone is another good example of that. The leading mobile communications service provider of Pakistan has improved customer service using timely customer data in its data warehouse. Atif Aslam, head of management information systems for Ufone says: “Oracle Data Integrator and Oracle GoldenGate help us integrate information from various systems and provide up-to-date and real-time CRM data updates hourly, rather than daily. The applications have simplified data warehouse operations and allowed business users to make faster and better informed decisions to protect revenue in the fast-moving Pakistani telecommunications market.” You can read more about Ufone's use case here. In our Oracle Data Integration 12c launch webcast back in November we also heard from BT’s CTO Surren Parthab about their use of GoldenGate for moving to private cloud architecture. Surren also shared his perspectives on Oracle Data Integrator 12c and Oracle GoldenGate 12c releases. You can watch the video here. These are only a few examples of leading companies that have made data integration and real-time data access a key part of their data governance and IT modernization initiatives. They have seen real improvements in how their businesses operate and differentiate in today’s competitive markets. 
You can read about other customer examples in our Ebook: The Path to the Future and access resources including white papers, data sheets, podcasts and more via our Oracle Data Integration resource kit. /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;}

    Read the article

  • So, how is the Oracle HCM Cloud User Experience? In a word, smokin’!

    - by Edith Mireles-Oracle
By Misha Vaughan, Oracle Applications User Experience

Oracle unveiled its game-changing cloud user experience strategy at Oracle OpenWorld 2013 (remember that?) with a new simplified user interface (UI) paradigm. The Oracle HCM Cloud user experience is about light-weight interaction, tailored to the task you are trying to accomplish, on the device you are comfortable working with. A key theme for the Oracle user experience is being able to move from smartphone to tablet to desktop, with all of your data in the cloud. The Oracle HCM Cloud user experience provides designs for better productivity, no matter when and how your employees need to work.

Release 8

Oracle recently demonstrated how fast it is moving development forward for our cloud applications, with the availability of release 8. In release 8, users will see expanded simplicity in the HCM cloud user experience, such as filling out a time card and succession planning. Oracle has also expanded its mobile capabilities with task flows for payslips, managing absences, and advanced analytics. In addition, users will see expanded extensibility with the new structures editor for simplified pages, and with the user interface text editor, which allows you to update language throughout the UI from one place. If you don't like calling people who work for you "employees," you can use this tool to create a term that is suited to your business. Take a look yourself at what's available now.

What are people saying?

Debra Lilley (@debralilley), an Oracle ACE Director who has a long history with Oracle Applications, recently gave her perspective on release 8: "Having had the privilege of seeing a preview of release 8, I am again impressed with the enhancements around simplified UI. Even more so, at a user group event in London this week, an existing Cloud HCM customer speaking publicly about his implementation said he was very excited about release 8 as the absence functionality was so superior and simple to use." In an interview with Lilley for a blog post by Dennis Howlett (@dahowlett), we probably couldn't have asked for a more even-handed look at the Oracle Applications Cloud and the impact of user experience. Take the time to watch all three videos and get the full picture. In closing, Howlett said: "There is always the caveat that getting from the past to Fusion [from the editor: Fusion is now called the Oracle Applications Cloud] is not quite as simple as may be painted, but the outcomes are much better than anticipated in large measure because the user experience is so much better than what went before."

Herman Slange, Technical Manager with Oracle Applications partner Profource, agrees with that comment. "We use on-premise Financials & HCM for internal use. Having a simple user interface that works on a desktop as well as a tablet for (very) non-technical users is a big relief. Coming from E-Business Suite, there is less training (none) required to access HCM content. From a technical point of view, having the ability to tailor the simplified UI very easily makes it very efficient for us to adjust to specific customer needs. When we have a conversation about simplified UI, we just hand over a tablet and ask the customer to just use it. No training and no explanation required."

Finally, in a story by Computer Weekly about Oracle customer BG Group, a natural gas exploration and production company based in the UK and with a presence in 20 countries, the author states: "The new HR platform has proved to be easier and more intuitive for HR staff to use than the previous SAP-based technology."

What's Next for Oracle's Applications Cloud User Experiences?

This is the question that Steve Miranda, Oracle Executive Vice President, Applications Development, asks the Applications User Experience team, and we've been hard at work for some time now on "what's next." I can't say too much about it, but I can tell you that we've started talking to customers and partners, under non-disclosure agreements, about user experience concepts that we are working on in order to get their feedback. We recently had a chance to talk about possibilities for the Oracle HCM Cloud user experience at an Oracle HCM Southern California Customer Success Summit. This was a fantastic event, hosted by Shane Bliss and Vance Morossi of the Oracle Client Success Team. We got to use the uber-slick facilities of Allergan, our hosts (of Botox fame), headquartered in Irvine, Calif., with a presence in more than 100 countries.

Photo by Misha Vaughan, Oracle Applications User Experience: Vance Morossi, left, and Shane Bliss, of the Oracle Client Success Team, at an Oracle HCM Southern California Customer Success Summit.

We were treated to a few really excellent talks around human resources (HR). Alice White, VP Human Resources, discussed Allergan's process for global talent acquisition: how Allergan has designed and deployed a global process, and global tools, along with Oracle and Cognizant, and is now at the end of a global implementation. She shared a couple of insights about the journey for Allergan: "One of the major areas for improvement was on role clarification within the company." She said the company is "empowering managers and deputizing them as recruiters. Now it is a global process that is nimble and efficient."

Deepak Rammohan, VP Product Management, HCM Cloud, Oracle, also took the stage to talk about pioneering modern HR. He reflected on the modern HR problems of getting the right data about the workforce, the importance of getting the right talent as a key strategic initiative, and other workforce insights. "How do we design systems to deal with all of this?" he asked. "Make sure the systems are talent-centric. The next piece is collaborative, engaging, and mobile. A lot of this is influenced by what users see today. The last thing is around insight; insight at the point of decision-making." Rammohan showed off some killer HCM Cloud talent demos focused on simplicity and mobility that his team has been cooking up, and closed with a great line about the nature of modern recruiting: "Recruiting is a team sport."

Photo: Deepak Rammohan, left, and Jake Kuramoto, both of Oracle, debate the merits of a Google Glass concept demo for recruiters on-the-go.

Later, in an expo-style format, the Apps UX team showed several concepts for next-generation HCM Cloud user experiences, including demos shown by Jake Kuramoto (@jkuramoto) of The AppsLab, and Aylin Uysal (@aylinuysal), Director, HCM Cloud user experience. We even hauled out our eye-tracker, a research tool used to show where the eye is looking at a particular screen, thanks to teammate Michael LaDuke.

Photo: Dionne Healy, HCM Client Executive, and Aylin Uysal, Director, HCM Cloud user experiences, Oracle, take a look at new HCM Cloud UX concepts.

We closed the day with Jeremy Ashley (@jrwashley), VP, Applications User Experience, who brought it all back together by talking about the big picture for applications cloud user experiences. He covered the trends we are paying attention to now, what users will be expecting of their modern enterprise apps, and what Oracle's design strategy is around these ideas. We closed with an excellent reception hosted by ADP Payroll services at Bistango.

Want to read more? Want to see where our cloud user experience is going next? Read more on the UsableApps web site about our latest design initiative: "Glance, Scan, Commit." Or catch up on the back story by looking over our Applications Cloud user experience content on the UsableApps web site. You can also find out where we'll be next at the Events page on UsableApps.

    Read the article

  • Flashing your Windows Phone Dummies

    - by Martin Hinshelwood
The rate at which vendors release new updates for the HD2 is ridiculously slow. You have to wait for Microsoft to release the new OS, then you wait for HTC to build it into a ROM, and then you have to wait up to 6 months for your operator to badly customise it for their network. Once Windows Phone 7 is released this problem should go away, as Microsoft is likely to be able to update the phone over the air, but what do we do until then? I want Windows Mobile 6.5.5 now!

I'm an early adopter. If there is a new version of something then that's the version I want. As long as you accept that you are using something on a "let the early adopter beware" basis, and accept that there may be bugs, sometimes serious crippling bugs, then go for it. Note that I won't be responsible if you end up bricking your phone; unlocking or flashing your radio or ROM can be risky. If you follow the instructions then you should be fine. I've flashed my phones (SPV, M300, M1000, M2000, M3100, TyTN, TyTN 2, HD2) hundreds of times without any problems! I have been using Windows Mobile 6.5.5 since before it was called 6.5.5, and for long enough that I don't even remember when I first started using it. I was using it on my HTC TyTN 2 before I got an HD2 a couple of months before Christmas, and the first custom ROMs for the HD2 came a couple of months after that. I always update to the latest ROM that I like, and occasionally I go back to the stock ROMs to have a look see, but I am always disappointed.

Terms:
- Soft Reset: Same as pulling out the battery; like a reboot for your phone.
- Hard Reset: Reinstalls the operating system from the image that is stored on it.
- ROM: The image that is loaded onto your phone; it is used to reinstall your phone whenever you do a "hard reset".
- Stock ROM: A ROM from the original vendor, so HTC.
- Cook a ROM: "Cooking" a ROM is the process a ROM developer goes through to take all of the parts (OS, drivers and applications) that make up a running phone and compile them into a ROM.
- ROM Kitchen: A place where you get an SDK and all the component parts of the phone (OS, drivers and applications). There are usually lots of tools for making it easier to compile and build the image.
- Flashing: The process of updating one of the layers of your phone with a new layer.
- Bricked: This is what happens when flashing goes wrong. Your phone is now good for only one thing: stopping paper blowing away in a windy place.

You can "cook" your own ROM using one of the many good ROM kitchens, or you can use a ROM built and tested by someone else. I have cooked my own ROM before, and while the tutorials are good, it is a lot of hassle. You can only flash ROMs that are built specifically for your phone, so find a ROM for your phone; XDA Developers is the best place to look. It has a forum-based structure and you can find your phone quite easily. XDA Developer Forum

Installing a new ROM does have its risks. In the past there have been stories about phones being "bricked", but I have not heard of a bricked phone for quite some years. If you follow the instructions carefully you should not have any problems. Note: most of the tools are written by people for whom English is not their first language, so you will need to concentrate hard to understand some of the instructions. Have you ever read a manual that was literally translated from another language? Enough said…

There are a number of layers on your phone that you will need to know about:
- SPL: This is the lowest level, like a BIOS on a PC, and is the operating system's gateway to the hardware.
- Radio: I think of this as the hardware drivers; you will need a different Radio for CDMA than for GSM networks.
- ROM: This is like your Windows CD, but it is stored internally on the phone.

Flashing your phone consists of replacing one image with another and then wiping your phone, which automatically reinstalls from the image. Sometimes when you download an image, whether it is for a Radio or for a ROM, you only get a file called *.nbh. What do you do with this? Well, you need an RUU application to push that image to your phone. The RUUs are different per phone, but there is a CustomRUU for the HD2 that will update your phone with any *.nbh placed in the same directory. Download and instructions for CustomRUU

#1 Flash HardSPL

An SPL is kind of like a BIOS, and the default one has checks to make sure that you are only installing a signed ROM. This would prevent you from installing one that comes from any other source but the vendor. NOTE: Installing a HardSPL invalidates your warranty, so remember to flash your phone with a "stock" vendor ROM before trying to send your phone in for repairs. Is the warranty reinstated when you go back to a stock ROM? I don't know… Updating your SPL to a HardSPL effectively unlocks your phone so you can install anything you like. I would recommend HardSPL2. Download and instructions for HardSPL2

#2 Task29

One of the problems that has been seen on the HD2 when flashing new ROMs is that things are left over from the old ROM. For a while the recommendation was to flash a stock ROM first, but some clever cookies have come up with "Task29", which formats your phone first. After running this your phone will be blank and will only boot to the white HTC logo and no further. You should follow the instructions and reboot (remove the battery), then hold down the "volume down" button while turning your HD2 on to enter the bootloader. From here you can run CustomRUU once the USB message appears. Download and instructions for Task29

#3 Flash Radio

You may need to play around with this one; there is no single good or bad version, and the latest is not always the best. You know that annoying thing when you hit "end call" on your phone and nothing happens? Well, that's down to the Radio. Get this version right and you may even be able to make calls from a Windows Mobile as well. Download: there are no instructions here, but they are the same as for the ROM, except you use this *.nbh file.

#4 Flash ROM

If you have gotten this far then you are probably a pro by now. Just download the latest ROM below and flash it to your phone. I have been really impressed by the Artemis line of ROMs, but it is in no way the only choice. I like this one as the developer builds them as close to the stock ROM as possible while updating to the latest of everything. Download and instructions for Artemis HD2 vXX

Conclusion

While updating your ROM is not for the faint hearted, it provides more options than the stock ROMs and quicker feature updates than waiting… Technorati Tags: WM6

    Read the article

  • CodePlex Daily Summary for Saturday, May 15, 2010

CodePlex Daily Summary for Saturday, May 15, 2010

New Projects
- BizTalk EDI Guidance: BizTalk EDI Guidance is intended to simplify the delivery of EDI solutions by leveraging the ESB Toolkit. This project is currently Alpha and sh...
- Continues Integration Sample: I'm providing a series of blog post to show a complete CI process using CruiseControl.Net and msbuild. The source code for this series is hosted here.
- DioM2D: My Dragons in our Midst RPG. Runs on my custom Starlight Engine.
- Ethical Hacking ASP.NET: Security tools and guidelines for white-hat hacking and protecting ASP.NET web applications.
- Farseer Engine with XNATouch: Farseer is great engine for game physics. This implementation uses XNATouch framework.
- Feature Builder Guidance Extensions: Feature Builder Guidance Extensions are Feature Extensions which extend the guidance for the Feature Building experience. Each FBGX will be suppli...
- Microsoft Office Document Security: MODS is a plugin for office 2007 thats includes Hash Encryption, Hex Convertion and more. Plugins: MODS For Word still working on (MODS for Excel ...
- Minimize Engine (XNA): The Minimize Engine is a basic 3D Games Engine created using XNA, with its primary focus around Grid Based games.
- MSForge TownCrier: This project is meant to build a notification and calling system for MSForge.net User Groups.
- NatureProtector: Silverlight 4 project.
- OutSync: OutSync is a free Windows desktop application that syncs photos of your Facebook friends with matching contacts in Microsoft Outlook. It allows you...
- Quick Save Images, Clipboard save to file, Quick save, bmp, png, jpeg, Image: ClipSa is a very small tool for very quick picture saving. You put some picture into the clipboard (PrintScrn/Alt-PrintScrn/Ctrl-C), ClipSa saves ...
- ResHelper Manager: Resource strings management tool that creates localization files for any type of localization target (asp.net, wpf and so on...)
- SecureCookieHttpModule: Secure your session cookie (and other session-based) cookies for replay attacks using this easy to use ASP.NET HttpModule.
- simpleChMS: A Church Management System (ChMS) designed for churches or ministries like youth groups that want to facilitate better care or theie membership. Fo...
- sMAPtool: -
- SPDomainObject: mapping strong type objects to sp lists
- SQL Trim: This project aims at developing a universal trim function for Microsoft SQL Server. It trims: 1) pre spaces 2) post spaces 3) double spaces 3) subs...
- TurretGunner: mt-experience

New Releases
- BeanProxy: BeanProxy 3.0: BeanProxy is a C# (.NET 3.5) library housing classes that facilitates unit testing. Any non-static, public interface/class or abstract class can be...
- Blueset Studio Opensource Projects: 蓝色之风记事本 0.2 Alpha (Blue Wind Notepad): A super-buggy version……
- CSharp Intellisense: V2.1: - Bug fix (Pascal Casing)
- DioM2D: DioM2D0.01: http://www.dragonsinourmidst.com/forums/showthread.php?p=690058#post690058
- Ethical Hacking ASP.NET: Version 1.0.0.1: This is the initial release of the project. Read more about the available tests and features on the Documentation tab. You need the full .NET Frame...
- Event Scavenger: Collector service update - version 3.2.4: Added check if the database connection string is set up in the config file.
- Feature Builder Guidance Extensions: FBGX-Binaries: This release consists of a zip file containing all the VSIXs resulting from building each of the FBGX packages found here as source. This will mak...
- Floe IRC Client: Floe IRC Client 2010-05 R2: - Detaching windows (right click on the tabs to detach them) - Highlight lines with your nick or other patterns - Fixed several bugs - Tabs can now...
- Free language translator and file converter: Free Language Translator 1.96: Fixed some minor bugs and improved the UI a bit. If you can not install the msi file you might be missing some prerequisites. You can try running t...
- Geocache Downloader: release 1.0: This is the first release.
- kp.net: Alpha release is avalable: The goal of this alpha release is to try the code in some production scenarios and find out what features should be tuned.
- Live-Exchange Calendar Sync: Live-Exchange Calendar Sync: Live-Exchange Calendar Sync Beta May 14, 2010 release of Live-Exchange Calendar Sync 1.0 BETA. (Version 45334) Getting Started: Info about installat...
- MAPILab Explorer for SharePoint: MAPILab Explorer for SharePoint ver 2.1.1: 1) Small bug fixed that appears on first start (when earliers versions wasn't installed). How to install: Download ZIP file and extract it on Sha...
- Microsoft Office Document Security: MODS 4 WORD (SOURCE INCLUDED): Includes Source Code
- MoonyDesk (windows desktop widgets): MoonyDesk Alpha: MoonyDesk Alpha (some memory improvements)
- OnTopReplica: Release 2.9.3: Some bugfixes and improvements. Czech translation added (thanks René Mihula).
- OutSync: OutSync v1.0.100.0: OutSync v1.0.100.0 is the final release by Mel before the move to CodePlex. I have tested it on Windows 7 32bit and 64bit with Office 2007 and it ...
- Quick Save Images, Clipboard save to file, Quick save, bmp, png, jpeg, Image: Clipsa v 0.1: Download and extract to any place 2 files - clipSa.exe and clipSa.exe.config Run clipSa.exe. That's all.
- ResHelper Manager: ResHelperManager: List of changes applied to this version of ResHelper is included in main download zip package. Example sources: In Source Code tab are sources of De...
- Rx Contrib: V1.3: - Bug Fix - BufferWithTimeOrCount with flexible time period setting when ever the time period elapsed...
- SharePoint DVK Integration: SharePoint 2007 DVK integration v1.0.3: Fixes: Fixed default field bindings. I rebound too many fields on every page load. Fixed extension replacing on creating target url (threw it out)...
- ShoutcastStast for DotNetNuke: DNN_ShoutcastStats alpha 05.00.495: First Alpha release of ShoutcastStats Module for DotNetNuke. This first alpha version of the ShoutcastStats Module for DotNetNuke is still in devel...
- SilverPart 2.1: SilverPart 2.1: SilverPart 2.1. This interim release fixes some major bugs related to Firefox and anonymous access. - Fix for Issue ID 4005 - SilverPart does not w...
- sMAPtool: sMAPedit v0.7c (Base Release with Maps): Fixed: force a gargabe collection update to prevent pictureBox's memory leak. Added: essential map pack with all basic maps in jpg format. Added:...
- SQL Trim: Trim: Initial release
- SSIS Multiple Hash: Multiple Hash V1.2.1: This is version 1.2.1 of the Multiple Hash SSIS Component. It supports SQL 2005 and SQL 2008, although you have to download the correct install pa...
- StreamInsight Yahoo Finance input adapter example: StockTicker_v1_0_RTM: Updated for StreamInsight RTM.
- Update Controls .NET: 2.1.0.0: Automatic dependency management for WPF and Silverlight data binding. This release combines both the WPF and Silverlight assemblies into one insta...
- VCC: Latest build, v2.1.30514.0: Automatic drop of latest build

Most Popular Projects
- Rawr
- WBFS Manager
- AJAX Control Toolkit
- Microsoft SQL Server Product Samples: Database
- Silverlight Toolkit
- Windows Presentation Foundation (WPF)
- patterns & practices – Enterprise Library
- Microsoft SQL Server Community & Samples
- PHPExcel
- ASP.NET

Most Active Projects
- patterns & practices – Enterprise Library
- Mirror Testing System
- Rawr
- PHPExcel
- BlogEngine.NET
- Microsoft Biology Foundation
- Customer Portal Accelerator for Microsoft Dynamics CRM
- Windows Azure Command-line Tools for PHP Developers
- Shake - C# Make
- StyleCop

    Read the article

  • Highlight Row in GridView with Colored Columns

    - by Vincent Maverick Durano
I wrote a blog post a while back here that demonstrates how to highlight a GridView row on mouseover, and as you can see it's very easy to highlight rows in a GridView. One of my colleagues uses the same technique for implementing GridView row highlighting, but the problem is that if a column has a background color on it, that cell will not be highlighted anymore. To make this clearer, let's build up a sample application.

ASPX:

<asp:GridView runat="server" ID="GridView1"
    OnRowCreated="GridView1_RowCreated"
    OnRowDataBound="GridView1_RowDataBound">
</asp:GridView>

CODE BEHIND:

private DataTable FillData()
{
    DataTable dt = new DataTable();
    DataRow dr = null;

    // Create DataTable columns
    dt.Columns.Add(new DataColumn("RowNumber", typeof(string)));
    dt.Columns.Add(new DataColumn("Col1", typeof(string)));
    dt.Columns.Add(new DataColumn("Col2", typeof(string)));
    dt.Columns.Add(new DataColumn("Col3", typeof(string)));

    // Create a row for each set of column values
    dr = dt.NewRow();
    dr["RowNumber"] = 1;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dr["Col3"] = "C";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 2;
    dr["Col1"] = "AA";
    dr["Col2"] = "BB";
    dr["Col3"] = "CC";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 3;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dr["Col3"] = "CC";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 4;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dr["Col3"] = "CC";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 5;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dr["Col3"] = "CC";
    dt.Rows.Add(dr);

    return dt;
}

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        GridView1.DataSource = FillData();
        GridView1.DataBind();
    }
}

As you can see, there's nothing fancy in the code above. It just contains a method that fills a DataTable with dummy data. Now here's the code for row highlighting:

protected void GridView1_RowCreated(object sender, GridViewRowEventArgs e)
{
    // Set background colors for columns 1 and 3
    e.Row.Cells[1].BackColor = System.Drawing.Color.Beige;
    e.Row.Cells[3].BackColor = System.Drawing.Color.Red;

    // Attach onmouseover and onmouseout for row highlighting
    e.Row.Attributes.Add("onmouseover", "this.style.backgroundColor='Blue'");
    e.Row.Attributes.Add("onmouseout", "this.style.backgroundColor=''");
}

Running the code above will show something like this in the browser:

On initial load:
On mouseover of a GridView row:

Notice that Col1 and Col3 are not highlighted. Why? The reason is that the Col1 and Col3 cells have background colors set on them, and we only highlight the rows (TR), not the columns (TD); that's why on mouseover only the rest of the row is highlighted. To fix the issue, we will create a JavaScript method that removes the background color of those columns when highlighting a row, and on mouseout sets back the original colors of Col1 and Col3. Here's the code below:

JavaScript

<script type="text/javascript">
    function HighLightRow(rowIndex, colIndex, colIndex2, flag) {
        var gv = document.getElementById("<%= GridView1.ClientID %>");
        var selRow = gv.rows[rowIndex];
        if (rowIndex > 0) {
            if (flag == "sel") {
                gv.rows[rowIndex].style.backgroundColor = 'Blue';
                gv.rows[rowIndex].style.color = "White";
                gv.rows[rowIndex].cells[colIndex].style.backgroundColor = '';
                gv.rows[rowIndex].cells[colIndex2].style.backgroundColor = '';
            }
            else {
                gv.rows[rowIndex].style.backgroundColor = '';
                gv.rows[rowIndex].style.color = "Black";
                gv.rows[rowIndex].cells[colIndex].style.backgroundColor = 'Beige';
                gv.rows[rowIndex].cells[colIndex2].style.backgroundColor = 'Red';
            }
        }
    }
</script>

The HighLightRow method is a JavaScript function that accepts four (4) parameters: rowIndex, colIndex, colIndex2 and flag. The rowIndex is the index of the currently selected row in the GridView. colIndex is the index of Col1 and colIndex2 is the index of Col3. We pass these indexes because those columns have background colors on them, and we need to toggle their background colors when highlighting the row. Finally, the flag determines whether the row is being selected or de-selected. Now here's the code for calling the JavaScript function above:

protected void GridView1_RowCreated(object sender, GridViewRowEventArgs e)
{
    // Set background colors for columns 1 and 3
    e.Row.Cells[1].BackColor = System.Drawing.Color.Beige;
    e.Row.Cells[3].BackColor = System.Drawing.Color.Red;

    // Attach onmouseover and onmouseout for row highlighting
    // and call the HighLightRow method with the required parameters
    int index = e.Row.RowIndex + 1;
    e.Row.Attributes.Add("onmouseover", "HighLightRow(" + index + "," + 1 + "," + 3 + ",'sel')");
    e.Row.Attributes.Add("onmouseout", "HighLightRow(" + index + "," + 1 + "," + 3 + ",'dsel')");
}

Running the code above will display something like this:

On initial load:
On mouseover of a GridView row:

That's it! I hope someone finds this post useful!

    Read the article

  • Creating and maintaining Orchard translations

    - by Bertrand Le Roy
Many volunteers have already stepped up to provide translations for Orchard. There are many challenges to overcome with translating such a project. Orchard is a very modular CMS, so the translation mechanism needs to account for the core as well as first and third party modules and themes. Another issue is that every new version of Orchard or of a module changes some localizable strings and adds new ones as others enter obsolescence. In order to address those problems, I've built a small Orchard module that automates some of the most complex tasks that maintaining a translation implies. In this post, I'll walk you through the operations I had to do to update the French translation for Orchard 1.0. In order to make sure you translate all the first party modules, I would recommend that you start from a full source code enlistment. The reason is that I'll show how you can extract the default en-US translation from any source code enlistment. That enables you to create a translation that is even more up-to-date than what is currently on the site. Alternatively, you could start by downloading the current en-US translation. If you decide to do so, just skip the relevant paragraphs. First, let's install the Orchard Translation Manager. I'm starting from a vanilla clone of the latest in the code repository. After you've setup the site, go into the dashboard and click on Gallery. Locate the Orchard Translation Manager in the list of modules and click "Install". Once the module is installed, you need to enable its one feature by going into Configuration/Features and clicking "Enable" next to Vandelay.TranslationManager. We're done with the setup that we need in order to start our translation work. We'll now switch to the command-line and to our favorite text editor. Open a command-line on the Orchard web site folder. I found the easiest way to do this is to do a SHIFT+right-click on the Orchard.Web folder in Windows Explorer and to click "Open command window here". Type bin\orchard to enter the Orchard command-line environment. If you do a "help commands" you should see four commands in the list that came from the module we just installed: extract default translation, install translation, package translation and sync translation. First, we're going to generate the default translation. Note that it is possible to generate that default translation for a specific list of modules and themes by using the /Extensions: switch, which should facilitate the translation of third party extensions, but in this tutorial we're going to generate it for the whole of the Orchard source code. extract default translation /Output:\temp This should have created an Orchard.en-us.po.zip file in the temp directory. Extract that archive into an orchard.po folder under \temp. The next step depends on whether you have an existing translation that you want to update or not.
If you do have an existing translation, just extract it into the same \temp\orchard.po directory. That should result in a file structure where you have the default en-US translation alongside your own. If you don't have an existing translation, just continue, the commands will be the same. We are now going to synchronize those translations (or generate the stub for a new one if you didn't start from an existing translation). sync translation /Input:\temp\orchard.po /Culture:fr-FR After this command (where you should of course substitute fr-FR with the culture you're working on), we now have updated files that contain a few useful flags. Open each of the .po files under the culture you are working on (there should be around 36) with your favorite text editor. For all the strings that are still valid in the latest version, nothing changes and you don't need to do anything. For all the strings that disappeared from the default culture, the old translation will still be there but they will be prefixed with the following comment: # Obsolete translation Conveniently, all the obsolete strings will be grouped at the end of the file. You can select all those and delete them. For all the new strings, you will see the following comment: # Untranslated string This is where the hard work begins. You'll need to translate each of those new strings by entering the translation between the quotes in: msgstr "" Don't introduce hard carriage returns in the strings, just stay on one line (your text editor should do some reasonable wrapping so this shouldn't be a big deal). Once you're done with a file, save it. Make sure, and this is very important, that your text editor is saving using the UTF-8 encoding. In Notepad, that setting can be found in the file saving dialog by doing a "Save As" rather than a plain "Save": When all the po files have been edited, you are ready to package the translation for submission (a.k.a. sending e-mail to the localization mailing list). package translation /Culture:fr-FR /Input:\temp\orchard.po /Output:\temp You should now see a Orchard.fr-FR.po.zip file in temp that is ready to be submitted. That is, once you've tested it, which can be done by deploying it into the site: install translation \temp\orchard.fr-fr.po.zip Once this is done you can go into the dashboard under Configuration/Settings and click on "Add or remove supported cultures for the site". Choose your culture and click "Add". You can go back to settings and set the default culture. Save. You may now take a tour of the application and verify that everything works as expected: And that's it really. Creating a translation for Orchard is a matter of a few hours. If you don't see a translation for your culture, please consider creating it.
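To make the editing step concrete, here is roughly what a single entry looks like in one of those .po files, before and after translation. The msgid shown is illustrative only, not taken from an actual Orchard module:

# Untranslated string
msgid "Manage content"
msgstr ""

After translating (for fr-FR), the same entry would read:

msgid "Manage content"
msgstr "Gérer le contenu"

The "# Untranslated string" line is just a comment flag added by the sync command, so it is harmless either way, but removing it as you go makes it easy to see which strings still need work.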

    Read the article

  • Using Radio Button in GridView with Validation

    - by Vincent Maverick Durano
A developer is asking how to select one radio button at a time when the radio button is inside a GridView. As you may know, setting the GroupName attribute of a radio button will not work if the radio button is located within a data representation control like the GridView. This is because a radio button inside a GridView behaves differently: since a GridView is rendered as a table element, at run time a different "name" is assigned to each radio button. Hence you are able to select multiple rows. In this post I'm going to demonstrate how to select one radio button at a time in a GridView and how to add simple validation to it. To get started, let's fire up Visual Studio and create a new web application or website project. Add a WebForm and then add a GridView. The markup would look something like this:

<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="false">
    <Columns>
        <asp:TemplateField>
            <ItemTemplate>
                <asp:RadioButton ID="rb" runat="server" />
            </ItemTemplate>
        </asp:TemplateField>
        <asp:BoundField DataField="RowNumber" HeaderText="Row Number" />
        <asp:BoundField DataField="Col1" HeaderText="First Column" />
        <asp:BoundField DataField="Col2" HeaderText="Second Column" />
    </Columns>
</asp:GridView>

Notice that I've added a TemplateField column so that we can add the radio button there. I have also set up some BoundField columns and set the DataFields to RowNumber, Col1 and Col2. These columns are just dummy columns; I used them for the simplicity of this example. Now, where did these columns come from? These columns are created by hand in the code-behind file of the ASPX. Here's the code below:

private DataTable FillData()
{
    DataTable dt = new DataTable();
    DataRow dr = null;

    // Create DataTable columns
    dt.Columns.Add(new DataColumn("RowNumber", typeof(string)));
    dt.Columns.Add(new DataColumn("Col1", typeof(string)));
    dt.Columns.Add(new DataColumn("Col2", typeof(string)));

    // Create a row for each set of column values
    dr = dt.NewRow();
    dr["RowNumber"] = 1;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 2;
    dr["Col1"] = "AA";
    dr["Col2"] = "BB";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 3;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 4;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 5;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dt.Rows.Add(dr);

    return dt;
}

And here's the code for binding the GridView to the dummy data above:

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        GridView1.DataSource = FillData();
        GridView1.DataBind();
    }
}

Okay, we now have a GridView with a radio button on each row. Now let's switch back to the ASPX markup. In this example I'm going to use JavaScript to ensure only one radio button is selected at a time. Here's the JavaScript code below:

function CheckOtherIsCheckedByGVID(rb) {
    var isChecked = rb.checked;
    var row = rb.parentNode.parentNode;
    if (isChecked) {
        row.style.backgroundColor = '#B6C4DE';
        row.style.color = 'black';
    }
    var currentRdbID = rb.id;
    parent = document.getElementById("<%= GridView1.ClientID %>");
    var items = parent.getElementsByTagName('input');
    for (i = 0; i < items.length; i++) {
        if (items[i].id != currentRdbID && items[i].type == "radio") {
            if (items[i].checked) {
                items[i].checked = false;
                items[i].parentNode.parentNode.style.backgroundColor = 'white';
                items[i].parentNode.parentNode.style.color = '#696969';
            }
        }
    }
}

The function above styles the row of the currently selected radio button to show that it is selected, then loops through the radio buttons in the GridView, de-selects the previously selected radio button, and sets its row style back to the default. You can then call the JavaScript function above in the onclick event of the radio button like below:

<asp:RadioButton ID="rb" runat="server" onclick="javascript:CheckOtherIsCheckedByGVID(this);" />

Here's the output below:

On load:
After selecting a radio button:

As you may have noticed, on initial load there is no radio button selected by default in the GridView. Now let's add simple validation for that. We will basically display an error message if a user clicks a button that triggers a postback without selecting a radio button in the GridView. Here's the JavaScript for the validation:

function ValidateRadioButton(sender, args) {
    var gv = document.getElementById("<%= GridView1.ClientID %>");
    var items = gv.getElementsByTagName('input');
    for (var i = 0; i < items.length; i++) {
        if (items[i].type == "radio") {
            if (items[i].checked) {
                args.IsValid = true;
                return;
            }
            else {
                args.IsValid = false;
            }
        }
    }
}

The function above loops through the rows in the GridView and finds all the radio buttons within it. It then checks each radio button's checked property: if a radio button is checked, IsValid is set to true; otherwise it is set to false. The reason I'm using IsValid is that I'm using the ASP.NET CustomValidator control for validation. Now add the following markup below the GridView declaration:

<br />
<asp:Label ID="lblMessage" runat="server" />
<br />
<asp:Button ID="btn" runat="server" Text="POST" OnClick="btn_Click" ValidationGroup="GroupA" />
<asp:CustomValidator ID="CustomValidator1" runat="server"
    ErrorMessage="Please select a row in the grid."
    ClientValidationFunction="ValidateRadioButton"
    ValidationGroup="GroupA" style="display:none"></asp:CustomValidator>
<asp:ValidationSummary ID="ValidationSummary1" runat="server"
    ValidationGroup="GroupA" HeaderText="Error List:"
    DisplayMode="BulletList" ForeColor="Red" />

And then in the Button's Click event add this simple code, just to test that the validation works:

protected void btn_Click(object sender, EventArgs e)
{
    lblMessage.Text = "Postback at: " + DateTime.Now.ToString("hh:mm:ss tt");
}

Here's the output that you can see in the browser:

That's it! I hope someone finds this post useful!

Technorati Tags: ASP.NET, JavaScript, GridView
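One caveat worth adding: ClientValidationFunction only runs in the browser, and client-side checks can be bypassed. If that matters for your scenario, the same rule can be enforced on the server by also handling the CustomValidator's ServerValidate event. A minimal sketch, assuming the markup above (you would add OnServerValidate="CustomValidator1_ServerValidate" to the CustomValidator; "rb" matches the RadioButton ID used in the TemplateField):

protected void CustomValidator1_ServerValidate(object source, ServerValidateEventArgs args)
{
    // Valid only if at least one row's radio button arrived checked in the postback
    args.IsValid = false;
    foreach (GridViewRow row in GridView1.Rows)
    {
        RadioButton rb = row.FindControl("rb") as RadioButton;
        if (rb != null && rb.Checked)
        {
            args.IsValid = true;
            return;
        }
    }
}

With that in place, btn_Click should also check Page.IsValid before trusting the selection.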

    Read the article

  • New Features in ASP.NET Web API 2 - Part I

    - by dwahlin
I'm a big fan of ASP.NET Web API. It provides a quick yet powerful way to build RESTful HTTP services that can easily be consumed by a variety of clients. While it's simple to get started with, it has a wealth of features, such as filters, formatters, and message handlers, that can be used to extend it when needed. In this post I'm going to provide a quick walk-through of some of the key new features in version 2. I'll focus on two of my favorite features, which are related to routing and HTTP responses, and cover additional features in a future post.

Attribute Routing

Routing has been a core feature of Web API since its initial release and something that's built into new Web API projects out-of-the-box. However, there are a few scenarios where defining routes can be challenging, such as nested routes (more on that in a moment) and any situation where a lot of custom routes have to be defined. For this example, let's assume that you'd like to define the following nested route:

/customers/1/orders

This type of route would select a customer with an Id of 1 and then return all of their orders. Defining this type of route in the standard WebApiConfig class is certainly possible, but it isn't the easiest thing to do for people who don't understand routing well. Here's an example of how the route shown above could be defined:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "CustomerOrdersApiGet",
            routeTemplate: "api/customers/{custID}/orders",
            defaults: new { custID = 0, controller = "Customers", action = "Orders" }
        );

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        GlobalConfiguration.Configuration.Formatters.Insert(0, new JsonpFormatter());
    }
}

With attribute-based routing, defining these types of nested routes is greatly simplified. To get started you first need to make a call to the new MapHttpAttributeRoutes() method in the standard WebApiConfig class (or a custom class that you may have created that defines your routes) as shown next:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Allow for attribute based routes
        config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}

Once attribute-based routes are configured, you can apply the Route attribute to one or more controller actions. Here's an example:

[HttpGet]
[Route("customers/{custId:int}/orders")]
public List<Order> Orders(int custId)
{
    var orders = _Repository.GetOrders(custId);
    if (orders == null)
    {
        throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
    }
    return orders;
}

This example maps the custId route parameter to the custId parameter in the Orders() method and also ensures that the route parameter is typed as an integer. The Orders() method can be called using the following route:

/customers/2/orders

While this is extremely easy to use and gets the job done, it doesn't include the default "api" string on the front of the route that you might be used to seeing. You could add "api" in front of the route and make it "api/customers/{custId:int}/orders", but then you'd have to repeat that across other attribute-based routes as well. To simplify this type of task you can add the RoutePrefix attribute above the controller class as shown next, so that "api" (or whatever the custom starting point of your route is) is applied to all attribute routes:

[RoutePrefix("api")]
public class CustomersController : ApiController
{
    [HttpGet]
    [Route("customers/{custId:int}/orders")]
    public List<Order> Orders(int custId)
    {
        var orders = _Repository.GetOrders(custId);
        if (orders == null)
        {
            throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
        }
        return orders;
    }
}

There's much more that you can do with attribute-based routing in ASP.NET. Check out the following post by Mike Wasson for more details.

Returning Responses with IHttpActionResult

The first version of Web API provided a way to return custom HttpResponseMessage objects, which were pretty easy to use overall. However, Web API 2 now wraps some of the functionality available in version 1 to simplify the process even more. A new interface named IHttpActionResult (similar to ActionResult in ASP.NET MVC) has been introduced, which can be used as the return type for Web API controller actions. To return a custom response you can use the new helper methods exposed through ApiController, such as:

- Ok
- NotFound
- Exception
- Unauthorized
- BadRequest
- Conflict
- Redirect
- InvalidModelState

Here's an example of how IHttpActionResult and the helper methods can be used to clean up code. This is the typical way to return a custom HTTP response in version 1:

public HttpResponseMessage Delete(int id)
{
    var status = _Repository.DeleteCustomer(id);
    if (status)
    {
        return new HttpResponseMessage(HttpStatusCode.OK);
    }
    else
    {
        throw new HttpResponseException(HttpStatusCode.NotFound);
    }
}

With version 2 we can replace HttpResponseMessage with IHttpActionResult and simplify the code quite a bit:

public IHttpActionResult Delete(int id)
{
    var status = _Repository.DeleteCustomer(id);
    if (status)
    {
        //return new HttpResponseMessage(HttpStatusCode.OK);
        return Ok();
    }
    else
    {
        //throw new HttpResponseException(HttpStatusCode.NotFound);
        return NotFound();
    }
}

You can also clean up post (insert) operations using the helper methods. Here's a version 1 post action:

public HttpResponseMessage Post([FromBody]Customer cust)
{
    var newCust = _Repository.InsertCustomer(cust);
    if (newCust != null)
    {
        var msg = new HttpResponseMessage(HttpStatusCode.Created);
        msg.Headers.Location = new Uri(Request.RequestUri + newCust.ID.ToString());
        return msg;
    }
    else
    {
        throw new HttpResponseException(HttpStatusCode.Conflict);
    }
}

This is what the code looks like in version 2:

public IHttpActionResult Post([FromBody]Customer cust)
{
    var newCust = _Repository.InsertCustomer(cust);
    if (newCust != null)
    {
        return Created<Customer>(Request.RequestUri + newCust.ID.ToString(), newCust);
    }
    else
    {
        return Conflict();
    }
}

More details on IHttpActionResult and the different helper methods provided by the ApiController base class can be found here.

Conclusion

Although there are several additional features available in Web API 2 that I could cover (CORS support for example), this post focused on two of my favorite features. If you have .NET 4.5.1 available then I definitely recommend checking the new features out. Additional articles that cover features in ASP.NET Web API 2 can be found here.
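A nice side effect of IHttpActionResult that the post doesn't call out: actions become trivially unit-testable, because you can assert on the concrete result type (the helper methods return types from System.Web.Http.Results, such as OkResult and NotFoundResult) without any HTTP hosting. A rough sketch, assuming a CustomersController whose repository can be swapped out; the constructor parameter and the FakeRepository type here are hypothetical, since the post only shows a _Repository field:

using System.Web.Http;
using System.Web.Http.Results;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomersControllerTests
{
    [TestMethod]
    public void Delete_ReturnsNotFound_WhenCustomerDoesNotExist()
    {
        // FakeRepository is a stand-in that reports no customer with this id
        var controller = new CustomersController(new FakeRepository());

        IHttpActionResult result = controller.Delete(999);

        // NotFound() returns a NotFoundResult, so no HTTP plumbing is needed
        Assert.IsInstanceOfType(result, typeof(NotFoundResult));
    }
}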

    Read the article

  • Why It Is So Important to Know Your Customer

    - by Christie Flanagan
    Over the years, I endured enough delayed flights, air turbulence and misadventures in airport security clearance to watch my expectations for the air travel experience fall to abysmally low levels. The extent of my loyalty to any one carrier had more to do with the proximity of the airport parking garage to their particular gate than to any effort on the airline’s part to actually earn and retain my business. That all changed one day when I found myself at the airport hoping to catch a return flight home a few hours earlier than expected, using an airline I had flown with for the first time just that week.  When you travel regularly for business, being able to catch a return flight home that’s even an hour or two earlier than originally scheduled is a big deal. It can mean the difference between having a normal evening with your family and having to sneak in like a cat burglar after everyone is fast asleep. And so I found myself on this particular day hoping to catch an earlier flight home. I approached the gate agent and was told that I could go on standby for their next flight out. Then I asked how much it was going to cost to change the flight, knowing full well that I wouldn’t get reimbursed by my company for any change fees. “Oh, there’s no charge to fly on standby,” the gate agent told me. I made a funny look. I couldn’t believe what I was hearing. This airline was going to let my fly on standby, at no additional charge, even though I was a new customer with no status or points. It had been years since I’d seen an airline pass up a short term revenue generating opportunity in favor of a long term loyalty generating one.  At that moment, this particular airline gained my loyal business. Since then, this airline has had the opportunity to learn a lot about me. They know where I live, where I fly from, where I usually fly to, and where I like to sit on the plane. In general, I’ve found their customer service to be quite good whether at the airport, via call center and even through social channels. They email me occasionally, and when they do, they demonstrate that they know me by promoting deals for flights from where I live to places that I’d be interested in visiting. And that’s part of why I’m always so puzzled when I visit their website.Does this company with the great service, customer friendly policies, and clean planes demonstrate that they know me at all when I visit their website? The answer is no. Even when I log in using my loyalty program credentials, it’s pretty obvious that they’re presenting the same old home page and same old offers to every single one of their site visitors. I mean, those promotional offers that they’re featuring so prominently  -- they’re for flights that originate thousands of miles from where I live! There’s no way I’d ever book one of those flights and I’m sure I’m not the only one of their customers to feel that way.My reason for recounting this story is not to pick on the one customer experience flaw I've noticed with this particular airline, in fact, they do so many things right that I’ll continue to fly with them. But I did want to illustrate just how glaringly obvious it is to customers today when a touch point they have with a brand is impersonal, unconnected and out of sync. 
As someone who's spent a number of years in the web experience management and online marketing space, it particularly peeves me when that out-of-sync touch point is a brand's website, perhaps because I know how important it is to make a customer's online experience relevant, and how many powerful tools are available for making a relevant experience a reality. The fact is, delivering a one-size-fits-all online customer experience is no longer acceptable or particularly effective in today's world. Today's savvy customers expect you to know who they are and to understand their preferences, behavior and relationship with your brand. Not only do they expect you to know about them, but they also expect you to demonstrate this knowledge across all of their touch points with your brand in a consistent and compelling fashion, whether it be on your traditional website, your mobile web presence or through various social channels.

Delivering the kind of personalized online experiences that customers want can have tremendous business benefits. This is not just about generating feelings of goodwill and higher customer satisfaction ratings, either. More relevant and personalized online experiences boost the effectiveness of online marketing initiatives, and the statistics prove this out. Personalized web experiences can help increase online conversion rates by 70% -- that's a huge number.[1] And more than three quarters of consumers indicate that they've made additional online purchases based on personalized product recommendations.[2]

Now if only this airline would get on board with delivering a more personalized online customer experience. I'd certainly be happier and more likely to spring for one of their promotional offers. And by targeting relevant offers on their home page to appropriate segments of their site visitors, I bet they'd be happier and generating additional revenue too.

*****

If you're interested in hearing more perspectives on the benefits of demonstrating that you know your customers by delivering a more personalized experience, check out this white paper on creating a successful and meaningful customer experience on the web. Also catch the video below on the business value of CX in attracting new customers, featuring Oracle's VP of Customer Experience Strategy, Brian Curran.

[1] Search Engine Watch
[2] Marketing Charts

    Read the article

  • Feedback on IE9 developer tool

    - by anirudha
    If you already love IE9, this post really isn’t for you. But if you want something more from your browser, read on; and if you just want to know what IE9 offers, the product guide will tell you that. I’ve already written about my bad experiences in several posts here, but this one takes a closer look at what IE9 actually is versus what is claimed. I don’t believe anyone on MSDN can honestly say that IE9 is anything other than one more thing for developers to struggle with, because the thinking seems to be: we make Windows, we are the best, so everything we do must be the best. To get to the point: web browsing has two kinds of users. 1. Developers, who use the browser mainly for developing, debugging and testing what they build. 2. Users, who don’t know things technically but use the web with passion. So what does a developer want, and is IE9 really for developers? Make the comparison. Almost every developer has a Twitter account, following others’ links to read the best articles on the web and sharing them with their own followers. Chrome and Firefox have many utilities for this; IE still has nothing. Social networking is a good way to communicate with others, yet IE has no plug-ins to improve the experience, while Firefox and Chrome each offer a long list of plug-ins that make the browser more comfortable to use. Visit http://ieaddons.com/ and you’ll see it’s a joke -- they still have no real plug-ins. Are they fooling themselves, or trying to fool everyone else? In 2011, while Firefox and Chrome can claim many things about their plug-ins, IE9 still has none: not for developers, not for anyone else -- just a list of trivial utilities. The IE9 developer tools might be better if they copied Firebug the way they allegedly copied Google’s search results for Bing (that isn’t proven, but Google claims it). So what is so great about the IE9 developer tools that MSDN bloggers keep talking about? I found nothing, and I still find it frustrating to edit CSS: you can never change a style without going to the CSS tab. They have made some things better, but the developer tools still come up short. As a comparison, Firebug is great, as we all know, and Chrome is a good option if you want to try your hands at new things. Firebug even has its own plug-ins to make tasks easier: FirePicker makes color picking easier, Firebug autocomplete makes writing script in the console better, and YSlow shows you the performance steps you need to take to make a site faster. IE9 still has no plug-ins for that. IE9 might be a useful product if the interface were thought through. Part of the problem with Microsoft these days is that they want to ship the next version of every piece of software in WPF. They made Live 2011 in WPF, and many users went elsewhere or downgraded their Live 2011, because nobody wants to spend time learning to use their software all over again. IE9 doesn’t have problems as serious as Live’s, but it is still not as good as Chrome. For example, Chrome has smooth tabbing; IE9 copied the tabbing behavior outright, but with one problem: in IE9 the tab slips while you’re trying to use it, whereas in Chrome a tab never slips unless the user wants it to.
    As a user, you may also want to paint your browser in the style you like. In Firefox the solution is called Personas, or themes; in Chrome the same things are called themes; but in IE they still believe there is no need for them -- the same look every time, no customization, even in 2011. What a joke. I once read a post (written in 2008) by a developer who still claimed he never used Firefox, because he had a license for Visual Studio and some other software, and IE was already on his system. I don’t know what that is supposed to show; such people always want to suggest that Firefox and Chrome are pitiful and IE is great, but we all know what’s true. When Microsoft released the IE9 RC, they ran ads comparing IE9 RC with Chrome 6 -- so why not compare it today with the Chrome 11 developer version? Many of the demos on the IE Test Drive site now work perfectly in Chrome. And what does performance matter when the browser doesn’t give a better experience overall? Performance matters in useful software; anyone can prove many things when they produce a featureless one. IE9 also looks great in posts on the many kinds of websites where the writers aren’t independent -- they are effectively obliged to write that IE9 is better and to inflate even the smallest improvements. I don’t believe bloggers whose writing style makes it obvious the post is simply in IE9’s favor. As for me, I won’t hide where I stand: I’m a lover of open source, so I love both Firefox and Chrome. But don’t take my word for it -- find out for yourself what the differences are between IE9, Firefox and Chrome, and don’t believe anyone who isn’t genuinely independent, because most of them write about IE9 to show it as a capable tool and Chrome and Firefox as pitiful, just as I, admittedly, present Chrome and Firefox as better than IE9. Read everything, but believe no one without some confidence in them. So my feedback on IE9 is this: without plug-ins, customization, or the many things described in this post, there is no sense in using IE9. I am still in love with Firefox and Chrome; they both give better support and better tools for improving your experience on the web. My conclusion is not to force you toward or away from IE9: use the tool that saves you time, because time matters more than anything else. If IE9 saves you time, you should use it. I save my time in Chrome and Firefox; I still find nothing in IE9 that saves me any.

    Read the article

  • Using SQL Execution Plans to discover the Swedish alphabet

    - by Rob Farley
    SQL Server is quite remarkable in a bunch of ways. In this post, I’m using the way that the Query Optimizer handles LIKE to keep it SARGable, the Execution Plans that result, Collations, and PowerShell to come up with the Swedish alphabet. SARGability is the ability to seek for items in an index according to a particular set of criteria. If you don’t have SARGability in play, you need to scan the whole index (or table if you don’t have an index). For example, I can find myself in the phonebook easily, because it’s sorted by LastName and I can find Farley in there by moving to the Fs, and so on. I can’t find everyone in my suburb easily, because the phonebook isn’t sorted that way. I can’t even find people who have six letters in their last name, because although the book is sorted by LastName, it’s not sorted by LEN(LastName). This is all stuff I’ve looked at before, including in the talk I gave at SQLBits in October 2010. If I try to find everyone whose names start with F, I can do that using a query a bit like: SELECT LastName FROM dbo.PhoneBook WHERE LEFT(LastName,1) = 'F'; Unfortunately, the Query Optimizer doesn’t realise that all the entries that satisfy LEFT(LastName,1) = 'F' will be together, and it has to scan the whole table to find them. But if I write: SELECT LastName FROM dbo.PhoneBook WHERE LastName LIKE 'F%'; then SQL is smart enough to understand this, and performs an Index Seek instead. To see why, I look further into the plan, in particular, the properties of the Index Seek operator. The ToolTip shows me what I’m after: You’ll see that it does a Seek to find any entries that are at least F, but not yet G. There’s an extra Predicate in there (a Residual Predicate if you like), which checks that each LastName is really LIKE F% – I suppose it doesn’t consider that the Seek Predicate is quite enough – but most of the benefit is seen by its working out the Seek Predicate, filtering to just the “at least F but not yet G” section of the data. This got me curious though, particularly about where the G comes from, and whether I could leverage it to create the Swedish alphabet. I know that in the Swedish language, there are three extra letters that appear at the end of the alphabet. One of them is ä that appears in the word Västerås. It turns out that Västerås is quite hard to find in an index when you’re looking it up in a Swedish map. I talked about this briefly in my five-minute talk on Collation from SQLPASS (the one which was slightly less than serious). So by looking at the plan, I can work out what the next letter is in the alphabet of the collation used by the column. In other words, if my alphabet were Swedish, I’d be able to tell what the next letter after F is – just in case it’s not G. It turns out it is… Yes, the Swedish letter after F is G. But I worked this out by using a copy of my PhoneBook table that used the Finnish_Swedish_CI_AI collation. I couldn’t find how the Query Optimizer calculates the G, and my friend Paul White (@SQL_Kiwi) tells me that it’s frustratingly internal to the QO. He’s particularly smart, even if he is from New Zealand. To investigate further, I decided to do some PowerShell, leveraging the Get-SqlPlan function that I blogged about recently (make sure you also have the SqlServerCmdletSnapin100 snap-in added). I started by indicating that I was going to use Finnish_Swedish_CI_AI as my collation of choice, and that I’d start with whichever letter came straight after the number 9.
I figure that this is a cheat’s way of guessing the first letter of the alphabet (but it doesn’t actually work in Unicode – luckily I’m using varchar not nvarchar. Actually, there are a few aspects of this code that only work using ASCII, so apologies if you were wanting to apply it to Greek, Japanese, etc). I also initialised my $alphabet variable. $collation = 'Finnish_Swedish_CI_AI'; $firstletter = '9'; $alphabet = ''; Now I created the table for my test. A single field would do, and putting a Clustered Index on it would suffice for the Seeks. Invoke-Sqlcmd -server . -data tempdb -query "create table dbo.collation_test (col varchar(10) collate $collation primary key);" Now I get into the looping. $c = $firstletter; $stillgoing = $true; while ($stillgoing) { I construct the query I want, seeking for entries which start with whatever $c has reached, and get the plan for it: $query = "select col from dbo.collation_test where col like '$($c)%';"; [xml] $pl = get-sqlplan $query "." "tempdb"; At this point, my $pl variable is a scary piece of XML, representing the execution plan. A bit of hunting through it showed me that the EndRange element contained what I was after, and that if it contained NULL, then I was done. $stillgoing = ($pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange -ne $null); Now I could grab the value out of it (which came with apostrophes that needed stripping), and append that to my $alphabet variable. if ($stillgoing) { $c = $pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange.RangeExpressions.ScalarOperator.ScalarString.Replace("'",""); $alphabet += $c; } Finally, finishing the loop, dropping the table, and showing my alphabet! } Invoke-Sqlcmd -server . -data tempdb -query "drop table dbo.collation_test;"; $alphabet; When I run all this, I see that the Swedish alphabet is ABCDEFGHIJKLMNOPQRSTUVXYZÅÄÖ, which matches what I see at Wikipedia. Interesting to see that the letters on the end are still there, even with Case Insensitivity. Turns out they’re not just “letters with accents”, they’re letters in their own right. I’m sure you gave up reading long ago, and really aren’t that fazed about the idea of doing this using PowerShell. I chose PowerShell because I’d already come up with an easy way of grabbing the estimated plan for a query, and PowerShell does allow for easy navigation of XML. I find the most interesting aspect of this to be the fact that the Query Optimizer uses the next letter of the alphabet to maintain the SARGability of LIKE. I’m hoping they do something similar for a whole bunch of operations. Oh, and the fact that you know how to find stuff in the IKEA catalogue. Footnote: If you are interested in whether this works in other languages, you might want to consider the following screenshot, which shows that in principle, it should work with Japanese. It might be a bit harder to run this in PowerShell though, as I’m not sure how it translates. In Hiragana, the Japanese alphabet starts あ, い, う, え, お, ...
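For convenience, here is the complete script assembled from the fragments above (a sketch that assumes the Get-SqlPlan function from the earlier post is defined and the SqlServerCmdletSnapin100 snap-in is loaded):

# Walk the collation's alphabet by reading the EndRange value out of each
# estimated plan; Get-SqlPlan is the author's function from the earlier post.
$collation = 'Finnish_Swedish_CI_AI';
$firstletter = '9';
$alphabet = '';

# Single-column table; the primary key provides the clustered index to seek on.
Invoke-Sqlcmd -server . -data tempdb -query "create table dbo.collation_test (col varchar(10) collate $collation primary key);"

$c = $firstletter;
$stillgoing = $true;
while ($stillgoing)
{
    # Get the estimated plan for a LIKE query starting at the current letter...
    $query = "select col from dbo.collation_test where col like '$($c)%';";
    [xml] $pl = get-sqlplan $query "." "tempdb";

    # ...and read the next letter out of the Seek Predicate's EndRange element.
    $endrange = $pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange;
    $stillgoing = ($endrange -ne $null);
    if ($stillgoing)
    {
        # The value arrives wrapped in apostrophes, which need stripping.
        $c = $endrange.RangeExpressions.ScalarOperator.ScalarString.Replace("'","");
        $alphabet += $c;
    }
}

Invoke-Sqlcmd -server . -data tempdb -query "drop table dbo.collation_test;";
$alphabet;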

    Read the article

  • Adding RSS to tags in Orchard

    - by Bertrand Le Roy
    A year ago, I wrote a scary post about RSS in Orchard. RSS was one of the first features we implemented in our CMS, and it has stood the test of time rather well, but the post was explaining things at a level that was probably too abstract whereas my readers were expecting something a little more practical. Well, this post is going to correct this by showing how I built a module that adds RSS feeds for each tag on the site. Hopefully it will show that it's not very complicated in practice, and also that the infrastructure is pretty well thought out. In order to provide RSS, we need to do two things: generate the XML for the feed, and inject the address of that feed into the existing tag listing page, in order to make the feed discoverable. Let's start with the discoverability part. One might be tempted to replace the controller or the view that are responsible for the listing of the items under a tag. Fortunately, there is no need to do any of that, and we can be a lot less obtrusive. Instead, we can implement a filter: public class TagRssFilter : FilterProvider, IResultFilter On this filter, we can implement the OnResultExecuting method and simply check whether the current request is targeting the list of items under a tag. If that is the case, we can just register our new feed: public void OnResultExecuting(ResultExecutingContext filterContext) { var routeValues = filterContext.RouteData.Values; if (routeValues["area"] as string == "Orchard.Tags" && routeValues["controller"] as string == "Home" && routeValues["action"] as string == "Search") { var tag = routeValues["tagName"] as string; if (! string.IsNullOrWhiteSpace(tag)) { var workContext = _wca.GetContext(); _feedManager.Register( workContext.CurrentSite + " – " + tag, "rss", new RouteValueDictionary { { "tag", tag } } ); } } } The registration of the new feed is just specifying the title of the feed, its format (RSS) and the parameters that it will need (the tag). _wca and _feedManager are just instances of IWorkContextAccessor and IFeedManager that Orchard injected for us. That is all that's needed to get the following tag to be added to the head of our page, without touching an existing controller or view: <link rel="alternate" type="application/rss+xml" title="VuLu - Science" href="/rss?tag=Science"/> Nifty. Of course, if we navigate to the URL of that feed, we'll get a 404. This is because no implementation of IFeedQueryProvider knows about the tag parameter yet.
Let's build one that does: public class TagFeedQuery : IFeedQueryProvider, IFeedQuery IFeedQueryProvider has one method, Match, that we can implement to take over any feed request that has a tag parameter: public FeedQueryMatch Match(FeedContext context) { var tagName = context.ValueProvider.GetValue("tag"); if (tagName == null) return null; return new FeedQueryMatch { FeedQuery = this, Priority = -5 }; } This is just saying that if there is a tag parameter, we will handle it. All that remains to be done is the actual building of the feed now that we have accepted to handle it. This is done by implementing the Execute method of the IFeedQuery interface: public void Execute(FeedContext context) { var tagValue = context.ValueProvider.GetValue("tag"); if (tagValue == null) return; var tagName = (string)tagValue.ConvertTo(typeof (string)); var tag = _tagService.GetTagByName(tagName); if (tag == null) return; var site = _services.WorkContext.CurrentSite; var link = new XElement("link"); context.Response.Element.SetElementValue("title", site.SiteName + " - " + tagName); context.Response.Element.Add(link); context.Response.Element.SetElementValue("description", site.SiteName + " - " + tagName); context.Response.Contextualize(requestContext => link.Add(GetTagUrl(tagName, requestContext))); var items = _tagService.GetTaggedContentItems(tag.Id, 0, 20); foreach (var item in items) { context.Builder.AddItem(context, item.ContentItem); } } This code is resolving the tag content item from its name and then gets content items tagged with it, using the tag services provided by the Orchard.Tags module. Then we add those items to the feed. And that is it. To summarize, we handled the request unobtrusively in order to inject the feed's link, then handled requests for feeds with a tag parameter and generated the list of items for that tag. It remains fairly simple and still it is able to handle arbitrary content types. That makes me quite happy about our little piece of over-engineered code from last year. The full code for this can be found in the Vandelay.TagCloud module: http://orchardproject.net/gallery/List/Modules/ Orchard.Module.Vandelay.TagCloud/1.2

    Read the article

  • Metro: Introduction to the WinJS ListView Control

    - by Stephen.Walther
    The goal of this blog entry is to provide a quick introduction to the ListView control – just the bare minimum that you need to know to start using the control. When building Metro style applications using JavaScript, the ListView control is the primary control that you use for displaying lists of items. For example, if you are building a product catalog app, then you can use the ListView control to display the list of products. The ListView control supports several advanced features that I plan to discuss in future blog entries. For example, you can group the items in a ListView, you can create master/details views with a ListView, and you can efficiently work with large sets of items with a ListView. In this blog entry, we’ll keep things simple and focus on displaying a list of products. There are three things that you need to do in order to display a list of items with a ListView: (1) create a data source, (2) create an item template, and (3) declare the ListView. Creating the ListView Data Source The first step is to create (or retrieve) the data that you want to display with the ListView. In most scenarios, you will want to bind a ListView to a WinJS.Binding.List object. The nice thing about the WinJS.Binding.List object is that it enables you to take a standard JavaScript array and convert the array into something that can be bound to the ListView. It doesn’t matter where the JavaScript array comes from. It could be a static array that you declare or you could retrieve the array as the result of an Ajax call to a remote server. The following JavaScript file – named products.js – contains a list of products which can be bound to a ListView. (function () { "use strict"; var products = new WinJS.Binding.List([ { name: "Milk", price: 2.44 }, { name: "Oranges", price: 1.99 }, { name: "Wine", price: 8.55 }, { name: "Apples", price: 2.44 }, { name: "Steak", price: 1.99 }, { name: "Eggs", price: 2.44 }, { name: "Mushrooms", price: 1.99 }, { name: "Yogurt", price: 2.44 }, { name: "Soup", price: 1.99 }, { name: "Cereal", price: 2.44 }, { name: "Pepsi", price: 1.99 } ]); WinJS.Namespace.define("ListViewDemos", { products: products }); })(); The products variable represents a WinJS.Binding.List object. This object is initialized with a plain-old JavaScript array which represents an array of products. To avoid polluting the global namespace, the code above uses the module pattern and exposes the products using a namespace. The list of products is exposed to the world as ListViewDemos.products. To learn more about the module pattern and namespaces in WinJS, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/22/metro-namespaces-and-modules.aspx Creating the ListView Item Template The ListView control does not know how to render anything. It doesn’t know how you want each list item to appear. To get the ListView control to render something useful, you must create an Item Template. Here’s what our template for rendering an individual product looks like: <div id="productTemplate" data-win-control="WinJS.Binding.Template"> <div class="product"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> </div> This template displays the product name and price from the data source. Normally, you will declare your template in the same file as you declare the ListView control. In our case, both the template and ListView are declared in the default.html file.
To learn more about templates, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/27/metro-using-templates.aspx Declaring the ListView The final step is to declare the ListView control in a page. Here’s the markup for declaring a ListView: <div data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource:ListViewDemos.products.dataSource, itemTemplate:select('#productTemplate') }"> </div> You declare a ListView by adding the data-win-control to an HTML DIV tag. The data-win-options attribute is used to set two properties of the ListView. The ListView is associated with its data source with the itemDataSource property. Notice that the data source is ListViewDemos.products.dataSource and not just ListViewDemos.products. You need to associate the ListView with the dataSource property. The ListView is associated with its item template with the help of the itemTemplate property. The ID of the item template — #productTemplate – is used to select the template from the page. Here’s what the complete version of the default.html page looks like: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>ListViewDemos</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- ListViewDemos references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script src="/js/products.js" type="text/javascript"></script> <style type="text/css"> .product { width: 200px; height: 100px; border: white solid 1px; } </style> </head> <body> <div id="productTemplate" data-win-control="WinJS.Binding.Template"> <div class="product"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> </div> <div data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource:ListViewDemos.products.dataSource, itemTemplate:select('#productTemplate') }"> </div> </body> </html> Notice that the page above includes a reference to the products.js file: <script src="/js/products.js" type="text/javascript"></script> The page above also contains a Template control which contains the ListView item template. Finally, the page includes the declaration of the ListView control. Summary The goal of this blog entry was to describe the minimal set of steps which you must complete to use the WinJS ListView control to display a simple list of items. You learned how to create a data source, declare an item template, and declare a ListView control.
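One assumption worth making explicit: declarative controls such as this ListView are only instantiated after WinJS processes the page, so the project's default.js is expected to contain the standard activation code along these lines (a sketch of the usual project template boilerplate, not something specific to this post):

// default.js -- typical WinJS activation code that turns
// data-win-control attributes into live controls.
(function () {
    "use strict";

    WinJS.Application.onactivated = function (args) {
        if (args.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
            // processAll scans the DOM and instantiates declared controls,
            // including the ListView and the Binding.Template above.
            args.setPromise(WinJS.UI.processAll());
        }
    };

    WinJS.Application.start();
})();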

    Read the article

  • Creating a podcast feed for iTunes & BlackBerry users using WCF Syndication

    - by brian_ritchie
     In my previous post, I showed how to create an RSS feed using WCF Syndication.  Next, I'll show how to add the additional tags needed to turn an RSS feed into an iTunes podcast.  A podcast is merely an RSS feed with some special characteristics: iTunes RSS tags.  These are additional tags beyond the standard RSS spec.  Apple has a good page on the requirements. Audio file enclosure.  This is a link to the audio file (such as mp3) hosted by your site.  Apple doesn't host the audio, they just read the meta-data from the RSS feed into their system. The SyndicationFeed class supports both AttributeExtensions & ElementExtensions to add custom tags to the RSS feeds. A couple of points of interest in the code below: The imageUrl below provides the album cover for iTunes (170px × 170px). Each SyndicationItem corresponds to an audio episode in your podcast. So, here's the code:

XNamespace itunesNS = "http://www.itunes.com/dtds/podcast-1.0.dtd";
string prefix = "itunes";

var feed = new SyndicationFeed(title, description, new Uri(link));
feed.Categories.Add(new SyndicationCategory(category));
feed.AttributeExtensions.Add(new XmlQualifiedName(prefix,
    "http://www.w3.org/2000/xmlns/"), itunesNS.NamespaceName);
feed.Copyright = new TextSyndicationContent(copyright);
feed.Language = "en-us";
feed.Copyright = new TextSyndicationContent(DateTime.Now.Year + " " + ownerName);
feed.ImageUrl = new Uri(imageUrl);
feed.LastUpdatedTime = DateTime.Now;
feed.Authors.Add(new SyndicationPerson() { Name = ownerName, Email = ownerEmail });
var extensions = feed.ElementExtensions;
extensions.Add(new XElement(itunesNS + "subtitle", subTitle).CreateReader());
extensions.Add(new XElement(itunesNS + "image",
    new XAttribute("href", imageUrl)).CreateReader());
extensions.Add(new XElement(itunesNS + "author", ownerName).CreateReader());
extensions.Add(new XElement(itunesNS + "summary", description).CreateReader());
extensions.Add(new XElement(itunesNS + "category",
    new XAttribute("text", category),
    new XElement(itunesNS + "category",
        new XAttribute("text", subCategory))).CreateReader());
extensions.Add(new XElement(itunesNS + "explicit", "no").CreateReader());
extensions.Add(new XDocument(
    new XElement(itunesNS + "owner",
        new XElement(itunesNS + "name", ownerName),
        new XElement(itunesNS + "email", ownerEmail))).CreateReader());

var feedItems = new List<SyndicationItem>();
foreach (var i in Items)
{
    var item = new SyndicationItem(i.title, null, new Uri(link));
    item.Summary = new TextSyndicationContent(i.summary);
    item.Id = i.id;
    if (i.publishedDate != null)
        item.PublishDate = (DateTimeOffset)i.publishedDate;
    item.Links.Add(new SyndicationLink() {
        Title = i.title, Uri = new Uri(link),
        Length = i.size, MediaType = i.mediaType });
    var itemExt = item.ElementExtensions;
    itemExt.Add(new XElement(itunesNS + "subtitle", i.subTitle).CreateReader());
    itemExt.Add(new XElement(itunesNS + "summary", i.summary).CreateReader());
    itemExt.Add(new XElement(itunesNS + "duration",
        string.Format("{0}:{1:00}:{2:00}",
            i.duration.Hours, i.duration.Minutes, i.duration.Seconds)
        ).CreateReader());
    itemExt.Add(new XElement(itunesNS + "keywords", i.keywords).CreateReader());
    itemExt.Add(new XElement(itunesNS + "explicit", "no").CreateReader());
    itemExt.Add(new XElement("enclosure", new XAttribute("url", i.url),
        new XAttribute("length", i.size), new XAttribute("type", i.mediaType)));
    feedItems.Add(item);
}

feed.Items = feedItems;

If you're hosting your podcast feed within an MVC project, you can use the code from my previous post to stream it. Once you have created your feed, you can use the Feed Validator tool to make sure it is up to spec.  Or you can use iTunes: Launch iTunes. In the Advanced menu, select Subscribe to Podcast. Enter your feed URL in the text box and click OK. After you've verified your feed is solid & good to go, you can submit it to iTunes.  Launch iTunes. In the left navigation column, click on iTunes Store to open the store. Once the store loads, click on Podcasts along the top navigation bar to go to the Podcasts page. In the right column of the Podcasts page, click on the Submit a Podcast link. Follow the instructions on the Submit a Podcast page. Here are the full instructions.  Once they have approved your podcast, it will be available within iTunes. RIM has also gotten into the podcasting business...which is great for BlackBerry users.  They accept the same enhanced-RSS feed that iTunes uses, so just create an account with them & submit the feed's URL.  It goes through a similar approval process to iTunes.  BlackBerry users must be on BlackBerry 6 OS or download the Podcast App from App World. In my next post, I'll show how to build the podcast feed dynamically from the ID3 tags within the MP3 files.
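For reference, here is a minimal sketch of how a SyndicationFeed like this might be streamed from an MVC action (Rss20FeedFormatter is part of System.ServiceModel.Syndication; the FeedResult wrapper below is an illustrative stand-in for the code in the previous post, not a copy of it):

using System.ServiceModel.Syndication;
using System.Web.Mvc;
using System.Xml;

public class PodcastController : Controller
{
    public ActionResult Feed()
    {
        SyndicationFeed feed = BuildPodcastFeed(); // the code shown above

        // Serialize the feed as RSS 2.0 directly into the response stream.
        return new FeedResult(new Rss20FeedFormatter(feed));
    }

    private SyndicationFeed BuildPodcastFeed() { /* ... */ return null; }
}

// A small ActionResult wrapper around a SyndicationFeedFormatter.
public class FeedResult : ActionResult
{
    private readonly SyndicationFeedFormatter _formatter;
    public FeedResult(SyndicationFeedFormatter formatter) { _formatter = formatter; }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "application/rss+xml";
        using (var writer = XmlWriter.Create(response.Output))
        {
            _formatter.WriteTo(writer);
        }
    }
}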

    Read the article

  • Combination of Operating Mode and Commit Strategy

    - by Kevin Yang
    If you want to populate multiple targets from a single source, you may also want to ensure that every row from the source affects all targets uniformly (or separately). Let’s consider the Example Mapping below. Suppose a row from SOURCE causes different changes in multiple targets (TARGET_1, TARGET_2 and TARGET_3) -- for example, it can be successfully inserted into TARGET_1 and TARGET_3, but fails to be inserted into TARGET_2 -- and the current Mapping Property TLO (target load order) is “TARGET_1 -> TARGET_2 -> TARGET_3”. What should Oracle Warehouse Builder do in order to commit the appropriate data to all affected targets at the same time? If it doesn’t behave as you intended, the data could become inaccurate and possibly unusable. (Figure: Example Mapping.) In OWB, we can use Mapping Configuration Commit Strategies and Operating Modes together to achieve this kind of requirement. Below we will explore the combination of these two features and how they affect the results in the target tables. Before going to the example, let’s review some of the terms we will be using (details can be found in the white paper Oracle® Warehouse Builder Data Modeling, ETL, and Data Quality Guide 11g Release 2): Operating Modes: Set-Based Mode: Warehouse Builder generates a single SQL statement that processes all data and performs all operations. Row-Based Mode: Warehouse Builder generates statements that process data row by row. The select statement is in a SQL cursor. All subsequent statements are PL/SQL. Row-Based (Target Only) Mode: Warehouse Builder generates a cursor select statement and attempts to include as many operations as possible in the cursor. For each target, Warehouse Builder inserts each row into the target separately. Commit Strategies: Automatic: Warehouse Builder loads and then automatically commits data based on the mapping design. If the mapping has multiple targets, Warehouse Builder commits and rolls back each target separately and independently of other targets. Use the automatic commit when the consequences of multiple targets being loaded unequally are not great or are irrelevant. Automatic correlated: This is a specialized type of automatic commit that applies to PL/SQL mappings with multiple targets only. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets. Use the correlated commit when it is important to ensure that every row in the source affects all affected targets uniformly. Manual: Select manual commit control for PL/SQL mappings when you want to interject complex business logic, perform validations, or run other mappings before committing data. Combination of the commit strategy and operating mode To understand the effects of each combination of operating mode and commit strategy, I’ll illustrate using the following example Mapping. First we insert 100 rows into the SOURCE table and make sure that the 99th row and the 100th row have the same ID value. Then we create a unique key constraint on the ID column of the TARGET_2 table. So while running the example mapping, OWB tries to load all 100 rows into each of the targets, but the mapping should fail to load the 100th row into TARGET_2, because it would violate the unique key constraint of table TARGET_2.
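To make the failing setup concrete, here is a minimal SQL sketch of the constraint and data described above (the constraint name and column names are assumptions for illustration; the actual tables ship with the MDL file mentioned at the end of this post):

-- TARGET_2 rejects duplicate IDs; TARGET_1 and TARGET_3 do not.
ALTER TABLE target_2 ADD CONSTRAINT uk_target_2_id UNIQUE (id);

-- Rows 1..99, plus a 100th row that reuses ID 99.
BEGIN
    FOR i IN 1..99 LOOP
        INSERT INTO source VALUES (i, 'col_' || i);
    END LOOP;
    INSERT INTO source VALUES (99, 'col_99');
END;
/
-- Loading all 100 rows therefore succeeds for TARGET_1 and TARGET_3,
-- but the 100th row violates uk_target_2_id on TARGET_2.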
With different combinations of Commit Strategy and Operating Mode, here are the results:

¦ Set-based / Correlated Commit (configuration and result shown in screenshots): What’s happening: A single error anywhere in the mapping triggers the rollback of all data. OWB encounters the error inserting into Target_2, reports an error for the table, and does not load the row. OWB rolls back all the rows inserted into Target_1 and does not attempt to load rows to Target_3. No rows are added to any of the target tables.

¦ Row-based / Correlated Commit (configuration and result shown in screenshots): What’s happening: OWB evaluates each row separately and loads it to all three targets. Loading continues in this way until OWB encounters an error loading row 100 into Target_2. OWB reports the error and does not load the row. It rolls back row 100 previously inserted into Target_1 and does not attempt to load row 100 to Target_3. Then, if there are remaining rows, OWB will continue loading them, resuming with loading rows to Target_1. The mapping completes with 99 rows inserted into each target.

¦ Set-based / Automatic Commit (configuration and result shown in screenshots): What’s happening: When OWB encounters the error inserting into Target_2, it does not load any rows into that table and reports an error for it. It does, however, continue to insert rows into Target_3, and does not roll back the rows previously inserted into Target_1. The mapping completes with one error message for Target_2, no rows inserted into Target_2, and 100 rows inserted into each of Target_1 and Target_3.

¦ Row-based / Automatic Commit (configuration and result shown in screenshots): What’s happening: OWB evaluates each row separately for loading into the targets. Loading continues in this way until OWB encounters an error loading row 100 into Target_2 and reports the error. OWB does not roll back row 100 from Target_1, and does insert it into Target_3. If there are remaining rows, it will continue to load them. The mapping completes with 99 rows inserted into Target_2 and 100 rows inserted into each of the other targets.

Note: Automatic correlated commit is not applicable for row-based (target only) mode. If you design a mapping with the row-based (target only) and correlated commit combination, OWB runs the mapping but does not perform the correlated commit. In set-based mode, correlated commit may impact the size of your rollback segments. Space for rollback segments may be a concern when you merge data (insert/update or update/insert). Correlated commit operates transparently with PL/SQL bulk processing code. The correlated commit strategy is not available for mappings run in any mode that are configured for Partition Exchange Loading or that include a Queue, Match Merge, or Table Function operator. If you want to practice in your own environment, you can follow these steps: 1. Import the MDL file: commit_operating_mode.mdl 2. Fix the location for the oracle module ORCL and deploy all tables under it. 3. Insert sample records into the SOURCE table, using the PL/SQL code below:
begin
    for i in 1..99
    loop
        insert into source values(i, 'col_'||i);
    end loop;
    insert into source values(99, 'col_99');
end;
4. Configure MAPPING_1 to whichever combination of operating mode and commit strategy you want to test, and make sure the mapping’s TLO feature is enabled. 5. Deploy the mapping “MAPPING_1”. 6. Run the mapping and check the result.
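After step 6, a quick row count per target makes the outcome of each combination easy to verify against the descriptions above (a sketch, assuming the deployed table names):

SELECT 'TARGET_1' AS tgt, COUNT(*) AS rows_loaded FROM target_1
UNION ALL
SELECT 'TARGET_2', COUNT(*) FROM target_2
UNION ALL
SELECT 'TARGET_3', COUNT(*) FROM target_3;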

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here.  In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows. and The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow. But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained about the fact that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties. The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge: I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand. Alternatively, if the pile of cards are not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values have been seen before, but this time it needs more than the mere existence of a new set of values: it needs to consider how many of them there are. A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this. 
In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straight-forward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply. But let’s take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things). A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation, that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already. 
As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.
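To see the two physical aggregates side by side in T-SQL, you can nudge the Query Optimizer with query hints -- a quick hedged sketch, where ORDER GROUP and HASH GROUP are standard T-SQL hints and dbo.PhoneBook is the illustrative table from the SARGability post above:

-- Non-blocking: wants input ordered by LastName (an index on LastName helps)
-- and streams each group out as soon as the group value changes.
SELECT LastName, COUNT(*) AS NumEntries
FROM dbo.PhoneBook
GROUP BY LastName
OPTION (ORDER GROUP);   -- forces a Stream Aggregate

-- Blocking: builds hash buckets and releases nothing until all rows are seen.
SELECT LastName, COUNT(*) AS NumEntries
FROM dbo.PhoneBook
GROUP BY LastName
OPTION (HASH GROUP);    -- forces a Hash Match (Aggregate)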

    Read the article

  • A New Threat To Web Applications: Connection String Parameter Pollution (CSPP)

    - by eric.maurice
    Hi, this is Shaomin Wang. I am a security analyst in Oracle's Security Alerts Group. My primary responsibility is to evaluate the security vulnerabilities reported externally by security researchers on Oracle Fusion Middleware and to ensure timely resolution through the Critical Patch Update. Today, I am going to talk about a serious type of attack: Connection String Parameter Pollution (CSPP). Earlier this year, at the Black Hat DC 2010 Conference, two Spanish security researchers, Jose Palazon and Chema Alonso, unveiled a new class of security vulnerabilities, which target insecure dynamic connections between web applications and databases. The attack called Connection String Parameter Pollution (CSPP) exploits specifically the semicolon delimited database connection strings that are constructed dynamically based on the user inputs from web applications. CSPP, if carried out successfully, can be used to steal user identities and hijack web credentials. CSPP is a high risk attack because of the relative ease with which it can be carried out (low access complexity) and the potential results it can have (high impact). In today's blog, we are going to first look at what connection strings are and then review the different ways connection string injections can be leveraged by malicious hackers. We will then discuss how CSPP differs from traditional connection string injection, and the measures organizations can take to prevent this kind of attacks. In web applications, a connection string is a set of values that specifies information to connect to backend data repositories, in most cases, databases. The connection string is passed to a provider or driver to initiate a connection. Vendors or manufacturers write their own providers for different databases. Since there are many different providers and each provider has multiple ways to make a connection, there are many different ways to write a connection string. Here are some examples of connection strings from Oracle Data Provider for .Net/ODP.Net: Oracle Data Provider for .Net / ODP.Net; Manufacturer: Oracle; Type: .NET Framework Class Library: - Using TNS Data Source = orcl; User ID = myUsername; Password = myPassword; - Using integrated security Data Source = orcl; Integrated Security = SSPI; - Using the Easy Connect Naming Method Data Source = username/password@//myserver:1521/my.server.com - Specifying Pooling parameters Data Source=myOracleDB; User Id=myUsername; Password=myPassword; Min Pool Size=10; Connection Lifetime=120; Connection Timeout=60; Incr Pool Size=5; Decr Pool Size=2; There are many variations of the connection strings, but the majority of connection strings are key value pairs delimited by semicolons. Attacks on connection strings are not new (see for example, this SANS White Paper on Securing SQL Connection String). Connection strings are vulnerable to injection attacks when dynamic string concatenation is used to build connection strings based on user input. When the user input is not validated or filtered, and malicious text or characters are not properly escaped, an attacker can potentially access sensitive data or resources. For a number of years now, vendors, including Oracle, have created connection string builder class tools to help developers generate valid connection strings and potentially prevent this kind of vulnerability. Unfortunately, not all application developers use these utilities because they are not aware of the danger posed by this kind of attacks. 
So how are Connection String Parameter Pollution (CSPP) attacks different from traditional Connection String Injection attacks? First, let's look at what parameter pollution attacks are. Parameter pollution is a technique that typically involves appending repeating parameters to the request strings to attack the receiving end. Much of the public attention around parameter pollution was initiated as a result of a presentation on HTTP Parameter Pollution attacks by Stefano Di Paola and Luca Carettoni delivered at the 2009 Appsec OWASP Conference in Poland. In HTTP Parameter Pollution attacks, an attacker submits additional parameters in HTTP GET/POST to a web application, and if these parameters have the same name as an existing parameter, the web application may react in different ways depending on how the web application and web server deal with multiple parameters with the same name. When applied to connection strings, the rule for the majority of database providers is the "last one wins" algorithm. If a KEYWORD=VALUE pair occurs more than once in the connection string, the value associated with the LAST occurrence is used. This opens the door to some serious attacks. By way of example, in a web application, a user enters a username and password; a connection string is subsequently generated to connect to the back end database. Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = XXX; In the password field, if the attacker enters "xxx; Integrated Security = true", the connection string becomes: Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = xxx; Integrated Security = true; Under the "last one wins" principle, the web application will then try to connect to the database using the operating system account under which the application is running, bypassing normal authentication. CSPP poses serious risks for unprepared organizations. It can be particularly dangerous if an Enterprise Systems Management web front-end is compromised, because attackers can then gain access to control panels to configure databases, systems accounts, etc. Fortunately, organizations can take steps to prevent this kind of attacks. CSPP falls into the Injection category of attacks like Cross Site Scripting or SQL Injection, which are made possible when inputs from users are not properly escaped or sanitized. Escaping is a technique used to ensure that characters (mostly from user inputs) are treated as data, not as characters that are relevant to the interpreter's parser. Software developers need to become aware of the danger of these attacks and learn about the defense mechanisms they need to introduce in their code. As well, software vendors need to provide templates or classes to facilitate coding and eliminate developers' guesswork for protecting against such vulnerabilities. Oracle has introduced the OracleConnectionStringBuilder class in Oracle Data Provider for .NET. Using this class, developers can employ a configuration file to provide the connection string and/or dynamically set the values through key/value pairs. It makes creating connection strings less error-prone and easier to manage, and ultimately using the OracleConnectionStringBuilder class provides better security against injection into connection strings.
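To illustrate the recommended approach, here is a minimal C# sketch using the ODP.NET OracleConnectionStringBuilder (the class lives in Oracle.DataAccess.Client; the escaping behavior noted in the comments is the general behavior of ADO.NET connection string builders, not a verbatim Oracle sample):

using Oracle.DataAccess.Client;

class ConnectionStringDemo
{
    static string BuildConnectionString(string userId, string password)
    {
        var builder = new OracleConnectionStringBuilder();
        builder["Data Source"] = "orcl";
        builder["User ID"] = userId;

        // Even if the user submits "xxx; Integrated Security = true",
        // the builder treats the entire value as the password rather than
        // letting a trailing KEYWORD=VALUE pair override earlier settings.
        builder["Password"] = password;

        return builder.ConnectionString;
    }
}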
For More Information: - The OracleConnectionStringBuilder is located at http://download.oracle.com/docs/cd/B28359_01/win.111/b28375/OracleConnectionStringBuilderClass.htm - Oracle has developed a publicly available course on preventing SQL Injections. The Server Technologies Curriculum course "Defending Against SQL Injection Attacks!" is located at http://st-curriculum.oracle.com/tutorial/SQLInjection/index.htm - The OWASP web site also provides a number of useful resources. It is located at http://www.owasp.org/index.php/Main_Page

    Read the article

  • Metro: Grouping Items in a ListView Control

    - by Stephen.Walther
    The purpose of this blog entry is to explain how you can group list items when displaying them in a WinJS ListView control. In particular, you learn how to group a list of products by product category. Displaying a grouped list of items in a ListView control requires completing the following steps:

    1. Create a grouped data source from a List data source.
    2. Create a group header template.
    3. Declare the ListView control so it groups the list items.

    Creating the Grouped Data Source

    Normally, you bind a ListView control to a WinJS.Binding.List object. If you want to render list items in groups, you need to bind the ListView to a grouped data source instead. The following code, contained in a file named products.js, illustrates how you can create a standard WinJS.Binding.List object from a JavaScript array and then return a grouped data source from the WinJS.Binding.List object by calling its createGrouped() method:

        (function () {
            "use strict";

            // Create List data source
            var products = new WinJS.Binding.List([
                { name: "Milk", price: 2.44, category: "Beverages" },
                { name: "Oranges", price: 1.99, category: "Fruit" },
                { name: "Wine", price: 8.55, category: "Beverages" },
                { name: "Apples", price: 2.44, category: "Fruit" },
                { name: "Steak", price: 1.99, category: "Other" },
                { name: "Eggs", price: 2.44, category: "Other" },
                { name: "Mushrooms", price: 1.99, category: "Other" },
                { name: "Yogurt", price: 2.44, category: "Other" },
                { name: "Soup", price: 1.99, category: "Other" },
                { name: "Cereal", price: 2.44, category: "Other" },
                { name: "Pepsi", price: 1.99, category: "Beverages" }
            ]);

            // Create grouped data source
            var groupedProducts = products.createGrouped(
                function (dataItem) { return dataItem.category; },
                function (dataItem) { return { title: dataItem.category }; },
                function (group1, group2) { return group1.charCodeAt(0) - group2.charCodeAt(0); }
            );

            // Expose the grouped data source
            WinJS.Namespace.define("ListViewDemos", {
                products: groupedProducts
            });
        })();

    Notice that the createGrouped() method requires three functions as arguments:

    - groupKey: This function associates each list item with a group. It accepts a data item and returns a key that represents a group. In the code above, we return the value of the category property for each product.
    - groupData: This function returns the data item displayed by the group header template. For example, in the code above, the function returns a title for the group, which is displayed in the group header template.
    - groupSorter: This function determines the order in which the groups are displayed. The code above displays the groups in alphabetical order: Beverages, Fruit, Other.
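    One caveat about the groupSorter shown above: it compares only the first character of each group key, which is sufficient for Beverages, Fruit, and Other, but would misorder group names that share a first letter. A hypothetical variant (not part of the original example) can compare the full keys instead:

        // Variant groupSorter: compares the full group keys, locale-aware, so that
        // keys sharing a first letter (e.g. "Bakery" and "Beverages") sort correctly.
        function compareGroups(group1, group2) {
            return group1.localeCompare(group2);
        }

    Passing compareGroups as the third argument to createGrouped() leaves the rest of the code unchanged.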
    Creating the Group Header Template

    Whenever you create a ListView control, you need to create an item template that controls how each list item is rendered. When grouping items in a ListView control, you also need to create a group header template, which is used to render the header for each group of list items. Here's the markup for both the item template and the group header template:

        <div id="productTemplate" data-win-control="WinJS.Binding.Template">
            <div class="product">
                <span data-win-bind="innerText:name"></span>
                <span data-win-bind="innerText:price"></span>
            </div>
        </div>

        <div id="productGroupHeaderTemplate" data-win-control="WinJS.Binding.Template">
            <div class="productGroupHeader">
                <h1 data-win-bind="innerText: title"></h1>
            </div>
        </div>

    You should declare the two templates in the same file in which you declare the ListView control, for example the default.html file.

    Declaring the ListView Control

    The final step is to declare the ListView control. Here's the required markup:

        <div data-win-control="WinJS.UI.ListView"
             data-win-options="{
                itemDataSource: ListViewDemos.products.dataSource,
                itemTemplate: select('#productTemplate'),
                groupDataSource: ListViewDemos.products.groups.dataSource,
                groupHeaderTemplate: select('#productGroupHeaderTemplate'),
                layout: {type: WinJS.UI.GridLayout}
             }">
        </div>

    In the markup above, five properties of the ListView control are set when the control is declared. First, the itemDataSource and itemTemplate are specified; nothing new here. Next, the group data source and group header template are specified. Notice that the group data source is represented by the ListViewDemos.products.groups.dataSource property of the grouped data source. Finally, notice that the layout of the ListView is changed to grid layout. You are required to use grid layout (instead of the default list layout) when displaying grouped items in a ListView.

    Here's the entire contents of the default.html page:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8">
            <title>ListViewDemos</title>

            <!-- WinJS references -->
            <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
            <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
            <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

            <!-- ListViewDemos references -->
            <link href="/css/default.css" rel="stylesheet">
            <script src="/js/default.js"></script>
            <script src="/js/products.js" type="text/javascript"></script>

            <style type="text/css">
                .product {
                    width: 200px;
                    height: 100px;
                    border: white solid 1px;
                    font-size: x-large;
                }
            </style>
        </head>
        <body>
            <div id="productTemplate" data-win-control="WinJS.Binding.Template">
                <div class="product">
                    <span data-win-bind="innerText:name"></span>
                    <span data-win-bind="innerText:price"></span>
                </div>
            </div>

            <div id="productGroupHeaderTemplate" data-win-control="WinJS.Binding.Template">
                <div class="productGroupHeader">
                    <h1 data-win-bind="innerText: title"></h1>
                </div>
            </div>

            <div data-win-control="WinJS.UI.ListView"
                 data-win-options="{
                    itemDataSource: ListViewDemos.products.dataSource,
                    itemTemplate: select('#productTemplate'),
                    groupDataSource: ListViewDemos.products.groups.dataSource,
                    groupHeaderTemplate: select('#productGroupHeaderTemplate'),
                    layout: {type: WinJS.UI.GridLayout}
                 }">
            </div>
        </body>
        </html>

    Notice that the default.html page includes a reference to the products.js file:

        <script src="/js/products.js" type="text/javascript"></script>

    The default.html page also contains the declarations of the item template, group header template, and ListView control.

    Summary

    The goal of this blog entry was to explain how you can group items in a ListView control. You learned how to create a grouped data source, create a group header template, and declare a ListView so that it groups its list items.

    Read the article

  • Workshops, online content show how Oracle infuses simplicity, mobility, extensibility into user experience

    - by mvaughan
    By Kathy Miedema & Misha Vaughan, Oracle Applications User Experience

    Oracle has made a huge investment in the user experience of its many different software product families, and recent releases showcase big changes and features that aim to promote end-user engagement and efficiency by streamlining navigation and simplifying the user interface. But making Oracle's enterprise software great-looking and usable doesn't stop when Oracle products go out the door. The Applications User Experience (UX) team recognizes that our customers may need to customize software to fit their work processes. That's why we provide tools such as user experience design patterns to help you maintain the Oracle user experience as you tailor your application to fit your business needs.

    Often, however, customers may need some context around user experience. How has the Oracle user experience been designed and constructed? Why is a good user experience important for users? How does understanding what goes into the user experience benefit the people who purchase the software for users? There's a short answer to these questions, and you can read about it on Usable Apps. But truly understanding Oracle's investment, and seeing how it applies across product families, occasionally requires a deeper dive into the Oracle user experience, especially if you're an influencer or decision-maker about Oracle products. To help frame these decisions, the Communications & Outreach team has developed several targeted workshops that explore what Oracle means when it talks about user experience and provide a roadmap for where the Oracle user experience is going. These workshops require non-disclosure agreements, and have been delivered to Oracle sales folks, Oracle partners, Oracle ACE Directors and ACEs, and a few customers. Some of these audience members have been developers or have a technical background; just as many did not. Here's a breakdown of the kind of training you can get around the Oracle user experience from the OAUX Communications & Outreach team.

    For Partners
    George Papazzian, Principal, Naviscent, with Joyce Ohgi, Oracle

    Oracle Fusion Applications HCM Pre-Sales Seminar: In concert with Worldwide Alliances and Channels, under Applications Partner Enablement Director Jonathan Vinoskey's guidance, the Applications User Experience team delivers a two-day workshop. Day one focuses on Oracle Fusion Applications HCM and pre-sales strategy, and day two focuses on positioning and leveraging Oracle's investment in the Oracle Fusion Applications user experience. The next workshops will occur on the following dates:

    - December 4-5, 2013 @ Manchester, UK
    - January 29-30, 2014 @ Reston, Virginia
    - February 2014 @ Guadalajara, Mexico (email: Shannon Whiteman)
    - March 11-12, 2014 @ Dubai, United Arab Emirates
    - April 1-2, 2014 @ Chicago, Illinois

    Partner Advisory Board: A two-day board meeting in the U.S. and U.K. to discuss four main user experience areas for Oracle Fusion Applications: simplicity, visualization & analytics, mobility, and futures. This event is limited to Oracle Diamond Partners, UX bloggers, and key UX influencers, and it requires legal documentation. We will be talking about the Oracle applications UX strategy and roadmap.
    Partner Implementation Training on User Interface: How to Build Great-Looking, Usable Apps: In this two-day, hands-on workshop built around Oracle's Application Development Framework (ADF), learn how to build desktop and mobile user interfaces based on Oracle's experience with Fusion Applications. This workshop is for partners with a technology background who are looking for ways to tailor Fusion Applications using ADF, or who have built their own custom solutions using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs.

    - Nov 5-6, 2013 @ Redwood Shores, CA, USA
    - January 28-29, 2014 @ Reston, Virginia, USA
    - February 25-26, 2014 @ Guadalajara, Mexico
    - March 9-10, 2014 @ Dubai, United Arab Emirates

    To register, contact [email protected]

    Simplified UI Customization & Extensibility (pilot workshop): We will be reviewing the proposed content for communicating the user experience toolkit available with the next release of Oracle Fusion Applications. Our core focus will be on which toolkit components our system implementers and independent software vendors will need to respond to customer demand, whether they are extending Fusion Applications or building custom applications that need to leverage the simplified UI.

    - Dec 11, 2013 @ Reading, UK

    For information, contact [email protected]

    Private lab tour and demos: Interested in seeing what's going on in the Apps UX labs? If you are headed to the San Francisco Bay Area, let us know. We can arrange a spin through our usability labs at headquarters.

    OAUX Expo: This open-house forum gives partners a look at what the UX team is working on, and showcases next-generation user experiences in a demo environment where attendees can see and touch the applications.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Customers
    Angela Johnston, Gozel Aamoth, Teena Singh, and Yen Chan, Oracle

    Lab tours: See demos of soon-to-be-released products, and take a spin on usability research equipment such as our eye-tracker. Watch this video to get an idea of what you'll see.

    Get our newsletter: Learn about newly released products and see where you can meet us at user group conferences.

    Participate in a feedback session: Join a focus group or customer feedback session to get an early look at user experience designs for the next generation of software, and provide your thoughts on how well it will work.

    Join the OUAB: The Oracle Usability Advisory Board meets several times a year to discuss trends in the workforce and provide direction on user experience designs.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Developers (customers, partners, and consultants)
    Plinio Arbizu, SP Solutions; Richard Bingham, Oracle; Balaji Kamepalli, EiSTechnologies; Praveen Pillalamarri, EiSTechnologies

    How to Build Great-Looking, Usable Apps: This workshop is for attendees with a strong technology background who are looking for ways to tailor customer software using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. See above for dates and times.
    UX design patterns website: Cut the length of your project down by months. Use these patterns to build out the task flows you need to develop for your users. The patterns have already been usability-tested and represent the best practices the Oracle UX research team has found in its studies.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Oracle Sales
    Mike Klein, Jeremy Ashley, Brent White, Oracle

    Contact your local sales person for more information about the Oracle user experience and the training available from the Applications User Experience Communications & Outreach team.

    - See customer-friendly user experience collateral ranging from the new simplified UI in Oracle Fusion Applications Release 7, to E-Business Suite user experience highlights, to Siebel, PeopleSoft, and JD Edwards user experience highlights.
    - Receive access to the same pre-sales and implementation training we provide to partners.
    - For Oracle Sales only: Oracle-only training on the Oracle Fusion Applications UX Innovation Sales Kit.

    Read the article
