Search Results

Search found 4778 results on 192 pages for 'john lewis'.

Page 191 of 192

  • CodePlex Daily Summary for Monday, November 19, 2012

    CodePlex Daily Summary for Monday, November 19, 2012Popular ReleasesmojoPortal: 2.3.9.4: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2394-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you to use .NET 4, we will probably drop support for .NET 3.5 once .NET 4.5 is available The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...VidCoder: 1.4.6 Beta: Brought back the x264 advanced options panel due to popular demand. Thank you for all the feedback. x264 Preset/Profile/Tune/Level has been moved back to the Video tab, along with a copy of the "extra options" string. Added Fast Decode and Zero Latency checkboxes to support multiple Tunes. Added cropping option "None". Audio bitrates that are incompatible with the encoder (such as MP3 > 320 kbps) are no longer preset on the list. Fixed crash on opening VidCoder after de-selecting "re...Metodología General Ajustada - MGA: 03.05.02: Cambios Parmenio: Correcciones al formato F03 de programación, se deja en comentarios la validación de la unidad de la actividad sea igul a la del indicador. Cambios John: Integración de código con cambios enviados por Parmenio Bonilla. Generación de instaladores. Soporte técnico por correo electrónico, telefónico y en sitio.SPListViewFilter: Version 1.8: Fixed some bugsDotNetNuke® Store: 03.01.07: What's New in this release? IMPORTANT: this version requires DotNetNuke 04.06.02 or higher! DO NOT REPORT BUGS HERE IN THE ISSUE TRACKER, INSTEAD USE THE DotNetNuke Store Forum! Bugs corrected: - Replaced some hard coded references to the default address provider classes by the corresponding interfaces to allow the creation of another address provider with a different name. New Features: - Added the 'pickup' delivery option at checkout. - Added the 'no delivery' option in the Store Admin ...Bundle Transformer - a modular extension for ASP.NET Web Optimization Framework: Bundle Transformer 1.6.10: Version: 1.6.10 Published: 11/18/2012 Now almost all of the Bundle Transformer's assemblies is signed (except BundleTransformer.Yui.dll); In BundleTransformer.SassAndScss the SassAndCoffee.Ruby library was replaced by my own implementation of the Sass- and SCSS-compiler (based on code of the SassAndCoffee.Ruby library version 2.0.2.0); In BundleTransformer.CoffeeScript added support of CoffeeScript version 1.4.0-3; In BundleTransformer.TypeScript added support of TypeScript version 0....ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.0: +2012-11-18 v3.2.0 -?????????????????SelectedValueArray????????(◇?◆:)。 -???????????????????RecoverPropertiesFromJObject????(〓?〓、????、??、Vian_Pan)。 -????????????,?????????????,???SelectedValueArray???????(sam.chang)。 -??Alert.Show???????????(swtseaman)。 -???????????????,??Icon??IconUrl????(swtseaman)。 -?????????TimePicker(??)。 -?????????,??/res.axd?css=blue.css&v=1。 -????????,?????????????,???????。 -????MenuCheckBox(???????)。 -?RadioButton??AutoPostBack??。 -???????FCKEditor?????????...BugNET Issue Tracker: BugNET 1.2: Please read our release notes for BugNET 1.2: http://blog.bugnetproject.com/bugnet-1-2-has-been-released Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. 
If you post a question as a review, you will pollute the rating, and you won't get an answer.Paint.NET PSD Plugin: 2.2.0: Changes: Layer group visibility is now applied to all layers within the group. This greatly improves the visual fidelity of complex PSD files that have hidden layer groups. Layer group names are prefixed so that users can get an indication of the layer group hierarchy. (Paint.NET has a flat list of layers, so the hierarchy is flattened out on load.) The progress bar now reports status when saving PSD files, instead of showing an indeterminate rolling bar. Performance improvement of 1...CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1116.7): [IMPROVED] Detailed error message descriptions for FaultException [FIX] Fixed bug in rule CrmOfflineAccessStateRule which had incorrect State attribute name [FIX] Fixed bug in rule EntityPropertyRule which was missing PropertyValue attribute [FIX] Current connection information was not displayed in status bar while refreshing list of entitiesSuper Metroid Randomizer: Super Metroid Randomizer v5: v5 -Added command line functionality for automation purposes. -Implented Krankdud's change to randomize the Etecoon's item. NOTE: this version will not accept seeds from a previous version. The seed format has changed by necessity. v4 -Started putting version numbers at the top of the form. -Added a warning when suitless Maridia is required in a parsed seed. v3 -Changed seed to only generate filename-legal characters. Using old seeds will still work exactly the same. -Files can now be saved...Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.4: Changes This version includes many bug fixes across all platforms, improvements to nuget support and...the biggest news of all...full support for both WinRT and WP8. Download Contents Debug and Release Assemblies Samples Readme.txt License.txt Packages Available on Nuget Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – The Caliburn.Micro invers...DirectX Tool Kit: November 15, 2012: November 15, 2012 Added support for WIC2 when available on Windows 8 and Windows 7 with KB 2670838 Cleaned up warning level 4 warningsDotNetNuke® Community Edition CMS: 06.02.05: Major Highlights Updated the system so that it supports nested folders in the App_Code folder Updated the Global Error Handling so that when errors within the global.asax handler happen, they are caught and shown in a page displaying the original HTTP error code Fixed issue that stopped users from specifying Link URLs that open on a new window Security FixesFixed issue in the Member Directory module that could show members to non authenticated users Fixed issue in the Lists modul...fastJSON: v2.0.10: - added MonoDroid projectxUnit.net Contrib: xunitcontrib-resharper 0.7 (RS 7.1, 6.1.1): xunitcontrib release 0.6.1 (ReSharper runner) This release provides a test runner plugin for Resharper 7.1 RTM and 6.1.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release drops 7.0 support and targets the latest revisions of the last two major versions of ReSharper (namely 7.0 and 6.1.1). Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. Also note that all builds work against ALL ...OnTopReplica: Release 3.4: Update to the 3 version with major fixes and improvements. 
Compatible with Windows 8. Now runs (and requires) .NET Framework v.4.0. Added relative mode for region selection (allows the user to select regions as margins from the borders of the thumbnail, useful for windows which have a variable size but fixed size controls, like video players). Improved window seeking when restoring cloned thumbnail or cloning a window by title or by class. Improved settings persistence. Improved co...DotSpatial: DotSpatial 1.4: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.5: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes Docum...AcDown?????: AcDown????? v4.3: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...New Projects1119P1: So far, I haven't found any bugcoolow: a simple projectDatabase Tools: Windows application for managing SQL Server databases.Editable WILEz Books: Sorry for my bad enlgish. I'm italian. With this project you can write a simple book with images, you can customize text font, color, beckground immage ecc.. simply with a editable txt file.EstimateTracker: Program to track estimate time using XAML, MVVM, WPF, ninject Ioc, nhibernate and Microsoft PrismExtJS based ASP.NET 2.0 Controls: About FineUI ExtJS based professional ASP.NET 2.0 Controls. FineUI Mission Create No JavaScript, No CSS, No UpdatePanel, No ViewState and No WebServices web apGCU: This project supports the Gedcom Utility which allows users to review many Gedcom files for certain information.Heng.Elements: Entity Relationship ModelingiRoboticsPrototype1: iRobotics Prototype 1. Under developmentJamaican kitchen: This a website which will display various jamaican food. these dishes which be ranging from mild to spicey food. JarvisProject1: ???? ?????? ??????? ??????? - ????????????? ?????????? ? ???????? ????????? ??????Java 2D Game Developing Setup: Set of classes as "interface" between game powering code and creative game development in Java. KZ.Express: Project is build to resolve bill managementlbpWGaeBlog: my blog on gaeLogistic Management System: D? án giao nh?n v?n t?i logistics. D? án có r?t nhi?u ký t? D? án giao nh?n v?n t?i logistics. D? án có r?t nhi?u ký t?D? án giao nh?n v?n t?i logistics. D? 
án Managed3D: Managed3D is a scene graph API that allows developers to have both high-level and low-level access to objects in a 3-dimensional scene.MicroTao: MicroTao is for the future.MVC4 ASPX TourBooking: Website Booking tourMyDay: Simple little spike, using a todo list. Spiking MVC 4, Twitter Bootstrap, code-first migrations in EF 5, and AppHarbor deployments. mytestmusicstoremvc: my Study mvcNDateTime: NDateTime is a javascript library that wrap the most commons properties and methods of .NET DateTime object.onexin: This is test.PersiaCaptchaHandler: A Persian Captcha that use number in lettersPhoneGap/Cordova Libs, PhoneGap Demos, PhoneGap Solutions, PhoneGap Practices: Here you can find PhoneGap/Cordova Libs, demos, solutions, best practices, architectures, etc. Most important, all should be the best and free.PhotOrganizer: Windows application to organize your pictures. Scans folder for number and size of pictures. Moves to destination by year-month and removes duplicate files.PoNCE: PoNCE Engine helps creating of Point and Click quest gamesPragTest: List of my projectsProof of Concept Code: This is pretty much throw-away PoC code. I intend to have a folder for each PoC and the solution file for the PoC under the same folder. Security Center: Security Center is a handy tool to secure your secret notes. 512-bits AES algorithm with your private master password is used to protect your data. SharePoint Metro Sliders: SharePoint 2010 Feature that includes two Metro style image sliders web parts: - Image slider with just one image - Image slider with four smaller images in it.SkilledRES_Portal: SkilledRES Portal consists about Organization Information & Activity Profiling of SkilledRESSuper BASE32: This awesome app let you convert all your music, pictures and video to brand new BASE32 encoding! System.Data.Entity.Repository.Filters: System.Data.Entity.Repository.FiltersThe Media Store: The Media StoreUser Group: Maintain support data for user groupsVivitap: Vivitap Samples and SDK support.Web Scripting and Content Creation - DIY Wedding Cake - Assignment 2 - Prototype: Web Scripting and Content Creation - DIY Wedding Cake - Assignment 2webass2: protoype project for the final project for WSCC .WebTechCoursework2.HRSystem: Human Resource System for Web Technology CourseworkWindows Store Application Library: Windows Store Application Library provides a collection of UI controls and utilities for Windows 8 store application developers.WriteMyName: Código para escrever o nome do autor no começo de código fonte.XMPP Chat for Windows 8 Apps Store: xmpp sample for windows app store???-Windows8: ???Windows8???,?????????

    Read the article

  • CodePlex Daily Summary for Monday, November 29, 2010

    CodePlex Daily Summary for Monday, November 29, 2010Popular Releasesexpression Blend 4.0 Comment Uncomment Xaml Code addin: Blend addin 1.0b: First releaseConfuser: Confuser v1.5: Change logs: +packer system +two confusion +console version *Enhanced encryption algorithm and protection. *Better documentationDotSpatial: DotSpatial 11-28-2001: This release introduces some exciting improvements. Support for big raster, both in display and changing the scheme. Faster raster scheme creation for all rasters. Caching of the "sample" values so once obtained the raster symbolizer dialog loads faster. Reprojection supported for raster and image classes. Affine transform fully supported for images and rasters, so skewed images are now possible. Projection uses better checks when loading unprojected layers. GDAL raster support f...SuperWebSocket: SuperWebSocket(60333): It is the first release of SuperWebSocket. Because it is base on SuperSocket, most features of SuperSocket are supported in SuperWebSocket.MDownloader: MDownloader-0.15.25.7002: Fixed updater Fixed FileServe Fixed LetItBitNotepad.NET: Notepad.NET 0.7 Preview 1: Whats New?* Optimized Code Generation: Which means it will run significantly faster. * Preview of Syntax Highlighting: Only VB.NET highlighting is supported, C# and Ruby will come in Preview 2. * Improved Editing Updates (when the line number, etc updates) to be more graceful. * Recent Documents works! * Images can be inserted but they're extremely large. Known Bugs* The Update Process hangs: This is a bug apparently spawning since 0.5. It will be fixed in Preview 2. Until then, perform a fr...Cropper: 1.9.4: Mostly fixes for issues with a few feature requests. Fixed Issues 2730 & 3638 & 14467 11044 11447 11448 11449 14665 Implemented Features 6123 11581PFC: PFC for PB 11.5: This is just a migration from the 11.0 code. No changes have been made yet (and they are needed) for it to work properly with 11.5.PDF Rider: PDF Rider 0.5: This release does not add any new feature for pdf manipulation, but enables automatic updates checking, so it is reccomended to install it in order to stay updated with next releases. Prerequisites * Microsoft Windows Operating Systems (XP - Vista - 7) * Microsoft .NET Framework 3.5 runtime * A PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructionsChoose one of the following methods: 1. Download and run the "pdfRider0...Eneta community portal: Eneta Portal 0.1a: This is very first public deployment package of Eneta portal. This is test and development release and it is not suggested to use it on live or more complex development or test environments. You can find installation guides from documentation section of this site. For help and support please leave your message to discussions.XamlQuery/WPF - The Write Less, Do More, WPF Library: XamlQuery-WPF v1.2 (Runtime, Source): This is the first release of popular XamlQuery library for WPF. XamlQuery has already gained recognition among Silverlight developers.Math.NET Numerics: Beta 1: First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers along with better documentation and examples.Minecraft GPS: Minecraft GPS 1.1: 1.1 Release New Features Compass! New style. 
Set opacity on main window to allow overlay of Minecraft.Code Sample from Microsoft: Visual Studio 2010 Code Samples 2010-11-25: Code samples for Visual Studio 2010Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta: - WBFS repair (default) options fixed - Transfer to image fixed - Settings ui widget names fixed - Some little bug fixes You need to reset the settings! Delete WiiBaFu's config file or registry entries on windows: Linux: ~/.config/WiiBaFu/wiibafu.conf Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist Caution: This is a BETA version! Errors, crashes and data loss not impossible! Use in test environments only, not on productive syste...Minemapper: Minemapper v0.1.3: Added process count and world size calculation progress to the status bar. Added View->'Status Bar' menu item to show/hide the status bar. Status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.Sexy Select: sexy select v0.4: Changes in v0.4 Added method : elements. This returns all the option elements that are currently added to the select list Added method : selectOption. This method accepts two values, the element to be modified and the selected state. (true/false)Deep Zoom for WPF: First Release: This first release of the Deep Zoom control has the same source code, binaries and demos as the CodeProject article (http://www.codeproject.com/KB/WPF/DeepZoom.aspx).BlogEngine.NET: BlogEngine.NET 2.0 RC: This is a Release Candidate version for BlogEngine.NET 2.0. The most current, stable version of BlogEngine.NET is version 1.6. Find out more about the BlogEngine.NET 2.0 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation and the installation screencast. If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. As this ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.156: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release adds a feature for aggregating the overall metrics in a folder full of NodeXL workbooks, adds geographical coordinates to the Twitter import features, and fixes a memory-related bug. See the Complete NodeXL Release History for details. Please Note: There is a new option in the setup program to install for "Just Me" or "Everyone." Most people...New ProjectsAntikCompta: AntikCompta is the easiest way to share comptability beetween an antiquaire and it's account manager.Blend Extensions: This shows you how to extend blend. It provides the basic plumbing for adding custom menu items, panes etc. codinghints: Sample codes of my blog codinghints.blogspot.comConvertitore dec->all: programma conversione da decimale a tutte le altre basi...DHCP Server: Open source managed ipv4 DHCP server implementation, written in C#. With extensive support for DHCP options it is ideally suited for configuring and netbooting local systems such as PLCs and blade racks. 
The permissive MIT license enables free commercial use and distribution.EducationProject: To be continuedEfficiency: Autumoon Efficiency.FarmerStore: Farmer store is e-commerical website about agricultural products.gStocksGadget: gStocksGadget is a Gadget for Windows Sidebar,display real-time stock prices(China's A-share market).Illumina PRT: Research ProjectImageCrawler: web crawler for downloading imagesInstantWatcher: Allows a user to view their Netflix Instant Watch Queue and launch a video.IRobotOnCLR - Control an "IRobot Create" Like You Are John Connor: This is a C# IRobot Create library w/ an accompanied Silverlight client that allows for remote interaction. To make use of this entire project, mounting a .net capable computer on the IRobot Create is suggested.it0758: it0758jkwcg: jkw cg form submit and managment, use xaf as testModificator of the URLs in the Web.config files: FilesModificatorAdmin utility 1. Purpose. 2. How it works. ================================================= 1. Purpose. In my current project we've got a lot of composite WCF-services. We have several environments: Development,Test1, Test2, Production. We don't have the service repository. That means when we move the services from one environment to another, we have to change addresses (URLs) in the <client> sections of all Web.config files (several dozens). Boring and error prone work, is...NARDAX: A collection of usefull API's and extensions that have been built out of necessity.OpenGLLesson: ?? ??? ???Peachlab ASP.NET Project Starter Kit: this is a personal project.picoCMS: picoCMS is an open-source content management system based on ASP.Net 3.5. It can be used as a learning platform for people who are totally new to ASP.Net and want to get things working around. SogouMusicBox Data: SogouMusicBox Data.Sound It Out - Phonetic Algorithms: Phonetic Algorithm helper Completed: NYSIIS In Progress: Soundex Metaphone Double Metaphone SPMetal Extender: SPMetal Extended helps SharePoint 2010 C# developers to remove some Linq to SharePoint 2010 limitations. You can extend the fields that Linq to SharePoint handles (including SharePoint Server 2010's): taxonaomy, publishing html, publishing image, attachments, created by, modifiedStrategyGame: A Fantasy Strategy Game.Task Scheduler Engine: Embed cron-like scheduling in your .NET application. Want your application to execute a task on the 12th second of the 29th minute of the 9th hour on Tuesdays? Piece of cake--2 lines of code. Coming in at ~250 lines of code, it offers considerable functionality with no bloat.The C# Todoist API: The C# Todoist API is a C# wrapper around the Todoist API documented at http://todoist.com/API/help. This is a work in progress.Tools Project: Tools to ease management.uObject: uObject is an Library That persist sample objectVisual Design: Visual Design is a collaborative diagramming editor written in silverlight to support design of software systems through diagramming, mainly UML, but even other types of diagram not supported in commercial and open source tools.Wusic - Web Music: Wusic.NET is an online video player and management system that highlights silverlight's rich-client functionality and potential. 
Some of the custom controls that make up Wusic.NET are drag n' drop, modality, sizability, YouTube and FaceBook integration, and Mac'ish menubar.XNADeviceEnumeration Component: A component which helps to enumerate a(all) graphics device/adapter on pc with XNA.XPlatformConvertCPP: Mesh Converter For XPlatformCPPZvP: First project to be tested. It's empty, so don't join us.?????「??」: ????????????????????。

    Read the article

  • Abstracting functionality

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/22/abstracting-functionality.aspx

    What is more important than data? Functionality. Yes, I strongly believe we should switch to a functionality over data mindset in programming. Or actually switch back to it.

    Focus on functionality

    Functionality once was at the core of software development. Back when algorithms were the first thing you heard about in CS classes. Sure, data structures, too, were important - but always from the point of view of algorithms. (Niklaus Wirth gave one of his books the title "Algorithms + Data Structures" instead of "Data Structures + Algorithms" for a reason.) The reason for the focus on functionality? Firstly, because software was and is about doing stuff. Secondly, because sufficient performance was hard to achieve, and only thirdly memory efficiency.

    But then hardware became more powerful. That gave rise to a new mindset: object orientation. And with it functionality was devalued. Data took over its place as the most important aspect. Now discussions revolved around structures motivated by data relationships. (John Beidler gave his book the title "Data Structures and Algorithms: An Object Oriented Approach" instead of the other way around for a reason.) Sure, this data could be embellished with functionality. But nevertheless functionality was second. When you look at (domain) object models, what you mostly find is (domain) data object models. The common object oriented approach is: data aka structure over functionality. This is true even for the most modern modeling approaches like Domain Driven Design. Look at the literature and what you find is recommendations on how to get data structures right: aggregates, entities, value objects. I'm not saying this is what object orientation was invented for. But I'm saying that's what I happen to see across many teams now, some 25 years after object orientation became mainstream through C++, Delphi, and Java.

    But why should we switch back? Because software development cannot become truly agile with a data focus. The reason for that lies in what customers need first: functionality, behavior, operations. To be clear, that's not why software is built. The purpose of software is to be more efficient than the alternative. Money mainly is spent to get a certain level of quality (e.g. performance, scalability, security etc.). But without functionality being present, there is nothing to work on the quality of. What customers want is functionality of a certain quality. ASAP. And tomorrow new functionality needs to be added, existing functionality needs to be changed, and quality needs to be increased. No customer ever wanted data or structures. Of course data should be processed. Data is there, data gets generated, transformed, stored. But how the data is structured for this to happen efficiently is of no concern to the customer. Ask a customer (or user) whether she likes the data structured this way or that way. She'll say, "I don't care." But ask a customer (or user) whether he likes the functionality and its quality this way or that way. He'll say, "I like it" (or "I don't like it").

    Build software incrementally

    From this very natural focus of customers and users on functionality and its quality it follows that we should develop software incrementally. That's what Agility is about. Deliver small increments quickly and often to get frequent feedback. That way less waste is produced, and learning can take place much more easily (on the side of the customer as well as on the side of developers). An increment is some added functionality or quality of functionality.[1]

    So as it turns out, Agility is about functionality over whatever. But software developers' thinking is still stuck in the object oriented mindset of whatever over functionality. Bummer. I guess that (at least partly) explains why Agility always hits a glass ceiling in projects. It's a clash of mindsets, of cultures. Driving software development by demanding small increases in functionality runs against thinking about software as growing (data) structures sprinkled with functionality. (Excuse me if this sounds a bit broad-brush. But you get my point.)

    The need for abstraction

    In the end there need to be data structures. Of course. Small and large ones. The phrase functionality over data does not deny that. It's not functionality instead of data or something. It's just over, i.e. functionality should be thought of first. It's a tad more important. It's what the customer wants. That's why we need a way to design functionality. Small and large. We need to be able to think about functionality before implementing it. We need to be able to reason about it among team members. We need to be able to communicate our mental models of functionality not just by speaking about them, but also on paper. Otherwise reasoning about it does not scale.

    We learned thinking about functionality in the small using flow charts, Nassi-Shneiderman diagrams, pseudo code, or UML sequence diagrams. That's nice and well. But it does not scale. You can use these tools to describe manageable algorithms. But it does not work for the functionality triggered by pressing the "1-Click Order" button on an Amazon product page, for example. There are several reasons for that, I'd say. Firstly, the level of abstraction over code is negligible. It's essentially non-existent. Drawing a flow chart, writing pseudo code, or writing actual code is very, very much alike. All these tools are about control flow like code is.[2] In addition, all these tools are computationally complete. They are about logic, which is expressions and especially control statements. Whatever you code in Java you can fully (!) describe using a flow chart. And then there is no data. They are about control flow and leave out the data altogether. Thus data mostly is assumed to be global. That's shooting yourself in the foot, as I hope you agree. Even if it's functionality over data, that does not mean "don't think about data". Right to the contrary! Functionality only makes sense with regard to data. So data needs to be in the picture right from the start - but it must not dominate the thinking. The above tools fail on this.

    Bottom line: So far we're unable to reason in a scalable and abstract manner about functionality. That's why programmers are so driven to start coding once they are presented with a problem. Programming languages are the only tool they've learned to use to reason about functional solutions. Or, well, there might be exceptions. Mathematical notation and SQL may have come to your mind already. Indeed they are tools on a higher level of abstraction than flow charts etc. That's because they are declarative and not computationally complete. They leave out details - in order to deliver higher efficiency in devising overall solutions. We can easily reason about functionality using mathematics and SQL. That's great. Except that they are domain specific languages. They are not general purpose. (And they don't scale either, I'd say.) Bummer.

    So to be more precise, we need a scalable general purpose tool on a higher-than-code level of abstraction that does not neglect data. Enter: Flow Design.

    Abstracting functionality using data flows

    I believe the solution to the problem of abstracting functionality lies in switching from control flow to data flow. Data flow very naturally is not about logic details anymore. There are no expressions and no control statements anymore. There are not even statements anymore. Data flow is declarative by nature. With data flow we get rid of all the limiting traits of former approaches to modeling functionality. In addition, nomen est omen, data flows include data in the functionality picture. With data flows, data is visibly flowing from processing step to processing step. Control is not flowing. Control is wherever it's needed to process data coming in. That's a crucial difference and needs some rewiring in your head to be fully appreciated.[2]

    Since data flows are declarative, they are not the right tool to describe algorithms, though, I'd say. With them you don't design functionality on a low level. During design, data flow processing steps are black boxes. They get fleshed out during coding. Data flow design thus is more coarse grained than flow chart design. It starts on a higher level of abstraction - but then is not limited. By nesting data flows indefinitely you can design functionality of any size, without losing sight of your data. Data flows scale very well during design. They can be used on any level of granularity. And they can easily be depicted. Communicating designs using data flows is easy and scales well, too.

    The result of functional design using data flows is not algorithms (too low level), but processes. Think of data flows as descriptions of industrial production lines. Data as material runs through a number of processing steps to be analyzed, enhanced, transformed. At the top level of a data flow design there might be just one processing step, e.g. "execute 1-click order". But below that are arbitrary levels of flows with smaller and smaller steps. That's not layering as in "layered architecture", though. Rather it's a stratified design à la Abelson/Sussman. Refining data flows is not your grandpa's functional decomposition. That was rooted in control flows. Refining data flows does not suffer from the limits of functional decomposition against which object orientation was supposed to be an antidote.

    Summary

    I've been working exclusively with data flows for functional design for the past 4 years. It has changed my life as a programmer. What once was difficult is now easy. And, no, I'm not using Clojure or F#. And I'm not an async/parallel execution buff. Designing the functionality of increments using data flows works great with teams. It produces design documentation which can easily be translated into code - in which then the smallest data flow processing steps have to be fleshed out - which is comparatively easy. Using a systematic translation approach, code can mirror the data flow design. That way, later on, the design can easily be reproduced from the code if need be. And finally, data flow designs play well with object orientation. They are a great starting point for class design. But that's a story for another day. To me, data flow design simply is one of the missing links of systematic lightweight software design.

    [1] There are also other artifacts software development can produce to get feedback, e.g. process descriptions, test cases. But customers can be delighted more easily with code-based increments in functionality.

    [2] No, I'm not talking about the endless possibilities this opens for parallel processing. Data flows are useful independently of multi-core processors and Actor-based designs. That's my whole point here. Data flows are good for reasoning and evolvability. So forget about any special frameworks you might need to reap benefits from data flows. None are necessary. Translating data flow designs even into plain old Java is possible.
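
    The post describes the translation of data flow designs into code only in the abstract. As a rough illustration (a minimal sketch of my own in C#, not Westphal's Flow Design notation or his actual translation scheme; all type and step names below are invented for the example), the "execute 1-click order" case can be read as one top-level flow composed of three black-box processing steps, with data records flowing from step to step:

        using System;

        // Illustrative data that flows between the steps; all names here are hypothetical.
        record OrderRequest(string CustomerId, string ProductId);
        record ValidatedOrder(string CustomerId, string ProductId);
        record Invoice(string CustomerId, decimal Amount);
        record Confirmation(string Message);

        static class OneClickOrderFlow
        {
            // Top-level flow: data flows top to bottom from step to step;
            // control stays inside each step.
            public static Confirmation Execute(OrderRequest request)
            {
                var validated = Validate(request);
                var invoice = Bill(validated);
                return Confirm(invoice);
            }

            // Leaf steps are black boxes during design and get fleshed out during coding.
            static ValidatedOrder Validate(OrderRequest request)
            {
                if (string.IsNullOrWhiteSpace(request.ProductId))
                    throw new ArgumentException("Product missing", nameof(request));
                return new ValidatedOrder(request.CustomerId, request.ProductId);
            }

            static Invoice Bill(ValidatedOrder order) =>
                new Invoice(order.CustomerId, 9.99m); // price lookup omitted in this sketch

            static Confirmation Confirm(Invoice invoice) =>
                new Confirmation($"Charged {invoice.Amount:C} to customer {invoice.CustomerId}");
        }

        class Program
        {
            static void Main()
            {
                var result = OneClickOrderFlow.Execute(new OrderRequest("c-42", "p-7"));
                Console.WriteLine(result.Message);
            }
        }

    Nesting would work the same way: any of the three steps could itself be implemented as such a flow of smaller steps, which is what keeps the design scalable without losing sight of the data.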

    Read the article

  • CodePlex Daily Summary for Thursday, March 22, 2012

    CodePlex Daily Summary for Thursday, March 22, 2012Popular ReleasesTelerik CAB Enabling Kit for RadControls for WinForms: TCEK 2012.1.321.20: major update, new Workspaces and UIAdapters Workspaces: - RadDockWorkspace - RadPageViewWorkspace - RadFormWorkspace - RadFormMdiWorkspace - RadTabbedMdiWorkspace UI Adapters: - RadCommandBarUIAdapter - RadRibbonBarUIAdapter - RadTreeNodeUiAdapter - RadTreeViewUIAdapter - RadItemCollectionUIAdapter - (RadMenu, RadStatusStrip, all controls that support RadItem collections)People's Note: People's Note 0.40: Version 0.40 adds an option to compact the database from the profile screen. Compacting a database can make it smaller and faster by removing empty spaces left over by editing, moving, and deleting notes. To install: copy the appropriate CAB file onto your WM device and run it.Microsoft All-In-One Code Framework - a centralized code sample library: C++, .NET Coding Guideline: Microsoft All-In-One Code Framework Coding Guideline This document describes the coding style guideline for native C++ and .NET (C# and VB.NET) programming used by the Microsoft All-In-One Code Framework project team.SQL Monitor - managing sql server performance: SQLMon 4.2 alpha 13: 1. added logic fault checking in analysis. automatically detect dead loop or memory leakage in stored procedures, for details please refer to http://sqlmon.codeplex.com/workitem/32469WebDAV for WHS: Version 1.0.67: - Added: Check whether the Remote Web Access is turned on or not; - Added: Check for Add-In updates;Metodología General Ajustada - MGA: 02.02.01: Cambios John: Se actualizan los seis formularios de Identificaciòn para que despuès de guardar actualice las grillas, de tal manera que no se dupliquen los registros al guardar. Se genera instalador con los cambios y se actualiza la base datos con ùltimos cambios en el SP de Flujo de Caja.xyzzy+: xyzzy+ 0.2.2.235+0: SHA1: 4a0258736e7df52bb6e2304178b7fcf02414ae17 PrerequisitesMicrosoft Visual C++ 2010 SP1 Redistributable Package (x86) (ja) FeaturesUnicode Visual Style Known ProblemsCharacter encodings other than Shift_JIS and UTF-X may be broken. Functions related to character encodings may not work. (ex. iso-code-char)Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (March 2012) for .NET 4.0: March release of Phalanger 3.0 significantly enhances performance, adds new features and fixes many issues. See following for the list of main improvements: New features: Phalanger Tools installable for Visual Studio 2011 Beta "filter" extension with several most used filters implemented DomDocument HTML parser, loadHTML() method mail() PHP compatible function PHP 5.4 T_CALLABLE token PHP 5.4 "callable" type hint PCRE: UTF32 characters in range support configuration supports <c...Nearforums - ASP.NET MVC forum engine: Nearforums v8.0: Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: Internationalization Custom authentication provider Access control list for forums and threads Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01 Visit Roadmap for more details.BIDS Helper: BIDS Helper 1.6: This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new... 
Analysis Services Tabular Smart Diff Tabular Actions Editor Tabular HideMemberIf Tabular Pre-Build ...Json.NET: Json.NET 4.5 Release 1: New feature - Windows 8 Metro build New feature - JsonTextReader automatically reads ISO strings as dates New feature - Added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default New feature - Added DateTimeZoneHandling to control reading and writing DateTime time zone details New feature - Added async serialize/deserialize methods to JsonConvert New feature - Added Path to JsonReader/JsonWriter/ErrorContext and exceptions w...SCCM Client Actions Tool: SCCM Client Actions Tool v1.11: SCCM Client Actions Tool v1.11 is the latest version. It comes with following changes since last version: Fixed a bug when ping and cmd.exe kept running in endless loop after action progress was finished. Fixed update checking from Codeplex RSS feed. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. This is needed for alternate credentials feature when running the HTA...WebSocket4Net: WebSocket4Net 0.5: Changes in this release fixed the wss's default port bug improved JsonWebSocket supported set client access policy protocol for silverlight fixed a handshake issue in Silverlight fixed a bug that "Host" field in handshake hadn't contained port if the port is not default supported passing in Origin parameter for handshaking supported reacting pings from server side fixed a bug in data sending fixed the bug sending a closing handshake with no message which would cause an excepti...SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.5: Changes included in this release: supported closing handshake queue checking improved JSON subprotocol supported sending ping from server to client fixed a bug about sending a closing handshake with no message refactored the code to improve protocol compatibility fixed a bug about sub protocol configuration loading in Mono improved BasicSubProtocol added JsonWebSocketSessionSurvey™ - web survey & form engine: Survey™ 2.0: The new stable Survey™ Project 2.0.0.1 version contains many new features like: Technical changes: - Use of Jquery, ASTreeview, Tabs, Tooltips and new menuprovider Features & Bugfixes: Survey list and search function Folder structure for surveys New Menustructure Library list New Library fields User list and search functions Layout options for a survey with CSS, page header and footer New IP filter security feature Enhanced Token Management New Question fields as ID, Alias...AppBarUtils for Windows Phone SDK 7.1: AppBarUtils 1.2: This release contains IconUri dependency property for both AppBarItemCommand and AppBarItemTrigger as requested by shawnoster at http://appbarutils.codeplex.com/discussions/321745. When using this IconUri dependency property, please be sure to set the Type property to AppBarItemType.Button or just omit this property entirely, because it is only for app bar icon button. The demo has been updated to show how to use this new IconUri dependency property with a new lock button on the app bar. Wh...Offline Navigation for Windows Phone 7: 0.1 Alpha: This is the 0.1 alpha release of source code.SmartNet: V1.0.0.0: DY SmartNet ?????? V1.0Javascript .NET: Javascript .NET v0.6: Upgraded to the latest stable branch of v8 (/tags/3.9.18), and switched to using their scons build system. 
We no longer include v8 source code as part of this project's source code. Simultaneous multithreaded use of v8 now supported (v8 Isolates), although different contexts may not share objects or call each other. 64-bit .Net 4.0 DLL now included. (Download now includes x86 and x64 for both .Net 3.5 and .Net 4.0.)MyRouter (Virtual WiFi Router): MyRouter 1.0.7: This release should be more stable there were a few bug fixes including the x64 issue as well as an error popping up when MyRouter started this was caused by a NULL valueNew ProjectsActivities.WMI: WF4 ?????? WMI ??????????????? Activities library related WMI, available in WF4Append Customisation Service: A lightweight windows service for applying customisations to enterprise webapps: monitors a file and makes sure your code is always appended to the end of the file. Disable the service, and your customisations go away. c# .net 4. Useful for customising branding/design, javascript, css and so on - in applications such as Dynamics CRM 2011.ASP.NET Security Module: Modulo de seguridad para aplicaciones Web asp.netavgdx: This project is just for Directx 11 learningDaabli: A lightweight C# version of the Daabli serialization framework for C++. If your application needs to load objects and data from human readable text files, then Daabli could be useful to you. It is designed to be as easy to use as possible and works with a 'C' style human editable format. The original C++ version is available here: http://daabli.sourceforge.net/ DNN Simple Tweet: DNN Simple Tweet is a simple DotNetNuke module for display of Twitter Feeds. Using the stock Twitter Profile Widget, its settings allow you to change the Twitter User name, Tweet colors, etc. Developed in VB. DNN version 6 and higher required. *NUISANCE* When selecting between "Display All" and "Timed Interval" for Twitter Behavior, the iColorPicker for the colors will disappear. This is due to the post-back which is occurring and the use of Javascript of the color picker. Updat...Don't We-KC and Sway: Don't We-KC and SwayEksponent CropUp: CropUp is a simple geometric algorithm for "weighted auto cropping". A focus point and optional "area of interest" are defined (e.g. a face in a group pictures). These are shown instead of random stomachs when the picture needs to be sized to a specific format. Umbraco package.Enterprise Modular Application: Guidelines to create Enterprise modular applications in .NET framework, independent of any specific framework.Escape From Canyon: Escape from canyon is a little game project (for academic purpose only) developed using XNA 4.0 and F#FIX Sample code using QuickFIX to connect to TT FIX Adapter: Sample FIX client provides a starting place for developers to connect to the TT FIX Adapter. It's developed in C# using QuickFIX as the FIX engine.FSGreeNetWork: FSGreeNetWorkGac Library -- C++ Utilities for GPU Accelerated GUI and Script: C++ Utilities for GPU Accelerated GUI and ScriptJhVirtualKeyboard for WPF and Silverlight: JhVirtualKeyboard is a virtual-keyboard for software developers to use with either WPF or Silverlight projects. With it - you now have a simple way to provide your users with the ability to enter characters in different languages and alphabets, or any Unicode character. Developed in C#, using Visual Studio 2010 and .NET 4 More information can be found on my blog article at: http://designforge.wordpress.com/2011/01/06/jhvirtualkeyboard/ by James W. 
HurstKrishaWeb: Krisha webNetFrameworkExtensions: Simple plain framework to add a lot of features to .NET framework core. Most of the features are added in form of extension methods.pav2: proyecto pav 2 SharePoint (2010) Connected Server: Display which web server a user is connected to in the Personal Actions drop down menu (user name in upper right). An extremely useful aid when troubleshooting issues in a multi-server SharePoint environment. I took the exisiting version which ran on MOSS 2007 and rebuilt it to run on SharePoint 2010. <b>Acknowledgements:</b> Full credit goes to Nathan Yorke for the original project http://spconnectedserver.codeplex.com which worked on MOSS 2007. SharePoint 2010 Gauge Web Part: SharePoint 2010 Gauge Web Part can be connected to any column of the “number” type in SharePoint list includes External Lists (only the farm solution version) and show calculations on column values. It supports 2 views: 1. The Gauge View 2. The Simple Indicator View SMTP Test Suite: SMTP server/client that can be used to test other servers/clients. Purely a test tool as most SMTP verbs are accepted without any checking.StarterCSS: StarterCorev4.css is inspired from Starter.master. This CSS file give you detailed explanation on the different class files on corev4.css. StarterCorev4.css will make you understand the purpose of each class files when you do ctrl + click on your master page css class.Triangle.NET: Triangle.NET is a 2D meshing software written in C#. It generates (constrained) Delaunay triangulations and quality meshes of point sets or planar straight line graphs. It is a port of Jonathan Shewchuk's Triangle software written in C.WorkFile: workfile about codeioXNA Electric Effect: An electric effect implemented using XNA 4 fro Windows Phone 7. It provides an easy way to configure settings to create realistic electric effects, lightening effects, etc.

    Read the article

  • CodePlex Daily Summary for Tuesday, November 06, 2012

    CodePlex Daily Summary for Tuesday, November 06, 2012Popular ReleasesMetodología General Ajustada - MGA: 03.04.01: Cambios Parmenio: Se desarrollan las opciones de los botones Siguiente y Anterior en todos los formularios de Identificación y Preparación. Cambios John: Integración de código con cambios enviados por Parmenio Bonilla. Generación de instaladores. Soporte técnico por correo electrónico, telefónico y en sitio.DictationTool: DictationCool-WPF: • Open a media file to start a new dication. • Open a dct file to continue a dictation. • Compare your dictation with original text if exists. • Save your dictation to dct file, and restore it to continue later. • Save the compared result to html file.SSIS Expression Editor & Tester: Expression Editor and Tester v1.0.8.0: Getting Started Download and extract the files, no install required. The ExpressionEditor.zip download contains a folder for each SQL Server version. ExpressionEditor2005 ExpressionEditor2008 ExpressionEditor2012 Changes Fixed issues 32868 and 33291 raised by BIDS Helper users. No functional changes from previous release. Versions There are three versions included, all built from the same code with the same functionality, but each targeting a different release of SQL Server. The downlo...LINQ to Twitter: LINQ to Twitter Beta v2.1.2: Now supports Windows Phone 8. Supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, and Windows 8. 100% Twitter API coverage. Now supports Twitter API v1.1! LINQ to Twitter Samples contains example code for using LINQ to Twitter with various .NET technologies. Downloadable source code also has C# samples in the LinqToTwitterDemo project and VB samples in the LinqToTwitterDemoVB project. Also on NuGet.Resume Search: Version 1.0.0.3: - updates to messages - slight change to UIMCEBuddy 2.x: MCEBuddy 2.3.7: Changelog for 2.3.7 (32bit and 64bit) 1. Improved performance of MP4 Fast and M4V Fast Profiles (no deinterlacing, removed --decomb) 2. Improved priority handling 3. Added support for Pausing and Resume conversions 4. Added support for fallback to source directory if network destination directory is unavailable 5. MCEBuddy now installs ShowAnalyzer during installation 6. Added support for long description atom in iTunesKuick Application & ORM Framework: Version 1.0.15322.29505 - Released 2012-11-05: Fix bugs and add new features.FoxyXLS: FoxyXLS Releases: Source code and samplesMySQL Tuner for Windows: 0.2: Welcome to the second beta of MySQL Tuner for Windows! This release fixes a critical bug in 0.1 where it would not work with recent version of MySQL server. Be warned that there will be bugs in this release, so please do not use on production or critical systems. Do post details of issues found to the issue tracker, and I will endeavour to fix them, when I can. I would love to have your feedback, and if possible your support! 
Requirements Microsoft .NET Framework 4.0 (http://www.microsoft....Dyanamic Reports (RDLC) - SharePoint 2010 Visual WebPart: Initial Release: This is a Initial Release.HTML Renderer: HTML Renderer 1.0.0.0 (3): Major performance improvement (http://theartofdev.wordpress.com/2012/10/25/how-i-optimized-html-renderer-and-fell-in-love-with-vs-profiler/) Minor fixes raised in issue tracker and discussions.ProDinner - ASP.NET MVC Sample (EF4.4, N-Tier, jQuery): 8: update to ASP.net MVC Awesome 3.0 udpate to EntityFramework 4.4 update to MVC 4 added dinners grid on homepageASP.net MVC Awesome - jQuery Ajax Helpers: 3.0: added Grid helper added XML Documentation added textbox helper added Client Side API for AjaxList removed .SearchButton from AjaxList AjaxForm and Confirm helpers have been merged into the Form helper optimized html output for AjaxDropdown, AjaxList, Autocomplete works on MVC 3 and 4BlogEngine.NET: BlogEngine.NET 2.7: Cheap ASP.NET Hosting - $4.95/Month - Click Here!! Click Here for More Info Cheap ASP.NET Hosting - $4.95/Month - Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.7 instructions. If you looking for Web Application Project, ...Launchbar: Launchbar 4.2.2.0: This release is the first step in cleaning up the code and using all the latest features of .NET 4.5 Changes 4.2.2 (2012-11-02) Improved handling of left clicks 4.1.0 (2012-10-17) Removed tray icon Assembly renamed and signed with strong name Note When you upgrade, Launchbar will start with the default settings. You can import your previous settings by following these steps: Run Launchbar and just save the settings without configuring anything Shutdown Launchbar Go to the folder %LOCA...Mouse Jiggler: MouseJiggle-1.3: This adds the much-requested minimize-to-tray feature to Mouse Jiggler.Umbraco CMS: Umbraco 4.10.0 Release Candidate: This is a Release Candidate, which means that if we do not find any major issues in the next week, we will release this version as the final release of 4.10.0 on November 9th, 2012. The documentation for the MVC bits still lives in the Github version of the docs for now and will be updated on our.umbraco.org with the final release of 4.10.0. Browse the documentation here: https://github.com/umbraco/Umbraco4Docs/tree/4.8.0/Documentation/Reference/Mvc If you want to do only MVC then make sur...Skype Auto Recorder: SkypeAutoRecorder 1.3.4: New icon and images. Reworked settings window. Implemented high-quality sound encoding. Implemented a possibility to produce stereo records. Added buttons with system-wide hot keys for manual starting and canceling of recording. Added buttons for opening folder with records. Added Help button. Fixed an issue when recording is continuing after call end. Fixed an issue when recording doesn't start. Fixed several bugs and improved stability. Major refactoring and optimization...Python Tools for Visual Studio: Python Tools for Visual Studio 1.5: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, etc. support. 
For a quick overview of the general IDE experience, please watch this video There are a number of exciting improvement in this release comp...AssaultCube Reloaded: 2.5.5: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we try to package for those OSes. Try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included) Changelog: Fixed potential bot bugs: Map change, OpenAL...New Projects1105p1: just a testAssaultCube Reloaded Launcher: This is the official ACR Launcher For AssaultCube Reloaded. Betting Manager: This project allows the users to store and analyze all their betting activities.BTalk: Simple communicator with encryption capability.Chronozoom Extensions: This project adds a touch interface to the original chronozoom source code, as well as adding an intuitive way to integrate new data into chronozoom easily.CloudClipX: CloudClipx is a simple clipboard monitor that optionally connects to a cloud service where you can upload your most recent clips and retrieve them from your mobcodeplexproject01: The project is okcodeplexproject02: Jean well done summaryDelayed Reaction: Delayed Reaction is a mini library designed to help with application event distribution. Events can fire from any method in your code in any form or window and be distributed to any window that registered for the event. Events can also be fired with a delay or become sticky. Dyanamic Reports (RDLC) - SharePoint 2010 Visual WebPart: SharePoint 2010 Visual Web Part solution to create RDLC reports on fly, based on lists/libraries available in SharePoint 2010 Portal.Eight OData: Consuming an OData feedFreelance: Freelance projectgraphdrawing: graph drawingGrasshoppers Time Board: Application for measure time during floorball match. Can be used also for socker or other game. HCyber: A Cyberminer Search Engine using the KWIC system.lcwineight: lcs win eight projectMeta Protector: MetaProtector is a DotNetNuke module intended to help sites apply additional Meta tags to their markup. MetroEEG: Mindwave for Windows Phone: Windows Phone 8 API for Minewave Mobile EEG Headset. minipp: This is a test project.mleai: A test project for ml.MVC Multi Layer Best Practices App: MVC Multi Layer Best Practices AppNon-Unique Key Dictionary: A dictionary that does not require unique keys for values. nTribeHR: .NET wrapper for TribeHR web services.Object Serialiser: Object Instance to Object Initialiser Syntax serialiserOrchard Metro JS library: Metro JS is a JavaScript plugin for jQuery developed to easily enable Metro interfaces on the web.Pasty Cloud Clipboard Client: A windows forms application for managing your Pasty cloud clipboard. Also includes a separate API wrapper that can be used independently.PowerComboBox: PowerComboBox is an enhanced ComboBox control. Programmed in Visual Basic, PowerComboBox adds 4 new features, Hovering; Backcolour; Checkboxes; Groups.project4: my summaryProjet POO IMA: Projet POO IMAQuickShot: Quickshot is a desktop screen capture utility that allows you to upload your captures directly to the popular image hosting website, imgur. 
RePro - NFe: Aplicativo de integração com ERP’s para emissão e distribuição de NFe.Resume Search: Search, filter and parse user resumes in PDF, .DOC, .DOCX format into properly organized database format.Script-base POP3 Connector for Exchange: This is a script based POP3 connector for Exchange. It works with all Exchange having "Pickup folder"Squadron for SharePoint 2010: Squadron is a: - Collection of SharePoint 2010 utility modules - Created with Windows Forms application - Plugin enabled modules - Free SRS Subscription Manager: This utility allows you to re-execute subscriptions in Microsoft SQL Server Reporting ServicesStart Screen Button: Windows 8 / Server 2012 taskbar button to bring up the Start Screen. Useful for RDP, VNC, LogMeIn, etc.TableSizer: A Windows Live Writer plugin that automatically sets the width property of every table to a configurable value prior to publishing the post.TodayHumor for Windows: Windows Phone? ?? ??? ?? ?????????. Towers of Hanoi 3D: A simple Towers of Hanoi 3D game for the Windows Store. Developed with the Visual Studio 3D Starter Kit - http://aka.ms/vs3dkitTweet 4 ME: Tweet 4 ME is a Java Micro Edition (MIDP 2.0 CLDC 1.0) based Twitter client built with a custom GUI framework. The application is part of a college project.UWE Bristol - Robotic Pick and Place System 2012: A robot arm pick and place will be trained to repeatedly move an item from one location to another. The training and control input will be from a 4x4 keypad.


  • CodePlex Daily Summary for Friday, March 09, 2012

    CodePlex Daily Summary for Friday, March 09, 2012Popular ReleasesSSH.NET Library: 2012.3.9: There are still few outstanding issues I wanted to include in this release but since its been a while and there are few new features already I decided to create a new release now. New Features Add SOCKS4, SOCKS5 and HTTP Proxy support when connecting to remote server. For silverlight only IP address can be used for server address when using proxy. Add dynamic port forwarding support using ForwardedPortDynamic class. Add new ShellStream class to work with SSH Shell. Add supports for mu...fnr.exe - Find And Replace Tool: 1.0: You can read all about the new features here: Here is the Summary Preview Matches Stats File errors for read/write Support for regular expressions Fixed a bug that required you to press enter to continue after running fnr.exe from command line Context menu to display containing folder or open the file Double click on results row to open the file (similar to double clicking in windows explorer) Binary detection – skip files that are binaryTest Case Import Utilities for Visual Studio 2010 and Visual Studio 11 Beta: V1.2 RTM: This release (V1.2 RTM) includes: Support for connecting to Hosted Team Foundation Server Preview. Support for connecting to Team Foundation Server 11 Beta. Fix to issue with read-only attribute being set for LinksMapping-ReportFile which may have led to problems when saving the report file. Fix to issue with “related links” not being set properly in certain conditions. Fix to ensure that tool works fine when the Excel file contained rich text data. Note: Data is still imported in pl...Audio Pitch & Shift: Audio Pitch And Shift 3.5.0: Modules (mod, xm, it, etc..) supportcallisto: callisto 2.0.19: BUG FIX: Autorun.load() function in scripting now has sandboxed path (Thanks Mikey!) BUG FIX: UserObject.Name property now allows full 20 byte string replacements. FEATURE REQUEST: File.* script functions now allow file extensions.DotNetNuke® Community Edition CMS: 06.01.04: Major Highlights Fixed issue with loading the splash page skin in the login, privacy and terms of use pages Fixed issue when searching for words with special characters in them Fixed redirection issue when the user does not have permissions to access a resource Fixed issue when clearing the cache using the ClearHostCache() function Fixed issue when displaying the site structure in the link to page feature Fixed issue when inline editing the title of modules Fixed issue with ...Mayhem: Mayhem Developer Preview: This is the developer preview of Mayhem. Enjoy!Magelia WebStore Open-source Ecommerce software: Magelia WebStore 1.2: Medium trust compliant lot of small change for medium trust compliance full refactoring of user management refactoring of Client Refactoring of user management Magelia.WebStore.Client no longer reference Magelia.WebStore.Services.Contract Refactoring page category multi parent category added copy category feature added Refactoring page catalog copy catalog feature added variant management improvement ability to define a default variant for a variable product ability to ord...Delta Engine: Delta Engine Beta Preview v0.9.4: v0.9.4 is the release for February 2012, but it was delayed till 2012-03-07 until content generation worked much better for v0.9.4. The main improvements were done on the server side (content generation and improved build support for iOS and Android). 
v0.9.4 is also the first version everyone can use to deploy their application onto all supported platforms, see Marketplace Licensing for details: http://deltaengine.net/Marketplace Documentation for this version can be found at: http://help.de...PDFsharp - A .NET library for processing PDF: PDFsharp and MigraDoc Foundation 1.32: PDFsharp and MigraDoc Foundation 1.32 is a stable version that fixes a few bugs that were found with version 1.31. Version 1.32 includes solutions for Visual Studio 2010 only (but it should be possible to add the project files to existing solutions for VS 2005 or VS 2008). Users of VS 2005 or VS 2008 can still download version 1.31 with the solutions for those versions that allow them to easily try the samples that are included. While it may create smaller PDF files than version 1.30 because...Terminals: Version 2.0 - Release: Changes since version 1.9a:New art works New usability in Organize favorites window Improved usability of imports/exports and scans Large number of fixes Improvements in single instance mode Comparing November beta 4, this corrects: New application icons Doesn't show Logon error codes Fixed command line arguments exception for single instance mode Fixed detaching of tabs improved usability in detached window Fixed option settings for Capture manager Fixed system tray noti...AutoLoL: AutoLoL v2.1.5: Updated version of Autolol that works with the Fiora patch.MFCMAPI: March 2012 Release: Build: 15.0.0.1032 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeSimple Injector: Simple Injector v1.4.1: This release adds two small improvements to the SimpleInjector.Extensions.dll. No changes have been made to the core library. New features and improvements in this release for the SimpleInjector.Extensions.dll The RegisterManyForOpenGeneric extension methods now accept non-generic decorator, as long as they implement the given open generic service type. GetTypesToRegister methods added to the OpenGenericBatchRegistrationExtensions class which allows to customize the behavior. Note that the...SQL Scriptz Runner: Application: Scriptz Runner source code and applicationPowerGUI Visual Studio Extension: PowerGUI VSX 1.5.2: Added support for PowerGUI 3.2.VidCoder: 1.3.1: Updated HandBrake core to 0.9.6 release (svn 4472). Removed erroneous "None" container choice. Change some logic and help text to stop assuming you have to pick the VIDEO_TS folder for a DVD scan. This should make previewing DVD titles on the Queue Multiple Titles window possible when you've picked the root DVD directory.Google Books Downloader for Windows: Google Books Downloader: Google Books Downloader 1.8ExtAspNet: ExtAspNet v3.1.0: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-03-04 v3.1.0 -??Hidden???????(〓?〓)。 -?PageManager??...AcDown????? - Anime&Comic Downloader: AcDown????? v3.9.1: ?? 
●AcDown??????????、??、??????,????1M,????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。??????AcPlay?????,??????、????????????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? ??????????????,??????????: ??"AcDo...New ProjectsAngry Birds in 1 Hour: This is a simple "Angry Birds" clone on Windows Phone 7 written in just 1 hour.ascent: ascent capture a capture productascentexpress: ascentexpressASP.NET MVVM Excalibur: ASP.NET MVVM Excalibur Project.this is Web Form base, has a new Binding Expression like WPF MVVM.Azure Virtual Directory: A program (or windows service) that registers a virtual directory on your local machine that is actually a gateway into an Azure Blob service. This will allow you to browse, create, modify and delete files directly in Windows Explorer, through a command prompt, or by any software that would be able to do so (as if it was writing to the local machine). This is not a directory that backs-up to Azure, but is rather *only* on Azure. Developed in C#.ClipFlair: ClipFlair - Foreign Language Learning through Interactive Revoicing and Captioning of ClipsCloudSpotter: CloudSpotter is a Windows Azure sample application that can be used for demo purposes or for learning the basic concepts of cloud application development. CloudSpotter makes it possible to convert, webcam based, cloud pictures to time-lapse video footage. Composing Wcf: Basic library providing a service host and service behavior capable of utilizing MEF for runtime composition of WCF SOAP and REST web services. Library provides composing Hosts and Host Factories for standard ServiceHost types, as well as WebServiceHost (RESTful).convert digit to word upto thousand: convert digit to word upto thousandDAL Generator using Database Application Block 5 and T4 Template: T4 template code for generating data base layer for normal CRUD operation using Repository Pattern. Database application block 5 features are used for generating database call and automatic mapping with DTOeuler 12 problem: euler 12 problemeuler 14 problem: euler 14 problemeuler 19 problem: euler 19 problemeuler 28: euler 28euler 30: euler 30euler 36 problem: euler 36 problemeuler 45: euler 45 problemeuler 52 problem: euler 52 problemeuler21: euler 21euler22: euler 22 problemeuler23: euler 23euler29: euler 29 problemeVet: eVet is a guidance project based on the fictional scenario of a Veterinary System used to monitor pets' medical history. It will be based on Azure and leverage the Worker role and SQL Azure datase to illustrate a multi-tenant cloud-based Pet management system. The ORM layer will be NHibernate and it will be based on the repository design pattern. If you want to help and learn Azure at the same time, I am looking for: - Designers (CSS3, HTML 5, Javascript) - Web Developers (ASP.Net ...Firemap: Generates a html page which displays key performance statistics of chosen computers. FolderHiderNet: FolderHiderNet, its a simple application developed in C#, that let users easily hide and unhide folder on their windows systems. It could be used in USB dispositives.Game of Life for Windows Phone: This is an XNA implementation of Conway's Game of Life for Windows Phone. The game is a grid of cells that live and die based on a simple set of rules. 
The player can arrange the live and dead cells, and start/stop the cell generation to see how the cells are interrelated. Features: - Save and load games - Start and stop generations - Adjust generation speed - Clear grid - Generate random grid - Sample shapes preloaded as saved games For more information on Conway's Game of Life...Gamoliyas: Gamoliyas is an open source John Conway's Game of Life game totally written in DHTML (JavaScript, CSS and HTML). Uses mouse and keyboard. Very configurable. This cross-platform and cross-browser game was tested under BeOS, Linux, *BSD, Windows and others.Gembed: Transform url into Embed code using javascript. It is developed using jQuery, jQuery templates and javascript. Any contribution would be really apreciated.GIFT: gift appImageLoader iOS: ImageLoader is developed on iOS and it can be used in iPhone and iPad. It try to make application to support image downloading and cache easily. It downloads the image file from url, depended on ASIHttpRequest. And it cache the images into local file.MakkysStackOverflow: Learning how to build stackoverflow like site mysimpleproject: This is my test projectNWN Hak Merging Utility: Mostly automated Hak Merging utility for NWN .hak files.Oasis Text: Oasis Text is a simple, free text editor for Windows. It is written in C# and built with the ScintillaNET editing component. It is a work in progress and is free and open source software. Opds4Net: A .NET Library for Open Publication Distribution System (OPDS) Catalog protocol, a syndication format for electronic publications based on Atom. This project is created to simplify the process of creating an OPDS Catalog in .NET and standardize the result OPDS with least effort. Pratiques: Endroit pour gérer les Pratiques.scooby: This is a scooby dooby doo projectStaffKey: Study Project Projet d'étude Permet le lancement d'un serveur web sur une clé usb.SugataTools: SugataTools are the helper classes that I usually use in my projects.testtom03082012hg05: testtom03082012hg05testtom03082012tfs01: testtom03082012tfs01testtom03082012tfs02: testtom03082012tfs02TNTSerializer: A simple serializer which -Is faster than any other serializer -Does not require ISeriablable - Uses generic cached Reflection wrappers (FAST) -Should serialize ANY structure, no questions asked, no special markup required. -Can handle common attributes -handles optional parameteWholemy.LinkedLists: Wholemy Linked Lists realizationsworkApp: workAppWPF Yahoo Stock API: WPF application using PRISM & MVVM to display stock details using Yahoo API (YPL)


  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to setup, there are several big disadvantages to this approach to executing JavaScript unit tests: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test -- you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: · Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwam – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher-level abstraction over the Windows Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0.

Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx

Creating the JavaScript Code to Test

To keep things simple, let's imagine that you want to test the following JavaScript function named addNumbers(), which simply adds two numbers together:

MvcApplication1\Scripts\Math.js

function addNumbers(a, b) {
    return 5;
}

Notice that the addNumbers() method always returns the value 5. Right now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project's Scripts folder (save the file in your actual MVC application and not your MVC test application).

Creating the JavaScript Test Helper Class

To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods:

LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests.
ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test.

Here's the code for the JavaScriptTestHelper class:

JavaScriptTestHelper.cs

using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using MSScriptControl;

namespace MvcApplication1.Tests
{
    public class JavaScriptTestHelper : IDisposable
    {
        private ScriptControl _sc;
        private TestContext _context;

        /// <summary>
        /// You need to use this helper with Unit Tests and not
        /// Basic Unit Tests because you need a Test Context
        /// </summary>
        /// <param name="testContext">Unit Test Test Context</param>
        public JavaScriptTestHelper(TestContext testContext)
        {
            if (testContext == null)
            {
                throw new ArgumentNullException("TestContext");
            }
            _context = testContext;

            _sc = new ScriptControl();
            _sc.Language = "JScript";
            _sc.AllowUI = false;
        }

        /// <summary>
        /// Load the contents of a JavaScript file into the
        /// Script Engine.
        /// </summary>
        /// <param name="path">Path to JavaScript file</param>
        public void LoadFile(string path)
        {
            var fileContents = File.ReadAllText(path);
            _sc.AddCode(fileContents);
        }

        /// <summary>
        /// Pass the name of the test that you want to execute.
        /// </summary>
        /// <param name="testMethodName">JavaScript function name</param>
        public void ExecuteTest(string testMethodName)
        {
            dynamic result = null;
            try
            {
                result = _sc.Run(testMethodName, new object[] { });
            }
            catch
            {
                var error = ((IScriptControl)_sc).Error;
                if (error != null)
                {
                    var description = error.Description;
                    var line = error.Line;
                    var column = error.Column;
                    var text = error.Text;
                    var source = error.Source;
                    if (_context != null)
                    {
                        var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column);
                        _context.WriteLine(details);
                    }
                }
                throw new AssertFailedException(error.Description);
            }
        }

        public void Dispose()
        {
            _sc = null;
        }
    }
}

Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (these are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests).

Creating the JavaScript Unit Test

Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder:

MvcApplication1.Tests\JavaScriptTests\MathTest.js

/// <reference path="JavaScriptUnitTestFramework.js"/>
function testAddNumbers() {
    // Act
    var result = addNumbers(1, 3);

    // Assert
    assert.areEqual(4, result, "addNumbers did not return right value!");
}

The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js file to the same folder as the MathTest.js file:

MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js

var assert = {
    areEqual: function (expected, actual, message) {
        if (expected !== actual) {
            throw new Error("Expected value " + expected +
                " is not equal to " + actual + ". " + message);
        }
    }
};

There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests.

Deploying the JavaScript Test Files

This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder, or Visual Studio won't be able to find your JavaScript files and you will get an error when you attempt to execute your unit tests.

You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not to a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run.

Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project's JavaScriptTests folder to deploy. Click the Apply button and then the Close button to save the changes and close the dialog.

Creating the Visual Studio Unit Test

The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (do not select the Basic Unit Test project item). The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test:

[TestMethod]
public void TestAddNumbers()
{
    var jsHelper = new JavaScriptTestHelper(this.TestContext);

    // Load JavaScript files
    jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
    jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
    jsHelper.LoadFile("MathTest.js");

    // Execute JavaScript Test
    jsHelper.ExecuteTest("testAddNumbers");
}

This code uses the JavaScriptTestHelper to load three files:

JavaScriptUnitTestFramework.js – Contains the assert functions.
Math.js – Contains the addNumbers() function from your MVC application which is being tested.
MathTest.js – Contains the JavaScript unit test function.

Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function.

Running the Visual Studio JavaScript Unit Test

After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests. (Unfortunately, the Run All Impacted Tests button won't work correctly because Visual Studio won't detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.)

The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution, you will see that the TestAddNumbers() JavaScript test has failed. That is good, because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed.

Summary

The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window.
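The entry points out that JavaScriptUnitTestFramework.js supports only areEqual() and suggests adding further assertion types. The following is a minimal sketch of what such an extension might look like; it is not part of the original download, the areNotEqual(), isTrue(), and throwsError() names are illustrative, and the code deliberately sticks to ES3-style JavaScript so it runs under the JScript engine driven by the Microsoft Script Control.

var assert = {
    // Original assertion from the blog entry.
    areEqual: function (expected, actual, message) {
        if (expected !== actual) {
            throw new Error("Expected value " + expected +
                " is not equal to " + actual + ". " + message);
        }
    },
    // Illustrative addition: fail when the two values are equal.
    areNotEqual: function (notExpected, actual, message) {
        if (notExpected === actual) {
            throw new Error("Value " + actual +
                " was expected to differ from " + notExpected + ". " + message);
        }
    },
    // Illustrative addition: fail unless the condition is strictly true.
    isTrue: function (condition, message) {
        if (condition !== true) {
            throw new Error("Condition was not true. " + message);
        }
    },
    // Illustrative addition: fail unless calling fn() throws an error.
    throwsError: function (fn, message) {
        var threw = false;
        try { fn(); } catch (e) { threw = true; }
        if (!threw) {
            throw new Error("Expected an error to be thrown. " + message);
        }
    }
};

A test written against this sketch looks just like testAddNumbers() -- for example, assert.isTrue(addNumbers(1, 3) === 4, "1+3 should be 4") -- and a failure still surfaces in the Test Results window through the same AssertFailedException path in JavaScriptTestHelper.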
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip
Before running this code, you must first install the Microsoft Script Control, which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac
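To see how the pattern scales beyond a single test, here is a hedged sketch: each additional JavaScript test is simply another function in MathTest.js (the testAddNumbersWithZero and testAddNumbersWithNegatives names below are illustrative and not part of the downloadable sample), and each one is executed from its own MS Test method via jsHelper.ExecuteTest so that it appears as a separate row in the Test Results window.

/// <reference path="JavaScriptUnitTestFramework.js"/>
// Illustrative extra tests following the same Act/Assert pattern as testAddNumbers().
function testAddNumbersWithZero() {
    // Act
    var result = addNumbers(0, 0);

    // Assert
    assert.areEqual(0, result, "addNumbers(0, 0) should be 0!");
}

function testAddNumbersWithNegatives() {
    // Act
    var result = addNumbers(-2, -3);

    // Assert
    assert.areEqual(-5, result, "addNumbers(-2, -3) should be -5!");
}

On the C# side, each of these would get its own [TestMethod] that loads the same three files and then calls jsHelper.ExecuteTest("testAddNumbersWithZero") or jsHelper.ExecuteTest("testAddNumbersWithNegatives").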


  • E-Business Suite Technology Sessions at OpenWorld 2012

    - by Max Arderius
    Oracle OpenWorld 2012 is almost here! We're looking forward to updating you on our products, strategy, and roadmaps. This year, the E-Business Suite Applications Technology Group (ATG) will participate in 25 speaker sessions, two Meet the Experts round-table discussions, five demoground booths and seven Special Interest Group meetings as guest speakers. We hope to see you at our sessions.  Please join us to hear the latest news and connect with senior ATG development staff. Here's a downloadable listing of all Applications Technology Group-related sessions with times and locations: FOCUS ON Oracle E-Business Suite - Applications Tools and Technology (PDF) General Sessions GEN8474 - Oracle E-Business Suite - Strategy, Update, and RoadmapCliff Godwin, SVP, Oracle Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone West 2002/2004 In this session, hear Oracle E-Business Suite General Manager Cliff Godwin deliver an update on the Oracle E-Business Suite product line. This session covers the value delivered by the current release of Oracle E-Business Suite, the momentum, and how Oracle E-Business Suite applications integrate into Oracle’s overall applications strategy. You’ll come away with an understanding of the value Oracle E-Business Suite applications deliver now and will deliver in the future. GEN9173 - Optimize and Extend Oracle Applications - The Path to Oracle Fusion ApplicationsNadia Bendjedou, Oracle; Corre Curtice, Bhavish Madurai (CSC) Tuesday, Oct 2, 10:15 AM - 11:15 AM - Moscone West 3002/3004 One of the main objectives of this session is to help organizations build their IT roadmap for the next five years and be aligned with the Oracle Applications strategy in general and the Oracle Fusion Applications strategy in particular. Come hear about some of the common sense, practical steps you can take to optimize the performance of your Oracle Applications today and prepare your path to Oracle Fusion Applications for when your organization is ready to embrace them. Each step you take in adopting Oracle Fusion technology gets you partway to Oracle Fusion Applications. Conference Sessions CON9024 - Oracle E-Business Suite Technology: Latest Features and Roadmap Lisa Parekh, Oracle Monday, Oct 1, 10:45 AM - 11:45 AM - Moscone West 2016 This Oracle development session provides a comprehensive overview of Oracle’s product strategy for Oracle E-Business Suite technology, the capabilities and associated business benefits of recent releases, and a review of capabilities on the product roadmap. This is the cornerstone session for the Oracle E-Business Suite technology stack. Come hear about the latest new usability enhancements of the user interface; systems administration and configuration management tools; security-related updates; and tools and options for extending, customizing, and integrating Oracle E-Business Suite with other applications. CON9021 - Oracle E-Business Suite Future Directions: Deployment and System AdministrationMax Arderius, Oracle Monday, Oct 1, 3:15 PM - 4:15 PM - Moscone West 2016  What’s coming in the next major version of Oracle E-Business Suite 12? This Oracle Development session covers the latest technology stack, including the use of Oracle WebLogic Server (Oracle Fusion Middleware 11g) and Oracle Database 11g Release 2 (11.2). Topics include an architectural overview of the latest updates, installation and upgrade options, new configuration options, and new tools for hot cloning and automated “lights-out” cloning. 
Come learn how online patching (based on the Oracle Database 11g Release 2 Edition-Based Redefinition feature) will reduce your database patching downtimes to however long it takes to bounce your database server. CON9017 - Desktop Integration in Oracle E-Business Suite 12.1 Padmaprabodh Ambale, Gustavo Jimenez, Oracle Monday, Oct 1, 4:45 PM - 5:45 PM - Moscone West 2016 This presentation covers the latest functional enhancements in Oracle Web Applications Desktop Integrator and Oracle Report Manager, enhanced Microsoft Office support, and greater support for building custom desktop integration solutions. The session also presents tips and tricks for upgrading from Oracle Applications Desktop Integrator to Oracle Web Applications Desktop Integrator and Oracle Report Manager. CON9023 - Oracle E-Business Suite Technology Certification Primer and Roadmap Steven Chan, Oracle Tuesday, Oct 2, 10:15 AM - 11:15 AM - Moscone West 2016  Is your Oracle E-Business Suite technology stack up to date? Are you taking advantage of all the latest options and capabilities? This Oracle development session summarizes the latest certifications and roadmap for the Oracle E-Business Suite technology stack, including elements such as database releases and options, Java, Oracle Forms, Oracle Containers for J2EE, desktop operating systems, browsers, JRE releases, development and Web authoring tools, user authentication and management, business intelligence, Oracle Application Management Packs, security options, clouds, Oracle VM, and virtualization. The session also covers the most commonly asked questions about tech stack component support dates and upgrade implications. CON9028 - Minimizing Oracle E-Business Suite Maintenance DowntimesSantiago Bastidas, Elke Phelps, Oracle Tuesday, Oct 2, 11:45 AM - 12:45 PM - Moscone West 2016 This Oracle development session features a survey of the best techniques sysadmins can use to minimize patching downtimes. It starts with an architectural-level review of Oracle E-Business Suite fundamentals and then moves to a practical view of the various tools and approaches for downtimes. Topics include patching shortcuts, merging patches, distributing worker processes across multiple servers, running ADPatch in noninteractive mode, staged APPL_TOPs, shared file systems, deferring systemwide database tasks, avoiding resource bottlenecks, and more. An added bonus: hear about the upcoming Oracle E-Business Suite 12 online patching capabilities based on the groundbreaking Oracle Database 11g Release 2 Edition-Based Redefinition feature. CON9116 - Extending the Use of Oracle E-Business Suite with the Oracle Endeca PlatformOsama Elkady, Muhannad Obeidat, Oracle Tuesday, Oct 2, 11:45 AM - 12:45 PM - Moscone West 2018 The Oracle Endeca platform includes a leading unstructured data correlation and analytics engine, together with a best-in class catalog search and guided navigation solution, to improve the productivity of all types of users in your enterprise. This development session focuses on the details behind the Oracle Endeca platform’s integration into Oracle E-Business Suite. It demonstrates how easily you can extend the use of the Oracle Endeca platform into other areas of Oracle E-Business Suite and how you can bring in your own data and build new Oracle Endeca applications for Oracle E-Business Suite. 
CON9005 - Oracle E-Business Suite Integration Best PracticesVeshaal Singh, Oracle, Jeffrey Hand, Zebra Technologies Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2018 Oracle is investing across applications and technologies to make the application integration experience easier for customers. Today Oracle has certified Oracle E-Business Suite on Oracle Fusion Middleware 11g and provides a comprehensive set of integration technologies. Learn about Oracle’s integration offering across data- and process-centric integrations. These technologies can be used to address various application integration challenges and styles. In this session, you will get an understanding of how, when, and where you can leverage Oracle’s integration technologies to connect end-to-end business processes across your enterprise, including your Oracle Applications portfolio.  CON9026 - Latest Oracle E-Business Suite 12.1 User Interface and Usability EnhancementsPadmaprabodh Ambale, Oracle Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2016 This Oracle development session details the latest UI enhancements to Oracle Application Framework in Oracle E-Business Suite 12.1. Developers will get a detailed look at new features to enhance usability, offer more capabilities for personalization and extensions, and support the development and use of dashboards and Web services. Topics include new rich UI capabilities such as new home page features, Navigator and Favorites pull-down menus, REST interface, embedded widgets for analytics content, Oracle Application Development Framework (Oracle ADF) task flows, third-party widgets, a look-ahead list of values, inline attachments, pop-ups, personalization and extensibility enhancements, business layer extensions, Oracle ADF integration, and mobile devices. CON8805 - Planning Your Oracle E-Business Suite Upgrade from 11i to Release 12.1 and BeyondAnne Carlson, Oracle Tuesday, Oct 2, 5:00 PM - 6:00 PM - Moscone West 3002/3004 Attend this session to hear the latest Oracle E-Business Suite 12.1 upgrade planning tips from Oracle’s support, consulting, development, and IT organizations. You’ll get specific cross-product advice on how to understand the factors that affect your project’s duration, decide on your project’s scope, develop a robust testing strategy, leverage Oracle Support resources, and more. In a nutshell, this session tells you things you need to know before embarking upon your Release 12.1 upgrade project. CON9053 - Advanced Management of Oracle E-Business Suite with Oracle Enterprise ManagerAngelo Rosado, Oracle Tuesday, Oct 2, 5:00 PM - 6:00 PM - Moscone West 2016 The task of managing and monitoring Oracle E-Business Suite environments can be very challenging. Oracle Enterprise Manager is the only product on the market that is designed to monitor and manage all the different technologies that constitute Oracle E-Business Suite applications, including end user, midtier, configuration, host, and database management—to name just a few. Customers that have implemented Oracle Enterprise Manager have experienced dramatic improvements in system visibility and diagnostic capability as well as administrator productivity. The purpose of this session is to highlight the key features and benefits of Oracle Enterprise Manager and Oracle Application Management Suite for Oracle E-Business Suite. 
CON8809 - Oracle E-Business Suite 12.1 Upgrade Best Practices: Technical InsightIsam Alyousfi, Udayan Parvate, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 3011 This session is ideal for organizations thinking about upgrading to Oracle E-Business Suite 12.1. It covers the fundamentals of upgrading to Release 12.1, including the technology stack components and supported upgrade paths. Hear from Oracle Development about the set of best practices for patching in general and executing the Release 12.1 technical upgrade, with special considerations for minimizing your downtime. Also get to know about relatively recent upgrade resources. CON9032 - Upgrading Your Customizations of Oracle E-Business Suite 12.1Sara Woodhull, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 2016 Have you personalized Oracle Forms or Oracle Application Framework screens in Oracle E-Business Suite? Have you used mod_plsql in Release 11i? Have you extended or customized your Release 11i environment with other tools? The technical options for upgrading these customizations as part of your Oracle E-Business Suite Release 12.1 upgrade can be bewildering. Come to this Oracle development session to learn about selecting the best upgrade approach for your existing customizations. The session will help you understand customization scenarios and use cases, tools, and technologies to ensure that your Oracle E-Business Suite Release 12.1 environment fits your users’ needs closely and that any future customizations will be easy to upgrade. CON9259 - Oracle E-Business Suite Internationalization and Multilingual FeaturesMaher Al-Nubani, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 2018 Oracle E-Business Suite supports more countries, languages, and regions than ever. Come to this Oracle development session to get an overview of internationalization features and capabilities and see new Release 12 features such as calendar support for Hijra and Thai, new group separators, lightweight multilingual support (MLS) setup, new character sets such as AL32UTF, newly supported languages, Mac certifications, Oracle iSetup support for moving MLS setups, new file export options for Unicode, new MLS number spelling options, and more. CON7188 - Mobile Apps for Oracle E-Business Suite with Oracle ADF Mobile and Oracle SOA SuiteSrikant Subramaniam, Joe Huang, Veshaal Singh, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 3001 Follow your mobile customers, employees, and partners with Oracle Fusion Middleware. See how native iPhone and iPad applications can easily be built for Oracle E-Business Suite with the new Oracle ADF Mobile and Oracle SOA Suite. Using Oracle ADF Mobile, developers can quickly develop native applications for Apple iOS and other mobile platforms. The Oracle SOA Suite/Oracle ADF Mobile combination can execute business transactions on Oracle E-Business Suite. This session includes a demo in which a mobile user approves a business transaction in Oracle E-Business Suite and a demo of the tools used to build a native on-device solution. 
These concepts for mobile applications also apply to other Oracle applications.CON9029 - Oracle E-Business Suite Directions: Slashing Downtimes with Online PatchingKevin Hudson, Oracle Wednesday, Oct 3, 11:45 AM - 12:45 PM - Moscone West 2016 Oracle E-Business Suite will soon include online patching (based on the Oracle Database 11g Release 2 Edition-Based Redefinition feature), which will reduce your database patching downtimes to however long it takes to bounce your database server. This Oracle development session details how online patching works, with special attention to what’s happening at a database object level when database patches are applied to an Oracle E-Business Suite environment that’s still running. Come learn about the operational and system management implications for minimizing maintenance downtimes when applying database patches with this new technology and the related impact on customizations you might have built on top of Oracle E-Business Suite. CON8806 - Upgrading to Oracle E-Business Suite 12.1: Technical and Functional PanelAndrew Katz, Komori America Corporation; Sandra Vucinic, VLAD Group, Inc. ;Srini Chavali, Cummins Inc.; Amrita Mehrok, Nadia Bendjedou, Anne Carlson Oracle Wednesday, Oct 3, 1:15 PM - 2:15 PM - Moscone West 2018 In this panel discussion, Oracle experts, customers, and partners share their experiences in upgrading to the latest release of Oracle E-Business Suite, Release 12.1. The panelists cover aspects of a typical Release 12 upgrade, technical (upgrading the technical infrastructure) as well as functional (upgrading to the new financial infrastructure). Hear directly from the experts who either develop the product or support, implement, or upgrade it, and find out how to apply their lessons learned to your organization. CON9027 - Personalize and Extend Oracle E-Business Suite Applications with Rich MashupsGustavo Jimenez, Padmaprabodh Ambale, Oracle Wednesday, Oct 3, 1:15 PM - 2:15 PM - Moscone West 2016 This session covers the use of several Oracle Fusion Middleware technologies to personalize and extend your existing Oracle E-Business Suite applications. The Oracle Fusion Middleware technologies covered include Oracle Application Development Framework (Oracle ADF), Oracle WebCenter, Oracle Endeca applications, and Oracle Business Intelligence Enterprise Edition with Oracle E-Business Suite Oracle Application Framework applications. CON9036 - Advanced Oracle E-Business Suite Architectures: Maximum Availability, Security, and MoreElke Phelps, Oracle Wednesday, Oct 3, 3:30 PM - 4:30 PM - Moscone West 2016 This session includes architecture diagrams and configuration instructions for building a maximum availability architecture (MAA) that will help you design a disaster recovery solution that fits the needs of your business. Database and application high-availability features it describes include Oracle Data Guard, Oracle Real Application Clusters (Oracle RAC), Oracle Active Data Guard, load-balancing Web and forms services, parallel concurrent processing, and the use of Oracle Exalogic and Oracle Exadata to provide a highly available environment. The session also covers the latest updates to systems management tools, AutoConfig, cloud computing, virtualization, and Oracle WebLogic Server and provides sneak previews of upcoming functionality. 
CON9047 - Efficiently Scaling Oracle E-Business Suite on Oracle Exadata and Oracle ExalogicIsam Alyousfi, Nishit Rao, Oracle Wednesday, Oct 3, 5:00 PM - 6:00 PM - Moscone West 2016 Oracle Exadata and Oracle Exalogic are designed from the ground up with optimizations in software and hardware to deliver superfast performance for mission-critical applications such as Oracle E-Business Suite. Oracle E-Business Suite applications run three to eight times as fast on the Oracle Exadata/Oracle Exalogic platform in standard benchmark tests. Besides performance, customers benefit from simplified support, enhanced manageability, and the ability to consolidate multiple Oracle E-Business Suite instances. Attend this session to understand best practices for Oracle E-Business Suite deployment on Oracle Exalogic and Oracle Exadata through customer case studies. Learn how adopting the Exa* platform increases efficiency, simplifies scaling, and boosts performance for peak loads. CON8716 - Web Services and SOA Integration Options for Oracle E-Business SuiteRekha Ayothi, Veshaal Singh, Oracle Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 2016 This Oracle development session provides a deep dive into a subset of the Web services and SOA-related integration options available to Oracle E-Business Suite systems integrators. It offers a technical look at Oracle E-Business Suite Integrated SOA Gateway, Oracle SOA Suite, Oracle Application Adapters for Data Integration for Oracle E-Business Suite, and other Web services options for integrating Oracle E-Business Suite with other applications. Systems integrators and developers will get an overview of the latest integration capabilities and technologies available out of the box with Oracle E-Business Suite and possibly a sneak preview of upcoming functionality and features. CON9030 - Recommendations for Oracle E-Business Suite Performance TuningIsam Alyousfi, Samer Barakat, Oracle Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 2018 Need to squeeze more performance out of your existing servers? This packed Oracle development session summarizes practical tips and lessons learned from performance-tuning and benchmarking the world’s largest Oracle E-Business Suite environments. Apps sysadmins will learn concrete tips and techniques for identifying and resolving performance bottlenecks on all layers, with special attention to application- and database-tier servers. Learn about tuning Oracle Forms, Oracle Concurrent Manager, Apache, and Oracle Discoverer. Track down memory leaks and other issues at the Java and JVM layers. The session also covers Oracle E-Business Suite product-level tuning, including Oracle Workflow, Oracle Order Management, Oracle Payroll, and other modules. CON3429 - Using Oracle ADF with Oracle E-Business Suite: The Full Integration ViewSiva Puthurkattil, Lake County; Juan Camilo Ruiz, Sara Woodhull, Oracle Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 3003 Oracle E-Business Suite delivers functionality for handling the core business of your organization. However, user requirements and new technologies are driving an emerging need to implement new types of user interfaces for these applications. This session provides an overview of how to use Oracle Application Development Framework (Oracle ADF) to deliver cutting-edge Web 2.0 and mobile rich user interfaces that front existing Oracle E-Business Suite processes, and it also explores all the existing types of integration between the two worlds. 
CON9020 - Integrating Oracle E-Business Suite with Oracle Identity Management SolutionsSunil Ghosh, Elke Phelps, Oracle Thursday, Oct 4, 12:45 PM - 1:45 PM - Moscone West 2016 Need to integrate Oracle E-Business Suite with Microsoft Windows Kerberos, Active Directory, CA Netegrity SiteMinder, or other third-party authentication systems? Want to understand your options when Oracle Premier Support for Oracle Single Sign-On ends in December 2011? This Oracle Development session covers the latest certified integrations with Oracle Access Manager 11g and Oracle Internet Directory 11g, which can be used individually or as bridges for integrating with third-party authentication solutions. The session presents an architectural overview of how Oracle Access Manager, its WebGate and AccessGate components, and Oracle Internet Directory work together, with implications for Oracle Discoverer, Oracle Portal, and other Oracle Fusion identity management products. CON9019 - Troubleshooting, Diagnosing, and Optimizing Oracle E-Business Suite TechnologyGustavo Jimenez, Oracle Thursday, Oct 4, 2:15 PM - 3:15 PM - Moscone West 2016 This session covers how you can proactively diagnose Oracle E-Business Suite applications, including extensions built with Oracle Fusion Middleware technologies such as Oracle Application Development Framework (Oracle ADF) and Oracle WebCenter to catch potential issues in the middle tier before they become more serious. Topics include debugging, logging infrastructure, warning signs, performance tuning, information required when logging service requests, general JVM optimization, and an overall picture of all the moving parts that make it possible for Oracle E-Business Suite to isolate and fix problems. Also learn how Oracle Diagnostics Framework will help prevent downtime caused by failures. CON9031 - The Top 10 Things You Can Do to Secure Your Oracle E-Business Suite InstanceEric Bing, Erik Graversen, Oracle Thursday, Oct 4, 2:15 PM - 3:15 PM - Moscone West 2018 Learn the top 10 things you can do to secure your applications and your sensitive data. This Oracle development session for system administrators and security professionals explores some of the most important and overlooked things you can do to secure your Oracle E-Business Suite instance. It also covers data masking and other mechanisms for protecting sensitive data. Special Interest Groups (SIG) Some of our most senior staff have been invited to participate on the following SIG meetings as guest speakers: SIG10525 - OAUG - Archive & Purge SIGBrian Bent - Pre-Sales Engineer, TierData, Inc. Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3011 The Archive and Purge SIG is an organization in which users can share their experiences and solicit functional and technical advice on archiving and purging data in Oracle E-Business Suite. This session provides an opportunity for users to network and share best practices, tips, and tricks. Guest: Oracle E-Business Suite Database Performance, Archive & Purging - Q&A SessionIsam Alyousfi, Senior Director, Applications Performance, Oracle SIG10547 - OAUG - Oracle E-Business (EBS) Applications Technology SIGSrini Chavali - IT Director, Cummins Inc Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3018 The general purpose of the EBS Applications Technology SIG is to inform and educate its members about current and future components of the tech stack as they relate to Oracle E-Business Suite. Attend this meeting for networking and education and to share best practices. 
Guest: Oracle E-Business Suite Technology Certification Roadmap - Presentation and Q&ASteven Chan, Sr. Director, Applications Technology Group, Oracle SIG10559 - OAUG - User Management SIGSusan Behn - VP of Oracle Delivery, Infosemantics, Inc. Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3024 The E-Business Suite User Management SIG focuses on the components of user management that enable Oracle E-Business Suite users to define administrative functions and manage users’ access to functions and data based on roles within an organization—rather than the user’s individual identity—which is referred to as role-based access control (RBAC). This meeting includes an introduction to Oracle User Management that covers the Oracle User Management building blocks and presents an example of creating a security policy.Guest: Security and User Management - Q&A SessionEric Bing, Sr. Director, EBS Security, OracleSara Woodhull, Principal Product Manager, Applications Technology Group, Oracle SIG10515 - OAUG – Upgrade SIGBarbara Matthews - Consultant, On Call DBASandra Vucinic, VLAD Group, Inc. Sunday, Sep 30, 12:00 PM - 2:00 PM - Moscone West 3009 This Upgrade SIG session starts with a business meeting and then features a Q&A panel discussion on Oracle E-Business Suite upgrade topics. The session• Reviews Upgrade SIG goals and objectives• Provides answers, during the Q&A session, to questions related to Oracle E-Business Suite upgrades• Shares “real world” experiences, tips, and techniques for Oracle E-Business Suite upgrades to Release 12.1. Guest: Oracle E-Business Suite Upgrade - Q&A SessionAnne Carlson - Sr. Director, Oracle E-Business Suite Product Strategy, OracleUdayan Parvate - Director, EBS Release Engineering, OracleSuzana Ferrari, Sr. Principal Consultant, OracleIsam Alyousfi, Sr. Director, Applications Performance, Oracle SIG10552 - OAUG - Oracle E-Business Suite SIGDonna Rosentrater - Manager, Global Sourcing & Procurement Systems, TJX Sunday, Sep 30, 12:15 PM - 1:45 PM - Moscone West 3020 The E-Business Suite SIG, affiliated with OAUG, supports Oracle E-Business Suite users through networking, education, and sharing of best practices. This SIG meeting will feature a general discussion of Oracle E-Business Suite product strategies in Release 12 and migration to Oracle Fusion Applications. Guest: Oracle E-Business Suite - Q&A SessionJeanne Lowell, Vice President, EBS Product Strategy, OracleNadia Bendjedou, Sr. Director, Product Strategy, Oracle SIG10556 - OAUG - SysAdmin SIGRandy Giefer - Sr Systems and Security Architect, Solution Beacon, LLC Sunday, Sep 30, 12:15 PM - 1:45 PM - Moscone West 3022 The SysAdmin SIG provides a forum in which OAUG members and participants can share updates, tips, and successful practices relating to system administration in an Oracle applications environment. The SysAdmin SIG strives to enable system administrators to become more effective and efficient in their jobs by providing them with access to people and information that can increase their system administration knowledge and experience. Attend this meeting to network, share best practices, and benefit from educational content. Guest: Oracle E-Business Suite 12.2 Online Patching- Presentation and Q&AKevin Hudson, Sr. 
Director, Applications Technology Group, Oracle SIG10553 - OAUG - Database SIGMichael Brown - Senior DBA, COLIBRI LTD LC Sunday, Sep 30, 2:00 PM - 3:15 PM - Moscone West 3020 The OAUG Database SIG provides an opportunity for applications database administrators to learn from and share their experiences with supporting the various Oracle applications environments. This session will include a brief business meeting followed by a short presentation. It will end with an open discussion among the attendees about items of interest to those present. Guest: Oracle E-Business Suite Database Performance - Presentation and Q&AIsam Alyousfi, Sr. Director, Applications Performance, Oracle Meet the Experts We're planning two round-table discussions where you can review your questions with senior E-Business Suite ATG staff: MTE9648 - Meet the Experts for Oracle E-Business Suite: Planning Your Upgrade Jeanne Lowell - VP, EBS Product Strategy, Oracle John Abraham - Sr. Principal Product Manager, Oracle Nadia Bendjedou - Sr. Director - Product Strategy, Oracle Anne Carlson - Sr. Director, Applications Technology Group, Oracle Udayan Parvate - Director, EBS Release Engineering, Oracle Isam Alyousfi, Sr. Director, Applications Performance, Oracle Monday, Oct 1, 3:15 PM - 4:15 PM - Moscone West 2001A Don’t miss this Oracle Applications Meet the Experts session with experts who specialize in Oracle E-Business Suite upgrade best practices. This is the place where attendees can have informal and semistructured but open one-on-one discussions with Strategy and Development regarding Oracle Applications strategy and your specific business and IT strategy. The experts will be available to discuss the value of the latest releases and share insights into the best path for your enterprise, so come ready with your questions. Space is limited, so make sure you register. MTE9649 - Meet the Oracle E-Business Suite Tools and Technology Experts Lisa Parekh - Vice President, Technology Integration, Oracle Steven Chan - Sr. Director, Oracle Elke Phelps - Sr. Principal Product Manager, Applications Technology Group, Oracle Max Arderius - Manager, Applications Technology Group, Oracle Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2001A Don’t miss this Oracle Applications Meet the Experts session with experts who specialize in Oracle E-Business Suite technology. This is the place where attendees can have informal and semistructured but open one-on-one discussions with Strategy and Development regarding Oracle Applications strategy and your specific business and IT strategy. The experts will be available to discuss the value of the latest releases and share insights into the best path for your enterprise, so come ready with your questions. Space is limited, so make sure you register. Demos We have five booths in the exhibition demogrounds this year, where you can try ATG technologies firsthand and get your questions answered. Please stop by and meet our staff at the following locations: Advanced Architecture and Technology Stack for Oracle E-Business Suite (W-067) New User Productivity Capabilities in Oracle E-Business Suite (W-065) End-to-End Management of Oracle E-Business Suite (W-063) Oracle E-Business Suite 12.1 Technical Upgrade Best Practices (W-066) SOA-Based Integration for Oracle E-Business Suite (W-064)

    Read the article

  • Entity Framework Code-First, OData & Windows Phone Client

    - by Jon Galloway
    Entity Framework Code-First is the coolest thing since sliced bread, Windows  Phone is the hottest thing since Tickle-Me-Elmo and OData is just too great to ignore. As part of the Full Stack project, we wanted to put them together, which turns out to be pretty easy… once you know how.   EF Code-First CTP5 is available now and there should be very few breaking changes in the release edition, which is due early in 2011.  Note: EF Code-First evolved rapidly and many of the existing documents and blog posts which were written with earlier versions, may now be obsolete or at least misleading.   Code-First? With traditional Entity Framework you start with a database and from that you generate “entities” – classes that bridge between the relational database and your object oriented program. With Code-First (Magic-Unicorn) (see Hanselman’s write up and this later write up by Scott Guthrie) the Entity Framework looks at classes you created and says “if I had created these classes, the database would have to have looked like this…” and creates the database for you! By deriving your entity collections from DbSet and exposing them via a class that derives from DbContext, you "turn on" database backing for your POCO with a minimum of code and no hidden designer or configuration files. POCO == Plain Old CLR Objects Your entity objects can be used throughout your applications - in web applications, console applications, Silverlight and Windows Phone applications, etc. In our case, we'll want to read and update data from a Windows Phone client application, so we'll expose the entities through a DataService and hook the Windows Phone client application to that data via proxies.  Piece of Pie.  Easy as cake. The Demo Architecture To see this at work, we’ll create an ASP.NET/MVC application which will act as the host for our Data Service.  We’ll create an incredibly simple data layer using EF Code-First on top of SQLCE4 and we’ll expose the data in a WCF Data Service using the oData protocol.  Our Windows Phone 7 client will instantiate  the data context via a URI and load the data asynchronously. Setting up the Server project with MVC 3, EF Code First, and SQL CE 4 Create a new application of type ASP.NET MVC 3 and name it DeadSimpleServer.  We need to add the latest SQLCE4 and Entity Framework Code First CTP's to our project. Fortunately, NuGet makes that really easy. Open the Package Manager Console (View / Other Windows / Package Manager Console) and type in "Install-Package EFCodeFirst.SqlServerCompact" at the PM> command prompt. Since NuGet handles dependencies for you, you'll see that it installs everything you need to use Entity Framework Code First in your project. PM> install-package EFCodeFirst.SqlServerCompact 'SQLCE (= 4.0.8435.1)' not installed. Attempting to retrieve dependency from source... Done 'EFCodeFirst (= 0.8)' not installed. Attempting to retrieve dependency from source... Done 'WebActivator (= 1.0.0.0)' not installed. Attempting to retrieve dependency from source... Done You are downloading SQLCE from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. 
Successfully installed 'SQLCE 4.0.8435.1' You are downloading EFCodeFirst from Microsoft, the license agreement to which is available at http://go.microsoft.com/fwlink/?LinkID=206497. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. Successfully installed 'EFCodeFirst 0.8' Successfully installed 'WebActivator 1.0.0.0' You are downloading EFCodeFirst.SqlServerCompact from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. Successfully installed 'EFCodeFirst.SqlServerCompact 0.8' Successfully added 'SQLCE 4.0.8435.1' to EfCodeFirst-CTP5 Successfully added 'EFCodeFirst 0.8' to EfCodeFirst-CTP5 Successfully added 'WebActivator 1.0.0.0' to EfCodeFirst-CTP5 Successfully added 'EFCodeFirst.SqlServerCompact 0.8' to EfCodeFirst-CTP5 Note: We're using SQLCE 4 with Entity Framework here because they work really well together from a development scenario, but you can of course use Entity Framework Code First with other databases supported by Entity framework. Creating The Model using EF Code First Now we can create our model class. Right-click the Models folder and select Add/Class. Name the Class Person.cs and add the following code: using System.Data.Entity; namespace DeadSimpleServer.Models { public class Person { public int ID { get; set; } public string Name { get; set; } } public class PersonContext : DbContext { public DbSet<Person> People { get; set; } } } Notice that the entity class Person has no special interfaces or base class. There's nothing special needed to make it work - it's just a POCO. The context we'll use to access the entities in the application is called PersonContext, but you could name it anything you wanted. The important thing is that it inherits DbContext and contains one or more DbSet which holds our entity collections. Adding Seed Data We need some testing data to expose from our service. The simplest way to get that into our database is to modify the CreateCeDatabaseIfNotExists class in AppStart_SQLCEEntityFramework.cs by adding some seed data to the Seed method: protected virtual void Seed( TContext context ) { var personContext = context as PersonContext; personContext.People.Add( new Person { ID = 1, Name = "George Washington" } ); personContext.People.Add( new Person { ID = 2, Name = "John Adams" } ); personContext.People.Add( new Person { ID = 3, Name = "Thomas Jefferson" } ); personContext.SaveChanges(); } The CreateCeDatabaseIfNotExists class name is pretty self-explanatory - when our DbContext is accessed and the database isn't found, a new one will be created and populated with the data in the Seed method. 
There's one more step to make that work - we need to uncomment a line in the Start method at the top of of the AppStart_SQLCEEntityFramework class and set the context name, as shown here, public static class AppStart_SQLCEEntityFramework { public static void Start() { DbDatabase.DefaultConnectionFactory = new SqlCeConnectionFactory("System.Data.SqlServerCe.4.0"); // Sets the default database initialization code for working with Sql Server Compact databases // Uncomment this line and replace CONTEXT_NAME with the name of your DbContext if you are // using your DbContext to create and manage your database DbDatabase.SetInitializer(new CreateCeDatabaseIfNotExists<PersonContext>()); } } Now our database and entity framework are set up, so we can expose data via WCF Data Services. Note: This is a bare-bones implementation with no administration screens. If you'd like to see how those are added, check out The Full Stack screencast series. Creating the oData Service using WCF Data Services Add a new WCF Data Service to the project (right-click the project / Add New Item / Web / WCF Data Service). We’ll be exposing all the data as read/write.  Remember to reconfigure to control and minimize access as appropriate for your own application. Open the code behind for your service. In our case, the service was called PersonTestDataService.svc so the code behind class file is PersonTestDataService.svc.cs. using System.Data.Services; using System.Data.Services.Common; using System.ServiceModel; using DeadSimpleServer.Models; namespace DeadSimpleServer { [ServiceBehavior( IncludeExceptionDetailInFaults = true )] public class PersonTestDataService : DataService<PersonContext> { // This method is called only once to initialize service-wide policies. public static void InitializeService( DataServiceConfiguration config ) { config.SetEntitySetAccessRule( "*", EntitySetRights.All ); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; config.UseVerboseErrors = true; } } } We're enabling a few additional settings to make it easier to debug if you run into trouble. The ServiceBehavior attribute is set to include exception details in faults, and we're using verbose errors. You can remove both of these when your service is working, as your public production service shouldn't be revealing exception information. 
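The walkthrough deliberately leaves the service wide open (EntitySetRights.All, verbose errors, exception details) while debugging. As a hedged illustration that is not part of the original post, here is one way the InitializeService method might be tightened before the service is made public; exposing only the People set, and only for reads, is the assumption here:

using System.Data.Services;
using System.Data.Services.Common;
using DeadSimpleServer.Models;

namespace DeadSimpleServer
{
    // Production-leaning sketch: no exception details, no verbose errors,
    // and only read access to the People entity set.
    public class PersonTestDataService : DataService<PersonContext>
    {
        public static void InitializeService( DataServiceConfiguration config )
        {
            // Expose only the People entity set, read-only.
            config.SetEntitySetAccessRule( "People", EntitySetRights.AllRead );
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
}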
You can view the output of the service by running the application and browsing to http://localhost:[portnumber]/PersonTestDataService.svc/: <service xml:base="http://localhost:49786/PersonTestDataService.svc/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:app="http://www.w3.org/2007/app" xmlns="http://www.w3.org/2007/app"> <workspace> <atom:title>Default</atom:title> <collection href="People"> <atom:title>People</atom:title> </collection> </workspace> </service> This indicates that the service exposes one collection, which is accessible by browsing to http://localhost:[portnumber]/PersonTestDataService.svc/People <?xml version="1.0" encoding="iso-8859-1" standalone="yes"?> <feed xml:base=http://localhost:49786/PersonTestDataService.svc/ xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <title type="text">People</title> <id>http://localhost:49786/PersonTestDataService.svc/People</id> <updated>2010-12-29T01:01:50Z</updated> <link rel="self" title="People" href="People" /> <entry> <id>http://localhost:49786/PersonTestDataService.svc/People(1)</id> <title type="text"></title> <updated>2010-12-29T01:01:50Z</updated> <author> <name /> </author> <link rel="edit" title="Person" href="People(1)" /> <category term="DeadSimpleServer.Models.Person" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:ID m:type="Edm.Int32">1</d:ID> <d:Name>George Washington</d:Name> </m:properties> </content> </entry> <entry> ... </entry> </feed> Let's recap what we've done so far. But enough with services and XML - let's get this into our Windows Phone client application. Creating the DataServiceContext for the Client Use the latest DataSvcUtil.exe from http://odata.codeplex.com. As of today, that's in this download: http://odata.codeplex.com/releases/view/54698 You need to run it with a few options: /uri - This will point to the service URI. In this case, it's http://localhost:59342/PersonTestDataService.svc  Pick up the port number from your running server (e.g., the server formerly known as Cassini). /out - This is the DataServiceContext class that will be generated. You can name it whatever you'd like. /Version - should be set to 2.0 /DataServiceCollection - Include this flag to generate collections derived from the DataServiceCollection base, which brings in all the ObservableCollection goodness that handles your INotifyPropertyChanged events for you. Here's the console session from when we ran it: <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged"> Next, to keep things simple, change the Binding on the two TextBlocks within the DataTemplate to Name and ID, <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel Margin="0,0,0,17" Width="432"> <TextBlock Text="{Binding Name}" TextWrapping="Wrap" Style="{StaticResource PhoneTextExtraLargeStyle}" /> <TextBlock Text="{Binding ID}" TextWrapping="Wrap" Margin="12,-6,12,0" Style="{StaticResource PhoneTextSubtleStyle}" /> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> Getting The Context In the code-behind you’ll first declare a member variable to hold the context from the Entity Framework. This is named using convention over configuration. 
The db type is Person and the context is of type PersonContext, You initialize it by providing the URI, in this case using the URL obtained from the Cassini web server, PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) ); Create a second member variable of type DataServiceCollection<Person> but do not initialize it, DataServiceCollection<Person> people; In the constructor you’ll initialize the DataServiceCollection using the PersonContext, public MainPage() { InitializeComponent(); people = new DataServiceCollection<Person>( context ); Finally, you’ll load the people collection using the LoadAsync method, passing in the fully specified URI for the People collection in the web service, people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) ); Note that this method runs asynchronously and when it is finished the people  collection is already populated. Thus, since we didn’t need or want to override any of the behavior we don’t implement the LoadCompleted. You can use the LoadCompleted event if you need to do any other UI updates, but you don't need to. The final code is as shown below: using System; using System.Data.Services.Client; using System.Windows; using System.Windows.Controls; using DeadSimpleServer.Models; using Microsoft.Phone.Controls; namespace WindowsPhoneODataTest { public partial class MainPage : PhoneApplicationPage { PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) ); DataServiceCollection<Person> people; // Constructor public MainPage() { InitializeComponent(); // Set the data context of the listbox control to the sample data // DataContext = App.ViewModel; people = new DataServiceCollection<Person>( context ); people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) ); DataContext = people; this.Loaded += new RoutedEventHandler( MainPage_Loaded ); } // Handle selection changed on ListBox private void MainListBox_SelectionChanged( object sender, SelectionChangedEventArgs e ) { // If selected index is -1 (no selection) do nothing if ( MainListBox.SelectedIndex == -1 ) return; // Navigate to the new page NavigationService.Navigate( new Uri( "/DetailsPage.xaml?selectedItem=" + MainListBox.SelectedIndex, UriKind.Relative ) ); // Reset selected index to -1 (no selection) MainListBox.SelectedIndex = -1; } // Load data for the ViewModel Items private void MainPage_Loaded( object sender, RoutedEventArgs e ) { if ( !App.ViewModel.IsDataLoaded ) { App.ViewModel.LoadData(); } } } } With people populated we can set it as the DataContext and run the application; you’ll find that the Name and ID are displayed in the list on the Mainpage. Here's how the pieces in the client fit together: Complete source code available here
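Reading works, and because the service was configured as read/write, the same generated context can also push changes back to the server. The snippet below is a sketch rather than part of the original post: AddPerson is a hypothetical helper, and it assumes the context and people fields declared in the MainPage code above, using the asynchronous BeginSaveChanges/EndSaveChanges pair exposed by the generated DataServiceContext.

// Hypothetical addition to MainPage: create a Person locally and save it
// back through the OData service. Assumes the 'context' and 'people'
// fields from the code above.
private void AddPerson( string name )
{
    var person = new Person { Name = name };

    // The DataServiceCollection was constructed from 'context', so Add()
    // also registers the new entity with the context for change tracking.
    people.Add( person );

    context.BeginSaveChanges( asyncResult =>
    {
        // Complete the save; any server-side failure surfaces here.
        context.EndSaveChanges( asyncResult );
    }, null );
}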

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
    Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side by side scenarios there is no way around MSI. You can create Msi files with a Visual Studio Setup project which is severely limited or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning. It is funny to always use the long name when I talk about deployment possibilities. Alternatively you can buy commercial tools which help you to author Msi files but I am not sure how good they are. Given enough pain with existing solutions you can also learn the MSI Apis and create your own packaging solution. If I were you I would use either a commercial visual tool when you do easy deployments or use the free Windows Installer Toolset. Once you know the WIX schema you can create well formed wix xml files easily with any editor. Then you can “compile” from the wxs files your Msi package. Recently I had the “pleasure” to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic but after several month of digging into arcane MSI issues I can safely say that there should exist an easier way to install and update files as today. I am not alone with this statement as John Robbins (creator of the cool tool Paraffin) states: “.. It's a brittle and scary API in Windows …”. To help other people struggling with installation issues I present you the advice I (and others) found useful and what will happen if you ignore this advice. What is a MSI file? A MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install and MSI controls the how to get your stuff onto or off your machine. Your “stuff” consists usually of files, registry keys, shortcuts and environment variables. Therefore the most important tables are File, Registry, Environment and Shortcut table which define what will be un/installed. The key to master MSI is that every resource (file, registry key ,…) is associated with a MSI component. The actual payload consists of compressed files in the CAB format which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it. To examine MSI files you need Orca a free MSI editor provided by MS. There is also another free editor called Super Orca which does support diffs between MSI and it does not lock the MSI files. But since Orca comes with a shell extension I tend to use only Orca because it is so easy to right click on a MSI file and open it with this tool. How Do I Install It? Double click it. This does work for fresh installations as well as major upgrades. Updates need to be installed via the command line via msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus   This tells the installer to reinstall all already installed features (new features will NOT be installed). The reinstallmode letters do force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things did go really wrong and you want to overwrite everything unconditionally use REINSTALLMODE=vamus. How To Enable MSI Logs? You can download a MSI from Microsoft which installs some registry keys to enable full MSI logging. 
The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively you can add to your msiexec command line the option msiexec …. /l*vx <LogFileName> Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x which is documented in the msiexec help but I still find this behavior unintuitive. What are MSI components? The whole MSI logic is bound to the concept of MSI components. Nearly every msi table has a Component column which binds an installable resource to a component. Below are the screenshots of the FeatureComponents and Component table of an example MSI. The Feature table defines basically the feature hierarchy.  To find out what belongs to a feature you need to look at the FeatureComponents table where for each feature the components are listed which will be installed when a feature is installed. The MSI components are defined in the  Component table. This table has as first column the component name and as second column the component id which is a GUID. All resources you want to install belong to a MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look e.g. a the File table you see that every file belongs to a component which is true for all other tables which install resources. The component table is the glue between all other tables which contain the resources you want to install. So far so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you did violate a MSI component rule in one or the other way. When you install a feature the reference count for all components belonging to this feature will increase by one. If your component is installed by more than one feature it will get a higher refcount. When you uninstall a feature its refcount will drop by one. Interesting things happen if the component reference count reaches zero: Then all associated resources will be deleted. That looks like a reasonable thing and it is. What it makes complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer … Rule 16: Follow Component Rules Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package: Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies. Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components however may share a key path folder. Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies. Do not create components containing resources that will need to be installed into more than one directory on the user’s system. 
The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories. Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component. Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut. … And these rules do not even talk about component ids, update packages and upgrades which you need to understand as well. Lets suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names. Both do install the same file. What will happen when you uninstall MSI2?   Hm the file should stay there. But the component names are different. Yes and yes. But MSI uses not use the component name as key for the refcount. Instead the ComponentId column of the Component table which contains a GUID is used as identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs the Component with the Id {100000….} has a refcount of two. After uninstallation of one MSI there is still a refcount of one which drops to zero just as expected when we uninstall the last msi. Then the file which was the same for both MSIs is deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI does manage components not the resources you did install. The resources associated with a component are then and only then deleted when the refcount of the component reaches zero.   The dependencies between features, components and resources can be described as relations. m,k are numbers >= 1, n can be 0. Inside a MSI the following relations are valid Feature    1  –> n Components Component    1 –> m Features Component      1  –>  k Resources These relations express that one feature can install several components and features can share components between them. Every (meaningful) component will install at least one resource which means that its name (primary key to stay in database speak) does occur in some other table in the Component column as value which installs some resource. Lets make it clear with an example. We want to install with the feature MainFeature some files a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables.   Feature Component Registry File Shortcuts MainFeature Comp1 RegistryKey1     MainFeature Comp2   File.txt   MainFeature Comp3   File2.txt Shortcut to File2.txt   It is illegal that the same resource is part of more than one component since this would break the refcount mechanism. Lets illustrate this:            Feature ComponentId Resource Reference Count Feature1 {1000-…} File1.txt 1 Feature2 {2000-….} File1.txt 1 The installation part works well but what happens when you uninstall Feature2? Component {20000…} gets a refcount of zero where MSI deletes all resources belonging to this component. In this case File1.txt will be deleted. But Feature1 still has another component {10000…} with a refcount of one which means that the file was deleted too early. You just have ruined your installation. To fix it you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts. The vigilant reader might has noticed that there is more in the Component table. 
Beside its name and GUID it has also an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect if the component is already installed. This becomes important when you repair or uninstall a component. To find out if the component is already installed MSI checks if the registry key or file referenced by the KeyPath property does exist. When it does not exist it assumes that it was either already uninstalled (can lead to problems during uninstall) or that it is already installed and all is fine. Why is this detail so important? Lets put all files into one component. The KeyPath should be then one of the files of your component to check if it was installed or not. When your installation becomes corrupt because a file was deleted you cannot repair it with the Repair button under Add/Remove Programs because MSI checks the component integrity via the Resource referenced by its KeyPath. As long as you did not delete the KeyPath file MSI thinks all resources with your component are installed and never executes any repair action. You get even more trouble when you try to remove files during an upgrade (you cannot remove files during an update) from your super component which contains all files. The only way out and therefore best practice is to assign for every resource you want to install an extra component. This ensures painless updatability and repairs and you have much less effort to remove specific files during an upgrade. In effect you get this best practice relation Feature 1  –> n Components Component   1  –>  1 Resources MSI Component Rules Rule 1 – One component per resource Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component which does never change between versions as long as the install location is the same. Penalty If you add more than one resources to a component you will break the repair capability of MSI because the KeyPath is used to check if the component needs repair. MSI ComponentId Files MSI 1.0 {1000} File1-5 MSI 2.0 {2000} File2-5 You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files you create a new component and add them there. MSI will delete all files if the component refcount of {1000} drops to zero. The files you want to keep are added to the new component {2000}. Ok that does work if your upgrade does uninstall the old MSI first. This will cause the refcount of all previously installed components to reach zero which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade by first installing your new MSI and then remove the old one.  If you choose this upgrade path then you will loose File1-5 after your upgrade and not only File1 as intended by your new component design.   Rule 2 – Only add, never remove resources from a component If you did follow rule 1 you will not need Rule 2. You can add in a patch more resources to one component. That is ok. But you can never remove anything from it. There are tricky ways around that but I do not want to encourage bad component design. Penalty Lets assume you have 2 MSI files which install under the same component one file   MSI1 MSI2 {1000} - ComponentId {1000} – ComponentId File1.txt File2.txt   When you install and uninstall both MSIs you will end up with an installation where either File1 or File2 will be left. Why? 
It seems that MSI does not store the resources associated with each component in its internal database. Instead Windows will simply query the MSI that is currently uninstalled for all resources belonging to this component. Since it will find only one file and not two it will only uninstall one file. That is the main reason why you never can remove resources from a component!   Rule 3 Never Remove A Component From an Update MSI. This is the same as if you change the GUID of a component by accident for your new update package. The resulting update package will not contain all components from the previously installed package. Penalty When you remove a component from a feature MSI will set the feature state during update to Advertised and log a warning message into its log file when you did enable MSI logging. SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx, but is not present in the Component table.  Removal of components from a feature is not supported! MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported Advertised means that MSI treats all components of this feature as not installed. As a consequence during uninstall nothing will be removed since it is not installed! This is not only bad because uninstall does no longer work but this feature will also not get the required patches. All other features which have followed component versioning rules for update packages will be updated but the one faulty feature will not. This results in very hard to find bugs why an update was only partially successful. Things got better with Windows Installer 4.5 but you cannot rely on that nobody will use an older installer. It is a good idea to add to your update msiexec call MSIENFORCEUPGRADECOMPONENTRULES=1 which will abort the installation if you did violate this rule.
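To make the reference-counting behavior described above concrete, here is a small toy model in C#. It is emphatically not a real Windows Installer API, just an illustration of the rule that the count is keyed by the ComponentId GUID rather than the component name, which is why two MSIs that reuse a GUID share one refcount for the same resources and why those resources are deleted only when that shared count reaches zero.

using System;
using System.Collections.Generic;

// Toy illustration of MSI component refcounting -- not a real installer API.
static class ComponentRefCounts
{
    static readonly Dictionary<Guid, int> Counts = new Dictionary<Guid, int>();

    public static void Install( Guid componentId )
    {
        int current;
        Counts.TryGetValue( componentId, out current );
        Counts[componentId] = current + 1;   // a second MSI with the same GUID raises the refcount to 2
    }

    // Returns true when the caller should delete the component's resources.
    public static bool Uninstall( Guid componentId )
    {
        if ( --Counts[componentId] > 0 )
            return false;                    // another product still references this component
        Counts.Remove( componentId );
        return true;                         // refcount hit zero: files, keys, shortcuts are removed
    }
}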

    Read the article

  • PTLQueue : a scalable bounded-capacity MPMC queue

    - by Dave
    Title: Fast concurrent MPMC queue -- I've used the following concurrent queue algorithm enough that it warrants a blog entry. I'll sketch out the design of a fast and scalable multiple-producer multiple-consumer (MPSC) concurrent queue called PTLQueue. The queue has bounded capacity and is implemented via a circular array. Bounded capacity can be a useful property if there's a mismatch between producer rates and consumer rates where an unbounded queue might otherwise result in excessive memory consumption by virtue of the container nodes that -- in some queue implementations -- are used to hold values. A bounded-capacity queue can provide flow control between components. Beware, however, that bounded collections can also result in resource deadlock if abused. The put() and take() operators are partial and wait for the collection to become non-full or non-empty, respectively. Put() and take() do not allocate memory, and are not vulnerable to the ABA pathologies. The PTLQueue algorithm can be implemented equally well in C/C++ and Java. Partial operators are often more convenient than total methods. In many use cases if the preconditions aren't met, there's nothing else useful the thread can do, so it may as well wait via a partial method. An exception is in the case of work-stealing queues where a thief might scan a set of queues from which it could potentially steal. Total methods return ASAP with a success-failure indication. (It's tempting to describe a queue or API as blocking or non-blocking instead of partial or total, but non-blocking is already an overloaded concurrency term. Perhaps waiting/non-waiting or patient/impatient might be better terms). It's also trivial to construct partial operators by busy-waiting via total operators, but such constructs may be less efficient than an operator explicitly and intentionally designed to wait. A PTLQueue instance contains an array of slots, where each slot has volatile Turn and MailBox fields. The array has power-of-two length allowing mod/div operations to be replaced by masking. We assume sensible padding and alignment to reduce the impact of false sharing. (On x86 I recommend 128-byte alignment and padding because of the adjacent-sector prefetch facility). Each queue also has PutCursor and TakeCursor cursor variables, each of which should be sequestered as the sole occupant of a cache line or sector. You can opt to use 64-bit integers if concerned about wrap-around aliasing in the cursor variables. Put(null) is considered illegal, but the caller or implementation can easily check for and convert null to a distinguished non-null proxy value if null happens to be a value you'd like to pass. Take() will accordingly convert the proxy value back to null. An advantage of PTLQueue is that you can use atomic fetch-and-increment for the partial methods. We initialize each slot at index I with (Turn=I, MailBox=null). Both cursors are initially 0. All shared variables are considered "volatile" and atomics such as CAS and AtomicFetchAndIncrement are presumed to have bidirectional fence semantics. Finally T is the templated type. I've sketched out a total tryTake() method below that allows the caller to poll the queue. tryPut() has an analogous construction. Zebra stripping : alternating row colors for nice-looking code listings. See also google code "prettify" : https://code.google.com/p/google-code-prettify/ Prettify is a javascript module that yields the HTML/CSS/JS equivalent of pretty-print. 
// PTLQueue : Put(v) : // producer : partial method - waits as necessary
  assert v != null
  assert Mask >= 1 && (Mask & (Mask+1)) == 0    // Document invariants
  // doorway step
  // Obtain a sequence number -- ticket
  // As a practical concern the ticket value is temporally unique
  // The ticket also identifies and selects a slot
  auto tkt = AtomicFetchIncrement (&PutCursor, 1)
  slot * s = &Slots[tkt & Mask]
  // waiting phase :
  // wait for slot's generation to match the tkt value assigned to this put() invocation.
  // The "generation" is implicitly encoded as the upper bits in the cursor
  // above those used to specify the index : tkt div (Mask+1)
  // The generation serves as an epoch number to identify a cohort of threads
  // accessing disjoint slots
  while s->Turn != tkt : Pause
  assert s->MailBox == null
  s->MailBox = v    // deposit and pass message

Take() : // consumer : partial method - waits as necessary
  auto tkt = AtomicFetchIncrement (&TakeCursor,1)
  slot * s = &Slots[tkt & Mask]
  // 2-stage waiting :
  // First wait for turn for our generation
  // Acquire exclusive "take" access to slot's MailBox field
  // Then wait for the slot to become occupied
  while s->Turn != tkt : Pause
  // Concurrency in this section of code is now reduced to just 1 producer thread
  // vs 1 consumer thread.
  // For a given queue and slot, there will be at most one Take() operation running
  // in this section.
  // Consumer waits for producer to arrive and make slot non-empty
  // Extract message; clear mailbox; advance Turn indicator
  // We have an obvious happens-before relation :
  // Put(m) happens-before corresponding Take() that returns that same "m"
  for
    T v = s->MailBox
    if v != null :
      s->MailBox = null
      ST-ST barrier
      s->Turn = tkt + Mask + 1    // unlock slot to admit next producer and consumer
      return v
    Pause

tryTake() : // total method - returns ASAP with failure indication
  for
    auto tkt = TakeCursor
    slot * s = &Slots[tkt & Mask]
    if s->Turn != tkt : return null
    T v = s->MailBox    // presumptive return value
    if v == null : return null
    // ratify tkt and v values and commit by advancing cursor
    if CAS (&TakeCursor, tkt, tkt+1) != tkt : continue
    s->MailBox = null
    ST-ST barrier
    s->Turn = tkt + Mask + 1
    return v

The basic idea derives from the Partitioned Ticket Lock "PTL" (US20120240126-A1) and the MultiLane Concurrent Bag (US8689237). The latter is essentially a circular ring-buffer where the elements themselves are queues or concurrent collections. You can think of the PTLQueue as a partitioned ticket lock "PTL" augmented to pass values from lock to unlock via the slots. Alternatively, you could conceptualize PTLQueue as a degenerate MultiLane bag where each slot or "lane" consists of a simple single-word MailBox instead of a general queue. Each lane in PTLQueue also has a private Turn field which acts like the Turn (Grant) variables found in PTL. Turn enforces strict FIFO ordering and restricts concurrency on the slot mailbox field to at most one simultaneous put() and take() operation. PTL uses a single "ticket" variable and per-slot Turn (grant) fields while MultiLane has distinct PutCursor and TakeCursor cursors and abstract per-slot sub-queues. Both PTL and MultiLane advance their cursor and ticket variables with atomic fetch-and-increment. 
PTLQueue borrows from both PTL and MultiLane and has distinct put and take cursors and per-slot Turn fields. Instead of a per-slot queues, PTLQueue uses a simple single-word MailBox field. PutCursor and TakeCursor act like a pair of ticket locks, conferring "put" and "take" access to a given slot. PutCursor, for instance, assigns an incoming put() request to a slot and serves as a PTL "Ticket" to acquire "put" permission to that slot's MailBox field. To better explain the operation of PTLQueue we deconstruct the operation of put() and take() as follows. Put() first increments PutCursor obtaining a new unique ticket. That ticket value also identifies a slot. Put() next waits for that slot's Turn field to match that ticket value. This is tantamount to using a PTL to acquire "put" permission on the slot's MailBox field. Finally, having obtained exclusive "put" permission on the slot, put() stores the message value into the slot's MailBox. Take() similarly advances TakeCursor, identifying a slot, and then acquires and secures "take" permission on a slot by waiting for Turn. Take() then waits for the slot's MailBox to become non-empty, extracts the message, and clears MailBox. Finally, take() advances the slot's Turn field, which releases both "put" and "take" access to the slot's MailBox. Note the asymmetry : put() acquires "put" access to the slot, but take() releases that lock. At any given time, for a given slot in a PTLQueue, at most one thread has "put" access and at most one thread has "take" access. This restricts concurrency from general MPMC to 1-vs-1. We have 2 ticket locks -- one for put() and one for take() -- each with its own "ticket" variable in the form of the corresponding cursor, but they share a single "Grant" egress variable in the form of the slot's Turn variable. Advancing the PutCursor, for instance, serves two purposes. First, we obtain a unique ticket which identifies a slot. Second, incrementing the cursor is the doorway protocol step to acquire the per-slot mutual exclusion "put" lock. The cursors and operations to increment those cursors serve double-duty : slot-selection and ticket assignment for locking the slot's MailBox field. At any given time a slot MailBox field can be in one of the following states: empty with no pending operations -- neutral state; empty with one or more waiting take() operations pending -- deficit; occupied with no pending operations; occupied with one or more waiting put() operations -- surplus; empty with a pending put() or pending put() and take() operations -- transitional; or occupied with a pending take() or pending put() and take() operations -- transitional. The partial put() and take() operators can be implemented with an atomic fetch-and-increment operation, which may confer a performance advantage over a CAS-based loop. In addition we have independent PutCursor and TakeCursor cursors. Critically, a put() operation modifies PutCursor but does not access the TakeCursor and a take() operation modifies the TakeCursor cursor but does not access the PutCursor. This acts to reduce coherence traffic relative to some other queue designs. It's worth noting that slow threads or obstruction in one slot (or "lane") does not impede or obstruct operations in other slots -- this gives us some degree of obstruction isolation. PTLQueue is not lock-free, however. The implementation above is expressed with polite busy-waiting (Pause) but it's trivial to implement per-slot parking and unparking to deschedule waiting threads. 
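The cursor arithmetic above is compact, so a tiny sketch may help. This is a hedged illustration in C# rather than the post's pseudocode, and it assumes a queue of 8 slots (Mask = 7): the low bits of a ticket select the slot, and the remaining high bits form the generation that the slot's Turn field is compared against.

// Sketch of the ticket arithmetic used by PTLQueue, assuming 8 slots (Mask = 7).
static class TicketMath
{
    const long Mask = 7;                                                     // capacity - 1, capacity a power of two

    public static long SlotIndex( long ticket )  => ticket & Mask;           // selects the slot
    public static long Generation( long ticket ) => ticket / ( Mask + 1 );   // epoch / cohort number

    // After a value is consumed, the slot's Turn advances to the ticket that
    // the same slot will carry in the next generation.
    public static long NextTurn( long turn ) => turn + Mask + 1;
}

For example, tickets 3 and 11 both map to slot 3 but belong to generations 0 and 1, so the holder of ticket 11 cannot touch the slot until ticket 3 has been serviced and Turn has advanced by Mask + 1.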
It's also easy to convert the queue to a more general deque by replacing the PutCursor and TakeCursor cursors with Left/Front and Right/Back cursors that can move either direction. Specifically, to push and pop from the "left" side of the deque we would decrement and increment the Left cursor, respectively, and to push and pop from the "right" side of the deque we would increment and decrement the Right cursor, respectively. We used a variation of PTLQueue for message passing in our recent OPODIS 2013 paper. ul { list-style:none; padding-left:0; padding:0; margin:0; margin-left:0; } ul#myTagID { padding: 0px; margin: 0px; list-style:none; margin-left:0;} -- -- There's quite a bit of related literature in this area. I'll call out a few relevant references: Wilson's NYU Courant Institute UltraComputer dissertation from 1988 is classic and the canonical starting point : Operating System Data Structures for Shared-Memory MIMD Machines with Fetch-and-Add. Regarding provenance and priority, I think PTLQueue or queues effectively equivalent to PTLQueue have been independently rediscovered a number of times. See CB-Queue and BNPBV, below, for instance. But Wilson's dissertation anticipates the basic idea and seems to predate all the others. Gottlieb et al : Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors Orozco et al : CB-Queue in Toward high-throughput algorithms on many-core architectures which appeared in TACO 2012. Meneghin et al : BNPVB family in Performance evaluation of inter-thread communication mechanisms on multicore/multithreaded architecture Dmitry Vyukov : bounded MPMC queue (highly recommended) Alex Otenko : US8607249 (highly related). John Mellor-Crummey : Concurrent queues: Practical fetch-and-phi algorithms. Technical Report 229, Department of Computer Science, University of Rochester Thomasson : FIFO Distributed Bakery Algorithm (very similar to PTLQueue). Scott and Scherer : Dual Data Structures I'll propose an optimization left as an exercise for the reader. Say we wanted to reduce memory usage by eliminating inter-slot padding. Such padding is usually "dark" memory and otherwise unused and wasted. But eliminating the padding leaves us at risk of increased false sharing. Furthermore lets say it was usually the case that the PutCursor and TakeCursor were numerically close to each other. (That's true in some use cases). We might still reduce false sharing by incrementing the cursors by some value other than 1 that is not trivially small and is coprime with the number of slots. Alternatively, we might increment the cursor by one and mask as usual, resulting in a logical index. We then use that logical index value to index into a permutation table, yielding an effective index for use in the slot array. The permutation table would be constructed so that nearby logical indices would map to more distant effective indices. (Open question: what should that permutation look like? Possibly some perversion of a Gray code or De Bruijn sequence might be suitable). As an aside, say we need to busy-wait for some condition as follows : "while C == 0 : Pause". Lets say that C is usually non-zero, so we typically don't wait. But when C happens to be 0 we'll have to spin for some period, possibly brief. We can arrange for the code to be more machine-friendly with respect to the branch predictors by transforming the loop into : "if C == 0 : for { Pause; if C != 0 : break; }". 
Critically, we want to restructure the loop so there's one branch that controls entry and another that controls loop exit. A concern is that your compiler or JIT might be clever enough to transform this back to "while C == 0 : Pause". You can sometimes avoid this by inserting a call to some type of very cheap "opaque" method that the compiler can't elide or reorder. On Solaris, for instance, you could use : "if C == 0 : { gethrtime(); for { Pause; if C != 0 : break; }}". It's worth noting the obvious duality between locks and queues. If you have a strict FIFO lock implementation with local spinning and succession by direct handoff, such as MCS or CLH, then you can usually transform that lock into a queue.

    Read the article

  • Partner Blog Series: PwC Perspectives - The Gotchas, The Do's and Don'ts for IDM Implementations

    - by Tanu Sood
It is generally accepted 
among business communities that technology by itself is not a silver bullet to all problems, but when it is combined with leading practices, strategy, careful planning and execution, it can create a recipe for success. This post highlights some of the best practices, along with dos and don'ts, that our practice has accumulated over the years in the identity and access management space in general, and in the context of R2 in particular.

Best Practices

The following section illustrates the leading practices in how to plan, implement and sustain a successful OIM deployment, based on our collective experience.

Planning is critical, but often overlooked

A common approach to planning an IAM program that we use with our clients is a three-step process involving a current-state assessment, a future-state roadmap and an executable strategy to get there. It is extremely beneficial for clients to assess their current IAM state, perform a gap analysis, document the recommended controls to address the gaps, align the future-state roadmap to business initiatives, and get buy-in from all stakeholders involved to improve the chances of success. When designing an enterprise-wide solution, the scalability of the technology must accommodate the future growth of the enterprise and the projected identity transactions over several years. Aligning the implementation schedule of OIM to related information technology projects also increases the chances of success.

As a baseline, we recommend matching hardware specifications to the sizing guide for R2 published by Oracle; adhering to it helps ensure that the hardware supporting OIM does not become a bottleneck as the adoption of new services increases. If your organization has numerous connected applications that rely on reconciliation to synchronize access data into OIM, consider hosting dedicated instances to handle reconciliation. Finally, use a clustered environment for development and have at least three environments in total to facilitate a controlled migration to production.

If your organization is planning to implement role-based access control, we recommend performing a role-mining exercise and consolidating your enterprise roles to keep them manageable. In addition, many organizations have multiple approval flows to control access to critical roles, applications and entitlements; if yours falls into this category, we highly recommend limiting the number of approval workflows to a small set. Many organizations also run operations across data centers with back-end database synchronization; in that case, ensure that the overall latency between the data centers when replicating the databases is less than ten milliseconds so that there is no front-office performance impact.

Ingredients for a successful implementation

During the development phase of your project, a number of guidelines can be followed to help increase the chances of success. Most implementations cannot be completed without customizations; if yours requires them, it is good practice to perform code reviews to help ensure quality and to reduce performance bottlenecks in the code. We have observed at our clients that the development process works best when team members adhere to leading coding practices. Plan time to correct coding defects and ensure developers are empowered to report their own bugs for maximum transparency.
Many organizations struggle to define a consistent approach to managing logs, which is particularly important given the amount of information that OIM can log. We recommend using Oracle Diagnostic Logging (ODL) for this purpose: ODL allows log files to be formatted in XML for easy parsing and does not require a server restart when log levels are changed during troubleshooting.

Testing is a vital part of any large project, and an OIM R2 implementation is no exception. We suggest that at least one lower environment use production-like data and connectors, with configurations matching production as closely as possible. For example, use secure channels between OIM and target platforms in pre-production environments to test the configurations, the certificate migration process, and the additional overhead that encryption could impose. Finally, we ask our clients to perform database backups regularly and before any major change event, such as a patch or a migration between environments. Even in the lowest environments, we recommend at least a weekly backup to prevent significant loss of time and effort. Similarly, if your organization uses virtual machines for one or more environments, take frequent snapshots so that you can roll back in the event of improper configuration.

Operate and sustain the solution to derive maximum benefits

When migrating OIM R2 to production, certain activities help achieve a smoother transition. At our clients, we have seen that splitting the OIM tables into their own tablespaces by category (physical tables, indexes, etc.) can help manage database growth effectively. If a client has not enabled the Oracle-recommended indexing in the applicable database, we strongly suggest doing so to improve performance. Additionally, we work with our clients to make sure that the audit level is set to fit the organization's auditing needs, and we sometimes allocate the UPA tables and indexes to their own tablespace for easier maintenance. Finally, many of our clients schedule the reconciliation tables to be archived at regular intervals to keep the size of the database(s) reasonable and performance optimal.

For clients that anticipate availability issues with target applications, we strongly encourage the use of the offline provisioning capabilities of OIM R2. This reduces the provisioning process's dependency on target availability for a given application and helps avoid broken workflows. To account for this and other abnormalities, we also advocate configuring OIM's monitoring controls to alert administrators to any abnormal situations. Within OIM R2, we have begun advising our clients to use the 'profile' feature to encapsulate multiple commonly requested accounts, roles and/or entitlements in a single item; by setting up a number of profiles that can be searched for and used, users spend less time repeating the same steps for common tasks.

We also advise our clients to follow the Oracle-recommended guides for database and application server tuning, which provide a good baseline configuration. They offer guidance on database connection pools, connection timeouts, user-interface threads and the proper handling of adapters/plug-ins, all of which are important configurations for faster provisioning and web-page response times.
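    As a rough illustration of the ODL recommendation above, an ODL logger entry in the application server's logging.xml might look like the sketch below. The logger name, level and handler name here are placeholders chosen for illustration (they are not taken from the post) and would need to match your own environment and Oracle's documentation.

    <!-- Hypothetical ODL logger entry; name, level and handler are illustrative only -->
    <logger name="XELLERATE" level="WARNING:1" useParentHandlers="false">
      <handler name="odl-handler"/>
    </logger>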
Many of our clients have begun to recognize the value of data mining and a remediation process both during the initial phases of an implementation (to help ensure that high-quality data gets loaded) and beyond (to support ongoing maintenance and business-as-usual processes). A successful program always begins with identifying the data elements and assigning each a classification level based on criticality, risk and availability, and it should finish by following through with a remediation process.

Dos and Don'ts

Here are the most common dos and don'ts that we socialize with our clients, derived from our experience implementing the solution.

Dos:
- Scope the project into phases with realistic goals, and look for quick wins to show success and value to the stakeholders.
- Establish an enterprise ID (a universally unique ID across the enterprise) early in the program.
- Have a plan in place to patch during the project, which helps alleviate any major issues or roadblocks (product and database).
- Assess your current state and prepare a roadmap to address your operational, tactical and strategic goals, and align it with your business priorities.
- Defer complex integrations to later phases and take advantage of lessons learned from previous phases.
- Build an identity and access data quality initiative into your plan to identify and remediate data-related issues early on.
- Identify an owner of the identity systems with fair IdM knowledge and empower them with the authority to make product-related decisions; this will help overcome any design hurdles.
- Shadow your internal or external consulting resources during the implementation to build the product skills needed to operate and sustain the solution.

Don'ts:
- Avoid "boiling the ocean" by trying to integrate all enterprise applications in the first phase.
- Avoid major UI customizations that require code changes.
- Avoid publishing all the target entitlements if you do not anticipate their use during access requests.
- Avoid integrating non-production environments with your production target systems.
- Avoid creating multiple accounts for the same user on the same system wherever possible.
- Avoid creating complex approval workflows that would negatively impact productivity and SLAs.
- Avoid creating complex designs that are not sustainable long term and would need a major overhaul during upgrades.
- Avoid treating IAM as a point solution, and have an appropriate level of communication and training planned for IT and business users alike.

Conclusion

In our experience, identity programs often struggle with scope, proper resourcing, and more. We suggest that companies consider the suggestions discussed in this post and leverage them to help enable their identity and access program. This concludes PwC's blog series on R2 for the month, and we sincerely hope that the information we have shared thus far has been beneficial. For more information, or if you have questions, you can reach out to Rex Thexton, Senior Managing Director, PwC, or Dharma Padala, Director, PwC. We look forward to hearing from you.
Meet the Writers:

Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries, including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience delivering IT solutions, of which the past 8 have been spent implementing Identity Management solutions.

Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries, including financial services, health care, automotive and retail. Scott has 10 years of experience delivering Identity Management solutions.

John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Read the article

  • I need help on my C++ assignment using MS Visual C++

    - by krayzwytie
    Ok, so I don't want you to do my homework for me, but I'm a little lost with this final assignment and need all the help I can get. Learning about programming is tough enough, but doing it online is next to impossible for me... Now, to get to the program, I am going to paste what I have so far. This includes mostly //comments and what I have written so far. If you can help me figure out where all the errors are and how to complete the assignment, I will really appreciate it. Like I said, I don't want you to do my homework for me (it's my final), but any constructive criticism is welcome. This is my final assignment for this class and it is due tomorrow (Sunday before midnight, Arizona time). This is the assignment: Examine the following situation: o Your company, Datamax, Inc., is in the process of automating its payroll systems. Your manager has asked you to create a program that calculates overtime pay for all employees. Your program must take into account the employee’s salary, total hours worked, and hours worked more than 40 in a week, and then provide an output that is useful and easily understood by company management. • Compile your program utilizing the following background information and the code outline in Appendix D (included in the code section). • Submit your project as an attachment including the code and the output. Company Background: o Three employees: Mark, John, and Mary o The end user needs to be prompted for three specific pieces of input—name, hours worked, and hourly wage. o Calculate overtime if input is greater than 40 hours per week. o Provide six test plans to verify the logic within the program. o Plan 1 must display the proper information for employee #1 with overtime pay. o Plan 2 must display the proper information for employee #1 with no overtime pay. o Plans 3-6 are duplicates of plan 1 and 2 but for the other employees. Program Requirements: o Define a base class to use for the entire program. o The class holds the function calls and the variables related to the overtime pay calculations. o Define one object per employee. Note there will be three employees. o Your program must take the objects created and implement calculations based on total salaries, total hours, and the total number of overtime hours. See the Employee Summary Data section of the sample output. Logic Steps to Complete Your Program: o Define your base class. o Define your objects from your base class. o Prompt for user input, updating your object classes for all three users. o Implement your overtime pay calculations. o Display overtime or regular time pay calculations. See the sample output below. o Implement object calculations by summarizing your employee objects and display the summary information in the example below. And this is the code: // Final_Project.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include <iostream> #include <string> #include <iomanip> using namespace std; // //CLASS DECLARATION SECTION // class CEmployee { public: void ImplementCalculations(string EmployeeName, double hours, double wage); void DisplayEmployInformation(void); void Addsomethingup (CEmployee, CEmployee, CEmployee); string EmployeeName ; int hours ; int overtime_hours ; int iTotal_hours ; int iTotal_OvertimeHours ; float wage ; float basepay ; float overtime_pay ; float overtime_extra ; float iTotal_salaries ; float iIndividualSalary ; }; int main() { system("cls"); cout << "Welcome to the Employee Pay Center"; /* Use this section to define your objects. 
You will have one object per employee. You have only three employees. The format is your class name and your object name. */ std::cout << "Please enter Employee's Name: "; std::cin >> EmployeeName; std::cout << "Please enter Total Hours for (EmployeeName): "; std::cin >> hours; std::cout << "Please enter Base Pay for(EmployeeName): "; std::cin >> basepay; /* Here you will prompt for the first employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Example of Prompts Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will prompt for the second employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will prompt for the third employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will implement a function call to implement the employ calcuations for each object defined above. You will do this for each of the three employees or objects. The format for this step is the following: [(object name.function name(objectname.name, objectname.hours, objectname.wage)] ; */ /* This section you will send all three objects to a function that will add up the the following information: - Total Employee Salaries - Total Employee Hours - Total Overtime Hours The format for this function is the following: - Define a new object. - Implement function call [objectname.functionname(object name 1, object name 2, object name 3)] /* } //End of Main Function void CEmployee::ImplementCalculations (string EmployeeName, double hours, double wage){ //Initialize overtime variables overtime_hours=0; overtime_pay=0; overtime_extra=0; if (hours > 40) { /* This section is for the basic calculations for calculating overtime pay. - base pay = 40 hours times the hourly wage - overtime hours = hours worked – 40 - overtime pay = hourly wage * 1.5 - overtime extra pay over 40 = overtime hours * overtime pay - salary = overtime money over 40 hours + your base pay */ /* Implement function call to output the employee information. Function is defined below. */ } // if (hours > 40) else { /* Here you are going to calculate the hours less than 40 hours. - Your base pay is = your hours worked times your wage - Salary = your base pay */ /* Implement function call to output the employee information. Function is defined below. */ } // End of the else } //End of Primary Function void CEmployee::DisplayEmployInformation(); { // This function displays all the employee output information. /* This is your cout statements to display the employee information: Employee Name ............. = Base Pay .................. = Hours in Overtime ......... = Overtime Pay Amount........ = Total Pay ................. = */ } // END OF Display Employee Information void CEmployee::Addsomethingup (CEmployee Employ1, CEmployee Employ2) { // Adds two objects of class Employee passed as // function arguments and saves them as the calling object's data member values. /* Add the total hours for objects 1, 2, and 3. Add the salaries for each object. Add the total overtime hours. 
*/ /* Then display the information below. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%% EMPLOYEE SUMMARY DATA%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%% Total Employee Salaries ..... = 576.43 %%%% Total Employee Hours ........ = 108 %%%% Total Overtime Hours......... = 5 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% */ } // End of function
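    A minimal sketch of how the pieces in the outline above could fit together is shown below. It is illustrative only (member names follow the question's outline, but the prompting, formatting and totals are simplified rather than the assignment's exact required output) and assumes C++11 or later. It also sidesteps the compile problems in the posted code, such as calling std::cin >> EmployeeName without an object, the stray semicolon after DisplayEmployInformation() in the definition, and passing only two employees to the summary function.

    // Illustrative sketch only -- not the assignment's required solution.
    #include <iostream>
    #include <string>
    using namespace std;

    class CEmployee {
    public:
        string name;
        double hours = 0, wage = 0, basepay = 0, overtime_hours = 0, salary = 0;

        void PromptForInput() {
            cout << "Enter the employee name = ";
            cin >> ws;              // skip any leftover whitespace from a previous read
            getline(cin, name);     // allow names containing spaces
            cout << "Enter the hours worked = ";
            cin >> hours;
            cout << "Enter his or her hourly wage = ";
            cin >> wage;
        }

        void ImplementCalculations() {
            if (hours > 40) {
                basepay        = 40 * wage;
                overtime_hours = hours - 40;
                salary         = basepay + overtime_hours * (wage * 1.5); // time and a half
            } else {
                basepay        = hours * wage;
                overtime_hours = 0;
                salary         = basepay;
            }
        }

        void DisplayEmployInformation() const {
            cout << "Employee Name ....... = " << name << "\n"
                 << "Base Pay ............ = " << basepay << "\n"
                 << "Hours in Overtime ... = " << overtime_hours << "\n"
                 << "Total Pay ........... = " << salary << "\n\n";
        }
    };

    int main() {
        CEmployee employees[3];                       // three employees: Mark, John, Mary
        double totalSalaries = 0, totalHours = 0, totalOvertime = 0;

        for (CEmployee &e : employees) {
            e.PromptForInput();
            e.ImplementCalculations();
            e.DisplayEmployInformation();
            totalSalaries += e.salary;
            totalHours    += e.hours;
            totalOvertime += e.overtime_hours;
        }

        cout << "%%%% EMPLOYEE SUMMARY DATA %%%%\n"
             << "Total Employee Salaries ..... = " << totalSalaries << "\n"
             << "Total Employee Hours ........ = " << totalHours << "\n"
             << "Total Overtime Hours ........ = " << totalOvertime << "\n";
        return 0;
    }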

    Read the article

  • How to group using XSLT

    - by AdRock
    I'm having trouble grouping a set of nodes. I've found an article that does work with grouping and i have tested it and it works on a small test stylesheet i have I now need to use it in my stylesheet where I only want to select node sets that have a specific value. What I want to do in my stylesheet is select all users who have a userlevel of 2 then to group them by the volunteer region. What happens at the minute is that it gets the right amount of users with userlevel 2 but doesn't print them. It just repeats the first user in the xml file. <?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:key name="volunteers-by-region" match="volunteer" use="region" /> <xsl:template name="hoo" match="/"> <html> <head> <title>Registered Volunteers</title> <link rel="stylesheet" type="text/css" href="volunteer.css" /> </head> <body> <h1>Registered Volunteers</h1> <h3>Ordered by the username ascending</h3> <xsl:for-each select="folktask/member[user/account/userlevel='2']"> <xsl:for-each select="volunteer[count(. | key('volunteers-by-region', region)[1]) = 1]"> <xsl:sort select="region" /> <xsl:for-each select="key('volunteers-by-region', region)"> <xsl:sort select="folktask/member/user/personal/name" /> <div class="userdiv"> <xsl:call-template name="member_userid"> <xsl:with-param name="myid" select="/folktask/member/user/@id" /> </xsl:call-template> <xsl:call-template name="volunteer_volid"> <xsl:with-param name="volid" select="/folktask/member/volunteer/@id" /> </xsl:call-template> <xsl:call-template name="volunteer_role"> <xsl:with-param name="volrole" select="/folktask/member/volunteer/roles" /> </xsl:call-template> <xsl:call-template name="volunteer_region"> <xsl:with-param name="volloc" select="/folktask/member/volunteer/region" /> </xsl:call-template> </div> </xsl:for-each> </xsl:for-each> </xsl:for-each> <xsl:if test="position()=last()"> <div class="count"><h2>Total number of volunteers: <xsl:value-of select="count(/folktask/member/user/account/userlevel[text()=2])"/></h2></div> </xsl:if> </body> </html> </xsl:template> <xsl:template name="member_userid"> <xsl:param name="myid" select="'Not Available'" /> <div class="heading bold"><h2>USER ID: <xsl:value-of select="$myid" /></h2></div> </xsl:template> <xsl:template name="volunteer_volid"> <xsl:param name="volid" select="'Not Available'" /> <div class="heading2 bold"><h2>VOLUNTEER ID: <xsl:value-of select="$volid" /></h2></div> </xsl:template> <xsl:template name="volunteer_role"> <xsl:param name="volrole" select="'Not Available'" /> <div class="small bold">ROLES:</div> <div class="large"> <xsl:choose> <xsl:when test="string-length($volrole)!=0"> <xsl:value-of select="$volrole" /> </xsl:when> <xsl:otherwise> <xsl:text> </xsl:text> </xsl:otherwise> </xsl:choose> </div> </xsl:template> <xsl:template name="volunteer_region"> <xsl:param name="volloc" select="'Not Available'" /> <div class="small bold">REGION:</div> <div class="large"><xsl:value-of select="$volloc" /></div> </xsl:template> </xsl:stylesheet> here is my full xml file <?xml version="1.0" encoding="ISO-8859-1" ?> <?xml-stylesheet type="text/xsl" href="volunteers.xsl"?> <folktask xmlns:xs="http://www.w3.org/2001/XMLSchema-instance" xs:noNamespaceSchemaLocation="folktask.xsd"> <member> <user id="1"> <personal> <name>Abbie Hunt</name> <sex>Female</sex> <address1>108 Access Road</address1> <address2></address2> <city>Wells</city> <county>Somerset</county> <postcode>BA5 8GH</postcode> 
<telephone>01528927616</telephone> <mobile>07085252492</mobile> <email>[email protected]</email> </personal> <account> <username>AdRock</username> <password>269eb625e2f0cf6fae9a29434c12a89f</password> <userlevel>4</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="1"> <roles></roles> <region>South West</region> </volunteer> </member> <member> <user id="2"> <personal> <name>Aidan Harris</name> <sex>Male</sex> <address1>103 Aiken Street</address1> <address2></address2> <city>Chichester</city> <county>Sussex</county> <postcode>PO19 4DS</postcode> <telephone>01905149894</telephone> <mobile>07784467941</mobile> <email>[email protected]</email> </personal> <account> <username>AmbientExpert</username> <password>8e64214160e9dd14ae2a6d9f700004a6</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="2"> <roles>Van Driver,gas Fitter</roles> <region>South Central</region> </volunteer> </member> <member> <user id="3"> <personal> <name>Skye Saunders</name> <sex>Female</sex> <address1>31 Anns Court</address1> <address2></address2> <city>Cirencester</city> <county>Gloucestershire</county> <postcode>GL7 1JG</postcode> <telephone>01958303514</telephone> <mobile>07260491667</mobile> <email>[email protected]</email> </personal> <account> <username>BigUndecided</username> <password>ea297847f80e046ca24a8621f4068594</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="3"> <roles>Scaffold Erector</roles> <region>South West</region> </volunteer> </member> <member> <user id="4"> <personal> <name>Connor Lawson</name> <sex>Male</sex> <address1>12 Ash Way</address1> <address2></address2> <city>Swindon</city> <county>Wiltshire</county> <postcode>SN3 6GS</postcode> <telephone>01791928119</telephone> <mobile>07338695664</mobile> <email>[email protected]</email> </personal> <account> <username>iTuneStinker</username> <password>3a1f5fda21a07bfff20c41272bae7192</password> <userlevel>3</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <festival id="1"> <event> <eventname>Oxford Folk Festival</eventname> <url>http://www.oxfordfolkfestival.com/</url> <datefrom>2010-04-07</datefrom> <dateto>2010-04-09</dateto> <location>Oxford</location> <eventpostcode>OX1 9BE</eventpostcode> <additional>Oxford Folk Festival is going into it's third year in 2006. As well as needing volunteers to steward for the event on the weekend itself, we would be delighted to hear from people willing to help in year round festival work such as stuffing envelopes for mailings, poster and leaflet distribution, and stewarding duties at festival pre-events.</additional> <coords> <lat>51.735640</lat> <lng>-1.276136</lng> </coords> </event> <contact> <conname>Stuart Vincent</conname> <conaddress1>P.O. 
Box 642</conaddress1> <conaddress2></conaddress2> <concity>Oxford</concity> <concounty>Bedfordshire</concounty> <conpostcode>OX1 3BY</conpostcode> <contelephone>01865 79073</contelephone> <conmobile></conmobile> <fax></fax> <conemail>[email protected]</conemail> </contact> </festival> </member> <member> <user id="5"> <personal> <name>Lewis King</name> <sex>Male</sex> <address1>67 Arbors Way</address1> <address2></address2> <city>Sherborne</city> <county>Dorset</county> <postcode>DT9 0GS</postcode> <telephone>01446139701</telephone> <mobile>07292614033</mobile> <email>[email protected]</email> </personal> <account> <username>Runninglife</username> <password>98fab0a27c34ddb2b0618bc184d4331d</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="4"> <roles>Van Driver</roles> <region>South West</region> </volunteer> </member> <member> <user id="6"> <personal> <name>Cameron Lee</name> <sex>Male</sex> <address1>77 Arrington Road</address1> <address2></address2> <city>Solihull</city> <county>Warwickshire</county> <postcode>B90 6FG</postcode> <telephone>01435158624</telephone> <mobile>07789503179</mobile> <email>[email protected]</email> </personal> <account> <username>love2Mixer</username> <password>1df752d54876928639cea07ce036a9c0</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="5"> <roles>Fire Warden</roles> <region>Midlands</region> </volunteer> </member> <member> <user id="7"> <personal> <name>Lexie Dean</name> <sex>Female</sex> <address1>38 Bloomfield Court</address1> <address2></address2> <city>Windermere</city> <county>Westmorland</county> <postcode>LA23 8BM</postcode> <telephone>01781207188</telephone> <mobile>07127461231</mobile> <email>[email protected]</email> </personal> <account> <username>MailNetworker</username> <password>0e070701839e612bf46af4421db4f44b</password> <userlevel>3</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <festival id="2"> <event> <eventname>Middlewich Folk And Boat Festival</eventname> <url>http://midfest.org.uk/mfab/</url> <datefrom>2010-06-16</datefrom> <dateto>2010-06-18</dateto> <location>Middlewich</location> <eventpostcode>CW10 9BX</eventpostcode> <additional>We welcome stewards staying on campsite or boats.</additional> <coords> <lat>53.190562</lat> <lng>-2.444926</lng> </coords> </event> <contact> <conname>Festival Committee</conname> <conaddress1>PO Box 141</conaddress1> <conaddress2></conaddress2> <concity>Winsford</concity> <concounty>Cheshire</concounty> <conpostcode>CW10 9WB</conpostcode> <contelephone>07092 39050</contelephone> <conmobile>07092 39050</conmobile> <fax></fax> <conemail>[email protected]</conemail> </contact> </festival> </member> <member> <user id="8"> <personal> <name>Liam Chapman</name> <sex>Male</sex> <address1>99 Black Water Drive</address1> <address2></address2> <city>St.Austell</city> <county>Cornwall</county> <postcode>PL25 3GF</postcode> <telephone>01835629418</telephone> <mobile>07695179069</mobile> <email>[email protected]</email> </personal> <account> <username>GreenWimp</username> <password>1fe3df99a841dc4f723d21af89e0990f</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="9"> <personal> <name>Brandon Harrison</name> <sex>Male</sex> <address1>41 Arlington Way</address1> <address2></address2> <city>Dorchester</city> <county>Dorset</county> <postcode>DT1 3JS</postcode> 
<telephone>01293626735</telephone> <mobile>07277145760</mobile> <email>[email protected]</email> </personal> <account> <username>LovelyStar</username> <password>8b53b66f323aa5e6a083edb4fd44456b</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="10"> <personal> <name>Samuel Young</name> <sex>Male</sex> <address1>102 Bailey Hill Road</address1> <address2></address2> <city>Wolverhampton</city> <county>Staffordshire</county> <postcode>WV7 8HS</postcode> <telephone>01594531382</telephone> <mobile>07544663654</mobile> <email>[email protected]</email> </personal> <account> <username>GuruSassy</username> <password>00da02da6c143c3d136bf60b8bfcf43e</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="6"> <roles>Fire Warden</roles> <region>Midlands</region> </volunteer> </member> <member> <user id="11"> <personal> <name>Alexander Harris</name> <sex>Male</sex> <address1>93 Beguine Drive</address1> <address2></address2> <city>Winchester</city> <county>Hampshire</county> <postcode>S23 2FD</postcode> <telephone>01452496582</telephone> <mobile>07353867291</mobile> <email>[email protected]</email> </personal> <account> <username>GuitarExpert</username> <password>0102ad3740028e155925e9918ead3bde</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="7"> <roles>Scaffold Erector</roles> <region>North East</region> </volunteer> </member> <member> <user id="12"> <personal> <name>Tyler Mcdonald</name> <sex>Male</sex> <address1>44 Baker Road</address1> <address2></address2> <city>Bromley</city> <county>Kent</county> <postcode>BR1 2GD</postcode> <telephone>01918704546</telephone> <mobile>07314062451</mobile> <email>[email protected]</email> </personal> <account> <username>WildWish</username> <password>073220bb5e9a12ad202bb7d94dcc86f7</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="13"> <personal> <name>Skye Mason</name> <sex>Female</sex> <address1>56 Cedar Creek Church Road</address1> <address2></address2> <city>Bracknell</city> <county>Berkshire</county> <postcode>RG12 1AQ</postcode> <telephone>01787607618</telephone> <mobile>07540218868</mobile> <email>[email protected]</email> </personal> <account> <username>PizzaDork</username> <password>74c54937ee7051ee7f4ebc11296ed531</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="14"> <personal> <name>Maryam Rose</name> <sex>Female</sex> <address1>98 Baptist Circle</address1> <address2></address2> <city>Newbury</city> <county>Berkshire</county> <postcode>RG14 8DF</postcode> <telephone>01691317999</telephone> <mobile>07212477154</mobile> <email>[email protected]</email> </personal> <account> <username>SexTech</username> <password>f1c21f9f1e999da97d7dc460bb876fcf</password> <userlevel>3</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <festival id="3"> <event> <eventname>Birdsedge Village Festival</eventname> <url>http://www.birdsedge.co.uk/</url> <datefrom>2010-07-08</datefrom> <dateto>2010-07-09</dateto> <location>Birdsedge</location> <eventpostcode>HD8 8XT</eventpostcode> <additional></additional> <coords> <lat>53.565644</lat> <lng>-1.696196</lng> </coords> </event> <contact> <conname>Jacey Bedford</conname> <conaddress1>Penistone Road</conaddress1> <conaddress2>Birdsedge</conaddress2> 
<concity>Huddersfield</concity> <concounty>West Yorkshire</concounty> <conpostcode>HD8 8XT</conpostcode> <contelephone>01484 60623</contelephone> <conmobile></conmobile> <fax></fax> <conemail>[email protected]</conemail> </contact> </festival> </member> <member> <user id="15"> <personal> <name>Lexie Rogers</name> <sex>Female</sex> <address1>38 Bishop Road</address1> <address2></address2> <city>Matlock</city> <county>Derbyshire</county> <postcode>DE4 1BX</postcode> <telephone>01961168823</telephone> <mobile>07170855351</mobile> <email>[email protected]</email> </personal> <account> <username>ShipBurglar</username> <password>cc190488a95667cb117e20bc6c7c330e</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="8"> <roles>Gas Fitter</roles> <region>Midlands</region> </volunteer> </member> <member> <user id="16"> <personal> <name>Noah Parker</name> <sex>Male</sex> <address1>112 Canty Road</address1> <address2></address2> <city>Keswick</city> <county>Cumberland</county> <postcode>CA12 4TR</postcode> <telephone>01931272522</telephone> <mobile>07610026576</mobile> <email>[email protected]</email> </personal> <account> <username>AwsomeMoon</username> <password>50b770539bdf08543f15778fc7a6f188</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="9"> <roles>Van Driver</roles> <region>North West</region> </volunteer> </member> <member> <user id="17"> <personal> <name>Elliot Mitchell</name> <sex>Male</sex> <address1>102 Brown Loop</address1> <address2></address2> <city>Grimsby</city> <county>Lincolnshire</county> <postcode>OX16 4QP</postcode> <telephone>01212971319</telephone> <mobile>07544663654</mobile> <email>[email protected]</email> </personal> <account> <username>msBasher</username> <password>c38fad85badcdff6e3559ef38656305d</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="18"> <personal> <name>Scarlett Rose</name> <sex>Female</sex> <address1>93 Cedar Lane</address1> <address2></address2> <city>Stourbridge</city> <county>Warminster</county> <postcode>DY8 4NX</postcode> <telephone>01537477435</telephone> <mobile>07353867291</mobile> <email>[email protected]</email> </personal> <account> <username>MakeupWimp</username> <password>16a9b7910fc34304c1d1a6a1b0c31502</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="19"> <personal> <name>Katie Butler</name> <sex>Female</sex> <address1>44 Boulder Crest Road</address1> <address2></address2> <city>Bungay</city> <county>Suffolk</county> <postcode>NR35 1LT</postcode> <telephone>01419124094</telephone> <mobile>07314062451</mobile> <email>[email protected]</email> </personal> <account> <username>TomatoCrunch</username> <password>d7eba53443ec4ddcee69ed71b2023fc0</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="20"> <personal> <name>Jayden Richards</name> <sex>Male</sex> <address1>56 Corson Trail</address1> <address2></address2> <city>Sandy</city> <county>Bedfordshire</county> <postcode>SG19 6DF</postcode> <telephone>01882134438</telephone> <mobile>07540218868</mobile> <email>[email protected]</email> </personal> <account> <username>nightmareTwig</username> <password>8a9c08c7b6473493e8a5da15dd541025</password> <userlevel>3</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <festival 
id="4"> <event> <eventname>East Barnet Festival</eventname> <url>http://www.eastbarnetfestival.org.uk</url> <datefrom>2010-07-01</datefrom> <dateto>2010-07-03</dateto> <location>East Barnet</location> <eventpostcode>EN4 8TB</eventpostcode> <additional></additional> <coords> <lat>51.641556</lat> <lng>-0.163018</lng> </coords> </event> <contact> <conname>East Barnet Festival Commitee</conname> <conaddress1>Oak Hill Park</conaddress1> <conaddress2>Church Hill Road</conaddress2> <concity>East Barnet</concity> <concounty>Hertfordshire</concounty> <conpostcode>EN4 8TB</conpostcode> <contelephone>07071781745</contelephone> <conmobile>07071781745</conmobile> <fax></fax> <conemail>[email protected]</conemail> </contact> </festival> </member> <member> <user id="21"> <personal> <name>Abbie Jackson</name> <sex>Female</sex> <address1>98 Briarwood Lane</address1> <address2></address2> <city>Weymouth</city> <county>Dorset</county> <postcode>DT3 6TS</postcode> <telephone>01575629969</telephone> <mobile>07212477154</mobile> <email>[email protected]</email> </personal> <account> <username>CrazyBlockhead</username> <password>4ce56fb13d043be605037ace4fbd9fa5</password> <userlevel>2</u
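    One possible fix, sketched below, is to key the already-filtered members by their volunteer region (Muenchian grouping) and then use paths relative to the current member inside the inner loop; the absolute /folktask/member/... paths in the original with-param values always select the first member in the document, which is why the same user is repeated. This is only a sketch of the grouping portion, not a full stylesheet, and it reuses the question's own template names.

    <!-- xsl:key is declared at the top level of the stylesheet -->
    <xsl:key name="vol2-by-region"
             match="member[user/account/userlevel='2']"
             use="volunteer/region"/>

    <!-- inside the template -->
    <xsl:for-each select="folktask/member[user/account/userlevel='2']
                          [count(. | key('vol2-by-region', volunteer/region)[1]) = 1]">
      <xsl:sort select="volunteer/region"/>
      <h2><xsl:value-of select="volunteer/region"/></h2>
      <xsl:for-each select="key('vol2-by-region', volunteer/region)">
        <xsl:sort select="user/personal/name"/>
        <div class="userdiv">
          <!-- relative paths: the context node is the current member, not the document root -->
          <xsl:call-template name="member_userid">
            <xsl:with-param name="myid" select="user/@id"/>
          </xsl:call-template>
          <xsl:call-template name="volunteer_volid">
            <xsl:with-param name="volid" select="volunteer/@id"/>
          </xsl:call-template>
          <xsl:call-template name="volunteer_role">
            <xsl:with-param name="volrole" select="volunteer/roles"/>
          </xsl:call-template>
          <xsl:call-template name="volunteer_region">
            <xsl:with-param name="volloc" select="volunteer/region"/>
          </xsl:call-template>
        </div>
      </xsl:for-each>
    </xsl:for-each>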

    Read the article

  • Form contents not showing in email

    - by fmz
    This is a followup to a question I posted yesterday. I thought everything was working fine, but today, I am not getting any results in the email from the drop down field. Here is the form code in question: <label for="purpose"><span class="required">*</span> Purpose</label> <select id="purpose" name="purpose" style="width: 300px; height:35px;"> <option value="" selected="selected">-- Select One --</option> <option value="I am interested in your services">I am interested in your services!</option> <option value="I am interested in a partnership">I am interested in a partnership!</option> <option value="I am interested in a job">I am interested in a job!</option> </select> It is then processed in PHP and should output the selected option to an email, however the Reason for Contact line always comes through with nothing in it. Here is the PHP code: <?php if(!$_POST) exit; $name = $_POST['name']; $company = $_POST['company']; $email = $_POST['email']; $phone = $_POST['phone']; $purpose = $_POST['purpose']; $comments = $_POST['comments']; $verify = $_POST['verify']; if(trim($name) == '') { echo '<div class="error_message">Attention! You must enter your name.</div>'; exit(); } else if(trim($email) == '') { echo '<div class="error_message">Attention! Please enter a valid email address.</div>'; exit(); } else if(trim($phone) == '') { echo '<div class="error_message">Attention! Please enter a valid phone number.</div>'; exit(); } else if(!isEmail($email)) { echo '<div class="error_message">Attention! You have enter an invalid e-mail address, try again.</div>'; exit(); } if(trim($comments) == '') { echo '<div class="error_message">Attention! Please enter your message.</div>'; exit(); } else if(trim($verify) == '') { echo '<div class="error_message">Attention! Please enter the verification number.</div>'; exit(); } else if(trim($verify) != '4') { echo '<div class="error_message">Attention! The verification number you entered is incorrect.</div>'; exit(); } if($error == '') { if(get_magic_quotes_gpc()) { $comments = stripslashes($comments); } // Configuration option. // Enter the email address that you want to emails to be sent to. // Example $address = "[email protected]"; $address = "[email protected]"; // Configuration option. // i.e. The standard subject will appear as, "You've been contacted by John Doe." // Example, $e_subject = '$name . ' has contacted you via Your Website.'; $e_subject = 'You\'ve been contacted by ' . $name . '.'; // Configuration option. // You can change this if you feel that you need to. // Developers, you may wish to add more fields to the form, in which case you must be sure to add them here. $e_body = "You have been contacted by $name.\r\n\n"; $e_company = "Company: $company\r\n\n"; $e_content = "Comments: \"$comments\"\r\n\n"; $e_purpose = "Reason for contact: $purpose\r\n\n"; $e_reply = "You can contact $name via email, $email or via phone $phone"; $msg = $e_body . $e_content . $e_company . $e_purpose . $e_reply; if(mail($address, $e_subject, $msg, "From: $email\r\nReply-To: $email\r\nReturn-Path: $email\r\n")) { // Email has sent successfully, echo a success page. echo "<fieldset>"; echo "<div id='success_page'>"; echo "<h1>Email Sent Successfully.</h1>"; echo "<p>Thank you <strong>$name</strong>, your message has been submitted to us.</p>"; echo "</div>"; echo "</fieldset>"; } else { echo 'ERROR!'; } } function isEmail($email) { // Email address verification, do not edit. 
return(preg_match("/^[-_.[:alnum:]]+@((([[:alnum:]]|[[:alnum:]][[:alnum:]-]*[[:alnum:]])\.)+(ad|ae|aero|af|ag|ai|al|am|an|ao|aq|ar|arpa|as|at|au|aw|az|ba|bb|bd|be|bf|bg|bh|bi|biz|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|com|coop|cr|cs|cu|cv|cx|cy|cz|de|dj|dk|dm|do|dz|ec|edu|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gh|gi|gl|gm|gn|gov|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|in|info|int|io|iq|ir|is|it|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|mg|mh|mil|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|museum|mv|mw|mx|my|mz|na|name|nc|ne|net|nf|ng|ni|nl|no|np|nr|nt|nu|nz|om|org|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|pro|ps|pt|pw|py|qa|re|ro|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|sk|sl|sm|sn|so|sr|st|su|sv|sy|sz|tc|td|tf|tg|th|tj|tk|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|um|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)$|(([0-9][0-9]?|[0-1][0-9][0-9]|[2][0-4][0-9]|[2][5][0-5])\.){3}([0-9][0-9]?|[0-1][0-9][0-9]|[2][0-4][0-9]|[2][5][0-5]))$/i",$email)); } ?> Any assistance would be greatly appreciated. Thanks!
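    A small, illustrative debugging step (not a known fix for this particular form) is to dump what actually arrives in $_POST before the email body is built, and to guard the field with isset(). If 'purpose' is missing from the dump, the usual culprits are the <select> sitting outside the <form> tags or a JavaScript submit handler that does not serialize it.

    <?php
    // Sketch: confirm what the browser really posted before building the email.
    error_log('contact form POST: ' . print_r($_POST, true));

    $purpose = isset($_POST['purpose']) ? trim($_POST['purpose']) : '';
    if ($purpose === '') {
        echo '<div class="error_message">Attention! Please select a purpose.</div>';
        exit();
    }
    ?>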

    Read the article

  • Wikipedia API: list=alllinks confusion

    - by Chris Salij
    I'm doing a research project for the summer and I've got to use get some data from Wikipedia, store it and then do some analysis on it. I'm using the Wikipedia API to gather the data and I've got that down pretty well. What my questions is in regards to the links-alllinks option in the API doc here After reading the description, both there and in the API itself (it's down and bit and I can't link directly to the section), I think I understand what it's supposed to return. However when I ran a query it gave me back something I didn't expect. Here's the query I ran: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=google&rvprop=ids|timestamp|user|comment|content&rvlimit=1&list=alllinks&alunique&allimit=40&format=xml Which in essence says: Get the last revision of the Google page, include the id, timestamp, user, comment and content of each revision, and return it in XML format. The allinks (I thought) should give me back a list of wikipedia pages which point to the google page (In this case the first 40 unique ones). I'm not sure what the policy is on swears, but this is the result I got back exactly: <?xml version="1.0"?> <api> <query><normalized> <n from="google" to="Google" /> </normalized> <pages> <page pageid="1092923" ns="0" title="Google"> <revisions> <rev revid="366826294" parentid="366673948" user="Citation bot" timestamp="2010-06-08T17:18:31Z" comment="Citations: [161]Tweaked: url. [[User:Mono|Mono]]" xml:space="preserve"> <!-- The page content, I've replaced this cos its not of interest --> </rev> </revisions> </page> </pages> <alllinks> <l ns="0" title="!" /> <l ns="0" title="!!" /> <l ns="0" title="!!!" /> <l ns="0" title="!!!!" /> <l ns="0" title="!!!!!!!!!!!!!!!!!!!!!" /> <l ns="0" title="!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" /> <l ns="0" title="!!!!!!!!!!!!!!!!!!!!*was up all u hater just stopingby to show u some love*!!!!!!!!!!!!!!!!!!!!!!!!!!!" /> <l ns="0" title="!!!!!!!!!!!!&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;********(( )))))F/W///CHRYSLER/FUCKING/FUCKING/FUCKING/I HATE THE QUEEN!!!/I AM HORRID HENRY/Chrysler Cirrus/php" /> <l ns="0" title="!!!!!Hephaestos IS A FUCKING WHINY GUY!!!!!!" /> <l ns="0" title="!!!!Do you really want to see this article on your default search?" /> <l ns="0" title="!!!!Legal!!!!" /> <l ns="0" title="!!!!YOU ARE A COCKSUCKING WHINY GREASER!!!!" /> <l ns="0" title="!!!BESQUERKAN!!!" /> <l ns="0" title="!!!Fuck You!!!" /> <l ns="0" title="!!!Fuck You!!! And Then Some" /> <l ns="0" title="!!!Fuck You!!! And Then some" /> <l ns="0" title="!!!Fuck You!!! And then Some" /> <l ns="0" title="!!!Fuck You!!! and Then Some" /> <l ns="0" title="!!!Three !!! Amigos!!!" /> <l ns="0" title="!!! (album)" /> <l ns="0" title="!!! (band)" /> <l ns="0" title="!!1" /> <l ns="0" title="!!BOSS!!" /> <l ns="0" title="!!Destroy-Oh-Boy!!" /> <l ns="0" title="!!Fuck you!!" /> <l ns="0" title="!!M" /> <l ns="0" title="!!Que Corra La Voz!!" /> <l ns="0" title="!! (chess)" /> <l ns="0" title="!! (disambiguation)" /> <l ns="0" title="!! 6- -.4rtist.com" /> <l ns="0" title="!!m" /> <l ns="0" title="!!suck my balls!!" /> <l ns="0" title="!!~~YOU WIN~~!!" /> <l ns="0" title="!&#039;O-!khung language" /> <l ns="0" title="!(1)Full Name:(2)Age:(3)Sex:(4)Occupation:(5)Phone Number: (6)Delivery Address:(7)Country of Residence:. 
Dr.John Aboh" /> <l ns="0" title="!-" /> <l ns="0" title="!-My Degrassi Top 10 Episodes" /> <l ns="0" title="!10 Show" /> <l ns="0" title="!2005" /> <l ns="0" title="!2006" /> </alllinks> </query> <query-continue> <revisions rvstartid="366673948" /> <alllinks alfrom="!2009" /> </query-continue> </api> As you can see if you look at the <alllinks> part, its just a load of random gobbledy-gook. No nearly what I thought I'd get. I've done a fair bit of searching but I can't seem to find a direct answer to my question. What should the list=alllinks option return? Why am I getting this crap in there? Thanks for your help
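    For what it's worth, list=alllinks is a site-wide enumeration of link targets, returned alphabetically starting from alfrom, and it runs independently of the titles= parameter in the same request; that is why the output looks unrelated to the Google page. Pages that link to a given article come from the backlinks list instead. A sketch of such a query (the parameter values are illustrative) might look like:

    http://en.wikipedia.org/w/api.php?action=query&list=backlinks&bltitle=Google&blnamespace=0&bllimit=40&format=xml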

    Read the article

  • Error: No mapping exists from object type....

    - by jakesankey
    Here is the code for my simple parsing application. I am getting an error that states 'No mapping exists from type System.Text.RegularExpressions.Match to a known managed provider native type'. This started to occur when I switched from using Split('_') to RegEx.Match for defining RNumberE, RNumberD, etc. Any guidance is appreciated. using System; using System.Data; using System.Data.SQLite; using System.IO; using System.Text.RegularExpressions; using System.Threading; using System.Collections.Generic; using System.Linq; using System.Data.SqlClient; namespace JohnDeereCMMDataParser { internal class Program { public static List<string> GetImportedFileList() { List<string> ImportedFiles = new List<string>(); using (SqlConnection connect = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) { connect.Open(); using (SqlCommand fmd = connect.CreateCommand()) { fmd.CommandText = @"SELECT FileName FROM CMMData;"; fmd.CommandType = CommandType.Text; SqlDataReader r = fmd.ExecuteReader(); while (r.Read()) { ImportedFiles.Add(Convert.ToString(r["FileName"])); } } } return ImportedFiles; } private static void Main(string[] args) { Console.Title = "John Deere CMM Data Parser"; Console.WriteLine("Preparing CMM Data Parser... done"); Console.WriteLine("Scanning for new CMM data..."); Console.ForegroundColor = ConsoleColor.Gray; using (SqlConnection con = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) { con.Open(); using (SqlCommand insertCommand = con.CreateCommand()) { Console.WriteLine("Connecting to SQL server..."); SqlCommand cmdd = con.CreateCommand(); string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\CMM WENZEL\", "*_*_*.txt", SearchOption.AllDirectories); List<string> ImportedFiles = GetImportedFileList(); insertCommand.Parameters.Add(new SqlParameter("@FeatType", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@FeatName", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@Axis", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@Actual", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@Nominal", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@Dev", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@TolMin", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@TolPlus", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@OutOfTol", DbType.Decimal)); foreach (string file in files.Except(ImportedFiles)) { var FileNameExt1 = Path.GetFileName(file); cmdd.Parameters.Clear(); cmdd.Parameters.Add(new SqlParameter("@FileExt", FileNameExt1)); cmdd.CommandText = @" IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'CMMData')) BEGIN SELECT COUNT(*) FROM CMMData WHERE FileName = @FileExt; END"; int count = Convert.ToInt32(cmdd.ExecuteScalar()); con.Close(); con.Open(); if (count == 0) { Console.WriteLine("Preparing to parse CMM data for SQL import..."); if (file.Count(c => c == '_') > 5) continue; insertCommand.CommandText = @" INSERT INTO CMMData (FeatType, FeatName, Axis, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, CMMNumber, Date, FileName) VALUES (@FeatType, @FeatName, @Axis, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);"; string FileNameExt = Path.GetFullPath(file); string RNumber = Path.GetFileNameWithoutExtension(file); int index2 = RNumber.IndexOf("~"); Match 
RNumberE = Regex.Match(RNumber, @"^(R|L)\d{6}(COMP|CRIT|TEST|SU[1-9])(?=_)", RegexOptions.IgnoreCase); Match RNumberD = Regex.Match(RNumber, @"(?<=_)\d{3}[A-Z]\d{4}|\d{3}[A-Z]\d\w\w\d(?=_)", RegexOptions.IgnoreCase); Match RNumberDate = Regex.Match(RNumber, @"(?<=_)\d{8}(?=_)", RegexOptions.IgnoreCase); if (RNumberD.Value == @"") continue; if (RNumberE.Value == @"") continue; if (RNumberDate.Value == @"") continue; if (index2 != -1) continue; /* string RNumberE = RNumber.Split('_')[0]; string RNumberD = RNumber.Split('_')[1]; string RNumberDate = RNumber.Split('_')[2]; */ DateTime dateTime = DateTime.ParseExact(RNumberDate.Value, "yyyyMMdd", Thread.CurrentThread.CurrentCulture); string cmmDate = dateTime.ToString("dd-MMM-yyyy"); string[] lines = File.ReadAllLines(file); bool parse = false; foreach (string tmpLine in lines) { string line = tmpLine.Trim(); if (!parse && line.StartsWith("Feat. Type,")) { parse = true; continue; } if (!parse || string.IsNullOrEmpty(line)) { continue; } Console.WriteLine(tmpLine); foreach (SqlParameter parameter in insertCommand.Parameters) { parameter.Value = null; } string[] values = line.Split(new[] { ',' }); for (int i = 0; i < values.Length - 1; i++) { SqlParameter param = insertCommand.Parameters[i]; if (param.DbType == DbType.Decimal) { decimal value; param.Value = decimal.TryParse(values[i], out value) ? value : 0; } else { param.Value = values[i]; } } insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumberE)); insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumberD)); insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate)); insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt)); insertCommand.ExecuteNonQuery(); insertCommand.Parameters.RemoveAt("@PartNumber"); insertCommand.Parameters.RemoveAt("@CMMNumber"); insertCommand.Parameters.RemoveAt("@Date"); insertCommand.Parameters.RemoveAt("@FileName"); } } } Console.WriteLine("CMM data successfully imported to SQL database..."); } con.Close(); } } } }
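    The error most likely arises because the SqlParameter objects for @PartNumber and @CMMNumber are being handed System.Text.RegularExpressions.Match instances (RNumberE, RNumberD), which ADO.NET cannot map to a SQL type. Passing the matched text instead, as sketched below with the question's own variable names, should clear it; @Date and @FileName are already strings.

    // Sketch: pass Match.Value (a string), not the Match object itself.
    insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumberE.Value));
    insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumberD.Value));
    insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate));
    insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt));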

    Read the article

  • Where am I going wrong in my Xml Schema?

    - by chobo2
    Hi I am trying to make a XML Schema but everytime I use it and try to validate my data I get an error. I get this error: Validation of the XML Document failed! Error message(s): Could not find schema information for the element 'Email'. Line: 1 Column:1213 http://www.xmlforasp.net/SchemaValidator.aspx My Xml file I am trying to validate. <?xml version="1.0" encoding="utf-8" ?> <School> <SchoolPrefix>BCIT</SchoolPrefix> <TeacherAccounts> <Account> <StudentNumber>A00140000</StudentNumber> <Password>123456</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A00000041</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A0400100</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> </TeacherAccounts> <FullTimeAccounts> <Account> <StudentNumber>A00000000</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A00141000</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> </FullTimeAccounts> <PartTimeAccounts> <Account> <StudentNumber>A81020409</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A040014000</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A00024040</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A00004101</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> </PartTimeAccounts> </School> XMl Schema <?xml version="1.0" encoding="utf-8"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.nothing.com" xmlns="http://www.nothing.com" elementFormDefault="qualified"> <xs:element name="School"> <xs:complexType> <xs:sequence> <xs:element name="SchoolPrefix" minOccurs="1" maxOccurs="1"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="2" /> <xs:maxLength value="8" /> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="TeacherAccounts" minOccurs="1" maxOccurs="1"> <xs:complexType> <xs:sequence> <xs:element name="Account" type="UserInfo" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="FullTimeAccounts"> <xs:complexType> <xs:sequence> <xs:element name="Account" type="UserInfo" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="PartTimeAccounts"> <xs:complexType> <xs:sequence> <xs:element name="Account" type="UserInfo" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> <xs:complexType name="UserInfo"> <xs:sequence> <xs:element name="StudentNumber"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="1"/> <xs:maxLength value="50"/> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="Password"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="6"/> <xs:maxLength value="50"/> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="Email"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:pattern value="\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*"/> </xs:restriction> </xs:simpleType> </xs:element> </xs:sequence> </xs:complexType> </xs:schema>
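    A common cause of "Could not find schema information" here is a namespace mismatch: the schema declares targetNamespace="http://www.nothing.com" with elementFormDefault="qualified", but the instance document's elements are in no namespace, so the validator never matches them against the declarations. One option, sketched below, is to put the instance in the target namespace (the School.xsd file name is illustrative); alternatively, the targetNamespace, default xmlns and elementFormDefault attributes could be removed from the schema.

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Sketch: the instance root declared in the schema's target namespace -->
    <School xmlns="http://www.nothing.com"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.nothing.com School.xsd">
      <!-- ... same content as in the question ... -->
    </School>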

    Read the article

  • HTML client-side portable file generation - no external resources or server calls

    - by awashburn
    I have the following situation: I have set up a series of Cron jobs on an internal company server to run various PHP scripts designed to check data integrity. Each PHP script queries a company database, formats the returned query data into an HTML file containing one or more <table>s, and then mails the HTML file to several client emails as an attachment. From my experience, most of the PHP scripts generate HTML files with only a few tables; however, there are a few PHP scripts that create HTML files with around 30 tables. HTML files have been chosen as the distribution format of these scans because HTML makes it easy to view many tables at once in a browser window. I would like to add the functionality for the clients to download a table in the HTML file as a CSV file. I anticipate clients using this feature when they suspect a data integrity issue based on the table data. It would be ideal for them to be able to take the table in question, export the data out to a CSV file, and then study it further. Because the need for exporting the data to CSV format is at the discretion of the client, unpredictable as to which table will be under scrutiny, and intermittently used, I do not want to create CSV files for every table. Normally creating a CSV file wouldn't be too difficult, using JavaScript/jQuery to perform DOM traversal and generate the CSV file data into a string, then utilizing a server call or Flash library to facilitate the download process; but I have one limiting constraint: The HTML file needs to be "portable." I would like the clients to be able to take their HTML file and perform analysis of the data outside the company intranet. Also, it is likely these HTML files will be archived, so making the export functionality "self contained" in the HTML files is a highly desirable feature for the two previous reasons. The "portable" constraint of CSV file generation from an HTML file means: I cannot make a server call. This means ALL the file generation must be done client-side. I want the single HTML file attached to the email to contain all the resources to generate the CSV file. This means I cannot use jQuery or Flash libraries to generate the file. I understand, for obvious security reasons, that writing out files to disk using JavaScript isn't supported by any browser. I don't want to create a file without the user's knowledge; I would like to generate the file using JavaScript in memory and then prompt the user to "download" the file from memory. I have looked into generating the CSV file as a URI; however, according to my research and testing, this approach has a few problems: URIs for files are not supported by IE (See Here), and URIs in Firefox save the file with a random file name and as a .part file. As much as it pains me, I can accept the fact that IE<=v9 won't create a URI for files. I would like to create a semi-cross-browser solution in which Chrome, Firefox, and Safari create a URI to download the CSV file after JavaScript DOM traversal compiles the data. 
My Example Table: <table> <thead class="resulttitle"> <tr> <th style="text-align:center;" colspan="3"> NameOfTheTable</th> </tr> </thead> <tbody> <tr class="resultheader"> <td>VEN_PK</td> <td>VEN_CompanyName</td> <td>VEN_Order</td> </tr> <tr> <td class='resultfield'>1</td> <td class='resultfield'>Brander Ranch</td> <td class='resultfield'>Beef</td> </tr> <tr> <td class='resultfield'>2</td> <td class='resultfield'>Super Tree Produce</td> <td class='resultfield'>Apples</td> </tr> <tr> <td class='resultfield'>3</td> <td class='resultfield'>John's Distilery</td> <td class='resultfield'>Beer</td> </tr> </tbody> <tfoot> <tr> <td colspan="3" style="text-align:right;"> <button onclick="doSomething(this);">Export to CSV File</button></td> </tr> </tfoot> </table> My Example JavaScript: <script type="text/javascript"> function doSomething(inButton) { /* locate elements */ var table = inButton.parentNode.parentNode.parentNode.parentNode; var name = table.rows[0].cells[0].textContent; var tbody = table.tBodies[0]; /* create CSV String through DOM traversal */ var rows = tbody.rows; var csvStr = ""; for (var i=0; i < rows.length; i++) { for (var j=0; j < rows[i].cells.length; j++) { csvStr += rows[i].cells[j].textContent +","; } csvStr += "\n"; } /* temporary proof DOM traversal was successful */ alert("Table Name:\t" + name + "\nCSV String:\n" + csvStr); /* Create URI Here! * (code I am missing) */ /* Approach 1 : Auto-download * downloads CSV data but: * In FireFox downloads as randomCharacers.part instead of name.csv * In Chrome downloads without prompting the user * In Safari opens the files in browser (textfile) */ //var hrefData = "data:text/csv;charset=US-ASCII," + encodeURIComponent(csvStr); //document.location.href = hrefData; /* Approach 2 : Right-Click Save As... */ var hrefData = "data:text/csv;charset=US-ASCII," + encodeURIComponent(csvStr); var fileLink = document.createElement("a"); fileLink.href = hrefData; fileLink.innerHTML = "download"; parentTD = inButton.parentNode; parentTD.appendChild(fileLink); parentTD.removeChild(inButton); } </script> I am looking for an example solution in which the above example table can be downloaded as a CSV file: using a URI the user is prompted to save the file the default filename is the name of the table. code works as described in modern versions of FireFox, Safari, & Chrome I have added a <script> tag with the DOM traversal function doSomething(). The real help I need is with formatting the URI to what I want within the doSomething() function.
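    Not part of the original question, but one possible direction for the missing piece (a sketch, hedged): newer browsers honor the HTML5 download attribute on a generated anchor, which supplies a suggested filename at save time; support varies by browser and version (older IE and Safari ignore it), so treat this as a starting point rather than a guaranteed cross-browser fix.

    /* Sketch only: prompt a save with a suggested filename via a data URI
     * plus the HTML5 download attribute. Assumes csvStr and tableName are
     * built by DOM traversal as in doSomething() above. */
    function downloadCsv(csvStr, tableName) {
        var hrefData = "data:text/csv;charset=US-ASCII," + encodeURIComponent(csvStr);
        var fileLink = document.createElement("a");
        fileLink.href = hrefData;
        fileLink.download = tableName + ".csv";  // suggested filename where the attribute is honored
        fileLink.innerHTML = "download " + tableName + ".csv";
        document.body.appendChild(fileLink);     // leave the link in the page as a manual fallback
        if (typeof fileLink.click === "function") {
            fileLink.click();                    // trigger the save prompt where programmatic clicks work
        }
    }

    Called as downloadCsv(csvStr, name) in place of the "Create URI Here!" placeholder, this keeps everything client-side and self-contained, which matches the portability constraint described above.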

    Read the article

  • How can I use Perl regular expressions to parse XML data?

    - by Luke
    I have a pretty long piece of XML that I want to parse. I want to remove everything except for the subclass-code and city. So that I am left with something like the example below. EXAMPLE TEST SUBCLASS|MIAMI CODE <?xml version="1.0" standalone="no"?> <web-export> <run-date>06/01/2010 <pub-code>TEST <ad-type>TEST <cat-code>Real Estate</cat-code> <class-code>TEST</class-code> <subclass-code>TEST SUBCLASS</subclass-code> <placement-description></placement-description> <position-description>Town House</position-description> <subclass3-code></subclass3-code> <subclass4-code></subclass4-code> <ad-number>0000284708-01</ad-number> <start-date>05/28/2010</start-date> <end-date>06/09/2010</end-date> <line-count>6</line-count> <run-count>13</run-count> <customer-type>Private Party</customer-type> <account-number>100099237</account-number> <account-name>DOE, JOHN</account-name> <addr-1>207 CLARENCE STREET</addr-1> <addr-2> </addr-2> <city>MIAMI</city> <state>FL</state> <postal-code>02910</postal-code> <country>USA</country> <phone-number>4014612880</phone-number> <fax-number></fax-number> <url-addr> </url-addr> <email-addr>[email protected]</email-addr> <pay-flag>N</pay-flag> <ad-description>DEANESTATES2BEDS2BATHSAPPLIANCED</ad-description> <order-source>Import</order-source> <order-status>Live</order-status> <payor-acct>100099237</payor-acct> <agency-flag>N</agency-flag> <rate-note></rate-note> <ad-content> MIAMI&#47;Dean Estates&#58; 2 beds&#44; 2 baths&#46; Applianced&#46; Central air&#46; Carpets&#46; Laundry&#46; 2 decks&#46; Pool&#46; Parking&#46; Close to everything&#46;No smoking&#46; No utilities&#46; &#36;1275 mo&#46; 401&#45;578&#45;1501&#46; </ad-content> </ad-type> </pub-code> </run-date> </web-export> PERL So what I want to do is open an existing file read the contents then use regular expressions to eliminate the unnecessary XML tags. 
open(READFILE, "FILENAME"); while(<READFILE>) { $_ =~ s/<\?xml version="(.*)" standalone="(.*)"\?>\n.*//g; $_ =~ s/<subclass-code>//g; $_ =~ s/<\/subclass-code>\n.*/|/g; $_ =~ s/(.*)PJ RER Houses /PJ RER Houses/g; $_ =~ s/\G //g; $_ =~ s/<city>//g; $_ =~ s/<\/city>\n.*//g; $_ =~ s/<(\/?)web-export>(.*)\n.*//g; $_ =~ s/<(\/?)run-date>(.*)\n.*//g; $_ =~ s/<(\/?)pub-code>(.*)\n.*//g; $_ =~ s/<(\/?)ad-type>(.*)\n.*//g; $_ =~ s/<(\/?)cat-code>(.*)<(\/?)cat-code>\n.*//g; $_ =~ s/<(\/?)class-code>(.*)<(\/?)class-code>\n.*//g; $_ =~ s/<(\/?)placement-description>(.*)<(\/?)placement-description>\n.*//g; $_ =~ s/<(\/?)position-description>(.*)<(\/?)position-description>\n.*//g; $_ =~ s/<(\/?)subclass3-code>(.*)<(\/?)subclass3-code>\n.*//g; $_ =~ s/<(\/?)subclass4-code>(.*)<(\/?)subclass4-code>\n.*//g; $_ =~ s/<(\/?)ad-number>(.*)<(\/?)ad-number>\n.*//g; $_ =~ s/<(\/?)start-date>(.*)<(\/?)start-date>\n.*//g; $_ =~ s/<(\/?)end-date>(.*)<(\/?)end-date>\n.*//g; $_ =~ s/<(\/?)line-count>(.*)<(\/?)line-count>\n.*//g; $_ =~ s/<(\/?)run-count>(.*)<(\/?)run-count>\n.*//g; $_ =~ s/<(\/?)customer-type>(.*)<(\/?)customer-type>\n.*//g; $_ =~ s/<(\/?)account-number>(.*)<(\/?)account-number>\n.*//g; $_ =~ s/<(\/?)account-name>(.*)<(\/?)account-name>\n.*//g; $_ =~ s/<(\/?)addr-1>(.*)<(\/?)addr-1>\n.*//g; $_ =~ s/<(\/?)addr-2>(.*)<(\/?)addr-2>\n.*//g; $_ =~ s/<(\/?)state>(.*)<(\/?)state>\n.*//g; $_ =~ s/<(\/?)postal-code>(.*)<(\/?)postal-code>\n.*//g; $_ =~ s/<(\/?)country>(.*)<(\/?)country>\n.*//g; $_ =~ s/<(\/?)phone-number>(.*)<(\/?)phone-number>\n.*//g; $_ =~ s/<(\/?)fax-number>(.*)<(\/?)fax-number>\n.*//g; $_ =~ s/<(\/?)url-addr>(.*)<(\/?)url-addr>\n.*//g; $_ =~ s/<(\/?)email-addr>(.*)<(\/?)email-addr>\n.*//g; $_ =~ s/<(\/?)pay-flag>(.*)<(\/?)pay-flag>\n.*//g; $_ =~ s/<(\/?)ad-description>(.*)<(\/?)ad-description>\n.*//g; $_ =~ s/<(\/?)order-source>(.*)<(\/?)order-source>\n.*//g; $_ =~ s/<(\/?)order-status>(.*)<(\/?)order-status>\n.*//g; $_ =~ s/<(\/?)payor-acct>(.*)<(\/?)payor-acct>\n.*//g; $_ =~ s/<(\/?)agency-flag>(.*)<(\/?)agency-flag>\n.*//g; $_ =~ s/<(\/?)rate-note>(.*)<(\/?)rate-note>\n.*//g; $_ =~ s/<ad-content>(.*)\n.*//g; $_ =~ s/\t(.*)\n.*//g; $_ =~ s/<\/ad-content>(.*)\n.*//g; } close( READFILE1 ); Is there an easier way of doing this? I don't want to use any modules. I know that it might make this easier but the file I am reading has a lot of data in it.

    Read the article

  • SQL error - Cannot convert nvarchar to decimal

    - by jakesankey
    I have a C# application that simply parses all of the txt documents within a given network directory and imports the data to a SQL server db. Everything was cruising along just fine until about the 1800th file when it happend to have a few blanks in columns that are called out as DBType.Decimal (and the value is usually zero in the files, not blank). So I got this error, "cannot convert nvarchar to decimal". I am wondering how I could tell the app to simply skip the lines that have this issue?? Perhaps I could even just change the column type to varchar even tho values are numbers (what problems could this create?) Thanks for any help! using System; using System.Data; using System.Data.SQLite; using System.IO; using System.Text.RegularExpressions; using System.Threading; using System.Collections.Generic; using System.Linq; using System.Data.SqlClient; namespace JohnDeereCMMDataParser { internal class Program { public static List<string> GetImportedFileList() { List<string> ImportedFiles = new List<string>(); using (SqlConnection connect = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) { connect.Open(); using (SqlCommand fmd = connect.CreateCommand()) { fmd.CommandText = @"SELECT FileName FROM CMMData;"; fmd.CommandType = CommandType.Text; SqlDataReader r = fmd.ExecuteReader(); while (r.Read()) { ImportedFiles.Add(Convert.ToString(r["FileName"])); } } } return ImportedFiles; } private static void Main(string[] args) { Console.Title = "John Deere CMM Data Parser"; Console.WriteLine("Preparing CMM Data Parser... done"); Console.WriteLine("Scanning for new CMM data..."); Console.ForegroundColor = ConsoleColor.Gray; using (SqlConnection con = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) { con.Open(); using (SqlCommand insertCommand = con.CreateCommand()) { Console.WriteLine("Connecting to SQL server..."); SqlCommand cmdd = con.CreateCommand(); string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\CMM WENZEL\", "*_*_*.txt", SearchOption.AllDirectories); List<string> ImportedFiles = GetImportedFileList(); insertCommand.Parameters.Add(new SqlParameter("@FeatType", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@FeatName", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@Axis", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@Actual", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@Nominal", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@Dev", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@TolMin", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@TolPlus", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@OutOfTol", DbType.Decimal)); foreach (string file in files.Except(ImportedFiles)) { var FileNameExt1 = Path.GetFileName(file); cmdd.Parameters.Clear(); cmdd.Parameters.Add(new SqlParameter("@FileExt", FileNameExt1)); cmdd.CommandText = @" IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'CMMData')) BEGIN SELECT COUNT(*) FROM CMMData WHERE FileName = @FileExt; END"; int count = Convert.ToInt32(cmdd.ExecuteScalar()); con.Close(); con.Open(); if (count == 0) { Console.WriteLine("Preparing to parse CMM data for SQL import..."); if (file.Count(c => c == '_') > 5) continue; insertCommand.CommandText = @" INSERT INTO CMMData (FeatType, FeatName, Axis, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, 
CMMNumber, Date, FileName) VALUES (@FeatType, @FeatName, @Axis, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);"; string FileNameExt = Path.GetFullPath(file); string RNumber = Path.GetFileNameWithoutExtension(file); int index2 = RNumber.IndexOf("~"); Match RNumberE = Regex.Match(RNumber, @"^(R|L)\d{6}(COMP|CRIT|TEST|SU[1-9])(?=_)", RegexOptions.IgnoreCase); Match RNumberD = Regex.Match(RNumber, @"(?<=_)\d{3}[A-Z]\d{4}|\d{3}[A-Z]\d\w\w\d(?=_)", RegexOptions.IgnoreCase); Match RNumberDate = Regex.Match(RNumber, @"(?<=_)\d{8}(?=_)", RegexOptions.IgnoreCase); string RNumE = Convert.ToString(RNumberE); string RNumD = Convert.ToString(RNumberD); if (RNumberD.Value == @"") continue; if (RNumberE.Value == @"") continue; if (RNumberDate.Value == @"") continue; if (index2 != -1) continue; DateTime dateTime = DateTime.ParseExact(RNumberDate.Value, "yyyyMMdd", Thread.CurrentThread.CurrentCulture); string cmmDate = dateTime.ToString("dd-MMM-yyyy"); string[] lines = File.ReadAllLines(file); bool parse = false; foreach (string tmpLine in lines) { string line = tmpLine.Trim(); if (!parse && line.StartsWith("Feat. Type,")) { parse = true; continue; } if (!parse || string.IsNullOrEmpty(line)) { continue; } Console.WriteLine(tmpLine); foreach (SqlParameter parameter in insertCommand.Parameters) { parameter.Value = null; } string[] values = line.Split(new[] { ',' }); for (int i = 0; i < values.Length - 1; i++) { if (i = "" || i = null) continue; SqlParameter param = insertCommand.Parameters[i]; if (param.DbType == DbType.Decimal) { decimal value; param.Value = decimal.TryParse(values[i], out value) ? value : 0; } else { param.Value = values[i]; } } insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumE)); insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumD)); insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate)); insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt)); insertCommand.ExecuteNonQuery(); insertCommand.Parameters.RemoveAt("@PartNumber"); insertCommand.Parameters.RemoveAt("@CMMNumber"); insertCommand.Parameters.RemoveAt("@Date"); insertCommand.Parameters.RemoveAt("@FileName"); } } } Console.WriteLine("CMM data successfully imported to SQL database..."); } con.Close(); } } } }

    Read the article

  • I need help on my C++ assignment using Microsoft Visual C++

    - by krayzwytie
    Ok, so I don't want you to do my homework for me, but I'm a little lost with this final assignment and need all the help I can get. Learning about programming is tough enough, but doing it online is next to impossible for me... Now, to get to the program, I am going to paste what I have so far. This includes mostly //comments and what I have written so far. If you can help me figure out where all the errors are and how to complete the assignment, I will really appreciate it. Like I said, I don't want you to do my homework for me (it's my final), but any constructive criticism is welcome. This is my final assignment for this class and it is due tomorrow (Sunday before midnight, Arizona time). This is the assignment: Examine the following situation: Your company, Datamax, Inc., is in the process of automating its payroll systems. Your manager has asked you to create a program that calculates overtime pay for all employees. Your program must take into account the employee’s salary, total hours worked, and hours worked more than 40 in a week, and then provide an output that is useful and easily understood by company management. Compile your program utilizing the following background information and the code outline in Appendix D (included in the code section). Submit your project as an attachment including the code and the output. Company Background: Three employees: Mark, John, and Mary The end user needs to be prompted for three specific pieces of input—name, hours worked, and hourly wage. Calculate overtime if input is greater than 40 hours per week. Provide six test plans to verify the logic within the program. Plan 1 must display the proper information for employee #1 with overtime pay. Plan 2 must display the proper information for employee #1 with no overtime pay. Plans 3-6 are duplicates of plan 1 and 2 but for the other employees. Program Requirements: Define a base class to use for the entire program. The class holds the function calls and the variables related to the overtime pay calculations. Define one object per employee. Note there will be three employees. Your program must take the objects created and implement calculations based on total salaries, total hours, and the total number of overtime hours. See the Employee Summary Data section of the sample output. Logic Steps to Complete Your Program: Define your base class. Define your objects from your base class. Prompt for user input, updating your object classes for all three users. Implement your overtime pay calculations. Display overtime or regular time pay calculations. See the sample output below. Implement object calculations by summarizing your employee objects and display the summary information in the example below. And this is the code: // Final_Project.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include <iostream> #include <string> #include <iomanip> using namespace std; // //CLASS DECLARATION SECTION // class CEmployee { public: void ImplementCalculations(string EmployeeName, double hours, double wage); void DisplayEmployInformation(void); void Addsomethingup (CEmployee, CEmployee, CEmployee); string EmployeeName ; int hours ; int overtime_hours ; int iTotal_hours ; int iTotal_OvertimeHours ; float wage ; float basepay ; float overtime_pay ; float overtime_extra ; float iTotal_salaries ; float iIndividualSalary ; }; int main() { system("cls"); cout << "Welcome to the Employee Pay Center"; /* Use this section to define your objects. You will have one object per employee. 
You have only three employees. The format is your class name and your object name. */ std::cout << "Please enter Employee's Name: "; std::cin >> EmployeeName; std::cout << "Please enter Total Hours for (EmployeeName): "; std::cin >> hours; std::cout << "Please enter Base Pay for(EmployeeName): "; std::cin >> basepay; /* Here you will prompt for the first employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Example of Prompts Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will prompt for the second employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will prompt for the third employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will implement a function call to implement the employ calcuations for each object defined above. You will do this for each of the three employees or objects. The format for this step is the following: [(object name.function name(objectname.name, objectname.hours, objectname.wage)] ; */ /* This section you will send all three objects to a function that will add up the the following information: - Total Employee Salaries - Total Employee Hours - Total Overtime Hours The format for this function is the following: - Define a new object. - Implement function call [objectname.functionname(object name 1, object name 2, object name 3)] /* } //End of Main Function void CEmployee::ImplementCalculations (string EmployeeName, double hours, double wage){ //Initialize overtime variables overtime_hours=0; overtime_pay=0; overtime_extra=0; if (hours > 40) { /* This section is for the basic calculations for calculating overtime pay. - base pay = 40 hours times the hourly wage - overtime hours = hours worked – 40 - overtime pay = hourly wage * 1.5 - overtime extra pay over 40 = overtime hours * overtime pay - salary = overtime money over 40 hours + your base pay */ /* Implement function call to output the employee information. Function is defined below. */ } // if (hours > 40) else { /* Here you are going to calculate the hours less than 40 hours. - Your base pay is = your hours worked times your wage - Salary = your base pay */ /* Implement function call to output the employee information. Function is defined below. */ } // End of the else } //End of Primary Function void CEmployee::DisplayEmployInformation(); { // This function displays all the employee output information. /* This is your cout statements to display the employee information: Employee Name ............. = Base Pay .................. = Hours in Overtime ......... = Overtime Pay Amount........ = Total Pay ................. = */ } // END OF Display Employee Information void CEmployee::Addsomethingup (CEmployee Employ1, CEmployee Employ2) { // Adds two objects of class Employee passed as // function arguments and saves them as the calling object's data member values. /* Add the total hours for objects 1, 2, and 3. Add the salaries for each object. Add the total overtime hours. */ /* Then display the information below. 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%% EMPLOYEE SUMMARY DATA%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%% Total Employee Salaries ..... = 576.43 %%%% Total Employee Hours ........ = 108 %%%% Total Overtime Hours......... = 5 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% */ } // End of function

    Read the article

  • what "Debug Assertion Failed" mean and how to fix it, c++?

    - by nonon
    hi, why this program gives me a "Debug Assertion Failed" Error Message while running #include "stdafx.h" #include "iostream" #include "fstream" #include "string" using namespace std; int conv_ch(char b) { int f; f=b; b=b+0; switch(b) { case 48: f=0; break; case 49: f=1; break; case 50: f=2; break; case 51: f=3; break; case 52: f=4; break; case 53: f=5; break; case 54: f=6; break; case 55: f=7; break; case 56: f=8; break; case 57: f=9; break; default: f=0; } return f; } class Student { public: string id; size_t id_len; string first_name; size_t first_len; string last_name; size_t last_len; string phone; size_t phone_len; string grade; size_t grade_len; void print(); void clean(); }; void Student::clean() { id.erase (id.begin()+6, id.end()); first_name.erase (first_name.begin()+15, first_name.end()); last_name.erase (last_name.begin()+15, last_name.end()); phone.erase (phone.begin()+10, phone.end()); grade.erase (grade.begin()+2, grade.end()); } void Student::print() { int i; for(i=0;i<6;i++) { cout<<id[i]; } cout<<endl; for(i=0;i<15;i++) { cout<<first_name[i]; } cout<<endl; for(i=0;i<15;i++) { cout<<last_name[i]; } cout<<endl; for(i=0;i<10;i++) { cout<<phone[i]; } cout<<endl; for(i=0;i<2;i++) { cout<<grade[i]; } cout<<endl; } int main() { Student k[80]; char data[1200]; int length,i,recn=0; int rec_length; int counter = 0; fstream myfile; char x1,x2; char y1,y2; char zz; int ad=0; int ser,j; myfile.open ("example.txt",ios::in); int right; int left; int middle; string key; while(!myfile.eof()){ myfile.get(data,1200); char * pch; pch = strtok (data, "#"); printf ("%s\n", pch); j=0; for(i=0;i<6;i++) { k[recn].id[i]=data[j]; j++; } for(i=0;i<15;i++) { k[recn].first_name[i]=data[j]; j++; } for(i=0;i<15;i++) { k[recn].last_name[i]=data[j]; j++; } for(i=0;i<10;i++) { k[recn].phone[i]=data[j]; j++; } for(i=0;i<2;i++) { k[recn].grade[i]=data[j]; j++; } recn++; j=0; } //cout<<recn; string temp1; size_t temp2; int temp3; for(i=0;i<recn-1;i++) { for(j=0;j<recn-1;j++) { if(k[i].id.compare(k[j].id)<0) { temp1 = k[i].first_name; k[i].first_name = k[j].first_name; k[j].first_name = temp1; temp2 = k[i].first_len; k[i].first_len = k[j].first_len; k[j].first_len = temp2; temp1 = k[i].last_name; k[i].last_name = k[j].last_name; k[j].last_name = temp1; temp2 = k[i].last_len; k[i].last_len = k[j].last_len; k[j].last_len = temp2; temp1 = k[i].grade; k[i].grade = k[j].grade; k[j].grade = temp1; temp2 = k[i].grade_len; k[i].grade_len = k[j].grade_len; k[j].grade_len = temp2; temp1 = k[i].id; k[i].id = k[j].id; k[j].id = temp1; temp2 = k[i].id_len; k[i].id_len = k[j].id_len; k[j].id_len = temp2; temp1 = k[i].phone; k[i].phone = k[j].phone; k[j].phone = temp1; temp2 = k[i].phone_len; k[i].phone_len = k[j].phone_len; k[j].phone_len = temp2; } } } for(i=0;i<recn-1;i++) { k[i].clean(); } char z; string id_sear; cout<<"Enter 1 to display , 2 to search , 3 to exit:"; cin>>z; while(1){ switch(z) { case '1': for(i=0;i<recn-1;i++) { k[i].print(); } break; case '2': cin>>key; right=0; left=recn-2; while(right<=left) { middle=((right+left)/2); if(key.compare(k[middle].id)==0){ cout<<"Founded"<<endl; k[middle].print(); break; } else if(key.compare(k[middle].id)<0) { left=middle-1; } else { right=middle+1; } } break; case '3': exit(0); break; } cout<<"Enter 1 to display , 2 to search , 3 to exit:"; cin>>z; } return 0; } the program reads from a file example.txt 313121crewwe matt 0114323111A # 433444cristinaee john 0113344325A+# 324311matte richee 3040554032B # the idea is to read fixed size field structure with a text 
separator record structure

    Read the article

  • An Introduction to jQuery Templates

    - by Stephen Walther
    The goal of this blog entry is to provide you with enough information to start working with jQuery Templates. jQuery Templates enable you to display and manipulate data in the browser. For example, you can use jQuery Templates to format and display a set of database records that you have retrieved with an Ajax call. jQuery Templates supports a number of powerful features such as template tags, template composition, and wrapped templates. I’ll concentrate on the features that I think that you will find most useful. In order to focus on the jQuery Templates feature itself, this blog entry is server technology agnostic. All the samples use HTML pages instead of ASP.NET pages. In a future blog entry, I’ll focus on using jQuery Templates with ASP.NET Web Forms and ASP.NET MVC (You can do some pretty powerful things when jQuery Templates are used on the client and ASP.NET is used on the server). Introduction to jQuery Templates The jQuery Templates plugin was developed by the Microsoft ASP.NET team in collaboration with the open-source jQuery team. While working at Microsoft, I wrote the original proposal for jQuery Templates, Dave Reed wrote the original code, and Boris Moore wrote the final code. The jQuery team – especially John Resig – was very involved in each step of the process. Both the jQuery community and ASP.NET communities were very active in providing feedback. jQuery Templates will be included in the jQuery core library (the jQuery.js library) when jQuery 1.5 is released. Until jQuery 1.5 is released, you can download the jQuery Templates plugin from the jQuery Source Code Repository or you can use jQuery Templates directly from the ASP.NET CDN. The documentation for jQuery Templates is already included with the official jQuery documentation at http://api.jQuery.com. The main entry for jQuery templates is located under the topic plugins/templates. A Basic Sample of jQuery Templates Let’s start with a really simple sample of using jQuery Templates. We’ll use the plugin to display a list of books stored in a JavaScript array. Here’s the complete code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html > <head> <title>Intro</title> <link href="0_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContent"> <h1>ASP.NET Bookstore</h1> <div id="bookContainer"></div> </div> <script id="bookTemplate" type="text/x-jQuery-tmpl"> <div> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Create an array of books var books = [ { title: "ASP.NET 4 Unleashed", price: 37.79, picture: "AspNet4Unleashed.jpg" }, { title: "ASP.NET MVC Unleashed", price: 44.99, picture: "AspNetMvcUnleashed.jpg" }, { title: "ASP.NET Kick Start", price: 4.00, picture: "AspNetKickStart.jpg" }, { title: "ASP.NET MVC Unleashed iPhone", price: 44.99, picture: "AspNetMvcUnleashedIPhone.jpg" }, ]; // Render the books using the template $("#bookTemplate").tmpl(books).appendTo("#bookContainer"); function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html> When you open this page in a browser, a list of books is displayed: There are several things going on in this page which require explanation. 
First, notice that the page uses both the jQuery 1.4.4 and jQuery Templates libraries. Both libraries are retrieved from the ASP.NET CDN: <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> You can use the ASP.NET CDN for free (even for production websites). You can learn more about the files included on the ASP.NET CDN by visiting the ASP.NET CDN documentation page. Second, you should notice that the actual template is included in a script tag with a special MIME type: <script id="bookTemplate" type="text/x-jQuery-tmpl"> <div> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div> </script> This template is displayed for each of the books rendered by the template. The template displays a book picture, title, and price. Notice that the SCRIPT tag which wraps the template has a MIME type of text/x-jQuery-tmpl. Why is the template wrapped in a SCRIPT tag and why the strange MIME type? When a browser encounters a SCRIPT tag with an unknown MIME type, it ignores the content of the tag. This is the behavior that you want with a template. You don’t want a browser to attempt to parse the contents of a template because this might cause side effects. For example, the template above includes an <img> tag with a src attribute that points at “BookPictures/${picture}”. You don’t want the browser to attempt to load an image at the URL “BookPictures/${picture}”. Instead, you want to prevent the browser from processing the IMG tag until the ${picture} expression is replaced by with the actual name of an image by the jQuery Templates plugin. If you are not worried about browser side-effects then you can wrap a template inside any HTML tag that you please. For example, the following DIV tag would also work with the jQuery Templates plugin: <div id="bookTemplate" style="display:none"> <div> <h2>${title}</h2> price: ${formatPrice(price)} </div> </div> Notice that the DIV tag includes a style=”display:none” attribute to prevent the template from being displayed until the template is parsed by the jQuery Templates plugin. Third, notice that the expression ${…} is used to display the value of a JavaScript expression within a template. For example, the expression ${title} is used to display the value of the book title property. You can use any JavaScript function that you please within the ${…} expression. For example, in the template above, the book price is formatted with the help of the custom JavaScript formatPrice() function which is defined lower in the page. Fourth, and finally, the template is rendered with the help of the tmpl() method. The following statement selects the bookTemplate and renders an array of books using the bookTemplate. The results are appended to a DIV element named bookContainer by using the standard jQuery appendTo() method. $("#bookTemplate").tmpl(books).appendTo("#bookContainer"); Using Template Tags Within a template, you can use any of the following template tags. {{tmpl}} – Used for template composition. See the section below. {{wrap}} – Used for wrapped templates. See the section below. {{each}} – Used to iterate through a collection. {{if}} – Used to conditionally display template content. {{else}} – Used with {{if}} to conditionally display template content. {{html}} – Used to display the value of an HTML expression without encoding the value. 
Using ${…} or {{= }} performs HTML encoding automatically. {{= }}-- Used in exactly the same way as ${…}. {{! }} – Used for displaying comments. The contents of a {{!...}} tag are ignored. For example, imagine that you want to display a list of blog entries. Each blog entry could, possibly, have an associated list of categories. The following page illustrates how you can use the { if}} and {{each}} template tags to conditionally display categories for each blog entry:   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>each</title> <link href="1_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="blogPostContainer"></div> <script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{each categories}} <i>${$value}</i> {{/each}} {{else}} Uncategorized {{/if}} </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> var blogPosts = [ { postTitle: "How to fix a sink plunger in 5 minutes", postEntry: "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna.", categories: ["HowTo", "Sinks", "Plumbing"] }, { postTitle: "How to remove a broken lightbulb", postEntry: "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna.", categories: ["HowTo", "Lightbulbs", "Electricity"] }, { postTitle: "New associate website", postEntry: "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna." } ]; // Render the blog posts $("#blogPostTemplate").tmpl(blogPosts).appendTo("#blogPostContainer"); </script> </body> </html> When this page is opened in a web browser, the following list of blog posts and categories is displayed: Notice that the first and second blog entries have associated categories but the third blog entry does not. The third blog entry is “Uncategorized”. The template used to render the blog entries and categories looks like this: <script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{each categories}} <i>${$value}</i> {{/each}} {{else}} Uncategorized {{/if}} </script> Notice the special expression $value used within the {{each}} template tag. You can use $value to display the value of the current template item. In this case, $value is used to display the value of each category in the collection of categories. Template Composition When building a fancy page, you might want to build a template out of multiple templates. In other words, you might want to take advantage of template composition. For example, imagine that you want to display a list of products. Some of the products are being sold at their normal price and some of the products are on sale. 
In that case, you might want to use two different templates for displaying a product: a productTemplate and a productOnSaleTemplate. The following page illustrates how you can use the {{tmpl}} tag to build a template from multiple templates:   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Composition</title> <link href="2_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContainer"> <h1>Products</h1> <div id="productListContainer"></div> <!-- Show list of products using composition --> <script id="productListTemplate" type="text/x-jQuery-tmpl"> <div> {{if onSale}} {{tmpl "#productOnSaleTemplate"}} {{else}} {{tmpl "#productTemplate"}} {{/if}} </div> </script> <!-- Show product --> <script id="productTemplate" type="text/x-jQuery-tmpl"> ${name} </script> <!-- Show product on sale --> <script id="productOnSaleTemplate" type="text/x-jQuery-tmpl"> <b>${name}</b> <img src="images/on_sale.png" alt="On Sale" /> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> var products = [ { name: "Laptop", onSale: false }, { name: "Apples", onSale: true }, { name: "Comb", onSale: false } ]; $("#productListTemplate").tmpl(products).appendTo("#productListContainer"); </script> </div> </body> </html>   In the page above, the main template used to display the list of products looks like this: <script id="productListTemplate" type="text/x-jQuery-tmpl"> <div> {{if onSale}} {{tmpl "#productOnSaleTemplate"}} {{else}} {{tmpl "#productTemplate"}} {{/if}} </div> </script>   If a product is on sale then the product is displayed with the productOnSaleTemplate (which includes an on sale image): <script id="productOnSaleTemplate" type="text/x-jQuery-tmpl"> <b>${name}</b> <img src="images/on_sale.png" alt="On Sale" /> </script>   Otherwise, the product is displayed with the normal productTemplate (which does not include the on sale image): <script id="productTemplate" type="text/x-jQuery-tmpl"> ${name} </script>   You can pass a parameter to the {{tmpl}} tag. The parameter becomes the data passed to the template rendered by the {{tmpl}} tag. For example, in the previous section, we used the {{each}} template tag to display a list of categories for each blog entry like this: <script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{each categories}} <i>${$value}</i> {{/each}} {{else}} Uncategorized {{/if}} </script>   Another way to create this template is to use template composition like this: <script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{tmpl(categories) "#categoryTemplate"}} {{else}} Uncategorized {{/if}} </script> <script id="categoryTemplate" type="text/x-jQuery-tmpl"> <i>${$data}</i> &nbsp; </script>   Using the {{each}} tag or {{tmpl}} tag is largely a matter of personal preference. Wrapped Templates The {{wrap}} template tag enables you to take a chunk of HTML and transform the HTML into another chunk of HTML (think easy XSLT). When you use the {{wrap}} tag, you work with two templates. 
The first template contains the HTML being transformed and the second template includes the filter expressions for transforming the HTML. For example, you can use the {{wrap}} template tag to transform a chunk of HTML into an interactive tab strip: When you click any of the tabs, you see the corresponding content. This tab strip was created with the following page: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Wrapped Templates</title> <style type="text/css"> body { font-family: Arial; background-color:black; } .tabs div { display:inline-block; border-bottom: 1px solid black; padding:4px; background-color:gray; cursor:pointer; } .tabs div.tabState_true { background-color:white; border-bottom:1px solid white; } .tabBody { border-top:1px solid white; padding:10px; background-color:white; min-height:400px; width:400px; } </style> </head> <body> <div id="tabsView"></div> <script id="tabsContent" type="text/x-jquery-tmpl"> {{wrap "#tabsWrap"}} <h3>Tab 1</h3> <div> Content of tab 1. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 2</h3> <div> Content of tab 2. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 3</h3> <div> Content of tab 3. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> {{/wrap}} </script> <script id="tabsWrap" type="text/x-jquery-tmpl"> <div class="tabs"> {{each $item.html("h3", true)}} <div class="tabState_${$index === selectedTabIndex}"> ${$value} </div> {{/each}} </div> <div class="tabBody"> {{html $item.html("div")[selectedTabIndex]}} </div> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Global for tracking selected tab var selectedTabIndex = 0; // Render the tab strip $("#tabsContent").tmpl().appendTo("#tabsView"); // When a tab is clicked, update the tab strip $("#tabsView") .delegate(".tabState_false", "click", function () { var templateItem = $.tmplItem(this); selectedTabIndex = $(this).index(); templateItem.update(); }); </script> </body> </html>   The “source” for the tab strip is contained in the following template: <script id="tabsContent" type="text/x-jquery-tmpl"> {{wrap "#tabsWrap"}} <h3>Tab 1</h3> <div> Content of tab 1. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 2</h3> <div> Content of tab 2. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 3</h3> <div> Content of tab 3. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. 
Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> {{/wrap}} </script>   The tab strip is created with a list of H3 elements (which represent each tab) and DIV elements (which represent the body of each tab). Notice that the HTML content is wrapped in the {{wrap}} template tag. This template tag points at the following tabsWrap template: <script id="tabsWrap" type="text/x-jquery-tmpl"> <div class="tabs"> {{each $item.html("h3", true)}} <div class="tabState_${$index === selectedTabIndex}"> ${$value} </div> {{/each}} </div> <div class="tabBody"> {{html $item.html("div")[selectedTabIndex]}} </div> </script> The tabs DIV contains all of the tabs. The {{each}} template tag is used to loop through each of the H3 elements from the source template and render a DIV tag that represents a particular tab. The template item html() method is used to filter content from the “source” HTML template. The html() method accepts a jQuery selector for its first parameter. The tabs are retrieved from the source template by using an h3 filter. The second parameter passed to the html() method – the textOnly parameter -- causes the filter to return the inner text of each h3 element. You can learn more about the html() method at the jQuery website (see the section on $item.html()). The tabBody DIV renders the body of the selected tab. Notice that the {{html}} template tag is used to display the tab body so that HTML content in the body won’t be HTML encoded. The html() method is used, once again, to grab all of the DIV elements from the source HTML template. The selectedTabIndex global variable is used to display the contents of the selected tab. Remote Templates A common feature request for jQuery templates is support for remote templates. Developers want to be able to separate templates into different files. Adding support for remote templates requires only a few lines of extra code (Dave Ward has a nice blog entry on this). 
For example, the following page uses a remote template from a file named BookTemplate.htm: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Remote Templates</title> <link href="0_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContent"> <h1>ASP.NET Bookstore</h1> <div id="bookContainer"></div> </div> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Create an array of books var books = [ { title: "ASP.NET 4 Unleashed", price: 37.79, picture: "AspNet4Unleashed.jpg" }, { title: "ASP.NET MVC Unleashed", price: 44.99, picture: "AspNetMvcUnleashed.jpg" }, { title: "ASP.NET Kick Start", price: 4.00, picture: "AspNetKickStart.jpg" }, { title: "ASP.NET MVC Unleashed iPhone", price: 44.99, picture: "AspNetMvcUnleashedIPhone.jpg" }, ]; // Get the remote template $.get("BookTemplate.htm", null, function (bookTemplate) { // Render the books using the remote template $.tmpl(bookTemplate, books).appendTo("#bookContainer"); }); function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html>   The remote template is retrieved (and rendered) with the following code: // Get the remote template $.get("BookTemplate.htm", null, function (bookTemplate) { // Render the books using the remote template $.tmpl(bookTemplate, books).appendTo("#bookContainer"); });   This code uses the standard jQuery $.get() method to get the BookTemplate.htm file from the server with an Ajax request. After the BookTemplate.htm file is successfully retrieved, the $.tmpl() method is used to render an array of books with the template. Here’s what the BookTemplate.htm file looks like: <div> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div> Notice that the template in the BooksTemplate.htm file is not wrapped by a SCRIPT element. There is no need to wrap the template in this case because there is no possibility that the template will get interpreted before you want it to be interpreted. If you plan to use the bookTemplate multiple times – for example, you are paging or sorting the books -- then you should compile the template into a function and cache the compiled template function. For example, the following page can be used to page through a list of 100 products (using iPhone style More paging). 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Template Caching</title> <link href="6_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <h1>Products</h1> <div id="productContainer"></div> <button id="more">More</button> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Globals var pageIndex = 0; // Create an array of products var products = []; for (var i = 0; i < 100; i++) { products.push({ name: "Product " + (i + 1) }); } // Get the remote template $.get("ProductTemplate.htm", null, function (productTemplate) { // Compile and cache the template $.template("productTemplate", productTemplate); // Render the products renderProducts(0); }); $("#more").click(function () { pageIndex++; renderProducts(); }); function renderProducts() { // Get page of products var pageOfProducts = products.slice(pageIndex * 5, pageIndex * 5 + 5); // Used cached productTemplate to render products $.tmpl("productTemplate", pageOfProducts).appendTo("#productContainer"); } function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html>   The ProductTemplate is retrieved from an external file named ProductTemplate.htm. This template is retrieved only once. Furthermore, it is compiled and cached with the help of the $.template() method: // Get the remote template $.get("ProductTemplate.htm", null, function (productTemplate) { // Compile and cache the template $.template("productTemplate", productTemplate); // Render the products renderProducts(0); });   The $.template() method compiles the HTML representation of the template into a JavaScript function and caches the template function with the name productTemplate. The cached template can be used by calling the $.tmp() method. The productTemplate is used in the renderProducts() method: function renderProducts() { // Get page of products var pageOfProducts = products.slice(pageIndex * 5, pageIndex * 5 + 5); // Used cached productTemplate to render products $.tmpl("productTemplate", pageOfProducts).appendTo("#productContainer"); } In the code above, the first parameter passed to the $.tmpl() method is the name of a cached template. Working with Template Items In this final section, I want to devote some space to discussing Template Items. A new Template Item is created for each rendered instance of a template. For example, if you are displaying a list of 100 products with a template, then 100 Template Items are created. A Template Item has the following properties and methods: data – The data associated with the Template Instance. For example, a product. tmpl – The template associated with the Template Instance. parent – The parent template item if the template is nested. nodes – The HTML content of the template. calls – Used by {{wrap}} template tag. nest – Used by {{tmpl}} template tag. wrap – Used to imperatively enable wrapped templates. html – Used to filter content from a wrapped template. See the above section on wrapped templates. update – Used to re-render a template item. The last method – the update() method -- is especially interesting because it enables you to re-render a template item with new data or even a new template. For example, the following page displays a list of books. 
When you hover your mouse over any of the books, additional book details are displayed. In the following screenshot, details for ASP.NET Kick Start are displayed. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Template Item</title> <link href="0_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContent"> <h1>ASP.NET Bookstore</h1> <div id="bookContainer"></div> </div> <script id="bookTemplate" type="text/x-jQuery-tmpl"> <div class="bookItem"> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div> </script> <script id="bookDetailsTemplate" type="text/x-jQuery-tmpl"> <div class="bookItem"> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} <p> ${description} </p> </div> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Create an array of books var books = [ { title: "ASP.NET 4 Unleashed", price: 37.79, picture: "AspNet4Unleashed.jpg", description: "The most comprehensive book on Microsoft’s new ASP.NET 4.. " }, { title: "ASP.NET MVC Unleashed", price: 44.99, picture: "AspNetMvcUnleashed.jpg", description: "Writing for professional programmers, Walther explains the crucial concepts that make the Model-View-Controller (MVC) development paradigm work…" }, { title: "ASP.NET Kick Start", price: 4.00, picture: "AspNetKickStart.jpg", description: "Visual Studio .NET is the premier development environment for creating .NET applications…." }, { title: "ASP.NET MVC Unleashed iPhone", price: 44.99, picture: "AspNetMvcUnleashedIPhone.jpg", description: "ASP.NET MVC Unleashed for the iPhone…" }, ]; // Render the books using the template $("#bookTemplate").tmpl(books).appendTo("#bookContainer"); // Get compiled details template var bookDetailsTemplate = $("#bookDetailsTemplate").template(); // Add hover handler $(".bookItem").mouseenter(function () { // Get template item associated with DIV var templateItem = $(this).tmplItem(); // Change template to compiled template templateItem.tmpl = bookDetailsTemplate; // Re-render template templateItem.update(); }); function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html>   There are two templates used to display a book: bookTemplate and bookDetailsTemplate. When you hover your mouse over a template item, the standard bookTemplate is swapped out for the bookDetailsTemplate. The bookDetailsTemplate displays a book description. The books are rendered with the bookTemplate with the following line of code: // Render the books using the template $("#bookTemplate").tmpl(books).appendTo("#bookContainer");   The following code is used to swap the bookTemplate and the bookDetailsTemplate to show details for a book: // Get compiled details template var bookDetailsTemplate = $("#bookDetailsTemplate").template(); // Add hover handler $(".bookItem").mouseenter(function () { // Get template item associated with DIV var templateItem = $(this).tmplItem(); // Change template to compiled template templateItem.tmpl = bookDetailsTemplate; // Re-render template templateItem.update(); });   When you hover your mouse over a DIV element rendered by the bookTemplate, the mouseenter handler executes. 
First, this handler retrieves the Template Item associated with the DIV element by calling the tmplItem() method. The tmplItem() method returns a Template Item. Next, a new template is assigned to the Template Item. Notice that a compiled version of the bookDetailsTemplate is assigned to the Template Item’s tmpl property. The template is compiled earlier in the code by calling the template() method. Finally, the Template Item update() method is called to re-render the Template Item with the bookDetailsTemplate instead of the original bookTemplate. Summary This is a long blog entry and I still have not managed to cover all of the features of jQuery Templates. However, I’ve tried to cover the most important features of jQuery Templates such as template composition, template wrapping, and template items. To learn more about jQuery Templates, I recommend that you look at the documentation for jQuery Templates at the official jQuery website. Another great way to learn more about jQuery Templates is to look at the (unminified) source code.
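One small footnote to the hover example above, not from the original article: switching back to the compact bookTemplate on mouseleave uses exactly the same tmplItem()/update() pattern. A minimal sketch, assuming the same markup as the sample (bookContainer, bookItem, and both templates); delegated events are used so the handler still fires on items that have been re-rendered.

// Sketch: restore the original template when the mouse leaves a book item.
// Compiled once, like bookDetailsTemplate in the sample above.
var bookTemplate = $("#bookTemplate").template();

$("#bookContainer").delegate(".bookItem", "mouseleave", function () {
    var templateItem = $(this).tmplItem();  // Template Item for this rendered DIV
    templateItem.tmpl = bookTemplate;       // swap back to the compact template
    templateItem.update();                  // re-render this item only
});

Binding to #bookContainer (rather than directly to .bookItem) matters here because update() replaces the rendered DOM nodes, so directly bound handlers would be lost after the first re-render.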


  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, all the way back to IE8, so you can use them safely in your Web applications.

SessionState and LocalStorage are easy

The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface, which is a simple, string based key value store with getItem, setItem, removeItem, clear and key methods. The objects are also pseudo array objects and so can be iterated like an array: they have a length property and array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

// set
var lastAccess = new Date().getTime();
if (sessionStorage)
    sessionStorage.setItem("myapp_time", lastAccess.toString());

// retrieve in another page or on a refresh
var time = null;
if (sessionStorage)
    time = sessionStorage.getItem("myapp_time");

if (time)
    time = new Date(time * 1);
else
    time = new Date();

sessionStorage stores data that is browser session specific and that has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but its data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache).

Space for both sessionStorage and localStorage is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size - but it appears that most WebKit browsers support only 2.5MB for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5MB worth of character data is quite a bit and goes a long way.

The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview

Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code, similar to JsFiddle but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart.

Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
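One small caveat the snippet above glosses over: in some configurations (cookies disabled, certain private browsing modes) merely touching window.sessionStorage or window.localStorage can throw, so a try/catch based feature test is a bit more robust than a bare if (sessionStorage) check. Here's a minimal sketch; the helper name storageAvailable is mine, not part of any API:

function storageAvailable(type) {
    // type is "sessionStorage" or "localStorage"
    try {
        var storage = window[type];
        var testKey = "__storage_test__";
        storage.setItem(testKey, "1");
        storage.removeItem(testKey);
        return true;
    } catch (e) {
        // Accessing or writing to storage blew up - treat it as unavailable
        return false;
    }
}

// usage
if (storageAvailable("sessionStorage"))
    sessionStorage.setItem("myapp_time", new Date().getTime().toString());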
The code that deals with sessionStorage in the example (the localStorage version is not shown here) is isolated into a couple of wrapper methods to simplify the code:

function getSessionCount() {
    var count = 0;
    if (sessionStorage) {
        count = sessionStorage.getItem("ss_count");
        count = !count ? 0 : count * 1;
    }
    $("#txtSession").val(count);
    return count;
}

function setSessionCount(count) {
    if (sessionStorage)
        sessionStorage.setItem("ss_count", count.toString());
}

These two functions essentially load and store a session counter value. The two key methods used here are:

sessionStorage.getItem(key);
sessionStorage.setItem(key, stringVal);

Note that the value given to setItem and returned by getItem has to be a string; other types either fail or end up coerced to an unhelpful string, so convert explicitly. Don't let that limit you though - you can easily serialize data to JSON, so it's quite possible to store complex objects in a single sessionStorage value:

var user = { name: "Rick", id: "ricks", level: 8 };
sessionStorage.setItem("app_user", JSON.stringify(user));

To retrieve it:

var user = sessionStorage.getItem("app_user");
if (user)
    user = JSON.parse(user);

Simple!

If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resources tab, and you can use that tool to refresh or remove entries from storage.

What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests), or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example.

Why do I need Client-side Page Caching for Server Rendered HTML?

I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to drill down into more specific data, either for viewing or editing. You click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and…

…you end up back at the top of the list. The scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool!

Sound familiar? This is a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverflow, to see what I mean. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
On StackOverflow that sort of works because content turns around so quickly that you probably want to look at the top posts anyway. Not always though: if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially any time you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disruptive.

Content Caching

If you're building client centric SPA style applications this is a fairly easy problem to solve: you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it???

This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this.

A real World Use Case

Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/

However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this, of course, was mobile displays, which work horribly with frames. So there's a somewhat mobile friendly version of the interface that ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifieds.gorge.net/mobile (or browse the base URL with your browser width under 800px).

In this view the list of classifieds posts is a plain list and there's a separate page for drilling down into each item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. In mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded; instead there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any additionally loaded items were no longer there, because the list was now rendering as if it was the first page hit. The additional listings, any filters, the selection of an item - all were lost. Major suckage!
Using Client SessionStorage to cache Server Rendered Content

To work around this problem I decided to cache the rendered page content from the list in sessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in session storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached; instead the client side code retrieves the data from the sessionStorage cache and simply inserts it into the page.

It sounds pretty simple, and overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. First, let's look at the implementation, starting with the server side because that gives a quick idea of the document structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL used when returning to the list page from the display page (or a host of other pages) looks like this:

https://classifieds.gorge.net/list?back=True

The query string value is a flag that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:

@model MessageListViewModel
@{
    ViewBag.Title = "Classified Listing";
    bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
}
<form method="post" action="@Url.Action("list")">
    <div id="SizingContainer">
        @if (!isBack)
        {
            @Html.Partial("List_CommandBar_Partial", Model)
            <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                @Html.Partial("List_Items_Partial", Model)
                @if (Model.RequireLoadEntry)
                {
                    <div class="postitem loadpostitems" style="padding: 15px;">
                        <div id="LoadProgress" class="smallprogressright"></div>
                        <div class="control-progress">
                            Load additional listings...
                        </div>
                    </div>
                }
            </div>
        }
    </div>
</form>

As you can see, the query string triggers a conditional block that, when the flag is set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case that makes good sense; in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored.

OK, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic that saves and restores values from sessionStorage into separate functions because they are almost always used in several places:

page.saveData = function(id) {
    if (!sessionStorage)
        return;

    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };
    sessionStorage.setItem("list_html", JSON.stringify(data));
};

page.restoreData = function() {
    if (!sessionStorage)
        return;

    var data = sessionStorage.getItem("list_html");
    if (!data)
        return null;

    return JSON.parse(data);
};

The data that is saved is an object which contains the id of the element the user clicked on and the current scroll position; these two values are used to reset the selection and scroll position when the data is restored from the cache. Finally, the HTML of the #SizingContainer element is stored, which makes up the bulk of the document's HTML.
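One thing to be aware of: because the captured #SizingContainer HTML can get large, sessionStorage.setItem() can throw a quota error once the browser's per-origin limit is hit (the exact error name varies by browser). The post doesn't guard against this, so the following is just a defensive variant of page.saveData as a sketch; the try/catch and the fallback behavior are my additions:

page.saveData = function (id) {
    if (!sessionStorage)
        return;

    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };

    try {
        sessionStorage.setItem("list_html", JSON.stringify(data));
    } catch (e) {
        // Quota exceeded: drop the stale entry so the restore path
        // falls back to a full server render instead of old content.
        sessionStorage.removeItem("list_html");
    }
};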
In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items; the additional items retrieved can be rather sizable. Other than the JSON serialization overhead that's OK, and the sessionStorage quota has been more than enough for it in practice.

Next is the core logic that handles saving and restoring the page state. At first glance this seems pretty simple, and in some cases it is, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

$( function() {
    …
    var isBack = getUrlEncodedKey("back", location.href);
    if (isBack) {
        // remove the back key from URL
        setUrlEncodedKey("back", "", location.href);

        var data = page.restoreData();  // restore from sessionStorage
        if (!data) {
            // no data - force redisplay of the server side default list
            window.location = "list";
            return;
        }

        $("#SizingContainer").html(data.html);

        var el = $(".postitem[data-id=" + data.id + "]");
        $(".postitem").removeClass("highlight");
        el.addClass("highlight");

        $("#PostItemContainer").scrollTop(data.scroll);

        setTimeout(function() {
            el.removeClass("highlight");
        }, 2500);
    }
    else if (window.noFrames)
        page.saveData(null);  // save when page loads

    $("#SizingContainer").on("click", ".postitem", function() {
        var id = $(this).attr("data-id");
        if (!id)
            return true;

        if (window.noFrames)
            page.saveData(id);

        var contentFrame = window.parent.frames["Content"];
        if (contentFrame)
            contentFrame.location.href = "show/" + id;
        else
            window.location.href = "show/" + id;

        return false;
    });
    …

The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If the flag is present, the cached data structure is read from sessionStorage. It's important here to check whether data was actually returned: if the user had back=true on the query string but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data, so make sure you always check the cache retrieval result. Always!

If there is data, the data.html value is restored back into the document by simply injecting the HTML into the document's #SizingContainer element:

$("#SizingContainer").html(data.html);

It's that simple, and it's quite quick even with a fully loaded list of additional items, even on a phone.

The actual HTML data is stored to the cache on every initial page load and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates it with the selected element. For the click handling I use a data-id attribute on the list item (.postitem) and retrieve the id from that. That id is then used to navigate to the actual entry as well as being stored in the cached data, where it is later used to reset the selection by searching for the matching data-id value in the restored elements.
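The snippet above relies on getUrlEncodedKey() and setUrlEncodedKey(), which come from the author's own script library and aren't shown in the post. Purely for reference, minimal stand-ins might look roughly like this; these are my approximations, not the original implementations:

function getUrlEncodedKey(key, url) {
    // Return the value of a single query string parameter, or null if absent
    var match = new RegExp("[?&]" + key + "=([^&#]*)", "i").exec(url || window.location.href);
    return match ? decodeURIComponent(match[1].replace(/\+/g, " ")) : null;
}

function setUrlEncodedKey(key, value, url) {
    // With an empty value, strip the parameter; assumes it is the last one
    // in the URL, as in "list?back=True".
    url = url || window.location.href;
    var cleaned = url.replace(new RegExp("[?&]" + key + "=[^&#]*", "i"), "");
    if (value)
        cleaned += (cleaned.indexOf("?") === -1 ? "?" : "&") + key + "=" + encodeURIComponent(value);

    // Update the address bar without reloading, where supported
    if (window.history && history.replaceState)
        history.replaceState(null, document.title, cleaned);

    return cleaned;
}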
The overall save/restore process is pretty straightforward and it doesn't require a lot of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

Some things to watch out for

As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

- Event handling logic
- Timing of manipulating DOM elements
- Inline script code
- Bookmarking the cached URL when no cache exists

JavaScript Event Hookups

The biggest issue I ran into with this approach, almost immediately, is that originally I had various static event handlers hooked up to UI elements that are now cached. If you have an event handler like:

$("#btnSearch").click( function() {…});

it works fine when the page loads with server rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround by using deferred (delegated) events. With jQuery you can use the .on() event handler instead:

$("#SelectionContainer").on("click", "#btnSearch", function() {…});

which monitors a parent element for the events and checks for the inner selector elements to handle them. This effectively defers to runtime event binding, so as more items are added to the document, bindings still work. For any cached content, use deferred events.

Timing of manipulating DOM Elements

Along the same lines, make sure that your DOM manipulation code runs after the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

Inline Script Code

Do you have inline script code in your HTML? That script code isn't going to run when you restore from cache and simply re-assign the HTML, or it may not run at the point in the DOM rendering cycle you'd normally expect. Here's a problem I ran into: I use a DateTime picker widget I built a while back that relies on the jQuery date time picker, plus a helper function that adds keyboard date navigation using JavaScript logic. Because of MVC's limited 'object model', the only way to embed that widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available at the time the component actually got injected. As with the previous point, it's a matter of timing. There's no good workaround for this; in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily for mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).
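A pattern that helps with both the timing and the inline script issues is to funnel all element-dependent initialization into a single function and call it after any cached HTML has been injected, as well as on a normal load. This is a generic sketch rather than code from the Classifieds app; initPageWidgets and the selector inside it are placeholders:

function initPageWidgets() {
    // (Re)attach anything that needs real elements in the DOM.
    // Delegated .on() handlers bound to a stable parent don't need re-running.
    $("#SizingContainer .date-field").each(function () {
        // e.g. attach date pickers, tooltips, etc. to restored elements
    });
}

$(function () {
    var data = page.restoreData();
    if (data)
        $("#SizingContainer").html(data.html);

    // Runs after the cached HTML (if any) is in place
    initPageWidgets();
});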
Bookmarking Cached Urls

When you cache HTML content you have to decide whether you cache on the client and skip rendering that same content on the server. In the Classifieds app I didn't render the list content on the server for cached requests, so if the user comes to the page with back=True and there is no cached content I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and, if not, have a safe URL to fall back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook, and you may not see a problem until much later, when it's not at all obvious why the page isn't rendering anything.

Don't use <body> to replace Content

Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content, wrapped just around the actual content you want to replace. In the app above, #SizingContainer is that container.

Other Approaches

The approach I've used for this application is somewhat specific to the existing server rendered application we're running, so it's just one way to approach caching; it's a pattern I've used in a few apps to retrofit some client caching into list displays, taking the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

Using partial HTML rendering via AJAX (a rough sketch follows below): instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a partial view and embedding it into the page. This makes the initial rendering and the cached rendering logic identical and removes the need for the server to decide whether a request should be rendered or not (ie. no checking for a back=true switch). All the logic related to caching lives on the client in this case.

Using JSON data and client rendering: the hardcore client option is to do the whole UI SPA style, pulling JSON data from the server and rendering it with templates or client side databinding with knockout/angular et al. As with the partial rendering approach, the advantage is that there's no difference in logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full on SPA app then caching may not even be required - the list could just stay in memory and be hidden and reactivated.

I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. There are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution.
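To make the first of those alternatives a bit more concrete, the list page could ship as an empty shell and fill #SizingContainer from a partial view, with the cache check layered on top. The list/partial endpoint below is hypothetical and the code is only a sketch of the idea, not the Classifieds app's actual implementation:

$(function () {
    var cached = page.restoreData();
    if (cached) {
        // Same markup whether it came from the cache...
        $("#SizingContainer").html(cached.html);
        return;
    }

    // ...or freshly rendered by a partial view on the server
    $.get("list/partial", function (html) {
        $("#SizingContainer").html(html);
        page.saveData(null);  // prime the cache for the next visit
    });
});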
State of the Session

SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features for content and data. In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state, making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage, I encourage you to give them a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

Resources

- Overview of localStorage (also applies to sessionStorage)
- Web Storage Compatibility
- Modernizr Test Suite

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript, HTML5, ASP.NET, MVC

