Search Results

Search found 3836 results on 154 pages for 'argument'.


  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You’ll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you’ll still find websites that recommend you do these things.

    Erase Cache Files Regularly to Speed Things Up
    You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser’s cache won’t actually speed things up. It will slow down your web browsing, as your web browser is forced to redownload the files all over again and reconstruct the cache you regularly delete. If you’ve installed CCleaner or a similar program and run it every day with the default settings, you’re actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.

    Enable ReadyBoost to Speed Up Modern PCs
    Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless — ReadyBoost won’t actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM — think 512 MB — ReadyBoost may help a bit. Otherwise, don’t bother.

    Open the Disk Defragmenter and Manually Defragment
    On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you’re still opening the Disk Defragmenter every week and clicking the Defragment button, you don’t need to do this — Windows is doing it for you unless you’ve told it not to run on a schedule. Modern computers with solid-state drives don’t have to be defragmented at all.

    Disable Your Pagefile to Increase Performance
    When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn’t have much memory and it’s running slowly, it’s probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can’t be trusted to manage a pagefile and won’t use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it’s true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only rarely use the pagefile anyway. Tests have found that disabling the pagefile offers no performance benefit.

    Enable CPU Cores in MSConfig
    Some websites claim that Windows may not be using all of your CPU cores, or that you can speed up your boot time by increasing the number of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the number of cores used. In reality, Windows always uses the maximum number of processor cores your CPU has. (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It’s just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to use only a single core on a multi-core system — but all it can do is restrict the number of cores used.

    Clean Your Prefetch To Increase Startup Speed
    Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache — when you open an application, Windows checks the Prefetch folder, looks at the application’s .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue that you’ll build up useless files as you uninstall programs and .pf files are left over. In reality, Windows only loads the data in these .pf files when you launch the associated application, and it only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they wouldn’t be preloaded, but Windows would also have to waste time recreating all the .pf files. You could also modify the PrefetchParameters registry setting to disable Prefetch, but there’s no reason to do that. Let Windows manage Prefetch on its own.

    Disable QoS To Increase Network Bandwidth
    Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation works smoothly, even while you download files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and that this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does use QoS, the reserved bandwidth will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.

    Set DisablePagingExecutive to Make Windows Faster
    The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code are forced to stay resident in memory. Once again, some people believe that Windows isn’t smart enough to manage the pagefile on its own and that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won’t really do anything. If you have little memory, changing this setting may force Windows to push programs you’re using to the pagefile rather than push unused system files there — this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance.

    Process Idle Tasks to Free Memory
    Windows does things, such as creating scheduled system restore points, when you step away from your computer. It waits until your computer is “idle” so it won’t slow your computer down and waste your time while you’re using it. Running the “Rundll32.exe advapi32.dll,ProcessIdleTasks” command forces Windows to perform all of these tasks while you’re using the computer. This is completely pointless and won’t help free memory or anything like that — all you’re doing is forcing Windows to slow your computer down while you’re using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don’t start running and interfere with the benchmark.

    Delay or Disable Windows Services
    There’s no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory — think Windows Vista and those “Vista Capable” PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista, and computers have more than enough memory, so you won’t see any improvement from disabling the system services included with Windows. Rather than disabling services, however, some people recommend setting services from “Automatic” to “Automatic (Delayed Start)”. By default, the Delayed Start option just starts services two minutes after the last “Automatic” service starts. Setting services to Delayed Start won’t really speed up your boot time, as the services will still need to start — in fact, it may lengthen the time it takes to get a usable desktop, as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible results in a better experience. The “Delayed Start” feature is primarily useful for system administrators who need to ensure a specific service starts later than another service.

    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look — the change is probably useless. Want to actually speed up your PC? Try disabling the useless startup programs that launch at boot, lengthen your boot time, and consume memory in the background. This is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.
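    To act on that last tip: the auto-start entries that most bloatware registers live in the registry's Run keys. The following read-only C# sketch (an illustration added here, not part of the original article) lists those entries so you can see exactly what launches at logon; the two key paths are the standard per-user and per-machine locations, and nothing is modified.

    using System;
    using Microsoft.Win32;

    class StartupLister
    {
        static void Main()
        {
            // Standard auto-start locations consulted at logon.
            const string runKey = @"Software\Microsoft\Windows\CurrentVersion\Run";
            DumpRunKey("HKCU", Registry.CurrentUser, runKey);
            DumpRunKey("HKLM", Registry.LocalMachine, runKey);
        }

        static void DumpRunKey(string hiveName, RegistryKey hive, string subKeyPath)
        {
            // Read-only open; returns null if the key does not exist.
            using (RegistryKey key = hive.OpenSubKey(subKeyPath))
            {
                if (key == null) return;
                foreach (string name in key.GetValueNames())
                {
                    // Each value maps an entry name to the command line run at logon.
                    Console.WriteLine("{0}\\{1}: {2} = {3}",
                        hiveName, subKeyPath, name, key.GetValue(name));
                }
            }
        }
    }

    Any entry listed there that you don't actually need at logon is a far better removal candidate than anything the myths above suggest touching.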

    Read the article

  • CodePlex Daily Summary for Tuesday, May 22, 2012

    CodePlex Daily Summary for Tuesday, May 22, 2012Popular ReleasesBlackJumboDog: Ver5.6.3: 2012.05.22 Ver5.6.3  (1) HTTP????????、ftp://??????????????????????Internals Viewer (updated) for SQL Server 2008 R2.: Internals Viewer for SSMS 2008 R2: Updated code to work with SSMS 2008 R2. Changed dependancies, removing old assemblies no longer present and replacing them with updated versions.Orchard Project: Orchard 1.4.2: This is a service release to address 1.4 and 1.4.1 bugs. Please read our release notes for Orchard 1.4.2: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-NotesVirtu: Virtu 0.9.2: Source Requirements.NET Framework 4 Visual Studio 2010 with SP1 or Visual Studio 2010 Express with SP1 Silverlight 5 Tools for Visual Studio 2010 with SP1 Windows Phone 7 Developer Tools (which includes XNA Game Studio 4) Binaries RequirementsSilverlight 5 .NET Framework 4 XNA Framework 4SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.0): New fetures:View other users predictions Hide/Show background image (web part property) Installing SharePoint Euro 2012 PredictorSharePoint Euro 2012 Predictor has been developed as a SharePoint Sandbox solution to support SharePoint Online (Office 365) Download the solution havivi.euro2012.wsp from the download page: Downloads Upload this solution to your Site Collection via the solutions area. Click on Activate to make the web parts in the solution available for use in the Site C...State Machine .netmf: State Machine Example: First release.... Contains 3 state machines running on separate threads. Event driven button to change the states. StateMachineEngine to support the Machines Message class with the type of data to send between statesSilverlight socket component: Smark.NetDisk: Smark.NetDisk?????Silverlight ?.net???????????,???????????????????????。Smark.NetDisk??????????,????.net???????????????????????tcp??;???????Silverlight??????????????????????callisto: callisto 2.0.28: Update log: - Extended Scribble protocol. - Updated HTML5 client code - now supports the latest versions of Google Chrome.ExtAspNet: ExtAspNet v3.1.6: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://bbs.extasp.net/ ??:http://demo.extasp.net/ ??:http://doc.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-05-20 v3.1.6 -??RowD...Dynamics XRM Tools: Dynamics XRM Tools BETA 1.0: The Dynamics XRM Tools 1.0 BETA is now available Seperate downloads are available for On Premise and Online as certain features are only available On Premise. This is a BETA build and may not resemble the final release. Many enhancements are in development and will be made available soon. 
Please provide feedback so that we may learn and discover how to make these tools better.WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.05: Whats New Added New Editor Skin "BootstrapCK-Skin" Added New Editor Skin "Slick" Added Dnn Pages Drop Down to the Link Dialog (to quickly link to a portal tab) changes Fixed Issue #6956 Localization issue with some languages Fixed Issue #6930 Folder Tree view was not working in some cases Changed the user folder from User name to User id User Folder is now used when using Upload Function and User Folder is enabled File-Browser Fixed Resizer Preview Image Optimized the oEmbed Pl...PHPExcel: PHPExcel 1.7.7: See Change Log for details of the new features and bugfixes included in this release. BREAKING CHANGE! From PHPExcel 1.7.8 onwards, the 3rd-party tcPDF library will no longer be bundled with PHPExcel for rendering PDF files through the PDF Writer. The PDF Writer is being rewritten to allow a choice of 3rd party PDF libraries (tcPDF, mPDF, and domPDF initially), none of which will be bundled with PHPExcel, but which can be downloaded seperately from the appropriate sites.GhostBuster: GhostBuster Setup (91520): Added WMI based RestorePoint support Removed test code from program.cs Improved counting. Changed color of ghosted but unfiltered devices. Changed HwEntries into an ObservableCollection. Added Properties Form. Added Properties MenuItem to Context Menu. Added Hide Unfiltered Devices to Context Menu. If you like this tool, leave me a note, rate this project or write a review or Donate to Ghostbuster. Donate to GhostbusterEXCEL??、??、????????:DataPie(??MSSQL 2008、ORACLE、ACCESS 2007): DataPie_V3.2: V3.2, 2012?5?19? ????ORACLE??????。AvalonDock: AvalonDock 2.0.0795: Welcome to the Beta release of AvalonDock 2.0 After 4 months of hard work I'm ready to upload the beta version of AvalonDock 2.0. This new version boosts a lot of new features and now is stable enough to be deployed in production scenarios. For this reason I encourage everyone is using AD 1.3 or earlier to upgrade soon to this new version. The final version is scheduled for the end of June. What is included in Beta: 1) Stability! thanks to all users contribution I’ve corrected a lot of issues...myCollections: Version 2.1.0.0: New in this version : Improved UI New Metro Skin Improved Performance Added Proxy Settings New Music and Books Artist detail Lot of Bug FixingAspxCommerce: AspxCommerce1.1: AspxCommerce - 'Flexible and easy eCommerce platform' offers a complete e-Commerce solution that allows you to build and run your fully functional online store in minutes. You can create your storefront; manage the products through categories and subcategories, accept payments through credit cards and ship the ordered products to the customers. We have everything set up for you, so that you can only focus on building your own online store. Note: To login as a superuser, the username and pass...SiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.1.1616.403): BUG FIX Hide save button when Titles or Descriptions element is selectedMapWindow 6 Desktop GIS: MapWindow 6.1.2: Looking for a .Net GIS Map Application?MapWindow 6 Desktop GIS is an open source desktop GIS for Microsoft Windows that is built upon the DotSpatial Library. This release requires .Net 4 (Client Profile). Are you a software developer?Instead of downloading MapWindow for development purposes, get started with with the DotSpatial template. 
The extensions you create from the template can be loaded in MapWindow.DotSpatial: DotSpatial 1.2: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...New Projects Bunch of Small Tools: This is the source code of small projects mainly related to Japanese or Chinese, intended for learners of those languages. Other small programs of my own making may be added at my discretion. These projects are no longer in development, and the source code presented here is made available as part of my portfolio. Cat: summary Clínica DECORação: Project developed in ASP.NET DnD Campaing Manager: A little project to create a full campaign manager for D&D 3.5 DMs Excel add-in for Ranges: Slice, dice, and splice Excel ranges to your heart's content. Use RANGE.KEY to do "named argument" style programming. FirstPong: bla bla bla HP TRIM Stream Record Attachment Web App: A simple ASP.Net 4.0 Web application that streams the electronic attachment of a record from Hewlett Packard's TRIM record management software (HP TRIM) to the browser. It uses Routing to retrieve an attachment based on the Record Number, i.e.: http://localhost/View/D12/45 where D12/45 is an HP TRIM record number. Code was originally from an HP TRIM sample for ASP.Net 2.0, but I expanded upon it and converted it to .Net 4.0 and added routing for a nicer URL. Json Services: Json Services is a web services framework that is intended to make the creation and consumption of SOA-based applications easier and more efficient. The Json Services framework is built using the .NET framework 4.0 and depends heavily on the reflection features of .NET; it depends on the great Json.NET library for serialization. The Json Services framework is intended to be simple and efficient and can be consumed using a wide range of clients; JavaScript clients will gain the benefit of automatic... LAVAA: LAVAA lion: abc loja chocolate: Using C# to create an online store. Mega Terrain ++: This project presents a method to render large terrains with multiple materials using the Ogre graphics engine. Minería de datos - Reglas de asociación: Lab 2 of the Data Mining course, semester 1, 2012. Universidad de Santiago de Chile MNT Cryptography: A very simple cryptography class MsAccess to Sqlite converter: This project aims to create a small application to convert an MSAccess database file into SQLite format. It needs the SQLite ADO library in order to work MSI Previewer: MSI Previewer is a tool which extracts the given msi and displays the directory structure in which the files will be placed after the installation. It helps to preview the directory structure of the files present in the msi, exactly as it will be after installation. muvonni: css3 stylesheet development MyModeler: MyModeler is a modeling tool for IT professionals and architects NewSite: First asp.net test project I have on codeplex. I'm not entirely sure where this project is headed at the moment and only have some initial ideas. More details to follow soon. PHLTest: Test team foundation Photo Studio Nana: ?????????? ???????? ?
playm_20120517_00365: just a collaboration plan... PROJETO CÓDIGO ABDI: ABDI source code. sample project for data entry: hi this is summary for first project.... Sharepoint WP List Sinc Mapper: loren ipsum dolor SocialAuth4Net: SocialAuth4Net is an open authentication wrapper for popular social platforms, for example Facebook, LinkedIn, and soon Twitter and Google+ tiger: abc tweetc: tweetc is a windows command line twitter client written in F#

    Read the article

  • CodePlex Daily Summary for Saturday, November 27, 2010

    CodePlex Daily Summary for Saturday, November 27, 2010Popular ReleasesMahTweets for Windows Phone: Nightly 69: Latest nightly for build 69XamlQuery/WPF - The Write Less, Do More, WPF Library: XamlQuery-WPF v1.2 (Runtime, Source): This is the first release of popular XamlQuery library for WPF. XamlQuery has already gained recognition among Silverlight developers.Math.NET Numerics: Beta 1: First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers along with better documentation and examples.WatchersNET.SiteMap: WatchersNET.SiteMap 01.03.02: Whats NewNew Tax Filter, You can now select which Terms you want to Use.Minecraft GPS: Minecraft GPS 1.1: 1.1 Release New Features Compass! New style. Set opacity on main window to allow overlay of Minecraft.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-25: Code samples for Visual Studio 2010Typps (formerly jiffycms) wysiwyg rich text HTML editor for ASP.NET AJAX: Typps 2.9: -When uploading files (not images), through the file uploader and the multi-file uploader, FileUploaded and MultiFileUploaded event handlers were reporting an empty event argument, this is fixed now. -Fixed also url field not updating when uploading a file ( not image)Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta: - WBFS repair (default) options fixed - Transfer to image fixed - Settings ui widget names fixed - Some little bug fixes You need to reset the settings! Delete WiiBaFu's config file or registry entries on windows: Linux: ~/.config/WiiBaFu/wiibafu.conf Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist Caution: This is a BETA version! Errors, crashes and data loss not impossible! Use in test environments only, not on productive syste...Minemapper: Minemapper v0.1.3: Added process count and world size calculation progress to the status bar. Added View->'Status Bar' menu item to show/hide the status bar. Status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.SQL Monitor: SQL Monitor 1.4: 1.added automatically load sql server instances 2.added friendly wait cursor 3.fixed problem with 4.0 fx 4.added exception handlingSexy Select: sexy select v0.4: Changes in v0.4 Added method : elements. This returns all the option elements that are currently added to the select list Added method : selectOption. This method accepts two values, the element to be modified and the selected state. (true/false)Deep Zoom for WPF: First Release: This first release of the Deep Zoom control has the same source code, binaries and demos as the CodeProject article (http://www.codeproject.com/KB/WPF/DeepZoom.aspx).Simple Service Locator: Simple Service Locator v0.12: The Simple Service Locator is an easy-to-use Inversion of Control library that is a complete implementation of the Common Service Locator interface. It solely supports code-based configuration and is an ideal starting point for developers unfamiliar with larger IoC / DI libraries New features in this release Collections that are registered using RegisterAll<T> can now be injected using automatic constructor injection. A new RegisterAll<T>(params T[]) method overload is added that allows ea...BlogEngine.NET: BlogEngine.NET 2.0 RC: This is a Release Candidate version for BlogEngine.NET 2.0. The most current, stable version of BlogEngine.NET is version 1.6. 
Find out more about the BlogEngine.NET 2.0 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation and the installation screencast. If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. As this ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.156: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release adds a feature for aggregating the overall metrics in a folder full of NodeXL workbooks, adds geographical coordinates to the Twitter import features, and fixes a memory-related bug. See the Complete NodeXL Release History for details. Please Note: There is a new option in the setup program to install for "Just Me" or "Everyone." Most people...VFPX: FoxBarcode v.0.11: FoxBarcode v.0.11 - Released 2010.11.22 FoxBarcode is a 100% Visual FoxPro class that provides a tool for generating images with different bar code symbologies to be used in VFP forms and reports, or exported to other applications. Its use and distribution is free for all Visual FoxPro Community. Whats is new? Added a third parameter to the BarcodeImage() method Fixed some minor bugs History FoxBarcode v.0.10 - Released 2010.11.19 - 85 Downloads Project page: FoxBarcodeDotNetAge -a lightweight Mvc jQuery CMS: DotNetAge 1.1.0.5: What is new in DotNetAge 1.1.0.5 ?Document Library features and template added. Resolve issues of templates Improving publishing service performance Opml support added. What is new in DotNetAge 1.1 ? D.N.A Core updatesImprove runtime performance , more stabilize. The DNA core objects model added. Personalization features added that allows users create the personal website, manage their resources, store personal data DynamicUIFixed the PageManager could not move page node bug. ...ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.3.1 and demos: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form and Pager tested on mozilla, safari, chrome, opera, ie 9b/8/7/6MDownloader: MDownloader-0.15.24.6966: Fixed Updater; Fixed minor bugs;WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.1: Version: 2.0.0.1 (Milestone 1): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. You can use ...New ProjectsCommunity Megaphone for Windows Phone: The Official CommunityMegaphone application for Windows Phone. Releases are in XAP format, allowing you to upload the compiled application into a device using the Windows Phone Application Deployment tool. 
Download the source code to learn how to build Windows Phone applications.Group positioning GPS: this is a project for a subject in the university geographic information systems, its about a group of gadjets that can know the position of the other devices in the same group.IIS HTTPS Binder: On IIS 7 that you can not create more than one HTTPS binding, even though you have more then one SSL certificate and you need HTTPS binding on different hosted websites. If you have one IP address you can bind only one SSL to chosen website. This small application can fix this.Inspired Faith Now Playing: A ClickOnce-deployed desktop application that sits in your application tray and notifies you when any of your favorite Messianic Jewish and Christian artists or songs play on the Inspired Faith online radio station (http://inspiredfaith.org)MA Manager: The goal of this software is to help manage a martial arts school through multiple clients and platforms. This allows instructors to easily track students progress overtime.MathModels: This project contains commonly used algorithm implementations in C# .netMyFlickr API: MyFlickr API is a library for developers allows them to call Flickr API from any .Net Application.NerdOnRails.DynamicProxy: a small dynamic-proxy implementation using the new dynamic object that was introduced with .Net 4.NerdOnRails.Injector: a small dependency injection framework for .netNHyperV: NHyperV is a C# Hyper-V programming model.Power Scheme Switcher: This is a very simple utility that exposes an icon in the system tray and allows you to quickly change the Power Plan Scheme from there. It is developed in c# with Vs 2010(WinForm)rcforms: Custom sharepoint forms exampleRobo Commander: This is a Lego NXT commander developed under Visual Studio. This console will have basic commands and uses the NXT++ libraries. Shared Genomics Project - Workbench Codebase: The Shared Genomics workbench enables a diverse user group of researchers to explore the associations between genetic and other factors in their datasets. It provides a graphical user interface to the analysis functions published in a sister Codeplex project i.e. MPI Codebase.SharePoint 2010 PowerShell v2 reference: This project hosts a complete PowerShell v2 reference for SharePoint 2010.Social PowerFlow: Experiments with PowerWF and the social network - integrating Windows Powershell and Windows Workflow Foundation with Facebook and TwitterTweetSayings: Twitter SayingsXamlQuery/WPF - The Write Less, Do More, WPF Library: XamlQuery/WPF is a lightweight yet powerful library that enables rapid development in WPF. It simplifies several tasks like page/window traversing; finding controls by name, type, style, property value or position in control tree; event handling; animating and much more.XmlDSigEx XML Digital Signature Library: The XmlDSigEx library is an alternative to using the SignedXml classes in the .Net Framework. It addresses some of the shortcomings of the standard types particularly in regards to canonicalisation and the enveloped transform. It is currently under development.Yoi's Online Project: This is online storage for Yovi's project?????? ???? ???: ?????? ???? ?? ????? ??? ?????? ? ??????? ?? ???? ??? ????

    Read the article

  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters' detailed article about different game loop solutions, but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called "Game Speed dependent on Variable FPS", the second one in his article. First, through my searching experience, there's a couple of people that probably have the knowledge to help out on this but don't know what GLUT is, so I'm going to try and explain (feel free to correct me) the functions of this OpenGL toolkit that are relevant to my problem. Skip this section if you know what GLUT is and how to play with it.

    GLUT Toolkit:
    GLUT is an OpenGL toolkit that helps with common tasks in OpenGL.

    The glutDisplayFunc(renderScene) takes a pointer to a renderScene() callback function, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration.

    The glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. processAnimationTimer() will not be called every TIMER_MILLISECONDS, but just once.

    The glutPostRedisplay() function requests GLUT to render a new frame, so we need to call this every time we change something in the scene.

    The glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant), but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load.

    The glutGet(GLUT_ELAPSED_TIME) function returns the number of milliseconds since glutInit was called (or since the first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high-resolution timers, but let's keep with this one for now.

    I think this is enough information on how GLUT renders frames, so people that didn't know about it can also pitch in on this question to try and help, if they feel like it.

    Current Implementation:
    Now, I'm not sure I have correctly implemented the second solution proposed by Koen, "Game Speed dependent on Variable FPS". The relevant code for that goes like this:

    #define TICKS_PER_SECOND 30
    #define MOVEMENT_SPEED 2.0f

    const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

    int previousTime;
    int currentTime;
    int elapsedTime;

    void renderScene(void)
    {
        (...)

        // Setup the camera position and looking point
        SceneCamera.LookAt();

        // Do all drawing below...

        (...)
    }

    void processAnimationTimer(int value)
    {
        // setups the timer to be called again
        glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);

        // Get the time when the previous frame was rendered
        previousTime = currentTime;

        // Get the current time (in milliseconds) and calculate the elapsed time
        currentTime = glutGet(GLUT_ELAPSED_TIME);
        elapsedTime = currentTime - previousTime;

        /* Multiply the camera direction vector by constant speed, then by the
           elapsed time (in seconds), and then move the camera */
        SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));

        // Requests to render a new frame (this will call my renderScene() once)
        glutPostRedisplay();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);

        (...)

        glutDisplayFunc(renderScene);

        (...)

        // Setup the timer to be called one first time
        glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);

        // Read the current time since glutInit was called
        currentTime = glutGet(GLUT_ELAPSED_TIME);

        glutMainLoop();
    }

    This implementation doesn't feel right. It works in the sense that it helps the game speed to be constant regardless of the FPS, so that moving from point A to point B takes the same time no matter the high/low framerate. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the time callback is called; that means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware. It's my understanding, though, that I still need to calculate the elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do that on time. I'm not sure how I can fix this and, to be completely honest, I have no idea what the game loop in GLUT is, you know, the while( game_is_running ) loop in Koen's article. But it's my understanding that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), only rendering when necessary (instead of all the time as usual), but when I tested this with an empty callback (like void gameLoop() {}) that was basically doing nothing, only a black screen, the CPU spiked to 25% and remained there until I killed the game, and then it went back to normal. So I don't think that's the path to follow. Using glutTimerFunc() is definitely not a good approach to perform all movements/animations based on, as I'm limiting my game to a constant FPS, not cool. Or maybe I'm using it wrong and my implementation is not right? How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's "Constant Game Speed with Maximum FPS" solution (the fourth one in his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT? I originally posted this question on Stack Overflow before being pointed to this site. The following is a different approach I tried after creating the question on SO, so I'm posting it here too.

    Another Approach:
    I've been experimenting, and here's what I was able to achieve now. Instead of calculating the elapsed time in a timed function (which limits my game's framerate), I'm now doing it in renderScene(). Whenever changes to the scene happen I call glutPostRedisplay() (i.e. camera moving, some object animation, etc.), which will make a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance. My code has now turned into this:

    int previousTime;
    int currentTime;
    int elapsedTime;

    void renderScene(void)
    {
        (...)

        // Get the time when the previous frame was rendered
        previousTime = currentTime;

        // Get the current time (in milliseconds) and calculate the elapsed time
        currentTime = glutGet(GLUT_ELAPSED_TIME);
        elapsedTime = currentTime - previousTime;

        /* Multiply the camera direction vector by constant speed, then by the
           elapsed time (in seconds), and then move the camera */
        SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));

        // Setup the camera position and looking point
        SceneCamera.LookAt();

        // All drawing code goes inside this function
        drawCompleteScene();

        glutSwapBuffers();

        /* Redraw the frame ONLY if the user is moving the camera
           (similar code will be needed to redraw the frame for other events) */
        if(!IsTupleEmpty(cameraDirection))
        {
            glutPostRedisplay();
        }
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);

        (...)

        glutDisplayFunc(renderScene);

        (...)

        currentTime = glutGet(GLUT_ELAPSED_TIME);

        glutMainLoop();
    }

    In conclusion, it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that? Please note that I'm just doing this for fun; I have no intention of creating some game to distribute or anything like that, not in the near future at least. If I did, I would probably go with something other than GLUT. But since I'm using GLUT, and other than the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes something, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?
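    For reference, the "Constant Game Speed with Maximum FPS" loop the question is aiming for has the following overall shape. This is a minimal illustrative sketch in C# rather than GLUT code; UpdateGame and RenderGame are hypothetical stand-ins for the fixed-step simulation tick and the draw call, and the constants mirror the ones used in Koen's article.

    using System.Diagnostics;

    class FixedStepLoop
    {
        const int TicksPerSecond = 25;                 // fixed simulation rate
        const double SkipMs = 1000.0 / TicksPerSecond; // milliseconds per simulation tick
        const int MaxFrameSkip = 5;                    // cap catch-up updates per frame

        static bool gameIsRunning = true;

        static void Main()
        {
            Stopwatch clock = Stopwatch.StartNew();
            double nextGameTick = clock.Elapsed.TotalMilliseconds;

            while (gameIsRunning)
            {
                // Advance the simulation at a fixed rate, catching up if rendering was slow.
                int loops = 0;
                while (clock.Elapsed.TotalMilliseconds > nextGameTick && loops < MaxFrameSkip)
                {
                    UpdateGame();
                    nextGameTick += SkipMs;
                    loops++;
                }

                // Render as often as the hardware allows, interpolating between ticks.
                double interpolation =
                    (clock.Elapsed.TotalMilliseconds + SkipMs - nextGameTick) / SkipMs;
                RenderGame(interpolation);
            }
        }

        static void UpdateGame() { /* advance game state by exactly one tick */ }
        static void RenderGame(double interpolation) { /* draw state, blended by interpolation */ }
    }

    The difficulty the question describes is precisely that GLUT owns this outer while loop: once glutMainLoop() is called, the catch-up inner loop and the render call have to be expressed inside GLUT's callbacks instead.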

    Read the article

  • Knockout with ASP.Net MVC2 - HTML Extension Helpers for input controls

    - by Renso
    Goal: Defining Knockout-style input controls can be tedious, and you may find it obtrusive to mix your HTML with data-bind syntax and to tie your aspx and ascx files to Knockout. The goal is to make specifying Knockout-specific HTML tags easy, seamless really, and to make it just as easy to remove the references to Knockout later.

    Environment considerations:
    ASP.Net MVC2 or later
    Knockoutjs.js

    How to:

    // Usings required by these helpers (not shown in the original post):
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq.Expressions;
    using System.Reflection;
    using System.Text;
    using System.Web.Mvc;
    using System.Web.Mvc.Html;
    using System.Web.Routing;

    public static class HtmlExtensions
    {
        // Checkbox wired to a Knockout observable via data-bind="checked: ...".
        public static string DataBoundCheckBox(this HtmlHelper helper, string name, bool isChecked, object htmlAttributes)
        {
            var builder = new TagBuilder("input");
            var dic = new RouteValueDictionary(htmlAttributes) { { "data-bind", String.Format("checked: {0}", name) } };
            builder.MergeAttributes(dic);
            builder.MergeAttribute("type", @"checkbox");
            builder.MergeAttribute("name", name);
            builder.MergeAttribute("value", @"true");
            if (isChecked)
            {
                builder.MergeAttribute("checked", @"checked");
            }
            return builder.ToString(TagRenderMode.SelfClosing);
        }

        // These two overloads render the options server-side; the data-bind
        // attribute still wires optionsText/value to the viewmodel.
        public static MvcHtmlString DataBoundSelectList(this HtmlHelper helper, string name, IEnumerable<SelectListItem> selectList, String optionLabel)
        {
            var attrProperties = new StringBuilder();
            attrProperties.Append(String.Format("optionsText: '{0}'", name));
            if (!String.IsNullOrEmpty(optionLabel)) attrProperties.Append(String.Format(", optionsCaption: '{0}'", optionLabel));
            attrProperties.Append(String.Format(", value: {0}", name));
            var dic = new RouteValueDictionary { { "data-bind", attrProperties.ToString() } };
            return helper.DropDownList(name, selectList, optionLabel, dic);
        }

        public static MvcHtmlString DataBoundSelectList(this HtmlHelper helper, string name, IEnumerable<SelectListItem> selectList, String optionLabel, object htmlAttributes)
        {
            var attrProperties = new StringBuilder();
            attrProperties.Append(String.Format("optionsText: '{0}'", name));
            if (!String.IsNullOrEmpty(optionLabel)) attrProperties.Append(String.Format(", optionsCaption: '{0}'", optionLabel));
            attrProperties.Append(String.Format(", value: {0}", name));
            var dic = new RouteValueDictionary(htmlAttributes) { { "data-bind", attrProperties } };
            return helper.DropDownList(name, selectList, optionLabel, dic);
        }

        // This overload leaves option population entirely to Knockout on the client.
        public static String DataBoundSelectList(this HtmlHelper helper, String options, String optionsText, String value)
        {
            return String.Format("<select data-bind=\"options: {0},optionsText: '{1}',value: {2}\"></select>", options, optionsText, value);
        }

        // Textbox bound by name, or to an explicitly named observable.
        public static MvcHtmlString DataBoundTextBox(this HtmlHelper helper, string name, object value, object htmlAttributes)
        {
            var dic = new RouteValueDictionary(htmlAttributes);
            dic.Add("data-bind", String.Format("value: {0}", name));
            return helper.TextBox(name, value, dic);
        }

        public static MvcHtmlString DataBoundTextBox(this HtmlHelper helper, string name, string observable, object value, object htmlAttributes)
        {
            var dic = new RouteValueDictionary(htmlAttributes);
            dic.Add("data-bind", String.Format("value: {0}", observable));
            return helper.TextBox(name, value, dic);
        }

        public static MvcHtmlString DataBoundTextArea(this HtmlHelper helper, string name, string value, int rows, int columns, object htmlAttributes)
        {
            var dic = new RouteValueDictionary(htmlAttributes);
            dic.Add("data-bind", String.Format("value: {0}", name));
            return helper.TextArea(name, value, rows, columns, dic);
        }

        public static MvcHtmlString DataBoundTextArea(this HtmlHelper helper, string name, string observable, string value, int rows, int columns, object htmlAttributes)
        {
            var dic = new RouteValueDictionary(htmlAttributes);
            dic.Add("data-bind", String.Format("value: {0}", observable));
            return helper.TextArea(name, value, rows, columns, dic);
        }

        public static string BuildUrlFromExpression<T>(this HtmlHelper helper, Expression<Action<T>> action)
        {
            var values = CreateRouteValuesFromExpression(action);
            var virtualPath = helper.RouteCollection.GetVirtualPath(helper.ViewContext.RequestContext, values);
            if (virtualPath != null)
            {
                return virtualPath.VirtualPath;
            }
            return null;
        }

        public static string ActionLink<T>(this HtmlHelper helper, Expression<Action<T>> action, string linkText)
        {
            return helper.ActionLink(action, linkText, null);
        }

        public static string ActionLink<T>(this HtmlHelper helper, Expression<Action<T>> action, string linkText, object htmlAttributes)
        {
            var values = CreateRouteValuesFromExpression(action);
            var controllerName = (string)values["controller"];
            var actionName = (string)values["action"];
            values.Remove("controller");
            values.Remove("action");
            return helper.ActionLink(linkText, actionName, controllerName, values, new RouteValueDictionary(htmlAttributes)).ToHtmlString();
        }

        public static MvcForm Form<T>(this HtmlHelper helper, Expression<Action<T>> action)
        {
            return helper.Form(action, FormMethod.Post);
        }

        public static MvcForm Form<T>(this HtmlHelper helper, Expression<Action<T>> action, FormMethod method)
        {
            var values = CreateRouteValuesFromExpression(action);
            string controllerName = (string)values["controller"];
            string actionName = (string)values["action"];
            values.Remove("controller");
            values.Remove("action");
            return helper.BeginForm(actionName, controllerName, values, method);
        }

        public static MvcForm Form<T>(this HtmlHelper helper, Expression<Action<T>> action, FormMethod method, object htmlAttributes)
        {
            var values = CreateRouteValuesFromExpression(action);
            string controllerName = (string)values["controller"];
            string actionName = (string)values["action"];
            values.Remove("controller");
            values.Remove("action");
            return helper.BeginForm(actionName, controllerName, values, method, new RouteValueDictionary(htmlAttributes));
        }

        public static string VertCheckBox(this HtmlHelper helper, string name, bool isChecked)
        {
            return helper.CustomCheckBox(name, isChecked, null);
        }

        public static string CustomCheckBox(this HtmlHelper helper, string name, bool isChecked, object htmlAttributes)
        {
            TagBuilder builder = new TagBuilder("input");
            builder.MergeAttributes(new RouteValueDictionary(htmlAttributes));
            builder.MergeAttribute("type", "checkbox");
            builder.MergeAttribute("name", name);
            builder.MergeAttribute("value", "true");
            if (isChecked)
            {
                builder.MergeAttribute("checked", "checked");
            }
            return builder.ToString(TagRenderMode.SelfClosing);
        }

        // Emits a script tag, preferring the minimized folder when enabled.
        // ScriptsController, ScriptOptimizerConfig and ServerPathMapper are
        // application-specific types from the author's project.
        public static string Script(this HtmlHelper helper, string script, object scriptAttributes)
        {
            var pathForCRMScripts = ScriptsController.GetPathForCRMScripts();
            if (ScriptOptimizerConfig.EnableMinimizedFileLoad)
            {
                string newPathForCRM = pathForCRMScripts + "Min/";
                ScriptsController.ServerPathMapper = new ServerPathMapper();
                string fullPath = ScriptsController.ServerMapPath(newPathForCRM);
                if (!File.Exists(fullPath + script))
                    return null;
                if (!Directory.Exists(fullPath))
                    return null;
                pathForCRMScripts = newPathForCRM;
            }
            var builder = new TagBuilder("script");
            builder.MergeAttributes(new RouteValueDictionary(scriptAttributes));
            builder.MergeAttribute("type", @"text/javascript");
            builder.MergeAttribute("src", String.Format("{0}{1}", pathForCRMScripts.Replace("~", String.Empty), script));
            return builder.ToString(TagRenderMode.SelfClosing);
        }

        private static RouteValueDictionary CreateRouteValuesFromExpression<T>(Expression<Action<T>> action)
        {
            if (action == null)
                throw new InvalidOperationException("Action must be provided");
            var body = action.Body as MethodCallExpression;
            if (body == null)
            {
                throw new InvalidOperationException("Expression must be a method call");
            }
            if (body.Object != action.Parameters[0])
            {
                throw new InvalidOperationException("Method call must target lambda argument");
            }
            // This will build up a RouteValueDictionary containing the controller name,
            // action name, and any parameters passed as part of the "action" parameter.
            string name = body.Method.Name;
            string controllerName = typeof(T).Name;
            if (controllerName.EndsWith("Controller", StringComparison.OrdinalIgnoreCase))
            {
                controllerName = controllerName.Remove(controllerName.Length - 10, 10);
            }
            var values = BuildParameterValuesFromExpression(body) ?? new RouteValueDictionary();
            values.Add("controller", controllerName);
            values.Add("action", name);
            return values;
        }

        private static RouteValueDictionary BuildParameterValuesFromExpression(MethodCallExpression call)
        {
            // Build up a RouteValueDictionary containing parameter names as keys and
            // parameter values as values, based on the MethodCallExpression passed in.
            var values = new RouteValueDictionary();
            ParameterInfo[] parameters = call.Method.GetParameters();
            // If the passed-in method has no parameters, just return an empty dictionary.
            if (parameters.Length == 0)
            {
                return values;
            }
            for (int i = 0; i < parameters.Length; i++)
            {
                object parameterValue;
                Expression expression = call.Arguments[i];
                // If the current parameter is a constant, just use its value as the parameter value.
                var constant = expression as ConstantExpression;
                if (constant != null)
                {
                    parameterValue = constant.Value;
                }
                else
                {
                    // Otherwise, compile and execute the expression and use that as the parameter value.
                    var function = Expression.Lambda<Func<object>>(Expression.Convert(expression, typeof(object)),
                                                                   new ParameterExpression[0]);
                    try
                    {
                        parameterValue = function.Compile()();
                    }
                    catch
                    {
                        parameterValue = null;
                    }
                }
                values.Add(parameters[i].Name, parameterValue);
            }
            return values;
        }
    }

    Some observations: The first two DataBoundSelectList overloads are specifically built to load the data right into the drop-down box as part of the HTML response stream, rather than let Knockout's engine populate the options client-side. The third overload does it client-side via the viewmodel. The first two overloads can be used when you have no requirement to add complex JSON objects to your lists. Furthermore, why render and parse a JSON object when you can have it all built and rendered server-side like any other list control?
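    To see the intent in use, here is a hypothetical ASPX view snippet (not part of the original post) calling two of the helpers, with the markup each would roughly emit shown in comments. The FirstName and IsActive names are invented and assume matching Knockout observables on the client-side viewmodel.

    <%-- Textbox bound to a Knockout observable named "FirstName": --%>
    <%= Html.DataBoundTextBox("FirstName", Model.FirstName, new { @class = "wide" }) %>
    <%-- emits roughly:
         <input class="wide" data-bind="value: FirstName" id="FirstName"
                name="FirstName" type="text" value="..." /> --%>

    <%-- Checkbox bound to a Knockout observable named "IsActive": --%>
    <%= Html.DataBoundCheckBox("IsActive", Model.IsActive, null) %>
    <%-- emits roughly:
         <input data-bind="checked: IsActive" name="IsActive"
                type="checkbox" value="true" /> --%>

    Because every data-bind attribute is funneled through these helpers, removing or swapping Knockout later means changing the helpers in one place rather than editing every view.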

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties: Streams with ‘open’ inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName) Streams with ‘partially bound’ inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …)) Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC) The stream may be embedded (where Server.Create is used) The stream may be remote (where Server.Connect is used) When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: Add a fourth variety (use CepStream<> to represent streams that are bound the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are: IStreamable<>, which logically represents a temporal stream. IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type. The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do no impact existing StreamInsight applications using the existing types! SelectMany StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL’s “CROSS APPLY” operator): static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector) It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn’t supported, its type in StreamInsight 1.0 – 2.0 wasn’t carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers: static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector) Using Unit as the type for the parameter accurately reflects the StreamInsight’s capabilities: static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector) For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads: from x in xs from y in ys select f(x, y) Top-K The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows: (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. 
The following compiles but fails at runtime: (from win in xs.SnapshotWindow() from x in win orderby x.A select win).Take(k) The Take operator in 2.1 is applied to the window rather than the stream: Before After (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) from win in xs.SnapshotWindow() from b in     (from x in win     orderby x.A     select x.B).Take(k) select b Multicast We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways: Implicit Explicit var ys = from x in xs          where x.A > 1          select x; var zs = from y1 in ys          from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))          select y1 + y2; var ys = from x in xs          where x.A > 1          select x; var zs = ys.Multicast(ys1 =>     from y1 in ys1     from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))     select y1 + y2; Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation. Default window policies Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime. Before After xs.SnapshotWindow(     WindowInputPolicy.ClipToWindow,     SnapshotWindowInputPolicy.Clip) xs.SnapshotWindow() xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.PointAlignToWindowEnd) xs.TumblingWindow(     TimeSpan.FromSeconds(1)) xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.ClipToWindowEnd) Not supported … LeftAntiJoin Representation of LASJ as a correlated sub-query in the LINQ surface is problematic as the StreamInsight engine does not support correlated sub-queries (see discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported ‘IsEmpty()’ operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead: Before After from x in xs where     (from y in ys     where x.A > y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, (x, y) => x.A > y.B) from x in xs where     (from y in ys     where x.A == y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, x => x.A, y => y.B) ApplyWithUnion The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads: Before After xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count()) xs.GroupBy(x => x.A).SelectMany(     gs =>     from win in gs.SnapshotWindow()     select win.Count()) xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count(), r => new { r.Key, Count = r.Payload }) from x in xs group x by x.A into gs from win in gs.SnapshotWindow() select new { gs.Key, Count = win.Count() } Alternate UDO syntax The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. 
Alternate UDO syntax

The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:

from win in xs.SnapshotWindow()
from y in MyUdo(win)
select y

Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:

from win in xs.SnapshotWindow()
select MyUdo(win)

The “many-or-one” confusion is exemplified by the following example, which compiles but fails at runtime:

from win in xs.SnapshotWindow()
select MyUdo(win) + win.Count()

The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one.

Original syntax:
from win in xs.SnapshotWindow()
select win.UdoProxy(1)

New alternate syntax:
from win in xs.SnapshotWindow()
from y in win.UserDefinedOperator(() => new Udo(1))
select y

-or-

from win in xs.SnapshotWindow()
from y in win.UdoMacro(1)
select y

Notice that this formulation also sidesteps the dynamic typing pitfalls of the existing “proxy method” approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types of the corresponding proxy method.

UDSO syntax

UDSOs currently leverage the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface.

Before:
xs.Scan(new Udso())
After:
xs.Scan(() => new Udso())

Name changes

- ShiftEventTime => AlterEventStartTime: the AlterEventLifetime overload taking a new start time value has been renamed.
- CountByStartTimeWindow => CountWindow
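Putting a few of these changes together, here is a hedged end-to-end sketch of a 2.1-style query, reusing the hypothetical xs and SensorReading from the earlier sketch; the ordering field, the window size, and k are illustrative:

// Top-3 values per one-second tumbling window. Note that Take is applied
// to the (bounded) window, as required by the 2.1 surface, and that the
// TumblingWindow overload takes no output policy. 'xs' is assumed to be
// an IQStreamable<SensorReading> composed elsewhere.
var top3 = from win in xs.TumblingWindow(TimeSpan.FromSeconds(1))
           from v in
               (from x in win
                orderby x.Value descending
                select x.Value).Take(3)
           select v;

Under the 1.x surface, the same Take would have applied to the unbounded stream, suggesting (incorrectly) that the query returns three rows in total.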

    Read the article

  • Common Live Upgrade problems

    - by user12611829
As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a Wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here.

Live Upgrade copies over ZFS root clone

This was introduced in Solaris 10 10/09 (u8), and the root of the problem is a duplicate entry in the source boot environment's ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like.

# lucreate -n s10u9-baseline
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Creating file systems on boot environment .
Creating file system for in zone on .
The error indicator -----  /usr/lib/lu/lumkfs: test: unknown operator zfs
Populating file systems on boot environment .
Checking selection integrity.
Integrity check OK.
Populating contents of mount point .
This should not happen ------  Copying.
Ctrl-C and cleanup

If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or the one that alerted me to the problem: the root file system is filling up - again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works.

# patchadd -p | grep 121431
Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone
Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur
# unzip 121431-58
# patchadd 121431-58
Validating patches...
Loading patches installed on the system... Done!
Loading patches requested to install. Done!
Checking patches that you specified for installation. Done!
Approved patches will be installed in this order: 121431-58
Checking installed patches...
Executing prepatch script...
Installing patch packages...
Patch 121431-58 has been successfully installed. See /var/sadm/patch/121431-58/log for details
Executing postpatch script...
Patch packages installed: SUNWlucfg SUNWlur SUNWluu
# lucreate -n s10u9-baseline
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
INFORMATION: Unable to determine size or capacity of slice .
Comparing source boot environment file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment.
INFORMATION: Unable to determine size or capacity of slice .
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Cloning file systems from boot environment to create boot environment .
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for in zone on .
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
File propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE in GRUB menu
Population of boot environment successful.
Creation of boot environment successful.

This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone.

# cat /etc/lu/ICF.3
s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608
s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0
s10u8-baseline:/vbox:pandora/vbox:zfs:0
s10u8-baseline:/setup:pandora/setup:zfs:0
s10u8-baseline:/export:pandora/export:zfs:0
s10u8-baseline:/pandora:pandora:zfs:0
s10u8-baseline:/panroot:panroot:zfs:0
s10u8-baseline:/workshop:pandora/workshop:zfs:0
s10u8-baseline:/export/iso:pandora/iso:zfs:0
s10u8-baseline:/export/home:pandora/home:zfs:0
s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0
s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0

Solaris 10 9/10 introduces new autoregistration file

This one is actually mentioned in the Oracle Solaris 10 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like.

# luupgrade -u -s /mnt -n s10u9-baseline
System has findroot enabled GRUB
No entry for BE in GRUB menu
Copying failsafe kernel from media.
61364 blocks
miniroot filesystem is
Mounting miniroot at
ERROR: The auto registration file does not exist or incomplete.
The auto registration file is mandatory for this upgrade.
Use -k argument along with luupgrade command.
autoreg_file is path to auto registration information file.
See sysidcfg(4) for a list of valid keywords for use in this file.
The format of the file is as follows.
oracle_user=xxxx
oracle_pw=xxxx
http_proxy_host=xxxx
http_proxy_port=xxxx
http_proxy_user=xxxx
http_proxy_pw=xxxx
For more details refer "Oracle Solaris 10 9/10 Installation Guide: Planning for Installation and Upgrade".

As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example.

# echo "autoreg=disable" > /var/tmp/no-autoreg
# luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline
System has findroot enabled GRUB
No entry for BE in GRUB menu
Copying failsafe kernel from media.
61364 blocks
miniroot filesystem is
Mounting miniroot at
#######################################################################
NOTE: To improve products and services, Oracle Solaris communicates configuration data to Oracle after rebooting.
You can register your version of Oracle Solaris to capture this data for your use, or the data is sent anonymously.
For information about what configuration data is communicated and how to control this facility, see the Release Notes or www.oracle.com/goto/solarisautoreg.
INFORMATION: After activated and booted into new BE , Auto Registration happens automatically with the following Information
autoreg=disable
#######################################################################
Validating the contents of the media .
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version .
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE .
Checking for GRUB menu on ABE .
Saving GRUB menu on ABE .
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE .
Performing the operating system upgrade of the BE .
CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.

The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands-off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that.

Technorati Tags: Oracle Solaris Patching Live Upgrade

    Read the article

  • Log Blog

    - by PointsToShare
© 2011 By: Dov Trietsch. All rights reserved

Logging – A log blog

In another blog (Missing Fields and Defaults) I spoke about not doing a blog about log files, but then I looked at it again and realized that this is a nice opportunity to show a simple yet powerful tool and also deal with static variables and functions in C#. My log had to answer a few simple logging questions:

To log or not to log? That is the question – Always log! That is the answer.

Do we share a log? Even when a file is opened with a minimal lock, it does not share well and performance greatly suffers. So sharing a log is not a good idea. Also, when sharing, it is harder to find your particular entries and you have to establish rules about retention. My recommendation – Do Not Share!

How verbose? Your log can be very verbose – a good thing when testing, very terse – a good thing in day-to-day runs, or somewhere in between. You must be the judge. In my blog, I elect to always report a run with start and end times, and always report errors. I normally use 5 levels of logging: 4 – write all, 3 – write more, 2 – write some, 1 – write errors and timing, 0 – write none. The code sample below is more general than that. It uses the config file to set the max log level, and each call to the log assigns a level to the call itself. If the level is above the .config highest level, the line will not be written. Programmers decide which log line belongs to which level, and thus we can set the .config differently for production and testing.

Where do I keep the log? If your career is important to you, discuss this with the boss and with the system admin. We keep logs in the L: drive of our server and make sure that we have a directory for each app that needs a log. When adding a new app, add a new directory. The default location for the log is also found in the .config file.

Print one or many? There are two options here:

1. Print many, open but once – you start the stream and close it only when the program ends. This is what you can do when you perform in “batch” mode, like in a console app or a stsadm extension. The advantage of this is that starting and closing a stream is expensive and time consuming, and because we use a unique file, keeping it open for a long time does not cause contention problems.
2. Print one entry at a time, or open many – every time you write a line, you start the stream, write to it and close it. This works for event receivers, feature receivers, and web parts. Here scalability requires us to create objects on the fly and get rid of them as soon as possible.

A default value for oneOrMany resides in the .config.

All of the above applies to any Windows or web application, not just SharePoint. So as usual, here is a routine that does it all, and a few simple functions that call it for a variety of purposes.
So without further ado, here is app.config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <configSections>
        <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
            <section name="statics.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
        </sectionGroup>
    </configSections>
    <applicationSettings>
        <statics.Properties.Settings>
            <setting name="oneOrMany" serializeAs="String">
                <value>False</value>
            </setting>
            <setting name="logURI" serializeAs="String">
                <value>C:\staticLog.txt</value>
            </setting>
            <setting name="highestLevel" serializeAs="String">
                <value>2</value>
            </setting>
        </statics.Properties.Settings>
    </applicationSettings>
</configuration>

And now the code. In order to persist the variables between calls, and also to be able to persist (or not to persist) the log file itself, I created an EventLog class with static variables and functions. Static functions do not need an instance of the class in order to work. If you ever wondered why our Main function is static, the answer is that something needs to run before instantiation so that other objects may be instantiated, and this is what the "static" Main does. The various logging functions and variables are created as static because they do not need instantiation, and as a fringe benefit they remain un-destroyed between calls. The Main function here is just used for testing. Note that it does not instantiate anything, just uses the log functions. This is possible because the functions are static. Also note that the function calls are of the form: Class.Function.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace statics
{
    class Program
    {
        static void Main(string[] args)
        {
            // Write a single line
            EventLog.LogEvents("ha ha", 3, "C:\\hahafile.txt", 4, true, false);
            // This single line will not be written because the msgLevel is too high
            EventLog.LogEvents("baba", 3, "C:\\babafile.txt", 2, true, false);
            // The next 4 lines will be written in succession - no closing
            EventLog.LogLine("blah blah", 1);
            EventLog.LogLine("da da", 1);
            EventLog.LogLine("ma ma", 1);
            EventLog.LogLine("lah lah", 1);
            EventLog.CloseLog(); // log will close
            // Now with specific functions
            EventLog.LogSingleLine("one line", 1);
            // This is just a test, the log is already closed
            EventLog.CloseLog();
        }
    }

    public class EventLog
    {
        public static string logURI = Properties.Settings.Default.logURI;
        public static bool isOneLine = Properties.Settings.Default.oneOrMany;
        public static bool isOpen = false;
        public static int highestLevel = Properties.Settings.Default.highestLevel;
        public static StreamWriter sw;

        /// <summary>
        /// The program will "print" the msg into the log
        /// unless msgLevel is > msgLimit.
        /// oneOrMany is true when once - the program will open the log,
        /// print the msg and close the log.
        /// False when many - the program will keep the log open until close = true.
        /// Normally all the arguments will come from the app.config.
        /// Called by many overloads of LogLine.
        /// </summary>
        /// <param name="msg"></param>
        /// <param name="msgLevel"></param>
        /// <param name="logFileName"></param>
        /// <param name="msgLimit"></param>
        /// <param name="oneOrMany"></param>
        /// <param name="close"></param>
        public static void LogEvents(string msg, int msgLevel, string logFileName, int msgLimit, bool oneOrMany, bool close)
        {
            // To print or not to print
            if (msgLevel <= msgLimit)
            {
                // Open the file, from the argument (logFileName) or from the config (logURI)
                if (!isOpen)
                {
                    string logFile = logFileName;
                    if (logFileName == "")
                    {
                        logFile = logURI;
                    }
                    sw = new StreamWriter(logFile, true);
                    sw.WriteLine("Started At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                    isOpen = true;
                }
                // Print
                sw.WriteLine(msg);
            }
            // Close when instructed
            if (close || oneOrMany)
            {
                if (isOpen)
                {
                    sw.WriteLine("Ended At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                    sw.Close();
                    isOpen = false;
                }
            }
        }

        /// <summary>
        /// The simplest, just msg and level
        /// </summary>
        /// <param name="msg"></param>
        /// <param name="msgLevel"></param>
        public static void LogLine(string msg, int msgLevel)
        {
            // Use the given msg and msgLevel; all others are defaults
            LogEvents(msg, msgLevel, "", highestLevel, isOneLine, false);
        }

        /// <summary>
        /// One line at a time - open, print, close
        /// </summary>
        /// <param name="msg"></param>
        /// <param name="msgLevel"></param>
        public static void LogSingleLine(string msg, int msgLevel)
        {
            LogEvents(msg, msgLevel, "", highestLevel, true, true);
        }

        /// <summary>
        /// Used to close. High level, low limit, once and close are set.
        /// </summary>
        public static void CloseLog()
        {
            LogEvents("", 15, "", 1, true, true);
        }
    }
}

That’s all folks!
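P.S. One caveat the routine above does not address: the static StreamWriter and isOpen flag are shared by every caller in the process, so two threads logging at once can interleave lines or race on the open/close. A minimal sketch of one way to guard this, assuming the EventLog class above; the gate field and wrapper name are our own additions, not part of the original code:

// Hypothetical members one might add inside EventLog to serialize access:
private static readonly object logGate = new object();

// Thread-safe variant of LogLine: all open/write/close activity on the
// shared StreamWriter happens while holding the gate.
public static void LogLineSafe(string msg, int msgLevel)
{
    lock (logGate)
    {
        LogEvents(msg, msgLevel, "", highestLevel, isOneLine, false);
    }
}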

    Read the article

  • CodePlex Daily Summary for Saturday, October 27, 2012

    CodePlex Daily Summary for Saturday, October 27, 2012Popular ReleasesRazorSourceGenerator: RazorSourceGenerator v1.1 Installer: RazorSourceGenerator v1.1 Installer ?? include ??,???????。Fruit Juice: Fruit Juice v1.1: Changelog (v1.1):Minor design fixes; Added live tiles; Added the new Windows Phone Store Download Badge;ZXMAK2: Version 2.6.7.0: - small performance improvements - fix & improvements for Direct3D renderer (thanks to zebest for testing)Media Companion: Media Companion 3.507b: Once again, it has been some time since our release, and there have been a number changes since then. It is hoped that these changes will address some of the issues users have been experiencing, and of course, work continues! New Features: Added support for adding Home Movies. Option to sort Movies by votes. Added 'selectedBrowser' preference used when opening links in an external browser. Added option to fallback to getting runtime from the movie file if not available on IMDB. Added new Big...MSBuild Extension Pack: October 2012: Release Blog Post The MSBuild Extension Pack October 2012 release provides a collection of over 475 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUI...NAudio: NAudio 1.6: Release notes at http://mark-dot-net.blogspot.co.uk/2012/10/naudio-16-release-notes-10th.htmlPowerShell Community Extensions: 2.1 Production: PowerShell Community Extensions 2.1 Release NotesOct 25, 2012 This version of PSCX supports both Windows PowerShell 2.0 and 3.0. See the ReleaseNotes.txt download above for more information.DbDiff: Database Diff and Database Scripting: 1.3.3.5: - Wrong load options (deskey wrong)Building Windows 8 Apps with C# and XAML: Full Source Chapters 1 - 10 for Windows 8 Fix 001: This is the full source from all chapters of the book, compiled and tested on Windows 8 RTM. Includes a fix for the Netflix example from Chapter 6 that was missing a service reference.PdfReport: PdfReport 1.3: - Removed the limitation of defining non duplicate column names. See DuplicateColumns sample for more info. - Added horizontal stack panel mode. See CharacterMap sample for more info. - Added pdfStamper to onFillAcroForm of PdfTemplate. See QuestionsAcroForm sample for more info. Added 6 new samples (http://pdfreport.codeplex.com/SourceControl/BrowseLatest): - AccountingBalanceColumn - CharacterMap - CustomPriceNumber - DuplicateColumns - QuestionsAcroForm - QuestionsFormUmbraco CMS: Umbraco 4.9.1: Umbraco 4.9.1 is a bugfix release to fix major issues in 4.9.0 BugfixesThe full list of fixes can be found in the issue tracker's filtered results. A summary: Split buttons work again, you can now also scroll easier when the list is too long for the screen Media and Content pickers have information of the full path of the picked item Fixed: Publish status may not be accurate on nodes with large doctypes Fixed: 2 media folders and recycle bins after upgrade to 4.9 The template/code ...AcDown????? - AcDown Downloader Framework: AcDown????? 
v4.2.2: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...Rawr: Rawr 5.0.2: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...MCEBuddy 2.x: MCEBuddy 2.3.5: Changelog for 2.3.5 (32bit and 64bit) 1. Fixed a bug causing MCEBuddy to crash during or after installation on Windows XP 2. Bugfix for resource leak with UPnP which would lead to a failure after many days 3. Increased the UPnP discovery re-scan interval from 10 minutes to 30 minutes 4. Added support for specifying TVDB and IMDB id’s in the conversion task page (forcing the internet lookup for metadata)CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1025.5): [NEW] Support for connecting to CRM Online via Office 365 (OSDP) [NEW] Current connection information and loaded ribbon name are displayed in the status bar [IMPROVED] Connect dialog minor improvements and error message descriptions [IMPROVED] Connecting to a CRM server will close currently loaded ribbon upon confirmation (if another ribbon was loaded previously) [FIX] Fixed bug in Open Ribbon dialog which would not allow to refresh entity list more than onceReadable Passphrase Generator: KeePass Plugin 0.8.0: Changes: Interrogative phrases (questions) like why did the statesman burgle amidst lucid sunlamps Support transitive / intransitive verbs (whether a verb needs a subject or not). Change adverbs to be either before or after the verb, at random. Add an "equal" version of each strength, where each possibility is equally likely (for password purists). 3401 words in the default dictionary (~400 more than previous release) Fixed bugs when choosing verb tensesMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.72: Fix for Issue #18819 - bad optimization of return/assign operator.WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.390: Version 2.5.0.390 (Release Candidate): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete WAF: Fix recent file list remove issue. WAF: Minor code improvements. BookLibrary: Fix Blend design time support o...Fiskalizacija za developere: FiskalizacijaDev 1.1: Ovo je prva nadogradnja ovog projekta nakon inicijalnog predstavljanja - dodali smo nekoliko feature-a, bilo zato što smo sami primijetili da bi ih bilo dobro dodati, bilo na osnovu vaših sugestija - hvala svima koji su se ukljucili :) Ovo su stvari riješene u v1.1.: 1. 
Bilo bi dobro da se XML dokument koji se šalje u CIS može snimiti u datoteku (http://fiskalizacija.codeplex.com/workitem/612) 2. Podrška za COM DLL (VB6) (http://fiskalizacija.codeplex.com/workitem/613) 3. Podrška za DOS (unu...Liberty: v3.4.0.0 Release 20th October 2012: Change Log -Added -Halo 4 support (invincibility, ammo editing) -Reach A warning dialog now shows up when you first attempt to swap a weapon -Fixed -A few minor bugsNew Projects25minutes the simpliest, best looking, hassle free Pomodoro Technique® timer: 25 minutes __________________________________________________ it is the simplest, best looking, hassle free timer for all Pomodoro Technique® fans out there.Argument: A website for constructing logical arguments in tree form.Asset manager: More information comming soon!bootster: bootster is a bootstrapper for small/medium sized .net web projects.BugHerd-4-DNN: This DotNetNuke(TM) extension simplifies the integration of BugHerd on your DNN portals.CaptureLinks 2.0: CaptureLink2.0 with SharePoint 2013 SupportDeploy File Demo: Deployfile Demo is the companion piece to Deploy File Generator. It shows how to deploy files that have been imported into a special '.resources' file.DIfmClient: DIfmClient is a WPF (Windows Presentation Foundation) DI.FM streaming radio player, written in C#.FinApps201240: Aplicación Financiera para La CaixaFirstTasteKudo: Just a taste of Kudo feature.Imgx: This is specifically for internal use only.jamestest123: Blah blah blahMezanmiTechFireMyTeam: In short we can use this project for any rating systems.MezanmiTechInTouchReminder: just a way to contact peopleMoniMisiDemo: Para el desarrollo de aplicaciones web en ASP.net del tipo SPI (Single Page Interface).Orchard Redirect404: This project allows you to configure redirects for 404 errors via the Orchard admin interface. P/Opus (.NET Wrapper for libopus): P/Opus is a .NET library written in C# to wrap around the libopus C API/library to provide a more .NET friendly way of encoding and decoding Opus packets.Pengaturan Sambungan (Connection Setting): Aplikasi untuk menyimpan connection string secara terpisah dari program utamaPwdManagement: mgrrezaTest: TestSnowflake Id Generator: Snowflake is a network service for generating unique ID numbers at high scale with some simple guarantees.The Game for Microsoft Dynamics CRM2011: A solution containing a framework for Microsoft Dynamics CRM2011 to enable gamification of the product in order to drive user adoption and business objectives.UCMA 4.0 Async Extension Methods: This collection of extension methods makes it easy for developers to use the async/await pattern for multithreaded development with UCMA 4.0.ValtechGitTfs: Sandbox project to test git-tfs with a TFS serverWarriorG: PokerWeb site bán thú cung: [TRY]XVIB 360: XBOX 360???????????????????????。

    Read the article

  • CodePlex Daily Summary for Thursday, August 30, 2012

    CodePlex Daily Summary for Thursday, August 30, 2012Popular ReleasesCoevery - Free CRM: Coevery 1.0: This is the first alpha release of Coevery.ExpressProfiler: Initial release of ExpressProfiler v1.2: This is initial release of ExpressProfilerNabu Library: 2012-08-29, 14: .Net Framework 4.0, .Net Framework 4.5 Debug and Release builds.Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom Sparse Kronecker product much more efficient (now leverages sparsity) Direct access to raw matrix storage implementations for advanced extensibility Now also separate package for signed core library with a strong name (we dropped strong names in v2.2.0) Also available as NuGet packages...Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: About this release This release consolidates AdventureWorks databases for SQL Server 2012, 2008R2 and 2008 versions to one page. Each zip file contains an mdf database file and ldf log file. This should make it easier to find and download AdventureWorks databases since all OLTP versions are on one page. There are no database schema changes. For each release of the product, there is a light-weight and full version of the AdventureWorks sample database. The light-weight version is denoted by ...ImageServer: v1.1: This is the first version releasedChristoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 Support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ After you build in Release mode the installable packages (source/install) can be found in the INSTALL folder now, within your module's folder, not the packages folder anymore Check out the blog post for all of the details about this release. http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...Home Access Plus+: v8.0: v8.0828.1800 RELEASE CHANGED TO BETA Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is full release, NO upgrade ZIP will be provided as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder and run your HAP+ install Documentation is supplied in the Web Zip The Quota Services require executing a script to register the service, this can be found in there install di...Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3406 (September 2012): New features: Extended ReflectionClass libxml error handling, constants DateTime::modify(), DateTime::getOffset() TreatWarningsAsErrors MSBuild option OnlyPrecompiledCode configuration option; allows to use only compiled code Fixes: ArgsAware exception fix accessing .NET properties bug fix ASP.NET session handler fix for OutOfProc mode DateTime methods (WordPress posting fix) Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012 New debugger engine, PHP-like debugging ...NougakuDoCompanion: v1.1.0: Add temp folder of local resource, Resize local resource. Change launch ruby commnadline args from rack to bundle. 
1.NougakuDoCompanion v1.1.0 cspkg.zip - cspkg and ServiceConfiguration.xml (small , medium, large, extra large vm) - include NougakudoSetupTool.exe and readme.txt 2.NougakuDoCompanion v1.1.0.zip - Source code. include NougakudoSetupTool.exe - include activerecord-sqlserver-adapter patch in paches folder. 3.Depends tools. - Windows Azure SDK for .NET June 2012(1.7SP1) - Windows ...WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.06: Whats New Added CKEditor 3.6.4 oEmbed Plugin can now handle short urls changes The Template File can now parsed from an xml file instead of js (More Info...) Style Sets can now parsed from an xml file instead of js (More Info...) Fixed Showing wrong Pages in Child Portal in the Link Dialog Fixed Urls in dnnpages Plugin Fixed Issue #6969 WordCount Plugin Fixed Issue #6973 File-Browser: Fixed Deleting of Files File-Browser: Improved loading time File-Browser: Improved the loa...MabiCommerce: MabiCommerce 1.0.1: What's NewSetup now creates shortcuts Fix spelling errors Minor enhancement to the Map window.VFPX: Data Explorer 3: This release is the first alpha release for DX3. Even though great care has been taken, the project manager highly recommends you work with test data and you regularly back up the DataExplorer.DBF found in your HOME(7) folder. New features are documented on the project home page. IMPORTANT Once installed, make sure to go to the Data Explorer Options page, click the Restore to Default button. This brings up a dialog asking if you want to maintain connections and customizations that were done...ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)....Facebook Web Parts for SharePoint 2010: Version 1.0.1 - WSP: SharePoint 2010 solution (WSP) Resolved a bug from Version 1.0 - WSP where user profile names would not properly update.Contactor: GSMContactorProgram V1.0 - Source Code: This is the source code for the program, For Visual Studio 2012 RCTouchInjector: TouchInjector 1.1: Version 1.1: fixed a bug with the autorun optionWinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...BlackJumboDog: Ver5.7.1: 2012.08.25 Ver5.7.1 (1)?????·?????LING?????????????? 
(2)SMTP???(????)????、?????\?????????????????????Visual Studio Team Foundation Server Branching and Merging Guide: v2 - Visual Studio 2012: Welcome to the Branching and Merging Guide Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has been through an independent technical review Documentation has been reviewed by the quality and recording team All critical bugs have been resolved Known Issues / Bugs Spelling, grammar and content revisions are in progress. Hotfix will be published.New ProjectsAd Specific Redirect Generator: Ad Specific Redirect Generatorargsv, command line argument processors: Two libraries to process command line options, one(argsvCPython) is written for Python applications and other one(the argsvCPP) is for C++ applications.AutomationML Export Import Mapper: This Tool supports the mapping between SystemUnitClass-Libraries, used to exchange data in the AutomationML Format between Exporters and Importers.BioPathSearch: Tool for probibalistic biochemical networks construction from input substrete to required product based on information from KEGG databasebuidingapp: i am buiding app but i not sure public because i don't code finish. then i upload code and pulish my application,. thank you for reading mynote. see you next tiCatalogo de Empresas y Productos: Este es el catalogo de empresas y productos asociadas a la produccion agricola en CRclarktestcodeplex: test codeplexDatabase Clean-up Engine - DataWashroom: Database Clean-up Rules EngineEdEx: edexExpense Management System: Expense Management SystemFinger Tracking with Kinect SDK for XBOX: This project explained step by step how to perform finger and hand tracking with the Kinect for XBOX with the official Kinect SDK.Flower - Workflow Engine: Flower is a simple yet powerful workflow engine allowing to develop workflows in C#.Grid: Copyright 2011 Badkid development. All rights reserved. Play Grid the the retro style arcade puzzle game. Join lines between the gids to create boxes to get pHeadSprings Assignment: A program to create custom tokens using class FizzBuzz programming. Ice Scream: IcescreamiD Commerce + Logistics: iD Commerce + Logistics is a company based outside of Chicago, Illinois specializing in fulfillment solutions and custom development.Mido: Mido is a simple utility that adds text or images watermarks to your photos, images and pictures.MLSTest: MLSTest is a Windows based software for the analysis of Multilocus Sequence Data for euckaryotic organisms My Project Foundation Library: A foundation library for asp.net web project, including easy-to-use data access layer and other utility code.MyTestProject001: ?????codeplex,??????????。MyTestProject2: My Test projectNEF - Native Extensibility Framework: NEF is an open source IoC extensibility framework targetting C++. It is modeled after the more useful features of MEF in C#.RazorSpy: A simple toy/tool for exploring the output of the Razor Parser for a particular document.Relay Command for WinRT: A simple RelayCommand for WinRT (sans and entire MVVM framework).Rocca. 
Store: Document storage with email capabilitiesScrabble Nights: Scrabble is a word game in which two to four players score points by forming words from individual lettered tiles on a game board marked with a 15-by-15 gridSlFrame: SlFrameSmart Data Access layer: WE ARE USING SMART DATA ACCESS LAYER MEANS ITS TIME TO FORGET ABOUT SYSTEM.DATA.SQLCLIENT NAMESPACE.SportSlot: This project allows you to search for available sport venuesSwitcher 2012: Allows for fast switching between source and header file in Visual Studio 2012testddtfs2908201201: satime tracking: coming soonTradePlatform.NET: TradePlatform.NET is addition to MetaTrader 4 client terminal which extends trading experience, MQL language and provides .NET world communication bridge.WaMa-SkyDriveClient: Project Description WaMa-SkyDriveClient is a Windows Skydrive-client, with Live authentication.Web Optimization: The ASP.NET Web optimization framework provides services to improve the performance of your ASP.NET Web applications.Web Pocket: Web tool for rapid application development of mobile software WinRT Toolkit: The purpose of WinRT Toolkit is to provide efficient and developer friendly instruments (API / Components...).Xcel Directory Service: Xcel Directory Service is an Excel 2010 add-in used to retrieve data from Active Directory using an import utility and user defined functions.zhangfacai: this project is based on ASP.NET MVC 4 and HTML5. It's a website to integrate public web services like amazon, live, facebook, etc. By using these services, mak

    Read the article

  • Search in Projects API

    - by Geertjan
    Today I got some help from Jaroslav Havlin, the creator of the new "Search in Projects API". Below are the steps to create a search provider that finds recently modified files, via a new tab in the "Find in Projects" dialog: Here's how to get to the above result. Create a new NetBeans module project named "RecentlyModifiedFilesSearch". Then set dependencies on these libraries: Search in Projects API Lookup API Utilities API Dialogs API Datasystems API File System API Nodes API Create and register an implementation of "SearchProvider". This class tells the application the name of the provider and how it can be used. It should be registered via the @ServiceProvider annotation.Methods to implement: Method createPresenter creates a new object that is added to the "Find in Projects" dialog when it is opened. Method isReplaceSupported should return true if this provider support replacing, not only searching. If you want to disable the search provider (e.g., there aren't required external tools available in the OS), return false from isEnabled. Method getTitle returns a string that will be shown in the tab in the "Find in Projects" dialog. It can be localizable. Example file "org.netbeans.example.search.ExampleSearchProvider": package org.netbeans.example.search; import org.netbeans.spi.search.provider.SearchProvider; import org.netbeans.spi.search.provider.SearchProvider.Presenter; import org.openide.util.lookup.ServiceProvider; @ServiceProvider(service = SearchProvider.class) public class ExampleSearchProvider extends SearchProvider { @Override public Presenter createPresenter(boolean replaceMode) { return new ExampleSearchPresenter(this); } @Override public boolean isReplaceSupported() { return false; } @Override public boolean isEnabled() { return true; } @Override public String getTitle() { return "Recent Files Search"; } } Next, we need to create a SearchProvider.Presenter. This is an object that is passed to the "Find in Projects" dialog and contains a visual component to show in the dialog, together with some methods to interact with it.Methods to implement: Method getForm returns a JComponent that should contain controls for various search criteria. In the example below, we have controls for a file name pattern, search scope, and the age of files. Method isUsable is called by the dialog to check whether the Find button should be enabled or not. You can use NotificationLineSupport passed as its argument to set a display error, warning, or info message. Method composeSearch is used to apply the settings and prepare a search task. It returns a SearchComposition object, as shown below. Please note that the example uses ComponentUtils.adjustComboForFileName (and similar methods), that modifies a JComboBox component to act as a combo box for selection of file name pattern. These methods were designed to make working with components created in a GUI Builder comfortable. Remember to call fireChange whenever the value of any criteria changes. 
Example file "org.netbeans.example.search.ExampleSearchPresenter": package org.netbeans.example.search; import java.awt.FlowLayout; import javax.swing.BoxLayout; import javax.swing.JComboBox; import javax.swing.JComponent; import javax.swing.JLabel; import javax.swing.JPanel; import javax.swing.JSlider; import javax.swing.event.ChangeEvent; import javax.swing.event.ChangeListener; import org.netbeans.api.search.SearchScopeOptions; import org.netbeans.api.search.ui.ComponentUtils; import org.netbeans.api.search.ui.FileNameController; import org.netbeans.api.search.ui.ScopeController; import org.netbeans.api.search.ui.ScopeOptionsController; import org.netbeans.spi.search.provider.SearchComposition; import org.netbeans.spi.search.provider.SearchProvider; import org.openide.NotificationLineSupport; import org.openide.util.HelpCtx; public class ExampleSearchPresenter extends SearchProvider.Presenter { private JPanel panel = null; ScopeOptionsController scopeSettingsPanel; FileNameController fileNameComboBox; ScopeController scopeComboBox; ChangeListener changeListener; JSlider slider; public ExampleSearchPresenter(SearchProvider searchProvider) { super(searchProvider, false); } /** * Get UI component that can be added to the search dialog. */ @Override public synchronized JComponent getForm() { if (panel == null) { panel = new JPanel(); panel.setLayout(new BoxLayout(panel, BoxLayout.PAGE_AXIS)); JPanel row1 = new JPanel(new FlowLayout(FlowLayout.LEADING)); JPanel row2 = new JPanel(new FlowLayout(FlowLayout.LEADING)); JPanel row3 = new JPanel(new FlowLayout(FlowLayout.LEADING)); row1.add(new JLabel("Age in hours: ")); slider = new JSlider(1, 72); row1.add(slider); final JLabel hoursLabel = new JLabel(String.valueOf(slider.getValue())); row1.add(hoursLabel); row2.add(new JLabel("File name: ")); fileNameComboBox = ComponentUtils.adjustComboForFileName(new JComboBox()); row2.add(fileNameComboBox.getComponent()); scopeSettingsPanel = ComponentUtils.adjustPanelForOptions(new JPanel(), false, fileNameComboBox); row3.add(new JLabel("Scope: ")); scopeComboBox = ComponentUtils.adjustComboForScope(new JComboBox(), null); row3.add(scopeComboBox.getComponent()); panel.add(row1); panel.add(row3); panel.add(row2); panel.add(scopeSettingsPanel.getComponent()); initChangeListener(); slider.addChangeListener(new ChangeListener() { @Override public void stateChanged(ChangeEvent e) { hoursLabel.setText(String.valueOf(slider.getValue())); } }); } return panel; } private void initChangeListener() { this.changeListener = new ChangeListener() { @Override public void stateChanged(ChangeEvent e) { fireChange(); } }; fileNameComboBox.addChangeListener(changeListener); scopeSettingsPanel.addChangeListener(changeListener); slider.addChangeListener(changeListener); } @Override public HelpCtx getHelpCtx() { return null; // Some help should be provided, omitted for simplicity. } /** * Create search composition for criteria specified in the form. */ @Override public SearchComposition<?> composeSearch() { SearchScopeOptions sso = scopeSettingsPanel.getSearchScopeOptions(); return new ExampleSearchComposition(sso, scopeComboBox.getSearchInfo(), slider.getValue(), this); } /** * Here we return always true, but could return false e.g. if file name * pattern is empty. */ @Override public boolean isUsable(NotificationLineSupport notifySupport) { return true; } } The last part of our search provider is the implementation of SearchComposition. 
This is a composition of various search parameters, the actual search algorithm, and the displayer that presents the results.Methods to implement: The most important method here is start, which performs the actual search. In this case, SearchInfo and SearchScopeOptions objects are used for traversing. These objects were provided by controllers of GUI components (in the presenter). When something interesting is found, it should be displayed (with SearchResultsDisplayer.addMatchingObject). Method getSearchResultsDisplayer should return the displayer associated with this composition. The displayer can be created by subclassing SearchResultsDisplayer class or simply by using the SearchResultsDisplayer.createDefault. Then you only need a helper object that can create nodes for found objects. Example file "org.netbeans.example.search.ExampleSearchComposition": package org.netbeans.example.search; public class ExampleSearchComposition extends SearchComposition<DataObject> { SearchScopeOptions searchScopeOptions; SearchInfo searchInfo; int oldInHours; SearchResultsDisplayer<DataObject> resultsDisplayer; private final Presenter presenter; AtomicBoolean terminated = new AtomicBoolean(false); public ExampleSearchComposition(SearchScopeOptions searchScopeOptions, SearchInfo searchInfo, int oldInHours, Presenter presenter) { this.searchScopeOptions = searchScopeOptions; this.searchInfo = searchInfo; this.oldInHours = oldInHours; this.presenter = presenter; } @Override public void start(SearchListener listener) { for (FileObject fo : searchInfo.getFilesToSearch( searchScopeOptions, listener, terminated)) { if (ageInHours(fo) < oldInHours) { try { DataObject dob = DataObject.find(fo); getSearchResultsDisplayer().addMatchingObject(dob); } catch (DataObjectNotFoundException ex) { listener.fileContentMatchingError(fo.getPath(), ex); } } } } @Override public void terminate() { terminated.set(true); } @Override public boolean isTerminated() { return terminated.get(); } /** * Use default displayer to show search results. */ @Override public synchronized SearchResultsDisplayer<DataObject> getSearchResultsDisplayer() { if (resultsDisplayer == null) { resultsDisplayer = createResultsDisplayer(); } return resultsDisplayer; } private SearchResultsDisplayer<DataObject> createResultsDisplayer() { /** * Object to transform matching objects to nodes. */ SearchResultsDisplayer.NodeDisplayer<DataObject> nd = new SearchResultsDisplayer.NodeDisplayer<DataObject>() { @Override public org.openide.nodes.Node matchToNode( final DataObject match) { return new FilterNode(match.getNodeDelegate()) { @Override public String getDisplayName() { return super.getDisplayName() + " (" + ageInMinutes(match.getPrimaryFile()) + " minutes old)"; } }; } }; return SearchResultsDisplayer.createDefault(nd, this, presenter, "less than " + oldInHours + " hours old"); } private static long ageInMinutes(FileObject fo) { long fileDate = fo.lastModified().getTime(); long now = System.currentTimeMillis(); return (now - fileDate) / 60000; } private static long ageInHours(FileObject fo) { return ageInMinutes(fo) / 60; } } Run the module, select a node in the Projects window, press Ctrl-F, and you'll see the "Find in Projects" dialog has two tabs, the second is the one you provided above:

    Read the article

  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html

The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. This was then used in the following way:

protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    synchronized (getClassLoadingLock(name)) {
        // First, check if the class has already been loaded
        Class<?> c = findLoadedClass(name);
        if (c == null) {
            long t0 = System.nanoTime();
            try {
                if (parent != null) {
                    c = parent.loadClass(name, false);
                } else {
                    c = findBootstrapClassOrNull(name);
                }
            } catch (ClassNotFoundException e) {
                // ClassNotFoundException thrown if class not found
                // from the non-null parent class loader
            }
            if (c == null) {
                // If still not found, then invoke findClass in order
                // to find the class.
                long t1 = System.nanoTime();
                c = findClass(name);
                // this is the defining class loader; record the stats
                sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0);
                sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1);
                sun.misc.PerfCounter.getFindClasses().increment();
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}

Where getClassLoadingLock simply does:

protected Object getClassLoadingLock(String className) {
    Object lock = this;
    if (parallelLockMap != null) {
        Object newLock = new Object();
        lock = parallelLockMap.putIfAbsent(className, newLock);
        if (lock == null) {
            lock = newLock;
        }
    }
    return lock;
}

This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per classloader. As per the code above, under normal delegation the current classloader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded and then delegates to its parent, and so on until the boot loader is invoked, for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew beforehand which loader would actually load the class, the locking would only need to be performed in that loader. As it stands, the locking is completely unnecessary for all classes loaded by the boot loader. Secondly, once loading has completed and findClass will return the class, the lock and the map entry are completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map.

It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given that this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case, it is likely that the classloader would need a different means to protect itself rather than a lock per class.
Ultimately, when a class file is located and the class has to be loaded, defineClass is called, which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes for guarding its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile the second attempt will block trying to acquire the lock. Once the class is loaded, the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading, and they both continue successfully.

Consider if no lock was acquired in the classloader. Both threads will eventually locate the file for the class, read in the bytecodes and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object!)

There are a number of obvious things we can try to solve this problem, and they basically take three forms:

1. Remove the need for locking. This might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent - simply returning an existing Class rather than triggering an exception.
2. Increase the coarseness of locking to reduce the number of lock objects and/or maps. For example, using a single shared lockMap instead of a per-loader lockMap.
3. Reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (e.g., remove after loading, use weak references to the lock objects and clean up the map periodically).

There are pros and cons to each of these approaches. Unfortunately a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and that they are accessible to the application code (indirectly through the classloader if it exposes them - which a custom loader might do - and regardless, they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we cannot implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly, we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification, because it can be reasoned that the following assertion should hold true:

Object lock1 = loader.getClassLoadingLock(name);
loader.loadClass(name);
Object lock2 = loader.getClassLoadingLock(name);
assert lock1 == lock2;

Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded.
Even then there are caveats: for example, if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads, and that could be expensive (this is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive). Even option 2 might need some wordsmithing on the specification, because the specification for getClassLoadingLock states "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard loading of multiple classes, possibly across different class loaders?

So it seems that changing the specification will be inevitable if we wish to do something here. In which case let's go for something that more cleanly defines what we want to be doing: fully concurrent class-loading.

Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics.

Proposal: Fully Concurrent ClassLoaders

The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader", or fully concurrent loader for short. A fully concurrent loader uses no synchronization in loadClass, and the VM uses the "parallel define class" mechanism. For a fully concurrent loader, getClassLoadingLock() can return null (or perhaps not - it doesn't matter, as we won't use the result anyway). At present we have not made any changes to this method.

All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code re-design, as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders.

Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors) and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications.

Preliminary webrevs:
http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/
http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/

Please direct all comments to the mailing list [email protected].

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties:

- Streams with 'open' inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName))
- Streams with 'partially bound' inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …))
- Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC)

The stream may be embedded (where Server.Create is used) or remote (where Server.Connect is used). When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are:

- IStreamable<>, which logically represents a temporal stream.
- IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type.

The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types!

SelectMany

StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL's "CROSS APPLY" operator):

static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector)

It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn't supported, its type in StreamInsight 1.0 – 2.0 wasn't carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers:

static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector)

Using Unit as the type for the parameter accurately reflects StreamInsight's capabilities:

static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector)

For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads:

from x in xs
from y in ys
select f(x, y)

Top-K

The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows:

(from win in xs.SnapshotWindow()
 from x in win
 orderby x.A
 select x.B).Take(k)

The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension.
The following compiles but fails at runtime:

(from win in xs.SnapshotWindow()
 from x in win
 orderby x.A
 select win).Take(k)

The Take operator in 2.1 is applied to the window rather than the stream:

Before:
(from win in xs.SnapshotWindow()
 from x in win
 orderby x.A
 select x.B).Take(k)

After:
from win in xs.SnapshotWindow()
from b in
    (from x in win
     orderby x.A
     select x.B).Take(k)
select b

Multicast

We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways:

Implicit:
var ys = from x in xs
         where x.A > 1
         select x;
var zs = from y1 in ys
         from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
         select y1 + y2;

Explicit:
var ys = from x in xs
         where x.A > 1
         select x;
var zs = ys.Multicast(ys1 =>
    from y1 in ys1
    from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
    select y1 + y2);

Notice that the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation.

Default window policies

Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime.

Before:
xs.SnapshotWindow(
    WindowInputPolicy.ClipToWindow,
    SnapshotWindowInputPolicy.Clip)
After:
xs.SnapshotWindow()

Before:
xs.TumblingWindow(
    TimeSpan.FromSeconds(1),
    HoppingWindowOutputPolicy.PointAlignToWindowEnd)
After:
xs.TumblingWindow(
    TimeSpan.FromSeconds(1))

Before:
xs.TumblingWindow(
    TimeSpan.FromSeconds(1),
    HoppingWindowOutputPolicy.ClipToWindowEnd)
After:
Not supported

LeftAntiJoin

Representation of LASJ as a correlated sub-query in the LINQ surface is problematic, as the StreamInsight engine does not support correlated sub-queries (see discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported 'IsEmpty()' operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead:

Before:
from x in xs
where
    (from y in ys
     where x.A > y.B
     select y).IsEmpty()
select x
After:
xs.LeftAntiJoin(ys, (x, y) => x.A > y.B)

Before:
from x in xs
where
    (from y in ys
     where x.A == y.B
     select y).IsEmpty()
select x
After:
xs.LeftAntiJoin(ys, x => x.A, y => y.B)

ApplyWithUnion

The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads:

Before:
xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count())
After:
xs.GroupBy(x => x.A).SelectMany(
    gs =>
    from win in gs.SnapshotWindow()
    select win.Count())

Before:
xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count(), r => new { r.Key, Count = r.Payload })
After:
from x in xs
group x by x.A into gs
from win in gs.SnapshotWindow()
select new { gs.Key, Count = win.Count() }

Alternate UDO syntax

The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities.
Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:

from win in xs.SnapshotWindow()
from y in MyUdo(win)
select y

Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:

from win in xs.SnapshotWindow()
select MyUdo(win)

The "many-or-one" confusion is exemplified by the following example, which compiles but fails at runtime:

from win in xs.SnapshotWindow()
select MyUdo(win) + win.Count()

The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one.

Original syntax:
from win in xs.SnapshotWindow()
select win.UdoProxy(1)

New alternate syntax:
from win in xs.SnapshotWindow()
from y in win.UserDefinedOperator(() => new Udo(1))
select y

-or-

from win in xs.SnapshotWindow()
from y in win.UdoMacro(1)
select y

Notice that this formulation also sidesteps the dynamic type pitfalls of the existing "proxy method" approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method.

UDSO syntax

UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface.

Before:
xs.Scan(new Udso())
After:
xs.Scan(() => new Udso())

Name changes

- ShiftEventTime => AlterEventStartTime: the alter event lifetime overload taking a new start time value has been renamed.
- CountByStartTimeWindow => CountWindow
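To see several of these changes working together, here is a small composite query written against the 2.1 surface described above. This is a sketch only: the source xs stands in for any IQStreamable<> with an integer payload field A, and is not taken from the original post.

// Sketch: windowed top-k on the new surface, combining the default-policy
// TumblingWindow overload with the window-scoped Take described above.
var topThree =
    from win in xs.TumblingWindow(TimeSpan.FromSeconds(1))
    from b in
        (from x in win
         orderby x.A descending
         select x.A).Take(3)
    select b;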

    Read the article

  • CodePlex Daily Summary for Saturday, August 23, 2014

CodePlex Daily Summary for Saturday, August 23, 2014

Popular Releases

DIII Save Editor: ROS Alpha 1.2.14.100: initial ROS alpha release; please report all bugs

SEToolbox: SEToolbox 01.044.014 Release 2: Fixed Ship name not saving. Fixed broken cubes view bug. Fixed cast VRage.MyFixedPoint error when opening games with Meteors. Added checkbox when importing 3D model to Export ship, to fill it as solid.

CS-Script Source: Release v3.8.5: Fixed problem with the warnings getting hidden in case of successful compilation. cs-script.7z - CS-Script Suite (binaries, documentation, samples); cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples); cs-scriptDocs.7z - CS-Script Documentation

Outlook 2013 Backup Add-In: Outlook Backup Add-In 1.3: Changelog for the new version: Added button in config window to reset the last backup time (this will trigger the backup after closing Outlook). Minimum interval set to 0 (backup at each closing of Outlook). Catch exception when data store entry is corrupt. Added two parameters (prefix and suffix) to automatically rename the backup file. Updated VSTO runtime to 10.0.50325. Upgraded project to Visual Studio 2013. Added optional command to run after backup (e.g. pack backup files, ...) Add...

babelua: 1.6.7.0: V1.6.7.0 - 2014.8.21. New feature: add a file search window (Ctrl+1 or Alt+L), like the file search in VC Assistant. Stability improvements: performance improvement when BabeLua loads/unloads; performance improvement when the debugger loads Lua files.

Open NFe: RDI Open NFe 3.0 (alpha): Update for the NFe 3.10 layout.

MSSQL Deployment Tool: Microsoft SQL Deploy Tool v1.3.1: MicrosoftSqlDeployTool: v1.3.1.38348. What's changed? Updated namespace and assembly name. Bug fixing.

SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements. Bug fixes. Stores auth method and user name. Moved experimental settings to Advanced box.

CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of Second Life.

HDD Guardian: HDD Guardian 0.6.1: New: package now includes smartctl 6.3. Removed: standard notification e-mail; you now have to set your mail server to send e-mail alerts. Bugfix: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.

VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: changes NEW: Added support for 'MadImage.org' links NEW: Added support for 'ImgSpot.org' links NEW: Added support for 'ImgClick.net' links NEW: Added support for 'Imaaage.com' links NEW: Added support for 'Image-Bugs.com' links NEW: Added support for 'Pictomania.org' links NEW: Added support for 'ImgDap.com' links NEW: Added support for 'FileSpit.com' links FIXED: 'ImgSee.me' links

Exchange Database Recovery With and Without Log Files is Possible: Exchange Recovery Application: This Exchange Recovery Software comes with a free trial edition which helps users to inspect the working capability of the recovery process. Download the free demo version and repair inaccessible mailboxes from an EDB file without any obstructions.

Linq 4 Javascript: Version 2.4: Minor changes made: Added Count() and Count(with where clause). Distinct will now use a dictionary instead of a custom dictionary object. Organized the unit tests; the variable names will actually make sense and won't be 2 letters. SelectMany will now use the queryable logic.

Office / SharePoint 2013 Continuous Integration with TFS 2012: 1.1.0.1: Fixed the following issues in TfsDropDownloader: Updated to make it work with VS 2013 (including VS 2013 updates) in addition to VS 2012. Extended the timeout for downloading drops from 100 seconds to 1 hour. Added more troubleshooting information in the output.

CRM Solution CommandLine Helper: CRM Solution Cmd Helper 1.0.0.4: Includes: Bug fix - Export argument validation: check directory path existence (thanks mszlapa)

Office To PDF: OfficeToPDF 1.4: Adds support for additional file types: mpp (requires MS Project >= 2010); vsdx, vsdm (requires MS Visio >= 2013); csv; odt, odc, odp; pot, potm, potx. Improves stability and clean removal of COM objects. Adds new flags: /verbose - be more verbose when running; /markup - allow document markup in the PDF when converting Word documents; /excel_max_rows - adds a maximum limit on the number of rows a worksheet can contain when converting Excel documents; /pdfa - crea...

MongoRepository: MongoRepository 1.6.6: Installing using NuGet (recommended): MongoRepository is now a NuGet package for your convenience. Step-by-step instructions can be found in Installing MongoRepository using NuGet. Installing using binaries: you can also choose to download the binaries instead of using NuGet. There are 2 downloads: mongorepository_full.x.x.x contains all binaries required (MongoRepository and the 10gen C# driver); mongorepository.x.x.x contains only the MongoRepository binary. Make sure you reference MongoReposit...

Cryptography Enumerations JavaScript Shell: Cryptography Enumerations JavaScript Shell 1.0.0: First Release

CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.

GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml, GW2PAO.EventSettings.xml, GW2PAO.WvWSettings.xml, GW2PAO.ZoneCompletionSettings.xml. New Features: Added new "No...

New Projects

3D Projectile: A 3D Projectile program showing the motion of a ball
ASP.NET Web Application Starter Kit: This project template is an ASP.NET solution skeleton for a typical web application or single-page application (SPA).
Behaving - Behaviour Tree for C#: Behaviour is a Behaviour Tree implementation in C#.
Kinect Stream Saver Application _SDK 2: This application is developed based on a sample called "ColorBasics-D2D C++" developed by Microsoft Corporation. (Compatible with SDK 2: K4W v2 Dev Preview)
MVC Bootstrap Paginator: The MVC Bootstrap Paginator is lightweight and easy to use. It works out of the box and requires minimal configuration.
NuGet Reference Switcher: NuGet Reference Switcher is a Visual Studio extension which can be used to automatically switch NuGet DLL references to project references and vice versa.
QKit: A WP8.1 library that provides various controls and classes that will help developers quickly and easily augment their apps to behave more like native apps.
SharePoint 2013 Document Icon Linker: Links the document icon in library views to the document.
SharePoint Autocomplete People Search: SharePoint People Search
WADM: WADM

    Read the article

  • CodePlex Daily Summary for Tuesday, August 19, 2014

CodePlex Daily Summary for Tuesday, August 19, 2014

Popular Releases

VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: changes NEW: Added support for 'MadImage.org' links NEW: Added support for 'ImgSpot.org' links NEW: Added support for 'ImgClick.net' links NEW: Added support for 'Imaaage.com' links NEW: Added support for 'Image-Bugs.com' links NEW: Added support for 'Pictomania.org' links NEW: Added support for 'ImgDap.com' links NEW: Added support for 'FileSpit.com' links FIXED: 'ImgSee.me' links

Linq 4 Javascript: Version 2.4: Minor changes made: Added Count() and Count(with where clause). Distinct will now use a dictionary instead of a custom dictionary object. Organized the unit tests; the variable names will actually make sense and won't be 2 letters. SelectMany will now use the queryable logic.

Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.

CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.

GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml, GW2PAO.EventSettings.xml, GW2PAO.WvWSettings.xml, GW2PAO.ZoneCompletionSettings.xml. New Features: Added new "No...

OpenCppCoverage: OpenCppCoverage 0.9.1: Add Jenkins support. Command line arguments can be placed inside a config file. If you do not have Visual Studio C++ 2013 you need to download the redistributable packages: http://www.microsoft.com/en-us/download/details.aspx?id=40784

Easy Backup Windows Service: Release 2.0 with CU: Fix log error when "To" directory does not exist in the file system. Force run program as administrator by default. Add 'everyday' schedule element. Update solution to VS 2013.

Easy Backup Application: Release 2.0 with CU: Fix log error when "To" directory does not exist in the file system. Fix app location initialization. Force run program as administrator by default. Update solution to VS 2013.

TEBookConverter: 1.5: Added: Turkish and French translations. Added: A few interface changes. Removed: Skin

Dynamulet: Dynamulet v0.1: DynamoDB Transaction Server v0.1

Console parallel nunit tests runner: ConsoleUnitTestsRunner 1.03: bugfixing

Fluentx: Fluentx v1.5.3: Added a few more extension methods.

fastJSON: v2.1.2: 2.1.2 - bug fix: circular references

JPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728. JPush REST API Version: v3. JPush Documentation Reference. .NET framework: v4.0 or above. Sample class: JPushClientV3. 2014 August 15th.

SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.

Google .Net API: Drive.Sample: Instructions for the Google .NET Client API – Drive.Sample. Browse source: http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.Sample, or main file Program.cs: http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samples 1. Checkout Instructions. Prerequisites: Install Visual Studio, and Mercurial (http://mercurial.selenic.com/). ...

FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1: (Chinese release notes lost to a character-encoding error; surviving fragments reference TemplateField ExpandOnDoubleClick/ExpandOnEnter/ExpandToSelectRow (LZOM-5932), BodyPadding values "5"/"5 10" versus "5px"/"5px 10px", TriggerBox with EnableEdit=false (Jango_Jing-5450), DataKeyNames (yygy-6002), PageManager AutoSizePanelID (yygy-6008), FState handling, and OnClientClick return behavior.)

DNN CMS Platform: 07.03.02: Major highlights: Fixed backwards compatibility issue with 3rd party control panels. Fixed issue in the drag and drop functionality of the File Uploader in IE 11 and Safari. Fixed issue where users were able to create pages with the same name. Fixed issue that affected older versions of DNN that do not include the maxAllowedContentLength during upgrade. Fixed issue that stopped some skins from being upgraded to newer versions. Fixed issue that randomly showed an unexpected error during us...

WordMat: WordMat for Mac: WordMat for Mac has a few limitations compared to the Windows version - Graph is not supported (Gnuplot, GeoGebra and Excel work) - Units are not supported yet (coming up). The Mac version is not yet as well tested as the Windows version.

MFCMAPI: August 2014 Release: Build: 15.0.0.1042. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010/2013 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook Badge

New Projects

angle.works: Angle.works sample solution
Automatically Update CMS from SCOM: Microsoft Central Management Server (CMS) was released with SQL Server 2008; however keeping the server list up-to-date is challenging.
BLE for NETMF: Class library targeted to .NET Micro Framework for BLE support (both central and peripheral profiles).
CIIS: ciis
Clipboard Calculator: Simple tool to calculate numbers from the clipboard. Useful for SQL Server Management Studio users.
CYCLOPS: CYCLOPS is a project that seeks to aid the development and testing of ANPR software. CYCLOPS provides a virtual camera and virtual frame grabber for image proce...
demo4: demo4
eAsset: Web based Fixed Asset Management System
EFML-Runtime: Create applications with xml, css, js
feeadmin: Complete fee management
Final Fantasy VI Randomizer: This is a simple program that randomizes skills, items, and characters in Final Fantasy VI. It is used for racing the game.
Object Editor: Object Editor is a visual editor which allows you to edit any properties of any class and to simplify the changing and displaying of a class instance.
pbb: Pinoy Badminton Buddies Web Application
RubyFeed: RubyFeed is a stand-alone HTML5 page for Internet Explorer on Windows 8 and above that makes it easy to view and manage your Windows RSS Platform feeds.

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
Recently I was working on updating a legacy application to MVC 4 that included free-form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved.

Then the client sprang another requirement on me: Oh, by the way, we have a bunch of customers (real estate agents) who need to post complete HTML documents. Uh oh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway.

There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that came even close to working for what I needed. Specifically we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work, and so on. While the 'legit' HTML posted by these agents is basic in nature, it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution.

I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists make sense in simple scenarios where you might allow manual HTML input; when you need to allow a larger array of HTML functionality, a blacklist is probably easier to manage, as the vast majority of elements and attributes could be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases.

The Microsoft Web Protection Library (AntiXSS)

My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML encoding and sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically, the Sanitation class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
To give you an example of what didn't work for me with the library, here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:

var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " +
            "<a href='http://west-wind.com'>West Wind</a> site. " +
            "<img src='http://west-wind.com/images/new.gif' /> ";

and the code to sanitize it with the AntiXSS Sanitizer class:

@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value))

This produced a not-so-useful sanitized string:

Here we go. Visit the <a>West Wind</a> site.

While it removed the <script> tag (good), it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the whitelist, nor is there code available in this 'open source' library on CodePlex.

Using Html Agility Pack for HTML Parsing

The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the Html Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object, which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model over full HTML documents or HTML fragments that lets you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before, you'll feel right at home with the Agility Pack.

Blacklist-based HTML Parsing to strip XSS Code

For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument about what's better: a whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so the blacklist includes the obvious <script> tag plus any tag that allows loading of external content, including <iframe>, <object>, <embed> and <link> etc. <form> is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes, or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to.
Here's my HtmlSanitizer implementation:

using System.Collections.Generic;
using System.IO;
using System.Xml;
using HtmlAgilityPack;

namespace Westwind.Web.Utilities
{
    public class HtmlSanitizer
    {
        public HashSet<string> BlackList = new HashSet<string>()
        {
            { "script" }, { "iframe" }, { "form" }, { "object" },
            { "embed" }, { "link" }, { "head" }, { "meta" }
        };

        /// <summary>
        /// Cleans up an HTML string and removes HTML tags in blacklist
        /// </summary>
        public static string SanitizeHtml(string html, params string[] blackList)
        {
            var sanitizer = new HtmlSanitizer();
            if (blackList != null && blackList.Length > 0)
            {
                sanitizer.BlackList.Clear();
                foreach (string item in blackList)
                    sanitizer.BlackList.Add(item);
            }
            return sanitizer.Sanitize(html);
        }

        /// <summary>
        /// Cleans up an HTML string by removing elements
        /// on the blacklist and all elements that start
        /// with onXXX.
        /// </summary>
        public string Sanitize(string html)
        {
            var doc = new HtmlDocument();
            doc.LoadHtml(html);
            SanitizeHtmlNode(doc.DocumentNode);

            //return doc.DocumentNode.WriteTo();
            string output = null;

            // Use an XmlTextWriter to create self-closing tags
            using (StringWriter sw = new StringWriter())
            {
                XmlWriter writer = new XmlTextWriter(sw);
                doc.DocumentNode.WriteTo(writer);
                output = sw.ToString();

                // strip off XML doc header
                if (!string.IsNullOrEmpty(output))
                {
                    int at = output.IndexOf("?>");
                    output = output.Substring(at + 2);
                }
                writer.Close();
            }
            doc = null;
            return output;
        }

        private void SanitizeHtmlNode(HtmlNode node)
        {
            if (node.NodeType == HtmlNodeType.Element)
            {
                // check for blacklist items and remove
                if (BlackList.Contains(node.Name))
                {
                    node.Remove();
                    return;
                }

                // remove CSS Expressions and embedded script links
                if (node.Name == "style")
                {
                    if (string.IsNullOrEmpty(node.InnerText))
                    {
                        if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                            node.ParentNode.RemoveChild(node);
                    }
                }

                // remove script attributes
                if (node.HasAttributes)
                {
                    for (int i = node.Attributes.Count - 1; i >= 0; i--)
                    {
                        HtmlAttribute currentAttribute = node.Attributes[i];

                        var attr = currentAttribute.Name.ToLower();
                        var val = currentAttribute.Value.ToLower();

                        // remove event handlers
                        if (attr.StartsWith("on"))
                            node.Attributes.Remove(currentAttribute);

                        // remove script links
                        else if (
                            //(attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                            val != null &&
                            val.Contains("javascript:"))
                            node.Attributes.Remove(currentAttribute);

                        // Remove CSS Expressions
                        else if (attr == "style" &&
                                 val != null &&
                                 (val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")))
                            node.Attributes.Remove(currentAttribute);
                    }
                }
            }

            // Look through child nodes recursively
            if (node.HasChildNodes)
            {
                for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                {
                    SanitizeHtmlNode(node.ChildNodes[i]);
                }
            }
        }
    }
}

Please note: Use this as a starting point only for your own parsing, and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route.
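For completeness, here's a quick usage sketch of the static entry point in the class above (the sample input string is illustrative, not from the post):

// Usage sketch for the HtmlSanitizer class above; the input string is made up.
string dirty = "<b>Hello</b><script>alert('xss')</script><a href='javascript:bad()'>link</a>";

// Sanitize with the default blacklist:
string clean = Westwind.Web.Utilities.HtmlSanitizer.SanitizeHtml(dirty);

// Or pass a custom blacklist - note this replaces the default list entirely:
string custom = Westwind.Web.Utilities.HtmlSanitizer.SanitizeHtml(dirty, "script", "iframe", "embed");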
The HtmlSanitizer code above is semi-generic, allowing full-featured HTML fragments while disallowing only script-related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML-style self-closing tags, which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable, either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers.

What a Pain

To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that came even close to working for me, or even provided a working base. For the project I was working on I had no choice, and I'm sharing the code here merely as a baseline to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor, or possibly a third party that provides security tools. Luckily for my application we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small - this is not a wide-open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like Markdown or even a good HTML edit control that can provide some limits on what types of input are allowed. Alas, in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-)

Resources:
Code on GitHub
Html Agility Pack
XSS Cheat Sheet
XSS Prevention Cheat Sheet
Microsoft Web Protection Library (AntiXss)
StackOverflow Links:
http://stackoverflow.com/questions/341872/html-sanitizer-for-net
http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Security, HTML, ASP.NET, JavaScript

    Read the article

  • Adding Client Validation To DataAnnotations DataType Attribute

    - by srkirkland
The System.ComponentModel.DataAnnotations namespace contains a validation attribute called DataTypeAttribute, which takes an enum specifying what data type the given property conforms to. Here are a few quick examples:

public class DataTypeEntity
{
    [DataType(DataType.Date)]
    public DateTime DateTime { get; set; }

    [DataType(DataType.EmailAddress)]
    public string EmailAddress { get; set; }
}

This attribute comes in handy when using ASP.NET MVC, because the type you specify will determine what "template" MVC uses. Thus, for the DateTime property, if you create a partial in Views/[loc]/EditorTemplates/Date.ascx (or cshtml for razor), that view will be used to render the property when using any of the Html.EditorFor() methods.

One thing that the DataType() validation attribute does not do is any actual validation. To see this, let's take a look at the EmailAddress property above. It turns out that regardless of the value you provide, the entity will be considered valid:

//valid
new DataTypeEntity {EmailAddress = "Foo"};

Hmmm. Since DataType() doesn't validate, that leaves us with two options: (1) create our own attributes for each data type to validate, like [Date], or (2) add validation into the DataType attribute directly. In this post, I will show you how to hook up client-side validation to the existing DataType() attribute for a desired type. From there, adding server-side validation would be a breeze and even writing a custom validation attribute would be simple (more on that in future posts).

Validation All The Way Down

Our goal will be to leave our DataTypeEntity class (from above) untouched, requiring no reference to System.Web.Mvc. Then we will make an ASP.NET MVC project that allows us to create a new DataTypeEntity and hook up automatic client-side date validation using the suggested "out-of-the-box" jquery.validate bits that are included with ASP.NET MVC 3. For simplicity I'm going to focus only on the DateTime field, but the concept is generally the same for any other DataType.

Building a DataTypeAttribute Adapter

To start, we will need to build a new validation adapter that we can register using ASP.NET MVC's DataAnnotationsModelValidatorProvider.RegisterAdapter() method.
This method takes two Type parameters; the first is the attribute we are looking to validate with and the second is an adapter that should subclass System.Web.Mvc.ModelValidator. Since we are extending DataAnnotations we can use the subclass of ModelValidator called DataAnnotationsModelValidator<>. This takes a generic argument of type DataAnnotations.ValidationAttribute, which, lucky for us, means the DataTypeAttribute will fit in nicely. So starting from there and implementing the required constructor, we get:

public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute>
{
    public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context, DataTypeAttribute attribute)
        : base(metadata, context, attribute) { }
}

Now you have a full-fledged validation adapter, although it doesn't do anything yet. There are two methods you can override to add functionality: IEnumerable<ModelValidationResult> Validate(object container) and IEnumerable<ModelClientValidationRule> GetClientValidationRules(). Adding logic to the server-side Validate() method is pretty straightforward, and for this post I'm going to focus on GetClientValidationRules().

Adding a Client Validation Rule

Adding client validation is now incredibly easy because jquery.validate is very powerful and already comes with a ton of validators (including date and regular expressions for our email example). Teamed with the new unobtrusive validation javascript support, we can make short work of our ModelClientValidationDateRule:

public class ModelClientValidationDateRule : ModelClientValidationRule
{
    public ModelClientValidationDateRule(string errorMessage)
    {
        ErrorMessage = errorMessage;
        ValidationType = "date";
    }
}

If your validation has additional parameters, you can use the ValidationParameters IDictionary<string,object> to include them. There is a little bit of conventions magic going on here, but the distilled version is that we are defining a "date" validation type, which will be included as html5 data-* attributes (specifically data-val-date). Then jquery.validate.unobtrusive takes this attribute and basically passes it along to jquery.validate, which knows how to handle date validation.
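As an aside: the post focuses on GetClientValidationRules(), but if you did want a server-side counterpart for, say, the EmailAddress case, a hedged sketch might look like the following. The override shape comes from the Validate(object container) method mentioned above; the email check and its simplistic regex are my own assumptions, not part of the original post.

// Hypothetical server-side check inside DataTypeAttributeAdapter (not from the post).
// Requires: using System.Collections.Generic; using System.Text.RegularExpressions;
public override IEnumerable<ModelValidationResult> Validate(object container)
{
    var raw = Metadata.Model as string;
    if (Attribute.DataType == DataType.EmailAddress &&
        !string.IsNullOrEmpty(raw) &&
        !Regex.IsMatch(raw, @"^[^@\s]+@[^@\s]+\.[^@\s]+$"))  // deliberately simple placeholder pattern
    {
        yield return new ModelValidationResult
        {
            Message = Attribute.FormatErrorMessage(Metadata.GetDisplayName())
        };
    }
}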
Finishing our DataTypeAttribute Adapter

Now that we have a model client validation rule, we can return it in the GetClientValidationRules() method of our DataTypeAttributeAdapter created above. Basically I want to say: if DataType.Date was provided, then return the date rule with a given error message (using ValidationAttribute.FormatErrorMessage()). The entire adapter is below:

public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute>
{
    public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context, DataTypeAttribute attribute)
        : base(metadata, context, attribute) { }

    public override System.Collections.Generic.IEnumerable<ModelClientValidationRule> GetClientValidationRules()
    {
        if (Attribute.DataType == DataType.Date)
        {
            return new[] { new ModelClientValidationDateRule(Attribute.FormatErrorMessage(Metadata.GetDisplayName())) };
        }

        return base.GetClientValidationRules();
    }
}

Putting it all together

Now that we have an adapter for the DataTypeAttribute, we just need to tell ASP.NET MVC to use it. The easiest way to do this is to use the built-in DataAnnotationsModelValidatorProvider by calling RegisterAdapter() in your global.asax startup method:

DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute), typeof(DataTypeAttributeAdapter));

Show and Tell

Let's see this in action using a clean ASP.NET MVC 3 project. First make sure to reference the jquery, jquery.validate and jquery.validate.unobtrusive scripts that you will need for client validation. Next, let's make a model class (note we are using the same built-in DataType() attribute that comes with System.ComponentModel.DataAnnotations).
public class DataTypeEntity
{
    [DataType(DataType.Date, ErrorMessage = "Please enter a valid date (ex: 2/14/2011)")]
    public DateTime DateTime { get; set; }
}

Then we make a create page with a strongly-typed DataTypeEntity model; the form section is shown below (notice we are just using EditorForModel):

@using (Html.BeginForm())
{
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Fields</legend>

        @Html.EditorForModel()

        <p>
            <input type="submit" value="Create" />
        </p>
    </fieldset>
}

The final step is to register the adapter in our global.asax file:

DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute), typeof(DataTypeAttributeAdapter));

Now we are ready to run the page. Looking at the datetime field's html, we see that our adapter added some data-* validation attributes:

<input type="text" value="1/1/0001" name="DateTime" id="DateTime" data-val-required="The DateTime field is required." data-val-date="Please enter a valid date (ex: 2/14/2011)" data-val="true" class="text-box single-line valid">

Here data-val-required was added automatically because DateTime is non-nullable, and data-val-date was added by our validation adapter. Now if we try to add an invalid date, our custom error message is displayed via client-side validation as soon as we tab out of the box. If we didn't include a custom validation message, the default DataTypeAttribute "The field {0} is invalid" would have been shown (of course we can change the default as well). Note we did not specify server-side validation, but in this case we don't have to, because an invalid date will cause a server-side error during model binding.
Conclusion

I really like how easy it is to register new data annotations model validators, whether they are your own or, as in this post, supplements to existing validation attributes. I'm still debating whether adding the validation directly in the DataType attribute is the correct place to put it, versus creating a dedicated "Date" validation attribute, but it's nice to know either option is available and, as we've seen, simple to implement.

I'm also working through the nascent stages of an open source project that will create validation attribute extensions to the existing data annotations providers using similar techniques as seen above (examples: Email, Url, EqualTo, Min, Max, CreditCard, etc). Keep an eye on this blog and subscribe to my twitter feed (@srkirkland) if you are interested in announcements.

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing search platform, there are times when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search on your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find; this typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned. As the user progressively restricts the search criteria by selecting additional criteria to search against, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price ranges, or find products they hadn't initially thought of or were looking for, in a bid to enhance cross-sell in the retail environment.

The second advantage of this type of search, from a business perspective, is to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing data-profiling, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New, and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sell or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in.

This type of search is extremely beneficial in e-commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat; specifically, I have used the SolrNet C# library to interface to the Java platform. Also, I am not going to talk about configuring Solr indexing - but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, apply an XSLT transform to the file to make it conform to Solr, and use a simple HTTP POST to send it to the search engine for indexing.
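As a rough illustration of that indexing hand-off, the HTTP POST to Solr could be as simple as the following sketch. The update URL and file path are placeholder assumptions, not values from this article:

// Hypothetical indexing step: POST the XSLT-transformed catalog XML to Solr.
// The URL and file path below are illustrative placeholders only.
using (var client = new System.Net.WebClient())
{
    client.Headers[System.Net.HttpRequestHeader.ContentType] = "text/xml";
    byte[] body = System.IO.File.ReadAllBytes(@"C:\exports\catalog-solr.xml");
    client.UploadData("http://localhost:8080/solr/update?commit=true", "POST", body);
}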
Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library, as sequence components need to be strongly named to be deployed. Once you are inside your new project, add a new class file and add a reference to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base object Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method. Your screen will then look something similar to this:

As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation will be the object in which we receive our search parameters, the cache allows access to the Commerce Server cache so we can store regularly accessed information, and the response object is the object on which we will return the result of our search.

Inside this method is simply where we are going to inject our logic for our third party search platform. As I am not going to explain the inner workings of actually making a Solr call, I'll simply provide the sample code here. I would highly recommend, however, looking at the SolrNet wiki, as they have some great explanations of how the API works.

What you will find, however, is that there are some further extensions required when attempting to integrate a custom search provider. Firstly, out of the box the CommerceQueryOperation you will receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text search, with properties such as a Where clause. To make the operation you receive more relevant, you will need to create another class, this time derived from Microsoft.Commerce.Contract.Messages.CommerceSearchCriteria; within this you need to detail the properties you will require to submit as parameters to the Solr search API. My example looks like this:

[DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")]
public class CommerceCatalogSolrSearch : CommerceSearchCriteria
{
    private Dictionary<string, string> _facetQueries;

    public CommerceCatalogSolrSearch()
    {
        _facetQueries = new Dictionary<String, String>();
    }

    public Dictionary<String, String> FacetQueries
    {
        get { return _facetQueries; }
        set { _facetQueries = value; }
    }

    public String SearchPhrase { get; set; }
    public int PageIndex { get; set; }
    public int PageSize { get; set; }
    public IEnumerable<String> Facets { get; set; }

    public string Sort { get; set; }

    public new int FirstItemIndex
    {
        get { return (PageIndex - 1) * PageSize; }
    }

    public int LastItemIndex
    {
        get { return FirstItemIndex + PageSize; }
    }
}

To allow you to construct a CommerceQueryOperation call within the API, you will also need to construct another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder, which is simply used to construct an instance of the search criteria you have just created and expose the properties you want set.
My message builder looks like this:

    public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder
    {
        private CommerceCatalogSolrSearch _solrSearch;

        public CommerceCatalogSolrSearchBuilder()
        {
            _solrSearch = new CommerceCatalogSolrSearch();
        }

        public String SearchPhrase
        {
            get { return _solrSearch.SearchPhrase; }
            set { _solrSearch.SearchPhrase = value; }
        }

        public int PageIndex
        {
            get { return _solrSearch.PageIndex; }
            set { _solrSearch.PageIndex = value; }
        }

        public int PageSize
        {
            get { return _solrSearch.PageSize; }
            set { _solrSearch.PageSize = value; }
        }

        public Dictionary<String, String> FacetQueries
        {
            get { return _solrSearch.FacetQueries; }
            set { _solrSearch.FacetQueries = value; }
        }

        public String[] Facets
        {
            get { return _solrSearch.Facets.ToArray(); }
            set { _solrSearch.Facets = value; }
        }

        public override CommerceSearchCriteria ToSearchCriteria()
        {
            return _solrSearch;
        }
    }

Once you have these two classes in place, you can safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the sequence component to the CommerceCatalogSolrSearch criteria you have just created, e.g.

    public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation)
    {
        var searchCriteria = operation as CommerceQueryOperation;
        if (searchCriteria == null)
            throw new Exception("No search criteria present");

        // Use a soft cast so an unexpected criteria type falls through to the
        // null check below instead of throwing an InvalidCastException.
        var local = searchCriteria.SearchCriteria as CommerceCatalogSolrSearch;
        if (local == null)
            throw new Exception("Unexpected Search Criteria in Operation");

        return local;
    }

Now that you have all of your search parameters present, you can go off and call the external search platform API. You will of course get proprietary objects returned, so the next step in the process is to convert the results being returned back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translators. A translator is another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method for the interface, you get a single Translate method which takes a source object, a destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake, in this example I have hard-coded the mappings; however, best practice would dictate you map the objects using your MetadataDefinitions.xml file.
Once complete, your translator would look something like the following:

    public class SolrEntityTranslator : IToCommerceEntityTranslator
    {
        #region IToCommerceEntityTranslator Members

        public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn)
        {
            if (source.GetType().Equals(typeof(SearchProduct)))
            {
                var searchResult = (SearchProduct)source;

                destinationCommerceEntity.Id = searchResult.ProductId;
                destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title);
                destinationCommerceEntity.ModelName = "Product";
            }
        }

        #endregion
    }

Once you have a translator in place, you can then safely map the results of your search platform into Commerce Entities and attach them onto the CommerceResponse object in a fashion similar to this:

    foreach (SearchProduct result in matchingProducts)
    {
        var destinationEntity = new CommerceEntity(_returnModelName);

        Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties);
        response.CommerceEntities.Add(destinationEntity);
    }

In Solr I actually have two objects being returned – a product, and a collection of facets – so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from Solr is passed into the translator helper class separately. When all of this is pieced together, you have successfully completed the extensibility point coding. You will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it and identify its signature. Next you need to put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL Full Text component: You will also need to add your translators to the Translators node of your Channel.Config too: Lastly, add any custom CommerceEntities you have developed to your MetadataDefinitions.xml file. Your configuration is now complete, and you should now be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third party search platform and return CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied, then simply add further sequence components into the OperationSequence node of your Channel.Config file (obviously keeping the search response first). Now to call your code you simply request it as per any other CommerceQuery operation, but taking into account that you may receive multiple types of CommerceEntity:

    public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount)
    {
        var products = new List<Product>();
        var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();

        query.SearchCriteria.PageIndex = recordIndex;
        query.SearchCriteria.PageSize = recordsPerPage;
        query.SearchCriteria.SearchPhrase = searchPhrase;
        query.SearchCriteria.FacetQueries = facetQueries;

        totalItemCount = 0;
        CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest());
        var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;

        // No results. Return the empty list.
        if (queryResponse != null && queryResponse.CommerceEntities.Count == 0)
            return new KeyValuePair<FacetCollection, List<Product>>();

        totalItemCount = (int)queryResponse.TotalItemCount;

        // Prepare a multi-operation to retrieve the product variants.
        var multiOperation = new CommerceMultiOperation();

        // Add products to results.
        foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product"))
        {
            var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition);
            productQuery.SearchCriteria.Model.Id = product.Id;
            productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;

            var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);
            productQuery.RelatedOperations.Add(variantQuery);

            multiOperation.Add(productQuery);
        }

        CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest());
        foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses)
        {
            if (queryOpResponse.CommerceEntities.Count() > 0)
                products.Add(queryOpResponse.CommerceEntities[0]);
        }

        // Get the facet collection.
        FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();

        return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products);
    }

..And that is it – simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third party search platform, whilst still maintaining a unified API in the remainder of your code. This approach stands for any extensibility within Commerce Server which requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!
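As referenced in the walkthrough above, the actual SolrNet call inside the overridden ExecuteQuery was not reproduced in the text. Here is a hedged sketch of what that body might look like. The SearchProduct document class and the facet handling are illustrative assumptions, the parameter types are abbreviated from the Commerce Server contracts, and the SolrNet client is assumed to have been registered at startup via Startup.Init<SearchProduct>(solrUrl).

    // Rough sketch only; requires the SolrNet and CommonServiceLocator packages.
    // using System.Linq; using Microsoft.Practices.ServiceLocation;
    // using SolrNet; using SolrNet.Commands.Parameters;
    public override void ExecuteQuery(CommerceQueryOperation queryOperation,
        OperationCacheDictionary operationCache, CommerceQueryOperationResponse response)
    {
        // Cast the incoming operation to the custom criteria class built earlier.
        CommerceCatalogSolrSearch criteria = TryGetSearchCriteria(queryOperation);

        // Resolve the SolrNet client registered during application startup.
        var solr = ServiceLocator.Current.GetInstance<ISolrOperations<SearchProduct>>();

        // Translate the criteria into a SolrNet query with paging and field facets.
        var options = new QueryOptions
        {
            Start = criteria.FirstItemIndex,
            Rows = criteria.PageSize,
            Facet = new FacetParameters
            {
                Queries = criteria.Facets
                    .Select(f => (ISolrFacetQuery)new SolrFacetFieldQuery(f))
                    .ToList()
            }
        };

        SolrQueryResults<SearchProduct> results =
            solr.Query(new SolrQuery(criteria.SearchPhrase), options);

        // Map each proprietary Solr result onto a CommerceEntity via the translator
        // described above, then attach it to the response. The facet results
        // (results.FacetFields) would be passed to the facet translator separately.
        foreach (SearchProduct result in results)
        {
            var entity = new CommerceEntity("Product");
            Translator.ToCommerceEntity(result, entity, queryOperation.Model.Properties);
            response.CommerceEntities.Add(entity);
        }
    }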

    Read the article

  • Silverlight Tree View with Multiple Levels

    - by psheriff
There are many examples of the Silverlight Tree View that you will find on the web; however, most of them only show you how to go to two levels. What if you have more than two levels? This is where understanding exactly how Hierarchical Data Templates work is vital. In this blog post, I am going to break down how these templates work so you can really understand what is going on underneath the hood. To start, let's look at the typical two-level Silverlight Tree View that has been hard-coded with the values shown below:

    <sdk:TreeView>
      <sdk:TreeViewItem Header="Managers">
        <TextBlock Text="Michael" />
        <TextBlock Text="Paul" />
      </sdk:TreeViewItem>
      <sdk:TreeViewItem Header="Supervisors">
        <TextBlock Text="John" />
        <TextBlock Text="Tim" />
        <TextBlock Text="David" />
      </sdk:TreeViewItem>
    </sdk:TreeView>

Figure 1 shows you how this tree view looks when you run the Silverlight application. Figure 1: A hard-coded, two-level Tree View. Next, let's create three classes to mimic the hard-coded Tree View shown above. First, you need an Employee class and an EmployeeType class. The Employee class simply has one property called Name. The constructor is created to accept a "name" argument that you can use to set the Name property when you create an Employee object.

    public class Employee
    {
        public Employee(string name)
        {
            Name = name;
        }

        public string Name { get; set; }
    }

Next, you create an EmployeeType class. This class has one property called EmpType and contains a generic List<> collection of Employee objects. The property that holds the collection is called Employees.

    public class EmployeeType
    {
        public EmployeeType(string empType)
        {
            EmpType = empType;
            Employees = new List<Employee>();
        }

        public string EmpType { get; set; }
        public List<Employee> Employees { get; set; }
    }

Finally, we have a collection class called EmployeeTypes created using the generic List<> class. It is in the constructor for this class where you build the collection of EmployeeTypes and fill it with Employee objects:

    public class EmployeeTypes : List<EmployeeType>
    {
        public EmployeeTypes()
        {
            EmployeeType type;

            type = new EmployeeType("Manager");
            type.Employees.Add(new Employee("Michael"));
            type.Employees.Add(new Employee("Paul"));
            this.Add(type);

            type = new EmployeeType("Project Managers");
            type.Employees.Add(new Employee("Tim"));
            type.Employees.Add(new Employee("John"));
            type.Employees.Add(new Employee("David"));
            this.Add(type);
        }
    }

You now have a data hierarchy in memory (Figure 2), which is what the Tree View control expects to receive as its data source. Figure 2: A hierarchical data structure of Employee Types containing a collection of Employee objects. To connect this hierarchy of data to your Tree View, you create an instance of the EmployeeTypes class in XAML as shown in line 13 of Figure 3. The key assigned to this object is "empTypes". This key is used as the source of data for the entire Tree View by setting the ItemsSource property as shown in Figure 3, Callout #1. Figure 3: You need to start from the bottom up when laying out your templates for a Tree View. The ItemsSource property of the Tree View control is used as the data source in the Hierarchical Data Template with the key of employeeTypeTemplate. In this case there is only one Hierarchical Data Template, so any data you wish to display within that template comes from the collection of Employee Types. The TextBlock control in line 20 uses the EmpType property of the EmployeeType class.
You specify the name of the Hierarchical Data Template to use in the ItemTemplate property of the Tree View (Callout #2). For the second (and last) level of the Tree View control you use a normal <DataTemplate> with the name of employeeTemplate (line 14). The Hierarchical Data Template in lines 17-21 sets its ItemTemplate property to the key name of employeeTemplate (Line 19 connects to Line 14). The source of the data for the <DataTemplate> needs to be a property of the EmployeeType class used in the Hierarchical Data Template. In this case that is the Employees property. Each Employee in the Employees collection has a "Name" property that is used to display the employee name in the second level of the Tree View (Line 15). What is important here is that the lowest level in your Tree View is expressed in a <DataTemplate> and should be listed first in your Resources section. The next level up in your Tree View should be a <HierarchicalDataTemplate> which has its ItemTemplate property set to the key name of the <DataTemplate> and the ItemsSource property set to the data you wish to display in the <DataTemplate>. The Tree View control should have its ItemsSource property set to the data you wish to display in the <HierarchicalDataTemplate> and its ItemTemplate property set to the key name of the <HierarchicalDataTemplate> object. It is in this way that you get the Tree View to display all levels of your hierarchical data structure. Three Levels in a Tree View Now let's expand upon this concept and use three levels in our Tree View (Figure 4). This Tree View shows that you now have EmployeeTypes at the top of the tree, followed by a small set of employees that themselves manage employees. This means that the EmployeeType class has a collection of Employee objects, and each Employee class has a collection of Employee objects as well. Figure 4: When using 3 levels in your TreeView you will have 2 Hierarchical Data Templates and 1 Data Template. The EmployeeType class has not changed at all from our previous example. However, the Employee class now has one additional property, as shown below:

    public class Employee
    {
        public Employee(string name)
        {
            Name = name;
            ManagedEmployees = new List<Employee>();
        }

        public string Name { get; set; }
        public List<Employee> ManagedEmployees { get; set; }
    }

The next thing that changes in our code is the EmployeeTypes class. The constructor now needs additional code to create a list of managed employees. Below is the new code.

    public class EmployeeTypes : List<EmployeeType>
    {
        public EmployeeTypes()
        {
            EmployeeType type;
            Employee emp;
            Employee managed;

            type = new EmployeeType("Manager");
            emp = new Employee("Michael");
            managed = new Employee("John");
            emp.ManagedEmployees.Add(managed);
            managed = new Employee("Tim");
            emp.ManagedEmployees.Add(managed);
            type.Employees.Add(emp);

            emp = new Employee("Paul");
            managed = new Employee("Michael");
            emp.ManagedEmployees.Add(managed);
            managed = new Employee("Sara");
            emp.ManagedEmployees.Add(managed);
            type.Employees.Add(emp);
            this.Add(type);

            type = new EmployeeType("Project Managers");
            type.Employees.Add(new Employee("Tim"));
            type.Employees.Add(new Employee("John"));
            type.Employees.Add(new Employee("David"));
            this.Add(type);
        }
    }

Now that you have all of the data built in your classes, you are ready to hook up this three-level structure to your Tree View. Figure 5 shows the complete XAML needed to hook up your three-level Tree View.
You can see in the XAML that there are now two Hierarchical Data Templates and one Data Template. Again, you list the Data Template first since that is the lowest level in your Tree View. The next Hierarchical Data Template listed is the next level up from the lowest level, and finally you have a Hierarchical Data Template for the first level in your tree. You need to work your way from the bottom up when creating your Tree View hierarchy. XAML is processed from the top down, so if you attempt to reference a XAML key name that is below where you are referencing it from, you will get a runtime error. Figure 5: For three levels in a Tree View you will need two Hierarchical Data Templates and one Data Template. Each Hierarchical Data Template uses the previous template as its ItemTemplate. The ItemsSource of each Hierarchical Data Template is used to feed the data to the previous template. This is probably the most confusing part about working with the Tree View control. You expect the content of the current Hierarchical Data Template to use the properties set in the ItemsSource property of that template, but you need to look at the template lower down in the XAML to see the source of the data, as shown in Figure 6. Figure 6: The properties you use within the content of a template come from the ItemsSource of the next template in the resources section. Summary Understanding how to put together your hierarchy in a Tree View is simple once you realize that you need to work from the bottom up. Start with the bottom node in your Tree View and determine what that will look like and where the data will come from. You then build the next Hierarchical Data Template to feed the data to the previous template you created. You keep doing this for each level in your Tree View until you get to the last level. The data for that last Hierarchical Data Template comes from the ItemsSource in the Tree View itself. NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select "Tips & Tricks", then select "Silverlight TreeView with Multiple Levels" from the drop-down list.

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, with even an acknowledgment that users would get the data they want, even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time.
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week, where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective on the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • Code Contracts: Unit testing contracted code

    - by DigiMortal
Code contracts and unit tests are not replacements for each other. They have different purposes and different natures. It does not matter whether you are using code contracts or not – you still have to write tests for your code. In this posting I will show you how to unit test code with contracts. In my previous posting about code contracts I showed how to avoid ContractExceptions, which are defined in the code contracts runtime and are not accessible to us at design time. That was one step toward making my randomizer testable. In this posting I will complete the mission. Problems with current code This is my current code.

    public class Randomizer
    {
        public static int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires<ArgumentOutOfRangeException>(
                min < max,
                "Min must be less than max"
            );

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );

            var rnd = new Random();
            return rnd.Next(min, max);
        }
    }

As you can see, this code has some problems: the Randomizer class is static and cannot be instantiated, so we cannot move this class between components if we need to, and GetRandomFromRangeContracted() is not fully testable because we cannot currently affect the random number generator output, and therefore we cannot test the post-contract. Now let's solve these problems. Making randomizer testable As a first thing I made Randomizer a class that must be instantiated. This is a simple thing to do. Now let's solve the problem with the Random class. To make Randomizer testable I define an IRandomGenerator interface and a RandomGenerator class. The public constructor of Randomizer accepts an IRandomGenerator as an argument.

    public interface IRandomGenerator
    {
        int Next(int min, int max);
    }

    public class RandomGenerator : IRandomGenerator
    {
        private Random _random = new Random();

        public int Next(int min, int max)
        {
            return _random.Next(min, max);
        }
    }

And here is our Randomizer after a total make-over.

    public class Randomizer
    {
        private IRandomGenerator _generator;

        private Randomizer()
        {
            _generator = new RandomGenerator();
        }

        public Randomizer(IRandomGenerator generator)
        {
            _generator = generator;
        }

        public int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires<ArgumentOutOfRangeException>(
                min < max,
                "Min must be less than max"
            );

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );

            return _generator.Next(min, max);
        }
    }

It may seem inconvenient to instantiate Randomizer now, but you can always use DI/IoC containers and break compiled dependencies between the components of your system. Writing tests for randomizer IRandomGenerator solved the problem with testing the post-condition. Now it is time to write tests for the Randomizer class. Writing tests for contracted code is not easy. The main problem is still the ContractException that we are not able to access, yet it is the main exception we get as soon as contracts fail. Although pre-conditions are able to throw exceptions with the type we want, we cannot do much when post-conditions fail. We have to use the Contract.ContractFailed event, and this event is called for every contract failure.
This way we find ourselves in a situation where supporting the input interface well makes it impossible to support the output interface well, and vice versa. ContractFailed is a nasty hack and it works in a pretty weird way. Although the documentation says that ContractFailed is a good choice for testing contracts, it is still pretty painful. As a last resort I got tests working almost normally when I wrapped them up. Can you remember a similar solution from the times of Visual Studio 2008 unit tests? I cannot understand how Microsoft was able to mess up testing again.

    [TestClass]
    public class RandomizerTest
    {
        private Mock<IRandomGenerator> _randomMock;
        private Randomizer _randomizer;
        private string _lastContractError;

        public TestContext TestContext { get; set; }

        public RandomizerTest()
        {
            Contract.ContractFailed += (sender, e) =>
            {
                e.SetHandled();
                e.SetUnwind();

                throw new Exception(e.FailureKind + ": " + e.Message);
            };
        }

        [TestInitialize()]
        public void RandomizerTestInitialize()
        {
            _randomMock = new Mock<IRandomGenerator>();
            _randomizer = new Randomizer(_randomMock.Object);
            _lastContractError = string.Empty;
        }

        #region InputInterfaceTests
        [TestMethod]
        [ExpectedException(typeof(Exception))]
        public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_not_less_than_max()
        {
            try
            {
                _randomizer.GetRandomFromRangeContracted(100, 10);
            }
            catch (Exception ex)
            {
                throw new Exception(string.Empty, ex);
            }
        }

        [TestMethod]
        [ExpectedException(typeof(Exception))]
        public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_equal_to_max()
        {
            try
            {
                _randomizer.GetRandomFromRangeContracted(10, 10);
            }
            catch (Exception ex)
            {
                throw new Exception(string.Empty, ex);
            }
        }

        [TestMethod]
        public void GetRandomFromRangeContracted_should_work_when_min_is_less_than_max()
        {
            int minValue = 10;
            int maxValue = 100;
            int returnValue = 50;

            _randomMock.Setup(r => r.Next(minValue, maxValue))
                .Returns(returnValue)
                .Verifiable();

            var result = _randomizer.GetRandomFromRangeContracted(minValue, maxValue);

            _randomMock.Verify();
            Assert.AreEqual<int>(returnValue, result);
        }
        #endregion

        #region OutputInterfaceTests
        [TestMethod]
        [ExpectedException(typeof(Exception))]
        public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_less_than_min()
        {
            int minValue = 10;
            int maxValue = 100;
            int returnValue = 7;

            _randomMock.Setup(r => r.Next(10, 100))
                .Returns(returnValue)
                .Verifiable();

            try
            {
                _randomizer.GetRandomFromRangeContracted(minValue, maxValue);
            }
            catch (Exception ex)
            {
                throw new Exception(string.Empty, ex);
            }

            _randomMock.Verify();
        }

        [TestMethod]
        [ExpectedException(typeof(Exception))]
        public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_more_than_max()
        {
            int minValue = 10;
            int maxValue = 100;
            int returnValue = 102;

            _randomMock.Setup(r => r.Next(10, 100))
                .Returns(returnValue)
                .Verifiable();

            try
            {
                _randomizer.GetRandomFromRangeContracted(minValue, maxValue);
            }
            catch (Exception ex)
            {
                throw new Exception(string.Empty, ex);
            }

            _randomMock.Verify();
        }
        #endregion
    }

Although these tests are pretty awful and contain hacks, we are at least now able to make sure that our code works as expected. Here is the test list after running these tests. Conclusion Code contracts are very new stuff in the Visual Studio world, and as a young technology it has some problems – like all other new bits and bytes in the world. As you saw, making our contracted code testable is easy only up to the point where pre-conditions are considered. When we start dealing with post-conditions, we end up with hacked tests. I hope that future versions of code contracts will solve the error handling issues so that testing contracted code will be easier than it is right now.

    Read the article

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year's conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February (but due to NDA restrictions I could not share them with the developer community at large), and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference. First, let's travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources in such a relative edge case, but my guess is that they felt it was an important edge case at the time, as some of the more vocal .NET evangelists as well as some very high profile start-ups (i.e. Twitter) had publicly announced their intent to use Rails. Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially, by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it. This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer. In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing in our ecosystem. Clearly some developers had drunk Microsoft's Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, "If it's not MVC, it's crap". Now, this is a rather ignorant position to take, as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline. So over the past few years some horror stories have begun to bubble to the surface: software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC.
These large scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision: that Microsoft was promoting MVC as "the future". These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in less stable, less scalable and more complicated systems – basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product, which has been under active development for more than 18 months, and revert back to WebForms. The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war, because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another, and as a result web developers feel they are forced to choose one path or the other. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel "dirty". The fragmentation has also made it difficult to attract newcomers, as the perceived barrier to entry for learning ASP.NET has become higher. As a result, many new software developers entering the market are gravitating to environments where the development model seems more simple and intuitive (i.e. PHP or Ruby). At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting SharePoint as a platform for building internal enterprise web applications. SharePoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, SharePoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts – both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high-level plan from Microsoft to ensure consistency and continuity across the different product lines. The introduction of ASP.NET MVC also made some of the third party control vendors scratch their heads and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers.
And even after MVC was introduced, these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX compared to what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, demand remained strong and the third party web control market continued to prosper despite the availability of MVC. While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume, whether it's a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and JavaScript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust JavaScript libraries like jQuery and Knockout for browser-based client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC. In response to these new industry trends, Microsoft did what it always does – it immediately poured resources into developing a solution which would ensure it remains relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band with a dependency only on ASP.NET 4.0, it can be used immediately in a variety of production scenarios. So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the "Future of ASP.NET". In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward – perhaps the moniker of "ASP.NET Web Stack" coined a couple of years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on CodePlex a few months back will eventually stick. Web API is being positioned as the unification of ASP.NET – the glue that is able to pull this fragmented mess back together again. The "One ASP.NET" strategy will promote the use of all frameworks - WebForms, MVC, and Web API - even within the same web project.
Basically the message is to utilize the appropriate aspects of each framework to solve your business problems. Instead of navigating developers to a fork in the road, the plan is to educate them that "hybrid" applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with the client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. This means it is also relevant to application platforms like DotNetNuke and SharePoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again. And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward (MVC4 may in fact be the last significant iteration of this framework). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn't, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense. So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a "hybrid" model which would provide compatibility for existing modules while at the same time providing a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Services Framework, which is actually built on top of MVC2 (we chose to leverage MVC because it had the most intuitive, light-weight REST implementation in the .NET stack). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke (which is expected to be released in Q4 2012) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the "One ASP.NET" strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.
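    As an aside, to make the "convention-based" description of Web API concrete, here is a minimal, hypothetical controller sketch; the Product type, the in-memory data, and the default "api/{controller}/{id}" route template are illustrative assumptions rather than anything taken from this post.

    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Web.Http;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ProductsController : ApiController
    {
        // In-memory stand-in for a real repository.
        private static readonly List<Product> Products = new List<Product>
        {
            new Product { Id = 1, Name = "Widget" },
            new Product { Id = 2, Name = "Gadget" }
        };

        // GET api/products -- matched purely by convention because the method name starts with "Get".
        public IEnumerable<Product> GetAllProducts()
        {
            return Products;
        }

        // GET api/products/5 -- {id} is bound from the route; the result is serialized
        // as JSON or XML through content negotiation rather than a fixed SOAP envelope.
        public Product GetProduct(int id)
        {
            var product = Products.FirstOrDefault(p => p.Id == id);
            if (product == null)
                throw new HttpResponseException(HttpStatusCode.NotFound);
            return product;
        }
    }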

    Read the article

  • 7u10: JavaFX packaging tools update

    - by igor
    The last few weeks were very busy here at Oracle. JavaOne 2012 is next week. Come to see us there! Meanwhile I'd like to quickly update you on recent developments in the area of packaging tools. This is an area of ongoing development for the team, and we are continuing to refine and improve both the tools and the process. Thanks to everyone who shared experiences and suggestions with us. We are listening and have fixed many of the reported issues. Please keep them coming as comments on the blog or (even better) file issues directly in JIRA. In this post I'll focus on several new packaging features added in JDK 7 update 10:

    - Self-Contained Applications: Select Java Runtime to bundle
    - Self-Contained Applications: Create Package without Java Runtime
    - Self-Contained Applications: Package non-JavaFX application
    - Option to disable proxy setup in the JavaFX launcher
    - Ability to specify codebase for WebStart application
    - Option to update existing jar file
    - Self-Contained Applications: Specify application icon
    - Self-Contained Applications: Pass parameters on the command line

All these features and a number of other important bug fixes are available in the developer preview builds of JDK 7 update 10 (build 8 or later). Please give them a try and share your feedback! Self-Contained Applications: Select Java Runtime to bundle Packager tools in 7u6 assume the current JDK (based on the java.home property) is the source for the embedded runtime. This is a useful simplification for many scenarios, but there are cases where the ability to specify what to embed explicitly is handy. For example, an IDE may be using a fixed JDK to build the project, and this is not the version you want to bundle into your application. To make this more flexible we now allow you to specify the location of the base JDK explicitly. It is optional, and if you do not specify it then the current JDK will be used (i.e. this change is fully backward compatible). A new 'basedir' attribute was added to the <fx:platform> tag. Its value is the location of the JDK to be used. It is OK to point to either the JRE inside the JDK or the JDK top-level folder. However, it must be a JDK and not a JRE, as we need other JDK tools for proper packaging, and it must be a recent version of the JDK that is bundled with JavaFX (i.e. Java 7 update 6 or later). Here are examples (<fx:platform> is part of the <fx:deploy> task):

    <fx:platform basedir="${java.home}"/>
    <fx:platform basedir="c:\tools\jdk7"/>

Hint: this feature enables you to use the packaging tools from JDK 7 update 10 (and benefit from the bug fixes and other features described below) to create an application package with the bundled FCS version of JRE 7 update 6. Self-Contained Applications: Create Package without Java Runtime This may sound a bit puzzling at first glance. A package without an embedded Java Runtime is not really self-contained and obviously will not help with:

    - Deployment on fresh systems. The JRE needs to be installed separately (and this step will require admin permissions).
    - Possible compatibility issues due to updates of the system runtime.

However, these packages are much, much smaller in size. If download size matters and you are confident that users have the recommended system JRE installed, then this may be a good option to consider if you want to improve the user experience for install and launch. Technically, this is implemented as an extension of the previous feature. Pass an empty string as the value of the 'basedir' attribute and this will be treated as a request not to bundle the Java runtime, e.g.
    <fx:platform basedir=""/>

Self-Contained Applications: Package non-JavaFX application One of the popular questions people ask about self-contained applications: can I package my Java application as a self-contained application? Absolutely. This is true even for the tools shipped with JDK 7 update 6. Simply follow the steps for creating a package for a Swing application with integrated JavaFX content and they will work even if your application does not use JavaFX. What's wrong with that? Well, there are a few caveats:

    - The bundle size is larger because JavaFX is bundled whilst it is not really needed.
    - The main application jar needs to be packaged to comply with JavaFX packaging requirements (and this may not be trivial to achieve in your existing build scripts).
    - The JavaFX application launcher may not work well with the startup logic of your application (for example, the launcher will initialize the networking stack, and this may void custom networking settings in your application code).

In JDK 7 update 6, <fx:deploy> was updated to accept an arbitrary executable jar as an input. The self-contained application package will be created preserving the input jar as-is, i.e. no JavaFX launcher will be embedded. This does not help with the first point above but resolves the other two. More formally, the following assertions must be true for packaging to succeed:

    - the application can be launched as "java -jar YourApp.jar" from the command line
    - the mainClass attribute of <fx:application> refers to the application main class
    - <fx:resources> lists all resources needed for the application

To give you an example, let's assume we need to create a bundle for an application consisting of 3 jars:

    dist/javamain.jar
    dist/lib/somelib.jar
    dist/morelibs/anotherlib.jar

where javamain.jar has a manifest with

    Main-Class: app.Main
    Class-Path: lib/somelib.jar morelibs/anotherlib.jar

Here is sample ant code to package it:

    <target name="package-bundle">
      <taskdef resource="com/sun/javafx/tools/ant/antlib.xml"
               uri="javafx:com.sun.javafx.tools.ant"
               classpath="${javafx.tools.ant.jar}"/>
      <fx:deploy nativeBundles="all" width="100" height="100"
                 outdir="native-packages/" outfile="MyJavaApp">
        <info title="Sample project" vendor="Me"
              description="Test built from Java executable jar"/>
        <fx:application id="myapp" version="1.0" mainClass="app.Main" name="MyJavaApp"/>
        <fx:resources>
          <fx:fileset dir="dist">
            <include name="javamain.jar"/>
            <include name="lib/somelib.jar"/>
            <include name="morelibs/anotherlib.jar"/>
          </fx:fileset>
        </fx:resources>
      </fx:deploy>
    </target>

Option to disable proxy setup in the JavaFX launcher Since JavaFX 2.2 (part of JDK 7u6), properly packaged JavaFX applications have proxy settings initialized according to the Java Runtime configuration settings. This is handy for most applications accessing the network, with one exception. If your application explicitly sets networking properties (e.g. socksProxyHost) then they must be set before the networking stack is initialized. Proxy detection will initialize the networking stack, and therefore your custom settings will be ignored. One way to disable proxy setup by the embedded JavaFX launcher is to pass "-Djavafx.autoproxy.disable=true" on the command line. This is good for troubleshooting (proxy detection may cause significant startup time increases if the network is misconfigured) but not really user friendly. Now proxy setup will be disabled if the manifest of the main application jar has a "JavaFX-Feature-Proxy" entry with the value "None".
    Here is a simple example of adding this entry using the <fx:jar> task:

    <fx:jar destfile="dist/sampleapp.jar">
      <fx:application refid="myapp"/>
      <fx:resources refid="myresources"/>
      <fileset dir="build/classes"/>
      <manifest>
        <attribute name="JavaFX-Feature-Proxy" value="None"/>
      </manifest>
    </fx:jar>

    Ability to specify codebase for WebStart application

    JavaFX applications do not need to specify a codebase (i.e., the absolute location where the application code will be deployed) for most real-world deployment scenarios. This is convenient, as the application does not need to be modified when it moves from the development environment to the deployment environment. However, some developers want to ensure that copies of their application's JNLP file redirect to the master location, and this is where a codebase is needed. To avoid the need to edit the JNLP file manually, the <fx:deploy> task now accepts an optional codebase attribute. If the attribute is not specified, the packager will generate the same no-codebase files as before. If a codebase value is explicitly specified, then the generated JNLP files (including the JNLP content embedded into the web page) will use it. Here is an example:

    <fx:deploy width="600" height="400" outdir="Samples"
               codebase="http://localhost/codebaseTest" outfile="TestApp">
        ....
    </fx:deploy>

    Option to update existing jar file

    The JavaFX packaging procedures are optimized for new applications that can use ant or the command-line javafxpackager utility. This may lead to some redundant steps when you add them to your existing build process. One typical situation is that you already have a build procedure that produces an executable jar file with a custom manifest. To properly package it as a JavaFX executable jar, you would need to unpack it and then use javafxpackager or <fx:jar> to create the jar again (and you need to make sure you pass along all the important details from your custom manifest).

    We added an option to pass a jar file as input to javafxpackager and <fx:jar>. This simplifies the integration of the JavaFX packaging tools into an existing build process as a postprocessing step. By the way, we are looking for ways to simplify this further. Please share your suggestions!

    On the technical side, this works as follows. Both <fx:jar> and javafxpackager will attempt to update an existing jar file if it is the only input file. The update process adds the JavaFX launcher classes and updates the jar manifest with the JavaFX attributes; your custom attributes are preserved. The update can be performed in place, or the result can be saved to a different file. The Main-Class and Class-Path elements (if present) of the input jar's manifest will be used for the JavaFX application unless they are explicitly overridden in the packaging command you use; e.g., the mainClass attribute of <fx:application> (or -appclass in the javafxpackager case) overrides an existing Main-Class in the jar manifest. Note that the class specified in the Main-Class attribute can either extend the JavaFX Application class or provide a static main() method.
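    To make the preservation behavior concrete, suppose the manifest of your input jar looks like the sketch below; the X-Build-Info attribute is hypothetical, included only to stand in for a custom entry:

        Main-Class: app.Main
        Class-Path: lib/somelib.jar
        X-Build-Info: nightly-42

    After the update, the jar additionally contains the JavaFX launcher classes and JavaFX manifest attributes; Main-Class and Class-Path are carried over into the JavaFX application settings unless you override them, and X-Build-Info is left untouched.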
    Here are examples of updating a jar file using javafxpackager. To create a new JavaFX executable jar as a copy of a given jar file:

    javafxpackager -createjar -srcdir dist -srcfiles fish_proto.jar -outdir dist -outfile fish.jar

    To update an existing jar file to be a JavaFX executable jar, using test.Fish as the main application class:

    javafxpackager -createjar -srcdir dist -appclass test.Fish -srcfiles fish.jar -outdir dist -outfile fish.jar

    And here is an example of using <fx:jar> to create a new JavaFX executable jar from the existing fish_proto.jar:

    <fx:jar destfile="dist/fish.jar">
      <fileset dir="dist">
        <include name="fish_proto.jar"/>
      </fileset>
    </fx:jar>

    Self-Contained Applications: Specify application icon

    The only way to specify an application icon for a self-contained application using the tools in JDK 7 update 6 is to use drop-in resources. Now this bug is resolved, and you can also specify the icon using the <fx:icon> tag. Here is an example:

    <fx:deploy ...>
      <fx:info>
        <fx:icon href="default.png"/>
      </fx:info>
      ...
    </fx:deploy>

    A few things to keep in mind:

    - Only the default kind of icon is applicable to self-contained applications (as of now).
    - The icon should follow platform-specific rules for sizes and image formats (e.g., .ico on Windows and .icns on Mac).

    Self-Contained Applications: Pass parameters on the command line

    JavaFX applications support two types of application parameters: named and unnamed (see the API for Application.Parameters). Static named parameters can be added to the application package using <fx:param>, and unnamed parameters can be added using <fx:argument>. They are applicable to all execution modes, including standalone applications. It is also possible to pass parameters to a JavaFX application from the web page that hosts it, using <fx:htmlParam>. Prior to JavaFX 2.2, this was only supported for embedded applications; starting from JavaFX 2.2, <fx:htmlParam> is applicable to Web Start applications too. See the JavaFX deployment guide for more details.

    However, there was no way to pass dynamic parameters to a self-contained application. This has been improved, and the native launchers now delegate command-line parameters to the application code. That is, to pass a parameter to the application you simply run it as "myapp.exe somevalue" and then use getParameters().getUnnamed().get(0) to get "somevalue".
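    To round out the parameters story, here is a minimal sketch of the receiving side; the class name is hypothetical, while getParameters(), Parameters.getUnnamed(), and Parameters.getNamed() are the standard JavaFX 2.x API referenced above:

    import java.util.List;
    import java.util.Map;
    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.control.Label;
    import javafx.stage.Stage;

    public class ParamEcho extends Application {
        @Override
        public void start(Stage stage) {
            // Unnamed parameters arrive in command-line order,
            // so "myapp.exe somevalue" yields one entry: "somevalue".
            List<String> unnamed = getParameters().getUnnamed();
            // Named parameters (e.g. from <fx:param>) arrive as a map.
            Map<String, String> named = getParameters().getNamed();

            String first = unnamed.isEmpty() ? "(no arguments)" : unnamed.get(0);
            stage.setScene(new Scene(
                new Label("First parameter: " + first + " / named count: " + named.size()),
                300, 100));
            stage.show();
        }

        public static void main(String[] args) {
            launch(args);  // parameters are also delegated when run via main()
        }
    }

    Launched as "myapp.exe somevalue", the label shows "First parameter: somevalue".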

    Read the article

  • CodePlex Daily Summary for Sunday, November 20, 2011

    CodePlex Daily Summary for Sunday, November 20, 2011

    Popular Releases

    - Free SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: here is the list of sites templates to be downloaded.
    - nopCommerce. Open source shopping cart (ASP.NET MVC): nopCommerce 2.30: Highlight features & improvements: performance optimization; back-in-stock notifications; product special price support; catalog mode (based on customer role). To see the full list of fixes and changes, please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
    - Cetacean Monitoring: Cetacean Monitoring Project Release V 0.1: This is a zip with a working executable for evaluation purposes.
    - WPF Converters: WPF Converters V1.2.0.0: Support for enumerations, value types, and reference types in the expression converter's equality operators; the expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062); StyleCop conformance (more or less).
    - Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default. Change - KeyValuePairConverter no longer cares about the order of the key and value properties. Change - Time zone conversions now use the new TimeZoneInfo instead of TimeZone. Fix - Fixed boolean values sometimes being capitalized when converting to XML. Fix - Fixed error when deserializing ConcurrentDictionary. Fix - Fixed serializing some Uris returning the incorrect value. Fix - Fixed occasional error when...
    - Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b (details here). Replaced 'Rebuild' with 'Refresh' throughout the entire code; Rebuild will now be known as Refresh. mc_com.exe has been fully updated. TV Show Resolutions: resolved issue #206 (having to hit save twice when updating runtime manually); shrunk cache size and lowered loading times f...
    - Delta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples, and support for iOS, Android, and WP7 (you need a Marketplace account, however). If you want a binary release for the games (like v0.9.0), just say so in the forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.
    - ASP.net Awesome Samples (Web-Forms): 1.0 samples: Full Demo VS2008 and Very Simple Demo VS2010 (demos for the ASP.net Awesome jQuery Ajax Controls).
    - SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as of 2011-11-17. For most applications the AnyCPU release is recommended, but an x86 build is included in case you need it. For some data providers (GDAL/OGR, SqLite, PostGis) you also need to reference the SharpMap.Extensions assembly. For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assembly.
    - SQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 5: 1. Added basic schema support. 2. Added server instance name and process id. 3. Fixed problem with object search index out of range. 4. Improved version comparison with previous/next difference navigation. 5. Remember main window splitter and object explorer splitter positions.
    - AJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release, Version 51116. November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary: AJAX Control Toolkit for .NET 4 and sample site (recommended). AJAX Control Toolkit .NET 3.5 - Binary: AJAX Control Toolkit for .NET 3.5 and sample site (recommended). Notes: the current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...
    - MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: the DateRangeAttribute now accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu and MenuFor helpers capable of handling a "currently selected element"; the developer can choose between a standard nested menu based on a standard SimpleMenuItem class or an item template based on a custom class. Also added helpers to build the tree structure containing all the data items the menu takes info from. Improved the pager. Now the developer ...
    - SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.7: Reworked the API to be more consistent (see the Supported Formats table). Added some more helper methods, e.g. OpenEntryStream (RarArchive/RarReader does not support this). Fixed up tests.
    - Silverlight Toolkit: Windows Phone Toolkit - Nov 2011 (7.1 SDK): This release is coming soon! What's new: ListPicker once again works in a ScrollViewer. LongListSelector bug fixes around OutOfRange exceptions, wrong ordering of items, grouping issues, and scrolling events. ItemTuple is now refactored into the public type LongListSelectorItem to give users better access to the values in selection-changed handlers. PerformanceProgressBar binding fix for IsIndeterminate (item 9767 and others). There is no longer a GestureListener dependency with the C...
    - DotNetNuke® Community Edition: 06.01.01: Major highlights: fixed problem with the core skin object rendering CSS above the other framework-inserted files, which caused problems when using core style skin objects; fixed issue with iFrames getting removed when content is saved; fixed issue with the HTML module removing styling and scripts from the content; fixed issue with inserting the link to jQuery after the header of the page. Security fixes: none. Updated modules/providers: Modules: HTML version 6.1.0. Providers: none.
    - DotNetNuke Performance Settings: 01.00.00: First release of DotNetNuke SQL update queries to set the DNN installation for optimal performance. Please review and rate this release... (stars are welcome)
    - SCCM Client Actions Tool: SCCM Client Actions Tool v0.8: SCCM Client Actions Tool v0.8 is currently the latest version. It comes with the following changes since the last version: added "Wake On LAN" action (WOL.EXE is now included); added new action "Get all active advertisements" to list all machine-based advertisements on remote computers; added new action "Get all active user advertisements" to list all user-based advertisements for logged-on users on remote computers; added config.ini setting "enablePingTest" to control whether ping test is ru...
    - C.B.R. : Comic Book Reader: CBR 0.3: New features: add magnifier size and scale; new file info view in the backstage; add dynamic properties on book and settings; sorting and grouping in the explorer with new design; rework on conversion: Images, PDF, Cbr/rar, Cbz/zip, Xps to the destination formats Images, Cbz and XPS. Improvements: suppress MainViewModel and ExplorerViewModel dependencies; add view notifications and Messages from MVVM Light for ViewModel => View notifications; make threading better on open catalog, no more UI freeze, less t...
    - Desktop Google Reader: 1.4.2: This release removes the like and the broadcast buttons, as Google Reader stopped supporting them (no, we don't like this decision...). Additionally, and to have at least a small plus: the login window now automatically logs you in if you stored a username and password (no more extra click needed). Finally, added WebKit .NET to the about window and removed Awesomium. MD5-Hash: 5fccf25a2fb4fecc1dc77ebabc8d3897 SHA-Hash: d44ff788b123bd33596ad1a75f3b9fa74a862fdb
    - RDRemote: Remote Desktop remote configurator V 1.0.0: Remote Desktop remote configurator V 1.0.0.

    New Projects

    - Autonomous Robot Combat: The project is about a robotic game arena where 6-8 robots will engage in team combat using infrared guns. The infrared guns will also have red-light LEDs to simulate muzzle flash. Once the project finishes, it will be displayed at the University of Plymouth.
    - B2BPro: B2BPro, best solution for business.
    - Battlestar Galactica Fighters: Battlestar Galactica Fighters is a 3D vertical-scrolling shoot 'em up. It's developed in C# and F# for the XNA Programming exam at the Master in Computer Game Development 2011/2012 in Verona, Italy.
    - Content Compiler 3 - Compile your XNA media outside Visual Studio!: The Content Compiler helps you compile your media files for use with XNA without using Visual Studio. After almost three years of development, the third version of the CCompiler is nearly finished, and we decided to put it up on CodePlex to "keep it alive".
    - CreaMotion NHibernate Class Builder: NHibernate Class Builder, C#, WPF. Supports all types of relations; supports MsSql and MySql. Specially developed for NHibernate learners.
    - ecs_tqdt_kd: Electric Custom System, electronic customs clearance for business (original description in Vietnamese).
    - IMDb Helper: IMDb Helper is a C# library that provides access to information on the IMDb website. IMDb Helper uses web requests to access the IMDb website, and regular expressions to parse the responses (it doesn't use any external library, only pure .NET).
    - Lion Solution: This is an open source accounting project for small business usage. Developed by Sleiman Jneidi and Hussein Zawawi.
    - Management of Master Lists in SharePoint with Choice Filter Lookup column: You can view the details of the project at http://sharepointarrow.blogspot.com/2011/11/management-of-master-lists-in.html
    - MRDS FezMini Robot Brick: This is an attempt to write services for MRDS to control a FezMini robot with a wireless connection attached to COM2 on the FezMini board.
    - MVC Route Unit Tester: Provides convenient, easy-to-use methods that let you unit test the route table in your ASP.NET MVC application. Unlike many libraries, this lets you test routes both ways, incoming and outgoing. You can specify an incoming request and assert that it matches a given route (or that there are no matches). You can also specify route data and assert that a given URL will be generated by your application.
    - MyWalk: MyWalk (codename: MyLife) is a novel health application that makes tracking walking a part of daily life.
    - NGeo: NGeo makes it easier for users of geographic data to invoke GeoNames and Yahoo! GeoPlanet / PlaceFinder services. You'll no longer have to write your own GeoNames, GeoPlanet, or PlaceFinder clients. It's developed in ASP.NET 4.0, and uses WCF ServiceModel libraries to deserialize JSON data into plain old C# objects.
    - Octopus Tools: Octopus is an automated deployment server for .NET applications, powered by NuGet. OctopusTools is a set of useful command-line and MSBuild tasks designed for automating Octopus.
    - PDV Moveis: Furniture store (original description in Portuguese).
    - Reddit#: Reddit# is a Reddit library for C# and other .NET languages.
    - Rubik: Rubik is, simply, a stab at creating a decent implementation of a Rubik's cube in WPF, and in the process applying MVVM to the 3D game milieu.
    - Sample Service-Oriented Architecture: Sample of service-oriented architecture using WCF.
    - SFinger: SFinger adds two-finger scrolling to Synaptics touchpads on Windows.
    - SigemFinal: Final version of the design project (original description in Spanish).
    - SlResource: Silverlight resource management.
    - Sonce - Simple ON-line Circuit Editor: Circuit editor in Silverlight; unfinished student project written in C#.
    - Tricofil: Tricofil's website with a content administrator (original description in Portuguese).
    - Twincat Ads .Net Client: This is a client implementation of the Ads/Ams protocol created by Beckhoff (I'm not affiliated with Beckhoff). This implementation is in C# and does not depend on other libraries, which means it can be used in Silverlight and Windows Phone projects. This project is not finished yet!
    - WomanMagazine: This is a women's online magazine with lots of info and entertainment resources for women.

    Read the article

  • Get XML from Server for Use on Windows Phone

    - by psheriff
    When working with mobile devices, you always need to take into account bandwidth usage and power consumption. If you are constantly connecting to a server to retrieve data for an input screen, you might think about moving some of that data down to the phone and caching it there. An example would be a static list of US state codes that you are asking the user to select from. Since this is data that does not change very often, it is a great candidate for caching on the phone. Since Windows Phone does not have an embedded database, you can just use an XML string stored in Isolated Storage. Of course, then you need to figure out how to get the data down to the phone. You can either ship it with the application, or connect and retrieve the data from your server one time and thereafter cache it and retrieve it from the cache. In this blog post you will see how to create a WCF service that retrieves data from a Product table in a database and sends that data as XML to the phone, where it is stored in Isolated Storage. You will then read that data from Isolated Storage using LINQ to XML and display it in a ListBox.

    Step 1: Create a Windows Phone Application

    The first step is to create a Windows Phone application called WP_GetXmlFromDataSet (or whatever you want to call it). On the MainPage.xaml add the following XAML within the "ContentPanel" grid:

    <StackPanel>
      <Button Name="btnGetXml"
              Content="Get XML"
              Click="btnGetXml_Click" />
      <Button Name="btnRead"
              Content="Read XML"
              IsEnabled="False"
              Click="btnRead_Click" />
      <ListBox Name="lstData"
               Height="430"
               ItemsSource="{Binding}"
               DisplayMemberPath="ProductName" />
    </StackPanel>

    Now it is time to create the WCF Service Application that you will call to get the XML from a table in a SQL Server database.

    Step 2: Create a WCF Service Application

    Add a new project to your solution called WP_GetXmlFromDataSet.Services. Delete the IService1.* and Service1.* files and the App_Data folder, as you generally don't need these items. Add a new WCF Service class called ProductService. In the IProductService interface, replace the void DoWork() method with the following code:

    [OperationContract]
    string GetProductXml();

    Open the code-behind in ProductService.svc and create the GetProductXml() method. This method (shown below) will connect to a database and retrieve data from a Product table.

    public string GetProductXml()
    {
      string ret = string.Empty;
      string sql = string.Empty;
      SqlDataAdapter da;
      DataSet ds = new DataSet();

      sql = "SELECT ProductId, ProductName,";
      sql += " IntroductionDate, Price";
      sql += " FROM Product";

      da = new SqlDataAdapter(sql,
        ConfigurationManager.ConnectionStrings["Sandbox"].ConnectionString);

      da.Fill(ds);

      // Create attribute-based XML
      foreach (DataColumn col in ds.Tables[0].Columns)
      {
        col.ColumnMapping = MappingType.Attribute;
      }

      ds.DataSetName = "Products";
      ds.Tables[0].TableName = "Product";
      ret = ds.GetXml();

      return ret;
    }

    After retrieving the data from the Product table using a DataSet, you set each column's ColumnMapping property to Attribute. Using attribute-based XML makes the data transferred across the wire a little smaller. You then set the DataSetName property to the top-level element name you want in the XML, and the TableName property on the DataTable to the name you want each element to have. The last thing you need to do is call the GetXml() method on the DataSet object, which returns the data in your DataSet object as an XML string. This is the value that you will return from the service call. The XML that is returned from the above call looks like the following:

    <Products>
      <Product ProductId="1"
               ProductName="PDSA .NET Productivity Framework"
               IntroductionDate="9/3/2010"
               Price="5000" />
      <Product ProductId="3"
               ProductName="Haystack Code Generator for .NET"
               IntroductionDate="7/1/2010"
               Price="599.00" />
      ...
    </Products>
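    For reference, after the change in Step 2 the complete service contract should look roughly like the sketch below; the namespace and using directive are assumptions based on a default WCF Service Application project, not code from the original article:

    using System.ServiceModel;

    namespace WP_GetXmlFromDataSet.Services
    {
      [ServiceContract]
      public interface IProductService
      {
        // Returns the Product table serialized as an attribute-based XML string
        [OperationContract]
        string GetProductXml();
      }
    }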
    The GetProductXml() method uses a connection string from the Web.Config file, so add a <connectionStrings> element to the Web.Config file in your WCF Service application. Modify the settings shown below as needed for your server and database name:

    <connectionStrings>
      <add name="Sandbox"
           connectionString="Server=Localhost;Database=Sandbox;
                             Integrated Security=Yes"/>
    </connectionStrings>

    The Product Table

    You will need a Product table that you can read data from. I used the following structure for my Product table. Add any data you want to this table after you create it in your database.

    CREATE TABLE Product
    (
      ProductId int PRIMARY KEY IDENTITY(1,1) NOT NULL,
      ProductName varchar(50) NOT NULL,
      IntroductionDate datetime NULL,
      Price money NULL
    )

    Step 3: Connect to the WCF Service from the Windows Phone Application

    Back in your Windows Phone application, you now need to add a Service Reference to the WCF Service application you just created. Right-click on the Windows Phone project and choose Add Service Reference… from the context menu. Click on the Discover button. In the Namespace text box enter "ProductServiceReference", then click the OK button. If you entered everything correctly, Visual Studio will generate the code that allows you to connect to your Product service.

    On the MainPage.xaml designer window, double-click on the Get XML button to generate the Click event procedure for this button. In the Click event procedure, make a call to a GetXmlFromServer() method. This method will also need a "Completed" event procedure, since all communication with a WCF Service from Windows Phone must be asynchronous. Write these two methods as follows:

    private const string KEY_NAME = "ProductData";

    private void GetXmlFromServer()
    {
      ProductServiceClient client = new ProductServiceClient();

      client.GetProductXmlCompleted += new
        EventHandler<GetProductXmlCompletedEventArgs>
          (client_GetProductXmlCompleted);

      client.GetProductXmlAsync();
      client.CloseAsync();
    }

    void client_GetProductXmlCompleted(object sender,
                                       GetProductXmlCompletedEventArgs e)
    {
      // Store XML data in Isolated Storage
      IsolatedStorageSettings.ApplicationSettings[KEY_NAME] = e.Result;

      btnRead.IsEnabled = true;
    }

    As you can see, this is a fairly standard call to a WCF Service. In the Completed event you get the Result from the event argument, which is the XML, and store it in Isolated Storage using the IsolatedStorageSettings.ApplicationSettings class. Notice the constant that specifies the name of the key; you will use this constant later to read the data from Isolated Storage.
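    One detail worth knowing: on Windows Phone, ApplicationSettings values are persisted automatically when the application exits. If you want the cached XML written to Isolated Storage immediately (for example, to guard against the application being terminated first), you can save explicitly. This one-liner is an addition to the original walkthrough, using the standard Save() method on IsolatedStorageSettings:

    // Force the settings dictionary to be written to Isolated Storage now
    IsolatedStorageSettings.ApplicationSettings.Save();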
    Step 4: Create a Product Class

    Even though you stored the XML data in Isolated Storage, when you read that data back out you will want to convert each element in the XML into an actual Product object. This means that you need to create a Product class in your Windows Phone application. Add a Product class to your project that looks like the code below:

    public class Product
    {
      public string ProductName { get; set; }
      public int ProductId { get; set; }
      public DateTime IntroductionDate { get; set; }
      public decimal Price { get; set; }
    }

    Step 5: Read Settings from Isolated Storage

    Now that you have the XML data stored in Isolated Storage, it is time to use it. Go back to the MainPage.xaml design view and double-click on the Read XML button to generate the Click event procedure. From the Click event procedure, call a method named ReadProductXml(). Create this method as shown below:

    private void ReadProductXml()
    {
      XElement xElem = null;

      if (IsolatedStorageSettings.ApplicationSettings.Contains(KEY_NAME))
      {
        xElem = XElement.Parse(
          IsolatedStorageSettings.ApplicationSettings[KEY_NAME].ToString());

        // Create a list of Product objects
        var products =
            from prod in xElem.Descendants("Product")
            orderby prod.Attribute("ProductName").Value
            select new Product
            {
              ProductId = Convert.ToInt32(prod.Attribute("ProductId").Value),
              ProductName = prod.Attribute("ProductName").Value,
              IntroductionDate =
                Convert.ToDateTime(prod.Attribute("IntroductionDate").Value),
              Price = Convert.ToDecimal(prod.Attribute("Price").Value)
            };

        lstData.DataContext = products;
      }
    }

    The ReadProductXml() method checks that the key name you saved your XML under exists in Isolated Storage before trying to open it. If the key name exists, you retrieve the value as a string and use the XElement's Parse method to convert the XML string to an XElement object. LINQ to XML is used to iterate over each element in the XElement object and create a new Product object from the attributes in your XML, ordering the data by ProductName. After the LINQ to XML code runs, you end up with an IEnumerable collection of Product objects in the variable named "products". You assign this collection of product data to the DataContext of the ListBox you created in XAML. The DisplayMemberPath property of the ListBox is set to "ProductName", so it will display the product name for each row in your products collection.

    Summary

    In this article you learned how to retrieve an XML string from a table in a database, return that string across a WCF Service, and store it in Isolated Storage on your Windows Phone. You then used LINQ to XML to create a collection of Product objects from the stored data and displayed that data in a Windows Phone ListBox. This same technique can be used in Silverlight or WPF applications too.

    NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get XML From Server for Use on Windows Phone" from the drop-down.

    Good luck with your coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.

    Read the article

< Previous Page | 146 147 148 149 150 151 152 153 154  | Next Page >