Search Results

Search found 5212 results on 209 pages for 'forward'.


  • Java JRE 1.7.0_60 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    Java Runtime Environment 7u60 (a.k.a. JRE 7u60-b19) and later updates on the JRE 7 codeline are now certified with Oracle E-Business Suite Release 11i and 12.0, 12.1, and 12.2 for Windows-based desktop clients. Effects of new support dates on Java upgrades for EBS environments Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details: What does this mean for Oracle E-Business Suite users? Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients? Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers? All JRE 6 and 7 releases are certified with EBS upon release Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline, and from JRE 7u10 and later updates on the JRE 7 codeline.  We test all new JRE 1.6 and JRE 7 releases in parallel with the JRE development process, so all new JRE 1.6 and 7 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team.  You do not need to wait for a certification announcement before applying new JRE 1.6 or JRE 7 releases to your EBS users' desktops. What's new in JRE 1.7.0_60? JDK 7u60 contains IANA time zone data version 2014b. For more information, refer to Timezone Data Versions in the JRE Software. It is strongly recommended that all customers upgrade to this release.  Details about update in this release are listed in the release notes. 32-bit and 64-bit versions certified This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details. Where are the official patch requirements documented? All patches required for ensuring full compatibility of the E-Business Suite with JRE 7 are documented in these Notes: For EBS 11i: Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 11i (Note 290807.1) Upgrading Developer 6i with Oracle E-Business Suite 11i (Note 125767.1) For EBS 12.0, 12.1, 12.2 Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 12 (Note 393931.1) Upgrading OracleAS 10g Forms and Reports in Oracle E-Business Suite Release 12 (Note 437878.1) EBS + Discoverer 11g Users JRE 1.7.0_60 is certified for Discoverer 11g in E-Business Suite environments with the following minimum requirements: Discoverer (11g) 11.1.1.6 plus Patch 13877486 and later  Reference: How To Find Oracle BI Discoverer 10g and 11g Certification Information (Document 233047.1) Worried about the 'mismanaged session cookie' issue? No need to worry -- it's fixed.  To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23. These fixes will carry forward and continue to be fixed in all future JRE releases on the JRE 6 and 7 codelines.  In other words, if you wish to avoid the mismanaged session cookie issue, you should apply any release after JRE 1.6.0_22 on the JRE 6 codeline, and JRE 7u10 and later JRE 7 codeline updates. Implications of Java 6 End of Public Updates for EBS Users The Support Roadmap for Oracle Java is published here: Oracle Java SE Support Roadmap The latest updates to that page (as of Sept. 
    19, 2012) state (emphasis added): Java SE 6 End of Public Updates Notice After February 2013, Oracle will no longer post updates of Java SE 6 to its public download sites. Existing Java SE 6 downloads already posted as of February 2013 will remain accessible in the Java Archive on Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download. For enterprise customers, who need continued access to critical bug fixes and security fixes as well as general maintenance for Java SE 6 or older versions, long term support is available through Oracle Java SE Support. What does this mean for Oracle E-Business Suite users? EBS users fall under the category of "enterprise users" above.  Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 6 updates from February 2013 to the end of Java SE 6 Extended Support in June 2017. In other words, nothing changes for EBS users after February 2013.  EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6 until the end of Java SE 6 Extended Support in June 2017. How can EBS customers obtain Java 6 updates after the public end-of-life? EBS customers can download Java 6 patches from My Oracle Support.  For a complete list of all Java SE patch numbers, see: All Java SE Downloads on MOS (Note 1439822.1) Both JDK and JRE packages are contained in a single combined download after 6u45.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package.  Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients? This upgrade is highly recommended but remains optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 desktop clients.  Java 6 is covered by Extended Support until June 2017.  All E-Business Suite customers must upgrade to JRE 7 by June 2017. Coexistence of JRE 6 and JRE 7 on Windows desktops The upgrade to JRE 7 is highly recommended for EBS users, but some users may need to run both JRE 6 and 7 on their Windows desktops for reasons unrelated to the E-Business Suite. Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 7 will be invoked instead of JRE 6 if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1. Applying Updates to JRE 6 and JRE 7 on Windows desktops Auto-update will keep JRE 7 up-to-date for Windows users with JRE 7 installed. Auto-update will only keep JRE 7 up-to-date for Windows users with both JRE 6 and 7 installed.  JRE 6 users are strongly encouraged to apply the latest Critical Patch Updates as soon as possible after each release. The Java SE CPUs will be available via My Oracle Support.  EBS users can find more information about JRE 6 and 7 updates here: Information Center: Installation & Configuration for Oracle Java SE (Note 1412103.2) The dates for future Java SE CPUs can be found on the Critical Patch Updates, Security Alerts and Third Party Bulletin page.  An RSS feed is available on that site for those who would like to be kept up-to-date. What do Mac users need? Mac users running Mac OS X 10.9 can run JRE 7 plug-ins.  
    See this article: EBS Release 12 Certified with Mac OS X 10.9 with Safari 7 and JRE 7 Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers? JRE is used for desktop clients.  JDK is used for application tier servers. JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers.  Java SE 6 is covered by Extended Support until June 2017.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 by June 2017. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms. JDK 7 is certified with E-Business Suite 12.  See: Java (JDK) 7 Certified for E-Business Suite 12.0 and 12.1 Servers Java (JDK) 7 Certified with E-Business Suite 12.2 Servers References Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1) Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1) Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1) Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1) Related Articles Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23 Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009
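    For administrators who want to confirm which JRE release a Windows desktop is actually running before and after applying these updates, a small inventory script can help. The following is a minimal C# sketch, assuming the standard Oracle JRE registry layout on Windows (SOFTWARE\JavaSoft\Java Runtime Environment), with 32-bit JREs on 64-bit Windows typically registered under the Wow6432Node path; adjust for your own environment.

    using System;
    using Microsoft.Win32;

    class JreVersionCheck
    {
        static void Main()
        {
            // Standard Oracle JRE registry locations on Windows; a 32-bit JRE on
            // 64-bit Windows usually registers under the Wow6432Node path instead.
            string[] keyPaths =
            {
                @"SOFTWARE\JavaSoft\Java Runtime Environment",
                @"SOFTWARE\Wow6432Node\JavaSoft\Java Runtime Environment"
            };

            foreach (string path in keyPaths)
            {
                using (RegistryKey key = Registry.LocalMachine.OpenSubKey(path))
                {
                    if (key == null)
                    {
                        continue;   // no JRE registered under this path
                    }

                    // "CurrentVersion" holds the family (e.g. "1.7"); the individual
                    // releases (e.g. "1.7.0_60") appear as subkey names beneath it.
                    string family = key.GetValue("CurrentVersion") as string;
                    Console.WriteLine("{0}: current family {1}; releases: {2}",
                        path, family, string.Join(", ", key.GetSubKeyNames()));
                }
            }
        }
    }

    Run as part of a desktop inventory job, a check like this can confirm that auto-update (or a pushed Critical Patch Update) actually delivered the expected JRE release described above.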

    Read the article

  • CodePlex Daily Summary for Sunday, November 18, 2012

    CodePlex Daily Summary for Sunday, November 18, 2012Popular ReleasesSTeaL : stealed functionarities from STL: STeaL 0.3 (prerelease): set_adaptor<T> added.ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.0: +2012-11-18 v3.2.0 -?????????????????SelectedValueArray????????(◇?◆:)。 -???????????????????RecoverPropertiesFromJObject????(〓?〓、????、??、Vian_Pan)。 -????????????,?????????????,???SelectedValueArray???????(sam.chang)。 -??Alert.Show???????????(swtseaman)。 -???????????????,??Icon??IconUrl????(swtseaman)。 -?????????TimePicker(??)。 -?????????,??/res.axd?css=blue.css&v=1。 -????????,?????????????,???????。 -????MenuCheckBox(???????)。 -?RadioButton??AutoPostBack??。 -???????FCKEditor?????????...BugNET Issue Tracker: BugNET 1.2: Please read our release notes for BugNET 1.2: http://blog.bugnetproject.com/bugnet-1-2-has-been-released Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you will pollute the rating, and you won't get an answer.Paint.NET PSD Plugin: 2.2.0: Changes: Layer group visibility is now applied to all layers within the group. This greatly improves the visual fidelity of complex PSD files that have hidden layer groups. Layer group names are prefixed so that users can get an indication of the layer group hierarchy. (Paint.NET has a flat list of layers, so the hierarchy is flattened out on load.) The progress bar now reports status when saving PSD files, instead of showing an indeterminate rolling bar. Performance improvement of 1...replaceSID: replaceSID v0.1.145.12321: Changelog: - Added Backup ACL - Added Restore ACL - Some Bugfixes - Settings Tested in development enviroment. Still needs to be tested in production.AppBarUtils: AppBarUtils 2.2: Starting from this release, AppBarUtils supports both Windows Phone SDK 7.1 and 8.0. You can download the dll accordingly. If you're upgrading an existing app to Windows Phone 8.0, you can just replace the dll without any changes to the existing code. Of course, you need to make sure that you have the correct Blend SDK dll referenced. The source code contains two testing projects, one for WP SDK 7.1, the other for WP SDK 8.0, which share the same code base. You can refer to these two projec...YALV! - Yet Another Log4Net Viewer: YALV! v1.2.0.0: New release for YALV Project - Version 1.2.0.0 New feature: - Russian localization Improvements - Minor GUI changesfastBinaryJSON: v1.3.5: - added support for root level DataSet and DataTable deserialize (you have to do ToObject<DataSet>(...) ) - added dataset tests - added MonoDroid projectWater Entity for SunBurn: Sunburn Water Entity For 2.0.1.8 (Deffered Only): Sunburn water entity for Sunburn 2.0.1.8 for deffered rendering only, forward water is not working yet. You need to download water normal maps, from Sunburn Reflection/Refraction example from Here.CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1116.7): [IMPROVED] Detailed error message descriptions for FaultException [FIX] Fixed bug in rule CrmOfflineAccessStateRule which had incorrect State attribute name [FIX] Fixed bug in rule EntityPropertyRule which was missing PropertyValue attribute [FIX] Current connection information was not displayed in status bar while refreshing list of entitiesSuper Metroid Randomizer: Super Metroid Randomizer v5: v5 -Added command line functionality for automation purposes. -Implented Krankdud's change to randomize the Etecoon's item. 
NOTE: this version will not accept seeds from a previous version. The seed format has changed by necessity. v4 -Started putting version numbers at the top of the form. -Added a warning when suitless Maridia is required in a parsed seed. v3 -Changed seed to only generate filename-legal characters. Using old seeds will still work exactly the same. -Files can now be saved...Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.4: Changes This version includes many bug fixes across all platforms, improvements to nuget support and...the biggest news of all...full support for both WinRT and WP8. Download Contents Debug and Release Assemblies Samples Readme.txt License.txt Packages Available on Nuget Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – The Caliburn.Micro invers...DirectX Tool Kit: November 15, 2012: November 15, 2012 Added support for WIC2 when available on Windows 8 and Windows 7 with KB 2670838 Cleaned up warning level 4 warningsDotNetNuke® Community Edition CMS: 06.02.05: Major Highlights Updated the system so that it supports nested folders in the App_Code folder Updated the Global Error Handling so that when errors within the global.asax handler happen, they are caught and shown in a page displaying the original HTTP error code Fixed issue that stopped users from specifying Link URLs that open on a new window Security FixesFixed issue in the Member Directory module that could show members to non authenticated users Fixed issue in the Lists modul...xUnit.net Contrib: xunitcontrib-resharper 0.7 (RS 7.1, 6.1.1): xunitcontrib release 0.6.1 (ReSharper runner) This release provides a test runner plugin for Resharper 7.1 RTM and 6.1.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release drops 7.0 support and targets the latest revisions of the last two major versions of ReSharper (namely 7.0 and 6.1.1). Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. Also note that all builds work against ALL ...OnTopReplica: Release 3.4: Update to the 3 version with major fixes and improvements. Compatible with Windows 8. Now runs (and requires) .NET Framework v.4.0. Added relative mode for region selection (allows the user to select regions as margins from the borders of the thumbnail, useful for windows which have a variable size but fixed size controls, like video players). Improved window seeking when restoring cloned thumbnail or cloning a window by title or by class. Improved settings persistence. Improved co...DotSpatial: DotSpatial 1.4: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.5: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. 
You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes Docum...AcDown?????: AcDown????? v4.3: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...????: ???? 1.0: ????New ProjectsBigStringComparer: Proj for compa re big strin gsCarShow: ProjDocManager: ProjFinDesktop: ProjFolder-File Compare: Windows app that compares the files in two given folders/subfolders. Results display in five tabs: Files in A,in B, in A not in B, in B not in A, and differentFoq: Foq is a lightweight mocking library for F#. Use Foq to mock abstract classes and interfaces.GtFramework: A 2D Game FrameworkGucSharep: ProjiDrive: The iDrive solution is a website that enables you to upload and share files.InitialPrototype: This is Initial Prototype of my web 2.0 projectIvanProjects: This is Ivan's ProjectsJAudio Player: JAudio Player is a player for BMS music sequences that are used in several games for Nintendo GameCube and Wii.JGoldDirector: ProjKontinum: Demo site for working flow.Netkill: Simple kill button for network connectivity, with some additional features. Disable network connectivity with a single click, or when a certain program starts.only one test project: um projecto que basicamente não faz nada!! :PP2StillLife: Making a still life with shaders. Warps an image.PetaPX: PetaPX is a photo community powered by creative people worldwide that lets you discover, share, buy and sell inspiring photographs. Qbicon Editor: 2D Polygonal Maps Editor for games and other - Graphical Editor for polygonal objects, points. - Layers - Textures Quick-Chat-Application: Chat Application made using multi-layered architectures( ASP.NET MVC and WCF service).Regular Expression Editor: Regular Expression Editor is the tool for editing and testing Regular ExpressionsScout - Web Patrol: A Microsoft Internet Explorer plug-in that will notifies the user when selected Favorites have been updated since those pages were last visited.silowniafitness: nothingnessSports Tracker: Sports league management system.Stellissimo Wordpress Text Box plugin: Develope to wordpress plugin to add a text boxvba: vba????WCFNH11: clase nh WebCalendar: This will be the summary...?lassifier Tool For OpenCV: Program to calculate vector for function DetectMultiScale in OpenCV.

    Read the article

  • CodePlex Daily Summary for Thursday, May 31, 2012

    CodePlex Daily Summary for Thursday, May 31, 2012Popular ReleasesNaked Objects: Naked Objects Release 4.1.0: Corresponds to the packaged version 4.1.0 available via NuGet. Note that the versioning has moved to SemVer (http://semver.org/) This is a bug fix release with no new functionality. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the Official NuGet Package Source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. If you do wi...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.54: Fix for issue #18161: pretty-printing CSS @media rule throws an exception due to mismatched Indent/Unindent pair.OMS.Ice - T4 Text Template Generator: OMS.Ice - T4 Text Template Generator v1.4.0.14110: Issue 601 - Template file name cannot contain characters that are not allowed in C#/VB identifiers Issue 625 - Last line will be ignored by the parser Issue 626 - Usage of environment variables and macrosSilverlight Toolkit: Silverlight 5 Toolkit Source - May 2012: Source code for December 2011 Silverlight 5 Toolkit release.totalem: version 2012.05.30.1: Beta version added function for mass renaming files and foldersAudio Pitch & Shift: Audio Pitch and Shift 4.4.0: Tracklist added on main window Improved performances with tracklist Some other fixesJson.NET: Json.NET 4.5 Release 6: New feature - Added IgnoreDataMemberAttribute support New feature - Added GetResolvedPropertyName to DefaultContractResolver New feature - Added CheckAdditionalContent to JsonSerializer Change - Metro build now always uses late bound reflection Change - JsonTextReader no longer returns no content after consecutive underlying content read failures Fix - Fixed bad JSON in an array with error handling creating an infinite loop Fix - Fixed deserializing objects with a non-default cons...Indent Guides for Visual Studio: Indent Guides v12.1: Version History Changed in v12.1: Fixed crash when unable to start asynchronous analysis Fixed upgrade from v11 Changed in v12: background document analysis new options dialog with Quick Set selections for behavior new "glow" style for guides new menu icon in VS 11 preview control now uses editor theming highlighting can be customised on each line fixed issues with collapsed code blocks improved behaviour around left-aligned pragma/preprocessor commands (C#/C++) new setting...DotNetNuke® Community Edition CMS: 06.02.00: Major Highlights Fixed issue in the Site Settings when single quotes were being treated as escape characters Fixed issue loading the Mobile Premium Data after upgrading from CE to PE Fixed errors logged when updating folder provider settings Fixed the order of the mobile device capabilities in the Site Redirection Management UI The User Profile page was completely rebuilt. We needed User Profiles to have multiple child pages. This would allow for the most flexibility by still f...Thales Simulator Library: Version 0.9.6: The Thales Simulator Library is an implementation of a software emulation of the Thales (formerly Zaxus & Racal) Hardware Security Module cryptographic device. This release fixes a problem with the FK command and a bug in the implementation of PIN block 05 format deconstruction. A new 0.9.6.Binaries file has been posted. This includes executable programs without an installer, including the GUI and console simulators, the key manager and the PVV clashing demo. 
Please note that you will need ...????: ????2.0.1: 1、?????。WiX Toolset: WiX v3.6 RC: WiX v3.6 RC (3.6.2928.0) provides feature complete Burn with VS11 support. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/5/28/WiX-v3.6-Release-Candidate-availableJavascript .NET: Javascript .NET v0.7: SetParameter() reverts to its old behaviour of allowing JavaScript code to add new properties to wrapped C# objects. The behavior added briefly in 0.6 (throws an exception) can be had via the new SetParameterOptions.RejectUnknownProperties. TerminateExecution now uses its isolate to terminate the correct context automatically. Added support for converting all C# integral types, decimal and enums to JavaScript numbers. (Previously only the common types were handled properly.) Bug fixe...callisto: callisto 2.0.29: Added DNS functionality to scripting. See documentation section for details of how to incorporate this into your scripts.Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (May 2012): Fixes: unserialize() of negative float numbers fix pcre possesive quantifiers and character class containing ()[] array deserilization when the array contains a reference to ISerializable parsing lambda function fix round() reimplemented as it is in PHP to avoid .NET rounding errors filesize bypass for FileInfo.Length bug in Mono New features: Time zones reimplemented, uses Windows/Linux databaseSharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.1): New fetures:Admin enable / disable match Hide/Show Euro 2012 SharePoint lists (3 lists) Installing SharePoint Euro 2012 PredictorSharePoint Euro 2012 Predictor has been developed as a SharePoint Sandbox solution to support SharePoint Online (Office 365) Download the solution havivi.euro2012.wsp from the download page: Downloads Upload this solution to your Site Collection via the solutions area. Click on Activate to make the web parts in the solution available for use in the Site C...ScreenShot: InstallScreenShot: This is the current stable release.????SDK for .Net 4.0+(OAuth2.0+??V2?API): ??V2?SDK???: ?????????API?? ???????OAuth2.0?? ????:????????????,??????????“SOURCE CODE”?????????Changeset,http://weibosdk.codeplex.com/SourceControl/list/changesets ???:????????,DEMO??AppKey????????????????,?????AppKey,????AppKey???????????,?????“????>????>????>??????”.Net Code Samples: Code Samples: Code samples (SLNs).LINQ_Koans: LinqKoans v.02: Cleaned up a bitNew ProjectsAAABBBCCC: hauhuhaAutomação e Robótica: Sistema de automação e robótica de periféricos. todo o projeto consiste em integrar interface de controle pelo software do hardware para realização de comando e acionamentos eletricos e eletromecânicos.Barium Live! API Client: Simple test client for the Barium Live! REST API Contact Barium Live! support on http://www.bariumlive.com/support to get an API key for development. Please note that the test client has a dependency to the Newtonsoft Json.NET library that has to be downloaded separately from http://json.codeplex.com/BizTalk Host Restarter: This is a quick tool that I wrote to more quickly Stop, Start and Restart the BizTalk Host instances. It's command line, with full source code, I use it in my build scripts and deploy scripts, works a treat. MUCH faster than the BizTalk Admin Console can do the same task. CNAF Portal: CNAFPortalConsulta_Productos: Nos permite ingresar un código del 1 al 10..... 
y al dar clic en buscar nos aparecerá un mensaje con el nombre del producto de dicho código.Deployment Tool: Deployement is a small piece in our IT lifecycle for many small projects. But, considering that you are working for a large project, it has many servers, softwares, configurations, databases and so on. The problem emits, even you have upgrades and version controls. The deploymentool is a solution which can help you resolve the difficulties described above.Devmil MVVM: A toolset for MVVM development with focus on INotifyPropertyChanged and dependenciesEAL: no summaryEasyCLI: EasyCLI is a command shell for the Commodore 64 computer. EasyCLI is packaged as an EasyFlash cartridge.ExcelSharePointListSync: This tool help is Getting Data Form SharePoint to Excel 2010 , Following are the silent feature : 1) Maintain a List connection Offline 2) Choose to get data Form a View of Form Columns 3) Sync the data Back to SharePoint GH: Some summaryicontrolpcv: icontrolpvcLotteryVoteMVC: LotteryVote use mvcMail API for Windows Phone 7: Library that allows developers to add some email functionality to their application, like sending and reading email messages, with the capability of adding attachments to the messages. Matchy - Fashion Website: Fashion website for Match Brand, a fashion brand of BOOM JSC.,MIracLE Guild Admin System: asdOrchard Content Picker: Orchard Content Picker enables content editor to select Content Items.ProSoft Mail Services: This project implements a mail server that allows sending mail with SMTP and access the mailbox via POP. Later on, the server also get a MAPI interface and an interface for SPAM defense.Silver Sugar Web Browser: This is a simple web browser named "Silver Sugar". It has a back button, forward button, home button, url text box and a go button.social sdk for windows phone: Windows Phone Social Share Lib. (Including Weibo, Tencent Weibo & Renren)SolutionX: dfsdf asdfds asdfsdf asdfsdfspUtils: This project aims to facilitate easy to use methods that interact with SharePoint's Client OM/JSOM.Streambolics Library: A library of C# classes providing robust tools for real-time data monitoringTestExpression: TestExpressionTFS Helper package: VS2010 is well integrated to TFS and has all the features required on day to day basis. Most common feature any developer use is managing workitems. Creating work items is very easy as is copying work items, but what I need is to copy workitem between projects retaining links and attachments. I am sure this is not Ideal and against sprint projects.Ubelsoft: Sistema de Registro Eletrônico de Ponto - SREPZombieSim: Simulate the spread of a zombie virusZYWater: ......................

    Read the article

  • C#/.NET Little Wonders: Constraining Generics with Where Clause

    - by James Michael Hare
    Back when I was primarily a C++ developer, I loved C++ templates.  The power of writing very reusable generic classes brought the art of programming to a brand new level.  Unfortunately, when .NET 1.0 came about, they didn’t have a template equivalent.  With .NET 2.0 however, we finally got generics, which once again let us spread our wings and program more generically in the world of .NET. However, C# generics behave in some ways very differently from their C++ template cousins.  There is a handy clause, however, that helps you navigate these waters to make your generics more powerful. The Problem – C# Assumes Lowest Common Denominator In C++, you can create a template and do nearly anything syntactically possible on the template parameter, and C++ will not check if the methods/fields/operations invoked are valid until you declare a realization of the type.  Let me illustrate with a C++ example: 1: // compiles fine, C++ makes no assumptions as to T 2: template <typename T> 3: class ReverseComparer 4: { 5: public: 6: int Compare(const T& lhs, const T& rhs) 7: { 8: return rhs.CompareTo(lhs); 9: } 10: }; Notice that we are invoking a method CompareTo() off of template type T.  Because we don’t know at this point what type T is, C++ makes no assumptions and there are no errors. C++ tends to take the path of not checking the template type usage until the method is actually invoked with a specific type, which differs from the behavior of C#: 1: // this will NOT compile! C# assumes lowest common denominator. 2: public class ReverseComparer<T> 3: { 4: public int Compare(T lhs, T rhs) 5: { 6: return lhs.CompareTo(rhs); 7: } 8: } So why does C# give us a compiler error even when we don’t yet know what type T is?  This is because C# took a different path in how they made generics.  Unless you specify otherwise, for the purposes of the code inside the generic method, T is basically treated like an object (notice I didn’t say T is an object). That means that any operations, fields, methods, properties, etc. that you attempt to use on type T must be available at the lowest common denominator type: object.  Now, while object has the broadest applicability, it also has the fewest specific capabilities.  So how do we allow our generic type placeholder to do more than just what object can do? Solution: Constrain the Type With the Where Clause So how do we get around this in C#?  The answer is to constrain the generic type placeholder with the where clause.  Basically, the where clause allows you to specify additional constraints on what the actual type used to fill the generic type placeholder must support. You might think that narrowing the scope of a generic means a weaker generic.  In reality, though it limits the number of types that can be used with the generic, it also gives the generic more power to deal with those types.  In effect, these constraints say that if the type meets the given constraint, you can perform the activities that pertain to that constraint with the generic placeholders. Constraining Generic Type to Interface or Superclass One of the handiest where clause constraints is the ability to specify that the generic type must implement a certain interface or be inherited from a certain base class.
For example, you can’t call CompareTo() in our first C# generic without constraints, but if we constrain T to IComparable<T>, we can: 1: public class ReverseComparer<T> 2: where T : IComparable<T> 3: { 4: public int Compare(T lhs, T rhs) 5: { 6: return lhs.CompareTo(rhs); 7: } 8: } Now that we’ve constrained T to an implementation of IComparable<T>, this means that our variables of generic type T may now call any members specified in IComparable<T> as well.  This means that the call to CompareTo() is now legal. If you constrain your type, also, you will get compiler warnings if you attempt to use a type that doesn’t meet the constraint.  This is much better than the syntax error you would get within C++ template code itself when you used a type not supported by a C++ template. Constraining Generic Type to Only Reference Types Sometimes, you want to assign an instance of a generic type to null, but you can’t do this without constraints, because you have no guarantee that the type used to realize the generic is not a value type, where null is meaningless. Well, we can fix this by specifying the class constraint in the where clause.  By declaring that a generic type must be a class, we are saying that it is a reference type, and this allows us to assign null to instances of that type: 1: public static class ObjectExtensions 2: { 3: public static TOut Maybe<TIn, TOut>(this TIn value, Func<TIn, TOut> accessor) 4: where TOut : class 5: where TIn : class 6: { 7: return (value != null) ? accessor(value) : null; 8: } 9: } In the example above, we want to be able to access a property off of a reference, and if that reference is null, pass the null on down the line.  To do this, both the input type and the output type must be reference types (yes, nullable value types could also be considered applicable at a logical level, but there’s not a direct constraint for those). Constraining Generic Type to only Value Types Similarly to constraining a generic type to be a reference type, you can also constrain a generic type to be a value type.  To do this you use the struct constraint which specifies that the generic type must be a value type (primitive, struct, enum, etc). Consider the following method, that will convert anything that is IConvertible (int, double, string, etc) to the value type you specify, or null if the instance is null. 1: public static T? ConvertToNullable<T>(IConvertible value) 2: where T : struct 3: { 4: T? result = null; 5:  6: if (value != null) 7: { 8: result = (T)Convert.ChangeType(value, typeof(T)); 9: } 10:  11: return result; 12: } Because T was constrained to be a value type, we can use T? (System.Nullable<T>) where we could not do this if T was a reference type. Constraining Generic Type to Require Default Constructor You can also constrain a type to require existence of a default constructor.  Because by default C# doesn’t know what constructors a generic type placeholder does or does not have available, it can’t typically allow you to call one.  That said, if you give it the new() constraint, it will mean that the type used to realize the generic type must have a default (no argument) constructor. Let’s assume you have a generic adapter class that, given some mappings, will adapt an item from type TFrom to type TTo.  
Because it must create a new instance of type TTo in the process, we need to specify that TTo has a default constructor: 1: // Given a set of Action<TFrom,TTo> mappings will map TFrom to TTo 2: public class Adapter<TFrom, TTo> : IEnumerable<Action<TFrom, TTo>> 3: where TTo : class, new() 4: { 5: // The list of translations from TFrom to TTo 6: public List<Action<TFrom, TTo>> Translations { get; private set; } 7:  8: // Construct with empty translation and reverse translation sets. 9: public Adapter() 10: { 11: // did this instead of auto-properties to allow simple use of initializers 12: Translations = new List<Action<TFrom, TTo>>(); 13: } 14:  15: // Add a translator to the collection, useful for initializer list 16: public void Add(Action<TFrom, TTo> translation) 17: { 18: Translations.Add(translation); 19: } 20:  21: // Add a translator that first checks a predicate to determine if the translation 22: // should be performed, then translates if the predicate returns true 23: public void Add(Predicate<TFrom> conditional, Action<TFrom, TTo> translation) 24: { 25: Translations.Add((from, to) => 26: { 27: if (conditional(from)) 28: { 29: translation(from, to); 30: } 31: }); 32: } 33:  34: // Translates an object forward from TFrom object to TTo object. 35: public TTo Adapt(TFrom sourceObject) 36: { 37: var resultObject = new TTo(); 38:  39: // Process each translation 40: Translations.ForEach(t => t(sourceObject, resultObject)); 41:  42: return resultObject; 43: } 44:  45: // Returns an enumerator that iterates through the collection. 46: public IEnumerator<Action<TFrom, TTo>> GetEnumerator() 47: { 48: return Translations.GetEnumerator(); 49: } 50:  51: // Returns an enumerator that iterates through a collection. 52: IEnumerator IEnumerable.GetEnumerator() 53: { 54: return GetEnumerator(); 55: } 56: } Notice, however, you can’t specify any other constructor, you can only specify that the type has a default (no argument) constructor. Summary The where clause is an excellent tool that gives your .NET generics even more power to perform tasks higher than just the base "object level" behavior.  There are a few things you cannot specify with constraints (currently) though: Cannot specify the generic type must be an enum. Cannot specify the generic type must have a certain property or method without specifying a base class or interface – that is, you can’t say that the generic must have a Start() method. Cannot specify that the generic type allows arithmetic operations. Cannot specify that the generic type requires a specific non-default constructor. In addition, you cannot overload a template definition with different, opposing constraints.  For example you can’t define a Adapter<T> where T : struct and Adapter<T> where T : class.  Hopefully, in the future we will get some of these things to make the where clause even more useful, but until then what we have is extremely valuable in making our generics more user friendly and more powerful!   Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,where,generics
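    As a quick illustration of how the class and new() constraints pay off in practice, here is a minimal usage sketch for the Adapter<TFrom, TTo> class above. The PersonDto and PersonViewModel types (and their properties) are hypothetical, introduced only to show the collection-initializer syntax that the two Add() overloads enable:

    using System;

    public class PersonDto
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    public class PersonViewModel
    {
        public string DisplayName { get; set; }
        public bool IsAdult { get; set; }
    }

    public static class AdapterExample
    {
        public static void Main()
        {
            // Collection-initializer syntax works because Adapter<TFrom, TTo>
            // implements IEnumerable and exposes the two Add() overloads above.
            var adapter = new Adapter<PersonDto, PersonViewModel>
            {
                (from, to) => to.DisplayName = from.Name,
                { p => p.Age >= 18, (from, to) => to.IsAdult = true }
            };

            // TTo's new() constraint is what lets Adapt() construct the PersonViewModel.
            PersonViewModel vm = adapter.Adapt(new PersonDto { Name = "Ada", Age = 36 });
            Console.WriteLine("{0} (adult: {1})", vm.DisplayName, vm.IsAdult);
        }
    }

    The bare lambda maps to the single-argument Add() overload, while the brace-enclosed pair maps to the conditional overload, so the initializer reads like a small mapping table.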

    Read the article

  • The Internet of Things & Commerce: Part 3 -- Interview with Kristen J. Flanagan, Commerce Product Management

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
    Internet of Things & Commerce Series: Part 3 (of 3) And now for the final installment of my three-part series on the Internet of Things & Commerce. Post one, “The Next 7,000 Days”, introduced the idea of the Internet of Things, followed by a second post interviewing one of our chief commerce innovation strategists, Brian Celenza.  This final post in the series is an interview with Kristen J. Flanagan, lead product manager for Oracle Commerce omnichannel strategy. She takes us through the past, present, and future of how our Commerce Solution is re-imagining the way physical and digital shopping come together. ------- QUESTION: It’s your job to stay on top of what our customers need to not only run their online businesses effectively, but also to make sure they have product capabilities they can innovate and grow on. What key trend has been top-of-mind for you and our customers around this collision of physical and digital shopping? Kristen: I’ll agree with Brian Celenza that hands down mobile has forced a major disruption in shopping and selling behavior. A few years ago, mobile exploded at a pace I don't think anyone was expecting. Early on, we saw our customers scrambling to establish a mobile presence---mostly through "screen scraping" technologies. As smartphones continued to advance (at lightning speed!), our customers started to investigate ways to truly tap into their eCommerce capabilities to deliver the mobile experience. They started looking to us for a means of using the eCommerce services and capabilities to deliver a mobile experience that is tailored for mobile rather than the desktop experience on a smaller screen. In the future, I think we'll see customers starting to really understand what their shoppers need and expect from a mobile offering and how they can adapt their content and delivery of that content to meet those needs. And, mobile shopping doesn’t stop at the consumer / buyer. Because the in-store experience is compelling and has advantages that digital just can't offer, we're also starting to see the eCommerce services being leveraged for mobile for in-store sales associates. Brick-and-mortar retailers are interested in putting the omnichannel product catalog, promotions, and cart into the hands of knowledgeable associates. Retailers are now looking to connect and harness the eCommerce data in-store so that shoppers have a reason to walk in. I think we'll be seeing a lot more customers thinking about melding the in-store and digital experiences to present a richer offering for shoppers. QUESTION: What are some examples of what our customers are doing currently to bring these concepts to reality? Kristen: Well, without question, connecting digital and brick-and-mortar worlds is becoming table stakes for selling experiences. If a brand has a foot in both worlds (i.e., isn’t a pure-play online retailer), they have to connect the dots because shoppers – whether consumers or B2B buyers – don't think in clearly defined channels anymore. The expectation is connectedness – for on- and offline experiences, promotions, products, and customer data. What does this mean practically for businesses selling goods on- and offline? It touches a lot of systems: inventory info on the eCommerce site, fulfillment options across channels (buy online/pickup in store), order information (representing various channels for a cohesive view of shopper order history), promotions across digital and store, etc.  A few years ago, the main link between store and digital was the smartphone. 
    We all remember when “apps” became a thing and many of our customers were scrambling to get a native app out there. Now we're seeing more strategic thinking around the benefits of mobile web vs. native and how that ties into the purpose and role of mobile within the digital channel. Put more broadly, how these pieces fit together in the overall brand puzzle.  The same could be said for “showrooming.” Where it was once a major concern (i.e., shoppers using stores to look at merchandise and then ordering online from Amazon), in recent months it’s emerged that the inverse is now becoming a reality as well. "Webrooming" (using digital sites to do research before making a purchase in the store) is a new behavior that pure-play retailers are challenged with. There are many technologies, behaviors, and pieces of information that need to tie together to offer a holistic omnichannel shopping experience. As a result, brands are looking for ways to connect the digital and in-store experiences to bridge the gaps: shared assortments across channels, assisted selling apps that arm associates with information about shoppers, shared promotions, inventory, etc. QUESTION: How has Oracle Commerce been built to help brands make the link between in-store and digital over the last few years? Kristen: Over the last seven years, the product has been in step with the changes in industry needs. Here is a brief history of the evolution: Prior to Oracle’s acquisition of ATG and Endeca, key investments were made in cross-channel functionality that we are still building on today. Commerce Service Center (v2007.1) ATG introduced the Commerce Service Center in 2007.1, which marked the first entry into what was then called “cross-channel.” The Commerce Service Center is a call-center-agent-facing application that enables agents to see shopper orders, online catalog, promotions, and pricing. It is tightly integrated with the eCommerce capabilities of the platform and commerce engine and provided a means of connecting data from the call center and online channels.  REST services framework (v9.1)  In v9.1 we introduced the REST services framework and interface in the Platform that enabled customers to use ATG web services in other applications. This framework has become the basis for our subsequent omni-channel features and functionality. Multisite Architecture (v10) With the v10 release, we introduced the Multisite Architecture, which enabled customers to manage multiple sites (and channels) within a single instance of the BCC. Customers could create site- and channel-specific catalogs, promotions, targeters, and scenarios. Endeca Page Builder (2.x) / Experience Manager (3.x) With the introduction of Endeca for Mobile (now part of the core platform, available through the reference store – see below) on top of Page Builder (and then eventually Experience Manager), Endeca gave business users the tools to create and manage native and mobile web applications. And since the acquisition of both ATG (2011) and Endeca (2012), Oracle Commerce has leveraged the best of each leading technology’s capabilities for omnichannel commerce to continue to drive innovation for our customers. Service enablement of core Oracle Commerce capabilities (v10.1.1, 10.2, & 11) After the establishment of the REST services framework and interface, we followed up in subsequent releases with service enablement of core Oracle Commerce capabilities throughout the iOS native app and the enablement of the core Commerce Service Center features. 
    The result is that customers can leverage these services for their integrations with other systems, as well as their omnichannel initiatives.  Mobile web reference application (v10.1) In 10.1 we introduced the shopper-facing mobile reference application that showed how to use Oracle Commerce to deliver a mobile web experience for shoppers. This included the use of Experience Manager and cartridges to drive those experiences on select pages.  Native (iOS) reference application (v10.1.1)  We came out with the 10.1.1 shopper-facing native iOS reference app that illustrated how to use the Commerce REST services to deliver an iOS app. It also included Experience Manager-driven pages.  Assisted Selling reference application (v10.2.1)  The Assisted Selling reference application is our first reference application designed for the in-store associate. This iOS app shows customers how they can use Oracle Commerce data and information to provide a high-touch, consultative sales environment as well as to put the endless aisle into the hands of their associates. Shoppers can start a cart online, and in-store associates can access that cart via the application to provide more information or add products and then transact using the ATG engine. Support for Retail promotions (v11) As part of the v11 release, we worked with teams in the Oracle Retail Global Business Unit (RGBU) to assess which promotion types and capabilities are supported across our products. Those products included Oracle Commerce, Oracle Point of Service (ORPOS), and Oracle Retail Price Management (RPM). The result is that customers can now more easily support omnichannel use cases between the store and digital.  Making sure Oracle Commerce can help support the omnichannel needs of our customers is core to our product strategy. With 89% of consumers now using two or more channels to make a single purchase, ensuring that cross-channel interactions are linked is critical to a great customer experience – and to sales. As Oracle Commerce evolves, we want to make it simple for organizations to create, deliver, and scale experiences across touchpoints with our create once, deploy commerce anywhere framework. We have a flexible, services-oriented architecture that allows data, content, catalogs, cart, experiences, personalization, and merchandising to be shared across touchpoints and easily extended into new environments like mobile, social, in-store, Call Center, and new Websites. [For the latest downloads and Oracle Commerce documentation, please visit the Oracle Technology Network.] ------ Thank you to both Brian and Kristen for their contributions to this blog series and their continued thought leadership for Oracle Commerce. We are all looking forward to the coming months and years of new shopping behaviors and opportunities to innovate. Because – if the digital fabric of our everyday lives continues to change at the same pace – the next five years (that's just under 2,000 days) will be dramatic. ---------- THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done. ‘Pure’ SOA involves the modeling of an organisation’s processes, the so-called ‘Top Down’ approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA. These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on ‘Succeeding with SOA’. Organisations who focus too much on bottom up development, or who waste too much time and money on top down approaches that don’t produce results, are often recommended to attempt an ‘agile’ (Erl) or ‘middle-out’ (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn’t the aim of the project. If a project is started with the simple aim of ‘Succeeding with SOA’ then the reasons for the project’s existence probably need to be questioned. There are a number of things we can be sure of:
    · An organisation will have a number of disparate IT systems
    · Some of these systems will have redundant data and functionality
    · Integration will give considerable ROI
    · Integration will already be under way
    · Services will already exist in the organisation
    · These services will be inconsistent in their implementation and in their governance
    So there are three goals here: 1. Alignment between the business and IT; 2. Integration of disparate systems; 3. Management of services. 2 and 3 are going to happen; in fact, they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured tangle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with the far-reaching consequences that this will have on all our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never – that's the point of integration); we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large. Accepting that an object can have more than one model over time, with perhaps more than one model being valid at any given time, will help us realise the limitations of the top down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals in a straight line, why not let them happen in parallel and manage the changes in each layer? If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B’s IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach? With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not-so-near future. To ‘Succeed with SOA’ Company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling programmers to start developing code based on a model they are unsure of, and telling business analysts to ignore the obvious and growing problems in their IT department and start drawing lines and boxes. Could the problem be that there are two different problem domains? And the whole concept of SOA as it is being described by clever salespeople today creates an example of the oft-dreaded ‘tight coupling’ between these two domains? Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists? Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren’t focusing on their actual goals. If Company A starts building services from incomplete models, without a game plan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies who, a short while ago, had one problem each and now have two problems each. This has happened because of a focus on ‘Succeeding with SOA’, rather than solving the problem at hand. This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box. Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution’s level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management, and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top-down or bottom-up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling. How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft’s Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst’s view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as ‘SOA services’, and the implementations as simple integration ‘adapter’ services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling, etc. This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can’t easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI. Of course there are disadvantages to this, the biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the service according to the model. However, if that approach worked we wouldn’t be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together so that they are forever bound to the model that only applies at the time of the modeling work will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made. 
The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations.

In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force on both something that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs and let an enterprise architect coordinate the processes?
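To make the split between the ‘SOA services’ and the ‘adapter’ services described above a little more concrete, here is a minimal, hypothetical sketch in C#. It assumes a WCF-style contract model purely for illustration; the interface, operation and type names are invented, and in a Managed Services Engine deployment the transformation and routing would normally be configured in the virtualization layer rather than hand-coded as shown here.

using System.ServiceModel;

// Business-facing 'SOA service' contract: shaped after the business analyst's
// model and kept stable for consumers, independent of any one implementation.
[ServiceContract]
public interface ICustomerOrders
{
    [OperationContract]
    OrderConfirmation PlaceOrder(PurchaseRequest request);
}

// Technical 'adapter' contract: mirrors whatever the underlying system
// (an ERP, a legacy application, a packaged product) actually exposes.
[ServiceContract]
public interface IErpOrderEntry
{
    [OperationContract]
    int CreateSalesOrder(string customerCode, string itemCode, int quantity);
}

// The virtual service maps the modeled message onto the technical call, so the
// business model and the implementation can evolve independently for as long
// as this mapping is maintained.
public class CustomerOrdersVirtualService : ICustomerOrders
{
    private readonly IErpOrderEntry _adapter;

    public CustomerOrdersVirtualService(IErpOrderEntry adapter)
    {
        _adapter = adapter;
    }

    public OrderConfirmation PlaceOrder(PurchaseRequest request)
    {
        // Transform business concepts into implementation-level values.
        int orderId = _adapter.CreateSalesOrder(
            request.CustomerNumber, request.ProductCode, request.Quantity);

        return new OrderConfirmation { OrderReference = orderId.ToString() };
    }
}

// Message types carrying the business concepts across the service contract.
public class PurchaseRequest
{
    public string CustomerNumber { get; set; }
    public string ProductCode { get; set; }
    public int Quantity { get; set; }
}

public class OrderConfirmation
{
    public string OrderReference { get; set; }
}

The point of the sketch is the direction of the dependencies: consumers bind to ICustomerOrders, which belongs to the business architecture, while IErpOrderEntry can be replaced or redeveloped without the modeled contract having to change.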

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that Governments have carried out large scale data-mining programmes aimed at terrorism detection and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification over this action, will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC “Big data” challenges to be faced and potential lessons to be learned from these high profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity, the early or retrospective, violation of regulations/laws/corporate policies and procedures, emerging risks and weakening controls etc. Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professional’s day to day challenges. So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data: Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data. Machine-Generated /Sensor Data – includes Call Detail Records (“CDR”), weblogs and trading systems data. Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook. The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester there are four key characteristics that define big data: Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. This includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more. Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed. Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. 
Without question, all GRC professionals work in a dynamic environment, and as new services, new products or new business lines are added, or new marketing campaigns are executed, new data types are needed to capture the resultant information. Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the number of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive actions: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures, and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value. Big Challenges - Big Opportunities As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data.  "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category. However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly allows the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with, and the need to quickly access, classify, protect and manage that information, can quickly become a key issue from a legal as well as a technical or operational standpoint.
However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected. The Right GRC & Big Data Partnership Becomes Key The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies that have a long history of delivering successful GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it. A Blueprint and Roadmap Service for Big Data Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward. Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:
- Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
- Conduct a big data readiness and Information Architecture maturity assessment
- Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
- Provide initial guidance on big data candidate selection for migrations or implementation
- Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
- Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
- Conduct an executive workshop with recommendations and next steps
There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.
Big data is here to stay and risk management certainly is not going anywhere, and ultimately financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects, are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make, utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term case by case project, but the overall trend is clear. The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: How to ensure that we're linking to the OS bits we're building instead of to those from the running system. 
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 ld option to warn when link accesses a library via default path PSARC/2011/068 ld -z assert-deflib option into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor: -z assert-deflib=[libname] Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library. ld ... -z assert-deflib=libc.so ... -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is the dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 Prevent OSnet from accidentally linking to build system which integrated into snv_162 (March 2011).
Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult such errors can be to prevent without an automatic system in place to catch them. Conclusions The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • OK - What now? How do we become a Social Business?

    - by Michael Snow
We hope that those of you that attended yesterday's Webcast with Brian Solis enjoyed Brian's discussion with Christian Finn for our last Webcast of the season for the Oracle Social Business Thought Leaders Series.  For those of you that may have missed the webcast or were stuck at a company holiday party - you'll be glad to hear that the webcast will be available On-Demand starting later today (12/14/12). And any of you who'd like to listen to a quick but informative podcast with Brian - can listen to that here. Some of you may still be left with questions about how to get from point A to point B and even more confused than when you started thinking about this new world of Digital Darwinism. The post below, grabbed from an abundance of great thought leadership prose on Brian's blog may help you frame the path you need to start walking sooner versus later to stay off of the endangered species list.  As you explore your path forward, please keep Oracle in mind - we do offer a wide range of solutions to help your organization optimize the engagement for your customers, employees and partners. The Path from a Social Brand to a Social Business Brian Solis Originally posted May 2, 2012 I’ve been a long-time supporter of MediaTemple’s (MT) Residence program along with Gary Vaynerchuk, Neil Patel, and many others whom I respect. I wanted to share my “7 questions to answer to become a social business” with you here. Social Media is pervasive and is becoming the new normal in corporate marketing. Brands who get this right are starting to build their own media networks rich with customer connections numbering in the millions. Right now, Coca-Cola has over 34 million fans on Facebook, but they’re hardly alone. Disney follows just behind with 29 million fans, Starbucks boasts 25 million, and Oreo, Red Bull, and Converse play host to over 20 million fans. If we were to look at other networks such as Twitter and Youtube, we would see a recurring theme. People are connecting en masse with the businesses they support and new media represents the ability to cultivate consumer relationships in ways not possible with traditional earned or paid media. Sounds great right? This might sound abrupt, but the truth is that we’re hardly realizing the potential of what lies before us. Everything begins with understanding not just how other brands are marketing themselves in social media, but also seeing what they’re not doing and envisioning what’s possible. We’re already approaching the first of many crossroads that new media will present. Do we take the path of a social brand or that of a social business? What’s the difference? A social brand is just that, a business that is remodeling or retrofitting its existing marketing practices to new media.
A social business is something altogether different, as it embraces introspection and extrospection to reevaluate internal and external processes, systems, and opportunities to transform into a living, breathing entity that adapts to market conditions and opportunities. It’s a tough decision to make right now, especially at a time when all we read about is how much success many businesses are finding without having to answer this very question. With all of the newfound success in social networks, the truth is that we’re only just beginning to learn what’s possible and that’s where you come in. When compared to the investment in time and resources across the board, social media represents only a small part of the mix. But with your help, that’s all about to change. The CMO Survey, an organization that disseminates the opinions of top marketers in order to predict the future of markets, recently published a report that gave credence to the fact that social media is taking off. One of the most profound takeaways from the report was this gem: “The “like button” [in Facebook] packs more customer-acquisition punch than other demand-generating activities.” With insights like this, it’s easy to see why the race to social is becoming heated. The report also highlighted exactly where social fits in the marketing mix today and as you can see, despite all of the hype, it’s not a dominant focus yet. As of August 2011, the percentage of overall marketing budgets dedicated to social media hovered at around 7%. However, in 2012 the investment in social media will climb to 10%. And, in five years, social media is expected to represent almost 18% of the total marketing budget. Think about that for a moment. In 2016, social media will only represent 18%? Cue the sound of a record scratching here. With businesses finding success in social networks, why are businesses failing to realize the true opportunity brought forth by the ability to listen to, connect with, and engage with customers? While there’s value in earning views, driving traffic, and building connections through the 3F’s (friends, fans and followers), success isn’t just defined simply by what really amounts to low-hanging fruit. The truth is that businesses cannot measure what it is they don’t know to value. As a result, innovation in new engagement initiatives is stifled because we’re applying dated or inflexible frameworks to new paradigms. Social media isn’t owned by marketing, but by the entire organization. This changes everything and makes your role so much more important. It’s up to you to learn how to think outside of the proverbial social media box to see what others don’t: the ability to improve customer experiences through the evolution of a social brand into a social business. Doing so will translate customer insights from what they do and don’t share in social networks into better products, services, and processes. See, customers want something more from their favorite businesses than creative campaigns, viral content, and everyday dialogue in social networks. Customers want to be heard and they want to know that you’re listening. How businesses use social media must remind them that they’re more than just an audience, consumer, or a conduit to “trigger” a desired social effect. Herein lies both the challenge and opportunity of social media. It’s bigger than marketing. It’s also bigger than customer service.
It’s about building relationships with customers that improve experiences and, more importantly, teaches businesses how to re-imagine products and internal processes to better adapt to potential crises and seize new opportunities. When it comes down to it, Twitter, Facebook, Youtube and Foursquare are all channels for listening, learning, and engaging. It’s what you do within each channel that builds a community around your brand. And, at the end of the day, the value of the community you build counts for everything. It’s important to understand that we cannot assume that these networks simply exist for people to line up for our marketing messages or promotional campaigns. Nor can we assume that they’re reeling in anticipation for simple dialogue. They want value. They want recognition. They want access to exclusive information and offers. They need direction, answers and resolution. What we’re talking about here is the multidimensional makeup of consumers and how a one-sided approach to social media forces the need for social media to expand beyond traditional marketing to socialize the various departments, lines of business, and functions to engage based on the nature of the situation or opportunity. In the same CMO study, it was revealed that marketers believe that social media has a long way to go toward integrating into the overall company strategy. On a scale of 1-7, with one being “not integrated at all” and seven being “very integrated,” 22% chose “one.” Critical functions such as service, HR, sales, R&D, product marketing and development, IR, CSR, etc. are either not engaged or are operating social media within a silo disconnected from other efforts or possibilities. The problem is that customers don’t view a company by silo; instead, they see one company, one brand, and their experience in social media forms an impression that eventually contributes to their view of your brand. The first step here is to understand business priorities and objectives to assess how social media can be additive in achieving these goals. Additionally, survey the landscape to determine other areas of interest as they specifically relate to your business:
- Are customers seeking help or direction?
- Who are your most valuable customers and what are they sharing?
- How can you use social media to acquire and retain customers?
- What ideas are circulating and how can you harness user-generated activity and content to innovate or adapt to better meet the needs of customers?
- How can you broaden a single customer view to recognize the varying needs of customers and how your organization can organize around each circumstance?
- What insights exist based on how consumers are interacting with one another? How can this intelligence inform marketing, service, products and other important business initiatives?
- How can your business extend its current efforts to deliver better customer experiences and in turn more effectively unite internal collaboration and communication?
Customer demands far exceed the capabilities of the marketing department. While creating a social brand is a necessary endeavor, building a social business is an investment in customer relevance now and over time. Beyond relevance, a social business fosters a culture of change that unites employees and customers and sets a foundation for meaningful and beneficial relationships. Innovation, communication, and creativity are the natural byproducts of engagement and transformation. As a social brand, we are competing for the moment.
As a social business, we are competing for the future in all that we do today.

    Read the article

  • Using SPServices &amp; jQuery to Find My Stuff from Multi-Select Person/Group Field

    - by Mark Rackley
    Okay… quick blog post for all you SPServices fans out there. I needed to quickly write a script that would return all the tasks currently assigned to me.  I also wanted it to return any task that was assigned to a group I belong to. This can actually be done with a CAML query, so no big deal, right?  The rub is that the “assigned to” field is a multi-select person or group field. As far as I know (and I actually know so little) you cannot just write a CAML query to return this information. If you can, please leave a comment below and disregard the rest of this blog post… So… what’s a hacker to do? As always, I break things down to their most simple components (I really love the KISS principle and would get it tattooed on my back if people wouldn’t think it meant “Knights In Satan’s Service”. You really gotta be an old far to get that reference).  Here’s what we’re going to do: Get currently logged in user’s name as it is stored in a person field Find all the SharePoint groups the current user belongs to Retrieve a set of assigned tasks from the task list and then find those that are assigned to current  user or group current user belongs to Nothing too hairy… So let’s get started Some Caveats before I continue There are some obvious performance implications with this solution as I make a total of four SPServices calls and there’s a lot of looping going on. Also, the CAML query in this blog has NOT been optimized. If you move forward with this code, tweak it so that it returns a further subset of data or you will see horrible performance if you have a few hundred entries in your task list. Add a date range to the CAML or something. Find some way to limit the results as much as possible. Lastly, if you DO have a better solution, I would like you to share. Iron sharpens iron and all…   Alright, let’s really get started. Get currently logged in user’s name as it is stored in a person field First thing we need to do is understand how a person group looks when you look at the XML returned from a SharePoint Web Service call. It turns out it’s stored like any other multi select item in SharePoint which is <id>;#<value> and when you assign a person to that field the <value> equals the person’s name “Mark Rackley” in my case. This is for Windows Authentication, I would expect this to be different in FBA, but I’m not using FBA. If you want to know what it looks like with FBA you can use the code in this blog and strategically place an alert to see the value.  Anyway… I need to find the name of the user who is currently logged in as it is stored in the person field. This turns out to be one SPServices call: var userName = $().SPServices.SPGetCurrentUser({                     fieldName: "Title",                     debug: false                     }); As you can see, the “Title” field has the information we need. I suspect (although again, I haven’t tried) that the Title field also contains the user’s name as we need it if I was using FBA. Okay… last thing we need to do is store our users name in an array for processing later: myGroups = new Array(); myGroups.push(userName); Find all the SharePoint groups the current user belongs to Now for the groups. How are groups returned in that XML stream?  Same as the person <ID>;#<Group Name>, and if it’s a mutli select it’s all returned in one big long string “<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>”.  So, how do we find all the groups the current user belongs to? 
This is also a simple SPServices call. Using the “GetGroupCollectionFromUser” operation we can find all the groups a user belongs to. So, let’s execute this method and store all our groups. $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });         }     }); So, all we did in the above code was execute the “GetGroupCollectionFromUser” operation and look for the each “Group” node (row) and store the name for each group in our array that we put the user’s name in previously (myGroups). Now we have an array that contains the current user’s name as it will appear in the person field XML and  all the groups the current user belongs to. The Rest Now comes the easy part for all of you familiar with SPServices. We are going to retrieve our tasks from the Task list using “GetListItems” and look at each entry to see if it belongs to this person. If it does belong to this person we are going to store it for later processing. That code looks something like this: // get list of assigned tasks that aren't closed... *modify the CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                        //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                             //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                              
   var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 }); Some notes on why I did certain things and additional caveats. You will notice in my code that I’m doing an AssignedToString.indexOf(GroupName) to see if the task belongs to the person. This could possibly return bad results if you have SharePoint Group names that are named in such a way that the “IndexOf” returns a false positive.  For example if you have a Group called “My Users” and a group called “My Users – SuperUsers” then if a user belonged to “My Users” it would return a false positive on executing “My Users – SuperUsers”.IndexOf(“My Users”). Make sense? Just be aware of this when naming groups, we don’t have this problem. This is where also some fine-tuning can probably be done by those smarter than me. This is a pretty inefficient method to determine if a task belongs to a user, I mean what if a user belongs to 20 groups? That’s a LOT of looping.  See all the opportunities I give you guys to do something fun?? Also, why am I storing my values in an array instead of just writing them out to a Div? Well.. I want to pass my data to a jQuery library to format it all nice and pretty and an Array is a great way to do that. When all is said and done and we put all the code together it looks like:   $(document).ready(function() {         var userName = $().SPServices.SPGetCurrentUser({                     fieldName: "Title",                     debug: false                     });         myGroups = new Array();     myGroups.push(userName );       $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });                      // get list of assigned tasks that aren't closed... 
*modify this CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                         //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                            //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                                 var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 });       }    });  }); Final Thoughts So, there you have it. Take it and run with it. Make it something cool (and tell me how you did it). Another possible way to improve performance in this scenario is to use a DVWP to display the tasks and use jQuery and the “myGroups” array from this blog post to hide all those rows that don’t belong to the current user. I haven’t tried it, but it does move some of the processing off to the server (generating the view) so it may perform better.  As always, thanks for stopping by… hope you have a Merry Christmas…

    Read the article

  • Xml Serialization and the [Obsolete] Attribute

    - by PSteele
    I learned something new today: Starting with .NET 3.5, the XmlSerializer no longer serializes properties that are marked with the Obsolete attribute.  I can’t say that I really agree with this.  Marking something Obsolete is supposed to be something for a developer to deal with in source code.  Once an object is serialized to XML, it becomes data.  I think using the Obsolete attribute as both a compiler flag as well as controlling XML serialization is a bad idea. In this post, I’ll show you how I ran into this and how I got around it. The Setup Let’s start with some make-believe code to demonstrate the issue.  We have a simple data class for storing some information.  We use XML serialization to read and write the data: public class MyData { public int Age { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public List<String> Hobbies { get; set; }   public MyData() { this.Hobbies = new List<string>(); } } Now a few simple lines of code to serialize it to XML: static void Main(string[] args) { var data = new MyData {    FirstName = "Zachary", LastName = "Smith", Age = 50, Hobbies = {"Mischief", "Sabotage"}, }; var serializer = new XmlSerializer(typeof (MyData)); serializer.Serialize(Console.Out, data); Console.ReadKey(); } And this is what we see on the console: <?xml version="1.0" encoding="IBM437"?> <MyData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Age>50</Age> <FirstName>Zachary</FirstName> <LastName>Smith</LastName> <Hobbies> <string>Mischief</string> <string>Sabotage</string> </Hobbies> </MyData>   The Change So we decided to track the hobbies as a list of strings.  As always, things change and we have more information we need to store per-hobby.  We create a custom “Hobby” object, add a List<Hobby> to our MyData class and we obsolete the old “Hobbies” list to let developers know they shouldn’t use it going forward: public class Hobby { public string Name { get; set; } public int Frequency { get; set; } public int TimesCaught { get; set; }   public override string ToString() { return this.Name; } } public class MyData { public int Age { get; set; } public string FirstName { get; set; } public string LastName { get; set; } [Obsolete("Use HobbyData collection instead.")] public List<String> Hobbies { get; set; } public List<Hobby> HobbyData { get; set; }   public MyData() { this.Hobbies = new List<string>(); this.HobbyData = new List<Hobby>(); } } Here’s the kicker: This serialization is done in another application.  The consumers of the XML will be older clients (clients that expect only a “Hobbies” collection) as well as newer clients (that support the new “HobbyData” collection).  This really shouldn’t be a problem – the obsolete attribute is metadata for .NET compilers.  Unfortunately, the XmlSerializer also looks at the compiler attribute to determine what items to serialize/deserialize.  
Here’s an example of our problem: static void Main(string[] args) { var xml = @"<?xml version=""1.0"" encoding=""IBM437""?> <MyData xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema""> <Age>50</Age> <FirstName>Zachary</FirstName> <LastName>Smith</LastName> <Hobbies> <string>Mischief</string> <string>Sabotage</string> </Hobbies> </MyData>"; var serializer = new XmlSerializer(typeof(MyData)); var stream = new StringReader(xml); var data = (MyData) serializer.Deserialize(stream);   if( data.Hobbies.Count != 2) { throw new ApplicationException("Hobbies did not deserialize properly"); } } If you run the code above, you’ll hit the exception.  Even though the XML contains a “<Hobbies>” node, the obsolete attribute prevents the node from being processed.  This will break old clients that use the new library, but don’t yet access the HobbyData collection. The Fix This fix (in this case), isn’t too painful.  The XmlSerializer exposes events for times when it runs into items (Elements, Attributes, Nodes, etc…) it doesn’t know what to do with.  We can hook in to those events and check and see if we’re getting something that we want to support (like our “Hobbies” node). Here’s a way to read in the old XML data with full support of the new data structure (and keeping the Hobbies collection marked as obsolete): static void Main(string[] args) { var xml = @"<?xml version=""1.0"" encoding=""IBM437""?> <MyData xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema""> <Age>50</Age> <FirstName>Zachary</FirstName> <LastName>Smith</LastName> <Hobbies> <string>Mischief</string> <string>Sabotage</string> </Hobbies> </MyData>"; var serializer = new XmlSerializer(typeof(MyData)); serializer.UnknownElement += serializer_UnknownElement; var stream = new StringReader(xml); var data = (MyData)serializer.Deserialize(stream);   if (data.Hobbies.Count != 2) { throw new ApplicationException("Hobbies did not deserialize properly"); } }   static void serializer_UnknownElement(object sender, XmlElementEventArgs e) { if( e.Element.Name != "Hobbies") { return; }   var target = (MyData) e.ObjectBeingDeserialized; foreach(XmlElement hobby in e.Element.ChildNodes) { target.Hobbies.Add(hobby.InnerText); target.HobbyData.Add(new Hobby{Name = hobby.InnerText}); } } As you can see, we hook in to the “UnknownElement” event.  Once we determine it’s our “Hobbies” node, we deserialize it ourselves – as well as populating the new HobbyData collection.  In this case, we have a fairly simple solution to a small change in XML layout.  If you make more extensive changes, it would probably be easier to do some custom serialization to support older data. A sample project with all of this code is available from my repository on bitbucket. Technorati Tags: XmlSerializer,Obsolete,.NET
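As a footnote to the approach above: another way to keep older clients working, not covered in the original post, is to re-expose the obsolete data under its old element name through a non-obsolete proxy property. The sketch below is an assumption-laden illustration (the LegacyHobbies property, the XmlIgnore usage and the pragma are additions for this example, not part of the sample project), and because of the explicit XmlIgnore it does not depend on which framework versions skip [Obsolete] members.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Xml.Serialization;

public class Hobby
{
    public string Name { get; set; }
    public int Frequency { get; set; }
    public int TimesCaught { get; set; }
}

#pragma warning disable 618 // we deliberately touch the obsolete member inside this class
public class MyData
{
    public int Age { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // Still flagged for developers; XmlIgnore makes the exclusion from
    // serialization explicit instead of relying on the Obsolete behavior.
    [Obsolete("Use HobbyData collection instead.")]
    [XmlIgnore]
    public List<string> Hobbies { get; set; }

    public List<Hobby> HobbyData { get; set; }

    // Hypothetical proxy property: serialized as <Hobbies><string>...</string></Hobbies>,
    // the exact shape older clients expect. Hidden from IntelliSense so that new
    // code keeps using HobbyData.
    [XmlArray("Hobbies")]
    [XmlArrayItem("string")]
    [EditorBrowsable(EditorBrowsableState.Never)]
    public List<string> LegacyHobbies
    {
        get { return this.Hobbies; }
        set { this.Hobbies = value; }
    }

    public MyData()
    {
        this.Hobbies = new List<string>();
        this.HobbyData = new List<Hobby>();
    }
}
#pragma warning restore 618

Compared with handling UnknownElement, this keeps the legacy Hobbies element on both the serialization and deserialization sides, at the cost of an extra property on the type; which trade-off is preferable depends on how long the older clients need to be supported.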

    Read the article

  • CodePlex Daily Summary for Thursday, October 25, 2012

    CodePlex Daily Summary for Thursday, October 25, 2012Popular ReleasesTEncoder: 2.9: -2.9 -Added: Support for audio codec Opus -Added: Support for .opus files -Added: Decreased main window dimensions (will reset position info) -Fixed: Custom arguments were not passed if codec is "Copy" -Fixed: MEncoder "Direct Stream Copy" container problem -Fixed: Minor UI problems -Updated: MPlayer and MEncoder to SB41 -Updated: MediaInfo to 0.7.61 -Updated: FFmpeg to latest from ffmpeg.zeranoe.comdcview: DCView 1.3.9: Clien ?? ?????? ??, ???, ??? ??? ??CRM 2011 Web Resource Linker/Publisher: WebResourceLinker: Initial releaseMobilviWP7: MobilviWP7 1.2: Stabilna i przetestowana wersja.Posh for Jammer: jammer_v0.1_beta: First release of a number of PowerShell Functions calling the Yammer API.Style MVVM: 2.0.3: This is both a feature release and a Bug Fix release Features The Main new Feature is the ability to add event handlers from XAML using this new syntax View:EventHandler.Attach="EventName => ViewModelMethod($eventArgs)" New Example app that shows one or two features of the framework on each page, allowing for more straight forward examples. Bug Fixes IUIVisualizationService is now exported correctly and you can import it into your ViewModels DelegateCommand had a bug where it wasn't r...MCEBuddy 2.x: MCEBuddy 2.3.5: Changelog for 2.3.5 (32bit and 64bit) 1. Fixed a bug causing MCEBuddy to crash during or after installation on Windows XP 2. Bugfix for resource leak with UPnP which would lead to a failure after many days 3. Increased the UPnP discovery re-scan interval from 10 minutes to 30 minutes 4. Added support for specifying TVDB and IMDB id’s in the conversion task page (forcing the internet lookup for metadata)WPF About Box: WPF About Box 1.1.1.1: First Stable ReleaseEdi: First Alpha Version: The initial release does a lot of things out of the box (theming, editing with highlighting, Find/Replace, MRU List). Read the Readme.txt file in the Edi Sub-project to get full details.CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1025.5): [NEW] Support for connecting to CRM Online via Office 365 (OSDP) [NEW] Current connection information and loaded ribbon name are displayed in the status bar [IMPROVED] Connect dialog minor improvements and error message descriptions [IMPROVED] Connecting to a CRM server will close currently loaded ribbon upon confirmation (if another ribbon was loaded previously) [FIX] Fixed bug in Open Ribbon dialog which would not allow to refresh entity list more than oncejob board light version: version 1.0: recuiter section jobseeker section spam flagging advanced search filtering customizable templatesReadable Passphrase Generator: KeePass Plugin 0.8.0: Changes: Interrogative phrases (questions) like why did the statesman burgle amidst lucid sunlamps Support transitive / intransitive verbs (whether a verb needs a subject or not). Change adverbs to be either before or after the verb, at random. Add an "equal" version of each strength, where each possibility is equally likely (for password purists). 3401 words in the default dictionary (~400 more than previous release) Fixed bugs when choosing verb tensesfastJSON: v2.0.9: - added support for root level DataSet and DataTable deserialize (you have to do ToObject<DataSet>(...) ) - added dataset testsMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.72: Fix for Issue #18819 - bad optimization of return/assign operator.DNN Module Creator: 01.01.00: Updated templates for DNN7 ( ie. DAL2, Web Service API ). 
Numerous bug fixes and enhancements.WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.390: Version 2.5.0.390 (Release Candidate): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete WAF: Fix recent file list remove issue. WAF: Minor code improvements. BookLibrary: Fix Blend design time support o...Fiskalizacija za developere: FiskalizacijaDev 1.1: Ovo je prva nadogradnja ovog projekta nakon inicijalnog predstavljanja - dodali smo nekoliko feature-a, bilo zato što smo sami primijetili da bi ih bilo dobro dodati, bilo na osnovu vaših sugestija - hvala svima koji su se ukljucili :) Ovo su stvari riješene u v1.1.: 1. Bilo bi dobro da se XML dokument koji se šalje u CIS može snimiti u datoteku (http://fiskalizacija.codeplex.com/workitem/612) 2. Podrška za COM DLL (VB6) (http://fiskalizacija.codeplex.com/workitem/613) 3. Podrška za DOS (unu...Liberty: v3.4.0.0 Release 20th October 2012: Change Log -Added -Halo 4 support (invincibility, ammo editing) -Reach A warning dialog now shows up when you first attempt to swap a weapon -Fixed -A few minor bugsClosedXML - The easy way to OpenXML: ClosedXML 0.68.1: ClosedXML now resolves formulas! Yes it finally happened. If you call cell.Value and it has a formula the library will try to evaluate the formula and give you the result. For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...Orchard Project: Orchard 1.6 RC: RELEASE NOTES This is the Release Candidate version of Orchard 1.6. You should use this version to prepare your current developments to the upcoming final release, and report problems. Please read our release notes for Orchard 1.6 RC: http://docs.orchardproject.net/Documentation/Orchard-1-6-Release-Notes Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you wil...New Projects6_6_6_w_m_s_open: jwervxsdf7COM0207 Web Scripting and Content Creation Exercises: Loretta Rose Web Scripting and Content Creation ExercisesAcademyPVT: The project is dedicated to my studies at the Academy Pvt. It will be posted various learning tasks, and my research on the ASP.NET.Arduino Solar Meter: An Arduino is used to monitor several item in the meter cabinet. The standard version can log 3 S0 kWh meters and upload results to PVoutput and local SD card.BookCaching: BookCaching - GeoCaching with books. A funny new way to discover books around you!BruceCommonCMS: This is a CMS system developed by ASP.net(C#) with Visual Studio 2010/2012. My aim is to build a dynamic & flexible CMS system.Bullet On Rails for MS MVC: Bullet On Rails provides out-of-box scaffold controllers ala Ruby on Rails for MS MVC. 
It also provides automatic validations for information found in the LINQ to SQL generated file such as: - Type of field - Date format - Length of field - Required fieldCalculator++: Calculator++ will be the best application in parsing and calculating equations results.Collections: Collection Management for TV Series and Anime.Common Key Password Generator: This utility constructs a complex password from a GUID. The number of possible combinations generated approaches 16 to the power of 32.Cosmos HiDefOs: HiDef Os is a research project, to make a HD driver that runs smoothly. This is a subproject of QuicksilverOS and is fully open-source.EmpireWork Professional Social Network: EmpireWork Professional Social Network Locations: http://empirework.com http://empirework.ru http://empirework.com.ua http://empirework.byeWay payment gateway provider for NB_Store: eWay payment gateway provider for NB_StoreHDK - WinRT MVVM and MEF Friendly Prism Framework port: HDK - is a "spare-time" framework for WinRT development. Solution consists of: - MVVM - Event Aggregator - PRISM - other useful extensions HowToSetUpACodeplexProject: Er is nog niets gebeurdiBackup Explorer: The goal of this application is to provide a quick way to explore any iOS backup (IPhone, Ipad..)istomato: istomatojob board light version: job board light versionMaternity System: It's a project in development stage objectived to help maternity hospitals to make a better treatment for him usersmeu-projeto: sdalçfj lasdjlfjdsalfj lsdafldsajl fjsdalf jlsdaj flsadjl fjdsal fjdsal jflsdajf lksdajfksadMicrosoft Casablanca Samples and Tutorials: The code repository associated with http://cloudysea.wordpress.com, which discusses the Casablanca Project (http://msdn.microsoft.com/en-us/devlabs/casablanca)myvideo: myvideoProject13251024: papaProjekt PHP: Mitt projekt i kursen PHPSharePoint Version History Extractor: Windows app used to migrate/extract version history from a versioned document library into separate files with historical metadata.Site Backup Repackager: Use the Site Backup Repackager to reformat a SP2007 site backup (.cmp) for SP2010. SpiritMVVM: SpiritMVVM is a Model-View-ViewModel library, focused on richness of feature-set and cross-platform compatibility, using the Portable Class Library.SQL Data transfer wizard: It replaces export data in SSMS by offering dependency aware order of table transfer. studyproject: PHP, HTML, ZFTAPI Connector for Dynamics CRM 2011: Open source Caller ID screenpop/CTI for TAPI enabled phone systems. Simply install the TAPI software on your computer then configure this to connect to DynamicsTFS2012: This is project that used for coding VP8.NET: VP8.NET is a dual-licensed commercial/GPL C++ CLR wrapper which allows .NET applications to easily use the VP8 video codec.X0s0m0a0r0t0W1M1S1: 32432432432432

    Read the article

  • Masters vs. PhD - long [closed]

    - by Sterling
    I'm 21 years old and a first year master's computer science student. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs phd articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping that I could post my ideas about the issue on here in hopes to 1) get some extra insight on the issue and 2) make sure that I am correct in my assumptions. Hopefully having people who have experience in the respective fields can tell me if I am wrong so I don't make my decision based on false ideas. Okay, to get this topic out of the way - money. Money isn't the most important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life so 300k is something I can't even really accurately imagine. I know that I wouldn't have at once obviously, but just to know I would be earning that is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable so it would be so nice to just spend some money…get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (pretty straight-forward - not much to elaborate on, but this is a big deal) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, trying to get grant money, etc than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages was littered with equations and formulas. Then the next page or so was followed by more equations and formulas that he derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long…not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me. 
A couple cons on each side - Phd - I don't really enjoy writing or feel like I'm that great at technical writing. Whenever I'm in groups to make something, I'm always the one who does the large majority of the work and then give it to my team members to write up a report. Presenting is different though - I don't mind presenting at all as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me. And because of this, the "publish or perish" phrase really turns me off from research. Another bad thing - I feel like if I am doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being a part of some small elite group to build things sounds ideal to me. So being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research. Masters - I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Lets say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction to say "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I always have been that way - was a great pitcher during my baseball years, but not so good at everything else, great at certain classes in school, but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be some guy who is someone that people look towards and come to ask for help because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, being specialized can be a very bad thing because of the speed of the new technology. I When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years if the working conditions were acceptable. I feel like that is more possible as a PhD though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at rapid paces. Some even work like a hired gun from project to project which is NOT what I want AT ALL. But finding a place to make great and important software would be great if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you and its whoever can finish and publish first (but everyone seems to careful to check that circumstance). 
The only noticeable competition to me is just with yourself and your own discipline. I like the idea that in the industry, there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly be pushing me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make. I feel like if you make something truly innovative in the industry…just some really great new application or system…there is a shelf-life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that last decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, its to create something that will." Over anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years…I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work and even less understand its importance. So if anything I have said is false then please inform me. If you think I come off as a masters or PhD, inform me. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help.

    Read the article

  • Reference Data Management

    - by rahulkamath
    Reference Data Management

Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering chart of accounts to
enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

What is reference data?

Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and give them contextual value. The following table contains an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.

    Types & Codes | Taxonomies | Relationships / Mappings | Standards
    Transaction Codes | Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS) | Product / Segment; Product / Geo | Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601)
    Lookup Tables (e.g., Gender, Marital Status, etc.) | Product Categories | City → State → Postal Codes | Currency Codes (e.g., ISO)
    Status Codes | Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense) | Customer / Market Segment; Business Unit / Channel | Country Codes (e.g., ISO 3166, UN)
    Role Codes | Market Segments | Country Codes / Currency Codes / Financial Accounts | Date/Time, Time Zones (e.g., ISO 8601)
    Domain Values | Universal Standard Products and Services Classification (UNSPSC), eCl@ss | International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings | Tax Rates

Why manage reference data?

Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

Sample Use Cases of Reference Data Management

Healthcare: Diagnostic Codes

The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes at very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports that show the utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings -- either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.
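To make the cross-walk idea concrete, here is a small, purely illustrative sketch of how such a mapping might be represented in code: ICD-9 to ICD-10 relationships are often one-to-many, so a lookup has to carry several candidate codes and flag the entries that still need a data steward's judgment. The codes and type names below are examples, not the published General Equivalence Mappings.

    using System;
    using System.Collections.Generic;

    class CrossWalkSketch
    {
        static void Main()
        {
            // Illustrative ICD-9 -> ICD-10 cross-walk; real mappings are far larger
            // and are maintained with human review, not hard-coded like this.
            var crossWalk = new Dictionary<string, string[]>
            {
                { "250.00", new[] { "E11.9" } },              // one-to-one mapping
                { "493.90", new[] { "J45.909", "J45.901" } }  // one-to-many: needs review
            };

            string[] candidates;
            if (crossWalk.TryGetValue("493.90", out candidates) && candidates.Length > 1)
            {
                // More than one candidate code: route this entry to a data steward.
                Console.WriteLine("493.90 needs human judgment: " + string.Join(", ", candidates));
            }
        }
    }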
Spend Management: Product, Service & Supplier Codes

Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

Specialty Finance: Point of Sale Transaction Codes and Product Codes

In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations.

Multi-National Companies: Industry Classification Schemes

As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assess which applications were impacted by a given change.

    Read the article

  • How do I implement SkyBox in xna 4.0 Reach Profile (for Windows Phone 7)?

    - by Biny
    I'm trying to Implement SkyBox in my phone game. Most of the samples in the web are for HiDef profile, and they are using custom effects (that not supported on Windows Phone). I've tried to follow this guide. But for some reason my SkyBox is not rendered. This is my SkyBox class: using System; using System.Collections.Generic; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Rocuna.Core; using Rocuna.GameEngine.Graphics; using Rocuna.GameEngine.Graphics.Components; namespace Rocuna.GameEngine.Extension.WP7.Graphics { /// <summary> /// Sky box element for phone games. /// </summary> public class SkyBox : SkyBoxBase { /// <summary> /// Initializes a new instance of the <see cref="SkyBoxBase"/> class. /// </summary> /// <param name="game">The Game that the game component should be attached to.</param> public SkyBox(TextureCube cube, Game game) : base(game) { Cube = cube; CubeFaces = new Texture2D[6]; PositionOffset = new Vector3(20, 20, 20); CreateGraphic(512); StripTexturesFromCube(); InitializeData(Game.GraphicsDevice); } #region Properties /// <summary> /// Gets or sets the position offset. /// </summary> /// <value> /// The position offset. /// </value> public Vector3 PositionOffset { get; set; } /// <summary> /// Gets or sets the position. /// </summary> /// <value> /// The position. /// </value> public Vector3 Position { get; set; } /// <summary> /// Gets or sets the cube. /// </summary> /// <value> /// The cube. /// </value> public TextureCube Cube { get; set; } /// <summary> /// Gets or sets the pixel array. /// </summary> /// <value> /// The pixel array. /// </value> public Color[] PixelArray { get; set; } /// <summary> /// Gets or sets the cube faces. /// </summary> /// <value> /// The cube faces. /// </value> public Texture2D[] CubeFaces { get; set; } /// <summary> /// Gets or sets the vertex buffer. /// </summary> /// <value> /// The vertex buffer. /// </value> public VertexBuffer VertexBuffer { get; set; } /// <summary> /// Gets or sets the index buffer. /// </summary> /// <value> /// The index buffer. /// </value> public IndexBuffer IndexBuffer { get; set; } /// <summary> /// Gets or sets the effect. /// </summary> /// <value> /// The effect. 
/// </value> public BasicEffect Effect { get; set; } #endregion protected override void LoadContent() { } public override void Update(GameTime gameTime) { var camera = Game.GetService<GraphicManager>().CurrentCamera; this.Position = camera.Position + PositionOffset; base.Update(gameTime); } public override void Draw(GameTime gameTime) { DrawOrder = int.MaxValue; var graphics = Effect.GraphicsDevice; graphics.DepthStencilState = new DepthStencilState() { DepthBufferEnable = false }; graphics.RasterizerState = new RasterizerState() { CullMode = CullMode.None }; graphics.BlendState = new BlendState(); graphics.SamplerStates[0] = SamplerState.AnisotropicClamp; graphics.SetVertexBuffer(VertexBuffer); graphics.Indices = IndexBuffer; Effect.Texture = CubeFaces[0]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 0, 2); Effect.Texture = CubeFaces[1]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 6, 2); Effect.Texture = CubeFaces[2]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 12, 2); Effect.Texture = CubeFaces[3]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 18, 2); Effect.Texture = CubeFaces[4]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 24, 2); Effect.Texture = CubeFaces[5]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 30, 2); base.Draw(gameTime); } #region Fields private List<VertexPositionNormalTexture> _vertices = new List<VertexPositionNormalTexture>(); private List<ushort> _indices = new List<ushort>(); #endregion #region Private methods private void InitializeData(GraphicsDevice graphicsDevice) { VertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormalTexture), _vertices.Count, BufferUsage.None); VertexBuffer.SetData<VertexPositionNormalTexture>(_vertices.ToArray()); // Create an index buffer, and copy our index data into it. IndexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort), _indices.Count, BufferUsage.None); IndexBuffer.SetData<ushort>(_indices.ToArray()); // Create a BasicEffect, which will be used to render the primitive. 
Effect = new BasicEffect(graphicsDevice); Effect.TextureEnabled = true; Effect.EnableDefaultLighting(); } private void CreateGraphic(float size) { Vector3[] normals = { Vector3.Right, Vector3.Left, Vector3.Up, Vector3.Down, Vector3.Backward, Vector3.Forward, }; Vector2[] textureCoordinates = { Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, }; var index = 0; foreach (var normal in normals) { var side1 = new Vector3(normal.Z, normal.X, normal.Y); var side2 = Vector3.Cross(normal, side1); AddIndex(CurrentVertex + 0); AddIndex(CurrentVertex + 1); AddIndex(CurrentVertex + 2); AddIndex(CurrentVertex + 0); AddIndex(CurrentVertex + 2); AddIndex(CurrentVertex + 3); AddVertex((normal - side1 - side2) * size / 2, normal, textureCoordinates[index++]); AddVertex((normal - side1 + side2) * size / 2, normal, textureCoordinates[index++]); AddVertex((normal + side1 + side2) * size / 2, normal, textureCoordinates[index++]); AddVertex((normal + side1 - side2) * size / 2, normal, textureCoordinates[index++]); } } protected void StripTexturesFromCube() { PixelArray = new Color[Cube.Size * Cube.Size]; for (int s = 0; s < CubeFaces.Length; s++) { CubeFaces[s] = new Texture2D(Game.GraphicsDevice, Cube.Size, Cube.Size, false, SurfaceFormat.Color); switch (s) { case 0: Cube.GetData<Color>(CubeMapFace.PositiveX, PixelArray); CubeFaces[s].SetData<Color>(PixelArray); break; case 1: Cube.GetData(CubeMapFace.NegativeX, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 2: Cube.GetData(CubeMapFace.PositiveY, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 3: Cube.GetData(CubeMapFace.NegativeY, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 4: Cube.GetData(CubeMapFace.PositiveZ, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 5: Cube.GetData(CubeMapFace.NegativeZ, PixelArray); CubeFaces[s].SetData(PixelArray); break; } } } protected void AddVertex(Vector3 position, Vector3 normal, Vector2 textureCoordinates) { _vertices.Add(new VertexPositionNormalTexture(position, normal, textureCoordinates)); } protected void AddIndex(int index) { if (index > ushort.MaxValue) throw new ArgumentOutOfRangeException("index"); _indices.Add((ushort)index); } protected int CurrentVertex { get { return _vertices.Count; } } #endregion } }
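One detail worth checking in the Draw() method above: the BasicEffect's World, View and Projection matrices are never assigned, so the geometry is rendered with identity transforms and is unlikely to end up in front of the camera. Below is a minimal sketch of the state and matrix setup that could run at the top of Draw(); the camera's View and Projection properties are assumptions here, so adapt them to whatever the GraphicManager actually exposes.

    // Sketch only: state and matrix setup before the per-face draws in Draw().
    var camera = Game.GetService<GraphicManager>().CurrentCamera;

    graphics.BlendState = BlendState.Opaque;              // reuse the built-in state instead of allocating one per frame
    graphics.DepthStencilState = DepthStencilState.None;  // the skybox should not write to the depth buffer
    graphics.RasterizerState = RasterizerState.CullNone;  // faces viewed from inside must not be culled

    Effect.LightingEnabled = false;                       // default lighting tends to darken faces seen from inside the box
    Effect.World = Matrix.CreateTranslation(Position);    // the box is already built around the origin at size 512
    Effect.View = camera.View;                            // assumption: the camera exposes a view matrix
    Effect.Projection = camera.Projection;                // assumption: the camera exposes a projection matrix

If the faces then show up but appear mirrored or in the wrong places, the winding order and the texture coordinates in CreateGraphic are the next things to check.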

    Read the article

  • Windows Phone 7 : Dragging and flicking UI controls

    - by TechTwaddle
    Who would want to flick and drag UI controls!? There might not be many use cases but I think some concepts here are worthy of a post. So we will create a simple silverlight application for windows phone 7, containing a canvas element on which we’ll place a button control and an image and then, as the title says, drag and flick the controls. Here’s Mainpage.xaml, <Grid x:Name="LayoutRoot" Background="Transparent">   <Grid.RowDefinitions>     <RowDefinition Height="Auto"/>     <RowDefinition Height="*"/>   </Grid.RowDefinitions>     <!--TitlePanel contains the name of the application and page title-->   <StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,17,0,28">     <TextBlock x:Name="ApplicationTitle" Text="KINETICS" Style="{StaticResource PhoneTextNormalStyle}"/>     <TextBlock x:Name="PageTitle" Text="drag and flick" Margin="9,-7,0,0" Style="{StaticResource PhoneTextTitle1Style}"/>   </StackPanel>     <!--ContentPanel - place additional content here-->   <Grid x:Name="ContentPanel" Grid.Row="1" >     <Canvas x:Name="MainCanvas" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">       <Canvas.Background>         <LinearGradientBrush StartPoint="0 0" EndPoint="0 1">           <GradientStop Offset="0" Color="Black"/>           <GradientStop Offset="1.5" Color="BlanchedAlmond"/>         </LinearGradientBrush>       </Canvas.Background>     </Canvas>   </Grid> </Grid> the second row in the main grid contains a canvas element, MainCanvas, with its horizontal and vertical alignment set to stretch so that it occupies the entire grid. The canvas background is a linear gradient brush starting with Black and ending with BlanchedAlmond. We’ll add the button and image control to this canvas at run time. Moving to Mainpage.xaml.cs the Mainpage class contains the following members, public partial class MainPage : PhoneApplicationPage {     Button FlickButton;     Image FlickImage;       FrameworkElement ElemToMove = null;     double ElemVelX, ElemVelY;       const double SPEED_FACTOR = 60;       DispatcherTimer timer; FlickButton and FlickImage are the controls that we’ll add to the canvas. ElemToMove, ElemVelX and ElemVelY will be used by the timer callback to move the ui control. SPEED_FACTOR is used to scale the velocities of ui controls. Here’s the Mainpage constructor, // Constructor public MainPage() {     InitializeComponent();       AddButtonToCanvas();       AddImageToCanvas();       timer = new DispatcherTimer();     timer.Interval = TimeSpan.FromMilliseconds(35);     timer.Tick += new EventHandler(OnTimerTick); } We’ll look at those AddButton and AddImage functions in a moment. The constructor initializes a timer which fires every 35 milliseconds, this timer will be started after the flick gesture completes with some inertia. 
Back to AddButton and AddImage functions, void AddButtonToCanvas() {     LinearGradientBrush brush;     GradientStop stop1, stop2;       Random rand = new Random(DateTime.Now.Millisecond);       FlickButton = new Button();     FlickButton.Content = "";     FlickButton.Width = 100;     FlickButton.Height = 100;       brush = new LinearGradientBrush();     brush.StartPoint = new Point(0, 0);     brush.EndPoint = new Point(0, 1);       stop1 = new GradientStop();     stop1.Offset = 0;     stop1.Color = Colors.White;       stop2 = new GradientStop();     stop2.Offset = 1;     stop2.Color = (Application.Current.Resources["PhoneAccentBrush"] as SolidColorBrush).Color;       brush.GradientStops.Add(stop1);     brush.GradientStops.Add(stop2);       FlickButton.Background = brush;       Canvas.SetTop(FlickButton, rand.Next(0, 400));     Canvas.SetLeft(FlickButton, rand.Next(0, 200));       MainCanvas.Children.Add(FlickButton);       //subscribe to events     FlickButton.ManipulationDelta += new EventHandler<ManipulationDeltaEventArgs>(OnManipulationDelta);     FlickButton.ManipulationCompleted += new EventHandler<ManipulationCompletedEventArgs>(OnManipulationCompleted); } this function is basically glorifying a simple task. After creating the button and setting its height and width, its background is set to a linear gradient brush. The direction of the gradient is from top towards bottom and notice that the second stop color is the PhoneAccentColor, which changes along with the theme of the device. The line,     stop2.Color = (Application.Current.Resources["PhoneAccentBrush"] as SolidColorBrush).Color; does the magic of extracting the PhoneAccentBrush from application’s resources, getting its color and assigning it to the gradient stop. AddImage function is straight forward in comparison, void AddImageToCanvas() {     Random rand = new Random(DateTime.Now.Millisecond);       FlickImage = new Image();     FlickImage.Source = new BitmapImage(new Uri("/images/Marble.png", UriKind.Relative));       Canvas.SetTop(FlickImage, rand.Next(0, 400));     Canvas.SetLeft(FlickImage, rand.Next(0, 200));       MainCanvas.Children.Add(FlickImage);       //subscribe to events     FlickImage.ManipulationDelta += new EventHandler<ManipulationDeltaEventArgs>(OnManipulationDelta);     FlickImage.ManipulationCompleted += new EventHandler<ManipulationCompletedEventArgs>(OnManipulationCompleted); } The ManipulationDelta and ManipulationCompleted handlers are same for both the button and the image. OnManipulationDelta() should look familiar, a similar implementation was used in the previous post, void OnManipulationDelta(object sender, ManipulationDeltaEventArgs args) {     FrameworkElement Elem = sender as FrameworkElement;       double Left = Canvas.GetLeft(Elem);     double Top = Canvas.GetTop(Elem);       Left += args.DeltaManipulation.Translation.X;     Top += args.DeltaManipulation.Translation.Y;       //check for bounds     if (Left < 0)     {         Left = 0;     }     else if (Left > (MainCanvas.ActualWidth - Elem.ActualWidth))     {         Left = MainCanvas.ActualWidth - Elem.ActualWidth;     }       if (Top < 0)     {         Top = 0;     }     else if (Top > (MainCanvas.ActualHeight - Elem.ActualHeight))     {         Top = MainCanvas.ActualHeight - Elem.ActualHeight;     }       Canvas.SetLeft(Elem, Left);     Canvas.SetTop(Elem, Top); } all it does is calculate the control’s position, check for bounds and then set the top and left of the control. 
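The same bounds check can be written more compactly with System.Math; a sketch of an equivalent clamp, matching the behavior of the if/else chain above:

    // Clamp the dragged element to the canvas (same effect as the if/else checks).
    Left = Math.Max(0, Math.Min(Left, MainCanvas.ActualWidth - Elem.ActualWidth));
    Top = Math.Max(0, Math.Min(Top, MainCanvas.ActualHeight - Elem.ActualHeight));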
OnManipulationCompleted() is more interesting because here we need to check if the gesture completed with any inertia and if it did, start the timer and continue to move the ui control until it comes to a halt slowly, void OnManipulationCompleted(object sender, ManipulationCompletedEventArgs args) {     FrameworkElement Elem = sender as FrameworkElement;       if (args.IsInertial)     {         ElemToMove = Elem;           Debug.WriteLine("Linear VelX:{0:0.00}  VelY:{1:0.00}", args.FinalVelocities.LinearVelocity.X,             args.FinalVelocities.LinearVelocity.Y);           ElemVelX = args.FinalVelocities.LinearVelocity.X / SPEED_FACTOR;         ElemVelY = args.FinalVelocities.LinearVelocity.Y / SPEED_FACTOR;           timer.Start();     } } ManipulationCompletedEventArgs contains a member, IsInertial, which is set to true if the manipulation was completed with some inertia. args.FinalVelocities.LinearVelocity.X and .Y will contain the velocities along the X and Y axis. We need to scale down these values so they can be used to increment the ui control’s position sensibly. A reference to the ui control is stored in ElemToMove and the velocities are stored as well, these will be used in the timer callback to access the ui control. And finally, we start the timer. The timer callback function is as follows, void OnTimerTick(object sender, EventArgs e) {     if (null != ElemToMove)     {         double Left, Top;         Left = Canvas.GetLeft(ElemToMove);         Top = Canvas.GetTop(ElemToMove);           Left += ElemVelX;         Top += ElemVelY;           //check for bounds         if (Left < 0)         {             Left = 0;             ElemVelX *= -1;         }         else if (Left > (MainCanvas.ActualWidth - ElemToMove.ActualWidth))         {             Left = MainCanvas.ActualWidth - ElemToMove.ActualWidth;             ElemVelX *= -1;         }           if (Top < 0)         {             Top = 0;             ElemVelY *= -1;         }         else if (Top > (MainCanvas.ActualHeight - ElemToMove.ActualHeight))         {             Top = MainCanvas.ActualHeight - ElemToMove.ActualHeight;             ElemVelY *= -1;         }           Canvas.SetLeft(ElemToMove, Left);         Canvas.SetTop(ElemToMove, Top);           //reduce x,y velocities gradually         ElemVelX *= 0.9;         ElemVelY *= 0.9;           //when velocities become too low, break         if (Math.Abs(ElemVelX) < 1.0 && Math.Abs(ElemVelY) < 1.0)         {             timer.Stop();             ElemToMove = null;         }     } } if ElemToMove is not null, we get the top and left values of the control and increment the values with their X and Y velocities. Check for bounds, and if the control goes out of bounds we reverse its velocity. Towards the end, the velocities are reduced by 10% every time the timer callback is called, and if the velocities reach too low values the timer is stopped and ElemToMove is made null. Here’s a short video of the program, the video is a little dodgy because my display driver refuses to run the animations smoothly. The flicks aren’t always recognised but the program should run well on an actual device (or a pc with better configuration), You can download the source code from here: ButtonDragAndFlick.zip
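A side note on the damping: reducing the velocities by 10% per tick ties the deceleration to the 35 millisecond timer interval, so changing the interval changes the feel of the flick. A hedged variation decays the velocity by the time actually elapsed instead; Damping and lastTick are names invented for this sketch and would need to be added to the page.

    // Inside OnTimerTick: frame-rate independent decay based on elapsed time.
    // 'Damping' (a local constant) and 'lastTick' (a DateTime field) are hypothetical.
    const double Damping = 3.0;                 // larger value = the control stops sooner
    DateTime now = DateTime.Now;
    double dt = (now - lastTick).TotalSeconds;  // seconds since the previous tick
    lastTick = now;

    double decay = Math.Exp(-Damping * dt);
    ElemVelX *= decay;
    ElemVelY *= decay;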

    Read the article

  • How to make a player stay within bounds of world with 2D Camera

    - by Craig
    Im creating a simple top down survival game. At the moment, i have the sprite which is a ship and moves by rotating left or right then going forward in that direction. I have implemented a 2D camera, its always centered on the player. However, when i move towards the bounds of the world that the sprite is in it just keeps on going :( How to i sort it that it stops at the edge of the world and cant go beyond it? Cheers :) Below is the main game class using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace GamesCoursework_1 { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; // player variables Texture2D Ship; Vector2 Ship_Position; float Ship_Rotation = 0.0f; Vector2 Ship_Origin; Vector2 Ship_Velocity; const float tangentialVelocity = 4f; float friction = 0.05f; static Point CameraViewport = new Point(800, 800); Camera2d cam = new Camera2d((int)CameraViewport.X, (int)CameraViewport.Y); //Size of world static Point worldSize = new Point(1600, 1600); // Screen variables static Point worldCenter = new Point(worldSize.X / 2, worldSize.Y / 2); Rectangle playerBounds = new Rectangle(CameraViewport.X / 2, CameraViewport.Y / 2, worldSize.X - CameraViewport.X, worldSize.Y - CameraViewport.Y); Rectangle worldBounds = new Rectangle(0, 0, worldSize.X, worldSize.Y); Texture2D background; public Game1() { graphics = new GraphicsDeviceManager(this); graphics.PreferredBackBufferWidth = CameraViewport.X; graphics.PreferredBackBufferHeight = CameraViewport.Y; Content.RootDirectory = "Content"; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here base.Initialize(); } /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // TODO: use this.Content to load your game content here Ship = Content.Load<Texture2D>("Ship"); Ship_Origin.X = Ship.Width / 2; Ship_Origin.Y = Ship.Height / 2; background = Content.Load<Texture2D>("aus"); Ship_Position = new Vector2(worldCenter.X, worldCenter.Y); cam.Pos = Ship_Position; cam.Zoom = 1f; } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. 
/// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here Ship_Position = Ship_Velocity + Ship_Position; keyPressed(); base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // TODO: Add your drawing code here spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, null, null, null,null, cam.get_transformation(GraphicsDevice)); spriteBatch.Draw(background, Vector2.Zero, Color.White); spriteBatch.Draw(Ship, Ship_Position, Ship.Bounds, Color.White, Ship_Rotation, Ship_Origin, 1.0f, SpriteEffects.None, 0f); spriteBatch.End(); base.Draw(gameTime); } private void Ship_Move(Vector2 move) { Ship_Position += move; } private void keyPressed() { KeyboardState keyState; // Move right keyState = Keyboard.GetState(); if (keyState.IsKeyDown(Keys.Right)) { Ship_Rotation = Ship_Rotation + 0.1f; } if (keyState.IsKeyDown(Keys.Left)) { Ship_Rotation = Ship_Rotation - 0.1f; } if (keyState.IsKeyDown(Keys.Up)) { Ship_Velocity.X = (float)Math.Cos(Ship_Rotation) * tangentialVelocity; Ship_Velocity.Y = (float)Math.Sin(Ship_Rotation) * tangentialVelocity; if ((int)Ship_Position.Y < playerBounds.Bottom && (int)Ship_Position.Y > playerBounds.Top) cam._pos.Y = Ship_Position.Y; if ((int)Ship_Position.X > playerBounds.Left && (int)Ship_Position.X < playerBounds.Right) cam._pos.X = Ship_Position.X; //tried world bounds here if (!worldBounds.Contains(new Point((int)Ship_Position.X, (int)Ship_Position.Y))) Ship_Position -= new Vector2(0.0f, -tangentialVelocity * 2); if (!worldBounds.Contains(new Point((int)Ship_Position.X, (int)Ship_Position.Y))) Ship_Position -= new Vector2(0.0f, 2 * tangentialVelocity); } else if(Ship_Velocity != Vector2.Zero) { float i = Ship_Velocity.X; float j = Ship_Velocity.Y; Ship_Velocity.X = i -= friction * i; Ship_Velocity.Y = j -= friction * j; if ((int)Ship_Position.Y < playerBounds.Bottom && (int)Ship_Position.Y > playerBounds.Top) cam._pos.Y = Ship_Position.Y; if ((int)Ship_Position.X > playerBounds.Left && (int)Ship_Position.X < playerBounds.Right) cam._pos.X = Ship_Position.X; } if (keyState.IsKeyDown(Keys.Q)) { if (cam.Zoom < 2f) cam.Zoom += 0.05f; } if (keyState.IsKeyDown(Keys.A)) { if (cam.Zoom > 0.3f) cam.Zoom -= 0.05f; } } } } my 2d camera class using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; namespace GamesCoursework_1 { public class Camera2d { protected float _zoom; // Camera Zoom public Matrix _transform; // Matrix Transform public Vector2 _pos; // Camera Position protected float _rotation; // Camera Rotation public int _viewportWidth, _viewportHeight; // viewport size public Camera2d(int ViewportWidth, int ViewportHeight) { _zoom = 1.0f; _rotation = 0.0f; _pos = Vector2.Zero; _viewportWidth = ViewportWidth; _viewportHeight = ViewportHeight; } // Sets and gets zoom public float Zoom { get { return _zoom; } set { _zoom = value; if (_zoom < 0.1f) _zoom = 0.1f; } // Negative zoom will flip image } public float Rotation { get { return _rotation; } set { _rotation = value; } } // Auxiliary function to move the camera public 
void Move(Vector2 amount) { _pos += amount; } // Get set position public Vector2 Pos { get { return _pos; } set { _pos = value; } } public Matrix get_transformation(GraphicsDevice graphicsDevice) { _transform = // Thanks to o KB o for this solution Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) * Matrix.CreateRotationZ(Rotation) * Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) * Matrix.CreateTranslation(new Vector3(_viewportWidth * 0.5f, _viewportHeight * 0.5f, 0)); return _transform; } } }
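A common way to keep the ship inside the world is to clamp its position against the world rectangle every frame, after the velocity has been applied. A minimal sketch, assuming it is placed at the end of Update() right after Ship_Position is advanced; the Ship_Origin offsets keep the whole sprite on the map because the sprite is drawn around its center.

    // Clamp the ship to the world bounds so it can never leave the map.
    // Runs each frame after 'Ship_Position = Ship_Velocity + Ship_Position;'.
    Ship_Position.X = MathHelper.Clamp(Ship_Position.X,
                                       worldBounds.Left + Ship_Origin.X,
                                       worldBounds.Right - Ship_Origin.X);
    Ship_Position.Y = MathHelper.Clamp(Ship_Position.Y,
                                       worldBounds.Top + Ship_Origin.Y,
                                       worldBounds.Bottom - Ship_Origin.Y);

The camera clamping that already uses playerBounds can stay as it is; this only stops the sprite itself from drifting past the edge of the background.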

    Read the article

  • Project Jigsaw: Late for the train: The Q&A

    - by Mark Reinhold
    I recently proposed, to the Java community in general and to the SE 8 (JSR 337) Expert Group in particular, to defer Project Jigsaw from Java 8 to Java 9. I also proposed to aim explicitly for a regular two-year release cycle going forward. Herewith a summary of the key questions I’ve seen in reaction to these proposals, along with answers. Making the decision Q Has the Java SE 8 Expert Group decided whether to defer the addition of a module system and the modularization of the Platform to Java SE 9? A No, it has not yet decided. Q By when do you expect the EG to make this decision? A In the next month or so. Q How can I make sure my voice is heard? A The EG will consider all relevant input from the wider community. If you have a prominent blog, column, or other communication channel then there’s a good chance that we’ve already seen your opinion. If not, you’re welcome to send it to the Java SE 8 Comments List, which is the EG’s official feedback channel. Q What’s the overall tone of the feedback you’ve received? A The feedback has been about evenly divided as to whether Java 8 should be delayed for Jigsaw, Jigsaw should be deferred to Java 9, or some other, usually less-realistic, option should be taken. Project Jigsaw Q Why is Project Jigsaw taking so long? A Project Jigsaw started at Sun, way back in August 2008. Like many efforts during the final years of Sun, it was not well staffed. Jigsaw initially ran on a shoestring, with just a handful of mostly part-time engineers, so progress was slow. During the integration of Sun into Oracle all work on Jigsaw was halted for a time, but it was eventually resumed after a thorough consideration of the alternatives. Project Jigsaw was really only fully staffed about a year ago, around the time that Java 7 shipped. We’ve added a few more engineers to the team since then, but that can’t make up for the inadequate initial staffing and the time lost during the transition. Q So it’s really just a matter of staffing limitations and corporate-integration distractions? A Aside from these difficulties, the other main factor in the duration of the project is the sheer technical difficulty of modularizing the JDK. Q Why is modularizing the JDK so hard? A There are two main reasons. The first is that the JDK code base is deeply interconnected at both the API and the implementation levels, having been built over many years primarily in the style of a monolithic software system. We’ve spent considerable effort eliminating or at least simplifying as many API and implementation dependences as possible, so that both the Platform and its implementations can be presented as a coherent set of interdependent modules, but some particularly thorny cases remain. Q What’s the second reason? A We want to maintain as much compatibility with prior releases as possible, most especially for existing classpath-based applications but also, to the extent feasible, for applications composed of modules. Q Is modularizing the JDK even necessary? Can’t you just put it in one big module? A Modularizing the JDK, and more specifically modularizing the Java SE Platform, will enable standard yet flexible Java runtime configurations scaling from large servers down to small embedded devices. In the long term it will enable the convergence of Java SE with the higher-end Java ME Platforms. Q Is Project Jigsaw just about modularizing the JDK? A As originally conceived, Project Jigsaw was indeed focused primarily upon modularizing the JDK. 
The growing demand for a truly standard module system for the Java Platform, which could be used not just for the Platform itself but also for libraries and applications built on top of it, later motivated expanding the scope of the effort. Q As a developer, why should I care about Project Jigsaw? A The introduction of a modular Java Platform will, in the long term, fundamentally change the way that Java implementations, libraries, frameworks, tools, and applications are designed, built, and deployed. Q How much progress has Project Jigsaw made? A We’ve actually made a lot of progress. Much of the core functionality of the module system has been prototyped and works at both compile time and run time. We’ve extended the Java programming language with module declarations, worked out a structure for modular source trees and corresponding compiled-class trees, and implemented these features in javac. We’ve defined an efficient module-file format, extended the JVM to bootstrap a modular JRE, and designed and implemented a preliminary API. We’ve used the module system to make a good first cut at dividing the JDK and the Java SE API into a coherent set of modules. Among other things, we’re currently working to retrofit the java.util.ServiceLoader API to support modular services. Q I want to help! How can I get involved? A Check out the project page, read the draft requirements and design overview documents, download the latest prototype build, and play with it. You can tell us what you think, and follow the rest of our work in real time, on the jigsaw-dev list. The Java Platform Module System JSR Q What’s the relationship between Project Jigsaw and the eventual Java Platform Module System JSR? A At a high level, Project Jigsaw has two phases. In the first phase we’re exploring an approach to modularity that’s markedly different from that of existing Java modularity solutions. We’ve assumed that we can change the Java programming language, the virtual machine, and the APIs. Doing so enables a design which can strongly enforce module boundaries in all program phases, from compilation to deployment to execution. That, in turn, leads to better usability, diagnosability, security, and performance. The ultimate goal of the first phase is produce a working prototype which can inform the work of the Module-System JSR EG. Q What will happen in the second phase of Project Jigsaw? A The second phase will produce the reference implementation of the specification created by the Module-System JSR EG. The EG might ultimately choose an entirely different approach than the one we’re exploring now. If and when that happens then Project Jigsaw will change course as necessary, but either way I think that the end result will be better for having been informed by our current work. Maven & OSGi Q Why not just use Maven? A Maven is a software project management and comprehension tool. As such it can be seen as a kind of build-time module system but, by its nature, it does nothing to support modularity at run time. Q Why not just adopt OSGi? A OSGi is a rich dynamic component system which includes not just a module system but also a life-cycle model and a dynamic service registry. The latter two facilities are useful to some kinds of sophisticated applications, but I don’t think they’re of wide enough interest to be standardized as part of the Java SE Platform. Q Okay, then why not just adopt the module layer of OSGi? 
A The OSGi module layer is not operative at compile time; it only addresses modularity during packaging, deployment, and execution. As it stands, moreover, it’s useful for library and application modules but, since it’s built strictly on top of the Java SE Platform, it can’t be used to modularize the Platform itself. Q If Maven addresses modularity at build time, and the OSGi module layer addresses modularity during deployment and at run time, then why not just use the two together, as many developers already do? A The combination of Maven and OSGi is certainly very useful in practice today. These systems have, however, been built on top of the existing Java platform; they have not been able to change the platform itself. This means, among other things, that module boundaries are weakly enforced, if at all, which makes it difficult to diagnose configuration errors and impossible to run untrusted code securely. The prototype Jigsaw module system, by contrast, aims to define a platform-level solution which extends both the language and the JVM in order to enforce module boundaries strongly and uniformly in all program phases. Q If the EG chooses an approach like the one currently being taken in the Jigsaw prototype, will Maven and OSGi be made obsolete? A No, not at all! No matter what approach is taken, to ensure wide adoption it’s essential that the standard Java Platform Module System interact well with Maven. Applications that depend upon the sophisticated features of OSGi will no doubt continue to use OSGi, so it’s critical that implementations of OSGi be able to run on top of the Java module system and, if suitably modified, support OSGi bundles that depend upon Java modules. Ideas for how to do that are currently being explored in Project Penrose. Java 8 & Java 9 Q Without Jigsaw, won’t Java 8 be a pretty boring release? A No, far from it! It’s still slated to include the widely-anticipated Project Lambda (JSR 335), work on which has been going very well, along with the new Date/Time API (JSR 310), Type Annotations (JSR 308), and a set of smaller features already in progress. Q Won’t deferring Jigsaw to Java 9 delay the eventual convergence of the higher-end Java ME Platforms with Java SE? A It will slow that transition, but it will not stop it. To allow progress toward that convergence to be made with Java 8 I’ve suggested to the Java SE 8 EG that we consider specifying a small number of Profiles which would allow compact configurations of the SE Platform to be built and deployed. Q If Jigsaw is deferred to Java 9, would the Oracle engineers currently working on it be reassigned to other Java 8 features and then return to working on Jigsaw again after Java 8 ships? A No, these engineers would continue to work primarily on Jigsaw from now until Java 9 ships. Q Why not drop Lambda and finish Jigsaw instead? A Even if the engineers currently working on Lambda could instantly switch over to Jigsaw and immediately become productive—which of course they can’t—there are less than nine months remaining in the Java 8 schedule for work on major features. That’s just not enough time for the broad review, testing, and feedback which such a fundamental change to the Java Platform requires. Q Why not ship the module system in Java 8, and then modularize the platform in Java 9? A If we deliver a module system in one release but don’t use it to modularize the JDK until some later release then we run a big risk of getting something fundamentally wrong. 
If that happens then we’d have to fix it in the later release, and fixing fundamental design flaws after the fact almost always leads to a poor end result. Q Why not ship Jigsaw in an 8.5 release, less than two years after 8? Or why not just ship a new release every year, rather than every other year? A Many more developers work on the JDK today than a couple of years ago, both because Oracle has dramatically increased its own investment and because other organizations and individuals have joined the OpenJDK Community. Collectively we don’t, however, have the bandwidth required to ship and then provide long-term support for a big JDK release more frequently than about every other year. Q What’s the feedback been on the two-year release-cycle proposal? A For just about every comment that we should release more frequently, so that new features are available sooner, there’s been another asking for an even slower release cycle so that large teams of enterprise developers who ship mission-critical applications have a chance to migrate at a comfortable pace.

    Read the article

  • LINQ to Twitter Maintenance Feedback

    - by Joe Mayo
Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/06/16/linq-to-twitter-maintenance-feedback.aspx It’s always fun to receive positive feedback on your work. If you receive a sufficient amount of positive feedback, you know you’re doing something right. Sometimes, people provide negative feedback too. There are a couple of ways to handle it: come back fighting or engage for clarification. The way you handle the negative feedback depends on what your goals are. Feedback Approaches If you know the feedback is incorrect and you need to promote your idea or product, you might want to come back fighting. The feedback might just be comments by a troll or competitor wanting to spread FUD. However, this could be the totally wrong approach if you misjudge the source and intentions of the feedback. In a lot of cases, feedback is a golden opportunity. Sometimes, a problem exists that you either don’t know about or don’t realize the true impact of. If you decide to come back fighting, you might lose the opportunity to learn something new. However, if you engage the person providing the feedback, looking for clarification, you might learn something very important. Negative feedback and its clarification can lead to the collection of useful and actionable data. In my case, something that prompted this blog post, I noticed someone who tweeted a negative comment about LINQ to Twitter. Normally, any less-than-stellar comments are from folks that need help – so I help if I can. This was different. The tweet was more along the lines of “Don’t use LINQ to Twitter”. This is an open source project, the comment didn’t come from a competing project, and it sounded more like an expression of frustration. So I engaged. Not only did the person respond, but I got some decent quality feedback. What’s also interesting is that a couple of other side conversations sprouted on the subject, which gave me more useful data. LINQ to Twitter Thread Actions Essentially, this particular issue centered around maintenance. There are actually several sub-issues at play here: dependencies, error handling, debugging, and visibility. I’ll describe each one and my interpretation. Dependencies Dependencies are where a library has references to other libraries. This means that when you build your application, you need DLLs for the entire dependency graph for your application. There are several potential problems with this, including more libraries for configuration management, potential versioning mismatches, and lack of cross-platform support. In the early days of LINQ to Twitter, I allowed developers to contribute and add dependencies, but it became very problematic (for the reasons stated). It was like a ball and chain that kept me from moving forward. So, I refactored and pulled other open source into my project to eliminate external dependencies. This lets me fix the code in my project without relying on someone else to upgrade or fix their DLL. The motivation for this came from early negative feedback that I treated as important data and acted on. Today, LINQ to Twitter has zero dependencies. Note: Rejecting good code from community members who worked hard to make your project better is a painful experience in itself. I have to point out that any contribution was not in vain because those contributions had a positive influence on my subsequent refactoring that resulted in a better developer experience. Error Handling Error handling has been a problem in the past. 
I have this combination of supporting both synchronous and asynchronous (APM) processing that can be complex at times. Within the last 6 months, I did a fair amount of refactoring to detect errors and process them properly. I also refactored TwitterQueryException so it includes important data from Twitter. During this refactoring, I’ve made breaking changes that I felt would improve the development experience (small things like renaming a callback property to Exception, rather than Error). I think the async error handling is much better than it was a year ago. For all the work I’ve done, there is more to do. I think that a combination of more error handling support, e.g. improving semantics, and education through documentation and samples will improve the error handling story. Because of what I’ve done so far, it isn’t bad, but I see opportunities for improvement. Debugging Debugging can be painful. Here’s why: you have multiple layers of technology to navigate to figure out where the real problem is – Twitter API, Security, HTTP, LINQ to Twitter, and application. You can probably add your own nuances to that list, but the point is that debugging in this environment can be complex. I think that my plans for error handling will contribute to making the debugging process easier. However, there’s more I can do in the way of documentation and guidance. Some of the questions to be answered revolve around what happens when something goes wrong: how does the developer figure out that there is a problem, what the problem is, and what to do about it. One example that has gone a long way toward helping LINQ to Twitter developers is the 401 FAQ. A 401 Unauthorized is the error that the Twitter API returns when a user isn’t able to authenticate, and it is one of the most difficult problems faced by LINQ to Twitter developers. What I did was read guidance from Twitter and collect techniques from my own development and from helping other developers, to compile an extensive list of reasons for the 401 and ways to fix the problem. At one time, over half of the questions I answered in the forums were to help solve 401 issues. After publishing the 401 FAQ, I rarely get a 401 question, and when I do, it’s because the person didn’t know about the FAQ. If the person is too lazy to read the FAQ, that’s not my issue, but the reduction in support issues has been dramatic. I think debugging can benefit from the education and documentation approach, but I’m always open to suggestions on whatever else I can do. Visibility Visibility is a nuance of the error handling/debugging discussion but is deeply rooted in comfort and control. The questions to ask in this area are what is happening as my code runs and how testable is the code. In support of these areas, LINQ to Twitter does have logging and TwitterContext properties that help you see what’s happening on requests. The logging functionality allows any developer to connect a TextWriter to the Log property of TwitterContext to see what’s happening. Further, TwitterContext has a Headers property to see the headers Twitter returns and a RawResults property to show the Json string Twitter returns. From a testing perspective, I’ve been able to write hundreds of unit tests, over 600 when this post is published, and growing. If you write your own library, you have full control over all of these aspects. 
The tradeoff here is that while you have access to the LINQ to Twitter source code and can modify it for all the visibility you need, LINQ to Twitter *will* change (which is good) and you will have to figure out how to merge that with your changes (which is hard). The fact is that this is a limitation of any 3rd party library, not just LINQ to Twitter. So, it’s a design decision where the tradeoff is between control and productivity. That said, there are things I can do with LINQ to Twitter to make the visibility story more compelling. I think there are opportunities to improve diagnostics. This would be a ton of work because it would need to provide multi-level logging that can be tuned for production and support any logging provider you want to attach. I’ve considered approaches such as how the new Semantic Logging application block connects to Windows Error Reporting as a potential target. Whatever I do would need to be extensible without creating native external dependencies (e.g., how many 3rd party libraries force a dependency on a logging framework that you don’t use?). So, this won’t be an easy feat, but I believe it can be part of the roadmap. I think that a lot of developers are unaware of existing visibility features, so the first step would be to provide more documentation and guidance. My thought is that this would lead to more feedback that will help improve this area. Summary Recent feedback highlights some of the items that are important to LINQ to Twitter developers, such as dependencies, error handling, debugging, and visibility. I know that there are maintenance issues that have been problems for LINQ to Twitter developers in the past. I’ve done a lot of work in this area, such as improving error handling, adding visibility features, and providing extensive API documentation. That said, there is more to be done to make LINQ to Twitter the best Twitter API experience available for .NET developers, and I welcome anyone’s thoughts on what I’ve written here or on new improvements. @JoeMayo
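
    To make the visibility features described above a little more concrete, here is a minimal sketch of how they might be wired up. The Log, Headers, and RawResults property names are taken from the post itself; the authorizer setup and the query are placeholders and may not match the library's exact API.

    // Minimal sketch only - property names per the post above; the
    // authorizer and query details are placeholders, not the documented API.
    var auth = new PinAuthorizer();          // a real app must supply credentials here
    var ctx = new TwitterContext(auth);

    ctx.Log = Console.Out;                   // any TextWriter can receive the request log

    var tweets = (from tweet in ctx.Status
                  where tweet.Type == StatusType.Home
                  select tweet).ToList();

    Console.WriteLine(ctx.RawResults);       // raw JSON string returned by Twitter
    // ctx.Headers exposes the HTTP headers Twitter sent back on the last request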

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing.  Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain.  This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate.  It is effective to be sure, but not very readable.  Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details of which are not really the point of this article (my way of saying dont criticize my crappy code, and certainly dont use it in any somewhat real application.  You may become dumber just by looking at this code.  You have been warned really the footnote I should put at the end of all of my blog posts). To begin with, there were two basic requirements that I needed the code generator to satisfy:  Reading one to many entity framework files, and using the entities that were found to write one to many class files.  Thankfully, using the Entity Object Generator as a starting point gave us an example on how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager you include references to these items and use them like so: // Instantiate an entity framework file reader and file writer MetadataLoader loader = new MetadataLoader(this); EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this); // Load the entity model metadata workspace MetadataWorkspace metadataWorkspace = null; bool allMetadataLoaded =loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace); EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace); // Create an IO class to contain the 'get' methods for all entities in the model fileManager.StartNewFile("MFL.IO.gen.cs"); Next, we want to be able to loop through all of the entities found in the model, and then each property for each entity so we can generate classes and methods for each.  The code for that is blissfully simple: // Iterate through each entity in the model foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name)) {     // Iterate through each primitive property of the entity     foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))     {         // TODO:  Create properties     }     // Iterate through each relationship of the entity     foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))     {         // TODO:  Create associations     } } There really isnt anything more advanced than that going on in the text template the only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #> while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #> like so: public partial class <#=entity.Name#> To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships.  
The two work together to provide lazy-loading for hierarchical data (such getting Team.Players) so it should be pretty intuitive to use on a front-end.  This text template is available here you can tweak the inputFiles array to load one or many different edmx models and generate the basic xml IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post.  Additionally, there is no validation, logging or error handling which is something I want to handle later by stumbling through the enterprise library 5.0. The code that gets generated isnt anything special, though using the LINQ to XML feature was something very new and exciting for me I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later).  For example, the following code was generated to create a Player object for each Player node in the XML:         return from element in GetXmlData(_PlayerDataFile).Descendants("Player")             select new Player             {                 Id = int.Parse(element.Attribute("Id").Value)                 ,ParentName = element.Parent.Name.LocalName                 ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)                 ,Name = element.Attribute("Name").Value                 ,PositionId = int.Parse(element.Attribute("PositionId").Value)             }; It is all done in one line of code, no looping needed.  Even though GetXmlData loads the entire xml file just like the old XML DOM approach would have, it is supposed to be much less resource intensive.  I will definitely put that to the test after we develop a user interface for getting at this data.  Speaking of the data where IS the data?  Weve put together a pretty model and a bunch of code around it, but we dont have any data to speak of.  We can certainly drop to our favorite XML editor and crank out some data, but if it doesnt totally match our model, it will not load correctly.  To help with this, Ive built in a method to generate xml at any given layer in the hierarchy.  So for us to get the closest possible thing to real data, wed need to invoke MFL.IO.GenerateTeamXML and save the results to file.  Doing so should get us something that looks like this: <Team Id="0" Name="0">   <Player Id="0" Name="0" PositionId="0">     <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />   </Player> </Team> Sadly, it is missing the Positions node (havent thought of a way to generate lookup xml yet) and the data itself isnt quite realistic (well, as realistic as MFL data can be anyway).  Lets manually remedy that for now to give us a decent starter set of data.  
Note that this is TWO xml files – Lookups.xml and Teams.xml: <Lookups Id="0">   <Position Id="0" Name="Quarterback"/>   <Position Id="1" Name="Runningback"/> </Lookups> <Teams Id="0">   <Team Id="0" Name="Chicago">     <Player Id="0" Name="QB Bears" PositionId="0">       <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />       <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />     </Player>     <Player Id="1" Name="RB Bears" PositionId="1">       <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />       <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />       <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />     </Player>   </Team> </Teams> Ok, so we have some data, we have a way to read/write that data and we have a friendly way of representing that data.  Now, what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.
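
    For readers wondering what the GetXmlData helper referenced above might look like, here is a hypothetical sketch; the generated method isn't shown in the post, so the name and body are assumptions about how it plausibly works.

    // Hypothetical sketch of the GetXmlData helper mentioned above - the actual
    // generated code may differ. It loads the whole backing file up front (much
    // like the old DOM approach) and exposes it to LINQ to XML queries.
    // Requires a reference to System.Xml.Linq.
    private static XElement GetXmlData(string dataFile)
    {
        return XDocument.Load(dataFile).Root;
    }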

    Read the article

  • .NET WebRequest.PreAuthenticate not quite what it sounds like

    - by Rick Strahl
    I’ve run into the  problem a few times now: How to pre-authenticate .NET WebRequest calls doing an HTTP call to the server – essentially send authentication credentials on the very first request instead of waiting for a server challenge first? At first glance this sound like it should be easy: The .NET WebRequest object has a PreAuthenticate property which sounds like it should force authentication credentials to be sent on the first request. Looking at the MSDN example certainly looks like it does: http://msdn.microsoft.com/en-us/library/system.net.webrequest.preauthenticate.aspx Unfortunately the MSDN sample is wrong. As is the text of the Help topic which incorrectly leads you to believe that PreAuthenticate… wait for it - pre-authenticates. But it doesn’t allow you to set credentials that are sent on the first request. What this property actually does is quite different. It doesn’t send credentials on the first request but rather caches the credentials ONCE you have already authenticated once. Http Authentication is based on a challenge response mechanism typically where the client sends a request and the server responds with a 401 header requesting authentication. So the client sends a request like this: GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive and the server responds with: HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.5 WWW-Authenticate: basic realm=rasnote" X-AspNet-Version: 2.0.50727 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM WWW-Authenticate: Basic realm="rasnote" X-Powered-By: ASP.NET Date: Tue, 27 Oct 2009 00:58:20 GMT Content-Length: 5163 plus the actual error message body. The client then is responsible for re-sending the current request with the authentication token information provided (in this case Basic Auth): GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Cookie: TimeTrakker=2HJ1998WH06696; WebLogCommentUser=Rick Strahl|http://www.west-wind.com/|[email protected]; WebStoreUser=b8bd0ed9 Authorization: Basic cgsf12aDpkc2ZhZG1zMA== Once the authorization info is sent the server responds with the actual page result. Now if you use WebRequest (or WebClient) the default behavior is to re-authenticate on every request that requires authorization. This means if you look in  Fiddler or some other HTTP client Proxy that captures requests you’ll see that each request re-authenticates: Here are two requests fired back to back: and you can see the 401 challenge, the 200 response for both requests. If you watch this same conversation between a browser and a server you’ll notice that the first 401 is also there but the subsequent 401 requests are not present. 
WebRequest.PreAuthenticate And this is precisely what the WebRequest.PreAuthenticate property does: It’s a caching mechanism that caches the connection credentials for a given domain in the active process and resends them on subsequent requests. It does not send credentials on the first request, but it will cache credentials on subsequent requests after authentication has succeeded: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rick", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rstrahl", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); which results in the desired sequence: where only the first request doesn’t send credentials. This is quite useful as it saves quite a few round trips to the server – basically it saves one authentication round trip for every authenticated request you make. In most scenarios I think you’d want to send these credentials this way, but one downside is that there’s no way to log out the client. Since the client always sends the credentials once authenticated, only an explicit operation ON THE SERVER can undo the credentials by forcing another login explicitly (i.e. re-challenging with a forced 401 request). Forcing Basic Authentication Credentials on the first Request On a few occasions I’ve needed to send credentials on the first request – mainly to some oddball third party Web Services (why you’d want to use Basic Auth on a Web Service is beyond me – don’t ask – but it’s not uncommon in my experience). This is true of certain services that use Basic Authentication (especially some Apache based Web Services) and REQUIRE that the authentication is sent right from the first request. No challenge first. Ugly, but there it is. Now the following works only with Basic Authentication because it’s pretty straightforward to create the Basic Authorization ‘token’ in code, since it’s just an unencrypted base64 encoding of the user name and password. As you might guess this is totally insecure and should only be used over HTTPS/SSL connections (I’m not in this example so I can capture the Fiddler trace, and my local machine doesn’t have a cert installed, but for production apps ALWAYS use SSL with basic auth). 
The idea is that you simply add the required Authorization header to the request on your own along with the authorization string that encodes the username and password: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "rick"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); This works and causes the request to immediately send auth information to the server. However, this only works with Basic Auth because you can actually create the authentication credentials easily on the client because it’s essentially clear text. The same doesn’t work for Windows or Digest authentication since you can’t easily create the authentication token on the client and send it to the server. Another issue with this approach is that PreAuthenticate has no effect when you manually force the authentication. As far as Web Request is concerned it never sent the authentication information so it’s not actually caching the value any longer. If you run 3 requests in a row like this: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "ricks"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); you’ll find the trace looking like this: where the first request (the one we explicitly add the header to) authenticates, the second challenges, and any subsequent ones then use the PreAuthenticate credential caching. In effect you’ll end up with one extra 401 request in this scenario, which is still better than 401 challenges on each request. Getting Access to WebRequest in Classic .NET Web Service Clients If you’re running a classic .NET Web Service client (non-WCF) one issue with the above is how do you get access to the WebRequest to actually add the custom headers to do the custom Authentication described above? 
One easy way is to implement a partial class that allows you add headers with something like this: public partial class TaxService { protected NameValueCollection Headers = new NameValueCollection(); public void AddHttpHeader(string key, string value) { this.Headers.Add(key,value); } public void ClearHttpHeaders() { this.Headers.Clear(); } protected override WebRequest GetWebRequest(Uri uri) { HttpWebRequest request = (HttpWebRequest) base.GetWebRequest(uri); request.Headers.Add(this.Headers); return request; } } where TaxService is the name of the .NET generated proxy class. In code you can then call AddHttpHeader() anywhere to add additional headers which are sent as part of the GetWebRequest override. Nice and simple once you know where to hook it. For WCF there’s a bit more work involved by creating a message extension as described here: http://weblogs.asp.net/avnerk/archive/2006/04/26/Adding-custom-headers-to-every-WCF-call-_2D00_-a-solution.aspx. FWIW, I think that HTTP header manipulation should be readily available on any HTTP based Web Service client DIRECTLY without having to subclass or implement a special interface hook. But alas a little extra work is required in .NET to make this happen Not a Common Problem, but when it happens… This has been one of those issues that is really rare, but it’s bitten me on several occasions when dealing with oddball Web services – a couple of times in my own work interacting with various Web Services and a few times on customer projects that required interaction with credentials-first services. Since the servers determine the protocol, we don’t have a choice but to follow the protocol. Lovely following standards that implementers decide to ignore, isn’t it? :-}© Rick Strahl, West Wind Technologies, 2005-2010Posted in .NET  CSharp  Web Services  
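
    To round out the partial-class approach above, a usage sketch might look like the following. Only AddHttpHeader and the GetWebRequest override come from the post; CalculateTax and the credentials are hypothetical placeholders, and the Authorization header is built the same way as in the Basic Auth example earlier.

    // Usage sketch for the partial-class hook shown above. TaxService is the
    // post's generated proxy class; CalculateTax is a hypothetical web method.
    var service = new TaxService();
    string auth = "Basic " + Convert.ToBase64String(
        System.Text.Encoding.Default.GetBytes("rick" + ":" + "secret"));
    service.AddHttpHeader("Authorization", auth);   // sent on the very first request
    decimal tax = service.CalculateTax(100m);       // hypothetical web method call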

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
Recently I was working on updating a legacy application to MVC 4 that included free form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved. Then the client sprung another note: Oh by the way we have a bunch of customers (real estate agents) who need to post complete HTML documents. Uh oh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway. There have been lots of discussions on this subject on StackOverflow (and here and here) but after reading through some of the solutions I didn't really find anything that came close to working for what I needed. Specifically we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and Images need to be loaded, links need to work and so on. While the 'legit' HTML posted by these agents is basic in nature it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short I needed a custom solution. I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input, but when you need to allow a larger array of HTML functionality a blacklist is probably easier to manage as the vast majority of elements and attributes could be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below) so even with a white list, custom logic is still required to handle many of those edge cases. The Microsoft Web Protection Library (AntiXSS) My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML Encoding and Sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically the Sanitation class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation. 
To give you an example of what didn't work for me with the library here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " + "<a href='http://west-wind.com'>West Wind</a> site. " + "<img src='http://west-wind.com/images/new.gif' /> " ; and the code to sanitize it with the AntiXSS Sanitize class:@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value)) This produced a not so useful sanitized string: Here we go. Visit the <a>West Wind</a> site. While it removed the <script> tag (good) it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the white list, nor is there code available in this 'open source' library on CodePlex. Using Html Agility Pack for HTML Parsing The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the HTML Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model to full HTML documents or HTML fragments that let you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before you'll feel right at home with the Agility Pack. Blacklist based HTML Parsing to strip XSS Code For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument of what's better: A whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so a blacklist includes the obvious <script> tag plus any tag that allows loading of external content including <iframe>, <object>, <embed> and <link> etc. <form>  is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to. 
Here's my HtmlSanitizer implementation:

using System.Collections.Generic;
using System.IO;
using System.Xml;
using HtmlAgilityPack;

namespace Westwind.Web.Utilities
{
    public class HtmlSanitizer
    {
        public HashSet<string> BlackList = new HashSet<string>()
        {
            { "script" }, { "iframe" }, { "form" }, { "object" },
            { "embed" }, { "link" }, { "head" }, { "meta" }
        };

        /// <summary>
        /// Cleans up an HTML string and removes HTML tags in blacklist
        /// </summary>
        public static string SanitizeHtml(string html, params string[] blackList)
        {
            var sanitizer = new HtmlSanitizer();
            if (blackList != null && blackList.Length > 0)
            {
                sanitizer.BlackList.Clear();
                foreach (string item in blackList)
                    sanitizer.BlackList.Add(item);
            }
            return sanitizer.Sanitize(html);
        }

        /// <summary>
        /// Cleans up an HTML string by removing elements
        /// on the blacklist and all attributes that start with onXXX.
        /// </summary>
        public string Sanitize(string html)
        {
            var doc = new HtmlDocument();
            doc.LoadHtml(html);
            SanitizeHtmlNode(doc.DocumentNode);

            string output = null;

            // Use an XmlTextWriter to create self-closing tags
            using (StringWriter sw = new StringWriter())
            {
                XmlWriter writer = new XmlTextWriter(sw);
                doc.DocumentNode.WriteTo(writer);
                output = sw.ToString();

                // strip off XML doc header
                if (!string.IsNullOrEmpty(output))
                {
                    int at = output.IndexOf("?>");
                    output = output.Substring(at + 2);
                }
                writer.Close();
            }
            doc = null;
            return output;
        }

        private void SanitizeHtmlNode(HtmlNode node)
        {
            if (node.NodeType == HtmlNodeType.Element)
            {
                // check for blacklist items and remove
                if (BlackList.Contains(node.Name))
                {
                    node.Remove();
                    return;
                }

                // remove CSS Expressions and embedded script links
                if (node.Name == "style")
                {
                    if (string.IsNullOrEmpty(node.InnerText))
                    {
                        if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                            node.ParentNode.RemoveChild(node);
                    }
                }

                // remove script attributes
                if (node.HasAttributes)
                {
                    for (int i = node.Attributes.Count - 1; i >= 0; i--)
                    {
                        HtmlAttribute currentAttribute = node.Attributes[i];
                        var attr = currentAttribute.Name.ToLower();
                        var val = currentAttribute.Value.ToLower();

                        // remove event handlers
                        if (attr.StartsWith("on"))
                            node.Attributes.Remove(currentAttribute);

                        // remove script links
                        else if (//(attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                                 val != null && val.Contains("javascript:"))
                            node.Attributes.Remove(currentAttribute);

                        // remove CSS Expressions
                        else if (attr == "style" && val != null &&
                                 (val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")))
                            node.Attributes.Remove(currentAttribute);
                    }
                }
            }

            // Look through child nodes recursively
            if (node.HasChildNodes)
            {
                for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                {
                    SanitizeHtmlNode(node.ChildNodes[i]);
                }
            }
        }
    }
}

Please note: Use this as a starting point only for your own parsing and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options.  The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route. 
The code above is semi-generic for allowing full featured HTML fragments that only disallow script related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML style self-closing tags which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers. What a Pain To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice and I'm sharing the code here merely as a base line to start with and potentially expand on for specific needs. It's sad that Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor or possibly a third party that provides security tools. Luckily for my application we are dealing with a authenticated and validated users so the user base is fairly well known, and relatively small - this is not a wide open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like MarkDown or even a good HTML Edit control that can provide some limits on what types of input are allowed. Alas in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs. 
So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-) Resources Code on GitHub Html Agility Pack XSS Cheat Sheet XSS Prevention Cheat Sheet Microsoft Web Protection Library (AntiXss) StackOverflow Links: http://stackoverflow.com/questions/341872/html-sanitizer-for-net http://blog.stackoverflow.com/2008/06/safe-html-and-xss/ http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61© Rick Strahl, West Wind Technologies, 2005-2012Posted in Security  HTML  ASP.NET  JavaScript
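
    For completeness, here is a quick usage sketch of the HtmlSanitizer class listed earlier, relying on its default blacklist; the input string is made up purely for illustration.

    // Usage sketch for the sanitizer above, using its default blacklist.
    string userHtml = "<b>Listing</b> <script>alert('xss')</script>" +
                      " <a href=\"javascript:alert('xss')\">details</a>";
    string safeHtml = Westwind.Web.Utilities.HtmlSanitizer.SanitizeHtml(userHtml);
    // The <b> and <a> elements survive, the <script> element is removed,
    // and the javascript: href attribute is stripped from the anchor.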

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework. DataContractJsonSerializer sucks As nice as Web API's overall design is one thing still sucks: The built-in JSON Serialization uses the DataContractJsonSerializer which is just too limiting for many scenarios. The biggest issues I have with it are: No support for untyped values (object, dynamic, Anonymous Types) MS AJAX style Date Formatting Ugly serialization formats for types like Dictionaries To me the most serious issue is dealing with serialization of untyped objects. I have number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there are IEnumerable query results from a database with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid for example). Creating custom types to fit these messages seems like overkill and projections using Linq makes this much easier to code up. Alas DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output for that matter. What this means is that you can't do stuff like this in Web API out of the box:public object GetAnonymousType() { return new { name = "Rick", company = "West Wind", entered= DateTime.Now }; } Basically anything that doesn't have an explicit type DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of Web API RTM release.  Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet in what capacity it will show up. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version. What about JsonValue? To be fair Web API DOES include a new JsonValue/JsonObject/JsonArray type that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned. 
For serialization you can create an object structure on the fly and pass it back as part of an Web API action method like this:public JsonValue GetJsonValue() { dynamic json = new JsonObject(); json.name = "Rick"; json.company = "West Wind"; json.entered = DateTime.Now; dynamic address = new JsonObject(); address.street = "32 Kaiea"; address.zip = "96779"; json.address = address; dynamic phones = new JsonArray(); json.phoneNumbers = phones; dynamic phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); //var jsonString = json.ToString(); return json; } which produces the following output (formatted here for easier reading):{ name: "rick", company: "West Wind", entered: "2012-03-08T15:33:19.673-10:00", address: { street: "32 Kaiea", zip: "96779" }, phoneNumbers: [ { type: "Home", number: "808 123-1233" }, { type: "Mobile", number: "808 123-1234" }] } If you need to build a simple JSON type on the fly these types work great. But if you have an existing type - or worse a query result/list that's already formatted JsonValue et al. become a pain to work with. As far as I can see there's no way to just throw an object instance at JsonValue and have it convert into JsonValue dictionary. It's a manual process. Using alternate Serializers in Web API So, currently the default serializer in WebAPI is DataContractJsonSeriaizer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer. 
Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using System.IO; namespace Westwind.Web.WebApi { public class JavaScriptSerializerFormatter : MediaTypeFormatter { public JavaScriptSerializerFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type== typeof(JsonArray) ) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Taskobject OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Taskobject.Factory.StartNew(() = { var ser = new JavaScriptSerializer(); string json; using (var sr = new StreamReader(stream)) { json = sr.ReadToEnd(); sr.Close(); } object val = ser.Deserialize(json,type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew( () = { var ser = new JavaScriptSerializer(); var json = ser.Serialize(value); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf,0,buf.Length); stream.Flush(); }); return task; } } } Formatter implementation is pretty simple: You override 4 methods to tell which types you can handle and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them they'd try to render the dictionaries which is very undesirable. 
If you'd rather use Json.NET, here's the Json.NET version of the formatter:

// this code requires a reference to JSON.NET in your project
#if true
using System;
using System.IO;
using System.Json;
using System.Net.Http.Formatting;
using System.Threading.Tasks;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;

namespace Westwind.Web.WebApi
{
    public class JsonNetFormatter : MediaTypeFormatter
    {
        public JsonNetFormatter()
        {
            SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
        }

        protected override bool CanWriteType(Type type)
        {
            // don't serialize JsonValue structures - use the default formatter for those
            if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                return false;

            return true;
        }

        protected override bool CanReadType(Type type)
        {
            if (type == typeof(IKeyValueModel))
                return false;

            return true;
        }

        protected override Task<object> OnReadFromStreamAsync(Type type, Stream stream,
            System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
        {
            var task = Task<object>.Factory.StartNew(() =>
            {
                var settings = new JsonSerializerSettings()
                {
                    NullValueHandling = NullValueHandling.Ignore,
                };

                var sr = new StreamReader(stream);
                var jreader = new JsonTextReader(sr);

                var ser = JsonSerializer.Create(settings);
                ser.Converters.Add(new IsoDateTimeConverter());

                object val = ser.Deserialize(jreader, type);
                return val;
            });

            return task;
        }

        protected override Task OnWriteToStreamAsync(Type type, object value, Stream stream,
            System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext,
            System.Net.TransportContext transportContext)
        {
            var task = Task.Factory.StartNew(() =>
            {
                var settings = new JsonSerializerSettings()
                {
                    NullValueHandling = NullValueHandling.Ignore,
                };
                settings.Converters.Add(new IsoDateTimeConverter());

                string json = JsonConvert.SerializeObject(value, Formatting.Indented, settings);

                byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                stream.Write(buf, 0, buf.Length);
                stream.Flush();
            });

            return task;
        }
    }
}
#endif

One advantage of the Json.NET serializer is that you can specify a few options for how things are formatted and handled. You get null value handling and you can plug in the IsoDateTimeConverter, which is nice to produce proper ISO dates - something I would expect any JSON serializer to output these days.
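To see why the IsoDateTimeConverter matters, here's a small standalone snippet (not part of the formatter above; the exact default output depends on which Json.NET version you have, since older versions default to the Microsoft AJAX date format):

var data = new { entered = new DateTime(2012, 3, 8, 15, 33, 19, DateTimeKind.Utc) };

// default serialization - older Json.NET versions emit the MS AJAX style date
string defaultJson = JsonConvert.SerializeObject(data);
// e.g. {"entered":"\/Date(1331220799000)\/"}

// with the IsoDateTimeConverter plugged in you get ISO 8601 dates instead
string isoJson = JsonConvert.SerializeObject(data, new IsoDateTimeConverter());
// e.g. {"entered":"2012-03-08T15:33:19Z"}

ISO 8601 dates are what most JavaScript client libraries expect these days, which is why the formatter above adds the converter on both the read and write paths.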
Hooking up the Formatters

Once you've created the custom formatters you need to enable them for your Web API application. To do this use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:

protected void Application_Start(object sender, EventArgs e)
{
    // Action based routing (used for RPC calls)
    RouteTable.Routes.MapHttpRoute(
        name: "StockApi",
        routeTemplate: "stocks/{action}/{symbol}",
        defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" }
    );

    // WebApi Configuration to hook up formatters and message handlers (optional)
    RegisterApis(GlobalConfiguration.Configuration);
}

public static void RegisterApis(HttpConfiguration config)
{
    // Add JavaScriptSerializer formatter instead - add at top to make default
    //config.Formatters.Insert(0, new JavaScriptSerializerFormatter());

    // Add Json.NET formatter - add at the top so it fires first!
    // This leaves the old one in place so JsonValue/JsonObject/JsonArray are still handled
    config.Formatters.Insert(0, new JsonNetFormatter());
}

One thing to remember here is the GlobalConfiguration object, which is Web API's static configuration instance. I think this thing is seriously misnamed, given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I'm also not removing the old formatter because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since the formatters are processed in sequence and I exclude processing for those types, JsonValue et al. still get properly serialized/deserialized.

Summary

Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability, with relatively limited effort, to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services, you can plug in the JavaScriptSerializer and get exactly the same serializer you used in the past, so your results will be the same and won't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like dictionaries and dates, for example), so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward it looks like Microsoft is planning on plugging Json.NET into Web API and making that the default. I think that's an awesome choice, since Json.NET has been around forever, is fast and easy to use, and provides a ton of functionality as part of this great library. I just wish Microsoft had figured this out sooner instead of integrating it now at the last minute, especially given that Json.NET has a similar set of lower-level JSON objects to JsonValue/JsonObject, which now end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray and now Json.NET). For years I've been using my own JSON serializer because the built-in choices are limited. However, with an official endorsement of Json.NET I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api  AJAX  ASP.NET

    Read the article

  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, and include some fantastic enhancements to ASP.NET and the Entity Framework. You can download and start using them now. Below are details on a few of the great ASP.NET, Web Development, and Entity Framework improvements you can take advantage of with this release. Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials.

One ASP.NET

With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc.), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File-New Project with VS 2013 you'll now see a single ASP.NET Project option. Selecting this project will bring up an additional dialog that allows you to start with a base project template, and then optionally add/remove the technologies you want to use in it. For example, you could start with a Web Forms template and add Web API or MVC support to it, or create an MVC project and also enable Web Forms pages within it. This makes it easy for you to use any ASP.NET technology you want within your apps, and take advantage of any feature across the entire ASP.NET technology span.

Richer Authentication Support

The new "One ASP.NET" project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your applications – and makes it much easier to build secure applications that enable SSO from a variety of identity providers. For example, when you start with the ASP.NET Web Forms or MVC templates you can easily add any of the following authentication options to the application: No Authentication; Individual User Accounts (Single Sign-On support with Facebook, Twitter, Google, and Microsoft ID – or Forms Auth with ASP.NET Membership); Organizational Accounts (Single Sign-On support with Windows Azure Active Directory); and Windows Authentication (Active Directory in an intranet application). The Windows Azure Active Directory support is particularly cool. Last month we updated Windows Azure Active Directory so that developers can now easily create any number of Directories using it (for free and deployed within seconds). It now takes only a few moments to enable single-sign-on support within your ASP.NET applications against these Windows Azure Active Directories. Simply choose the "Organizational Accounts" radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory. This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it. Now when you run the app your users can easily and securely sign in using their Active Directory credentials within it – regardless of where the application is hosted on the Internet. For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013.

Responsive Project Templates with Bootstrap

The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap. Bootstrap is an open source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tablets and desktops.
For example, you can see this responsive behavior in the home page created by the MVC template. When you resize the browser to a narrow window to see how it would look on a phone, the contents gracefully wrap around and the horizontal top menu turns into an icon; when you click the menu icon it expands into a vertical menu – which enables a good navigation experience for devices with small screen real estate. We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices – and enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there. You can learn more about Bootstrap here.

Visual Studio Web Tooling Improvements

Visual Studio 2013 includes a new, much richer HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense grouping, and other great improvements. For example, typing "ng-" on an HTML element will show the IntelliSense for AngularJS. This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications. The HTML editor can also now inspect your page at design time to determine all of the CSS classes that are available; the auto-completion list can, for example, contain classes from Bootstrap's CSS file, so there's no more guessing at which Bootstrap element names you need to use. Visual Studio 2013 also comes with built-in support for both CoffeeScript and LESS editing. The LESS editor comes with all the cool features from the CSS editor and has specific IntelliSense for variables and mixins across all the LESS documents in the @import chain.

Browser Link – SignalR channel between browser and Visual Studio

The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, Firefox, Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers at the same time. This makes it much easier to develop and test against multiple browsers in parallel. Browser Link also exposes an API to enable developers to write Browser Link extensions. By enabling developers to take advantage of the Browser Link API, it becomes possible to create very advanced scenarios that cross boundaries between Visual Studio and any browser that's connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser's developer tools, remote-controlling mobile emulators and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward.

ASP.NET Scaffolding

ASP.NET Scaffolding is a new code generation framework for ASP.NET Web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms.
When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API controller, the required NuGet packages and references to enable Web API are added to your project automatically. To do this, just choose the Add->New Scaffold Item context menu. Support for scaffolding async controllers uses the new async features from Entity Framework 6.

ASP.NET Identity

ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports claims-based authentication, where the user's identity is represented as a set of claims from a trusted issuer. Users can log in by creating an account on the website using a username and password, or they can log in using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or using organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.

ASP.NET Web API 2

ASP.NET Web API 2 has a bunch of great improvements, including:

Attribute routing - ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers (a sketch appears after this section).

OAuth 2.0 support - The Web API and Single Page Application project templates now support authorization using OAuth 2.0. OAuth 2.0 is a framework for authorizing client access to protected resources. It works for a variety of clients including browsers and mobile devices.

OData Improvements - ASP.NET Web API also now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Specific enhancements in ASP.NET Web API 2 OData include support for $select, $expand, $batch, and $value; improved extensibility; type-less support; and the ability to reuse an existing model.

OWIN Integration - ASP.NET Web API now fully supports OWIN and can be run on any OWIN-capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR (a minimal self-hosting sketch also appears after this section). For more information, see Use OWIN to Self-Host ASP.NET Web API.

More Web API Improvements - In addition to the features above there have been a host of other improvements in ASP.NET Web API, including CORS support, authentication filters, filter overrides, improved unit testability, and a portable ASP.NET Web API client. To learn more go to http://www.asp.net/web-api/
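As a rough illustration of the attribute routing feature mentioned above - the controller, model and route template here are hypothetical examples, not taken from the release itself:

using System.Collections.Generic;
using System.Web.Http;

// hypothetical model used only for this sketch
public class Author
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class BooksController : ApiController
{
    // GET api/books/5/authors - the route is declared directly on the action
    [HttpGet]
    [Route("api/books/{bookId:int}/authors")]
    public IEnumerable<Author> GetAuthorsForBook(int bookId)
    {
        // a real action would query a data store; hard-coded here to keep the sketch self-contained
        return new[] { new Author { Id = 1, Name = "Jane Austen" } };
    }
}

// attribute routes are only picked up after MapHttpAttributeRoutes is called in your Web API configuration
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();
    }
}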
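And here's a minimal OWIN self-hosting sketch - it assumes the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package is installed, and the URL and class names are made up for the example:

using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    // OWIN calls Configuration to let you assemble the middleware pipeline
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();
        app.UseWebApi(config);
    }
}

class Program
{
    static void Main()
    {
        // Web API runs inside this console process - no IIS involved
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            Console.WriteLine("Web API self-hosted at http://localhost:9000/ - press Enter to stop");
            Console.ReadLine();
        }
    }
}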
ASP.NET SignalR 2

ASP.NET SignalR is a library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. We've added support for Cross-Origin Resource Sharing (CORS) to SignalR 2.0. iOS and Android support for SignalR have also been added using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components from the SignalR wiki). We've also added support for the Portable .NET Client in SignalR 2.0 and created a new self-hosting package. This change makes the setup process for SignalR much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr.

ASP.NET MVC 5

The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the above ASP.NET Web API, SignalR and Identity improvements. You can also customize your MVC project and configure authentication using the One ASP.NET project creation wizard. The MVC templates have also been updated to use ASP.NET Identity and Bootstrap as well. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features, including Authentication filters, which allow you to specify authentication logic per-action, per-controller or globally for all controllers, and Attribute Routing, which allows you to define your routes on actions or controllers. To learn more go to http://www.asp.net/mvc

Entity Framework 6 Improvements

Visual Studio 2013 ships with Entity Framework 6, which brings a lot of great new features to the data access space:

Async and Task<T> Support - EF6's new Async Query and Save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data access scenarios. This allows you to free up threads that might otherwise be blocked on data access requests, and enables them to be used to process other requests whilst you wait for the database engine to process operations. When the database server responds the thread will be re-queued within your ASP.NET application and execution will continue. This enables you to easily write significantly more scalable server code. An example ASP.NET Web API action that makes use of the new EF6 async query methods is shown in the first sketch after this section.

Interception and Logging - Interception and SQL logging allow you to view – or even change – every command that is sent to the database by Entity Framework. This includes a simple, human-readable log – which is great for debugging – as well as some lower-level building blocks that give you access to the command and results. An example of wiring up the simple log to Debug in the constructor of an MVC controller is shown in the second sketch after this section.

Custom Code-First Conventions - The new Custom Code-First Conventions enable bulk configuration of a Code First model – reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don't match the Code First conventions. For example, the convention in the third sketch after this section configures all properties that are called 'Key' to be the primary key of the entity they belong to. This is different than the default Code First convention that expects Id or <type name>Id.
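First, a sketch of an async Web API action using EF6's async query methods - the StoreContext and Product types are hypothetical stand-ins for your own model:

using System.Collections.Generic;
using System.Data.Entity;   // EF6 - provides DbContext and the ToListAsync extension
using System.Linq;
using System.Threading.Tasks;
using System.Web.Http;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public class ProductsController : ApiController
{
    // the request thread is released while the database works; execution resumes when results arrive
    public async Task<IList<Product>> GetProducts()
    {
        using (var db = new StoreContext())
        {
            return await db.Products.OrderBy(p => p.Name).ToListAsync();
        }
    }
}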
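Second, a sketch of wiring the simple SQL log up to the debug output from an MVC controller's constructor, reusing the hypothetical StoreContext from the previous sketch:

using System.Diagnostics;
using System.Web.Mvc;

public class ProductsAdminController : Controller
{
    private readonly StoreContext db = new StoreContext();

    public ProductsAdminController()
    {
        // every command EF sends to the database is written to the Debug output window
        db.Database.Log = sql => Debug.WriteLine(sql);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
            db.Dispose();
        base.Dispose(disposing);
    }
}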
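Third, a sketch of a custom Code First convention that makes any property named 'Key' the primary key of its entity - again shown on a hypothetical context:

using System.Data.Entity;

public class StoreContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // any property named "Key" becomes the primary key of the entity it belongs to,
        // instead of the default Id / <type name>Id convention
        modelBuilder.Properties()
                    .Where(p => p.Name == "Key")
                    .Configure(p => p.IsKey());
    }
}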
Connection Resiliency - The new Connection Resiliency feature in EF6 enables you to register an execution strategy to handle – and potentially retry – failed database operations. This is especially useful when deploying to cloud environments where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible – but overridable – defaults for the number of retries and time between retries when errors occur. Registering it is simple using the new Code-Based Configuration support.

These are just some of the new features in EF6. You can visit the release notes section of the Entity Framework site for a complete list of new features.

Microsoft OWIN Components

Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET "Katana" project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's new in OWIN and Katana.

Summary

Today's Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These features span from server framework to data access to tooling to client-side HTML development. They also integrate some great open-source technology and contributions from our developer community. Download and start using them today!

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

< Previous Page | 198 199 200 201 202 203 204 205 206 207 208 209  | Next Page >