Search Results

Search found 5279 results on 212 pages for 'customer'.


  • Contract Work - Lessons Learned

    - by samerpaul
    I thought I would write a post of a different nature today, but still relevant to the tech world. I do a lot of contract jobs myself and really enjoy it. It's nice to keep jumping from project to project, and not having to go to an office or keep regular hours. I have learned a lot in the past few years of doing it (both from experience and from help given to me by others and the internet), so I thought I'd share some of that knowledge and experience today. So here are my own personal "lessons learned" that will hopefully help you if you find yourself doing contract work.

    Should I take the job?

    This is the first step. Assuming you were given sufficient information about what they want, think about what you're capable of doing and whether or not you should take this job. Personally, my rule is: if I know it's possible, I'll say yes, even if I don't yet know how to do it. The internet is such a great help that it would be rare to run into an issue you can't figure out with some assistance. So if your clients are asking for something you don't yet know how to program, but you know it can be done on the platform, go for it. How else are you going to learn? Use this rule with some limitation, however. If you're really lacking the expertise or foundation in something, then unless you have tons of time to complete the project, I wouldn't say yes. For example, I haven't personally done any 3D/OpenGL programming yet, so I wouldn't say yes to a project that uses it extensively.

    OK, so I want the job, but how much do I charge?

    This part can be tricky. There is no set formula, but here are some personal guidelines that will hopefully give you a better idea of how to confidently ask your price and have them accept.

    How much time do you have to complete the project? If it's shorter than average, charge more. You can even make a subtle note about this (or not so subtle if they still don't get it). If the time seems too short (i.e. near impossible to complete), be sure to say so. It looks bad to promise a deadline you can't keep, and it makes it less likely that they'll return to you for more work.

    Your hourly rate: How long have you been working in that language? Do you have existing projects to back you up, or previous contacts who can vouch for your work? Are there very few people with your particular skill set? All of these things feed into setting an hourly rate. I'd also try a quick Google search for your line of work to see what the industry standard is at that point in time. I wouldn't price too low, because you want to make your time worth it. You also want them to feel like they're paying for quality work (assuming you can deliver it). Finally, think about your client. If it's a small business, don't price too high if you want the job. If it's an enterprise (like a Fortune company), don't be afraid to price higher; they have the budget for it.

    Fixed price: If they want a fixed-price project, think about how many hours it will take you to complete it and multiply that by the hourly rate you set for yourself. Then, honestly, I would add 10-20% on top of that. Why? Because nothing ever works exactly how you want it to. There are lots of times when something "trivial" is way harder than it should be, or something that "should work" doesn't for hours, and that eats away at your hourly rate. I can't count the number of times I encountered a logical bug that took away an entire day's work, because debuggers don't help in those cases. By adding that padding, it's still OK to have those days where you don't get as much done as you want. Another useful tip: depending on your client and the scope, you most likely want to agree that you both sign off on a specification sheet before doing any work, and that any changes will result in a re-evaluation of the price. This helps protect you from being handed a huge new addition to the project halfway in, without any extra payment.

    Scope of project: Finally, is it a huge project? Is it really small and fast? This affects how much your client will be willing to pay. If it sounds big, they will be willing to pay more for it. If it seems really small, you won't be able to get away with a large asking price (as easily).

    Ok, I priced it, now what?

    Now that you have the price, you want to make sure it feels justified to your client. I never set a price before I can really think about everything. For example, if you're still in your introduction phase and they want a price, don't give one! Just mention that you will send them a proposal sheet with all the features outlined and a price for everything. You don't want to shout out a low number and then deliver something that costs way more. You also don't want to shock them with a big number before they feel like they are getting a great product. Make up a proposal document in a word editor. Personally, I leave the price until the very end. Why? Because by the time they reach the end, you've already discussed all the great features you plan to implement and how it's the best product they'll ever use, so your price comes off as a steal! If you hit them up front with a price, they will read through the document with a negative bias. Think about those commercials on TV. They always go on about their product, then at the end ask "What would you pay for something like this? $100? $50? How about $20!!" This is not by accident.

    Scenario: I finished the job way earlier than expected

    You have two options. You can either polish the hell out of the application, and even throw in a few bonus features (assuming they are in line with the customer's needs), or you can sit on it until you near your deadline. Why don't you want to turn it in too early? Because you should treat that extra time as a surplus. If you said it was going to take you 3 weeks, and it took you only 1, you have a surplus of 2 weeks. I personally don't want to let them know that I can do a 3-week project in 1 week. Why not? Because that may not always be the case! I may later have a 3-week project that takes all 3 weeks, but if I set a precedent of delivering super early, the pressure is on for that longer project. It also makes it harder to quote longer times if you keep delivering too early. Feel free to deliver early, but again, don't do it too early. They may also wonder why they paid you for 3 weeks of work if you're done in 1. They may further wonder whether the product sucks, or what is wrong with it, if it's done so early. I would just polish the application. Everyone loves polish in their applications. The smallest details are what take an application from "functional" to "fantastic". And since you are still delivering on time, they are still going to be very happy with you.

    Scenario: It's taking way too long to finish this, and the deadline is nearing!

    This is not a fun scenario to be in, but it'll happen. Sometimes the scope of the project gets out of hand. The best policy here is openness and honesty. Tell them that the project is taking longer than expected, and give a reasonable estimate for when you think you'll have it done. I typically explain it in a way that makes it sound like it isn't something I did wrong, but rather something about the nature of the project. This really goes for any scenario, to be honest: just stay open and communicative about your progress. That doesn't mean you should email them every five minutes (unless they want you to), but it does mean that maybe every few days or once a week you give them an update on where you're at and what's next. They'll be happy to know they are paying for progress, and it'll make it easier to ask for an extension when something goes wrong, because they know you've been working on it all along.

    Final tips and thoughts

    In general, contract work is really fun and rewarding. It's nice to learn new things all the time, as mandated by the project, and to challenge yourself to do things you may not have done before. The key is to build a great relationship with your clients for future work and for recommendations. I am always very honest with them and I never promise something I can't deliver. Again: under-promise, over-deliver! I hope this has proved helpful!

    Cheers,
    samerpaul

    Read the article

  • CodePlex Daily Summary for Thursday, October 17, 2013

    CodePlex Daily Summary for Thursday, October 17, 2013Popular ReleasesSocial Network Importer for NodeXL: SocialNetImporter(v.1.9): This new version includes: - Download latest status update and use it as vertex tooltip - Limit the timelines to parse to me, my friends or both - Fixed some reported bugs about the fan page and group importer - Fixed the login bug reported latelyDotNetNuke® Wiki: 05.00.00: Changes made to better support upgrades and the removal of deprecated legacy files that were causing formatting issues. Updated the Version number to better indicate the significance of the C# migration and the new DNN 7.0.2 minimum requirement.TerrariViewer: TerrariViewer v7.1 [Terraria Inventory Editor]: You can now backspace in number fields Items added in 1.2.0.3 no longer corrupt player files Buff durations capped at 9999999 Item stacks capped at 9999999 Version info added Prefix IDs corrected Shoe and Eye color box are now properly clickable Moved Bank and Safe into their own tab Users will now be notified of new updatesPython Tools for Visual Studio: 2.0: PTVS 2.0 We’re pleased to announce the release of Python Tools for Visual Studio 2.0 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, IPython, and cross platform and cross language debugging support. QUICK VIDEO OVERVIEW For a quick overview of the general IDE experience, please watch this v...C# Intellisense for Notepad++: Release v.1.0.8.2: Solved scrolling problem after DocumentFormatting Implemented "format as you type" --- To avoid the DLLs getting locked by OS use MSI file for the installation.CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.8.2: Solved scrolling problem after DocumentFormatting Implemented "format as you type" --- To avoid the DLLs getting locked by OS use MSI file for the installation.Collection Commander for Configuration Manager 2012: CMCollCtr 1.0.0: Change log: - MSI Setup - UI Improved - CM12 Console integration - New Powershell code snippets - Client Center IntegrationLINQ to Twitter: LINQ to Twitter v2.1.09: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.Sandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. 
There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...C++ REST SDK (codename "Casablanca"): C++ REST SDK 1.3.0: This release fixes multiple customer reported issues as well as the following: Full support for Dev12 binaries and project files Full support for Windows XP New sample highlighting the Client and Server APIs : BlackJack Expose underlying native handle to set custom options on http_client Improvements to Listener Library Note: Dev10 binaries have been dropped as of this release, however the Dev10 project files are still available in the Source CodeAD ACL Scanner: 1.3.2: Minor bug fixed: Powershell 4.0 will report: Select—Object: Parameter cannot be processed because the parameter name p is ambiguous.Json.NET: Json.NET 5.0 Release 7: New feature - Added support for Immutable Collections New feature - Added WriteData and ReadData settings to DataExtensionAttribute New feature - Added reference and type name handling support to extension data New feature - Added default value and required support to constructor deserialization Change - Extension data is now written when serializing Fix - Added missing casts to JToken Fix - Fixed parsing large floating point numbers Fix - Fixed not parsing some ISO date ...Fast YouTube Downloader: YouTube Downloader 2.2.0: YouTube Downloader 2.2.0VidCoder: 1.5.8 Beta: Added hardware acceleration options: Bicubic OpenCL scaling algorithm, QSV decoding/encoding and DXVA decoding. Updated HandBrake core to SVN 5834. Updated VidCoder setup icon. Fixed crash when choosing the mp4v2 container on x86 and opening on x64. Warning: the hardware acceleration features require specific hardware or file types to work correctly: QSV: Need an Intel processor that supports Quick Sync Video encoding, with a monitor hooked up to the Intel HD Graphics output and the lat...ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.2: version 3.5.2 - fix for setting single value to multivalue controls - datepicker min max date offset fix - html encoding for keys fix - enable Column.ClientFormatFunc to be a function call that will return a function version 3.5.1 - fixed html attributes rendering - fixed loading animation rendering - css improvements version 3.5 ========================== - autosize for all popups ( can be turned off by calling in js awe.autoSize = false ) - added Parent, Paremeter extensions ...Wsus Package Publisher: Release v1.3.1310.12: Allow the Update Creation Wizard to be set in full screen mode. Fix a bug which prevent WPP to Reset Remote Sus Client ID. Change the behavior of links in the Update Detail Viewer. Left-Click to open, Right-Click to copy to the Clipboard.WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.6.maint: I think this covers all of the issues. new additions: fixed the thumbnail problem for backgrounds. general clean up and error checking. need to get this put through the wringer and all feedback is welcome.BIDS Helper: BIDS Helper 1.6.4: This BIDS Helper release brings the following new features and fixes: New Features: A new Bus Matrix style report option when you run the Printer Friendly Dimension Usage report for an SSAS cube. The Biml engine is now fully in sync with the supported subset of Varigence Mist 3.4. This includes a large number of language enhancements, bugfixes, and project deployment support. 
Fixed Issues: Fixed Biml execution for project connections fixing a bug with Tabular Translations Editor not a...Free language translator and file converter: Free Language Translator 3.4: fixes for new version look up.PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.0.6: Added PersistPrompt parameter to Show-InstallationWelcome and Show-InstallationPrompt. Prompt window is persistently returned to center screen after interval specified in config file (default 10 seconds). For Show-InstallationWelcome, this only takes effect if deferral is not available to user. The user will have no option but to respond to the prompt - resistance is futile! Added example advanced Office 2010 deployment script Asynchronous actions now write to the same log file as synchro...New ProjectsAdditionPage: This is a simple ASP.NET in VB.NET page that allows users to enter 2 numbers, and display their sum. Arad Enterprise Messaging Proxy: In these situations, the overhead and configuration complexity of an external webserver is seldom worth the trouble. AEMGP Server Implementation base on Socke.Boring Sudoku: Boring Sudoku is Sudoku game, made for SFML-game programming tutorial. Feel free to download the source code to learn more about game programming.Configurator Debug: O Configurator Debug é uma ferramenta que auxilia na depuração das queries da feature do Configurador no Dynamics AX 2009.Fast Query: FastQuery - is a tool to execute MS SQL Queries without SQL Server Management Studio (SSMS). gdrwebapp1: Test ProjectK2 workflow Manifestation d'interet with custom IPF methode: K2 SamplesL Language Interpreter: We are now in dev start stage, when none of the functionality is available, but probably you will be able to see it published. this shoud have tex outputLameXP - Audio Encoder Front-End: LameXP is a graphical user-interface for various of audio encoders: It allows you convert audio files from one format to another one in the most simple way.Learn Node.js: Node.js express jade ...localcompare: localhistory add new featureOpen source WPF PDF Viewer: Open source PDF Viewer based on Apitron PDF Rasterizer for .NET component that performs high-quality conversion from PDF file to an image.Pfz.AnimationManagement: A .NET animation library that supports both declarative and imperative animations, capable of creating from simple animation to entire games.SerieSpotter: .net projectSimpleUnitity: ??????? ??? DataBase, TextLog, CacheSPOnlineDevelopTool(SharePoint ??????): SPOnlineDevelopTool?????????SharePoint WebPart?SharePoint WebPartVoya Media: Voya Media is free, open-source and provides one central place to play and organize all your music, pictures and videos.

    Read the article

  • CodePlex Daily Summary for Monday, October 21, 2013

    CodePlex Daily Summary for Monday, October 21, 2013Popular ReleasesVirtual Wifi Hotspot for Windows 7 & 8: Virtual Router Plus 2.6.0: Virtual Router Plus 2.6.0Fast YouTube Downloader: Fast YouTube Downloader 2.3.0: Fast YouTube DownloaderABCat: ABCat v.2.0a: ?????? ????? ??? ????????? ????????????. ???????? ???, ?? ?? ????? ???? :) ?? ??????? ??????????? ???????? ?? Windows 7 - ????? ??????? ????????? ?? ???? ??? ?????? ??????.Magick.NET: Magick.NET 6.8.7.101: Magick.NET linked with ImageMagick 6.8.7.1. Breaking changes: - Renamed Matrix classes: MatrixColor = ColorMatrix and MatrixConvolve = ConvolveMatrix. - Renamed Depth method with Channels parameter to BitDepth and changed the other method into a property.VidCoder: 1.5.9 Beta: Added Rip DVD and Rip Blu-ray AutoPlay actions for Windows: now you can have VidCoder start up and scan a disc when you insert it. Go to Start -> AutoPlay to set it up. Added error message for Windows XP users rather than letting it crash. Removed "quality" preset from list for QSV as it currently doesn't offer much improvement. Changed installer to ignore version number when copying files over. Should reduce the chances of a bug from me forgetting to increment a version number. Fixed ...MSBuild Extension Pack: October 2013: Release Blog Post The MSBuild Extension Pack October 2013 release provides a collection of over 480 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUI...VG-Ripper & PG-Ripper: VG-Ripper 2.9.49: changes NEW: Added Support for "ImageTeam.org links NEW: Added Support for "ImgNext.com" links NEW: Added Support for "HostUrImage.com" links NEW: Added Support for "3XVintage.com" linksMedia Companion: Media Companion MC3.583b: As before release but fixed for no movie poster sourcesNew* Both - Added 'An' as option to ignore in title * Movie - Renaming - added %Z - Sorttitle to Legend * Movie - Renaming - added %O - Audio Channels to Legend * Movie - Remove a poster source from priority list. Reset List back to defaults. * Made Media Companion truly portable application. Fixed* Movie - browse for Poster Or Fanart, allows for jpg, tbn, png and bmp images * Movie - Alt Fanart Browser - Url or Browse window now fully...MoreTerra (Terraria World Viewer): MoreTerra 1.11.3.1: Release 1.11.3.1 ================ = New Features = ================ Added markers for Copper Cache, Silver Cache and the Enchanted Sword. ============= = Bug Fixes = ============= Use Official Colors now no longer tries to change the Draw Wires option instead. World reading was breaking for people with a stock 1.2 Terraria version. Changed world name reading so it does not crash the program if you load MoreTerra while Terraria is saving the world. =================== = Feature Removal = =...patterns & practices - Windows Azure Guidance: Cloud Design Patterns: 1st drop of Cloud Design Patterns project. It contains 14 patterns with 6 related guidance.Player Framework by Microsoft: Player Framework for Windows and WP (v1.3): Includes all changes in v1.3 beta 1 and v1.3 beta 2 Support for Windows 8.1 RTM and VS2013 RTM Xaml: New property: AutoLoadPluginTypes to help control which stock plugins are loaded by default (requires AutoLoadPlugins = true). 
Support for SystemMediaTransportControls on Windows 8.1 JS: Support for visual markers in the timeline. JS: Support for markers collection and markerreached event. JS: New ChaptersPlugin to automatically populate timeline with chapter tracks. JS: Audio an...Json.NET: Json.NET 5.0 Release 8: Fix - Fixed not writing string quotes when QuoteName is falsePowerShell Community Extensions: 3.1 Production: PowerShell Community Extensions 3.1 Release NotesOct 17, 2013 This version of PSCX supports Windows PowerShell 3.0 and 4.0 See the ReleaseNotes.txt download above for more information.SQL Power Doc: Version 1.0.2.1: Misc. bug fixes Added logic to resolve members of a Windows Group server login Added columns to Excel workbooks to show definitions for server permissions, server roles, database permissions, and database rolesSocial Network Importer for NodeXL: SocialNetImporter(v.1.9): This new version includes: - Download latest status update and use it as vertex tooltip - Limit the timelines to parse to me, my friends or both - Fixed some reported bugs about the fan page and group importer - Fixed the login bug reported latelyTerrariViewer: TerrariViewer v7.1 [Terraria Inventory Editor]: You can now backspace in number fields Items added in 1.2.0.3 no longer corrupt player files Buff durations capped at 9999999 Item stacks capped at 9999999 Version info added Prefix IDs corrected Shoe and Eye color box are now properly clickable Moved Bank and Safe into their own tab Users will now be notified of new updatesPython Tools for Visual Studio: 2.0: PTVS 2.0 We’re pleased to announce the release of Python Tools for Visual Studio 2.0 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, IPython, and cross platform and cross language debugging support. QUICK VIDEO OVERVIEW For a quick overview of the general IDE experience, please watch this v...LINQ to Twitter: LINQ to Twitter v2.1.09: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.Sandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. 
There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...C++ REST SDK (codename "Casablanca"): C++ REST SDK 1.3.0: This release fixes multiple customer reported issues as well as the following: Full support for Dev12 binaries and project files Full support for Windows XP New sample highlighting the Client and Server APIs : BlackJack Expose underlying native handle to set custom options on http_client Improvements to Listener Library Note: Dev10 binaries have been dropped as of this release, however the Dev10 project files are still available in the Source CodeNew Projects4Elements game project: Project for a game were players will be able to earn money by just playingBible: This project aims to deliver an awesome Bible application for as many platforms as possible.cIFrex: cIFrexConnecting SharePoint with Exchange Server 2010/2013 using remote sessions: The project focuses on connecting SharePoint 2013 with Exchange Server 2010/2013 via Remote PowerShell with C# API Crzy Cms: Crzy CMS is a content management system created in ASP.NET using MVC/Entity Framework.DownloadsArchiver: Kann automatisch das Downloads-Verzeichnis aufräumen.GatePass: Gatepass SystemGenerate PowerShell scripts for SharePoint 2010 Site Collection Administration: Winform tool to generate PowerShell scripts, to move, create, backup and restore site collections It needs to be run on SharePoint Server 2010Generate PowerShell scripts for SharePoint 2013 Site Collection Administration: Winform tool to generate PowerShell scripts, to move, create, backup and restore site collections It needs to be run on SharePoint Server 2013GTerm: GTerm is alternative Terminal for Windows Vista/XP/7 and 8.Ignitron Chess: A newly created software product for playing chess by Ignitron Development Team.Introduccion a C# con VS2012: El curso de Introducción a C# con Visual Studio 2012, tiene como objetivo principal enseñar al estudiante los conceptos básicos de programación utilizando C#Punch Clock: Punch Clock is a online tool for any employee to sign up and calculate their daily,weekly monthly hours and download the report in PDF/Excel format. PVMonitor: This project will provide a monitoring dashboard for PV systems.qttexteditor: qttexteditorRegister Interest app: Register Interest is an app template where you can use to record interest of attendees, useful in marketing distribution.SharePoint Simple Developer: Project allows to simply SharePoint 2010 and SharePoint 2013 daily developers tasks. Main goal is to implement validation of built solutions. Silverlight QB Rating Calculator: Silverlight QB Rating Calculator is a Silverlight 5 App that calculates a QB's NFL and NCAA passing efficiency ratings.SimpleAdditionPage: This projects consists of a simple ASP.NET in VB.NET page that allows users to enter 2 numbers, and display their sum.SP SIN Store: SP SIN Store is an extension to SP SIN that allows users to install solutions in SharePoint similar to the WordPress plugins. Project is in Alpha stage.

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any particular database engine in mind. What would you do? How would you ensure that the transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers, as it turns out, don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry stating an addition of said amount to B's account. To save space, both sides can be recorded in a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The original question, "how do you enforce the non-negative balance rule?", then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo entry stating the balance as of midnight or so. This lets you avoid doing math on the fly over too many transactions; you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more accurate to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
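    To make the ledger idea concrete, here is a minimal sketch of that insert-validate-compensate cycle. The choice of pymongo, the helper names and the Validated flag are my own illustrative assumptions; only the {Timestamp, FromAccountId, ToAccountId, Amount} shape comes from the post.

        # A minimal sketch, not the post's own code. Assumes pymongo and a
        # "bank.ledger" collection; field names follow the shape suggested above.
        from datetime import datetime
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        ledger = client.bank.ledger

        def balance(acct):
            """Sum all deposits minus withdrawals for an account."""
            pipeline = [
                {"$match": {"$or": [{"FromAccountId": acct}, {"ToAccountId": acct}]}},
                {"$group": {"_id": None, "bal": {"$sum": {
                    "$cond": [{"$eq": ["$ToAccountId", acct]},
                              "$Amount",
                              {"$multiply": [-1, "$Amount"]}]}}}},
            ]
            result = list(ledger.aggregate(pipeline))
            return result[0]["bal"] if result else 0

        def transfer(from_acct, to_acct, amount):
            """Insert a ledger entry, validate the source balance, and
            insert a compensating (reverse) entry if the balance went negative."""
            entry = {
                "Timestamp": datetime.utcnow(),
                "FromAccountId": from_acct,
                "ToAccountId": to_acct,
                "Amount": amount,
                "Validated": False,   # assumption: flag later set by the validator
            }
            res = ledger.insert_one(entry)   # single-document write: atomic

            if balance(from_acct) < 0:
                # Compensating entry: same amount flowing back, timestamped at
                # (or just after) the original, as described in the post.
                ledger.insert_one({
                    "Timestamp": entry["Timestamp"],
                    "FromAccountId": to_acct,
                    "ToAccountId": from_acct,
                    "Amount": amount,
                    "Validated": True,
                })
                return False
            ledger.update_one({"_id": res.inserted_id}, {"$set": {"Validated": True}})
            return True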
    In our case, in choosing a ledger approach, we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue.

    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the new ledger entry. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll about how cute a furry animal is, but not so good for business. The other write concerns vary from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all members of the replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance; the others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From then on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes: eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it burdens that one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then, if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to. So as far as they know, the transaction hasn't happened until they see it appear later.
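    As an aside, here is how those write-concern and read-preference choices might look with the pymongo driver. This is a sketch under my own assumptions (driver, replica set name, field names), not code from the post.

        # A minimal sketch of write-concern and read-preference choices with pymongo.
        from pymongo import MongoClient, ReadPreference, WriteConcern

        client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
        ledger = client.bank.ledger

        # "I don't care" -- unacknowledged; fine for furry-animal polls, not for money.
        fire_and_forget = ledger.with_options(write_concern=WriteConcern(w=0))

        # Acknowledged and journaled on the writable (primary) instance only.
        primary_only = ledger.with_options(write_concern=WriteConcern(w=1, j=True))

        # Acknowledged by a majority of the replica set -- slower, safer.
        majority = ledger.with_options(write_concern=WriteConcern(w="majority"))

        # Person A's "are we there yet?" query goes to the writable instance...
        mine = ledger.with_options(read_preference=ReadPreference.PRIMARY)
        my_entries = mine.find({"FromAccountId": "A"})

        # ...while everyone else can read from a (possibly lagging) replica.
        others = ledger.with_options(read_preference=ReadPreference.SECONDARY_PREFERRED)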
    We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They have already been trained by many online businesses that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. That works by simply adding some logic such that the query against the ledger never nets a transaction that is newer than, say, 15 minutes and whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1 ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would both be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted, but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
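    Two small sketches of those last ideas, the delayed-visibility query and the per-account validator assignment. The 15-minute window comes from the post; the Validated field name and the modulo-based assignment are illustrative assumptions.

        # Sketches of delayed visibility and of partitioning accounts across validators.
        from datetime import datetime, timedelta

        def visible_entries(ledger, account_id):
            """Hide entries that are both recent (< 15 minutes) and not yet validated,
            so replication and validation have time to catch up."""
            cutoff = datetime.utcnow() - timedelta(minutes=15)
            return ledger.find({
                "$or": [{"FromAccountId": account_id}, {"ToAccountId": account_id}],
                "$nor": [{"Timestamp": {"$gt": cutoff}, "Validated": False}],
            })

        NUM_VALIDATORS = 11

        def validator_for(account_id: int) -> int:
            """Each validator exclusively owns a slice of the account-number space,
            so balances for a given origin account are computed by one process."""
            return account_id % NUM_VALIDATORS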

    Read the article

  • Introducing Oracle Multitenant

    - by OracleMultitenant
    The First Database Designed for the Cloud

    Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this: a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high-level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I'll briefly describe our new multitenant architecture and explain its key benefits. Finally I'll mention some of the major use cases we see for Oracle Multitenant.

    Industry Trends

    We always start by talking to our customers about the pressures and challenges they're facing and what trends they're seeing in the industry. Some things don't change. They face the same pressures and the same requirements as ever. The pressures: do more with less; be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale; now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. The requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There's no time to stop and retool with new applications. What's new are the trends: the techniques used to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers (even engineered systems such as Exadata), our customers want to consolidate many applications onto fewer, larger servers. There's a move to standardized services, even self-service.

    Consolidation

    Consolidation is not new; companies have tried various approaches to consolidating databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of an OS and an RDBMS per VM, that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there's a shared OS overhead, but RDBMS process and memory overhead are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database, a development database or a test database, what do we do?
    We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every database that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources, and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier, the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required, and there is no tenant isolation: one bad apple can spoil the whole batch.

    New Architecture & Benefits

    In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term "pluggable database": "pluggable", which is new, and "database", which is familiar. Before we get to the exciting new stuff, let's discuss what hasn't changed. A pluggable database is a fully functional Oracle database. It's not watered down in any way. From the perspective of an application or an end user it hasn't changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases and they are all ready to run on Oracle Multitenant.

    So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level, "managing many as one", we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the "pluggability" capability gives us portability, and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another (a rough sketch of this appears below). I'll explore all these new capabilities in much more detail in a future posting.
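    As an illustration of that unplug/plug portability, here is a rough sketch of moving a PDB between two CDBs. The SQL statements are standard Oracle 12c syntax; the python-oracledb driver, the connection details, names and paths are my own assumptions, and in practice you would also deal with dropping the unplugged PDB from the source and copying datafiles if the CDBs do not share storage.

        # A rough sketch only: unplug a PDB from one CDB and plug it into another.
        # Names, credentials and paths are illustrative.
        import oracledb

        # Unplug from the source container database.
        src = oracledb.connect(user="sys", password="***", dsn="host1/CDB1",
                               mode=oracledb.AUTH_MODE_SYSDBA)
        with src.cursor() as cur:
            cur.execute("ALTER PLUGGABLE DATABASE sales_pdb CLOSE IMMEDIATE")
            cur.execute("ALTER PLUGGABLE DATABASE sales_pdb "
                        "UNPLUG INTO '/u01/app/oracle/sales_pdb.xml'")

        # Plug into the target container database.
        dst = oracledb.connect(user="sys", password="***", dsn="host2/CDB2",
                               mode=oracledb.AUTH_MODE_SYSDBA)
        with dst.cursor() as cur:
            # NOCOPY assumes both CDBs can see the same datafiles.
            cur.execute("CREATE PLUGGABLE DATABASE sales_pdb "
                        "USING '/u01/app/oracle/sales_pdb.xml' NOCOPY")
            cur.execute("ALTER PLUGGABLE DATABASE sales_pdb OPEN")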
    Use Cases

    We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones:

    - Development / Testing: individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
    - Consolidation of disparate applications using fewer, more powerful servers
    - Software as a Service: deploying separate copies of identical applications to individual tenants
    - Database as a Service: typically self-service provisioning of databases on the private cloud
    - Application Distribution from ISV / Installation by Customer: eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) - just plug in a PDB!
    - High-volume data distribution: literally via disk drives in envelopes distributed by truck! Distribution of things like GIS or MDM master databases
    - …various others!

    Benefits

    Previous approaches to consolidation have involved a trade-off between reductions in capital expenses (CapEx) and operating expenses (OpEx), and they've usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:

    - Minimize CapEx: more applications per server
    - Minimize OpEx: manage many as one; standardized procedures and services; rapid provisioning
    - Maximize agility: cloning for development and testing; portability through pluggability; scalability with RAC
    - Ease of adoption: applications run unchanged. It's a pure deployment choice; neither the database backend nor the application needs to be changed.

    In future postings I'll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed white paper about Oracle Multitenant, which is available here.

    Follow me: I tweet @OraclePDB #OracleMultitenant

    Read the article

  • CodePlex Daily Summary for Thursday, June 05, 2014

    CodePlex Daily Summary for Thursday, June 05, 2014Popular Releases51Degrees - Device Detection and Redirection: 3.1.2.3: Version 3.1 HighlightsDevice detection algorithm is over 100 times faster. Regular expressions and levenshtein distance calculations are no longer used. The device detection algorithm performance is no longer limited by the number of device combinations contained in the dataset. Two modes of operation are available: Memory – the detection data set is loaded into memory and there is no continuous connection to the source data file. Slower initialisation time but faster detection performanc...CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.27.0: CodeMap now indicates the type name for all members Implemented running scripts 'as administrator'. Just add '//css_npp asadmin' to the script and run it as usual. 'Prepare script for distribution' now aggregates script dependency assemblies. Various improvements in CodeSnipptet, Autcompletion and MethodInfo interactions with each other. Added printing line number for the entries in CodeMap (subject of configuration value) Improved debugging step indication for classless scripts ...Load Runner - HTTP Pressure Test Tool: Load Runner 1.1: 1. added support for measuring total in time / hits / traffic / requests 2. abstracted log, default log to console / text file 3. refactored codeKartris E-commerce: Kartris v2.6003: Bug fixes: Fixed issue where category could not be saved if parent categories not altered Updated split string function to 500 chars otherwise problems with attribute filtering in PowerPack Improvements: If a user has group pricing below that available through qty discounts, that will show in place of the qty discountMagicaVoxel: MagicaVoxel Renderer ver 0.01: MagicaVoxel Renderer (Win 0.01)ClosedXML - The easy way to OpenXML: ClosedXML 0.72.3: 70426e13c415 ClosedXML for .Net 4.0 now uses Open XML SDK 2.5 b9ef53a6654f Merge branch 'master' of https://git01.codeplex.com/forks/vbjay/closedxml 727714e86416 Fix range.Merge(Boolean) for .Net 3.5 eb1ed478e50e Make public range.Merge(Boolean checkIntersects) 6284cf3c3991 More performance improvements when saving.MCE Controller: MCE Controller V1.8.6: BETA. Adds option to disable all internal commands. If the "Disable Internal Commands" checkbox in settings is checked, MCE Controller will disable all internally defined commands (e.g. VK key codes, single characters, mouse commands, etc...) and only respond to commands defined in the MCEControl.commmands file. Attempts to fix inability for clients to reconnect after they forcibly close the connection. Build 1.8.6.37445Visual F# Tools: Daily Builds Preview 06-04-2014: This preview is released for use under a proprietary license.SEToolbox: SEToolbox 01.032.018 Release 1: Added ability to merge/join two ships, regardless of origin. Added Language selection menu to set display text language (for SE resources only), and fixed inherent issues. Added full support for Dedicated Servers, allowing use on PC without Steam present, and only the Dedicated Server install. Added Browse button for used specified save game selection in Load dialog. Installation of this version will replace older version.Service monitor: Version 3.1: Added the ability of the tool to restart itself in 'Admin' mode without UAC prompt. Note that this only works after the tool has run in 'Admin' mode at least once before. This is only supported on Vista, Windows 7 or later. 
See my blog post on how it is done.DNN Blog: 06.00.07: Highlights: Enhancement: Option to show management panel on child modules Fix: Changed SPROC and code to ensure the right people are notified of pending comments. (CP-24018) Fix: Fix to have notification actions authentication assume the right module id so these will work from the messaging center (CP-24019) Fix: Fix to issue in categories in a post that will not save when no categories and no tags selectedTEncoder: 4.0.0: --4.0.0 -Added: Video downloader -Added: Total progress will be updated more smoothly -Added: MP4Box progress will be shown -Added: A tool to create gif image from video -Added: An option to disable trimming -Added: Audio track option won't be used for mpeg sources as default -Fixed: Subtitle position wasn't used -Fixed: Duration info in the file list wasn't updated after trimming -Updated: FFMpegQuickMon: Version 3.14 (Pie release): This is unofficially the 'Pie' release. There are two big changes.1. 'Presets' - basically templates. Future releases might build on this to allow users to add more presets. 2. MSI Installer now allows you to choose components (in case you don't want all collectors etc.). This means you don't have to download separate components anymore (AllAgents.zip still included in case you want to use them separately) Some other changes:1. Add/changed default file extension for monitor packs to *.qmp (...VeraCrypt: VeraCrypt version 1.0d: Changes between 1.0c and 1.0d (03 June 2014) : Correct issue while creating hidden operating system. Minor fixes (look at git history for more details).Keepass2Android: 0.9.4-pre1: added plug-in support: See settings for how to get plug-ins! published QR plug-in (scan passwords, display passwords as QR code, transfer entries to other KP2A devices) published InputStick plugin (transfer credentials to your PC via bluetooth - requires InputStick USB stick) Third party apps can now simply implement querying KP2A for credentials. Are you a developer? Please add this to your app if suitable! added TOTP support (compatible with KeeOTP and TrayTotp) app should no l...Microsoft Web Protection Library: AntiXss Library 4.3.0: Download from nuget or the Microsoft Download Center This issue finally addresses the over zealous behaviour of the HTML Sanitizer which should now function as expected once again. HTML encoding has been changed to safelist a few more characters for webforms compatibility. This will be the last version of AntiXSS that contains a sanitizer. Any new releases will be encoding libraries only. We recommend you explore other sanitizer options, for example AntiSamy https://www.owasp.org/index....Z SqlBulkCopy Extensions: SqlBulkCopy Extensions 1.0.0: SqlBulkCopy Extensions provide MUST-HAVE methods with outstanding performance missing from the SqlBulkCopy class like Delete, Update, Merge, Upsert. Compatible with .NET 2.0, SQL Server 2000, SQL Azure and more! Bulk MethodsBulkDelete BulkInsert BulkMerge BulkUpdate BulkUpsert Utility MethodsGetSqlConnection GetSqlTransaction You like this library? Find out how and why you should support Z Project Become a Memberhttp://zzzproject.com/resources/images/all/become-a-member.png|ht...Tweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines- Added all the parameters available from the Timeline Endpoints in Tweetinvi. 
- This is available for HomeTimeline, UserTimeline, MentionsTimeline // Simple query var tweets = Timeline.GetHomeTimeline(); // Create a parameter for queries with specific parameters var timelineParameter = Timeline.CreateHomeTimelineRequestParameter(); timelineParameter.ExcludeReplies = true; timelineParameter.TrimUser = true; var tweets = Timeline.GetHomeTimeline(timelineParameter); Tweetinvi 0.9.3.1...Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: - Int/short Set methods of WritablePixelCollection are now unsigned. - The Q16 build no longer uses HDRI, switch to the new Q16-HDRI build if you need HDRI.New Projects12306helper: 12306 ????2112110044: dasda2112110202: asdfsdfsdfsd2112110298: TieuDan2112110315: vftgcdxfrBaidu PCS: BaiduPCSExcel2Sobek: Here the project Excel2Sobek is hosted. Excel2Sobek is a preprocessor in Excel's VBA for the SOBEK 2 hydrological and hydraulic simulation model by Deltares.Help Desk Pro: Manage a business of customer serviceMARGYE: MARGYE CMS CMR E-CommerceMemberBS: MemberBS20140605MiKroTik API: Api untuk menghubungkan aplikasi anda dengan router Mikrotik.MiniProfiler + Log4Net: Use MiniProfiler and Log4Net in ASP.NET projectStore Apps Unity App Base for Prism: This simple library provides an Application Base for a Windows Store application that automatically connects Unity to provide injection as the container.SuperCaptcha for MVC: Custom captcha control for MVCTC760240: myappwithfailuresWindows Phone 8 Dilbert Comic Reader: Basic WP8.1 app that displays comics from http://www.dilbert.com.x86 Proved: Prove what you run!

    Read the article

  • Reference Data Management

    - by rahulkamath
    Reference Data Management

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage.
enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

What is reference data?

Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and give them contextual value. The following table contains an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings or standards.

| Types & Codes | Taxonomies | Relationships / Mappings | Standards |
|---|---|---|---|
| Transaction Codes | Industry Classification Categories and Codes, e.g., North American Industry Classification System (NAICS) | Product / Segment; Product / Geo | Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601) |
| Lookup Tables (e.g., Gender, Marital Status, etc.) | Product Categories | City → State → Postal Codes | Currency Codes (e.g., ISO) |
| Status Codes | Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense) | Customer / Market Segment; Business Unit / Channel | Country Codes (e.g., ISO 3166, UN) |
| Role Codes | Market Segments | Country Codes / Currency Codes / Financial Accounts | Date/Time, Time Zones (e.g., ISO 8601) |
| Domain Values | United Nations Standard Products and Services Code (UNSPSC), eCl@ss | International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings | Tax Rates |

Why manage reference data?

Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

Sample Use Cases of Reference Data Management

Healthcare: Diagnostic Codes

The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the ICD-9 and ICD-10 code sets offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports that help understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings -- either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.
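To make the cross-walk idea concrete, here is a minimal sketch of a forward/backward code lookup. This is purely illustrative – it is not an Oracle DRM API, and the codes and mappings shown are placeholders rather than real ICD mappings.

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal illustrative cross-walk between two code sets (e.g., ICD-9 -> ICD-10).
// The codes below are placeholders, not real clinical mappings.
public class CodeCrosswalk
{
    // One legacy code can map to several more granular target codes.
    private readonly Dictionary<string, List<string>> _forward =
        new Dictionary<string, List<string>>
        {
            { "OLD-100", new List<string> { "NEW-100.1", "NEW-100.2" } },
            { "OLD-200", new List<string> { "NEW-200.0" } }
        };

    // Forward walk: legacy code to its candidate target codes.
    public IList<string> MapForward(string legacyCode)
    {
        List<string> targets;
        return _forward.TryGetValue(legacyCode, out targets)
            ? targets
            : new List<string>();
    }

    // Backward walk: target code back to the legacy codes that reference it.
    public IList<string> MapBackward(string targetCode)
    {
        return _forward
            .Where(kv => kv.Value.Contains(targetCode))
            .Select(kv => kv.Key)
            .ToList();
    }
}
```

In a real deployment the mapping data itself would live in the reference data hub (and be maintained through the consensus process described above), with applications consuming published versions of it rather than hard-coded dictionaries.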
Spend Management: Product, Service & Supplier Codes

Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

Specialty Finance: Point of Sale Transaction Codes and Product Codes

In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

Multi-National Companies: Industry Classification Schemes

As companies grow and expand across geographies, a typical reference data challenge they encounter is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and to assess which applications were impacted by a given change.

    Read the article

  • CodePlex Daily Summary for Sunday, October 20, 2013

    CodePlex Daily Summary for Sunday, October 20, 2013Popular ReleasesKerbalAlarmClock: v2.6.1.0 Release: Version 2.6.1.0 Recompiled it for 0.21 Added Crew Alarms (track Kerbal rather than Vessel) Added Distance Target Alarms - distance from target vessel or altitude above planet Added Launch Rendezvous Alarm (under Ascending/Descending Node for Landed craft) - MechJeb2 code - thanks r4m0n Allow restoration of Nodes that you have passed (useful for interplanetary burns) Added missing Dres Transfer Model data - thanks Voneiden Added view only version of Alarm clock to both Space Center...MSBuild Extension Pack: October 2013: Release Blog Post The MSBuild Extension Pack October 2013 release provides a collection of over 480 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUI...VG-Ripper & PG-Ripper: VG-Ripper 2.9.49: changes NEW: Added Support for "ImageTeam.org links NEW: Added Support for "ImgNext.com" links NEW: Added Support for "HostUrImage.com" links NEW: Added Support for "3XVintage.com" linksMedia Companion: Media Companion MC3.583b: As before release but fixed for no movie poster sourcesNew* Both - Added 'An' as option to ignore in title * Movie - Renaming - added %Z - Sorttitle to Legend * Movie - Renaming - added %O - Audio Channels to Legend * Movie - Remove a poster source from priority list. Reset List back to defaults. * Made Media Companion truly portable application. Fixed* Movie - browse for Poster Or Fanart, allows for jpg, tbn, png and bmp images * Movie - Alt Fanart Browser - Url or Browse window now fully...MoreTerra (Terraria World Viewer): MoreTerra 1.11.3.1: Release 1.11.3.1 ================ = New Features = ================ Added markers for Copper Cache, Silver Cache and the Enchanted Sword. ============= = Bug Fixes = ============= Use Official Colors now no longer tries to change the Draw Wires option instead. World reading was breaking for people with a stock 1.2 Terraria version. Changed world name reading so it does not crash the program if you load MoreTerra while Terraria is saving the world. =================== = Feature Removal = =...patterns & practices - Windows Azure Guidance: Cloud Design Patterns: 1st drop of Cloud Design Patterns project. It contains 14 patterns with 6 related guidance.Player Framework by Microsoft: Player Framework for Windows and WP (v1.3): Includes all changes in v1.3 beta 1 and v1.3 beta 2 Support for Windows 8.1 RTM and VS2013 RTM Xaml: New property: AutoLoadPluginTypes to help control which stock plugins are loaded by default (requires AutoLoadPlugins = true). Support for SystemMediaTransportControls on Windows 8.1 JS: Support for visual markers in the timeline. JS: Support for markers collection and markerreached event. JS: New ChaptersPlugin to automatically populate timeline with chapter tracks. JS: Audio an...Json.NET: Json.NET 5.0 Release 8: Fix - Fixed not writing string quotes when QuoteName is falsePowerShell Community Extensions: 3.1 Production: PowerShell Community Extensions 3.1 Release NotesOct 17, 2013 This version of PSCX supports Windows PowerShell 3.0 and 4.0 See the ReleaseNotes.txt download above for more information.SQL Power Doc: Version 1.0.2.1: Misc. 
bug fixes Added logic to resolve members of a Windows Group server login Added columns to Excel workbooks to show definitions for server permissions, server roles, database permissions, and database rolesSocial Network Importer for NodeXL: SocialNetImporter(v.1.9): This new version includes: - Download latest status update and use it as vertex tooltip - Limit the timelines to parse to me, my friends or both - Fixed some reported bugs about the fan page and group importer - Fixed the login bug reported latelyTerrariViewer: TerrariViewer v7.1 [Terraria Inventory Editor]: You can now backspace in number fields Items added in 1.2.0.3 no longer corrupt player files Buff durations capped at 9999999 Item stacks capped at 9999999 Version info added Prefix IDs corrected Shoe and Eye color box are now properly clickable Moved Bank and Safe into their own tab Users will now be notified of new updatesPython Tools for Visual Studio: 2.0: PTVS 2.0 We’re pleased to announce the release of Python Tools for Visual Studio 2.0 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, IPython, and cross platform and cross language debugging support. QUICK VIDEO OVERVIEW For a quick overview of the general IDE experience, please watch this v...CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.8.2: Solved scrolling problem after DocumentFormatting Implemented "format as you type" --- To avoid the DLLs getting locked by OS use MSI file for the installation.LINQ to Twitter: LINQ to Twitter v2.1.09: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.Sandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...C++ REST SDK (codename "Casablanca"): C++ REST SDK 1.3.0: This release fixes multiple customer reported issues as well as the following: Full support for Dev12 binaries and project files Full support for Windows XP New sample highlighting the Client and Server APIs : BlackJack Expose underlying native handle to set custom options on http_client Improvements to Listener Library Note: Dev10 binaries have been dropped as of this release, however the Dev10 project files are still available in the Source CodeAD ACL Scanner: 1.3.2: Minor bug fixed: Powershell 4.0 will report: Select—Object: Parameter cannot be processed because the parameter name p is ambiguous.Fast YouTube Downloader: YouTube Downloader 2.2.0: YouTube Downloader 2.2.0VidCoder: 1.5.8 Beta: Added hardware acceleration options: Bicubic OpenCL scaling algorithm, QSV decoding/encoding and DXVA decoding. Updated HandBrake core to SVN 5834. Updated VidCoder setup icon. Fixed crash when choosing the mp4v2 container on x86 and opening on x64. 
Warning: the hardware acceleration features require specific hardware or file types to work correctly: QSV: Need an Intel processor that supports Quick Sync Video encoding, with a monitor hooked up to the Intel HD Graphics output and the lat...New ProjectsAdd2Nums: Add2Nums is a VB.NET project that takes 2 numbers, computes their sum and outputs the result. Developed by Justin Mifsud as part of Assignment 1 (7COM0152)Athir: GPAO El_AthirCS-MIC - C# Math Input Control: CS-MIC is a .NET library written in C# designed to give developers easy access to expression parsing.HLS Video Player: An open source over-the-network video player for HLS (Hinkle Light Show)Project Stark: This is a secret project available only to the project owners for the time being.Run ++: Run Plus Plus is a tool that enables you to custom the commands in the "Run" dialog.SimpleAddition: This is a simple ASP.NET in VB.NET page that allows users to enter 2 numbers, and display their sum. SWE 681 Go Fish: Project

    Read the article

  • Looking into the jQuery LazyLoad Plugin

    - by nikolaosk
    I have been using jQuery for a couple of years now and it has helped me to solve many problems on the client side of web development. You can find all my posts about jQuery in this link. In this post I will be providing you with a hands-on example of the jQuery LazyLoad Plugin. If you want, you can have a look at this post, where I describe the jQuery Cycle Plugin. You can find another post of mine talking about the jQuery Carousel Lite Plugin here. Another post of mine regarding the jQuery Image Zoom Plugin can be found here. You can have a look at the jQuery Overlays Plugin here. There are times when I am asked to create a very long page with lots of images. My first thought is to enable paging on the proposed page. Imagine that we have 60 images on a page. There are performance concerns when we have so many images on a page. Paging can solve that problem if I am allowed to place only 5 images on a page. Sometimes the customer does not like the idea of paging. Believe it or not, some people do not find the idea of paging attractive at all. In that case I need a way to load only the initial set of images and, as the user scrolls down the page, to load the rest. So as someone scrolls down, new requests are made to the server and more images are fetched. I can accomplish that with the jQuery LazyLoad Plugin. This is a plugin that delays loading of images in long web pages. Images that are outside of the viewport (the visible part of the web page) won't be loaded before the user scrolls to them. Using the jQuery LazyLoad Plugin on long web pages containing many large images makes the page load faster. In this hands-on example I will be using Expression Web 4.0. This application is not free; you can use any HTML editor you like, for example Visual Studio 2012 Express edition, which you can download here. You can download the plugin from this link. I launch Expression Web 4.0 and then type the following HTML markup (I am using HTML5):

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <title>Liverpool Legends</title>
        <script type="text/javascript" src="jquery-1.8.3.min.js"></script>
        <script type="text/javascript" src="jquery.lazyload.min.js"></script>
      </head>
      <body>
        <header>
          <h1>Liverpool Legends</h1>
        </header>
        <div id="main">
          <img src="barnes.JPG" width="800" height="1100" /><p />
          <img src="dalglish.JPG" width="800" height="1100" /><p />
          <img class="LiverpoolImage" src="loader.gif" data-original="fans.JPG" width="1200" height="900" /><p />
          <img class="LiverpoolImage" src="loader.gif" data-original="lfc.JPG" width="1000" height="700" /><p />
          <img class="LiverpoolImage" src="loader.gif" data-original="Liverpool-players.JPG" width="1100" height="900" /><p />
          <img class="LiverpoolImage" src="loader.gif" data-original="steven_gerrard.JPG" width="1110" height="1000" /><p />
          <img class="LiverpoolImage" src="loader.gif" data-original="robbie.JPG" width="1200" height="1000" /><p />
        </div>
        <footer>
          <p>All Rights Reserved</p>
        </footer>
        <script type="text/javascript">
          $(function () {
            $("img.LiverpoolImage").lazyload();
          });
        </script>
      </body>
    </html>

    This is very simple markup. I have added references to the jQuery library (current version 1.8.3) and the jQuery LazyLoad Plugin.
    Firstly, I add two images that will load immediately as soon as the page loads:

    <img src="barnes.JPG" width="800" height="1100" /><p />
    <img src="dalglish.JPG" width="800" height="1100" /><p />

    Then I add the images that will not load until they become visible in the viewport. All of these img tags point the src attribute at a placeholder image; I'm using a blank 1×1 px grey image, loader.gif. The full image URL goes into the data-original attribute. The five images that will load as the user scrolls down the page follow:

    <img class="LiverpoolImage" src="loader.gif" data-original="fans.JPG" width="1200" height="900" /><p />
    <img class="LiverpoolImage" src="loader.gif" data-original="lfc.JPG" width="1000" height="700" /><p />
    <img class="LiverpoolImage" src="loader.gif" data-original="Liverpool-players.JPG" width="1100" height="900" /><p />
    <img class="LiverpoolImage" src="loader.gif" data-original="steven_gerrard.JPG" width="1110" height="1000" /><p />
    <img class="LiverpoolImage" src="loader.gif" data-original="robbie.JPG" width="1200" height="1000" /><p />

    The JavaScript code that makes it all happen follows. We need to make a call to the jQuery LazyLoad Plugin, and we add the script just before we close the body element:

    <script type="text/javascript">
      $(function () {
        $("img.LiverpoolImage").lazyload();
      });
    </script>

    We can change the code above to incorporate an effect:

    <script type="text/javascript">
      $("img.LiverpoolImage").lazyload({
        effect: "fadeIn"
      });
    </script>

    That is all I need to write to achieve lazy loading. It is true that you can do so much with less!! I view my simple page in Internet Explorer 10 and it works as expected. I have tested this simple solution in all major browsers and it works fine. You can test it yourself and see the results in your favorite browser. Hope it helps!!!

    Read the article

  • .NET WebRequest.PreAuthenticate not quite what it sounds like

    - by Rick Strahl
    I’ve run into the  problem a few times now: How to pre-authenticate .NET WebRequest calls doing an HTTP call to the server – essentially send authentication credentials on the very first request instead of waiting for a server challenge first? At first glance this sound like it should be easy: The .NET WebRequest object has a PreAuthenticate property which sounds like it should force authentication credentials to be sent on the first request. Looking at the MSDN example certainly looks like it does: http://msdn.microsoft.com/en-us/library/system.net.webrequest.preauthenticate.aspx Unfortunately the MSDN sample is wrong. As is the text of the Help topic which incorrectly leads you to believe that PreAuthenticate… wait for it - pre-authenticates. But it doesn’t allow you to set credentials that are sent on the first request. What this property actually does is quite different. It doesn’t send credentials on the first request but rather caches the credentials ONCE you have already authenticated once. Http Authentication is based on a challenge response mechanism typically where the client sends a request and the server responds with a 401 header requesting authentication. So the client sends a request like this: GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive and the server responds with: HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.5 WWW-Authenticate: basic realm=rasnote" X-AspNet-Version: 2.0.50727 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM WWW-Authenticate: Basic realm="rasnote" X-Powered-By: ASP.NET Date: Tue, 27 Oct 2009 00:58:20 GMT Content-Length: 5163 plus the actual error message body. The client then is responsible for re-sending the current request with the authentication token information provided (in this case Basic Auth): GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Cookie: TimeTrakker=2HJ1998WH06696; WebLogCommentUser=Rick Strahl|http://www.west-wind.com/|[email protected]; WebStoreUser=b8bd0ed9 Authorization: Basic cgsf12aDpkc2ZhZG1zMA== Once the authorization info is sent the server responds with the actual page result. Now if you use WebRequest (or WebClient) the default behavior is to re-authenticate on every request that requires authorization. This means if you look in  Fiddler or some other HTTP client Proxy that captures requests you’ll see that each request re-authenticates: Here are two requests fired back to back: and you can see the 401 challenge, the 200 response for both requests. If you watch this same conversation between a browser and a server you’ll notice that the first 401 is also there but the subsequent 401 requests are not present. 
WebRequest.PreAuthenticate And this is precisely what the WebRequest.PreAuthenticate property does: It’s a caching mechanism that caches the connection credentials for a given domain in the active process and resends it on subsequent requests. It does not send credentials on the first request but it will cache credentials on subsequent requests after authentication has succeeded: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rick", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rstrahl", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); which results in the desired sequence: where only the first request doesn’t send credentials. This is quite useful as it saves quite a few round trips to the server – bascially it saves one auth request request for every authenticated request you make. In most scenarios I think you’d want to send these credentials this way but one downside to this is that there’s no way to log out the client. Since the client always sends the credentials once authenticated only an explicit operation ON THE SERVER can undo the credentials by forcing another login explicitly (ie. re-challenging with a forced 401 request). Forcing Basic Authentication Credentials on the first Request On a few occasions I’ve needed to send credentials on a first request – mainly to some oddball third party Web Services (why you’d want to use Basic Auth on a Web Service is beyond me – don’t ask but it’s not uncommon in my experience). This is true of certain services that are using Basic Authentication (especially some Apache based Web Services) and REQUIRE that the authentication is sent right from the first request. No challenge first. Ugly but there it is. Now the following works only with Basic Authentication because it’s pretty straight forward to create the Basic Authorization ‘token’ in code since it’s just an unencrypted encoding of the user name and password into base64. As you might guess this is totally unsecure and should only be used when using HTTPS/SSL connections (i’m not in this example so I can capture the Fiddler trace and my local machine doesn’t have a cert installed, but for production apps ALWAYS use SSL with basic auth). 
The idea is that you simply add the required Authorization header to the request on your own along with the authorization string that encodes the username and password: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "rick"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); This works and causes the request to immediately send auth information to the server. However, this only works with Basic Auth because you can actually create the authentication credentials easily on the client because it’s essentially clear text. The same doesn’t work for Windows or Digest authentication since you can’t easily create the authentication token on the client and send it to the server. Another issue with this approach is that PreAuthenticate has no effect when you manually force the authentication. As far as Web Request is concerned it never sent the authentication information so it’s not actually caching the value any longer. If you run 3 requests in a row like this: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "ricks"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); you’ll find the trace looking like this: where the first request (the one we explicitly add the header to) authenticates, the second challenges, and any subsequent ones then use the PreAuthenticate credential caching. In effect you’ll end up with one extra 401 request in this scenario, which is still better than 401 challenges on each request. Getting Access to WebRequest in Classic .NET Web Service Clients If you’re running a classic .NET Web Service client (non-WCF) one issue with the above is how do you get access to the WebRequest to actually add the custom headers to do the custom Authentication described above? 
    One easy way is to implement a partial class that allows you to add headers with something like this:

    public partial class TaxService
    {
        protected NameValueCollection Headers = new NameValueCollection();

        public void AddHttpHeader(string key, string value)
        {
            this.Headers.Add(key, value);
        }

        public void ClearHttpHeaders()
        {
            this.Headers.Clear();
        }

        protected override WebRequest GetWebRequest(Uri uri)
        {
            HttpWebRequest request = (HttpWebRequest) base.GetWebRequest(uri);
            request.Headers.Add(this.Headers);
            return request;
        }
    }

    where TaxService is the name of the .NET generated proxy class. In code you can then call AddHttpHeader() anywhere to add additional headers, which are sent as part of the GetWebRequest override. Nice and simple once you know where to hook it. For WCF there's a bit more work involved: you create a message extension as described here: http://weblogs.asp.net/avnerk/archive/2006/04/26/Adding-custom-headers-to-every-WCF-call-_2D00_-a-solution.aspx (a rough sketch of that approach follows at the end of this post). FWIW, I think that HTTP header manipulation should be readily available on any HTTP based Web Service client DIRECTLY without having to subclass or implement a special interface hook. But alas, a little extra work is required in .NET to make this happen.

    Not a Common Problem, but when it happens…

    This has been one of those issues that is really rare, but it's bitten me on several occasions when dealing with oddball Web services – a couple of times in my own work interacting with various Web Services and a few times on customer projects that required interaction with credentials-first services. Since the servers determine the protocol, we don't have a choice but to follow the protocol. Lovely following standards that implementers decide to ignore, isn't it? :-}

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in .NET  CSharp  Web Services
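As a rough illustration of the WCF message-extension route mentioned above: this sketch is mine, not taken from the linked article, and the class names AuthHeaderInspector and AuthHeaderBehavior are made up. It uses the standard IClientMessageInspector/IEndpointBehavior extension points to stamp an Authorization header onto every outgoing call.

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Message inspector that adds an Authorization header to each outgoing request.
public class AuthHeaderInspector : IClientMessageInspector
{
    private readonly string _authValue;
    public AuthHeaderInspector(string authValue) { _authValue = authValue; }

    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        HttpRequestMessageProperty httpProp;
        object untyped;
        if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out untyped))
            httpProp = (HttpRequestMessageProperty)untyped;
        else
        {
            httpProp = new HttpRequestMessageProperty();
            request.Properties.Add(HttpRequestMessageProperty.Name, httpProp);
        }
        httpProp.Headers["Authorization"] = _authValue;
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}

// Endpoint behavior that plugs the inspector into the client runtime.
public class AuthHeaderBehavior : IEndpointBehavior
{
    private readonly string _authValue;
    public AuthHeaderBehavior(string authValue) { _authValue = authValue; }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new AuthHeaderInspector(_authValue));
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}
```

Attaching it to a generated client would then look something like client.Endpoint.Behaviors.Add(new AuthHeaderBehavior("Basic " + auth)); – treat this as a sketch rather than a drop-in replacement for the solution described in the linked post.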

    Read the article

  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
    Recently, I was involved in a customer scenario where they were modifying the Content Server's contributor data files directly through Content Server.  This operation of course is completely supported.  However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page, which also used the same data file, would not get the updates immediately.  This was due to two reasons.  The first reason is that the Spaces page was using Content Presenter to display the contents of the data file. The second reason is that the Spaces application was using the "cached" version of the data file.  Fortunately, there is a way to configure cache so backdoor changes can be picked up more quickly and automatically. First a brief overview of Content Presenter.  The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application.  With Content Presenter, you can select a single item of content, contents under a folder, a list of items, or query for content, and then select a Content Presenter based template to render the content on a page in a Spaces application.  In addition to displaying the folders and the files in a Content Server, Content Presenter integrates with Oracle Site Studio to allow you to create, access, edit, and display Site Studio contributor data files (Content Server Document) in either a Site Studio region template or in a custom Content Presenter display template.  More information about creating Content Presenter Display Template can be found in the OFM Developers Guide for WebCenter Portal. The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection setting through Enterprise Manager.  From here, under the Cache Details, there is a section to set the Cache Invalidation Interval.  Basically, this enables the cache to be monitored by the cache "sweeper" utility.  The cache sweeper queries for changes in the Content Server, and then "marks" the object in cache as "dirty".  This causes the application in turn to get a new copy of the document from the Content Server that replaces the cached version.  By default the initial value for the Cache Invalidation Interval is set to 0 (minutes).  This basically means that the sweeper is OFF.  To turn the sweeper ON, just set a value (in minutes).  The mininal value that can be set is 2 (minutes): Just a note.  In some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and the interval value cannot be set back to 0.  The good news is that this value can also be updated throught a WLST command.   The WLST command to run is as follows: setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort, keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword, webContextRoot, clientSecurityPolicy, cacheInvalidationInterval, binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout, isPrimary, server, applicationVersion]) One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter',verbose=true) command.  
For example, this is the sample output from the execution: ------------------ UCM ------------------ Connection Name: UCM Connection Type: JCR External Appliction ID: Timeout: (not set) CIS Socket Type: socket CIS Server Hostname: webcenter.oracle.local CIS Server Port: 4444 CIS Keystore Location: CIS Private Key Alias: CIS Web URL: Web Server Context Root: /cs Client Security Policy: Admin User Name: sysadmin Cache Invalidation Interval: 2 Binary Cache Maximum Entry Size: 1024 The Documents primary connection is "UCM" From this information, the completed  setJCRContentServerConnection would be: setJCRContentServerConnection(appName='webcenter',name='UCM', socketType='socket', serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs', cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024',adminUsername='sysadmin',isPrimary=1) Note: The Spaces managed server must be restarted for the change to take effect. More information about using WLST for WebCenter can be found here. Once the sweeper is turned ON, only cache objects that have been changed will be invalidated.  To test this out, I will go through a simple scenario.  The first thing to do is configure the Content Server so it can monitor and report on events.  Log into the Content Server console application, and under the Administration menu item, select System Audit Information.  Note: If your console is using the left menu display option, the Administration link will be located there. Under the Tracing Sections Information, add in only "system" and "requestaudit" in the Active Sections.  Check Full Verbose Tracing, check Save, then click the Update button.  Once this is done, select the View Server Output menu option.  This will change the browser view to display the log.  This is all that is needed to configure the Content Server. For example, the following is the View Server Output with the cache invalidation interval set to 2(minutes) Note the time stamp: requestaudit/6 08.30 09:52:26.001  IdcServer-68    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs) requestaudit/6 08.30 09:52:26.010  IdcServer-69    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs) requestaudit/6 08.30 09:52:26.014  IdcServer-70    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs) ... other trace info ... requestaudit/6 08.30 09:54:26.002  IdcServer-71    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs) requestaudit/6 08.30 09:54:26.011  IdcServer-72    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs) requestaudit/6 08.30 09:54:26.017  IdcServer-73    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs) Now that the tracing logs are reporting correctly, the next step is set up the Spaces app to test the sweeper. I will use 2 different pages that will use Content Presenter task flows.  Each task flow will use a different custom Content Presenter display template, and will be assign 2 different contributor data files (document that will be in the cache).  The pages at run time appear as follows: Initially, when the Space pages containing the content is loaded in the browser for the first time, you can see the tracing information in the Content Server output viewer. 
requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs) requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs) requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs) requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs) requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs) requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs) requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs) requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs) requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs) requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs) requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs) requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs) requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs) requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs) requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs) requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs) The highlighted sections show where the 2 data files DF_UCMCACHETESTER (P1 page) and 113_ES (P2 page) were called by the (Spaces) VCR connection to the Content Server. The most important line to notice is the VCR_GET_DOCUMENT_BY_NAME invocation.  On subsequent refreshes of these 2 pages, you will notice (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations.  This is because the pages are getting the documents from the cache. The next step is to go through the "backdoor" and change one of the documents through the Content Server console.  This operation can be done by first locating the data file document, and from the Content Information page, select Edit Data File menu option.   This invokes the Site Studio Contributor, where the modifications can be made. Refreshing the Content Server View Server Output, the tracing displays the operations perform on the document.  
requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs) requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs) requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs) requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs) requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs) requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs) (internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml (internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs) requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs) requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs) After a few moments and refreshing the P1 page, the updates has been applied. Note: The refresh time may very, since the Cache Invalidation Interval (set to 2 minutes) is not determined by when changes happened.  The sweeper just runs every 2 minutes. Refreshing the Content Server View Server Output, the tracing displays the important information. requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs) requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs) requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs) requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs) After the specifed interval time the sweeper is invoked, which is noted by the GET_ ... calls.  Since the history has noted the change, the next call is to the VCR_GET_DOCUMENT_BY_NAME to retrieve the new version of the (modifed) data file.  Navigating back to the P2 page, and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME to retrieve the data file.  This simply means that this data file was just retrieved from the cache.   
Upon further review of the server output, we can see that there was only 1 request for the VCR_GET_DOCUMENT_BY_NAME: requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****  

    Read the article

  • Implementing History Support using jQuery for AJAX websites built on asp.net AJAX

    - by anil.kasalanati
    Problem Statement: Most modern day website use AJAX for page navigation and gone are the days of complete HTTP redirection so it is imperative that we support back and forward buttons on the browser so that end users navigation is not broken. In this article we discuss about solutions which are already available and problems with them. Microsoft History Support: Post .Net 3.5 sp1 Microsoft’s Script manager supports history for websites using Update panels. This is achieved by enabling the ENABLE HISTORY property for the script manager and then the event “Page_Browser_Navigate” needs to be handled. So whenever the browser buttons are clicked the event is fired and the application can write code to do the navigation. The following articles provide good tutorials on how to do that http://www.asp.net/aspnet-in-net-35-sp1/videos/introduction-to-aspnet-ajax-history http://www.codeproject.com/KB/aspnet/ajaxhistorymanagement.aspx And Microsoft api internally creates an IFrame and changes the bookmark of the url. Unfortunately this has a bug and it does not work in Ie6 and 7 which are the major browsers but it works in ie8 and Firefox. And Microsoft has apparently fixed this bug in .Net 4.0. Following is the blog http://weblogs.asp.net/joshclose/archive/2008/11/11/asp-net-ajax-addhistorypoint-bug.aspx For solutions which are still running on .net 3.5 sp1 there is no solution which Microsoft offers so there is  are two way to solve this o   Disable the back button. o   Develop custom solution.   Disable back button Even though this might look like a very simple thing to do there are issues around doing this  because there is no event which can be manipulated from the javascript. The browser does not provide an api to do this. So most of the technical solution on internet offer work arounds like doing a history.forward(1) so that even if the user clicks a back button the destination page redirects the user to the original page. This is not a good customer experience and does not work for asp.net website where there are different views in the same page. There are other ways around detecting the window unload events and writing code there. So there are 2 events onbeforeUnload and onUnload and we can write code to show a confirmation message to the user. If we write code in onUnLoad then we can only show a message but it is too late to stop the navigation. And if we write on onBeforeUnLoad we can stop the navigation if the user clicks cancel but this event would be triggered for all AJAX calls and hyperlinks where the href is anything other than #. We can do this but the website has to be checked properly to ensure there are no links where href is not # otherwise the user would see a popup message saying “you are leaving the website”. Believe me after doing a lot of research on the back button disable I found it easier to support it rather than disabling the button. So I am going to discuss a solution which work  using jQuery with some tweaking. Custom Solution JQuery already provides an api to manage the history of a AJAX website - http://plugins.jquery.com/project/history We need to integrate this with Microsoft Page request manager so that both of them work in tandem. The page state is maintained in the cookie so that it can be passed to the server and I used jQuery cookie plug in for that – http://plugins.jquery.com/node/1386/release Firstly when the page loads there is a need to hook up all the events on the page which needs to cause browser history and following is the code to that. 
jQuery(document).ready(function() {             // Initialize history plugin.             // The callback is called at once by present location.hash.             jQuery.history.init(pageload);               // set onlick event for buttons             jQuery("a[@rel='history']").click(function() {                 //                 var hash = this.page;                 hash = hash.replace(/^.*#/, '');                 isAsyncPostBack = true;                 // moves to a new page.                 // pageload is called at once.                 jQuery.history.load(hash);                 return true;             });         }); The above scripts basically gets all the DOM objects which have the attribute rel=”history” and add the event. In our test page we have the link button  which has the attribute rel set to history. <asp:LinkButton ID="Previous" rel="history" runat="server" onclick="PreviousOnClick">Previous</asp:LinkButton> <asp:LinkButton ID="AsyncPostBack" rel="history" runat="server" onclick="NextOnClick">Next</asp:LinkButton> <asp:LinkButton ID="HistoryLinkButton" runat="server" style="display:none" onclick="HistoryOnClick"></asp:LinkButton>   And you can see that there is an hidden HistoryLinkButton which used to send a sever side postback in case of browser back or previous buttons. And note that we need to use display:none and not visible= false because asp.net AJAX would disallow any post backs if visible=false. And in general the pageload event get executed on the client side when a back or forward is pressed and the function is shown below function pageload(hash) {                   if (hash) {                         if (!isAsyncPostBack) {                           jQuery.cookie("page", hash);                     __doPostBack("HistoryLinkButton", "");                 }                isAsyncPostBack = false;                             } else {                 // start page             jQuery("#load").empty();             }         }   As you can see in case there is an hash in the url we are basically do an asp.net AJAX post back using the following statement __doPostBack("HistoryLinkButton", ""); So whenever the user clicks back or forward the post back happens using the event statement we provide and Previous event code is invoked in the code behind.  We need to have the code to use the pageId present in the url to change the page content. And there is an important thing to note – because the hash is worked out using the pageId’s there is a need to recalculate the hash after every AJAX post back so following code is plugged in function ReWorkHash() {             jQuery("a[@rel='history']").unbind("click");             jQuery("a[@rel='history']").click(function() {                 //                 var hash = jQuery(this).attr("page");                 hash = hash.replace(/^.*#/, '');                 jQuery.cookie("page", hash);                 isAsyncPostBack = true;                                   // moves to a new page.                 // pageload is called at once.                 
jQuery.history.load(hash);                 return true;             });        }   This code is executed from the code behind using ScriptManager RegisterClientScriptBlock as shown below –       ScriptManager.RegisterClientScriptBlock(this, typeof(_Default), "Recalculater", "ReWorkHash();", true);   A sample application is available to be downloaded at the following location – http://techconsulting.vpscustomer.com/Source/HistoryTest.zip And a working sample is available at – http://techconsulting.vpscustomer.com/Samples/Default.aspx

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, but some of the links in the docs are just plain wrong, and some of the account pages you need in order to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back to a completely new authentication scheme, and the fact that the scheme is described in a documentation topic that isn't directly linked made for a very frustrating search experience. It's a bummer that this is the case, too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass that doubtlessly leaves many people giving up in frustration. In this post I'll try to hit all the points needed to set up and use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site…

Signing Up

The first step required is to create a Windows Azure MarketPlace account.

- Go to: https://datamarket.azure.com/
- Sign in with your Windows Live Id
- If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration.
- Once you're signed in you can start adding services. Click on the Data link on the main page.
- Select Microsoft Translator from the list. This adds the Microsoft Bing Translator to your services.

Pricing

The page shows the pricing matrix and the free tier, which provides 2 megabytes of translations a month for free. Prices go up steeply from there. Pricing is determined by the actual bytes of the result translations used. Max translations are 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site.

Register your Application

Once you've created the Data association with Translator, the next step is registering your application. To do this you need to access your developer account.

- Go to https://datamarket.azure.com/developer/applications/register
- Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!)
- Provide your name
- The client secret is auto-created and this becomes your 'password'
- For the redirect url provide any https url: https://microsoft.com works
- Give this application a description of your choice so you can identify it in the list of apps

Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys.
To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: You need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses oAuth authentication with a  bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute life time, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:/// /// Retrieves an oAuth authentication token to be used on the translate /// API request. The result string needs to be passed as a bearer token /// to the translate API. /// /// You can find client ID and Secret (or register a new one) at: /// https://datamarket.azure.com/developer/applications/ /// /// The client ID of your application /// The client secret or password /// public string GetBingAuthToken(string clientId = null, string clientSecret = null) { string authBaseUrl = https://datamarket.accesscontrol.windows.net/v2/OAuth2-13; if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret)) { ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided; return null; } var postData = string.Format("grant_type=client_credentials&client_id={0}" + "&client_secret={1}" + "&scope=http://api.microsofttranslator.com", HttpUtility.UrlEncode(clientId), HttpUtility.UrlEncode(clientSecret)); // POST Auth data to the oauth API string res, token; try { var web = new WebClient(); web.Encoding = Encoding.UTF8; res = web.UploadString(authBaseUrl, postData); } catch (Exception ex) { ErrorMessage = ex.GetBaseException().Message; return null; } var ser = new JavaScriptSerializer(); var auth = ser.Deserialize<BingAuth>(res); if (auth == null) return null; token = auth.access_token; return token; } private class BingAuth { public string token_type { get; set; } public string access_token { get; set; } } This code basically takes the client id and secret and posts it at the oAuth endpoint which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call. 
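Since the token is good for roughly ten minutes, one way to avoid re-authenticating on every call is to cache it for slightly less than its lifetime. This is a sketch of my own, not from the original post, using System.Runtime.Caching and assuming a GetBingAuthToken method like the one above:

```csharp
using System;
using System.Runtime.Caching;

public static class BingTokenCache
{
    private const string CacheKey = "BingAuthToken";

    // Returns a cached token if one is still valid, otherwise fetches a new one.
    // getToken is expected to call the oAuth endpoint (e.g., GetBingAuthToken above).
    public static string GetToken(Func<string> getToken)
    {
        var cache = MemoryCache.Default;
        var token = cache.Get(CacheKey) as string;
        if (token != null)
            return token;

        token = getToken();
        if (token != null)
        {
            // The token lives ~10 minutes; cache it for 9 to stay on the safe side.
            cache.Set(CacheKey, token, DateTimeOffset.UtcNow.AddMinutes(9));
        }
        return token;
    }
}
```

Usage would look something like string token = BingTokenCache.GetToken(() => translate.GetBingAuthToken(clientId, clientSecret)); – for a Web application you could swap MemoryCache for the ASP.NET Cache object the post mentions.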
Translation Once you have the authentication token you can pass it to the translate API. The auth token is passed as an Authorization header, with the value prefixed by 'Bearer '. Here's what the simple Translate API call looks like:

/// <summary>
/// Uses the Bing API service to perform translation.
/// Bing can translate up to 1000 characters.
///
/// Requires that you provide a ClientId and ClientSecret
/// or set the configuration values for these two.
///
/// More info on setup:
/// http://www.west-wind.com/weblog/
/// </summary>
/// <param name="text">Text to translate</param>
/// <param name="fromCulture">Two letter culture name</param>
/// <param name="toCulture">Two letter culture name</param>
/// <param name="accessToken">Pass an access token retrieved with GetBingAuthToken.
/// If not passed the default keys from .config file are used if any</param>
/// <returns>The translated string, or null on failure</returns>
public string TranslateBing(string text, string fromCulture, string toCulture,
                            string accessToken = null)
{
    string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate";

    if (accessToken == null)
    {
        accessToken = GetBingAuthToken();
        if (accessToken == null)
            return null;
    }

    string res;
    try
    {
        var web = new WebClient();
        web.Headers.Add("Authorization", "Bearer " + accessToken);

        string ct = "text/plain";
        string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}",
                                        HttpUtility.UrlEncode(text),
                                        fromCulture, toCulture,
                                        HttpUtility.UrlEncode(ct));

        web.Encoding = Encoding.UTF8;
        res = web.DownloadString(serviceUrl + postData);
    }
    catch (Exception e)
    {
        ErrorMessage = e.GetBaseException().Message;
        return null;
    }

    // result is a single XML element fragment
    var doc = new XmlDocument();
    doc.LoadXml(res);
    return doc.DocumentElement.InnerText;
}

The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method retrieve the auth token on its own. In my case I'm using this inside of a Web application, and in that situation I simply re-authenticate every time as there's no convenient way to manage the lifetime of the auth token. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Keep in mind the 1000 character input limit - for longer text you have to split the input yourself (see the sketch at the end of this post). Using the Wrapper Methods It should be pretty obvious how to use these two methods, but here are a couple of test methods that demonstrate the two usage scenarios:

[TestMethod]
public void TranslateBingWithAuthTest()
{
    var translate = new TranslationServices();

    string clientId = DbResourceConfiguration.Current.BingClientId;
    string clientSecret = DbResourceConfiguration.Current.BingClientSecret;

    string auth = translate.GetBingAuthToken(clientId, clientSecret);
    Assert.IsNotNull(auth);

    string text = translate.TranslateBing("Hello World we're back home!", "en", "de", auth);
    Assert.IsNotNull(text, translate.ErrorMessage);
    Console.WriteLine(text);
}

[TestMethod]
public void TranslateBingIntegratedTest()
{
    var translate = new TranslationServices();
    string text = translate.TranslateBing("Hello World we're back home!", "en", "de");
    Assert.IsNotNull(text, translate.ErrorMessage);
    Console.WriteLine(text);
}

Other API Methods The Translate API has a number of methods available; this is the simplest one, but probably also the most common one, translating a single string.
You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx Soap and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx These links will be your starting points for calling other methods in this API. Dual Interface I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both Google and Bing APIs: As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field. Lost in Translation There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating… Related Posts Translation method Source on Github Translating with Google Translate without Google API Keys Creating a data-driven ASP.NET Resource Provider © Rick Strahl, West Wind Technologies, 2005-2013 Posted in Localization  ASP.NET  .NET
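Since the Translate endpoint used above accepts at most 1000 characters per call, longer documents have to be split up on the caller's side before translation. Here's a minimal sketch of one way to do that - the chunk size, the sentence-boundary regex and the SplitForTranslation name are my own illustration choices, not anything prescribed by the API or the library above:

using System.Collections.Generic;
using System.Text;
using System.Text.RegularExpressions;

public static class TranslationChunker
{
    /// <summary>
    /// Splits text into chunks of at most maxChars characters, preferring
    /// sentence boundaries so each chunk translates reasonably on its own.
    /// </summary>
    public static IEnumerable<string> SplitForTranslation(string text, int maxChars = 1000)
    {
        var chunk = new StringBuilder();
        foreach (var sentence in Regex.Split(text, @"(?<=[.!?])\s+"))
        {
            // Flush the current chunk if this sentence won't fit into it.
            if (chunk.Length > 0 && chunk.Length + sentence.Length + 1 > maxChars)
            {
                yield return chunk.ToString();
                chunk.Clear();
            }

            // A single sentence longer than maxChars gets hard-cut.
            var part = sentence;
            while (part.Length > maxChars)
            {
                yield return part.Substring(0, maxChars);
                part = part.Substring(maxChars);
            }

            if (chunk.Length > 0)
                chunk.Append(" ");
            chunk.Append(part);
        }

        if (chunk.Length > 0)
            yield return chunk.ToString();
    }
}

Each chunk can then be run through TranslateBing() and the results concatenated; whether that's good enough depends on how much surrounding context the translation needs.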

    Read the article

  • Adding Unobtrusive Validation To MVCContrib Fluent Html

    - by srkirkland
    ASP.NET MVC 3 includes a new unobtrusive validation strategy that utilizes HTML5 data-* attributes to decorate form elements.  Using a combination of jQuery validation and an unobtrusive validation adapter script that comes with MVC 3, those attributes are then turned into client side validation rules. A Quick Introduction to Unobtrusive Validation To quickly show how this works in practice, assume you have the following Order.cs class (think Northwind) [If you are familiar with unobtrusive validation in MVC 3 you can skip to the next section]: public class Order : DomainObject { [DataType(DataType.Date)] public virtual DateTime OrderDate { get; set; }   [Required] [StringLength(12)] public virtual string ShipAddress { get; set; }   [Required] public virtual Customer OrderedBy { get; set; } } Note the System.ComponentModel.DataAnnotations attributes, which provide the validation and metadata information used by ASP.NET MVC 3 to determine how to render out these properties.  Now let’s assume we have a form which can edit this Order class, specifically let’s look at the ShipAddress property: @Html.LabelFor(x => x.Order.ShipAddress) @Html.EditorFor(x => x.Order.ShipAddress) @Html.ValidationMessageFor(x => x.Order.ShipAddress) Now the Html.EditorFor() method is smart enough to look at the ShipAddress attributes and write out the necessary unobtrusive validation html attributes.  Note we could have used Html.TextBoxFor() or even Html.TextBox() and still retained the same results. If we view source on the input box generated by the Html.EditorFor() call, we get the following: <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress" data-val-required="The ShipAddress field is required." data-val-length-max="12" data-val-length="The field ShipAddress must be a string with a maximum length of 12." 
data-val="true" class="text-box single-line input-validation-error"> .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } As you can see, we have data-val-* attributes for both required and length, along with the proper error messages and additional data as necessary (in this case, we have the length-max=”12”). And of course, if we try to submit the form with an invalid value, we get an error on the client: Working with MvcContrib’s Fluent Html The MvcContrib project offers a fluent interface for creating Html elements which I find very expressive and useful, especially when it comes to creating select lists.  Let’s look at a few quick examples: @this.TextBox(x => x.FirstName).Class("required").Label("First Name:") @this.MultiSelect(x => x.UserId).Options(ViewModel.Users) @this.CheckBox("enabled").LabelAfter("Enabled").Title("Click to enable.").Styles(vertical_align => "middle")   @(this.Select("Order.OrderedBy").Options(Model.Customers, x => x.Id, x => x.CompanyName) .Selected(Model.Order.OrderedBy != null ? Model.Order.OrderedBy.Id : "") .FirstOption(null, "--Select A Company--") .HideFirstOptionWhen(Model.Order.OrderedBy != null) .Label("Ordered By:")) .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } These fluent html helpers create the normal html you would expect, and I think they make life a lot easier and more readable when dealing with complex markup or select list data models (look ma: no anonymous objects for creating class names!). Of course, the problem we have now is that MvcContrib’s fluent html helpers don’t know about ASP.NET MVC 3’s unobtrusive validation attributes and thus don’t take part in client validation on your page.  This is not ideal, so I wrote a quick helper method to extend fluent html with the knowledge of what unobtrusive validation attributes to include when they are rendered. Extending MvcContrib’s Fluent Html Before posting the code, there are just a few things you need to know.  The first is that all Fluent Html elements implement the IElement interface (MvcContrib.FluentHtml.Elements.IElement), and the second is that the base System.Web.Mvc.HtmlHelper has been extended with a method called GetUnobtrusiveValidationAttributes which we can use to determine the necessary attributes to include.  
With this knowledge we can make quick work of extending fluent html: public static class FluentHtmlExtensions { public static T IncludeUnobtrusiveValidationAttributes<T>(this T element, HtmlHelper htmlHelper) where T : MvcContrib.FluentHtml.Elements.IElement { IDictionary<string, object> validationAttributes = htmlHelper .GetUnobtrusiveValidationAttributes(element.GetAttr("name"));   foreach (var validationAttribute in validationAttributes) { element.SetAttr(validationAttribute.Key, validationAttribute.Value); }   return element; } } The code is pretty straight forward – basically we use a passed HtmlHelper to get a list of validation attributes for the current element and then add each of the returned attributes to the element to be rendered. The Extension In Action Now let’s get back to the earlier ShipAddress example and see what we’ve accomplished.  First we will use a fluent html helper to render out the ship address text input (this is the ‘before’ case): @this.TextBox("Order.ShipAddress").Label("Ship Address:").Class("class-name") And the resulting HTML: <label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label> <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress" class="class-name"> Now let’s do the same thing except here we’ll use the newly written extension method: @this.TextBox("Order.ShipAddress").Label("Ship Address:") .Class("class-name").IncludeUnobtrusiveValidationAttributes(Html) And the resulting HTML: <label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label> <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" 
id="Order_ShipAddress" data-val-required="The ShipAddress field is required." data-val-length-max="12" data-val-length="The field ShipAddress must be a string with a maximum length of 12." data-val="true" class="class-name"> .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } Excellent!  Now we can continue to use unobtrusive validation and have the flexibility to use ASP.NET MVC’s Html helpers or MvcContrib’s fluent html helpers interchangeably, and every element will participate in client side validation. Wrap Up Overall I’m happy with this solution, although in the best case scenario MvcContrib would know about unobtrusive validation attributes and include them automatically (of course if it is enabled in the web.config file).  I know that MvcContrib allows you to author global behaviors, but that requires changing the base class of your views, which I am not willing to do. Enjoy!

    Read the article

  • CodePlex Daily Summary for Thursday, November 18, 2010

    CodePlex Daily Summary for Thursday, November 18, 2010Popular ReleasesSitefinity Migration Tool: Sitefinity Migration Tool 0.2 Alpha: - Improvements for the Sitefinity RC releaseMiniTwitter: 1.57: MiniTwitter 1.57 ???? ?? ?????????????????? ?? User Streams ????????????????????? ???????????????·??????·???????VFPX: VFP2C32 2.0.0.7: fixed a bug in AAverage - NULL values in the array corrupted the result removed limitation in ASum, AMin, AMax, AAverage - the functions were limited to 65000 elements, now they're limited to 65000 rows ASplitStr now returns a 1 element array with an empty string when an empty string is passed (behaves more like ALINES) internal code cleanup and optimization: optimized FoxArray class - results in a speedup of 10-20% in many functions which return the result in an array - like AProcesses...Microsoft SQL Server Product Samples: Database: AdventureWorks 2008R2 SR1: Sample Databases for Microsoft SQL Server 2008R2 (SR1)This release is dedicated to the sample databases that ship for Microsoft SQL Server 2008R2. See Database Prerequisites for SQL Server 2008R2 for feature configurations required for installing the sample databases. See Installing SQL Server 2008R2 Databases for step by step installation instructions. The SR1 release contains minor bug fixes to the installer used to create the sample databases. There are no changes to the databases them...VidCoder: 0.7.2: Fixed duplicated subtitles when running multiple encodes off of the same title.Razor Templating Engine: Razor Template Engine v1.1: Release 1.1 Changes: ADDED: Signed assemblies with strong name to allow assemblies to be referenced by other strongly-named assemblies. FIX: Filter out dynamic assemblies which causes failures in template compilation. FIX: Changed ASCII to UTF8 encoding to support UTF-8 encoded string templates. FIX: Corrected implementation of TemplateBase adding ITemplate interface.Prism Training Kit: Prism Training Kit - 1.1: This is an updated version of the Prism training Kit that targets Prism 4.0 and fixes the bugs reported in the version 1.0. This release consists of a Training Kit with Labs on the following topics Modularity Dependency Injection Bootstrapper UI Composition Communication Note: Take into account that this is a Beta version. If you find any bugs please report them in the Issue Tracker PrerequisitesVisual Studio 2010 Microsoft Word 2007/2010 Microsoft Silverlight 4 Microsoft S...Craig's Utility Library: Craig's Utility Library Code 2.0: This update contains a number of changes, added functionality, and bug fixes: Added transaction support to SQLHelper. Added linked/embedded resource ability to EmailSender. Updated List to take into account new functions. Added better support for MAC address in WMI classes. Fixed Parsing in Reflection class when dealing with sub classes. Fixed bug in SQLHelper when replacing the Command that is a select after doing a select. Fixed issue in SQL Server helper with regard to generati...MFCMAPI: November 2010 Release: Build: 6.0.0.1023 Full release notes at SGriffin's blog. If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. The 64 bit build will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit build, regardless of the operating system. 
Facebook BadgeDotNetNuke® Community Edition: 05.06.00: Major HighlightsAdded automatic portal alias creation for single portal installs Updated the file manager upload page to allow user to upload multiple files without returning to the file manager page. Fixed issue with Event Log Email Notifications. Fixed issue where Telerik HTML Editor was unable to upload files to secure or database folder. Fixed issue where registration page is not set correctly during an upgrade. Fixed issue where Sendmail stripped HTML and Links from emails...mVu Mobile Viewer: mVu Mobile Viewer 0.7.10.0: Tube8 fix.EPPlus-Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.8.0.1: EPPlus-Create advanced Excel 2007 spreadsheets on the serverNew Features Improved chart support Different chart-types series on the same chart Support for secondary axis and a lot of new properties Better styling Encryption and Workbook protection Table support Import csv files Array formulas ...and a lot of bugfixesAutoLoL: AutoLoL v1.4.2: Added support for more clients (French and Russian) Settings are now stored sepperatly for each user on a computer Auto Login is much faster now Auto Login detects and handles caps lock state properly nowTailspinSpyworks - WebForms Sample Application: TailspinSpyworks-v0.9: Contains a number of bug fixes and additional tutorial steps as well as complete database implementation details.ASP.NET MVC Project Awesome (rich jQuery AJAX helpers): 1.3 and demos: a library with mvc helpers and a demo project that demonstrates an awesome way of doing asp.net mvc. tested on mozilla, safari, chrome, opera, ie 9b/8/7/6 new stuff in 1.3 Autocomplete helper Autocomplete and AjaxDropdown can have parentId and be filled with data depending on the value of the parent PopupForm besides Content("ok") on success can also return Json(data) and use 'data' in a client side function Awesome demo improved (cruder, builder, added service layer)Nearforums - ASP.NET MVC forum engine: Nearforums v4.1: Version 4.1 of the ASP.NET MVC forum engine, with great improvements: TinyMCE added as visual editor for messages (removed CKEditor). Integrated AntiSamy for cleaner html user post and add more prevention to potential injections. Admin status page: a page for the site admin to check the current status of the configuration / db / etc. View Roadmap for more details.UltimateJB: UltimateJB 2.01 PL3 KakaRoto + PSNYes by EvilSperm: Voici une version attendu avec impatience pour beaucoup : - La Version PSNYes pour pouvoir jouer sur le PSN avec une PS3 Jailbreaker. - Pour l'instant le PSNYes n'est disponible qu'avec les PS3 en firmwares 3.41 !!! - La version PL3 KAKAROTO intégre ses dernières modification et prépare a l'intégration du Firmware 3.30 !!! Conclusion : - UltimateJB PSNYes => Valide l'utilisation du PSN : Uniquement compatible avec les 3.41 - ultimateJB DEFAULT => Pas de PSN mais disponible pour les PS3 sui...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.0: Fluent Ribbon Control Suite 2.0(supports .NET 4.0 RTM and .NET 3.5) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples (only for .NET 4.0) Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage) Resizing (ribbon reducing & enlarging principles) Galleries (Gallery in ContextMenu, InRibbonGallery) MVVM (shows how to use this library with Model-View-ViewModel pattern) KeyTips ScreenTips Toolbars ColorGallery NEW! 
*Walkthrough (documenta...patterns & practices: Prism: Prism 4 Documentation: This release contains the Prism 4 documentation in Help 1.0 (CHM) format and PDF format. The documentation is also included with the full download of the guidance. Note: If you cannot view the content of the CHM, using Windows Explorer, select the properties for the file and then click Unblock on the General tab. Note: The PDF version of the guidance is provided for printing and reading in book format. The online version of the Prism 4 documentation can be read here.Farseer Physics Engine: Farseer Physics Engine 3.1: DonationsIf you like this release and would like to keep Farseer Physics Engine running, please consider a small donation. What's new?We bring a lot of new features in Farseer Physics Engine 3.1. Just to name a few: New Box2D core Rope joint added More stable CCD algorithm YuPeng clipper Explosives logic New Constrained Delaunay Triangulation algorithm from the Poly2Tri project. New Flipcode triangulation algorithm. Silverlight 4 samples Silverlight 4 debug view XNA 4.0 relea...New Projectsbizicosoft crm: crmBlog Migrator: The Blog Migrator tool is an all purpose utility designed to help transition a blog from one platform to another. It leverages XML-RPC, BlogML, and WordPress WXR formats. It also provides the ability to "rewrite" your posts on your old blog to point to the new location.bzr-tfs integration tests: Used to test bzr-tfs integrationC++ Open Source Advanced Operating System: C++ Open Source Advanced Operating System is a project which allows starter developers create their own OS. For now it is at a really initial stage.Chavah - internet radio for Yeshua's disciples: Chavah (pronounced "ha-vah") is internet radio for Yeshua's disciples. Inspired by Pandora, Chavah is a Silverlight application that brings community-driven Messianic Jewish tunes for the Lord over the web to your eager ears.CodePoster: An add-in for Visual Studio which allows you to post code directly from Visual Studio to your blog. CRM 2011 Plugin Testing Tools: This solution is meant to make unit testing of plugins in CRM 2011 a simpler and more efficient process. This solution serializes the objects that the CRM server passes to a plugin on execution and then offers a library that allows you to deserialize them in a unit test.Edinamarry Free Tarot Software for Windows: A freeware yet an advanced Tarot reading divinity Software for Psychics and for all those who practice Divinity and Spirituality. This software includes Tarot Spread Designer, Tarot Deck Designer, Tarot Cards Gallery, Client & Customer Profile, Word Editor, Tarot Reader, etc.EPiSocial: Social addons for EPiServer.first team foundation project: this is my first project for the student to teach them about the ms visual studio 201o and team foundation serverFKTdev: Proyecto donde subiremos las pruebas, códigos de ejemplo y demás recursos en nuestro aprendizaje en XNA, hasta que comencemos un desarrollo estable.Gardens Point Component Pascal: Gardens Point Component Pascal is an implementation for .NET of the Component Pascal Language (CP). CP is an object oriented version of Pascal, and shares many design features with Oberon-2. Geoinformatics: geoinformaticsGREENHOUSEMANAGER: GREENHOUSE es un proyecto universitario para manejar los distintos aspectos de un invernadero. El sistema esta desarrollado en c# con interfaz grafica en WPFHousing: This project is only for the asp.net learning. HR-XML.NET: A .NET HR-XML Serialization Library. 
Also supports the Dutch SETU standard and some proprietary extensions used in the Netherlands. The project is currently targeting HR-XML version 2.5 and Setu standard 2008-01.InternetShop2: ShopLesson4: Lesson4 for M.Logical Synchronous Circuit Simulator: As part of a student project, we are trying to make a logic synchronous circuit simulator, with the ultimate goal of simulating a processor and a digital clock running on it.MediaOwl: MediaOwl is a music (albums, artists, tracks, tags) and movie (movies, series, actors, directors, genres) search engine, but above all, it is a Microsoft Silverlight 4 application (C#), that shows how to use Caliburn Micro.N2F Yverdon Solar Flare Reflector: The solar flare reflector provides minimal base-range protection for your N2F Yverdon installation against solar flare interference.Netduino Plus Home Automation Toolkit: The Netduino Plus Home Automation project is designed to proivde a communication platform from various consumer based home automation products that offer a common web service endpoint. This will hopefully create a low cost DIY alternative to the expensive ethernet interfaces.NRapid: NRapidOfficeHelper: Wrapper around the open xml office package. You can easily create xlsx documents based on a template xlsx document and reuse parts from that document, if you mark them as named ranges (i.e. "names").OffProjects: This is a private project which for my dev investigationParis Velib Stations for Windows Mobile: Allow to find the closest Velib bike station in Paris on a Windows Mobile Phone (6.5)/ Permet de trouver la station de Vélib la plus proche dans Paris ainsi que ses informations sur un smartphone Windows MobilePolarConverter: Adjust the measured distance of HRM files created by Polar Heart Rate monitorsSexy Select: a jQuery plugin that allows for easy manipulation of select options. Allows for adding, removing, sorting, validation and custom skinningSilverlight Progress Feedback: Demonstrates how to get progress feedback from slow running WPF processes in Silverlight.Silverlight Tabbed Panel: Tabbed Panel based on Silverlight targeted for both developers and designers audience. Tabbed Control is used in this project. This is a basic application. More features will be added in further releases. XAML has been used to design this panel. slabhid: SLABHIDDevice.dll is used for the SLAB MCU example code on PC, the original source code is written by C++. This wrapper class brings SLABHIDDevice.dll to the .Net world, so it will be possible to make some quick solution for firmware testing purpose.SuperWebSocket: A .NET server side implementation of WebSocket protocol.test1-jjoiner: just a test projectTotem Alpha Developer Framework For .Net: ????tadf??VS.NET???????????,????jtadf???????????????。 ?????????tadf??????????????J2EE???????VS.NET?????????,??tadf?????.NET??,???????????,????????????,??????C#??????????Java???????,??????。 tadf?????????????,????HTML???????????,???????,?????????,?????。tadf???????????,????????RICH UI?????WEB??。??????,??。 tadf?????????????????????,????WEB??????????。???????,???????????,?Ajax???????,????????????????,????????,????????????????。???????????,???????????????????????????????,?xml??????,?????????????xml...Ukázkové projekty: Obsahuje ukázkové projekty uživatele TenCoKaciStromy.WPFDemo: This Peoject is only for the WPF learning.Xinx TimeIt!: TinyAlarm is a small utility that allows you to configure an Alarm so that you can opt for 1. Shutdown computer 2. Play a sound 3. Show a note with sound 4. 
Disconnect a dial-up connection 5. Connect via dial-up connection

    Read the article

  • Is your team a high-performing team?

    As a child I can remember looking out of the car window as my father drove along the Interstate in Florida while seeing prisoners wearing bright orange jump suits and prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain the state highway infrastructure. The prisoner’s primary responsibilities are to pick up trash and debris from the roadway. This is a prime example of a work group or working group used by most prison systems in the United States. Work groups or working groups can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are only established for a short period of time and are dissolved once the desired outcome has been achieved. More often than not group members usually feel as though they are expendable to the group and some even dread that they are even in the group. "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993) So how do you determine that a team is a high-performing team?  This can be determined by three base line criteria that include: consistently high quality output, the promotion of personal growth and well being of all team members, and most importantly the ability to learn and grow as a unit. Initially, a team can successfully create high-performing output without meeting all three criteria, however this will erode over time because team members will feel detached from the group or that they are not growing then the quality of the output will decline. High performing teams are similar to work groups because they both utilize a collection of individuals or entities to accomplish tasks. What distinguish a high-performing team from a work group are its characteristics. High-performing teams contain five core characteristics. These characteristics are what separate a group from a team. The five characteristics of a high-performing team include: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice. A high-performing team is much more than a work group, and typically has a life cycle that can vary from team to team. The standard team lifecycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team lifecycle include: Formulating, Storming, Normalizing, Performing, and Adjourning. The Formulating State of a team is first realized when the team members are first defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader because team members are normally unclear about specific roles, specific obstacles and goals that my lay ahead of them.  Finally, this stage is where most team members initially meet one another prior to working as a team unless the team members already know each other. The Storming State normally arrives directly after the formulation of a new team because there are still a lot of unknowns amongst the newly formed assembly. 
As a general rule most of the parties involved in the team are still getting used to the workload, pace of work, deadlines and the validity of various tasks that need to be performed by the group.  In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of a project, and the leadership abilities of the team leader.  This can be exemplified by looking at the interactions between animals when they first meet.  If we look at a scenario where two people are walking directly toward each other with their dogs. The dogs will automatically enter the Storming State because they do not know the other dog. Typically in this situation, they attempt to define which is more dominating via play or fighting depending on how the dogs interact with each other. Once dominance has been defined and accepted by both dogs then they will either want to play or leave depending on how the dogs interacted and other environmental variables. Once the Storming State has been realized then the Normalizing State takes over. This state is entered by a team once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects.  Typically, participants in the team are filled with energy, and comradery, and a strong alliance with team goals and objectives.  A high school football team is a perfect example of the Normalizing State when they start their season.  The player positions have been assigned, the depth chart has been filled and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities and are united by competition from other teams. The Performing State is achieved by a team when its history, working habits, and culture solidify the team as one working unit. In this state team members can anticipate specific behaviors, attitudes, reactions, and challenges are seen as opportunities and not problems. Additionally, each team member knows their role in the team’s success, and the roles of others. This is the most productive state of a group and is where all the time invested working together really pays off. If you look at an Olympic figure skating team skate you can easily see how the time spent working together benefits their performance. They skate as one unit even though it is comprised of two skaters. Each skater has their routine completely memorized as well as their partners. This allows them to anticipate each other’s moves on the ice makes their skating look effortless. The final state of a team is the Adjourning State. This state is where accomplishments by the team and each individual team member are recognized. Additionally, this state also allows for reflection of the interactions between team members, work accomplished and challenges that were faced. Finally, the team celebrates the challenges they have faced and overcome as a unit. Currently in the workplace teams are divided into two different types: Co-located and Distributed Teams. Co-located teams defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of a team has dominated business in the past due to inadequate technology, which forced workers to primarily interact with one another via face to face meetings.  Team meetings are primarily lead by the person with the highest status in the company. 
Having personally, participated in meetings of this type, usually a select few of the team members dominate the flow of communication which reduces the input of others in group discussions. Since discussions are dominated by a select few individuals the discussions and group discussion are skewed in favor of the individuals who communicate the most in meetings. In addition, Team members might not give their full opinions on a topic of discussion in part not to offend or create controversy amongst the team and can alter decision made in meetings towards those of the opinions of the dominating team members. Distributed teams are by definition spread across an area or subdivided into separate sections. That is exactly what distributed teams when compared to a more traditional team. It is common place for distributed teams to have team members across town, in the next state, across the country and even with the advances in technology over the last 20 year across the world. These teams allow for more diversity compared to the other type of teams because they allow for more flexibility regarding location. A team could consist of a 30 year old male Italian project manager from New York, a 50 year old female Hispanic from California and a collection of programmers from India because technology allows them to communicate as if they were standing next to one another.  In addition, distributed team members consult with more team members prior to making decisions compared to traditional teams, and take longer to come to decisions due to the changes in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team and to notify others of potential issues regarding the work that the team is doing. Virtual teams which are a subset of the distributed team type is changing organizational strategies due to the fact that a team can now in essence be working 24 hrs a day because of utilizing employees in various time zones and locations.  A primary example of this is with customer services departments, a company can have multiple call centers spread across multiple time zones allowing them to appear to be open 24 hours a day while all a employees work from 9AM to 5 PM every day. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee works because they will be a part of a virtual team all that is need is the proper technology to be setup to allow everyone to communicate. In addition to allowing employees to work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, those team members that randomly come into the office can actually share one desk amongst multiple people. This is definitely a cost cutting plus given the current state of the economy. One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to focus on investing in ongoing personal development, provide team members with direction, structure, and resources needed to accomplish their work, make the right interventions at the right time, and help the team manage boundaries between the team and various external parties involved in the teams work. A team leader needs to invest in ongoing personal development in order to effectively manage their team. People have said that attitude is everything; this is very true about leaders and leadership. 
A team takes on the attitudes and behaviors of its leaders. This can potentially harm the team and the team’s output. Leaders must concentrate on self-awareness, and understanding their team’s group dynamics to fully understand how to lead them. In addition, always learning new leadership techniques from other effective leaders is also very beneficial. Providing team members with direction, structure, and resources that they need to accomplish their work collectively sounds easy, but it is not.  Leaders need to be able to effectively communicate with their team on how their work helps the company reach for its organizational vision. Conversely, the leader needs to allow his team to work autonomously within specific guidelines to turn the company’s vision into a reality.  This being said the team must be appropriately staffed according to the size of the team’s tasks and their complexity. These tasks should be clear, and be meaningful to the company’s objectives and allow for feedback to be exchanged with the leader and the team member and the leader and upper management. Now if the team is properly staffed, and has a clear and full understanding of what is to be done; the company also must supply the workers with the proper tools to achieve the tasks that they are asked to do. No one should be asked to dig a hole without being given a shovel.  Finally, leaders must reward their team members for accomplishments that they achieve. Awards could range from just a simple congratulatory email, a party to close the completion of a large project, or other monetary rewards. Managing boundaries is very important for team leaders because it can alter attitudes of team members and can add undue stress to the team which will force them to loose focus on the tasks at hand for the group. Team leaders should promote communication between team members so that burdens are shared amongst the team and solutions can be derived from hearing the opinions of multiple sources. This also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions as to not create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves. This could really increase further and undue interventions by the team leader. Typically, the best time for interventions is when the team is just starting to form so that all unproductive behaviors are removed from the team and that it can retain focus on its agenda. If an intervention is effectively executed the team will feel energized about the work that they are doing, promote communication and interaction amongst the group and improve moral overall. High-performing teams are very import to organizations because they consistently produce high quality output and develop a collective purpose for their work. This drive to succeed allows team members to utilize specific talents allowing for growth in these areas.  In addition, these team members usually take on a sense of ownership with their projects and feel that the other team members are irreplaceable. References: http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-performance Organization. Boston: Harvard Business School.

    Read the article

  • Solaris 11.1: Changes to included FOSS packages

    - by alanc
    Besides the documentation changes I mentioned last time, another place you can see Solaris 11.1 changes before upgrading is in the online package repository, now that the 11.1 packages have been published to http://pkg.oracle.com/solaris/release/, as the “0.175.1.0.0.24.2” branch. (Oracle Solaris Package Versioning explains what each field in that version string means.) When you’re ready to upgrade to the packages from either this repo, or the support repository, you’ll want to first read How to Update to Oracle Solaris 11.1 Using the Image Packaging System by Pete Dennis, as there are a couple issues you will need to be aware of to do that upgrade, several of which are due to changes in the Free and Open Source Software (FOSS) packages included with Solaris, as I’ll explain in a bit. Solaris 11 can update more readily than Solaris 10 In the Solaris 10 and older update models, the way the updates were built constrained what changes we could make in those releases. To change an existing SVR4 package in those releases, we created a Solaris Patch, which applied to a given version of the SVR4 package and replaced, added or deleted files in it. These patches were released via the support websites (originally SunSolve, now My Oracle Support) for applying to existing Solaris 10 installations, and were also merged into the install images for the next Solaris 10 update release. (This Solaris Patches blog post from Gerry Haskins dives deeper into that subject.) Some of the restrictions of this model were that package refactoring, changes to package dependencies, and even just changing the package version number, were difficult to do in this hybrid patch/OS update model. For instance, when Solaris 10 first shipped, it had the Xorg server from X11R6.8. Over the first couple years of update releases we were able to keep it up to date by replacing, adding, & removing files as necessary, taking it all the way up to Xorg server release 1.3 (new version numbering begun after the X11R7 split of the X11 tree into separate modules gave each module its own version). But if you run pkginfo on the SUNWxorg-server package, you’ll see it still displayed a version number of 6.8, confusing users as to which version was actually included. We stopped upgrading the Xorg server releases in Solaris 10 after 1.3, as later versions added new dependencies, such as HAL, D-Bus, and libpciaccess, which were very difficult to manage in this patching model. (We later got libpciaccess to work, but HAL & D-Bus would have been much harder due to the greater dependency tree underneath those.) Similarly, every time the GNOME team looked into upgrading Solaris 10 past GNOME 2.6, they found these constraints made it so difficult it wasn’t worthwhile, and eventually GNOME’s dependencies had changed enough it was completely infeasible. Fortunately, this worked out for both the X11 & GNOME teams, with our management making the business decision to concentrate on the “Nevada” branch for desktop users - first as Solaris Express Desktop Edition, and later as OpenSolaris, so we didn’t have to fight to try to make the package updates fit into these tight constraints. Meanwhile, the team designing the new packaging system for Solaris 11 was seeing us struggle with these problems, and making this much easier to manage for both the development teams and our users was one of their big goals for the IPS design they were working on. 
Now that we’ve reached the first update release to Solaris 11, we can start to see the fruits of their labors, with more FOSS updates in 11.1 than we had in many Solaris 10 update releases, keeping software more up to date with the upstream communities. Of course, just because we can more easily update now, doesn’t always mean we should or will do so, it just removes the package system limitations from forcing the decision for us. So while we’ve upgraded the X Window System in the 11.1 release from X11R7.6 to 7.7, the Solaris GNOME team decided it was not the right time to try to make the jump from GNOME 2 to GNOME 3, though they did update some individual components of the desktop, especially those with security fixes like Firefox. In other parts of the system, decisions as to what to update were prioritized based on how they affected other projects, or what customer requests we’d gotten for them. So with all that background in place, what packages did we actually update or add between Solaris 11.0 and 11.1? Core OS Functionality One of the FOSS changes with the biggest impact in this release is the upgrade from Grub Legacy (0.97) to Grub 2 (1.99) for the x64 platform boot loader. This is the cause of one of the upgrade quirks, since to go from Solaris 11.0 to 11.1 on x64 systems, you first need to update the Boot Environment tools (such as beadm) to a new version that can handle boot environments that use the Grub2 boot loader. System administrators can find the details they need to know about the new Grub in the Administering the GRand Unified Bootloader chapter of the Booting and Shutting Down Oracle Solaris 11.1 Systems guide. This change was necessary to be able to support new hardware coming into the x64 marketplace, including systems using UEFI firmware or booting off disk drives larger than 2 terabytes. For both platforms, Solaris 11.1 adds rsyslog as an optional alternative to the traditional syslogd, and OpenSCAP for checking security configuration settings are compliant with site policies. Note that the support repo actually has newer versions of BIND & fetchmail than the 11.1 release, as some late breaking critical fixes came through from the community upstream releases after the Solaris 11.1 release was frozen, and made their way to the support repository. These are responsible for the other big upgrade quirk in this release, in which to upgrade a system which already installed those versions from the support repo, you need to either wait for those packages to make their way to the 11.1 branch of the support repo, or follow the steps in the aforementioned upgrade walkthrough to let the package system know it's okay to temporarily downgrade those. Developer Stack While Solaris 11.0 included Python 2.7, many of the bundled python modules weren’t packaged for it yet, limiting its usability. For 11.1, many more of the python modules include 2.7 versions (enough that I filtered them out of the below table, but you can always search on the package repository server for them. For other language runtimes and development tools, 11.1 expands the use of IPS mediated links to choose which version of a package is the default when the packages are designed to allow multiple versions to install side by side. For instance, in Solaris 11.0, GNU automake 1.9 and 1.10 were provided, and developers had to run them as either automake-1.9 or automake-1.10. 
In Solaris 11.1, when automake 1.11 was added, also added was a /usr/bin/automake mediated link, which points to the automake-1.11 program by default, but can be changed to another version by running the pkg set-mediator command. Mediated links were also used for the Java runtime & development kits in 11.1, changing the default versions to the Java 7 releases (the 1.7.0.x package versions), while allowing admins to switch links such as /usr/bin/javac back to Java 6 if they need to for their site, to deal with Java 7 compatibility or other issues, without having to update each usage to use the full versioned /usr/jdk/jdk1.6.0_35/bin/javac paths for every invocation. Desktop Stack As I mentioned before, we upgraded from X11R7.6 to X11R7.7, since a pleasant coincidence made the X.Org release dates line up nicely with our feature & code freeze dates for this release. (Or perhaps it wasn’t so coincidental, after all, one of the benefits of being the person making the release is being able to decide what schedule is most convenient for you, and this one worked well for me.) For the table below, I’ve skipped listing the packages in which we use the X11 “katamari” version for the Solaris package version (mainly packages combining elements of multiple upstream modules with independent version numbers), since they just all changed from 7.6 to 7.7. In the graphics drivers, we worked with Intel to update the Intel Integrated Graphics Processor support to support 3D graphics and kernel mode setting on the Ivy Bridge chipsets, and updated Nvidia’s non-FOSS graphics driver from 280.13 to 295.20. Higher up in the desktop stack, PulseAudio was added for audio support, and liblouis for Braille support, and the GNOME applications were built to use them. The Mozilla applications, Firefox & Thunderbird moved to the current Extended Support Release (ESR) versions, 10.x for each, to bring up-to-date security fixes without having to be on Mozilla’s agressive 6 week feature cycle release train. Detailed list of changes This table shows most of the changes to the FOSS packages between Solaris 11.0 and 11.1. As noted above, some were excluded for clarity, or to reduce noise and duplication. All the FOSS packages which didn't change the version number in their packaging info are not included, even if they had updates to fix bugs, security holes, or add support for new hardware or new features of Solaris. 
Package11.011.1 archiver/unrar 3.8.5 4.1.4 audio/sox 14.3.0 14.3.2 backup/rdiff-backup 1.2.1 1.3.3 communication/im/pidgin 2.10.0 2.10.5 compress/gzip 1.3.5 1.4 compress/xz not included 5.0.1 database/sqlite-3 3.7.6.3 3.7.11 desktop/remote-desktop/tigervnc 1.0.90 1.1.0 desktop/window-manager/xcompmgr 1.1.5 1.1.6 desktop/xscreensaver 5.12 5.15 developer/build/autoconf 2.63 2.68 developer/build/autoconf/xorg-macros 1.15.0 1.17 developer/build/automake-111 not included 1.11.2 developer/build/cmake 2.6.2 2.8.6 developer/build/gnu-make 3.81 3.82 developer/build/imake 1.0.4 1.0.5 developer/build/libtool 1.5.22 2.4.2 developer/build/makedepend 1.0.3 1.0.4 developer/documentation-tool/doxygen 1.5.7.1 1.7.6.1 developer/gnu-binutils 2.19 2.21.1 developer/java/jdepend not included 2.9 developer/java/jdk-6 1.6.0.26 1.6.0.35 developer/java/jdk-7 1.7.0.0 1.7.0.7 developer/java/jpackage-utils not included 1.7.5 developer/java/junit 4.5 4.10 developer/lexer/jflex not included 1.4.1 developer/parser/byaccj not included 1.14 developer/parser/java_cup not included 0.10 developer/quilt 0.47 0.60 developer/versioning/git 1.7.3.2 1.7.9.2 developer/versioning/mercurial 1.8.4 2.2.1 developer/versioning/subversion 1.6.16 1.7.5 diagnostic/constype 1.0.3 1.0.4 diagnostic/nmap 5.21 5.51 diagnostic/scanpci 0.12.1 0.13.1 diagnostic/wireshark 1.4.8 1.8.2 diagnostic/xload 1.1.0 1.1.1 editor/gnu-emacs 23.1 23.4 editor/vim 7.3.254 7.3.600 file/lndir 1.0.2 1.0.3 image/editor/bitmap 1.0.5 1.0.6 image/gnuplot 4.4.0 4.6.0 image/library/libexif 0.6.19 0.6.21 image/library/libpng 1.4.8 1.4.11 image/library/librsvg 2.26.3 2.34.1 image/xcursorgen 1.0.4 1.0.5 library/audio/pulseaudio not included 1.1 library/cacao 2.3.0.0 2.3.1.0 library/expat 2.0.1 2.1.0 library/gc 7.1 7.2 library/graphics/pixman 0.22.0 0.24.4 library/guile 1.8.4 1.8.6 library/java/javadb 10.5.3.0 10.6.2.1 library/java/subversion 1.6.16 1.7.5 library/json-c not included 0.9 library/libedit not included 3.0 library/libee not included 0.3.2 library/libestr not included 0.1.2 library/libevent 1.3.5 1.4.14.2 library/liblouis not included 2.1.1 library/liblouisxml not included 2.1.0 library/libtecla 1.6.0 1.6.1 library/libtool/libltdl 1.5.22 2.4.2 library/nspr 4.8.8 4.8.9 library/openldap 2.4.25 2.4.30 library/pcre 7.8 8.21 library/perl-5/subversion 1.6.16 1.7.5 library/python-2/jsonrpclib not included 0.1.3 library/python-2/lxml 2.1.2 2.3.3 library/python-2/nose not included 1.1.2 library/python-2/pyopenssl not included 0.11 library/python-2/subversion 1.6.16 1.7.5 library/python-2/tkinter-26 2.6.4 2.6.8 library/python-2/tkinter-27 2.7.1 2.7.3 library/security/nss 4.12.10 4.13.1 library/security/openssl 1.0.0.5 (1.0.0e) 1.0.0.10 (1.0.0j) mail/thunderbird 6.0 10.0.6 network/dns/bind 9.6.3.4.3 9.6.3.7.2 package/pkgbuild not included 1.3.104 print/filter/enscript not included 1.6.4 print/filter/gutenprint 5.2.4 5.2.7 print/lp/filter/foomatic-rip 3.0.2 4.0.15 runtime/java/jre-6 1.6.0.26 1.6.0.35 runtime/java/jre-7 1.7.0.0 1.7.0.7 runtime/perl-512 5.12.3 5.12.4 runtime/python-26 2.6.4 2.6.8 runtime/python-27 2.7.1 2.7.3 runtime/ruby-18 1.8.7.334 1.8.7.357 runtime/tcl-8/tcl-sqlite-3 3.7.6.3 3.7.11 security/compliance/openscap not included 0.8.1 security/nss-utilities 4.12.10 4.13.1 security/sudo 1.8.1.2 1.8.4.5 service/network/dhcp/isc-dhcp 4.1 4.1.0.6 service/network/dns/bind 9.6.3.4.3 9.6.3.7.2 service/network/ftp (ProFTPD) 1.3.3.0.5 1.3.3.0.7 service/network/samba 3.5.10 3.6.6 shell/conflict 0.2004.9.1 0.2010.6.27 shell/pipe-viewer 1.1.4 1.2.0 shell/zsh 4.3.12 4.3.17 
system/boot/grub 0.97 1.99 system/font/truetype/liberation 1.4 1.7.2 system/library/freetype-2 2.4.6 2.4.9 system/library/libnet 1.1.2.1 1.1.5 system/management/cim/pegasus 2.9.1 2.11.0 system/management/ipmitool 1.8.10 1.8.11 system/management/wbem/wbemcli 1.3.7 1.3.9.1 system/network/routing/quagga 0.99.8 0.99.19 system/rsyslog not included 6.2.0 terminal/luit 1.1.0 1.1.1 text/convmv 1.14 1.15 text/gawk 3.1.5 3.1.8 text/gnu-grep 2.5.4 2.10 web/browser/firefox 6.0.2 10.0.6 web/browser/links 1.0 1.0.3 web/java-servlet/tomcat 6.0.33 6.0.35 web/php-53 not included 5.3.14 web/php-53/extension/php-apc not included 3.1.9 web/php-53/extension/php-idn not included 0.2.0 web/php-53/extension/php-memcache not included 3.0.6 web/php-53/extension/php-mysql not included 5.3.14 web/php-53/extension/php-pear not included 5.3.14 web/php-53/extension/php-suhosin not included 0.9.33 web/php-53/extension/php-tcpwrap not included 1.1.3 web/php-53/extension/php-xdebug not included 2.2.0 web/php-common not included 11.1 web/proxy/squid 3.1.8 3.1.18 web/server/apache-22 2.2.20 2.2.22 web/server/apache-22/module/apache-sed 2.2.20 2.2.22 web/server/apache-22/module/apache-wsgi not included 3.3 x11/diagnostic/xev 1.1.0 1.2.0 x11/diagnostic/xscope 1.3 1.3.1 x11/documentation/xorg-docs 1.6 1.7 x11/keyboard/xkbcomp 1.2.3 1.2.4 x11/library/libdmx 1.1.1 1.1.2 x11/library/libdrm 2.4.25 2.4.32 x11/library/libfontenc 1.1.0 1.1.1 x11/library/libfs 1.0.3 1.0.4 x11/library/libice 1.0.7 1.0.8 x11/library/libsm 1.2.0 1.2.1 x11/library/libx11 1.4.4 1.5.0 x11/library/libxau 1.0.6 1.0.7 x11/library/libxcb 1.7 1.8.1 x11/library/libxcursor 1.1.12 1.1.13 x11/library/libxdmcp 1.1.0 1.1.1 x11/library/libxext 1.3.0 1.3.1 x11/library/libxfixes 4.0.5 5.0 x11/library/libxfont 1.4.4 1.4.5 x11/library/libxft 2.2.0 2.3.1 x11/library/libxi 1.4.3 1.6.1 x11/library/libxinerama 1.1.1 1.1.2 x11/library/libxkbfile 1.0.7 1.0.8 x11/library/libxmu 1.1.0 1.1.1 x11/library/libxmuu 1.1.0 1.1.1 x11/library/libxpm 3.5.9 3.5.10 x11/library/libxrender 0.9.6 0.9.7 x11/library/libxres 1.0.5 1.0.6 x11/library/libxscrnsaver 1.2.1 1.2.2 x11/library/libxtst 1.2.0 1.2.1 x11/library/libxv 1.0.6 1.0.7 x11/library/libxvmc 1.0.6 1.0.7 x11/library/libxxf86vm 1.1.1 1.1.2 x11/library/mesa 7.10.2 7.11.2 x11/library/toolkit/libxaw7 1.0.9 1.0.11 x11/library/toolkit/libxt 1.0.9 1.1.3 x11/library/xtrans 1.2.6 1.2.7 x11/oclock 1.0.2 1.0.3 x11/server/xdmx 1.10.3 1.12.2 x11/server/xephyr 1.10.3 1.12.2 x11/server/xorg 1.10.3 1.12.2 x11/server/xorg/driver/xorg-input-keyboard 1.6.0 1.6.1 x11/server/xorg/driver/xorg-input-mouse 1.7.1 1.7.2 x11/server/xorg/driver/xorg-input-synaptics 1.4.1 1.6.2 x11/server/xorg/driver/xorg-input-vmmouse 12.7.0 12.8.0 x11/server/xorg/driver/xorg-video-ast 0.91.10 0.93.10 x11/server/xorg/driver/xorg-video-ati 6.14.1 6.14.4 x11/server/xorg/driver/xorg-video-cirrus 1.3.2 1.4.0 x11/server/xorg/driver/xorg-video-dummy 0.3.4 0.3.5 x11/server/xorg/driver/xorg-video-intel 2.10.0 2.18.0 x11/server/xorg/driver/xorg-video-mach64 6.9.0 6.9.1 x11/server/xorg/driver/xorg-video-mga 1.4.13 1.5.0 x11/server/xorg/driver/xorg-video-openchrome 0.2.904 0.2.905 x11/server/xorg/driver/xorg-video-r128 6.8.1 6.8.2 x11/server/xorg/driver/xorg-video-trident 1.3.4 1.3.5 x11/server/xorg/driver/xorg-video-vesa 2.3.0 2.3.1 x11/server/xorg/driver/xorg-video-vmware 11.0.3 12.0.2 x11/server/xserver-common 1.10.3 1.12.2 x11/server/xvfb 1.10.3 1.12.2 x11/server/xvnc 1.0.90 1.1.0 x11/session/sessreg 1.0.6 1.0.7 x11/session/xauth 1.0.6 1.0.7 x11/session/xinit 1.3.1 1.3.2 x11/transset 
0.9.1 1.0.0 x11/trusted/trusted-xorg 1.10.3 1.12.2 x11/x11-window-dump 1.0.4 1.0.5 x11/xclipboard 1.1.1 1.1.2 x11/xclock 1.0.5 1.0.6 x11/xfd 1.1.0 1.1.1 x11/xfontsel 1.0.3 1.0.4 x11/xfs 1.1.1 1.1.2 P.S. To get the version numbers for this table, I ran a quick perl script over the output from: % pkg contents -H -r -t depend -a type=incorporate -o fmri \ `pkg contents -H -r -t depend -a type=incorporate -o fmri [email protected],5.11-0.175.1.0.0.24` \ | sort /tmp/11.1 % pkg contents -H -r -t depend -a type=incorporate -o fmri \ `pkg contents -H -r -t depend -a type=incorporate -o fmri [email protected],5.11-0.175.0.0.0.2` \ | sort /tmp/11.0
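As a quick illustration of the mediated links described at the top of this entry, the Java mediation can be inspected and switched with the pkg command. This is a minimal sketch, assuming "java" is the mediator name delivered on your image (check the pkg mediator output first):

    # pkg mediator java                (show the current java mediation)
    # pkg set-mediator -V 1.6 java     (point /usr/bin/java, javac, etc. at the Java 6 packages)
    # pkg unset-mediator java          (return to the system default, Java 7 in 11.1)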


  • Protecting a WebCenter app with OAM 11g - the Webcenter side

    - by Martin Deh
Recently, there was a customer requirement to enable a WebCenter custom portal application to have multiple login-type pages and have the authentication handled through Oracle Access Manager (OAM). As my security colleagues would tell me, this is fully supported through OAM.  Basically, all that would have to be done is to define in OAM the individual resources (directories, URLs, etc.) that needed to be secured. Once that was done, OAM would handle the rest and the user would typically then be prompted by a login page, which was provided by OAM.  I am not going to discuss OAM security in this blog.  In addition, my colleague Chris Johnson (ATEAM security) has already blogged his side of the story here:  http://fusionsecurity.blogspot.com/2012/06/protecting-webcenter-app-with-oam-11g.html .  What I am going to cover is what was done on the WebCenter/ADF side of things.

In the test application, the structure of the pages defined in the pages.xml is as follows.  In this screenshot, notice that "Delegated Security" has been selected, and note the absence of the anonymous-role for the "secured" page (A - B is the same).  In the WebCenter world this essentially means that each of these pages is protected, and only accessible by those defined by the application's "role".  For more information on how WebCenter handles security, which by the way extends from ADF security, please refer to the documentation.

The (default) navigation model was configured.  You can see that with this set up, a user will be able to view the "links", where the links define navigation to the respective page.   Note from this dialog, you could also set some security on each link via the "visible" property.  However, the recommended best practice is to set the permissions through the page hierarchy (pages.xml).  Now based on this set up, the expected behavior is that I could see the link for the secured A page only if I was already authenticated (logged in).  But this is not the use case of the requirement, since any user (anonymous) should be able to view (and click on) the link.  So how is this accomplished?  There is now a patch that enables this.  In addition, the portal application's web.xml will need an additional context parameter:

<context-param>
    <param-name>oracle.webcenter.navigationframework.SECURITY_LEVEL</param-name>
    <param-value>public</param-value>
</context-param>

As Chris mentions in his part of the blog, the code that is responsible for displaying the "links" is based upon the retrieval of the navigation model "node" prettyURL.  The prettyURL is a generated URL that also includes the adf.ctrl-state token, which is very important to the ADF framework runtime.  URLs that are void of this token get new tokens from the ADF runtime.  This can lead to potential memory issues.

<af:forEach var="node" varStatus="vs"
    items="#{navigationContext.defaultNavigationModel.listModel['startNode=/,includeStartNode=false']}">
  <af:spacer width="10" height="10" id="s1"/>
  <af:panelGroupLayout id="pgl2" layout="vertical"
                       inlineStyle="border:blue solid 1px">
    <af:goLink id="pt_gl1" text="#{node.title}"
               destination="#{node.goLinkPrettyUrl}"
               targetFrame="#{node.attributes['Target']}"
               inlineStyle="font-size:large;#{node.selected ?
'font-weight:bold;' : ''}"/>                   <af:spacer width="10" height="10" id="s2"/>                   <af:outputText value="#{node.goLinkPrettyUrl}" id="ot2"                                  inlineStyle="font-size:medium; font-weight:bold;"/>                 </af:panelGroupLayout>               </af:forEach>  So now that the links are visible to all, clicking on a secure link will be intercepted by OAM.  Since the OAM can also configure in the Authentication Scheme, the challenging URL (the login page(s)) can also come from anywhere.  In this case the each login page have been defined in the custom portal application.  This was another requirement as well, since this login page also needed to have ADF based content.  This would not be possible if the login page came from OAM.  The following is the example login page: <?xml version='1.0' encoding='UTF-8'?> <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"           xmlns:f="http://java.sun.com/jsf/core"           xmlns:h="http://java.sun.com/jsf/html"           xmlns:af="http://xmlns.oracle.com/adf/faces/rich">   <jsp:directive.page contentType="text/html;charset=UTF-8"/>   <f:view>     <af:document title="Settings" id="d1">       <af:panelGroupLayout id="pgl1" layout="vertical"/>       <af:outputText value="LOGIN FORM FOR A" id="ot1"/>       <form id="loginform" name="loginform" method="POST"             action="XXXXXXXX:14100/oam/server/auth_cred_submit">         <table>           <tr>             <td align="right">username:</td>             <td align="left">               <input name="username" type="text"/>             </td>           </tr>                      <tr>             <td align="right">password:</td>             <td align="left">               <input name="password" type="password"/>             </td>           </tr>                      <tr>             <td colspan="2" align="center">               <input value=" login " type="submit"/>             </td>           </tr>         </table>         <input name="request_id" type="hidden" value="${param['request_id']}"                id="itsss"/>       </form>     </af:document>   </f:view> </jsp:root> As you can see the code is pretty straight forward.  The most important section is in the form tag, where the submit is a POST to the OAM server.  This example page is mostly HTML, however, it is valid to have adf tags mixed in as well.  As a side note, this solution is really to tailored for a specific requirement.  Normally, there would be only one login page (or dialog/popup), and the OAM challenge resource would be /adfAuthentication.  This maps to the adfAuthentication servlet.  Please see the documentation for more about ADF security here. 
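(For reference, the /adfAuthentication challenge resource mentioned above maps to the standard ADF authentication servlet. Its web.xml declaration looks roughly like the sketch below; treat this as an assumption and verify the servlet class name against your ADF version.)

<servlet>
    <servlet-name>adfAuthentication</servlet-name>
    <servlet-class>oracle.adf.share.security.authentication.AuthenticationServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>adfAuthentication</servlet-name>
    <url-pattern>/adfAuthentication</url-pattern>
</servlet-mapping>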


  • SOA Suite Integration: Part 1: Building a Web Service

    - by Anthony Shorten
Over the next few weeks I will be posting blog entries outlining the SOA Suite integration of the Oracle Utilities Application Framework. This will illustrate how easy it is to integrate by providing some samples. I will use a consistent set of features as examples. The examples will be simple, and while they will not illustrate ALL the possibilities, they will illustrate the relative ease of integration. Think of them as a foundation. You can obviously build upon them. Now, to ease a few customers' minds, this series will certainly feature the latest version of SOA Suite and the latest version of Oracle Utilities Application Framework, but the principles will apply to past versions of both those products. So if you have Oracle SOA Suite 10g or are a customer of Oracle Utilities Application Framework V2.1 or above, most of what I will show you will work with those versions. It is just easier in Oracle SOA Suite 11g and Oracle Utilities Application Framework V4.x.

This first posting will not feature SOA Suite at all but concentrate on the capability of the Oracle Utilities Application Framework to create Web Services you can use for integration. The XML Application Integration (XAI) component of the Oracle Utilities Application Framework allows product objects to be exposed as XML based transactions or as Web Services (or both). XAI was written before Web Services became fashionable and has allowed customers of our products to provide a consistent interface into and out of our product line. XAI has been enhanced over the last few years to take advantage of the maturing Web Services landscape in the marketplace, to the point where it is now easier to integrate with SOA infrastructure.

There are a number of object types that can be exposed as Web Services:

Maintenance Objects – These are the lowest level objects that can be exposed as Web Services. Customers of past versions of the product will be familiar with XAI services based upon Maintenance Objects, as they used to be the only method of generating Web Services. These are still supported for backward compatibility but are starting to become less popular, as they were strict in their structure and were solely attribute based. To generate Maintenance Object based Web Service definitions you need to use the XAI Schema Editor component.

Business Objects – In Oracle Utilities Application Framework V2.1 we introduced the concept of Business Objects. These are site or industry specific objects that are based upon Maintenance Objects. These allow sites to respecify, in configuration, the structure and elements of a Maintenance Object and other Business Objects (they are true objects with support for inheritance, polymorphism, encapsulation etc.). These can be exposed as Web Services.

Business Services – As with Business Objects, we introduced Business Services in Oracle Utilities Application Framework V2.1, which allowed application services and query zones to be expressed as custom services. These can then be exposed as Web Services via the Business Service definition.

Service Scripts – As with Business Objects and Business Services, we introduced Service Scripts in Oracle Utilities Application Framework V2.1. These allow services and/or objects to be combined into complex objects or simply expose common routines as callable scripts. These can also be defined as Web Services.

For the purpose of this series we will restrict ourselves to Business Objects. The techniques can apply to any of the objects discussed above.
Now, lets get to the important bit of this blog post, the creation of a Web Service. To build a Business Object, you first logon to the product and navigate to the Administration Menu by selecting the Admin Menu from the Menu action on left top of the screen (next to Home). A popup menu will appear with the menu’s available. If you do not see the Admin menu then you do not have authority to use it. Here is an example: Navigate to the B menu and select the + symbol next to the Business Object menu item. This indicates that you want to ADD a new Business Object. This menu will appear if you are running Alphabetic mode in your installation (I almost forgot that point). You will be presented with the Business Object maintenance screen. You will fill out the following on the first tab (at a minimum): Business Object – The name of the Business Object. Typically you will make it descriptive and also prefix with CM to denote it as a customization (you can easily find it if you prefix it). As I running this on my personal copy of the product I will use my initials as the prefix and call the sample Web Service “AS-User”. Description – A short description of the object to tell others what it is used for. For my example, I will use “Anthony Shorten’s User Object”. Detailed Description – You can add a long description to help other developers understand your object. I am just going to specify “Anthony Shorten’s Test Object for SOA Suite Integration”. Maintenance Object – As this Business Service is going to be based upon a Maintenance Object I will specify the desired Maintenance Object. In this example, I have decided to use the Framework object USER. Now, I chose this for a number of reasons. It is meaningful, simple and is across all our product lines. I could choose ANY maintenance object I wished to expose (including any custom ones, if I had them). Parent Business Object – If I was not using a Maintenance Object but building a child Business Object against another Business Object, then I would specify the Parent Business Object here. I am not using Parent’s so I will leave this blank. You either use Parent Business Object or Maintenance Object not both. Application Service – Business Objects like other objects are subject to security. You can attach an Application Service to an object to specify which groups of users (remember services are attached to user groups not users) have appropriate access to the object. I will use a default service provided with the product, F1-DFLTS ,as this is just a demonstration so I do not have to be too sophisticated about security. Instance Control – This allows the object to create instances in its objects. You can specify a Business Object purely to hold rules. I am being simple here so I will set it to Allow New Instances to allow the Business Object to be used to create, read, update and delete user records. The rest of the tab I will leave empty as I want this to be a very simple object. Other options allow lots of flexibility. The contents should look like this: Before saving your work, you need to navigate to the Schema tab and specify the contents of your object. I will save some time. When you create an object the schema will only contain the basic root elements of the object (in fact only the schema tag is visible). When you go to the Schema Tab, on the dashboard you will see a BO Schema zone with a solitary button. This will allow you to Generate the Schema for you from our metadata. 
Click on the Generate button to generate a basic schema from the metadata. You will now see a Schema with the element tags and references to the metadata of the Maintenance object (in the mapField attribute). I could spend a while outlining all the ways you can change the schema with defaults, formatting, tagging etc but the online help has plenty of great examples to illustrate this. You can use the Schema Tips zone in the for more details of the available customizations. Note: The tags are generated from the language pack you have installed. The sample is English so the tags are in English (which is the base language of all installations). If you are using a language pack then the tags will be generated in the language of the user that generated the object. At this point you can save your Business Object by pressing the Save action. At this point you have a basic Business Object based on the USER maintenance object ready for use but it is not defined as a Web Service yet. To do this you need to define the newly created Business Object as an XAI Inbound Service. The easiest and quickest way is to select + off the XAI Inbound Service off the context menu on the Business Object maintenance screen. This will prepopulate the service definition with the following: Adapter – This will be set to Business Adaptor. This indicates that the service is either Business Object, Business Service or Service Script based. Schema Type – Whether the object is a Business Object, Business Service or Service Script. In this case it is a Business Object. Schema Name – The name of the object. In this case it is the Business Object AS-User. Active – Set to Yes. This means the service is available upon startup automatically. You can enable and disable services as needed. Transaction Type – A default transaction type as this is Business Object Service. More about this in later postings. In our case we use the default Read. This means that if we only specify data and not a transaction type then the product will assume you want to issue a read against the object. You need to fill in the following: XAI Inbound Service – The name of the Web Service. Usually people use the same name as the underlying object , in the case of this example, but this can match your sites interfacing standards. By the way you can define multiple XAI Inbound Services/Web Services against the same object if you want. Description and Detail Description – Documentation for your Web Service. I just supplied some basic documentation for this demonstration. You can now save the service definition. Note: There are lots of other options on this screen that allow for behavior of your service to be specified. I will leave them blank for now. When you save the service you are issued with two new pieces of information. XAI Inbound Service Id is a randomly generated identifier used internally by the XAI Servlet. WSDL URL is the WSDL standard URL used for integration. We will take advantage of that in later posts. An example of the definition is shown below: Now you have defined the service but it will only be available when the next server restart or when you flush the data cache. XAI Inbound Services are cached for performance so the cache needs to be told of this new service. To refresh the cache you can use the Admin –> X –> XAI Command menu item. From the command dropdown select Refresh Registry and press Send Command. You will see an XML of the command sent to the server (the presence of the XML means it is finished). 
If you have an error around the authorization, then check your default user and password settings on the XAI Options menu item. Be careful with flushing the cache as the cache is shared (unless of course you are the only Web Service user on the system – In that case it only affects you). The Web Service is NOW available to be used. To perform a simple test of your new Web Service, navigate to the Admin –> X –> XAI Submission menu item. You will see an open XML request tab. You need to type in the request XML you want to test in the Main tab. The first tag is the XAI Inbound Service Name and the elements are as per your schema (minus the schema tag itself as that is only used internally). My example is as follows (I want to return the details of user SYSUSER) – Remember to close tags. Hitting the Save button will issue the XML and return the response according to the Business Object schema. Now before you panic, you noticed that it did not ask for credentials. It propagates the online credentials to the service call on this function. You now have a Web Service you can use for integration. We will reuse this information in subsequent posts. The process I just described can be used for ANY object in the system you want to expose. This whole process at a minimum can take under a minute. Obviously I only showed the basics but you can at least get an appreciation of the ease of defining a Web Service (just by using a browser). The next posts now build upon this. Hope you enjoyed the post.
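(For reference, since the screenshot of the submission is not reproduced here: a minimal request for the AS-User service defined above would look something like the sketch below. The element names are assumptions for illustration only; the actual tags must match the schema generated for your Business Object.)

<AS-User>
    <user>SYSUSER</user>
</AS-User>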


  • Building a jQuery Plug-in to make an HTML Table scrollable

    - by Rick Strahl
Today I got a call from a customer and we were looking over an older application that uses a lot of tables to display financial and other assorted data. The application is mostly meta-data driven, with lots of layout formatting automatically driven through meta data rather than through explicit hand coded HTML layouts. One of the problems in this app is tables that display a non-fixed amount of data. The users of this app don't want to use paging to see more data, but instead want to display overflow data using a scrollbar. Many of the forms are very densely populated, often with multiple data tables that display a few rows of data in the UI at the most. This sort of layout does not lend itself well to paging, but works much better with scrollable data.

Unfortunately scrollable tables are not easily created. HTML Tables are mangy beasts, as anybody who's done any sort of Web development knows. Tables are finicky when it comes to styling and layout, and they have many funky quirks, especially when it comes to scrolling both of the table rows themselves or even the child columns. There's no built-in way to make tables scroll and to lock headers while you do, and while you can embed a table (or anything really) into a scrolling div with something like this:

<div style="position:relative; overflow: hidden; overflow-y: scroll; height: 200px; width: 400px;">
  <table id="table" style="width: 100%" class="blackborder" >
    <thead>
      <tr class="gridheader">
        <th>Column 1</th>
        <th>Column 2</th>
        <th>Column 3</th>
        <th>Column 4</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td>Column 1 Content</td>
        <td>Column 2 Content</td>
        <td>Column 3 Content</td>
        <td>Column 4 Content</td>
      </tr>
      <tr>
        <td>Column 1 Content</td>
        <td>Column 2 Content</td>
        <td>Column 3 Content</td>
        <td>Column 4 Content</td>
      </tr>
      …
    </tbody>
  </table>
</div>

that won't give a very satisfying visual experience: both the header and body scroll, which looks odd. You lose context as soon as the header scrolls off the top, and when you reach the bottom of the list the bottom outline of the table shows, which also looks off. The side bar shows all the way down the length of the table, yet another visual miscue. In a pinch this will work, but it's ugly.

What's out there?

Before we go further here you should know that there are a few capable grid plug-ins out there already. Among them:

Flexigrid (can work off any table as well as with AJAX data)
jQuery Scrollable Table Plug-in (features similar to what I need, but not quite)
jqGrid (mostly an Ajax Grid which is very powerful and works very well)

But in the end none of them fit the bill of what I needed in this situation. All of these require custom CSS and some of them are fairly complex to restyle. Others are AJAX only or work better with AJAX loaded data. However, I need to actually try (as much as possible) to maintain the original styling of the tables without requiring extensive re-styling.

Building the makeTableScrollable() Plug-in

To make a table scrollable requires rearranging the table a bit. In the plug-in I built, I create two <div> tags and split the table into two: one for the table header and one for the table body. The bottom <div> tag then contains only the table's row data and can be scrolled while the header stays fixed. Using jQuery the basic idea is pretty simple: you create the divs, copy the original table into the bottom one, then clone the table, clear all its content, append the <thead> section into the new table, and then copy that table into the second header <div>. Easy as pie, right?
Unfortunately it's a bit more complicated than that as it's tricky to get the width of the table right to account for the scrollbar (by adding a small column) and making sure the borders properly line up for the two tables. A lot of style settings have to be made to ensure the table is a fixed size, to remove and reattach borders, to add extra space to allow for the scrollbar and so forth. The end result of my plug-in is a table with a scrollbar. Using the same table I used earlier the result looks like this: To create it, I use the following jQuery plug-in logic to select my table and run the makeTableScrollable() plug-in against the selector: $("#table").makeTableScrollable( { cssClass:"blackborder"} ); Without much further ado, here's the short code for the plug-in: (function ($) { $.fn.makeTableScrollable = function (options) { return this.each(function () { var $table = $(this); var opt = { // height of the table height: "250px", // right padding added to support the scrollbar rightPadding: "10px", // cssclass used for the wrapper div cssClass: "" } $.extend(opt, options); var $thead = $table.find("thead"); var $ths = $thead.find("th"); var id = $table.attr("id"); var cssClass = $table.attr("class"); if (!id) id = "_table_" + new Date().getMilliseconds().ToString(); $table.width("+=" + opt.rightPadding); $table.css("border-width", 0); // add a column to all rows of the table var first = true; $table.find("tr").each(function () { var row = $(this); if (first) { row.append($("<th>").width(opt.rightPadding)); first = false; } else row.append($("<td>").width(opt.rightPadding)); }); // force full sizing on each of the th elemnts $ths.each(function () { var $th = $(this); $th.css("width", $th.width()); }); // Create the table wrapper div var $tblDiv = $("<div>").css({ position: "relative", overflow: "hidden", overflowY: "scroll" }) .addClass(opt.cssClass); var width = $table.width(); $tblDiv.width(width).height(opt.height) .attr("id", id + "_wrapper") .css("border-top", "none"); // Insert before $tblDiv $tblDiv.insertBefore($table); // then move the table into it $table.appendTo($tblDiv); // Clone the div for header var $hdDiv = $tblDiv.clone(); $hdDiv.empty(); var width = $table.width(); $hdDiv.attr("style", "") .css("border-bottom", "none") .width(width) .attr("id", id + "_wrapper_header"); // create a copy of the table and remove all children var $newTable = $($table).clone(); $newTable.empty() .attr("id", $table.attr("id") + "_header"); $thead.appendTo($newTable); $hdDiv.insertBefore($tblDiv); $newTable.appendTo($hdDiv); $table.css("border-width", 0); }); } })(jQuery); Oh sweet spaghetti code :-) The code starts out by dealing the parameters that can be passed in the options object map: height The height of the full table/structure. The height of the outside wrapper container. Defaults to 200px. rightPadding The padding that is added to the right of the table to account for the scrollbar. Creates a column of this width and injects it into the table. If too small the rightmost column might get truncated. if too large the empty column might show. cssClass The CSS class of the wrapping container that appears to wrap the table. If you want a border around your table this class should probably provide it since the plug-in removes the table border. The rest of the code is obtuse, but pretty straight forward. It starts by creating a new column in the table to accommodate the width of the scrollbar and avoid clipping of text in the rightmost column. 
The width of the columns is explicitly set in the header elements to force the size of the table to be fixed and to provide the same sizing when the THEAD section is moved to a new copied table later. The table wrapper div is created, formatted and the table is moved into it. The new wrapper div is cloned for the header wrapper and configured. Finally the actual table is cloned and cleared of all elements. The original table's THEAD section is then moved into the new table. At last the new table is added to the header <div>, and the header <div> is inserted before the table wrapper <div>. I'm always amazed how easy jQuery makes it to do this sort of re-arranging, and given of what's happening the amount of code is rather small. Disclaimer: Your mileage may vary A word of warning: I make no guarantees about the code above. It's a first cut and I provided this here mainly to demonstrate the concepts of decomposing and reassembling an HTML layout :-) which jQuery makes so nice and easy. I tested this component against the typical scenarios we plan on using it for which are tables that use a few well known styles (or no styling at all). I suspect if you have complex styling on your <table> tag that things might not go so well. If you plan on using this plug-in you might want to minimize your styling of the table tag and defer any border formatting using the class passed in via the cssClass parameter, which ends up on the two wrapper div's that wrap the header and body rows. There's also no explicit support for footers. I rarely if ever use footers (when not using paging that is), so I didn't feel the need to add footer support. However, if you need that it's not difficult to add - the logic is the same as adding the header. The plug-in relies on a well-formatted table that has THEAD and TBODY sections along with TH tags in the header. Note that ASP.NET WebForm DataGrids and GridViews by default do not generate well-formatted table HTML. You can look at my Adding proper THEAD sections to a GridView post for more info on how to get a GridView to render properly. The plug-in has no dependencies other than jQuery. Even with the limitations in mind I hope this might be useful to some of you. I know I've already identified a number of places in my own existing applications where I will be plugging this in almost immediately. Resources Download Sample and Plug-in code Latest version in the West Wind Web & AJAX Toolkit Repository © Rick Strahl, West Wind Technologies, 2005-2011Posted in jQuery  HTML  ASP.NET  


  • On checking kernel parms HP-UX & Oracle 10.2

    - by [email protected]
Hi, just did this little script for a customer wanting to investigate whether the kernel parameters of their HP-UX machines fit the Oracle 10.2 specifications or not (looks like someone checked the "User Verified" box at database installation time on some prerequisites). Just want to share (it's in Spanish). Hope it helps! --L ### ### check_kernel_parms.sh ### ### ### ###    NAME ###      check_kernel_parms.sh ### ###    DESCRIPTION ###      Comprueba los parametros de kernel ### ###    NOTES ###     Valido para versiones de SO HP-UX y Oracle 10.2 ###     ### ###    MODIFIED   (MM/DD/YY) ###    lvelasco    05/20/10 - Creacion     V_DATE=`/usr/bin/date +%Y%m%d_%H%M%S` if [ "${1}" != 'sin_traza' ]; then   V_FICHERO_LOG=`dirname $0`/`basename $0`.${V_DATE}.log   exec 4>&1   tee ${V_FICHERO_LOG} >&4 |&   exec 1>&p 2>&1 fi   echo "${V_DATE}- check_params.sh *******************************************" echo "Comenzado ejecucion de chequeo de parametros de kernel en maquina `hostname`" echo "****************************************************************************"   UX_VERS=`uname -a | awk '{print $3}'` KCTUNE=/usr/sbin/kctune MEM_TOTAL_MB=`./check_mem` <-- This should return the physical mem of the machine in bytes MEM_TOTAL=$((MEM_TOTAL_MB*1024))     NPROC=$(${KCTUNE} | grep nproc | grep -v '(nproc' | grep -v *nproc | awk '{print $2}') if [ ${NPROC} -lt 4096 ] then        echo "[FAILED] Information: El parametro nproc con valor actual ${NPROC} debe tener una valor superior a 4096" else        echo "[OK] Information: El parametro nproc con valor actual ${NPROC} debe tener una valor superior a 4096" fi     KSI_ALLOC_MAX_T=$(${KCTUNE} | grep ksi_alloc | awk '{print $2}') KSI_ALLOC_MAX=$((NPROC*8)) if [ ${KSI_ALLOC_MAX_T} -lt ${KSI_ALLOC_MAX} ] then        echo "[FAILED] Information: El parametro ksi_alloc_max con valor actual ${KSI_ALLOC_MAX_T} debe tener una valor superior a ${KSI_ALLOC_MAX}" else        echo "[OK] Information: El parametro ksi_alloc_max con valor actual ${KSI_ALLOC_MAX_T} debe tener una valor superior a ${KSI_ALLOC_MAX}" fi       EXECUTABLE_STACK=$(${KCTUNE} | grep executable_stack | awk '{print $2}') if [ ${EXECUTABLE_STACK} -ne 0 ] then        echo "[FAILED] Information: El parametro executable_stack con valor actual ${EXECUTABLE_STACK} debe tener una valor igual a 0" else        echo "[OK] Information: El parametro executable_stack con valor actual ${EXECUTABLE_STACK} debe tener una valor igual a 0" fi       MAX_THREAD_PROC=$(${KCTUNE} | grep max_thread_proc | awk '{print $2}') if [ ${MAX_THREAD_PROC} -lt 1024 ] then        echo "[FAILED] Information: El parametro max_thread_proc con valor actual ${MAX_THREAD_PROC} debe tener una valor superior a 1024" else        echo "[OK] Information: El parametro max_thread_proc con valor actual ${MAX_THREAD_PROC} debe tener una valor superior a 1024" fi     MAXDSIZ=$(${KCTUNE} | grep 'maxdsiz ' | awk '{print $2}') if [ ${MAXDSIZ} -lt 1073741824 ] then        echo "[FAILED] Information: El parametro maxdsiz con valor actual ${MAXDSIZ} debe tener una valor superior a
1073741824" else        echo "[OK] Information: El parametro maxdsiz con valor actual ${MAXDSIZ} debe tener una valor superior a 1073741824" fi     MAXDSIZ_64BIT=$(${KCTUNE} | grep maxdsiz_64bit | awk '{print $2}') if [ ${MAXDSIZ} -lt 2147483648 ] then        echo "[FAILED] Information: El parametro maxdsiz_64bit con valor actual ${MAXDSIZ_64BIT} debe tener una valor superior a 2147483648" else        echo "[OK] Information: El parametro maxdsiz_64bit con valor actual ${MAXDSIZ_64BIT} debe tener una valor superior a 2147483648" fi   MAXSSIZ=$(${KCTUNE} | grep 'maxssiz ' | awk '{print $2}') if [ ${MAXSSIZ} -lt 134217728 ] then        echo "[FAILED] Information: El parametro maxssiz con valor actual ${MAXSSIZ} debe tener una valor superior a 134217728" else        echo "[OK] Information: El parametro maxssiz con valor actual ${MAXSSIZ} debe tener una valor superior a 134217728" fi     MAXSSIZ_64BIT=$(${KCTUNE} | grep maxssiz_64bit | awk '{print $2}') if [ ${MAXSSIZ} -lt 1073741824 ] then        echo "[FAILED] Information: El parametro maxssiz_64bit con valor actual ${MAXSSIZ} debe tener una valor superior a 1073741824" else        echo "[OK] Information: El parametro maxssiz_64bit con valor actual ${MAXSSIZ} debe tener una valor superior a 1073741824" fi     MAXUPRC_T=$(${KCTUNE} | grep maxuprc | awk '{print $2}') MAXUPRC=$(((NPROC*9)/10)) if [ ${MAXUPRC_T} -lt ${MAXUPRC} ] then        echo "[FAILED] Information: El parametro maxuprc con valor actual ${MAXUPRC_T} debe tener una valor superior a ${MAXUPRC}" else        echo "[OK] Information: El parametro maxuprc con valor actual ${MAXUPRC_T} debe tener una valor superior a ${MAXUPRC}" fi   MSGMNI=$(${KCTUNE} | grep msgmni | awk '{print $2}') if [ ${MSGMNI} -lt ${NPROC} ] then        echo "[FAILED] Information: El parametro msgni con valor actual ${MSGMNI} debe tener una valor superior a ${NPROC}" else        echo "[OK] Information: El parametro msgni con valor actual ${MSGMNI} debe tener una valor superior a ${NPROC}" fi     if [ ${UX_VERS} = "B.11.23" ] then        MSGSEG=$(${KCTUNE} | grep msgseg | awk '{print $2}')        if [ ${MSGSEG} -lt 32767 ]        then        echo "[FAILED] Information: El parametro msgseg con valor actual ${MSGSEG} debe tener una valor superior a 32767"        else        echo "[OK] Information: El parametro msgseg con valor actual ${MSGSEG} debe tener una valor superior a 32767"        fi fi   MSGTQL=$(${KCTUNE} | grep msgtql | awk '{print $2}') if [ ${MSGTQL} -lt ${NPROC} ] then        echo "[FAILED] Information: El parametro msgtql con valor actual ${MSGTQL} debe tener una valor superior a ${NPROC}" else        echo "[OK] Information: El parametro msgtql con valor actual ${MSGTQL} debe tener una valor superior a ${NPROC}" fi       if [ ${UX_VERS} = "B.11.23" ] then        NFILE_T=$(${KCTUNE} | grep nfile | awk '{print $2}')        NFILE=$((15*NPROC+2048))        if [ ${NFILE_T} -lt ${NFILE} ]        then        echo "[FAILED] Information: El parametro nfile con valor actual ${NFILE_T} debe tener una valor superior a ${NFILE}"        else        echo "[OK] Information: El parametro nfile con valor actual ${NFILE_T} debe tener una valor superior a ${NFILE}"        fi fi     NFLOCKS=$(${KCTUNE} | grep nflocks | awk '{print $2}') if [ ${NFLOCKS} -lt ${NPROC} ] then        echo "[FAILED] Information: El parametro nflocks con valor actual ${NFLOCKS} debe tener una valor superior a ${NPROC}" else        echo "[OK] Information: El parametro nflocks con valor actual ${NFLOCKS} debe tener una valor superior a ${NPROC}" 
fi   NINODE_T=$(${KCTUNE} | grep ninode | grep -v vx | awk '{print $2}') NINODE=$((8*NPROC+2048)) if [ ${NINODE_T} -lt ${NINODE} ] then        echo "[FAILED] Information: El parametro ninode con valor actual ${NINODE_T} debe tener una valor superior a ${NINODE}" else        echo "[OK] Information: El parametro ninode con valor actual ${NINODE_T} debe tener una valor superior a ${NINODE}" fi     NCSIZE_T=$(${KCTUNE} | grep ncsize | awk '{print $2}') NCSIZE=$((NINODE+1024)) if [ ${NCSIZE_T} -lt ${NCSIZE} ] then        echo "[FAILED] Information: El parametro ncsize con valor actual ${NCSIZE_T} debe tener una valor superior a ${NCSIZE}" else        echo "[OK] Information: El parametro ncsize con valor actual ${NCSIZE_T} debe tener una valor superior a ${NCSIZE}" fi     if [ ${UX_VERS} = "B.11.23" ] then        MSGMAP_T=$(${KCTUNE} | grep msgmap | awk '{print $2}')        MSGMAP=$((MSGTQL+2))        if [ ${MSGMAP_T} -lt ${MSGMAP} ]        then        echo "[FAILED] Information: El parametro msgmap con valor actual ${MSGMAP_T} debe tener una valor superior a ${MSGMAP}"        else        echo "[OK] Information: El parametro msgmap con valor actual ${MSGMAP_T} debe tener una valor superior a ${MSGMAP}"        fi fi   NKTHREAD_T=$(${KCTUNE} | grep nkthread | awk '{print $2}') NKTHREAD=$((((NPROC*7)/4)+16)) if [ ${NKTHREAD_T} -lt ${NKTHREAD} ] then        echo "[FAILED] Information: El parametro nkthread con valor actual ${NKTHREAD_T} debe tener una valor superior a ${NKTHREAD}" else        echo "[OK] Information: El parametro nkthread con valor actual ${NKTHREAD_T} debe tener una valor superior a ${NKTHREAD}" fi     SEMMNI=$(${KCTUNE} | grep semmni | grep -v '(semm' | awk '{print $2}') if [ ${SEMMNI} -lt ${NPROC} ] then        echo "[FAILED] Information: El parametro semmni con valor actual ${SEMMNI} debe tener una valor superior a ${NPROC}" else        echo "[OK] Information: El parametro semmni con valor actual ${SEMMNI} debe tener una valor superior a ${NPROC}" fi     SEMMNS_T=$(${KCTUNE} | grep semmns | awk '{print $2}') SEMMNS=$((SEMMNI+2)) if [ ${SEMMNS_T} -lt ${SEMMNI} ] then        echo "[FAILED] Information: El parametro semmns con valor actual ${SEMMNS_T} debe tener una valor superior a ${SEMMNS}" else        echo "[OK] Information: El parametro semmns con valor actual ${SEMMNS_T} debe tener una valor superior a ${SEMMNS}" fi   SEMMNU_T=$(${KCTUNE} | grep semmnu | awk '{print $2}') SEMMNU=$((NPROC-4)) if [ ${SEMMNU_T} -lt ${SEMMNU} ] then        echo "[FAILED] Information: El parametro semmnu con valor actual ${SEMMNU_T} debe tener una valor superior a ${SEMMNU}" else        echo "[OK] Information: El parametro semmnu con valor actual ${SEMMNU_T} debe tener una valor superior a ${SEMMNU}" fi     if [ ${UX_VERS} = "B.11.23" ] then        SEMVMX=$(${KCTUNE} | grep msgseg | awk '{print $2}')        if [ ${SEMVMX} -lt 32767 ]        then        echo "[FAILED] Information: El parametro semvmx con valor actual ${SEMVMX} debe tener una valor superior a 32767"        else        echo "[OK] Information: El parametro semvmx con valor actual ${SEMVMX} debe tener una valor superior a 32767"        fi fi     SHMMNI=$(${KCTUNE} | grep shmmni | awk '{print $2}') if [ ${SHMMNI} -lt 512 ] then        echo "[FAILED] Information: El parametro shmmni con valor actual ${SHMMNI} debe tener una valor superior a 512" else        echo "[OK] Information: El parametro shmmni con valor actual ${SHMMNI} debe tener una valor superior a 512" fi     SHMSEG=$(${KCTUNE} | grep shmseg | awk '{print $2}') if [ ${SHMSEG} 
-lt 120 ] then        echo "[FAILED] Information: El parametro shmseg con valor actual ${SHMSEG} debe tener una valor superior a 120" else        echo "[OK] Information: El parametro shmseg con valor actual ${SHMSEG} debe tener una valor superior a 120" fi   VPS_CEILING=$(${KCTUNE} | grep vps_ceiling | awk '{print $2}') if [ ${VPS_CEILING} -lt 64 ] then        echo "[FAILED] Information: El parametro vps_ceiling con valor actual ${VPS_CEILING} debe tener una valor superior a 64" else        echo "[OK] Information: El parametro vps_ceiling con valor actual ${VPS_CEILING} debe tener una valor superior a 64" fi   SHMMAX=$(${KCTUNE} | grep shmmax | awk '{print $2}') if [ ${SHMMAX} -lt ${MEM_TOTAL} ] then         echo "[FAILED] Information: El parametro shmmax con valor actual ${SHMMAX} debe tener una valor superior a ${MEM_TOTAL}" else         echo "[OK] Information: El parametro shmmax con valor actual ${SHMMAX} debe tener una valor superior a ${MEM_TOTAL}" fi exit 0  
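A quick usage note, based on the logic at the top of the script: it expects a ./check_mem helper in the same directory that prints the machine's physical memory, and unless the first argument is 'sin_traza' it tees its output to a timestamped log file next to the script. For example:

    % ./check_kernel_parms.sh              (writes ./check_kernel_parms.sh.YYYYMMDD_HHMMSS.log)
    % ./check_kernel_parms.sh sin_traza    (prints to the terminal only, no log file)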


  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments.  In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE.  In this segment, I will lay the groundwork for the modeling concepts.  First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing. Oracle BI EE Query Cycle The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources.  There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources. Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves.  Most of them create queries without knowing it, by picking a dashboard page and some filters.  Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations. The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business focused measures and dimensional attributes that business people can use in their analyses and dashboards.   After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts. The CEIM is a model that controls the processing of the BI Server.  It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis.  It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules.  The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.     Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL.  The API simply uses ODBC or JDBC to pass the query to the BI Server.  Presentation services writes the LSQL query in terms of the simplified objects presented to the users.  The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources.  For example, the LSQL on the left below was rewritten into the physical SQL for an Oracle 11g database on the right. 
Logical SQL   Physical SQL SELECT "D0 Time"."T02 Per Name Month" saw_0, "D4 Product"."P01  Product" saw_1, "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2 FROM "Sample Sales" ORDER BY saw_0, saw_1       WITH SAWITH0 AS ( select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,      sum(T835.Units) as c3, T879.Prod_Key as c4 from      Product T879 /* A05 Product */ ,      Time_Mth T986 /* A08 Time Mth */ ,      FactsRev T835 /* A11 Revenue (Billed Time Join) */ where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid) group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month ) select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3 from SAWITH0 order by c1, c2   Probably everybody reading this blog can write SQL or MDX.  However, the trick in designing the CEIM is that you are modeling a query-generation factory.  Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests.  This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.   The Structure of the Common Enterprise Information Model (CEIM) The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result.  The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request.  When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries.  That's the left to right flow in the diagram below.  When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.   Business Model Think of the business model as the heart of the CEIM you are designing.  This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole.  It also provides the baseline business-friendly names and user-readable dictionary.  For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well.  This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.  
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next.  The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model. Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time.  This is important for federation, where a given logical object can map to several different physical objects in different databases.  It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products. Your requirements process with your user community will mostly affect the business model.  This is where you will define most of the things they specifically ask for, such as metric definitions.  For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer. Physical Model The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries.  Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time. Each physical data source will have its own physical model, or "database" object in the CEIM.  The shape of each physical model matches the shape of its physical source.  In other words, if the source is normalized relational, the physical model will mimic that normalized shape.  If it is a hypercube, the physical model will have a hypercube shape.  If it is a flat file, it will have a denormalized tabular shape. To aid in query optimization, the physical layer also tracks the specifics of the database brand and release.  This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer.  For most data sources, native APIs are used to further optimize performance and functionality. The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics.  This encapsulation is another enabler of packaged BI applications and federation.  It is also key to hiding the complex shapes and relationships in the physical sources from the end users.  Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database.  The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same. Mappings Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.  
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used.  These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources.  These mappings are usually the most sophisticated part of the CEIM. Presentation You might think of the presentation layer as a set of very simple relational-like views into the business model.  Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns.  For business users, presentation services interprets these as subject areas, folders and columns, respectively.  (Note that in 10g, subject areas were called presentation catalogs in the CEIM.  In this blog, I will stick to 11g terminology.)  Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio). The purpose of the presentation layer is to specialize the business model for different categories of users.  Based on a user's role, they will be restricted to specific subject areas, tables and columns for security.  The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users.  Customized names and descriptions can be used to override the business model names for a specific audience.  Variables in the object names can be used for localization. For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables.  The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer.  In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model. Other Model Objects There are some objects that apply to multiple layers.  These include security-related objects, such as application roles, users, data filters, and query limits (governors).  There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis.  Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.   The Query Factory At this point, you should have a grasp on the query factory concept.  When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.  
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.
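As a concrete illustration of what "querying the presentation layer" means from a client's point of view, the sketch below issues a logical SQL request to the BI Server over its ODBC interface from a .NET console program. It is only a sketch: the DSN name, credentials, subject area ("Sales - Orders") and column names are hypothetical placeholders rather than part of any shipped model, and it assumes an ODBC data source has already been configured for the BI Server.

    using System;
    using System.Data.Odbc;

    class LogicalSqlSample
    {
        static void Main()
        {
            // Hypothetical DSN pointing at the Oracle BI Server ODBC interface.
            using (var connection = new OdbcConnection("DSN=BIServerDSN;UID=weblogic;PWD=secret"))
            {
                connection.Open();

                // Logical SQL references presentation-layer objects only:
                // a subject area, its folders (tables) and columns.
                // The names below are made up for illustration.
                var logicalSql = "SELECT \"Time\".\"Year\", \"Facts\".\"Revenue\" FROM \"Sales - Orders\"";

                using (var command = new OdbcCommand(logicalSql, connection))
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // By the time rows arrive here, the BI Server has already rewritten
                        // the logical request into physical SQL and/or MDX per the mappings.
                        Console.WriteLine("{0}\t{1}", reader[0], reader[1]);
                    }
                }
            }
        }
    }

The client never sees which physical sources were involved; the rewrite from this single logical request into one or more physical queries is driven entirely by the mappings described above.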

    Read the article

  • Is Social Media The Vital Skill You Aren’t Tracking?

    - by HCM-Oracle
    By Mark Bennett - Originally featured in Talent Management Excellence

The ever-increasing presence of the workforce on social media presents opportunities as well as risks for organizations. While on the one hand we read about social media embarrassments happening to organizations, on the other we see that social media activities by workers and candidates can enhance a company's brand and provide insight into which individuals are, or can become, influencers in the social media sphere. HR can play a key role in helping organizations get the most value out of the activities and presence of workers and candidates, while at the same time helping to manage the risks that come with the permanence and viral nature of social media.

What is Missing from Understanding Our Workforce?

"If only HP knew what HP knows, we would be three times more productive." - Lew Platt, Former Chairman, President, CEO, Hewlett-Packard

What Lew Platt recognized was that organizations only have a partial understanding of what their workforce is capable of. This lack of understanding impacts the company in several negative ways:

1. A particular skill that the company needs to access in one part of the organization might exist somewhere else, but there is no record that the skill exists, so the need is unfulfilled.

2. As market conditions change rapidly, the company needs to know its strategic options, but some options are missed entirely because the company doesn't know that sufficient capability already exists to enable those options.

3. Employees may miss out on opportunities to demonstrate how their hidden skills could create new value for the company.

Why don't companies have that more complete picture of their workforce capabilities – that is, why don't they know what they know? One very good explanation is that companies put most of their efforts into rating their workforce according to the jobs and roles they are filling today. This is the essence of two important talent management processes: recruiting and performance appraisals. In recruiting, a set of requirements is put together for a job, either explicitly or indirectly through a job description. During the recruiting process, much of the attention is paid to whether the candidate has the qualifications, the skills, the experience and the cultural fit to be successful in the role. This makes a lot of sense. In the performance appraisal process, an employee is measured on how well they performed the functions of their role, and in an effort to help the employee do even better next time, they are also measured on proficiency in the competencies that are deemed to be key in doing that job. Again, the logic is impeccable. But in both these cases, two adages come to mind:

1. What gets measured is what gets managed.

2. You only see what you are looking for.

In other words, because the roles the workforce is currently performing are the basis for measuring which capabilities the workforce has, those become the only capabilities that get measured. What was initially meant to be a positive (identify what is needed to perform well and measure it, so that it can be managed) comes with the unintended negative consequence of overshadowing the other capabilities the workforce has. This also comes with an employee engagement price, because the measurement and management of workforce capabilities typically focus on where the workforce comes up short.
Again, it makes sense to do this, since improving a capability that appears to result in improved performance benefits, both the individual through improved performance ratings and the company through improved productivity. But this is based on the assumption that the capabilities identified and their required proficiencies are the only attributes of the individual that matter. Anything else the individual brings that results in high performance, while resulting in a desired performance outcome, often goes unrecognized or underappreciated at best. As social media begins to occupy a more important part in current and future roles in organizations, businesses must incorporate social media savvy and innovation into job descriptions and expectations. These new measures could provide insight into how well someone can use social media tools to influence communities and decision makers; keep abreast of trends in fast-moving industries; present a positive brand image for the organization around thought leadership, customer focus, social responsibility; and coordinate and collaborate with partners. These measures should demonstrate the “social capital” the individual has invested in and developed over time. Without this dimension, “short cut” methods may generate a narrow set of positive metrics that do not have real, long-lasting benefits to the organization. How Workforce Reputation Management Helps HR Harness Social Media With hundreds of petabytes of social media data flowing across Facebook, LinkedIn and Twitter, businesses are tapping technology solutions to effectively leverage social for HR. Workforce reputation management technology helps organizations discover, mobilize and retain talent by providing insight into the social reputation and influence of the workforce while also helping organizations monitor employee social media policy compliance and mitigate social media risk.  There are three major ways that workforce reputation management technology can play a strategic role to support HR: 1. Improve Awareness and Decisions on Talent Many organizations measure the skills and competencies that they know they need today, but are unaware of what other skills and competencies their workforce has that could be essential tomorrow. How about whether your workforce has the reputation and influence to make their skills and competencies more effective? Many organizations don’t have insight into the social media “reach” their workforce has, which is becoming more critical to business performance. These features help organizations, managers, and employees improve many talent processes and decision making, including the following: Hiring and Assignments. People and teams with higher reputations are considered more valuable and effective workers. Someone with high reputation who refers a candidate also can have high credibility as a source for hires.   Training and Development. Reputation trend analysis can impact program decisions regarding training offerings by showing how reputation and influence across the workforce changes in concert with training. Worker reputation impacts development plans and goal choices by helping the individual see which development efforts result in improved reputation and influence.   Finding Hidden Talent. Managers can discover hidden talent and skills amongst employees based on a combination of social profile information and social media reputation. Employees can improve their personal brand and accelerate their career development.  2. 
Talent Search and Discovery The right technology helps organizations find information on people that might otherwise be hidden. By leveraging access to candidate and worker social profiles as well as their social relationships, workforce reputation management provides companies with a more complete picture of what their knowledge, skills, and attributes are and what they can in turn access. This more complete information helps to find the right talent both outside the organization as well as the right, perhaps previously hidden talent, within the organization to fill roles and staff projects, particularly those roles and projects that are required in reaction to fast-changing opportunities and circumstances. 3. Reputation Brings Credibility Workforce reputation management technology provides a clearer picture of how candidates and workers are viewed by their peers and communities across a wide range of social reputation and influence metrics. This information is less subject to individual bias and can impact critical decision-making. Knowing the individual’s reputation and influence enables the organization to predict how well their capabilities and behaviors will have a positive effect on desired business outcomes. Many roles that have the highest impact on overall business performance are dependent on the individual’s influence and reputation. In addition, reputation and influence measures offer a very tangible source of feedback for workers, providing them with insight that helps them develop themselves and their careers and see the effectiveness of those efforts by tracking changes over time in their reputation and influence. The following are some examples of the different reputation and influence measures of the workforce that Workforce Reputation Management could gather and analyze: Generosity – How often the user reposts other’s posts. Influence – How often the user’s material is reposted by others.  Engagement – The ratio of recent posts with references (e.g. links to other posts) to the total number of posts.  Activity – How frequently the user posts. (e.g. number per day)  Impact – The size of the users’ social networks, which indicates their ability to reach unique followers, friends, or users.   Clout – The number of references and citations of the user’s material in others’ posts.  The Vital Ingredient of Workforce Reputation Management: Employee Participation “Nothing about me, without me.” Valerie Billingham, “Through the Patient’s Eyes”, Salzburg Seminar Session 356, 1998 Since data resides primarily in social media, a question arises: what manner is used to collect that data? While much of social media activity is publicly accessible (as many who wished otherwise have learned to their chagrin), the social norms of social media have developed to put some restrictions on what is acceptable behavior and by whom. Disregarding these norms risks a repercussion firestorm. One of the more recognized norms is that while individuals can follow and engage with other individual’s public social activity (e.g. Twitter updates) fairly freely, the more an organization does this unprompted and without getting permission from the individual beforehand, the more likely the organization risks a totally opposite outcome from the one desired. Instead, the organization must look for permission from the individual, which can be met with resistance. 
That resistance comes from not knowing how the information will be used, how it will be shared with others, and not receiving enough benefit in return for granting permission. As the quote above about patient concerns and rights succinctly states, no one likes not feeling in control of the information about themselves, or the uncertainty about where it will be used. This is well understood in consumer social media (i.e. permission-based marketing) and is applicable to workforce reputation management. However, asking permission leaves open the very real possibility that no one, or so few, will grant permission, resulting in a small set of data with little usefulness for the company. Connecting Individual Motivation to Organization Needs So what is it that makes an individual decide to grant an organization access to the data it wants? It is when the individual’s own motivations are in alignment with the organization’s objectives. In the case of workforce reputation management, when the individual is motivated by a desire for increased visibility and career growth opportunities to advertise their skills and level of influence and reputation, they are aligned with the organizations’ objectives; to fill resource needs or strategically build better awareness of what skills are present in the workforce, as well as levels of influence and reputation. Individuals can see the benefit of granting access permission to the company through multiple means. One is through simple social awareness; they begin to discover that peers who are getting more career opportunities are those who are signed up for workforce reputation management. Another is where companies take the message directly to the individual; we think you would benefit from signing up with our workforce reputation management solution. Another, more strategic approach is to make reputation management part of a larger Career Development effort by the company; providing a wide set of tools to help the workforce find ways to plan and take action to achieve their career aspirations in the organization. An effective mechanism, that facilitates connecting the visibility and career growth motivations of the workforce with the larger context of the organization’s business objectives, is to use game mechanics to help individuals transform their career goals into concrete, actionable steps, such as signing up for reputation management. This works in favor of companies looking to use workforce reputation because the workforce is more apt to see how it fits into achieving their overall career goals, as well as seeing how other participation brings additional benefits.  Once an individual has signed up with reputation management, not only have they made themselves more visible within the organization and increased their career growth opportunities, they have also enabled a tool that they can use to better understand how their actions and behaviors impact their influence and reputation. Since they will be able to see their reputation and influence measurements change over time, they will gain better insight into how reputation and influence impacts their effectiveness in a role, as well as how their behaviors and skill levels in turn affect their influence and reputation. This insight can trigger much more directed, and effective, efforts by the individual to improve their ability to perform at a higher level and become more productive. 
The increased sense of autonomy the individual experiences, in linking the insight they gain to the actions and behavior changes they make, greatly enhances their engagement with their role as well as their career prospects within the company. Workforce reputation management takes the wide range of disparate data about the workforce being produced across various social media platforms and transforms it into accessible, relevant, and actionable information that helps the organization achieve its desired business objectives. Social media holds untapped insights about your talent, brand and business, and workforce reputation management can help unlock them. Imagine - if you could find the hidden secrets of your businesses, how much more productive and efficient would your organization be? Mark Bennett is a Director of Product Strategy at Oracle. Mark focuses on setting the strategic vision and direction for tools that help organizations understand, shape, and leverage the capabilities of their workforce to achieve business objectives, as well as help individuals work effectively to achieve their goals and navigate their own growth. His combination of a deep technical background in software design and development, coupled with a broad knowledge of business challenges and thinking in today’s globalized, rapidly changing, technology accelerated economy, has enabled him to identify and incorporate key innovations that are central to Oracle Fusion’s unique value proposition. Mark has over the course of his career been in charge of the design, development, and strategy of Talent Management products and the design and development of cutting edge software that is better equipped to handle the increasingly complex demands of users while also remaining easy to use. Follow him @mpbennett

    Read the article

  • Log message Request and Response in ASP.NET WebAPI

    - by Fredrik N
    Logging both incoming and outgoing messages for services can be useful in many scenarios, such as debugging, tracing, inspection and helping customers with request problems. I have a customer that needs to have both incoming and outgoing messages logged. They use the information to investigate strange behaviors and also to help customers when they call in for help (by looking in the log they can see if a customer sends in data in a wrong or strange way).

Concerns

Most logging in applications is a cross-cutting concern and should not be a core concern for developers. Logging messages like this:

    // GET api/values/5
    public string Get(int id)
    {
        // Cross-cutting concern
        Log(string.Format("Request: GET api/values/{0}", id));

        // Core concern
        var response = DoSomething();

        // Cross-cutting concern
        Log(string.Format("Response: GET api/values/{0}\r\n{1}", id, response));

        return response;
    }

will only result in duplicated code and unnecessary concerns for the developers to be aware of; if they miss adding the logging code, no logging will take place. Developers should focus on the core concern, not the cross-cutting concerns. By just focusing on the core concern, the above code will look like this:

    // GET api/values/5
    public string Get(int id)
    {
        return DoSomething();
    }

The logging should then be placed somewhere else so the developers don't need to care about the cross-cutting concern.

Using Message Handler for logging

There are different ways we could place the cross-cutting concern of logging messages when using WebAPI.
We can, for example, create a custom ApiController and override the ApiController's ExecuteAsync method, or add an ActionFilter, or use a Message Handler. The disadvantage of a custom ApiController is that we need to make sure we inherit from it, and the disadvantage of an ActionFilter is that we need to add the filter to the controllers; both will modify our ApiControllers. By using a Message Handler we don't need to make any changes to our ApiControllers. So the most suitable place to add our logging would be a custom Message Handler. A Message Handler will be used before the HttpControllerDispatcher (the part of the WebAPI pipeline that makes sure the right controller is used and called, etc.). Note: You can read more about message handlers here; it will give you a good understanding of the WebAPI pipeline. To create a Message Handler we can inherit from the DelegatingHandler class and override the SendAsync method:

    public class MessageHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
        {
            return await base.SendAsync(request, cancellationToken);
        }
    }

If we skip the call to base.SendAsync, our ApiController's methods will never be invoked, nor will other Message Handlers. Everything placed before base.SendAsync will be called before the HttpControllerDispatcher (before WebAPI takes a look at the request to decide which controller and method should be invoked); everything after base.SendAsync will be executed after our ApiController method has returned a response. So a Message Handler will be a perfect place to add cross-cutting concerns such as logging. To get the content of our request within a Message Handler we can use the request argument of the SendAsync method. The request argument is of type HttpRequestMessage and has a Content property (Content is of type HttpContent). The HttpContent has several methods that can be used to read the incoming message, such as ReadAsStreamAsync, ReadAsByteArrayAsync and ReadAsStringAsync. Something to be aware of is what will happen when we read from the HttpContent. When we read from the HttpContent, we read from a stream; once we have read from it, it can't be read again. So if we read from the stream before base.SendAsync, the following Message Handlers and the HttpControllerDispatcher can't read from the stream because it has already been read, and our ApiControllers' methods will never be invoked. The only way to make sure we can do repeatable reads from the HttpContent is to copy the content into a buffer, and then read from that buffer. This can be done by using the HttpContent's LoadIntoBufferAsync method.
If we make a call to the LoadIntoBufferAsync method before base.SendAsync, the incoming stream will be read into a byte array, and then other HttpContent read operations will read from that buffer if it exists, instead of directly from the stream. There is one method on the HttpContent that will internally make a call to LoadIntoBufferAsync for us, and that is ReadAsByteArrayAsync. This is the method we will use to read the incoming and outgoing messages.

    public abstract class MessageHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
        {
            // Buffer and read the incoming request body.
            var requestMessage = await request.Content.ReadAsByteArrayAsync();

            var response = await base.SendAsync(request, cancellationToken);

            // Buffer and read the outgoing response body.
            var responseMessage = await response.Content.ReadAsByteArrayAsync();

            return response;
        }
    }

The above code will read the content of the incoming message, then call base.SendAsync, and after that read the content of the response message. The following code adds more logic, such as creating a correlation id to tie the request to the response, and creating the log entries:

    public abstract class MessageHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
        {
            // Correlation id used to pair the logged request with its response.
            var corrId = string.Format("{0}{1}", DateTime.Now.Ticks, Thread.CurrentThread.ManagedThreadId);
            var requestInfo = string.Format("{0} {1}", request.Method, request.RequestUri);

            var requestMessage = await request.Content.ReadAsByteArrayAsync();
            await IncommingMessageAsync(corrId, requestInfo, requestMessage);

            var response = await base.SendAsync(request, cancellationToken);

            var responseMessage = await response.Content.ReadAsByteArrayAsync();
            await OutgoingMessageAsync(corrId, requestInfo, responseMessage);

            return response;
        }

        protected abstract Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message);
        protected abstract Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message);
    }

    public class MessageLoggingHandler : MessageHandler
    {
        protected override async Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message)
        {
            await Task.Run(() =>
                Debug.WriteLine(string.Format("{0} - Request: {1}\r\n{2}", correlationId, requestInfo, Encoding.UTF8.GetString(message))));
        }

        protected override async Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message)
        {
            await Task.Run(() =>
                Debug.WriteLine(string.Format("{0} - Response: {1}\r\n{2}", correlationId, requestInfo, Encoding.UTF8.GetString(message))));
        }
    }
The code above will show the following in the Visual Studio output window when the "api/values" service (one standard controller added by the default WebAPI project template) is requested with a GET HTTP method:

    6347483479959544375 - Request: GET http://localhost:3208/api/values
    6347483479959544375 - Response: GET http://localhost:3208/api/values
    ["value1","value2"]

Register a Message Handler

To register a Message Handler we can use the Add method of the GlobalConfiguration.Configuration.MessageHandlers collection, for example in Global.asax:

    public class WebApiApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            GlobalConfiguration.Configuration.MessageHandlers.Add(new MessageLoggingHandler());
            ...
        }
    }

Summary

By using a Message Handler we can easily remove cross-cutting concerns like logging from our controllers. You can also find the source code used in this blog post on ForkCan.com; feel free to make a fork or add comments, such as improvements to the code. Feel free to follow me on twitter @fredrikn if you want to know when I write other blog posts.
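To see the handler working end to end, a small console client can exercise the service so that both the request and the response show up in the log. This is only a sketch: it assumes the default ValuesController from the Web API project template and that the site is running on http://localhost:3208, the port shown in the output above.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class LoggingDemoClient
    {
        static void Main()
        {
            RunAsync().Wait();
        }

        static async Task RunAsync()
        {
            using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:3208/") })
            {
                // Each call passes through MessageLoggingHandler on the server,
                // so the request and response bodies end up in the output window.
                var listResponse = await client.GetAsync("api/values");
                Console.WriteLine(await listResponse.Content.ReadAsStringAsync());

                var singleResponse = await client.GetAsync("api/values/5");
                Console.WriteLine(await singleResponse.Content.ReadAsStringAsync());
            }
        }
    }

Each logged request/response pair shares the same correlation id, which makes it easy to match them up in the log.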

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL ie. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept same as previous versions of OBIA. W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well. What if all 5 Global Currencies configured are same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as ‘USD’ then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases. Reusable Lookups Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics started supporting currency preferences since 7.9.6 release though, and it published a Tech note for other module customers to add toggling between currency preferences to the solution. List of Currency Preferences Starting from 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file: userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under :< home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml This file contains the list of currency preferences, like“Local Currency”, “Global Currency 1”,…which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
In Static mode, all users will see the full list as in the user preference currencies file. In the Dynamic mode, the list shown in the currency prompt drop down is a result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles…BI Applications built a subject area: “Dynamic Currency Preference” to run this query, and give every user only the list of currency preferences required by his application roles. Adding Currency to an Amount Field When the user selects one of the items from the currency prompt, all the amounts in that page will show in the Currency corresponding to that preference. For example, if the user selects “Global Currency1” from the prompt, all data will be showing in Global Currency 1 as specified in the Configuration Manager. If the user select “Local Currency”, all amount fields will show in the Currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as a functional currency, the other has EUR as functional currency), then subtotals will not be available (cannot add USD and EUR amounts in one field), and depending on the set up (see next paragraph), the user may receive an error. There are two ways to add the Currency field to an amount metric: In the form of currency code, like USD, EUR…For this the user needs to add the field “Apps Common Currency Code” to the report. This field is in every subject area, usually under the table “Currency Tag” or “Currency Code”… In the form of currency symbol ($ for USD, € for EUR,…) For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in Column Actions drop down list. Typically this column should be the “BI Common Currency Code” available in every subject area. Select Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] Where Column is the “BI Common Currency Code” defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.
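The daily-to-range compression described in the EBS and PeopleSoft sections above can be easier to see in code than in prose. The sketch below is purely illustrative and hypothetical (the actual logic lives in the OBIA Informatica mappings and SQL, not in C#): given day-level rates for a single From Currency, To Currency and Rate Type, it starts a new range record only when the rate changes, which is how daily GL_DAILY_RATES rows collapse into W_EXCH_RATE_G range records. For PeopleSoft, the input rate itself would first be derived as RATE_MULT divided by RATE_DIV, as noted above.

    using System;
    using System.Collections.Generic;

    // Illustrative types only; the class and field names are invented for this sketch.
    class DailyRate
    {
        public DateTime ConversionDate;
        public decimal Rate;
    }

    class RateRange
    {
        public DateTime FromDate;
        public DateTime ToDate;   // DateTime.MaxValue marks an open-ended range
        public decimal Rate;
    }

    static class RateCompressor
    {
        // Collapses day-level rates (already filtered to one From Currency,
        // To Currency and Rate Type, and sorted by date) into range records,
        // starting a new range only when the exchange rate changes.
        public static List<RateRange> Compress(IEnumerable<DailyRate> dailyRates)
        {
            var ranges = new List<RateRange>();
            RateRange current = null;

            foreach (var daily in dailyRates)
            {
                if (current == null || daily.Rate != current.Rate)
                {
                    // Close the previous range the day before the new rate takes effect.
                    if (current != null)
                        current.ToDate = daily.ConversionDate.AddDays(-1);

                    current = new RateRange
                    {
                        FromDate = daily.ConversionDate,
                        ToDate = DateTime.MaxValue,
                        Rate = daily.Rate
                    };
                    ranges.Add(current);
                }
            }

            return ranges;
        }
    }

Using the PeopleSoft example from above, input rates of 45 on 1 Jan 2010 and 46 on 31 Jan 2010 would produce one range of 1 Jan to 30 Jan at 45, and an open-ended range starting 31 Jan at 46.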

    Read the article

< Previous Page | 204 205 206 207 208 209 210 211 212  | Next Page >