Search Results

Search found 5069 results on 203 pages for 'hidden premise'.


  • CodePlex Daily Summary for Thursday, June 17, 2010

    CodePlex Daily Summary for Thursday, June 17, 2010

    New Projects

    Astalanumerator: A JavaScript based recursive DOM/JS object inspector. Uses a simple tree menu to enumerate all properties of an object.
    BDD Log Converter: A simple .NET class and console application that will convert BDD logs (MDT) into XML format.
    CastleInvestProj: Castle Investigating project
    Easy Callback: This library facilitates the use of multiple asynchronous calls on the same page, and asynchronous calls from a user control also have a clean cod...
    Easy Wings: Small webApp to manage aircraft booking in flying club. French only for the moment.
    EPiServer Template Foundation: EPiServer Template Foundation builds on top of Page Type Builder to provide a framework for common site features such as basic page type properties...
    guidebook: a project to plan your road trip.
    Look into documents for e-discovery: Search, browse, tag, annotate documents such as MS Word, PDF, e-mail, etc. Good for legal professionals doing e-discovery.
    One Bus Away for Windows Phone: A Windows Phone 7 application written in Silverlight for the OneBusAway (www.onebusaway.org) website. Allows mobile users to search for public tra...
    OneBusAway for Windows Phone 7: OneBusAway is a service with transit information for the Seattle, WA region. We are creating a mobile application for Windows Phone 7 utilizing th...
    PoFabLab - Poetry Generation Library and Editor in .NET: PoFabLab is an open source library and word processor designed for digital poets. The library can scan lines, perform Markov analysis, filter text...
    Project Axure: More details coming soon.
    Чат кутежа 2.0: An IRC chat specifically for the ENE forum.
    简易代码生成器 (Simple Code Generator): First time using CodePlex; this is just a test project. The plan is a simple code generator built with WPF that also includes SQL Server client functionality, developed in C# on .NET 4.0.
    运营工作系统: TRAS (Team resource assist system) is a toolkit that helps the studio manage and distribute the daily work, like publishing the news, GM broadcast a...

    New Releases

    Amuse - A New MU* Client For Windows: 2010 June: Important Notice to Testers: Please uninstall any previous versions of Amuse prior to this one before installing. Changes and Information: First relea...
    ASP.NET Generic Data Source Control: V1.0: GenericDataSource - Version 1.0 Binary. This is the first official binary release of the GenericDataSource for ASP.NET - stable and ready for product...
    Astalanumerator: Astalanumerator 0.7: I wanted to map all properties in javascript and inspect them regardless of whether they were objects or not. IE doesn’t support for(i in..) for native pro...
    BDD Log Converter: BDD Log Converter 0.1.0: First release (0.1.0).
    DVD Swarm: 0.8.10.616: Major update with improvements to encoding speed.
    Easy Callback: Easy Callback 1.0.0.0: Easy Callback library 1.0.0.0
    Facebook Connect Authentication for ASP.NET: Facebook Connect Authentication for ASP.NET - v1.0: Now supporting Facebook's new Open Graph API JavaScript SDK, this release of FBConnectAuth also adds support for running in partially trusted envir...
    FlickrNet API Library: 3.0 Beta 3: Another small Beta. Changed parsing code so exceptions aren't raised when new attributes are added by Flickr. This affects searches where you are ...
    Infragistics Analytics Framework: Infragistics Analytics Framework 10.2: An updated version of Infragistics Analytics Framework, which utilizes the newest version (v.1.4.4) of MSAF as well as the newest release (v.10.2) ...
    NUnit Add-in for Growl Notifications: NUnit Add-in for Growl Notifications 1.0 build 1: Version 1.0 build 1: [change] Test run failure notification now disappears automatically
    Open Source PLM Activities: 3dxml player integration for Aras Innovator: This is just a simple html file you need to add to your Aras Innovator install directory. It loads the 3Dxml player for your 3dxml files. Tested o...
    patterns & practices - Windows Azure Guidance: WAAG - Part 2 - Drop 1: First code and docs drop for Part 2 of the Windows Azure Architecture Guide. Part 1 of the Guide is released here. Highlights of this release are:...
    Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (June 2010): Installer of the latest binaries of Phalanger 2.0 (June 2010) and its integration into Visual Studio 2008 SP1. * Improved compatibility with P...
    RIA Services Essentials: Book Club Application (June 16, 2010): Added some XAML to hide/show link to BookShelf page based on whether the user is logged in or not. Updated IsBookOwner authorization rule implement...
    secs4net: Release 1.01: version 1.01 release
    sELedit: sELedit v1.1c: Added: Tool for exporting the NPC/Mob database file that is used by sNPCedit
    SharePoint Ad Rotator: SPAdRotator 2.0 Beta 2: Added: Open tool pane link to default Web Part text. Made all images except the first hidden by default, so the Web Part will degrade gracefully w...
    sMAPtool: sMAPtool v0.7f (without Maps): Added: 3rd party magnifier software
    sNPCedit: sNPCedit v0.9c: Added: npc/mob names and corresponding database
    SolidWorks Addin Development: GenericAddinFrameworkR1-06.17.2010: .
    sTASKedit: sTASKedit v0.8: Important BugFix: there was a mistake in the structure; the team-member block and get-items block were swapped internally. Tasks that contain both blo...
    stefvanhooijdonk.com: UnitTesting-SP2010-TFS2010: Files for my post on TFS2010 and NUnit testing with SP2010 projects. See the post here: http://wp.me/pMnlQ-88 The XSLT here is from http://nunit4t...
    Telerik CAB Enabling Kit for RadControls for WinForms: TCEK 2010.1.10.504: What's new in v2010.1.0610 (Beta): RadDocking component has been replaced with the latest RadDock control. Requirements: Visual Studio 2005+ Tele...
    TFS Buddy: TFS Buddy 1.2: Fixes a problem with notifications
    Thales Simulator Library: Version 0.9: The Thales Simulator Library is an implementation of a software emulation of the Thales (formerly Zaxus & Racal) Hardware Security Module cryptogra...
    Triton Application Framework: Tools - Code Generator - Build 1.0: This is the first release of the Generator. This is buggy but works.
    VCC: Latest build, v2.1.30616.0: Automatic drop of latest build
    XsltDb - DotNetNuke Module Builder: 01.01.27: Code completion for XsltDb, HTML and XSL stuff!! Full screen editing. Some bugs are still in the EditArea component and object lists in code completi...
    Чат кутежа 2.0: 0.9a build 2: The second build of the first alpha version of the IRC client.

    Most Popular Projects

    WBFS Manager, Rawr, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), patterns & practices – Enterprise Library, PHPExcel, Microsoft SQL Server Community & Samples, ASP.NET

    Most Active Projects

    dotSpatial, patterns & practices: Enterprise Library Contrib, patterns & practices – Enterprise Library, BlogEngine.NET, Lightweight Fluent Workflow, Rhyduino - Arduino and Managed Code, Sunlit World Scheme, NB_Store - Free DotNetNuke Ecommerce Catalog Module, SolidWorks Addin Development, N2 CMS


  • Writing the tests for FluentPath

    - by Bertrand Le Roy
    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn’t designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against a mock file system, which would enable me to verify that my code is doing the expected file system operations without having to manipulate the actual, physical file system: what we are testing here is FluentPath, not System.IO. Unfortunately, that is not an option, as nothing in System.IO enables us to plug a mock file system in. As a consequence, we are left with few options. A few people have suggested that I abstract my calls to System.IO away so that I could tell FluentPath – not System.IO – to use a mock instead of the real thing. That in turn is getting a little silly: FluentPath already is a thin abstraction around System.IO, so layering another abstraction between them would double the test surface while bringing little or no value. I would have to test that new abstraction layer, and that would bring us back to square one. Unless I’m missing something, the only option I have here is to bite the bullet and test against the real file system. Of course, the tests that do that can hardly be called unit tests. They are more integration tests, as they don’t only test bits of my code: they really test the successful integration of my code with the underlying System.IO.

    In order to write such tests, the techniques of BDD work particularly well, as they enable you to express scenarios in natural language, from which test code is generated. Integration tests are better expressed as scenarios orchestrating a few basic behaviors, so this is a nice fit. The Orchard team has been successfully using SpecFlow for integration tests for a while and I thought it was pretty cool, so that’s what I decided to use. Consider for example the following scenario:

    Scenario: Change extension
      Given a clean test directory
      When I change the extension of bar\notes.txt to foo
      Then bar\notes.txt should not exist
      And bar\notes.foo should exist

    This is human readable and tells you everything you need to know about what you’re testing, but it is also executable code. What happens when SpecFlow compiles this scenario is that it executes a bunch of regular expressions that identify the known Given (set-up phases), When (actions) and Then (result assertions) to identify the code to run, which is then translated into calls into the appropriate methods. Nothing magical.
    Here is the code generated by SpecFlow:

    [NUnit.Framework.TestAttribute()]
    [NUnit.Framework.DescriptionAttribute("Change extension")]
    public virtual void ChangeExtension()
    {
        TechTalk.SpecFlow.ScenarioInfo scenarioInfo =
            new TechTalk.SpecFlow.ScenarioInfo("Change extension", ((string[])(null)));
    #line 6
        this.ScenarioSetup(scenarioInfo);
    #line 7
        testRunner.Given("a clean test directory");
    #line 8
        testRunner.When("I change the extension of " + "bar\\notes.txt to foo");
    #line 9
        testRunner.Then("bar\\notes.txt should not exist");
    #line 10
        testRunner.And("bar\\notes.foo should exist");
    #line hidden
        testRunner.CollectScenarioErrors();
    }

    The #line directives are there to give clues to the debugger, because yes, you can put breakpoints into a scenario. The way you usually write tests with SpecFlow is that you write the scenario first, let it fail, then write the translation of your Given, When and Then into code if they don’t already exist, which results in running but failing tests, and then you write the code to make your tests pass (you implement the scenario). In the case of FluentPath, I built a simple Given method that builds a simple file hierarchy in a temporary directory that all scenarios are going to work with:

    [Given("a clean test directory")]
    public void GivenACleanDirectory()
    {
        _path = new Path(SystemIO.Path.GetTempPath())
            .CreateSubDirectory("FluentPathSpecs")
            .MakeCurrent();
        _path.GetFileSystemEntries()
            .Delete(true);
        _path.CreateFile("foo.txt", "This is a text file named foo.");
        var bar = _path.CreateSubDirectory("bar");
        bar.CreateFile("baz.txt", "bar baz")
            .SetLastWriteTime(DateTime.Now.AddSeconds(-2));
        bar.CreateFile("notes.txt", "This is a text file containing notes.");
        var barbar = bar.CreateSubDirectory("bar");
        barbar.CreateFile("deep.txt", "Deep thoughts");
        var sub = _path.CreateSubDirectory("sub");
        sub.CreateSubDirectory("subsub");
        sub.CreateFile("baz.txt", "sub baz")
            .SetLastWriteTime(DateTime.Now);
        sub.CreateFile("binary.bin",
            new byte[] {0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0xFF});
    }

    Then, to implement the scenario that you can read above, I had to write the following When:

    [When("I change the extension of (.*) to (.*)")]
    public void WhenIChangeTheExtension(string path, string newExtension)
    {
        var oldPath = Path.Current.Combine(path.Split('\\'));
        oldPath.Move(p => p.ChangeExtension(newExtension));
    }

    As you can see, the When attribute is specifying the regular expression that will enable the SpecFlow engine to recognize what When method to call and also how to map its parameters. For our scenario, “bar\notes.txt” will get mapped to the path parameter, and “foo” to the newExtension parameter. And of course, here is the code that verifies the assumptions of the scenario:

    [Then("(.*) should exist")]
    public void ThenEntryShouldExist(string path)
    {
        Assert.IsTrue(_path.Combine(path.Split('\\')).Exists);
    }

    [Then("(.*) should not exist")]
    public void ThenEntryShouldNotExist(string path)
    {
        Assert.IsFalse(_path.Combine(path.Split('\\')).Exists);
    }

    These steps should be written with reusability in mind. They are building blocks for your scenarios, not implementations of a specific scenario. Think small and fine-grained. In the case of the above steps, I could reuse each of those steps in other scenarios. Those tests are easy to write and easier to read, which means that they also constitute a form of documentation. Oh, and SpecFlow is just one way to do this.
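    To make the reusability point concrete, here is a second, hypothetical scenario (not from the actual FluentPath test suite): the Given step and the two Then bindings above would be reused verbatim, and only a new When binding – say, one that copies files – would need to be written.

    Scenario: Copy a file
      Given a clean test directory
      When I copy bar\notes.txt to bar\notes2.txt
      Then bar\notes.txt should exist
      And bar\notes2.txt should exist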
Rob wrote a long time ago about this sort of thing (but using a different framework) and I highly recommend this post if I somehow managed to pique your interest: http://blog.wekeroad.com/blog/make-bdd-your-bff-2/ And this screencast (Rob always makes excellent screencasts): http://blog.wekeroad.com/mvc-storefront/kona-3/ (click the “Download it here” link)


  • Tailoring the Oracle Fusion Applications User Interface with Oracle Composer

    - by mvaughan
    By Killian Evers, Oracle Applications User Experience

    Changing the user interface (UI) is one of the most common modifications customers perform to Oracle Fusion Applications. Typically, customers add or remove a field based on their needs. Oracle makes the process of tailoring easier for customers, and reduces the burden for their IT staff, which you can read about on the Usable Apps website or in an earlier VoX post. This is the first in a series of posts that will talk about the tools that Oracle has provided for tailoring with its family of composers. These tools are designed for business systems analysts, and they allow employees other than IT staff to make changes in an upgrade-safe and patch-friendly manner. Let’s take a deep dive into one of these composers, the Oracle Composer. Oracle Composer allows business users to modify existing UIs after they have been deployed and are in use. It is an integral component of our SaaS offering. Using Oracle Composer, users can control:

    • Who sees the changes
    • When the changes are made
    • What changes are made

    Change for me, change for you, change for all of you

    One of the most powerful aspects of Oracle Composer is its flexibility. Oracle Composer can be used to make changes for a user or a group of users – those who see the changes. A user of Oracle Fusion Applications can make changes to the user interface at runtime via Oracle Composer, and these changes will remain every time they log into the system. For example, they can rearrange certain objects on a page, add and remove designated content, and save queries. Business systems analysts can make changes to Oracle Fusion Application UIs for groups of users or all users. Oracle’s Fusion Middleware Metadata Services (MDS) stores these changes and retrieves them at runtime, merging customizations with the base metadata and revealing the final experience to the end user. A tailored application can have multiple customization layers, and some layers can be specific to certain Fusion Applications. Some examples of customization layers are: site, organization, country, or role. Customization layers are applied in a specific order of precedence on top of the base application metadata. This image illustrates how customization layers are applied.

    What time is it?

    Users make changes to UIs at design time, runtime, and design time at runtime. Design time changes are typically made by application developers using an integrated development environment, or IDE, such as Oracle JDeveloper. Once made, these changes are then deployed to managed servers by application administrators. Oracle Composer covers the other two areas: runtime changes and design time at runtime changes. When we say users are making changes at runtime, we mean that the changes are made within the running application and take effect immediately in the running application. A prime example of this ability is users who make changes to their running application that only affect the UIs they see. What is new with Oracle Composer is the last area: design time at runtime. A business systems analyst can make changes to the UIs at runtime but does not have to make those changes immediately to the application. These changes are stored as metadata, separate from the base application definitions. Customizations made at runtime can be saved in a sandbox so that the changes can be isolated and validated before being published into an environment, without the need to redeploy the application.
    What can I do?

    Oracle Composer can be run in one of two modes. Depending on which mode is chosen, you may have different capabilities available for changing the UIs. The first mode is view mode, the most common default mode for most pages. This is the mode that is used for personalizations or user customizations. Users can access this mode via the Personalization link in the global region on Oracle Fusion Applications pages. In this mode, you can rearrange components on a page with drag-and-drop, collapse or expand components, add approved external content, and change the overall layout of a page. However, all of the changes made this way are exclusive to that particular user.

    The second mode, edit mode, is typically made available to select users with access privileges to edit page content. We call these folks business systems analysts. This mode is used to make UI changes for groups of users. Users with appropriate privileges can access the edit mode of Oracle Composer via the Administration menu in the global region on Oracle Fusion Applications pages. In edit mode, users can also add components, delete components, and edit component properties. While in edit mode in Oracle Composer, there are two views that assist the business systems analyst with making UI changes: Design View and Source View.

    Design View, the default view, is a WYSIWYG rendering of the page and its content. The business systems analyst can perform these actions:

    • Add content – including custom content like a portlet displaying news or stock quotes, or predefined content delivered from Oracle Fusion Applications (including ADF components and task flows)
    • Rearrange content – performed via drag-and-drop on the page or by using the actions menu of a component or portlet to move content around
    • Edit component properties and parameters – for specific components, control the visual properties such as text or display labels, or parameters such as RSS feeds
    • Hide or show components – hidden components can be re-shown
    • Delete components
    • Change page layout – users can select from eight pre-defined layouts
    • Edit page properties – create or edit a page’s parameters and display properties
    • Reset page customizations – remove edits made to the page in the current layer and/or reset the page to a previous state

    Detailed information on each of these capabilities and the additional actions not covered in the list above can be found in the Oracle® Fusion Middleware Developer's Guide for Oracle WebCenter. This image shows what the screen looks like in Design View.

    Source View, the second option in the edit mode of Oracle Composer, provides a WYSIWYG and a hierarchical rendering of page components in a component navigator. In Source View, users can access and modify properties of components that are not otherwise selectable in Design View. For example, many ADF Faces components can be edited only in Source View. Users can also edit components within a task flow. This image shows what the screen looks like in Source View. Detailed information on Source View can be found in the Oracle® Fusion Middleware Developer's Guide for Oracle WebCenter.

    Oracle Composer enables any application or portal to be customized or personalized after it has been deployed and is in use. It is designed to be extremely easy to use so that both business systems analysts and users can edit Oracle Fusion Applications pages with a few clicks of the mouse.
    Oracle Composer runs in all modern browsers and provides a rich, dynamic way to edit JSF application and portal pages.

    From the editor: The next post in this series about composers will be on Data Composer. You can also catch Killian speaking about extensibility at OpenWorld 2012 and in her Faces of Fusion video.


  • CodePlex Daily Summary for Monday, May 17, 2010

    CodePlex Daily Summary for Monday, May 17, 2010

    New Projects

    .NET Essentials Course: .NET Essentials course @ Telerik Academy. Training project for the students.
    AU/NZ Office 2010 Launch Demos: The AU/NZ Office 2010 Launch Demos are a collection of code samples that were used as part of the Office/SharePoint 2010 launch parties in Australi...
    CybennyCMS: Very simple CMS system for building sites with ASP.NET, with templates for lay-out, content pages with only html content and a xml file for the site...
    essionPIM: essionPIM
    GIStance: A library for finding "nearest neighbor" among an in-memory set of positions, in C# and F#. A radius must be specified for making a meaningful s...
    IP Informer: IP Informer is IP Informer.
    Kurumsal Ofis Paketi: Kurumsal Ofis Paketi (KOP) is add-in software developed for Microsoft Office 2010 products. KOP extends the functions found in Word and Excel...
    Mockup to XAML: Convert Balsamiq Mockups to XAML. This project supports BMML mockup control conversion using plugins. A standard set of controls are included wit...
    Open XML Validator: This WPF app gives you a brief summary of the errors in your Open XML documents.
    Paint.NET Bulk Image Processor: PDNBulkUpdater is a plug-in for Paint.NET that allows you to efficiently perform operations such as resizing and converting multiple images at the ...
    PiPiBugNet: PiPiBugNet is a brand-new open source bug management system.
    Roleplay character generator: The roleplay character generator allows the creation of characters for different roleplaying games.
    SharePoint User Search WebParts: This project contains SharePoint webparts which provide advanced search configuration and experience for SharePoint 2007. It will be upgraded in few...
    Spodi: Spodi was created on 22-04-2010.
    TfsPolicyPack: This project will provide a few checkin policies for VS 2010.
    vccodesandobx: vccodesandobx
    WhiteNile: test project using codeplex

    New Releases

    AnimeStore.Net: 1.0.3.0: Build 1.0.3.0 changes: Move some functionality to features (MEF). Filter / Search functionality. Anime hard-copy records storage (e.g Disk Storage ...
    AU/NZ Office 2010 Launch Demos: Twitter map web part: This is the main twitter map web part download; see the Twitter Map web part page for all the information.
    Blueset Studio Opensource Projects: 推来: Stable release.
    BUtil: BUtil 5.0 Alpha2: The initial implementation of multitasking (except ghost).
    CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 and 4.0.1 beta: Beta 2 is released here: http://cassinidev.codeplex.com/releases/view/45456 New in CassiniDev v3.5.1.0/v4.0.1.0: Added .Net 4 / VS10 build. ...
    CBM-Command: 2010-05-16: Release Notes - 2010-05-16. New Features: New navigation options: Page Up, Page Down, Top of Directory, Bottom of Directory. See documentation (http:...
    CCNet Conditional Plugin: CCNet Conditional for CCNet 1.5: A (quick) build of the plugin for CCNet 1.5 to fix the 17365 bug reported by Beakster. This also adds a new condition "timeCondition".
    CybennyCMS: Cybenny CMS beta 1: The first beta. Includes a small demo site.
    Data Extracting SDK: Data Extracting SDK v.1.1 RTM: RTM version of Data Extracting SDK.
    Duckworth Lewis Professional Edition Calculator: DLcalc 2.0: This software can perform all D/L calculations 100% accurately. From version 2.0 onwards, tables for par scores can also be produced.
    EPiServer CMS Page Type Builder: Page Type Builder 1.2: Release notes can be found in this blog post.
    Floe IRC Client: Floe IRC Client 2010-05 R5: Many new context menu options for @s. Ability to select multiple users in the nick list for some operations (kick, ban). Bunch of minor bug fix...
    Graffiti CMS Events Plugin: Version 1.0.1: Minor update to previous version to fix bug where deleted posts were still showing in the calendar.
    Microsoft Research Boogie: 2010-05-16: Binary release of Boogie and Dafny. (Note: Chalice is not pre-built as part of this binary release. To obtain it, you need to build it yourself f...
    MSBuild Launch Pad (mPad): 1.0 Beta 2: Basic support for sln, csproj, vbproj, vcxproj, shfbproj, ccproj, oxygene and proj files is added. Basic settings (Show Prompt, and Auto Hide) are...
    Multi-Language Words Memorizer: Memorizer 1.1: Issue fixes, XML db update with new words.
    NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.1: New release of NShader! New: a Visual Studio 2010 port can be installed through the new extension manager; you just have to download NShaderV...
    PHPExcel: PHPExcel 1.7.3 Production: Want to contribute? Please refer to the Contribute page. Donations: Donate via PayPal. If you want to, we can also add your name / company on our Donati...
    Rollback - A social backup tool.: Rollback Setup 0.5.1.2 Build 48360: Bug fixes for backing up files which are hidden/system. Changes to make builds on 64 bit Windows 7 using VS 2010 Express edition.
    Rollback - A social backup tool.: Rollback Setup 0.5.1.3: Updated version number.
    Shake - C# Make: Shake v0.1.20: New: Simple console logger. Changes: Command line params helper writes out syntax and samples (like msbuild). Fixes: Assembly info, file task and r...
    SharePoint User Search WebParts: v0.1 Friendly MOSS 2007 Search WebPart: Very first version of this webpart. A more stabilized version will follow in a few days.
    Team Deploy: Team Deploy 2010 Beta 1: This is the initial release of Team Deploy 2010 for TFS Team Build 2010. All features from Team Build 2.x are functional in this version. Comp...
    Team Foundation Server Administration Tool: 2.0: TFS Administration Tool 2.0 is built on top of the Team Foundation Server 2008 object model and in order to connect to...
    The Ping Master: v0.9.0.0: Installer for The Ping Master binaries.
    Useful Office Macros: All Macro Downloads: Please find above the downloads related to this project. Each Excel Workbook below works independently of the others, so you only need to download...
    VCC: Latest build, v2.1.30516.0: Automatic drop of latest build.
    Visual Studio DSite: Advanced Digital Board Game (Visual C++ 2008): An advanced digital board game made in Visual C++ 2008.
    YUI Compressor Custom Tool for Visual Studio: YUI Compressor Custom Tool Full Version: Version 1.0. The following changes have been made: Merged classes to automatically sense if the target file is Javascript or CSS. Cleaned up setu...

    Most Popular Projects

    Rawr, WBFS Manager, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), patterns & practices – Enterprise Library, Microsoft SQL Server Community & Samples, PHPExcel, ASP.NET

    Most Active Projects

    patterns & practices – Enterprise Library, PHPExcel, BlogEngine.NET, Rawr, Microsoft Biology Foundation, Customer Portal Accelerator for Microsoft Dynamics CRM, Windows Azure Command-line Tools for PHP Developers, DotNetZip Library, Caliburn: An Application Framework for WPF and Silverlight, SQL Server PowerShell Extensions


  • Share and Deliver BI Publisher Reports in Multiple Languages

    - by kanichiro.nishida
    When you share your reports with people who speak and read different languages, you want your reports to be shown in their language, right? Well, translating reports with BI Publisher is not only easy but also greatly reduces the maintenance cost. Many of us on the BI Publisher product development team used to work in Globalization and Multi-Lingual support, which enables Oracle products and applications to be used in many different languages, countries, and territories, and we have a lot of experience in this area. In fact, being a strategic reporting platform for Oracle EBS, PeopleSoft, JD Edwards, Siebel, and many other Oracle application products, our customers from all over the world are generating thousands upon thousands of reports – including out-of-the-box pre-developed reports from Oracle and customer-created or customized reports – in their own local language every day as they operate and manage their business. Today, I’m going to talk about this very topic: how to translate your reports with BI Publisher 11G.

    Translation Grows, not the Number of Reports

    Most reporting tools, regardless of whether they are traditional or new, put translation on the back burner. They require their users to copy an original report and translate the whole thing. So when you want to support an additional 10 languages you will need to have 10 copies of the original. Imagine when you have 50 reports: you will end up having 500 reports (50 x 10)! Now you need to maintain these 500 reports; whenever you need to make a change in a report you need to apply the same change to the other 10 reports. As you can imagine, this is not only a nightmare for IT management but simply not acceptable for applications like Oracle EBS that support over 30 languages.

    So the first thing we did was very simple: we separated the translation out of the report and married it to the report only at report generation. This means that, regardless of how many languages you need to support, you have only one report plus a translation file per language, containing the translated letters and words. So let’s say you have 50 reports and need to support 10 languages for those reports: you still have only 50 reports, and each report now has 10 language translation files. Yes, the translation is what should grow as you add more languages to support, not the reports themselves!

    Second, we provide the translation files in XLIFF format, which is an international-standard, XML-based format for exchanging and maintaining translation strings. So once you generate the XLIFF files for your reports with BI Publisher, you can work with any translation vendor in the world to do a mass translation, or you can translate the XML files yourself by manually updating the translatable strings presented in this text file.

    Lastly, we made it easier to manage the translation process, starting from generating the XLIFF files through to uploading the translated XLIFF files back to the BI Publisher server. You can generate, download, and upload the XLIFF files from BI Publisher’s Web interface with your browser, and you can see the translated reports right away without needing to shut down or restart your server. While translated reports are displayed based on your language preference setting, you can also specify a different language when you schedule or deliver the reports so that they can be generated in your customer’s preferred language.

    What Can I Translate?

    When it comes to translation there are three things.
    First, report content translation. When you receive a report, you want to see the content – the report title, section titles, comments, annotations, table column headers, and anything else that is static and embedded in the report – in your preferred language. We call this Reports Content translation.

    Second, when you open a report online you might want to see not only the report content translated but also the report UI – the report name, parameter names, layout names, and anything else that helps you navigate around the reports – in your language. We call this Reports UI translation. This separation of Reports Content and Reports UI translation is very useful when you want to navigate through the reports with the UI in your preferred language but generate the reports in your customer’s preferred language. Imagine you are a native English speaker and need to generate and send a report to your customers in China. You want to see the report name and parameter names in English so that you can comfortably navigate to the report and generate the report output, but you want the report generated in Chinese so that your customers in China can understand it when they receive it.

    And lastly, you might want even the data presented in the report to be translated. For example, you might want the product names in an Order Status report to be translated based on the report viewer’s language preference. We call this Reporting Data translation. Since Reporting Data translation is maintained at the data source level, such as in database tables along with the main data, you need to prepare the translation at the data source level first. Then you want to make sure that your query switches accordingly, based on the language preference setting, so that the translated data is retrieved.

    How to Translate BI Publisher Reports?

    Now, when it comes to how to translate BI Publisher reports, the main focus here is the translation of the Reports Content and Reports UI. I just created a video to show you how to create and manage the translation with BI Publisher 11G. Please take a look at the clip below.

    In today’s business world, customers and suppliers come from all over the world, regardless of the size of the company or organization. Supporting multiple languages for your reports is no longer something ‘nice to have’; it’s mandatory. BI Publisher is designed to support multi-lingual reports from the beginning, without any extra hidden license or configuration cost like other reporting tools such as Crystal Reports. You can support additional language translations at any time with the very simple steps shown in the video above. Happy translation! Please share your translation experience with us!
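    For readers who have never seen one, the core of an XLIFF file is a list of trans-unit elements pairing each source string with its translation. The snippet below is a hand-written sketch to illustrate the idea – the ids and attribute values are invented, and it is not actual BI Publisher output:

    <?xml version="1.0" encoding="utf-8"?>
    <xliff version="1.0">
      <file source-language="en-US" target-language="zh-CN" datatype="xml" original="OrderStatusReport">
        <body>
          <!-- One trans-unit per translatable string in the report layout -->
          <trans-unit id="REPORT_TITLE">
            <source>Order Status</source>
            <target>订单状态</target>
          </trans-unit>
        </body>
      </file>
    </xliff>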


  • Working With Extended Events

    - by Fatherjack
    SQL Server 2012 has made working with Extended Events (XE) pretty simple when it comes to seeing what sessions you have on your servers, what options you have selected, and so forth. But if you are like me then you still have some SQL Server instances that are 2008 or 2008 R2, and for those servers there is no built-in way to view the Extended Event sessions in SSMS. I keep coming up against the same situations – Where are the xel log files? What events, actions or predicates are set for the events on the server? What sessions are there on the server already? I got tired of this being a perpetual question and wrote some TSQL to save as a snippet in SQL Prompt so that these details are permanently only a couple of clicks away.

    First, some history. If you just came here for the code, skip down a few paragraphs and it’s all there. If you want a little time to reminisce about SQL Server then stick with me through the next paragraph or two. We are in a bit of a cross-over period currently; there are many versions of SQL Server, but I would guess that SQL Server 2008, 2008 R2 and 2012 comprise the majority of installations. With each of these comes a set of management tools, of which SQL Server Management Studio (SSMS) is one. In 2008 and 2008 R2, Extended Events made their first appearance, and there was no way to work with them in the SSMS interface. At some point the Extended Events guru Jonathan Kehayias (http://www.sqlskills.com/blogs/jonathan/) created the SQL Server 2008 Extended Events SSMS Addin, which is really an excellent tool to ease XE session administration. This addin will install in SSMS 2008 or 2008 R2 but not SSMS 2012. If you use a compatible version of SSMS then I wholly recommend downloading and using it to make your work with XE much easier. If you have SSMS 2012 installed – and there is no reason not to, as it will let you work with all versions of SQL Server – then you cannot install this addin. If you are working with SQL Server 2012 then SSMS 2012 has built-in functionality to manage XE sessions; this functionality does not apply to 2008 or 2008 R2 instances though. This means you are somewhat restricted and have to use TSQL to manage XE sessions on older versions of SQL Server.

    OK, those of you that skipped ahead for the code need to start from here: you are working with SSMS 2012 but have a SQL Server that is an earlier version that needs an XE session created, or you think there is a session created but you aren’t sure, or you know it’s there but can’t remember if it is running and where the output is going. How do you find out? Well, none of the information is hidden as such, but it is a bit of a wrangle to locate it, and it is not the sort of code that is likely to remain in your memory.

    I have created two pieces of code. The first examines the SYS.Server_Event_… management views in combination with the SYS.DM_XE_… management views to give the name of every session that exists on the server, regardless of whether it is running or not, along with two pieces of TSQL code. One piece will alter the state of the session: if the session is running then the code will stop the session if executed, and vice versa. The other piece of code will drop the selected session. If the session is running then the code will stop it first. Do not execute the DROP code unless you are sure you have the Create code to hand. The session will be dropped from the server without a second chance to change your mind.
    /**************************************************************/
    /***   To locate and describe event sessions on a server   ***/
    /***                                                        ***/
    /***   Generates TSQL to start/stop/drop sessions           ***/
    /***                                                        ***/
    /***   Jonathan Allen - @fatherjack                         ***/
    /***   June 2013                                            ***/
    /**************************************************************/
    SELECT  [EES].[name] AS [Session Name - all sessions] ,
            CASE WHEN [MXS].[name] IS NULL
                 THEN ISNULL([MXS].[name], 'Stopped')
                 ELSE 'Running'
            END AS SessionState ,
            CASE WHEN [MXS].[name] IS NULL
                 THEN ISNULL([MXS].[name],
                             'ALTER EVENT SESSION [' + [EES].[name]
                             + '] ON SERVER STATE = START;')
                 ELSE 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = STOP;'
            END AS ALTER_SessionState ,
            CASE WHEN [MXS].[name] IS NULL
                 THEN ISNULL([MXS].[name],
                             'DROP EVENT SESSION [' + [EES].[name]
                             + '] ON SERVER; -- This WILL drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.')
                 ELSE 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = STOP; ' + CHAR(10)
                      + '-- DROP EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER; -- This WILL stop and drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.'
            END AS DROP_Session
    FROM    [sys].[server_event_sessions] AS EES
            LEFT JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]
    WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' )
    ORDER BY SessionState
    GO

    I have excluded the system_health and AlwaysOn sessions as I don’t want to accidentally execute the drop script for these sessions that are created as part of the SQL Server installation. It is possible to recreate the sessions but that is a whole lot of aggravation I’d rather avoid. The second piece of code gathers details of running XE sessions only and provides information on the Events being collected, any predicates that are set on those events, the actions that are set to be collected, where the collected information is being logged and, if that logging is to a file target, where that file is located.
    /**********************************************/
    /***   Running Session summary              ***/
    /***                                        ***/
    /***   Details key values of XE sessions   ***/
    /***   that are in a running state         ***/
    /***                                        ***/
    /***   Jonathan Allen - @fatherjack        ***/
    /***   June 2013                            ***/
    /**********************************************/
    SELECT  [EES].[name] AS [Session Name - running sessions] ,
            [EESE].[name] AS [Event Name] ,
            COALESCE([EESE].[predicate], 'unfiltered') AS [Event Predicate Filter(s)] ,
            [EESA].[Action] AS [Event Action(s)] ,
            [EEST].[Target] AS [Session Target(s)] ,
            ISNULL([EESF].[value], 'No file target in use') AS [File_Target_UNC]
    -- select *
    FROM    [sys].[server_event_sessions] AS EES
            INNER JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]
            INNER JOIN [sys].[server_event_session_events] AS [EESE] ON [EES].[event_session_id] = [EESE].[event_session_id]
            LEFT JOIN [sys].[server_event_session_fields] AS EESF ON ( [EES].[event_session_id] = [EESF].[event_session_id]
                                                                       AND [EESF].[name] = 'filename' )
            CROSS APPLY ( SELECT STUFF(( SELECT ', ' + sest.name
                                         FROM   [sys].[server_event_session_targets] AS SEST
                                         WHERE  [EES].[event_session_id] = [SEST].[event_session_id]
                                         FOR XML PATH('')
                                       ), 1, 2, '') AS [Target]
                        ) AS EEST
            CROSS APPLY ( SELECT STUFF(( SELECT ', ' + [sesa].NAME
                                         FROM   [sys].[server_event_session_actions] AS sesa
                                         WHERE  [sesa].[event_session_id] = [EES].[event_session_id]
                                         FOR XML PATH('')
                                       ), 1, 2, '') AS [Action]
                        ) AS EESA
    WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' ) /* Optional, to exclude 'out-of-the-box' traces */

    I hope that these scripts are useful to you, and I would be obliged if you would keep my name in the script comments. I have no problem with you using them in production or personal circumstances; however, they come with no warranty or guarantee. Don’t use them unless you understand them and are happy with what they are going to do. I am not ever responsible for the consequences of executing these scripts on your servers.
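    As a footnote, the warning above assumes you have "the Create code to hand". If you need a starting point, a minimal 2008-style session definition looks something like the sketch below. This is not part of Jonathan's scripts; the session name, event, predicate and file paths are only illustrative, so adjust them to your needs before using it.

    CREATE EVENT SESSION [LongRunningQueries] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        ( ACTION ( sqlserver.sql_text, sqlserver.database_id )
          WHERE duration > 1000000 ) -- duration is measured in microseconds
    ADD TARGET package0.asynchronous_file_target
        ( SET filename = N'C:\XELogs\LongRunningQueries.xel',
              metadatafile = N'C:\XELogs\LongRunningQueries.xem' )
    WITH ( MAX_DISPATCH_LATENCY = 5 SECONDS );
    GO
    ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;
    GO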


  • How to block the ASP.NET page while the AJAX UpdateProgress is being displayed

    Step 1: Copy the following styles to your aspx page.

    <style type="text/css">
        .hide
        {
            display: none;
        }
        .show
        {
            display: inherit;
        }
        .progressBackgroundFilter
        {
            position: absolute;
            top: 0px;
            bottom: 0px;
            left: 0px;
            right: 0px;
            overflow: hidden;
            padding: 0;
            margin: 0;
            background-color: #000;
            filter: alpha(opacity=50);
            opacity: 0.5;
            z-index: 1000;
        }
        .processMessage
        {
            position: absolute;
            font-family: Verdana;
            font-size: 12px;
            font-weight: normal;
            color: #000066;
            top: 30%;
            left: 43%;
            padding: 10px;
            width: 18%;
            z-index: 1001;
            background-color: #fff;
        }
    </style>

    Step 2: Put the divs as shown below in the UpdateProgress control.

    <asp:UpdateProgress ID="updPrgsBaselineTab" runat="server">
        <ProgressTemplate>
            <div id="progressBackgroundFilter" class="progressBackgroundFilter">
            </div>
            <div id="processMessage" class="processMessage">
                <table width="100%">
                    <tr style="width: 100%">
                        <td style="width: 100%">
                            Please Wait..........
                        </td>
                    </tr>
                    <tr style="width: 100%">
                        <td style="width: 100%" align="center">
                            <img src="../Images/Update_Progress.gif" />
                        </td>
                    </tr>
                </table>
            </div>
        </ProgressTemplate>
    </asp:UpdateProgress>
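    Note that an UpdateProgress control is only shown during asynchronous postbacks, so the steps above assume the page already contains a ScriptManager and an UpdatePanel. If yours does not, a minimal skeleton would look like this (the control IDs here are illustrative):

    <asp:ScriptManager ID="scriptManager" runat="server" />
    <asp:UpdatePanel ID="updPanel" runat="server">
        <ContentTemplate>
            <%-- Controls whose async postbacks trigger the UpdateProgress go here --%>
            <asp:Button ID="btnSave" runat="server" Text="Save" />
        </ContentTemplate>
    </asp:UpdatePanel>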


  • Exception Handling Differences Between 32/64 Bit

    - by Alois Kraus
    I do quite a bit of debugging of .NET applications, but from time to time I see things that are impossible (at first look). Let me ask you, dear reader: what is your mental exception handling model? Exception handling is easy, after all, right? Let's suppose the following code:

    private void F1(object sender, EventArgs e)
    {
        try
        {
            F2();
        }
        catch (Exception ex)
        {
            throw new Exception("even worse Exception");
        }
    }

    private void F2()
    {
        try
        {
            F3();
        }
        finally
        {
            throw new Exception("other exception");
        }
    }

    private void F3()
    {
        throw new NotImplementedException();
    }

    What will the call stack look like when you break into the catch (Exception) clause in Windbg (32 and 64 bit on .NET 3.5 SP1)? The mental model I have is that when an exception is thrown the stack frames are unwound until the catch handler can execute; an exception propagates up the call chain. So when F3 throws an exception, the control flow will resume at the finally handler in F2, which throws another exception hiding the original one (that is nasty), and then the new exception will be caught in F1, where the catch handler is executed. So in the catch handler in F1 we should see only the F1 stack frame as the call stack, right? Well, let's try it out in Windbg. For this I created a simple Windows Forms application with one button which executes the F1 method in its click handler. When you compile the application for 64 bit and the catch handler is reached, you will find the following with these commands in Windbg.

    Load the sos extension from the same path where mscorwks was loaded in the current process:
    .loadby sos mscorwks

    Break on CLR exceptions:
    sxe clr

    Continue execution:
    g

    Dump the mixed call stack, C++ and .NET stacks interleaved:

    0:000> !DumpStack
    OS Thread Id: 0x1d8 (0)
    Child-SP         RetAddr          Call Site
    00000000002c88c0 000007fefa68f0bd KERNELBASE!RaiseException+0x39
    00000000002c8990 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295
    00000000002c8a60 000007ff005dd7f4 mscorwks!JIT_Throw+0x130
    00000000002c8c10 000007fefa6942e1 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)+0xb4
    00000000002c8c60 000007fefa661012 mscorwks!ExceptionTracker::CallHandler+0x145
    00000000002c8d60 000007fefa711a72 mscorwks!ExceptionTracker::CallCatchHandler+0x9e
    00000000002c8df0 0000000077b055cd mscorwks!ProcessCLRException+0x25e
    00000000002c8e90 0000000077ae55f8 ntdll!RtlpExecuteHandlerForUnwind+0xd
    00000000002c8ec0 000007fefa637c1a ntdll!RtlUnwindEx+0x539
    00000000002c9560 000007fefa711a21 mscorwks!ClrUnwindEx+0x36
    00000000002c9a70 0000000077b0554d mscorwks!ProcessCLRException+0x20d
    00000000002c9b10 0000000077ae5d1c ntdll!RtlpExecuteHandlerForException+0xd
    00000000002c9b40 0000000077b1fe48 ntdll!RtlDispatchException+0x3cb
    00000000002ca220 000007fefdaeaa7d ntdll!KiUserExceptionDispatcher+0x2e
    00000000002ca7e0 000007fefa68f0bd KERNELBASE!RaiseException+0x39
    00000000002ca8b0 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295
    00000000002ca980 000007ff005dd8df mscorwks!JIT_Throw+0x130
    00000000002cab30 000007fefa6942e1 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F2()+0x9f
    00000000002cab80 000007fefa71b5b3 mscorwks!ExceptionTracker::CallHandler+0x145
    00000000002cac80 000007fefa70dcd0 mscorwks!ExceptionTracker::ProcessManagedCallFrame+0x683
    00000000002caed0 000007fefa7119af mscorwks!ExceptionTracker::ProcessOSExceptionNotification+0x430
    00000000002cbd90 0000000077b055cd mscorwks!ProcessCLRException+0x19b
    00000000002cbe30 0000000077ae55f8 ntdll!RtlpExecuteHandlerForUnwind+0xd
    00000000002cbe60 000007fefa637c1a ntdll!RtlUnwindEx+0x539
    00000000002cc500 000007fefa711a21 mscorwks!ClrUnwindEx+0x36
    00000000002cca10 0000000077b0554d mscorwks!ProcessCLRException+0x20d
    00000000002ccab0 0000000077ae5d1c ntdll!RtlpExecuteHandlerForException+0xd
    00000000002ccae0 0000000077b1fe48 ntdll!RtlDispatchException+0x3cb
    00000000002cd1c0 000007fefdaeaa7d ntdll!KiUserExceptionDispatcher+0x2e
    00000000002cd780 000007fefa68f0bd KERNELBASE!RaiseException+0x39
    00000000002cd850 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295
    00000000002cd920 000007ff005dd968 mscorwks!JIT_Throw+0x130
    00000000002cdad0 000007ff005dd875 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F3()+0x48
    00000000002cdb10 000007ff005dd786 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F2()+0x35
    00000000002cdb60 000007ff005dbe6a WindowsFormsApplication1!WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)+0x46
    00000000002cdbc0 000007ff005dd452 System_Windows_Forms!System.Windows.Forms.Control.OnClick(System.EventArgs)+0x5a

    Hm, okaaay. I see my method F1 two times in this call stack. Looks like we have got some recursion bug. But that can't be, given the obvious code above. Let's try the same thing in a 32 bit process.

    0:000> !DumpStack
    OS Thread Id: 0x33e4 (0)
    Current frame: KERNELBASE!RaiseException+0x58
    ChildEBP RetAddr  Caller,Callee
    0028ed38 767db727 KERNELBASE!RaiseException+0x58, calling ntdll!RtlRaiseException
    0028ed4c 68b9008c mscorwks!Binder::RawGetClass+0x20, calling mscorwks!Module::LookupTypeDef
    0028ed5c 68b904ff mscorwks!Binder::IsClass+0x23, calling mscorwks!Binder::RawGetClass
    0028ed68 68bfb96f mscorwks!Binder::IsException+0x14, calling mscorwks!Binder::IsClass
    0028ed78 68bfb996 mscorwks!IsExceptionOfType+0x23, calling mscorwks!Binder::IsException
    0028ed80 68bfbb1c mscorwks!RaiseTheExceptionInternalOnly+0x2a8, calling KERNEL32!RaiseExceptionStub
    0028eda8 68ba0713 mscorwks!Module::ResolveStringRef+0xe0, calling mscorwks!BaseDomain::GetStringObjRefPtrFromUnicodeString
    0028edc8 68b91e8d mscorwks!SetObjectReferenceUnchecked+0x19
    0028ede0 68c8e910 mscorwks!JIT_Throw+0xfc, calling mscorwks!RaiseTheExceptionInternalOnly
    0028ee44 68c8e734 mscorwks!JIT_StrCns+0x22, calling mscorwks!LazyMachStateCaptureState
    0028ee54 68c8e865 mscorwks!JIT_Throw+0x1e, calling mscorwks!LazyMachStateCaptureState
    0028eea4 02ffaecd (MethodDesc 0x7af08c +0x7d WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)), calling mscorwks!JIT_Throw
    0028eeec 02ffaf19 (MethodDesc 0x7af098 +0x29 WindowsFormsApplication1.Form1.F2()), calling 06370634
    0028ef58 02ffae37 (MethodDesc 0x7a7bb0 +0x4f System.Windows.Forms.Control.OnClick(System.EventArgs))

    That does look more familiar. The call stack has been unwound, and we only see some frames into the history where the debugger was smart enough to find out that we have called F2 from F1.
    The exception handling on 64 bit systems works quite differently, and it has the nice property of remembering the called methods not only during the first pass over the exception filter clauses (during the first pass all catch handlers are asked whether they are going to catch the exception that is about to be thrown) but also when the actual stack unwind has taken place. This makes it possible to follow not only the call stack right at the moment but also to look into the “history” of the catch/finally clauses. In a 64 bit process you only need to look at the ExceptionTracker to find out if a catch or finally handler was called. The two frames ProcessManagedCallFrame/CallHandler indicate a finally clause, whereas CallCatchHandler/CallHandler indicate a catch clause. That was an interesting one.

    Oh, and by the way: if you manage to load the Microsoft symbols you can also find the hidden exception. When you encounter in the call stack a line like

    0016eb34 75b79617 KERNELBASE!RaiseException+0x58 ====> Exception Code e0434f4d cxr@16e850 exr@16e838

    then it is a good idea to execute .exr 16e838 and !analyze -v to find out more. In the managed world it is even easier, since we can dump the objects allocated on the stack which have not yet been garbage collected to look at former method parameters. The command !dso, which is the abbreviation for dump stack objects, will give you:

    0:000> !dso
    OS Thread Id: 0x46c (0)
    ESP/REG  Object   Name
    0016dd4c 020737f0 System.Exception
    0016dd98 020737f0 System.Exception
    0016dda8 01f5c6cc System.Windows.Forms.Button
    0016ddac 01f5d2b8 System.EventHandler
    0016ddb0 02071744 System.Windows.Forms.MouseEventArgs
    0016ddc0 01f5d2b8 System.EventHandler
    0016ddcc 01f5c6cc System.Windows.Forms.Button
    0016dddc 020737f0 System.Exception
    0016dde4 01f5d2b8 System.EventHandler
    0016ddec 02071744 System.Windows.Forms.MouseEventArgs
    0016de40 020737f0 System.Exception
    0016de80 02071744 System.Windows.Forms.MouseEventArgs
    0016de8c 01f5d2b8 System.EventHandler
    0016de90 01f5c6cc System.Windows.Forms.Button
    0016df10 02073784 System.SByte[]
    0016df5c 02073684 System.NotImplementedException
    0016e2a0 02073684 System.NotImplementedException
    0016e2e8 01ed69f4 System.Resources.ResourceManager

    From there it is easy to do:

    0:000> !pe 02073684
    Exception object: 02073684
    Exception type: System.NotImplementedException
    Message: Die Methode oder der Vorgang sind nicht implementiert.
    InnerException: <none>
    StackTrace (generated):
        SP       IP       Function
        0016ECB0 006904AD WindowsFormsApplication2!WindowsFormsApplication2.Form1.F3()+0x35
        0016ECC0 00690411 WindowsFormsApplication2!WindowsFormsApplication2.Form1.F2()+0x29
        0016ECF0 0069038F WindowsFormsApplication2!WindowsFormsApplication2.Form1.F1(System.Object, System.EventArgs)+0x3f
    StackTraceString: <none>
    HResult: 80004001

    (The German message reads: “The method or operation is not implemented.”) That way you can see the former exception. That’s all for today.
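    As an aside: the “nasty” hiding of the original exception in the sample code is avoidable when you own the code. Passing the original exception as the inner exception preserves it for later inspection (and throwing from finally blocks is best avoided altogether). A minimal sketch of the catch in F1:

    catch (Exception ex)
    {
        // Preserves the original exception via the InnerException property:
        throw new Exception("even worse Exception", ex);
    }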


  • Custom Lookup Provider For NetBeans Platform CRUD Tutorial

    - by Geertjan
    For a long time I’ve been planning to rewrite the second part of the NetBeans Platform CRUD Application Tutorial to integrate the loosely coupled capabilities introduced in a separate series of articles based on articles by Antonio Vieiro (a great series, by the way). Nothing like getting into the Lookup stuff right from the get-go (rather than as an afterthought)! The question, of course, is how to integrate the loosely coupled capabilities in a logical way within that tutorial. Today I worked through the tutorial from scratch, up until the point where the prototype is completed, i.e., there’s a JTextArea displaying data pulled from a database. That brought me to the place where I needed to be. In fact, as soon as the prototype is completed, i.e., the database connection has been shown to work, the whole story about Lookup.Provider and InstanceContent should be introduced, so that all the subsequent sections, i.e., everything within "Integrating CRUD Functionality", will be done by adding new capabilities to the Lookup.Provider. However, before I perform open heart surgery on that tutorial, I’d like to run the scenario by all those reading this blog who understand what I’m trying to do! (I.e., probably anyone who has read this far into this blog entry.) So, this is what I propose should happen, in this order:

    Point out the fact that right now the database access code is found directly within our TopComponent. Not good, because you’re mixing view code with data code and, ideally, the developers creating the user interface wouldn’t need to know anything about the data access layer. Better to separate out the data access code into a separate class, within the CustomerLibrary module, i.e., far away from the module providing the user interface, with this content:

    public class CustomerDataAccess {
        public List<Customer> getAllCustomers() {
            return Persistence.createEntityManagerFactory("CustomerLibraryPU").
                    createEntityManager().createNamedQuery("Customer.findAll").getResultList();
        }
    }

    Point out the fact that there is a concept of "Lookup" (which readers of the tutorial should know about since they should have followed the NetBeans Platform Quick Start), which is a registry into which objects can be published and to which other objects can be listening. In the same way as a TopComponent provides a Lookup, as demonstrated in the NetBeans Platform Quick Start, your own object can also provide a Lookup. So, therefore, let’s provide a Lookup for Customer objects.

    import org.openide.util.Lookup;
    import org.openide.util.lookup.AbstractLookup;
    import org.openide.util.lookup.InstanceContent;

    public class CustomerLookupProvider implements Lookup.Provider {

        private Lookup lookup;
        private InstanceContent instanceContent;

        public CustomerLookupProvider() {
            // Create an InstanceContent to hold capabilities...
            instanceContent = new InstanceContent();
            // Create an AbstractLookup to expose the InstanceContent...
            lookup = new AbstractLookup(instanceContent);
            // Add a "Read" capability to the Lookup of the provider:
            //...to come...
            // Add an "Update" capability to the Lookup of the provider:
            //...to come...
            // Add a "Create" capability to the Lookup of the provider:
            //...to come...
            // Add a "Delete" capability to the Lookup of the provider:
            //...to come...
        }

        @Override
        public Lookup getLookup() {
            return lookup;
        }

    }

    Point out the fact that, in the same way as we can publish an object into the Lookup of a TopComponent, we can now also publish an object into the Lookup of our CustomerLookupProvider.
    Instead of publishing a String, as in the NetBeans Platform Quick Start, we'll publish an instance of our own type. And here is the type:

        public interface ReadCapability {
            public void read() throws Exception;
        }

    And here is an implementation of our type added to our Lookup:

        public class CustomerLookupProvider implements Lookup.Provider {

            private Set<Customer> customerSet;
            private Lookup lookup;
            private InstanceContent instanceContent;

            public CustomerLookupProvider() {
                customerSet = new HashSet<Customer>();
                // Create an InstanceContent to hold capabilities...
                instanceContent = new InstanceContent();
                // Create an AbstractLookup to expose the InstanceContent...
                lookup = new AbstractLookup(instanceContent);
                // Add a "Read" capability to the Lookup of the provider:
                instanceContent.add(new ReadCapability() {
                    @Override
                    public void read() throws Exception {
                        ProgressHandle handle = ProgressHandleFactory.createHandle("Loading...");
                        handle.start();
                        customerSet.addAll(new CustomerDataAccess().getAllCustomers());
                        handle.finish();
                    }
                });
                // Add an "Update" capability to the Lookup of the provider:
                //...to come...
                // Add a "Create" capability to the Lookup of the provider:
                //...to come...
                // Add a "Delete" capability to the Lookup of the provider:
                //...to come...
            }

            @Override
            public Lookup getLookup() {
                return lookup;
            }

            public Set<Customer> getCustomers() {
                return customerSet;
            }
        }

    Point out that we can now create a new instance of our Lookup (in some other module, so long as it has a dependency on the module providing the CustomerLookupProvider and the ReadCapability), retrieve the ReadCapability, and then do something with the customers that are returned (here in the rewritten constructor of the TopComponent), without needing to know anything about how the database access is actually achieved, since that is hidden in the implementation of our type, above:

        public CustomerViewerTopComponent() {
            initComponents();
            setName(Bundle.CTL_CustomerViewerTopComponent());
            setToolTipText(Bundle.HINT_CustomerViewerTopComponent());
            // EntityManager entityManager = Persistence.createEntityManagerFactory("CustomerLibraryPU").createEntityManager();
            // Query query = entityManager.createNamedQuery("Customer.findAll");
            // List<Customer> resultList = query.getResultList();
            // for (Customer c : resultList) {
            //     jTextArea1.append(c.getName() + " (" + c.getCity() + ")" + "\n");
            // }
            CustomerLookupProvider lookup = new CustomerLookupProvider();
            ReadCapability rc = lookup.getLookup().lookup(ReadCapability.class);
            try {
                rc.read();
                for (Customer c : lookup.getCustomers()) {
                    jTextArea1.append(c.getName() + " (" + c.getCity() + ")" + "\n");
                }
            } catch (Exception ex) {
                Exceptions.printStackTrace(ex);
            }
        }

    Does the above make as much sense to others as it does to me, including the naming of the classes? Feedback would be appreciated! Then I'll integrate it into the tutorial and do the same for the other sections, i.e., "Create", "Update", and "Delete". (By the way, of course, the tutorial ends up showing that, rather than using a JTextArea to display data, you can use Nodes and explorer views to do so.)
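    Since the "Update" capability above is still marked "to come", here is a plausible sketch of how it might look, following the same pattern. Treat it as an assumption: the update(Customer) method on CustomerDataAccess is hypothetical, since only getAllCustomers() is defined in the post.

        public interface UpdateCapability {
            public void update(Customer customer) throws Exception;
        }

        // Inside the CustomerLookupProvider constructor, replacing the
        // "Update ...to come..." placeholder:
        instanceContent.add(new UpdateCapability() {
            @Override
            public void update(Customer customer) throws Exception {
                // Hypothetical DAO method; a real version would merge the
                // entity through the EntityManager, mirroring getAllCustomers().
                new CustomerDataAccess().update(customer);
            }
        });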

    Read the article

  • Setting useLegacyV2RuntimeActivationPolicy At Runtime

    - by Reed
    Version 4.0 of the .NET Framework included a new CLR which is almost entirely backwards compatible with the 2.0 version of the CLR. However, by default, mixed-mode assemblies targeting .NET 3.5sp1 and earlier will fail to load in a .NET 4 application. Fixing this requires setting useLegacyV2RuntimeActivationPolicy in your app.config for the application. While there are many good reasons for this decision, there are times when this is extremely frustrating, especially when writing a library. As such, there are (rare) times when it would be beneficial to set this in code, at runtime, as well as verify that it's running correctly prior to receiving a FileLoadException. Typically, loading a pre-.NET 4 mixed-mode assembly is handled simply by changing your app.config file, and including the relevant attribute in the startup element:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0"/>
          </startup>
        </configuration>

    This causes your application to run correctly, and load the older, mixed-mode assembly without issues. For full details on what's happening here and why, I recommend reading Mark Miller's detailed explanation of this attribute and the reasoning behind it. Before I show any code, let me say: I strongly recommend using the official approach of using app.config to set this policy. That being said, there are (rare) times when, for one reason or another, changing the application configuration file is less than ideal. While the configuration file is the supported approach to handling this issue, the CLR Hosting API includes a means of setting the policy programmatically via the ICLRRuntimeInfo interface. Normally, this is used if you're hosting the CLR in a native application in order to set this, at runtime, prior to loading the assemblies. However, the F# Samples include a nice trick showing how to load this API and bind this policy, at runtime. This was required in order to host the Managed DirectX API, which is built against an older version of the CLR. This is fairly easy to port to C#.
    Instead of a direct port, I also added a little addition – by trapping the COM exception received if unable to bind (which will occur if the 2.0 CLR is already bound), I also allow a runtime check of whether this property was set up properly:

        public static class RuntimePolicyHelper
        {
            public static bool LegacyV2RuntimeEnabledSuccessfully { get; private set; }

            static RuntimePolicyHelper()
            {
                ICLRRuntimeInfo clrRuntimeInfo =
                    (ICLRRuntimeInfo)RuntimeEnvironment.GetRuntimeInterfaceAsObject(
                        Guid.Empty,
                        typeof(ICLRRuntimeInfo).GUID);
                try
                {
                    clrRuntimeInfo.BindAsLegacyV2Runtime();
                    LegacyV2RuntimeEnabledSuccessfully = true;
                }
                catch (COMException)
                {
                    // This occurs with an HRESULT meaning
                    // "A different runtime was already bound to the legacy CLR version 2 activation policy."
                    LegacyV2RuntimeEnabledSuccessfully = false;
                }
            }

            [ComImport]
            [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
            [Guid("BD39D1D2-BA2F-486A-89B0-B4B0CB466891")]
            private interface ICLRRuntimeInfo
            {
                void xGetVersionString();
                void xGetRuntimeDirectory();
                void xIsLoaded();
                void xIsLoadable();
                void xLoadErrorString();
                void xLoadLibrary();
                void xGetProcAddress();
                void xGetInterface();
                void xSetDefaultStartupFlags();
                void xGetDefaultStartupFlags();

                [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
                void BindAsLegacyV2Runtime();
            }
        }

    Using this, it's possible to not only set this at runtime, but also verify, prior to loading your mixed-mode assembly, whether this will succeed. In my case, this was quite useful – I am working on a library purely for internal use which uses a numerical package that is supplied with both a completely managed as well as a native solver. The native solver uses a CLR 2 mixed-mode assembly, but is dramatically faster than the pure managed approach. By checking RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully at runtime, I can decide whether to enable the native solver, and only do so if I successfully bound this policy. There are some tricks required here – to enable this sort of fallback behavior, you must make these checks in a type that doesn't cause the mixed-mode assembly to be loaded. In my case, this forced me to encapsulate the library I was using entirely in a separate class, perform the check, then pass through the required calls to that class. Otherwise, the library will load before the hosting process gets enabled, which in turn will fail. This code will also, of course, try to enable the runtime policy before the first time you use this class – which typically means just before the first time you check the boolean value. As a result, checking this early on in the application is more likely to allow it to work. Finally, if you're using a library, this has to be called prior to the 2.0 CLR loading. This will cause it to fail if you try to use it to enable this policy in a plugin for most third party applications that don't have their app.config set up properly, as they will likely have already loaded the 2.0 runtime. As an example, take a simple audio player.
    The code below shows how this can be used to properly, at runtime, only use the "native" API if this will succeed, and fall back (or raise a nicer exception) if this will fail:

        public class AudioPlayer
        {
            private IAudioEngine audioEngine;

            public AudioPlayer()
            {
                if (RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully)
                {
                    // This will load a CLR 2 mixed mode assembly
                    this.audioEngine = new AudioEngineNative();
                }
                else
                {
                    this.audioEngine = new AudioEngineManaged();
                }
            }

            public void Play(string filename)
            {
                this.audioEngine.Play(filename);
            }
        }

    Now – the warning: This approach works, but I would be very hesitant to use it in public facing production code, especially for anything other than initializing your own application. While this should work in a library, using it has a very nasty side effect: you change the runtime policy of the executing application in a way that is very hidden and non-obvious.
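    Because the bind happens in the helper's static constructor, touching the class early guarantees the policy is set before any mixed-mode assembly loads. A minimal sketch of that startup check follows; the Program class and console message are hypothetical, not from the original post:

        using System;

        static class Program
        {
            static void Main()
            {
                // Reading the property forces RuntimePolicyHelper's static
                // constructor to run, attempting the legacy policy bind now,
                // before any CLR 2 mixed-mode assembly has a chance to load.
                if (!RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully)
                {
                    Console.WriteLine(
                        "Legacy V2 runtime policy could not be bound; " +
                        "falling back to the fully managed code path.");
                }

                // ...launch the rest of the application here...
            }
        }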

    Read the article

  • How to select the first occurrence in the auto-completion menu by pressing Enter?

    - by janoChen
    Every time there's a pop-up menu, I select the first occurrence and press Enter, but nothing happens (the word is not completed with the selected occurrence). The only way is to press Tab until you reach the term for a second time. Is there a way of selecting the first occurrence by pressing Enter (or some other Vim hotkey)? My .vimrc:

        " SHORTCUTS
        nnoremap <F4> :set filetype=html<CR>
        nnoremap <F5> :set filetype=php<CR>
        nnoremap <F3> :TlistToggle<CR>
        " press space to turn off highlighting and clear any message already displayed.
        nnoremap <silent> <Space> :nohlsearch<Bar>:echo<CR>
        " set buffers commands
        nnoremap <silent> <M-F8> :BufExplorer<CR>
        nnoremap <silent> <F8> :bn<CR>
        nnoremap <silent> <S-F8> :bp<CR>
        " open NERDTree with start directory: D:\wamp\www
        nnoremap <F9> :NERDTree /home/alex/www<CR>
        " open MRU
        nnoremap <F10> :MRU<CR>
        " open current file (silently)
        nnoremap <silent> <F11> :let old_reg=@"<CR>:let @"=substitute(expand("%:p"), "/", "\\", "g")<CR>:silent!!cmd /cstart <C-R><C-R>"<CR><CR>:let @"=old_reg<CR>
        " open current file in localhost (default browser)
        nnoremap <F12> :! start "http://localhost" file:///"%:p""<CR>
        " open Vim's default Explorer
        nnoremap <silent> <F2> :Explore<CR>
        nnoremap <C-F2> :%s/\.html/.php/g<CR>
        " REMAPPING
        " map leader to ,
        let mapleader = ","
        " remap ` to '
        nnoremap ' `
        nnoremap ` '
        " remap increment numbers
        nnoremap <C-kPlus> <C-A>
        " COMPRESSION
        function Js_css_compress ()
            let cwd = expand('<afile>:p:h')
            let nam = expand('<afile>:t:r')
            let ext = expand('<afile>:e')
            if -1 == match(nam, "[\._]src$")
                let minfname = nam.".min.".ext
            else
                let minfname = substitute(nam, "[\._]src$", "", "g").".".ext
            endif
            if ext == 'less'
                if executable('lessc')
                    cal system( 'lessc '.cwd.'/'.nam.'.'.ext.' &')
                endif
            else
                if filewritable(cwd.'/'.minfname)
                    if ext == 'js' && executable('closure-compiler')
                        cal system( 'closure-compiler --js '.cwd.'/'.nam.'.'.ext.' > '.cwd.'/'.minfname.' &')
                    elseif executable('yuicompressor')
                        cal system( 'yuicompressor '.cwd.'/'.nam.'.'.ext.' > '.cwd.'/'.minfname.' &')
                    endif
                endif
            endif
        endfunction
        autocmd FileWritePost,BufWritePost *.js :call Js_css_compress()
        autocmd FileWritePost,BufWritePost *.css :call Js_css_compress()
        autocmd FileWritePost,BufWritePost *.less :call Js_css_compress()
        " GUI
        " taglist right side
        let Tlist_Use_Right_Window = 1
        " hide tool bar
        set guioptions-=T
        " remove scroll bars
        set guioptions+=LlRrb
        set guioptions-=LlRrb
        " set the initial size of window
        set lines=46 columns=180
        " set default font
        set guifont=Monospace
        " set guifont=Monospace\ 10
        " show line number
        set number
        " set default theme
        colorscheme molokai-2
        " encoding
        set encoding=utf-8
        setglobal fileencoding=utf-8 bomb
        set fileencodings=ucs-bom,utf-8,latin1
        " SCSS syntax highlight
        au BufRead,BufNewFile *.scss set filetype=scss
        " LESS syntax highlight
        syntax on
        au BufNewFile,BufRead *.less set filetype=less
        " Haml syntax highlight
        "au! BufRead,BufNewFile *.haml
        "setfiletype haml
        " Sass syntax highlight
        "au! BufRead,BufNewFile *.sass
        "setfiletype sass
        " set filetype indent
        filetype indent on
        " for snipMate to work
        filetype plugin on
        " show breaks
        set showbreak=----->
        " coding format
        set tabstop=4
        set shiftwidth=4
        set linespace=1
        " CONFIG
        " set location of ctags
        let Tlist_Ctags_Cmd='D:\ctags58\ctags.exe'
        " keep the buffer around when left
        set hidden
        " enable matchit plugin
        source $VIMRUNTIME/macros/matchit.vim
        " folding
        set foldmethod=marker
        set foldmarker={,}
        let g:FoldMethod = 0
        map <leader>ff :call ToggleFold()<cr>
        fun! ToggleFold()
            if g:FoldMethod == 0
                exe 'set foldmethod=indent'
                let g:FoldMethod = 1
            else
                exe 'set foldmethod=marker'
                let g:FoldMethod = 0
            endif
        endfun
        " save and restore folds when a file is closed and re-opened
        "au BufWrite ?* mkview
        "au BufRead ?* silent loadview
        " auto-open NERDTree everytime Vim is invoked
        au VimEnter * NERDTree /home/alex/www
        " set omnicomplete
        autocmd FileType python set omnifunc=pythoncomplete#Complete
        autocmd FileType javascript set omnifunc=javascriptcomplete#CompleteJS
        autocmd FileType html set omnifunc=htmlcomplete#CompleteTags
        autocmd FileType css set omnifunc=csscomplete#CompleteCSS
        autocmd FileType xml set omnifunc=xmlcomplete#CompleteTags
        autocmd FileType php set omnifunc=phpcomplete#CompletePHP
        autocmd FileType c set omnifunc=ccomplete#Complete
        " Remove trailing white-space once the file is saved
        au BufWritePre * silent g/\s\+$/s///
        " Use CTRL-S for saving, also in Insert mode
        noremap <C-S> :update!<CR>
        vnoremap <C-S> <C-C>:update!<CR>
        inoremap <C-S> <C-O>:update!<CR>
        " DEFAULT
        set nocompatible
        source $VIMRUNTIME/vimrc_example.vim
        "source $VIMRUNTIME/mswin.vim
        "behave mswin
        " disable creation of swap files
        set noswapfile
        " no back ups while editing
        set nowritebackup
        " disable creation of backups
        set nobackup
        " no file change pop up warning
        autocmd FileChangedShell * echohl WarningMsg | echo "File changed shell." | echohl None
        set diffexpr=MyDiff()
        function MyDiff()
            let opt = '-a --binary '
            if &diffopt =~ 'icase' | let opt = opt . '-i ' | endif
            if &diffopt =~ 'iwhite' | let opt = opt . '-b ' | endif
            let arg1 = v:fname_in
            if arg1 =~ ' ' | let arg1 = '"' . arg1 . '"' | endif
            let arg2 = v:fname_new
            if arg2 =~ ' ' | let arg2 = '"' . arg2 . '"' | endif
            let arg3 = v:fname_out
            if arg3 =~ ' ' | let arg3 = '"' . arg3 . '"' | endif
            let eq = ''
            if $VIMRUNTIME =~ ' '
                if &sh =~ '\<cmd'
                    let cmd = '""' . $VIMRUNTIME . '\diff"'
                    let eq = '"'
                else
                    let cmd = substitute($VIMRUNTIME, ' ', '" ', '') . '\diff"'
                endif
            else
                let cmd = $VIMRUNTIME . '\diff'
            endif
            silent execute '!' . cmd . ' ' . opt . arg1 . ' ' . arg2 . ' > ' . arg3 . eq
        endfunction
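    One commonly suggested fix, offered here as a sketch to try rather than part of the configuration above, is to map Enter so that it accepts the highlighted completion-menu entry instead of inserting a newline:

        " When the popup menu is visible, <C-y> confirms the selected entry;
        " otherwise <CR> behaves as a normal newline.
        inoremap <expr> <CR> pumvisible() ? "\<C-y>" : "\<CR>"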

    Read the article

  • The 20 Most Important Keyboard Shortcuts For Windows PCs

    - by Chris Hoffman
    Keyboard shortcuts are practically essential for using any type of PC. They'll speed up almost everything you do. But long lists of keyboard shortcuts can quickly become overwhelming if you're just getting started. This list will cover the most useful keyboard shortcuts that every Windows user should know. If you haven't used keyboard shortcuts much, these will show you just how useful keyboard shortcuts can be.

    Windows Key + Search
    The Windows key is particularly important on Windows 8 — especially before Windows 8.1 — because it allows you to quickly return to the Start screen. On Windows 7, it opens the Start menu. Either way, you can start typing immediately after you press the Windows key to search for programs, settings, and files. For example, if you want to launch Firefox, you can press the Windows key, start typing the word Firefox, and press Enter when the Firefox shortcut appears. It's a quick way to launch programs, open files, and locate Control Panel options without even touching your mouse and without digging through a cluttered Start menu. You can also use the arrow keys to select the shortcut you want to launch before pressing Enter.

    Copy, Cut, Paste
    Copy, Cut, and Paste are extremely important keyboard shortcuts for text-editing. If you do any typing on your computer, you probably use them. These options can be accessed using the mouse, either by right-clicking on selected text or opening the application's Edit menu, but this is the slowest way to do it. After selecting some text, press Ctrl+C to copy it or Ctrl+X to cut it. Position the cursor where you want the text and use Ctrl+V to paste it. These shortcuts can save you a huge amount of time over using the mouse.

    Search the Current Page or File
    To quickly perform a search in the current application — whether you're in a web browser, PDF viewer, document editor, or almost any other type of application — press Ctrl+F. The application's search (or "Find") feature will pop up, and you can instantly start typing a phrase you want to search for. You can generally press Enter to go to the next appearance of the word or phrase in the document, quickly searching through it for what you're interested in.

    Switch Between Applications and Tabs
    Rather than clicking buttons on your taskbar, Alt+Tab is a very quick way to switch between running applications. Windows orders the list of open windows by the order you accessed them, so if you're only using two different applications, you can just press Alt+Tab to quickly switch between them. If switching between more than two windows, you'll have to hold the Alt key and press Tab repeatedly to toggle through the list of open windows. If you miss the window you want, you can always press Alt+Shift+Tab to move through the list in reverse. To move between tabs in an application — such as the browser tabs in your web browser — press Ctrl+Tab. Ctrl+Shift+Tab will move through tabs in reverse.

    Quickly Print
    If you're the kind of person who still prints things, you can quickly open the print window by pressing Ctrl+P. This can be faster than hunting down the Print option in every program you want to print something from.

    Basic Browser Shortcuts
    Web browser shortcuts can save you tons of time, too. Ctrl+T is a very useful one, as it will open a new tab with the address bar focused, so you can quickly press Ctrl+T, type a search phrase or web address, and press Enter to go there. To go back or forward while browsing, hold the Alt key and press the left or right arrow keys.
    If you'd just like to focus your web browser's address bar so you can type a new web address or search without opening a new tab, press Ctrl+L. You can then start typing something and press Enter.

    Close Tabs and Windows
    To quickly close the current application, press Alt+F4. This works on the desktop and even in new Windows 8-style applications. To quickly close the current browser tab or document, press Ctrl+W. This will often close the current window if there are no other tabs open.

    Lock Your Computer
    When you're done using your computer and want to step away, you may want to lock it. People won't be able to log in and access your desktop unless they know your password. You can do this from the Start menu or Start screen, but the fastest way to lock your screen is by quickly pressing Windows Key + L before you get up.

    Access the Task Manager
    Ctrl+Alt+Delete will take you to a screen that allows you to quickly launch the Task Manager or perform other operations, such as signing out. This is particularly useful because it can be used to recover from situations where your computer doesn't appear responsive or isn't accepting input. For example, if a full-screen game becomes unresponsive, Ctrl+Alt+Delete will often allow you to escape from it and end it via the Task Manager.

    Windows 8 Shortcuts
    On Windows 8 PCs, there are other very important keyboard shortcuts. Windows Key + C will open your Charms bar, while Windows Key + Tab will open the new App Switcher. These keyboard shortcuts will allow you to avoid the hot corners, which can be tedious to use with a mouse. On the desktop side, Windows Key + D will take you back to the desktop from anywhere. Windows Key + X will open a special "power user menu" that gives you quick access to options that are hidden in the new Windows 8 interface, including Shut Down, Restart, and Control Panel. If you're interested in learning more keyboard shortcuts, be sure to check our longer lists of 47 keyboard shortcuts that work in all web browsers and 42+ keyboard shortcuts to speed up text-editing.

    Image Credit: Jeroen Bennink on Flickr

    Read the article

  • Learnings from trying to write better software: Loud errors from the very start

    - by theo.spears
    Microsoft made a very small number of backwards incompatible changes between .NET 1.1 and 2.0, because they wanted to make it as easy and safe as possible to port applications to the new runtime. (Here's a list.) However, one thing they did change was what happens when a background thread fails with an unhandled exception - in .NET 1.1 nothing happened, the thread terminated, and the application continued oblivious. Try the same trick in .NET 2.0 and the entire application, including all threads, will rudely terminate. There are three reasons for this. Firstly, if a background thread has crashed, it may have left the entire application in an inconsistent state, in a way that will affect other threads. It's better to terminate the entire application than continue and have the application perform actions based on a broken state, for example take customer orders, or write corrupt files to disk. Secondly, during software development, it is far better for errors to be loud and obtrusive. Even if you have unit tests and integration tests (and you should), a key part of ensuring software works properly is to actually try using it, both through systematic testing and through the casual use all software gets from its developers. Subtle errors are easy to miss if you are not actually doing real work using the application; loud errors are obvious. Thirdly, and most importantly, even if catching and swallowing exceptions indiscriminately doesn't cause any problems in your application, the presence of unexpected exceptions shows you do not fully understand the behavior of your code. The currently released version of your application may be absolutely correct. However, because your mental model of the behavior is wrong, any future change you make to the program could and probably will introduce critical errors. This applies to more than just exceptions causing threads to exit: any unexpected state should make the application blow up in an un-ignorable way. The worst thing you can do is silently swallow errors and continue. And let's be clear, writing to a log file does not count as blowing up in an un-ignorable way. This is all simple as long as the call stack only contains your code, but when your functions start to be called by third party or .NET framework code, it's surprisingly easy for exceptions to start vanishing. Let's look at two examples.

    1. Windows forms drag drop events
    Usually if you throw an exception from a winforms event handler it will bring up the "application has crashed" dialog with abort and continue options. This is a good default behavior - the error is big and loud, but it is possible for the user to ignore the error and hopefully save their data, if somehow this bug makes it past testing. However, drag and drop events are different - throw an exception from one of these and it will just be silently swallowed with no explanation. By the way, it's not just drag and drop events. Timer events do it too. You can research how exceptions are treated in different handlers and code appropriately, but the safest and most user friendly approach is to always catch exceptions in your event handlers and show your own error message. I'll talk about one good approach to handling these exceptions at the end of this post.

    2. SSMS integration for SQL Tab Magic
    A while back I wrote an SSMS add-in called SQL Tab Magic (learn more about the process here). It works by listening to certain SSMS events and remembering what documents are opened and closed.
    I deployed it internally and it was used for a few months by a number of people without problems, so I was reasonably confident in its quality. Before releasing I made a few cleanups, including introducing error reporting. Bam. A few days later I was looking at over 1,000 error reports in my inbox. It turns out I wasn't handling table designers properly. The exceptions were there, but again SSMS was helpfully swallowing them all for me, so I was blissfully unaware. Had I made my errors loud from the start, I would have noticed these issues long before and fixed them.

    Handling exceptions
    Now that you are systematically catching exceptions throughout your application, you need to do something with them. I've tried 3 options: log them, alert the user, and automatically send them home. There are a few good options for logging in .NET. The most widespread is Apache log4net, which provides a very capable and configurable logging framework. There is also NLog which has a compatible interface, with a greater emphasis on fluent rather than XML configuration. Alerting the user serves two purposes. Firstly, it means they understand their action has failed, so they don't just assume it worked (silent file copy failure is a problem if you then delete the originals) or that they should keep waiting for a background task to complete. Secondly, it means the users can report the bug to your support team, and then you can fix it. This means the message you show the user should contain the information you need as a developer to identify and fix it. And the user will probably just send you a screenshot of the dialog, so it shouldn't be hidden by scroll bars. This leads us to the third option, automatically sending error reports home. By automatic I mean with minimal effort on the part of the user, rather than doing it silently behind their backs. The advantage of this is you can send back far more detailed and precise information than you can expect a user to include in an email, and by making it easier to report errors, you make it more likely users will do so. We do this using a great tool called SmartAssembly (full disclosure: this is a product made by Red Gate). It captures complete stack traces including the values of all local variables and then allows the user to send all this information back with a single click. We also capture log files to help understand what led up to the error. We then use the free SmartAssembly Sync for Jira to dedupe these reports and raise them as bugs in our bug tracking system. The combined effect of loud errors during development and then automatic error reporting once software is deployed allows us to find and fix more bugs, correct misunderstandings on how our software works, and overall is a key piece in delivering higher quality software. However it is no substitute for having motivated cunning testers in the building - and we're looking to hire more of those too. If you found this post interesting you should follow me on twitter.
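    A minimal sketch of the guard recommended above for WinForms handlers; the handler, helper, and message text are hypothetical, but the pattern is exactly what the post describes: catch inside the handler itself, because drag/drop and Timer events swallow anything that escapes.

        // Inside a System.Windows.Forms.Form subclass:
        private void listView_DragDrop(object sender, DragEventArgs e)
        {
            try
            {
                HandleDroppedItems(e.Data); // hypothetical application logic
            }
            catch (Exception ex)
            {
                // Loud, user-visible, and containing enough detail
                // (full exception text) for a useful bug report.
                MessageBox.Show(
                    "The drag and drop operation failed:\n\n" + ex,
                    "Error",
                    MessageBoxButtons.OK,
                    MessageBoxIcon.Error);
            }
        }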

    Read the article

  • Building an Infrastructure Cloud with Oracle VM for x86 + Enterprise Manager 12c

    - by Richard Rotter
    Cloud Computing? Everyone is talking about Cloud these days. Everyone is explaining how the cloud will help you to bring your service up and running very fast, secure and with little effort. You can find these kinds of presentations at almost every event around the globe. But what is really behind all this stuff? Is it really so simple? And the answer is: Yes it is! With the Oracle SW Stack it is! In this post, I will try to bring this down to earth, demonstrating how easy it could be to build a cloud infrastructure with Oracle's solution for cloud computing. But let me cover some basics first: How fast can you build a cloud? How elastic is your cloud, so you can provide new services on demand? How much effort does it take to monitor and operate your Cloud Infrastructure in order to meet your SLAs? How easy is it to charge back for the services provided? These are the critical success factors of Cloud Computing. And Oracle has an answer to all those questions. By using Oracle VM for x86 in combination with Enterprise Manager 12c you can build and control your cloud environment quickly and easily. What are the fundamental building blocks for your cloud?

    Oracle Cloud Building Blocks
    #1 Hardware
    Surprise, surprise. Even the cloud needs to run somewhere, hence you will need hardware. This HW normally consists of servers, storage and networking. But Oracle goes beyond that. There are Optimized Solutions available for your cloud infrastructure. This is a cookbook to build your HW cloud platform. For example, building your cloud infrastructure with blades and our network infrastructure will reduce complexity in your datacenter (blades with switch network modules, splitter cables to reduce the amount of cables, TOR (Top Of the Rack) switches which build the interface to your infrastructure environment). Reducing complexity, even in the cabling, will help you to manage your environment more efficiently and with less risk. Of course, our engineered systems fit into the cloud perfectly too. Although they are considered a PaaS themselves, having the database SW (for Exadata) and the application development environment (for Exalogic) already deployed on them, in general they are ideal systems to enable you to build your own cloud and PaaS infrastructure.

    #2 Virtualization
    The next missing link in the cloud setup is virtualization. For me personally, it's one of the most hidden "secrets" that Oracle can provide you with a complete virtualization stack in terms of a hypervisor on both architectures: x86 and Sparc CPUs. There are Oracle VM for x86 and Oracle VM for Sparc available at no additional license cost if you are running this virtualization stack on top of Oracle HW (and with Oracle Premier Support for HW). This completes the virtualization portfolio together with Solaris Zones, introduced already with Solaris 10 a few years ago. Let me explain how Oracle VM for x86 works: Oracle VM for x86 consists of two main parts:
    - The Oracle VM Server: Oracle VM Server is installed on bare metal and it is the hypervisor which is able to run virtual machines. It has a very small footprint. The ISO image of Oracle VM Server is only 200MB. It is very small but efficient. You can install an OVM Server in less than 5 minutes by booting the server with the ISO image assigned and providing the necessary configuration parameters (like installing a Linux distribution). After the installation, the OVM Server is ready to use. That's all.
    - The Oracle VM Manager: OVM Manager is the central management tool where you can control your OVM Servers. OVM Manager provides the graphical user interface, which is an Application Development Framework (ADF) application with a familiar web-browser based interface, to manage Oracle VM Servers, virtual machines, and resources. The Oracle VM Manager has the following capabilities:
    Create virtual machines
    Create server pools
    Power on and off virtual machines
    Manage networks and storage
    Import virtual machines, ISO files, and templates
    Manage high availability of Oracle VM Servers, server pools, and virtual machines
    Perform live migration of virtual machines
    I want to highlight one of the goodies which you can use if you are running Oracle VM for x86: preconfigured, downloadable Virtual Machine Templates from eDelivery. With these templates, you can download completely preconfigured Virtual Machines into your environment, boot them up, configure them at first boot and use them. There are templates for almost all Oracle SW and Applications (like Fusion Middleware, Database, Siebel, etc.) available.

    #3 Cloud Management
    The management of your cloud infrastructure is key. This is a day-to-day job. Acquiring HW and installing a virtualization layer on top of it is done just at the beginning and when you want to expand your infrastructure. But managing your cloud, keeping it up and running, deploying new services, changing your chargeback model, etc. - these are the daily jobs. These jobs must be simple, secure and easy to manage. The Enterprise Manager 12c Cloud provides this functionality from one management cockpit. Enterprise Manager 12c uses Oracle VM Manager to control OVM server pools. Once you have registered your OVM Managers in Enterprise Manager, you are able to set up your cloud infrastructure and manage everything from Enterprise Manager. What you need to do in EM12c is:
    Register your OVM Manager in Enterprise Manager. After registering your OVM Manager, all the functionality of Oracle VM for x86 is also available in Enterprise Manager. Enterprise Manager works as a "Manager" of the Managers. You can register as many OVM Managers as you want and control your complete virtualization environment.
    Create Roles and Users for your Self Service Portal in Enterprise Manager. With this step you allow users to log on to the Enterprise Manager Self Service Portal. Users can request Virtual Machines in this portal.
    Set up the Cloud Infrastructure. Set up the quotas for your self service users. How many VMs can they request? How much of your resources (CPU, memory, storage, network, etc.)? Which SW components (templates, assemblies) can your self service users request? In this step, you basically set up the complete cloud infrastructure.
    Set up Chargeback. Once your cloud is set up, you need to configure your chargeback mechanism. Enterprise Manager collects resource metrics at a very detailed level. Almost all collected metrics can be used in the chargeback module. You can define chargeback plans based on configuration (charge for the amount of CPU, memory, or storage assigned to a machine, or for a specific OS which is installed) or chargeback on resource consumption (% of CPU used, storage used, etc.). Or you can define a combination of configuration and consumption chargeback plans. The chargeback module is very flexible.
    Here is an overview of the workflow for handling an infrastructure cloud in EM:

    Summary
    As you can see, setting up an Infrastructure Cloud Service with Oracle VM for x86 and Enterprise Manager 12c is really simple. I personally configured a complete cloud environment with three x86 servers and a small JBOD SAN box in less than 3 hours. There is no magic in it; it is all straightforward. Of course, you have to have some experience with Oracle VM and Enterprise Manager. Experience in setting up Linux environments helps as well. I plan to publish a technical cookbook in the next few weeks. I hope you found this post useful and will see you again here on our blog. Any hints or comments are welcome!

    Read the article

  • Not attending the LUGM mini-meetup - 05. Oct 2013

    Not attending a meeting of the LUGM can be fun, too. It's getting a bit of a habit that Ish is organising small gatherings, aka mini-meetups, of the Linux User Group Mauritius/Meta (LUGM) almost every Saturday. There they mainly discuss and talk about various elements of using Linux as one's main operating system and the possibilities it offers. On top of that, of course, some tips & tricks about mastering the command line and initial steps in scripting or even writing HTML. In general, it sounds like a good portion of fun and a great spirit of community. Unfortunately, I'm usually quite busy with private and family matters during the weekend and so I already signalled that I wouldn't be around. Well, at least not physically... But this Saturday a couple of things worked out faster than expected and so I was hanging out on my machine. I made virtual contact with one of Pawan's messages over on Facebook... And somehow that kicked off some kind of online fun around the basic configuration of Apache HTTPd 2.2.x and PHP 5.x, and how to improve the overall performance of a newly installed blog based on WordPress.

    Default configuration files
    Nitin's website finally came alive and despite the dark theme and the hidden Apple 'fanboy' advertisement I was more interested in the technical situation. As with any new installation there is usually quite some adjustment to be done. And Nitin's page was no exception. Unfortunately, out-of-the-box installations of Apache httpd and PHP are too verbose and expose too much information under the hood. You might think that this isn't really a problem at all; well, think about it again after reading this article completely. First, I checked the HTTP response headers of Nitin's page - using either Chrome Developer Tools or the Firefox Web Developer extension - and based on that I advised him to lower the noise level a little bit. It's not really necessary for detailed information about the web server software and scripting language to be published in every response. Quite a number of script kiddies and exploits actually check for version specifics prior to an attack. So, removing at least the version details hardens the system a little bit. In particular, I'm talking about these response values:

        Server
        X-Powered-By

    How to achieve that? By tweaking the configuration files... Namely, we are going to look into the following ones:

        apache2.conf
        httpd.conf
        .htaccess
        php.ini

    The above list contains some additional files I'm talking about in the next paragraphs. Anyway, those are the ones involved.

    Tweaking Apache
    Open your favourite text editor and start to modify the apache2.conf. First, you might like to have a quick peek at the file to see whether it is necessary to adjust it or not. Following is a handy combination of commands to get an overview of your active directives:

        # sudo grep -v '#' /etc/apache2/apache2.conf | grep -v '^$' | less

    There you keep an eye on those two Apache directives:

        ServerSignature Off
        ServerTokens Prod

    If that's not the case, change them as highlighted above. In order to activate your modifications you have to restart the Apache httpd server. On Debian and Ubuntu you might use apache2ctl for that; on other distributions you might have to use service or run the init-scripts again:

        # sudo apache2ctl configtest
        Syntax OK
        # sudo apache2ctl restart

    Refresh your website and check the HTTP response header.
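    A quick way to do that header check from the command line (not part of the original exchange; it assumes curl is installed) is to request only the headers and filter for the interesting ones:

        # curl -s -I http://www.example.com/ | egrep -i 'server|x-powered-by|etag|expires|cache-control'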
    Tweaking PHP5 (a little bit)
    Next, check your php.ini file with the following statement:

        # sudo grep -v ';' /etc/php5/apache2/php.ini | grep -v '^$' | less

    And check the value of

        expose_php = Off

    Again, if it's not as highlighted, change it...

    Some more Apache love
    Okay, back to Apache. It might also be interesting to improve the situation regarding browser caching and to remove more obsolete information. When you run your website against the usual performance checks like Google Page Speed and Yahoo YSlow you might see those check points with bad grades on a standard, default configuration. Well, this can be fixed easily.

    Configure entity tags (ETags)
    ETags are only interesting when you run your websites on a farm of multiple web servers. Removing this data for your static resources is very simple in Apache. As we are going to deal with the HTTP response header information, you have to ensure that Apache is capable of manipulating it. First, check your enabled modules:

        # sudo ls -al /etc/apache2/mods-enabled/ | grep headers

    And in case the 'headers' module is not listed, you have to enable it from the available ones:

        # sudo a2enmod headers

    Second, check your httpd.conf file (in case it exists):

        # sudo grep -v '#' /etc/apache2/httpd.conf | grep -v '^$' | less

    In newer (better said: fresh) installations you might have to create a new configuration file below your conf.d folder with your favourite text editor, like so:

        # sudo nano /etc/apache2/conf.d/headers.conf

    Then, in order to tweak your HTTP responses, either check for those lines or add them:

        Header unset ETag
        FileETag None

    In case your file doesn't exist or those lines are missing, feel free to create/add them. Afterwards, check your Apache configuration syntax and restart your running instances as already shown above:

        # sudo apache2ctl configtest
        Syntax OK
        # sudo apache2ctl restart

    Add Expires headers
    To improve the loading performance of your website, you should take some care with the proper configuration of how to leverage the browser's ability to cache certain resources and files. This is done by adding an Expires: value to the HTTP response header. Generally speaking, it is advised that you specify a near-future expiry, read: 1 week or a little bit more, for your static content like JavaScript files or Cascading Style Sheets. One solution to adjust this is to put some instructions into the .htaccess file in the root folder of your web site. Of course, this could also be placed into a more generic location of your Apache installation but honestly, I'd like to keep this at the web site level. Following are some adjustments I'm currently using on this blog site:

        # Turn on Expires and set default to 0
        ExpiresActive On
        ExpiresDefault A0

        # Set up caching on media files for 1 year (forever?)
        <FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav)$">
            ExpiresDefault A29030400
            Header append Cache-Control "public"
        </FilesMatch>

        # Set up caching on media files for 1 week
        <FilesMatch "\.(js|css)$">
            ExpiresDefault A604800
            Header append Cache-Control "public"
        </FilesMatch>

        # Set up caching on media files for 31 days
        <FilesMatch "\.(gif|jpg|jpeg|png|swf)$">
            ExpiresDefault A2678400
            Header append Cache-Control "public"
        </FilesMatch>

    As we are editing the .htaccess files, it is not necessary to restart Apache.
    In case your web site doesn't load anymore, or you're experiencing an error while trying to restart your httpd, check that the 'expires' module is actually an enabled module:

        # ls -al /etc/apache2/mods-enabled/ | grep expires
        # sudo a2enmod expires

    Of course, the instructions above are not feature complete, but I hope that they might provide a better default configuration for your LAMP stack.

    Resume of the day
    Within a couple of hours, and while being occupied with an eLearning course on SQL Server 2012, I had some good fun helping and assisting other LUGM members while they were some kilometers away at Bagatelle. According to other blog articles it seems that Nitin had quite some moments of desperation. Just for the record: at no time was it my intention to either kick his butt or pull his leg. I was simply providing some input based on the lessons I've learned over the last couple of years configuring Apache HTTPd and PHP. Check out the other blogs, too:
    LUGM mini-meetup... Epic!
    Superb Saturday Linux Meetup
    And last but not least, the man himself: The end of a new beginning
    Cheers, and happy community'ing!

    Updates
    Due to our weekly Code & Coffee sessions in the MSCC community, I had a chance to talk to Nitin directly and he showed me the problems directly on his machine. This led me to update this article, hence the paragraphs on enabling the modules 'headers' and 'expires'.

    Read the article

  • first install for windows eight.....da beta

    - by raysmithequip
    The W8 preview is now installed and I am enjoying it.  I remember the learning curve of my first unix machine back in the eighties, this ain't that.It is normal for me to do the first os install with a keyboard and low end monitor...you never know what you'll encounter out in the field.  The OS took like a fish to water.  I used a low end INTEL motherboard dp55w I gathered on the cheap, an 1157 i5 from the used bin a pair of 6 gig ddr3 sticks, a rosewell 550 watt power supply a cheap used twenty buck sub 200g wd sata drive, a half working dvd burner and an asus fanless nvidia vid card, not a great one but Sub 50.00 on newey eggey...I did have to hunt the ms forums for a key and of course to activate the thing, if dos would of needed this outmoded ritual, we would still be on cpm and osborne would be a household name, of course little do people know that this ritual was common as far back as the seventies on att unix installs....not, but it was possible, I used to joke about when I ran a bbs, what hell would of been wrought had dos 3.2 machines been required to dial into my bbs to send fido mail to ms and wait for an acknowledgement.  All in all the thing was pushing a seven on the ms richter scale, not including the vid card, sadly it came in at just a tad over three....I wanted to evaluate it for a possible replacement on critical machines that in the past went down due to a vid card fan failure....you have no idea what a customer thinks when you show them a failed vid card fan..."you mean that little plastic piece of junk caused all this!!??!!!"...yea man.  Some production machines don't need any sort of vid, I will at least keep it on the maybe list for those, MTBF is a very important factor, some big box stores should put percentage of failure rate within 24 month estimates on the outside of the carton for sure.  And a warning that the power supplies are already at their limit.  Let's face it, today even 550w can be iffy.A few neat eye candy improvements over the earlier windows is nice, the metro screen is nice, anyone who has used a newer phone recently will intuitively drag their fingers across the screen....lot of good that was with no mouse or touch screen though.  Lucky me, I have been using windows since day one, I still have a copy of win 2.0 (and every other version) for no good reason.  Still the old ix collection of disks is much larger, recompiling any kernal is another silly ritual, same machine, different day, same recompile...argh. Rh is my all time fav, mandrake was always missing something, like it rewrote the init file or something, novell is ok as long as you stay on the beaten path and of course ubuntu normally recompiles with the same errors consistantly....makes life easy that way....no errors on windows eight, just a screen that did not match the installed hardware, natuarally I alt tabbed right out of it, then hit the flag key to find the start menu....no start button. I miss the start button already. Keyboard cowboy funnin and I was browsing the harddrive, nothing stunning there, I like that, means I can find stuff. Only I can't find what I want, the start button....the start menu is that first screen for touch tablets. No biggie for useruser, that is where they will want to be, I can see that. Admins won't want to be there, it is easy enough to get the control panel a bazzilion other ways though, just not the start button. (see a pattern here?). 
Personally, from the keyboard I find it fun to hit the carets along the location bar at the top of the explorer screen with tabs and arrows and choose SHOW ALL CONTROL PANEL ITEMS, or thereabouts. Bottom line, I love seven and I'll love eight even more!...very happy I did not have to follow the normal rule of thumb (a customer watching me build a system and asking questions said "oh I get it, so every piece you put in there is basically a hundred bucks, right?)...ok, sure, pretty much, more or less, well, ya dude.  It will be WAY past october till I get a real touch screen but I did pick up a pair of cheap tatungs so I can try the NEW main start screen, I parse a lot of folders and have a vision of how a pair of touch screens will be easier than landing a rover on mars.  Ok.  fine, they are way smallish, and I don't expect multitouch to work but we are talking a few percent of a new 21 inch viewsonic touch screen.  Will this OS be a game changer?  I don't know.  Bottom line with all the pads and droids in the world, it is more of a catch up move at first glance.  Not something ms is used to.  An app store?  I can see ms's motivation, the others have it.  I gather there will not be gadgets there, go ahead and see what ms did  to the once populated gadget page...go ahead, google gadgets and take a gander, used to hundreds of gadgets, they are already gone.  They replaced gadgets?  sort of, I'll drop that, it's a bit of a sore point for me.  More of interest was what happened when I downloaded stuff off codeplex and some other normal programs that I like, like orbitron, top o' my list!!...cardware it is...anyways, click on the exe, get a screen, normal for windows, this one indicated that I was not running a normal windows program and had a button for  exit the install, naw, I hit details, a hidden run program anyways came into view....great, my path to the normal windows has detected a program tha.....yea ok, acl is on, fine, moving along I got orbitron installed in record time and was tracking the iss on the newest Microsoft OS, beta of course, felt like the first time I setup bsd all those year ago...FUN!!...I suppose I gotta start to think about budgeting for the real os when it comes out in october, by then I should have a rasberry pi and be done with fedora remixed.  Of course that sounds like fun too!!  I would use this OS on a tablet or phone.  I don't like the idea of being hearded to an app store, don't like that on anything, we are americans and want real choices not marketed hype, lest you are younger with opm (other peoples money).   This os would be neat on a zune, but I suspect the zune is a gonner, I am rooting for microsoft, after all their default password is not admin anymore, nor alpine,  it's blank. Others force a password, my first fawn password was so long I could not even log into it with the password in front of me, who the heck uses %$# anyways, and if I was writing a brute force attack what the heck kinda impasse is that anyways at .00001 microseconds of a code execution cycle (just a non qualified number, not a real clock speed)....AI is where it will be before too long, MS is on that path, perhaps soon someone will sit down and write an app for the kinect that watches your eyes while you scan the new main start screen, clicking on the big E icon when you blink.....boy is that going to be fun!!!! sure. Blink,dammit,blink,dammit...... OPM no doubt.I like windows eight, we are moving forwards, better keep a close eye on ubuntu.  
The real clinch comes when open source becomes paid source......don't blink, I already see plenty of very expensive 'ix apps, some even in app stores already.  more to come.......

    Read the article

  • Can Google Employees See My Saved Google Chrome Passwords?

    - by Jason Fitzpatrick
    Storing your passwords in your web browser seems like a great time saver, but are the passwords secure and inaccessible to others (even employees of the browser company) when squirreled away? Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question
    SuperUser reader MMA is curious if Google employees have (or could have) access to the passwords he stores in Google Chrome: I understand that we are really tempted to save our passwords in Google Chrome. The likely benefit is two-fold: You don't need to (memorize and) input those long and cryptic passwords. These are available wherever you are once you log in to your Google account. The last point sparked my doubt. Since the password is available anywhere, the storage must be in some central location, and this should be at Google. Now, my simple question is, can a Google employee see my passwords? Searching over the Internet revealed several articles/messages:
    Do you save passwords in Chrome? Maybe you should reconsider: Talks about your passwords being stolen by someone who has access to your computer account. Nothing mentioned about the central storage security and vulnerability. There is even a response from the Chrome browser security tech lead about the first issue.
    Chrome's insane password security strategy: Mostly along the same line. You can steal a password from somebody if you have access to their computer account.
    How to Steal Passwords Saved in Google Chrome in 5 Simple Steps: Teaches you how to actually perform the act mentioned in the previous two when you have access to somebody else's account.
    There are many more (including this one at this site), mostly along the same line: points, counter-points, huge debates. I refrain from mentioning them here; simply carry out a search if you want to find them. Coming back to my original query: can a Google employee see my password? Since I can view the password using a simple button, definitely they can be unhashed (decrypted) even if encrypted. This is very different from the passwords saved in Unix-like OS's, where the saved password can never be seen in plain text. They use a one-way encryption algorithm to encrypt your passwords. This encrypted password is then stored in the passwd or shadow file. When you attempt to log in, the password you type in is encrypted again and compared with the entry in the file that stores your passwords. If they match, it must be the same password, and you are allowed access. Thus, a superuser can change my password, can block my account, but he can never see my password. So are his concerns well founded or will a little insight dispel his worry?

    The Answer
    SuperUser contributor Zeel helps put his mind at ease:
    Short answer: No*
    Passwords stored on your local machine can be decrypted by Chrome, as long as your OS user account is logged in. And then you can view those in plain text. At first this seems horrible, but how did you think auto-fill worked? When that password field gets filled in, Chrome must insert the real password into the HTML form element – or else the page wouldn't work right, and you could not submit the form. And if the connection to the website is not over HTTPS, the plain text is then sent over the internet. In other words, if Chrome can't get the plain text passwords, then they are totally useless. A one-way hash is no good, because we need to use them.
    Now the passwords are in fact encrypted; the only way to get them back to plain text is to have the decryption key. That key is your Google password, or a secondary key you can set up. When you sign into Chrome and sync, the Google servers will transmit the encrypted passwords, settings, bookmarks, auto-fill, etc., to your local machine. Here Chrome will decrypt the information and be able to use it. On Google's end all that info is stored in its encrypted state, and they do not have the key to decrypt it. Your account password is checked against a hash to log in to Google, and even if you let Chrome remember it, that encrypted version is hidden in the same bundle as the other passwords, impossible to access. So an employee could probably grab a dump of the encrypted data, but it wouldn't do them any good, since they would have no way to use it.* So no, Google employees can not** access your passwords, since they are encrypted on their servers. * However, do not forget that any system that can be accessed by an authorized user can be accessed by an unauthorized user. Some systems are easier to break than others, but none are fail-proof. . . That being said, I think I will trust Google and the millions they spend on security systems, over any other password storage solution. And heck, I'm a wimpy nerd, it would be easier to beat the passwords out of me than break Google's encryption. ** I am also assuming that there isn't a person who just happens to work for Google gaining access to your local machine. In that case you are screwed, but employment at Google isn't actually a factor any more. Moral: Hit Win + L before leaving your machine. While we agree with Zeel that it's a pretty safe bet (as long as your computer is not compromised) that your passwords are in fact safe while stored in Chrome, we prefer to encrypt all our logins and passwords in a LastPass vault. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
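    A minimal C# sketch of the distinction the answer draws (illustrative only, not Chrome's actual implementation): a one-way hash lets you verify a password but never recover it, while reversible encryption lets the browser get the plain text back for auto-fill.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class PasswordStorageDemo
        {
            static void Main()
            {
                string secret = "hunter2";

                // One-way: store only the hash; you can compare, never recover.
                // (Real systems also add a salt and use a slow KDF.)
                using (var sha = SHA256.Create())
                {
                    byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(secret));
                    Console.WriteLine("Stored hash:  " + Convert.ToBase64String(hash));
                }

                // Two-way: encrypted with a key, so the holder of the key
                // (here, the same process) can decrypt for auto-fill.
                using (var aes = Aes.Create())
                {
                    ICryptoTransform enc = aes.CreateEncryptor();
                    byte[] plain = Encoding.UTF8.GetBytes(secret);
                    byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);

                    ICryptoTransform dec = aes.CreateDecryptor();
                    byte[] roundTrip = dec.TransformFinalBlock(cipher, 0, cipher.Length);
                    Console.WriteLine("Recovered:    " + Encoding.UTF8.GetString(roundTrip));
                }
            }
        }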

    Read the article

  • BizTalk: Internals: the Partner Direct Ports and the Orchestration Chains

    - by Leonid Ganeline
    Partner Direct Ports are one of the BizTalk hidden gems. They open simple ways to several messaging patterns. This article is based on Kevin Lam's blog article. That article is pretty detailed, but it still leaves several pieces unclear, so I have created a sample and will show how it works from different perspectives.

    Requirements

    We should create an orchestration chain where messages are routed from the first stage to the second stage. The messages should not be modified, and all messages have the same message type.

    Common artifacts

    Source code can be downloaded here. Interestingly, all orchestrations use only one port type. This is possible because all ports are one-way ports and use only one operation. I have added a B orchestration; it helps to test the sample by showing all test messages in the channel. Its Receive shape Filter is empty, and its Receive Port (R_Shema1Direct) is a plain Direct Port. As you can see, the subscription expression of this direct port has only one part, the MessageType for our test schema. The Filter is empty but, as you know, the link from the Receive shape to the Port creates this MessageType expression. I use only one physical Receive File port to send a message to all processes. Each orchestration outputs a Trace.WriteLine("<Orchestration Name>").

    Forward Binding

    This sample has three orchestrations: A_1, A_21 and A_22. A_1 is the sender; A_21 and A_22 are receivers. The subscription of the A_1 orchestration has two parts:

    - A MessageType. The same was true for the B orchestration.
    - A ReceivePortID. There was no such parameter for the B orchestration. It was created because I have bound the orchestration port to the physical Receive File port; this binding adds the PortID parameter to the subscription.

    How to set up the ports? All ports involved in the message exchange must be of the same port type. This forces us to use the same operation and the same message type for the bound ports. This step is absolutely counter-intuitive. We have to choose a Partner Orchestration parameter for the sending orchestration, A_1. The first strange thing is that it is not a partner orchestration we have to choose but an orchestration port. But the strangest thing is that we have to choose exactly this orchestration and exactly this port: not a port from the partner (receiving) orchestrations A_21 or A_22, but the A_1 orchestration and its S_SentFromA_1 port. Now we have to choose a Partner Orchestration parameter for the receiving orchestrations, A_21 and A_22. Nothing is strange here except the parameter name: we choose the port of the sender, the A_1 orchestration and its S_SentFromA_1 port. As you can see, the Partner Orchestration parameter is the same for the sender and receiver orchestrations.

    Testing

    I dropped a test file in the file folder. There we go:

    - The dropped file was received by B and by A_1.
    - A_1 sent a message forward.
    - That message was received by B, A_21 and A_22.

    Looking at the context of the message sent by A_1 on the second step, we see a MessageType part (quite expected) plus a PartnerService, a PartnerPort and an Operation. All of those parameters were set up in the Partner Orchestration parameter on both bound ports. Now the subscription of the A_21 and A_22 orchestrations makes sense, and that is why we chose such a strange value for the Partner Orchestration parameter of the sending orchestration.

    Inverse Binding

    This sample has three orchestrations: A_11, A_12 and A_2. A_11 and A_12 are senders; A_2 is the receiver. How to set up the ports?
    Again, all ports involved in the message exchange must be of the same port type, which forces the same operation and the same message type onto the bound ports, and again the step is absolutely counter-intuitive. We have to choose a Partner Orchestration parameter for the receiving orchestration, A_2. It is not a partner orchestration we choose but an orchestration port, and we have to choose exactly this orchestration and exactly this port: not a port from the partner (sending) orchestrations A_11 or A_12, but the A_2 orchestration and its R_SentToA_2 port. Then we choose the Partner Orchestration parameter for the sending orchestrations, A_11 and A_12. Nothing is strange here except the parameter name: we choose the port of the receiver, the A_2 orchestration and its R_SentToA_2 port.

    Testing

    I dropped a test file in the file folder. There we go:

    - The dropped file was received by B, A_11 and A_12.
    - A_11 and A_12 sent two messages forward.
    - The messages were received by B and A_2.

    The context of the messages sent by A_11 and A_12 on the second step holds a MessageType part (quite expected) plus a PartnerService, a PartnerPort and an Operation, all set up in the Partner Orchestration parameter on both bound ports, and the subscription of the A_2 orchestration matches them.

    Models

    I had a hard time trying to explain Partner Direct Ports in simple terms. I have finished with this model:

    - Forward Binding: Receivers know the Sender; the Sender doesn't know the Receivers. In other words, the Subscribers know the Publisher; the Publisher doesn't know the Subscribers. 1 -> 1 or 1 -> M.
    - Inverse Binding: Senders know the Receiver; the Receiver doesn't know the Senders. In other words, the Publishers know the Subscriber; the Subscriber doesn't know the Publishers. 1 -> 1 or M -> 1.

    Notes: the orchestration chain

    It's worth noting that the Partner Direct Port binding creates a chain that is open on one side and closed on the other. With Forward Binding, a new Receiver can be added at run-time, but the Sender cannot be changed without design-time changes in the Receivers. With Inverse Binding, a new Sender can be added at run-time, but the Receiver cannot be changed without design-time changes in the Senders.
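    To make the subscription parts described above concrete, the computed subscription ends up looking roughly like the following filter expression (a sketch only; the schema namespace and the port GUID are placeholders, and the real expression is what the BizTalk Administration Console shows for the orchestration's subscription):

        BTS.MessageType == "http://MySample.Schema1#Root"
        AND BTS.ReceivePortID == "{guid-of-the-bound-receive-port}"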

    Read the article

  • Looking into the JQuery Overlays Plugin

    - by nikolaosk
    I have been using jQuery for a couple of years now, and it has helped me solve many problems on the client side of web development. You can find all my posts about jQuery at this link. In this post I will provide a hands-on example of the jQuery Overlays plugin. If you want, you can have a look at this post, where I describe the jQuery Cycle plugin. You can find another post of mine talking about the jQuery Carousel Lite plugin here, and a post regarding the jQuery Image Zoom plugin here. I will be writing more posts on the most commonly used jQuery plugins.

    With the jQuery Overlays plugin we can show the user an overlay with more information about an image when the user hovers over it. I have been using this plugin extensively on my websites.

    In this hands-on example I will be using Expression Web 4.0. This is not a free application; you can use any HTML editor you like, for example Visual Studio 2012 Express edition, which you can download here. You can download the plugin from this link.

    I launch Expression Web 4.0 and type the following HTML markup (I am using HTML 5):

        <html lang="en">
        <head>
            <link rel="stylesheet" href="ImageOverlay.css" type="text/css" media="screen" />
            <script type="text/javascript" src="jquery-1.8.3.min.js"></script>
            <script type="text/javascript" src="jquery.ImageOverlay.min.js"></script>
            <script type="text/javascript">
                $(function () {
                    $("#Liverpool").ImageOverlay();
                });
            </script>
        </head>
        <body>
            <ul id="Liverpool" class="image-overlay">
                <li>
                    <a href="www.liverpoolfc.com">
                        <img alt="Liverpool" src="championsofeurope.jpg" />
                        <div class="caption">
                            <h3>Liverpool Football club</h3>
                            <p>The greatest club in the world</p>
                        </div>
                    </a>
                </li>
            </ul>
        </body>
        </html>

    This is very simple markup. I have added references to the jQuery library (the current version is 1.8.3) and to the jQuery Overlays plugin. Then I add one image inside the element with id="Liverpool". There is a caption class as well, where I place the text I want to show when the mouse hovers over the image. The caption class and the Liverpool id element are styled in the ImageOverlay.css file, which can also be downloaded with the plugin. You can style the various elements of the HTML markup in that .css file.

    The JavaScript code that makes it all happen follows; I am just calling the ImageOverlay function on the Liverpool ID element:

        <script type="text/javascript">
            $(function () {
                $("#Liverpool").ImageOverlay();
            });
        </script>

    The contents of the ImageOverlay.css file follow:

        .image-overlay { list-style: none; text-align: left; }
        .image-overlay li { display: inline; }
        .image-overlay a:link, .image-overlay a:visited,
        .image-overlay a:hover, .image-overlay a:active { text-decoration: none; }
        .image-overlay a:link img, .image-overlay a:visited img,
        .image-overlay a:hover img, .image-overlay a:active img { border: none; }
        .image-overlay a {
            margin: 9px;
            float: left;
            background: #fff;
            border: solid 2px;
            overflow: hidden;
            position: relative;
        }
        .image-overlay img {
            position: absolute;
            top: 0;
            left: 0;
            border: 0;
        }
        .image-overlay .caption {
            float: left;
            position: absolute;
            background-color: #000;
            width: 100%;
            cursor: pointer;
            /* The way to change overlay opacity is the following properties.
               Opacity is a tricky issue due to longtime IE abuse of it, so opacity
               is not officially supported - use at your own risk.
               To play it safe, disable overlay opacity in IE. */
            /* For Firefox/Opera/Safari/Chrome */
            opacity: .8;
            /* For IE 5-7 */
            filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=80);
            /* For IE 8 */
            -MS-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=80)";
        }
        .image-overlay .caption h1, .image-overlay .caption h2, .image-overlay .caption h3,
        .image-overlay .caption h4, .image-overlay .caption h5, .image-overlay .caption h6 {
            margin: 10px 0 10px 2px;
            font-size: 26px;
            font-weight: bold;
            padding: 0 0 0 5px;
            color: #92171a;
        }
        .image-overlay p {
            text-indent: 0;
            margin: 10px;
            font-size: 1.2em;
        }

    It couldn't be any simpler than that. I view my simple page in Internet Explorer 10 and it works as expected. I have tested this simple solution in all major browsers and it works fine. Have a look at the picture below. You can test it yourself and see the results in your favorite browser. Hope it helps!!!

    Read the article

  • Partner Blog Series: PwC Perspectives Part 2 - Jumpstarting your IAM program with R2

    - by Tanu Sood
    Identity and access management (IAM) isn't a new concept. Over the past decade, companies have begun to address identity management through a variety of solutions that have primarily focused on provisioning. The new age workforce is converging at a rapid pace, with ever increasing demand to use a diverse portfolio of applications and systems to interact and interface with their peers in the industry and customers alike. Oracle has taken a significant leap with their release of Identity and Access Management 11gR2 towards enabling this global workforce to conduct their business in a secure, efficient and effective manner.

    As companies deal with IAM business drivers, it becomes immediately apparent that holistic, rather than piecemeal, approaches better address their needs. When planning an enterprise-wide IAM solution, the first step is to create a common framework that serves as the foundation on which to build cost, compliance and business process efficiencies. As a leading industry practice, IAM should be established on a foundation of accurate data for identity management, making this data available in a uniform manner to downstream applications and processes. Mature organizations are looking beyond IAM's basic benefits to harness more advanced capabilities in user lifecycle management. For any organization looking to embark on an IAM initiative, consider the following use cases in managing and administering user access.

    Expanding the Enterprise Provisioning Footprint

    Almost all organizations have some helpdesk resources tied up in handling access requests from users, a distraction from their core job of handling problem tickets. This dependency has mushroomed from the traditional acceptance of provisioning solutions integrating and addressing only a portion of applications in the heterogeneous landscape. Oracle Identity Manager (OIM) 11gR2 solves this problem by offering integration with third-party ticketing systems as "disconnected applications". It allows existing business processes to be seamlessly integrated into the system and tracked throughout their lifecycle. With minimal effort and analysis, an organization can begin integrating OIM with groups or applications that are involved with manually intensive access provisioning and de-provisioning activities. This aspect of OIM allows organizations to on-board applications and associated business processes quickly using out-of-the-box templates and frameworks. This is especially important for organizations looking to fold in users and resources from mergers and acquisitions.

    Simplifying Access Requests

    Organizations looking to implement access request solutions often find it challenging to get their users to accept and adopt the new processes. So how do we improve the user experience, make it intuitive and personalized, and yet simplify the user access process? With R2, OIM helps organizations alleviate this challenge by placing the most used functionality front and centre in the new user request interface. Roles, application accounts and entitlements can all be found in the same interface as catalog items, giving business users a single location to go to whenever they need to initiate, approve or track a request. Furthermore, if a particular item is not relevant to a user's job function or area inside the organization, it can be hidden so as not to overwhelm or confuse the user with superfluous options.
    The ability to customize the user interface to suit your needs helps in exercising business rules effectively and avoiding access proliferation within the organization.

    Saving Time with Templates

    A typical use case that is most beneficial to business users is the flexibility to place, edit, and withdraw requests based on changing circumstances and business needs. With OIM R2, multiple catalog items can now be added to and removed from the shopping cart, an e-commerce paradigm that many users are already familiar with. This feature can be especially useful when setting up a large number of new employees or granting an existing department or group access to a newly integrated application. Additionally, users can create their own shopping cart templates in order to complete subsequent requests more quickly. This saves the user from having to search for and select items all over again if a request is similar to a previous one.

    Advanced Delegated Administration

    A key feature of any provisioning solution should be to empower each business unit to manage its own access requests. By bringing administration closer to the user, you improve user productivity, enable efficiency and alleviate administration overhead. Doing so requires a federated services model, so that the business units capable of shouldering the onus of user lifecycle management for their own business users can be enabled to do so. OIM 11gR2 offers advanced administrative options for creating, managing and controlling business logic and workflows through an easy-to-use administrative interface and tools that can be exposed to delegated business administrators. For example, these business administrators can establish or modify how certain requests and operations should be handled within their business unit based on a number of attributes, ranging from the type of request to the risk level of the individual items requested.

    Closed-Loop Remediation

    Security continues to be a major concern for most organizations. Identity management solutions bolster security by ensuring only the right users have the right access to the right resources. Preventing unauthorized access, and detecting and remediating it where it already exists, are key requirements of an enterprise-grade proven solution. But the challenge with most solutions today is that some of this information still exists in silos, and when changes are made to systems directly, not all information is captured. With R2, Oracle is offering a comprehensive Identity Governance solution that customer organizations are leveraging for closed-loop remediation, giving administrators an automated way to revoke unauthorized access. The change is automatically captured and the action noted for continued management.

    Conclusion

    While implementing provisioning solutions, it is important to keep both the near-term and the long-term goals in mind. The provisioning solution should always be part of a larger security and identity management program, with the ability to integrate seamlessly not only with the company's infrastructure but also with the information and business models compiled and used by the other identity management solutions. This allows organizations to reduce the cost of ownership, close security gaps and leverage the existing infrastructure. And having done so at multiple client sites, this is the approach we recommend.
    In our next post, we will take a journey through our experiences of advising clients looking to upgrade to R2 from a previous version or to migrate from a different solution.

    Meet the Writers:

    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, of which he has spent the past 8 years implementing Identity Management solutions.

    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, of which she has spent the past one and a half years implementing Identity Management solutions.

    Read the article

  • I Know What I Did This Summer: Put Down Trex Decking

    - by thatjeffsmith
    If you’re wondering why I would bore everyone with my pictures and frequent status updates/tweets from the past week, it’s so I could document the process of refurbishing my deck, or what some would call a porch. When we go to take a vacation, buy a car, or do anything else, we read personal blogs to get the real story. So, if you’re curious about what it takes to tackle this sort of project, read on.

    Skills/Equipment/Manpower We Possessed

    I took the old decking out by myself. I’m about 230 lbs, more than 6′ tall, and I’m pretty healthy. This took about 8 hours over two afternoons. Three of us put the deck back together. My wife has two engineering degrees. Her father also has two engineering degrees. Lots of brainpower available here. Also, her dad ran the public works department for a county for more than 20 years, so lots and lots of practical experience on hand. We had a compound mitre saw, a skilsaw, 2-3 crowbars, a framing hammer, 3 cordless drills, a corded drill, lots of sawhorses, a power sander, an angle grinder, a 10×10 Coleman canopy tent, a Ford F-150 pickup truck, outdoor speakers and lots of iTunes playlists, plenty of water, and cold beer.

    Why We Did This

    Our deck was relatively young; it was built in 2005. However, the pressure-treated boards must not have been adequately maintained before we bought the house. I had powerwashed the deck every other year and had it stained a few times, but the boards just rotted. We’re going to be in the house for a long time, and we wanted something that would look nice and require little maintenance.

    Things We Learned

    The two most important things:

    - The hidden fasteners have to be put in JUST right. Wedge them into the grooved board, then bend down the bit that is screwed down. We didn’t do this on the first board and couldn’t get the second board to fit nearly close enough. Watching the official TREX YouTube video helped immensely, and we should have watched that first.
    - When pre-drilling holes for the boards that need to be screwed down, DO NOT pre-drill through the underlying framing wood; ONLY pre-drill through the Trex itself. Otherwise the screw won’t seat in the board properly: instead of sitting down flush with the board, it will stop at the top of the board and just spin. I had to call the place that sold me the screws to find this out. So about a third of our screws look like crap.

    And one more: if it doesn’t look or feel right, stop everything and pick up your computer or your phone. It’s not right, and it will be much easier to stop and find out why. We didn’t do this, and now I’m going to see every screw that’s not flush with the boards and get upset. Oh well.

    The Process

    How much time did it take? Well, I spent about 8 hours taking the deck apart, and then the three of us spent 8 hours the first day, 10 hours the second day, 8 hours the third, and another 6 hours on the fourth day. That’s like 104 man-hours. We supposedly saved four or five thousand dollars in labor, but don’t do the math here or you might get a bit upset. The main thing is that we got what we wanted, and there won’t be any surprises later.

    Now for some pictures (the captions from the photo gallery):

    - This 6”+ pry bar made the destruction of the old deck much easier.
    - Most of the joists, once exposed, were OK.
    - This joist wasn’t sitting on ANYTHING before. We think a lazy gas person cut the board to sneak a gas line in. Awesome…
    - These monster lag bolts had to be accounted for when putting in the additional framing.
    - The border pattern Sheri wanted to put in required a lot more framing.
    - These were the first boards to go down; we screwed them in, as there was no way to attach clips.
    - I sat, kicked in the boards, and then drilled these clips in, but my wife was able to go MUCH faster by using her hands to lock the boards in and drilling on her knees. I liked locking the board in with my feet when they needed to be ‘encouraged’ to go straight.
    - The first board took FOREVER to go in, but once we got rolling we were able to put in a 20′ board in less than 10 minutes.
    - This was the end of construction day #2; we got much further than we thought we would.
    - Ah, the dreaded last 10%. What to do here? Remember those ‘floating’ stringers? Yeah, we fixed that up a bit, too.
    - My wife used a website (and her brain) to calculate exactly how to cut the stringers to give us the rise/run we needed, with the proper clearance and all that jazz.
    - The stairs with stringers and toe kicks: this was worth the effort.
    - It started raining on us as I screwed down the steps, but we managed to get our shade tent up on the deck to protect us from the rain too.
    - The stairs, finished. Finished, mostly. Good corner shot. The top of the stairs. Stairs, looking down. Celebratory beer.

    In Summary

    There are a few things we’re not happy with. I think we can fix them up, but later. I have a few things left to finish: rewire the lighting, get the gas grille put back in, and rehang some screen doors. I was expecting this to be a lot worse than it was. If I didn’t have the help, I would have never done it myself. But I’m glad that I did have that help and did do that project. It’s not often you get to spend that kind of quality time with family while building cool stuff.

    Read the article

  • Car animations in Frogger on Javascript

    - by Mijoro Nicolas Rasoanaivo
    I have to finish a Frogger game in JavaScript for my engineering school degree, but I don't know how to animate the cars. Right now I have tried to manipulate the CSS and the DOM, and I wrote a script with setTimeout(), but none of them works. Can I have some help please? Here's my code and my CSS:

        <html>
        <head>
            <title>Image d&eacute;filante</title>
            <link rel="stylesheet" type="text/css" href="map_style.css"/>
        </head>
        <body onload="start()">
            <canvas id="jeu" width="800" height="450"></canvas>
            <img id="voiture" class="voiture" src="car.png" onload="startTimerCar()">
            <img id="voiture2" class="voiture" src="car.png" onload="startTimerCar()">
            <img id="voiture3" class="voiture" src="car.png" onload="startTimerCar()">
            <img id="bigrig" class="bigrig" src="bigrig.png" onload="startTimerBigrig()">
            <img id="bigrig2" class="bigrig" src="bigrig.png" onload="startTimerBigrig()">
            <img id="bigrig3" class="bigrig" src="bigrig.png" onload="startTimerBigrig()">
            <img id="hotrod" src="hotrod.png" onload="startTimerHotrod()">
            <img id="hotrod2" src="hotrod.png" onload="startTimerHotrod()">
            <img id="turtle" src="turtles_diving.png" onload="startTimerTurtle()">
            <img id="turtle2" src="turtles_diving.png" onload="startTimerTurtle()">
            <img id="turtle3" src="turtles_diving.png" onload="startTimerTurtle()">
            <img id="small" src="log_small.png" onload="startTimerSmall()">
            <img id="small2" src="log_small.png" onload="startTimerSmall()">
            <img id="small3" src="log_small.png" onload="startTimerSmall()">
            <img id="small4" src="log_small.png" onload="startTimerSmall()">
            <img id="med" src="log_medium.png" onload="startTimerMedium()">
            <img id="med2" src="log_medium.png" onload="startTimerMedium()">
            <img id="med3" src="log_medium.png" onload="startTimerMedium()">

            <script type="text/javascript">
                var X = 1;
                var timer;

                function start() {
                    setInterval(init, 10);
                    document.onkeydown = move;

                    var canvas = document.getElementById('jeu');
                    var context = canvas.getContext('2d');
                    var frog = document.getElementById('frog');
                    var posX_frog = 415;
                    var posY_frog = 400;
                    var voiture = [document.getElementById('voiture'), document.getElementById('voiture2'), document.getElementById('voiture3')];
                    var bigrig = [document.getElementById('bigrig'), document.getElementById('bigrig2'), document.getElementById('bigrig3')];
                    var hotrod = [document.getElementById('hotrod'), document.getElementById('hotrod2')];
                    var turtle = [document.getElementById('turtle'), document.getElementById('turtle2'), document.getElementById('turtle3')];
                    var small = [document.getElementById('small'), document.getElementById('small2'), document.getElementById('small3'), document.getElementById('small4')];
                    var med = [document.getElementById('med'), document.getElementById('med2'), document.getElementById('med3')];

                    function init() {
                        context.fillStyle = "#AEEE00";
                        context.fillRect(0, 0, 800, 50);
                        context.fillRect(0, 200, 800, 50);
                        context.fillRect(0, 400, 800, 50);
                        context.fillStyle = "#046380";
                        context.fillRect(0, 50, 800, 150);
                        context.fillStyle = "#000000";
                        context.fillRect(0, 250, 800, 150);
                        var img = new Image();
                        img.src = "./frog.png";
                        context.drawImage(img, posX_frog, posY_frog, 46, 38);
                    }

                    function move(event) {
                        if (event.keyCode == 39) {              // right arrow
                            if (posX_frog < 716) { posX_frog += 50; }
                        }
                        if (event.keyCode == 37) {              // left arrow
                            if (posX_frog > 25) { posX_frog -= 50; }
                        }
                        if (event.keyCode == 38) {              // up arrow
                            if (posY_frog > 10) { posY_frog -= 50; }
                        }
                        if (event.keyCode == 40) {              // down arrow
                            if (posY_frog < 400) { posY_frog += 50; }
                        }
                    }
                }
            </script>
        </body>
        </html>

    And my map_style.css:

        #jeu {
            z-index: 10;
            width: 800px;
            height: 450px;
            border: 2px black solid;
            overflow: hidden;
            position: relative;
            transition: width 2s;
            -moz-transition: width 2s;     /* Firefox 4 */
            -webkit-transition: width 2s;  /* Safari and Chrome */
        }
        #voiture {
            z-index: 100;
            position: absolute;
            top: 315px;
            left: 48px;
            transition-timing-function: linear;
            -webkit-transition-timing-function: linear;
            -moz-transition-timing-function: linear;
        }
        #voiture2 { z-index: 100; position: absolute; top: 315px; left: 144px; }
        #voiture3 { z-index: 100; position: absolute; top: 315px; left: 240px; }
        #bigrig   { z-index: 100; position: absolute; top: 365px; left: 200px; }
        #bigrig2  { z-index: 100; position: absolute; top: 365px; left: 400px; }
        #bigrig3  { z-index: 100; position: absolute; top: 365px; left: 600px; }
        #hotrod   { z-index: 100; position: absolute; top: 265px; left: 200px; }
        #hotrod2  { z-index: 100; position: absolute; top: 265px; left: 500px; }
        #hotrod3  { z-index: 100; position: absolute; top: 265px; left: 750px; }
        #turtle   { z-index: 100; position: absolute; top: 175px; left: 50px; }
        #turtle2  { z-index: 100; position: absolute; top: 175px; left: 450px; }
        #turtle3  { z-index: 100; position: absolute; top: 175px; left: 250px; }
        #small    { z-index: 100; position: absolute; top: 125px; left: 20px; }
        #small2   { z-index: 100; position: absolute; top: 125px; left: 220px; }
        #small3   { z-index: 100; position: absolute; top: 125px; left: 420px; }
        #small4   { z-index: 100; position: absolute; top: 125px; left: 620px; }
        #med      { z-index: 100; position: absolute; top: 75px; left: 120px; }
        #med2     { z-index: 100; position: absolute; top: 75px; left: 320px; }
        #med3     { z-index: 100; position: absolute; top: 75px; left: 520px; }

    I have to say that I am required to code this in HTML5, CSS3 and plain JavaScript, not jQuery, which is way easier; I have already created games in jQuery. This way it takes me much more time and many more lines of code.
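    Since this entry is a question, one possible answer is worth sketching: skip CSS transitions entirely and drive the car positions from a timer, advancing each car's left offset a few pixels per tick and wrapping around at the edge of the 800px playfield. Below is a minimal sketch (the speeds are made up; the element ids come from the markup above); it would be called once the images have loaded, for example from start():

        // Minimal sketch: move the cars left to right and wrap around.
        var cars = [
            { el: document.getElementById('voiture'),  speed: 2 },
            { el: document.getElementById('voiture2'), speed: 2 },
            { el: document.getElementById('voiture3'), speed: 3 }
        ];

        function animateCars() {
            for (var i = 0; i < cars.length; i++) {
                var car = cars[i];
                var x = car.el.offsetLeft + car.speed;
                if (x > 800) {             // left the 800px playfield...
                    x = -car.el.width;     // ...re-enter from the left edge
                }
                car.el.style.left = x + 'px';
            }
        }

        setInterval(animateCars, 16);      // roughly 60 frames per second

    A negative speed (with the symmetric wrap test) gives a lane moving the other way, and the same array can hold the trucks, logs and turtles. A plain timer that rewrites style.left sidesteps the transition machinery that the question was fighting with.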

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

    MySQL before the InnoDB Plugin

    Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

        CREATE TABLE t(a INT);
        INSERT INTO t VALUES (1),(2),(3);
        CREATE INDEX a ON t(a);
        DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

        CREATE TABLE temp(a INT, INDEX(a));
        INSERT INTO temp SELECT * FROM t;
        RENAME TABLE t TO temp2;
        RENAME TABLE temp TO t;
        DROP TABLE temp2;

    You can imagine that the database could crash when copying all rows from the original table to the new one; for example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1

    MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. The new interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB executes a merge sort before inserting the index records into each index being created. This should speed up the inserts into the secondary index B-trees and potentially result in a better B-tree fill factor.

    The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in one ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6

    The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE; LOCK=SHARED and LOCK=EXCLUSIVE are also available. The old-style table copying can be requested by ALGORITHM=COPY, which requires at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

    - handler::check_if_supported_inplace_alter: checks whether the storage engine can perform all requested operations, and if so, what kind of locking is needed.
    - handler::prepare_inplace_alter_table: InnoDB uses this method to set up the data dictionary cache for an upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.
    - handler::inplace_alter_table: in InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the "main" phase that can be executed online (with concurrent writes to the table).
    - handler::commit_inplace_alter_table: this is where the operation is committed or rolled back. Here InnoDB drops any indexes, renames any columns, drops or adds foreign keys, and finalizes a table rebuild or index creation. It also discards any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split: part of it lives inside the InnoDB data dictionary tables, and part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But there is a single commit phase inside the storage engine.

    Online Secondary Index Creation

    It may occur that an index needs to be created on a new column to speed up queries, but it may be unacceptable to block modifications of the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases:

    1. Set up a stub for the index, for logging changes.
    2. Scan the table for index records.
    3. Sort the index records.
    4. Bulk load the index records.
    5. Apply the logged changes.
    6. Replace the stub with the actual index.

    Threads that modify the table will log their operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records bypasses record locking; we still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE

    Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in place; for example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases:

    - when adding an AUTO_INCREMENT column,
    - when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or
    - when there are FOREIGN KEY constraints referring to the table, with an ON…CASCADE or ON…SET NULL option.

    The FOREIGN KEY limitations are needed because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in place, by lazily converting the table to a newer format. This would require the data dictionary to keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table bypasses record locking and undo logging. To facilitate online operation, a temporary log is associated with the clustered index of the table, and threads that modify the table also write their changes to that log.
    When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration: we suspend the purge of delete-marked records if it would free any off-page columns from the old table, because the BLOBs can be needed when applying changes from the log. We also have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns, because those columns will be freed at rollback.
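    Tying the keywords together, here is a hypothetical example of the 5.6 syntax described above (the table and index names are made up). MySQL raises an error instead of silently falling back to copying if the requested ALGORITHM/LOCK combination cannot be honored:

        -- Hypothetical table for illustration.
        CREATE TABLE t (id INT PRIMARY KEY, a INT) ENGINE=InnoDB;

        -- Build a secondary index online: concurrent writes stay allowed.
        ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=INPLACE, LOCK=NONE;

        -- Force the old copying behavior instead (needs at least a shared lock).
        ALTER TABLE t DROP INDEX idx_a, ALGORITHM=COPY, LOCK=SHARED;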

    Read the article

  • Consuming ASP.NET Web API services from PHP script

    - by DigiMortal
    I introduced ASP.NET Web API in some of my previous posts. Although Web API is easy to use in ASP.NET web applications, you can also consume it from other platforms. This post shows you how to consume ASP.NET Web API from PHP scripts. Here are my previous posts about Web API:

    - How content negotiation works?
    - ASP.NET Web API: Extending content negotiation with new formats
    - Query string based content formatting

    Although these posts cover content negotiation, they give you some idea of how Web API works.

    Test application

    On the Web API side I use the same sample application as in my previous Web API posts: a very primitive web application to manage contacts.

    Listing contacts

    On the other machine I run the following PHP script that works against my Web API application:

        <?php
        // request list of contacts from Web API
        $json = file_get_contents('http://vs2010dev:3613/api/contacts/');

        // deserialize data from JSON
        $contacts = json_decode($json);
        ?>
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
            <table>
            <?php foreach($contacts as $contact) { ?>
                <tr>
                    <td valign="top"><?php echo $contact->FirstName ?></td>
                    <td valign="top"><?php echo $contact->LastName ?></td>
                    <td valign="middle">
                        <form method="POST">
                            <input type="hidden" name="id"
                                value="<?php echo $contact->Id ?>" />
                            <input type="submit" name="cmd" value="Delete"/>
                        </form>
                    </td>
                </tr>
            <?php } ?>
            </table>
        </body>
        </html>

    Notice how easy it is to handle JSON data in PHP! My PHP script produces the output shown in the original post; the data is there, as it should be.

    Deleting contacts

    Now let's write code to delete contacts. Add this block before any other code in the PHP script:

        if(@$_POST['cmd'] == 'Delete') {
            $id = @$_POST['id'];

            $params = array('http' => array(
                'method'  => 'DELETE',
                'content' => ''
            ));
            $url = 'http://vs2010dev:3613/api/contacts/'.$id;
            $ctx = stream_context_create($params);
            $fp = fopen($url, 'rb', false, $ctx);

            if (!$fp) {
                $res = false;
            } else {
                $res = stream_get_contents($fp);
                fclose($fp);
            }

            header('Location: /json.php');
            exit;
        }

    Again, simple code. If we also write insert and update methods, we may want to bundle these operations into a single class.

    Conclusion

    ASP.NET Web API is not only for ASP.NET applications; it is available to all other platforms as well. In this post we wrote a simple PHP client that is able to communicate with our Web API application, using only a little code, nothing complex. In the same way we can also use platforms like Java, Perl and Ruby.
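    Following the post's suggestion, the list, insert, update and delete calls can be bundled into a single class. Here is a minimal sketch (the base URL matches the post; the assumption that the service accepts JSON bodies for POST and PUT is mine, not the post's):

        <?php
        // Minimal sketch of a Web API client class built on the same
        // stream_context_create() technique as the delete example above.
        class ContactsClient
        {
            private $baseUrl = 'http://vs2010dev:3613/api/contacts/';

            public function listAll()
            {
                return json_decode(file_get_contents($this->baseUrl));
            }

            public function delete($id)
            {
                return $this->send('DELETE', $this->baseUrl . $id);
            }

            public function insert($contact)
            {
                return $this->send('POST', $this->baseUrl, json_encode($contact));
            }

            public function update($id, $contact)
            {
                return $this->send('PUT', $this->baseUrl . $id, json_encode($contact));
            }

            private function send($method, $url, $content = '')
            {
                $ctx = stream_context_create(array('http' => array(
                    'method'  => $method,
                    'header'  => "Content-Type: application/json\r\n",
                    'content' => $content
                )));
                $fp = @fopen($url, 'rb', false, $ctx);
                if (!$fp) {
                    return false;
                }
                $res = stream_get_contents($fp);
                fclose($fp);
                return $res;
            }
        }

        // Usage:
        // $client = new ContactsClient();
        // $client->insert(array('FirstName' => 'John', 'LastName' => 'Doe'));
        ?>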

    Read the article

  • How to block the ASP.NET page while ajax UpdateProgress is being displayed.

    Step 1: Copy the following styles to your aspx page.

        <style type="text/css">
            .hide { display: none; }
            .show { display: inherit; }
            .progressBackgroundFilter
            {
                position: absolute;
                top: 0px;
                bottom: 0px;
                left: 0px;
                right: 0px;
                overflow: hidden;
                padding: 0;
                margin: 0;
                background-color: #000;
                filter: alpha(opacity=50);
                opacity: 0.5;
                z-index: 1000;
            }
            .processMessage
            {
                position: absolute;
                font-family: Verdana;
                font-size: 12px;
                font-weight: normal;
                color: #000066;
                top: 30%;
                left: 43%;
                padding: 10px;
                width: 18%;
                z-index: 1001;
                background-color: #fff;
            }
        </style>

    Step 2: Put the divs as shown below in the UpdateProgress control.

        <asp:UpdateProgress ID="updPrgsBaselineTab" runat="server">
            <ProgressTemplate>
                <div id="progressBackgroundFilter" class="progressBackgroundFilter">
                </div>
                <div id="processMessage" class="processMessage">
                    <table width="100%">
                        <tr style="width: 100%">
                            <td style="width: 100%">
                                Please Wait..........
                            </td>
                        </tr>
                        <tr style="width: 100%">
                            <td style="width: 100%" align="center">
                                <img src="../Images/Update_Progress.gif" />
                            </td>
                        </tr>
                    </table>
                </div>
            </ProgressTemplate>
        </asp:UpdateProgress>
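    Note that an UpdateProgress only appears during an asynchronous postback, so the page also needs a ScriptManager and at least one UpdatePanel. A minimal sketch of that wiring (the control IDs and button handler below are made up for illustration):

        <asp:ScriptManager ID="scriptManager" runat="server" />
        <asp:UpdatePanel ID="updPanel" runat="server">
            <ContentTemplate>
                <asp:Button ID="btnGo" runat="server" Text="Go"
                    OnClick="btnGo_Click" />
            </ContentTemplate>
        </asp:UpdatePanel>

    While btnGo's async postback is running, the ProgressTemplate above is displayed: the semi-transparent div (z-index 1000) covers the whole page so nothing underneath can be clicked, and the message div (z-index 1001) floats on top of it.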

    Read the article

< Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >