Search Results


  • Oracle Tutor: Top 10 to Implement Sustainable Policies and Procedures

    - by emily.chorba(at)oracle.com
    Overview

    Your organization (executives, managers, and employees) understands the value of having written business process documents (process maps, procedures, instructions, reference documents, and form abstracts). Policies and procedures should be documented because they help to reduce the range of individual decisions and encourage management by exception: the manager only needs to give special attention to unusual problems not covered by a specific policy or procedure. As more and more procedures are written to cover recurring situations, managers will begin to make decisions that are consistent from one functional area to the next.

    Companies should take a project management approach when implementing an environment for a sustainable documentation program and do the following:

    1. Identify an Executive Champion
    2. Put together a winning team
    3. Assign ownership
    4. Centralize publishing
    5. Establish the document maintenance process up front
    6. Document critical activities only
    7. Document actual practice
    8. Minimize documentation
    9. Support continuous improvement
    10. Keep it simple

    1. Identify an Executive Champion
    Appoint a top-down driver. Select one key individual to be a mentor for the procedure planning team. The individual should be a senior manager, such as your company president, CIO, CFO, or the vice-president of quality, manufacturing, or engineering. Written policies and procedures can be important supportive aids when they are known to express the thinking of the chief executive officer and/or the president and to have his or her full support.

    2. Put Together a Winning Team
    Choose a strong Project Management Leader and staff the procedure planning team with management members from cross-functional groups. Make sure team members have the responsibility - and the authority - to make things happen. The winning team should consist of the Documentation Project Manager, Document Owners (one for each functional area), a Document Controller, and Document Specialists (as needed). The Tutor Implementation Guide has complete job descriptions for these roles.

    3. Assign Ownership
    It is virtually impossible to keep process documentation simple and meaningful if employees who are far removed from the activity itself create it. It is impossible to keep documentation up-to-date when responsibility for the document is not clearly understood. Key to the Tutor methodology, therefore, is the concept of ownership. Each document has a single owner, who is responsible for ensuring that the document is necessary and that it reflects actual practice. The owner must be a person who is knowledgeable about the activity and who has the authority to build consensus among the persons who participate in the activity, as well as the authority to define or change the way an activity is performed. The owner must be an advocate of the performers and negotiate, not dictate, practices. In the Tutor environment, a document's owner is the only person with the authority to approve an update to that document.

    4. Centralize Publishing
    Although it is tempting (especially in a networked environment and with document management software solutions) to decentralize the control of all documents -- with each owner updating and distributing his own -- Tutor promotes centralized publishing by assigning the Document Administrator (gatekeeper) to manage the updates and distribution of the procedures library.

    5. Establish a Document Maintenance Process Up Front (and Stick to It)
    Everyone in your organization should know they are invited to suggest changes to procedures and should understand exactly what steps to take to do so. Tutor provides a set of procedures to help your company set up a healthy document control system. There are many document management products available to automate some of the document change and maintenance steps. Depending on the size of your organization, a simple document management system can reduce the effort it takes to track and distribute document changes and updates. Whether your company decides to store the written policies and procedures on a file server or in a database, the essential tasks for maintaining documents are the same, though some tasks are automated.

    6. Document Critical Activities Only
    The best way to keep your documentation simple is to reduce the number of process documents to a bare minimum and to include in those documents only as much detail as is absolutely necessary. The first step to reducing process documentation is to document only those activities that are deemed critical. Not all activities require documentation. In fact, some critical activities cannot and should not be standardized. Others may be sufficiently documented with an instruction or a checklist and may not require a procedure. A document should only be created when it enhances the performance of the employee performing the activity. If it does not help the employee, then there is no reason to maintain the document. Activities that represent little risk (such as project status), activities that cannot be defined in terms of specific tasks (such as product research), and activities that can be performed in a variety of ways (such as advertising) often do not require documentation. Sometimes an activity will evolve to the point where documentation is necessary. For example, an activity performed by a single employee may be straightforward and uncomplicated -- that is, until the activity is performed by multiple employees. Sometimes it is the interaction between co-workers that necessitates documentation; sometimes it is the complexity or the diversity of the activity.

    7. Document Actual Practices
    The only reason to maintain process documentation is to enhance the performance of the employee performing the activity. And documentation can only enhance performance if it reflects reality -- that is, current best practice. Documentation that reflects an unattainable ideal or outdated practices will end up on the shelf, unused and forgotten. Documenting actual practice means (1) auditing the activity to understand how the work is really performed, (2) identifying best practices with employees who are involved in the activity, (3) building consensus so that everyone agrees on a common method, and (4) recording that consensus.

    8. Minimize Documentation
    One way to keep it simple is to document at the highest level possible. That is, include in your documents only as much detail as is absolutely necessary. When writing a document, you should ask yourself: what is the purpose of this document? That is, what problem will it solve? By focusing on this question, you can target the critical information.
    • What questions are the end users likely to have?
    • What level of detail is required?
    • Is any of this information extraneous to the document's purpose?
    Short, concise documents are user friendly and easier to keep up to date.

    9. Support Continuous Improvement
    Employees who perform an activity are often in the best position to identify improvements to the process. In other words, continuous improvement is a natural byproduct of the work itself -- but only if the improvements are communicated to all employees who are involved in the process, and only if there is consensus among those employees. Traditionally, process documentation has been used to dictate performance, to limit employees' actions. In the Tutor environment, process documents are used to communicate improvements identified by employees. How does this work? The Tutor methodology requires a process document to reflect actual practice, so the owner of a document must routinely audit its content -- does the document match what the employees are doing? If it doesn't, the owner has the responsibility to evaluate the process, to build consensus among the employees, to identify "best practices," and to communicate these improvements via a document update. Continuous improvement can also be an outgrowth of corrective action -- but only if the solutions to problems are communicated effectively. The goal should be to solve a problem once and only once, which means not only identifying the solution, but ensuring that the solution becomes part of the process. The Tutor system provides the method through which improvements and solutions are documented and communicated to all affected employees in a cost-effective, timely manner; it ensures that improvements are not lost or confined to a single employee.

    10. Keep It Simple
    Process documents don't have to be complex and unfriendly. In fact, the simpler the format and organization, the more likely the documents will be used. And the simpler the method of maintenance, the more likely the documents will be kept up-to-date. Keep it simple by:
    • Minimizing the skills and training required
    • Following the established Tutor document format and layout
    • Avoiding technology just for technology's sake
    No other rule has as major an impact on the success of your internal documentation as this one: keep it simple.

    Learn More
    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • CodePlex Daily Summary for Tuesday, June 12, 2012

    CodePlex Daily Summary for Tuesday, June 12, 2012Popular ReleasesWeapsy - ASP.NET MVC CMS: 1.0 RC: - Upgrade to Entity Framework 4.3.1 - Added AutoMapper custom version (by nopCommerce Team) - Added missed model properties and localization resources of Plugin Definitions - Minor changes - Fixed some bugsSolution Extender for Microsoft Dynamics CRM 2011: Solution Extender (1.0.0.0): Initial releaseQTP FT Uninstaller: QTP FT Uninstaller v3: - KnowledgeInbox has made this tool open source - Converted to C# - Better scanning & cleaning mechanismWebSocket4Net: WebSocket4Net 0.7: Changes included in this release: updated ClientEngine added proper exception handling code added state support for callback added property AllowUnstrustedCertificate for JsonWebSocket added properties for sending ping automatically improved JsBridge fixed a uri compatibility issueXenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0 Beta: Catalog and Publication reviews and ratings Store language packs in data base Improve reporting system Improve Import/Export system A lot of WebAdmin app UI improvements Initial implementation of the WebForum app DB indexes Improve and simplify architecture Less abstractions Modernize architecture Improve, simplify and unify API Simplify and improve testing A lot of new unit tests Codebase refactoring and ReSharpering Utilize Castle Windsor Utilize NHibernate ORM ...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.55: Properly handle IE extension to CSS3 grammar that allows for multiple parameters to functional pseudo-class selectors. add new switch -braces:(new|same) that affects where opening braces are placed in multi-line output. The default, "new" puts them on their own new line; "same" outputs them at the end of the previous line. add new optional values to the -inline switch: -inline:(force|noforce), which can be combined with the existing boolean value via comma-separators; value "force" (which...Microsoft Media Platform: Player Framework: MMP Player Framework 2.7 (Silverlight and WP7): Additional DownloadsSMFv2.7 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simplying editing a configuration file - no need to co...Liberty: v3.2.1.0 Release 10th June 2012: Change Log -Added -Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. -Reach Mass biped max health and shield changer -Fixed -H3/ODST Fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2) -Reach Made some tag names clearer and more consistent between m...SHA-1 Hash Checker: SHA-1 Hash Checker (for Windows): Fixed major bugs. Removed false negatives.AutoUpdaterdotNET: AutoUpdater.NET 1.0: Everything seems perfect if you find any problem you can report to http://www.rbsoft.org/contact.htmlMedia Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! 
Major effort has been for fixing trailer downloads, plus a little bit of work for episode guide tag in TV show NFOs.Microsoft SQL Server Product Samples: Database: AdventureWorks Sample Reports 2008 R2: AdventureWorks Sample Reports 2008 R2.zip contains several reports include Sales Reason Comparisons SQL2008R2.rdl which uses Adventure Works DW 2008R2 as a data source reference. For more information, go to Sales Reason Comparisons report.Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview Fix - Fixed Metro build error caused by an anonymous type Fix - Fixed ItemConverter not being used when serializing dictionaries Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandlingLINQ Extensions Library: 1.0.3.0: New to release 1.0.3.0:Combinatronics: Combinations (unique) Combinations (with repetition) Permutations (unique) Permutations (with repetition) Convert jagged arrays to fixed multidimensional arrays Convert fixed multidimensional arrays to jagged arrays ElementAtMax ElementAtMin ElementAtAverage New set of array extension (1.0.2.8):Rotate Flip Resize (maintaing data) Split Fuse Replace Append and Prepend extensions (1.0.2.7) IndexOf extensions (1.0.2.7) Ne...????????API for .Net SDK: SDK for .Net ??? Release 1: 6?12????? ??? - ?????.net2.0?.net4.0????Winform?Web???????。 NET2 - .net 2.0?Winform?Web???? NET4 - .net 4.0?Winform?Web???? Libs - SDK ???,??2.0?4.0?SDK??? 6?11????? ??? - ?Entities???????????EntityBase,???ToString()???????json???,??????4.0???????。2.0?3.5???! ??? - Request????????AccessToken??????source=appkey?????。????,????????,???????public_timeline?????????。 ?? - ???ClinetLogin??????????RefreshToken???????false???。 ?? - ???RepostTimeline????Statuses???null???。 ?? - Utility?Bui...Audio Pitch & Shift: Audio Pitch And Shift 4.5.0: Added Instruments tab for modules Open folder content feature Some bug fixesPython Tools for Visual Studio: 1.5 Beta 1: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: • Supports CPython, IronPython, Jython and PyPy • Python editor with advanced member, signature intellisense and refactoring • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging •...Circuit Diagram: Circuit Diagram 2.0 Beta 1: New in this release: Automatically flip components when placing Delete components using keyboard delete key Resize document Document properties window Print document Recent files list Confirm when exiting with unsaved changes Thumbnail previews in Windows Explorer for CDDX files Show shortcut keys in toolbox Highlight selected item in toolbox Zoom using mouse scroll wheel while holding down ctrl key Plugin support for: Custom export formats Custom import formats Open...Umbraco CMS: Umbraco CMS 5.2 Beta: The future of Umbracov5 represents the future architecture of Umbraco, so please be aware that while it's technically superior to v4 it's not yet on a par feature or performance-wise. What's new? For full details see our http://progress.umbraco.org task tracking page showing all items complete for 5.2. 
In a nutshellPackage Builder Starter Kits Dynamic Extension Methods Querying / IsHelpers Friendly alt template URLs Localization Various bug fixes / performance enhancements Gett...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0.5: JayData is a unified data access library for JavaScript developers to query and update data from different sources like WebSQL, IndexedDB, OData, Facebook or YQL. See it in action in this 6 minutes video New features in JayData 1.0.5http://jaydata.org/blog/jaydata-1.0.5-is-here-with-authentication-support-and-more http://jaydata.org/blog/release-notes Sencha Touch 2 module (read-only)This module can be used to bind data retrieved by JayData to Sencha Touch 2 generated user interface. (exam...New ProjectsAFS.Aspnet.Collab.Site: alles umfassender testAFS.FallBackService: Afs.ultimate.fallbackserviceAIMS: This project aims at creating an mvc-based application for NGO portfolio management.Card Douglas: Project for card discountCrashReporter.NET: Send crash reports for c# or vb.net applications directly to your mail's inbox Daphne Translator - localizing tool for Daphne: This project was created as a helper project for localization Daphne checkers software [url:http://daphne.codeplex.com]. It can be used by other developers to do translations of resources. Daphne Translator is done to be extensible by any resource type of localized texts. ExcelFileEditor: This project is a free excel file editor class library . ( Support " .xls and .xlsx " )Finegypt: Fin EgyptHuish: Huish MarineINNO DNN Open Registration: The INNO DNN Registration module collection allows for customized DotNetNuke registration, payment processing integration, profile management, transaction history and a powerful contact us form. It is built to be responsive to allow registration on mobile devices such as Android phones and tablets, iPad, iPhone and more. Key focus is on extensibility including options to hook your own custom DotNetNuke modules into it for consistent, easy registration of users.jCardView: jCardViewLinkCard: D? án VDOS - TopLinkLINQ Expression Projection: Project Description LINQ Expression Projection library provides tool that enable using lambda expressions in LINQ projections (to anonymous and predefined types) even if the lambda expression is not defined in the query but rather is stored in a variable or retrieved via a function or property. Goal This library is a tool for development of applications following the DRY principle. It is used to facilitate reuse LINQ expressions.MD5 Message-Digest Algorithm for Microsoft® .NET Micro Framework: MD5 Message-Digest Algorithm for Microsoft® .NET Micro FrameworkMoney Class for C#. Fast, light and flexible: This Money class gives you the ability to work with money of multiple currencies as if it were built-in types. 
It looks and behaves like a decimal with extra features, but can perform much faster.NCT-Medical Record System: I've created this project for learning purposes onlyPaiPaiAPI: ??SDKParadise Shop: sâsâsaPingyThingy: PingyThingy is a basic application that will allow you to ping multiple hosts.SCK - Spine Model Creation Kit: The SCK is a software package for generating FE spine models from a set of anatomical parameters.ScoreMS(????????): The summary is required.SiteCoreDynamicForm: This project's aim is to provide a simple solution in SiteCore for creating dynamic formsSmall Media Library (for Games and other 3D rendering applications): A .NET library that utilises DirectShow to render video into a block of memory that can be easily written to surfaces managed by rendering APIs such as DirectX and OpenGL, or used in any other way the host application requires.SportStore: The project has only training purpose. It is an ecommerce web application to sell sport products on line.Thesis Management System: This is a Thesis Management System(TMIS),Based on a three-tier structure,This is a open source project.There are many question and bugs ,I will repair by other time.This PowerShell script will output details of your default organization.: This is a PowerShell script. You can save this PowerShell file 'DefaultCRMOrganization.ps1' to your server. Microsoft Dynamics CRM administrators and professionals can have a quick browse through the details for their default organization by simply running this file. Note: Sometimes when you run this file, for first time, it might open in Notepad. In order to open this file with a PowerShell application, associate this file (with extension ps1) with C:\Windows\System32\WindowsPowerShell...Tmib Dreampad: Dream Pad is a smart text editor by Tmib Inc. Tags - Notepad , Texteditor , Note Pad , Dream Pad , Wordpad , Word Pad, TmibUAIFAI: Trabalho de EDUmbraco Dynamic Form: This project's aim is to provide a simple solution for creating dynamic forms in UmbracoVDOS - Top Link: H? th?ng tích di?m LinkCardVoteChecker: Vote Checker allows a canvasser to identify a valid Ballot by checking if the corresponding barcode is known to the election committee.WebRozklady: sWindows Application Monitoring and Administration Tool: This projects aims at developing client server architecture based windows application monitoring, administring and deployment. Also Most of the real life applications today are either windows services or Websites. WINADM suite aims at monitoring these and providing the user a graphical interface for interaction with these services.WPF 3 D: How to study wpf

    Read the article

  • asp.net - csharp - jquery - looking for a better usage solution

    - by LostLord
    Hi, my dear friends. I have a little problem using jQuery (I really do not know jQuery, but I am forced to use it).

    I am using VS 2008 with an ASP.NET web app in C#. I am also using Telerik controls in my pages, and SqlDataSources (connecting to stored procedures). My pages are based on master and content pages, and in the content pages I have MultiViews.

    In one of the views (inside one of those MultiViews) I made two RadComboBoxes for a country/city requirement -- cascading dropdowns with parent and child combo boxes. I used the old way to do this: I used an UpdatePanel, and in the SelectedIndexChanged event of the parent RadComboBox (Country) I wrote this code:

        protected void RadcomboboxCountry_SelectedIndexChanged(object o, RadComboBoxSelectedIndexChangedEventArgs e)
        {
            hfSelectedCo_ID.Value = RadcomboboxCountry.SelectedValue;
            RadcomboboxCity.Items.Clear();
            RadcomboboxCity.Items.Add(new RadComboBoxItem(" ...", "5"));
            RadcomboboxCity.DataBind();
            RadcomboboxCity.SelectedIndex = 0;
        }

    My child RadComboBox can be filled by the code above; let me tell you how. The child SqlDataSource has a stored procedure with a parameter, and I fill that parameter with this line -- hfSelectedCo_ID.Value = RadcbCoNameInInsert.SelectedValue -- where RadcbCoNameInInsert.SelectedValue is the country ID.

    After doing that, the SelectedIndexChanged event of the parent RadComboBox (Country) would not fire, so I was forced to set the AutoPostBack property to true. After that everything was OK, until someone asked me whether I could control focus and keydown for my RadComboBoxes: when you press the Enter key on the parent combo box [country], the child combo box gets focus; and when you press the Up arrow key on the child RadComboBox [city], the parent combo box [country] gets focus (for users who do not want to use the mouse to enter information and choose items).

    I told him this is a web app, not WinForms, and we cannot do that. I googled it and found that jQuery was the only way to do it, so I started using jQuery. I wrote this code for both of them:

        <script src="../JQuery/jquery-1.4.1.js" language="javascript" type="text/javascript"></script>
        <script type="text/javascript">
            $(function() {
                $('input[id$=RadcomboboxCountry_Input]').focus();
                $('input[id$=RadcomboboxCountry_Input]').select();
                $('input[id$=RadcomboboxCountry_Input]').bind('keyup', function(e) {
                    var code = (e.keyCode ? e.keyCode : e.which);
                    if (code == 13) { // Enter key
                        $('input[id$=RadcomboboxCity_Input]').focus();
                        $('input[id$=RadcomboboxCity_Input]').select();
                    }
                });
                $('input[id$=RadcomboboxCity_Input]').bind('keyup', function(e) {
                    var code = (e.keyCode ? e.keyCode : e.which);
                    if (code == 38) { // Up arrow key
                        $('input[id$=RadcomboboxCountry_Input]').focus();
                        $('input[id$=RadcomboboxCountry_Input]').select();
                    }
                });
            });
        </script>

    This jQuery code worked, BUT AutoPostBack=true on the parent RadComboBox became a problem: when the SelectedIndexChanged event of the parent RadComboBox fires, the Telerik skins run afterwards, I lose the parent combo box focus, and we have to use the mouse -- which we don't want.

    To fix this problem I decided to set AutoPostBack on the parent combo box to false, convert the handler to a public non-static method without parameters, and call it with jQuery (I used the OnClientChanged property of the parent combo box, like OnClientChanged = "MyMethodForParentCB_InJquery();", instead of the SelectedIndexChanged event):

        public void MyMethodForParentCB_InCodeBehind()
        {
            hfSelectedCo_ID.Value = RadcomboboxCountry.SelectedValue;
            RadcomboboxCity.Items.Clear();
            RadcomboboxCity.Items.Add(new RadComboBoxItem(" ...", "5"));
            RadcomboboxCity.DataBind();
            RadcomboboxCity.SelectedIndex = 0;
        }

    To do that I read the manual below and followed it step by step:

        http://www.ajaxprojects.com/ajax/tutorialdetails.php?itemid=732

    But this manual is about static methods, and that is my new problem. When I use a static method like:

        public static void MyMethodForParentCB_InCodeBehind()
        {
            hfSelectedCo_ID.Value = RadcomboboxCountry.SelectedValue;
            RadcomboboxCity.Items.Clear();
            RadcomboboxCity.Items.Add(new RadComboBoxItem(" ...", "5"));
            RadcomboboxCity.DataBind();
            RadcomboboxCity.SelectedIndex = 0;
        }

    I receive errors because the static method cannot recognize my controls and hidden field. One of those errors looks like this:

        Error 2  An object reference is required for the non-static field, method, or property 'Darman.SuperAdmin.Users.hfSelectedCo_ID'  C:\Javad\Copy of Darman 6\Darman\SuperAdmin\Users.aspx.cs  231  13  Darman

    Any ideas? Is there any way to call non-static methods with jQuery? (I know we cannot do that directly, but is there another way to solve my problem?)
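    For readers hitting the same compiler error: it occurs because a static method has no page instance, so it cannot touch instance members such as hfSelectedCo_ID or RadcomboboxCity. A commonly suggested workaround (a minimal sketch, not the poster's actual data layer -- the GetCitiesForCountry name, the CityDto type, and the CityRepository call are all hypothetical) is to let the static method only return data and have the client-side script rebuild the child combo box from the result:

        // Static page method: receives the selected country ID from the client and
        // returns plain data. It deliberately references no page controls, which is
        // exactly what the "object reference is required" error is complaining about.
        [System.Web.Services.WebMethod]
        public static System.Collections.Generic.List<CityDto> GetCitiesForCountry(string countryId)
        {
            // Hypothetical data access; in the poster's app this would call the same
            // stored procedure that the child SqlDataSource uses today.
            return CityRepository.GetCitiesByCountry(countryId);
        }

        // Simple transfer object for one city entry (hypothetical).
        public class CityDto
        {
            public string Id { get; set; }
            public string Name { get; set; }
        }

    The jQuery side would then call the method (for example via $.ajax posting to Users.aspx/GetCitiesForCountry) and add the returned items to the city combo box on the client, so no postback occurs and focus is never lost. This is only one possible approach; Telerik's own client-side load-on-demand features may be a better fit.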

    Read the article

  • Windows Azure: Major Updates for Mobile Backend Development

    - by ScottGu
    This week we released some great updates to Windows Azure that make it significantly easier to develop mobile applications that use the cloud. These new capabilities include:

    • Mobile Services: Custom API support
    • Mobile Services: Git source control support
    • Mobile Services: Node.js NPM module support
    • Mobile Services: a .NET API via NuGet
    • Mobile Services and Web Sites: free 20MB SQL Database option
    • Mobile Notification Hubs: Android broadcast push notification support

    All of these improvements are now available to use immediately (note: some are still in preview). Below are more details about them.

    Mobile Services: Custom APIs, Git Source Control, and NuGet

    Windows Azure Mobile Services provides the ability to easily stand up a mobile backend that can be used to support your Windows 8, Windows Phone, iOS, Android and HTML5 client applications. Starting with the first preview we supported the ability to easily extend your data backend logic with server-side scripting that executes as part of client-side CRUD operations against your cloud back-end data tables.

    With today's update we are extending this support even further and introducing the ability for you to also create and expose Custom APIs from your Mobile Service backend, and easily publish them to your mobile clients without having to associate them with a data table. This capability enables a whole set of new scenarios, including the ability to work with data sources other than SQL Databases (for example, Table Services or MongoDB), broker calls to 3rd-party APIs, integrate with Windows Azure Queues or Service Bus, work with custom non-JSON payloads (e.g. Windows periodic notifications), route client requests to services back on-premises (e.g. with the new Windows Azure BizTalk Services), or simply implement functionality that doesn't correspond to a database operation. The Custom APIs can be written in server-side JavaScript (using Node.js) and can use Node's NPM packages. We will also be adding support for Custom APIs written using .NET in the future.

    Creating a Custom API

    Adding a Custom API to an existing Mobile Service is super easy. Using the Windows Azure Management Portal you can now simply click the new "API" tab within your Mobile Service, and then click the "Create a Custom API" button to create a new Custom API within it. Give the API whatever name you want to expose, and then choose the security permissions you'd like to apply to the HTTP methods you expose within it. You can easily lock down the HTTP verbs of your Custom API so they are available to anyone, only to those who have a valid application key, only to authenticated users, or only to administrators. Mobile Services will then enforce these permissions without you having to write any code.

    When you click the OK button you'll see the new API show up in the API list. Selecting it will enable you to edit the default script, which contains some placeholder functionality. Today's release enables Custom APIs to be written using Node.js (we will support writing Custom APIs in .NET as well in a future release), and the Custom API programming model follows the Node.js convention for modules, which is to export functions to handle HTTP requests. The default script exposes functionality for an HTTP POST request. To support a GET, simply change the export statement accordingly.
    The original post also includes a sample script that reads and returns data from Windows Azure Table Storage using the Azure Node API. After saving the changes, you can call this API from any Mobile Service client application (including Windows 8, Windows Phone, iOS, Android, or HTML5 with CORS). Below is the code for how you could invoke the API asynchronously from a Windows Store application using .NET and the new InvokeApiAsync method, and data-bind the results to a control within your XAML:

        private async void RefreshTodoItems()
        {
            var results = await App.MobileService.InvokeApiAsync<List<TodoItem>>("todos", HttpMethod.Get, parameters: null);
            ListItems.ItemsSource = new ObservableCollection<TodoItem>(results);
        }

    Integrating authentication and authorization with Custom APIs is really easy with Mobile Services. Just like data requests, Custom API requests enjoy the same built-in authentication and authorization support of Mobile Services (including integration with Microsoft ID, Google, Facebook and Twitter authentication providers), and you can easily integrate your Custom API code with other Mobile Service capabilities like push notifications, logging, SQL, etc. Check out our new tutorials to learn more about how to use the new Custom API support, and start adding Custom APIs to your app today.

    Mobile Services: Git Source Control Support

    Today's Mobile Services update also enables source control integration with Git. The new source control support provides a Git repository as part of your Mobile Service, and it includes all of your existing Mobile Service scripts and permissions. You can clone that Git repository on your local machine, make changes to any of your scripts, and then easily deploy the mobile service to production using Git. This enables a really great developer workflow that works on any developer machine (Windows, Mac and Linux).

    To use the new support, navigate to the dashboard for your mobile service and select the "Set up source control" link. If this is your first time enabling Git within Windows Azure, you will be prompted to enter the credentials you want to use to access the repository. Once you configure this, you can switch to the Configure tab of your Mobile Service, where you will see a Git URL for your repository. You can use this URL to clone the repository locally from your favorite command line:

        > git clone https://scottgutodo.scm.azure-mobile.net/ScottGuToDo.git

    The repository contains a service folder with several subfolders. Custom API scripts and associated permissions appear under the api folder as .js and .json files respectively (the .json files persist a JSON representation of the security settings for your endpoints). Similarly, table scripts and table permissions appear as .js and .json files, but since table scripts are separate per CRUD operation, they follow the naming convention <tablename>.<operationname>.js. Finally, scheduled job scripts appear in the scheduler folder, and the shared folder is provided as a convenient location for you to store code shared by multiple scripts and a few miscellaneous things such as the APNS feedback script.
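    As an aside for readers following along with the InvokeApiAsync snippet shown earlier in this section: the client needs a TodoItem type that the article does not show. A minimal sketch might look like the following, where the property names are assumptions and should simply match whatever JSON the backend returns:

        // Hypothetical client-side DTO for the "todos" custom API result.
        public class TodoItem
        {
            public string Id { get; set; }
            public string Text { get; set; }
            public bool Complete { get; set; }
        }

    With a type like this in place, the deserialized results bind directly to the ListItems control shown in the earlier snippet.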
    Let's modify the table script todos.js so that we have slightly better error handling when an exception occurs while querying our Table service:

        // todos.js
        tableService.queryEntities(query, function(error, todoItems) {
            if (error) {
                console.error("Error querying table: " + error);
                response.send(500);
            } else {
                response.send(200, todoItems);
            }
        });

    Save these changes, and back at the command line commit the changes and push them to the Mobile Service:

        > git add .
        > git commit -m "better error handling in todos.js"
        > git push

    Once deployment of the changes is complete, they take effect immediately, and you will also see the changes reflected in the portal. With the new source control feature, we're making it really easy for you to edit your mobile service locally and push changes in an atomic fashion without sacrificing ease of use in the Windows Azure Portal.

    Mobile Services: NPM Module Support

    The new Mobile Services source control support also allows you to add any Node.js module you need in your scripts beyond the fixed set provided by Mobile Services. For example, you can easily switch to using Mongo instead of Windows Azure Table storage in our example above. Set up MongoDB by either purchasing a MongoLab subscription (which provides MongoDB as a service) via the Windows Azure Store or setting it up yourself on a Virtual Machine (either Windows or Linux). Then go to the service folder of your local Git repository and run the following command:

        > npm install mongoose

    This will add the Mongoose module to your Mobile Service scripts. After that you can use and reference the Mongoose module in your Custom API scripts to access your Mongo database:

        var mongoose = require('mongoose');
        var schema = mongoose.Schema({ text: String, completed: Boolean });

        exports.get = function (request, response) {
            mongoose.connect('<your Mongo connection string>');
            TodoItemModel = mongoose.model('todoitem', schema);
            TodoItemModel.find(function (err, items) {
                if (err) {
                    console.log('error: ' + err);
                    return response.send(500);
                }
                response.send(200, items);
            });
        };

    Don't forget to push your changes to your mobile service once you are done:

        > git add .
        > git commit -m "Switched to use Mongo Labs"
        > git push

    Now our Mobile Service app is using MongoDB! Note: with today's update, usage of custom Node.js modules is limited to Custom API scripts only. We will enable it in all scripts (including data scripts and custom CRON tasks) shortly.

    New Mobile Services NuGet Package, Including .NET 4.5 Support

    A few months ago we announced a new pre-release version of the Mobile Services client SDK based on portable class libraries (PCL). Today, we are excited to announce that this library is now a stable .NET client SDK for Mobile Services and is no longer a pre-release package. Today's update includes full support for Windows Store, Windows Phone 7.x, and .NET 4.5, which allows developers to use Mobile Services from ASP.NET or WPF applications. You can install and use this package today via NuGet.

    Mobile Services and Web Sites: Free 20MB Database

    Starting today, every customer of Windows Azure gets one free 20MB SQL database to use for 12 months (for both dev/test and production) with Web Sites and Mobile Services.
    When creating a Mobile Service or a Web Site, simply choose the new "Create a new Free 20MB database" option to take advantage of it. You can use this free SQL database together with the 10 free Web Sites and 10 free Mobile Services you get with your Windows Azure subscription, or from any other Windows Azure VM or Cloud Service.

    Notification Hubs: Android Broadcast Push Notification Support

    Earlier this year, we introduced a new capability in Windows Azure for sending broadcast push notifications at high scale: Notification Hubs. In the initial preview of Notification Hubs you could use this support with both iOS and Windows devices. Today we're excited to announce new Notification Hubs support for sending push notifications to Android devices as well.

    Push notifications are a vital component of mobile applications. They are critical not only in consumer apps, where they are used to increase app engagement and usage, but also in enterprise apps, where up-to-date information increases employee responsiveness to business events. You can use Notification Hubs to send push notifications to devices from any type of app (a Mobile Service, Web Site, Cloud Service or Virtual Machine). Notification Hubs provide you with the following capabilities:

    • Cross-platform push notification support. Notification Hubs provide a common API to send push notifications to iOS, Android, and Windows Store devices at once. Your app can send notifications in platform-specific formats or in a platform-independent way.
    • Efficient multicast. Notification Hubs are optimized to enable push notification broadcast to thousands or millions of devices with low latency. Your server back-end can fire one message into a Notification Hub, and millions of push notifications can automatically be delivered to your users. Devices and apps can specify a number of per-user tags when registering with a Notification Hub. These tags do not need to be pre-provisioned or disposed of, and they provide a very easy way to send filtered notifications to any number of users/devices with a single API call.
    • Extreme scale. Notification Hubs enable you to reach millions of devices without having to re-architect or shard your application. The pub/sub routing mechanism allows you to broadcast notifications in a super-efficient way, which makes it incredibly easy to route and deliver notification messages to millions of users without having to build your own routing infrastructure.
    • Usable from any backend app. Notification Hubs can be easily integrated into any back-end server app, whether it is a Mobile Service, a Web Site, a Cloud Service or an IaaS VM.

    It is easy to configure Notification Hubs to send push notifications to Android.
    Create a new Notification Hub within the Windows Azure Management Portal (New -> App Services -> Service Bus -> Notification Hub). Then register for Google Cloud Messaging using https://code.google.com/apis/console and obtain your API key, and simply paste that key on the Configure tab of your Notification Hub management page under the Google Cloud Messaging settings.

    Then just add code to the onCreate method of your Android app's MainActivity class to register the device with Notification Hubs:

        gcm = GoogleCloudMessaging.getInstance(this);
        String connectionString = "<your listen access connection string>";
        hub = new NotificationHub("<your notification hub name>", connectionString, this);
        String regid = gcm.register(SENDER_ID);
        hub.register(regid, "myTag");

    Now you can broadcast notifications from your .NET backend (or Node, Java, or PHP) to any Windows Store, Android, or iOS device registered for the "myTag" tag via a single API call (you can literally broadcast messages to millions of registered clients with just one API call):

        var hubClient = NotificationHubClient.CreateClientFromConnectionString(
            "<your connection string with full access>",
            "<your notification hub name>");
        hubClient.SendGcmNativeNotification("{ 'data' : { 'msg' : 'Hello from Windows Azure!' } }", "myTag");

    Notification Hubs provide an extremely scalable, cross-platform push notification infrastructure that enables you to efficiently route push notification messages to millions of mobile users and devices. It will make your push notification logic significantly simpler and more scalable, and allow you to build even better apps with it. Learn more about Notification Hubs on MSDN.

    Summary

    The above features are now live and available to start using immediately (note: some of the services are still in preview). If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with Windows Azure.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • CodePlex Daily Summary for Monday, June 11, 2012

    CodePlex Daily Summary for Monday, June 11, 2012Popular ReleasesCasanova Language: Casanova IDE alpha release: This is the first release for the Casanova IDE. It features the major capabilities of the framework: support for rules, scripts, input management, and basic content management. The IDE is still under major development. Planned features include: multiplayer support 3D rendering syntax highlighting basic Intellisense slightly improved syntax for rules and scripts audio in-game menus Also, do not forget to download and install OpenAL: http://connect.creativelabs.com/openal/Download...Liberty: v3.2.1.0 Release 10th June 2012: Change Log -Added -Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. -Reach Mass biped max health and shield changer -Fixed -H3/ODST Fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2) -Reach Made some tag names clearer and more consistent between m...SVNUG.CodePlex: Cloud Development with Windows Azure: This release contains the slides for the Cloud Development with Windows Azure presentation.WCF Data Service (OData) Regression & Load Testing Tool: Latest: This is latest stable releaseSHA-1 Hash Checker: SHA-1 Hash Checker (for Windows): Fixed major bugs. Removed false negatives.AutoUpdaterdotNET: AutoUpdater.NET 1.0: Everything seems perfect if you find any problem you can report to http://www.rbsoft.org/contact.htmlMedia Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! Major effort has been for fixing trailer downloads, plus a little bit of work for episode guide tag in TV show NFOs.Microsoft SQL Server Product Samples: Database: AdventureWorks Sample Reports 2008 R2: AdventureWorks Sample Reports 2008 R2.zip contains several reports include Sales Reason Comparisons SQL2008R2.rdl which uses Adventure Works DW 2008R2 as a data source reference. For more information, go to Sales Reason Comparisons report.Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview Fix - Fixed Metro build error caused by an anonymous type Fix - Fixed ItemConverter not being used when serializing dictionaries Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandlingLINQ Extensions Library: 1.0.3.0: New to release 1.0.3.0:Combinatronics: Combinations (unique) Combinations (with repetition) Permutations (unique) Permutations (with repetition) Convert jagged arrays to fixed multidimensional arrays Convert fixed multidimensional arrays to jagged arrays ElementAtMax ElementAtMin ElementAtAverage New set of array extension (1.0.2.8):Rotate Flip Resize (maintaing data) Split Fuse Replace Append and Prepend extensions (1.0.2.7) IndexOf extensions (1.0.2.7) Ne...????????API for .Net SDK: SDK for .Net ??? Release 1: 6?11????? ??? - ?Entities???????????EntityBase,???ToString()???????json???,??????4.0???????。2.0?3.5???! ??? - Request????????AccessToken??????source=appkey?????。????,????????,???????public_timeline?????????。 ?? - ???ClinetLogin??????????RefreshToken???????false???。 ?? - ???RepostTimeline????Statuses???null???。 ?? - Utility?BuildPostData?,?WeiboParameter??value?NULL????????。 ??????? ??? 
- ??.Net 2.0/3.5/4.0????。??????VS2010??????????。VS2008????????,??????????。 ??? - ??.Net 4.0???SDK...Audio Pitch & Shift: Audio Pitch And Shift 4.5.0: Added Instruments tab for modules Open folder content feature Some bug fixesPython Tools for Visual Studio: 1.5 Beta 1: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: • Supports CPython, IronPython, Jython and PyPy • Python editor with advanced member, signature intellisense and refactoring • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging •...Circuit Diagram: Circuit Diagram 2.0 Beta 1: New in this release: Automatically flip components when placing Delete components using keyboard delete key Resize document Document properties window Print document Recent files list Confirm when exiting with unsaved changes Thumbnail previews in Windows Explorer for CDDX files Show shortcut keys in toolbox Highlight selected item in toolbox Zoom using mouse scroll wheel while holding down ctrl key Plugin support for: Custom export formats Custom import formats Open...Umbraco CMS: Umbraco CMS 5.2 Beta: The future of Umbracov5 represents the future architecture of Umbraco, so please be aware that while it's technically superior to v4 it's not yet on a par feature or performance-wise. What's new? For full details see our http://progress.umbraco.org task tracking page showing all items complete for 5.2. In a nutshellPackage Builder Starter Kits Dynamic Extension Methods Querying / IsHelpers Friendly alt template URLs Localization Various bug fixes / performance enhancements Gett...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0.5: JayData is a unified data access library for JavaScript developers to query and update data from different sources like WebSQL, IndexedDB, OData, Facebook or YQL. See it in action in this 6 minutes video New features in JayData 1.0.5http://jaydata.org/blog/jaydata-1.0.5-is-here-with-authentication-support-and-more http://jaydata.org/blog/release-notes Sencha Touch 2 module (read-only)This module can be used to bind data retrieved by JayData to Sencha Touch 2 generated user interface. (exam...32feet.NET: 3.5: This version changes the 32feet.NET library (both desktop and NETCF) to use .NET Framework version 3.5. Previously we compiled for .NET v2.0. There are no code changes from our version 3.4. See the 3.4 release for more information. Changes due to compiling for .NET 3.5Applications should be changed to use NET/NETCF v3.5. Removal of class InTheHand.Net.Bluetooth.AsyncCompletedEventArgs, which we provided on NETCF. We now just use the standard .NET System.ComponentModel.AsyncCompletedEvent...Application Architecture Guidelines: Application Architecture Guidelines 3.0.7: 3.0.7Jolt Environment: Jolt v2 Stable: Many new features. 
Follow development here for more information: http://www.rune-server.org/runescape-development/rs-503-client-server/projects/298763-jolt-environment-v2.html Setup instructions in downloadSharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.5): New fetures:Multilingual Support Max users property in Standings Web Part Games time zone change (UTC +1) bug fix - Version 1.4 locking problem http://euro2012.codeplex.com/discussions/358262 bug fix - Field Title not found (v.1.3) German SP http://euro2012.codeplex.com/discussions/358189#post844228 Bug fix - Access is denied.for users with contribute rights Bug fix - Installing on non-English version of SharePoint Bug fix - Title Rules Installing SharePoint Euro 2012 PredictorSharePoint E...New Projects2D map editor for Game Tool Development class: This project contains a basic 2D map editor, which can read a tileset (or chipset) to create a custom map. The user can then load and save maps previously created (file format .map). ArtifexCore: ArtifexCore - a compilation of unique, original, and revamped RunUO/OrbSA projects.ASP.NET MVC 4 - Sports Store using Visual Studio 2011 Beta: This is a the output of "Sports Store" exercise in Pro ASP.NET MVC 3 Framework by Adam Freeman and Steven Sanderson. Instead of MVC 3 as recommended in the book, I have used MVC 4.clabinet: clabinet is a cloud based file cabinetCopy File Location - Explorer Shortcut: This Explorer Extension adds a shortcut menu to all files and folders to copy the full location to the Clipboard.CSSSEVER: ???????????DoodleLabyrinthLogic: A starter '100 rogues' type logic library for rogue-like buildersEasyFlash Cart Builder: EasyFlash Cart Builder is a tool for linking files together into an EasyFlash cartridge. Generic enumeration: Provides string representation of C# enumerations.Hacker Typer for WP7: This is a hackertyper.net Windows Phone based application HTML Batch Logger: This is a simple Log class that can be used in any .NET C# Console Application. You can use it to log into an HTML file, console window, or database. kinect ????????????: ???、??????????????????????、????????????????。???、??????????????????。?????????、????????????????、??????????????????。laskjdfqewr131231: example omes projectnlite web libraray: Lite Web Framework,????Page,??Ndf???WebApiNMemory - an in-memory relational database for .NET: NMemory is a lightweight in-memory relational database engine that can be hosted by .NET applications. It supports traditional database features like indexes, foreign key relations, transaction handling and isolation.NoManaComponets: A attempt at a reusable library for common tasks in XNA like avatar management, text rendering and shading. Currently abandoned.NP: network client appObject Viewer: A UserControl that can display any object.PowerPoint Graph Creator: The main purpose of this PowerPoint add-in is to help people who sometimes need to draw graphs into the PPT and then for example add some animations or want to load to existing graphs into PPT but don't have a lot of time to re-draw every thing.Project Blue Tigris: This Project aims at enabling users to control and interact with Windows without the need to touch the keyboard or the mouse, through a new user interface using gestures and voice commands. This project will utilize webcam and microphones connected to the computer when installed. This is our vision. 
It would be great if you join us.QTP FT Uninstaller: KnowledgeInbox QTP/FT Uninstaller is a tool designed for uninstalling HP's QuickTest Professional or Functional Testing products in one click. The tool should only be used in cases where QTP was working fine earlier and after some update or installation it stopped working. The tool scans the system registry for all files associated with QTP and deletes them. Note: Using this uninstaller may impact tools like HP Sprinter, Quality Center. You should re-install those tools also after using t...Radius Client for Microsoft® .NET Micro Framework: Client for Remote Authentication Dial In User Service (RADIUS)Regression Suite: RegressionSuite is a software test suite that incorporates measurement of the startup lag, measurement of accurate execution times, generating execution statistics, customized input distributions, and processable regression specific details as part of the regular unit tests. Essentially, RegressionSuite provides the frame-work around which the individual unit regressors are invoked (and details and statistics collected). Unit regressors are grouped into named regressor sets (or modules), a...Sern: Nothing yet.SharePoint List Number to Text Custom Column: Number to text custom column in SharePoint is a project that is useful in any financial SharePoint implementation to automatically generate the corresponding number representation in textual format, this feature is designed to be easy extended to any language and it’s initially in Arabic and English languages.you can find all related source code in download section SharpDND: A attempt to create a open source DLL for the openSRD. Designed with customization in mind if completed the tools included could build out pathfinder and other RPG rulesets.SmartSense: SmartSense is a wearable holographic gesture and voice controlled intelligent system. Human-computer interaction (HCI) is a heavily researched area in today’s technology driven world. Most people experience HCI using a mouse and keyboard as input devices. We wanted a more natural way to interact with the computer that also allows instant access to information. We are developing a real-time system that is always on and available to collect data at any moment but also is accessible “on-the-fly...Sumzlib: sumzlib is a set of class library that provides useful algorithms in static method. this project is written in vb.netWCF Data Service (OData) Regression & Load Testing Tool: This is a tool that is especially being developed to regress OData Services. Current release only support few test like ($select, Top, etc ), more test will be added over the time. It is a Multithreaded Regression Test Tool that can generate result in Excel format including diagnostic error data and performance data like turnaround time as well. It can be used for bulk testing of several services at the same time It can be quite useful to for those who are developing several service...X.Web.Microdata: This project is intended to represent an mitcrodata entitie in the .NET Framework. (Particularly in ASP.NET) The X.Web.Microdata represent the http://www.data-vocabulary.org/ (and Google) microdata notation And X.Web.Microdata.SchemaOrg represent http://schema.org/ microdata notatio

    Read the article

  • Thou shalt not put code on a pedestal - Code is a tool, no more, no less

    - by Ralf Westphal
    “Write great code and everything else becomes easier” is what Paul Pagel believes in. That´s his version of an adage by Brian Marick he cites: “treat code as an end, not just a means.” And he concludes: “My post-Agile world is software craftsmanship.” I wonder, if that´s really the way to go. Will “simply” writing great code lead the software industry into the light? He´s alluding to the philosopher Kant who proposed, a human beings should never be treated as a means, but always as an end. But should we transfer this ethical statement into the world of software? I doubt it.   Reason #1: Human beings are categorially different from code. They are autonomous entities who need to find a way of living happily together. To Kant it seemed this goal could only be reached if nobody (ab)used a human being for his/her purposes. Because using a human being, i.e. treating it as a means, would contradict the fundamental autonomy and freedom of human beings. People should hold up a symmetric view of their relationships: Since nobody wants to be (ab)used, nobody should (ab)use anybody else. If you want to be treated decently, with respect, in accordance with your own free will - which means as an end - then do the same to other people. Code is dead, it´s a product, it´s a tool for people to reach their goals. No company spends any money on code other than to save money or earn money in the long run. Code is not a puppy. Enterprises do not commission software development to just feel good in its company. Code is not a buddy. Code is a slave, if you will. A mechanical slave, a non-tangible robot. Code is a tool, is a tool. And if we start to treat it differently, if we elevate its status unduely… I guess that will contort our relationship in a contraproductive way. Please get me right: Just because something is “just a tool”, “just a product” does not mean we should not be careful while designing, building, using it. Right to the contrary. We should be very careful when writing code – but not for the code´s sake! We should be careful because we respect our customers who are fellow human beings who should be treated as an end. If we are careless, neglectful, ignorant when producing code on their behalf, then we´re using them. Being sloppy means you´re caring more for yourself that for your customer. You´re then treating the customer as a means to fulfill some of your own needs. That´s plain unethical behavior.   Reason #2: The focus should always be on your purpose, not on any tool. But if code is treated as an end, then the focus is on the code. That might sound right, because where else should be your focus as a software developer? But, well, I´d say, your focus should be on delivering value to your customer. Because in the end your customer does not care if you write a single line of code. She just wants her problem to be solved. Solving problems is the purpose of any contractor. Code must be treated just as a means, a tool we know how to handle very well. But if we´re really trying to be craftsmen then we should be conscious about exactly that and act ethically. That means we must never be so focused on our tool as to be unable to suggest better solutions to the problems of our customers than code.   I´m all with Paul when he urges us to “Write great code”. Sure, if you need to write code, then by all means do so. Write the best code you can think of – and then try to improve it. 
Paul has all the best intentions when he signs on to Brian’s “treat code as an end” - but as we all know: “The road to hell is paved with best intentions” ;-) Yes, I can imagine a “hell of code focus”. In fact, I don’t need to imagine it, I see it quite often. Because code hell is wherever two developers stand together and are so immersed in talking about all sorts of coding tricks, design patterns, code smells, technologies, platforms, and tools that they lose sight of the big picture. Talking about TDD or SOLID or refactoring is a sign of consciousness – relative to the “cowboy coders” view of the world. But from yet another point of view TDD, SOLID, and refactoring are just cures for ailments within a system. And I fear that if “writing great code” is the only focus, or the main focus, of software development, then we as an industry lose the ability to see that. Focus draws a line around something, it defines a horizon for perception and thinking. So if we focus on code, our horizon ends where “the land of code” ends. I don’t think that should be our professional attitude.   So what about Software Craftsmanship as the next big thing after Agility? I think Software Craftsmanship has an important message for all software developers and beyond. But to make it the successor of the Agility movement seems to miss a point. Agility never claimed to solve all software development problems, I’d say. So to blame it for having missed out on certain aspects of it is wrong. If I had to summarize Agility in one word I’d say “Value”. Agility put value for the customer back into software development. Focus on delivering value early and often – that’s Agility’s mantra. All else follows from that. And I ask you: Is that obsolete? Is delivering value not hip anymore? No, surely not. That’s our very purpose as software developers. So how can Agility become obsolete and need to be replaced? We need to do away with this “either/or” thinking. It’s either Agility or Lean or Software Craftsmanship or whatnot. Instead we should start integrating concepts and movements. Think “both/and”. Think Agility plus Software Craftsmanship plus Lean plus whatnot. We don’t need to tear down anything from a pedestal and replace it with a new idol. Instead we should do away with pedestals and arrange whatever is helpful in a circle. Then we can turn to concepts and movements for whatever they do best. After 10 years of Agility we should be able to identify what it was good at – and keep that. Keep Agility around and add whatever Agility was lacking or never concerned with. Add whatever is at the core of Software Craftsmanship. Add whatever is at the core of Lean, etc. But don’t call out the age of Post-Agility. Because it had better never end. Because once we start to lose Agility’s core, we lose our focus on the customer.

    Read the article

  • 3D Ball Physics Theory: collision response on ground and against walls?

    - by David
    I'm really struggling to get a strong grasp on how I should be handling collision response in a game engine I'm building around a 3D ball physics concept. Think Monkey Ball as an example of the type of gameplay. I am currently using sphere-to-sphere broad phase, then AABB to OBB testing (the final test I am using right now is one that checks if one of the 8 OBB points crosses the planes of the object it is testing against). This seems to work pretty well, and I am getting back: Plane that object is colliding against (with a point on the plane, the plane's normal, and the exact point of intersection. I've tried what feels like dozens of different high-level strategies for handling these collisions, without any real success. I think my biggest problem is understanding how to handle collisions against walls in the x-y axes (left/right, front/back), which I want to have elasticity, and the ground (z-axis) where I want an elastic reaction if the ball drops down, but then for it to eventually normalize and be kept "on the ground" (not go into the ground, but also not continue bouncing). Without kluging something together, I'm positive there is a good way to handle this, my theories just aren't getting me all the way there. For physics modeling and movement, I am trying to use a Euler based setup with each object maintaining a position (and destination position prior to collision detection), a velocity (which is added onto the position to determine the destination position), and an acceleration (which I use to store any player input being put on the ball, as well as gravity in the z coord). Starting from when I detect a collision, what is a good way to approach the response to get the expected behavior in all cases? Thanks in advance to anyone taking the time to assist... I am grateful for any pointers, and happy to post any additional info or code if it is useful. UPDATE Based on Steve H's and eBusiness' responses below, I have adapted my collision response to what makes a lot more sense now. It was close to right before, but I didn't have all the right pieces together at the right time! I have one problem left to solve, and that is what is causing the floor collision to hit every frame. Here's the collision response code I have now for the ball, then I'll describe the last bit I'm still struggling to understand. // if we are moving in the direction of the plane (against the normal)... if (m_velocity.dot(intersection.plane.normal) <= 0.0f) { float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration // Calculate the projection velocity PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal); m_velocity -= actingVelocity * dampeningForce; } // Clamp z-velocity to zero if we are within a certain threshold // -- NOTE: this was an experimental idea I had to solve the "jitter" bug I'll describe below float diff = 0.2f - abs(m_velocity.z); if (diff > 0.0f && diff <= 0.2f) { m_velocity.z = 0.0f; } // Take this object to its new destination position based on... // -- our pre-collision position + vector to the collision point + our new velocity after collision * time // -- remaining after the collision to finish the movement m_destPosition = m_position + intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt); The above snippet is run after a collision is detected on the ball (collider) with a collidee (floor in this case). 
With a dampening force of 1.8f, the ball's reflected "upward" velocity will eventually be overcome by gravity, so the ball will essentially be stuck on the floor. THIS is the problem I have now... the collision code is running every frame (since the ball's z-velocity is constantly pushing it a collision with the floor below it). The ball is not technically stuck, I can move it around still, but the movement is really goofy because the velocity and position keep getting affected adversely by the above snippet. I was experimenting with an idea to clamp the z-velocity to zero if it was "close to zero", but this didn't do what I think... probably because the very next frame the ball gets a new gravity acceleration applied to its velocity regardless (which I think is good, right?). Collisions with walls are as they used to be and work very well. It's just this last bit of "stickiness" to deal with. The camera is constantly jittering up and down by extremely small fractions too when the ball is "at rest". I'll keep playing with it... I like puzzles like this, especially when I think I'm close. Any final ideas on what I could be doing wrong here? UPDATE 2 Good news - I discovered I should be subtracting the intersection.diff from the m_position (position prior to collision). The intersection.diff is my calculation of the difference in the vector of position to destPosition from the intersection point to the position. In this case, adding it was causing my ball to always go "up" just a little bit, causing the jitter. By subtracting it, and moving that clamper for the velocity.z when close to zero to being above the dot product (and changing the test from <= 0 to < 0), I now have the following: // Clamp z-velocity to zero if we are within a certain threshold float diff = 0.2f - abs(m_velocity.z); if (diff > 0.0f && diff <= 0.2f) { m_velocity.z = 0.0f; } // if we are moving in the direction of the plane (against the normal)... float dotprod = m_velocity.dot(intersection.plane.normal); if (dotprod < 0.0f) { float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration? // Calculate the projection velocity PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal); m_velocity -= actingVelocity * dampeningForce; } // Take this object to its new destination position based on... // -- our pre-collision position + vector to the collision point + our new velocity after collision * time // -- remaining after the collision to finish the movement m_destPosition = m_position - intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt); UpdateWorldMatrix(m_destWorldMatrix, m_destOBB, m_destPosition, false); This is MUCH better. No jitter, and the ball now "rests" at the floor, while still bouncing off the floor and walls. The ONLY thing left is that the ball is now virtually "stuck". He can move but at a much slower rate, likely because the else of my dot product test is only letting the ball move at a rate multiplied against the tRemaining... I think this is a better solution than I had previously, but still somehow not the right idea. BTW, I'm trying to journal my progress through this problem for anyone else with a similar situation - hopefully it will serve as some help, as many similar posts have for me over the years.
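For readers looking for a starting point, here is a minimal, self-contained sketch of the response described above: damp the velocity component acting into the contact plane, then clamp that component to zero once it falls below a small threshold so the ball can settle instead of jittering. Vec3 and respondToContact are illustrative names rather than the PVRTVec3-based code from the post; the 1.8f and 0.2f constants are simply the values quoted above.
    #include <cmath>

    struct Vec3 {
        float x, y, z;
        Vec3  operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        Vec3  operator*(float s)       const { return {x * s, y * s, z * s}; }
        float dot(const Vec3& o)       const { return x * o.x + y * o.y + z * o.z; }
    };

    // planeNormal is assumed to be unit length.
    void respondToContact(Vec3& velocity, const Vec3& planeNormal,
                          float dampening = 1.8f, float restThreshold = 0.2f)
    {
        float intoPlane = velocity.dot(planeNormal);
        if (intoPlane < 0.0f) {                        // moving into the surface
            Vec3 acting = planeNormal * intoPlane;     // velocity projected onto the normal
            velocity = velocity - acting * dampening;  // reflect and damp that component
        }
        // Once the bounce along the normal is negligible, remove it entirely so the
        // ball rests on the surface instead of re-colliding every frame.
        if (std::fabs(velocity.dot(planeNormal)) < restThreshold) {
            velocity = velocity - planeNormal * velocity.dot(planeNormal);
        }
    }
A dampening factor of 1.8 applied this way is equivalent to reflecting the normal component of the velocity with a restitution of roughly 0.8, which is why the bounces die out under gravity as described above.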

    Read the article

  • Converting Encrypted Values

    - by Johnm
    Your database has been protecting sensitive data at rest using the cell-level encryption features of SQL Server for quite sometime. The employees in the auditing department have been inviting you to their after-work gatherings and buying you drinks. Thousands of customers implicitly include you in their prayers of thanks giving as their identities remain safe in your company's database. The cipher text resting snuggly in a column of the varbinary data type is great for security; but it can create some interesting challenges when interacting with other data types such as the XML data type. The XML data type is one that is often used as a message type for the Service Broker feature of SQL Server. It also can be an interesting data type to capture for auditing or integrating with external systems. The challenge that cipher text presents is that the need for decryption remains even after it has experienced its XML metamorphosis. Quite an interesting challenge nonetheless; but fear not. There is a solution. To simulate this scenario, we first will want to create a plain text value for us to encrypt. We will do this by creating a variable to store our plain text value: -- set plain text value DECLARE @PlainText NVARCHAR(255); SET @PlainText = 'This is plain text to encrypt'; The next step will be to create a variable that will store the cipher text that is generated from the encryption process. We will populate this variable by using a pre-defined symmetric key and certificate combination: -- encrypt plain text value DECLARE @CipherText VARBINARY(MAX); OPEN SYMMETRIC KEY SymKey     DECRYPTION BY CERTIFICATE SymCert     WITH PASSWORD='mypassword2010';     SET @CipherText = EncryptByKey                          (                            Key_GUID('SymKey'),                            @PlainText                           ); CLOSE ALL SYMMETRIC KEYS; The value of our newly generated cipher text is 0x006E12933CBFB0469F79ABCC79A583--. This will be important as we reference our cipher text later in this post. Our final step in preparing our scenario is to create a table variable to simulate the existence of a table that contains a column used to hold encrypted values. Once this table variable has been created, populate the table variable with the newly generated cipher text: -- capture value in table variable DECLARE @tbl TABLE (EncVal varbinary(MAX)); INSERT INTO @tbl (EncVal) VALUES (@CipherText); We are now ready to experience the challenge of capturing our encrypted column in an XML data type using the FOR XML clause: -- capture set in xml DECLARE @xml XML; SET @xml = (SELECT               EncVal             FROM @tbl AS MYTABLE             FOR XML AUTO, BINARY BASE64, ROOT('root')); If you add the SELECT @XML statement at the end of this portion of the code you will see the contents of the XML data in its raw format: <root>   <MYTABLE EncVal="AG4Skzy/sEafeavMeaWDBwEAAACE--" /> </root> Strangely, the value that is captured appears nothing like the value that was created through the encryption process. The result being that when this XML is converted into a readable data set the encrypted value will not be able to be decrypted, even with access to the symmetric key and certificate used to perform the decryption. An immediate thought might be to convert the varbinary data type to either a varchar or nvarchar before creating the XML data. This approach makes good sense. 
The code for this might look something like the following: -- capture set in xml DECLARE @xml XML; SET @xml = (SELECT              CONVERT(NVARCHAR(MAX),EncVal) AS EncVal             FROM @tbl AS MYTABLE             FOR XML AUTO, BINARY BASE64, ROOT('root')); However, this results in the following error: Msg 9420, Level 16, State 1, Line 26 XML parsing: line 1, character 37, illegal xml character A quick query that returns CONVERT(NVARCHAR(MAX),EncVal) reveals that the value that is causing the error looks like something off of a genuine Chinese menu. While this situation does present us with one of those spine-tingling, expletive-generating challenges, rest assured that this approach is on the right track. With the addition of the "style" argument to the CONVERT method, our solution is at hand. When dealing with converting varbinary data types we have three styles available to us: - The first is to not include the style parameter, or use the value of "0". As we see, this style will not work for us. - The second option is to use the value of "1" will keep our varbinary value including the "0x" prefix. In our case, the value will be 0x006E12933CBFB0469F79ABCC79A583-- - The third option is to use the value of "2" which will chop the "0x" prefix off of our varbinary value. In our case, the value will be 006E12933CBFB0469F79ABCC79A583-- Since we will want to convert this back to varbinary when reading this value from the XML data we will want the "0x" prefix, so we will want to change our code as follows: -- capture set in xml DECLARE @xml XML; SET @xml = (SELECT              CONVERT(NVARCHAR(MAX),EncVal,1) AS EncVal             FROM @tbl AS MYTABLE             FOR XML AUTO, BINARY BASE64, ROOT('root')); Once again, with the inclusion of the SELECT @XML statement at the end of this portion of the code you will see the contents of the XML data in its raw format: <root>   <MYTABLE EncVal="0x006E12933CBFB0469F79ABCC79A583--" /> </root> Nice! We are now cooking with gas. To continue our scenario, we will want to parse the XML data into a data set so that we can glean our freshly captured cipher text. Once we have our cipher text snagged we will capture it into a variable so that it can be used during decryption: -- read back xml DECLARE @hdoc INT; DECLARE @EncVal NVARCHAR(MAX); EXEC sp_xml_preparedocument @hDoc OUTPUT, @xml; SELECT @EncVal = EncVal FROM OPENXML (@hdoc, '/root/MYTABLE') WITH ([EncVal] VARBINARY(MAX) '@EncVal'); EXEC sp_xml_removedocument @hDoc; Finally, the decryption of our cipher text using the DECRYPTBYKEYAUTOCERT method and the certificate utilized to perform the encryption earlier in our exercise: SELECT     CONVERT(NVARCHAR(MAX),                     DecryptByKeyAutoCert                          (                            CERT_ID('AuditLogCert'),                            N'mypassword2010',                            @EncVal                           )                     ) EncVal; Ah yes, another hurdle presents itself! The decryption produced the value of NULL which in cryptography means that either you don't have permissions to decrypt the cipher text or something went wrong during the decryption process (ok, sometimes the value is actually NULL; but not in this case). As we see, the @EncVal variable is an nvarchar data type. The third parameter of the DECRYPTBYKEYAUTOCERT method requires a varbinary value. 
Therefore we will need to utilize our handy-dandy CONVERT method: SELECT     CONVERT(NVARCHAR(MAX),                     DecryptByKeyAutoCert                          (                             CERT_ID('AuditLogCert'),                             N'mypassword2010',                             CONVERT(VARBINARY(MAX),@EncVal)                           )                     ) EncVal; Oh, almost. The result remains NULL despite our conversion to the varbinary data type. This is due to the creation of an varbinary value that does not reflect the actual value of our @EncVal variable; but rather a varbinary conversion of the variable itself. In this case, something like 0x3000780030003000360045003--. Considering the "style" parameter got us past XML challenge, we will want to consider its power for this challenge as well. Knowing that the value of "1" will provide us with the actual value including the "0x", we will opt to utilize that value in this case: SELECT     CONVERT(NVARCHAR(MAX),                     DecryptByKeyAutoCert                          (                            CERT_ID('SymCert'),                            N'mypassword2010',                            CONVERT(VARBINARY(MAX),@EncVal,1)                           )                     ) EncVal; Bingo, we have success! We have discovered what happens with varbinary data when captured as XML data. We have figured out how to make this data useful post-XML-ification. Best of all we now have a choice in after-work parties now that our very happy client who depends on our XML based interface invites us for dinner in celebration. All thanks to the effective use of the style parameter.
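As a quick, standalone illustration of the style behaviour the post relies on (independent of any keys or certificates – the value below is just a throwaway constant, not the cipher text from the example), the following sketch shows the round trip between varbinary and nvarchar with styles 1 and 2:
    -- Hypothetical demo value; any varbinary will do.
    DECLARE @bin VARBINARY(MAX) = 0x006E1293;

    DECLARE @withPrefix NVARCHAR(MAX) = CONVERT(NVARCHAR(MAX), @bin, 1); -- N'0x006E1293'
    DECLARE @noPrefix   NVARCHAR(MAX) = CONVERT(NVARCHAR(MAX), @bin, 2); -- N'006E1293'

    -- Converting back: style 1 expects the 0x prefix, style 2 expects it to be absent.
    SELECT CONVERT(VARBINARY(MAX), @withPrefix, 1) AS RoundTrip1, -- 0x006E1293
           CONVERT(VARBINARY(MAX), @noPrefix, 2)   AS RoundTrip2; -- 0x006E1293
The NULL result seen during decryption is the same mismatch in disguise: the bytes handed to DECRYPTBYKEYAUTOCERT must be the original cipher text, not a re-encoding of its string representation.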

    Read the article

  • The Future of Project Management is Social

    - by Natalia Rachelson
    A guest post by Kazim Isfahani, Director, Product Marketing, Oracle. Rapid Ascent. Breakneck Speed. Lightning Fast. Perhaps even overwhelming. No matter which set of adjectives we use to describe it, social media’s rise into the enterprise mainstream has been unprecedented. Indeed, the big 4 social media powerhouses (Facebook, Google+, LinkedIn, and Twitter) have nearly 2 billion users between them. You may be asking (as you should really), “That’s all well and good for the consumer, but for me at my company, what’s your point? Beyond the fact that I can check and post updates, that is.” Good question, kind sir. Impact of Social and Collaboration on Project Management I’ll dovetail this discussion to the project management realm, since that’s what I’m writing about. Speed is a big challenge for project-driven organizations. Anything that can help speed up project delivery - be it a new product introduction effort or a geographical expansion project - is a good thing. So where does this whole social thing fit, particularly since there are already a host of tools to help with traditional project execution? The fact is companies have seen improvements in their productivity by deploying departmental collaboration and other social-oriented solutions. McKinsey’s survey on social tools shows we have reached critical scale: 72% of respondents report that their companies use at least one, and over 40% say they are using social networks and blogs. We don’t hear as much about the impact of social media technologies at the project and project manager level, but that does not mean there is none. Consider the new hire. The type of individual entering the workforce and executing on projects is a generation of worker expecting visually appealing, easy to use and easy to understand technology meshing hand-in-hand with business processes. Consider the project manager. The social era has enhanced the role that the project manager must play. Today’s project manager must be a supreme communicator, an influencer, a sympathizer, a negotiator, and still manage to keep all stakeholders in the loop on project progress. Social tools play a significant role in this effort. Now consider the impact on the project team. The way that a project team functions has changed, with newer, social-oriented technologies making the process of information dissemination and team communications much more fluid. It’s clear that a shift is occurring where “social” is intersecting with project management. The Rise of Social Project Management We refer to the melding of project management and social networking as Social Project Management. 
Social Project Management is based upon the philosophy that the project team is one part of an integrated whole, and that valuable and unique abilities exist within the larger organization. For this reason, Social Project Management systems should be integrated into the collaborative platform(s) of an organization, allowing communication to proceed outside the project boundaries. What makes social project management "social" is an implicit awareness where distributed teams build connected links in ways that were previously restricted to teams that were co-located. Just as critical, Social Project Management embraces the vision of seamless online collaboration within a project team, but also provides for, (and enhances) the use of rigorous project management techniques. Social Project Management acknowledges that projects (particularly large projects) are a social activity - people doing work with people, for other people, with commitments to yet other people. The more people (larger projects), the more interpersonal the interactions, and the more social affects the project. The Epitome of Social - Fusion Project Portfolio Management If I take this one level further to discuss Fusion Project Portfolio Management, the notion of Social Project Management is on full display. With Fusion Project Portfolio Management, project team members have a single place for interaction on projects and access to any other resources working within the Fusion ERP applications. This allows team members the opportunity to be informed with greater participation and provide better information. The application’s the visual appeal, and highly graphical nature makes it easy to navigate information. The project activity stream adds to the intuitive user experience. The goal of productivity is pervasive throughout Fusion Project Portfolio Management. Field research conducted with Oracle customers and partners showed that users needed a way to stay in the context of their core transactions and yet easily access social networking tools. This is manifested in the application so when a user executes a business process, they not only have the transactional application at their fingertips, but also have things like e-mail, SMS, text, instant messaging, chat – all providing a number of different ways to interact with people and/or groups of people, both internal and external to the project and enterprise. But in the end, connecting people is relatively easy. The larger issue is finding a way to serve up relevant, system-generated, actionable information, in real time, which will allow for more streamlined execution on key business processes. Fusion Project Portfolio Management’s design concept enables users to create project communities, establish discussion threads, manage event calendars as well as deliver project based work spaces to organize communications within the context of a project – all within a secure business environment. We’d love to hear from you and get your thoughts and ideas about how Social Project Management is impacting your organization. To learn more about Oracle Fusion Project Portfolio Management, please visit this link

    Read the article

  • The Presentation Isn't Over Until It's Over

    - by Phil Factor
    The senior corporate dignitaries settled into their seats looking important in a blue-suited sort of way. The lights dimmed as I strode out in front to give my presentation.  I had ten vital minutes to make my pitch.  I was about to dazzle the top management of a large software company who were considering the purchase of my software product. I would present them with a dazzling synthesis of diagrams, graphs, followed by  a live demonstration of my software projected from my laptop.  My preparation had been meticulous: It had to be: A year’s hard work was at stake, so I’d prepared it to perfection.  I stood up and took them all in, with a gaze of sublime confidence. Then the laptop expired. There are several possible alternative plans of action when this happens     A. Stare at the smoking laptop vacuously, flapping ones mouth slowly up and down     B. Stand frozen like a statue, locked in indecision between fright and flight.     C. Run out of the room, weeping     D. Pretend that this was all planned     E. Abandon the presentation in favour of a stilted and tedious dissertation about the software     F. Shake your fist at the sky, and curse the sense of humour of your preferred deity I started for a few seconds on plan B, normally referred to as the ‘Rabbit in the headlamps of the car’ technique. Suddenly, a little voice inside my head spoke. It spoke the famous inane words of Yogi Berra; ‘The game isn't over until it's over.’ ‘Too right’, I thought. What to do? I ran through the alternatives A-F inclusive in my mind but none appealed to me. I was completely unprepared for this. Nowadays, longevity has since taught me more than I wanted to know about the wacky sense of humour of fate, and I would have taken two laptops. I hadn’t, but decided to do the presentation anyway as planned. I started out ignoring the dead laptop, but pretending, instead that it was still working. The audience looked startled. They were expecting plan B to be succeeded by plan C, I suspect. They weren’t used to denial on this scale. After my introductory talk, which didn’t require any visuals, I came to the diagram that described the application I’d written.  I’d taken ages over it and it was hot stuff. Well, it would have been had it been projected onto the screen. It wasn’t. Before I describe what happened then, I must explain that I have thespian tendencies.  My  triumph as Professor Higgins in My Fair Lady at the local operatic society is now long forgotten, but I remember at the time of my finest performance, the moment that, glancing up over the vast audience of  moist-eyed faces at the during the poignant  scene between Eliza and Higgins at the end, I  realised that I had a talent that one day could possibly  be harnessed for commercial use I just talked about the diagram as if it was there, but throwing in some extra description. The audience nodded helpfully when I’d done enough. Emboldened, I began a sort of mime, well, more of a ballet, to represent each slide as I came to it. Heaven knows I’d done my preparation and, in my mind’s eye, I could see every detail, but I had to somehow project the reality of that vision to the audience, much the same way any actor playing Macbeth should do the ghost of Banquo.  My desperation gave me a manic energy. If you’ve ever demonstrated a windows application entirely by mime, gesture and florid description, you’ll understand the scale of the challenge, but then I had nothing to lose. 
With a brief sentence of description here and there, and arms flailing whilst outlining the size and shape of  graphs and diagrams, I used the many tricks of mime, gesture and body-language  learned from playing Captain Hook, or the Sheriff of Nottingham in pantomime. I set out determinedly on my desperate venture. There wasn’t time to do anything but focus on the challenge of the task: the world around me narrowed down to ten faces and my presentation: ten souls who had to be hypnotized into seeing a Windows application:  one that was slick, well organized and functional I don’t remember the details. Eight minutes of my life are gone completely. I was a thespian berserker.  I know however that I followed the basic plan of building the presentation in a carefully controlled crescendo until the dazzling finale where the results were displayed on-screen.  ‘And here you see the results, neatly formatted and grouped carefully to enhance the significance of the figures, together with running trend-graphs!’ I waved a mime to signify an animated  window-opening, and looked up, in my first pause, to gaze defiantly  at the audience.  It was a sight I’ll never forget. Ten pairs of eyes were gazing in rapt attention at the imaginary window, and several pairs of eyes were glancing at the imaginary graphs and figures.  I hadn’t had an audience like that since my starring role in  Beauty and the Beast.  At that moment, I realized that my desperate ploy might work. I sat down, slightly winded, when my ten minutes were up.  For the first and last time in my life, the audience of a  ‘PowerPoint’ presentation burst into spontaneous applause. ‘Any questions?’ ‘Yes,  Have you got an agent?’ Yes, in case you’re wondering, I got the deal. They bought the software product from me there and then. However, it was a life-changing experience for me and I have never ever again trusted technology as part of a presentation.  Even if things can’t go wrong, they’ll go wrong and they’ll kill the flow of what you’re presenting.  if you can’t do something without the techno-props, then you shouldn’t do it.  The greatest lesson of all is that great presentations require preparation and  ‘stage-presence’ rather than fancy graphics. They’re a great supporting aid, but they should never dominate to the point that you’re lost without them.

    Read the article

  • Developing Schema Compare for Oracle (Part 1)

    - by Simon Cooper
    SQL Compare is one of Red Gate's most successful SQL Server tools; it allows developers and DBAs to compare and synchronize the contents of their databases. Although similar tools exist for Oracle, they are quite noticeably lacking in the usability and stability that SQL Compare is known for in the SQL Server world. We could see a real need for a usable schema comparison tools for Oracle, and so the Schema Compare for Oracle project was born. Over the next few weeks, as we come up to release of v1, I'll be doing a series of posts on the development of Schema Compare for Oracle. For the first post, I thought I would start with the main pitfalls that we stumbled across when developing the product, especially from a SQL Server background. 1. Schemas and Databases The most obvious difference is that the concept of a 'database' is quite different between Oracle and SQL Server. On SQL Server, one server instance has multiple databases, each with separate schemas. There is typically little communication between separate databases, and most databases are no more than about 1000-2000 objects. This means SQL Compare can register an entire database in a reasonable amount of time, and cross-database dependencies probably won't be an issue. It is a quite different scene under Oracle, however. The terms 'database' and 'instance' are used interchangeably, (although technically 'database' refers to the datafiles on disk, and 'instance' the running Oracle process that reads & writes to the database), and a database is a single conceptual entity. This immediately presents problems, as it is infeasible to register an entire database as we do in SQL Compare; in my Oracle install, using the standard recommended options, there are 63975 system objects. If we tried to register all those, not only would it take hours, but the client would probably run out of memory before we finished. As a result, we had to allow people to specify what schemas they wanted to register. This decision had quite a few knock-on effects for the design, which I will cover in a future post. 2. Connecting to Oracle The next obvious difference is in actually connecting to Oracle – in SQL Server, you can specify a server and database, and off you go. On Oracle things are slightly more complicated. SIDs, Service Names, and TNS A database (the files on disk) must have a unique identifier for the databases on the system, called the SID. It also has a global database name, which consists of a name (which doesn't have to match the SID) and a domain. Alternatively, you can identify a database using a service name, which normally has a 1-to-1 relationship with instances, but may not if, for example, using RAC (Real Application Clusters) for redundancy and failover. You specify the computer and instance you want to connect to using TNS (Transparent Network Substrate). The user-visible parts are a config file (tnsnames.ora) on the client machine that specifies how to connect to an instance. For example, the entry for one of my test instances is: SC_11GDB1 = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = simonctest)(PORT = 1521)) ) (CONNECT_DATA = (SID = 11gR1db1) ) ) This gives the hostname, port, and SID of the instance I want to connect to, and associates it with a name (SC_11GDB1). The tnsnames syntax also allows you to specify failover, multiple descriptions and address lists, and client load balancing. You can then specify this TNS identifier as the data source in a connection string. 
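For illustration, a connection string using the alias above might look like the following minimal sketch (the user and password are placeholders, and the exact keywords depend on the data provider); the provider resolves SC_11GDB1 against tnsnames.ora to find the host, port, and SID:
    Data Source=SC_11GDB1;User Id=scott;Password=tiger;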
Although using ODP.NET (the .NET dlls provided by Oracle) was fine for internal prototype builds, once we released the EAP we discovered that this simply wasn't an acceptable solution for installs on other people's machines. Due to .NET assembly strong naming, users had to have installed on their machines the exact same version of the ODP.NET dlls as we had on our build server. We couldn't ship the ODP.NET dlls with our installer as the Oracle license agreement prohibited this, and we didn't want to force users to install another Oracle client just so they can run our program. To be able to list the TNS entries in the connection dialog, we also had to locate and parse the tnsnames.ora file, which was complicated by users with several Oracle client installs and intricate TNS entries. After much swearing at our computers, we eventually decided to use a third-party Oracle connection library from Devart that we could ship with our program; this could use whatever client version was installed, parse the TNS entries for us, and also had the nice feature of being able to connect to an Oracle server without having any client installed at all. Unfortunately, their current license agreement prevents us from shipping an Oracle SDK, but that's a bridge we'll cross when we get to it. 3. Running synchronization scripts The most important difference is that in Oracle, DDL is non-transactional; you cannot roll back DDL statements like you can on SQL Server. Although we considered various solutions to this, including using the flashback archive or recycle bin, or generating an undo script, no reliable method of completely undoing a half-executed sync script has yet been found; so in this case we simply have to trust that the DBA or developer will check and verify the script before running it. However, before we got to that stage, we had to get the scripts to run in the first place... To run a synchronization script from SQL Compare we essentially pass the script over to the SqlCommand.ExecuteNonQuery method. However, when we tried to do the same for an OracleConnection we got a very strange error – 'ORA-00911: invalid character', even when running the most basic CREATE TABLE command. After much hair-pulling and Googling, we discovered that Oracle has got some very strange behaviour with semicolons at the end of statements. To understand what's going on, we need to take a quick foray into SQL and PL/SQL. PL/SQL is not T-SQL In SQL Server, T-SQL is the language used to interface with the database. It has DDL, DML, control flow, and many other nice features (like Turing-completeness) that you can mix and match in the same script. In Oracle, DDL SQL and PL/SQL are two completely separate languages, with different syntax, different datatypes and different execution engines within the instance. Oracle SQL is much more like 'pure' ANSI SQL, with no state, no control flow, and only the basic DML commands. PL/SQL is the Turing-complete language, but can only do DML and DCL (i.e. BEGIN TRANSACTION commands). Any DDL or SQL commands that aren't recognised by the PL/SQL engine have to be passed back to the SQL engine via an EXECUTE IMMEDIATE command. In PL/SQL, a semicolon is a valid token used to delimit the end of a statement. In SQL, a semicolon is not a valid token (even though the Oracle documentation gives them at the end of the syntax diagrams).
When you execute the command CREATE TABLE table1 (COL1 NUMBER); in SQL*Plus the semicolon on the end is a command to SQL*Plus to execute the preceding statement on the server; it strips off the semicolon before passing it on. SQL Developer does a similar thing. When executing a PL/SQL block, however, the syntax is like so: BEGIN INSERT INTO table1 VALUES (1); INSERT INTO table1 VALUES (2); END; / In this case, the semicolon is accepted by the PL/SQL engine as a statement delimiter, and instead the / is the command to SQL*Plus to execute the current block. This explains the ORA-00911 error we got when trying to run the CREATE TABLE command – the server is complaining about the semicolon on the end. This also means that there is no SQL syntax to execute more than one DDL command in the same OracleCommand. Therefore, we would have to do a round-trip to the server for every command we want to execute. Obviously, this would cause lots of network traffic and be very slow on slow or congested networks. Our first attempt at a solution was to wrap every SQL statement (without semicolon) inside an EXECUTE IMMEDIATE command in a PL/SQL block and pass that to the server to execute. One downside of this solution is that we get no feedback as to how the script execution is going; we're currently evaluating better solutions to this thorny issue. Next up: Dependencies; how we solved the problem of being unable to register the entire database, and the knock-on effects to the whole product.
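Returning to the semicolon workaround for a moment, a generated batch looks roughly like the following – a sketch with placeholder object names rather than real Schema Compare output. Each DDL statement is passed, without its trailing semicolon, to EXECUTE IMMEDIATE inside a single anonymous PL/SQL block, so the whole batch is one round trip to the server:
    BEGIN
      EXECUTE IMMEDIATE 'CREATE TABLE table1 (col1 NUMBER)';
      EXECUTE IMMEDIATE 'ALTER TABLE table1 ADD (col2 VARCHAR2(50))';
      EXECUTE IMMEDIATE 'CREATE INDEX table1_ix ON table1 (col1)';
    END;
The trade-off, as noted above, is that per-statement progress feedback is lost, since the server reports on the block as a whole.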

    Read the article

  • Community Outreach - Where Should I Go

    - by Roger Brinkley
    A few days ago I was talking to person new to community development and they asked me what guidelines I used to determine the worthiness of a particular event. After our conversation was over I thought about it a little bit more and figured out there are three ways to determine if any event (be it conference, blog, podcast or other social medias) is worth doing: Transferability, Multiplication, and Impact. Transferability - Is what I have to say useful to the people that are going to hear it. For instance, consider a company that has product offering that can connect up using a number of languages like Scala, Grovey or Java. Sending a Scala expert to talk about Scala and the product is not transferable to a Java User Group, but a Java expert doing the same talk with a Java slant is. Similarly, talking about JavaFX to any Java User Group meeting in Brazil was pretty much a wasted effort until it was open sourced. Once it was open sourced it was well received. You can also look at transferability in relation to the subject matter that you're dealing with. How transferable is a presentation that I create. Can I, or a technical writer on the staff, turn it into some technical document. Could it be converted into some type of screen cast. If we have a regular podcast can we make a reference to the document, catch the high points or turn it into a interview. Is there a way of using this in the sales group. In other words is the document purely one dimensional or can it be re-purposed in other forms. Multiplication - On every trip I'm looking for 2 to 5 solid connections that I can make with developers. These are long term connections, because I know that once that relationship is established it will lead to another 2 - 5 from that connection and within a couple of years were talking about some 100 connections from just one developer. For instance, when I was working on JavaHelp in 2000 I hired a science teacher with a programming background. We've developed a very tight relationship over the year though we rarely see each other more than once a year. But at this JavaOne, one of his employees came up to me and said, "Richard (Rick Hard in Czech) told me to tell you that he couldn't make it to JavaOne this year but if I saw you to tell you hi". Another example is from my Mobile & Embedded days in Brasil. On our very first FISL trip about 5 years ago there were two university students that had created a project called "Marge". Marge was a Bluetooth framework that made connecting bluetooth devices easier. I invited them to a "Sun" dinner that evening. Originally they were planning on leaving that afternoon, but they changed their plans recognizing the opportunity. Their eyes were as big a saucers when they realized the level of engineers at the meeting. They went home started a JUG in Florianoplis that we've visited more than a couple of times. One of them went to work for Brazilian government lab like Berkley Labs, MIT Lab, John Hopkins Applied Physicas Labs or Lincoln Labs in the US. That presented us with an opportunity to show Embedded Java as a possibility for some of the work they were doing there. Impact - The final criteria is how life changing is what I'm going to say be to the individuals I'm reaching. A t-shirt is just a token, but when I reach down and tug at their developer hearts then I know I've succeeded. I'll never forget one time we flew all night to reach Joan Pasoa in Northern Brazil. 
We arrived at 2am went immediately to our hotel only to be woken up at 6 am to travel 2 hours by car to the presentation hall. When we arrived we were totally exhausted. Outside the facility there were 500 people lined up to hear 6 speakers for the day. That itself was uplifting.  I delivered one of my favorite talks on "I have passion". It was a talk on golf and embedded java development, "Find your passion". When we finished a couple of first year students came up to me and said how much my talk had inspired them. FISL is another great example. I had been about 4 years in a row. FISL is a very young group of developers so capturing their attention is important. Several of the students will come back 2 or 3 years later and ask me questions about research or jobs. And then there's Louis. Louis is one my favorite Brazilians. I can only describe him as a big Brazilian teddy bear. I see him every year at FISL. He works primarily in Java EE but he's attended every single one of my talks over the last 4 years. I can't tell you why, but he always greets me and gives me a hug. For some reason I've had a real impact. And of course when it comes to impact you don't just measure a presentation but every single interaction you have at an event. It's the hall way conversations, the booth conversations, but more importantly it's the conversations at dinner tables or in the cars when you're getting transported to an event. There's a good story that illustrates this. Last year in the spring I was traveling to Goiânia in Brazil. I've been there many times and leaders there no me well. One young man has picked me up at the airport on more than one occasion. We were going out to dinner one evening and he brought his girl friend along. One thing let to another and I eventually asked him, in front of her, "Why haven't you asked her to marry you?" There were all kinds of excuses and she just looked at him and smiled. When I came back in December for JavaOne he came and sought me. "I just want to tell you that I thought a lot about what you said, and I asked her to marry me. We're getting married next Spring." Sometimes just one presentation is all it takes to make an impact. Other times it takes years. Some impacts are directly related to the company and some are more personal in nature. It doesn't matter which it is because it's having the impact that matters.

    Read the article

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management by Joe Golemba, Vice President, Product Management, Oracle WebCenter As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this; information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of farflung data is no simple job, but the alternative—wasted effort or even regulatory compliance issues—is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive. Unstructured data matters Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, and unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts. Historically, content management solutions that had 21CFR Part 11 capabilities—electronic records and signatures—were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee’s typical business activity. Administrative safeguards—such as content de-duplication—can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization. Putting it in context Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search. 
Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationship between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee’s day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement. Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing. Security and compliance Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administraiton has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide—with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures—is becoming increasingly critical for life sciences companies. Making the move Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage new tools which can dramatically speed and reduce the cost of the data migration process through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process. The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years. 
The rapid adoption of enterprise content management, both in operational processes as well as in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and accelerate time to market, to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information is increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable. More Information Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • The Madness of March

    - by Kristin Rose
    From “Linsanity” to “LOB City”, there is no doubt that basketball dominates the month of March. As many are aware, March Madness is well underway and continues to be a time when college basketball teams get together to bring their A-game to the court. Here at Oracle we also like to bring our A-game, and that includes some new players and talent from our newly acquired companies. Each new acquisition expands Oracle’s solution portfolio, fills customer requirements, and ultimately brings greater opportunities for partners. OPN follows a consistent approach to delivering key information about these acquisitions to you in a timely manner. 
We do this so partners can get educated, get trained and gain access to demand gen and sales tools. Through this slam dunk of a process we provide (using Pillar Data Systems as an example): A welcome page where partners can download information and learn how to sell and maximize sales returns. A Discovery section where partners can listen to key Oracle Executives speak about the many benefits this new solution brings, as well as review a FAQ sheet. A Prepare section where partners can learn about the product strategies and the different OPN Knowledge Zones that have become available. A Sell and Deliver section that partners can leverage when discussing product positioning and functionality, as well as gain access to relevant deliverables. Just as any competitive team strives to be #1, Oracle also wants to stay best-in-class, which is why we have recently joined forces with some ‘baller’ companies such as RightNow, Endeca and Pillar Axiom to secure our place in the industry bracket. By running our 3-2 Oracle play and bringing in our newly acquired products, we are able to deliver a solid, expanded solution to our partners. These and many other MVP companies have helped Oracle broaden its offerings and score big. Watch the half time show below to find out what Judson thinks about Oracle’s current offerings: Mergers and acquisitions are a strategic part of how we currently go to market. If you haven’t done so already, dribble down or post up and visit the Acquisition Catalog to learn more about Oracle’s acquired products and the unique benefits they can bring to your own court. Or click here to learn about the ways of monetizing opportunities through Oracle acquisitions. Until Next Time, It’s Game Time, The OPN Communications Team

    Read the article

  • PASS: Bylaw Change 2013

    - by Bill Graziano
    PASS launched a Global Growth Initiative in the Summer of 2011 with the appointment of three international Board advisors.  Since then we’ve thought and talked extensively about how we make PASS more relevant to our members outside the US and Canada.  We’ve collected much of that discussion in our Global Growth site.  You can find vision documents, plans, governance proposals, feedback sites, and transcripts of Twitter chats and town hall meetings.  We also address these plans at the Board Q&A during the 2012 Summit. One of the biggest changes coming out of this process is around how we elect Board members.  And that requires a change to the bylaws.  We published the proposed bylaw changes as a red-lined document so you can clearly see the changes.  Our goal in these bylaw changes was to address the changes required by the global growth initiatives, conduct a legal review of the document and address other minor issues in the document.  There are numerous small wording changes throughout the document.  For example, we replaced every reference of “The Corporation” with the word “PASS” so it now reads “PASS is organized…”. Board Composition The biggest change in these bylaw changes is how the Board is composed and elected.  This discussion starts in section VI.2.  This section now says that some elected directors will come from geographic regions.  I think this is the best way to make sure we give all of our members a voice in the leadership of the organization.  The key parts of this section are: The remaining Directors (i.e. the non-Officer Directors and non-Vendor Appointed Directors) shall be elected by the voting membership (“Elected Directors”). Elected Directors shall include representatives of defined PASS regions (“Regions”) as set forth below (“Regional Directors”) and at minimum one (1) additional Director-at-Large whose selection is not limited by region. Regional Directors shall include, but are not limited to, two (2) seats for the Region covering Canada and the United States of America. Additional Regions for the purpose of electing additional Regional Directors and additional Director-at-Large seats for the purpose of expanding the Board shall be defined by a majority vote of the current Board of Directors and must be established prior to the public call for nominations in the general election. Previously defined Regions and seats approved by the Board of Directors shall remain in effect and can only be modified by a 2/3 majority vote by the then current Board of Directors. Currently PASS has six At-Large Directors elected by the members.  These changes allow for a Regional Director position that is elected by the members but must come from a particular region.  It also stipulates that there must always be at least one Director-at-Large who can come from any region. We also understand that PASS is currently a very US-centric organization.  Our Summit is held in America, roughly half our chapters are in the US and Canada and most of the Board members over the last ten years have come from America.  We wanted to reflect that by making sure that our US and Canadian volunteers would continue to play a significant role by ensuring that two Regional seats are reserved specifically for Canada and the US. Other than that, the bylaws don’t create any specific regional seats.  These rules allow us to create Regional Director seats but don’t require it.  
We haven’t fully discussed what the criteria will be in order for a region to have a seat designated for it or how many regions there will be.  In our discussions we’ve broadly discussed regions for United States and Canada Europe, Middle East, and Africa (EMEA) Australia, New Zealand and Asia (also known as Asia Pacific or APAC) Mexico, South America, and Central America (LATAM) As you can see, our thinking is that there will be a few large regions.  I’ve also considered a non-North America region that we can gradually split into the regions above as our membership grows in those areas.  The regions will be defined by a policy document that will be published prior to the elections. I’m hoping that over the next year we can begin to publish more of what we do as Board-approved policy documents. While the bylaws only require a single non-region specific At-large Director, I would expect we would always have two.  That way we can have one in each election.  I think it’s important that we always have one seat open that anyone who is eligible to run for the Board can contest.  The Board is required to have any regions defined prior to the start of the election process. Board Elections – Regional Seats We spent a lot of time discussing how the elections would work for these Regional Director seats.  Ultimately we decided that the simplest solution is that every PASS member should vote for every open seat.  Section VIII.3 reads: Candidates who are eligible (i.e. eligible to serve in such capacity subject to the criteria set forth herein or adopted by the Board of Directors) shall be designated to fill open Board seats in the following order of priority on the basis of total votes received: (i) full term Regional Director seats, (ii) full term Director-at-Large seats, (iii) not full term (vacated) Regional Director seats, (iv) not full term (vacated) Director-at-Large seats. For the purposes of clarity, because of eligibility requirements, it is contemplated that the candidates designated to the open Board seats may not receive more votes than certain other candidates who are not selected to the Board. We debated whether to have multiple ballots or one single ballot.  Multiple ballot elections get complicated quickly.  Let’s say we have a ballot for US/Canada and one for Region 2.  After that we’d need a mechanism to merge those two together and come up with the winner of the at-large seat or have another election for the at-large position.  We think the best way to do this is a single ballot and putting the highest vote getters into the most restrictive seats.  Let’s look at an example: There are seats open for Region 1, Region 2 and at-large.  The election results are as follows: Candidate A (eligible for Region 1) – 550 votes Candidate B (eligible for Region 1) – 525 votes Candidate C (eligible for Region 1) – 475 votes Candidate D (eligible for Region 2) – 125 votes Candidate E (eligible for Region 2) – 75 votes In this case, Candidate A is the winner for Region 1 and is assigned that seat.  Candidate D is the winner for Region 2 and is assigned that seat.  The at-large seat is filled by the high remaining vote getter which is Candidate B. The key point to understand is that we may have a situation where a person with a lower vote total is elected to a regional seat and a person with a higher vote total is excluded.  This will be true whether we had multiple ballots or a single ballot.  Board Elections – Vacant Seats The other change to the election process is for vacant Board seats.  
The actual changes are sprinkled throughout the document. Previously we didn’t have a mechanism that allowed for an election of a Board seat that we knew would be vacant in the future.  The most common case is when a Board members moves to an Officer role in the middle of their term.  One of the key changes is to allow the number of votes members have to match the number of open seats.  This allows each voter to express their preference on all open seats.  This only applies when we know about the opening prior to the call for nominations.  This all means that if there’s a seat will be open at the start of the next Board term, and we know about it prior to the call for nominations, we can include that seat in the elections.  Ultimately, the aim is to have PASS members decide who sits on the Board in as many situations as possible. We discussed the option of changing the bylaws to just take next highest vote-getter in all other cases.  I think that’s wrong for the following reasons: All voters aren’t able to express an opinion on all candidates.  If there are five people running for three seats, you can only vote for three.  You have no way to express your preference between #4 and #5. Different candidates may have different information about the number of seats available.  A person may learn that a Board member plans to resign at the end of the year prior to that information being made public. They may understand that the top four vote getters will end up on the Board while the rest of the members believe there are only three openings.  This may affect someone’s decision to run.  I don’t think this creates a transparent, fair election. Board members may use their knowledge of the election results to decide whether to remain on the Board or not.  Admittedly this one is unlikely but I don’t want to create a situation where this accusation can be leveled. I think the majority of vacancies in the future will be handled through elections.  The bylaw section quoted above also indicates that partial term vacancies will be filled after the full term seats are filled. Removing Directors Section VI.7 on removing directors has always had a clause that allowed members to remove an elected director.  We also had a clause that allowed appointed directors to be removed.  We added a clause that allows the Board to remove for cause any director with a 2/3 majority vote.  The updated text reads: Any Director may be removed for cause by a 2/3 majority vote of the Board of Directors whenever in its judgment the best interests of PASS would be served thereby. Notwithstanding the foregoing, the authority of any Director to act as in an official capacity as a Director or Officer of PASS may be suspended by the Board of Directors for cause. Cause for suspension or removal of a Director shall include but not be limited to failure to meet any Board-approved performance expectations or the presence of a reason for suspension or dismissal as listed in Addendum B of these Bylaws. The first paragraph is updated and the second and third are unchanged (except cleaning up language).  If you scroll down and look at Addendum B of these bylaws you find the following: Cause for suspension or dismissal of a member of the Board of Directors may include: Inability to attend Board meetings on a regular basis. Inability or unwillingness to act in a capacity designated by the Board of Directors. Failure to fulfill the responsibilities of the office. 
Inability to represent the Region elected to represent Failure to act in a manner consistent with PASS's Bylaws and/or policies. Misrepresentation of responsibility and/or authority. Misrepresentation of PASS. Unresolved conflict of interests with Board responsibilities. Breach of confidentiality. The bold line about your inability to represent your region is what we added to the bylaws in this revision.  We also added a clause to section VII.3 allowing the Board to remove an officer.  That clause is much less restrictive.  It doesn’t require cause and only requires a simple majority. The Board of Directors may remove any Officer whenever in their judgment the best interests of PASS shall be served by such removal. Other There are numerous other small changes throughout the document. Proxy voting.  The laws around how members and Board members proxy votes are specific in Illinois law.  PASS is an Illinois corporation and is subject to Illinois laws.  We changed section IV.5 to come into compliance with those laws.  Specifically this says you can only vote through a proxy if you have a written proxy through your authorized attorney.  English language proficiency.  As we increase our global footprint we come across more members that aren’t native English speakers.  The business of PASS is conducted in English and it’s important that our Board members speak English.  If we get big enough to afford translators, we may be able to relax this but right now we need English language skills for effective Board members. Committees.  The language around committees in section IX is old and dated.  Our lawyers advised us to clean it up.  This section specifically applies to any committees that the Board may form outside of portfolios.  We removed the term limits, quorum and vacancies clause.  We don’t currently have any committees that this would apply to.  The Nominating Committee is covered elsewhere in the bylaws. Electronic Votes.  The change allows the Board to vote via email but the results must be unanimous.  This is to conform with Illinois state law. Immediate Past President.  There was no mechanism to fill the IPP role if an outgoing President chose not to participate.  We changed section VII.8 to allow the Board to invite any previous President to fill the role by majority vote. Nominations Committee.  We’ve opened the language to allow for the transparent election of the Nominations Committee as outlined by the 2011 Election Review Committee. Revocation of Charters. The language surrounding the revocation of charters for local groups was flagged by the lawyers. We have allowed for the local user group to make all necessary payment before considering returning of items to PASS if required. Bylaw notification. We’ve spent countless meetings working on these bylaws with the intent to not open them again any time in the near future. Should the bylaws be opened again, we have included a clause ensuring that the PASS membership is involved. I’m proud that the Board has remained committed to transparency and accountability to members. This clause will require that same level of commitment in the future even when all the current Board members have rolled off. I think that covers everything.  I’d encourage you to look through the red-line document and see the changes.  It’s helpful to look at the language that’s being removed and the language that’s being added.  I’m happy to answer any questions here or you can email them to [email protected].

    Read the article

  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
    I may seem a bit wordy at first, but I hope it will make it easier for all of you to understand what I am doing in the first place. I have an uncommon but enjoyable activity of collecting as many species of wood from around the world as I can (over 2,900 so far). Ok, that is the real world. Meanwhile I have spent over 8 years compiling over 5.8 meg of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome so I could build a new database-driven home for all this research. I am still slow at it but getting there. The original premise was to find evidence of as many species of woods in the world as I can. The more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven. So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project is. I have a file of all the woods in the second largest wood collection in the world, the Tervuren wood collection in Belgium, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren to the 'species' table where I keep the reported data would be a major desirable advancement in the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place but represent the same wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates. At one point, and with the help of others from another forum, I may very well have finally got the proper SQL statement. When I ran it, though, the system said (semi-amusingly at first) ----- that it had gone away! After looking up on the Net what could have done this, one reason is that the MySQL timeout lapses, probably because of the large size of the files I am running. I am running this on a rented account on GoDaddy so I cannot go about trying to adjust any config file. For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can name the species_master back to just species.sql once I am happy all worked well. The species file has about 18 columns in it but only 5 columns match the columns in the Tervuren file (name for name, and collation also). The rest of the columns are just along for the ride, so to speak. The common key in both is the 'species_name' column. I am not sure it is at all proper to call one a primary key and the other a foreign key since there really is no relational connection between them. One is just more data for the other and can disappear after, never to be referred to by the working code in the application. I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names plus others) without generating NEW duplicates in the first place. 
Watch out thinking that a SELECT DISTINCT statement may do the job, because absolutely NO records in the species table must get destroyed in the process and there is no way (well, that I know of) to tell the 'DISTINCT' command this. Yes, the original 'species' table has duplicates in it even before all this but, trust me ---- they have to be removed the long hard way, manually record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form when bringing new names from tervuren_target.species_name into species.species_name. I am hoping and thinking that a straight SQL solution should work --- except for that nasty timeout. How do I get past that? Could it mean that I may have to turn to a PHP plus SQL method?? Or ..... would I have to break up the Tervuren files into a few smaller ones and run them independently (hope not....)? So far, what seems like it should be easy has proven to be unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it may seem on the surface. By the way --- I am running a quad-core 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end. I have a direct ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be equivalent to (ahem) lighting a rocket under my project (especially compared to doing this record by record manually)! This is my first time in this forum, so I do not know how I can receive any replies. Do I have to come back here periodically or are replies emailed out also? It would be great if you CC'd copies to me at billmudry at rogers.com :-) Much thanks for your patience and help, Bill Mudry Mississauga, Ontario Canada (next to Toronto).
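One way to phrase this kind of append so that existing rows are never touched and no new duplicates are created is an anti-join insert, run in small batches so each pass finishes before the server timeout. The sketch below is only an illustration: the species, tervuren_target and species_name names come from the question above, while the connection settings, the batch size of 1000 and the choice of Java/JDBC as the driver (the same statement can be issued from the existing PHP code) are assumptions; only the shared species_name column is shown, and the other matching columns would be listed the same way.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class AppendNewSpecies {
    public static void main(String[] args) throws Exception {
        // Anti-join insert: only names not already present in species are
        // copied, so existing rows are untouched and no new duplicates form.
        // DISTINCT here only de-duplicates the incoming SELECT; it never
        // changes rows already in species. LIMIT keeps each pass small
        // enough to finish before the "server has gone away" timeout.
        String sql =
            "INSERT INTO species (species_name) " +
            "SELECT DISTINCT t.species_name " +
            "FROM tervuren_target t " +
            "LEFT JOIN species s ON s.species_name = t.species_name " +
            "WHERE s.species_name IS NULL " +
            "LIMIT 1000";

        // Placeholder connection settings -- substitute the real host,
        // database name and credentials.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/taxa", "user", "password")) {
            int inserted;
            do {
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    inserted = ps.executeUpdate(); // rows copied in this pass
                }
            } while (inserted > 0); // stop once nothing new is left to copy
        }
    }
}

An index on species_name in both tables keeps the anti-join fast, and because each pass copies only names that are not yet present, the loop can simply be re-run after any interruption without creating duplicates.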

    Read the article

  • CodePlex Daily Summary for Thursday, September 06, 2012

    CodePlex Daily Summary for Thursday, September 06, 2012Popular Releasesmenu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version-1.3.1, Modifications and Bug Fixed (from v 1.3.0) New User Manual for iPDC-v1.3.1 available on websites. Bug resolved : PMU Simulator TCP connection error and hang connection for client (PDC). Now PMU Simulator (server) can communicate more than one PDCs (clients) over TCP and UDP parallely. PMU Simulator is now sending the exact data frames as mentioned in data rate by user. PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales Documents ManufacturingInstructions ProductCatalog TerritorySalesDrilldown WorkOrderRouting How to install the sample You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...Desktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. 
Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators Note: SQL 2012 version coming soon.Hidden Capture (HC): Hidden Capture 1.1: Hidden Capture 1.1 by Mohsen E.Dawatgar http://Hidden-Capture.blogfa.comExt Spec: Ext Spec 0.2.1: Refined examples and improved distribution options.The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.New ProjectsAny-Service: AnyService is a .net 4.0 Windows service shell. It hosts any windows application in non-gui mode to run as a service.BabyCloudDrives - the multi cloud drive desktop's application: wpf ????BLACK ORANGE: Download The HPAD TEXT EDITOR and use it Wisely.. CodePlex New Release Checker: CodePlex New Release Checker is a small library that makes it easy to add, "New Version Available!" 
functionality to your CodePlex project.Collect: ????????!CSVManager: CSV??CSV?????,????CSV??,??????Exam Project: My Exam Project. Computer Vision, C and OpenCV-FTP: Hey guys thanks for checking out my ftp!Haushaltsbuch: 1ModMaker.Lua: ModMaker.Lua is an open source .NET library that parses and executes Lua code.MyJabbr: MyJabbr netduinoscope: Design shield and software to use netduino as oscilloscopeNetSurveillance Web Application: Net Surveillance Web ApplicationNiconicoApiHelper: ????API?????????OStega: A simple library for encrypt text into an bmp or png image.OURORM: ormTFS Cloud Deployment Toolkit: The TFS Cloud Deployment Toolkit is a set of tools that integrate with TFS 2010 to help manage configuration and deployment to various remote environments.The Visual Guide for Building Team Foundation Server 2012 Environments: A step-by-step guide for building Team Foundation Server 2012 environments that include SharePoint Server 2010, SQL Server 2012, Windows Server 2012 and more!WinRT LineChart: An attempt at creating an usable LineChart for everyone to use in his/her own Windows 8 Apps

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because its much harder to be best in class across such a wide a range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.  What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.  Supply Chain Management Systems in a Virtual Enterprise Historically enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.  Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.  Case in point, almost everyone has ordered from Amazon.com at one time or another. 
Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).  Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together. Disturbing the Peace with Acquisitions Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation. The Flip Side of Fulfillment Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.  The Clear Front Runner No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. 
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • Watching a variable for changes without polling.

    - by milkfilk
    I'm using a framework called Processing which is basically a Java applet. It has the ability to do key events because Applet can. You can also roll your own callbacks of sorts into the parent. I'm not doing that right now and maybe that's the solution. For now, I'm looking for a more POJO solution. So I wrote some examples to illustrate my question. Please ignore using key events on the command line (console). Certainly this would be a very clean solution but it's not possible on the command line and my actual app isn't a command line app. In fact, a key event would be a good solution for me but I'm trying to understand events and polling beyond just keyboard specific problems. Both these examples flip a boolean. When the boolean flips, I want to fire something once. I could wrap the boolean in an Object so if the Object changes, I could fire an event too. I just don't want to poll with an if() statement unnecessarily. import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; /* * Example of checking a variable for changes. * Uses dumb if() and polls continuously. */ public class NotAvoidingPolling { public static void main(String[] args) { boolean typedA = false; String input = ""; System.out.println("Type 'a' please."); while (true) { InputStreamReader isr = new InputStreamReader(System.in); BufferedReader br = new BufferedReader(isr); try { input = br.readLine(); } catch (IOException ioException) { System.out.println("IO Error."); System.exit(1); } // contrived state change logic if (input.equals("a")) { typedA = true; } else { typedA = false; } // problem: this is polling. if (typedA) System.out.println("Typed 'a'."); } } } Running this outputs: Type 'a' please. a Typed 'a'. On some forums people suggested using an Observer. And although this decouples the event handler from class being observed, I still have an if() on a forever loop. import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.util.Observable; import java.util.Observer; /* * Example of checking a variable for changes. * This uses an observer to decouple the handler feedback * out of the main() but still is polling. */ public class ObserverStillPolling { boolean typedA = false; public static void main(String[] args) { // this ObserverStillPolling o = new ObserverStillPolling(); final MyEvent myEvent = new MyEvent(o); final MyHandler myHandler = new MyHandler(); myEvent.addObserver(myHandler); // subscribe // watch for event forever Thread thread = new Thread(myEvent); thread.start(); System.out.println("Type 'a' please."); String input = ""; while (true) { InputStreamReader isr = new InputStreamReader(System.in); BufferedReader br = new BufferedReader(isr); try { input = br.readLine(); } catch (IOException ioException) { System.out.println("IO Error."); System.exit(1); } // contrived state change logic // but it's decoupled now because there's no handler here. 
if (input.equals("a")) { o.typedA = true; } } } } class MyEvent extends Observable implements Runnable { // boolean typedA; ObserverStillPolling o; public MyEvent(ObserverStillPolling o) { this.o = o; } public void run() { // watch the main forever while (true) { // event fire if (this.o.typedA) { setChanged(); // in reality, you'd pass something more useful notifyObservers("You just typed 'a'."); // reset this.o.typedA = false; } } } } class MyHandler implements Observer { public void update(Observable obj, Object arg) { // handle event if (arg instanceof String) { System.out.println("We received:" + (String) arg); } } } Running this outputs: Type 'a' please. a We received:You just typed 'a'. I'd be ok if the if() was a NOOP on the CPU. But it's really comparing every pass. I see real CPU load. This is as bad as polling. I can maybe throttle it back with a sleep or compare the elapsed time since last update but this is not event driven. It's just less polling. So how can I do this smarter? How can I watch a POJO for changes without polling? In C# there seems to be something interesting called properties. I'm not a C# guy so maybe this isn't as magical as I think. private void SendPropertyChanging(string property) { if (this.PropertyChanging != null) { this.PropertyChanging(this, new PropertyChangingEventArgs(property)); } }
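Where every change can be funneled through a setter, the usual plain-Java answer is to fire the notification from the setter itself, so no thread ever sits in a loop comparing the old value with the new one. Below is a minimal sketch using java.beans.PropertyChangeSupport; the WatchedFlag wrapper and its names are made up for illustration and are not part of the code above.

import java.beans.PropertyChangeEvent;
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hypothetical wrapper: the event is fired from the setter, so nothing
// ever loops over the field comparing old against new.
class WatchedFlag {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private boolean typedA;

    public void addListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    public void setTypedA(boolean value) {
        boolean old = this.typedA;
        this.typedA = value;
        // PropertyChangeSupport only notifies listeners if the value changed
        pcs.firePropertyChange("typedA", old, value);
    }
}

public class NoPollingExample {
    public static void main(String[] args) throws Exception {
        WatchedFlag flag = new WatchedFlag();
        flag.addListener(new PropertyChangeListener() {
            public void propertyChange(PropertyChangeEvent evt) {
                System.out.println("typedA changed to " + evt.getNewValue());
            }
        });

        System.out.println("Type 'a' please.");
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        String input;
        while ((input = br.readLine()) != null) {
            flag.setTypedA(input.equals("a")); // listener runs only on a change
        }
    }
}

As with the C# property shown above, this only helps when every mutation goes through the setter; a bare field cannot be watched in Java without polling or instrumentation, which is why the blocking readLine() plus a notifying setter replaces the busy if() check.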

    Read the article

  • Dynamic scoping in Clojure?

    - by j-g-faustus
    Hi, I'm looking for an idiomatic way to get dynamically scoped variables in Clojure (or a similar effect) for use in templates and such. Here is an example problem using a lookup table to translate tag attributes from some non-HTML format to HTML, where the table needs access to a set of variables supplied from elsewhere: (def *attr-table* ; Key: [attr-key tag-name] or [boolean-function] ; Value: [attr-key attr-value] (empty array to ignore) ; Context: Variables "tagname", "akey", "aval" '( ; translate :LINK attribute in <a> to :href [:LINK "a"] [:href aval] ; translate :LINK attribute in <img> to :src [:LINK "img"] [:src aval] ; throw exception if :LINK attribute in any other tag [:LINK] (throw (RuntimeException. (str "No match for " tagname))) ; ... more rules ; ignore string keys, used for internal bookkeeping [(string? akey)] [] )) ; ignore I want to be able to evaluate the rules (left hand side) as well as the result (right hand side), and need some way to put the variables in scope at the location where the table is evaluated. I also want to keep the lookup and evaluation logic independent of any particular table or set of variables. I suppose there are similar issues involved in templates (for example for dynamic HTML), where you don't want to rewrite the template processing logic every time someone puts a new variable in a template. Here is one approach using global variables and bindings. I have included some logic for the table lookup: ;; Generic code, works with any table on the same format. (defn rule-match? [rule-val test-val] "true if a single rule matches a single argument value" (cond (not (coll? rule-val)) (= rule-val test-val) ; plain value (list? rule-val) (eval rule-val) ; function call :else false )) (defn rule-lookup [test-val rule-table] "looks up rule match for test-val. Returns result or nil." (loop [rules (partition 2 rule-table)] (when-not (empty? rules) (let [[select result] (first rules)] (if (every? #(boolean %) (map rule-match? select test-val)) (eval result) ; evaluate and return result (recur (rest rules)) ))))) ;; Code specific to *attr-table* (def tagname) ; need these globals for the binding in html-attr (def akey) (def aval) (defn html-attr [tagname h-attr] "converts to html attributes" (apply hash-map (flatten (map (fn [[k v :as kv]] (binding [tagname tagname akey k aval v] (or (rule-lookup [k tagname] *attr-table*) kv))) h-attr )))) (defn test-attr [] "test conversion" (prn "a" (html-attr "a" {:LINK "www.google.com" "internal" 42 :title "A link" })) (prn "img" (html-attr "img" {:LINK "logo.png" }))) user=> (test-attr) "a" {:href "www.google.com", :title "A link"} "img" {:src "logo.png"} This is nice in that the lookup logic is independent of the table, so it can be reused with other tables and different variables. (Plus of course that the general table approach is about a quarter of the size of the code I had when I did the translations "by hand" in a giant cond.) It is not so nice in that I need to declare every variable as a global for the binding to work. Here is another approach using a "semi-macro", a function with a syntax-quoted return value, that doesn't need globals: (defn attr-table [tagname akey aval] `( [:LINK "a"] [:href ~aval] [:LINK "img"] [:src ~aval] [:LINK] (throw (RuntimeException. (str "No match for " tagname))) ; ... more rules [(string? ~akey)] [] ))) Only a couple of changes are needed to the rest of the code: In rule-match?, when syntax-quoted the function call is no longer a list: - (list? rule-val) (eval rule-val) + (seq? 
rule-val) (eval rule-val) In html-attr: - (binding [tagname tagname akey k aval v] - (or (rule-lookup [k tagname] *attr-table*) kv))) + (or (rule-lookup [k tagname] (attr-table tagname k v)) kv))) And we get the same result without globals. (And without dynamic scoping.) Are there other alternatives to pass along sets of variable bindings declared elsewhere, without the globals required by Clojure's binding? Is there an idiomatic way of doing it, like Ruby's binding or Javascript's function.apply(context)?

    Read the article

  • Unable to connect to Wireless after installing Ubuntu 12.10

    - by Moulik
    I am using Asus U56E laptop and after installing Ubuntu 12.10 alongside Windows 8, I am unable to connect to the Wireless. I have been trying to solve this problem since two weeks and couldn't solve it. Please help. Any answer would be appreciated. Here are some command-line results. lspci -v | grep -iA 7 network ubuntu@ubuntu:~$ lspci -v | grep -iA 7 network 02:00.0 Network controller: Intel Corporation Centrino Wireless-N + WiMAX 6150 (rev 67) Subsystem: Intel Corporation Centrino Wireless-N + WiMAX 6150 BGN Flags: bus master, fast devsel, latency 0, IRQ 52 Memory at de800000 (64-bit, non-prefetchable) [size=8K] Capabilities: <access denied> Kernel driver in use: iwlwifi Kernel modules: iwlwifi lsmod | grep iwlwifi ubuntu@ubuntu:~$ lsmod | grep iwlwifi iwlwifi 386826 0 mac80211 539908 1 iwlwifi cfg80211 206566 2 iwlwifi,mac80211 ubuntu@ubuntu:~$ dmesg | grep iwlwifi [ 57.846261] iwlwifi: Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree: [ 57.846264] iwlwifi: Copyright(c) 2003-2012 Intel Corporation [ 57.846336] iwlwifi 0000:02:00.0: >pci_resource_len = 0x00002000 [ 57.846338] iwlwifi 0000:02:00.0: >pci_resource_base = ffffc90000c7c000 [ 57.846341] iwlwifi 0000:02:00.0: >HW Revision ID = 0x67 [ 57.846438] iwlwifi 0000:02:00.0: >irq 52 for MSI/MSI-X [ 59.558335] iwlwifi 0000:02:00.0: >loaded firmware version 41.28.5.1 build 33926 [ 59.558514] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEBUG disabled [ 59.558516] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEBUGFS enabled [ 59.558517] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEVICE_TRACING enabled [ 59.558519] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEVICE_TESTMODE enabled [ 59.558520] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_P2P disabled [ 59.558522] iwlwifi 0000:02:00.0: >Detected Intel(R) Centrino(R) Wireless-N + WiMAX 6150 BGN, REV=0x84 [ 59.558583] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 59.569083] iwlwifi 0000:02:00.0: >device EEPROM VER=0x557, CALIB=0x6 [ 59.569085] iwlwifi 0000:02:00.0: >Device SKU: 0x150 [ 59.569087] iwlwifi 0000:02:00.0: >Valid Tx ant: 0x1, Valid Rx ant: 0x3 [ 59.569100] iwlwifi 0000:02:00.0: >Tunable channels: 13 802.11bg, 0 802.11a channels [ 70.208469] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 70.208648] iwlwifi 0000:02:00.0: >Radio type=0x1-0x2-0x0 [ 70.366319] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 70.366470] iwlwifi 0000:02:00.0: >Radio type=0x1-0x2-0x0 sudo lshw -c network ubuntu@ubuntu:~$ sudo lshw -c network *-network description: Wireless interface product: Centrino Wireless-N + WiMAX 6150 vendor: Intel Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: wlan0 version: 67 serial: 40:25:c2:84:99:c4 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=3.5.0-17-generic firmware=41.28.5.1 build 33926 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:52 memory:de800000-de801fff *-network description: Ethernet interface product: AR8151 v2.0 Gigabit Ethernet vendor: Atheros Communications Inc. 
physical id: 0 bus info: pci@0000:04:00.0 logical name: eth0 version: c0 serial: 54:04:a6:2b:6a:ef capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI latency=0 link=no multicast=yes port=twisted pair resources: irq:54 memory:dd400000-dd43ffff ioport:a000(size=128) ifconfig ubuntu@ubuntu:~$ ifconfig eth0 Link encap:Ethernet HWaddr 54:04:a6:2b:6a:ef UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:176 errors:0 dropped:0 overruns:0 frame:0 TX packets:176 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:14368 (14.3 KB) TX bytes:14368 (14.3 KB) wlan0 Link encap:Ethernet HWaddr 40:25:c2:84:99:c4 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) iwconfig ubuntu@ubuntu:~$ iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off iwlist scan ubuntu@ubuntu:~$ iwlist scan eth0 Interface doesn't support scanning. lo Interface doesn't support scanning. wlan0 No scan results nm-tool ubuntu@ubuntu:~$ nm-tool NetworkManager Tool State: disconnected - Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: atl1c State: unavailable Default: no HW Address: 54:04:A6:2B:6A:EF Capabilities: Carrier Detect: yes Wired Properties Carrier: off - Device: wlan0 ---------------------------------------------------------------- Type: 802.11 WiFi Driver: iwlwifi State: disconnected Default: no HW Address: 40:25:C2:84:99:C4 Capabilities: Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points hypeness2: Infra, 00:21:29:DA:08:4F, Freq 2462 MHz, Rate 54 Mb/s, Strength 42 WPA love: Infra, 68:7F:74:17:02:66, Freq 2412 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 DIRECT-MwSCX-3400Pamela: Infra, 02:15:99:A3:3F:AC, Freq 2412 MHz, Rate 54 Mb/s, Strength 22 WPA2 router: Infra, 1C:AF:F7:D6:76:F3, Freq 2417 MHz, Rate 54 Mb/s, Strength 20 WPA2 wing: Infra, E8:40:F2:34:E4:F7, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 WPA WPA2 132LINKSYS: Infra, 00:1A:70:80:1F:E9, Freq 2437 MHz, Rate 54 Mb/s, Strength 57 WEP VMITTAL: Infra, E0:46:9A:3C:F0:C4, Freq 2412 MHz, Rate 54 Mb/s, Strength 27 WEP HP-Print-10-LaserJet 1025: Infra, 7C:E9:D3:7E:F8:10, Freq 2437 MHz, Rate 54 Mb/s, Strength 59 ACNBB: Infra, 00:26:75:22:A6:2F, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 SATKAIVAL: Infra, 00:18:E7:CE:69:A6, Freq 2412 MHz, Rate 54 Mb/s, Strength 69 WPA WPA2 hypeness: Infra, B8:E6:25:24:C3:B1, Freq 2437 MHz, Rate 54 Mb/s, Strength 54 WPA WPA2 CSNetwork: Infra, BC:14:01:58:C5:88, Freq 2437 MHz, Rate 54 Mb/s, Strength 25 WPA WPA2 tharma: Infra, BC:14:01:E2:06:18, Freq 2412 MHz, Rate 54 Mb/s, Strength 15 WPA WPA2 Active2.4: Infra, 10:6F:3F:0E:F3:8E, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA WPA2 ACNBB: Infra, 
00:26:75:58:4E:7A, Freq 2437 MHz, Rate 54 Mb/s, Strength 85 KO: Infra, BC:14:01:2E:AF:A8, Freq 2452 MHz, Rate 54 Mb/s, Strength 22 WPA WPA2 FEAR: Infra, 00:18:4D:C0:BC:58, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA Pamela: Infra, BC:14:01:52:F6:F8, Freq 2412 MHz, Rate 54 Mb/s, Strength 24 WPA WPA2 bvrk2: Infra, 78:CD:8E:7B:3C:79, Freq 2457 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 BELL030: Infra, D8:6C:E9:17:AF:09, Freq 2462 MHz, Rate 54 Mb/s, Strength 22 WPA2 Desai: Infra, 00:1D:7E:52:FB:C5, Freq 2437 MHz, Rate 54 Mb/s, Strength 14 WEP Sritharan: Infra, BC:14:01:E5:59:78, Freq 2462 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 PFN: Infra, 00:13:10:8B:CF:45, Freq 2437 MHz, Rate 54 Mb/s, Strength 19 WEP rfkill list all ubuntu@ubuntu:~$ rfkill list all 0: asus-wlan: Wireless LAN Soft blocked: no Hard blocked: no 1: asus-wimax: WiMAX Soft blocked: yes Hard blocked: no 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no so these are some more results sudo modprobe -r iwlwifi ubuntu@ubuntu:~$ sudo modprobe -r iwlwifi sudo modprobe iwlwifi 11n_disable=1 ubuntu@ubuntu:~$ sudo modprobe iwlwifi 11n_disable=1 echo "blacklist asus_wmi" | sudo tee -a /etcmodprobe.d/blacklist.conf ubuntu@ubuntu:~$ echo "blacklist asus_wmi" | sudo tee -a /etc/modprobe.d/blacklist.conf blacklist asus_wmi echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi.conf ubuntu@ubuntu:~$ echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi.conf options iwlwifi 11n_disable=1 sudo modprobe -rfv iwlwifi ubuntu@ubuntu:~$ sudo modprobe -rfv iwlwifi rmmod /lib/modules/3.5.0-17-generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko rmmod /lib/modules/3.5.0-17-generic/kernel/net/mac80211/mac80211.ko rmmod /lib/modules/3.5.0-17-generic/kernel/net/wireless/cfg80211.ko sudo modprobe -v iwlwifi ubuntu@ubuntu:~$ sudo modprobe -v iwlwifi insmod /lib/modules/3.5.0-17-generic/kernel/net/wireless/cfg80211.ko insmod /lib/modules/3.5.0-17-generic/kernel/net/mac80211/mac80211.ko insmod /lib/modules/3.5.0-17-generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko 11n_disable=1

    Read the article

  • Servlet dispatcher is currently unavailable

    - by theJava
<?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:p="http://www.springframework.org/schema/p"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
                               http://www.springframework.org/schema/beans/spring-beans.xsd">

        <bean id="viewResolver"
              class="org.springframework.web.servlet.view.InternalResourceViewResolver"
              p:prefix="/WEB-INF/jsp/" p:suffix=".jsp" />

        <bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url" value="jdbc:mysql://127.0.0.1:3306/spring"/>
            <property name="username" value="monty"/>
            <property name="password" value="indian"/>
        </bean>

        <bean id="mySessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
            <property name="dataSource" ref="myDataSource" />
            <property name="annotatedClasses">
                <list>
                    <value>uk.co.vinoth.spring.domain.User</value>
                </list>
            </property>
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
                    <prop key="hibernate.show_sql">true</prop>
                    <prop key="hibernate.hbm2ddl.auto">create</prop>
                </props>
            </property>
        </bean>

        <bean id="myUserDAO" class="uk.co.vinoth.spring.dao.UserDAOImpl">
            <property name="sessionFactory" ref="mySessionFactory"/>
        </bean>

        <bean name="/user/*.htm" class="uk.co.vinoth.spring.web.UserController">
            <property name="userDAO" ref="myUserDAO" />
        </bean>
    </beans>

    The above is my bean configuration. Why do I get the error below when I run my application? My logs folder is empty...

    org.apache.catalina.loader.StandardClassLoader@122c9df
    org.springframework.web.servlet.DispatcherServlet
    java.lang.ClassNotFoundException: org.springframework.web.servlet.DispatcherServlet
        at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1436)
        at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1282)
        at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1068)
        at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:966)
        at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:3996)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4266)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014)
        at org.apache.catalina.core.StandardHost.start(StandardHost.java:736)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014)
        at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
        at org.apache.catalina.core.StandardService.start(StandardService.java:448)
        at org.apache.catalina.core.StandardServer.start(StandardServer.java:700)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:552)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:295)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:433)
    Dec 22, 2010 3:44:48 PM org.apache.catalina.core.StandardContext loadOnStartup
    SEVERE: Servlet /interMedix threw load() exception
    java.lang.ClassNotFoundException: org.springframework.web.servlet.DispatcherServlet
        at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1436)
        at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1282)
        at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1068)
        at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:966)
        at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:3996)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4266)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014)
        at org.apache.catalina.core.StandardHost.start(StandardHost.java:736)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014)
        at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
        at org.apache.catalina.core.StandardService.start(StandardService.java:448)
        at org.apache.catalina.core.StandardServer.start(StandardServer.java:700)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:552)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:295)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:433)
    Dec 22, 2010 3:44:48 PM org.apache.coyote.http11.Http11BaseProtocol start
    INFO: Starting Coyote HTTP/1.1 on http-8181
    Dec 22, 2010 3:44:48 PM org.apache.jk.common.ChannelSocket init
    INFO: JK: ajp13 listening on /0.0.0.0:8009
    Dec 22, 2010 3:44:48 PM org.apache.jk.server.JkMain start
    INFO: Jk running ID=0 time=0/27 config=null
    Dec 22, 2010 3:44:48 PM org.apache.catalina.storeconfig.StoreLoader load
    INFO: Find registry server-registry.xml at classpath resource
    Dec 22, 2010 3:44:49 PM org.apache.catalina.startup.Catalina start
    INFO: Server startup in 558 ms
    Dec 22, 2010 3:44:50 PM org.apache.catalina.core.StandardWrapperValve invoke
    INFO: Servlet dispatcher is currently unavailable
    Dec 22, 2010 3:50:18 PM org.apache.catalina.core.StandardWrapperValve invoke
    INFO: Servlet dispatcher is currently unavailable

    But I have added spring-web-mvc to my classpath, and it does contain this class file.
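    As a side note, the ClassNotFoundException here means the webapp class loader cannot see the class at startup, regardless of what an IDE build path shows. A minimal, hypothetical sketch for checking where (or whether) the class resolves from a given class loader is shown below; only the class name comes from the question, the rest is illustrative. Run standalone it checks the JVM classpath; pasted into a scratch JSP or servlet inside the webapp, the same two lines show what the webapp class loader (WEB-INF/classes and WEB-INF/lib) actually sees.

    import java.security.CodeSource;

    public class WhereIsDispatcherServlet {
        public static void main(String[] args) {
            try {
                // Ask the current class loader to resolve the class that Tomcat could not find.
                Class<?> c = Class.forName("org.springframework.web.servlet.DispatcherServlet");
                CodeSource src = c.getProtectionDomain().getCodeSource();
                System.out.println("Loaded from: " + (src != null ? src.getLocation() : "bootstrap/unknown"));
            } catch (ClassNotFoundException e) {
                System.out.println("Not visible to this class loader: " + e.getMessage());
            }
        }
    }

    A common cause of this particular startup trace is that the Spring MVC jar is on the compile-time classpath but not packaged into WEB-INF/lib of the deployed webapp; the sketch above helps confirm or rule that out.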

    Read the article

  • OIM 11g : Multi-thread approach for writing custom scheduled job

    - by Saravanan V S
    In this post I have shared my experience of designing and developing an OIM scheduled job that uses a multi-threaded approach to update data in OIM through its APIs. I have used the thread pool pattern (in particular, a fixed thread pool) in developing the scheduled job. The thread pooling pattern has noted advantages over the thread-per-task approach; a few of them are listed here:

    · Threads are reused
    · Thread creation and tear-down cost is reduced
    · Task execution latency is reduced
    · Improved performance
    · Controlled and efficient management of the memory and resources used by threads

    More about Java thread pools: http://docs.oracle.com/javase/tutorial/essential/concurrency/pools.html

    The following diagram depicts the high-level architecture of the scheduled job, which processes input from a flat file to update OIM process form data using the fixed thread pool approach.

    The custom scheduled job shared in this post was developed to meet the following requirements:

    1) It needs to process a CSV extract that contains an identity, an account identifying key, and the list of data to be updated on an existing OIM resource account.
    2) The CSV file can contain data for multiple resources configured in OIM.
    3) The list of attributes to update and the mapping between CSV columns and OIM fields may vary between resources.

    The following are the three Java classes developed for this requirement (only a prototype of the code is given, to explain how to use thread pools in a scheduled task).

    CustomScheduler.java - Implementation of the TaskSupport class that reads the parameters configured on the schedule job and passes them to the thread executor class.

    package com.oracle.oim.scheduler;

    import java.util.HashMap;

    import com.oracle.oim.bo.MultiThreadDataRecon;

    import oracle.iam.scheduler.vo.TaskSupport;

    public class CustomScheduler extends TaskSupport {

        public void execute(HashMap options) throws Exception {

            /* Read schedule job parameters */
            String param1 = (String) options.get("Parameter1");
            .
            int noOfThreads = (Integer) options.get("No of Threads");
            .
            String paramN = (String) options.get("ParameterN");

            /* Provide all the required input configured on the schedule job to the
               thread pool executor implementation class, for example:
               1) Name of the file
               2) Delimiter
               3) Header row number
               4) Line escape character
               5) Config and resource map lookup
               6) Number of threads to create */
            new MultiThreadDataRecon(all_required_parameters, noOfThreads).reconcile();
        }

        public HashMap getAttributes() { return null; }

        public void setAttributes() { }
    }

    MultiThreadDataRecon.java - Helper class that reads data from the input file, initializes the thread executor, and builds the task queue.

    package com.oracle.oim.bo;

    import <required file IO classes>;
    import <required java.util classes>;
    import <required OIM API classes>;
    import <csv reader api>;

    public class MultiThreadDataRecon {

        private int noOfThreads;
        private ExecutorService threadExecutor = null;

        public MultiThreadDataRecon(<required params>, int noOfThreads) {
            // Store parameters locally
            .
            this.noOfThreads = noOfThreads;
        }

        /**
         * Initialize
         */
        private void init() throws Exception {
            try {
                // Initialize CSV file reader API objects
                // Initialize OIM API objects

                /* Initialize the fixed thread pool executor if the number of
                   threads configured is more than 1 */
                if (noOfThreads > 1) {
                    threadExecutor = Executors.newFixedThreadPool(noOfThreads);
                } else {
                    threadExecutor = Executors.newSingleThreadExecutor();
                }

                /* Initialize the TaskProcessor class, which will be executing
                   tasks from the queue */
                TaskProcessor.initializeConfig(params);
            } catch (***Exception e) {
                // TO DO
            }
        }

        /**
         * Method to reconcile data from the CSV into OIM
         */
        public void reconcile() throws Exception {
            try {
                init();
                while (<csv file has line>) {
                    processRow(line);
                }
                /* Initiate thread shutdown */
                threadExecutor.shutdown();
                while (!threadExecutor.isTerminated()) {
                    // Wait for all tasks to complete.
                }
            } catch (Exception e) {
                // TO DO
            } finally {
                try {
                    // Close all the file handles
                } catch (IOException e) {
                    // TO DO
                }
            }
        }

        /**
         * Method to process one CSV row
         */
        private void processRow(String row) {
            // Create a task processor instance with the row data.
            // The following call pushes the task onto the work queue, where it
            // waits for the next available thread to execute it.
            threadExecutor.execute(new TaskProcessor(rowData));
        }
    }

    TaskProcessor.java - Implementation of the "Runnable" interface that executes the required business logic to update data in OIM.

    package com.oracle.oim.bo;

    import <required APIs>;

    class TaskProcessor implements Runnable {

        // Required member variables

        /**
         * Constructor
         */
        public TaskProcessor(<row data>) {
            // Initialize and parse the CSV row
        }

        /*
         * Method to initialize the objects required for task execution
         */
        public static void initializeConfig(<params>) {
            // Process params and initialize the required configs and objects
        }

        /*
         * (non-Javadoc)
         *
         * @see java.lang.Runnable#run()
         */
        public void run() {
            if (<is csv data valid>) {
                processData();
            }
        }

        /**
         * Process the received CSV input
         */
        private void processData() {
            try {
                // Find the user in OIM using the identity matching key value from the CSV
                // Find the account to be updated among the user's accounts, based on the account identifying key in the CSV
                // Update the account with the data from the CSV
            } catch (***Exception e) {
                // TO DO
            }
        }
    }
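    To make the fixed-thread-pool idea easy to try outside of OIM, here is a minimal, self-contained sketch of the same pattern: the main thread reads rows from a flat file and hands each row to an ExecutorService, and the pool threads do the per-row work. The file name input.csv, the thread count, and the processRow body are placeholder assumptions for illustration only; they are not part of the scheduled job above.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class FixedPoolCsvDemo {

        public static void main(String[] args) throws IOException, InterruptedException {
            int noOfThreads = 4; // in the real job this would come from the schedule job parameters
            ExecutorService pool = (noOfThreads > 1)
                    ? Executors.newFixedThreadPool(noOfThreads)
                    : Executors.newSingleThreadExecutor();

            // "input.csv" is a hypothetical extract; each line becomes one task.
            try (BufferedReader reader = Files.newBufferedReader(Paths.get("input.csv"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    final String row = line;
                    // execute() queues the row; the next free pool thread picks it up.
                    pool.execute(() -> processRow(row));
                }
            } finally {
                pool.shutdown();                          // stop accepting new tasks
                pool.awaitTermination(1, TimeUnit.HOURS); // block until the queued tasks finish
            }
        }

        private static void processRow(String row) {
            // Placeholder for the real per-row work (e.g. an OIM API update).
            System.out.println(Thread.currentThread().getName() + " processed: " + row);
        }
    }

    Note that the sketch waits with awaitTermination() rather than the busy-wait loop on isTerminated() used in the prototype above; both achieve the same end, but awaitTermination() blocks without spinning the calling thread.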

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI applications blog! This blog will talk about various features, the general roadmap, descriptions of functionality, and implementation steps related to Oracle BI applications. In this first post we start with an overview of the BI apps, and we will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback on that.

    The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, and front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources, including Siebel, Oracle, PeopleSoft, SAP, and others, into actionable intelligence for each business function and user role.

    This blog starts with the key benefits and characteristics of Oracle BI applications. In a series of subsequent blogs, each of these points will be explained in detail.

    Why BI apps?
    - Demonstrate the value of BI to a business user: show reports, dashboards, and a model that can answer their business questions as part of the sales cycle.
    - Demonstrate the technical feasibility of a BI project, significantly lower risk, and improve success.
    - Build vs. buy benefit: you don't have to start with a blank sheet of paper.
    - Help consolidate disparate systems; data integration in M&A situations.
    - Insulate BI consumers from changes in the OLTP.
    - Present OLTP data and highlight issues of poor or missing data, and improve data quality and accuracy.

    Prebuilt Integrations
    - BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP.
    - Co-developed with inputs from functional experts in the BI and Applications teams.
    - Out-of-the-box dimensional model to source model mappings.
    - Multi-source and multi-instance support.

    Rich Data Model
    BI apps have a very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective as well as reflecting the source system complexities.
    - Conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. customer / supplier 360.
    - Over 360 fact tables across 7 product areas: CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM – 56.
    - Supported by 300 physical dimensions.
    - Support for extensive calendars: Gregorian, enterprise, and ledger based.
    - Conformed data model and metrics for real-time vs. warehouse-based reporting.
    - Multi-tenant enabled.

    Extensive BI related transformations
    BI apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. All of these have been distilled into common patterns and abstracted logic which can be readily reused across different modules:
    - Slowly Changing Dimension support
    - Hierarchy flattening support
    - Row / column hybrid hierarchy flattening
    - As Is vs. As Was hierarchy support
    - Currency conversion: support for 3 corporate, CRM, ledger and transaction currencies
    - UOM conversion
    - Internationalization / localization
    - Dynamic data translations
    - Code standardization (Domains)
    - Historical snapshots
    - Cycle and process lifecycle computations
    - Balance facts
    - Equalization of GL accounting chartfields/segments
    - Standardized values for categorizing GL accounts
    - Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL
    - Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll, EBS / Fusion Accruals
    - Complex event interpretation of source data, e.g.:
      o What constitutes a transfer
      o Deriving supervisors via the position hierarchy
      o Deriving the primary assignment in PSFT
      o Categorizing and transposing payroll balances into specific metrics to support side-by-side comparison of measures such as fixed salary, variable salary, tax, bonus, and overtime payments
      o Counting of events, e.g. converting events to fact counters so that, for example, the number of hires can easily be added up and compared alongside the total transfers and terminations
    - Multi-pass processing of multiple sources, e.g. headcount, salary, promotion, performance, to allow side-by-side comparison
    - Adding value to data to aid analysis through banding, additional domain classifications, and groupings to allow higher-level analytical reporting and data discovery
    - Calculation of complex measures, for example:
      o COGS, DSO, DPO, inventory turns, etc.
      o Transfers within a hierarchy, or out of / into a hierarchy, relative to the viewpoint in the hierarchy

    Configurability and Extensibility support
    BI apps offer support for extensibility of various entities, either as automated extensibility or as part of the extension methodology:
    - Key Flexfield and Descriptive Flexfield support
    - Extensible attribute support (JDE)
    - Conformed Domains

    ETL Architecture
    BI apps offer a modular adapter architecture which allows support of multiple product lines in a single conformed model:
    - Multi-source, multi-technology
    - Orchestration: creates a load plan taking into account task dependencies and the customer's deployment, generating a plan from the customer's set of multiple complex ETL tasks
    - Plan optimization allowing parallel ETL tasks
    - Oracle: bitmap indexes and partition management
    - High availability support
    - Follow-the-sun support

    TCO
    BI apps provide several utilities and capabilities that help with the overall total cost of ownership and ensure a rapid implementation:
    - Improved cost of ownership: lower cost to deploy
    - Ongoing support for new versions of the source application
    - Task-based setup flows
    - Data lineage
    - Functional setup performed in a web UI by a functional person
    - Configuration test-to-production support

    Security
    BI apps support both data and object security, enabling implementations to quickly configure the application as per the reporting security needs:
    - Fine-grained object security at report / dashboard and presentation catalog level
    - Data security integration with source systems
    - Extensible to support external data security rules

    Extensive Set of KPIs
    - Over 7000 base and derived metrics across all modules
    - Time series calculations (YoY, % growth, etc.)
    - Common currency and UOM reporting
    - Cross subject area KPIs (analyzing HR vs. GL data, drill from GL to AP/AR, etc.)

    Prebuilt reports and dashboards
    - 3000+ prebuilt reports supporting a large number of industries
    - Hundreds of role-based dashboards
    - Dynamic currency conversion at dashboard level

    Highly tuned Performance
    The BI apps have been tuned over the years for both a very performant ETL and dashboard performance. The applications use best practices and advanced database features to enable the best possible performance:
    - Optimized data model for BI and analytic queries
    - Prebuilt aggregates, and the ability for customers to easily create their own aggregates on warehouse facts, allow for scalable end-user performance
    - Incremental extracts and loads
    - Incremental aggregate build
    - Automatic table index and statistics management
    - Parallel ETL loads
    - Source system deletes handling
    - Low-latency extract with GoldenGate
    - Micro ETL support
    - Bitmap indexes
    - Partitioning support
    - Modularized deployment: start small and add other subject areas seamlessly

    Source Specific Staging and Real Time Schema
    - Support for a source-specific operational reporting schema for EBS, PSFT, Siebel and JDE

    Application Integrations
    The BI apps also allow for integration with source systems as well as other applications, providing added value through BI and enabling BI consumption during operational decision making:
    - Embedded dashboards for Fusion, EBS and Siebel applications
    - Action Link support
    - Marketing Segmentation
    - Sales Predictor Dashboard
    - Territory Management

    External Integrations
    The BI apps data integration choices include support for loading external data:
    - External data enrichment choices: UNSPSC, Item class, etc.
    - Extensible Spend Classification

    Broad Deployment Choices
    - Exalytics support
    - Databases: Oracle, Exadata, Teradata, DB2, MSSQL
    - ETL tool of choice: ODI (coming), Informatica

    Extensible and Customizable
    - Extensible architecture and methodology to add custom and external content
    - Upgradable across releases

    Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics, as well as suggestions on what other topics you would like us to cover.

    Read the article

  • Ubuntu 12.04 + Wifi not working

    - by user171154
    I'm having problems connecting over wireless. At the moment I'm using wicd, and it seems to get stuck on "Verifying AP association...". Without wicd I can get the connection up and ping the Net, but if I take eth0 down (ifconfig eth0 down), my wireless goes away too (same result if I unplug the wire instead). wicd is the only way I can bring eth0 back (which is the main reason I'm using it); ifconfig eth0 and/or ifup eth0 do not re-enable the connection. (I just discovered it leaves out the gateway. Adding the gateway back in re-enables the connection, including wifi; I didn't want to delete the info about wicd above in case it gives someone an idea.) Doing it manually, despite the errors (which it would be nice to also resolve), allows me to ping the outside world:

    ifup wlan0
    ioctl[SIOCSIWENCODEEXT]: Invalid argument
    ioctl[SIOCSIWENCODEEXT]: Invalid argument
    ssh stop/waiting
    ssh start/running, process 17336

    ping -I wlan0 -c 4 8.8.8.8
    PING 8.8.8.8 (8.8.8.8) from 192.168.0.12 wlan0: 56(84) bytes of data.
    64 bytes from 8.8.8.8: icmp_req=1 ttl=43 time=48.8 ms
    64 bytes from 8.8.8.8: icmp_req=2 ttl=43 time=47.9 ms
    64 bytes from 8.8.8.8: icmp_req=3 ttl=43 time=48.7 ms
    64 bytes from 8.8.8.8: icmp_req=4 ttl=43 time=53.2 ms
    --- 8.8.8.8 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3003ms
    rtt min/avg/max/mdev = 47.975/49.711/53.235/2.063 ms

    # iwconfig
    lo        no wireless extensions.
    wlan0     IEEE 802.11bgn  ESSID:"TPLINK"
              Mode:Managed  Frequency:2.427 GHz  Access Point: 64:66:xx:xx:xx:22
              Bit Rate=108 Mb/s  Tx-Power=27 dBm
              Retry long limit:7  RTS thr:off  Fragment thr:off
              Encryption key:off
              Power Management:off
              Link Quality=70/70  Signal level=-39 dBm
              Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
              Tx excessive retries:0  Invalid misc:3  Missed beacon:0

    ip route
    default via 192.168.0.1 dev eth0
    default via 192.168.0.1 dev wlan0  metric 100
    169.254.0.0/16 dev wlan0  scope link  metric 1000
    192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.102
    192.168.0.0/24 dev wlan0  proto kernel  scope link  src 192.168.0.12

    (For the record, I have no idea what the 169.254.0.0 address is doing there.)

    uname -a
    3.2.0-67-generic-pae #101-Ubuntu SMP Tue Jul 15 18:04:54 UTC 2014 i686 i686 i386 GNU/Linux

    lshw -C network
      *-network
           description: Ethernet interface
           product: NetXtreme BCM5751 Gigabit Ethernet PCI Express
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:02:00.0
           logical name: eth0
           version: 01
           serial: 00:11:11:59:fc:09
           size: 100Mbit/s
           capacity: 1Gbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5751-v3.23a ip=192.168.0.102 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
           resources: irq:16 memory:dfcf0000-dfcfffff
      *-network
           description: Wireless interface
           product: AR5418 Wireless Network Adapter [AR5008E 802.11(a)bgn] (PCI-Express)
           vendor: Qualcomm Atheros
           physical id: 0
           bus info: pci@0000:03:00.0
           logical name: wlan0
           version: 01
           serial: f0:7d:68:c1:b4:13
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=ath9k driverversion=3.2.0-67-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
           resources: irq:17 memory:dfbf0000-dfbfffff

    /etc/network/interfaces
    # interfaces(5) file used by ifup(8) and ifdown(8)
    auto lo
    iface lo inet loopback
    source /etc/network/interfaces.eth0
    source /etc/network/interfaces.wlan0

    /etc/network/interfaces.eth0
    #Main Interface
    auto eth0
    iface eth0 inet static
        address 192.168.0.102
        netmask 255.255.255.0
        gateway 192.168.0.1

    /etc/network/interfaces.wlan0
    auto wlan0
    iface wlan0 inet static
        address 192.168.0.12
        gateway 192.168.0.1
        dns-nameservers 192.168.0.1 8.8.8.8
        netmask 255.255.255.0
        wpa-driver wext
        wpa-ssid TPLINK
        wpa-ap-scan 1
        wpa-proto RSN
        wpa-pairwise CCMP
        wpa-group CCMP
        wpa-key-mgmt WPA-PSK
        wpa-psk dca1badb5fd4e9axxx4xxdaaxxfa91xx610bxx6a7d57ef67af9809dxx6af42e39

    /etc/wpa_supplicant.conf
    ctrl_interface=/var/run/wpa_supplicant
    network={
        ssid="TPLINK"
        psk="my password"
        key_mgmt=WPA-PSK
        proto=RSN
        pairwise=CCMP
        group=CCMP
    }

    ifdown eth0
    ifdown: interface eth0 not configured

    ifconfig
    eth0      Link encap:Ethernet  HWaddr 00:11:xx:xx:xx:09
              inet addr:192.168.0.102  Bcast:192.168.0.255  Mask:255.255.255.0
              inet6 addr: fe80::211:11ff:fe59:fc09/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:213690 errors:0 dropped:0 overruns:0 frame:0
              TX packets:155266 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:220057808 (220.0 MB)  TX bytes:21137696 (21.1 MB)
              Interrupt:16

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:196412 errors:0 dropped:0 overruns:0 frame:0
              TX packets:196412 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:153270697 (153.2 MB)  TX bytes:153270697 (153.2 MB)

    wlan0     Link encap:Ethernet  HWaddr f0:7d:xx:xx:xx:13
              inet addr:192.168.0.12  Bcast:192.168.0.255  Mask:255.255.255.0
              inet6 addr: fe80::f27d:68ff:fec1:b413/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:11335 errors:0 dropped:0 overruns:0 frame:0
              TX packets:7287 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2563290 (2.5 MB)  TX bytes:855746 (855.7 KB)

    ifconfig eth0 down

    ifconfig
    eth0      Link encap:Ethernet  HWaddr 00:xx:xx:xx:xx:09
              inet addr:192.168.0.102  Bcast:192.168.0.255  Mask:255.255.255.0
              inet6 addr: fe80::211:11ff:fe59:fc09/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:2 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:192 (192.0 B)  TX bytes:94 (94.0 B)
              Interrupt:16

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:196418 errors:0 dropped:0 overruns:0 frame:0
              TX packets:196418 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:153270871 (153.2 MB)  TX bytes:153270871 (153.2 MB)

    wlan0     Link encap:Ethernet  HWaddr f0:7d:xx:xx:xx:13
              inet addr:192.168.0.12  Bcast:192.168.0.255  Mask:255.255.255.0
              inet6 addr: fe80::f27d:68ff:fec1:b413/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:11359 errors:0 dropped:0 overruns:0 frame:0
              TX packets:7293 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2565482 (2.5 MB)  TX bytes:856363 (856.3 KB)

    ip route
    default via 192.168.0.1 dev wlan0  metric 100
    169.254.0.0/16 dev wlan0  scope link  metric 1000
    192.168.0.0/24 dev wlan0  proto kernel  scope link  src 192.168.0.12
    192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.102

    ping -I wlan0 -c 4 8.8.8.8
    PING 8.8.8.8 (8.8.8.8) from 192.168.0.12 wlan0: 56(84) bytes of data.
    --- 8.8.8.8 ping statistics ---
    4 packets transmitted, 0 received, 100% packet loss, time 3024ms

    ping -I eth0 -c 3 router
    PING router (192.168.0.1) from 192.168.0.102 eth0: 56(84) bytes of data.
    --- router ping statistics ---
    3 packets transmitted, 0 received, 100% packet loss, time 2015ms

    ping -I wlan0 -c 3 router
    PING router (192.168.0.1) from 192.168.0.12 wlan0: 56(84) bytes of data.
    --- router ping statistics ---
    3 packets transmitted, 0 received, 100% packet loss, time 2014ms

    Let me know if you need more info. Thank you in advance.

    Read the article

< Previous Page | 550 551 552 553 554 555 556 557 558 559 560 561  | Next Page >