Search Results

Search found 13104 results on 525 pages for 'malcolm box'.


  • 202 blog articles

    - by mprove
    All my blog articles under blogs.oracle.com since August 2005: 202 blog articles Apr 2012 blogs.oracle.com design patch Mar 2012 Interaction 12 - Critique Mar 2012 Typing. Clicking. Dancing. Feb 2012 Desktop Mobility in Hospitals with Oracle VDI /video Feb 2012 Interaction 12 in Dublin - Highlights of Day 3 Feb 2012 Interaction 12 in Dublin - Highlights of Day 2 Feb 2012 Interaction 12 in Dublin - Highlights of Day 1 Feb 2012 Shit Interaction Designers Say Feb 2012 Tips'n'Tricks for WebCenter #3: How to display custom page titles in Spaces Jan 2012 Tips'n'Tricks for WebCenter #2: How to create an Admin menu in Spaces and save a lot of time Jan 2012 Tips'n'Tricks for WebCenter #1: How to apply custom resources in Spaces Jan 2012 Merry XMas and a Happy 2012! Dec 2011 One Year Oracle SocialChat - The Movie Nov 2011 Frank Ludolph's Last Working Day Nov 2011 Hans Rosling at TED Oct 2011 200 Countries x 200 Years Oct 2011 Blog Aggregation for Desktop Virtualization Oct 2011 Oracle VDI at OOW 2011 Sep 2011 Design for Conversations & Conversations for Design Sep 2011 All Oracle UX Blogs Aug 2011 Farewell Loriot Aug 2011 Oracle VDI 3.3 Overview Aug 2011 Sutherland's Closing Remarks at HyperKult Aug 2011 Surface and Subface Aug 2011 Back to Childhood in UI Design Jul 2011 The Art of Engineering and The Engineering of Art Jul 2011 Oracle VDI Seminar - June-30 Jun 2011 SGD White Paper May 2011 TEDxHamburg Live Feed May 2011 Oracle VDI in 3 Minutes May 2011 Space Ship Earth 2011 May 2011 blog moving times Apr 2011 Frozen tag cloud Apr 2011 Oracle: Hardware Software Complete in 1953 Apr 2011 Interaction Design with Wireframes Apr 2011 A guide to closing down a project Feb 2011 Oracle VDI 3.2.2 Jan 2011 free VDI charts Jan 2011 Sun Founders Panel 2006 Dec 2010 Sutherland on Leadership Dec 2010 SocialChat: Efficiency of E20 Dec 2010 ALWAYS ON Desktop Virtualization Nov 2010 12,000 Desktops at JavaOne Nov 2010 SocialChat on Sharing Best Practices Oct 2010 Globe of Visitors Oct 2010 SocialChat about the Next Big Thing Oct 2010 Oracle VDI UX Story - Wireframes Oct 2010 What's a PC anyway? Oct 2010 SocialChat on Getting Things Done Oct 2010 SocialChat on Infoglut Oct 2010 IT Twenty Twenty Oct 2010 Desktop Virtualization Webcasts from OOW Oct 2010 Oracle VDI 3.2 Overview Sep 2010 Blog Usability Top 7 Sep 2010 100 and counting Aug 2010 Oracle'izing the VDI Blogs Aug 2010 SocialChat on Apple Aug 2010 SocialChat on Video Conferencing Aug 2010 Oracle VDI 3.2 - Features and Screenshots Aug 2010 SocialChat: Don't stop making waves Aug 2010 SocialChat: Giving Back to the Community Aug 2010 SocialChat on Learning in Meetings Aug 2010 iPAD's Natural User Interface Jul 2010 Last day for Sun Microsystems GmbH Jun 2010 SirValUse Celebration Snippets Jun 2010 10 years SirValUse - Happy Birthday! Jun 2010 Wim on Virtualization May 2010 New Home for Oracle VDI Apr 2010 Renaissance Slide Sorter Comments Apr 2010 Unboxing Sun Ray 3 Plus Apr 2010 Desktop Virtualisierung mit Sun VDI 3.1 Apr 2010 Blog Relaunch Mar 2010 Social Messaging Slides from CeBIT Mar 2010 Social Messaging Talk at CeBIT Feb 2010 Welcome Oracle Jan 2010 My last presentation at Sun Jan 2010 Ivan Sutherland on Leadership Jan 2010 Learning French with Sun VDI Jan 2010 Learning Danish with Sun Ray Jan 2010 VDI workshop in Nieuwegein Jan 2010 Happy New Year 2010 Jan 2010 On Creating Slides Dec 2009 Best VDI Ever Nov 2009 How to store the Big Bang Nov 2009 Social Enterprise Tools. Beipiel Sun. 
Nov 2009 Nov-19 Nov 2009 PDF and ODF links on your blog Nov 2009 Q&A on VDI and MySQL Cluster Nov 2009 Zürich next week: Swiss Intranet Summit 09 Nov 2009 Designing for a Sustainable World - World Usabiltiy Day, Nov-12 Nov 2009 How to export a desktop from VDI 3 Nov 2009 Virtualisation Roadshow in the UK Nov 2009 Project Wonderland at EDUCAUSE 09 Nov 2009 VDI Roadshow in Dublin, Nov-26, 2009 Nov 2009 Sun VDI at EDUCAUSE 09 Nov 2009 Sun VDI 3.1 Architecture and New Features Oct 2009 VDI 3.1 is Early-Access Sep 2009 Virtualization for MySQL on VMware Sep 2009 Silpion & 13. Stock Sommerparty Sep 2009 Sun Ray and VMware View 3.1.1 2009-08-31 New Set of Sun Ray Status Icons 2009-08-25 Virtualizing the VDI Core? 2009-08-23 World Usability Day Hamburg 2009 - CfP 2009-07-16 Rising Sun 2009-07-15 featuring twittermeme 2009-06-19 ISC09 Student Party on June-20 /Hamburg 2009-06-18 Before and behind the curtain of JavaOne 2009-06-09 20k desktops at JavaOne 2009-06-01 sweet microblogging 2009-05-25 VDI 3 - Why you need 3 VDI hosts and what you can do about that? 2009-05-21 IA Konferenz 2009 2009-05-20 Sun VDI 3 UX Story - Power of the Web 2009-05-06 Planet of Sun and Oracle User Experience Design 2009-04-22 Sun VDI 3 UX Story - User Research 2009-04-08 Sun VDI 3 UX Story - Concept Workshops 2009-04-06 Localized documentation for Sun Ray Connector for VMware View Manager 1.1 2009-04-03 Sun VDI 3 Press Release 2009-03-25 Sun VDI 3 launches today! 2009-03-25 Sun Ray Connector for VMware View Manager 1.1 Update 2009-03-11 desktop virtualization wiki relaunch 2009-03-06 VDI 3 at CeBIT hall 6, booth E36 2009-03-02 Keyboard layout problems with Sun Ray Connector for VMware VDM 2009-02-23 wikis.sun.com tips & tricks 2009-02-23 Sun VDI 3 is in Early Access 2009-02-09 VirtualCenter unable to decrypt passwords 2009-02-02 Sun & VMware Desktop Training 2009-01-30 VDI at next09? 2009-01-16 Sun VDI: How to use virtual machines with multiple network adapters 2009-01-07 Sun Ray and VMware View 2009-01-07 Hamburg World Usability Day 2008 - Webcasts 2009-01-06 Sun Ray Connector for VMware VDM slides 2008-12-15 mother of all demos 2008-12-08 Build your own Thumper 2008-12-03 Troubleshooting Sun Ray Connector for VMware VDM 2008-12-02 My Roller Tag Cloud 2008-11-28 Sun Ray Connector: SSL connection to VDM 2008-11-25 Setting up SSL and Sun Ray Connector for VMware VDM 2008-11-13 Inspiration for Today and Tomorrow 2008-10-23 Sun Ray Connector for VMware VDM released 2008-10-14 From Sketchpad to ILoveSketch 2008-10-09 Desktop Virtualization on Xing 2008-10-06 User Experience Forum on Xing 2008-10-06 Sun Ray Connector for VMware VDM certified 2008-09-17 Virtual Clouds over Las Vegas 2008-09-14 Bill Verplank sketches metaphors 2008-09-04 End of Early Access - Sun Ray Connector for VMware 2008-08-27 Early Access: Sun Ray Connector for VMware Virtual Desktop Manager 2008-08-12 Sun Virtual Desktop Connector - Insides on Recycling Part 2 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling Part 3 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling 2008-07-20 lost in wiki space 2008-07-07 Evolution of the Desktop 2008-06-17 Virtual Desktop Webcast 2008-06-16 Woodstock 2008-06-16 What's a Desktop PC anyway? 2008-06-09 Virtual-T-Box 2008-06-05 Virtualization Glossary 2008-05-06 Five User Experience Principles 2008-04-25 Virtualization News Feed 2008-04-21 Acetylcholinesterase - Second Season 2008-04-18 Acetylcholinesterase - End of Signal 2007-12-31 Produkt-Management ist... 
2007-10-22 Usability Verbände, Verteiler und Netzwerke. 2007-10-02 The Meaning is the Message 2007-09-28 Visualization Methods 2007-09-10 Inhouse und Open Source Projekte – Usability verankern und Synergien nutzen 2007-09-03 Der Schwabe Darth Vader entdeckt das Virale Marketing 2007-08-29 Dick Hardt 3.0 on Identity 2.0 2007-08-27 quality of written text depends on the tool 2007-07-27 podcasts for reboot9 2007-06-04 It is the user's itch that need to be scratched 2007-05-25 A duel at reboot9 2007-05-14 Taxonomien und Folksonomien - Tagging als neues HCI-Element 2007-05-10 Dueling Interaction Models of Personal-Computing and Web-Computing 2007-03-01 22.März: Weizenbaum. Rebel at Work. /Filmpremiere Hamburg 2007-02-25 Bruce Sterling at UbiComp 2006 /webcast 2006-11-12 FSOSS 2006 /webcasts 2006-11-10 Highway 101 2006-11-09 User Experience Roundtable Hamburg: EuroGEL 2006 2006-11-08 Douglas Adams' Hyperland (BBC 1990) 2006-10-08 Taxonomien und Folksonomien – Tagging als neues HCI-Element 2006-09-13 Usability im Unternehmen 2006-09-13 Doug does HyperScope 2006-08-26 TED Talks and TechTalks 2006-08-21 Kai Krause über seine Freundschaft zu Douglas Adams 2006-07-20 Rebel At Work: Film Portrait on Weizenbaum 2006-07-04 Gabriele Fischer, mp3 2006-06-07 Dick Hardt at ETech 06 2006-06-05 Weinberger: From Control to Conversation 2006-04-16 Eye Tracking at User Experience Roundtable Hamburg 2006-04-14 dropping knowledge 2006-04-09 GEL 2005 2006-03-13 slide photos of reboot7 2006-03-04 Dick Hardt on Identity 2.0 2006-02-28 User Experience Newsletter #13: Versioning 2006-02-03 Ester Dyson on Choice and Happyness 2006-02-02 Requirements-Engineering im Spannungsfeld von Individual- und Produktsoftware 2006-01-15 User Experience Newsletter #12: Intuition Quiz 2005-11-30 User Experience und Requirements-Engineering für Software-Projekte 2005-10-31 Ivan Sutherland on "Research and Fun" 2005-10-18 Ars Electronica / Mensch und Computer 2005 2005-09-14 60 Jahre nach Memex: Über die Unvereinbarkeit von Desktop- und Web-Paradigma 2005-08-31 reboot 7 2005-06-30

    Read the article

  • Web Self Service installation on Windows

    - by Rajesh Sharma
    Web Self Service (WSS) installation on Windows is pretty straightforward, but you might face some issues if it is deployed under Tomcat. Here's a step-by-step guide to installing Oracle Utilities Web Self Service on Windows.

    The installation steps below were done on:

    - Oracle Utilities Framework version 2.2.0
    - Oracle Utilities Application - Customer Care & Billing version 2.2.0
    - Application server - Apache Tomcat 6.0.13 on default port 6500

    Other settings include:

    - SPLBASE = C:\spl\CCBDEMO22
    - SPLENVIRON = CCBV22
    - SPLWAS = TCAT

    Follow these steps for a Web Self Service installation on Windows:

    1. Download the Web Self Service application from edelivery.

    2. Copy the delivery file Release-SelfService-V2.2.0.zip from the Oracle Utilities Customer Care and Billing version 2.2.0 Web Self Service folder on the installation media to a directory on your Windows box where you would like to install the application; in our case it's a temporary folder, C:\wss_temp.

    3. Set up the application environment: execute splenviron.cmd -e <ENVIRON_NAME>

    4. Create a base folder for the Self Service application named SelfService under %SPLEBASE%\splapp\applications

    5. Install Oracle Utilities Web Self Service:

       C:\wss_temp\Release-SelfService-V2.2.0>install.cmd -d %SPLEBASE%\splapp\applications\SelfService

    6. The Web Self Service installation menu appears. Populate environment values for each item.

       ********************************************************
       Pick your installation options:
       ********************************************************
       1. Destination directory name for installation.             | C:\spl\CCBDEMO22\splapp\applications\SelfService
       2. Web Server Host.                                         | CCBV22
       3. Web Server Port Number.                                  | 6500
       4. Mail SMTP Host.                                          | CCBV22
       5. Top Product Installation directory.                      | C:\spl\CCBDEMO22
       6.     Web Application Server Type.                         | TCAT
       7.     When OAS: SPLWeb OC4J instance name is required.     | OC4J1
       8.     When WAS: SPLWeb server instance name is required.   | server1

       P. Process the installation. Each item in the above list should be configured for a successful installation. Choose option to configure or (P) to process the installation: P

       Option 7 and Option 8 can be ignored for TCAT.

    7. The step above installs the SelfService.war file in the destination directory. We need to explode this war file. Change directory to the installation destination folder, and run:

       C:\spl\CCBDEMO22\splapp\applications\SelfService>jar -xf SelfService.war

    8. Review SelfServiceConfig.properties and CMSelfServiceConfig.properties. Change any property values within the files specific to your installation/site. Generally the default settings apply; this exercise assumes that the WEB user already exists in your application database. For more information on property file customization, refer to the Oracle Utilities Web Self Service Configuration section in the Customer Care & Billing Installation Guide.

    9. Add a context entry in server.xml, located under the tomcat-base folder C:\spl\CCBDEMO22\product\tomcatBase\conf:
       ...
       <!-- SPL Context -->
       <Context path="" docBase="C:/spl/CCBDEMO22/splapp/applications/root" debug="0" privileged="true"/>
       <Context path="/appViewer" docBase="C:/spl/CCBDEMO22/splapp/applications/appViewer" debug="0" privileged="true"/>
       <Context path="/help" docBase="C:/spl/CCBDEMO22/splapp/applications/help" debug="0" privileged="true"/>
       <Context path="/XAIApp" docBase="C:/spl/CCBDEMO22/splapp/applications/XAIApp" debug="0" privileged="true"/>
       <Context path="/SelfService" docBase="C:/spl/CCBDEMO22/splapp/applications/SelfService" debug="0" privileged="true"/>
       ...

    10. Add a user in the tomcat-users.xml file located under the tomcat-base folder C:\spl\CCBDEMO22\product\tomcatBase\conf:

       <user username="WEB" password="selfservice" roles="cisusers"/>

       Note the password is "selfservice"; this is the default password set within the SelfServiceConfig.properties file with base64 encoding.

    11. Restart the application (spl.cmd stop | start).

    12. Although Apache Tomcat version 6.0.13 does not come with the admin pack, you can verify whether the SelfService application is loaded and running by going to http://server:port/manager/list; in our case it'll be http://ccbv22:6500/manager/list. The following output will be displayed:

       OK - Listed applications for virtual host localhost
       /admin:running:0:C:/tomcat/apache-tomcat-6.0.13/webapps/ROOT/admin
       /XAIApp:running:0:C:/spl/CCBDEMO22/splapp/applications/XAIApp
       /host-manager:running:0:C:/tomcat/apache-tomcat-6.0.13/webapps/host-manager
       /SelfService:running:0:C:/spl/CCBDEMO22/splapp/applications/SelfService
       /appViewer:running:0:C:/spl/CCBDEMO22/splapp/applications/appViewer
       /manager:running:1:C:/tomcat/apache-tomcat-6.0.13/webapps/manager
       /help:running:0:C:/spl/CCBDEMO22/splapp/applications/help
       /:running:0:C:/spl/CCBDEMO22/splapp/applications/root

       Also ensure that the XAIApp is running.

    13. Run the Oracle Utilities Web Self Service application at http://server:port/SelfService; in our case it'll be http://ccbv22:6500/SelfService.

    Still doesn't work, and you get a '503 HTTP response' at the time of customer registration? This is because the XAI service is still unavailable. There is an initialize.waittime set to a default value of 90 seconds for the XAI application to come up. Remember, WSS uses XAI to perform actions/validations on the CC&B database.
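    For reference, that stored value is plain standard base64 of the password text. Here is a minimal Python sketch for producing a replacement value if you change the WEB user's password; the snippet is illustrative only, so check your own SelfServiceConfig.properties for the exact property that holds it:

        import base64

        # Encode a plain-text password the way the config file stores it.
        # "selfservice" is the shipped default mentioned above.
        plain = "selfservice"
        encoded = base64.b64encode(plain.encode("utf-8")).decode("ascii")
        print(encoded)  # c2VsZnNlcnZpY2U=

        # Decoding works in reverse, handy for checking a value you find
        # in the properties file:
        print(base64.b64decode(encoded).decode("utf-8"))  # selfservice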

    Read the article

  • Use Those Extra Mouse Buttons to Increase Efficiency

    - by Mark Virtue
    Did you know that the most commonly used mouse actions are clicking a window’s “Close” button (the X in the top-right corner), and clicking the “Back” button (in a browser and various other programs)?  How much time do you spend every day locating the Close button or the Back button with your mouse so that you can click on them?  And what about that mouse you’re using – how many buttons does it have, besides the two main ones?  Most mouses these days have at least four (including the scroll-wheel, which a lot of people don’t realize is also a button).  Why not assign those extra buttons to your most common mouse actions, and save yourself a bundle of mousing-around time every day?

    If your mouse was manufactured by one of the “premium” mouse manufacturers (Microsoft, Logitech, etc.), it almost certainly came with driver software to allow you to customize your mouse’s controls and take advantage of your mouse’s special features.  Microsoft, for example, provides driver software called IntelliPoint (link below), while Logitech provides SetPoint.  It’s possible that your mouse has some extra buttons but doesn’t come with its own driver software (the author is using a Microsoft Bluetooth Notebook Mouse 5000, which amazingly is not supported by the Microsoft IntelliPoint software!).  If your mouse falls into this category, you can use a marvelous free product called X-Mouse Button Control, from Highresolution Enterprises (link below).  It provides a truly amazing array of mouse configuration options, including assigning actions to buttons on a per-application basis.

    X-Mouse Button Control’s setup process is quite straightforward.  Once it is installed, you can start the program via Start / Highresolution Enterprises / X-Mouse Button Control.  You will find the program’s icon in the system tray.  Right-click on the icon and select Setup from the pop-up menu, and the program’s configuration window appears.

    It’s extremely unlikely that we will want to change the functionality of our mouse’s two main buttons (left and right), so instead we’ll look at the rest of the options on the right side of the window.  The Middle Button refers to either the third, middle button (found on some old mouses), or the pressing of the wheel itself as a button (if you didn’t know you could press your wheel like a button, try it out now).  Mouse Button 4 and Mouse Button 5 usually refer to the extra buttons found on the side of the mouse, often near your thumb.

    So what can we use these extra mouse buttons for?  Well, clearly Close and Back are two obvious candidates.  Each of these can be found by selecting them from the drop-down menu next to each button field.  If you’re not interested in choosing Back or Close, you may like to try some of the other options in the list, including:

    - Cut, Copy and Paste
    - Undo
    - Show the Desktop
    - Next/Previous track (for media playback)
    - Open any program
    - Simulate any keystroke or combination of keystrokes

    ….and many other options.  Explore the drop-down list to see them all.

    You may decide, for example, that closing the current document (as opposed to the current program) would be a good use for Mouse Button 5.  In other words, we need to simulate the keypress of Ctrl-F4.  Let’s see how we achieve this.  First we select Simulated Keystrokes from the drop-down list, and the Simulated Keystrokes window opens.  The instructions on the page are pretty comprehensive.
    If you want to simulate the Ctrl-F4 keystroke, you type {CTRL}{F4} into the box and then click OK.

    Assigning Actions to Buttons on a Per-Application Basis

    One of the most powerful features of X-Mouse Button Control is the ability to assign actions to buttons on a per-application basis.  This means that if we have a particular program open, our mouse will behave differently – our buttons will do different things.  When we have Windows Media Player open, for example, we may wish to have buttons assigned to Play/Pause, Next track and Previous track, as well as changing the volume with the mouse!  This is easy with X-Mouse Button Control.

    We start by opening Windows Media Player.  This makes the next step easier.  Then we return to X-Mouse Button Control and add a new “configuration”.  This is done by clicking the Add button.  A window opens containing a list of all running programs, including our recently opened Windows Media Player.  We select Windows Media Player and click OK, and a new, blank “configuration” is created.  We repeat the earlier steps to assign buttons to Play/Pause, Next track and Previous track, and assign scrolling the wheel to alter the volume.  To save all our changes and close the window, we click Apply.

    Now spend a few minutes thinking of all the applications you use the most, and the most common simple tasks you perform in each of those applications.  Those tasks are then perfect candidates for per-application button assignments.

    There are many more configuration options and capabilities of X-Mouse Button Control – too many to list here.  We encourage you to spend a bit of time exploring the Setup window.  Then, most important of all, don’t forget to use your new mouse buttons!  Get into the habit of using them, and after a while you’ll start to wonder how you ever tolerated the laborious, tedious, time-consuming process of actually locating each window’s Close button…

    Download X-Mouse Button Control from Highresolution Enterprises.
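    As an aside, the {CTRL}{F4} simulation above boils down to a pair of key-down/key-up events at the Win32 level. Here is a rough, hypothetical Python sketch of the same idea using ctypes and the user32 keybd_event API; it only illustrates what a simulated keystroke does, and is not how X-Mouse Button Control itself is implemented:

        import ctypes
        import time

        user32 = ctypes.windll.user32

        VK_CONTROL = 0x11         # virtual-key code for Ctrl
        VK_F4 = 0x73              # virtual-key code for F4
        KEYEVENTF_KEYUP = 0x0002  # flag marking a key release

        def send_ctrl_f4():
            """Simulate Ctrl-F4, which closes the current document in most apps."""
            user32.keybd_event(VK_CONTROL, 0, 0, 0)                # Ctrl down
            user32.keybd_event(VK_F4, 0, 0, 0)                     # F4 down
            user32.keybd_event(VK_F4, 0, KEYEVENTF_KEYUP, 0)       # F4 up
            user32.keybd_event(VK_CONTROL, 0, KEYEVENTF_KEYUP, 0)  # Ctrl up

        time.sleep(3)  # a moment to focus the window you want to act on
        send_ctrl_f4()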

    Read the article

  • Company Review: Google Products

    Google, Inc. offers an array of products and services to all of its end-users. However, its search capabilities are the foundation of Google’s current success and its primary business focus. Currently, Google offers over twenty different search applications that allow users to search the internet for books, maps, videos, images, products and much more. Their product decisions have allowed users’ demands to be met while focusing on the free-based model. This allows users to access Google data free of charge, and it indirectly gives Google a strong competitive advantage over other competitors, along with the accuracy of its search results. According to Google, Inc., it offers the following types of searching capabilities:

    - Alerts: Get email updates on the topics of your choice
    - Blog Search: Find blogs on your favorite topics
    - Books: Search the full text of books
    - Custom Search: Create a customized search experience for your community
    - Desktop: Search and personalize your computer
    - Dictionary: Search for definitions of words and phrases
    - Directory: Search the web, organized by topic or category
    - Earth: Explore the world from your computer
    - Finance: Business info, news and interactive charts
    - GOOG-411: Find and connect for free with businesses from your phone
    - Images: Search for images on the web
    - Maps: View maps and directions
    - News: Search thousands of news stories
    - Patent Search: Search the full text of US Patents
    - Product Search: Search for stuff to buy
    - Scholar: Search scholarly papers
    - Toolbar: Add a search box to your browser
    - Trends: Explore past and present search trends
    - Videos: Search for videos on the web
    - Web Search: Search billions of web pages
    - Web Search Features: Find movies, music, stocks, books and more

    Google’s free-based business model is only one way it differentiates itself from its competition. There is also a strong focus on the accuracy of search results and the speed with which they are returned to the end-user. Quality function deployment (QFD) is a structured method used to help connect user needs to the design features of a project proposed to address those needs. According to the U.S. Department of Transportation Federal Highway Administration, this method is particularly useful in accounting for needs that are not easily articulated or precisely defined. Because QFD is so customer-driven, Google is in a constant state of change as it attempts to reengineer its search algorithms and other dependent systems so that end-users’ requirements are constantly being met. Value engineering is a key example of this: Google is constantly trying to improve all aspects of its products, improve system maintainability, and improve system interoperability. The Bridgefield Group defines value engineering as an organized methodology that identifies and selects the lowest lifecycle cost options in design, materials and processes that achieve the desired level of performance, reliability and customer satisfaction. In addition, it seeks to remove unnecessary costs in the above areas and is often a joint effort with cross-functional internal teams and relevant suppliers. Common issues that appear when developing large-scale systems like Google’s search applications include modular design of a product and/or service and providing accurate value analysis.
    The Open System Joint Task Force defines modular design as a design approach that adheres to four fundamental tenets of cohesiveness, encapsulation, self-containment, and high binding to design a system component as an independently operable unit subject to change. More specifically, M. S. Schmaltz defines modular software design as taking a large collection of statements strung together in one partition of in-line code and segmenting or dividing the statements into logical groups called modules. Each module performs one or two tasks, and then passes control to another module. By breaking up the code into "bite-sized chunks", so to speak, we are able to better control the flow of data and control. This is especially true in large software systems. Value analysis is a process to evaluate products and services based on effectiveness, safety, and cost. As defined by the Healthcare Financial Management Association, value analysis involves assessing the quality as well as the cost of a product or service. “Operations Management deals with the design and management of products, processes, services and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want.” (MIT, 2010)

    Google, Inc. encourages an open environment between all employees, also known as Googlers. This is reinforced by cross-functional teams, comprised of people from multiple departments and assigned to every project, so that every department, like marketing, finance, and quality assurance, has input on every project. In addition, Google is known for its openness to new ideas regardless of the status or seniority of an employee. In fact, Google allows 20% of an employee’s time to be devoted to developing new ideas and/or pet projects. HumTech.com defines a cross-functional team as a collection of people with varied levels of skills and experience brought together to accomplish a task. As the name implies, cross-functional team members come from different organizational units. Cross-functional teams may be permanent or ad hoc.

    Google’s search application product strategy primarily focuses on mass customization. This allows Google to create a base search application and allows results to be returned to end-users quickly based on specific parameters and search settings. In addition, Google also stores the data that is returned in case others desire the same results based on supplying the same customized settings. This allows Google to appear to render search results in virtually real time to the user while allowing for complete customization of the searching criteria. Greg Vogl, a professor at Uganda Martyrs University, defines mass customization as when a business gives its customers the opportunity to tailor its products or services to the customer's specifications. The IT staff at Google play a key role in ensuring that the search application’s product strategy is maintained, simply because the IT staff designs, develops, and maintains all of Google’s proprietary applications. In fact, they also maintain all network infrastructure to ensure that it is available to all end-users.
    References:
    - http://www.google.com/intl/en/options/
    - http://ops.fhwa.dot.gov/freight/publications/ftat_user_guide/sec5.htm
    - http://www.bridgefieldgroup.com/bridgefieldgroup/glos9.htm#V
    - http://www.acq.osd.mil/osjtf/termsdef.html
    - http://www.cise.ufl.edu/~mssz/Pascal-CGS2462/prog-dsn.html
    - http://www.hfma.org/publications/business_caring_newsletter/exclusives/Supply+and+Inventory+Terms+Defined.htm
    - http://mitsloan.mit.edu/omg/om-definition.php
    - http://www.humtech.com/opm/grtl/ols/ols3.cfm
    - http://www.gregvogl.net/courses/mis1/glossary.htm

    Read the article

  • Customize the Default Screensavers in Windows 7 and Vista

    - by Matthew Guay
    Windows 7 and Vista include a nice set of screensavers, but unfortunately most of them aren’t configurable by default.  Thanks to a free app and some registry changes, however, you can make the default screensavers uniquely yours!

    Customize the default screensavers

    If you’ve ever pressed the Customize button on most of the default screensavers in Windows 7 and Vista, you were probably greeted with a message saying there are no options you can set.  A little digging in the registry shows that this isn’t fully correct.  The default screensavers in Vista and 7 do have options you can set, but they’re not obvious.  With the help of an app or some registry tips, you can easily customize the screensavers to be uniquely yours.  Here’s how you can do it with an app or in the registry.

    Customize Windows Screensavers with System Screensavers Tweaker

    Download the System Screensavers Tweaker (link below), and unzip the folder.  Run nt6srccfg.exe in the folder to tweak your screensavers.  This application lets you tweak the screensavers’ registry settings graphically, and it works great in all editions of Windows Vista and 7, including x64 versions.

    Change any of the settings you want in the screensaver tweaker, and click Apply.  To preview the changes to your screensaver, open the Screen Saver settings window as normal by right-clicking on the desktop and selecting Personalize, then clicking the Screen Saver button on the bottom right.  Now select your modified screensaver and click Preview to see your changes.

    You can change a wide variety of settings for the Bubbles, Ribbons, and Mystify screensavers in Windows 7 and Vista, as well as the Aurora screensaver in Windows Vista.  The tweaks to the Bubbles screensaver are especially nice: without transparency the bubbles are solid, and by tweaking a little more you get a screensaver that looks more like a screen full of marbles.  Ribbons and Mystify each have fewer settings, but they can still produce some unique effects.  And if you want to return your screensavers to their default settings, simply run the System Screensavers Tweaker and select Reset to defaults on any screensaver you wish to reset.

    Customize Windows Screensavers in the Registry

    If you prefer to roll up your sleeves and tweak Windows under the hood, here’s how you can customize the screensavers yourself in the Registry.  Type regedit into the search box in the Start menu, browse to the key for each screensaver, and add or modify the DWORD values listed for that screensaver using the Decimal base.

    Please Note: Tweaking the Registry can be difficult, so if you’re unsure, just use the tweaking application above.  Also, you’ll probably want to create a System Restore Point first.

    Bubbles

    To edit the Bubbles screensaver, browse to the following key in regedit:

    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Screensavers\Bubbles

    Now, add or modify the following DWORD values to tweak the screensaver:

    - MaterialGlass – enter 0 for solid or 1 for transparent bubbles
    - Radius – enter a number between 1090000000 and 1130000000; the larger the number, the larger the bubbles’ radius
    - ShowBubbles – enter 0 to show a black background or 1 to show the current desktop behind the bubbles
    - ShowShadows – enter 0 for no shadow or 1 for shadows behind the bubbles
    - SphereDensity – enter a number from 1000000000 to 2100000000; the higher the number, the more bubbles on the screen
    - TurbulenceNumOctaves – enter a number from 1 to 255; the higher the number, the faster the bubble colors will change
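    If you would rather script the Bubbles tweaks than click through regedit, here is a short Python sketch using the standard winreg module. The values follow the ranges above; treat it as an illustration, and back up your registry (or set a restore point) before running anything like it:

        import winreg

        # Per-user registry key holding the Bubbles screensaver settings.
        KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Screensavers\Bubbles"

        # Example values from the article: transparent bubbles drawn over
        # the current desktop, with shadows and a dense field of bubbles.
        settings = {
            "MaterialGlass": 1,           # 1 = transparent, 0 = solid
            "ShowBubbles": 1,             # 1 = show desktop behind bubbles
            "ShowShadows": 1,             # 1 = draw shadows
            "SphereDensity": 2000000000,  # 1000000000..2100000000
        }

        # Create the key if it does not exist yet, then write each DWORD.
        with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
            for name, value in settings.items():
                winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)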
    Ribbons

    To edit the Ribbons screensaver, browse to the following key in regedit:

    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Screensavers\Ribbons

    Now, add or modify the following DWORD values to tweak the screensaver:

    - Blur – enter 0 to prevent ribbons from fading, or 1 to have them fade away after a few moments
    - NumRibbons – enter a number from 1 to 100; the higher the number, the more ribbons on the screen
    - RibbonWidth – enter a number from 1000000000 to 1080000000; the higher the number, the thicker the ribbons

    Mystify

    To edit the Mystify screensaver, browse to the following key in regedit:

    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Screensavers\Mystify

    Now, add or modify the following DWORD values to tweak the screensaver:

    - Blur – enter 0 to prevent lines from fading, or 1 to have them fade away after a few moments
    - LineWidth – enter a number from 1000000000 to 1080000000; the higher the number, the wider the lines
    - NumLines – enter a number from 1 to 100; the higher the value, the more lines on the screen

    Aurora – Windows Vista only

    To edit the Aurora screensaver in Windows Vista, browse to the following key in regedit:

    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Screensavers\Aurora

    Now, add or modify the following DWORD values to tweak the screensaver:

    - Amplitude – enter a value from 500000000 to 2000000000; the higher the value, the slower the motion
    - Brightness – enter a value from 1000000000 to 1050000000; the higher the value, the brighter the effect
    - NumLayers – enter a value from 1 to 15; the higher the value, the more aurora layers displayed
    - Speed – enter a value from 1000000000 to 2100000000; the higher the value, the faster the cycling

    Conclusion

    Although the default screensavers are nice, they can become boring after a while with their default settings.  But with these tweaks, you can create a variety of vibrant screensavers that should keep your desktop fresh and interesting.

    Link: Download the System Screensavers Tweaker

    Read the article

  • Customize the Five Windows Folder Templates

    - by Mark Virtue
    Are you particular about the way Windows Explorer presents each folder’s contents? Here we show you how to take advantage of Explorer’s built-in templates, which cut down the time it takes to do customizations. Note: The techniques in this article apply to Windows XP, Vista, and Windows 7.

    When opening a folder for the first time in Windows Explorer, we are presented with a standard default view of the files and folders in that folder. It may be that the items as presented are perfectly fine, but on the other hand, we may want to customize the view.  The aspects of it that we can customize are the following:

    - The display type (list view, details, tiles, thumbnails, etc.)
    - Which columns are displayed, and in which order
    - The widths of the visible columns
    - The order in which the files and folders are sorted
    - Any file groupings

    Thankfully, Windows offers us a shortcut.  A particular folder’s settings can be used as a “template” for other, similar folders.  In fact, we can store up to five separate sets of folder presentation configurations.  Once we save the settings for a particular template, that template can then be applied to other folders.

    Customize Your First Folder

    We’ll start by setting up the first of our templates – the default one.  Once we create this template and apply it, the vast majority of the folders in our file system will change to match it, so it’s important that we set it up very carefully.  The first step in creating and applying the template is to customize one folder with the settings that all the rest will have.

    Choose a folder that is typical of the folders that you wish to have this default template.  Select it in Windows Explorer.  To ensure that it is a suitable candidate, right-click the folder name and select Properties, then go to the Customize tab.  Ensure that this folder is marked as General Items.  If it is not, either choose a different folder or select General Items from the list.  Click OK.  Now we’re ready to customize our first folder.

    Changing the way one single folder is presented is straightforward.  We start with the folder’s display type.  Click the Change your view button in the top-right corner of every Explorer window.  Each time you click the button, the folder’s view cycles to the next view type.  Alternatively, you can click the little down-arrow next to the button to see all the display types at once, and then click the view you want or drag the slider next to the one you want.

    If you have chosen Details, the next thing you may wish to change is which columns are displayed, and the order of these.  To choose which columns are displayed, simply right-click on any column heading, and a list of the columns currently being displayed appears.  Simply uncheck a column if you don’t want it displayed, and check the columns that you want displayed.  If you want some information displayed about your files that is not listed here, then click the More… button for a full list of file attributes.  There’s a lot of them!

    To change the order of the columns that are currently being displayed, simply click on a column heading and drag it to where you think it should be.  To change the width of a column, click the line that represents the right-hand edge of the column and drag it left or right.  To sort by a column, click once on that column.  To reverse the sort order, click that same column again.
    To change the groupings of the files in the folder, right-click in a blank area of the folder, select Group by, and select the appropriate column.

    Apply This Default Template to All Similar Folders

    Once you have the folder exactly the way you want it, we now use this folder as our default template for most of the folders in our file system.  To do this, ensure that you are still in the folder you just customized, and then, from the Organize menu in Explorer, click on Folder and search options.  Then select the View tab and click the Apply to Folders button.

    After you’ve clicked OK, visit some of the other folders in your file system.  You should see that most have taken on these new settings.  What we’ve just done, in effect, is customize the General Items template.  This is one of five templates that Windows Explorer uses to display folder contents.  The five templates are called (in Windows 7):

    - General Items
    - Documents
    - Pictures
    - Music
    - Videos

    When a folder is opened, Windows Explorer examines the contents to see if it can automatically determine which folder template to use to display the folder contents.  If it is not obvious that the folder contents fall into any of the last four templates, then Windows Explorer chooses the General Items template.  That’s why most of the folders in your file system are shown using the General Items template.

    Changing the Other Four Templates

    If you want to adjust the other four templates, the process is very similar to what we’ve just done.  If you wanted to change the “Music” template, for example, the steps would be as follows:

    1. Select a folder that contains music items
    2. Apply the existing Music template to the folder (even if it doesn’t look like you want it to)
    3. Customize the folder to your personal preferences
    4. Apply the new template to all “Music” folders

    A fifth step would be: when you open a folder that contains music items but is not automatically displayed using the Music template, you manually select the Music template for that folder.

    First, select a folder that contains music items.  It will probably be displayed using the existing Music template.  Next, ensure that it is using the Music template; if it’s not, then manually select the Music template.  Next, customize the folder to suit your personal preferences (here we’ve added a couple of columns, and sorted by Artist).  Now we can set this view to be our Music template.  Choose Organize, then the View tab, and click the Apply to Folders button.

    Note: The only folders that will inherit these settings are the ones that are currently (or will soon be) using the Music template.

    Now, if you have any folder that contains music items and you want it to inherit all of these settings, right-click the folder name, choose Properties, and select that this folder should use the Music template.  You can also check the box entitled “Also apply this template to all subfolders” if you want to save yourself even more time with all the sub-folders.

    Conclusion

    It’s neat to be able to set up templates for your folder views like this.  It’s a shame that Microsoft didn’t take the concept just a little further and allow you to create as many templates as you want.

    Read the article

  • Creating A SharePoint Parent/Child List Relationship – SharePoint 2010 Edition

    - by Mark Rackley
    Hey blog readers… It has been almost 2 years since I posted my most-read blog on creating a parent/child list relationship in SharePoint 2007: Creating a SharePoint List Parent / Child Relationship - Out of the Box. And then a year ago I improved on my method and redid the blog post… still for SharePoint 2007: Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX. Since then many of you have been asking me how to get this to work in SharePoint 2010, and frankly I have just not had time to look into it. I wish I could have jumped into this sooner, but I have just recently begun to look at it. Well… after all this time I have actually come up with two solutions that work. Neither of them is as clean as I’d like it to be, but I wanted to get something in your hands that you can start using today. Hopefully in the coming weeks and months I’ll be able to improve upon this further and give you guys some better options.

    For the most part, the process is identical to the 2007 process, but you have probably found out that the list view web parts in 2010 behave differently, and getting the parent ID to your new child form can be a pain in the rear (at least that’s what I’ve discovered). Anyway, like I said, I have found a couple of solutions that work. If you know of a better one, please let us know, as it bugs me that this is not as eloquent as my 2007 implementation.

    Getting on the same page

    First thing I’d recommend is recreating this blog: Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX in SharePoint 2010… There are some vague differences, but it’s basically the same…  Here’s a quick video of me doing this in SP 2010: Creating Lists necessary for this blog post.

    Now that you have the lists created, let’s set up the New Time form to use a QueryString variable to populate the Parent ID field: Creating parameters in Child’s new item form to set parent ID.

    Did I talk fast enough through both of those videos? Hopefully by now that stuff is old hat to you, but I wanted to make sure everyone could get on the same page.  Okay… let’s get started.

    Solution 1 – XSLTListView with JavaScript

    This solution is the more elegant of the two; however, it does require the use of a little JavaScript.  The other solution does not use JavaScript, but it also doesn’t use the pretty new SP 2010 pop-ups.  I’ll let you decide which you like better. The basic steps of this solution are:

    1. Insert a Related Item View
    2. Insert a Content Editor Web Part
    3. Insert script in the Content Editor Web Part that pulls the ID from the query string and calls the method to insert a new item on the child entry form
    4. Hide the toolbar from the data view to remove the “add new item” link

    Again, you don’t HAVE to use a CEWP; you could just put the JavaScript directly in the page using SPD.
    Anyway, here is how I did it: Using Related Item View / JavaScript.

    Here’s the JavaScript I used in my Content Editor Web Part:

    <script type="text/javascript">
    function NewTime() {
        // Get the Query String values and split them out into the vals array
        var vals = new Object();
        var qs = location.search.substring(1, location.search.length);
        var args = qs.split("&");
        for (var i = 0; i < args.length; i++) {
            var nameVal = args[i].split("=");
            var temp = unescape(nameVal[1]).split('+');
            nameVal[1] = temp.join(' ');
            vals[nameVal[0]] = nameVal[1];
        }
        var issueID = vals["ID"];
        // use this to bring up the pretty pop up
        NewItem2(event, "http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID);
        // use this to open a new window
        // window.location = "http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID;
    }
    </script>

    Solution 2 – DataFormWebPart and the Exact Same 2007 Process

    This solution is a little more of a hack, but it is also much closer to the process we used in SP 2007. So, if you don’t mind losing the pretty pop-up and prefer the comforts of what you are used to, you can give this one a try.  The basic steps are:

    1. Insert a DataFormWebPart instead of the List Data View
    2. Create a parameter on the DataFormWebPart to store the “ID” query string variable
    3. Filter the DataFormWebPart using the parameter
    4. Insert a link at the bottom of the DataFormWebPart that points to the child’s new item form and passes in the parent ID using the parameter

    See… like I told you, exact same process as in 2007 (except using the DataFormWebPart). The DataFormWebPart also requires a lot more work to make it look “pretty”, but it’s just table rows and cells, and can be configured pretty painlessly.  Here is that video: Using DataForm Web Part.

    One quick update… if you change the link in this solution from:

    <tr>
        <td>
            <a href="http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}">Click here to create new item...</a>
        </td>
    </tr>

    to:

    <tr>
        <td>
            <a href="javascript:NewItem2(event,'http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}');">Click here to create new item...</a>
        </td>
    </tr>

    it will open up in the pretty pop-up and act the same as Solution 1… So… both solutions will now behave the same for the end user. It just depends on which you want to implement.

    That’s all for now… Remember, in both solutions, once you have them working you can make the “IssueID” field invisible to users by using the “ms-hidden” class (see my previous blog post on the subject up there). That’s basically all there is to it! No pithy or witty closing this time… I am sorry it took me so long to dive into this and I hope your questions are answered. As I become more polished myself I will try to come up with a cleaner solution that will make everyone happy… As always, thanks for taking the time to stop by.

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building and studying software systems. He is a top-rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and, most recently, Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia’s largest .NET user group. Here is the guest post by Niraj Bhatt.

    As the data in your applications grows, it’s the database that usually becomes a bottleneck. It’s hard to scale a relational DB, and the preferred approach for large-scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools and techniques that can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don’t quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises.

    A key point to remember is that a data warehouse is mostly a relational database. It’s built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, and so on. Before we talk about how data warehouses are typically structured, let’s understand the key components that can create a data flow between OLTP systems and OLAP systems. There are three major areas to it:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in this context could be how many customers, divided by geographies, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-box features provided by some databases; e.g., SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system, and dumps them into the data warehouse. There are many tools out there that can help you fill this gap; SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains, though, is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD).
    While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This creates multiple records for a given entity in a relational database, and data warehouses prefer having their own primary key, often known as a surrogate key.

    As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand and use, easy to query, and easy to slice and dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. For example, if you have recorded 10,000 USD in sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, which provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a star schema. A similar structure, with the difference that the dimension tables are normalized, is called a snowflake schema.

    This relational structure of facts and dimensions serves as an input for another analysis structure called a Cube. Though physically a cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it’s a multidimensional structure where dimensions define the sides of the cube and facts define its content. Facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy; e.g., Product may be broken into categories, and categories in turn into individual items. Category and Items are often referred to as Levels, and their constituents as Members, with the overall structure called a Hierarchy. Measures are rolled up as per the dimensional hierarchy, and these rolled-up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don’t worry, it will sink in as you start working with cubes.

    Let’s see a few other terms that we run into while talking about data warehouses. ODS, or an Operational Data Store, is a frequently misused term. There will be a few users in your organization who want to report on the most current data and can’t afford to miss a single transaction for their report. Then there is another set of users who typically don’t care how current the data is: mostly senior-level executives who are interested in trending, mining, forecasting, and strategizing, and who don’t care about one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS is short-lived, i.e. kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers.

    Data marts are another frequently talked-about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, track, etc.
    Hence they are often scaled down to something called a data mart that supports a specific segment of business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse. The data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
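    To make the star schema example from the post concrete (a sales fact with agent and time dimensions), here is a minimal sketch in Python using the standard sqlite3 module; the table and column names are illustrative, not from the post:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()

        # Dimension tables provide context; each row carries a surrogate key.
        cur.execute("CREATE TABLE dim_agent (agent_key INTEGER PRIMARY KEY, agent_name TEXT)")
        cur.execute("CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, sale_date TEXT)")

        # The fact table holds the measure (sales) plus foreign keys pointing
        # at the surrounding dimensions -- the "star" shape.
        cur.execute("""
            CREATE TABLE fact_sales (
                agent_key INTEGER REFERENCES dim_agent(agent_key),
                date_key INTEGER REFERENCES dim_date(date_key),
                sales_usd REAL
            )""")

        cur.execute("INSERT INTO dim_agent VALUES (1, 'J. Smith')")
        cur.execute("INSERT INTO dim_date VALUES (1, '2010-06-15')")
        cur.execute("INSERT INTO fact_sales VALUES (1, 1, 10000.0)")

        # Slicing the measure by a dimension is exactly the roll-up a cube
        # would precompute as an aggregate.
        cur.execute("""
            SELECT a.agent_name, SUM(f.sales_usd)
            FROM fact_sales f JOIN dim_agent a ON f.agent_key = a.agent_key
            GROUP BY a.agent_name""")
        print(cur.fetchall())  # [('J. Smith', 10000.0)]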

    Read the article

  • Robotic Arm – Hardware

    - by Szymon Kobalczyk
    This is the first in a series of articles about a project I've been building in my spare time since last summer. Actually, it all began when I was researching the topic of modeling human motion kinematics in order to create a gesture recognition library for Kinect. This ties heavily into the motion theory of robotic manipulators, so I also glanced at some designs of robotic arms. Somehow I stumbled upon this cool-looking open source robotic arm: It was featured on Thingiverse and published by user jjshortcut (Jan-Jaap). Since for some time I have been hooked on toying with microcontrollers, robots and other electronics, I decided to give it a try and build it myself. In this post I will describe the hardware build of the arm, and in later posts I will be writing about the software to control it.

    Another reason to build the arm myself was the cost factor. Even small commercial robotic arms are quite expensive – products from Lynxmotion and Dagu look great, but both cost around USD $300 (actually there is one cheap arm available, but it looks more like a toy to me). In comparison, this design is quite cheap. It uses seven hobby-grade servos, and even the cheapest ones should work fine. The structure is built from a set of laser-cut parts connected with a few metal spacers (15mm and 47mm) and lots of M3 screws. Other than that you only need a microcontroller board to drive the servos. So in total it comes out a lot cheaper to build it yourself than to buy an off-the-shelf robotic arm. Oh, and if you don't like this one, there are a few more robotic arm projects at Thingiverse (including one by oomlout).

    Laser cut parts

    Some time ago I built another robot using laser-cut parts, so I knew the process already. You can grab the design files in both DXF and EPS format from Thingiverse, and there are also 3D models of each part in STL. Actually, the design is split into a second project for the mini servo gripper (there is also a standard servo version available, but it won't fit this arm). I wanted to make some small adjustments to the layout and add measurements to the parts before sending them for cutting. I looked at some free 2D CAD programs and finally did all this work using QCad 3 Beta, which worked great for me (I also tried LibreCAD but it didn't work that well). All parts are cut from 4 mm thick material. Because I was worried that acrylic is too fragile and might break, I also ordered another set cut from plywood. In the end I built it from plywood because it was easier to glue (I was told acrylic requires a special glue). Btw. I found a great laser cutter service in Kraków and highly recommend it (www.ebbox.com.pl). It cost me only USD $26 for both sets ($16 acrylic + $10 plywood).

    Metal parts

    I bought all the M3 screws and nuts at a local hardware store. Make sure to look for nylon lock (nyloc) nuts for the gripper, because otherwise it unscrews and comes apart quickly. I couldn't find a local store with metal spacers and had to order them online (you'd need 11 x 47mm and 3 x 15mm). I think I paid less than USD $10 for all the metal parts.

    Servos

    This arm uses five standard-size servos to drive the arm itself, and two micro servos for the gripper. The author of the project used the Modelcraft RS-2 Servo and the Modelcraft ES-05 HT Servo. I had two Futaba S3001 servos lying around, and ordered additional TowerPro SG-5010 standard-size servos and TowerPro SG90 micro servos. However, it turned out that the SG90 won't fit in the gripper, so I had to replace it with a slightly smaller E-Sky EK2-0508 micro servo.
Later it also turned out that Futaba servos make some strange noise while working so I swapped one with TowerPro SG-5010 which has higher torque (8kg / cm). I’ve also bought three servo extension cables. All servos cost me USD $45. Assembly The build process is not difficult but you need to think carefully about order of assembling it. You can do the base and upper arm first. Because two servos in the base are close together you need to put first with one piece of lower arm already connected before you put the second servo. Then you connect the upper arm and finally put the second piece of lower arm to hold it together. Gripper and base require some gluing so think it through too. Make sure to look closely at all the photos on Thingiverse (also other people copies) and read additional posts on jjshortcust’s blog: My mini servo grippers and completed robotic arm  Multiply the robotic arm and electronics Here is also Rob’s copy cut from aluminum My assembled arm looks like this – I think it turned out really nice: Servo controller board The last piece of hardware I needed was an electronic board that would take command from PC and drive all seven servos. I could probably use Arduino for this task, and in fact there are several Arduino servo shields available (for example from Adafruit or Renbotics).  However one problem is that most support only up to six servos, and second that their accuracy is limited by Arduino’s timer frequency. So instead I looked for dedicated servo controller and found a series of Maestro boards from Pololu. I picked the Pololu Mini Maestro 12-Channel USB Servo Controller. It has many nice features including native USB connection, high resolution pulses (0.25µs) with no jitter, built-in speed and acceleration control, and even scripting capability. Another cool feature is that besides servo control, each channel can be configured as either general input or output. So far I’m using seven channels so I still have five available to connect some sensors (for example distance sensor mounted on gripper might be useful). And last but important factor was that they have SDK in .NET – what more I could wish for! The board itself is very small – half of the size of Tic-Tac box. I picked one for about USD $35 in this store. Perhaps another good alternative would be the Phidgets Advanced Servo 8-Motor – but it is significantly more expensive at USD $87.30. The Maestro Controller Driver and Software package includes Maestro Control Center program with lets you immediately configure the board. For each servo I first figured out their move range and set the min/max limits. I played with setting the speed an acceleration values as well. Big issue for me was that there are two servos that control position of lower arm (shoulder joint), and both have to be moved at the same time. This is where the scripting feature of Pololu board turned out very helpful. I wrote a script that synchronizes position of second servo with first one – so now I only need to move one servo and other will follow automatically. This turned out tricky because I couldn’t find simple offset mapping of the move range for each servo – I had to divide it into several sub-ranges and map each individually. The scripting language is bit assembler-like but gets the job done. And there is even a runtime debugging and stack view available. Altogether I’m very happy with the Pololu Mini Maestro Servo Controller, and with this final piece I completed the build and was able to move my arm from the Meastro Control program.   
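    As a taste of what commanding the board looks like before the software posts arrive, here is a minimal sketch (mine, not the author's code) that moves one Maestro channel from C# over the board's virtual COM port, using Pololu's documented compact serial protocol. The port name is an assumption for your machine.

        // Minimal sketch, not the project's actual code: set one Maestro channel
        // target over the virtual COM port using the documented compact protocol
        // (0x84, channel, target low 7 bits, target high 7 bits).
        // "COM3" is an assumption - check Device Manager for the real port name.
        using System.IO.Ports;

        class MaestroDemo
        {
            static void Main()
            {
                using (SerialPort port = new SerialPort("COM3", 9600))
                {
                    port.Open();
                    SetTarget(port, 0, 1500);   // center the channel 0 servo
                }
            }

            static void SetTarget(SerialPort port, byte channel, int microseconds)
            {
                int target = microseconds * 4;      // protocol expects quarter-microseconds
                byte[] cmd =
                {
                    0x84,                           // Set Target command byte
                    channel,
                    (byte)(target & 0x7F),          // low 7 bits
                    (byte)((target >> 7) & 0x7F)    // high 7 bits
                };
                port.Write(cmd, 0, cmd.Length);
            }
        }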
    The total cost of my robotic arm was:

    $10   laser cut parts
    $10   metal parts
    $45   servos
    $35   servo controller
    -----------------------
    $100  total

    So here you have all the information about the hardware. In the next post I'll start talking about the software, which I wrote in Microsoft Robotics Developer Studio 4. Stay tuned!

    Read the article

  • Building extensions for Expression Blend 4 using MEF

    - by Timmy Kokke
    Introduction

    Although it was possible to write extensions for Expression Blend and Expression Design before, it wasn't very easy, and out of the box only one addin could be used. With Expression Blend 4 it is possible to write extensions using MEF, the Managed Extensibility Framework. As of today there's no documentation on how to build these extensions, so looking through the code with Reflector is something you'll have to do very often. And because Blend and Design are built using WPF, searching the visual tree with Snoop and Mole belongs among the tools you'll be using a lot while exploring the possibilities.

    Configuring the extension project

    Extensions are regular .NET class libraries. To create one, load up Visual Studio 2010 and start a new project. Because Blend is built using WPF, choose a WPF User Control Library from the Windows section and give it a name and location. I named mine DemoExtension1. Because Blend looks for addins named *.extension.dll, you'll have to tell Visual Studio to use that in the assembly name. To change the assembly name, right-click your project and go to Properties; on the Application tab, add ".Extension" to the name already in the Assembly name text field.

    To be able to debug this extension, I prefer to set the output path on the Build tab to the extensions folder of Expression Blend. This means that everything that used to go into the Debug folder is placed in the extensions folder, including all referenced assemblies that don't have the Copy Local property set to false.

    One last setting: to be able to debug your extension you could start Blend and attach the debugger by hand, but I like to be able to just hit F5. Go to the Debug tab and add the full path to Blend.exe in the "Start external program" text field.

    Extension Class

    Add a new class to the project. This class needs to implement the IPackage interface, which can be found in the Microsoft.Expression.Extensibility namespace. To get access to this namespace, add a reference to Microsoft.Expression.Extensibility.dll. This file can be found in the same folder as the (Expression Blend 4 Beta) Blend.exe file. Make sure the Copy Local property is set to false on this reference. After implementing the interface, the class looks something like this:

        using Microsoft.Expression.Extensibility;

        namespace DemoExtension1
        {
            public class DemoExtension1 : IPackage
            {
                public void Load(IServices services) { }
                public void Unload() { }
            }
        }

    These two methods are called when your addin is loaded and unloaded. The parameter passed to the Load method, IServices services, is your main entry point into Blend. The IServices interface exposes the GetService<T> method, which you will be using a lot. Almost every part of Blend can be accessed through a service. For example, you can get to the commanding services of Blend by calling GetService<ICommandService>(), or to the windowing services by calling GetService<IWindowService>().

    To get Blend to load the extension we have to use MEF. (You can get up to speed on MEF on the community site or read the blog of Mr. MEF, Glenn Block.) In the case of Blend extensions, all that needs to be done is mark the class with an Export attribute and pass it the type of IPackage. The Export attribute can be found in the System.ComponentModel.Composition namespace, which is part of the .NET 4 framework; you need to add this to your references as well.

        using System.ComponentModel.Composition;
        using Microsoft.Expression.Extensibility;

        namespace DemoExtension1
        {
            [Export(typeof(IPackage))]
            public class DemoExtension1 : IPackage
            {

    Blend is able to find your addin now.

    Adding UI

    The addin doesn't do very much at this point. The WPF User Control Library came with a UserControl, so let's use that in this example. I just drop a Button and a TextBlock onto the surface of the control to have something to show in the demo. To get the UserControl to work in Blend, it has to be registered with the window service. Call GetService<IWindowService>() on the IServices interface to get access to the windowing services. The UserControl will be used in Blend on a palette and has to be registered to enable it. This is done by calling RegisterPalette on the IWindowService interface and passing it an identifier, an instance of the UserControl, and a caption for the palette.

        public void Load(IServices services)
        {
            IWindowService windowService = services.GetService<IWindowService>();
            UserControl1 uc = new UserControl1();
            windowService.RegisterPalette("DemoExtension", uc, "Demo Extension");
        }

    After hitting F5 to start debugging, Expression Blend will start. You should be able to find the addin in the Window menu now. Activating this window will show the "Demo Extension" palette with the UserControl, styled according to the settings of Blend.

    Now what?

    Because little is publicly known about how to access different parts of Blend, adding breakpoints in Debug mode and browsing through objects using the Quick Watch feature of Visual Studio is something you'll do very often. This demo extension can be used for that purpose very easily. Add a click event handler to the button on the UserControl, change the constructor to take the IServices interface and store it in a field, and set a breakpoint in the button1_Click method.

        public partial class UserControl1 : UserControl
        {
            private readonly IServices _services;

            public UserControl1(IServices services)
            {
                _services = services;
                InitializeComponent();
            }

            private void button1_Click(object sender, RoutedEventArgs e)
            {
            }
        }

    Change the call to the constructor in the Load method and pass it the services reference.

        public void Load(IServices services)
        {
            IWindowService service = services.GetService<IWindowService>();
            UserControl1 uc = new UserControl1(services);
            service.RegisterPalette("DemoExtension", uc, "Demo Extension");
        }

    Hit F5 to compile and start Blend. Go to the Window menu and show the addin. Click the button to hit the breakpoint. Now place the caret on the _services field in the code window and hit Shift+F9 to show the Quick Watch window. From there, start exploring and discovering where to find everything you need.

    More Information

    There are no official resources available yet. Microsoft has released one extension for Expression Blend that is very useful as a reference: the Microsoft Expression Blend® Add-in Preview for Windows® Phone. This installs a .extension.dll file in the extension folder of Blend. You can load this file in Reflector and have a peek at how Microsoft builds its addins.

    Conclusion

    I hope this gives you something to get started building extensions for Expression Blend. Until Microsoft releases the final version, which hopefully includes more information about building extensions, we'll have to work on documenting it in the community.

    Read the article

  • It's not just “Single Sign-on” by Steve Knott (aurionPro SENA)

    - by Greg Jensen
    It is true that Oracle Enterprise Single Sign-on (Oracle ESSO) started out as purely an application single sign-on tool, but as we have seen in the previous articles in this series, the product has matured into a suite of tools that can do more than just automated single sign-on; it can also provide a rapidly deployed, cost-effective solution to many demanding password management problems. In this last article of the series I would like to discuss three cases where customers faced password scenarios that required more than just single sign-on, and how some of the less well known tools in the Oracle ESSO suite "kitbag" helped solve these challenges.

    Case #1

    One of the issues often faced by our customers is how to keep their applications compliant. I had a client who liked the idea of automated single sign-on for most of his applications, but had a key requirement to actually increase the security for one specific SOX application. For the SOX application he wanted to secure access by using two-factor authentication with a smartcard. The problem was that the application did not support two-factor authentication. The solution was to use a feature from the Oracle ESSO suite called Authentication Manager. This feature enables you to have multiple authentication methods for the same user, which in this case were a smartcard and the Windows password. Within Authentication Manager, each authenticator can be configured with a security grade, so we gave the smartcard a high grade and the Windows password a normal grade. Security grading in Oracle ESSO can be configured on a per-application basis, so we set the SOX application to require the higher-grade smartcard authenticator. The end result was that users enjoyed automated single sign-on for most of their applications; when the SOX application was launched, however, ESSO required them to present their smartcard before being given access.

    Case #2

    Another example of solving compliance issues comes from a large energy company with a number of core billing applications. New regulations required that users change their password regularly and use a complex password. The problem facing the customer was that the core billing applications did not have any native password change functionality, and the applications could not be replaced because of the cost and time required to re-develop them. With a reputation for innovation, aurionPro SENA was approached to provide a solution to this problem using Oracle ESSO. Oracle ESSO has a password expiry feature that can be triggered periodically based on the timestamp of the user's last password creation, so our strategy was to leverage this feature to provide the password change experience. The trigger can launch an application's change password event; in this scenario, however, there was no native change password feature to launch, so a "dummy" change password screen was created that could imitate the missing function and connect to the application database on behalf of the user. Oracle ESSO was configured to trigger a change password event every 60 days. After this period, if the user launched the application, Oracle ESSO would detect the logon screen and invoke the password expiry feature: it would bring up the "dummy" screen, detect it automatically as the application's change password screen, and insert a complex password on behalf of the user. After the password event completed, the user was logged on to the application with their new password. All this was provided at a fraction of the cost of re-developing the core applications.

    Case #3

    Recent popular initiatives such as BYOD and working-from-home schemes bring with them many challenges in administering "unmanaged machines" and sometimes "unmanageable users." In a recent case, a client had a dispersed community of casual contractors who worked for the business using their own laptops to access applications. To improve security around password management, the goal was to provision passwords directly to these contractors. In a previous article we saw how Oracle ESSO can provision passwords through Provisioning Gateway, but the challenge in this scenario was how to get the Oracle ESSO agent to a casual contractor on an unmanaged machine. The answer was to use another tool in the suite, Oracle ESSO Anywhere. This component can compile the normal Oracle ESSO functionality into a deployment package that can be made available from a website, in a similar way to a streamed application. The ESSO Anywhere agent does not actually install into the registry or Program Files but runs in a folder within the user's profile, so no local administrator rights are required for installation. The ESSO Anywhere package can also be configured to stay persistent or disable itself at the end of the user's session. In this case the user just needed to be told where the website package was located and download it. Once the download was complete, the agent started automatically, and the user was provided with single sign-on to their applications without ever knowing the application passwords.

    Finally, as we have seen in this series, Oracle ESSO not only has great utilities in its own toolbox but also integrates directly with Oracle Privileged Account Manager, Oracle Identity Manager and Oracle Access Manager. Integrated together, these tools provide a complete and complementary platform to address even the most complex identity and access management requirements. So what's next for Oracle ESSO? "Agentless ESSO available in the cloud" – but that will be a subject for a future Oracle ESSO series!

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 2

    - by AjarnMark
    In Part 1, I started talking about using Red Gate's newest version of SQL Source Control and how I really like it as a viable method to source control your database development. It looks like this is going to turn into a little series where I will explain how we have done things in the past and how life is different with SQL Source Control. I will also explain some of my philosophy and methodology around deployment with these tools. But for now, let's talk about some of the good and the bad of the tool itself.

    More Kudos and Features

    I mentioned previously how impressed I was with the responsiveness of Red Gate's team. I have been having an ongoing email conversation with Gyorgy Pocsi, and as I have run into problems or requested that things behave a little differently, it has not been more than a day or two before a new build was ready for me to download and test. Quite impressive! I'm sure many of the requests I put in were already in the plans, so I can't really take credit for them, but throughout this conversation Red Gate has implemented several features that were not in the first Early Access version. Those include:

    - Honoring the Fortress configuration option to require Work Item (Bug) IDs on check-ins.
    - Adding the check-in comment text as a comment to the Work Item.
    - Adding the list of checked-in files, along with the Fortress links for automatic History and DIFF views.
    - Updating the status of a Work Item on check-in (e.g. setting the item to Complete or, in our case, "Dev-Complete").
    - Support for the Fortress 2.0 API, and not just the Vault Pro 5.1 API (see the later notes regarding support for Fortress 2.0).

    These were all features that I felt we really needed in place before I could honestly consider converting my team to using SQL Source Control on a regular basis. Now that I have them, my only excuse is not wanting to switch boats on the team mid-stream. So when we wrap up our current release in a few weeks, we will make the jump. In the meantime, I will continue to bang on it to make sure it is stable. It passed one test for stability when I did a test load of one of our larger database schemas into Fortress with SQL Source Control. That database has about 150 tables, 200 user-defined functions and nearly 900 stored procedures. The initial load into source control went smoothly and took just a brief amount of time.

    Warnings

    Remember that this IS still pre-release software, and while I have not had any problems after that first hiccup I wrote about last time, you still need to treat it with a healthy respect. As I understand it, the RTM is targeted for February. There are a couple more features that I hope make it into the final release version, but if not, they'll probably be coming soon thereafter:

    - A Browse feature to let me look up the Work Item ID instead of having to remember it or look back in my item details. This is just a matter of convenience; I normally have my Work Item list open anyway, so I can easily look it up, but hey, why not make it even easier.
    - A multi-line comment area. The current space for writing check-in comments is a single-line text box. I would like a multi-line space, as I sometimes write lengthy commentary. But I recognize that it is a struggle to get most developers to put in more than the word "fixed" as their comment, so this meets the needs of the majority as-is, and it's not a show-stopper for us.
    - Merge. SQL Source Control currently does not have a Merge feature. If two or more people make changes to the same database object, you will get a warning of the conflict and have to choose which one wins (and then manually edit to include the others' changes). I think it unlikely you will run into actual conflicts in stored procedures and functions, but you might with views or tables. This will be nice to have, but I'm not losing any sleep over it, and I have multiple tools at my disposal to do merges manually, so it's really not a show-stopper for us.

    Automation has its limits. As cool as this automation is, it has its limits, and there are some changes that you will be better off scripting yourself. For example, if you are refactoring table definitions and want to change a column name, you can write that as a quick sp_rename command (e.g. EXEC sp_rename 'dbo.Customers.CustName', 'CustomerName', 'COLUMN') and preserve the data within that column. But because this tool is looking at just a before and an after picture, it cannot tell that you simply renamed a column; to the tool, it looks like you dropped one column and added another. This is not a knock against Red Gate. All automated scripting tools have this issue unless they are actively monitoring your every step to know exactly what you are doing. It means that when you go to deploy your changes, SQL Compare will script the change as a column drop-and-add, or will attempt to rebuild the entire table. Unfortunately, neither of these approaches will preserve the existing data in that column the way an sp_rename will, so you are better off scripting that change yourself. Thankfully, SQL Compare will produce warnings about the potential loss of data before it does the actual synchronization and give you a chance to intercept the script and do it yourself.

    Also, please note that the current official word is that SQL Source Control supports Vault Professional 5.1 and later. Vault Professional is the new name for what was previously known as Fortress (you can read about the name change on SourceGear's site). The last version of Fortress was 2.x, and the API for Fortress 2.x is different from the API for Vault Pro. At my company, we are currently running Fortress 2.0, with plans to upgrade to Vault Pro early next year. Gyorgy was able to come up with a workaround for me to be able to use SQL Source Control with Fortress 2.0, even though it is not officially supported. If you are using Fortress 2.0 and want to use SQL Source Control, be aware that this is not officially supported, but it is working for us, and you can probably get the workaround instructions from Red Gate if you're really, really nice to them.

    Upcoming Topics

    Some of the other topics I will likely cover in this series over the next few weeks are:

    - How we used to do source control back in the old days (a few weeks ago) before SQL Source Control was available to Vault users
    - What happens when you restore a database that is linked to source control
    - Handling multiple development branches of source code
    - Concurrent development practices and handling conflicts
    - Deployment tips and best practices
    - A recap after using the tool for a while

    Read the article

  • User Experience Highlights in PeopleSoft and PeopleTools: Direct from Jeff Robbins

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience

    This is the fifth in a series of blog posts on the user experience (UX) highlights in various Oracle product families. The last posted interview was with Nadia Bendjedou, Senior Director, Product Strategy, on upcoming Oracle E-Business Suite user experience highlights. You'll see themes around productivity and efficiency, and get an early look at the latest mobile offerings coming through these product lines. Today's post is on the user experience in PeopleSoft and PeopleTools. To learn more about what's ahead, attend PeopleSoft or PeopleTools OpenWorld presentations. This interview is with Jeff Robbins, Senior Director, PeopleSoft Development.

    Q: How would you describe the vision you have for the user experience of PeopleSoft?
    A: Intuitive – Specifically, customers use PeopleSoft to help their employees do their day-to-day work, and the UI (user interface) has been helpful and assistive in that effort. If it's not obvious what they need to do for a task, then the UI isn't working. So the application needs to make it simple for users to find the information they need, complete a task, and do all the things they are responsible for, and it really helps when the UI just makes sense.

    Productive – PeopleSoft is a tool used to support people in their work, and a lot of users are measured by how much work they're able to get done per hour, per day, etc. The UI needs to help them be as productive as possible and can't waste their time or energy. The UI needs to reflect the type of work necessary for a task: if it's data entry, the UI needs to assist the user in getting information into the system; for analysts, the UI needs to help users assess or analyze information in a particular way.

    Innovative – The concept of an innovative UI is something we've been working on for years. It's not just that we want to be seen as innovative; the fact is that companies are asking their employees to do more than they've ever asked before. More often companies want to roll out processes as employee or manager self-service, where an employee is responsible for reviewing and maintaining their own data. So we've had to reinvent, and ask, "How can we modify the ways an employee interacts with our applications so that they can be more productive and efficient – even with tasks that are entirely unfamiliar?" Our focus on innovation has forced us to design new ways for users to interact with the entire application.

    Q: How are the UX features you have delivered so far resonating with customers?
    A: Resonating very well. We're hearing tremendous responses from users, managers, and decision-makers, who are very happy with the improved user experience. Many of the individual features resonate well. Some have really hit home; others are better than they used to be but show us that there's still room for improvement.

    A couple of innovations really stand out; features that have a significant effect on how users interact with PeopleSoft. First, the deployment of PeopleSoft in a way that's more like a consumer website, with the PeopleSoft Home page and Dashboards. This new approach is very web-centric: users feel they're coming to a website rather than logging into an enterprise application. There's lots of information from all around the organization, collected in a way that feels very familiar to users. In order to do your job, you can come to this website rather than having to learn how to log into an application and figure out a complicated menu. Companies can host these really rich websites for employees that serve as home pages for accessing critical tasks and information.

    Incorporating search into the whole navigation process is another hit. Rather than having to log in and choose a task from a menu, users come to the website and begin a task by simply searching for data: themselves, another employee, a customer record, whatever. The search results include the data along with a set of actions the user might take, completely eliminating the need to hunt through a complicated system menu. Search-centric navigation is really sitting well with customers who are trying to deploy an intuitive set of systems.

    Q: Are any UX highlights more popular than you expected them to be?
    A: We introduced a feature called Pivot Grid in the last release, which is a combination of an interactive grid, like an Excel Pivot Table, along with a dynamic visual chart that automatically graphs the data. I wasn't certain at first how extensively it would be used. It looked like an innovative tool, but it wasn't clear how it would be incorporated into business process applications. The fact is that everyone who sees Pivot Grids is thrilled with that kind of interactivity. It reflects the amount of analytical thinking customers are asking employees to do. Employees can't just enter data any more; they must interact with it, analyze it, and make decisions. Pivot Grids fit into this way of working.

    Q: What can you tell us about PeopleSoft's mobile offerings?
    A: A lot of customers are finding that mobile is the chief priority in their organization. They tell us they want their employees to be able to access company information from their mobile devices. Of course, not everyone has the same requirements, so we're working to make sure we can help our customers accomplish what they're trying to do. We've already delivered a number of mobile features. For instance, PeopleSoft home pages, dashboards, and workcenters all work well on an iPad, straight out of the box. We've delivered a number of key functions and tasks for mobile workers – those who are responsible for using a mobile device to manage inventory, for example. Customers tell us they also need a holistic strategy, one that allows their employees to access nearly every task from a mobile device. While we don't expect users to do extensive data entry from their smartphone, it makes sense that they have access to company information and systems while away from their desk. That's where our strategy is going now. We plan to unveil a number of new mobile offerings at OpenWorld. Some will be available then, some shortly after.

    Q: What else are you working on now that you think is going to be exciting to customers at Oracle OpenWorld?
    A: Our next release – the big thing is PeopleSoft 9.2, and we'll be talking about the huge amount of work that's gone into the next versions. A new toolset, 8.53, will be coming, and there's a lot to talk about there, along with the next generation of PeopleSoft 9.2. We have a ton of new stuff coming.

    Q: What do you want PeopleSoft customers to know?
    A: We have been focusing on the user experience in PeopleSoft as a very high priority for the last four years, and it's had interesting effects. One is that the application is better and more usable; we've made visible improvements. Another is that in customers' minds, the PeopleSoft brand is being reinvigorated. Customers invested in PeopleSoft years ago, and then they weren't sure where PeopleSoft was going. This investment in the UI and overall user experience keeps PeopleSoft current, innovative and fresh. Customers are able to take advantage of a lot of new features, even on the older applications, simply by upgrading their PeopleTools. The interest in that ability has been tremendous. Knowing they have a lot of these features available right now – that's pretty huge. There's been a tremendous amount of positive response just from the fact that we're focusing on the user experience.

    Editor's note: For more on PeopleSoft and PeopleTools user experience highlights, visit the Usable Apps web site. To find out more about these enhancements at OpenWorld, be sure to check out these sessions:

    GEN8928  General Session: PeopleSoft Update and Product Roadmap
    CON9183  PeopleSoft PeopleTools Technology Roadmap
    CON8932  New Functional PeopleSoft PeopleTools Capabilities for the Line-of-Business User
    CON9196  PeopleSoft PeopleTools Roadmap: Mobile Applications
    CON9186  Case Study: Delivering a Groundbreaking User Interface with PeopleSoft PeopleTools

    Read the article

  • When OneTug Just Isn't Enough…

    - by onefloridacoder
    I stole that from the back of a T-shirt I saw at Orlando Code Camp 2010. This was my first code camp, and my first time volunteering for an event like this as well. It was an awesome day. I cannot begin to count the "aaahh" and "I did not know I could do that" moments in the crowds, and for myself. I think it was a great day of learning for everyone, at all levels. All of the presenters were different and provided great insights into the topics they were presenting. Here's a list of the sessions that I attended.

    KodeFuGuru, "Pirates vs. Ninjas"

    He touched on many good topics to relax some of the ways we think when we are writing our code so that it still looks good, stays readable, etc. As he pointed out in all of his examples, we might not always realize everything that's going on under the covers. He exposed a bug in his own code and verbalized the mental gymnastics he went through when he knew there was something wrong with one of his IEnumerable implementations. For me, it was great to hear that someone else labors over these gut reactions to code quickly snapped together, to the point that we rush to the refactor stage to fix what's bothering us – and learn. He had some content on extension methods that was very interesting. My "that is so cool" moment was when he swapped out an AddEntity method on an entity class and used a With extension method instead. Some of the LINQ scales fell off my eyes at that moment, and I realized my own code could be a lot more powerful (and readable) if I incorporated a few of these examples at the appropriate times. And he cautioned as well: "don't go crazy with this stuff", there's a place and time for everything. One of his examples, demoed toward the end of the talk, is on his site, where he's chaining methods together – cool stuff.

    Quotes I liked:
    - "Extension methods put features back on the model type, without impacting the type."
    - "Favor declarative code" – check out the ? and ?? operators if you're not already using them.
    - "Favor fluent code"
    - "Avoid pirate ninja zombies! If you see one, run!"

    I'm definitely going to be looking at "Extract Projection" when I get into VS2010.

    BDD 101 – Sean Chambers (http://github.com/schambers)

    This guy had a whole host of gremlins against him; final score Sean 5, Gremlins 1. He ran the code samples from his github repo in the github code viewer, since the PC the school gave him to use didn't have VS installed. He did a great job of converting the grammar between BDD and TDD, and of showing how this style of development can be used in integration tests as well as in the different types of gated builds on a CI box – he didn't go into a discussion of CI, but we could infer that it would work. Like when we use WSSF, it does cause a class explosion to happen; however, the amount of code per class is limited to just covering the concern at hand – no more, no less. As in "When I as a <Role>, expect {something} to happen, because {}". This keeps us (the developers) from gold-plating our solutions and creating less waste. He basically keeps the code that proves out the requirement to two lines. Nice.

    He uses SpecUnit to merge this grammar into his .NET projects and gave an overview of how this ties into writing his own BDD tests. Some folks were familiar with Given / When / Then as story acceptance criteria, and here's how he mapped it: "Given <Context> When <Something Happens> Then <I expect...>". There are a few base classes and overrides in the SpecUnit framework that help with setting up the context for each test, which looked very handy.
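    To give a feel for the shape of that grammar in code, here is a minimal sketch in plain C# with NUnit. SpecUnit's actual base classes and naming differ, so everything here is illustrative rather than Sean's real sample.

        // Minimal illustrative sketch of Given/When/Then in plain C# + NUnit.
        // SpecUnit's real base classes differ; these names are assumptions.
        using NUnit.Framework;

        public abstract class ContextSpecification
        {
            [SetUp]
            public void SetUp()
            {
                Given();   // establish context
                When();    // something happens
            }

            protected abstract void Given();
            protected abstract void When();
        }

        [TestFixture]
        public class When_a_known_user_logs_in : ContextSpecification
        {
            private bool _authenticated;

            protected override void Given() { /* arrange a known user */ }
            protected override void When() { _authenticated = true; /* act */ }

            [Test]
            public void Then_I_expect_the_user_to_be_authenticated()
            {
                Assert.IsTrue(_authenticated);
            }
        }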
    Successfully Running Your Own Coding Business

    The speaker ran through a list of items that sounded like common-sense stuff: forming an LLC, banking, separating expenses, etc. Then he moved into role-playing between business owners and an ISV. That was pretty good stuff; it pays to be a good listener all of the time, even if your client is sitting on the other side of the phone tearing your head off – that's all it is, you get used to it, and it's par for the course. Oh yeah, "always answer the phone" was one simple thing you can do to move your business forward. But like Cory Foy tweeted this week, "If you owe me a lot of money, don't have a message that says you're away for five weeks skiing in Colorado." Lots of food for thought that's on my list of "todo's and to-don'ts".

    Speaker Idol

    Next, I had the pleasure of helping Russ Fustino tape this part of Code Camp as my primary volunteer opportunity of the day. You remember Russ, "know the code", from the awesome Russ' Tool Shed series. He did a great job orchestrating and capturing the Speaker Idol finals. So I didn't actually miss any sessions, and was able to see three back-to-back in one sitting. The Idol finalists gave 10-minute talks on very deep subjects, but in different styles. No one walked away empty-handed for jobs very well done. Russ has details on his site, and the pictures and video captured are supposed to be published on Channel 9 at a later date. It was also a valuable experience to see what makes technical speakers effective in their talks. I picked up quite a few speaking tips from the judges and contestants.

    Design For Developers – Diane Leeper

    If you are a great developer, you're probably a lousy designer. Diane didn't come to poke holes in what we think we can do with UI layout and design; instead she provided some tools we can use to figure out metaphors for visualizing data. If you need help with that, check out Silverlight Pivot – that's what she was getting at. I was first introduced to her at one of John Papa's talks last year at a Lakeland User Group meeting, and she's very passionate about design. She was able to discuss the different elements of Pivot, which to a developer just looked cool. I believe she was providing the deck from her talk to folks afterwards, so send her an email if you're interested. She says she can talk about design for hours and hours – we all left that session believing her.

    Rinse and Repeat

    Orlando Code Camp 2010 was awesome, and I would totally do it again. There were lots of folks from my shop there, and some that have left my shop to go elsewhere. So it was a reunion of sorts and a great celebration of the simple fact that it's great to be a developer and there's a community that supports and recognizes it as well. The sponsors were generous and the organizers were very tired, namely Esteban Garcia and Will Strohl, who were responsible for making a lot of this magic happen. And if you don't believe me, check out the chatter on Twitter.

    Read the article

  • 13.10 - Weird WiFi connection problems - WMP300N - Broadcom BCM4321

    - by user1898041
    Just installed 13.10 on my desktop and I really like it. After having problems getting the wifi to work, I installed it connected to the internet with an ethernet cable and added the 3rd party software and updates as per the installation procedure. After installation completed, I saw the wifi icon in the upper right-hand corner, but it was not seeing any wifi networks. Some Googling brought me to the 'Additional Drivers' application. It found the WMP300N Broadcom BCM4321-based PCI wifi card and installed the proprietary Broadcom STA wireless driver, which may have been installed before. I'm not sure.

    Here is the weird part: when I start my system, wifi seems to be in some sort of suspended state where the system sees that the card exists, but the card will not detect any wifi networks. It will work after booting once I open the 'Additional Drivers' application and then start Firefox. I know it seems weird, but this is the process I've got down to get the card to recognize wifi networks. The reason this is a problem is that this is supposed to be a headless box managed through SSH.

    Here are the readouts from the common network diagnosis programs BEFORE I open 'Additional Drivers' and Firefox. All commands were done with sudo.

    lspci:

    00:00.0 Host bridge: Intel Corporation 82G35 Express DRAM Controller (rev 03)
    00:01.0 PCI bridge: Intel Corporation 82G35 Express PCI Express Root Port (rev 03)
    00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
    00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
    00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02)
    00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02)
    00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02)
    00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02)
    00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02)
    00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 02)
    00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02)
    00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02)
    00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02)
    00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02)
    00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92)
    00:1f.0 ISA bridge: Intel Corporation 82801IR (ICH9R) LPC Interface Controller (rev 02)
    00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
    00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
    01:00.0 VGA compatible controller: NVIDIA Corporation GT216 [GeForce GT 220] (rev a2)
    01:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1)
    02:00.0 Ethernet controller: Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0)
    03:00.0 IDE interface: JMicron Technology Corp. JMB368 IDE controller
    05:00.0 Network controller: Broadcom Corporation BCM4321 802.11b/g/n (rev 01)
    05:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0)

    lshw:

    *-network
         description: Ethernet interface
         product: Attansic L1 Gigabit Ethernet
         vendor: Qualcomm Atheros
         physical id: 0
         bus info: pci@0000:02:00.0
         logical name: eth0
         version: b0
         serial: 00:22:15:00:a8:12
         capacity: 1Gbit/s
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
         configuration: autonegotiation=on broadcast=yes driver=atl1 driverversion=2.1.3 latency=0 link=no multicast=yes port=twisted pair
         resources: irq:46 memory:feac0000-feafffff memory:feaa0000-feabffff
    *-network
         description: Wireless interface
         product: BCM4321 802.11b/g/n
         vendor: Broadcom Corporation
         physical id: 0
         bus info: pci@0000:05:00.0
         logical name: eth1
         version: 01
         serial: 00:23:69:d8:2b:16
         width: 32 bits
         clock: 33MHz
         capabilities: bus_master ethernet physical wireless
         configuration: broadcast=yes driver=wl0 driverversion=6.30.223.141 (r415941) latency=64 multicast=yes wireless=IEEE 802.11abg
         resources: irq:16 memory:febfc000-febfffff

    ifconfig:

    eth0      Link encap:Ethernet  HWaddr 00:22:15:00:a8:12
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    eth1      Link encap:Ethernet  HWaddr 00:23:69:d8:2b:16
              inet6 addr: fe80::223:69ff:fed8:2b16/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:16 Base address:0xc000

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:24 errors:0 dropped:0 overruns:0 frame:0
              TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:1856 (1.8 KB)  TX bytes:1856 (1.8 KB)

    iwconfig:

    eth1      IEEE 802.11abg  ESSID:off/any
              Mode:Managed  Access Point: Not-Associated  Tx-Power=200 dBm
              Retry long limit:7  RTS thr:off  Fragment thr:off
              Encryption key:off
              Power Management:off

    iwlist scan:

    eth1      No scan results

    And here are the same commands AFTER I open 'Additional Drivers' and Firefox. The lspci output and the Ethernet section of lshw are identical to the run above, so I'll show the parts that changed:

    lshw (wireless section):

    *-network
         description: Wireless interface
         product: BCM4321 802.11b/g/n
         vendor: Broadcom Corporation
         physical id: 0
         bus info: pci@0000:05:00.0
         logical name: eth1
         version: 01
         serial: 00:23:69:d8:2b:16
         width: 32 bits
         clock: 33MHz
         capabilities: bus_master ethernet physical wireless
         configuration: broadcast=yes driver=wl0 driverversion=6.30.223.141 (r415941) ip=192.168.1.103 latency=64 multicast=yes wireless=IEEE 802.11abg
         resources: irq:16 memory:febfc000-febfffff

    ifconfig:

    eth0      Link encap:Ethernet  HWaddr 00:22:15:00:a8:12
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    eth1      Link encap:Ethernet  HWaddr 00:23:69:d8:2b:16
              inet addr:192.168.1.103  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::223:69ff:fed8:2b16/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:85 errors:0 dropped:0 overruns:0 frame:11901
              TX packets:132 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:52641 (52.6 KB)  TX bytes:19058 (19.0 KB)
              Interrupt:16 Base address:0xc000

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:76 errors:0 dropped:0 overruns:0 frame:0
              TX packets:76 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:6084 (6.0 KB)  TX bytes:6084 (6.0 KB)

    iwconfig:

    eth1      IEEE 802.11abg  ESSID:"BU"
              Mode:Managed  Frequency:2.447 GHz  Access Point: 00:26:F2:1F:81:02
              Bit Rate=54 Mb/s  Tx-Power=200 dBm
              Retry long limit:7  RTS thr:off  Fragment thr:off
              Encryption key:off
              Power Management:off
              Link Quality=59/70  Signal level=-51 dBm
              Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
              Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    iwlist scan:

    eth1      A LOT OF SSIDs FOUND!

    I'd like to get this problem fixed, but I'm not quite sure where to go. I've been Googling a lot and can't seem to find anyone else with this problem.

    Read the article

  • New Features in ASP.NET Web API 2 - Part I

    - by dwahlin
    I’m a big fan of ASP.NET Web API. It provides a quick yet powerful way to build RESTful HTTP services that can easily be consumed by a variety of clients. While it’s simple to get started using, it has a wealth of features such as filters, formatters, and message handlers that can be used to extend it when needed. In this post I’m going to provide a quick walk-through of some of the key new features in version 2. I’ll focus on some two of my favorite features that are related to routing and HTTP responses and cover additional features in a future post.   Attribute Routing Routing has been a core feature of Web API since it’s initial release and something that’s built into new Web API projects out-of-the-box. However, there are a few scenarios where defining routes can be challenging such as nested routes (more on that in a moment) and any situation where a lot of custom routes have to be defined. For this example, let’s assume that you’d like to define the following nested route:   /customers/1/orders   This type of route would select a customer with an Id of 1 and then return all of their orders. Defining this type of route in the standard WebApiConfig class is certainly possible, but it isn’t the easiest thing to do for people who don’t understand routing well. Here’s an example of how the route shown above could be defined:   public static class WebApiConfig { public static void Register(HttpConfiguration config) { config.Routes.MapHttpRoute( name: "CustomerOrdersApiGet", routeTemplate: "api/customers/{custID}/orders", defaults: new { custID = 0, controller = "Customers", action = "Orders" } ); config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); GlobalConfiguration.Configuration.Formatters.Insert(0, new JsonpFormatter()); } } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; }   With attribute based routing, defining these types of nested routes is greatly simplified. To get started you first need to make a call to the new MapHttpAttributeRoutes() method in the standard WebApiConfig class (or a custom class that you may have created that defines your routes) as shown next:   public static class WebApiConfig { public static void Register(HttpConfiguration config) { // Allow for attribute based routes config.MapHttpAttributeRoutes(); config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); } } Once attribute based routes are configured, you can apply the Route attribute to one or more controller actions. 
Here’s an example:   [HttpGet] [Route("customers/{custId:int}/orders")] public List<Order> Orders(int custId) { var orders = _Repository.GetOrders(custId); if (orders == null) { throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound)); } return orders; }   This example maps the custId route parameter to the custId parameter in the Orders() method and also ensures that the route parameter is typed as an integer. The Orders() method can be called using the following route: /customers/2/orders   While this is extremely easy to use and gets the job done, it doesn’t include the default “api” string on the front of the route that you might be used to seeing. You could add “api” in front of the route and make it “api/customers/{custId:int}/orders” but then you’d have to repeat that across other attribute-based routes as well. To simply this type of task you can add the RoutePrefix attribute above the controller class as shown next so that “api” (or whatever the custom starting point of your route is) is applied to all attribute routes: [RoutePrefix("api")] public class CustomersController : ApiController { [HttpGet] [Route("customers/{custId:int}/orders")] public List<Order> Orders(int custId) { var orders = _Repository.GetOrders(custId); if (orders == null) { throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound)); } return orders; } }   There’s much more that you can do with attribute-based routing in ASP.NET. Check out the following post by Mike Wasson for more details.   Returning Responses with IHttpActionResult The first version of Web API provided a way to return custom HttpResponseMessage objects which were pretty easy to use overall. However, Web API 2 now wraps some of the functionality available in version 1 to simplify the process even more. A new interface named IHttpActionResult (similar to ActionResult in ASP.NET MVC) has been introduced which can be used as the return type for Web API controller actions. To return a custom response you can use new helper methods exposed through ApiController such as: Ok NotFound Exception Unauthorized BadRequest Conflict Redirect InvalidModelState Here’s an example of how IHttpActionResult and the helper methods can be used to cleanup code. This is the typical way to return a custom HTTP response in version 1:   public HttpResponseMessage Delete(int id) { var status = _Repository.DeleteCustomer(id); if (status) { return new HttpResponseMessage(HttpStatusCode.OK); } else { throw new HttpResponseException(HttpStatusCode.NotFound); } } With version 2 we can replace HttpResponseMessage with IHttpActionResult and simplify the code quite a bit:   public IHttpActionResult Delete(int id) { var status = _Repository.DeleteCustomer(id); if (status) { //return new HttpResponseMessage(HttpStatusCode.OK); return Ok(); } else { //throw new HttpResponseException(HttpStatusCode.NotFound); return NotFound(); } } You can also cleanup post (insert) operations as well using the helper methods. 
Here’s a version 1 post action:   public HttpResponseMessage Post([FromBody]Customer cust) { var newCust = _Repository.InsertCustomer(cust); if (newCust != null) { var msg = new HttpResponseMessage(HttpStatusCode.Created); msg.Headers.Location = new Uri(Request.RequestUri + newCust.ID.ToString()); return msg; } else { throw new HttpResponseException(HttpStatusCode.Conflict); } } This is what the code looks like in version 2:   public IHttpActionResult Post([FromBody]Customer cust) { var newCust = _Repository.InsertCustomer(cust); if (newCust != null) { return Created<Customer>(Request.RequestUri + newCust.ID.ToString(), newCust); } else { return Conflict(); } } More details on IHttpActionResult and the different helper methods provided by the ApiController base class can be found here. Conclusion Although there are several additional features available in Web API 2 that I could cover (CORS support for example), this post focused on two of my favorites features. If you have .NET 4.5.1 available then I definitely recommend checking the new features out. Additional articles that cover features in ASP.NET Web API 2 can be found here.

    Read the article

  • Silverlight Recruiting Application Part 4 - Navigation and Modules

    After our brief intermission (and the craziness of Q1 2010 release week), we're back on track here and today we get to dive into how we are going to navigate through our applications as well as how to set up our modules. That way, as I start adding the functionality- adding Jobs and Applicants, Interview Scheduling, and finally a handy Dashboard- you'll see how everything is communicating back and forth. This is all leading up to an eventual webinar, in which I'll dive into this process and give a honest look at the current story for MVVM vs. Code-Behind applications. (For a look at the future with SL4 and a little thing called MEF, check out what Ross is doing over at his blog!) Preamble... Before getting into really talking about this app, I've done a little bit of work ahead of time to create a ton of files that I'll need. Since the webinar is going to cover the Dashboard, it's not here, but otherwise this is a look at what the project layout looks like (and remember, this is both projects since they share the .Web): So as you can see, from an architecture perspective, the code-behind app is much smaller and more streamlined- aka a better fit for the one man shop that is me. Each module in the MVVM app has the same setup, which is the Module class and corresponding Views and ViewModels. Since the code-behind app doesn't need a go-between project like Infrastructure, each MVVM module is instead replaced by a single Silverlight UserControl which will contain all the logic for each respective bit of functionality. My Very First Module Navigation is going to be key to my application, so I figured the first thing I would setup is my MenuModule. First step here is creating a Silverlight Class Library named MenuModule, creatingthe View and ViewModel folders, and adding the MenuModule.cs class to handle module loading. The most important thing here is that my MenuModule inherits from IModule, which runs an Initialize on each module as it is created that, in my case, adds the views to the correct regions. Here's the MenuModule.cs code: public class MenuModule : IModule { private readonly IRegionManager regionManager; private readonly IUnityContainer container; public MenuModule(IUnityContainer container, IRegionManager regionmanager) { this.container = container; this.regionManager = regionmanager; } public void Initialize() { var addMenuView = container.Resolve<MenuView>(); regionManager.Regions["MenuRegion"].Add(addMenuView); } } Pretty straightforward here... We inject a container and region manager from Prism/Unity, then upon initialization we grab the view (out of our Views folder) and add it to the region it needs to live in. Simple, right? When the MenuView is created, the only thing in the code-behind is a reference to the set the MenuViewModel as the DataContext. I'd like to achieve MVVM nirvana and have zero code-behind by placing the viewmodel in the XAML, but for the reasons listed further below I can't. Navigation - MVVM Since navigation isn't the biggest concern in putting this whole thing together, I'm using the Button control to handle different options for loading up views/modules. There is another reason for this- out of the box, Prism has command support for buttons, which is one less custom command I had to work up for the functionality I would need. 
    This comes from the Microsoft.Practices.Composite.Presentation assembly and looks as follows when put in code:

        <Button x:Name="xGoToJobs"
                Style="{StaticResource menuStyle}"
                Content="Jobs"
                cal:Click.Command="{Binding GoModule}"
                cal:Click.CommandParameter="JobPostingsView" />

    For quick reference, 'menuStyle' is just taking care of margins and spacing; otherwise it looks, feels, and functions like everyone's favorite Button. What MVVM's this up is that the Click.Command is tying to a DelegateCommand (also coming from Prism) on the back end. This setup allows you to tie user interaction to a command you set up in your viewmodel, which replaces the standard event-based setup you'd see in the code-behind app. Due to databinding magic, it all just works. When we get looking at the DelegateCommand in code, it ends up like this:

        public class MenuViewModel : ViewModelBase
        {
            private readonly IRegionManager regionManager;

            public DelegateCommand<object> GoModule { get; set; }

            public MenuViewModel(IRegionManager regionmanager)
            {
                this.regionManager = regionmanager;
                this.GoModule = new DelegateCommand<object>(this.goToView);
            }

            public void goToView(object obj)
            {
                MakeMeActive(this.regionManager, "MainRegion", obj.ToString());
            }
        }

    Again for reference, ViewModelBase takes care of INotifyPropertyChanged and MakeMeActive, which switches views in the MainRegion based on the parameters. So our public DelegateCommand GoModule ties to our command on the view; that in turn calls goToView, and the parameter on the button is the name of the view (which we pass with obj.ToString()) to activate. And how do the views get the names I can pass as a string? When I called regionManager.Regions[regionname].Add(view), there is an overload that allows for .Add(view, "viewname"), with viewname being what I use to activate views. You'll see that in action next installment; I just wanted to clarify how that works. With this setup, I create two more buttons in my MenuView and the MenuModule is good to go. Last step is to make sure my MenuModule loads in my Bootstrapper:

        protected override IModuleCatalog GetModuleCatalog()
        {
            ModuleCatalog catalog = new ModuleCatalog();
            // add modules here
            catalog.AddModule(typeof(MenuModule.MenuModule));
            return catalog;
        }

    Clean, simple, MVVM-delicious. Navigation - Code-Behind Keeping with the history of significantly shorter code-behind sections of this series, Navigation will be no different. I promise. As I explained in a prior post, due to the one-project setup I don't have to worry about the same concerns, so my menu is part of MainPage.xaml. I can cheese it a bit here, though: since I've already got three buttons all set, I'm just copying that code and adding three click events instead of the command/commandparameter setup:

        <!-- Menu Region -->
        <StackPanel Grid.Row="1" Orientation="Vertical">
            <Button x:Name="xJobsButton" Content="Jobs"
                    Style="{StaticResource menuStyleCB}" Click="xJobsButton_Click" />
            <Button x:Name="xApplicantsButton" Content="Applicants"
                    Style="{StaticResource menuStyleCB}" Click="xApplicantsButton_Click" />
            <Button x:Name="xSchedulingModule" Content="Scheduling"
                    Style="{StaticResource menuStyleCB}" Click="xSchedulingModule_Click" />
        </StackPanel>

    Simple, easy-to-use events, and no extra assemblies required! Since the code for loading each view will be similar, we'll focus on JobsView for now. The code-behind with this setup looks something like...
        private JobsView _jobsView;

        public MainPage()
        {
            InitializeComponent();
        }

        private void xJobsButton_Click(object sender, RoutedEventArgs e)
        {
            if (MainRegion.Content.GetType() != typeof(JobsView))
            {
                if (_jobsView == null)
                    _jobsView = new JobsView();
                MainRegion.Content = _jobsView;
            }
        }

    What am I doing here? First, for each 'view' I create a private reference which MainPage will hold on to. This allows for a little bit of state maintenance when switching views. When a button is clicked, first we make sure the 'view' type isn't active (why load it again if it is already at center stage?), then we check if the view has been created and create it if necessary, then load it up. Three steps to switching views, easy as pie. Part 4 Results The end result of all this is that I now have a menu module (MVVM) and a menu section (code-behind) that load their respective views. Since I'm using the same exact XAML (except with commands/events depending on the project), the end result for both is again exactly the same, and I'll show a slightly larger image to show it off: Next time, we add the Jobs Module and wire up RadGridView and a separate edit page to handle adding and editing new jobs. That's when things get fun. And somewhere down the line, I'll make the menu look slicker. :)
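    Since ViewModelBase does a fair amount of lifting in the MVVM version, here is a rough sketch of what such a base class could look like. The article never shows its implementation, so treat every detail below as an assumption rather than the author's actual code.

        using System.ComponentModel;
        using Microsoft.Practices.Composite.Regions;

        // A plausible ViewModelBase; the article only describes its responsibilities
        // (INotifyPropertyChanged plumbing plus MakeMeActive), so this is a sketch.
        public abstract class ViewModelBase : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }

            // Activates the view registered under viewName inside the named region;
            // this relies on the .Add(view, "viewname") overload mentioned above.
            protected void MakeMeActive(IRegionManager regionManager, string regionName, string viewName)
            {
                var region = regionManager.Regions[regionName];
                var view = region.GetView(viewName);
                if (view != null)
                    region.Activate(view);
            }
        }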

    Read the article

  • CodePlex Daily Summary for Sunday, June 06, 2010

    CodePlex Daily Summary for Sunday, June 06, 2010

    New Projects
    - Active Worlds Dot Net Wrapper (Based on AwSdk): Active Worlds Dot Net Wrapper (Based on AwSdk)
    - Combina: Smart calculator for large combinatorial calculations.
    - Concurrent Cache: ConcurrentCache is a smart output cache library extending OutputCacheProvider. It consists of in memory, cache files and compressed files modes and...
    - Decay: Personal use. For learning
    - FazTalk: FazTalk is a suite of tools and products that are designed to improve collaboration and workflow interactions. FazTalk takes an innovative approach...
    - grouped: A peer to peer text editor, written in C# [update] I wrote this little thing a while back and even forgot about it, I stopped coding for more tha...
    - HitchARide MVC 2 Sample: An MVC 2 sample written as part of the Microsoft 2010 London Web Camp based on the wireframes at http://schematics.earthware.co.uk/hitcharide. Not...
    - Inspiration.Web: Description: A simple (but entertaining) ASP.NET MVC (C#) project to suggest random code names for projects. Intended audience: People who ne...
    - NetFileBrowser - TinyMCE: tinyMCE file plugin with asp.net
    - Oil Slick Live Feeds: All live feeds from BP's Remotely Operated Vehicles
    - Particle Lexer: Parser and Tokenizer library
    - Pdf Form Tool: Pdf Form Tool demonstrates how the iTextSharp library could be used to fill PDF forms. The input data is provided as a csv file. The application ...
    - Planning Poker Windows Mobile 7: This project is a Planning Poker application for Windows Mobile 7 (and later?).
    - RandomRat: RandomRat is a program for generating random sets that meet specific criteria
    - Science.NET: A scientific library written in managed code. It supports advanced mathematics (algebra system, sequences, statistics, combinatorics...), data stru...
    - Spider Compiler: Spider Compiler parses the input of a spider programming source file and compiles it (with help of csc.exe; the C#-Compiler) to an exe-file. This p...
    - Sununpro: sunun's project for study by team foundation server.
    - TFS Buddy: An application that manipulates your I-Buddy whenever something happens in your Team Foundation Server
    - ValveSoft: ValveSys
    - WiiMote Physics: WiiMote Physics is an application that allows you to retrieve data from your WiiMote or Balance Board and display it in real-time. It has a number...
    - WinGet: WinGet is a download manager for Windows. You can drag links onto the WinGet Widget and it will download a file on the selected folder. It is dev...
    - XProject.NET: A project management and team collaboration platform

    New Releases
    - .NET DiscUtils: Version 0.9 Preview: This release is still under development. New features available in this release: Support for accessing short file names stored in WIM files Incr...
    - Active Worlds Dot Net Wrapper (Based on AwSdk): Active World Dot Net Wrapper (0.0.1.85): Based on AwSdk 85 AwSdk UnOfficial Wrapper Howto Use: C# using AwWrapper; VB.Net Import AwWrapper
    - AjaxControlToolkit additional extenders: ZhecheAjaxControls for .NET3.5: Used AJAX Control Toolkit Release Notes - April 12th 2010 Release Version 40412. Fixed deadlock in long operation canceling Some other fixes
    - AnyCAD: AnyCAD.v1.2.ENU.Install: http://www.anycad.net Parametric Modeling *3D: Sphere, Box, Cylinder, Cone •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extr...
    - Community Forums NNTP bridge: Community Forums NNTP Bridge V29: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge.
      This release has ad...
    - Concurrent Cache: 1.0: This is the first release for the ConcurrentCache library.
    - Configuration Section Designer: 2.0.0: This is the first Beta Release for VS 2010 support
    - Doxygen Browser Addin for VS: Doxygen Browser Addin - v0.1.4 Beta: Support for Visual Studio 2010 improved the logging of errors (Event Logs) Fixed some issues/bugs Hot key for navigation "Control + F1, Contr...
    - Folder Bookmarks: Folder Bookmarks 1.6.2: The latest version of Folder Bookmarks (1.6.2), with new UI changes. Once you have extracted the file, do not delete any files/folders. They are n...
    - HERB.IQ: Beta 0.1 Source code release 5: Beta 0.1 Source code release 5
    - Inspiration.Web: Initial release (deployment package): Initial release (deployment package)
    - NetFileBrowser - TinyMCE: Demo Project: Demo Project
    - NetFileBrowser - TinyMCE: NetFileBrowser: NetImageBrowser
    - NLog - Advanced .NET Logging: Nightly Build 2010.06.05.001: Changes since the last build: 2010-06-04 23:29:42 Jarek Kowalski Massive update to documentation generator. 2010-05-28 15:41:42 Jarek Kowalski upda...
    - Oil Slick Live Feeds: Oil Slick Live Feeds 0.1: The first release, with feeds from the MS Skandi, Boa Deep C, Enterprise and Q4000. They are live streams from the ROV's monitoring the damaged...
    - Pcap.Net: Pcap.Net 0.7.0 (46671): Pcap.Net - June 2010 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It features almost all WinPcap features and includes...
    - sqwarea: Sqwarea 0.0.289.0 (alpha): API support
    - TFS Buddy: TFS Buddy First release (Beta 1): This is the first release of the TFS Buddy.
    - Visual Studio DSite: Looping Animation (Visual C++ 2008): A soldier firing a bullet that loops and displays an explosion every time it hits the edge of the form.
    - WiiMote Physics: WiiMote Physics v4.0: v4.0.0.1 Recovered from existing compiled assembly after hard drive failure Now requires .NET 4.0 (it seems to make it run faster) Added new c...
    - WinGet: Alpha 1: First Alpha of WinGet. It includes all the planned features but it contains many bugs. Packaged using 7-Zip and ClickOnce.

    Most Popular Projects
    - WBFS Manager
    - Rawr
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - PHPExcel
    - patterns & practices – Enterprise Library
    - Microsoft SQL Server Community & Samples
    - ASP.NET

    Most Active Projects
    - Community Forums NNTP bridge
    - Rawr
    - patterns & practices – Enterprise Library
    - GMap.NET - Great Maps for Windows Forms & Presentation
    - N2 CMS
    - Ionics Isapi Rewrite Filter
    - StyleCop
    - smark C# Library
    - Farseer Physics Engine
    - patterns & practices: Composite WPF and Silverlight

    Read the article

  • ASP.NET MVC 3 Hosting :: Rolling with Razor in MVC v3 Preview

    - by mbridge
    Razor is an alternate view engine for ASP.NET MVC. It was introduced in the "WebMatrix" tool and has now been released as part of the ASP.NET MVC 3 Preview 1. Basically, Razor allows us to replace the clunky <% %> syntax with a much cleaner coding model, which integrates very nicely with HTML. Additionally, it provides some really nice features for master page type scenarios, and you don't lose access to any of the features you are currently familiar with, such as HTML helper methods. First, download and install the ASP.NET MVC Preview 1. You can find this at http://www.microsoft.com/downloads/details.aspx?FamilyID=cb42f741-8fb1-4f43-a5fa-812096f8d1e8&displaylang=en. Now, follow these steps to create your first ASP.NET MVC project using Razor:
    1. Open Visual Studio 2010
    2. Create a new project. Select File->New->Project (Shift Control N)
    3. You will see the list of project types, which should look similar to what's shown:
    4. Select "ASP.NET MVC 3 Web Application (Razor)." Set the application name to RazorTest and the path to c:\projects\RazorTest for this tutorial. If you accidentally select ASPX, you will end up with the standard ASP.NET view engine and template, which isn't what you want.
    5. For this tutorial, and ONLY for this tutorial, select "No, do not create a unit test project." In general, you should create and use a unit test project. Code without unit tests is kind of like diet ice cream. It just isn't very good.
    Now, once we have this done, our brand new project will be created. In all likelihood, Visual Studio will leave you looking at the "HomeController.cs" class, as shown below: Immediately, you should notice one difference. The Index action used to look like:

        public ActionResult Index()
        {
            ViewData["Message"] = "Welcome to ASP.NET MVC!";
            return View();
        }

    While this will still compile and run just fine, ASP.NET MVC 3 has a much nicer way of doing this:

        public ActionResult Index()
        {
            ViewModel.Message = "Welcome to ASP.NET MVC!";
            return View();
        }

    Instead of using ViewData we are using the new ViewModel object, which uses the new dynamic typing of .NET 4.0 to allow us to express ourselves much more cleanly. This isn't a tutorial on ALL of MVC 3, but the ViewModel concept is one we will need as we dig into Razor. What comes in the box? When we create a project using the ASP.NET MVC 3 template with Razor, we get a standard project setup, just like we did in ASP.NET MVC 2.0 but with some differences. Instead of seeing ".aspx" view files and ".ascx" files, we see files with ".cshtml", which is the default Razor extension. Before we discuss the details of a Razor file, one thing to keep in mind is that since this is an extremely early preview, IntelliSense is not currently enabled with the Razor view engine. This is promised as an update before the final release. Just like with the aspx view engine, the convention of the folder name for a set of views matching the controller name without the word "Controller" still stands. Similarly, each action in the controller will usually have a corresponding view file in the appropriate view directory. Remember, in ASP.NET MVC, convention over configuration is key to successful development! The initial template organizes views in the following folders, located in the project under Views:
    - Account – The default account management views used by the Account controller. Each file represents a distinct view.
    - Home – Views corresponding to the appropriate actions within the home controller.
    - Shared – This contains common view objects used by multiple views. Within here, master pages are stored, as well as partial page views (user controls). By convention, these partial views are named "_XXXPartial.cshtml" where XXX is the appropriate name, such as _LogonPartial.cshtml. Additionally, display templates are stored under here.
    With this in mind, let us take a look at the index.cshtml file under the home view directory. When you open up index.cshtml you should see:

        1:  @inherits System.Web.Mvc.WebViewPage
        2:  @{
        3:      View.Title = "Home Page";
        4:      LayoutPage = "~/Views/Shared/_Layout.cshtml";
        5:  }
        6:  <h2>@View.Message</h2>
        7:  <p>
        8:      To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC
        9:      Website">http://asp.net/mvc</a>.
        10: </p>

    So looking through this, we observe the following facts: Line 1 imports the base page that all views (using Razor) are based on, which is System.Web.Mvc.WebViewPage. Note that this is different than System.Web.Mvc.ViewPage, which is used by ASP.NET MVC 2.0. Also note that instead of the <% %> syntax, we use the very simple '@' sign. The view engine contains enough context-sensitive logic that it can even distinguish between @ in code and @ in an email address. It's very clean markup. Line 2 introduces the idea of a code block in Razor. A code block is a scoping mechanism just like it is in a normal C# class. It is designated by @{ ... } and any C# code can be placed in between. Note that this is all server-side code, just like it is when using the aspx engine and <% %>. Line 3 allows us to set the page title in the client page's file. This is a new feature which I'll talk more about when we get to master pages, but it is another of the nice things Razor brings to ASP.NET MVC development. Line 4 is where we specify our "master" page, but as you can see, you can place it almost anywhere you want, because you tell it where it is located. A layout page is similar to a master page, but it gains a bit when it comes to flexibility. Again, we'll come back to this in a later installment. Line 6 and beyond is where we display the contents of our view. No more using <%: %> intermixed with code. Instead, we get to use very clean syntax such as @View.Message. This is a lot easier to read than <%: View.Message %>, especially when intermixed with HTML. For example:

        <p>
            My name is @View.Name and I live at @View.Address
        </p>

    Compare this to the equivalent using the aspx view engine:

        <p>
            My name is <%: View.Name %> and I live at <%: View.Address %>
        </p>

    While not an earth-shaking simplification, it is easier on the eyes. As we explore other features, this clean markup will become more and more valuable.
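    To see that context-sensitive @ handling in action, here is a small sketch of my own (the data and markup are invented, not from the article) mixing a C# loop with a literal email address:

        @* A sketch illustrating Razor's context-sensitive parsing; names are invented. *@
        @{ var names = new[] { "Ada", "Grace", "Linus" }; }
        <ul>
            @foreach (var name in names) {
                <li>@name - contact us at support@example.com</li>
            }
        </ul>

    Razor evaluates @name as code, but leaves support@example.com alone because that @ matches an email pattern rather than a code transition.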

    Read the article

  • The DOS DEBUG Environment

    - by MarkPearl
    Today I thought I would go back in time and have a look at the DEBUG command that has been available since the dawn of DOS, MS-DOS and Microsoft Windows. Up to today I always knew it was there, but had no clue on how to use it, so for those that are interested, this might be a great geek party trick to pull out when you want to awe the younger generation and show them what "real" programming is about. But wait, you will have to do it relatively quickly, as it seems like DEBUG was finally dumped from Windows in Windows 7. Not to worry, pull out that Windows XP box (which will get you even more geek points) and you can still poke DEBUG a bit. So, for those that are interested and want to find out a bit about the history of DEBUG, read the wiki link here. That all put aside, let's get our hands dirty... How to Start DEBUG in Windows
    1. Make sure your version of Windows supports DEBUG.
    2. Open up a console window.
    3. Make a directory where you want to play with DEBUG – in my instance I called it C221.
    4. Enter the directory and type Debug.
    You will get a response with a – as illustrated in the image below… The commands available in DEBUG There are several commands available in DEBUG. The most common ones are:
    - A (Assemble)
    - R (Register)
    - T (Trace)
    - G (Go)
    - D (Dump or Display)
    - U (Unassemble)
    - E (Enter)
    - P (Proceed)
    - N (Name)
    - L (Load)
    - W (Write)
    - H (Hexadecimal)
    - I (Input)
    - O (Output)
    - Q (Quit)
    I am not going to cover all these commands, but what I will do is go through a few of them briefly. A is for Assemble Command (to write code) The A command translates assembly language statements into machine code. It is quite useful for writing small assembly programs. Below I have written a very basic assembly program. The code typed out is as follows:

        mov ax,0015
        mov cx,0023
        sub cx,ax
        mov [120],al
        mov cl,[120]
        nop

    R is for Register (to jump to a point in memory) The r command turns out to be one of the most frequent commands you will use in DEBUG. It allows you to view the contents of registers and to change their values. It can be used with the following combinations:
    - R – Displays the contents of all the registers
    - R f – Displays the flags register
    - R register_name – Displays the contents of a specific register
    All three methods are illustrated in the image above. T is for Trace (to execute a program step by step) The t command allows us to execute the program step by step. Before we can trace the program we need to point back to the beginning of the program. We do this by typing in r ip, which moves us back to memory point 100. We then type t (trace), which executes the first line of code (line 100), as shown in the image below starting from the red arrow. You can see from the above image that the register AX now contains 0015, as per our instruction mov ax,0015. You can also see that IP points to line 0103, which has the MOV CX,0023 command. If we type t again it will now execute the second line of the program, which moves 23 into the cx register. Again, we can see that the line of code was executed and that the CX register now holds the value of 23. What I would like to highlight now is the section underlined in red. These are the status flags.
    The ones we are going to look at now are the 1st (NV), 4th (PL), 5th (NZ) & 8th (NC):
    - NV means No oVerflow; the alternate would be OV.
    - PL means that the sign of the previous arithmetic operation was Plus; the alternate would be NG (Negative).
    - NZ means that the result of the previous arithmetic operation was Not Zero; the alternate would be ZR.
    - NC means that No final Carry resulted from the previous arithmetic operation; CY means that there was a final Carry.
    We could now follow this process of entering the t command until the entire program is executed line by line. G is for Go (to execute a program up to a certain line number) So we have looked at executing a program line by line, which is fine if your program is minuscule, BUT totally impractical if we have any decent-sized program. A quicker way to run some lines of code is to use the G command. The g command executes a program up to a certain specified point, and it can be used in connection with the reset IP command: you would set your initial point and then run the G command with the line you want to end on. P is for Proceed (similar to trace but slightly more streamlined) Another command similar to trace is the proceed command. The p command works like trace, except that when it encounters a CALL, INT or LOOP instruction it executes the whole routine as a single step rather than stepping into it. In the example below I modified our example program to include an int 20 at the end of it, as illustrated in the image below… Then, when executing the code and encountering the int 20 command, I typed the P command and the program terminated normally (illustrated below). D is for Dump (or, for those more polite, Display) So, we have all these assembly lines of code, but if you have ever opened up an exe or com file in a text/hex editor, it looks nothing like assembly code. The D command is a way that we can see what our code looks like in memory (or in a hex editor). If we examine the image above, we can see that DEBUG is storing our assembly code with each instruction following immediately after the previous one. For instance, in memory address 110 we have int and in 111 we have 20. If we examine the dump of memory, we can see that at memory point 110 CD is stored and at memory point 111 20 is stored. U is for Unassemble (or convert machine code to assembly code) So up to now we have gone through a bunch of commands, but probably one of the most useful is the U command. Let's say we don't understand machine code so well, and instead we want to see its equivalent assembly code. We can type the U command followed by the start memory point, followed by the end memory point, and it will show us the assembly code equivalent of the machine code. E is for a bunch of things… The E command can be used for a bunch of things… One example is to enter data or machine code instructions directly into memory. It can also be used to display the contents of memory locations. I am not going to worry too much about it in this post. N / L / W is for Name, Load & Write So we have written our assembly code in DEBUG, and now we want to save it to disk, write it as a com file, or load it. This is where the N, L & W commands come in handy. The n command is used to give a name to the executable program file and is pretty simple to use. The w command is a bit trickier: it writes to disk the number of bytes held in bx and cx (bx holding the high part of the count, cx the low part), so you need to set the bx and cx values for it to write your code. Let's look at an example illustrated below.
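    Since the original screenshot does not reproduce here, the session below is a rough textual sketch of such a save; the file name and the byte count (10h, assuming the six-instruction program above ends at offset 10Fh) are my own, so adjust cx to match your own listing:

        -n first.com
        -r bx
        BX 0000
        :0
        -r cx
        CX 0000
        :10
        -w
        Writing 00010 bytes
        -q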
    You do this by calling the r command followed by either bx or cx. We can then go to the directory where we were working and will see the new file with the name we specified. The L command is relatively simple: you would first specify the name of the file you would like to load using the N command, and then call the L command. Q is for Quit The last command that I am going to write about in this post is the Q command. Simply put, calling the Q command exits DEBUG. Commands we did not cover Out of the standard DEBUG commands we covered A, T, G, D, U, E, P, R, N, L & W. The ones we did not cover were H, I & O – I might make mention of these in a later post, but for the basics they are not really needed. Some Useful Resources Please note this post is based on the COS2213 handouts for UNISA. A Guide to DEBUG - http://mirror.href.com/thestarman/asm/debug/debug.htm#NT

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed new major features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care about how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And what if I need the file system, why can't I simply get a virtual filesystem? To illustrate my point, let's use a real example: a product website with a customer system/database and next to it a support site with accompanying database. Both are written in .NET, using ASP.NET, and each uses a SQL Server database. The product website offers files to download by customers, very simple. You have a couple of options to host these websites:
    - Buy a server, place it in a rack at an ISP and run the sites on that server
    - Use 'shared hosting' with an ISP, which means your sites' appdomains are running on the same machine, as well as the files stored, and the databases are hosted in the same server as the other shared databases
    - Hire a VM, install your OS of choice at an ISP, and host the sites on that VM; basically the same as the first option, except you don't have a physical server
    - At some cloud vendor, either host the sites 'shared' or in a VM. See above.
    With all of those options, scalability is a problem, even the cloud-based ones, though not due to the same reasons:
    - The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead
    - Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions
    - The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs; however, that too introduces the same overhead problems as with the physical servers: suddenly more than 1 instance runs your sites.
    If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there: when you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial, but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment will cause a problem: all the in-memory state they use, all the multi-page transitions they use while keeping state across the transition - they can't do that anymore like they did on a single machine. State is something of the past; you have to store every byte of state in either a DB, a viewstate or a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in-memory. Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM so we don't have to store the files on each VM.
    This might require different file paths, but this change should be minor. What's perhaps less minor is the maintenance procedure in place on the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different ways / tools. All in all, this makes moving an existing website which was written for an environment that's based around a VM (namely .NET with its CLR) overly cumbersome and problematic: it forces you to refactor your website system to be able to be used 'in the cloud', which is caused by the limited way in which e.g. Windows Azure offers its cloud services: in blocks of VMs. Offer a scalable, flexible VM which extends with my needs Instead, cloud vendors should offer simply one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned), I don't care, as that's the problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, like I bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all: I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM and I'm done. "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change to what they offer to make it look like it's Windows. Yet, they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble whether they would need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about, is that a low-level VM which runs a guest OS, or is that VM a different kind of VM? The flexible VM: .NET's CLR? My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system; however, this too is accessed through .NET. In short: all the websites see is what .NET allows the websites to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically, that's the concern of .NET, not mine. This begs the question why Windows Azure doesn't offer virtual appdomains. Or better: .NET environments which look like one machine but could be physically multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or own server to the cloud to get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain is simply increasing, but not fragmented: all resources are available to the application. This means that the problem of how to scale is back to where it should be: with the cloud vendor. "Yeah, great, but what about the databases?" The .NET application communicates with the database server through a .NET ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that.
    In other words: we can host the databases in an environment which offers itself as a single resource, is accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB. But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed. No refactoring needed of existing code. Upload it, run it. Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all; it simply offers a way to instantiate more VMs with the guest OS of choice at the cost of me needing to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run it: almost all websites created by developers can run on just a few VMs at most. Yet, the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is just as easy as clicking a mouse button. But that first step, that's the problem here, and as it's right there at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers to solve. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • Portal And Content - Content Integration - Best Practices

    - by Stefan Krantz
    Lately we have seen an increase in projects that have failed to achieve either user-friendly content integration or satisfactory performance. Our intention is to mitigate any knowledge gap that our previous post might have left you with; therefore this post will repeat some recommendations or reference back to older useful posts. Moreover, this post will help you understand, from the ground up, how to design, architect and implement business-enabled, responsive and performing portals with complex requirements on business-centric information publishing. Design the Information Model The key to successful portal deployments is information modeling. It's a key task to understand the use case you are designing for, therefore I have designed a set of questions you need to ask yourself or your customer:
    - Question: Who will own the content, IT or Business? Answer: Business
    - Question: Who will publish the content, IT or Business? Answer: Business
    - Question: Will there be multiple publishers? Answer: Yes
    - Question: Are the publishers computer scientists? Answer: No
    - Question: How often does the information change: daily, weekly, monthly? Answer: Daily, weekly
    If your answers match at least two of these, we strongly recommend you design your content with the following principles:
    - Divide your pages into logical sections, where each section is marked with its purpose
    - Assign capabilities to each section: does it contain text, images, formatting, and/or is it static and populated through other contextual information
    - Select the editor/design element type for each section:
      - WYSIWYG - Rich Text
      - Plain Text - non-formatted text
      - Image - Image object
      - Static List - static list of formatted information
      - Dynamic Data List - assembled information from multiple data files through a CMIS query
    The result of such a design map could look like the following examples: Based on the outcome of the required elements in the design (column 3 from the left), you will now simply design a data model in WebCenter Content - Site Studio by creating a Region Definition structure matching your design requirements. For more information on how to create a Region Definition see the following post: Region Definition Post - note: see instruction 7 for details. Each region definition can now be used to instantiate data files; a data file will hold the actual data for each element in the region definition. Another way you can see this is to compare the region definition to an extension of the metadata model in WebCenter Content for each data file item. Design content templates With a solid, dependable information model we can now proceed to template creation and page design; this phase focuses on how to place the content sections from the region definition on the page via a Content Presenter template. Remember, by creating Content Presenter templates you will leverage the latest and most integrated technology WebCenter has to offer. This phase is much easier since you already have the information model and design wireframes to base the logic on; however, there are still a few considerations to pay attention to:
    - Base the template on ADF and make only necessary exceptions to markup when required
    - Leverage ADF design components for Tabs, Accordions and other similar components; this way the design in the content-published areas will comply with other design areas based on custom ADF taskflows
    - There is no performance impact when using metadata or region-definition-based data
    - All data, regardless of type (metadata or XML data), can be accessed via the Content Presenter node.
    See below for applied examples of how to access data:
    - Access a metadata property from a Document - #{node.propertyMap['myProp'].value} (myProp in this example can be, for instance, dDocName, dDocTitle, xComments or any other available metadata)
    - Access element data from a data file's XML - #{node.propertyMap['[Region Definition Name]:[Element name]'].asTextHtml} (Region Definition Name is the region definition that the current data file instantiates; Element name is the element value you would like to grab from the data file)
    I recommend you read the following useful posts on the content template topic: CMIS queries and template creation - note: see instruction 9 for details; Static List template rendering. For more information on templates: Single Item Content Template; Multi Item Content Template; Expression Language. Internationalization Considerations When integrating content assets via the Content Presenter, you by now probably understand that the content item/data file is wired to the page. What is also pretty common at this stage is that the content item/data file only supports one language, since it's not practical or business-friendly to mix languages in one complex structure. Therefore you are left with a very common dilemma: build a completely new portal for each locale, which is not a good option! However, with a little bit of information modeling and a clear naming convention this can be addressed. Basically, you simply make sure that all content items/data files are named with a predictable naming convention like "Content1_EN" for the English rendition and "Content1_ES" for the Spanish rendition. This way, through simple, non-complex customizations, you will be able to dynamically switch the actual content item/data file just before rendering. By following the proposed approach you not only enable a simple mechanism for internationalized content, you also preserve the functionality of the Content Presenter to support business-accessible, run-time publishing of information on existing and new pages. I recommend you read the following useful post on internationalization topics: Internationalize with Content Presenter. Integrate with Review & Approval processes Today the review and approval functionality and configuration is based on WebCenter Content - Criteria Workflows. Criteria Workflows use the metadata of the checked-in document to evaluate whether the document is under any review/approval process. So, for instance, if a Criteria Workflow is configured to trigger on any document with Version = "2" or higher and Content Type = "Instructions", any matching content item version on check-in will enter the workflow before getting released for general access. A few things to consider when configuring Criteria Workflows:
    - Make sure not to trigger on version one for content items that are data files - if you trigger on version 1 you will not only approve an empty document, you will also have a Content Presenter pointing to a non-existent document, since the document will only be available after successful completion of the workflow
    - Approval workflows sometimes require more complex criteria; the recommendation in that case is that the metadata triggering such criteria is automatically populated, which can be achieved through many approaches, including Content Profiles
    - Criteria Workflows are configured and managed in the WebCenter Content Administration Applets, where you can configure one or more workflows.
    Once you have configured Criteria Workflows, the Content Presenter will support the editors with the approval process directly inline in the "Contribution mode" of the portal. In addition to approve/reject and the details of the task, the Content Presenter natively supports viewing the current and future version of the change being approved. See below for an example: Architectural recommendations
    - To support review & approval processes, minimize the number of data files per page
    - Each CMIS query can consume significant time depending on the complexity of the query, so minimize the number of CMIS queries per page
    - Use Content Presenter templates based on ADF - this way you minimize the design considerations and optimize the usage of caching
    - Implement the page with as few data files as possible - this simplifies the publishing and release processes and increases performance
    - A named data file (node), or a list of named nodes, integrated into a page performs better than querying for data
    - A named data file (node), or a list of named nodes, integrated into a page enables business-centric page creation and publishing and reduces the need for IT department interaction
    Summary Just because one architectural decision solves a business problem doesn't mean it's the right one; when designing portals, all architecture has to be in harmony, with no part impacting another. For instance, the most technically complex solution is not always the best, since it will most likely defeat business accessibility, performance, or both. Therefore the best approach is to first design for a simplicity that even a non-technical user can operate, then consider the performance impact, and finally look at the technology challenges these bring - working around them first with out-of-the-box features, and after that designing and developing functions to complement the shortcomings.
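    To make the EL accessors shown earlier concrete, a fragment of an ADF-based Content Presenter template might look like the sketch below. The region definition name (RD_NEWS) and element name (Body) are invented for illustration; only the node.propertyMap pattern and dDocTitle (standard WebCenter Content metadata) come from the material above.

        <!-- A sketch of a Content Presenter template fragment; RD_NEWS and Body
             are invented names used purely for illustration. -->
        <af:outputText value="#{node.propertyMap['dDocTitle'].value}"/>
        <af:outputText value="#{node.propertyMap['RD_NEWS:Body'].asTextHtml}"
                       escape="false"/>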

    Read the article

  • jBullet Collision/Physics not working correctly

    - by Kenneth Bray
    Below is the code for one of my objects in the game I am creating (yes, although this is a cube, I am not making anything remotely like Minecraft), and my issue is that while the cube will display, and it does follow the physics if the cube falls, it does not interact with any other objects in the game. If I were to have multiple cubes on screen at once, they would all just sit there, or shoot off in all directions, never stopping. Anyway, I am new to jBullet, and any help would be appreciated.

        package Object;

        import static org.lwjgl.opengl.GL11.GL_QUADS;
        import static org.lwjgl.opengl.GL11.glBegin;
        import static org.lwjgl.opengl.GL11.glColor3f;
        import static org.lwjgl.opengl.GL11.glEnd;
        import static org.lwjgl.opengl.GL11.glPopMatrix;
        import static org.lwjgl.opengl.GL11.glPushMatrix;
        import static org.lwjgl.opengl.GL11.glVertex3f;

        import javax.vecmath.Matrix4f;
        import javax.vecmath.Quat4f;
        import javax.vecmath.Vector3f;

        import com.bulletphysics.collision.shapes.BoxShape;
        import com.bulletphysics.collision.shapes.CollisionShape;
        import com.bulletphysics.dynamics.RigidBody;
        import com.bulletphysics.dynamics.RigidBodyConstructionInfo;
        import com.bulletphysics.linearmath.DefaultMotionState;
        import com.bulletphysics.linearmath.Transform;

        public class Cube {
            // Cube size/shape variables
            private float size;
            boolean cubeCollidable;
            boolean cubeDestroyable;

            // Position variables - currently this defines the center of the cube
            private float posX;
            private float posY;
            private float posZ;

            // Rotation variables - should be between 0 and 359, might consider letting
            // rotation go higher though I can't think of a purpose currently
            private float rotX;
            private float rotY;
            private float rotZ;

            // collision shape is a box shape
            CollisionShape fallShape;

            // setup the motion state for the ball
            DefaultMotionState fallMotionState;

            Vector3f fallInertia = new Vector3f(0, 1, 0);
            RigidBodyConstructionInfo fallRigidBodyCI;
            public RigidBody fallRigidBody;
            int mass = 1;

            // Constructor
            public Cube(float pX, float pY, float pZ, float pSize) {
                posX = pX;
                posY = pY;
                posZ = pZ;
                size = pSize;
                rotX = 0;
                rotY = 0;
                rotZ = 0;

                // define the physics based on the values passed in
                fallShape = new BoxShape(new Vector3f(size, size, size));
                fallMotionState = new DefaultMotionState(new Transform(
                        new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(0, 50, 0), 1f)));
                fallRigidBodyCI = new RigidBodyConstructionInfo(mass, fallMotionState, fallShape, fallInertia);
                fallRigidBody = new RigidBody(fallRigidBodyCI);
            }

            public void Update() {
                // pull the body's current world transform back into the render position
                Transform trans = new Transform();
                fallRigidBody.getMotionState().getWorldTransform(trans);
                posX = trans.origin.x;
                posY = trans.origin.y;
                posZ = trans.origin.z;
            }

            public void Draw() {
                fallShape.calculateLocalInertia(mass, fallInertia);

                // center point posX, posY, posZ
                float radius = size / 2;

                // top
                glPushMatrix();
                glBegin(GL_QUADS);
                {
                    glColor3f(1.0f, 0.0f, 0.0f); // red
                    glVertex3f(posX + radius, posY + radius, posZ - radius);
                    glVertex3f(posX - radius, posY + radius, posZ - radius);
                    glVertex3f(posX - radius, posY + radius, posZ + radius);
                    glVertex3f(posX + radius, posY + radius, posZ + radius);
                }
                glEnd();
                glPopMatrix();

                // bottom
                glPushMatrix();
                glBegin(GL_QUADS);
                {
                    glColor3f(1.0f, 1.0f, 0.0f); // ?? color
                    glVertex3f(posX + radius, posY - radius, posZ + radius);
                    glVertex3f(posX - radius, posY - radius, posZ + radius);
                    glVertex3f(posX - radius, posY - radius, posZ - radius);
                    glVertex3f(posX + radius, posY - radius, posZ - radius);
                }
                glEnd();
                glPopMatrix();

                // right side
                glPushMatrix();
                glBegin(GL_QUADS);
                {
                    glColor3f(1.0f, 0.0f, 1.0f); // ?? color
                    glVertex3f(posX + radius, posY + radius, posZ + radius);
                    glVertex3f(posX + radius, posY - radius, posZ + radius);
                    glVertex3f(posX + radius, posY - radius, posZ - radius);
                    glVertex3f(posX + radius, posY + radius, posZ - radius);
                }
                glEnd();
                glPopMatrix();

                // left side
                glPushMatrix();
                glBegin(GL_QUADS);
                {
                    glColor3f(0.0f, 1.0f, 1.0f); // ?? color
                    glVertex3f(posX - radius, posY + radius, posZ - radius);
                    glVertex3f(posX - radius, posY - radius, posZ - radius);
                    glVertex3f(posX - radius, posY - radius, posZ + radius);
                    glVertex3f(posX - radius, posY + radius, posZ + radius);
                }
                glEnd();
                glPopMatrix();

                // front side
                glPushMatrix();
                glBegin(GL_QUADS);
                {
                    glColor3f(0.0f, 0.0f, 1.0f); // blue
                    glVertex3f(posX + radius, posY + radius, posZ + radius);
                    glVertex3f(posX - radius, posY + radius, posZ + radius);
                    glVertex3f(posX - radius, posY - radius, posZ + radius);
                    glVertex3f(posX + radius, posY - radius, posZ + radius);
                }
                glEnd();
                glPopMatrix();

                // back side
                glPushMatrix();
                glBegin(GL_QUADS);
                {
                    glColor3f(0.0f, 1.0f, 0.0f); // green
                    glVertex3f(posX + radius, posY - radius, posZ - radius);
                    glVertex3f(posX - radius, posY - radius, posZ - radius);
                    glVertex3f(posX - radius, posY + radius, posZ - radius);
                    glVertex3f(posX + radius, posY + radius, posZ - radius);
                }
                glEnd();
                glPopMatrix();
            }
        }
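    For context, jBullet only resolves collisions between bodies that have been registered with the same dynamics world; the question's code never shows that step. A typical setup (a sketch using standard jBullet classes, not the asker's code) looks something like:

        // Typical jBullet world setup; bodies only collide once added to a world.
        BroadphaseInterface broadphase = new DbvtBroadphase();
        DefaultCollisionConfiguration collisionConfig = new DefaultCollisionConfiguration();
        CollisionDispatcher dispatcher = new CollisionDispatcher(collisionConfig);
        SequentialImpulseConstraintSolver solver = new SequentialImpulseConstraintSolver();
        DiscreteDynamicsWorld world = new DiscreteDynamicsWorld(dispatcher, broadphase, solver, collisionConfig);
        world.setGravity(new Vector3f(0, -10, 0));

        world.addRigidBody(cube.fallRigidBody); // one call per cube

        // each frame, advance the simulation before calling Update()/Draw():
        world.stepSimulation(1.0f / 60.0f, 10);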

    Read the article

  • Ardour won't start - JACK problem

    - by Drew S
    I downloaded Ardour yesterday; it worked, I edited an audio file, done. Coming back today, it won't start. I get this: Ardour could not start JACK There are several possible reasons: 1) You requested audio parameters that are not supported.. 2) JACK is running as another user. Please consider the possibilities, and perhaps try different parameters. So I try to look at qjackctl to see what's happening there. When I try to start JACK I get D-BUS: JACK server could not be started. then Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. and this is the message box in JACK.
    15:22:12.927 Patchbay deactivated.
    15:22:12.927 Statistics reset.
    15:22:12.944 ALSA connection change.
    15:22:12.951 D-BUS: Service is available (org.jackaudio.service aka jackdbus).
    Cannot connect to server socket err = No such file or directory
    Cannot connect to server request channel
    jack server is not running or cannot be started
    15:22:12.959 ALSA connection graph change.
    15:22:45.850 ALSA connection graph change.
    15:22:46.021 ALSA connection change.
    15:22:56.492 ALSA connection graph change.
    15:22:56.624 ALSA connection change.
    15:23:42.340 D-BUS: JACK server could not be started. Sorry
    Cannot connect to server socket err = No such file or directory
    Cannot connect to server request channel
    jack server is not running or cannot be started
    Wed Oct 23 15:23:42 2013: Starting jack server...
    Wed Oct 23 15:23:42 2013: JACK server starting in realtime mode with priority 10
    Wed Oct 23 15:23:42 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory)
    Wed Oct 23 15:23:42 2013: Acquired audio card Audio0
    Wed Oct 23 15:23:42 2013: creating alsa driver ... hw:0|hw:0|1024|2|44100|0|0|nomon|swmeter|-|32bit
    Wed Oct 23 15:23:42 2013: ERROR: ATTENTION: The playback device "hw:0" is already in use. The following applications are using your soundcard(s) so you should check them and stop them as necessary before trying to start JACK again: pulseaudio (process ID 2553)
    Wed Oct 23 15:23:42 2013: ERROR: Cannot initialize driver
    Wed Oct 23 15:23:42 2013: ERROR: JackServer::Open failed with -1
    Wed Oct 23 15:23:42 2013: ERROR: Failed to open server
    Wed Oct 23 15:23:43 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ...
    15:26:41.669 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info.
    Cannot connect to server socket err = No such file or directory
    Cannot connect to server request channel
    jack server is not running or cannot be started
    15:26:49.006 D-BUS: JACK server could not be started. Sorry
    Wed Oct 23 15:26:48 2013: Starting jack server...
    Wed Oct 23 15:26:48 2013: JACK server starting in non-realtime mode
    Wed Oct 23 15:26:48 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory)
    Cannot connect to server socket err = No such file or directory
    Cannot connect to server request channel
    jack server is not running or cannot be started
    Wed Oct 23 15:26:48 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0
    Wed Oct 23 15:26:48 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0
    Wed Oct 23 15:26:48 2013: ERROR: Audio device hw:0 cannot be acquired...
    Wed Oct 23 15:26:48 2013: ERROR: Cannot initialize driver
    Wed Oct 23 15:26:48 2013: ERROR: JackServer::Open failed with -1
    Wed Oct 23 15:26:48 2013: ERROR: Failed to open server
    Wed Oct 23 15:26:50 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ...
    15:26:52.441 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info.
    Cannot connect to server socket err = No such file or directory
    Cannot connect to server request channel
    jack server is not running or cannot be started
    15:26:55.997 D-BUS: JACK server could not be started. Sorry
    Wed Oct 23 15:26:55 2013: Starting jack server...
    Wed Oct 23 15:26:55 2013: JACK server starting in non-realtime mode
    Wed Oct 23 15:26:55 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory)
    Cannot connect to server socket err = No such file or directory
    Cannot connect to server request channel
    jack server is not running or cannot be started
    Wed Oct 23 15:26:55 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0
    Wed Oct 23 15:26:55 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0
    Wed Oct 23 15:26:55 2013: ERROR: Audio device hw:0 cannot be acquired...
    Wed Oct 23 15:26:55 2013: ERROR: Cannot initialize driver
    Wed Oct 23 15:26:55 2013: ERROR: JackServer::Open failed with -1
    Wed Oct 23 15:26:55 2013: ERROR: Failed to open server
    Wed Oct 23 15:26:57 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ...
    15:26:59.054 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info.
    Cannot connect to server socket err = No such file or directory
    Cannot connect to server request channel
    jack server is not running or cannot be started
    15:29:24.624 ALSA connection graph change.
    15:29:24.641 ALSA connection change.
    15:33:11.760 D-BUS: JACK server could not be started. Sorry
    Cannot connect to server socket err = No such file or directory
    Cannot connect to server request channel
    jack server is not running or cannot be started
    Wed Oct 23 15:33:11 2013: Starting jack server...
    Wed Oct 23 15:33:11 2013: JACK server starting in non-realtime mode
    Wed Oct 23 15:33:11 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory)
    Wed Oct 23 15:33:11 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0
    Wed Oct 23 15:33:11 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0
    Wed Oct 23 15:33:11 2013: ERROR: Audio device hw:0 cannot be acquired...
    Wed Oct 23 15:33:11 2013: ERROR: Cannot initialize driver
    Wed Oct 23 15:33:11 2013: ERROR: JackServer::Open failed with -1
    Wed Oct 23 15:33:11 2013: ERROR: Failed to open server
    Wed Oct 23 15:33:12 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ...
    15:34:09.439 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info.
    Cannot connect to server socket err = No such file or directory
    Cannot connect to server request channel
    jack server is not running or cannot be started

    Read the article

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
    We are in the process of upgrading our BizTalk environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all-new and more powerful WCF-SAP adaptor. When my colleagues tested out the new adaptor, they discovered that the format of the data extracted from SAP was not identical to the old adaptor's. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs, and both these structures are complex - especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from the original mapping. The idea of redoing these maps did not appeal and, due to the current workload, was not even an option. I opted for a rather crude alternative of pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema. Note the WCF-SAP data formats (on the Binding tab of the configuration dialog box this is the 'ReceiveIdocFormat' field):
    - Typed: Returns an XML document with the hierarchy represented in XML and all fields represented by XML tags.
    - RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format.
    - String: Returns the iDoc in a format that is closest to the original flat file format but is still wrapped with some top-level XML tags. The files also contained some strange characters at the end of each line.
    I started with the invoice document and it was quite straightforward to add the mapping, but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner to be able to correctly configure the orchestration. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted. The code to set a variable to the partner ID was now failing. After a lot of head scratching I discovered the problem was due to the addition of namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an XPath query with a namespace manager, which had to be done in custom code. I then tried to repeat the process with the delivery document. Unfortunately, when we tried to get sample typed data from SAP an exception was thrown: The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments. Our guess is that when the WCF-SAP adaptor tries to download the data it retrieves a data schema from SAP, and for some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way, resolving this problem did not look easy. When doing some research on this problem I found an article showing me how to get the data from SAP using the WCF-SAP adaptor without any XML tags. http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx Reproduction of Mustansir's blog: Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky XML tags all over the place.
If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like:

<ReceiveIdoc ><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc>

(I have trimmed part of the control record so that it fits cleanly here on one line.)

Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration, to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Message Body", select the "Path" radio button, and:

(a) Enter the body path expression as: /*[local-name()='ReceiveIdoc']/*[local-name()='idocData']
(b) Choose "String" for the Node Encoding.

What we've done is use an XPath to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. You can at this point, for example, use the flat file pipeline component to convert the flat text into a different XML format based on some other schema you already have, and receive your version of the XML-formatted message in your orchestration.

This was potentially a much easier solution than adding the static maps to the orchestrations, and it overcame the issue with ‘Typed’ delivery documents. Not quite so fast…

Note: When I followed Mustansir's blog the strange characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines I began receiving exceptions:

There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.".

Although the new flat file looked the same as the old one, there was a difference: in the original file every line was exactly 1064 characters long, while in the new file every line was truncated at the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad each line to 1064 characters. This component was added to the decode stage of the custom delivery and invoice flat file disassembler pipelines.
Execute method of the custom pipeline component:

// Requires a reference to Microsoft.BizTalk.Pipeline.dll
// (Microsoft.BizTalk.Message.Interop and Microsoft.BizTalk.Component.Interop)
// plus the System.IO and System.Text namespaces.
public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
{
    // Convert the stream to a string.
    Stream s = null;
    IBaseMessagePart bodyPart = inmsg.BodyPart;

    // NOTE: inmsg.BodyPart.Data is implemented only as a setter in the HTTP adapter API
    // and as a getter and setter for the file adapter.
    // Use GetOriginalDataStream to get the data instead.
    if (bodyPart != null)
        s = bodyPart.GetOriginalDataStream();

    string newMsg = string.Empty;
    string strLine;
    try
    {
        StreamReader sr = new StreamReader(s);
        strLine = sr.ReadLine();
        while (strLine != null)
        {
            // Pad each line back out to the fixed 1064-character record length.
            strLine = strLine.PadRight(1064, ' ') + "\r\n";
            newMsg += strLine;
            strLine = sr.ReadLine();
        }
        sr.Close();
    }
    catch (IOException ex)
    {
        throw new Exception("Error occurred trying to pad the message to 1064 characters", ex);
    }

    // Convert back to a stream and assign it to the Data property.
    inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg));
    // Reset the position of the stream to zero.
    inmsg.BodyPart.Data.Position = 0;
    return inmsg;
}
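For completeness, here is the kind of custom code needed to read the partner ID (RECPRN) from the EDI_DC40 segment once the typed format added namespaces to the fields. This is only a minimal sketch, not the project's actual code: the method name, the "ns0" prefix, and the namespace URI are illustrative assumptions; the real URI must be taken from the target namespace of your generated iDoc schema.

using System.Xml;

// Minimal sketch (assumed names): extract RECPRN from a typed iDoc
// whose EDI_DC40 fields are now namespace-qualified.
public static string GetPartnerId(XmlDocument idoc)
{
    XmlNamespaceManager nsMgr = new XmlNamespaceManager(idoc.NameTable);
    // Assumed URI: substitute the target namespace of your generated iDoc schema.
    nsMgr.AddNamespace("ns0", "http://Microsoft.LobServices.Sap/2007/03/Types/Idoc");

    // The namespace manager is required because a plain XPath without
    // prefixes no longer matches the qualified element names.
    XmlNode recprn = idoc.SelectSingleNode("//ns0:EDI_DC40/ns0:RECPRN", nsMgr);
    return recprn == null ? string.Empty : recprn.InnerText.Trim();
}

In the orchestration this can be called from an expression shape against the inbound message, replacing the promoted-property read that failed after the upgrade.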

    Read the article
