Search Results

Search found 653 results on 27 pages for 'sean thompson'.


  • "Well, Swing took a bit of a beating this week..."

    - by Geertjan
    One unique aspect of the NetBeans community presence at JavaOne 2012 was its use of large panels to highlight and discuss various aspects (e.g., Java EE, JavaFX, etc.) of NetBeans IDE usage and tools. For example, here's a pic of one of the panels, taken by Markus Eisele:

    Above you see me, Sean Comerford from ESPN.com, Gerrick Bivins from Halliburton, Angelo D'Agnano and Ioannis Kostaras from the NATO Programming Center, and Çagatay Çivici from PrimeFaces. (And Tinu Awopetu was also on the panel but not in the picture!)

    On one of those panels a remark was made that has kind of stuck with me. Henry Arousell, a member of the "NetBeans Platform Discussion Panel," who works on accounting software in Sweden together with Thomas Boqvist, who was also at JavaOne, said, a bit despondently, I thought, the following words at the start of the demo of his very professional-looking accounting software: "Well, Swing took a bit of a beating this week..." That remark came in the light of the several JavaFX sessions held at JavaOne, together with the many sessions from the web and mobile worlds arguing that the browser, tablet, and mobile platforms are the future of all applications everywhere.

    However, then I had another look at the list of Duke's Choice Award winners: http://www.oracle.com/us/corporate/press/1854931

    There are 10 winners of the Duke's Choice Award this year. Three of them (JDuchess, London Java Community, Student Nokia Developer Group) are awards for people or groups rather than for software. That leaves seven awards. Three of those (Hadoop, Jelastic, and Parleys) are, in one way or another, web-oriented, service-oriented solutions relating to cloud technologies, though Hadoop and Jelastic are broader than that. That leaves four others: NATO air defense software, Liquid Robotics software, AgroSense software, and UNHCR Refugee Registration software. All of these are, on the software level, Java desktop solutions that, on the UI layer, make use of Java Swing, together with LuciadMaps (NATO), GeoToolkit (AgroSense), and WorldWind (Liquid Robotics).

    And it went even further than that, i.e., this is not passive usage of Swing but active and motivated usage. Timon Veenstra, during his AgroSense demo, said: "There are far more Swing applications out there than we seem to think. Web developers just make more noise." And, during the Liquid Robotics demo, James Gosling said: "Not everything can be done in HTML."

    It seems to me that Java Swing enabled more Duke's Choice Award winners this year than any other UI-oriented Java technology. Now, I'm not going to interpret that one way or another, since I've noticed that interpretations of facts tend to validate some underlying agenda. Take any fact anywhere and you can interpret it to prove whatever opinion you're already holding to be true. Therefore, no interpretation from me. I'm simply stating the fact that Swing, far from taking a beating during JavaOne 2012, was a more significant user interface enabler of Duke's Choice Award winners than any other Java user interface technology. That's not an interpretation, but a fact.

    Read the article

  • ArchBeat Link-o-Rama for October 14-20, 2012

    - by Bob Rhubart
    The Top 10 items shared on the OTN ArchBeat Facebook page for the week of October 14-21, 2012.

    Panel: On the Impact of Software | InfoQ
    Les Hatton (Oakwood Computing Associates), Clive King (Oracle), Paul Good (Shell), Mike Andrews (Microsoft) and Michiel van Genuchten (moderator) discuss the impact of software engineering on our lives in this panel discussion recorded at the Computer Society Software Experts Summit 2012.

    ResCare Solves Content Lifecycle Challenges with Oracle WebCenter
    Learn how ResCare solves content lifecycle challenges with Oracle WebCenter. Speakers: Joe Lichtefeld, VP of Application Services & PMO, ResCare; Wayne Boerger, Product Manager, TEAM Informatics; Doug Thompson, EVP Global Development, TEAM Informatics. Date: Tuesday, October 30, 2012. Time: 10:00 a.m. PT / 1:00 p.m. ET.

    WebLogic Server 11gR1 Interactive Quick Reference
    "The WebLogic Server 11gR1 Administration interactive quick reference," explains Juergen Kress, "is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server architectural components and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner."

    Oracle ACE Directors Nordic Tour 2012: Venues and BI Presentations | Mark Rittman
    Oracle ACE Director Mark Rittman shares information on the Oracle ACE Director Tour, as the community leaders make their way through the land of the midnight sun, with events in Copenhagen, Stockholm, Oslo and Helsinki.

    Mobile Apps for EBS | Capgemini Oracle Blog
    Capgemini solution architect Satish Iyer briefly describes how Oracle ADF and Oracle SOA Suite can be used to fill the gap in mobile applications for Oracle EBS.

    Introducing the New Face of Fusion Applications | Misha Vaughan
    Oracle ACE Directors Debra Lilly and Floyd Teter have already blogged about the new face of Oracle Fusion Applications. Now Applications User Experience Architect Misha Vaughan shares a brief overview of how the Oracle Applications User Experience (UX) team developed the new look.

    BPM 11g - Dynamic Task Assignment with Multi-level Organization Units | Mark Foster
    "I've seen several requirements to have a more granular level of task assignment in BPM 11g based on some value in the data passed to the process," says Fusion Middleware A-Team architect Mark Foster. "Parametric Roles is normally the first port of call to try to satisfy this requirement, but in this blog we will show how a lot of use-cases can be satisfied by the easier to implement and flexible Organization Unit."

    OTN Architect Day Los Angeles - Oct 25
    Oracle Technology Network Architect Day in Los Angeles happens in one week. Register now to make sure you don't miss out on a rich schedule of expert technical sessions and peer interaction covering the use of Oracle technologies in cloud computing, SOA, and more. Even better: it's all free. When: October 25, 2012, 8:30am - 5:00pm. Where: Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.

    Oracle VM VirtualBox 4.2.2 released | Oracle's Virtualization Blog
    The Fat Bloke weighs in with a short post with information on where you can find information and the download for the latest VirtualBox release.

    Advanced Oracle SOA Suite #OOW 2012 SOA Presentations
    The Oracle SOA Product Management team has compiled a complete list of all twelve of their Oracle SOA Suite presentations from Oracle OpenWorld 2012, with links to the slide decks.

    Thought for the Day
    "Software: do you write it like a book, grow it like a plant, accrete it like a pearl, or construct it like a building?" — Jeff Atwood
    Source: softwarequotes.com

    Read the article

  • Bullet Physics implementing custom MotionState class

    - by Arosboro
    I'm trying to make my engine's camera a kinematic rigid body that can collide into other rigid bodies. I've overridden the btMotionState class and implemented setKinematicPos, which updates the motion state's transform. I use the overridden class when creating my kinematic body, but the collision detection fails. I'm doing this for fun, trying to add collision detection and physics to Sean O'Neil's Procedural Universe. I referred to the Bullet wiki on MotionStates for my CPhysicsMotionState class. If it helps I can add the code for the planetary rigid bodies, but I didn't want to clutter the post.

    Here is my motion state class:

    class CPhysicsMotionState: public btMotionState {
    protected:
        // This is the transform with position and rotation of the camera
        CSRTTransform* m_srtTransform;
        btTransform m_btPos1;

    public:
        CPhysicsMotionState(const btTransform &initialpos, CSRTTransform* srtTransform) {
            m_srtTransform = srtTransform;
            m_btPos1 = initialpos;
        }

        virtual ~CPhysicsMotionState() {
            // TODO Auto-generated destructor stub
        }

        virtual void getWorldTransform(btTransform &worldTrans) const {
            worldTrans = m_btPos1;
        }

        void setKinematicPos(btQuaternion &rot, btVector3 &pos) {
            m_btPos1.setRotation(rot);
            m_btPos1.setOrigin(pos);
        }

        virtual void setWorldTransform(const btTransform &worldTrans) {
            btQuaternion rot = worldTrans.getRotation();
            btVector3 pos = worldTrans.getOrigin();
            m_srtTransform->m_qRotate = CQuaternion(rot.x(), rot.y(), rot.z(), rot.w());
            m_srtTransform->SetPosition(CVector(pos.x(), pos.y(), pos.z()));
            m_btPos1 = worldTrans;
        }
    };

    I add a rigid body for the camera:

    // Create rigid body for camera
    btCollisionShape* cameraShape = new btSphereShape(btScalar(5.0f));
    btTransform startTransform;
    startTransform.setIdentity(); // forgot to add this line
    CVector vCamera = m_srtCamera.GetPosition();
    startTransform.setOrigin(btVector3(vCamera.x, vCamera.y, vCamera.z));
    m_msCamera = new CPhysicsMotionState(startTransform, &m_srtCamera);
    btScalar tMass(80.7f);
    bool isDynamic = (tMass != 0.f);
    btVector3 localInertia(0, 0, 0);
    if (isDynamic)
        cameraShape->calculateLocalInertia(tMass, localInertia);
    btRigidBody::btRigidBodyConstructionInfo rbInfo(tMass, m_msCamera, cameraShape, localInertia);
    m_rigidBody = new btRigidBody(rbInfo);
    m_rigidBody->setCollisionFlags(m_rigidBody->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT);
    m_rigidBody->setActivationState(DISABLE_DEACTIVATION);

    This is the code in Update() that runs each frame:

    CSRTTransform srtCamera = CCameraTask::GetPtr()->GetCamera();
    Quaternion qRotate = srtCamera.m_qRotate;
    btQuaternion rot = btQuaternion(qRotate.x, qRotate.y, qRotate.z, qRotate.w);
    CVector vCamera = CCameraTask::GetPtr()->GetPosition();
    btVector3 pos = btVector3(vCamera.x, vCamera.y, vCamera.z);
    CPhysicsMotionState* cameraMotionState = CCameraTask::GetPtr()->GetMotionState();
    cameraMotionState->setKinematicPos(rot, pos);

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely used algorithm that has time complexity worse than that of another known algorithm but that is a better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be in a form: there are algorithms A and B that have O(N**2) and O(N) time complexity respectively, but B has such a big constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe.

    Example highlights from the answers:

    • Simplex algorithm: worst-case exponential time, vs. known polynomial-time algorithms for convex optimization problems.
    • A naive median-of-medians algorithm: worst-case O(N**2), vs. a known O(N) algorithm.
    • Backtracking regex engines: worst-case exponential, vs. O(N) Thompson NFA-based engines.

    All these examples exploit worst-case vs. average scenarios. Are there examples that do not rely on the difference between the worst case and the average case?

    Related: The Rise of ``Worse is Better''. (For the purpose of this question, the "Worse is Better" phrase is used in a narrower sense, namely algorithmic time complexity, than in the article.)

    Python's Design Philosophy: The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections). This example would be the answer if there were no computers capable of storing these large collections (in other words, large is not large enough in this case).

    The Coppersmith–Winograd algorithm for square matrix multiplication is a good example (it is the fastest (2008) but it is inferior to worse algorithms). Any others? From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."
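    To make the regex example above concrete, here is a minimal sketch in Java (class name, pattern, and input sizes are invented for the demo; requires Java 11+ for String.repeat). A nested quantifier sends java.util.regex's backtracking engine into exponential territory on a non-matching input:

    import java.util.regex.Pattern;

    public class BacktrackingDemo {
        public static void main(String[] args) {
            // Nested quantifiers make the backtracking engine try exponentially
            // many ways to split the run of 'a's before concluding "no match".
            Pattern pathological = Pattern.compile("(a+)+b");
            for (int n = 10; n <= 26; n += 2) {
                String input = "a".repeat(n); // no trailing 'b': can never match
                long start = System.nanoTime();
                boolean matched = pathological.matcher(input).matches();
                long ms = (System.nanoTime() - start) / 1_000_000;
                // Runtime roughly doubles per extra 'a'; a Thompson-NFA engine
                // (e.g., RE2 or grep) answers the same query in linear time.
                System.out.printf("n=%d matched=%b took %d ms%n", n, matched, ms);
            }
        }
    }

    The backtracking design is "worse" in the worst case, yet it remains the default in most languages because it supports backreferences and is fast on typical patterns, which is exactly the trade-off this question is about.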

    Read the article

  • Why isn't my WPF Datagrid showing data?

    - by Edward Tanguay
    This walkthrough says you can create a WPF datagrid in one line but doesn't give a full example. So I created an example using a generic list and connected it to the WPF datagrid, but it doesn't show any data. What do I need to change on the code below to get it to show data in the datagrid?

    ANSWER: This code works now:

    XAML:

    <Window x:Class="TestDatagrid345.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:toolkit="http://schemas.microsoft.com/wpf/2008/toolkit"
        xmlns:local="clr-namespace:TestDatagrid345"
        Title="Window1" Height="300" Width="300"
        Loaded="Window_Loaded">
        <StackPanel>
            <toolkit:DataGrid ItemsSource="{Binding}"/>
        </StackPanel>
    </Window>

    Code Behind:

    using System.Collections.Generic;
    using System.Windows;

    namespace TestDatagrid345
    {
        public partial class Window1 : Window
        {
            private List<Customer> _customers = new List<Customer>();
            public List<Customer> Customers { get { return _customers; } }

            public Window1()
            {
                InitializeComponent();
            }

            private void Window_Loaded(object sender, RoutedEventArgs e)
            {
                DataContext = Customers;
                Customers.Add(new Customer { FirstName = "Tom", LastName = "Jones" });
                Customers.Add(new Customer { FirstName = "Joe", LastName = "Thompson" });
                Customers.Add(new Customer { FirstName = "Jill", LastName = "Smith" });
            }
        }
    }

    Read the article

  • Recent ImageMagick on CentOS 6.3

    - by organicveggie
    I'm having a terrible time trying to get a recent version of ImageMagick installed on a CentOS 6.3 x86_64 server. First, I downloaded the RPM from the ImageMagick site and tried to install it. That failed due to missing dependencies:

    error: Failed dependencies:
        libHalf.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
        libIex.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
        libIlmImf.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
        libImath.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
        libltdl.so.3()(64bit) is needed by ImageMagick-6.8.0-4.x86_64

    I have libtool-ltdl installed, but that includes libltdl.so.7, not libltdl.so.3. I have a similar problem with libHalf, libIex, libIlmImf and libImath. Typically, you can install OpenEXR to get those dependencies. Unfortunately, CentOS 6.3 includes OpenEXR 1.6.1, which includes ilmbase-devel 1.0.1. And that release of ilmbase-devel includes newer versions of those dependencies:

    libHalf.so.6
    libIex.so.6
    libIlmImf.so.6
    libImath.so.6

    I next tried following the instructions for installing ImageMagick from source. No luck there either. I get a build error:

    RPM build errors:
        File not found by glob: /home/sean/rpmbuild/BUILDROOT/ImageMagick-6.8.0-4.x86_64/usr/lib64/ImageMagick-6.8.0/modules-Q16/coders/djvu.*

    I even re-ran configure to explicitly exclude djvu and I still get the same error. At this point, I'm pulling my hair out. What's the easiest way to get a relatively recent version of ImageMagick (>= 6.7) installed on CentOS 6.3? Does someone offer RPMs with dependencies somewhere?

    Read the article

  • Error when starting .Net-Application from ThinApp-Application

    - by user50209
    one of our customers uses SAP through VMware ThinApp. In SAP there is a button that launches a .Net application from a server. When starting the .Net application directly, there is no error. If the user tries to start the application by clicking the button in the ThinApp application, it displays the following errors:

    Microsoft Visual C++ Runtime Library
    R6034
    An application has made an attempt to load the C runtime library incorrectly.
    Please contact the application's support team for more information.

    After clicking "OK" it displays:

    Microsoft Visual C++ Runtime Library
    Runtime Error!
    R6030 - CRT not initialized

    So, does the customer have to install some components into his ThinApp (if yes, which?) to get things working?

    Regards, inno

    ----- [EDIT] -----

    @Sean: It's installed the following way: the .exe of the .Net application is on a mapped drive on a server. All clients have the requirements installed (the .Net Framework, for example) and start the .exe from the mapped drive. The ThinApp application tries to start this application and throws the exceptions mentioned above. AFAIK there are no entry points configured for this application.

    What I should also mention is: the .Net application crashes during execution. That means we have a debug mode implemented that shows what the application is doing. The application shows what it's doing and after some steps it crashes. The interesting point is: it's a .Net application, not a C++ application.

    Read the article

  • How can I see which processes make my server slow?

    - by Steven
    All my websites on my server are extremely slow or not loading at all. Even the server admin interface (Plesk) will not load sometimes. There have been no changes to the sites for the last couple of months. How can I see which processes are making my server slow?

    My environment looks like this:

    Server: VPS running Linux 2.8.x
    OS: CentOS 5
    Management interface: Plesk 9.x
    Memory: 1024MB
    CPU: 2.2GHz

    My websites run on PHP and MySQL. I finally managed to telnet (Putty + SSH) in to my server. Running top did not show any processes using more than 2% CPU, and none were using excessive memory. I also got a friend to install a program that checks the core files, and all seemed fine. So I'm leaning towards network issues or some other server malfunction. But I'm not able to find out what can be wrong.

    Here are some answers to Sean Kimball:

    • I don't run mail services on my server yet.
    • There are no specific bandwidth peaks.
    • Prefork looks like this:

      <IfModule prefork.c>
          StartServers        8
          MinSpareServers     5
          MaxSpareServers    20
          ServerLimit       256
          MaxClients        256
          MaxRequestsPerChild 4000
      </IfModule>

    • Not sure what you mean with the DNS question, but I think it's up and running.
    • There are no processes running wild.
    • Where can I find the average load?
    • Telnet is disabled and I have to log in using SSH :)

    Read the article

  • Oracle Fusion Applications User Experience Design Patterns: Feeling the Love after Launch

    - by mvaughan
    By Misha Vaughan, Oracle Applications User Experience

    In the first video by the Oracle Applications User Experience team on the Oracle Partner Network, Vice President Jeremy Ashley said that Oracle is looking to expand the ecosystem of support for Oracle's applications customers as they begin to assess their investment and adoption of Oracle Fusion Applications. Oracle has made a massive investment to maintain the benefits of the Fusion Applications user experience. This summer, the Applications User Experience team released the Oracle Fusion Applications user experience design patterns. Design patterns help create consistent experiences across devices.

    The launch has been very well received:

    Angelo Santagata, Senior Principal Technologist and Fusion Middleware evangelist for Oracle, wrote this to the system integrator community: "The web site is the result of many years of Oracle R&D into user interface design for Fusion Applications and features a really cool web app which allows you to visualise the UI components in action."

    Grant Ronald, Director of Product Management, Application Development Framework (ADF), said: "It's a science I don't understand, but now I don't have to ... Now you can learn from the UX experience of Fusion Applications."

    Frank Nimphius, Senior Principal Product Manager, Oracle (ADF), wrote about the launch of the design patterns for the ADF Code Corner, and Jürgen Kress, Senior Manager EMEA Alliances & Channels for Fusion Middleware and Service Oriented Architecture (SOA), shared the news with his Partner Community.

    Oracle Twitter followers also helped spread the message about the design patterns launch:

    @bex (Brian Huff, founder and Chief Software Architect for Bezzotech, and Oracle ACE Director): "Nifty! The Oracle Fusion UX team just released new ADF design patterns."

    @maiko_rocha (Maiko Rocha, Oracle Consulting Solutions Architect and Oracle FMW engineer): "Haven't seen any other vendor offer such comprehensive UX Design Patterns catalog for free!"

    @zirous_chad (Chad Thompson, Senior Solutions Architect for Zirous, Inc. and ADF Developer): "Wow - @ultan and company did a great job with the Fusion UX Patterns"

    What is a user experience design pattern?

    A user experience design pattern is a re-usable, usability-tested functional blueprint for a particular user experience. Some examples are guided processes, shopping carts, and search and search results. Ultan O'Broin discusses the top design patterns every developer should know. The patterns that were just released are based on thousands of hours of end-user field studies, state-of-the-art user interface assessments, and usability testing. To be clear, these are functional design patterns, not the technical design patterns that developers may be used to working with. Because we know there is a gap, we are putting together some training that will help close that gap.

    Who should care?

    This is an offering targeted primarily at Application Development Framework (ADF) developers. If you are faced with the following questions regarding Fusion Applications, you will want to know and learn more:

    •    How do I build something that looks like Fusion Applications?
    •    How do I build a next-generation application?
    •    How do I extend a Fusion Application and maintain the user experience?
    •    I don't want to re-invent the wheel on the user interface, so where do I start?
    •    I need to build something that will eventually co-exist with Fusion Applications. How do I do that?

    These questions are relevant to partners with an ADF competency, individual practitioners or small consultancies with an ADF specialization, and customers who are trying to shift their IT staff over to supporting Fusion Applications.

    Where can you find out more?

    Online: Our Fusion user experience design patterns maven is Ultan O'Broin. The Oracle Partner Network is helping our team bring this first e-seminar to you in order to go into more detail on what this means and how to take advantage of it:

    Webinar: Build a Better User Experience with Oracle: Oracle Fusion Applications Functional Design Patterns
    Sept 20, 2012, 10:30am-11:30am Pacific
    Dial-In: 1-877-664-9137 / Passcode 102546
    International: 706-634-9619 / http://www.intercall.com/national/oracleuniversity/gdnam.html
    Access the live event via webconference at http://ouweb.webex.com and enter session number 598036234.

    At a user group event: The Fusion User Experience Advocates (FXA) are also going to be getting some deep-dive training on this content and can share it with local user groups.

    At OpenWorld: If you will be at OpenWorld this year, our own Ultan O'Broin will be visiting the ADF demo pod to say hello, thanks to Shay Shmeltzer, Senior Group Manager for ADF outbound communication, and at the OTN lounge:

    Oracle JDeveloper and Oracle ADF, Moscone South, Right - S-207: Monday 10-10:45, Tuesday 2:15-2:45, Wednesday 2:15-3:30
    "ADF Meet and Greet", OTN Lounge, Wednesday 4:30

    And I cannot talk about OpenWorld and ADF without mentioning Chris Muir's ADF EMG event: the Year After the Year Of the ADF Developer, Sunday, Sept 30 at OpenWorld. Chris has played host to Ultan and the Applications user experience message for his online community and is now a seasoned UX expert.

    Expect to see additional announcements about expanded training on similar topics in the future.

    Read the article

  • Custom ASP.Net MVC 2 ModelMetadataProvider for using custom view model attributes

    - by SeanMcAlinden
    There are a number of ways of implementing a pattern for using custom view model attributes; the following is similar to something I'm using at work which works pretty well. The classes I'm going to create are really simple:

    1. Abstract base attribute
    2. Custom ModelMetadata provider which will derive from the DataAnnotationsModelMetadataProvider

    Base Attribute

    MetadataAttribute

    using System;
    using System.Web.Mvc;

    namespace Mvc2Templates.Attributes
    {
        /// <summary>
        /// Base class for custom MetadataAttributes.
        /// </summary>
        public abstract class MetadataAttribute : Attribute
        {
            /// <summary>
            /// Method for processing custom attribute data.
            /// </summary>
            /// <param name="modelMetaData">A ModelMetaData instance.</param>
            public abstract void Process(ModelMetadata modelMetaData);
        }
    }

    As you can see, the class simply has one method: Process. Process accepts the ModelMetadata, which allows any derived custom attribute to set properties on the model metadata and add items to its AdditionalValues collection.

    Custom Model Metadata Provider

    For a quick explanation of the model metadata and how it fits into the MVC 2 framework: it is basically a set of properties that are usually set via attributes placed above properties on a view model, for example the ReadOnly and HiddenInput attributes. When EditorForModel, DisplayForModel or any of the other EditorFor/DisplayFor methods are called, the ModelMetadata information is used to determine how to display the properties. All of the information available within the model metadata is also available through ViewData.ModelMetadata.

    The following class derives from the DataAnnotationsModelMetadataProvider built into the MVC 2 framework. I've overridden the CreateMetadata method in order to process any custom attributes that may have been placed above a property in a view model.

    CustomModelMetadataProvider

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Mvc;
    using Mvc2Templates.Attributes;

    namespace Mvc2Templates.Providers
    {
        public class CustomModelMetadataProvider : DataAnnotationsModelMetadataProvider
        {
            protected override ModelMetadata CreateMetadata(
                IEnumerable<Attribute> attributes,
                Type containerType,
                Func<object> modelAccessor,
                Type modelType,
                string propertyName)
            {
                var modelMetadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName);

                attributes.OfType<MetadataAttribute>().ToList().ForEach(x => x.Process(modelMetadata));

                return modelMetadata;
            }
        }
    }

    As you can see, once the model metadata is created through the base method, a check for any attributes deriving from our new abstract base attribute MetadataAttribute is made; the Process method is then called on any existing custom attributes with the model metadata for the property passed in.

    Hooking it up

    The last thing you need to do to hook it up is set the new CustomModelMetadataProvider as the current ModelMetadataProvider. This is done within the Global.asax Application_Start method.

    Global.asax

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();

        RegisterRoutes(RouteTable.Routes);

        ModelMetadataProviders.Current = new CustomModelMetadataProvider();
    }

    In my next post, I'm going to demonstrate a cool custom attribute that turns a textbox into an ajax-driven AutoComplete text box. Hope this is useful.

    Kind Regards,
    Sean McAlinden.

    Read the article

  • Oracle Fusion Procurement Designed for User Productivity

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience

    Oracle Fusion Procurement Design Goals

    In Oracle Fusion Procurement, we set out to create a streamlined user experience based on the way users do their jobs. Oracle has spent hundreds of hours with customers to get to the heart of what users need to do their jobs. By designing a procurement application around user needs, Oracle has crafted a user experience that puts the tools that people need at their fingertips. In Oracle Fusion Procurement, the user experience is designed to provide the user with information that will drive navigation rather than requiring the user to find information.

    One of our design goals for Oracle Fusion Procurement was to reduce the number of screens and clicks that a user must go through to complete frequently performed tasks. The requisition process in Oracle Fusion Procurement (Figure 1) illustrates how we have streamlined workflows. Oracle Fusion Self-Service Procurement brings together billing metrics, descriptions of the order, justification for the order, a breakdown of the components of the order, and the amount, all in one place. Previous generations of procurement software required the user to navigate to several different pages to gather all of this information. With Oracle Fusion, everything is presented on one page. The result is that users can complete their tasks in less time. The focus is on completing the work, not finding the work.

    Figure 1. Creating a requisition in Oracle Fusion Self-Service Procurement is a consumer-like shopping experience.

    Will Oracle Fusion Procurement Increase Productivity?

    To answer this question, Oracle sought to model how two experts working head to head, one in an existing enterprise application and another in Oracle Fusion Procurement, would perform the same task. We compared Oracle Fusion designs to corresponding existing applications using the keystroke-level modeling (KLM) method. This method is based on years of research at universities such as Carnegie Mellon and research labs like Xerox Palo Alto Research Center.

    The KLM method breaks tasks into a sequence of operations and uses standardized models to evaluate all of the physical and cognitive actions that a person must take to complete a task: what a user would have to click, how long each click would take (not only the physical action of the click or the typing of a letter, but also how long someone would have to think about the page when taking the action), and the user interface changes that result from the click. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time is calculated. Task times from the model enable researchers to predict end-user productivity.

    For the study, we focused on modeling procurement business process task flows that were considered business- or mission-critical: high-frequency tasks and high-value tasks. The designs evaluated encompassed tasks that are currently performed by employees, professional buyers, suppliers, and sourcing professionals in advanced procurement applications. For each of these flows, we created detailed task scenarios that provided the context for each task, conducted task walk-throughs in both the Oracle Fusion design and the existing application, analyzed and documented the steps and actions required to complete each task, and applied standard time estimates to the operators in each task to estimate overall task completion times.
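    To make the KLM arithmetic concrete, here is an illustrative estimate. The operator times are the standard published KLM values (K for a keystroke or click, P for pointing, H for homing a hand on the mouse, M for mental preparation); the task breakdown itself is invented for the example. Selecting a supplier from a list might decompose into one mental preparation, one homing move, one pointing action, and one click:

    T_task = sum over operators of (n_op * t_op)
           = M + H + P + K
           ~= 1.35 + 0.40 + 1.10 + 0.20
           = 3.05 seconds

    Summing such estimates across every step of a flow yields the predicted expert task time for each design, and comparing the two totals gives the predicted productivity difference between the designs.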
    The Results

    The KLM method predicted that the Oracle Fusion Procurement designs would result in productivity gains on each task, ranging from 13 percent to 38 percent, with an overall productivity gain of 22.5 percent. These performance gains can be attributed to a reduction in the number of clicks and screens needed to complete the tasks. For example, creating a requisition in Oracle Fusion Procurement takes a user through only two screens, while ordering the same item in a previous version requires six screens to complete the task.

    Modeling user productivity has resulted not only in advances in Oracle Fusion Applications, but also in advances in other areas. We leveraged lessons learned from the KLM studies in products like Oracle E-Business Suite (EBS). New user experience features in EBS 12.1.3, such as navigational improvements to the main menu, a Google-type search using auto-suggest, embedded analytics, and an in-context list of values tool, help to reduce clicks and improve efficiency. For more information about KLM, refer to the Measuring User Productivity blog.

    Read the article

  • Building a Mafia…TechFest Style

    - by David Hoerster
    It’s been a few months since I last blogged (not that I blog much to begin with), but things have been busy. We all have a lot going on in our lives, but I’ve had one item that has taken up a surprising amount of time: Pittsburgh TechFest 2012. After the event, I went through some minutes of the first meetings for TechFest, and I started to think about how it all came together. I think what inspired me the most about TechFest was how people from various technical communities were able to come together and build and promote a common event. As a result, I wanted to blog about this to show that people from different communities can work together to build something that benefits all communities. (Hopefully I've got all my facts straight.)

    TechFest started as an idea Eric Kepes and I had when we were planning our next Pittsburgh Code Camp, probably in the summer of 2011. Our Spring 2011 Code Camp was a little different because we had a great infusion of some folks from the Pittsburgh Agile group (especially with a few speakers from LeanDog). The line-up was great, but we felt our audience wasn’t as broad as it should have been. We thought it would be great to somehow attract other user groups around town and have a big, polyglot conference.

    We started contacting leaders from Pittsburgh’s various user groups. Eric and I split up the ones that we knew about, and we just started making contacts. Most of the people we started contacting had never heard of us, nor we them. But we all had one thing in common: we ran user groups whose primary goal is educating our members to make them better at what they do.

    Amazingly, and I say this because I wasn’t sure what to expect, we started getting some interest from the various leaders. One leader, Greg Akins, is, in my opinion, Pittsburgh’s poster boy for the polyglot programmer. He’s helped us in the past with .NET Code Camps, is a Java developer (and leader in Pittsburgh’s Java User Group), works with Ruby, and I’m sure a handful of other languages. He helped make some e-introductions to other user group leaders, and the whole thing just started to snowball. Once we realized we had enough interest with the user group leaders, we decided to not have a Fall Code Camp and instead focus on this new entity.

    Flash-forward to October of 2011. I set up a meeting, with the help of Jeremy Jarrell (Pittsburgh Agile leader), to hold a meeting with the leaders of many of Pittsburgh's technical user groups. We had representatives from 12 technical user groups (Python, JavaScript, Clojure, Ruby, PittAgile, jQuery, PHP, Perl, SQL, .NET, Java and PowerShell): 14 people. We likened it to a scene from a Godfather movie where the heads of all the families come together to make some deal. As a result, the name "TechFest Mafia" was born and kind of stuck.

    Over the next 7 months or so, we had our starts and stops. There were moments where I thought this event would not happen, either because we wouldn’t have the right mix of topics (was I off there!), or enough people register (OK, I was wrong there, too!), or find an appropriate venue (hmm… wrong there, too), or find enough sponsors to help support the event (wow… not doing so well). Overall, everything fell into place with a lot of hard work from Eric, Jen, Greg, Jeremy, Sean, Nicholas, Gina and probably a few others that I’m forgetting. We also had a bit of luck, too.
But in the end, the passion that we had to put together an event that was really about making ourselves better at what we do really paid off. I’ve never been more excited about a project coming together than I have been with Pittsburgh TechFest 2012.  From the moment the first person arrived at the event to the final minutes of my closing remarks (where I almost lost my voice – I ended up being diagnosed with bronchitis the next day!), it was an awesome event.  I’m glad to have been part of bringing something like this to Pittsburgh…and I’m looking forward to Pittsburgh TechFest 2013.  See you there!

    Read the article

  • Parse.com REST API in Java (NOT Android)

    - by Orange Peel
    I am trying to use the Parse.com REST API in Java. I have gone through the 4 solutions given here https://parse.com/docs/api_libraries and have selected Parse4J. After importing the source into Netbeans, along with importing the following libraries:

    org.slf4j:slf4j-api:jar:1.6.1
    org.apache.httpcomponents:httpclient:jar:4.3.2
    org.apache.httpcomponents:httpcore:jar:4.3.1
    org.json:json:jar:20131018
    commons-codec:commons-codec:jar:1.9
    junit:junit:jar:4.11
    ch.qos.logback:logback-classic:jar:0.9.28
    ch.qos.logback:logback-core:jar:0.9.28

    I ran the example code from https://github.com/thiagolocatelli/parse4j:

    Parse.initialize(APP_ID, APP_REST_API_ID); // I replaced these with mine
    ParseObject gameScore = new ParseObject("GameScore");
    gameScore.put("score", 1337);
    gameScore.put("playerName", "Sean Plott");
    gameScore.put("cheatMode", false);
    gameScore.save();

    And I got that it was missing org.apache.commons.logging, so I downloaded that and imported it. Then I ran the code again and got:

    Exception in thread "main" java.lang.NoSuchMethodError: org.slf4j.spi.LocationAwareLogger.log(Lorg/slf4j/Marker;Ljava/lang/String;ILjava/lang/String;Ljava/lang/Throwable;)V
        at org.apache.commons.logging.impl.SLF4JLocationAwareLog.debug(SLF4JLocationAwareLog.java:120)
        at org.apache.http.client.protocol.RequestAddCookies.process(RequestAddCookies.java:122)
        at org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:131)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:193)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:85)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
        at org.parse4j.command.ParseCommand.perform(ParseCommand.java:44)
        at org.parse4j.ParseObject.save(ParseObject.java:450)

    I could probably fix this with another import, but I suppose then something else would pop up. I tried the other libraries with similar results: missing a bunch of libraries. Has anyone actually used the REST API successfully in Java? If so, I would be grateful if you shared which library or libraries you used and anything else required to get it going successfully. I am using Netbeans. Thanks.
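    Since the question is ultimately about reaching Parse's REST API from Java, here is a minimal, dependency-free sketch using only the JDK's HttpURLConnection, which sidesteps the slf4j/commons-logging version clashes entirely. The class name and placeholder keys are illustrative; the endpoint and header names follow Parse's REST documentation:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ParseRestSketch {
        public static void main(String[] args) throws Exception {
            URL url = new URL("https://api.parse.com/1/classes/GameScore");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            // Placeholders: substitute the keys from your own Parse dashboard.
            conn.setRequestProperty("X-Parse-Application-Id", "YOUR_APP_ID");
            conn.setRequestProperty("X-Parse-REST-API-Key", "YOUR_REST_API_KEY");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);

            String body = "{\"score\":1337,\"playerName\":\"Sean Plott\",\"cheatMode\":false}";
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.getBytes("UTF-8"));
            }
            // Expect 201 Created plus a JSON body containing the new objectId.
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }

    A client library adds convenience, but for simple create/read/update calls the REST API is just HTTP plus JSON, so this approach may be the quickest way to rule out classpath problems.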

    Read the article

  • Remove a hover class from a checkbox once clicked

    - by seangeraghty
    I am trying to remove a hover class applied to a checkbox via CSS once the box has been clicked. Does anyone know how to do this?

    JSFiddle: http://jsfiddle.net/sa9fe/

    The checkbox code is:

    <div>
        <input type="checkbox" id="checkbox-1-1" class="regular-checkbox flaticon-boxing3" />
        <input type="checkbox" id="checkbox-1-2" class="regular-checkbox" />
        <input type="checkbox" id="checkbox-1-3" class="regular-checkbox" />
        <input type="checkbox" id="checkbox-1-4" class="regular-checkbox" />
    </div>

    And the CSS for the checkboxes is as follows:

    .regular-checkbox {
        vertical-align: middle;
        text-align: center;
        margin-left: auto;
        margin-right: auto;
        align: center;
        color: #39c;
        width: 140px;
        height: 140px;
        -webkit-border-radius: 6px;
        -moz-border-radius: 6px;
        border-radius: 6px;
        background-color: #fff;
        border: solid 1px #ccc;
        -webkit-appearance: none;
        background-color: #fff;
        display: inline-block;
        position: relative;
    }

    .regular-checkbox:checked {
        background-color: #39c;
        color: #fff !important;
    }

    .regular-checkbox:hover {
        background-color: #f0f7f9;
    }

    .regular-checkbox:checked:after {
        color: #fff;
        position: absolute;
        top: 0px;
        left: 3px;
        color: #99a1a7;
    }

    So, any suggestions? Also, does anyone know how to change the highlight? At the moment it seems to highlight the edges of the box at a border radius of 3px, whereas the boxes I am using are 6px.

    Thanks, Sean (newbie to coding)

    Read the article

  • Problem with IE and jQuery qTip plugin

    - by user272899
    I am having problems with the jQuery qTip plugin. It works fine in Firefox (see here: http://movieo.no-ip.org/ and hover over the first image), but it doesn't work in IE.

    This is the code:

    $('.moviebox').each(function() {
        $(this).qtip({
            content: $(this).children('.info'),
            show: 'mouseover',
            hide: 'mouseout',
            style: {
                name: 'light'
            },
            position: {
                corner: {
                    target: 'rightbottom',
                    tooltip: 'bottomleft'
                }
            }
        });
    });

    And the HTML:

    <!--start moviebox-->
    <div class="moviebox">
        <a href="#">
            <img src="http://1.bp.blogspot.com/_mySxtRcQIag/S6deHcoChaI/AAAAAAAAObc/Z1Xg3aB_wkU/s200/rising_sun.jpg" />
        </a>
        <!--start infobox-->
        <div class="info">
            <span>Rising Sun (2006)</span>
            <div class="description"><strong>Description:</strong><br />
                test test test test test test test test test test test test test test test test</div>
            <img src="http://1.bp.blogspot.com/_mySxtRcQIag/S6deHcoChaI/AAAAAAAAObc/Z1Xg3aB_wkU/s200/rising_sun.jpg" />
            <div class="cast"><strong>Cast:</strong><br />
                Sean connery</div>
            <div class="rating"><strong>Rating:</strong><br />5stars</div>
        </div>
        <!--end infobox-->
    </div>
    <!--end moviebox-->

    Why wouldn't that work in IE????? Beats me. Check out movieo.no-ip.org for the whole source.

    Read the article

  • Visual Studio 2008 / ASP.NET 3.5 / C# -- issues with intellisense, references, and builds

    - by goober
    Hey all, hoping you can help me -- the strangest thing seems to have happened with my VS install.

    System config: Windows 7 Pro x64, Visual Studio 2008 SP1, C#, ASP.NET 3.5.

    I have two web site projects in a solution. I am referencing NUnit / NHibernate (I did this by right-clicking on the project and selecting "Add Reference"; I've done this for several projects in the past). Things were working fine but recently stopped working, and I can't figure out why. IntelliSense completely disappears for any files in my App_Code directory, and none of the references are recognized (they are recognized by any file in the root directory of the web site project). Additionally, pretty simple commands like the following (in Page_Load) fail (assume TextBox1 is definitely an element on the page):

    if (Page.IsPostBack)
    {
        str test1;
        test1 = TextBox1.Text;
    }

    It says that all the page elements are null or that it can't access them. At first I thought it was me, but due to the combination of issues, it seems to be Visual Studio itself. I've tried clearing the temp directories and rebuilding the solution. I've also checked Tools -- Options -- Text Editor settings to ensure IntelliSense is turned on. I'd appreciate any help you can give!

    Thanks, Sean

    Read the article

  • XML Schema: Can I make some of an attribute's values be required but still allow other values?

    - by scrotty
    (Note: I cannot change the structure of the XML I receive; I am only able to change how I validate it.)

    Let's say I can get XML like this:

    <Address Field="Street" Value="123 Main"/>
    <Address Field="StreetPartTwo" Value="Unit B"/>
    <Address Field="State" Value="CO"/>
    <Address Field="Zip" Value="80020"/>
    <Address Field="SomeOtherCrazyValue" Value="Foo"/>

    I need to create an XSD schema that validates that "Street", "State" and "Zip" must be present. But I don't care if "StreetPartTwo" or "SomeOtherCrazyValue" is present. If I knew that only the three I care about could be included, I could do this:

    <xs:element name="Address" type="addressType" maxOccurs="unbounded" minOccurs="3"/>

    <xs:complexType name="addressType">
        <xs:attribute name="Field" use="required">
            <xs:simpleType>
                <xs:restriction base="xs:string">
                    <xs:enumeration value="Street"/>
                    <xs:enumeration value="State"/>
                    <xs:enumeration value="Zip"/>
                </xs:restriction>
            </xs:simpleType>
        </xs:attribute>
    </xs:complexType>

    But this won't work in my case because I may also receive those other Address elements (which also have "Field" attributes) that I don't care about. Any ideas how I can ensure the stuff I care about is present but let the other stuff in too?

    TIA! Sean

    Read the article

  • SharePoint Saturday Michigan 2010 Recap, Slides, and Photos

    - by Brian Jackett
    This past weekend I attended SharePoint Saturday Michigan (SPSMI) in Ann Arbor, Michigan. For those unfamiliar, SharePoint Saturday is a community driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint. This made my third SharePoint Saturday attended and second I've spoken at. I believe today it was announced that about 210 people total attended the event. I was very happy with the turnout, especially the ratio of male to female attendees. Typically with computer related conferences the ratio leans towards more males attending, but both Peter Serzo (one of the conference organizers) and I commented to each other that at the end of the day it appeared to be close to 40% women in the crowd. So here's my recap of the weekend.

    Arrival

    Friday afternoon I drove up from Columbus, OH to Ann Arbor, MI and arrived around 4pm. I was attempting to avoid the rush hour traffic and construction backups. Turned out to be a good idea because other speakers coming up Friday got stuck on a highway which literally closed down in both directions due to a bad accident. I was talking my friend Sean McDonough through the highway closing, and this was the first time I had seen a solid black traffic line on Google Maps. Most of us are familiar with green, yellow, and red, but this line was black, if that tells you how bad it got.

    Speaker "Dinner"

    Fast forward a few hours and it was time for the speaker "dinner." I put "dinner" in quotes because with this night alone SPSMI set a new bar for the nicest and most extravagant speaker appreciation events for SharePoint Saturday. By tapping into some very influential contacts, the conference organizers were able to provide a truck limo (yep, you heard right) with refreshments, access to an underground suite at the Palace of Auburn Hills, and courtside tickets to see the Detroit Pistons play that night. Being a Michigan native I have to say that I was absolutely floored by this experience and very thankful to our conference organizers Peter, Sebastian, and Jesse along with Trillium Teamologies.

    Sessions

    The actual conference started Saturday morning at 9am with the keynote by Rob Collie, who is the Microsoft program manager for PowerPivot. The day continued and I attended the following sessions:

    Mike Watson (@mikewat) - "SharePoint 2010 Fight Night: Devs vs. Admins"
    Karl Swedeberg (@kswedberg) - "A Walk on the Client Side with jQuery"
    [my session] Brian Jackett (@briantjackett) - "Real World Deployment of SharePoint 2007 Solutions"
    Jeff Willinger (@jwillie) - "Social Computing and Collaboration Inside and Outside the 4 Walls"
    Paul Schaeflein (@paulschaeflein) - "PowerShell for the SharePoint Developer"

    My Presentation

    I had a great time presenting my session on Deploying SharePoint 2007 Solutions, but it wasn't without its fair share of technical issues. As my session was right after lunch, I came in to my room 10 mins early to set up my laptop, slides, and demos. As a quick background note, a few months ago I got an upgraded laptop from my company Sogeti and have been dual booting it between XP (factory installed) and Windows Server 2008 R2 w/ Hyper-V. As such I had prepared all of my demo virtual machines to run under Hyper-V. About 3 minutes before my session was scheduled to start, though, it became apparent that I did not have the correct display drivers to connect Windows Server 2008 R2 to the projector…

    As you can imagine this was a slight cause for concern as I was potentially going to be unable to give my presentation. Luckily for me I usually prepare for such unforeseen issues and had my presentation and some spare VMs that would run on XP on my external hard drive. Knowing this, I rebooted my machine into XP and began my presentation without slides until about 5 mins into the session, when everything was up and running on XP. Despite this being the first time I gave this presentation, I have to say it was one of my favorites I've given so far. The audience was very engaged in the session and I received some great, positive feedback afterwards. Thanks to all who attended my session; I appreciate it very much.

    Link to Presentation Files

    For those of you who attended my session and would like my slides or demo PowerShell scripts, they can be found on my SkyDrive at the link below. Also, if you have a few minutes and wouldn't mind rating my session, I have this session posted on SpeakerRate. As speakers we always appreciate any and all feedback attendees offer, so thank you if you are able to provide any.

    SkyDrive folder with session files
    Rate my SharePoint 2007 Solutions session

    Picture Albums

    For everyone else, here are my pictures from the weekend. The first link is to my Facebook album, which will have tagging (recommend this one). The second is to my Live album if you care for higher resolution images.

    http://www.facebook.com/album.php?aid=2154482&id=21905041&l=a3fb72ee8c
    View Full Album

    Conclusion

    A big thank you goes out to all of the organizers, speakers, sponsors, and attendees of SPSMI. As I've said so many times, without each and every one of you these events wouldn't be possible. I thoroughly enjoyed this trip back to my home state and presenting a new session. For those interested in my upcoming schedule, I will be giving two sessions on PowerShell at SharePoint Saturday Charlotte in April, helping plan Stir Trek: Iron Man Edition in May, and I'm submitting sessions to Day of .Net Ann Arbor in May as well. Beyond that I haven't planned out any travels. Thanks for reading my recap. Look forward to more technical posts now that I have a short break in conferences.

    -Frog Out

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API, the calls across these APIs generally mimic the Berkeley DB C-API which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node, even the most sophisticated databases only had simple fail-over two node solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or to write your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM" mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage. Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheep, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law, processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes cause a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage, new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. There were designed to address massive amounts of data, millions of requests per second, and scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record, it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple, key/value look-ups or simple range queries with only a few queries that required more complex joins. 
They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and it's design, inspired the next generation of Non-SQL, distributed database solutions including Cassandra, Riak and Voldemort. The problem their designers set out to solve was, "reliability at massive scale" so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database; either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions and one, Voldemort, choose Berkeley DB Java Edition for it's node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only log-based on-disk storage similar type of storage as Berkeley DB except that it is based on a hash table which must reside entirely in-memory rather than a btree which can live in-memory or on disk. Berkeley DB evolved too, we added high availability (HA) and a replication manager that makes it easy to setup replica groups. Berkeley DB's replication doesn't partitioned the data, every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales-out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group and some allow writes as well as reads at any node, Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases when you boil them down to a single node you still have to store the data to some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM; the distributed, sometimes-consistent, partitioned, scale-out storage that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to come to appreciate the sophisticated features found in Berkeley DB, even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. 
Our key/value API could be extended over the net using any of a number of existing network protocols, such as memcache or HTTP/REST. We could extend our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...
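To make the "library, not server" point concrete, here is a minimal sketch of the node-local key/value API using Berkeley DB Java Edition. This is an illustration only, not code from the article: the environment directory, database name, key, and value are invented.

import java.io.File;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;

public class LocalStoreSketch {
    public static void main(String[] args) throws Exception {
        // An environment is a directory (it must already exist) that holds
        // the database files; allowCreate lets JE create them on first use.
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        Environment env = new Environment(new File("/tmp/je-env"), envConfig);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        Database db = env.openDatabase(null, "sample", dbConfig);

        // Keys and values are plain byte arrays: key/value storage, just as in DBM.
        DatabaseEntry key = new DatabaseEntry("user:42".getBytes("UTF-8"));
        DatabaseEntry value = new DatabaseEntry("a record".getBytes("UTF-8"));
        db.put(null, key, value); // null = no explicit transaction

        DatabaseEntry found = new DatabaseEntry();
        if (db.get(null, key, found, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
            System.out.println(new String(found.getData(), "UTF-8"));
        }

        db.close();
        env.close();
    }
}

The same store-open-put-get shape applies through the C API and the other language bindings the article mentions; the distributed layers that NoSQL systems add all sit above calls like these.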

    Read the article

  • Measuring Usability with Common Industry Format (CIF) Usability Tests

    - by Applications User Experience
Sean Rice, Manager, Applications User Experience

A User-centered Research and Design Process

The Oracle Fusion Applications user experience was five years in the making. The development of this suite included an extensive and comprehensive user experience design process: ethnographic research, low-fidelity workflow prototyping, high-fidelity user interface (UI) prototyping, iterative formative usability testing, development feedback and iteration, and sales and customer evaluation throughout the design cycle. However, this process does not stop when our products are released. We conduct summative usability testing using the ISO 25062 Common Industry Format (CIF) for usability test reports as an organizational framework. CIF tests allow us to measure the overall usability of our released products. These studies provide benchmarks that allow for comparisons of a specific product release against previous versions of our product and against other products in the marketplace.

What Is a CIF Usability Test?

CIF refers to the internationally standardized method for reporting usability test findings used by the software industry. The CIF is based on a formal, lab-based test that is used to benchmark the usability of a product in terms of human performance and subjective data. The CIF was developed and is endorsed by more than 375 software customer and vendor organizations led by the National Institute of Standards and Technology (NIST), a US government entity. NIST sponsored the CIF through the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standards-making processes. Oracle played a key role in developing the CIF. The CIF report format and metrics are consistent with the ISO 9241-11 definition of usability: “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” Our goal in conducting CIF tests is to measure the performance and satisfaction of a representative sample of users on a set of core tasks and to help predict how usable a product will be with the larger population of customers.

Why Do We Perform CIF Testing?

The overarching purpose of the CIF for usability test reports is to promote the incorporation of usability as part of the procurement decision-making process for interactive products. CIF provides a common format for vendors to report the methods and results of usability tests to customer organizations, and it enables customers to compare the usability of our software to that of other suppliers. CIF also enables us to compare our current software with previous versions of our software.

CIF Testing for Fusion Applications

Oracle Fusion Applications comprises more than 100 modules in seven different product families. These modules encompass more than 400 task flows and 400 user roles. Due to resource constraints, we cannot perform comprehensive CIF testing across the entire product suite. Therefore, we had to develop meaningful inclusion criteria and work with other stakeholders across the applications development organization to prioritize product areas for testing. Ultimately, we want to test the product areas for which customers might be most interested in seeing CIF data. We also want to build credibility with customers; we need to be able to make the case to current and prospective customers that the product areas tested are representative of the product suite as a whole.
Our goal is to test the top use cases for each product. The primary activity in the scoping process was to work with the individual product teams to identify the key products and business process task flows in each product to test. We prioritized these products and flows through a series of negotiations among the user experience managers, product strategy, and product management directors for each of the primary product families within the Oracle Fusion Applications suite (Human Capital Management, Supply Chain Management, Customer Relationship Management, Financials, Projects, and Procurement). The end result of the scoping exercise was a list of 47 proposed CIF tests for the Fusion Applications product suite.

Figure 1. A participant completes tasks during a usability test in Oracle’s Usability Labs

Fusion Supplier Portal CIF Test

The first Fusion CIF test was completed on the Supplier Portal application in July of 2011. Fusion Supplier Portal is part of an integrated suite of Procurement applications that helps supplier companies manage orders, schedules, shipments, invoices, negotiations, and payments. The user roles targeted for the usability study were Supplier Account Receivables Specialists and Supplier Sales Representatives, including both experienced and inexperienced users across a wide demographic range. The test specifically focused on the following functionality and features:

- Manage payments – view payments
- Manage invoices – view invoice status and create invoices
- Manage account information – create new contact, review bank account information
- Manage agreements – find and view agreement, upload agreement lines, confirm status of agreement lines upload
- Manage purchase orders (PO) – view history of PO, request change to PO, find orders
- Manage negotiations – respond to request for a quote, check the status of a negotiation response

These product areas were selected to represent the most important subset of features and functionality of the flow, in terms of frequency and criticality of use by customers. A total of 20 users participated in the usability study. The results of the Supplier Portal evaluation were favorable and exceeded our expectations.

Figure 2. Fusion Supplier Portal

Next Studies

We plan to conduct two Fusion CIF usability studies per product family over the next nine months. The next product to be tested will be Self-service Procurement. End users are currently being recruited to participate in this usability study, and the test sessions are scheduled to begin during the last week of November.
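As a toy illustration of the two ISO 9241-11 performance measures named above, effectiveness is commonly summarized as a task completion rate and efficiency as mean time on task. The sketch below uses invented numbers; it is not Oracle's actual analysis or report format.

public class CifSummarySketch {
    public static void main(String[] args) {
        // Invented results for one task: did each of five participants
        // complete it, and how long did each one take (in seconds)?
        boolean[] completed = { true, true, false, true, true };
        double[] timeOnTaskSeconds = { 210, 185, 300, 240, 195 };

        int successes = 0;
        double totalTime = 0.0;
        for (int i = 0; i < completed.length; i++) {
            if (completed[i]) {
                successes++;
            }
            totalTime += timeOnTaskSeconds[i];
        }

        // Effectiveness: percentage of participants completing the task.
        double completionRate = 100.0 * successes / completed.length;
        // Efficiency: mean time on task across participants.
        double meanTime = totalTime / timeOnTaskSeconds.length;

        System.out.printf("Completion rate: %.0f%%%n", completionRate);
        System.out.printf("Mean time on task: %.0f seconds%n", meanTime);
    }
}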

    Read the article

  • Atmospheric Scattering

    - by Lawrence Kok
I'm trying to implement atmospheric scattering based on Sean O'Neil's algorithm that was published in GPU Gems 2, but I have some trouble getting the shader to work. My latest attempts resulted in: http://img253.imageshack.us/g/scattering01.png/ I've downloaded O'Neil's sample code from http://http.download.nvidia.com/developer/GPU_Gems_2/CD/Index.html and made minor adjustments to the 'SkyFromAtmosphere' shader so that it would run in AMD RenderMonkey. In the images you can see that a form of banding occurs, with a bluish tone; however, it is only applied to one half of the sphere, and the other half is completely black. Also, the banding appears at the zenith instead of the horizon, and for some reason I managed to get a pac-man shape. I would appreciate it if somebody could show me what I'm doing wrong.

Vertex Shader:

uniform mat4 matView;
uniform vec4 view_position;
uniform vec3 v3LightPos;

const int nSamples = 3;
const float fSamples = 3.0;

const vec3 Wavelength = vec3(0.650, 0.570, 0.475);
const vec3 v3InvWavelength = 1.0 / vec3(
    Wavelength.x * Wavelength.x * Wavelength.x * Wavelength.x,
    Wavelength.y * Wavelength.y * Wavelength.y * Wavelength.y,
    Wavelength.z * Wavelength.z * Wavelength.z * Wavelength.z);

const float fInnerRadius = 10.0;
const float fOuterRadius = fInnerRadius * 1.025;
const float fInnerRadius2 = fInnerRadius * fInnerRadius;
const float fOuterRadius2 = fOuterRadius * fOuterRadius;
const float fScale = 1.0 / (fOuterRadius - fInnerRadius);
const float fScaleDepth = 0.25;
const float fScaleOverScaleDepth = fScale / fScaleDepth;

const vec3 v3CameraPos = vec3(0.0, fInnerRadius * 1.015, 0.0);
const float fCameraHeight = length(v3CameraPos);
const float fCameraHeight2 = fCameraHeight * fCameraHeight;

const float fm_ESun = 150.0;
const float fm_Kr = 0.0025;
const float fm_Km = 0.0010;
const float fKrESun = fm_Kr * fm_ESun;
const float fKmESun = fm_Km * fm_ESun;
const float fKr4PI = fm_Kr * 4.0 * 3.141592653;
const float fKm4PI = fm_Km * 4.0 * 3.141592653;

varying vec3 v3Direction;
varying vec4 c0, c1;

float scale(float fCos)
{
    float x = 1.0 - fCos;
    return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
}

void main(void)
{
    // Get the ray from the camera to the vertex, and its length (which is the
    // far point of the ray passing through the atmosphere)
    vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
    vec3 v3Pos = normalize(gl_Vertex.xyz) * fOuterRadius;
    vec3 v3Ray = v3CameraPos - v3Pos;
    float fFar = length(v3Ray);
    v3Ray = normalize(v3Ray);

    // Calculate the ray's starting position, then calculate its scattering offset
    vec3 v3Start = v3CameraPos;
    float fHeight = length(v3Start);
    float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight));
    float fStartAngle = dot(v3Ray, v3Start) / fHeight;
    float fStartOffset = fDepth * scale(fStartAngle);

    // Initialize the scattering loop variables
    float fSampleLength = fFar / fSamples;
    float fScaledLength = fSampleLength * fScale;
    vec3 v3SampleRay = v3Ray * fSampleLength;
    vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

    // Now loop through the sample rays
    for(int i = 0; i < nSamples; i++)
    {
        float fHeight = length(v3SamplePoint);
        float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
        float fLightAngle = dot(normalize(v3LightPos), v3SamplePoint) / fHeight;
        float fCameraAngle = dot(normalize(v3Ray), v3SamplePoint) / fHeight;
        float fScatter = (-fStartOffset + fDepth*( scale(fLightAngle) - scale(fCameraAngle)))/* 0.25f*/;
        vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI));
        v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
        v3SamplePoint += v3SampleRay;
    }

    // Finally, scale the Mie and Rayleigh colors and set up the varying
    // variables for the pixel shader
    vec4 newPos = vec4((gl_Vertex.xyz + view_position.xyz), 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * vec4(newPos.xyz, 1.0);
    gl_Position.z = gl_Position.w * 0.99999;
    c1 = vec4(v3FrontColor * fKmESun, 1.0);
    c0 = vec4(v3FrontColor * (v3InvWavelength * fKrESun), 1.0);
    v3Direction = v3CameraPos - v3Pos;
}

Fragment Shader:

uniform vec3 v3LightPos;

varying vec3 v3Direction;
varying vec4 c0;
varying vec4 c1;

const float g = -0.90;
const float g2 = g * g;
// note: Exposure is declared but never used below; O'Neil's HDR variant
// applies something like gl_FragColor = 1.0 - exp(-Exposure * color)
const float Exposure = 2.0;

void main(void)
{
    float fCos = dot(normalize(v3LightPos), v3Direction) / length(v3Direction);
    float fMiePhase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos*fCos)
                    / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
    gl_FragColor = c0 + fMiePhase * c1;
    gl_FragColor.a = 1.0;
}

    Read the article

  • GLSL Atmospheric Scattering Issue

    - by mtf1200
I am attempting to use Sean O'Neil's shaders to accomplish atmospheric scattering. For now I am just using SkyFromSpace and GroundFromSpace. The atmosphere works fine, but the planet itself is just a giant dark sphere with a white blotch that follows the camera. I think the problem might rest in the "v3Attenuate" variable, as when this is removed the sphere is shown (albeit without scattering). Here is the vertex shader. Thanks for the time!

uniform mat4 g_WorldViewProjectionMatrix;
uniform mat4 g_WorldMatrix;
uniform vec3 m_v3CameraPos;       // The camera's current position
uniform vec3 m_v3LightPos;        // The direction vector to the light source
uniform vec3 m_v3InvWavelength;   // 1 / pow(wavelength, 4) for the red, green, and blue channels
uniform float m_fCameraHeight;    // The camera's current height
uniform float m_fCameraHeight2;   // fCameraHeight^2
uniform float m_fOuterRadius;     // The outer (atmosphere) radius
uniform float m_fOuterRadius2;    // fOuterRadius^2
uniform float m_fInnerRadius;     // The inner (planetary) radius
uniform float m_fInnerRadius2;    // fInnerRadius^2
uniform float m_fKrESun;          // Kr * ESun
uniform float m_fKmESun;          // Km * ESun
uniform float m_fKr4PI;           // Kr * 4 * PI
uniform float m_fKm4PI;           // Km * 4 * PI
uniform float m_fScale;           // 1 / (fOuterRadius - fInnerRadius)
uniform float m_fScaleDepth;      // The scale depth (i.e. the altitude at which
                                  // the atmosphere's average density is found)
uniform float m_fScaleOverScaleDepth; // fScale / fScaleDepth

attribute vec4 inPosition;

// note: transforming a direction vector with w = 1.0 picks up the world
// matrix's translation; w = 0.0 is the usual choice for directions
vec3 v3ELightPos  = vec3(g_WorldMatrix * vec4(m_v3LightPos, 1.0));
vec3 v3ECameraPos = vec3(g_WorldMatrix * vec4(m_v3CameraPos, 1.0));

const int nSamples = 2;
const float fSamples = 2.0;

varying vec4 color;

float scale(float fCos)
{
    float x = 1.0 - fCos;
    return m_fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
}

void main(void)
{
    gl_Position = g_WorldViewProjectionMatrix * inPosition;

    // Get the ray from the camera to the vertex and its length (which is the
    // far point of the ray passing through the atmosphere)
    vec3 v3Pos = vec3(g_WorldMatrix * inPosition);
    vec3 v3Ray = v3Pos - v3ECameraPos;
    float fFar = length(v3Ray);
    v3Ray /= fFar;

    // Calculate the closest intersection of the ray with the outer atmosphere
    // (which is the near point of the ray passing through the atmosphere)
    float B = 2.0 * dot(m_v3CameraPos, v3Ray);
    float C = m_fCameraHeight2 - m_fOuterRadius2;
    float fDet = max(0.0, B*B - 4.0 * C);
    float fNear = 0.5 * (-B - sqrt(fDet));

    // Calculate the ray's starting position, then calculate its scattering offset
    vec3 v3Start = m_v3CameraPos + v3Ray * fNear;
    fFar -= fNear;
    float fDepth = exp((m_fInnerRadius - m_fOuterRadius) / m_fScaleDepth);
    // note: in O'Neil's published GroundFromSpace shader these two angles are
    // divided by length(v3Pos) rather than fFar, which may be worth checking
    float fCameraAngle = dot(-v3Ray, v3Pos) / fFar;
    float fLightAngle = dot(v3ELightPos, v3Pos) / fFar;
    float fCameraScale = scale(fCameraAngle);
    float fLightScale = scale(fLightAngle);
    float fCameraOffset = fDepth * fCameraScale;
    float fTemp = (fLightScale + fCameraScale);

    // Initialize the scattering loop variables
    float fSampleLength = fFar / fSamples;
    float fScaledLength = fSampleLength * m_fScale;
    vec3 v3SampleRay = v3Ray * fSampleLength;
    vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

    // Now loop through the sample rays
    vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
    vec3 v3Attenuate;
    for(int i = 0; i < nSamples; i++)
    {
        float fHeight = length(v3SamplePoint);
        float fDepth = exp(m_fScaleOverScaleDepth * (m_fInnerRadius - fHeight));
        float fScatter = fDepth * fTemp - fCameraOffset;
        v3Attenuate = exp(-fScatter * (m_v3InvWavelength * m_fKr4PI + m_fKm4PI));
        v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
        v3SamplePoint += v3SampleRay;
    }

    vec3 first = v3FrontColor * (m_v3InvWavelength * m_fKrESun + m_fKmESun);
    vec3 secondary = v3Attenuate;
    // This color is passed to the fragment shader and used as gl_FragColor
    color = vec4((first + vec3(0.25, 0.25, 0.25) * secondary), 1.0);
}

Here is also an image of the problem.

    Read the article

  • Spring AOP pointcut that matches annotation on interface

    - by seanizer
Hello, this is my first post here, so I apologize in advance for any stupidity on my side. I have a service class implemented in Java 6 / Spring 3 that needs an annotation to restrict access by role. I have defined an annotation called RequiredPermission that has as its value attribute one or more values from an enum called OperationType:

// Runtime retention is assumed here; without it, Spring AOP cannot see the
// annotation at runtime (the posted snippet omitted the @Retention line).
@Retention(RetentionPolicy.RUNTIME)
public @interface RequiredPermission {
    /**
     * One or more {@link OperationType}s that map to the permissions required
     * to execute this method.
     *
     * @return
     */
    OperationType[] value();
}

public enum OperationType {
    TYPE1, TYPE2;
}

package com.mycompany.myservice;

public interface MyService {
    @RequiredPermission(OperationType.TYPE1)
    void myMethod(MyParameterObject obj);
}

package com.mycompany.myserviceimpl;

public class MyServiceImpl implements MyService {
    public void myMethod(MyParameterObject obj) {
        // do stuff here
    }
}

I also have the following aspect definition:

/**
 * Security advice around methods that are annotated with
 * {@link RequiredPermission}.
 *
 * @param pjp
 * @param param
 * @param requiredPermission
 * @return
 * @throws Throwable
 */
@Around(value = "execution(public * com.mycompany.myserviceimpl.*(..))"
        + " && args(param)"                        // parameter object
        + " && @annotation( requiredPermission )", // permission annotation
        argNames = "param,requiredPermission")
public Object processRequest(final ProceedingJoinPoint pjp,
        final MyParameterObject param,
        final RequiredPermission requiredPermission) throws Throwable {
    if (userService.userHasRoles(param.getUsername(), requiredPermission.value())) {
        return pjp.proceed();
    } else {
        throw new SorryButYouAreNotAllowedToDoThatException(
                param.getUsername(), requiredPermission.value());
    }
}

The parameter object contains a user name, and I want to look up the required role for the user before allowing access to the method. When I put the annotation on the method in MyServiceImpl, everything works just fine: the pointcut is matched and the aspect kicks in. However, I believe the annotation is part of the service contract and should be published with the interface in a separate API package. And obviously, I would not like to put the annotation on both the service definition and the implementation (DRY). I know there are cases in Spring AOP where aspects are triggered by annotations on interface methods (e.g. Transactional). Is there a special syntax here, or is it just plain impossible out of the box?

PS: I have not posted my full Spring config, as it seems to be working just fine. And no, those are neither my original class nor method names. Thanks in advance, Sean

PPS: Actually, here is the relevant part of my Spring config:

<aop:aspectj-autoproxy proxy-target-class="false" />
<bean class="com.mycompany.aspect.MyAspect">
    <property name="userService" ref="userService" />
</bean>
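A note for readers hitting the same wall: Java method annotations are not inherited from interfaces, so the @annotation() pointcut only matches when the annotation sits on the implementing method. Transactional works on interfaces because Spring's transaction infrastructure looks the annotation up itself rather than relying on that pointcut. The sketch below takes the same approach: match by type and resolve the annotation manually with Spring's AnnotationUtils.findAnnotation, which does search implemented interfaces. It reuses the question's names (MyService, MyParameterObject, UserService, and the exception are all assumed from the question), so treat it as a sketch rather than a verified drop-in.

import java.lang.reflect.Method;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.core.annotation.AnnotationUtils;

@Aspect
public class PermissionAspect {

    private UserService userService; // injected, as in the question's config

    // Match all public MyService methods by type; because the pointcut no
    // longer uses @annotation(), it does not matter whether the annotation
    // is declared on the interface or the implementation.
    @Around(value = "execution(public * com.mycompany.myservice.MyService+.*(..))"
            + " && args(param)", argNames = "param")
    public Object processRequest(final ProceedingJoinPoint pjp,
            final MyParameterObject param) throws Throwable {
        Method method = ((MethodSignature) pjp.getSignature()).getMethod();
        // findAnnotation searches the method, its superclass methods, and
        // the interfaces of the declaring class.
        RequiredPermission permission =
                AnnotationUtils.findAnnotation(method, RequiredPermission.class);
        if (permission == null) {
            return pjp.proceed(); // no permission declared on this method
        }
        if (userService.userHasRoles(param.getUsername(), permission.value())) {
            return pjp.proceed();
        }
        throw new SorryButYouAreNotAllowedToDoThatException(
                param.getUsername(), permission.value());
    }

    public void setUserService(UserService userService) {
        this.userService = userService;
    }
}

The trade-off is that the pointcut now matches by type rather than by annotation, so every MyService call pays the cost of the reflective lookup, and unannotated methods must be let through explicitly.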

    Read the article

  • I am using a constructor for my array but why is it saying I need a return type?

    - by JB
using System;
using System.Text;
using System.Windows.Forms;

namespace Eagle_Eye_Class_Finder
{
    public class GetSchedule
    {
        public class IDnumber
        {
            public string Name { get; set; }
            public string ID { get; set; }
            public string year { get; set; }
            public string class1 { get; set; }
            public string class2 { get; set; }
            public string class3 { get; set; }
            public string class4 { get; set; }
        } // <-- this closing brace was missing in the original post

        IDnumber[] IDnumbers = new IDnumber[3];

        // The "method must have a return type" error occurred because,
        // without the closing brace above, this constructor was parsed as a
        // member of the nested IDnumber class, where the name GetSchedule
        // does not match the class name and so cannot be a constructor.
        public GetSchedule()
        {
            IDnumbers[0] = new IDnumber() { Name = "Joshua Banks", ID = "900456317", year = "Senior", class1 = "TEET 4090", class2 = "TEET 3020", class3 = "TEET 3090", class4 = "TEET 4290" };
            IDnumbers[1] = new IDnumber() { Name = "Sean Ward", ID = "900456318", year = "Junior", class1 = "ENGNR 4090", class2 = "ENGNR 3020", class3 = "ENGNR 3090", class4 = "ENGNR 4290" };
            IDnumbers[2] = new IDnumber() { Name = "Terrell Johnson", ID = "900456319", year = "Sophomore", class1 = "BUS 4090", class2 = "BUS 3020", class3 = "BUS 3090", class4 = "BUS 4290" };
        }

        static void ProcessNumber(IDnumber myNum)
        {
            // was IDnumber.Name etc., which is static-style access on the
            // type and does not compile; use the instance parameter instead
            StringBuilder myData = new StringBuilder();
            myData.AppendLine(myNum.Name);
            myData.AppendLine(": ");
            myData.AppendLine(myNum.ID);
            myData.AppendLine(myNum.year);
            myData.AppendLine(myNum.class1);
            myData.AppendLine(myNum.class2);
            myData.AppendLine(myNum.class3);
            myData.AppendLine(myNum.class4);
            MessageBox.Show(myData.ToString());
        }

        public string GetDataFromNumber(string ID)
        {
            foreach (IDnumber idCandidateMatch in IDnumbers)
            {
                if (idCandidateMatch.ID == ID)
                {
                    StringBuilder myData = new StringBuilder();
                    myData.AppendLine(idCandidateMatch.Name);
                    myData.AppendLine(": ");
                    myData.AppendLine(idCandidateMatch.ID);
                    myData.AppendLine(idCandidateMatch.year);
                    myData.AppendLine(idCandidateMatch.class1);
                    myData.AppendLine(idCandidateMatch.class2);
                    myData.AppendLine(idCandidateMatch.class3);
                    myData.AppendLine(idCandidateMatch.class4);
                    return myData.ToString();
                }
            }
            return "";
        }
    }
}

    Read the article

  • No image displayed in Fancybox

    - by seansean11
I am having trouble getting fancybox to display its corresponding images on a website that I'm building: http://www.nomadicdrift.com/test/kaniwa#events. It's a custom one-page portfolio theme that I set up on the WordPress platform. If you follow the link to the events section you will see one figure item in a gallery-like position. I have this image set up to work as a fancybox gallery, but when you click on it, it opens up the fancybox interface and does not place an image in the frame, even though it should. So this is the problem: the images do not show up in fancybox, and instead I see just the frame. Here is the HTML that I'm displaying:

<figure>
    <!-- note: data-fancybox-type is normally a content type such as "image"
         or "iframe"; a caption string here may keep fancybox from detecting
         the content, and probably belongs in the title attribute instead -->
    <a class="fancybox" data-fancybox-type="Fashion Show de Paris, France"
       href="http://www.nomadicdrift.com/test/kaniwa/wp-content/uploads/2012/07/NM2.jpg">
        <img class="attachment-evento wp-post-image" width="231" height="191"
             title="NM" alt="NM"
             src="http://www.nomadicdrift.com/test/kaniwa/wp-content/uploads/2012/07/NM2-231x191.jpg">
    </a>
    <figcaption>
        <a class="fancybox" data-fancybox-type="Fashion Show de Paris, France"
           href="http://www.nomadicdrift.com/test/kaniwa/wp-content/uploads/2012/07/CEDESAN.jpg">
        </a>
        <h4>Fashion Show de Paris, France</h4>
    </figcaption>
</figure>

I'm not going to bore you with the PHP that I used to get that output, because I think the problem lies elsewhere. I have also tried to set up a simple, standard fancybox gallery on the site, but it gives me the same problem, leading me to believe that the problem is deeper than the HTML markup. I have also successfully used this same markup for a one-thumbnail fancybox gallery on another site. I thought maybe it was due to some conflict in the .js files I'm using. I tried uninstalling all of my plugins/addons (which aren't too many) one by one and still had the same result. I have all of my personal JavaScript in the functions.js file, which is where I call the fancybox plugin using the standard $("a.fancybox").fancybox();. I have installed this plugin before on other sites and have searched extensively for an answer, so any help is greatly appreciated. Thanks, Sean

    Read the article

< Previous Page | 21 22 23 24 25 26 27  | Next Page >