Search Results

Search found 4549 results on 182 pages for 'quickly'.


  • What Can We Learn About Software Security by Going to the Gym

    - by Nick Harrison
    There was a recent rash of car break-ins at the gym. Not an epidemic by any stretch, probably 4 or 5, but still... My gym used to allow you to hang your keys from a peg board at the front desk. This way you could come to the gym dressed to work out, lock your valuables in your car, and not have anything to worry about. Ignorance is bliss. The problem was that anyone who wanted to could go pick up your car keys, click the unlock button, and find your car. Once there, they could rummage through your stuff and then walk back in and finish their workout as if nothing had happened. The people doing this were a little smarter than the average thief and would swipe some but not all of your cash, leaving everything else in place. Most thieves would steal the whole car and be busted more quickly. The victims were unaware that anything had happened for several days. Fortunately, once the victims realized what had happened, the gym was still able to pull security tapes and find out who was misbehaving. All of the bad guys were busted, and everyone can now breathe a sigh of relief. It is once again safe to go to the gym.
    Except there was still a fundamental problem. Putting your keys on a peg board by the front door is just asking for bad things to happen. One person got busted exploiting this security flaw. Others can still be exploiting it. In fact, others may well have been exploiting it and simply never got caught. How long would it take you to realize that $10 was missing from your wallet, if everything else was there? How would you even know when it went missing? Would you even bother to go to the front desk and ask them to review security tapes if you were only missing a small amount? Once highlighted, it is easy to see how commonly such a vulnerability may have been exploited. So the gym took the very reasonable precaution of removing the peg board. To me the most shocking part of this story is the resulting uproar from gym members losing the convenient key peg. How dare they remove the trusted peg board? How can I work out now that I have to carry my keys from machine to machine? How can I enjoy my workout with this added inconvenience? This all happened a couple of weeks ago, and some people are still complaining.
    In light of the recent high-profile hacking, there are a couple of parallels that can be drawn. Many web sites are riddled with vulnerabilities as crazy and easily exploitable as leaving your car keys by the front door while you work out. No one ever considered thanking the people who were swiping these keys for pointing out the vulnerability. Without hesitation, they had their gym memberships revoked and are awaiting prosecution. The gym recognized the vulnerability for what it is, and closed up that attack vector. What can we learn from this? Monitoring and logging will not prevent a crime, but they will allow us to identify that a crime took place and may help track down who did it. Once we find a security weakness, we need to eliminate it. We may never identify and eliminate all security weaknesses, but we cannot allow well-known vulnerabilities to persist in our systems. In our case, we are not likely to meet resistance from end users. We are more likely to meet resistance from stakeholders, product owners, and keepers of schedules and budgets. We may meet resistance from integration partners, co-workers, and third-party vendors. Regardless of the source, we will see resistance, but the weakness needs to be dealt with. 
There is no need to glorify a cracker for bringing a security weakness to light. Regardless of their claimed motives, they are not heroes. There is also no point in wasting time defending weaknesses once they are identified. Deal with the weakness and move on. It may be embarrassing to find security weaknesses in our systems, but it is even more embarrassing to continue ignoring them. Even if it is unpopular, we need to seek out security weaknesses and eliminate them when we find them. SANS (http://www.sans.org) and MITRE have put together the Common Weakness Enumeration (http://cwe.mitre.org/), which lists out common weaknesses. The site navigation takes a little getting used to, but there is a treasure trove here. Here is the detail page for SQL Injection. It clearly states how this can be exploited, in case anyone doubts that the weakness should be taken seriously, and more importantly how to mitigate the risk.
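As one concrete illustration of the kind of mitigation that page describes (this sketch is mine, not from the article, and the Users table and column names are made up), parameterizing the query keeps user input from ever being treated as SQL:

        using System;
        using System.Data.SqlClient;

        public static class UserLookup
        {
            // Vulnerable pattern, for contrast (never do this):
            //   "SELECT Id, Name FROM Users WHERE Name = '" + name + "'"

            public static void PrintUser(SqlConnection connection, string name)
            {
                // connection is assumed to be open already.
                // The parameter value travels separately from the SQL text,
                // so it can never change the shape of the query.
                using (var cmd = new SqlCommand(
                    "SELECT Id, Name FROM Users WHERE Name = @name", connection))
                {
                    cmd.Parameters.AddWithValue("@name", name);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader["Id"], reader["Name"]);
                    }
                }
            }
        }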

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010. Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source. I'm not going to take the easy way and tie it to a SQL Server data source, though; I will tie it to an XML data file instead. Why? Well, why not? This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way. Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code.
    To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article. The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played. Note that when making the associations between these entities, I was given the option to create the foreign key, but I only chose to select this option for the association between Player and Position. The reason for this is that I am picturing the XML that will contain this data to look somewhat like this:

        <MFL>
          <Position/>
          <Position/>
          <Position/>
          <Team>
            <Player>
              <Statistic/>
            </Player>
          </Team>
        </MFL>

    Statistic will be under its associated Player node, and Player will be under its associated Team node - no need to have an Id to reference it if we know it will always fall under its parent. Position, however, is more of a lookup value that will not have any hierarchical relationship to the player. In fact, the Position data itself may be in a completely different XML file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id.
    So now that we have a simple data model laid out, I would like to generate two things based on it: a class for each entity with properties corresponding to each entity property, and an IO class with methods to get data for each entity, either all instances, by Id, or by parent. Now my experience with code generation in the past has consisted of writing up little apps that use the code dom directly to regenerate code on demand (or using tools like CodeSmith). Surely, there has got to be a more fun way to do this given that we are using the Entity Framework, which already has built-in code generation for SQL Server support. Let's start with that built-in stuff to give us a base to work off of. Right-click anywhere in the canvas of our model and select Add Code Generation Item. So just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more. I mean a TON more. Way too much complicated code was generated. Now, that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (tt) file. When I open the .tt file that was generated, right off the bat I realize there is going to be trouble - there is no color coding, no IntelliSense, no nothing!
That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue and I won't have to cast my mind back to the glory days of coding in VI (look it up, kids). Using the Extension Manager (available under the Tools menu), I did a quick search for "tt editor" in the Online Gallery and quickly found the Tangible T4 Editor. Downloading and installing this was a breeze, and after doing so I got some color coding and IntelliSense while editing the tt files. If you will be doing any customizing of tt files, I highly recommend installing this extension. Next, we'll see if that is enough help for us to tweak that tt file to do the kind of code generation that we want.
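As a rough preview of the kind of output I'm aiming for - this is my own hand-rolled sketch, not what the default Entity Framework template generates, and the XML attribute names (Id, Name, PositionId) are placeholders I made up - the entity classes and IO methods could end up looking something like this:

        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        public class Position { public int Id { get; set; } public string Name { get; set; } }
        public class Player   { public string Name { get; set; } public int PositionId { get; set; } }
        public class Team     { public string Name { get; set; } public List<Player> Players { get; set; } }

        public static class MflIo
        {
            // All positions, read from the lookup elements at the top of the file
            public static List<Position> GetPositions(string path)
            {
                return XDocument.Load(path).Root.Elements("Position")
                    .Select(p => new Position
                    {
                        Id = (int)p.Attribute("Id"),
                        Name = (string)p.Attribute("Name")
                    })
                    .ToList();
            }

            // All teams, with their players nested underneath them in the XML
            public static List<Team> GetTeams(string path)
            {
                return XDocument.Load(path).Root.Elements("Team")
                    .Select(t => new Team
                    {
                        Name = (string)t.Attribute("Name"),
                        Players = t.Elements("Player")
                            .Select(pl => new Player
                            {
                                Name = (string)pl.Attribute("Name"),
                                PositionId = (int)pl.Attribute("PositionId")
                            })
                            .ToList()
                    })
                    .ToList();
            }
        }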

    Read the article

  • Agile Testing Days 2012 – My First Conference!

    - by Chris George
    I’d like to give you a bit of background first… so please bear with me! In 1996, whilst studying for my final year of my degree, I applied for a job as a C++ Developer at a small software house in Hertfordshire. After bodging up the technical part of the interview I didn’t get the job, but was offered a position as a QA Engineer instead. The role sounded intriguing and the pay was pretty good, so in the absence of anything else I took it. Here began my career in the world of software testing! Back then, testing/QA was often an afterthought, something that was bolted on to the development process and very much a second-class citizen. Test automation was rare, and tools were basic or non-existent! The internet was just starting to take off, and whilst there might have been testing communities and resources, we were certainly not exposed to any of them. After 8 years I moved to another small company, and again didn’t find myself exposed to any of the changes that were happening in the industry. It wasn’t until I joined Red Gate in 2008 that my view of testing and software development as a whole started to expand. But it took a further 4 years for my view of testing to be totally blown open, and so the story really begins…
    In May 2012 I was fortunate to land the role of Head of Test Engineering. Soon after, I received an email with details for the “Agile Testing Days” conference in Potsdam, Germany. I looked over the suggested programme and some of the talks piqued my interest. For numerous reasons I’d shied away from attending conferences in the past, one of the main ones being that I didn’t see much benefit in attending loads of talks when I could just read about stuff like that on the internet. However, in my new role, I decided that it was time to bite the bullet and at least go to one conference. Perhaps I could get some new ideas to supplement and support some of the ideas I already had. So, on the 18th November 2012, myself and three other Red Gaters boarded a plane at Heathrow bound for Potsdam, Germany to attend Agile Testing Days 2012.
    Tutorial Day – “Software Testing Reloaded”
    We chose to do the tutorials on the 19th. I chose the one titled “Software Testing Reloaded – So you wanna actually DO something? We’ve got just the workshop for you. Now with even less powerpoint!”. With such a concise and serious title I just had to see what it was about! I nervously entered the room to be greeted by tables and chairs all over the place, not set out and frankly in one hell of a mess! There were a few people in there playing a game with dice. Okaaaay… this is going to be a long day! Actually the dice game was an exercise in deduction and simplification… I found it very interesting and it is certainly something I’ll be using at work as a training exercise! (I won’t explain the game here cause I don’t want to let the cat out of the bag…) The tutorial consisted of several games, exploring different aspects of testing. They were all practical yet required a fair amount of thinking. Matt Heusser and Pete Walen were running the tutorial, and presented it in a very relaxed and light-hearted manner. It was really my first experience of working in small teams with testers from very different backgrounds, and it was really enjoyable. Matt & Pete were very approachable and offered advice where required whilst still making you work for the answers! One of the tasks was to devise several strategies for testing some electronic dice. 
The premise was that a Vegas casino wanted to use the dice to appeal to the twenty-somethings interested in tech, but needed assurance that they were as reliable and random as traditional dice. This was a very interesting and challenging exercise that forced us to challenge various assumptions and determine/clarify requirements, but most of all it was frustrating because the dice made a very, very irritating beeping noise. Multiply that by at least 12 dice and I was dreaming about them all that night!! Some of the main takeaways that were brilliantly demonstrated through the games were not to make assumptions, to challenge requirements, and to have fun testing! The tutorial lasted the whole day, but to be honest the day went very quickly! My introduction into the conference experience started very well indeed, and I would talk to both Matt and Pete several times during the 4 days. Days 1, 2 & 3 will be coming soon…  

    Read the article

  • Real Time BI in the Real World

    - by tobin.gilman(at)oracle.com
    One of my favorite BI offerings from Oracle is a solution called Oracle Real Time Decisions.  Whenever I mention this product in customer meetings, eyes light up.  There are some fascinating examples of customers using it to up-sell, cross-sell, increase customer retention, and reduce risk in real time, with off-the-charts return on investment. I plan to share some of those stories in a future blog.  In this post, however, I want to share some far more common real time analytics use case scenarios that are being addressed with widely deployed Oracle BI and data integration technologies. Not all real time BI applications require continuous learning, predictive modeling, and data mining.  Many simply require the ability to integrate, aggregate, and access information that is current (typically within a few minutes or a few seconds).  The use cases are infinite.  A few I've seen: purchasing agents need to match demand against available inventory; manufacturing planners need to monitor current parts and material against scheduled build plans; airline agents need to match ticket demand against flight schedules; human resources managers need to track the status of global hiring requisitions against current headcount authorizations... you get the idea.
    One way of doing this is to run reports or federated queries directly against transactional systems.  That approach can be viable if you only need to access simple data sets on rare occasions.  High volume and complex queries can quickly bog down the performance of mission critical transactional systems.  There is an architecturally simple way of solving the problem, and it's being applied by real companies around the world to solve real needs in real time.
    Cbeyond, an Atlanta, GA based provider of voice, data and mobile business applications, delivers real time information to its call center agents as they are interacting with their customers. The data they need resides in production CRM and other transactional systems, but instead of reporting directly off those systems, data is first moved to an operational data store (ODS).  Rather than running data intensive, time consuming, and performance degrading batch ETL routines to populate the ODS, Cbeyond uses Oracle GoldenGate software to incrementally capture and move only the changed records from the log files of the transactional systems every few minutes.  
There is no impact on transactional system performance, and the information needed by call center representatives is up to date.  Oracle Business Intelligence software presents the information to service reps in a rich, visual, and highly interactive format. Avea is similar to Cbeyond.  They are a telecommunications company who integrates billing and customer information in an ODS that is accessed by their call center agents in real time using Oracle GoldenGate and Oracle Business Intelligence.  They've taken it a step further by using the ODS to feed a data warehouse.  The operational data store provides the current information needed by call center agents during "in flight" customer interactions.  The data warehouse is used for more sophisticated analysis of historical data.  For maximum performance, both the ODS and data warehouse run on the Oracle Exadata Database Machine. These are practical illustrations of companies addressing real time reporting and analysis needs using established business intelligence/data warehousing methodologies and tools common to many IT departments.  If real time BI could benefit your organization, you may already be closer than you thought to having the pieces in place to solve the problem.    Give us a shout if you are interested in learning more or if you have an interesting use or approach to real-time BI.

    Read the article

  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange ideas with Oracle experts and customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update on Oracle and Clouds, from private to public and hybrid models. So in preparing this session, I thought it was a good start to take stock of Cloud Computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid.
    From Traditional IT to Clouds
    One size doesn't fit all, and for big companies that already have an IT in place, there will be parts eligible for external (public) cloud, and parts that will be required to stay inside the firewalls, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the Cloud Computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve towards the same model that end-users are now used to in their day to day life, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a Global Service Provider", or for some into "IT is the Business" (see: Gartner Identifies Four Futures for IT and CIO), and for both models toward a Private Cloud Service Provider. In this journey, there is still a big difference between most existing external Clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are overly specialized, but at the end of the day, there are really few business processes that rely on only one application. So CIOs have to combine everything together, external and internal. And for the internal parts that they will have to evolve to a Private Cloud, the scope can be very large. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed. So let's have a look at the different Cloud models, what type of users they are addressing, what value they bring and, most importantly, what needs to be done by the Cloud Provider, and what is left over to the user.
    IaaS, PaaS, SaaS: what's provided and what needs to be done
    First of all, the Cloud Provider will have to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT will have to deliver and integrate: from disks to applications. As we can see in the above picture, providing pure IaaS leaves a lot to cover for the end-user, which is why the end-user targeted by this Cloud Service is IT people. If you want to bring more value to developers, you need to provide them a development platform ready to use, which is what PaaS stands for, by providing not only the processing power, storage and OS, but also the Database and Middleware platform. SaaS, being the last mile of the Cloud, provides an application ready to use by business users; the remaining part for the end-users is configuring and specifying the application for their specific usage. 
In addition to that, there are common challenges encompassing all types of Cloud Services:
- Security: covering all aspects, not only of user management but also data flows and data privacy
- Charge back: measuring what is used and by whom
- Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, Database and Middleware for PaaS, to a full Business Application for SaaS
- Scalability: the ability to evolve ALL the components of the Cloud Provider stack as needed
- Availability: the ability to cover "always on" requirements
- Efficiency: providing an infrastructure that leverages shared resources in an efficient way and still complies with SLAs (performance, availability, scalability, and ability to evolve)
- Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades, ...)
- Management: providing monitoring, configuration and self-service up to the end-users
Oracle Strategy and Clouds
For CIOs to succeed in their Private Cloud implementation means that they must encompass all those aspects for each component life-cycle that they selected to build their Cloud. That's where a multi-vendor layered approach comes up short in terms of efficiency. That's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient Cloud Services solutions for IaaS, PaaS and SaaS. We are going as far as embedding software functions in hardware (storage, processor level, ...) to ensure the best SLA with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components that you are running today in-house are exactly the same that we are using to build Clouds, bringing you flexibility, reversibility and a fast path to adoption. With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically, when talking about Cloud), we are delivering all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and the ability to evolve by design. To give you a feeling of what that brings just in terms of implementation project timeline, for example with Oracle SPARC SuperCluster, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes Oracle Database, OS, virtualization, Database Storage (Exadata Storage Cells in this case), Application Storage, and all network configuration. This strategy enables CIOs to very quickly build Cloud Services, taking out not only the complexity of integrating everything together but also the automation and evolution complexity and cost. I invite you to discuss all those aspects in regard to your particular context face to face on November 28th.

    Read the article

  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
    Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. There are times when it is desirable to know who called the method or property you are currently executing.  Some applications of this could include logging libraries, or possibly even something more advanced that may serve up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time.
    Determining the caller using the stack
    One of the ways of doing this is to examine the call stack.  The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain all the way back to the beginning for the thread that has called you. You can get caller information by either instantiating the StackTrace class (which will give you the complete stack trace, much like you see when an exception is generated), or by using StackFrame which gets a single frame of the stack trace.  Both involve examining the call stack, which is a non-trivial task, so care should be taken not to do this in a performance-intensive situation. For our simple example let's say we are going to recreate the wheel and construct our own logging framework.  Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller.  We could easily do this as follows:

        static void Log(object message)
        {
            // frame 1, true for source info
            StackFrame frame = new StackFrame(1, true);
            var method = frame.GetMethod();
            var fileName = frame.GetFileName();
            var lineNumber = frame.GetFileLineNumber();

            // we'll just use a simple Console write for now
            Console.WriteLine("{0}({1}):{2} - {3}",
                fileName, lineNumber, method.Name, message);
        }

    So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify we want source information (if available) and then taking the information from the frame.  This works fine, and if we tested it out by calling from a file such as this:

        // File c:\projects\test\CallerInfo\CallerInfo.cs

        public class CallerInfo
        {
            static void Main()
            {
                Log("Hello Logger!");
            }
        }

    We'd see this:

        c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger!

    This works well, and in fact StackTrace and StackFrame are still the best ways to examine deeper into the call stack.  But if you only want to get information on the caller of your method, there is another option…
    Determining the caller at compile-time
    In C# 5 (.NET 4.5) they added some attributes that can be supplied to optional parameters on a method to receive caller information.  These attributes can only be applied to methods with optional parameters with explicit defaults.  Then, as the compiler determines who is calling your method with these attributes, it will fill in the values at compile-time. 
These are the currently supported attributes available in the System.Runtime.CompilerServices namespace:
- CallerFilePathAttribute – The path and name of the file that is calling your method.
- CallerLineNumberAttribute – The line number in the file where your method is being called.
- CallerMemberNameAttribute – The member that is calling your method.
So let's take a look at how our Log method would look using these attributes instead:

        static void Log(object message,
            [CallerMemberName] string memberName = "",
            [CallerFilePath] string fileName = "",
            [CallerLineNumber] int lineNumber = 0)
        {
            // we'll just use a simple Console write for now
            Console.WriteLine("{0}({1}):{2} - {3}",
                fileName, lineNumber, memberName, message);
        }

Again, calling this from our sample Main would give us the same result:

        c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger!

However, though this seems the same, there are a few key differences. First of all, there are only 3 supported attributes (at this time) that give you the file path, line number, and calling member.  Thus, it does not give you as rich a detail as a StackFrame (which can give you the calling type as well and deeper frames, for example).  Also, these are supported through optional parameters, which means we could call our new Log method like this:

        // They're defaults, why not fill 'em in
        Log("My message.", "Some member", "Some file", -13);

In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods. These caveats aside, they do let you get similar information inside of methods at a much greater speed!  How much greater?  Well, let's crank through 1,000,000 iterations of each.  Instead of logging to console, I'll return the formatted string length of each.  Doing this, we get:

        Time for 1,000,000 iterations with StackTrace: 5096 ms
        Time for 1,000,000 iterations with Attributes: 196 ms

So you see, using the attributes is much, much faster!  Nearly 25x faster in fact.
Summary
There are a few ways to get caller information for a method.  The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost.  On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it.
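For anyone who wants to try the attribute approach end to end, here is a small self-contained program of my own (the class and method names are just placeholders, not from the article) that shows the compiler filling in a different member name per call site:

        using System;
        using System.Runtime.CompilerServices;

        static class Logger
        {
            public static void Log(object message,
                [CallerMemberName] string memberName = "",
                [CallerFilePath] string filePath = "",
                [CallerLineNumber] int lineNumber = 0)
            {
                // The caller's member name, file and line are baked into the call site at compile time
                Console.WriteLine("{0}({1}):{2} - {3}", filePath, lineNumber, memberName, message);
            }
        }

        class Program
        {
            static void Main()
            {
                Logger.Log("Hello from Main");   // prints ...:Main - Hello from Main
                DoWork();
            }

            static void DoWork()
            {
                Logger.Log("Hello from DoWork"); // prints ...:DoWork - Hello from DoWork
            }
        }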

    Read the article

  • Think before you animate

    - by David Paquette
    Animations are becoming more and more common in our applications.  With technologies like WPF, Silverlight and jQuery, animations are becoming easier for developers to use (and abuse).  When used properly, animation can augment the user experience.  When used improperly, animation can degrade the user experience.  Sometimes, the differences can be very subtle. I have recently made use of animations in a few projects and I very quickly realized how easy it is to abuse animation techniques.  Here are a few things I have learned along the way.
    1) Don’t animate for the sake of animating. We’ve all seen the PowerPoint slides with annoying slide transitions that animate 20 different ways.  It’s distracting and tacky.  The same holds true for your application.  While animations are fun and becoming easy to implement, resist the urge to use the technology just because you think the technology is amazing.
    2) Animations should (and do) have meaning. I recently built a simple Windows Phone 7 (WP7) application, Steeped (download it here).  The application has 2 pages.  The first page lists a number of tea types.  When the user taps on one of the tea types, the application navigates to the second page with information about that tea type and some options for the user to choose from. One of the last things I did before submitting Steeped to the marketplace was add a page transition between the 2 pages.  I chose the Slide / Fade Out transition.  When the user selects a tea type, the main page slides to the left and fades out.  At the same time, the details page slides in from the right and fades in.  I tested it and thought it looked great, so I submitted the app.  A few days later, I asked a friend to try the app.  He selected a tea type, and I was a little surprised by how he used the app.  When he wanted to navigate back to the main page, instead of pressing the back button on the phone, he tried to use a swiping gesture.  Of course, the swiping gesture did nothing because I had not implemented that feature.  After thinking about it for a while, I realized that the page transition I had chosen implied a particular behaviour.  As a user, if an action I perform causes an item (in this case the page) to move, then my expectation is that I should be able to move it back.  I have since added logic to handle the swipe gesture and I think the app flows much better now. When using animation, it pays to ask yourself:  What story does this animation tell my users?
    3) Watch the replay. Some animations might seem great initially but can get annoying over time.  When you use an animation in your application, make sure you try using it over and over again to make sure it doesn’t get annoying.  When I add an animation, I try to watch it at least 25 times in a row.  After watching the animation repeatedly, I can make a more informed decision whether or not I should keep the animation.  Often, I end up shortening the length of the animations.
    4) Don’t get in the user’s way. An animation should never slow the user down.  When implemented properly, an animation can give a perceived bump in performance.  A good example of this is the page transitions in most of the built-in apps on WP7.  Obviously, these page animations don’t make the phone any faster, but they do provide a more responsive user experience.  Why?  Because most of the animations begin as soon as the user has performed some action.  
The destination page might not be fully loaded yet, but the system responded immediately to the user action, giving the impression that the system is more responsive.  If the user did not see anything happen until after the destination page was fully loaded, the application would feel clumsy and slow.  Also, it is important to make sure the animation does not degrade the performance (or perceived performance) of the application. Just a few things to consider when using animations.  As is the case with many technologies, we often learn how to misuse it before we learn how to use it effectively.

    Read the article

  • Gamification = -10#/3mo

    - by erikanollwebb
    One of the purposes of gamification of anything is to see if you can modify the behavior of the user. In the enterprise, that might mean getting sales people to enter more information into a CRM system, encouraging employees to update their HR records, motivating people to participate in forums and discussions, or process invoices more quickly.  Wikipedia defines behavior modification as "the traditional term for the use of empirically demonstrated behavior change techniques to increase or decrease the frequency of behaviors, such as altering an individual's behaviors and reactions to stimuli through positive and negative reinforcement of adaptive behavior and/or the reduction of behavior through its extinction, punishment and/or satiation."  Gamification is just a way to modify someone's behavior using game mechanics. And the magic question is always whether it works. So I thought I would present my own little experiment from the last few months.  This spring, I upgraded to a Samsung Galaxy 4.  It's a pretty sweet phone in many ways, but one of the little extras I discovered was a built in app called S Health. S Health is an app that you can use to track calories, weight, exercise and it has a built in pedometer. I looked at it when I got the phone, but assumed you had to turn it on to use it so I didn't look at it much.  But sometime in July, I realized that in fact, it just ran in the background and was quietly tracking my steps, with a goal of 10,000 per day.  10,000 steps per day is this magic number recommended by the Surgeon General and the American Heart Association.  Dr. Oz pushes it as the goal for daily exercise.  It's about 5 miles of walking. I'm generally not the kind of person who always has my phone with me.  I leave it in my purse and pull it out when I need it.  But then I realized that meant I wasn't getting a good measure of my steps.  I decided to do a little experiment, and carry it with me as much as possible for a week.  That's when I discovered the gamification that changed my life over the last 3 months.  When I hit 10,000 steps, the app jingled out a little "success!" tune and I got a badge.  I was hooked.  I started carrying my phone.  I started making sure I had shoes I could walk in with me.  I started walking at lunch time, because I realized how often I sat at my desk for 8-10 hours every day without moving.  I started pestering my husband to walk with me after work because I hadn't hit my 10,000 yet, leading him at one point to say "I'm not as much a slave to that badge as you are!"  I started looking at parking lots differently.  Can't get a space up close?  No worries, just that many steps toward my 10,000.  I even tried to see if there was a second power user level at 15,000 or 20,000 (*sadly, no).  If I was close at the end of the day, I have done laps around my house until I got my badge.  I have walked around the block one more time to get my badge.  I have mentally chastised myself when I forgot to put my phone in my pocket because I don't know how many steps I got.  The badge below I got when my boss and I were in New York City and we walked around the block of our hotel just to watch the badge pop up. There are a bunch of tools out on the market now that have similar ideas for helping you to track your exercise, make it social.  There are apps (my favorite is still Zombies, Run!).  You could buy a FitBit or UP by Jawbone.   Interactive fitness makes the Expresso stationary bike with built in video games.  
All designed to help you be more aware of your activity and keep you engaged and motivated.  And the idea is to help you change your behavior. I know someone who would spend extra time and work hard on the Expresso because he had built up strategies for how to kill the most dragons while he was riding to get more points.  When the machine broke down, he didn't ride a different bike because it just wasn't that interesting. But for me, just the simple jingle and badge have been all I needed.  I admit, I still giggle gleefully when I hear the tune sing out from my pocket. After a few weeks, I noticed I had dropped a few pounds.  Not a lot, just 2-3.  But then I was really hooked.  I started making a point both to eat a little less and hit 10,000 steps as much as I could.  I bemoaned that during the floods in Boulder, I wasn't hitting my 10,000 steps.  And now, a few months later, I'm almost 10 lbs lighter. All for 1 badge a day. So yes, simple gamification can increase motivation and engagement.  And that can lead to changes in behavior.  Now the job is to apply that to the enterprise space in a meaningful and engaging way. 

    Read the article

  • EM CLI, diving in and beyond!

    - by Maureen Byrne
    Doing more in less time… Isn’t that what we all strive to do? With this in mind, I put together two screen watches on the Oracle Enterprise Manager 12c command line interface, or EM CLI as it is also known. There is a wealth of information on any topic that you choose to read about, from manual pages to coding documents… might I even say blog posts? In our busy lives it is so nice to just sit back with a short video, watch, and learn enough to dive in. Doing more in less time is the essence of EM CLI. It enables you to script fundamental and complex administrative tasks in an elegant way, thanks to the Jython scripting language. Repetitive tasks can be scripted and reused again and again. Sure, a Graphical User Interface provides a more intuitive step-by-step approach to tasks, and it provides a way of quickly becoming familiar with a product and its many features, and it is definitely the way to go when viewing performance data and historical trending… but for repetitive and complex tasks, scripting is the way to go!
    Let us take the everyday task of creating an administrator. Using EM CLI in interactive mode, the command could look like this:

        emcli>create_user(name='jan.doe', type='EXTERNAL_USER')

    This command creates an administrator called jan.doe which is an externally authenticated user, possibly LDAP or SSO, defined by the EXTERNAL_USER tag. The create_user procedure takes many arguments; see the documentation for more information. Now, where EM CLI really shines and shows power is in creating multiple users. Regardless of the number, tens or thousands, the effort is the same. With the use of a standard programming construct, a loop, you can place your create_user() procedure within it. Using a loop allows you to iterate through a previously created list, creating new users until the list is complete. Using EM CLI in script mode, your Jython loop would look something like this:

        for user in list_of_users:
            create_user(name=user, expire='true', password='welcome123')

    This Jython code snippet iterates through a previously defined list of names, list_of_users, taking each name (user in this case) and creating an administrator, setting the password to welcome123 but forcing the user to reset it when they first log in. This is only one of over four hundred procedures created to expose Oracle Enterprise Manager 12c functionality in a powerful and programmatic way. It is a few months since we released EM CLI with the scripting option. We are seeing many users adapt to this fun and powerful way of using Oracle Enterprise Manager 12c. What are the first steps? Watch these screen watches, and dive in. The first screen watch steps you through where and how to download and install and how to run your first few commands. The second screen watch steps you through a few scripts. Next time, I am going to show you the basic building blocks for writing a Jython script to perform Oracle Enterprise Manager 12c administrative tasks. Join this growing group of EM CLI users…. Dive in! 

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO because they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post. SQLIO works by slamming your disk. It performs as many reads as it can or it performs as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say or wading into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software.
    My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system, the disk for sure, but also memory and CPU. How to stress the system? SQLIO of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up? Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with the character 0x0. I never got a computer science degree. I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right, you're making a very large file, but you're filling it with NULL values. That's actually OK when all you're testing is the disk sub-system. But, when you want to test compression and decompression, that can be an issue.
    I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file if you're interested). Now the reads were taken care of. I am seeing very realistic performance from decompressing the information for reads through SQLIO. But what about writes? Well, the issue is, what does SQLIO write? I don't have access to the code. But I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test, use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896 KB went to 275,871 KB compressed; after SQLIO it went to 608 KB compressed). So, what does SQLIO write? It writes air. If you're trying to test it with compression or maybe some other type of file storage mechanism like dedupe, you need to know this because your tests really won't be valid. 
Should I find some other mechanism for testing? Yeah, if all I'm interested in is establishing performance to my own satisfaction, yes. But, I want to be able to compare my results with other people's results and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion. Oh, and before I go, I get to brag a bit. Measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
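As a footnote to the NULL-values point above, here is a rough stand-alone check of my own (not from the post) that shows why a file of 0x0 bytes tells you nothing about compression behavior. GZIP is just a stand-in here for whatever compression the storage layer applies, and random bytes are actually a worst case; real database pages fall somewhere in between:

        using System;
        using System.IO;
        using System.IO.Compression;

        class CompressibilityDemo
        {
            static long CompressedSize(byte[] data)
            {
                using (var output = new MemoryStream())
                {
                    using (var gzip = new GZipStream(output, CompressionMode.Compress))
                        gzip.Write(data, 0, data.Length);
                    return output.Length;
                }
            }

            static void Main()
            {
                var nulls = new byte[16 * 1024 * 1024];   // what SQLIO writes: all 0x0
                var random = new byte[16 * 1024 * 1024];  // stand-in for "real" data
                new Random(42).NextBytes(random);

                Console.WriteLine("NULL-filled: {0:N0} bytes compressed", CompressedSize(nulls));
                Console.WriteLine("Random     : {0:N0} bytes compressed", CompressedSize(random));
            }
        }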

    Read the article

  • Finding it Hard to Deliver Right Customer Experience: Think BPM!

    - by Ajay Khanna
    Our relationship with our customers is not just a single interaction, and we should not treat it like one. A customer’s relationship with a vendor is like a journey which starts way before the customer makes a purchase and lasts long after that. The journey may start with the customer researching a product, which may lead to the eventual purchase, and may continue with support or service needs for the product. A typical customer journey can be represented as shown below: As you may notice, customers tend to use multiple channels to interact with a company throughout their journey.  They also expect that they should get a consistent experience, no matter what interaction channel they may choose. Customers do not like to repeat the information they have already provided and expect companies to remember their preferences and offer them relevant products and services. If the company fails to meet this expectation, customers will not only abandon the purchase and go to the competitor but may also influence others’ purchase decisions. Gone are the days when word of mouth was the only medium and the customer could influence “six” others. This is the age of social media, and a customer’s good or bad experiences, especially the bad ones, get highly amplified and may influence hundreds of others. Challenges that face B2C companies today include:
    Delivering consistent experience: The reason that delivering a consistent experience is challenging is fragmented data, disjointed systems and siloed multichannel interactions. Customers tend to get different service quality if they use web vs. phone vs. store. They get different responses from different service agents or get inconsistent answers if they call the sales vs. the service group in the company. Such inconsistent experiences result in lower customer satisfaction or NPS (net promoter score) numbers.
    Increasing Revenue: To stay competitive, companies frequently introduce new products and services. Delay in launching such offerings has a significant impact on revenue realization. In addition to new product revenue, there are multiple opportunities to up-sell and cross-sell that impact the bottom line. If companies are not able to identify such opportunities, bring a product to market quickly, or offer the right product to the right customer at the right time, significant loss of revenue may occur.
    Ensuring Compliance: Companies must be compliant with ever-changing regulations; these could be about Know Your Customer (KYC), export/import regulations, or taxation policies. In addition to government agencies, companies also need to comply with the SLAs that they have committed to their customers. A lapse in meeting any of these requirements may lead to serious fines, penalties and loss of business. Companies have to make sure that they are in compliance with all such regulations and SLA commitments, at any given time.
    With the advent of social networks and mobile technology, companies not only need to focus on process efficiency but also on customer engagement. Improving engagement means delivering the customer experience the customer is expecting and interacting with the customer at the right time using the right channel. Customers expect to be able to contact you via any channel of their choice (web, email, chat, mobile, social media), and purchase via any viable channel (web, phone, store, mobile). Customers expect companies to understand their particular needs and remember their preferences on repeated visits. 
To deliver such an integrated, consistent, and contextual experience, the power of BPM is a must. Your company may be organized in departments like Marketing, Sales, and Service. You may hold prospect data in SFA, order information in ERP, and customer issues in CRM. However, the experience delivered to the customer must not be constrained by your system legacy. BPM helps in designing the right experience for the right customer and integrates all the underlying channels, systems, and applications to make sure the right information will be delivered to the right knowledge worker or to the customer every single time. Orchestrating information across all systems (MDM, CRM, ERP), departments (commerce, merchandising, marketing, service) and channels (email, phone, web, social) is the key, and that’s what BPM delivers. In addition to orchestrating systems and channels for consistency, BPM also provides an ability for analysis and decision management. By using data from historical transactions, social media and other systems, users can determine customer preferences, customer value, and churn propensity. This information, in context, is then used while making a decision at a process step. Working with a real-time decision management system can also suggest the right up-sell or cross-sell offers, discounts or next-best-action steps for a particular customer. Timely action on customer issues or requests is also a key tenet of a good customer experience. BPM’s complex event processing capabilities help companies take proactive actions before issues get escalated. A BPM system can be designed to listen for certain event patterns, deduce from them customer situations (credit card stolen, baggage lost, change of address) and do triage before the situation goes out of control. If such a situation arises, you can send alerts to the right people or immediately invoke corrective actions. Last but not least, one of BPM’s key values is to drive continuous improvement. Learning about customers’ past experiences, interactions and social conversations provides valuable insight. Such insight can be used to improve products, customer-facing processes, and customer experience. You may take these insights as an input to design better, more efficient and customer-friendly sales, contact center or self-service processes. If customer experience is important for your business, make sure you have incorporated BPM as a part of your strategy to design, orchestrate and improve your customer-facing processes.

    Read the article

  • Profiling Startup Of VS2012 &ndash; YourKit Profiler

    - by Alois Kraus
    The YourKit (v7.0.5) profiler is interesting in terms of price (79€ single-place license, 409€ + 1 year support and upgrades) and feature set. You do get a performance and memory profiler in one package, for which you normally need to pay extra with the other vendors. As an interesting side note, the profiler UI is written in Java because they also sell Java profilers with the same feature set. To get all methods of a VS startup you first need to configure it to include System* in the profiled methods, and you need to configure * to measure wall clock time. By default it records only CPU times, which allows you to optimize CPU-hungry operations. But you will never see a Thread.Sleep(10000) blocking the UI in the profiler in this mode. Like the others, it can profile processes started from within the profiler, but it can also profile the next started process or all started processes. As usual it can profile in sampling and tracing mode. But since it is a memory profiler as well, it does by default also record all object allocations > 1MB. With allocation recording enabled VS2012 did crash, but without allocation recording there were no problems.
    The CPU tab contains the time line of the application, and when you click in the graph you see the call stacks of all threads at that time. This is really a nice feature. When you select a time region you see the CPU usage estimation for this time window. I have seen many applications consuming 100% CPU only because they created garbage like crazy. For this, the Garbage Collection tab is interesting in conjunction with a time range. This view is like the CPU tab, only the CPU graph (green) is missing. All relevant information except for GCs/s is already visible in the CPU tab. Very handy to pinpoint excessive GC or CPU-bound issues. The Threads tab shows the thread names and their lifetime. This is useful to see thread interactions or which thread is hottest in terms of CPU consumption. On the CPU tab the call tree exists in a merged and a thread-specific view. When you click on a method you get below it a list of all called methods. There you can sort for methods with a high own time, which are worth optimizing. In the Method List you can select which scope you want to see. Back Traces are the methods which did call you. Callees is the list of methods called directly or indirectly by your method as a flat list. This is not a call stack, but it is still very useful to see which methods were slow, so you can find the "root" cause quite quickly without the need to click through long call stacks. The last view, Merged Callees, is a call-stacked view of the previous one. This does help a lot to understand who did call each method at run time. You would get the same view with a debugger for one call invocation, but here you get the full statistics (invocation count) as well. Since YourKit is also a memory profiler, you can directly see which objects you have on your managed heap and which objects hold most of your precious memory. In the Object Explorer view you can also examine the contents of your objects (strings or whatsoever) to get a better understanding of which objects were potentially allocating this stuff.
    YourKit is a very easy to use combined memory and performance profiler in one product. The unbeatable single license price makes it very attractive to simply buy it. Although it has a Java UI it is very responsive, and the memory consumption is considerably lower compared to dotTrace and ANTS profiler. 
    What I really like is to start the YourKit UI and then start the processes I want to profile as usual. There is no need to alter your own application code to be able to inject a profiler into your newly started processes. For performance and memory profiling you can simply select the process you want to investigate from the list of started processes. That's the way I like to use profilers: just get out of the way and let the application run without any special preparations.   Next: Telerik JustTrace
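    To make the wall-clock point concrete, here is a minimal sketch of my own (not taken from the article) of a method that a CPU-time-only profile would report as nearly free, even though it blocks the calling UI thread for ten seconds:

        using System;
        using System.Threading;

        class StartupWork
        {
            // Burns almost no CPU, so a CPU-time profile barely shows it,
            // yet the calling (UI) thread is blocked for ten seconds.
            public static void LoadSettings()
            {
                Thread.Sleep(10000);        // e.g. waiting on a slow network share

                for (int i = 0; i < 1000; i++)
                {
                    Math.Sqrt(i);           // the actual CPU work is trivial by comparison
                }
            }
        }

    With wall clock time configured as described above, LoadSettings shows up with roughly ten seconds of own time; with the default CPU-time recording it all but disappears.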

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF. After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, so that one that has moved between parks is treated as a single coaster with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name. In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORMs. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework. Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data) and splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there. The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time-consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though.
When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities because EF doesn't know what you're changing. Not really a big deal. There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
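    To give a flavor of the "one line of code" reads described above, here is a hedged sketch of what a code-first entity, context and repository might look like; the class and property names are my own invention, not the actual CoasterBuzz schema:

        using System.Data.Entity;
        using System.Linq;

        public class Park
        {
            public int ParkId { get; set; }
            public string Name { get; set; }
        }

        public class Coaster
        {
            public int CoasterId { get; set; }
            public string Name { get; set; }

            // Navigation property; EF uses it to hydrate the related park.
            public virtual Park Park { get; set; }
        }

        public class CoasterContext : DbContext
        {
            public DbSet<Coaster> Coasters { get; set; }
            public DbSet<Park> Parks { get; set; }
        }

        public class CoasterRepository
        {
            // A single statement hands the view a rich object graph.
            public Coaster Get(int coasterId)
            {
                using (var db = new CoasterContext())
                {
                    return db.Coasters.Include("Park")
                                      .SingleOrDefault(c => c.CoasterId == coasterId);
                }
            }
        }

    In a real project the context would also carry the table-name and table-splitting configuration the post mentions; this sketch leaves EF's default conventions in place.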

    Read the article

  • What's My Problem? What's Your Problem?

    - by Jacek Ziabicki
    Software installers are not made for building demo environments. I can say this much after 12 years (on and off) of supporting my fellow sales consultants with environments for software demonstrations. When we release software, we include installation programs and procedures that are designed for use by our clients – to build a production environment and a limited number of testing, training and development environments. Different Objectives Your priorities when building an environment for client use vs. building a demo environment are very different. In a production environment, security, stability, and performance concerns are paramount. These environments are built on a specific server and rarely, if ever, moved to a different server or different network address. There is typically just one application running on a particular server (physical or virtual). Once built, the environment will be used for months or years at a time. Because of security considerations, the installation program wants to make these environments very specific to the organization using the software and the use case, encoding a fully qualified name of the server, or even the IP address on the network, in the configuration. So you either go through the installation procedure for each environment, or learn how to clone and reconfigure the software as a separate instance to build all your non-production environments. This may not matter much if the installation is as simple as clicking on the Setup program. But for enterprise applications, you have a number of configuration settings that you need to get just right – so whether you are installing from scratch or reconfiguring an existing installation, this requires both time and expertise in the particular piece of software. If you need a setup of several applications that are integrated to talk to one another, it is a whole new level of complexity. Now you need the expertise in all of the applications involved (plus the supporting technology products), and in addition to making each application work, you also have to configure the integration endpoints. Each application needs the URLs and credentials to call the integration layer, and the integration must be able to call each application. Then you have to make sure that each app has the right data so a business process initiated in one application can continue in the next. And, you will need to check that each application has the correct version and patch level for the integration to work. When building demo environments, your #1 concern is agility. If you can get away with a small number of long-running environments, you are lucky. More likely, you may get a request for a dedicated environment for a demonstration that is two weeks away: how quickly can you make this available so we still have the time to build the client-specific data? We are running a hands-on workshop next month, and we’ll need 15 instances of application X environment so each student can have a separate server for the exercises. We cannot connect to our data center from the client site, the client’s security policy won’t allow our VPN to go through – so we need a portable environment that we can bring with us. Our consultants need to be able to work at the hotel, airport, and the airplane, so we really want an environment that can run on a laptop. The client will need two playpen environments running in the cloud, accessible from their network, for a series of workshops that start two weeks from now. 
    We have seen all of these scenarios and more. Here you would be much better served by a generic installation that would be easy to clone.
    Welcome to the Wonder Machine
    The reason I started this blog is to share a particular design of a demo environment, a special way to install software, that can address the above requirements, even for integrated setups. This design was created by a team at Oracle Utilities Global Business Unit, and we are using this setup for most of our demo environments. In a bout of modesty we called it the Wonder Machine. Over the next few posts – think of it as a novel in parts – I will tell you about the big idea, how it was implemented and what you can do with it. After we have laid down the groundwork, I would like to share some tips and tricks for users of our Wonder Machine implementation, as well as things I am learning about building portable, cloneable environments. The Wonder Machine is by no means a closed specification; it is under active development! I am hoping this blog will be of interest to two groups of readers – the users of the Wonder Machine we have built at Oracle Utilities, who want to get the most out of their demo environments and be able to reconfigure them to their needs – and to people who need to build environments for demonstration, testing, training, development and would like to make them cloneable and portable to maximize the reuse of their effort. Surely we are not the only ones facing this problem? If you can think of a better way to solve it, or if you can help us improve on our concept, I will appreciate your comments!

    Read the article

  • Right-Time Retail Part 2

    - by David Dorf
    This is part two of the three-part series.
    Right-Time Integration
    Of course these real-time enabling technologies are only as good as the systems that utilize them, and it only takes one bottleneck to slow everyone else down. What good is an immediate stock-out notification if the supply chain can't react until tomorrow? Since being formed in 2006, Oracle Retail has been not only adding more integrations between systems, but also modernizing integrations for appropriate speed. Notice I tossed in the word "appropriate." Not everything needs to be real-time – again, we're talking about Right-Time Retail. The speed of data capture, analysis, and execution must be synchronized or you're wasting effort. Unfortunately, there isn't an enterprise-wide dial that you can crank up for your estate. You'll need to improve things piecemeal, with people and processes as limiting factors, while choosing the appropriate types of integrations. There are three integration styles we see in the retail industry. First is batch. I know, the word "batch" just sounds slow, but this pattern is less about velocity and more about volume. When there are large amounts of data to be moved, you'll want to use batch processes. Our technology of choice here is Oracle Data Integrator (ODI), which provides a fast version of Extract-Transform-Load (ETL): instead of the three-step process, the load and transform steps are combined to save time. ODI is a key technology for moving data into Retail Analytics where we can apply science. Performing analytics on each sale as it occurs doesn't make any sense, so we batch up a statistically significant amount and submit it all at once. The second style is fire-and-forget. For some types of data, we want the data to arrive ASAP but immediacy is not strictly necessary. Speed is less important than guaranteed delivery, so we use message-oriented middleware available in both WebLogic and the Oracle database. For example, Point-of-Service transactions are queued for delivery to Central Office at corporate. If the network is offline, those transactions remain in the queue and will be delivered when the network returns. Transactions cannot be lost and they must be delivered in order. (Ever tried processing a return before the sale?) To enhance the standard queues, we offer the Retail Integration Bus (RIB) to help with the management and monitoring of fire-and-forget messaging in the enterprise. The third style is request-response and is most commonly implemented as Web services. This is a synchronous message where the sender waits for a response. In this situation, the volume of data is small, guaranteed delivery is not necessary, but speed is very important.
Examples include the website checking inventory, a price lookup, or processing a credit card authorization. The Oracle Service Bus (OSB) typically handles the routing of such messages, and we’ve enhanced its abilities with the Retail Service Backbone (RSB). To better understand these integration patterns and where they apply within the retail enterprise, we’re providing the Retail Reference Library (RRL) at no charge to Oracle Retail customers. The library is composed of a large number of industry business processes, including those necessary to support Commerce Anywhere, as well as detailed architectural diagrams. These diagrams allow implementers to understand the systems involved in integrations and the specific data payloads. Furthermore, with our upcoming release we’ll be providing a new tool called the Retail Integration Console (RIC) that allows IT to monitor and manage integrations from a single point. Using RIC, retailers can quickly discern where integration activity is occurring, volume statistics, average response times, and errors. The dashboards provide the ability to dive down into the architecture documentation to gather information all the way down to the specific payload. Retailers that want real-time integrations will also need real-time monitoring of those integrations to ensure service-level agreements are maintained. Part 3 looks at marketing.
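    As a rough, vendor-neutral sketch of the difference between the fire-and-forget and request-response styles described above (this is plain C#, not the RIB, RSB or OSB APIs):

        using System.Collections.Concurrent;

        public interface IInventoryService
        {
            int GetAvailableQuantity(string sku);
        }

        public class IntegrationStyles
        {
            // Fire-and-forget: the sender enqueues the transaction and moves on;
            // the queue, not the caller, is responsible for ordered, guaranteed
            // delivery, even if the network happens to be down right now.
            private readonly ConcurrentQueue<string> outbox = new ConcurrentQueue<string>();

            public void SendPosTransaction(string transaction)
            {
                outbox.Enqueue(transaction); // drained later by a delivery process
            }

            // Request-response: the caller waits for the answer, so speed matters
            // more than guaranteed delivery.
            public int CheckInventory(IInventoryService service, string sku)
            {
                return service.GetAvailableQuantity(sku);
            }
        }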

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... which is still not very helpful. In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. My shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but still not very meaningful. By chance I was able to get away from that group quickly, in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that performance improvement with Coherence was about response time. That seemed legitimate at the time, because after all caches help to reduce latency on cached data access, and hence reduce the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then? The key mistake was my perception, or obsession, about how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally presented by its Capacity (or Throughput), with the two important dimensions Speed (response time) and Volume (load): Capacity (TPS) = Volume (T) / Speed (S). To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume... The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever because... the Speed can be influenced by the Volume. For any system, the performance curve generally looks the same: in a traditional system, increasing the Volume (Transactions) will also increase the Speed (Response Time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of 100 entries.
    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independent of the amount of runtime data being manipulated. It is not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the distributed cache, because it can guarantee:
    - constant data object access time, independent of the number of objects and the Coherence cluster size
    - data object distribution by affinity for in-memory data grouping
    - in-place data processing for parallel execution
    To summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping the Speed consistent. In the future I will keep adding new blog entries around this topic, with some sample code and experience sharing that I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, then start playing with the product through our tutorial. Have fun!
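    A small, language-neutral sketch of the coding point above (my own illustration, not Coherence code): linear work scales with the Volume, while keyed access stays roughly constant, which is the behavior XTP designs aim for:

        using System.Collections.Generic;

        class VolumeVersusSpeed
        {
            // Linear work: doubling the number of entries roughly doubles the
            // response time - the while-loop case from the post.
            static decimal TotalByScan(List<decimal> orders)
            {
                decimal total = 0;
                foreach (decimal amount in orders)
                {
                    total += amount;
                }
                return total;
            }

            // Keyed access: the cost stays roughly constant however many entries
            // are held, which is the volume-independent access a partitioned
            // in-memory cache aims to guarantee.
            static decimal AmountByKey(Dictionary<string, decimal> ordersById, string orderId)
            {
                return ordersById[orderId];
            }
        }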

    Read the article

  • "Guiding" a Domain Expert to Retire from Programming

    - by James Kolpack
    I've got a friend who does IT at a local non-profit where they're using a custom web application which is no longer supported by the company who built it (out of business, support was too expensive, I'm not sure...). Development on this app started around 10+ years ago, so the technologies being harnessed are pretty out of date now - classic ASP using VBScript and SQL Server 2000. The application domain is in the realm of government bookkeeping - so even though the development team is long gone, there are often new requirements for this software. Enter the domain expert. This is a middle-aged accounting whiz without much (or any?) prior development experience. He studied the pages, code and queries and learned how to ape the style of the original team which, believe me, is mediocre at best. He's very clever and very tenacious but has no experience in software beyond what he's picked up from this app. Otherwise, he's a pleasant guy to talk to and definitely knows his domain. My friend in IT, and probably his superiors in the company, want him out of the code. They view him as wasting his expertise on coding tasks he shouldn't be doing. My friend got me involved with a few small contracts which I handled without much problem - other than somewhat of a communication barrier with the domain expert. He explained the requirements very quickly, assuming prior knowledge of the domain which I do not have. This is partially his normal style, and I think maybe partially a bit of resentment of my involvement. So, I think he feels like the owner of the code and has entrenched himself in a development position. So... his coding technique. One of his latest endeavors was to make a page that only he could reach (theoretically - the security model for the system is wretched) where he can enter a raw SQL query, run it, and save the query to run again later. A report that I worked on had been originally implemented by him using 6 distinct queries, 3 or 4 temp tables to coordinate the data between the queries, and the final result obtained by importing the data from the final query into Access and doing a pivot and some formatting. It worked - well, some of the results were incorrect - but at what a cost! (I implemented the report in a single query with at least 1/10th the amount of code.) He edits code in Notepad. He doesn't seem to know about online reference material for the languages. I recently read an article on Dr. Dobb's titled "What Makes Bad Programmers Different" - and instantly thought of our domain expert. From the article: Their code is large, messy, and bug laden. They have very superficial knowledge of their problem domain and their tools. Their code has a lot of copy/paste and they have very little interest in techniques that reduce it. They fail to account for edge cases, while inefficiently dealing with the general case. They never have time to comment their code or break it into smaller pieces. Empirical evidence plays little role in their decisions. 5.5 out of 6. My friend is wanting me to argue the case to their management - specifically, I got this email from their manager to respond to: ...Also, I need to talk to you about what effect there is from Domain Expert continuing to make edits to the live environment. If that is a problem for you I need to know so I can have his access blocked. Some examples would help. In my opinion, from a technical standpoint, it's dangerous to have him making changes without any oversight.
    On the other hand, I'm just doing one-off contracts at this point and don't have much desire to get involved deeply enough that I'm essentially arguing as one of the Bobs from Office Space. I'd like to help my friend out - but I feel like I'm getting in the middle of a political battle. More importantly - if I do get involved and suggest that his editing privileges be removed, it needs to be handled carefully so that he doesn't feel belittled. He is beyond a doubt the foremost expert on this system. I'm hoping this is familiar territory for some other Stack Exchangers, because I'm feeling a little bewildered. How should I respond? Should I argue that he shouldn't be allowed to touch the code? Should I phrase it as "no single developer, no matter how experienced, should be working on production code unchecked"? Should I argue to keep him involved with the code, but with a review process? Should I say "glad I could help, but uh, I'm busy now!"? Other options? Thanks a bunch!

    Read the article

  • The Best BPM Journey: More Exciting Destinations with Process Accelerators

    - by Cesare Rotundo
    Oracle Open World (OOW) earlier this month was a great occasion to talk with our BPM customers. It was interesting to hear definite patterns emerging from those conversations: "BPM is a journey", "experiences to share", "our organization now understands what BPM is", and my favorite (with some caveats): "BPM is like wine tasting, once you start, you want to try more". These customers have started their journey, climbed up the learning curve, and reached a vantage point that allows them to see their next BPM destination. They see the next few processes they are going to tackle and improve with BPM. These processes/destinations target both horizontal processes where BPM replaces or coordinates manual activities, and critical industry processes that the company needs to improve to compete and deliver increasing value. Each new destination generates value, allowing the organization to reduce the cost of manual processes that were not supported by apps/custom development, and increase efficiency of end-to-end processes partially covered by apps/custom dev. The question we wanted to answer is how to help organizations experience deeper success with BPM, by increasing their awareness of the potential for reaching new targets, and equipping them with the right tools. We decided that we needed to identify destinations, and plot routes to show the fastest path to those destinations. In the end we want to enable customers to reach "Process Excellence": continuously set new targets and consistently and efficiently reach them. The result is Oracle Process Accelerators (PA), solutions built using the rich functionality in Oracle BPM Suite. PAs offer a rapidly expanding list of exciting destinations. Our launch of the latest installment of Process Accelerators at Oracle Open World includes new industry-focused solutions such as Public Sector Incident Reporting and Financial Services Loan Origination, and improvements to other horizontal PAs, including Travel Request Management, Document Routing and Approval, and Internal Service Requests. Just before OOW we had extended the Oracle deployment of Travel Request Management, riding the enthusiastic response from early adopters among travelers (employees), management and support (approvers). "Getting there first" means being among the first to extract value from the PA approach, while acquiring deeper insights into the customers' perspective. This is especially noteworthy when it comes to PAs, a set of solutions designed to be quickly deployed and iteratively improved by customers. The OOW launch has generated immediate feedback from customers, non-customers, analysts, and partners.
    They all confirmed that both Business and IT at organizations benefit from PAs when it comes to exploring the potential for BPM to improve their business processes. PAs help customers visualize what can be done with BPM, and PAs are made to be extended: you can see your destination, change the path to fit your needs, and deploy. We're discovering new destinations/processes that the market wants us to support, generic enough across industries and within industries. We'll keep building sets of requirements, delivering functional designs, constructing solutions using Oracle BPM, and testing them not only functionally but also for performance, scalability and clustering, making them robust and product-quality. Delivering BPM solutions with product-grade quality is the equivalent of following a tried-and-tested path on a map. Do you know of existing destinations in your industry? If yes, we can draw a path to innovative processes together.

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different from on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles? A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications. So the first thing to define, of course, is the word "better", in the context of application development. Looking at a few definitions online, better means "superior quality". As it relates to this discussion then, I stipulate that cloud computing can yield higher quality applications in terms of scalability, everything else being equal. Before going further I also need to outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don't mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices. For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing on the next request (like SQL Server or other database systems for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NoSQL implementation in the Microsoft cloud called Azure Tables. Now, back to cloud computing… Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it's not automatic. You can design a tightly-coupled TCP service as discussed above, and as you can imagine, it probably won't scale even if you place the service in the cloud because it isn't using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn't designed to scale for the purpose of this discussion].
    The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn't tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough). The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don't need to scale, then you don't need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly-coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different from a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better… … the cloud forces better design practices. My 2 cents.
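    As a hedged sketch of the loosely coupled, stateless style argued for above (ASP.NET Web API is used here only as one convenient host; the names and values are illustrative):

        using System.Web.Http;

        // Illustrative backing store; in a real system this would be a shared
        // data service such as a database or a NoSQL table store.
        public static class InventoryStore
        {
            public static int GetQuantity(string sku)
            {
                return 42; // placeholder value for the sketch
            }
        }

        // A stateless, request-scoped service: no session is kept between calls,
        // so any instance behind a load balancer can answer the next request.
        public class InventoryController : ApiController
        {
            // GET api/inventory?sku=ABC123 (with the default route)
            public int Get(string sku)
            {
                // Look the value up per request and release resources immediately;
                // nothing ties this client to this particular server instance.
                return InventoryStore.GetQuantity(sku);
            }
        }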

    Read the article

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua. There's a growing gap between 20th century marketing and a modern marketing way of doing business. I can't think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13: customer obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In the weeks prior, I heard about the mystique but didn't know what to expect. What I've come to understand with more clarity is that everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind: 1. Targeting: Really Know Your Buyer; 2. Engagement: Create a 1:1 Relationship; 3. Conversion: Visualize Guided Thinking; 4. Analysis: Learn What's Working; 5. Marketing Technology: Enable and Extend the Cloud. Product News from Eloqua Experience 2013: We made some announcements that John Stetic, VP of Products, Oracle Eloqua covers in this brief 'Modern Marketing Minute' video recorded after Wednesday's keynote; they are summarized below, too. Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers' wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only the key accounts or prospects you want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities. Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers now can take data that's not even stored in Eloqua to help target and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one detailed view of the prospect, extending views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua. Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar and simplify resource management. Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions. Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts, easily accessible in the Topliners Community. Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) delivers a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile. Engage Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks.
    Biggest and Best Eloqua Experience. There's a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission of delivering the most advanced and integrated modern marketing technology on the planet. It's not just a concept but a reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year's Eloqua Experience exceptional, including Steve Woods' presentation about the journey of modern marketers and Andrea Ward's conversation with Vince Gilligan, creator of the Breaking Bad television series. The 2013 Markie Awards: The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember, with so many committed and talented people working to create an extraordinary experience! To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience. More on Erick: 20 years of experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • UK OUG Conference Highlights and Insights

    - by Richard Bingham
    As per my preemptive post, this was the first time the annual conference organized by the UK Oracle User Group (UKOUG) was split into two events, one for Oracle Applications and another in December for Oracle Technology. Apps13, as it was branded, was hailed as a success, with over 1000 registered attendees and three days of sessions, exhibition, round-tables and many other types of content. As the poster on their stand illustrated, the UKOUG is a strong community with popular participants from both big and small Oracle partners and customers. The venue was a more intimate setting than in previous years, too, allowing everyone to casually bump into those they hoped to. It gave a real feeling of an Apps Community. The main themes over the days were CRM and Customer Experience, HCM, and FIN/SCM. This allowed people to attend just one focused day if they wanted. In addition the Apps Transformation stream ran across all three days, offering insights, advice, and details on the newer product solutions like Fusion Applications. Here are some of the key take-aways I got from the conference, specific to my role in Fusion Applications Developer Relations: User Experience continues to be a significant reason for adopting some of the newer application products available, with immediately obvious gains in user productivity and satisfaction reported by customers. And it doesn't stop with the baked-in UX either, with their Design Patterns proving popular and currently being extended to include things like extending on ADF Mobile and customizing the Simplified UI. More on this to come from us soon. The executive sessions emphasized the "it's a journey" phrase, illustrating that modern business applications are powered by technologies such as Cloud, Mobile, Social and Big Data, and these can be harnessed to help propel your organization forward. Indeed the emphasis is away from the traditional vendor-prescribed linear applications road map, and towards plotting a course based on business priorities supported by a broad range of integrated solutions. To help with this, several conference sessions demoed the new "Applications Navigator" tool, developed in partnership with OUG members, which offers a visual framework to help organizations plan their Oracle Applications investments around business and technology imperatives. Initial reaction was positive, especially as customers do not need to decipher Oracle's huge product catalog and the tool embeds the best blend of proven and integrated applications solutions. We'll share more on this when it is generally available. Several sessions focused on explanations and interpretation of Oracle OpenWorld 2013, helping highlight the key Oracle Applications messages and directions. With a relatively small percentage of conference attendees also at OpenWorld (from a show of hands) this was a popular way to distill the information available down into specific items of interest for the community. Please note the original OpenWorld 2013 content is still available for download but will not remain available forever (via the Oracle website OpenWorld Content Catalog > pick a session > see the PDF download). With the release of E-Business Suite 12.2 the move to develop and deploy on the Fusion Middleware stack becomes a reality for many Oracle Applications customers.
    This, coupled with recent E-Business Suite features such as the Integrated SOA Gateway and the E-Business Suite SDK for Java, illustrates how the gap between the technologies and techniques involved in extending E-Business Suite and Fusion Applications is quickly narrowing. We'll see this merging continue to evolve going forward. Getting started with Oracle Cloud Applications is actually easier than many customers expected, with a broad selection of both large and medium sized organizations explaining how they added new features to their existing Oracle Applications portfolios. New functionality available from Fusion HCM and CX provides popular extensions that do not have to disrupt those core business services. Coexistence is the buzzword here, and the available integration is also simpler than many expected, commonly involving an initial setup data load, then regular incremental synchronizations, often without a need for real-time constant communication between systems. With much of this pre-built already, the implementation process is also quite rapid. With most people dressed in suits, we wanted to get the conversations going without the traditional English reserve, so we decided to make ourselves a bit more obvious, as the photo below shows. This seemed to be quite successful and helped those interested identify and approach us. Keep a lookout for similar again. In fact if you're in the UK there is an "Apps Transformation Day" planned by the UKOUG for the 19th March 2014, with more details to follow. Again something we'll be sure to participate in. I am hoping to attend the next half of the UKOUG annual conference, Tech13, which focuses more on Oracle technology and where there is likely to be a larger attendance of those interested in the lower-level aspects of applications customization and development. If you're going, let me know and maybe we can meet up.

    Read the article

  • No More NCrunch For Me

    - by Steve Wilkes
    When I opened up Visual Studio this morning, I was greeted with a little popup reminding me that my time with the free version of NCrunch is nearly up. NCrunch is a Visual Studio add-in which runs your tests while you work so you know if and when you've broken anything, as well as providing coverage indicators in the IDE and coverage metrics on demand. It recently went commercial (which I thought was fair enough), and time is running out for the free version I've been using for the last couple of months. From my experiences using NCrunch I'm going to let it expire, and go about my business without it. Here's why. Before I start, let me say that I think NCrunch is a good product, which is to say it's had a positive impact on my programming. I've used it to help test-drive a library I'm making right from the start of the project, and especially at the beginning it was very useful to have it run all my tests whenever I made a change. The first problem is that while that was cool to start with, it's recently become a bit of a chore.
    Problems Running Tests
    NCrunch has two 'engine modes' in which it can run tests for you - it can run all your tests when you make a change, or it can figure out which tests were impacted and only run those. Unfortunately, it became clear pretty early on that that second option (which is marked as 'experimental') wasn't really working for me, so I had to have it run everything. With a smallish number of tests and while I was adding new features that was great, but I've now got 445 tests (still not exactly loads) and am more in a 'clean and tidy' mode where I know that a change I'm making will probably only affect a particular subset of the tests. With that in mind it's a bit of a drag sitting there after I make a change and having to wait for NCrunch to run everything. I could disable it and manually run the tests I know are impacted, but then what's the point of having NCrunch? If the 'impacted only' engine mode worked well this problem would go away, but that's not what I found. Secondly, NCrunch queues duplicate tests: I've got 445 tests, yet I've seen it queue 455 tests to run - 10 duplicates in that quickly-screenshotted case, and I've seen the total queue get up over 600. If I'm already itchy waiting for it to run all my tests against a change I know only affects a few, I'm even itchier waiting for it to run a lot of them twice.
    Problems With Code Coverage
    NCrunch marks each line of code with a dot to say if it's covered by tests - a black dot says the line isn't covered, a red dot says it's covered but at least one of the covering tests is failing, and a green dot means all the covering tests pass. It also calculates coverage statistics for you. Unfortunately, there are a couple of flaws in the coverage. Firstly, it doesn't support ExcludeFromCodeCoverage attributes. This feature has been requested and I expect it will be included in a later release, but right now it isn't there. So a member marked with that attribute is still counted as non-covered code, and drags your coverage statistics down. Hmph. As well as that, coverage of certain types of code is missed entirely - code which is definitely covered, and I am 100% absolutely certain it is, by several tests, yet NCrunch doesn't pick it up, and down go my coverage statistics. I've had NCrunch find genuinely uncovered code which I've been able to remove, and that's great, but what's the coverage percentage on this project? Umm... I don't know.
    Conclusion
    None of these are major, tool-crippling problems, and I expect NCrunch to get much better in future releases.
    The current version has some great features - for example, a line of code with a failing test covering it is flagged, and NCrunch can run that failing test and take me to that line exquisitely easily. That's awesome! I'd happily pay for a tool that can do that. But here's the thing: NCrunch (currently) costs $159 (about £100) for a personal licence and $289 (about £180) for a commercial one. I'm not sure which one I'd need, as my project is a personal one which I'm intending to open-source but I'm a professional, self-employed developer - and in any case, that seems like a lot of money for an imperfect tool. If it did everything it's advertised to do more or less perfectly I'd consider it, but it doesn't. So no more NCrunch for me.
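    For readers who have not seen the attribute mentioned above, this is roughly the kind of member the original screenshot showed (my own reconstruction, not the post's actual code): a member explicitly marked as excluded from coverage analysis, which NCrunch nevertheless counted as uncovered at the time of writing:

        using System.Diagnostics.CodeAnalysis;

        public class Settings
        {
            // Tools that honor the attribute leave this member out of the
            // coverage statistics; NCrunch (then) still counted it as uncovered.
            [ExcludeFromCodeCoverage]
            public override string ToString()
            {
                return "Settings";
            }
        }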

    Read the article

  • Key Windows Phone Development Concepts

    - by Tim Murphy
    As I am doing more development in and out of the enterprise arena for Windows Phone, I decided I would study for the 70-599 test. I generally take certification tests as a way to force myself to dig deeper into a technology. Between the development and the studying I decided it would be good to put together a post on key development features of the Windows Phone 7 environment. Contrary to popular belief, the launch of Windows Phone 8 will not obsolete Windows Phone 7 development. With the launch of 7.8 coming shortly and people remaining on 7.x for the foreseeable future, there are still consumers needing these apps, so don't throw out the baby with the bath water.
    PhoneApplicationService
    This is a class that every Windows Phone developer needs to become familiar with. When it comes to application state this is your go-to repository. It also contains events that help with management of your application's lifecycle. You can access it like the following code sample.

        PhoneApplicationService.Current.State["ValidUser"] = userResult;

    DeviceNetworkInformation
    This class allows you to determine the connectivity of the device and be notified when something changes with that connectivity. If you are making web service calls you will want to check here before firing off. I have found that this class doesn't actually work very well for determining whether you have internet access. You are better off using the following code, where IsConnectedToInternet is an App-level property.

        private void Application_Launching(object sender, LaunchingEventArgs e)
        {
            // Validate user access
            if (Microsoft.Phone.Net.NetworkInformation.NetworkInterface.NetworkInterfaceType !=
                Microsoft.Phone.Net.NetworkInformation.NetworkInterfaceType.None)
            {
                IsConnectedToInternet = true;
            }
            else
            {
                IsConnectedToInternet = false;
            }

            NetworkChange.NetworkAddressChanged +=
                new NetworkAddressChangedEventHandler(NetworkChange_NetworkAddressChanged);
        }

        void NetworkChange_NetworkAddressChanged(object sender, EventArgs e)
        {
            IsConnectedToInternet =
                (Microsoft.Phone.Net.NetworkInformation.NetworkInterface.NetworkInterfaceType !=
                 Microsoft.Phone.Net.NetworkInformation.NetworkInterfaceType.None);
        }

    Push Notification
    Push notification allows your application to receive notifications in a way that reduces the application's power needs. This MSDN article is a good place to get the basics of push notification. There are three types of push notification: toast, Tile and raw. The first two work regardless of the state of the application, whereas raw messages are discarded if your application is not running.
    Live Tiles
    Live tiles are one of the main differentiators of the Windows Phone platform. They allow users to find information at a glance from their start screen without navigating into individual apps. Knowing how to implement them can be a great boost to the attractiveness of your application. The simplest step-by-step explanation for creating live tiles is here.
    Local Database
    While your application really only has Isolated Storage as a data store, there are some ways of giving you database functionality to develop against. There are a number of open source ORM-style solutions. Probably the best and most native way I have found is to use LINQ to SQL. It does take a significant amount of setup, but the ease of use once it is configured is worth the cost. Rather than repeat the full concepts here I will point you to a post that I wrote previously.
    Tasks (Bing, Email)
    Leveraging built-in features of the Windows Phone platform is an easy way to add functionality that would be expensive to develop on your own. The classes that you need to make yourself familiar with are BingMapsDirectionsTask and EmailComposeTask. These will allow your application to supply directions and give the user an email path to relay information to friends and associates.
    Event model
    The ability for users to quickly switch to other apps or to the home screen is just one reason why knowing the Windows Phone event model is important. You need to be able to save data so that if a user gets a phone call they can come back to exactly where they were in your application. This means that you will need to handle events such as Launching, Activated, Deactivated and Closing at the application level. You will probably also want to get familiar with the OnNavigatedTo and OnNavigatedFrom events at the page level. These will give you an opportunity to save data as a user navigates through your app.
    Summary
    This is just a small portion of the concepts that you will use while building Windows Phone apps, but these are some of the most critical. With the launch of Windows Phone 8 this list will probably expand. Take the time to investigate these topics further and try them out in your apps. del.icio.us Tags: Windows Phone 7,Windows Phone,WP7,Software Development,70-599
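    As a small illustration of the tasks mentioned above, here is a hedged sketch of launching the built-in email composer from a Windows Phone 7 app (the recipient and text are made up):

        using Microsoft.Phone.Tasks;

        public class ShareHelper
        {
            // Launches the phone's built-in email composer; the user presses Send,
            // so the app never needs direct access to the mail account.
            public void EmailDirections(string friendAddress)
            {
                var emailTask = new EmailComposeTask
                {
                    To = friendAddress,                    // illustrative value
                    Subject = "Directions for tonight",
                    Body = "Here is the route we talked about."
                };
                emailTask.Show();
            }
        }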

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic runs solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice it does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here. A few challenges arose during design and implementation:
    - Naive algorithms had an unreasonable upper bound of computational cost.
    - There was significant complexity associated with configurations where the member count varied significantly between physical machines.
    - Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).
    A custom PAS may need to solve several problems simultaneously, such as:
    - Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions)
    - Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded)
    - Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue)
    - Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions)
    - Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure)
    - Achieving the above goals while ensuring that partition movement is minimized.
    These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the best-scoring one. This would result in N! (factorial) evaluations of the scoring function, which is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key takeaway is that such algorithms will tend either to have exorbitant worst-case performance or to fail to find optimal solutions (or both) -- it is very important to be able to show that worst-case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations (one such heuristic is sketched below). Unfortunately, it can be very difficult to design these more focused algorithms.

In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts, supported exactly two backup copies, and accepted that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
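For contrast with the factorial brute-force search, the following is a hedged sketch of the kind of domain-specific shortcut mentioned above: a greedy, linear-time heuristic that assigns each primary to the least-loaded member and each backup to the least-loaded member on a different machine. It is illustrative only (it assumes a single backup copy and at least two machines) and, again, does not use the actual Coherence PAS API.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeMap;

    // Greedy heuristic sketch: O(partitions x members) work instead of N! evaluations.
    public class GreedyAssigner {

        public static void main(String[] args) {
            // memberId -> machineId: an irregular topology (three members on machine 0, one on machine 1)
            Map<Integer, Integer> memberToMachine = Map.of(0, 0, 1, 0, 2, 0, 3, 1);
            int partitionCount = 8;

            // TreeMap so ties are broken deterministically by lowest member id.
            Map<Integer, Integer> load = new TreeMap<>();
            memberToMachine.keySet().forEach(m -> load.put(m, 0));

            Map<Integer, Integer> primary = new HashMap<>();  // partition -> member
            Map<Integer, Integer> backup = new HashMap<>();   // partition -> member

            for (int p = 0; p < partitionCount; p++) {
                int primaryOwner = leastLoaded(load, null, memberToMachine);
                primary.put(p, primaryOwner);
                load.merge(primaryOwner, 1, Integer::sum);

                // The backup must land on a different physical machine than its primary.
                int backupOwner = leastLoaded(load, memberToMachine.get(primaryOwner), memberToMachine);
                backup.put(p, backupOwner);
                load.merge(backupOwner, 1, Integer::sum);
            }
            System.out.println("primary: " + primary);
            System.out.println("backup : " + backup);
        }

        // Least-loaded member, optionally excluding every member on a given machine.
        private static int leastLoaded(Map<Integer, Integer> load,
                                       Integer excludedMachine,
                                       Map<Integer, Integer> memberToMachine) {
            return load.entrySet().stream()
                    .filter(e -> excludedMachine == null
                            || !memberToMachine.get(e.getKey()).equals(excludedMachine))
                    .min(Map.Entry.comparingByValue())
                    .orElseThrow()
                    .getKey();
        }
    }

Running this sketch, the primaries spread across the three members on machine 0 while every backup is forced onto the lone member on machine 1, which therefore ends up carrying far more partitions than anyone else. That is exactly the kind of skew an irregular topology produces, and it is where a weighted scoring function, a different compromise, or a fallback to the default implementation becomes necessary.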

    Read the article

  • 2012 – The End Of The World Review

    - by Tim Murphy
    The end of the world must be coming. Not because the Mayan calendar says so, but because Microsoft is innovating more than Apple. It has been a crazy year, with pundits declaring not that the end of the world is coming, but that the end of Microsoft is coming. Let’s take a look at what 2012 has brought us.

The beginning of the year is a blur. I managed to get to TechEd in June, which was the first time I got to take a deep dive into Windows 8 and many other things that had been announced in 2011. The promise I saw in these products was really encouraging. The thought of being able to run Windows 8 from a thumb drive or have Hyper-V native to the OS told me that, at least for developers, good things were coming.

I finally got my feet wet with Windows 8 with the developer preview just prior to the RTM. While the initial experience was a bit of a culture shock, I quickly grew to love it. The media still seems to hold little love for the “reimagined” platform, but I think that once people spend some time with it they will enjoy the experience, and what the FUD mongers say will fade into the background. With the launch of the OS we finally got a look at the Surface. I think this is a bold entry into the tablet market. While I wish it were a little more affordable, I am already starting to see them in the wild being used by non-techies.

I was waiting for Windows Phone 8 at least as much as Windows 8, probably more. The new hardware, better marketing, and new OS features are, I think, finally going to push us to the point of having a real presence in the smartphone market. I am seeing a number of iPhone users picking up a Nokia Lumia 920 and getting rid of their brand new iPhone 5. The only real debacle I saw around the launch was when they held back the SDK from general developers.

Shortly after the launch events came Build 2012. I was extremely disappointed that I didn’t make it to this year’s Build. Even if they weren’t handing out Surface and Lumia devices, I think the atmosphere and content were something that really needed to be experienced in person. Hopefully there will be a Build next year and its schedule will be announced soon. As you would expect, Windows 8 and Windows Phone 8 development were the mainstay of the conference, but improvements in Azure also played a key role. This movement of services to the cloud will continue, and we need to understand where it best fits into the solutions we build.

Lower on the radar this year were Office 2013, SQL Server 2012, and Windows Server 2012. Their glory stolen by the consumer OS and hardware announcements, these new releases are no less important. Companies will see significant improvements in performance and capabilities if they upgrade. At TechEd they showed some of the new features of Windows Server 2012 around hardware integration and Hyper-V performance, which absolutely blew me away. It is our job to bring these important improvements to our company’s attention so that they can be leveraged.

Personally, the consulting business in 2012 was the busiest it has been in a long time. More companies were ready to attack new projects after several years of putting them on the back burner. I also worked to bring back momentum to the Chicago Information Technology Architects Group. Both the community and clients are excited about the new technologies that have come out in 2012, and now it is time to deliver.

What does 2013 have in store? I don’t see it being quite as exciting as 2012.
Microsoft will be releasing the Surface Pro in January, and it seems that we will see more frequent OS updates for Windows. There are rumors that we may see a Surface phone in 2013. It has also been announced that there will finally be a rework of the Xbox next fall. The new year will also be a time for us in the development community to take advantage of these new tools and devices. After all, it is what we build on top of these platforms that will attract more consumers and corporations to using them.

Just as I am 99.999% sure that the world is not going to end this year, I am also sure that Microsoft will move on and that most of this negative backlash from the media is actually fear and jealousy. In the end, I think we have a promising year ahead of us. del.icio.us Tags: Microsoft,Pundits,Mayans,Windows 8,Windows Phone 8,Surface

    Read the article
