Search Results

Search found 6868 results on 275 pages for 'voyager systems'.


  • Improving Manageability of Virtual Environments

    - by Jeff Victor
    Boot Environments for Solaris 10 Branded Zones Until recently, Solaris 10 Branded Zones on Solaris 11 suffered one notable regression: Live Upgrade did not work. The individual packaging and patching tools work correctly, but the ability to upgrade Solaris while the production workload continued running did not exist. A recent Solaris 11 SRU (Solaris 11.1 SRU 6.4) restored most of that functionality, although with a slightly different concept, different commands, and without all of the feature details. This new method gives you the ability to create and manage multiple boot environments (BEs) for a Solaris 10 Branded Zone, and modify the active or any inactive BE, and to do so while the production workload continues to run. Background In case you are new to Solaris: Solaris includes a set of features that enables you to create a bootable Solaris image, called a Boot Environment (BE). This newly created image can be modified while the original BE is still running your workload(s). There are many benefits, including improved uptime and the ability to reboot into (or downgrade to) an older BE if a newer one has a problem. In Solaris 10 this set of features was named Live Upgrade. Solaris 11 applies the same basic concepts to the new packaging system (IPS) but there isn't a specific name for the feature set. The features are simply part of IPS. Solaris 11 Boot Environments are not discussed in this blog entry. Although a Solaris 10 system can have multiple BEs, until recently a Solaris 10 Branded Zone (BZ) in a Solaris 11 system did not have this ability. This limitation was addressed recently, and that enhancement is the subject of this blog entry. This new implementation uses two concepts. The first is the use of a ZFS clone for each BE. This makes it very easy to create a BE, or many BEs. This is a distinct advantage over the Live Upgrade feature set in Solaris 10, which had a practical limitation of two BEs on a system, when using UFS. The second new concept is a very simple mechanism to indicate the BE that should be booted: a ZFS property. The new ZFS property is named com.oracle.zones.solaris10:activebe (isn't that creative? ). It's important to note that the property is inherited from the original BE's file system to any BEs you create. In other words, all BEs in one zone have the same value for that property. When the (Solaris 11) global zone boots the Solaris 10 BZ, it boots the BE that has the name that is stored in the activebe property. Here is a quick summary of the actions you can use to manage these BEs: To create a BE: Create a ZFS clone of the zone's root dataset To activate a BE: Set the ZFS property of the root dataset to indicate the BE To add a package or patch to an inactive BE: Mount the inactive BE Add packages or patches to it Unmount the inactive BE To list the available BEs: Use the "zfs list" command. To destroy a BE: Use the "zfs destroy" command. Preparation Before you can use the new features, you will need a Solaris 10 BZ on a Solaris 11 system. You can use these three steps - on a real Solaris 11.1 server or in a VirtualBox guest running Solaris 11.1 - to create a Solaris 10 BZ. The Solaris 11.1 environment must be at SRU 6.4 or newer. Create a flash archive on the Solaris 10 system s10# flarcreate -n s10-system /net/zones/archives/s10-system.flar Configure the Solaris 10 BZ on the Solaris 11 system s11# zonecfg -z s10z Use 'create' to begin configuring a new zone. 
zonecfg:s10z create -t SYSsolaris10 zonecfg:s10z set zonepath=/zones/s10z zonecfg:s10z exit s11# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - s10z configured /zones/s10z solaris10 excl Install the zone from the flash archive s11# zoneadm -z s10z install -a /net/zones/archives/s10-system.flar -p You can find more information about the migration of Solaris 10 environments to Solaris 10 Branded Zones in the documentation. The rest of this blog entry demonstrates the commands you can use to accomplish the aforementioned actions related to BEs. New features in action Note that the demonstration of the commands occurs in the Solaris 10 BZ, as indicated by the shell prompt "s10z# ". Many of these commands can be performed in the global zone instead, if you prefer. If you perform them in the global zone, you must change the ZFS file system names. Create The only complicated action is the creation of a BE. In the Solaris 10 BZ, create a new "boot environment" - a ZFS clone. You can assign any name to the final portion of the clone's name, as long as it meets the requirements for a ZFS file system name. s10z# zfs snapshot rpool/ROOT/zbe-0@snap s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE cannot mount 'rpool/ROOT/newBE' on '/': directory is not empty filesystem successfully created, but not mounted You can safely ignore that message: we already know that / is not empty! We have merely told ZFS that the default mountpoint for the clone is the root directory. List the available BEs and active BE Because each BE is represented by a clone of the rpool/ROOT dataset, listing the BEs is as simple as listing the clones. s10z# zfs list -r rpool/ROOT NAME USED AVAIL REFER MOUNTPOINT rpool/ROOT 3.55G 42.9G 31K legacy rpool/ROOT/zbe-0 1K 42.9G 3.55G / rpool/ROOT/newBE 3.55G 42.9G 3.55G / The output shows that two BEs exist. Their names are "zbe-0" and "newBE". You can tell Solaris that one particular BE should be used when the zone next boots by using a ZFS property. Its name is com.oracle.zones.solaris10:activebe. The value of that property is the name of the clone that contains the BE that should be booted. s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe zbe-0 local Change the active BE When you want to change the BE that will be booted next time, you can just change the activebe property on the rpool/ROOT dataset. s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe zbe-0 local s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe newBE local s10z# shutdown -y -g0 -i6 After the zone has rebooted: s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT rpool/ROOT com.oracle.zones.solaris10:activebe newBE local s10z# zfs mount rpool/ROOT/newBE / rpool/export /export rpool/export/home /export/home rpool /rpool Mount the original BE to see that it's still there. s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0 s10z# ls /mnt Desktop export platform Documents export.backup.20130607T214951Z proc S10Flar home rpool TT_DB kernel sbin bin lib system boot lost+found tmp cdrom mnt usr dev net var etc opt Patch an inactive BE At this point, you can modify the original BE. 
If you would prefer to modify the new BE, you can restore the original value to the activebe property and reboot, and then mount the new BE to /mnt (or another empty directory) and modify it. Let's mount the original BE so we can modify it. (The first command is only needed if you haven't already mounted that BE.) s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0 s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02 Note that the typical usage will be: Create a BE Mount the new (inactive) BE Use the package and patch tools to update the new BE Unmount the new BE Reboot Delete an inactive BE ZFS clones are children of their parent file systems. In order to destroy the parent, you must first "promote" the child. This reverses the parent-child relationship. (For more information on this, see the documentation.) The original rpool/ROOT file system is the parent of the clones that you create as BEs. In order to destroy an earlier BE that is that parent of other BEs, you must first promote one of the child BEs to be the ZFS parent. Only then can you destroy the original BE. Fortunately, this is easier to do than to explain: s10z# zfs promote rpool/ROOT/newBE s10z# zfs destroy rpool/ROOT/zbe-0 s10z# zfs list -r rpool/ROOT NAME USED AVAIL REFER MOUNTPOINT rpool/ROOT 3.56G 269G 31K legacy rpool/ROOT/newBE 3.56G 269G 3.55G / Documentation This feature is so new, it is not yet described in the Solaris 11 documentation. However, MOS note 1558773.1 offers some details. Conclusion With this new feature, you can add and patch packages to boot environments of a Solaris 10 Branded Zone. This ability improves the manageability of these zones, and makes their use more practical. It also means that you can use the existing P2V tools with earlier Solaris 10 updates, and modify the environments after they become Solaris 10 Branded Zones.
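To tie the walkthrough together, here is the boot environment lifecycle as a single script. This is a rough sketch only, not a supported procedure: it simply strings together the commands shown above, is intended to run inside the zone, and reuses the example BE name (newBE) and patch ID (104945-02) from the post.

```sh
#!/bin/sh
# Sketch: boot environment (BE) lifecycle for a Solaris 10 Branded Zone,
# consolidating the commands from the walkthrough above. Run inside the zone;
# "newBE" and patch 104945-02 are the example names used in the post.

# 1. Create a new BE: snapshot the active root and clone it.
zfs snapshot rpool/ROOT/zbe-0@snap
zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE
# The "directory is not empty" warning from the clone can be ignored.

# 2. List the BEs and show which one will boot next.
zfs list -r rpool/ROOT
zfs get com.oracle.zones.solaris10:activebe rpool/ROOT

# 3. Patch the inactive BE: mount it on an empty directory and patch with -R.
zfs mount -o mountpoint=/mnt rpool/ROOT/newBE
patchadd -R /mnt -M /var/sadm/spool 104945-02
zfs unmount rpool/ROOT/newBE

# 4. Activate the new BE and reboot into it.
zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT
shutdown -y -g0 -i6

# 5. Later, to remove the old BE, promote the clone first, then destroy it.
# zfs promote rpool/ROOT/newBE
# zfs destroy rpool/ROOT/zbe-0
```

As noted above, most of these commands can also be run from the global zone instead, provided the ZFS file system names are adjusted accordingly.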


  • Partner Blog Series: PwC Perspectives - "Is It Time for an Upgrade?"

    - by Tanu Sood
    Is your organization debating their next step with regard to Identity Management? While all the stakeholders are well aware that the one-size-fits-all doesn’t apply to identity management, just as true is the fact that no two identity management implementations are alike. Oracle’s recent release of Identity Governance Suite 11g Release 2 has innovative features such as a customizable user interface, shopping cart style request catalog and more. However, only a close look at the use cases can help you determine if and when an upgrade to the latest R2 release makes sense for your organization. This post will describe a few of the situations that PwC has helped our clients work through. “Should I be considering an upgrade?” If your organization has an existing identity management implementation, the questions below are a good start to assessing your current solution to see if you need to begin planning for an upgrade: Does the current solution scale and meet your projected identity management needs? Does the current solution have a customer-friendly user interface? Are you completely meeting your compliance objectives? Are you still using spreadsheets? Does the current solution have the features you need? Is your total cost of ownership in line with well-performing similar sized companies in your industry? Can your organization support your existing Identity solution? Is your current product based solution well positioned to support your organization's tactical and strategic direction? Existing Oracle IDM Customers: Several existing Oracle clients are looking to move to R2 in 2013. If your organization is on Sun Identity Manager (SIM) or Oracle Identity Manager (OIM) and if your current assessment suggests that you need to upgrade, you should strongly consider OIM 11gR2. Oracle provides upgrade paths to Oracle Identity Manager 11gR2 from SIM 7.x / 8.x as well as Oracle Identity Manager 10g / 11gR1. The following are some of the considerations for migration: Check the end of product support (for Sun or legacy OIM) schedule There are several new features available in R2 (including common Helpdesk scenarios, profiling of disconnected applications, increased scalability, custom connectors, browser-based UI configurations, portability of configurations during future upgrades, etc) Cost of ownership (for SIM customers)\ Customizations that need to be maintained during the upgrade Time/Cost to migrate now vs. waiting for next version If you are already on an older version of Oracle Identity Manager and actively maintaining your support contract with Oracle, you might be eligible for a free upgrade to OIM 11gR2. Check with your Oracle sales rep for more details. Existing IDM infrastructure in place: In the past year and half, we have seen a surge in IDM upgrades from non-Oracle infrastructure to Oracle. If your organization is looking to improve the end-user experience related to identity management functions, the shopping cart style access request model and browser based personalization features may come in handy. Additionally, organizations that have a large number of applications that include ecommerce, LDAP stores, databases, UNIX systems, mainframes as well as a high frequency of user identity changes and access requests will value the high scalability of the OIM reconciliation and provisioning engine. Furthermore, we have seen our clients like OIM's out of the box (OOB) support for multiple authoritative sources. 
For organizations looking to integrate applications that do not have an exposed API, the Generic Technology Connector framework supported by OIM will be helpful in quickly generating custom connector using OOB wizard. Similarly, organizations in need of not only flexible on-boarding of disconnected applications but also strict access management to these applications using approval flows will find the flexible disconnected application profiling feature an extremely useful tool that provides a high degree of time savings. Organizations looking to develop custom connectors for home grown or industry specific applications will likewise find that the Identity Connector Framework support in OIM allows them to build and test a custom connector independently before integrating it with OIM. Lastly, most of our clients considering an upgrade to OIM 11gR2 have also expressed interest in the browser based configuration feature that allows an administrator to quickly customize the user interface without adding any custom code. Better yet, code customizations, if any, made to the product are portable across the future upgrades which, is viewed as a big time and money saver by most of our clients. Below are some upgrade methodologies we adopt based on client priorities and the scale of implementation. For illustration purposes, we have assumed that the client is currently on Oracle Waveset (formerly Sun Identity Manager).   Integrated Deployment: The integrated deployment is typically where a client wants to split the implementation to where their current IDM is continuing to handle the front end workflows and OIM takes over the back office operations incrementally. Once all the back office operations are moved completely to OIM, the front end workflows are migrated to OIM. Parallel Deployment: This deployment is typically done where there can be a distinct line drawn between which functionality the platforms are supporting. For example the current IDM implementation is handling the password reset functionality while OIM takes over the access provisioning and RBAC functions. Cutover Deployment: A cutover deployment is typically recommended where a client has smaller less complex implementations and it makes sense to leverage the migration tools to move them over immediately. What does this mean for YOU? There are many variables to consider when making upgrade decisions. For most customers, there is no ‘easy’ button. Organizations looking to upgrade or considering a new vendor should start by doing a mapping of their requirements with product features. The recommended approach is to take stock of both the short term and long term objectives, understand product features, future roadmap, maturity and level of commitment from the R&D and build the implementation plan accordingly. As we said, in the beginning, there is no one-size-fits-all with Identity Management. So, arm yourself with the knowledge, engage in industry discussions, bring in business stakeholders and start building your implementation roadmap. In the next post we will discuss the best practices on R2 implementations. We will be covering the Do's and Don't's and share our thoughts on making implementations successful. Meet the Writers: Dharma Padala is a Director in the Advisory Security practice within PwC.  He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.   
Dharma has 14 years of experience delivering IT solutions, including the past eight years implementing Identity Management solutions.

Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries, including financial services, health care, automotive and retail. Scott has 10 years of experience delivering Identity Management solutions.

John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience spans network, infrastructure, application and data security, which lets him bring a holistic point of view to problem solving.

Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries, including financial services, entertainment and retail. Jenny has three years of experience delivering IT solutions, including the past year and a half implementing Identity Management solutions.


  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about.  After analyzing the differences of time between a Dictionary with locking versus the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster with read-heavy multi-threaded operations.  Then, I made the classic blunder of thinking that because the original Dictionary with locking was faster for those write-heavy uses, it was the best choice for those types of tasks.  In short, I fell into the premature-optimization anti-pattern. Basically, the premature-optimization anti-pattern is when a developer is coding very early for a perceived (whether rightly-or-wrongly) performance gain and sacrificing good design and maintainability in the process.  At best, the performance gains are usually negligible and at worst, can either negatively impact performance, or can degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity. Keep in mind the distinction above.  I'm not talking about valid performance decisions.  There are decisions one should make when designing and writing an application that are valid performance decisions.  Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing performance algorithms (linear search vs. binary search).  But these in my mind are macro optimizations.  The error is not in deciding to use a better data structure or algorithm, the anti-pattern as stated above is when you attempt to over-optimize early on in such a way that it sacrifices maintainability. In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking.  This would have been a mistake as I would be trading maintainability (ConcurrentDictionary requires no locking which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified and you don't risk the developer locking incorrectly) -- and I fell for it even when I knew to watch out for it.  I think in my case, and it may be true for others as well, a large part of it was due to the time I was trained as a developer.  I began college in in the 90s when C and C++ was king and hardware speed and memory were still relatively priceless commodities and not to be squandered.  In those days, using a long instead of a short could waste precious resources, and as such, we were taught to try to minimize space and favor performance.  This is why in many cases such early code-bases were very hard to maintain.  I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead.  Now back then, that may have been a valid concern, but with today's modern hardware even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway) the results of removing method calls to speed up performance are negligible for the great majority of applications.  Now, obviously, there are those coding applications where speed is absolutely king (for example drivers, computer games, operating systems) where such sacrifices may be made.  
But I would strongly advice against such optimization because of it's cost.  Many folks that are performing an optimization think it's always a win-win.  That they're simply adding speed to the application, what could possibly be wrong with that?  What they don't realize is the cost of their choice.  For every piece of straight-forward code that you obfuscate with performance enhancements, you risk the introduction of bugs in the long term technical debt of the application.  It will become so fragile over time that maintenance will become a nightmare.  I've seen such applications in places I have worked.  There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for their application to try to squeeze out every ounce of performance.  Unfortunately, the application stability often suffers as a result and it is very difficult for anyone other than the original designer to maintain. I've even seen this recently where I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the 0x standard implementation).  To me this was almost a joke.  Twice as slow sounds bad, but it almost never as bad as you think -- especially if you're gaining safety.  The only time twice is really that much slower is when once was too slow to begin with.  Think about it.  2 minutes is slow as a response time because 1 minute is slow.  But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial!  The only way you'd ever really notice this would be in iterating a collection just for the sake of iterating (i.e. no other operations).  To my mind, the added safety makes the extra time worth it. Always favor safety and maintainability when you can.  I know it can be a hard habit to break, especially if you started out your career early or in a language such as C where they are very performance conscious.  But in reality, these type of micro-optimizations only end up hurting you in the long run. Remember the two laws of optimization.  I'm not sure where I first heard these, but they are so true: For beginners: Do not optimize. For experts: Do not optimize yet. This is so true.  If you're a beginner, resist the urge to optimize at all costs.  And if you are an expert, delay that decision.  As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient.  Chances are it will be network, database, or disk hits that will be your slow-down, not your code.  As they say, 98% of your code's bottleneck is in 2% of your code so premature-optimization may add maintenance and safety debt that won't have any measurable impact.  Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, then you should go back and optimize further.


  • Pet Peeves with the Windows Phone 7 Marketplace

    - by Bil Simser
    Have you ever noticed how something things just gnaw at your very being. This is the case with the WP7 marketplace, the Zune software, and the things that drive me batshit crazy with a side of fries. To go. I wanted to share. XBox Live is Not the Centre of the Universe Okay, it’s fine that the Zune software has an XBox live tag for games so can see them clearly but do we really need to have it shoved down our throats. On every click? Click on Games in the marketplace: The first thing that it defaults to on the filters on the right is XBox Live: Okay. Fine. However if you change it (say to Paid) then click onto a title when you come back from that title is the filter still set to Paid? No. It’s back to XBox Live again. Really? Give us a break. If you change to any filter on any other genre then click on the selected title, it doesn’t revert back to anything. It stays on the selection you picked. Let’s be fair here. The Games genre should behave just like every other one. If I pick Paid then when I come back to the list please remember that. Double Dipping On the subject of XBox Live titles, Microsoft (and developers who have an agreement with Microsoft to produce Live titles, which generally rules out indie game developers) is double dipping with regards to exposure of their titles. Here’s the Puzzle and Trivia Game section on the Marketplace for XBox Live titles: And here’s the same category filtered on Paid titles: See the problem? Two indie titles while the rest are XBox Live ones. So while XBL has it’s filter, they also get to showcase their wares in the Paid and Free filters as well. If you’re going to have an XBox Live filter then use it and stop pushing down indie titles until they’re off the screen (on some genres this is already the case). Free and Paid titles should be just that and not include XBox Live ones. If you’re really stoked that people can’t find the Free XBox Live titles vs. the paid ones, then create a Free XBox Live filter and a Paid XBox Live filter. I don’t think we would mind much. Whose Trial is it Anyways? You might notice apps in the marketplace with titles like “My Fart App Professional Lite” or “Silicon Lamb Spleen Builder Free”. When you submit and app to the marketplace it can either be free or paid. If it’s a paid app you also have the option to submit it with Trial capabilities. It’s up to you to decide what you offer in the trial version but trial versions can be purchased from within the app so after someone trys out your app (for free) and wants to unlock the Super Secret Obama Spy Ring Level, they can just go to the marketplace from your app (if you built that functionality in) and upgrade to the paid version. However it creates a rift of sorts when it comes to visibility. Some developers go the route of the paid app with a trial version, others decide to submit *two* apps instead of one. One app is the “Free” or “Lite” verions and the other is the paid version. Why go to the hassle of submitting two apps when you can just create a trial version in the same app? Again, visibility. There’s no way to tell Paid apps with Trial versions and ones without (it’s an option, you don’t have to provide trial versions, although I think it’s a good idea). However there is a way to see the Free apps from the Paid ones so some submit the two apps and have the Free version have links to buy the paid one (again through the Marketplace tasks in the API). What we as developers need for visibility is a new filter. Trial. That’s it. 
It would simply filter on Paid apps that have trial capabilities and surface up those apps just like the free ones. If Microsoft added this filter to the marketplace, it would eliminate the need for people to submit their “Free” and “Lite” versions and make it easier for the developer not to have to maintain two systems. I mean, is it really that hard? Can’t be any more difficult than the XBox Live Filter that’s already there. Location is Everything The last thing on my bucket list is about location. When I launch Zune I’m running in my native location setting, Canada. What’s great is that I navigate to the Travel Tools section where I have one of my apps and behold the splendour that I see: There are my apps in the number 1 and number 4 slot for top selling in that category. I show it to my wife to make up for the sleepless nights writing this stuff and we dance around and celebrate. Then I change my location on my operation system to United States and re-launch Zune. WTF? My flight app has slipped to the 10th spot (I’m only showing 4 across here out of the 7 in Zune) and my border check app that was #1 is now in the 32nd spot! End of celebration. Not only is relevance being looked at here, I value the comments people make on may apps as do most developers. I want to respond to them and show them that I’m listening. The next version of my border app will provide multiple camera angles. However when I’m running in my native Canada location, I only see two reviews. Changing over to United States I see fourteen! While there are tools out there to provide with you a unified view, I shouldn’t have to rely on them. My own Zune desktop software should allow me to see everything. I realize that some developers will submit an app and only target it for some locations and that’s their choice. However I shouldn’t have to jump through hoops to see what apps are ahead of mine, or see people comments and ratings. Another proposal. Either unify the marketplace (i.e. when I’m looking at it show me everything combined) or let me choose a filter. I think the first option might be difficult as you’re trying to average out top selling apps across all markets and have to deal with some apps that have been omitted from some markets. Although I think you could come up with a set of use cases that would handle that, maybe that’s too much work. At the very least, let us developers view the markets in a drop down or something from within the Zune desktop. Having to shut down Zune, change our location, and re-launch Zune to see other perspectives is just too onerous. A Call to Action These are just one mans opinion. Do you agree? Disagree? Feel hungry for a bacon sandwich? Let everyone know via the comments below. Perhaps someone from Microsoft will be reading and take some of these ideas under advisement. Maybe not, but at least let’s get the word out that we really want to see some change. Egypt can do it, why not WP7 developers!


  • Java JRE 1.7.0_45 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    Java Runtime Environment 7u45 (a.k.a. JRE 7u45-b18) and later updates on the JRE 7 codeline are now certified with Oracle E-Business Suite Release 11i and 12.0, 12.1, and 12.2 for Windows-based desktop clients. Effects of new support dates on Java upgrades for EBS environments Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details: What does this mean for Oracle E-Business Suite users? Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients? Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers? All JRE 6 and 7 releases are certified with EBS upon release Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline, and from JRE 7u10 and later updates on the JRE 7 codeline.  We test all new JRE 1.6 and JRE 7 releases in parallel with the JRE development process, so all new JRE 1.6 and 7 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team.  You do not need to wait for a certification announcement before applying new JRE 1.6 or JRE 7 releases to your EBS users' desktops. What's needed to enable EBS environments for JRE 7? EBS customers should ensure that they are running JRE 7u17, at minimum, on Windows desktop clients. Of the compatibility issues identified with JRE 7, the most critical is an issue that prevents E-Business Suite Forms-based products from launching on Windows desktops that are running JRE 7.  Customers can prevent this issue -- and all other JRE 7 compatibility issues -- by ensuring that they have applied the latest certified patches documented for JRE 7 configurations to their EBS application tier servers.  These patches are compatible with JRE 6 and 7, production ready, and fully-tested with the E-Business Suite.  These patches may be applied immediately to all E-Business Suite environments. All other Forms prerequisites documented in the Notes above should also be applied.  Where are the official patch requirements documented? All patches required for ensuring full compatibility of the E-Business Suite with JRE 7 are documented in these Notes: For EBS 11i: Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 11i (Note 290807.1) Upgrading Developer 6i with Oracle E-Business Suite 11i (Note 125767.1) For EBS 12.0, 12.1, 12.2 Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 12 (Note 393931.1) Upgrading OracleAS 10g Forms and Reports in Oracle E-Business Suite Release 12 (Note 437878.1) EBS + Discoverer 11g Users JRE 1.7.0_45 is certified for Discoverer 11g in E-Business Suite environments with the following minimum requirements: Discoverer (11g) 11.1.1.6 plus Patch 13877486 and later  Reference: How To Find Oracle BI Discoverer 10g and 11g Certification Information (Document 233047.1) Worried about the 'mismanaged session cookie' issue? No need to worry -- it's fixed.  To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23. These fixes will carry forward and continue to be fixed in all future JRE releases on the JRE 6 and 7 codelines.  
In other words, if you wish to avoid the mismanaged session cookie issue, you should apply any release after JRE 1.6.0_22 on the JRE 6 codeline, and JRE 7u10 and later JRE 7 codeline updates. Implications of Java 6 End of Public Updates for EBS Users The Support Roadmap for Oracle Java is published here: Oracle Java SE Support Roadmap The latest updates to that page (as of Sept. 19, 2012) state (emphasis added): Java SE 6 End of Public Updates Notice After February 2013, Oracle will no longer post updates of Java SE 6 to its public download sites. Existing Java SE 6 downloads already posted as of February 2013 will remain accessible in the Java Archive on Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download. For enterprise customers, who need continued access to critical bug fixes and security fixes as well as general maintenance for Java SE 6 or older versions, long term support is available through Oracle Java SE Support . What does this mean for Oracle E-Business Suite users? EBS users fall under the category of "enterprise users" above.  Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 6 updates from February 2013 to the end of Java SE 6 Extended Support in June 2017. In other words, nothing changes for EBS users after February 2013.  EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6 until the end of Java SE 6 Extended Support in June 2017. How can EBS customers obtain Java 6 updates after the public end-of-life? EBS customers can download Java 6 patches from My Oracle Support.  For a complete list of all Java SE patch numbers, see: All Java SE Downloads on MOS (Note 1439822.1) Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients? This upgrade is highly recommended but remains optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 desktop clients.  Java 6 is covered by Extended Support until June 2017.  All E-Business Suite customers must upgrade to JRE 7 by June 2017. Coexistence of JRE 6 and JRE 7 on Windows desktops The upgrade to JRE 7 is highly recommended for EBS users, but some users may need to run both JRE 6 and 7 on their Windows desktops for reasons unrelated to the E-Business Suite. Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 7 will be invoked instead of JRE 6 if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1. Applying Updates to JRE 6 and JRE 7 to Windows desktops Auto-update will keep JRE 7 up-to-date for Windows users with JRE 7 installed. Auto-update will only keep JRE 7 up-to-date for Windows users with both JRE 6 and 7 installed.  JRE 6 users are strongly encouraged to apply the latest Critical Patch Updates as soon as possible after each release. The Jave SE CPUs will be available via My Oracle Support.  EBS users can find more information about JRE 6 and 7 updates here: Information Center: Installation & Configuration for Oracle Java SE (Note 1412103.2) The dates for future Java SE CPUs can be found on the Critical Patch Updates, Security Alerts and Third Party Bulletin.  
An RSS feed is available on that site for those who would like to be kept up-to-date. What do Mac users need? Mac users running Mac OS 10.7 or 10.8 can run JRE 7 plug-ins.  See this article: EBS 12 certified with Mac OS X 10.7 and 10.8 with Safari 6 and JRE 7 Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers? JRE is used for desktop clients.  JDK is used for application tier servers JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers.  Java SE 6 is covered by Extended Support until June 2017.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 by June 2017. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms. JDK 7 is certified with E-Business Suite 12.  See: Java (JDK) 7 Certified for E-Business Suite 12 Servers References Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1) Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1) Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1) Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1) Related Articles Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23 Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009
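As a quick sanity check against the 7u17 minimum called out above, the sketch below reads the default JRE version from the command line. It assumes a Unix-style shell and the usual 'java version "1.7.0_NN"' banner; since EBS desktop clients are Windows, treat it as an illustration of the check rather than a deployment script.

```sh
#!/bin/sh
# Sketch: verify the default JRE against the JRE 7u17 minimum mentioned above.
# Assumes `java -version` prints the usual `java version "1.x.0_NN"` banner.

ver=$(java -version 2>&1 | head -1 | sed 's/.*"\(.*\)".*/\1/')  # e.g. 1.7.0_45
update=${ver##*_}                                               # e.g. 45

case "$ver" in
  1.7.0_*)
    if [ "$update" -ge 17 ]; then
      echo "OK: JRE $ver meets the 7u17 minimum for EBS Forms"
    else
      echo "Upgrade needed: JRE $ver is older than 7u17"
    fi
    ;;
  1.6.0_*)
    echo "JRE 6 ($ver): still covered by Extended Support, but plan the move to JRE 7"
    ;;
  *)
    echo "Unexpected version string: $ver"
    ;;
esac
```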


  • What Makes a Good Design Critic? CHI 2010 Panel Review

    - by Applications User Experience
    Author: Daniel Schwartz, Senior Interaction Designer, Oracle Applications User Experience Oracle Applications UX Chief Evangelist Patanjali Venkatacharya organized and moderated an innovative and stimulating panel discussion titled "What Makes a Good Design Critic? Food Design vs. Product Design Criticism" at CHI 2010, the annual ACM Conference on Human Factors in Computing Systems. The panelists included Janice Rohn, VP of User Experience at Experian; Tami Hardeman, a food stylist; Ed Seiber, a restaurant architect and designer; Jonathan Kessler, a food critic and writer at the Atlanta Journal-Constitution; and Larry Powers, Chef de Cuisine at Shaun's restaurant in Atlanta, Georgia. Building off the momentum of his highly acclaimed panel at CHI 2009 on what interaction design can learn from food design (for which I was on the other side as a panelist), Venkatacharya brought together new people with different roles in the restaurant and software interaction design fields. The session was also quite delicious -- but more on that later. Criticism, as it applies to food and product or interaction design, was the tasty topic for this forum and showed that strong parallels exist between food and interaction design criticism. Figure 1. The panelists in discussion: (left to right) Janice Rohn, Ed Seiber, Tami Hardeman, and Jonathan Kessler. The panelists had great insights to share from their respective fields, and they enthusiastically discussed as if they were at a casual collegial dinner. Jonathan Kessler stated that he prefers to have one professional critic's opinion in general than a large sampling of customers, however, "Web sites like Yelp get users excited by the collective approach. People are attracted to things desired by so many." Janice Rohn added that this collective desire was especially true for users of consumer products. Ed Seiber remarked that while people looked to the popular view for their target tastes and product choices, "professional critics like John [Kessler] still hold a big weight on public opinion." Chef Powers indicated that chefs take in feedback from all sources, adding, "word of mouth is very powerful. We also look heavily at the sales of the dishes to see what's moving; what's selling and thus successful." Hearing this discussion validates our design work at Oracle in that we listen to our users (our diners) and industry feedback (our critics) to ensure an optimal user experience of our products. Rohn considers that restaurateur Danny Meyer's book, Setting the Table: The Transforming Power of Hospitality in Business, which is about creating successful restaurant experiences, has many applicable parallels to user experience design. Meyer actually argues that the customer is not always right, but that "they must always feel heard." Seiber agreed, but noted "customers are not designers," and while designers need to listen to customer feedback, it is the designer's job to synthesize it. Seiber feels it's the critic's job to point out when something is missing or not well-prioritized. In interaction design, our challenges are quite similar, if not parallel. Software tasks are like puzzles that are in search of a solution on how to be best completed. As a food stylist, Tami Hardeman has the demanding and challenging task of presenting food to be as delectable as can be. To present food in its best light requires a lot of creativity and insight into consumer tastes. 
It's no doubt then that this former fashion stylist came up with the ultimate catch phrase to capture the emotion that clients want to draw from their users: "craveability." The phrase was a hit with the audience and panelists alike. Sometime later in the discussion, Seiber remarked, "designers strive to apply craveability to products, and I do so for restaurants in my case." Craveabilty is also very applicable to interaction design. Creating straightforward and smooth workflows for users of Oracle Applications is a primary goal for my colleagues. We want our users to really enjoy working with our products where it makes them more efficient and better at their jobs. That's our "craveability." Patanjali Venkatacharya asked the panel, "if a design's "craveability" appeals to some cultures but not to others, then what is the impact to the food or product design process?" Rohn stated that "taste is part nature and part nurture" and that the design must take the full context of a product's usage into consideration. Kessler added, "good design is about understanding the context" that the experience necessitates. Seiber remarked how important seat comfort is for diners and how the quality of seating will add so much to the complete dining experience. Sometimes if these non-food factors are not well executed, they can also take away from an otherwise pleasant dining experience. Kessler recounted a time when he was dining at a restaurant that actually had very good food, but the photographs hanging on all the walls did not fit in with the overall décor and created a negative overall dining experience. While the tastiness of the food is critical to a restaurant's success, it is a captivating complete user experience, as in interaction design, which will keep customers coming back and ultimately making the restaurant a hit. Figure 2. Patnajali Venkatacharya enjoyed the Sardian flatbread salad. As a surprise Chef Powers brought out a signature dish from Shaun's restaurant for all the panelists to sample and critique. The Sardinian flatbread dish showcased Atlanta's taste for fresh and local produce and cheese at its finest as a salad served on a crispy flavorful flat bread. Hardeman said it could be photographed from any angle, a high compliment coming from a food stylist. Seiber really enjoyed the colors that the dish brought together and thought it would be served very well in a casual restaurant on a summer's day. The panel really appreciated the taste and quality of the different components and how the rosemary brought all the flavors together. Seiber remarked that "a lot of effort goes into the appearance of simplicity." Rohn indicated that the same notion holds true with software user interface design. A tremendous amount of work goes into crafting straightforward interfaces, including user research, prototyping, design iterations, and usability studies. Design criticism for food and software interfaces clearly share many similarities. Both areas value expert opinions and user feedback. Both areas understand the importance of great design needing to work well in its context. Last but not least, both food and interaction design criticism value "craveability" and how having users excited about experiencing and enjoying the designs is an important goal. Now if we can just improve the taste of software user interfaces, people may choose to dine on their enterprise applications over a fresh organic salad.


  • April Oracle Database Events

    - by Mandy Ho
April 17-18, 2012 – Moscow, Russia
Oracle Develop Conference
The Oracle database developer conference, Oracle Develop, will visit Moscow, Russia and Hyderabad, India this spring. Oracle Develop includes a database development track, which contains .NET sessions and a hands-on lab led by an Oracle .NET product manager. Register today before the event fills up.
http://www.oracle.com/javaone/ru-en/index.html

April 24 & 26, 2012 – San Diego, CA & San Jose, CA
ISC2 Leadership Regional Event Series
Oracle at (ISC)2 Security Leadership Series: Herding Clouds -- Managing Cloud Security Concerns and Expectations. Join us for this interactive day-long session where industry leaders, including Oracle solution experts, will talk about how to:
· dispel the top Cloud security myths
· minimize "rogue Cloud" implementations
· identify potential compliance pitfalls and how to avoid them
· develop contract language for your cloud providers
· manage users across your fractured datacenter
· leverage existing technologies to protect data as it moves from the enterprise to the cloud
http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=146972&src=7239493&src=7239493&Act=228

April 22-26, 2012 – Las Vegas, NV
IOUG COLLABORATE 12
From April 22-26, 2012, Oracle takes Las Vegas. Thousands of Oracle professionals will descend upon the Mandalay Bay Convention Center for a week's worth of education sessions, networking opportunities and more at the only user-driven and user-run Oracle conference, COLLABORATE 12. Your COLLABORATE 12 - IOUG Forum registration comes complete with a bonus: a full-day Deep Dive education program on Sunday! Choose from numerous hot topics, including Real World Performance, High Availability and more, presented by legendary and seasoned Oracle pros, including Tom Kyte and Craig Shallahamer.
http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=143637&src=7360364&src=7360364&Act=5

Independent Oracle User Group (IOUG) Regional Events:

April 16, 2012 – Columbus, Ohio
Ohio Oracle Users Group: Higher Performance PL/SQL and Oracle Database 11g PL/SQL New Features – Featuring Steven Feuerstein
http://www.ooug.org/

April 26, 2012 – Irving, TX
Dallas Oracle Users Group, Oracle Database Forum: 5-7 PM, Oracle Corporation, 6031 Connection Drive, Irving, TX
http://memberservices.membee.com/538/irmevents.aspx?id=64

April 30, 2012 – Los Angeles, CA
Real World Performance Tour
A full day of real world database performance with Tom Kyte, author of the ever-popular AskTom blog; Andrew Holdsworth, head of Oracle's Real World Performance Team; and Graham Wood, renowned Oracle Database performance architect. Through discussion, debate and demos, they'll show you how to master performance engineering topics like:
· Best practices for designing hardware architectures, and how to spot and fix bad design
· How to develop applications that deliver the fastest possible performance without sacrificing accuracy
· New for 2012: updates on Enterprise Manager, Exadata, and what these technologies mean to your current systems
http://www.ioug.org/Events/ADayofRealWorldPerformance/tabid/194/Default.aspx


  • Getting your bearings and defining the project objective

    - by johndoucette
    I wrote this two years ago and thought it was worth posting… Some may think this is a daunting task and some may even say “what a waste of time” and want to open MS Project and start typing out tasks because someone asked for an estimate and a task list. Hell, maybe you even use Excel and pump out a spreadsheet with some real scientific formula for guessing how long it will take to code a bunch of classes. However, this short exercise will provide the basis for the entire project, whether small or large and be a great friend when communicating to anyone on your team or even your client. I call this the Project Brief. If you find yourself going beyond a single page, then you must decompose the sections and summarize your findings so there is a complete and clear picture of the project you are working on in a relatively short statement. Here is a great quote from the PMBOK (Project Management Body of Knowledge) relative to what a project is;   A project is a temporary endeavor undertaken to create a unique product, service or result. With this in mind, the project brief should encompass the entirety (objective) of the endeavor in its explanation and what it will take (goals) to create the product, service or result (deliverables). Normally the process of identifying the project objective is done during the first stage of a project called the Project Kickoff, but you can perform this very important step anytime to help you get a bearing. There are many more parts to helping a project stay on course, but this is usually the foundation where it can be grounded on. Through a series of 3 exercises, you should be able to come up with the objective, goals and deliverables on your project. Follow these steps, and in no time (about &frac12; hour), you will have the foundation of your project plan. (See examples below) Exercise 1 – Objectives Begin with the end in mind. Think about your project in business terms with a couple things to help you understand the objective; Reference the business benefit in terms of cost, speed and / or quality, Provide a higher level of what the outcome will look like (future sense) It should be non-measurable, that’s what the goals are all about The output should be a single paragraph with three sentences and take 10 minutes to write. *Typically, agreement must be reached on the objectives of the project before you would proceed to the next steps of the project. Exercise 2 – Goals A project goal is a statement that answers questions about who, what, why, where and when. A good project goal statement; Answers the five “W” questions for the project Is measurable in each of its parts Is published and agreed on by all the owners This helps the Project Manager receive confirmation on defining the project target. Using the established project objective done in the first exercise, think about the things it will take to get the job done. Think about tangible activities which are the top level tasks in a typical Work Breakdown Structure (WBS). The overall goal statement plus all the deliverables (next exercise) can be seen as the project team’s contract with the project owners. Write 3 - 5 goals in about 10 minutes. You should not write the words “Who, what, why, where and when, but merely be able to answer the questions when you read a goal. Exercise 3 – Deliverables Every project creates some type of output and these outputs are called deliverables. 
There are two classes of deliverables:
Internal – produced for project team members to meet their goals
External – produced for project owners to meet their expectations
The list you enter here provides a checklist for the team's delivery and/or is a statement of all the expectations of the project owners. Here are some typical project deliverables:
Product and product documentation
  End product/system
  Requirements/feature documents
  Installation guides
  Demo/prototype
  System design documents
  User guides/help files
Plans
  Project plan
  Training plan
  Conversion/installation/delivery plan
  Test plans
  Documentation plan
  Communication plan
Reports and general documentation
  Progress reports
  System acceptance tests
  Outstanding bug list
  Procedures
  Risk and issue logs
  Project history
Deliverables should go with each of the goals. Have 3-5 deliverables for each goal. When you are done, you will have established a great foundation for the clarity of your project. This exercise can take some time, but with practice, you should be able to whip this one out in 10 minutes as well, especially if you are intimate with an ongoing project.

Samples

Objective
[Client] is implementing a series of MOSS sites to support external public (Internet), internal employee (Intranet) and external secure (password-protected Internet) applications. This project will focus on the public-facing web site and will provide [Client] with architectural recommendations based on the current design being done by their design partner [Partner] and the internal Content Team. In addition, it will provide [Client] with the development plan and confidence they need to deploy a world-class public Internet website.

Goals
1. [Consultant] will provide technical guidance and set project team expectations for the implementation of the MOSS Internet site, based on the provided features/functions, within three weeks.
2. [Consultant] will understand the phase 2 secure password-protected Internet site design and provide recommendations.

Deliverables
1.1 Public Internet (unsecure) Architectural Recommendation Plan
1.2 Physical site construction Work Breakdown Structure and plan (time, cost and resources needed)
2.1 Two-factor authentication recommendation document

Objective
[Client] is currently using an application developed by [Consultant] many years ago called "XXX". This application, although functional, does not meet their updated business requirements and contains a few defects for which [Client] has developed work-around processes. [Client] would like a "new and improved" system that supports their membership management needs by expanding membership and subscription capabilities, providing accounting integration with internal (GL) and external (VeriSign) systems, and implementing hooks to the current CRM solution. This effort will take place through a series of phases, beginning with envisioning.

Goals
1. Through discussions with users, [Consultant] will identify the current issues/bugs that need to be resolved in order to meet the current functionality requirements, within three weeks.
2. [Consultant] will gather requirements from the users about what is "needed" vs. "what they have" for enhancements and provide a high-level document supporting their needs.
3. [Consultant] will meet with the team members through a series of meetings and help define the overall project plan to deliver a new and improved solution.

Deliverables
1.1 Prioritized list of current application issues/bugs that need to be resolved
1.2 A resolution plan for the issues/bugs identified in the current application
1.3 Risk Assessment Document
2.1 A Requirements Document showing high-level [Client] needs for the new XXX application:
  · New feature functionality not in the application today
  · Existing functionality that will carry over into the new application
2.2 Reporting Requirements Document
3.1 A Project Plan showing the deliverables and cost for the next (second) phase of this project
3.2 A Statement of Work for the next (second) phase of this project
3.3 An Estimate of any work that would need to follow the second phase

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved. What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically create and loaded with the subset of data and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools. Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel. Extension Attributes: The Challenge One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem of the second approach is that name/value pairs are useless to an OLAP cube; so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of future blog entry. What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order. 
The only permanent database is the metadata database (shown in orange), which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements. So, in broad terms, the steps required to fulfil a client order are as follows:

Step 1: Prepare metadata
- Create a set of database names unique to the client's order
- Modify all package connection strings to be used by SSIS to point to the new databases and file locations

Step 2: Create relational databases
- Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything
- Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases

Step 3: Load staging database
- Use SSIS to load all data files into the staging database in a parallel operation
- Load extension files containing name/value pairs. These will provide client-specific attributes in the OLAP cube.

Step 4: Load data mart relational database
- Load the data from staging into the data mart relational database, again in parallel where possible
- Allocate surrogate keys and use SSIS to perform surrogate key lookups during the load of fact tables

Step 5: Load extension tables & attributes
- Pivot the extension attributes from their native name/value pairs into proper relational tables
- Add the extension attributes to the views used by the OLAP cube

Step 6: Deploy & process OLAP cube
- Deploy the OLAP database directly to the server using a C# script task in SSIS
- Modify the connection string used by the OLAP cube to point to the data mart relational database
- Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
- Remove any standard attributes that are not required
- Process the OLAP cube

Step 7: Backup and drop databases
- Drop the staging database as it is no longer required
- Backup the data mart relational and OLAP databases and ship these to the client's infrastructure
- Drop the data mart relational and OLAP databases from the build server
- Mark the order complete
- Start processing the next order, ad infinitum

So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
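As a flavour of what "built to order" means in practice, here is a minimal C# sketch of the first bullet of Step 2: creating one of the per-order databases with dynamic SQL and switching it to SIMPLE recovery. The class, method and parameter names are mine rather than from the original post, and in the real system the database name and connection string would come from the metadata database prepared in Step 1.

using System;
using System.Data.SqlClient;
using System.Text.RegularExpressions;

class DataMartBuilder
{
    // Step 2 (sketch): create a build-to-order database and set its recovery model to SIMPLE.
    static void CreateOrderDatabase(string serverConnectionString, string databaseName)
    {
        // The name is spliced into DDL, so restrict it to a conservative pattern first.
        if (!Regex.IsMatch(databaseName, @"^[A-Za-z0-9_]+$"))
            throw new ArgumentException("Unexpected database name", nameof(databaseName));

        string[] statements =
        {
            "CREATE DATABASE [" + databaseName + "];",
            // No point paying for logging on a database that lives for one order only.
            "ALTER DATABASE [" + databaseName + "] SET RECOVERY SIMPLE;"
        };

        using (var connection = new SqlConnection(serverConnectionString))
        {
            connection.Open();
            foreach (string sql in statements)
            {
                using (var command = new SqlCommand(sql, connection))
                {
                    command.ExecuteNonQuery();   // plain DDL, nothing to read back
                }
            }
        }
    }
}

The object-creation scripts from the second bullet of Step 2, and the BACKUP/DROP statements from Step 7, could be pushed through the same loop, one batch per ExecuteNonQuery call.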

    Read the article

  • Exception Handling Differences Between 32/64 Bit

    - by Alois Kraus
    I do quite a bit of debugging .NET applications but from time to time I see things that are impossible (at a first look). I may ask you dear reader what your mental exception handling model is. Exception handling is easy after all right? Lets suppose the following code:         private void F1(object sender, EventArgs e)         {             try             {                 F2();             }             catch (Exception ex)             {                 throw new Exception("even worse Exception");             }           }           private void F2()         {             try             {                 F3();             }             finally             {                 throw new Exception("other exception");             }         }           private void F3()         {             throw new NotImplementedException();         }   What will the call stack look like when you break into the catch(Exception) clause in Windbg (32 and 64 bit on .NET 3.5 SP1)? The mental model I have is that when an exception is thrown the stack frames are unwound until the catch handler can execute. An exception does propagate the call chain upwards.   So when F3 does throw an exception the control flow will resume at the finally handler in F2 which does throw another exception hiding the original one (that is nasty) and then the new Exception will be catched in F1 where the catch handler is executed. So we should see in the catch handler in F1 as call stack only the F1 stack frame right? Well lets try it out in Windbg. For this I created a simple Windows Forms application with one button which does execute the F1 method in its click handler. When you compile the application for 64 bit and the catch handler is reached you will find with the following commands in Windbg   Load sos extension from the same path where mscorwks was loaded in the current process .loadby sos mscorwks   Beak on clr exceptions sxe clr   Continue execution g   Dump mixed call stack container C++  and .NET Stacks interleaved 0:000> !DumpStack OS Thread Id: 0x1d8 (0) Child-SP         RetAddr          Call Site 00000000002c88c0 000007fefa68f0bd KERNELBASE!RaiseException+0x39 00000000002c8990 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295 00000000002c8a60 000007ff005dd7f4 mscorwks!JIT_Throw+0x130 00000000002c8c10 000007fefa6942e1 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)+0xb4 00000000002c8c60 000007fefa661012 mscorwks!ExceptionTracker::CallHandler+0x145 00000000002c8d60 000007fefa711a72 mscorwks!ExceptionTracker::CallCatchHandler+0x9e 00000000002c8df0 0000000077b055cd mscorwks!ProcessCLRException+0x25e 00000000002c8e90 0000000077ae55f8 ntdll!RtlpExecuteHandlerForUnwind+0xd 00000000002c8ec0 000007fefa637c1a ntdll!RtlUnwindEx+0x539 00000000002c9560 000007fefa711a21 mscorwks!ClrUnwindEx+0x36 00000000002c9a70 0000000077b0554d mscorwks!ProcessCLRException+0x20d 00000000002c9b10 0000000077ae5d1c ntdll!RtlpExecuteHandlerForException+0xd 00000000002c9b40 0000000077b1fe48 ntdll!RtlDispatchException+0x3cb 00000000002ca220 000007fefdaeaa7d ntdll!KiUserExceptionDispatcher+0x2e 00000000002ca7e0 000007fefa68f0bd KERNELBASE!RaiseException+0x39 00000000002ca8b0 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295 00000000002ca980 000007ff005dd8df mscorwks!JIT_Throw+0x130 00000000002cab30 000007fefa6942e1 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F2()+0x9f 00000000002cab80 000007fefa71b5b3 mscorwks!ExceptionTracker::CallHandler+0x145 00000000002cac80 
000007fefa70dcd0 mscorwks!ExceptionTracker::ProcessManagedCallFrame+0x683 00000000002caed0 000007fefa7119af mscorwks!ExceptionTracker::ProcessOSExceptionNotification+0x430 00000000002cbd90 0000000077b055cd mscorwks!ProcessCLRException+0x19b 00000000002cbe30 0000000077ae55f8 ntdll!RtlpExecuteHandlerForUnwind+0xd 00000000002cbe60 000007fefa637c1a ntdll!RtlUnwindEx+0x539 00000000002cc500 000007fefa711a21 mscorwks!ClrUnwindEx+0x36 00000000002cca10 0000000077b0554d mscorwks!ProcessCLRException+0x20d 00000000002ccab0 0000000077ae5d1c ntdll!RtlpExecuteHandlerForException+0xd 00000000002ccae0 0000000077b1fe48 ntdll!RtlDispatchException+0x3cb 00000000002cd1c0 000007fefdaeaa7d ntdll!KiUserExceptionDispatcher+0x2e 00000000002cd780 000007fefa68f0bd KERNELBASE!RaiseException+0x39 00000000002cd850 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295 00000000002cd920 000007ff005dd968 mscorwks!JIT_Throw+0x130 00000000002cdad0 000007ff005dd875 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F3()+0x48 00000000002cdb10 000007ff005dd786 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F2()+0x35 00000000002cdb60 000007ff005dbe6a WindowsFormsApplication1!WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)+0x46 00000000002cdbc0 000007ff005dd452 System_Windows_Forms!System.Windows.Forms.Control.OnClick(System.EventArgs)+0x5a   Hm okaaay. I see my method F1 two times in this call stack. Looks like we did get some recursion bug. But that can´t be given the obvious code above. Let´s try the same thing in a 32 bit process.  0:000> !DumpStack OS Thread Id: 0x33e4 (0) Current frame: KERNELBASE!RaiseException+0x58 ChildEBP RetAddr  Caller,Callee 0028ed38 767db727 KERNELBASE!RaiseException+0x58, calling ntdll!RtlRaiseException 0028ed4c 68b9008c mscorwks!Binder::RawGetClass+0x20, calling mscorwks!Module::LookupTypeDef 0028ed5c 68b904ff mscorwks!Binder::IsClass+0x23, calling mscorwks!Binder::RawGetClass 0028ed68 68bfb96f mscorwks!Binder::IsException+0x14, calling mscorwks!Binder::IsClass 0028ed78 68bfb996 mscorwks!IsExceptionOfType+0x23, calling mscorwks!Binder::IsException 0028ed80 68bfbb1c mscorwks!RaiseTheExceptionInternalOnly+0x2a8, calling KERNEL32!RaiseExceptionStub 0028eda8 68ba0713 mscorwks!Module::ResolveStringRef+0xe0, calling mscorwks!BaseDomain::GetStringObjRefPtrFromUnicodeString 0028edc8 68b91e8d mscorwks!SetObjectReferenceUnchecked+0x19 0028ede0 68c8e910 mscorwks!JIT_Throw+0xfc, calling mscorwks!RaiseTheExceptionInternalOnly 0028ee44 68c8e734 mscorwks!JIT_StrCns+0x22, calling mscorwks!LazyMachStateCaptureState 0028ee54 68c8e865 mscorwks!JIT_Throw+0x1e, calling mscorwks!LazyMachStateCaptureState 0028eea4 02ffaecd (MethodDesc 0x7af08c +0x7d WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)), calling mscorwks!JIT_Throw 0028eeec 02ffaf19 (MethodDesc 0x7af098 +0x29 WindowsFormsApplication1.Form1.F2()), calling 06370634 0028ef58 02ffae37 (MethodDesc 0x7a7bb0 +0x4f System.Windows.Forms.Control.OnClick(System.EventArgs))   That does look more familar. The call stack has been unwound and we do see only some frames into the history where the debugger was smart enough to find out that we have called F2 from F1. 
The exception handling on 64 bit systems does work quite differently which seems to have the nice property to remember the called methods not only during the first pass of exception filter clauses (during first pass all catch handler are called if they are going to catch the exception which is about to be thrown)  but also when the actual stack unwind has taken place. This makes it possible to follow not only the call stack right at the moment but also to look into the “history” of the catch/finally clauses. In a 64 bit process you only need to look at the ExceptionTracker to find out if a catch or finally handler was called. The two frames ProcessManagedCallFrame/CallHandler does indicate a finally clause whereas CallCatchHandler/CallHandler indicates a catch clause. That was a interesting one. Oh and by the way if you manage to load the Microsoft symbols you can also find out the hidden exception which. When you encounter in the call stack a line 0016eb34 75b79617 KERNELBASE!RaiseException+0x58 ====> Exception Code e0434f4d cxr@16e850 exr@16e838 Then it is a good idea to execute .exr 16e838 !analyze –v to find out more. In the managed world it is even easier since we can dump the objects allocated on the stack which have not yet been garbage collected to look at former method parameters. The command !dso which is the abbreviation for dump stack objects will give you 0:000> !dso OS Thread Id: 0x46c (0) ESP/REG  Object   Name 0016dd4c 020737f0 System.Exception 0016dd98 020737f0 System.Exception 0016dda8 01f5c6cc System.Windows.Forms.Button 0016ddac 01f5d2b8 System.EventHandler 0016ddb0 02071744 System.Windows.Forms.MouseEventArgs 0016ddc0 01f5d2b8 System.EventHandler 0016ddcc 01f5c6cc System.Windows.Forms.Button 0016dddc 020737f0 System.Exception 0016dde4 01f5d2b8 System.EventHandler 0016ddec 02071744 System.Windows.Forms.MouseEventArgs 0016de40 020737f0 System.Exception 0016de80 02071744 System.Windows.Forms.MouseEventArgs 0016de8c 01f5d2b8 System.EventHandler 0016de90 01f5c6cc System.Windows.Forms.Button 0016df10 02073784 System.SByte[] 0016df5c 02073684 System.NotImplementedException 0016e2a0 02073684 System.NotImplementedException 0016e2e8 01ed69f4 System.Resources.ResourceManager From there it is easy to do 0:000> !pe 02073684 Exception object: 02073684 Exception type: System.NotImplementedException Message: Die Methode oder der Vorgang sind nicht implementiert. InnerException: <none> StackTrace (generated):     SP       IP       Function     0016ECB0 006904AD WindowsFormsApplication2!WindowsFormsApplication2.Form1.F3()+0x35     0016ECC0 00690411 WindowsFormsApplication2!WindowsFormsApplication2.Form1.F2()+0x29     0016ECF0 0069038F WindowsFormsApplication2!WindowsFormsApplication2.Form1.F1(System.Object, System.EventArgs)+0x3f StackTraceString: <none> HResult: 80004001 to see the former exception. That´s all for today.
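If you want to reproduce the experiment without wiring up a Windows Forms project, the post's sample can be squeezed into a small console program. This is only a minimal adaptation (the original F1 was a button click handler, so the sender/EventArgs parameters are dropped here):

using System;

class Program
{
    static void Main()
    {
        try
        {
            F1();
        }
        catch (Exception ex)
        {
            // Only the "even worse Exception" from F1's catch block survives: the
            // NotImplementedException from F3 was replaced by F2's finally-exception,
            // which in turn was replaced here in F1.
            Console.WriteLine(ex.Message);
        }
    }

    static void F1()
    {
        try
        {
            F2();
        }
        catch (Exception)
        {
            throw new Exception("even worse Exception");
        }
    }

    static void F2()
    {
        try
        {
            F3();
        }
        finally
        {
            // Thrown while the stack is unwinding; it hides the original exception.
            throw new Exception("other exception");
        }
    }

    static void F3()
    {
        throw new NotImplementedException();
    }
}

Compiled for x64 and x86 and run under Windbg with "sxe clr", it should show the same 32/64-bit difference described above when you break in F1's catch handler, although the exact frames will of course differ from the WinForms stacks shown in the post.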

    Read the article

  • Contracting as a Software Developer in the UK

    - by Frez
Having had some 15 years' experience of working as a software contractor, I am often asked by developers who work as permanent employees (permies) about the pros and cons of working as a software consultant through my own limited company, and whether the move would be a good one for them. Whilst it is possible to contract using other financial vehicles such as umbrella companies, this article will only consider limited companies, as that is what I have experience of using. Contracting or consultancy requires a different mind-set from being a permanent member of staff, and not all developers are capable of this shift in attitude. Whilst you can look forward to an increase in the money you take home, there are real risks and expenses you would not normally be exposed to as a permie. So let us have a look at the pros and cons:

Pros:

More money - There is no doubt that whilst you are working on contracts you will earn significantly more than you would as a permanent employee. Furthermore, working through a limited company is more tax efficient.

Less politics - You really have no need to involve yourself in office politics. When the end of the day comes you can go home and not think or worry about the power struggles within the company you are contracted to. Your career progression is not tied to the company.

Expenses from gross income - All your expenses of trading as a business will come out of your company's gross income, i.e. before tax. This covers travelling expenses (provided you have not been at the same client/location for more than two years), internet subscriptions, professional subscriptions, software, hardware, accountancy services and so on.

Cons:

Work is more transient - Contracts typically range from a couple of weeks to a year, although they will most likely start at 3 months. However, most contracts are extended, either because the project you have been brought in to help with takes longer to deliver than expected, the client decides they can use you on other aspects of the project, or the client decides they would like to use you on other projects. The temporary nature of the work means that you will have down-time between contracts while you secure new opportunities, during which time your company will have no income. You may need to attend several interviews before securing a new contract.

Accountancy expenses - Your company is a separate entity and there are accountancy requirements which, unless you like paperwork, mean your company will need to appoint an accountant to prepare your company's accounts. It may also be worth purchasing some accountancy software, so talk to your accountant about this, as they may prefer you to use a particular software package so they can integrate it with their systems.

VAT - You will need to register your company for VAT.
This is tax neutral for you: the VAT you charge your clients is passed on to the government, less any VAT you reclaim on expenses, but it is additional paperwork to undertake each quarter. It is worth checking out the Fixed Rate VAT Scheme that is available, particularly after the initial expenses of setting up your company are over.

No training - Clients take you on based on your skills, not to train you when they will lose that investment at the end of the contract, so understand that it is unlikely you will receive any training funded by a client. However, learning new skills during a contract is possible, and you may choose to accept a contract on a lower rate if this is guaranteed, as it will help secure future contracts.

No financial extras - You will have no free pension, life, accident, sickness or medical insurance unless you choose to purchase them yourself. A financial advisor can give you all the necessary advice in this area, and it is worth taking seriously. A year after I started as a consultant I contracted a serious illness that kept me off work for over two months. My client was very understanding, and it could have been much worse, so it is worth considering what your options might be in the case of illness, death and retirement.

Agencies - Whilst it is possible to work directly for end clients, there are pros and cons of working through an agency. The main advantage is cash flow: you invoice the agency and they typically pay you within a week, whereas working directly for a client could have you waiting up to three months to be paid. The downside of working for agencies, especially in the current difficult times, is that they may go out of business, and you then have difficulty getting the money you are owed.

Tax investigation - It is possible that the Inland Revenue may decide to investigate your company for compliance with tax law. Insurance is available to cover you for this. My personal recommendation would be to join the PCG, as this insurance is included as a benefit of membership.

Professional indemnity - Some agencies require that you are covered by professional indemnity insurance; this is a cost you would not incur as a permie.

Travel - Unless you live in an area that has an abundance of opportunities, such as central London, it is likely that you will be travelling further, longer and with more expense than if you were permanently employed at a local company. This not only affects you monetarily, but also your quality of life and your ability to keep fit and healthy.

Obtaining finance - If you want to secure a mortgage on a property it can be more difficult or expensive, especially if you do not have three years of audited accounts to show a mortgage lender.

Caveat - This post is my personal opinion and should not be used as a definitive guide or recommendation to contracting and whether it is suitable for you as an individual, i.e. I accept no responsibility if you decide to take up contracting based on this post and you fare badly for whatever reason.

    Read the article

  • How accurate is "Business logic should be in a service, not in a model"?

    - by Jeroen Vannevel
    Situation Earlier this evening I gave an answer to a question on StackOverflow. The question: Editing of an existing object should be done in repository layer or in service? For example if I have a User that has debt. I want to change his debt. Should I do it in UserRepository or in service for example BuyingService by getting an object, editing it and saving it ? My answer: You should leave the responsibility of mutating an object to that same object and use the repository to retrieve this object. Example situation: class User { private int debt; // debt in cents private string name; // getters public void makePayment(int cents){ debt -= cents; } } class UserRepository { public User GetUserByName(string name){ // Get appropriate user from database } } A comment I received: Business logic should really be in a service. Not in a model. What does the internet say? So, this got me searching since I've never really (consciously) used a service layer. I started reading up on the Service Layer pattern and the Unit Of Work pattern but so far I can't say I'm convinced a service layer has to be used. Take for example this article by Martin Fowler on the anti-pattern of an Anemic Domain Model: There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. (...) The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it. To me, this seemed exactly what the situation was about: I advocated the manipulation of an object's data by introducing methods inside that class that do just that. However I realize that this should be a given either way, and it probably has more to do with how these methods are invoked (using a repository). I also had the feeling that in that article (see below), a Service Layer is more considered as a façade that delegates work to the underlying model, than an actual work-intensive layer. Application Layer [his name for Service Layer]: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program. Which is reinforced here: Service interfaces. Services expose a service interface to which all inbound messages are sent. You can think of a service interface as a façade that exposes the business logic implemented in the application (typically, logic in the business layer) to potential consumers. And here: The service layer should be devoid of any application or business logic and should focus primarily on a few concerns. 
It should wrap Business Layer calls, translate your Domain in a common language that your clients can understand, and handle the communication medium between server and requesting client. This is a serious contrast to other resources that talk about the Service Layer: The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction. Or the second answer to a question I've already linked: At some point, your application will want some business logic. Also, you might want to validate the input to make sure that there isn't something evil or nonperforming being requested. This logic belongs in your service layer. "Solution"? Following the guidelines in this answer, I came up with the following approach that uses a Service Layer: class UserController : Controller { private UserService _userService; public UserController(UserService userService){ _userService = userService; } public ActionResult MakeHimPay(string username, int amount) { _userService.MakeHimPay(username, amount); return RedirectToAction("ShowUserOverview"); } public ActionResult ShowUserOverview() { return View(); } } class UserService { private IUserRepository _userRepository; public UserService(IUserRepository userRepository) { _userRepository = userRepository; } public void MakeHimPay(username, amount) { _userRepository.GetUserByName(username).makePayment(amount); } } class UserRepository { public User GetUserByName(string name){ // Get appropriate user from database } } class User { private int debt; // debt in cents private string name; // getters public void makePayment(int cents){ debt -= cents; } } Conclusion All together not much has changed here: code from the controller has moved to the service layer (which is a good thing, so there is an upside to this approach). However this doesn't look like it had anything to do with my original answer. I realize design patterns are guidelines, not rules set in stone to be implemented whenever possible. Yet I have not found a definitive explanation of the service layer and how it should be regarded. Is it a means to simply extract logic from the controller and put it inside a service instead? Is it supposed to form a contract between the controller and the domain? Should there be a layer between the domain and the service layer? And, last but not least: following the original comment Business logic should really be in a service. Not in a model. Is this correct? How would I introduce my business logic in a service instead of the model?
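To make the contrast behind that closing question concrete, here is a minimal C# sketch. It reuses the post's hypothetical User, UserService and IUserRepository names (adapted to C# naming conventions); AnemicUser, PaymentService, IAnemicUserRepository and the Save method are invented here purely for the comparison. Variant A keeps the debt rule on the model with a thin coordinating service; variant B moves the rule into the service and leaves an anemic model behind.

using System;

// Variant A - behaviour on the domain model (the position taken in the answer):
// the entity owns the debt rule, the service stays a thin coordinator.
class User
{
    public string Name { get; private set; }
    public int DebtInCents { get; private set; }

    public void MakePayment(int cents)
    {
        if (cents <= 0) throw new ArgumentException("Payment must be positive", nameof(cents));
        DebtInCents -= cents;
    }
}

interface IUserRepository
{
    User GetUserByName(string name);
}

class UserService
{
    private readonly IUserRepository _userRepository;
    public UserService(IUserRepository userRepository) { _userRepository = userRepository; }

    // Coordinates, but delegates the rule to the entity.
    public void MakeHimPay(string username, int cents)
        => _userRepository.GetUserByName(username).MakePayment(cents);
}

// Variant B - behaviour in the service: the entity degrades to a bag of getters and
// setters (the "anemic domain model" Fowler warns about).
class AnemicUser
{
    public string Name { get; set; }
    public int DebtInCents { get; set; }
}

interface IAnemicUserRepository
{
    AnemicUser GetUserByName(string name);
    void Save(AnemicUser user);
}

class PaymentService
{
    private readonly IAnemicUserRepository _userRepository;
    public PaymentService(IAnemicUserRepository userRepository) { _userRepository = userRepository; }

    public void MakeHimPay(string username, int cents)
    {
        if (cents <= 0) throw new ArgumentException("Payment must be positive", nameof(cents));
        var user = _userRepository.GetUserByName(username);
        user.DebtInCents -= cents;   // the business rule now sits outside the model
        _userRepository.Save(user);
    }
}

Functionally the two variants do the same thing; the difference is where the rule lives, and therefore what you have to test, reuse and protect from being bypassed, which is exactly the trade-off the quoted sources disagree about.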

    Read the article

  • Das T5-4 TPC-H Ergebnis naeher betrachtet

    - by Stefan Hinker
By now many of you will probably have seen the new TPC-H result for the SPARC T5-4, which was submitted to the TPC on June 7. As usual, the key points of this benchmark have already been summarized by our benchmark team on "BestPerf". There is, however, quite a bit more that is worth a closer look.

Scalability
The TPC advises against comparing TPC-H results from different size classes. But even within the 3000GB class it is interesting: the SPARC T4-4 with 4 CPUs (32 cores at 3.0 GHz) delivers 205,792 QphH; the SPARC T5-4 with 4 CPUs (64 cores at 3.6 GHz) delivers 409,721 QphH. That means the result falls just 1,863 QphH, or 0.45%, short of 100% scalability, if you assume that twice the number of cores should deliver twice the result. Being a bit more demanding, you could of course expect a factor of 2.4 once you also take the higher clock rate into account. That would put the bar at 493,901 QphH, and the SPARC T5-4 would then be at 83%. Which raises the question: what did not scale here? Most likely the disk storage! That, too, is worth a closer look.

Disk storage
The report on BestPerf and the TPC's Full Disclosure Report contain some interesting details about the storage and the configuration. The SPARC T4-4 configuration used 12 2540-M2 arrays, each delivering about 1.5 GB/s of throughput, so roughly 18 GB/s in total. The arrays were apparently attached directly to the server's 24 8Gbit FC ports, two cables per array. With 2x 8Gbit ports per array the theoretical maximum would be 2 GB/s; the 1.5 GB/s actually delivered is pretty much the realistic maximum. For the SPARC T5-4 run, twice as many disks were used; each 2540-M2 array was extended with an additional disk tray. With this configuration a maximum throughput of 33 GB/s was reached (according to BestPerf) - not quite double that of the SPARC T4-4 run. To actually deliver double the throughput (36 GB/s), each of the 12 arrays would have had to deliver 3 GB/s over its four 8Gbit ports. The FDR lists only 12 dual-port FC HBAs, which explains the use of the Brocade FC switches: all four 8Gbit ports of each array were connected to the switches, which then bundled the data streams into the server's 24 16Gbit HBA ports. The theoretical maximum of each storage array would now be 4 GB/s; once you factor in protocol and "real world" overhead, the 2.75 GB/s actually delivered is not bad at all. With these numbers in mind, doubling the SPARC T4-4 result is a good achievement - and at the same time a good explanation of why it did not scale to 2.4x. As a side note: neither the SPARC T4-4 nor the SPARC T5-4 had any flash devices in the measured configuration.

The competition
Ever since the T4 systems came to market, our competitors have been working hard to leave the impression everywhere that the performance of the SPARC CPU core remains deficient. They also seem convinced that (over)sized caches and high clock rates are the only keys to real server performance.
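(Before looking at the competition's numbers: for readers who like to see the scalability arithmetic spelled out, here it is as a worked equation, using exactly the figures quoted above.)

\[
\frac{409{,}721.8}{2 \times 205{,}792.0} \approx 0.9955
\qquad\text{and}\qquad
205{,}792.0 \times 2 \times \frac{3.6\ \mathrm{GHz}}{3.0\ \mathrm{GHz}} \approx 493{,}901\ \mathrm{QphH},
\]

so the result is about 0.45% short of perfect core-count scaling, and at roughly 83% of the clock-adjusted target.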
However, when I look at the publicly available TPC-H results, this is what I see:

TPC-H @3000GB, non-clustered systems (system, QphH):
- SPARC T5-4 (3.6 GHz SPARC T5, 4/64, 2048 GB): 409,721.8
- SPARC T4-4 (3.0 GHz SPARC T4, 4/32, 1024 GB): 205,792.0
- IBM Power 780 (4.1 GHz POWER7, 8/32, 1024 GB): 192,001.1
- HP ProLiant DL980 G7 (2.27 GHz Intel Xeon X7560, 8/64, 512 GB): 162,601.7

In short: with 32 cores (at 3 GHz and 4MB L3 cache), the SPARC T4-4 delivers more QphH@3000GB than IBM does with its 32-core POWER7 (at 4.1 GHz and 32MB L3 cache), and also more than HP with a 64-core Intel Xeon system (2.27 GHz and 24MB L3 cache). I wonder where exactly SPARC is deficient here?

Now, one could of course argue that neither result is exactly new. Well, in the absence of newer results we can speculate a little. IBM's current Performance Report lists the IBM Power 780 above with an rPerf value of 425.5. A matching successor system with POWER7+ CPUs would be the Power 780+ with 64 cores, available at 3.72 GHz; it is rated at an rPerf value of 690.1, i.e. 1.62x more. So if you assume that disk storage is not the limiting factor (IBM tested with 177 SSDs; they are welcome to raise that to 400) and take IBM's own performance estimate as the basis, you may expect a theoretical result of 311,398 QphH@3000GB. That, however, would still be a long way from the SPARC T5-4 result, and even less favourable in the "per core" metric that IBM holds so dear.

Things do not look any better in the x86 world. Unfortunately Intel does not provide such handy rPerf tables, so for an estimate I have to fall back on SPECint_rate2006. (I am not a big fan of such cross-benchmark estimates; SPECcpu in particular is not well suited to estimating database performance, since almost no I/O is involved.) The HP system mentioned above is listed at SPEC with 1580 CINT2006_rate. The best result as of 2013-06-14 for the newer Intel Xeon E7-4870 with 8 CPUs is 2180 CINT2006_rate, which is at least 1.38x better. (Considering only the clock rate, you would arrive at 1.32x.) Taking this arithmetic any further is idle speculation, but for the impatient reader here is a small tabular summary:

TPC-H @3000GB performance speculation (system, QphH*, improvement over the previous generation):
- SPARC T4-4 (32 cores SPARC T4): 205,792 -> SPARC T5-4 (64 cores SPARC T5): 409,721, a 2x improvement
- IBM Power 780 (32 cores POWER7): 192,001 -> IBM Power 780+ (64 cores POWER7+): 311,398*, a 1.62x improvement
- HP ProLiant DL980 G7 (64 cores Intel Xeon X7560): 162,601 -> HP ProLiant DL980 G7 (80 cores Intel Xeon E7-4870): 224,348*, a 1.38x improvement
* Not actual results - speculative values based on rPerf (POWER7+) or SPECint_rate2006 (HP)

Of course, IBM and HP are cordially invited to refute these values. But as of today I am still waiting for current benchmark publications in this data segment.

So what can we take away from all this?
- There are several indications that disk storage was the limiting factor that kept the SPARC T5-4 from scaling beyond 2x.
- The myth that SPARC cores deliver no performance is exactly that - a myth. How about a TPC-H result for POWER7+, by the way?
- Cache is not the magic performance switch some people apparently take it for.
- Scaling a system, a CPU architecture and an operating system beyond a certain point is hard.
In the x86 world it appears to be harder still. What is missing from this list? Well, the topic of price/performance I will gladly leave to the sales people ;-) And last but not least: no, I have not moved over to marketing. But sometimes I simply cannot hold back...

Disclosure Statements
The views expressed on this blog are my own and do not necessarily reflect the views of Oracle. TPC-H, QphH, $/QphH are trademarks of Transaction Processing Performance Council (TPC). For more information, see www.tpc.org, results as of 6/7/13. Prices are in USD. SPARC T5-4 409,721.8 QphH@3000GB, $3.94/QphH@3000GB, available 9/24/13, 4 processors, 64 cores, 512 threads; SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads. SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of June 18, 2013 from www.spec.org. HP ProLiant DL980 G7 (2.27 GHz, Intel Xeon X7560): 1580 SPECint_rate2006; HP ProLiant DL980 G7 (2.4 GHz, Intel Xeon E7-4870): 2180 SPECint_rate2006.

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
      Does your CHECKDB hurt, Argenis? There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – on it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases. Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bothered to look at the reasons why - as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss. In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and were almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec, and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations. Dreaded @BlobEater What was CHECKDB doing all the time while doing so little I/O? Eating Blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to be have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table. Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathologic table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!) 
Failed Hypotheses Earlier on this week I flew down to Palo Alto, CA, to visit our Headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction” where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head. As I was looking at the problem at hand earlier on this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]) …Eureka? This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smack down with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and setup a test. Yup, that'll do it The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column: CREATE DATABASE SparseColTest; GO USE SparseColTest; GO CREATE TABLE testTable (testCol smalldatetime SPARSE NULL); GO INSERT INTO testTable (testCol) VALUES (NULL); GO 1000000 That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. In my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; This runs extremely fast, as least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column: CREATE NONCLUSTERED INDEX [badBadIndex] ON testTable (testCol) WHERE testCol IS NOT NULL; With the index in place now, let’s run DBCC CHECKDB one more time: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; In my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds. 
I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any of such indexes – they’re likely causing your consistency checks to run very, very slow. Happy CHECKDBing, -Argenis ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.

    Read the article

  • CodePlex Daily Summary for Tuesday, July 02, 2013

    CodePlex Daily Summary for Tuesday, July 02, 2013Popular ReleasesMastersign.Expressions: Mastersign.Expressions v0.4.2: added support for if(<cond>, <true-part>, <false-part>) fixed multithreading issue with rand() improved demo applicationNB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store v2.3.6 Rel0: v2.3.6 Is now DNN6 and DNN7 compatible Important : During update this install with overwrite the menu.xml setting, if you have changed this then make a backup before you upgrade and reapply your changes after the upgrade. Please view the following documentation if you are installing and configuring this module for the first time System Requirements Skill requirements Downloads and documents Step by step guide to a working store Please ask all questions in the Discussions tab. Document.Editor: 2013.26: What's new for Document.Editor 2013.26: New Insert Chart Improved User Interface Minor Bug Fix's, improvements and speed upsWsus Package Publisher: Release V1.2.1307.01: Fix an issue in the UI, approvals are not shown correctly in the 'Report' tabDirectX Tool Kit: July 2013: July 1, 2013 VS 2013 Preview projects added and updates for DirectXMath 3.05 vectorcall Added use of sRGB WIC metadata for JPEG, PNG, and TIFF SaveToWIC functions updated with new optional setCustomProps parameter and error check with optional targetFormatCore Server 2012 Powershell Script Hyper-v Manager: new_root.zip: Verison 1.0JSON Toolkit: JSON Toolkit 4.1.736: Improved strinfigy performance New serializing feature New anonymous type support in constructorsDotNetNuke® IFrame: IFrame 04.05.00: New DNN6/7 Manifest file and Azure Compatibility.VidCoder: 1.5.2 Beta: Fixed crash on presets with an invalid bitrate.Gardens Point LEX: Gardens Point LEX version 1.2.1: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.2.1 has new facilities for defining and manipulating character classes. These changes make the construction of large Unicode character classes more convenient. The runtime code for performing automaton backup has been re-implemented, and is now faster for scanners that need backup. Source CodeThe distribution contains a complete VS2010 project for the appli...ZXMAK2: Version 2.7.5.7: - fix TZX emulation (Bruce Lee, Zynaps) - fix ATM 16 colors for border - add memory module PROFI 512K; add PROFI V03 rom image; fix PROFI 3.XX configTwitter image Downloader: Twitter Image Downloader 2 with Installer: Application file with Install shield and Dot Net 4.0 redistributableUltimate Music Tagger: Ultimate Music Tagger 1.0.0.0: First release of Ultimate Music TaggerBlackJumboDog: Ver5.9.2: 2013.06.28 Ver5.9.2 (1) ??????????(????SMTP?????)?????????? (2) HTTPS???????????Outlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. 
You will have to uninstall the previous version (add/remove programs) if you had installed it ...Terminals: Version 3.0 - Release: Changes since version 2.0:Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6Python Tools for Visual Studio: 2.0 Beta: We’re pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. 
The server pack is ready for both Windows and Linux, but you might need to compi...New ProjectsALM Rangers DevOps Tooling and Guidance: Practical tooling and guidance that will enable teams to realize a faster deployment based on continuous feedback.Core Server 2012 Powershell Script Hyper-v Manager: Free core Server 2012 powershell scripts and batch files that replace the non-existent hyper-v manager, vmconnect and mstsc.Enhanced Deployment Service (EDS): EDS is a web service based utility designed to extend the deployment capabilities of administrators with the Microsoft Deployment Toolkit.ExtendedDialogBox: Libreria DialogBoxJazdy: This project is here only because we wanted to take advantage of a public git server.Mon Examen: This web interface is meant to make examinationsneet: summaryOrchard Multi-Choice Voting: A multiple choice voting Orchard module.Particle Swarm Optimization Solving Quadratic Assignment Problem: This project is submitted for the solving of QAP using PSO algorithms with addition of some modification Porjects: 23123123PPL Power Pack: PPL Power PackProperty Builder: Visual Studio tool for speeding up process of coding class properties getters and setters.RedRuler for Redline: I tried some on-screen rulers, none of them help me measure the UI element quickly based on the Redline. So I decided to created this handy RedRuler tool. Royale Living: Mahindra Royale Community PortalSearch and booking Hotel or Tours: Ð? án nghiên c?u c?a sinh viên tdt theo mô hình mvc 4SystemBuilder.Show: This tool is a helper after you create your project in visual studio to create the respective objects and interface. TalentDesk: new ptojectTcmplex: The Training Center teaches many different kind of course such as English, French, Computer hardware and computer softwareTFS Reporting Guide: Provides guidance and samples to enable TFS users to generate reports based on WIT data.Umbraco AdaptiveImages: Adaptive Images Package for UmbracoVirtualNet - A ILcode interpreter/emulator written in C++/Assembly: VirtualNet is a interpreter/emulator for running .net code in native without having to install the .Net FrameWorkVisual Blocks: Visual Blocks ????IDE ????? ??????? ????? ????/?? Visual Studio and Cloud Based Mobile Device Testing: Practical guidance enabling field to remove blockers to adoption and to use and extend the Perfecto Mobile Cloud Device testing within the context of VS.Windows 8 Time Picker for Windows Phone: A Windows Phone implementation of the Time Picker control found in the Windows 8.1 Alarms app.???? - SmallBasic?: ?????????

    Read the article

  • T4 Performance Counters explained

    - by user13346607
Now that T4 has been out for a few months, some people might have wondered which details of the new pipeline you can monitor. A "cpustat -h" lists a lot of events that can be monitored, and only very few of them are self-explanatory. I will try to give some insight on all of them; some of these "PIC events" require in-depth knowledge of the T4 pipeline, and over time I will try to explain those too. For the time being, such events should simply be ignored. (Side note: some counters changed from tape-out 1.1, *only* used in the T4 beta program, to tape-out 1.2, used in the systems shipping today. The table only lists the tape-out 1.2 counters.)

Each entry below gives the pic name (as used with cpustat), followed by a comment:

Sel-pipe-drain-cycles, Sel-0-[wait|ready], Sel-[1,2]: Sel-0-wait counts cycles a strand waits to be selected. Some reasons can be counted in detail: Sel-0-ready counts cycles a strand was ready but not selected, which can signal pipeline oversubscription; Sel-1 counts cycles only one instruction or µop was selected; Sel-2 counts cycles two instructions or µops were selected; for Sel-pipe-drain-cycles, cf. PRM footnote 8 to table 10.2.

Pick-any, Pick-[0|1|2|3]: Cycles one, two, three, no, or at least one instruction or µop is picked.

Instr_FGU_crypto: Number of FGU or crypto instructions executed on that vcpu.

Instr_ld: Ditto, for loads.

Instr_st: Ditto, for stores.

SPR_ring_ops: Ditto, for SPR ring ops.

Instr_other: Ditto, for all other instructions not listed above; PRM footnote 7 to table 10.2 lists the instructions.

Instr_all: Total number of instructions executed on that vcpu.

Sw_count_intr: Number of S/W count instructions on that vcpu (sethi %hi(fc000),%g0 - whatever that is).

Atomics: Number of atomic ops, which are LDSTUB/a, CASA/XA, and SWAP/A.

SW_prefetch: Number of PREFETCH or PREFETCHA instructions.

Block_ld_st: Block loads or stores on that vcpu.

IC_miss_nospec, IC_miss_[L2_or_L3|local|remote]_hit_nospec: Various I$ misses, distinguished by where they hit. All of these count per thread, but only primary events: T4 counts only the first occurrence of an I$ miss on a core for a certain instruction. If one strand misses in I$ this miss is counted, but if a second strand on the same core misses while the first miss is being resolved, that second miss is not counted. This flavour of I$ miss counts only misses that are caused by instructions that really commit (note the "_nospec").

BTC_miss: Branch target cache misses.

ITLB_miss: ITLB misses (synchronously counted).

ITLB_miss_asynch: Ditto, but counted asynchronously.

[I|D]TLB_fill_[8KB|64KB|4MB|256MB|2GB|trap]: H/W tablewalk events that fill the ITLB or DTLB with a translation for the corresponding page size.
The “_trap” event occurs if the HWTW was not able to fill the corresponding TLB IC_mtag_miss, IC_mtag_miss_\ [ptag_hit|ptag_miss|\ ptag_hit_way_mismatch] I$ micro tag misses, with some options for drill down Fetch-0, Fetch-0-all fetch-0 counts nr of cycles nothing was fetched for this particular strand, fetch-0-all counts cycles nothing was fetched for all strands on a core Instr_buffer_full Cycles the instruction buffer for a strand was full, thereby preventing any fetch BTC_targ_incorrect Counts all occurences of wrongly predicted branch targets from the BTC [PQ|ROB|LB|ROB_LB|SB|\ ROB_SB|LB_SB|RB_LB_SB|\ DTLB_miss]\ _tag_wait ST_q_tag_wait is listed under sl=20. These counters monitor pipeline behaviour therefore they are not strand specific: PQ_...: cycles Rename stage waits for a Pick Queue tag (might signal memory bound workload for single thread mode, cf. Mail from Richard Smith) ROB_...: cycles Select stage waits for a ROB (ReOrderBuffer) tag LB_...: cycles Select stage waits for a Load Buffer tag SB_...: cycles Select stage waits for Store Buffer tag combinations of the above are allowed, although some of these events can overlap, the counter will only be incremented once per cycle if any of these occur DTLB_...: cycles load or store instructions wait at Pick stage for a DTLB miss tag [ID]TLB_HWTW_\ [L2_hit|L3_hit|L3_miss|all] Counters for HWTW accesses caused by either DTLB or ITLB misses. Canbe further detailed by where they hit IC_miss_L2_L3_hit, IC_miss_local_remote_remL3_hit, IC_miss I$ prefetches that were dropped because they either miss in L2$ or L3$ This variant counts misses regardless if the causing instruction commits or not DC_miss_nospec, DC_miss_[L2_L3|local|remote_L3]\ _hit_nospec D$ misses either in general or detailed by where they hit cf. the explanation for the IC_miss in two flavours for an explanation of _nospec and the reasoning for two DC_miss counters DTLB_miss_asynch counts all DTLB misses asynchronously, there is no way to count them synchronously DC_pref_drop_DC_hit, SW_pref_drop_[DC_hit|buffer_full] L1-D$ h/w prefetches that were dropped because of a D$ hit, counted per core. The others count software prefetches per strand [Full|Partial]_RAW_hit_st_[buf|q] Count events where a load wants to get data that has not yet been stored, i. e. it is still inside the pipeline. The data might be either still in the store buffer or in the store queue. If the load's data matches in the SB and in the store queue the data in buffer takes precedence of course since it is younger [IC|DC]_evict_invalid, [IC|DC|L1]_snoop_invalid, [IC|DC|L1]_invalid_all Counter for invalidated cache evictions per core St_q_tag_wait Number of cycles pipeline waits for a store queue tag, of course counted per core Data_pref_[drop_L2|drop_L3|\ hit_L2|hit_L3|\ hit_local|hit_remote] Data prefetches that can be further detailed by either why they were dropped or where they did hit St_hit_[L2|L3], St_L2_[local|remote]_C2C, St_local, St_remote Store events distinguished by where they hit or where they cause a L2 cache-to-cache transfer, i.e. either a transfer from another L2$ on the same die or from a different die DC_miss, DC_miss_\ [L2_L3|local|remote]_hit D$ misses either in general or detailed by where they hit cf. 
the explanation for the IC_miss in two flavours for an explanation of _nospec and the reasoning for two DC_miss counters L2_[clean|dirty]_evict Per core clean or dirty L2$ evictions L2_fill_buf_full, L2_wb_buf_full, L2_miss_buf_full Per core L2$ buffer events, all count number of cycles that this state was present L2_pipe_stall Per core cycles pipeline stalled because of L2$ Branches Count branches (Tcc, DONE, RETRY, and SIT are not counted as branches) Br_taken Counts taken branches (Tcc, DONE, RETRY, and SIT are not counted as branches) Br_mispred, Br_dir_mispred, Br_trg_mispred, Br_trg_mispred_\ [far_tbl|indir_tbl|ret_stk] Counter for various branch misprediction events.  Cycles_user counts cycles, attribute setting hpriv, nouser, sys controls addess space to count in Commit-[0|1|2], Commit-0-all, Commit-1-or-2 Number of times either no, one, or two µops commit for a strand. Commit-0-all counts number of times no µop commits for the whole core, cf. footnote 11 to table 10.2 in PRM for a more detailed explanation on how this counters interacts with the privilege levels
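    To see a few of these counters in action, the following minimal sketch samples Instr_all and DC_miss with cpustat and derives a D$ misses-per-thousand-instructions figure. Treat it as an illustration only: the "pic0=...,pic1=..." event-specifier format and the column layout of cpustat's output are assumptions here, so check cpustat(1M) on your system before relying on it.

    #!/usr/bin/env python3
    """Sketch: sample two T4 PIC events with cpustat and derive a simple ratio.

    Assumptions (verify against cpustat(1M) on your system):
      * cpustat accepts an event list of the form "pic0=<event>,pic1=<event>"
      * data lines of the output end with the two counter values
    """
    import subprocess

    EVENTS = "pic0=Instr_all,pic1=DC_miss"   # counters described in the list above
    INTERVAL, SAMPLES = 1, 5                 # five one-second samples

    def sample(events=EVENTS):
        out = subprocess.run(
            ["cpustat", "-c", events, str(INTERVAL), str(SAMPLES)],
            capture_output=True, text=True, check=True).stdout
        instr = dc_miss = 0
        for line in out.splitlines():
            cols = line.split()
            # keep only data lines whose last two columns are counter values
            if len(cols) >= 2 and cols[-1].isdigit() and cols[-2].isdigit():
                instr += int(cols[-2])       # pic0 = Instr_all
                dc_miss += int(cols[-1])     # pic1 = DC_miss
        return instr, dc_miss

    if __name__ == "__main__":
        instr, dc_miss = sample()
        if instr:
            print(f"D$ misses per 1000 instructions: {1000.0 * dc_miss / instr:.2f}")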

    Read the article

  • HTG Explains: Should You Build Your Own PC?

    - by Chris Hoffman
    There was a time when every geek seemed to build their own PC. While the masses bought eMachines and Compaqs, geeks built their own more powerful and reliable desktop machines for cheaper. But does this still make sense? Building your own PC still offers as much flexibility in component choice as it ever did, but prebuilt computers are available at extremely competitive prices. Building your own PC will no longer save you money in most cases. The Rise of Laptops It’s impossible to look at the decline of geeks building their own PCs without considering the rise of laptops. There was a time when everyone seemed to use desktops — laptops were more expensive and significantly slower in day-to-day tasks. With the diminishing importance of computing power — nearly every modern computer has more than enough power to surf the web and use typical programs like Microsoft Office without any trouble — and the rise of laptop availability at nearly every price point, most people are buying laptops instead of desktops. And, if you’re buying a laptop, you can’t really build your own. You can’t just buy a laptop case and start plugging components into it — even if you could, you would end up with an extremely bulky device. Ultimately, to consider building your own desktop PC, you have to actually want a desktop PC. Most people are better served by laptops. Benefits to PC Building The two main reasons to build your own PC have been component choice and saving money. Building your own PC allows you to choose all the specific components you want rather than have them chosen for you. You get to choose everything, including the PC’s case and cooling system. Want a huge case with room for a fancy water-cooling system? You probably want to build your own PC. In the past, this often allowed you to save money — you could get better deals by buying the components yourself and combining them, avoiding the PC manufacturer markup. You’d often even end up with better components — you could pick up a more powerful CPU that was easier to overclock and choose more reliable components so you wouldn’t have to put up with an unstable eMachine that crashed every day. PCs you build yourself are also likely more upgradable — a prebuilt PC may have a sealed case and be constructed in such a way to discourage you from tampering with the insides, while swapping components in and out is generally easier with a computer you’ve built on your own. If you want to upgrade your CPU or replace your graphics card, it’s a definite benefit. Downsides to Building Your Own PC It’s important to remember there are downsides to building your own PC, too. For one thing, it’s just more work — sure, if you know what you’re doing, building your own PC isn’t that hard. Even for a geek, researching the best components, price-matching, waiting for them all to arrive, and building the PC just takes longer. Warranty is a more pernicious problem. If you buy a prebuilt PC and it starts malfunctioning, you can contact the computer’s manufacturer and have them deal with it. You don’t need to worry about what’s wrong. If you build your own PC and it starts malfunctioning, you have to diagnose the problem yourself. What’s malfunctioning, the motherboard, CPU, RAM, graphics card, or power supply? Each component has a separate warranty through its manufacturer, so you’ll have to determine which component is malfunctioning before you can send it off for replacement. Should You Still Build Your Own PC? 
Let’s say you do want a desktop and are willing to consider building your own PC. First, bear in mind that PC manufacturers are buying in bulk and getting a better deal on each component. They also have to pay much less for a Windows license than the $120 or so it would cost you to buy your own Windows license. This is all going to wipe out the cost savings you’ll see — with everything all told, you’ll probably spend more money building your own average desktop PC than you would picking one up from Amazon or the local electronics store. If you’re an average PC user who uses your desktop for the typical things, there’s no money to be saved from building your own PC. But maybe you’re looking for something higher end. Perhaps you want a high-end gaming PC with the fastest graphics card and CPU available. Perhaps you want to pick out each individual component and choose the exact components for your gaming rig. In this case, building your own PC may be a good option. As you start to look at more expensive, high-end PCs, you may start to see a price gap — but you may not. Let’s say you wanted to blow thousands of dollars on a gaming PC. If you’re looking at spending this kind of money, it would be worth comparing the cost of individual components versus a prebuilt gaming system. Still, the actual prices may surprise you. For example, if you wanted to upgrade Dell’s $2293 Alienware Aurora to include a second NVIDIA GeForce GTX 780 graphics card, you’d pay an additional $600 on Alienware’s website. The same graphics card costs $650 on Amazon or Newegg, so you’d be spending more money building the system yourself. Why? Dell’s Alienware gets bulk discounts you can’t get — and this is Alienware, which was once regarded as selling ridiculously overpriced gaming PCs to people who wouldn’t build their own. Building your own PC still allows you to get the most freedom when choosing and combining components, but this is only valuable to a small niche of gamers and professional users — most people, even average gamers, would be fine going with a prebuilt system. If you’re an average person or even an average gamer, you’ll likely find that it’s cheaper to purchase a prebuilt PC rather than assemble your own. Even at the very high end, components may be more expensive separately than they are in a prebuilt PC. Enthusiasts who want to choose all the individual components for their dream gaming PC and want maximum flexibility may want to build their own PCs. Even then, building your own PC these days is more about flexibility and component choice than it is about saving money. In summary, you probably shouldn’t build your own PC. If you’re an enthusiast, you may want to — but only a small minority of people would actually benefit from building their own systems. Feel free to compare prices, but you may be surprised which is cheaper. Image Credit: Richard Jones on Flickr, elPadawan on Flickr, Richard Jones on Flickr

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover:

SQLU part 1 - What and why of database testing
SQLU part 2 - What and why of database refactoring
SQLU part 3 - Tools of the trade

This is the second part of the series and in it we’ll take a look at what database refactoring is and why we do it.

Why refactor a database
To know why we refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module’s input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven’t broken the input/output behavior before refactoring. If you haven’t, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye, which is one more point in favor of having a database abstraction layer. And no, ORMs don’t fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don’t be one of them and resist the lure of the dark side. And it’s a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white box tests will break when we refactor.

What to refactor in a database
Refactoring databases doesn’t happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

Adding or removing database schema objects
Adding, removing or changing table columns in any way, adding constraints, keys, etc… All of these can be counted as internal changes not visible to the data consumer. But each of these carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn’t exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered with black box tests we can make sure something like that does not happen (hopefully at all).

Changing physical structures
Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders of magnitude performance improvement. Won’t that make users happy? But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can’t do successful refactoring!

Fixing bad code
We all have some bad code in our systems. We usually refer to such code as a code smell because it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc… Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won’t have a clue about that until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it’s informative. Refactoring this includes replacing the * with column names and most likely changes to the application using the database.

Breaking apart huge stored procedures
Have you ever seen a stored procedure that was 2000 lines long? I have. It’s not pretty. It hurts the eyes and sucks away the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I’m willing to bet that 100% of the time they don’t have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database. That’s the application’s part. Refactoring such behemoths requires writing lots of edge case tests for the stored procedure input/output behavior and only then starting to refactor it. First we split the logic inside into smaller parts like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we’ve successfully modularized the database code it’s best to transfer that logic into the applications consuming it. This only leaves the stored procedure with common data manipulation logic. Of course this isn’t always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we’ve actually improved the codebase in some way.

Refactoring is not a popular chore amongst developers or managers. The former don’t like fixing old code, the latter can’t see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I’ve given you some ideas on how to get started. In the last post of the series we’ll take a look at the tools to use and an example of testing and refactoring.
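    Since automated tests are the prerequisite this post keeps coming back to, here is a minimal sketch of the kind of black-box input/output test you could run before and after refactoring. Everything named in it is a hypothetical placeholder (the connection string, the dbo.GetOrdersByCustomer procedure, the expected rows); it only illustrates the workflow, not anything from the original post.

    """Minimal black-box regression test sketch for a stored procedure.

    The DSN, procedure name and expected rows below are hypothetical
    placeholders; the point is the before/after-refactoring workflow.
    """
    import unittest
    import pyodbc

    CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                "SERVER=localhost;DATABASE=Sales;Trusted_Connection=yes")

    class GetOrdersByCustomerTests(unittest.TestCase):
        def setUp(self):
            self.conn = pyodbc.connect(CONN_STR)

        def tearDown(self):
            self.conn.close()

        def test_known_customer_returns_expected_rows(self):
            # Input/output contract: the same input must yield the same rows
            # before and after the procedure's internals are refactored.
            cur = self.conn.cursor()
            rows = cur.execute("{CALL dbo.GetOrdersByCustomer (?)}", 42).fetchall()
            self.assertEqual(len(rows), 3)  # expected row count for customer 42
            self.assertEqual({r.OrderId for r in rows}, {1001, 1002, 1003})

        def test_unknown_customer_returns_no_rows(self):
            cur = self.conn.cursor()
            rows = cur.execute("{CALL dbo.GetOrdersByCustomer (?)}", -1).fetchall()
            self.assertEqual(rows, [])

    if __name__ == "__main__":
        unittest.main()

    Run the suite against the current code to capture the contract, refactor, then run it again; any change in the assertions' outcome means the refactoring altered input/output behavior.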

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/  When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter a standard cell phone plan, where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold. Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and for power, the people to run the systems, and so on as an ongoing (and sometimes not factored in) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs. The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly what the air conditioning costs, or the team of system administrators?
This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business. There are three primary methods that I’ve been successful with in estimating the cost. None are perfect, all are demand-driven. The general process is to lay out a matrix of components, units, and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

Simple Estimation
The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with a structure like this: Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type.

Measure and Project
A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually smaller, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.
Sample and Estimate
Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it for analysis, and then use standard statistical and other predictive methods, such as regression, to project what the costs will be in the future. I normally advise that the architect also extrapolate a unit cost from those metrics. This is the information that should be reported back to the executives who pay the bills: the past cost, future projected costs, and unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and sample size is key (in this case I recommend using the entire population, not a smaller sample).

References and Tools
Articles:
http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
http://technet.microsoft.com/en-us/magazine/gg213848.aspx
http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx
Other Tools:
http://cloud-assessment.com/
http://communities.quest.com/community/cloud_tools
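    As a concrete illustration of the components x units x cost-per-unit matrix described above, here is a minimal sketch in Python. The component names, unit prices and usage figures are placeholders, not current Azure rates; substitute the numbers from the official pricing pages referenced in the post.

    """Minimal cost-matrix sketch: components x unit price x estimated usage.

    All prices and usage figures below are illustrative placeholders, not
    actual Windows Azure / SQL Azure rates.
    """

    # component -> (unit of measure, assumed cost per unit in USD)
    RATE_CARD = {
        "compute_small_instance": ("hour", 0.12),
        "storage":                ("GB-month", 0.10),
        "storage_transactions":   ("10k transactions", 0.01),
        "bandwidth_out":          ("GB", 0.12),
        "sql_azure_db":           ("DB-month", 9.99),
    }

    # estimated monthly usage of each component for the application
    ESTIMATED_USAGE = {
        "compute_small_instance": 2 * 730,   # two instances running all month
        "storage":                50,
        "storage_transactions":   120,
        "bandwidth_out":          40,
        "sql_azure_db":           1,
    }

    def monthly_cost(rates, usage):
        """Return (per-component cost lines, cumulative total) for one month."""
        lines, total = [], 0.0
        for component, units in usage.items():
            unit_name, unit_cost = rates[component]
            cost = units * unit_cost
            total += cost
            lines.append((component, unit_name, unit_cost, units, cost))
        return lines, total

    if __name__ == "__main__":
        lines, total = monthly_cost(RATE_CARD, ESTIMATED_USAGE)
        for component, unit, rate, used, cost in lines:
            print(f"{component:24s} {used:8.0f} x {rate:6.2f} per {unit:16s} = {cost:8.2f}")
        print(f"{'cumulative monthly cost':24s} {total:52.2f}")

    Dividing the cumulative total by the expected number of successful transactions gives the "per click" unit cost the post recommends reporting back to the business.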

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children’s fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character’s attributes in correlation with both software development models, a trend appears: the Waterfall Model closely resembles the Tortoise in that it is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise can be quoted several times in the story saying “Slow and steady wins the race.” This is the perfect mantra for the Waterfall Model, in that this model is seen as cumbersome and slow moving.

Waterfall Model Phases

Requirement Analysis & Definition
This phase focuses on defining requirements for a project that is to be developed and determining if the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output for this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine if any value will be gained by completing the project.

System Design
This phase focuses primarily on the actual architectural design of a system, and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase. However, major environmental decisions like hardware and platform choices are typically made in this phase. Furthermore, the basic goal of this phase is to design the application at the system level, in which classes, interfaces, and interactions are defined. Additionally, scalability, distribution and reliability should be considered in all decisions. The desired output for this phase is a functional design document that states all of the architectural decisions that have been made in regards to the project, as well as diagrams like sequence and class diagrams.

Software Design
This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document.

Implementation / Coding
This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and are integrated as the system is put together as part of a whole system.

Software Integration & Verification
This phase primarily focuses on testing each of the units of code developed as well as testing the system as a whole. There are two basic types of testing at this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit to ensure that it performs its desired task. Integration testing tests the system as a whole because it focuses on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected results and actual results.
System Verification
This phase primarily focuses on testing the system as a whole against the list of project requirements and the desired operating environment.

Operation & Maintenance
This phase primarily focuses on handing the completed project off to the customer so that they can verify that all of their requirements have been met based on their original requirements. This phase will also validate the correctness of their requirements and whether any changes need to be made. In addition, any problems not resolved in the previous phase will be handled in this section.

The Waterfall Model’s linear and sequential methodology does offer a project certain advantages and disadvantages.

Advantages of the Waterfall Model
Simplistic to implement and execute for projects and/or company wide
Limited demand on resources
Large emphasis on documentation

Disadvantages of the Waterfall Model
Completed phases cannot be revisited regardless of whether issues arise within a project
Accurate requirements are never gathered prior to the completion of the requirements phase due to the lack of clarification regarding the client’s desires
Small changes or errors that arise in applications may cause additional problems
The client cannot change any requirements once the requirements phase has been completed, leaving them no option for changes as their desires change
Excess documentation
Phases are cumbersome and slow moving

Learn more about the major processes in the Software Development Life Cycle and the Waterfall Model.

Conversely, the Hare shares similar traits with the Prototyping software development model, in that ideas are rapidly converted to basic working examples and subsequent changes are made to quickly align the project with the customer’s desires as they are formulated and as the software strays from the customer’s vision. The basic concept of prototyping is to eliminate the use of well-defined project requirements. Projects are allowed to grow as the customer’s needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements become clearer. This process allows customers to feel their way around the application to ensure that they are developing exactly what they want. This model also works well for determining the feasibility of certain approaches in regards to an application. Prototypes allow for quickly developing examples of implementing specific functionality based on certain techniques.

Advantages of Prototyping
Active participation from users and customers
Allows customers to change their mind in specifying requirements
Customers get a better understanding of the system as it is developed
Earlier bug/error detection
Promotes communication with customers
The prototype could be used as the final production system
Reduced time needed to develop applications compared to the Waterfall method

Disadvantages of Prototyping
Promotes constantly redefining project requirements, which can cause major system rewrites
Potential for increased complexity of a system as the scope of the system expands
Customers could mistake the prototype for the working version
Implementation compromises can increase complexity when applying updates and/or application fixes

When companies are trying to decide between the Waterfall model and the Prototype model, they need to evaluate the benefits and disadvantages of both models.
Typically, smaller companies or projects that have major time constraints head for more of a Prototype model approach because it can reduce the time needed to complete the project; there is more of a focus on building the project and less on defining requirements and scope prior to its start. On the other hand, companies with well-defined requirements and the time allowed to generate proper documentation should steer towards more of a Waterfall model, because they are in a position to obtain clarified requirements and to design an optimal solution prior to the start of coding on a project.

    Read the article

  • SQL SERVER – Create a Very First Report with the Report Wizard

    - by Pinal Dave
    This example is from the Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. What is the report Wizard? In today’s world automation is all around you. Henry Ford began building his Model T automobiles on a moving assembly line a century ago and changed the world. The moving assembly line allowed Ford to build identical cars quickly and cheaply. Henry Ford said in his autobiography “Any customer can have a car painted any color that he wants so long as it is black.” Today you can buy a car straight from the factory with your choice of several colors and with many options like back up cameras, built-in navigation systems and heated leather seats. The assembly lines now use robots to perform some tasks along with human workers. When you order your new car, if you want something special, not offered by the manufacturer, you will have to find a way to add it later. In computer software, we also have “assembly lines” called wizards. A wizard will ask you a series of questions, often branching to specific questions based on earlier answers, until you get to the end of the wizard. These wizards are used for many things, from something simple like setting up a rule in Outlook to performing administrative tasks on a server. Often, a wizard will get you part of the way to the end result, enough to get much of the tedious work out of the way. Once you get the product from the wizard, if the wizard is not capable of doing something you need, you can tweak the results. Create a Report with the Report Wizard Let’s get started with your first report!  Launch SQL Server Data Tools (SSDT) from the Start menu under SQL Server 2012. Once SSDT is running, click New Project to launch the New Project dialog box. On the left side of the screen expand Business Intelligence and select Reporting Services. Configure the properties as shown in . Be sure to select Report Server Project Wizard as the type of report and to save the project in the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Project folder. Click OK and wait for the Report Wizard to launch. Click Next on the Welcome screen.  On the Select the Data Source screen, make sure that New data source is selected. Type JProCo as the data source name. Make sure that Microsoft SQL Server is selected in the Type dropdown. Click Edit to configure the connection string on the Connection Properties dialog box. If your SQL Server database server is installed on your local computer, type in localhost for the Server name and select the JProCo database from the Select or enter a database name dropdown. Click OK to dismiss the Connection Properties dialog box. Check Make this a shared data source and click Next. On the Design the Query screen, you can use the query builder to build a query if you wish. Since this post is not meant to teach you T-SQL queries, you will copy all queries from files that have been provided for you. In the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Resources folder open the sales by employee.sql file. Copy and paste the code from the file into the Query string Text Box. Click Next. On the Select the Report Type screen, choose Tabular and click Next. On the Design the Table screen, you have to figure out the groupings of the report. How do you do this? Well, you often need to know a bit about the data and report requirements. I often draw the report out on paper first to help me determine the groups. In the case of this report, I could group the data several ways. 
Do I want to see the data grouped by Year and Month? Do I want to see the data grouped by Employee or Category? The only thing I know for sure about this ahead of time is that the TotalSales goes in the Details section. Let’s assume that the CIO asked to see the data grouped first by Year and Month, then by Category. Let’s move the fields to the right-hand side. This is done by selecting Page > Group or Details >, as shown in, and click Next. On the Choose the Table Layout screen, select Stepped and check Include subtotals and Enable drilldown, as shown in. On the Choose the Style screen, choose any color scheme you wish (unlike the Model T) and click Next. I chose the default, Slate. On the Choose the Deployment Location screen, change the Deployment folder to Chapter 3 and click Next. At the Completing the Wizard screen, name your report Employee Sales and click Finish. After clicking Finish, the report and a shared data source will appear in the Solution Explorer and the report will also be visible in Design view. Click the Preview tab at the top. This report expects the user to supply a year which the report will then use as a filter. Type in a year between 2006 and 2013 and click View Report. Click the plus sign next to the Sales Year to expand the report to see the months, then expand again to see the categories and finally the details. You now have the assembly line report completed, and you probably already have some ideas on how to improve the report. Tomorrow’s Post Tomorrow’s blog post will show how to create your own data sources and data sets in SSRS. If you want to learn SSRS in easy to simple words – I strongly recommend you to get Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • Answers to Your Common Oracle Database Lifecycle Management Questions

    - by Scott McNeil
    We recently ran a live webcast on Strategies for Managing Oracle Database's Lifecycle. There were tons of questions from our audience that we simply could not get to during the hour long presentation. Below are some of those questions along with their answers. Enjoy! Question: In the webcast the presenter talked about “gold” configuration standards, for those who want to use this technique, could you recommend a best practice to consider or follow? How do I get started? Answer:Gold configuration standardization is a quick and easy way to improve availability through consistency. Start by choosing a reference database and saving the configuration to the Oracle Enterprise Manager repository using the Save Configuration feature. Next create a comparison template using the Oracle provided template as a starting point and modify the ignored properties to eliminate expected differences in your environment. Finally create a comparison specification using the comparison template you created plus your saved gold configuration and schedule it to run on a regular basis. Don’t forget to fill in the email addresses of those you want to notify upon drift detection. Watch the database configuration management demo to learn more. Question: Can Oracle Lifecycle Management Pack for Database help with patching an Oracle Real Application Cluster (RAC) environment? Answer: Yes, Oracle Enterprise Manager supports both parallel and rolling patch application of Oracle Real Application Clusters. The use of rolling patching is recommended as there is no downtime involved. For more details watch this demo. Question: What are some of the things administrators can do to control configuration drift? Why is it important? Answer:Configuration drift is one of the main causes of instability and downtime of applications. Oracle Enterprise Manager makes it easy to manage and control drift using scheduled configuration comparisons combined with comparison templates. Question: Does Oracle Enterprise Manager 12c Release 2 offer an incremental update feature for "gold" images? For instance, if the source binary has a higher PSU level, what is the best approach to update the existing "gold" image in the software library? Do you have to create a new image or can you just update the original one? Answer:Provisioning Profiles (Gold images) can contain the installation files and database configuration templates. Although it is possible to make some changes to the profile after creation (mainly to configuration), it is normally recommended to simply create a new profile after applying a patch to your reference database. Question: The webcast talked about enforcing in-house standards, does Oracle Enterprise Manager 12c offer verification of your databases and systems to those standards? For example, the initial "gold" image has been massively deployed over time, and there may be some changes to it. How can you do regular checks from Enterprise Manager to ensure the in-house standards are being enforced? Answer:There are really two methods to validate conformity to standards. The first method is to use gold standards which you compare other databases to report unwanted differences. This method uses a new comparison template technology which allows users to ignore known differences (i.e. SID, Start time, etc) which results in a report only showing important or non-conformant differences. This method is quick to setup and configure and recommended for those who want to get started validating compliance quickly. 
The second method leverages the new compliance framework which allows the creation of specific and robust validations. These compliance rules are grouped into standards which can be assigned to databases quickly and easily. Compliance rules allow for targeted and more sophisticated validation beyond the basic equals operation available in the comparison method. The compliance framework can be used to implement just about any internal or industry standard. The compliance results will track current and historic compliance scores at the overall and individual database targets. When the issue is resolved, the score is automatically affected. Compliance framework is the recommended long term solution for validating compliance using Oracle Enterprise Manager 12c. Check out this demo on database compliance to learn more. Question: If you are using the integration between Oracle Enterprise Manager and My Oracle Support in an "offline" mode, how do you know if you have the latest My Oracle Support metadata? Answer:In Oracle Enterprise Manager 12c Release 2, you now only need to download one zip file containing all of the metadata xmls files. There is no indication that the metadata has changed but you could run a checksum on the file and compare it to the previously downloaded version to see if it has changed. Question: What happens if a patch fails while administrators are applying it to a database or system? Answer:A large portion of Oracle Enterprise Manager's patch automation is the pre-requisite checks that happen to ensure the highest level of confidence the patch will successfully apply. It is recommended you test the patch in a non-production environment and save the patch plan as a template once successful so you can create new plans using the saved template. If you are using the recommended ‘out of place’ patching methodology, there is no urgency because the database is still running as the cloned Oracle home is being patched. Users can address the issue and restart the patch procedure at the point it left off. If you are using 'in place' method, you can address the issue and continue where the procedure left off. Question: Can Oracle Enterprise Manager 12c R2 compare configurations between more than one target at the same time? Answer:Oracle Enterprise Manager 12c can compare any number of target configurations at one time. This is the basis of many important use cases including Configuration Drift Management. These comparisons can also be scheduled on a regular basis and emails notification sent should any differences appear. To learn more about configuration search and compare watch this demo. Question: How is data comparison done since changes are taking place in a live production system? Answer:There are many things to keep in mind when using the data comparison feature (as part of the Change Management ability to compare table data). It was primarily intended to be used for maintaining consistency of important but relatively static data. For example, application seed data and application setup configuration. This data does not change often but is critical when testing an application to ensure results are consistent with production. It is not recommended to use data comparison on highly dynamic data like transactional tables or very large tables. Question: Which versions of Oracle Database can be monitored through Oracle Enterprise Manager 12c? Answer:Oracle Database versions: 9.2.0.8, 10.1.0.5, 10.2.0.4, 10.2.0.5, 11.1.0.7, 11.2.0.1, 11.2.0.2, 11.2.0.3. 
Watch the On-Demand Webcast. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter. Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • Finally, upgrade from Nokia X3 to Samsung Galaxy S III

    This time, something slightly different but nonetheless not less interesting, hopefully. Living on a remote island like Mauritius, ill-praised 'Cyber Island' in the Indian Ocean, has its advantages in life style and relaxed environment to life in but in terms of technological aspects it can be quite a nightmare. Well, I guess this might be different story to report about... one day. Cyber Island Mauritius Despite it's shiny advertisement as Cyber Island and business in ICT hub to Africa, Mauritius is not on the latest track of available models in computer hardware or, in the context of this article, cellulars or smart-phone, or communication technology in general. Okay, I have to admit that this statement is only partly true. Money can buy, even here in Mauritius. Luckily, there are ways and ways to deal with this outcry of modern, read: technological, civilisation issues. Online shopping you might think? Yes, for sure, until you discover in your checkout procedure that a small island in the Indian Ocean isn't a preferred destination for delivery and the precious time you spent on putting your items into your cart and feeding your personal level of anticipation gets ruined on the last stint. Ordering from abroad saves you money Anyway, I got in touch with my personal courier and luckily there were some extra-kilos left in the luggage. First obstacle sorted, we have a Transporter! Okay, on the next occasion off to Amazon online and using their Prime service for fast delivery. Actually, the order was placed on Saturday evening and everything got delivered on Tuesday morning - nice job in less than 72 hours. Okay, among the items of that shopping rush I ordered a shiny Samsung Galaxy S III 16GB in oceanic blue - did I mention, that you hardly get a blue model in Mauritius? - for my BWE. Interesting side-notes: First, Amazon Germany dropped the prices for roughly 30% on the S3, and we got the 16GB model for less than 500 Euro (or approx. Rs. 19.500,-) compared to the usual Rs. 27.000,- on the local market. It even varies whether the local price is inclusive or exclusive VAT (15%). Second, since a while she was bothering me to get an iPhone and an iPad for her, fair enough I thought, decent hardware, posh design and reliable services. Until we watched the 'magical' introduction of Samsung's new models at the IFA exhibition, she read the bashing comments on Google+ on the iPhone 5 and I gave her a brief summary on the law suit between Apple and Samsung in the USA. So, yes, Samsung USA is right, the next big thing is already here - literally. My BWE loves the look and touch of the Galaxy S3. And for me it was more cost-effective in terms of purchases done at the App Store, ups, Play Store. Transfer of contacts, text messages and media files Okay, now that the hardware is in place, how to transfer all those contacts, text messages, media files, etc. between those two devices? In the past, I used to use the Nokia Communication Suite between various models but now for Android? Well, as usual Google and Bing are reliable friends and among the first hits I came across an article about How to Transfer Contacts from Nokia to Android. Couldn't be easier, right? Well, sort of... my main Windows systems are already running on Windows 8, and this actually caused problems with the mobile/smart-phone device drivers. The article provides the download for an older version 1.10 which upgrades to 2.11 (as time of writing this entry) but both couldn't get the Galaxy S3 and the Nokia connected. Shame on me... 
the product page clearly doesn't mention Windows 8 (for now) and Windows 8 isn't available for the general audience at all... After I took a spare machine running on Windows Vista everything went smooth. Software installed, upgrade done, device drivers for Android automatically downloaded and installed, and the same painless routine for the Nokia part. I think, I rebooted the system twice during the whole setup procedure but hey, it was more or less a distraction while coding some stuff in ASP.NET MVC and Telerik Kendo UI. The transfer of contacts and text messages was done via Wondershare MobileGo for Android, and all media files by moving the additional microSD card from one device to the other. But even without an external SD card, it would have been very easy to copy the files via Windows Explorer directly. Little catch and excellent service Fine, we are almost done and the only step left is to shift the SIM card... Ouch, gotcha! The X3 uses a standard size SIM card while the S III only accepts microSIM form factor. What an irony, bigger smartphone needs smaller SIM card. Luckily, the next showroom of Emtel is just 5 mins away up the road, and the service staff over there know their job. Finally, after roughly 10 mins of paper work, activation and small chit-chat, the S3 came to life on the mobile network. Owning a smart-phone now and knowing that my BWE would like to interact more on social networks away from home, especially to upload pictures and provide local 'check-ins', I activated a data package for her in advance, too. Even that it is Saturday, everything was already done and ready to be used. Nice bonus: The Emtel clerk directly offered me to set up the configuration for the Emtel data services, yes sure, go ahead, this saves me to search for that in the settings. Okay, spoiler-alert here, setting a static APN to access the Emtel network and the internet wouldn't be a challenge. But hey, she already had the phone in her hands and I could keep my eyes on the children. Well done, Emtel! Resume Thanks to the useful software package by Wondershare is was a hands-free experience to transfer all the data from a Nokia mobile on Symbian S60 to a Samsung Galaxy S III on Android Ice Cream Sandwich (ICS). In the future, this wont be a serious issue at all anymore thanks to synchronisation services and cloud storage. And for now, I'm only waiting for the official upgrades for Jelly Bean.

    Read the article

  • Developing Mobile Applications: Web, Native, or Hybrid?

    - by Michelle Kimihira
    Authors: Joe Huang, Senior Principal Product Manager, Oracle Mobile Application Development Framework  and Carlos Chang, Senior Principal Product Director The proliferation of mobile devices and platforms represents a game-changing technology shift on a number of levels. Companies must decide not only the best strategic use of mobile platforms, but also how to most efficiently implement them. Inevitably, this conversation devolves to the developers, who face the task of developing and supporting mobile applications—not a simple task in light of the number of devices and platforms. Essentially, developers can choose from the following three different application approaches, each with its own set of pros and cons. Native Applications: This refers to apps built for and installed on a specific platform, such as iOS or Android, using a platform-specific software development kit (SDK).  For example, apps for Apple’s iPhone and iPad are designed to run specifically on iOS and are written in Xcode/Objective-C. Android has its own variation of Java, Windows uses C#, and so on.  Native apps written for one platform cannot be deployed on another. Native apps offer fast performance and access to native-device services but require additional resources to develop and maintain each platform, which can be expensive and time consuming. Mobile Web Applications: Unlike native apps, mobile web apps are not installed on the device; rather, they are accessed via a Web browser.  These are server-side applications that render HTML, typically adjusting the design depending on the type of device making the request.  There are no program coding constraints for writing server-side apps—they can be written in Java, C, PHP, etc., it doesn’t matter.  Instead, the server detects what type of mobile browser is pinging the server and adjusts accordingly. For example, it can deliver fully JavaScript and CSS-enabled content to smartphone browsers, while downgrading gracefully to basic HTML for feature phone browsers. Mobile apps work across platforms, but are limited to what you can do through a browser and require Internet connectivity. For certain types of applications, these constraints may not be an issue. Oracle supports mobile web applications via ADF Faces (for tablets) and ADF Mobile browser (Trinidad) for smartphone and feature phones. Hybrid Applications: As the name implies, hybrid apps combine technologies from native and mobile Web apps to gain the benefits each. For example, these apps are installed on a device, like their pure native app counterparts, while the user interface (UI) is based on HTML5.  This UI runs locally within the native container, which usually leverages the device’s browser engine.  The advantage of using HTML5 is a consistent, cross-platform UI that works well on most devices.  Combining this with the native container, which is installed on-device, provides mobile users with access to local device services, such as camera, GPS, and local device storage.  Native apps may offer greater flexibility in integrating with device native services.  However, since hybrid applications already provide device integrations that typical enterprise applications need, this is typically less of an issue.  The new Oracle ADF Mobile release is an HTML5 and Java hybrid framework that targets mobile app development to iOS and Android from one code base. So, Which is the Best Approach? The short answer is – the best choice depends on the type of application you are developing.  
For instance, animation-intensive apps such as games would favor native apps, while hybrid applications may be better suited for enterprise mobile apps because they provide multi-platform support. Just for starters, the following issues must be considered when choosing a development path. Application Complexity: How complex is the application? A quick app that accesses a database or Web service for some data to display?  You can keep it simple, and a mobile Web app may suffice. However, for a mobile/field worker type of applications that supports mission critical functionality, hybrid or native applications are typically needed. Richness of User Interactivity: What type of user experience is required for the application?  Mobile browser-based app that’s optimized for mobile UI may suffice for quick lookup or productivity type of applications.  However, hybrid/native application would typically be required to deliver highly interactive user experiences needed for field-worker type of applications.  For example, interactive BI charts/graphs, maps, voice/email integration, etc.  In the most extreme case like gaming applications, native applications may be necessary to deliver the highly animated and graphically intensive user experience. Performance: What type of performance is required by the application functionality?  For instance, for real-time look up of data over the network, mobile app performance depends on network latency and server infrastructure capabilities.  If consistent performance is required, data would typically need to be cached, which is supported on hybrid or native applications only. Connectivity and Availability: What sort of connectivity will your application require? Does the app require Web access all the time in order to always retrieve the latest data from the server? Or do the requirements dictate offline support? While native and hybrid apps can be built to operate offline, Web mobile apps require Web connectivity. Multi-platform Requirements: The terms “consumerization of IT” and BYOD (bring your own device) effectively mean that the line between the consumer and the enterprise devices have become blurred. Employees are bringing their personal mobile devices to work and are often expecting that they work in the corporate network and access back-office applications.  Even if companies restrict access to the big dogs: (iPad, iPhone, Android phones and tablets, possibly Windows Phone and tablets), trying to support each platform natively will require increasing resources and domain expertise with each new language/platform. And let’s not forget the maintenance costs, involved in upgrading new versions of each platform.   Where multi-platform support is needed, Web mobile or hybrid apps probably have the advantage. Going native, and trying to support multiple operating systems may be cost prohibitive with existing resources and developer skills. Device-Services Access:  If your app needs to access local device services, such as the camera, contacts app, accelerometer, etc., then your choices are limited to native or hybrid applications.   Fragmentation: Apple controls Apple iOS and the only concern is what version iOS is running on any given device.   Not so Android, which is open source. There are many, many versions and variants of Android running on different devices, which can be a nightmare for app developers trying to support different devices running different flavors of Android.  (Is it an Amazon Kindle Fire? a Samsung Galaxy?  A Barnes & Noble Nook?) 
This is a nightmare scenario for native apps. A mobile web or hybrid app, on the other hand, can shield you from these complexities when properly designed, because it is built on common frameworks.

Resources: How many developers can you dedicate to building and supporting mobile applications? What are their existing skill sets? If you are considering native development because of the complexity of the application, factor in the cost of becoming proficient in each platform's OS and programming language; add another platform, and that is another language and another SDK. On the other side of the equation, mobile web or hybrid applications are simpler to build and readily support more platforms, but there may be performance trade-offs.

Conclusion

This only scratches the surface, but I hope it has provided some food for thought in choosing your mobile development strategy. Do your due diligence: search the Web, read up on mobile, talk to peers, attend events. The development team at Oracle is working hard on mobile technologies to help customers extend enterprise applications to mobile faster and more effectively. To learn more about what Oracle has to offer, check out the Oracle ADF Mobile (hybrid) and ADF Faces/ADF Mobile browser (mobile web) solutions.

Additional Information
Blog: ADF Blog
Product Information on OTN: ADF Mobile
Product Information on Oracle.com: Oracle Fusion Middleware
Follow us on Twitter and Facebook
Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Why We Should Learn to Stop Worrying and Love Millennials

    - by HCM-Oracle
    By Christine Mellon

Much is said and written about the new generations of employees entering our workforce, as though they are a strange specimen, a mysterious life form to be "figured out," accommodated and engaged, at a safe distance, of course. At its worst, this talk takes a critical and disapproving tone, with baby boomer employees adamantly refusing to validate this new breed of worker, let alone determine how to help them succeed and achieve their potential.

The irony of our baby-boomer resentments and suspicions is that they belie the fact that we created the very vision that younger employees are striving to achieve. From our frustrations with empty careers that did not fulfill us, from our opposition to "the man," from our sharp memories of our parents' toiling for 30 years just for the right to retire, from the simple desire not to live our lives in a state of invisibility, came the seeds of hope for something better.

One characteristic of Millennial workers that grew from these seeds is the desire to experience as much as possible. They are the "Experiential Employee," with a passion for growing in diverse ways and expanding personal and professional horizons. Rather than rooting themselves in a single company for a career, or even in a single career path, these employees are committed to building a broad portfolio of experiences and capabilities that will enable them to make a difference and to leave a mark of significance in the world. How much richer is the organization that nurtures and leverages this inclination? Our curmudgeonly ways must be surrendered and our focus redirected toward building the next generation of talent ecosystems if we are to optimize what future generations have to offer.

Accelerating Professional Development

In spite of our Boomer grumblings about Millennials' "unrealistic" expectations, the truth is that we have a well-matched set of circumstances. We have executives-in-waiting who want to learn quickly, and a concurrent, urgent need to accelerate their development, based on anticipated high levels of retirement in the next 10+ years. Since we need to rapidly skill up these heirs to the corporate kingdom, isn't it a fortunate coincidence that they are hungry to learn, develop, and move fluidly throughout our organizations?

So our challenge now is to efficiently operationalize the wisdom we have acquired about effective learning and development. We have already evolved from classroom-based models to diverse instructional methods. The next step is to find the best approaches to help younger employees learn quickly and apply new learnings in an impactful way.

Creating temporary or even permanent functional partnerships among Millennial employees is one way to maximize outcomes. This might take the form of two or more employees owning aspects of what once fell under a single role. While one might argue this would mean duplicating resources, it could be a short-term cost while employees come up to speed. And the potential benefits would be numerous: leveraging and validating the inherent sense of community of new generations, creating cross-functional skills with broad applicability, yielding additional perspectives and approaches to traditional work outcomes, and accelerating the performance curve for incumbents through Cooperative Learning (Johnson, D. and Johnson, R., 1989, 1999).
This well-researched teaching strategy, in which students support each other in absorbing and applying new information, has been shown to deliver faster, more efficient learning and greater retention.

Alternatively, short-term contracts with exiting or former retirees to help facilitate the development of the following generations may have merit. Again, this is certainly a short-term cost; however, the gains realized in shortening the learning curve and strengthening engagement are substantial and lasting. Ultimately, each organization needs to apply creative thinking to accelerating the capabilities of its future leaders in ways that mesh with its current culture.

The manner in which performance is evaluated must finally shift as well. Employees will need to be assessed on how well they have developed key skills and capabilities, rather than on end-to-end mastery of functional positions they have no interest in keeping for an entire career. As we become more comfortable placing greater and greater weight on competencies rather than tasks, we will realize increased organizational agility through this new generation of workers, further enhanced by their natural flexibility and appetite for change.

Revisiting Succession

For many years, organizations have failed to deliver desired succession planning outcomes. According to CEB's 2013 research, only 28% of current leaders were pre-identified in a succession plan. These disappointing results, along with the entrance of the experiential Millennial employee into the workforce, may just provide the needed impetus for HR to reinvent succession processes.

We have recognized that the best professional development efforts are not always linear, and the time has come to fully adopt this philosophy in regard to succession as well. Paths to specific organizational roles will not look the same for newer generations who seek out unique learning opportunities without a singular career destination in mind. Rather than charting particular jobs as precursors for key positions, the experiences and skills behind what makes an incumbent successful must become essential in succession mapping. And the multitude of ways in which those experiences and skills may be acquired must be factored into the process, along with each employee's level of learning agility.

While this may seem daunting, it is necessary and long overdue. We have talked about the criticality of competency-based succession; however, we have not lived up to our own rhetoric. Many Boomers have experienced the same frustration in our careers: knowing we are capable of shining in a particular role, but being denied the opportunity because of how our career history lined up, on paper, with documented job requirements. Those requirements usually emphasized past jobs, titles, and specific tasks rather than capabilities, drive, and willingness (let alone determination) to learn new things. How satisfying would it be for us to leave a legacy where such narrow thinking no longer applies and potential is amplified?

Realizing Diversity

Another bloom from the seeds we Boomers have tried to plant over the past decades is a completely evolved view of diversity. Millennial employees assume a diverse workforce, and are startled by anything less. Their social tolerance, nurtured by wide and diverse networks, is unprecedented. College graduates expect a landscape in the "real world" similar to what they have experienced throughout their lives.
They appreciate and seek out divergent points of view and experiences without needing any persuasion. The face of our U.S. workforce will likely see dramatic change as Millennials apply their fresh take on hiring and building strong teams, with an inherent sense of inclusion. This wonderful aspect of the Millennial wave should be celebrated and strongly encouraged, as it is the fulfillment of our own aspirations.

Future Perfect

The Experiential Employee operates more as a free agent than as a long-term player, and their commitment will essentially last as long as a meaningful organizational culture and personal and professional opportunities keep their interest. As Boomers, we have laid the foundation for this new, spirited employment attitude, and we should take pride in knowing that. Generations to come will challenge organizations to excel in how they identify, manage, and nurture talent. Let's support and revel in the future that we have helped invent, rather than lament what we think has been lost. After all, the future is always connected to the past. And as so eloquently phrased by Antoine Lavoisier, French nobleman, chemist, and politician: "Nothing is Lost, Nothing is Created, and Everything is Transformed."

Christine has over 25 years of diverse HR experience. She has held HR consulting and corporate roles, including CHRO positions for EchoStar in Denver, a 6,000+ employee global engineering firm, and Aepona, a startup software firm successfully acquired by Intel. Christine is a resource to Oracle clients, assisting with Human Capital Management strategy development and implementation, compensation practices, talent development initiatives, employee engagement, global HR management, and integrated HR systems and processes that support the full employee lifecycle.

    Read the article
