Search Results

Search found 13719 results on 549 pages for 'design evolution'.


  • How to handle updated configuration when it's already been cloned for editing

    - by alexrussell
    Really sorry about the title, which probably doesn't make much sense. Hopefully I can explain myself better here, as it's something that's kinda bugged me for ages and is now becoming a pressing concern as I write a bit of software with configuration.

    Most software comes with default configuration options stored in the app itself, and then there's a configuration file (let's say) that a user can edit. Once created/edited for the first time, subsequent updates to the application cannot (easily) modify this configuration file for fear of clobbering the user's own changes to the defaults. So my question is: if my application adds a new configurable parameter, what's the best way to aid discoverability of the setting and allow the user (developer) to override it as nicely as possible, given the following constraints?

    - I actually don't have a canonical default config in the application per se; it's more of a 'cascading filesystem'-like affair. The config template is stored in default/config.json and, when the user wishes to edit the configuration, it's copied to user/config.json. If a user config is found it is used; there is no automatic overriding of a subset of keys, the whole new file is used and that's that. If there's no user config, the default config is used.
    - When a user wishes to edit the config they run a command to 'generate' it for them (which simply copies the config.json file from the default to the user directory).
    - There is no UI for the configuration options, as it's not appropriate to the userbase (think of my software as a library or something: the users are developers, and the config is done in the user/config.json file).
    - Due to my software being library-like, there's no simple way to run tasks automatically when the software is updated (so ideas like "look at the current config, compare it to the template config, add missing keys" aren't appropriate).

    The only solution I can think of right now is to say "there's a new config setting X" in the release notes, but this doesn't seem ideal to me. If you want any more information let me know. The above specifics are not actually 100% true to my situation, but they represent the problem equally well with lower complexity. If you do want specifics, however, I can explain the exact setup.

    Further clarification of the type of configuration I mean: think of the Atom code editor. There appears to be a default 'template' config file somewhere, but as soon as a configuration option is edited, ~/.atom/config.cson is generated and the setting goes in there. From then on, if Atom is updated and gets a new configuration key, this file cannot be overwritten by Atom without a lot of effort to ensure that the addition/modification of the key does not clobber anything. In Atom's case, because there is a GUI for editing settings, they can get away with just adding the new setting to the settings UI to aid its 'discoverability'. I don't have that luxury.

    Clarification of my constraints and what I'm actually looking for: the software I'm writing is actually a package for a larger system. This larger system is what provides the configuration, and the way it works is kinda fixed. I just make a config('some.key') kind of call and it knows to look to see if the user has a config clone and, if so, use it; otherwise it uses the default config which is part of my package.
    Now, while I could make my application edit the user's configuration files (there is a convention about where they're stored), it's generally not done, so I'd like to live within the constraints of the system I'm using if possible. And it's not just about discoverability either: one large concern is that the addition of a configuration key won't actually work as soon as the user has their own copy of the original template. Adding the key to the template won't make a difference, as that file is never read. As such, I think this is actually quite a big flaw in the design of the configuration cascading system, and thus needs to be taken up with my upstream.

    So, thinking about it, based on my constraints I don't think there's going to be a good solution save for either editing the user's configuration or using a new config file every time there's an update to the default configuration. Even the release-notes idea from above isn't doable because, if the user does not follow the advice, I suddenly have a config key with no value (user-defined or default).

    So the new question is this: what is the general way to solve the problem of having a default configuration in template config files while allowing a user to make a user-specific version of these in order to override the defaults? A per-key cascade (rather than a per-file cascade) where the user only specifies their overrides? In that case, what happens if a configuration value is an array: do we replace the default or append to it (or, more realistically, how does the user specify which they want)? It seems like configuration is kinda hard, so how is it solved in the wild?
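    To make the per-key cascade idea above concrete, here is a minimal sketch (mine, not from the original question) of a recursive JSON merge in C# using System.Text.Json, assuming .NET 8 for JsonNode.DeepClone. User values override defaults key by key, nested objects merge recursively, and arrays are replaced wholesale (the replace-or-append question would need an explicit convention on top of this):

        using System.Text.Json.Nodes;

        static class ConfigCascade
        {
            // Merge 'user' over 'defaults': nested objects recurse, every
            // other value (including arrays) is replaced by the user's value.
            public static JsonNode Merge(JsonNode defaults, JsonNode? user)
            {
                if (defaults is not JsonObject defObj || user is not JsonObject userObj)
                    return user ?? defaults;

                var result = new JsonObject();
                foreach (var (key, defValue) in defObj)
                {
                    // DeepClone because a JsonNode cannot have two parents.
                    result[key] = userObj.TryGetPropertyValue(key, out var userValue)
                        ? Merge(defValue!, userValue).DeepClone()
                        : defValue?.DeepClone();
                }
                // Keep user keys that have no default counterpart.
                foreach (var (key, userValue) in userObj)
                    if (!defObj.ContainsKey(key))
                        result[key] = userValue?.DeepClone();
                return result;
            }
        }

    With this scheme, a key newly added to default/config.json takes effect immediately even for users who cloned their config long ago, while anything they set in user/config.json still wins:

        // var defaults  = JsonNode.Parse(File.ReadAllText("default/config.json"));
        // var user      = JsonNode.Parse(File.ReadAllText("user/config.json"));
        // var effective = ConfigCascade.Merge(defaults!, user);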


  • Microsoft BUILD 2013 Day 1 – Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx

    This one is going to be a little long because the keynote was jam-packed, so bear with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer. He made it very clear that Microsoft's focus is on accelerating its time to market with products and product updates. His quote was that "rapid release" is the new norm. He continued by showing off several new Lumias that have been buzzing around the internet for a while and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV.

    Ballmer is known for repeating words or phrases for effect. This time it was "Rapid release, rapid release" and "Touch, touch, touch, touch, touch, …". This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8" tablet. SCORE!

    The next subject Ballmer focused on was new apps. The three new ones were Flipboard, Facebook and NFL Fantasy Football. I liked the first two because these are ones that people coming from other platforms are missing. The NFL app is great just because it targets a demographic that can be fanatical. If these types of apps keep coming, then the missing-app argument goes away.

    While many Negative Nancys are describing Windows 8.1 as "Windows 180", Steve Ballmer chose to call it a "refined blend", as in a coffee that has been improved with a new mix. This includes more multi-tasking options and leveraging Bing throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community.

    Steve Ballmer was followed by Julie Larson-Green, who, from my point of view, spent her time on stage selling us on Windows 8 all over again. That's something I would not have thought was needed until I listened to some other attendees who had a number of concerns and complaints. She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience. I guess only time will tell. I did like the fact that the UI implementation to bring up "All Apps" now mirrors that of Windows Phone. The consistency is a big step forward that I hope to see continue. The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the XBox One. This seamless experience, I believe, is what is really needed for any future platform to be relevant.

    I was much more enthused by the presentation of Antoine Leblond, who humbled us by letting us know that there are 5,000 new APIs. How that can be, or how anyone would ever use all of them, is another question. His announcement was that the Visual Studio 2013 preview would be available that day along with the Windows 8.1 bits. One of the features of VS2013 that he demonstrated is the power consumption profiler. With battery life being a key factor for consumer devices, this is a welcome addition. He didn't limit his presentation to VS2013 features, though. He showed how the Store has been redesigned to enable better search and discoverability of apps, and how Windows 8.1 can automatically scale to multiple screen sizes depending on the resolution of the device. The last feature he demoed was the real-time video streaming API, which he made sure we understood by attaching a Surface to a little robot. Oh, but there was one more thing.

    Antoine and Julie announced that all attendees would also be getting Surface Pros. BONUS! How much more could there be? Gurdeep Singh Pall was about to pile on. He introduced us to Bing as a platform (BaaP?). He said that if they (Microsoft) could do something good with an API, then 3rd-party developers could do something dynamite, and he showed us some of the tools they had produced. These included natural user interface improvements such as voice commands that looked to put Siri to shame. Add to that 3D, OCR and translation capabilities, and the future looks to be full of opportunities.

    Ballmer then came out to show us one last thing. Project Spark is a game design environment that will be available for Windows 8.1, XBox 360 and XBox One. All I can say is that if my kids get their hands on this, they are going to be able to learn some of what dad does in a much more enjoyable way.

    At the end of it all I was both exhausted and energized by what I saw. What could they have possibly left for the day 2 keynote? I hear it will feature Scott Hanselman. If that is right, we are in for a treat. See you there.


  • Migrating Core Data to new UIManagedDocument in iOS 5

    - by samerpaul
    I have an app that has been on the store since iOS 3.1, so there is a large install base out there that still uses Core Data loaded up in my AppDelegate. In the most recent set of updates I raised the minimum version to 4.3, but still kept the same way of loading the data. Recently I decided it was time to make the minimum version 5.1 (especially with 6 around the corner), so I wanted to start using the new fancy UIManagedDocument way of using Core Data. The issue with this, though, is that the old database file is still sitting in the iOS app, so there is no automatic migration to the new document. You basically have to subclass UIManagedDocument with a new model class and override a couple of methods to do it for you. Here's a tutorial on what I did for my app TimeTag.

    Step One: Add a new class file in Xcode and subclass "UIManagedDocument"

    Go ahead and also add a method to get the managedObjectModel out of this class. It should look like:

        @interface TimeTagModel : UIManagedDocument

        - (NSManagedObjectModel *)managedObjectModel;

        @end

    Step Two: Writing the methods in the implementation file (.m)

    I first added a shortcut method for the applicationDocumentsDirectory, which returns the URL of the app directory:

        - (NSURL *)applicationDocumentsDirectory
        {
            return [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject];
        }

    The next step was to pull in the managed object model file itself (the .momd file). In my project, it's called "minimalTime".

        - (NSManagedObjectModel *)managedObjectModel
        {
            NSString *path = [[NSBundle mainBundle] pathForResource:@"minimalTime" ofType:@"momd"];
            NSURL *momURL = [NSURL fileURLWithPath:path];
            NSManagedObjectModel *managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];

            return managedObjectModel;
        }

    After that, I need to check for a legacy installation and migrate it to the new UIManagedDocument file instead. This is the overridden method:

        - (BOOL)configurePersistentStoreCoordinatorForURL:(NSURL *)storeURL
                                                   ofType:(NSString *)fileType
                                       modelConfiguration:(NSString *)configuration
                                             storeOptions:(NSDictionary *)storeOptions
                                                    error:(NSError **)error
        {
            // If a legacy store exists, copy it to the new location
            NSURL *legacyPersistentStoreURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTime.sqlite"];

            NSFileManager *fileManager = [NSFileManager defaultManager];
            if ([fileManager fileExistsAtPath:legacyPersistentStoreURL.path])
            {
                NSLog(@"Old db exists");
                NSError *thisError = nil;
                [fileManager replaceItemAtURL:storeURL
                                withItemAtURL:legacyPersistentStoreURL
                               backupItemName:nil
                                      options:NSFileManagerItemReplacementUsingNewMetadataOnly
                             resultingItemURL:nil
                                        error:&thisError];
            }

            return [super configurePersistentStoreCoordinatorForURL:storeURL ofType:fileType modelConfiguration:configuration storeOptions:storeOptions error:error];
        }

    Basically, what's happening above is that it checks for the minimalTime.sqlite file inside the app's documents directory on the iOS device. If the file exists, it tells you so in the console and then tells the fileManager to replace the storeURL (from the method parameters) with the legacy URL. This gives your app access to all the existing data the user has generated (otherwise they would load into a blank app, which would be disastrous). It returns YES if successful (by calling its [super] method).
    Final step: Actually load this database

    Due to how my app works, I actually have to load the database at launch (instead of shortly after, which would be ideal). I call a method called loadDatabase, which looks like this:

        - (void)loadDatabase
        {
            static dispatch_once_t onceToken;

            // Only do this once!
            dispatch_once(&onceToken, ^{
                // Get the URL ("minimalTimeDB" is just something I call it)
                NSURL *url = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTimeDB"];
                // Init the TimeTagModel (the custom class we wrote above) with the URL
                self.timeTagDB = [[TimeTagModel alloc] initWithFileURL:url];

                // Set up the undo manager if it's nil
                if (self.timeTagDB.undoManager == nil) {
                    NSUndoManager *undoManager = [[NSUndoManager alloc] init];
                    [self.timeTagDB setUndoManager:undoManager];
                }

                // You have to actually check whether it exists already (for some
                // reason you can't just say "open it, and if it's not there, create it")
                if ([[NSFileManager defaultManager] fileExistsAtPath:[url path]]) {
                    // If it does exist, try to open it, and if it doesn't open,
                    // let the user (or at least yourself) know!
                    [self.timeTagDB openWithCompletionHandler:^(BOOL success) {
                        if (!success) {
                            // Handle the error.
                            NSLog(@"Error opening up the database");
                        }
                        else {
                            NSLog(@"Opened the file--it already existed");
                            [self refreshData];
                        }
                    }];
                }
                else {
                    // If it doesn't exist, you need to attempt to create it
                    [self.timeTagDB saveToURL:url forSaveOperation:UIDocumentSaveForCreating completionHandler:^(BOOL success) {
                        if (!success) {
                            // Handle the error.
                            NSLog(@"Error creating the database");
                        }
                        else {
                            NSLog(@"Created the file--it did not exist");
                            [self refreshData];
                        }
                    }];
                }
            });
        }

    If you're curious what refreshData looks like, it sends out an NSNotification that the database has been loaded:

        - (void)refreshData
        {
            NSNotification *refreshNotification = [NSNotification notificationWithName:kNotificationCenterRefreshAllDatabaseData
                                                                                object:self.timeTagDB.managedObjectContext
                                                                              userInfo:nil];
            [[NSNotificationCenter defaultCenter] postNotification:refreshNotification];
        }

    kNotificationCenterRefreshAllDatabaseData is just a constant I have defined elsewhere that keeps track of all the NSNotification names I use. I pass the managedObjectContext of the newly created file so that my view controllers can have access to it and start passing it around to one another. The reason we do this as a notification is because the loading runs in the background, so we can't know exactly when it finishes. Make sure you design your app for this! Have some kind of loading indicator, or make sure your user can't attempt to create a record before the database actually exists, because it will crash the app.


  • Visual Tree Enumeration

    - by codingbloke
    I feel compelled to post this blog because I find I'm repeatedly posting this same code in Silverlight and Windows Phone 7 answers on Stack Overflow. One common task is burrowing into the visual tree in a Silverlight or Windows Phone 7 application (more recently I've found myself doing this in WPF as well). This allows access to details that aren't exposed directly by some controls. A good example of this sort of requirement is found in the "Restoring exact scroll position of a listbox in Windows Phone 7" question on Stack Overflow, which required that the scroll viewer internal to a listbox be accessed so its scroll position could be read.

    A caveat

    One caveat here is that we should seriously challenge the need for this burrowing, since it may indicate a design problem. Burrowing into the visual tree, or indeed burrowing out to containing ancestors, can represent significant coupling across module boundaries, and that generally isn't a good idea. Why isn't the whole idea simply cast aside as a no-no? Well, the concept of a "templated control", in extensive use in these applications, opens up coupling between the content of the visual tree and the internal code of a control. For example, I can completely change the appearance and positioning of the elements that make up a ComboBox. The ComboBox control relies on specific template parts with set names of a specified type being present in my template. Rightly or wrongly, this does kind of give license to writing code that has similar coupling.

    Hasn't this been done already?

    Yes it has. There are a number of blogs already out there with similar solutions. In fact, if you are using the Silverlight Toolkit, the VisualTreeExtensions class already provides this feature. However, I prefer my specific code because of the simplicity principle I hold to: only write the minimum code necessary to give all the features needed. In this case I add just two extension methods, Ancestors and Descendents; note I don't bother with "Get" or "Visual" prefixes. I also haven't added Parent or Children methods, nor additional "AndSelf" methods, because all but Children are achievable with the addition of some other Linq methods. I decided to give Descendents an additional overload for depth (a depth of 0 yields just the direct children, i.e. the equivalent of Children), and this overload is a little more flexible than simply Children.
    So here is the code:

        public static class VisualTreeEnumeration
        {
            public static IEnumerable<DependencyObject> Descendents(this DependencyObject root, int depth)
            {
                int count = VisualTreeHelper.GetChildrenCount(root);
                for (int i = 0; i < count; i++)
                {
                    var child = VisualTreeHelper.GetChild(root, i);
                    yield return child;
                    if (depth > 0)
                    {
                        // depth - 1 (not --depth), so later siblings are
                        // enumerated to the same depth as earlier ones.
                        foreach (var descendent in Descendents(child, depth - 1))
                            yield return descendent;
                    }
                }
            }

            public static IEnumerable<DependencyObject> Descendents(this DependencyObject root)
            {
                return Descendents(root, Int32.MaxValue);
            }

            public static IEnumerable<DependencyObject> Ancestors(this DependencyObject root)
            {
                DependencyObject current = VisualTreeHelper.GetParent(root);
                while (current != null)
                {
                    yield return current;
                    current = VisualTreeHelper.GetParent(current);
                }
            }
        }

    Usage examples

    The following are some examples of how to combine the above extension methods with Linq to generate the other axis scenarios that tree traversal code might require.

        var parent = control.Ancestors().Take(1).FirstOrDefault();
        var children = control.Descendents(0);
        var previousSiblings = control.Ancestors().Take(1)
            .SelectMany(p => p.Descendents(0).TakeWhile(c => c != control));
        var followingSiblings = control.Ancestors().Take(1)
            .SelectMany(p => p.Descendents(0).SkipWhile(c => c != control).Skip(1));
        var ancestorsAndSelf = Enumerable.Repeat((DependencyObject)control, 1)
            .Concat(control.Ancestors());
        var descendentsAndSelf = Enumerable.Repeat((DependencyObject)control, 1)
            .Concat(control.Descendents());

    You might ask why I don't just include these in the VisualTreeEnumeration class. I don't, on the principle of only including code that is actually needed. If you find that one or more of the above is needed in your code then go ahead and create additional methods. One of the downsides to extension methods is that they can make finding the method you actually want in IntelliSense harder.

    Here are some real-world usage scenarios for these methods:

        // Get the internal scrollviewer of a ListBox
        ScrollViewer sv = someListBox.Descendents().OfType<ScrollViewer>().FirstOrDefault();
        // Get all text boxes in the current UserControl
        var textBoxes = this.Descendents().OfType<TextBox>();
        // All UIElement direct children of the layout root grid
        var topLevelElements = LayoutRoot.Descendents(0).OfType<UIElement>();
        // Find the containing ListBoxItem for a UIElement
        var container = elem.Ancestors().OfType<ListBoxItem>().FirstOrDefault();
        // Seek a button with the name "PinkElephants", even outside the current namescope
        var pinkElephantsButton = this.Descendents()
            .OfType<Button>()
            .FirstOrDefault(b => b.Name == "PinkElephants");
        // Clear all checkboxes with the name "Selector" in a TreeView
        foreach (CheckBox checkBox in elem.Descendents()
            .OfType<CheckBox>().Where(c => c.Name == "Selector"))
        {
            checkBox.IsChecked = false;
        }

    The last couple of examples above demonstrate a common requirement: finding controls that have a specific name. FindName will often not find these controls because they exist in a different namescope. Hope you find this useful; if not, I'm just glad to be able to link to this blog in future Stack Overflow answers.


  • On Reflector Pricing

    - by Nick Harrison
    I have heard a lot of outrage over Red Gate's decision to charge for Reflector. In the interest of full disclosure, I am a fan of Red Gate. I have worked with them on several usability tests. They also sponsor Simple Talk, where I publish articles. They are a good company.

    I am also a BIG fan of Reflector. I have used it since Lutz originally released it. I have written my own add-ins. I have written code to host Reflector and use its object model in my own code. Reflector is a beautiful tool. The care that Lutz took to incorporate extensibility is amazing. I have never had difficulty convincing my fellow developers that it is a wonderful tool. Almost always, once anyone sees it in action, it becomes their favorite tool. This widespread adoption and usability has made it an icon and pivotal pillar in the DotNet community. Even folks with the attitude that if it did not come out of Redmond then it must not be any good still love it.

    It is ironic to hear everyone clamoring for it to be released as open source. Reflector was never open source. It was free, but you were never able to peruse the source code and contribute your own changes. You could not even use Reflector to view its source code. From the very beginning, it was never anyone's intention for just anyone to examine the source code and make their own contributions, aside from the add-in model. Lutz chose to hand over the reins to Red Gate because he believed that they would be able to build on his original vision and keep the product viable and effective. He did not choose to make it open source, hoping that the community would be up to the challenge. The simplicity and elegance may well have been lost with the "design by committee" nature of open source.

    Despite being a wonderful and beloved tool, Reflector cannot be an easy tool to maintain. Maybe because it is so wonderful and beloved, it is even more difficult to maintain. At any rate, we have high expectations. Reflector must continue to be able to reasonably disassemble every language construct that the framework and core languages dream up. We want it to be fast, and we also want it to continue to be simple to use. No small order.

    Red Gate tried to keep the core product free. Sadly there was not enough interest in the Pro version to subsidize the rest of the expenses. $35 is a reasonable cost, more than reasonable. I have read the blog posts and forum posts complaining about the time associated with getting the expense approved. I have heard people complain about the cost being unreasonable if you are a developer from certain countries. Let's do the math. How much of a productivity boost is Reflector? How many hours do you think it saves you in a typical project? The next question is a little easier if you are a contractor or a consultant, but what is your hourly rate? If you are not a contractor, you can probably figure out an hourly rate. How long does it take to get a return on your investment? If Reflector saves you even a single hour and your time is worth $50 an hour, the $35 license has already paid for itself. The value-added proposition is not a difficult one to make.

    I have read people clamoring that Red Gate sucks and is evil. They complain about broken promises and conflicts of interest. Relax! Red Gate is not evil. The world is not coming to an end. The sun will come up tomorrow. I am sure that Red Gate will come up with options for volume licensing or site licensing for companies that want to get a licensed copy for their entire team. Don't panic, and I am sure that many great improvements are on the horizon.
Switching the UI to WPF and including a tabbed interface opens up lots of possibilities.


  • Mixing Forms and Token Authentication in a single ASP.NET Application (the Details)

    - by Your DisplayName here!
    The scenario described in my last post works because of the design of HTTP modules in ASP.NET. Authentication-related modules (like Forms authentication and WIF WS-Fed/Sessions) typically subscribe to three events in the pipeline: AuthenticateRequest/PostAuthenticateRequest for pre-processing and EndRequest for post-processing (like making redirects to a login page). In the pre-processing stage it is the modules' job to determine the identity of the client based on incoming HTTP details (like a header, cookie, or form post) and set HttpContext.User and Thread.CurrentPrincipal. The actual page (in the ExecuteHandler event) "sees" the identity that the last module has set. So in our case there are three modules in effect:

    - FormsAuthenticationModule (AuthenticateRequest, EndRequest)
    - WSFederationAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest, EndRequest)
    - SessionAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest)

    So let's have a look at the different scenarios we have when mixing Forms auth and WS-Federation.

    Anonymous request to an unprotected resource. This is the easiest case. Since there is no WIF session cookie or FormsAuth cookie, these modules do nothing. The WSFed module creates an anonymous ClaimsPrincipal and calls the registered ClaimsAuthenticationManager (if any) to transform it. The result (by default an anonymous ClaimsPrincipal) gets set.

    Anonymous request to a FormsAuth-protected resource. This is the scenario where an anonymous user tries to access a FormsAuth-protected resource for the first time. The principal is anonymous and, before the page gets rendered, the Authorize attribute kicks in. The attribute determines that the user needs authentication, and therefore sets a 401 status code and ends the request. Now execution jumps to the EndRequest event, where the FormsAuth module takes over. The module converts the 401 to a redirect (302) to the forms login page. If authentication is successful, the login page sets the FormsAuth cookie.

    FormsAuth-authenticated request to a FormsAuth-protected resource. Now a FormsAuth cookie is present, which gets validated by the FormsAuth module. The cookie gets turned into a GenericPrincipal/FormsIdentity combination. The WS-Fed module turns the principal into a ClaimsPrincipal and calls the registered ClaimsAuthenticationManager. The outcome of that gets set on the context.

    Anonymous request to an STS-protected resource. This time the anonymous user tries to access an STS-protected resource (a controller decorated with the RequireTokenAuthentication attribute). The attribute determines that the user needs STS authentication by checking the authentication type on the current principal. If this is not Federation, the redirect to the STS will be made. After successful authentication at the STS, the STS posts the token back to the application (using WS-Federation syntax).

    Postback from STS authentication. After the postback, the WS-Fed module finds the token response and validates the contained token. If successful, the token gets transformed by the ClaimsAuthenticationManager, and the outcome is a) stored in a session cookie and b) set on the context.

    STS-authenticated request to an STS-protected resource. This time the WIF session authentication module kicks in because it can find the previously issued session cookie. The module re-hydrates the ClaimsPrincipal from the cookie and sets it.

    FormsAuth- and STS-authenticated request to a protected resource. This is kind of an odd case: e.g. the user first authenticated using Forms and after that using the STS. This time the FormsAuth module does its work, and then afterwards the session module stomps over the context with the session principal. In other words, the STS identity wins.

    What about roles? A common way to set roles in ASP.NET is to use the role manager feature. There is a corresponding HTTP module for that (RoleManagerModule) that handles PostAuthenticateRequest. Does this collide with the above combinations? No it doesn't! When the WS-Fed module turns existing principals into a ClaimsPrincipal (like it did with the FormsIdentity), it also checks for RolePrincipal (which is the principal type created by role manager) and turns the roles into role claims. Nice! But as you can see in the last scenario above, this might result in unnecessary work, so I would rather recommend consolidating all role work (and other claims transformations) into the ClaimsAuthenticationManager. In there you can check for the authentication type of the incoming principal and act accordingly. HTH
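    As an illustration of that recommendation, here is a minimal sketch (mine, not from the original post, using the .NET 4.5 System.Security.Claims types) of such a ClaimsAuthenticationManager; the role name and lookup logic are invented placeholders:

        using System.Security.Claims;

        // Register this type in config so WIF invokes it after the
        // authentication modules have produced the incoming principal.
        public class ConsolidatedClaimsTransformer : ClaimsAuthenticationManager
        {
            public override ClaimsPrincipal Authenticate(
                string resourceName, ClaimsPrincipal incomingPrincipal)
            {
                var identity = incomingPrincipal.Identity as ClaimsIdentity;
                if (identity == null || !identity.IsAuthenticated)
                    return incomingPrincipal;

                // Branch on how the user was authenticated.
                if (identity.AuthenticationType == "Forms")
                {
                    // Hypothetical: load roles for forms users from your own store.
                    identity.AddClaim(new Claim(ClaimTypes.Role, "LocalUser"));
                }

                return incomingPrincipal;
            }
        }

    Keeping all role and claims transformation in one place like this means it runs exactly once per authenticated request, regardless of which module established the identity.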


  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully. I'm talking about "feelings", not advanced technical points proven in a scientific and objective way. I still haven't had a chance to play with an MS Surface tablet (I would love to, of course), so my ideas just come from reading different articles on the net and MS official statements. Remember also that the MVP motto begins with "Independent" ("Independent Experts. Real World Answers.") and this is just my humble opinion about a product and a technology. I know that, being an MS MVP, I can be called an "MS fanboy". I don't care; I hope that people can appreciate my opinion even if it doesn't match theirs.

    The "Surface" brand can be confusing for techies who knew the "original" Surface concept, but I think it will be a fresh new brand name for most of the people out there. Marketing departments are here to confuse people… so I can understand this recycling of an existing name.

    So Microsoft is entering the hardware arena… for me this is good news. Microsoft developed some nice hardware in the past: the Xbox, the Zune (even if its commercial success was quite limited) and, last but not least, the two Arc mice (old and new model) that I use and appreciate. In the past Microsoft worked with OEMs, and that model led to good and bad things. The good thing (for Microsoft, at least) is market domination by Windows-based PCs, which only in recent years has been reduced by the return of the Mac and by tablets. Google is also moving into the hardware business with its acquisition of Motorola, and Apple leveraged its control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from Windows (but to where?) or just lead the pack, showing how devices should be designed to compete in the market and bringing back some of the innovation that disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish one laptop in the huge mass of anonymous PCs on display… only Macs shine out there…). Having to compete with MS "official" hardware will force OEMs to develop better products and bring back some real competition to a market that has been ruled only by prices (the lower the better, even when that means low quality) with no innovative features at all (when was the last time a new PC surprised you?).

    Moving into a new market is a big and risky move, but with Windows 8 Microsoft is making a crucial play for its future, trying to get back into the innovation race against Apple and Google. MS can't afford to fail this time.

    I saw the new devices (the WinRT and Pro) and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover… Using "HD" and "full HD" to define display resolution instead of using the real figures, and reviving the "ClearType" brand (now dead in Win8, as reported here, and missed by people who hate to read text on displays, like myself) without providing clear figures (couldn't you count those damned pixels?), seems to imply that MS was caught by surprise by Apple's recent "retina" displays that brought very high-definition screens to tablets. Also, there are no specifications about the processors used (even if some sources report an NVidia Tegra for the ARM tablet and an i5 for the x86 one) or about the expected battery life (a critical point for tablets, and the point that killed Windows 7 x86-based tablets).

    There is also nothing about the price, and this will be another critical point, because the other platforms out there already provide lots of applications and have a good user base; if MS wants to enter this market, tablet pricing must be competitive. There are some expansion ports (SD and USB), so no fixed storage model (even if the specs talk about 32-64GB for RT and 128-256GB for Pro). I like this, and I don't like the Apple model where flash memory (which is dirt cheap when used in thumb drives or SD cards) is as expensive as gold (or cocaine, to have a more accurate per-gram measurement) when mounted inside a tablet/phone. For big files you'll be able to use external media, and an SD card could be used to store files that don't require super-fast SSD-like access times, I hope.

    To be honest, I really don't like the marketplace model, the limitations of the Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!) or the lack of desktop support on ARM (even if the support is there and has been used to port Office). It's a step toward the consumer market (where competitors are making big money), but it may put off enterprise (and embedded) users who may not appreciate Windows 8's new UI or the limitations of the new app model (if you aren't connected you are dead). Not having compatibility with the desktop will require brand new applications, and honestly it makes all the CPU cycles spent in the past converting .NET IL into real machine code look like a huge waste of time… as soon as a new processor architecture is supported by Windows, you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET, in my perception). On the other side, I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition, and even the all-uppercase menus of VS2012 haven't changed this situation.

    The new Metro UI has received mixed reviews. For my part, I should say that it is very pleasant to use on a touch screen, and I like the minimalist design (even if sometimes it is too minimal and hides stuff that, in my opinion, should be visible), but I should also say that using it with mouse and keyboard is like trying to pick your nose with boxing gloves… Metro is also very interesting for embedded devices, where touch-screen usage is quite common and where having an application take up the whole screen is the norm. For devices like kiosks, vending machines, etc., this kind of UI can be a great selling point.

    I don't need a new tablet (to be honest I'm pretty happy with my wife's iPad and with my PC), but I may change my opinion after having a chance to play a little bit with those new devices and understand what's hidden under all these mysterious and generic announcements and specifications!


  • A Knight's Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No. 3-15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had for many years used software which broke up incoming "parent" orders into smaller "child" orders that were then transmitted to various exchanges or trading venues for execution. A tracking 'cumulative quantity' function counted the number of 'child' orders and stopped the process once the total of child orders matched the 'parent', at which point the parent order was complete.

    Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called 'Power Peg' and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point on, had that flag ever been set, the old 'Power Peg' would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn't, presumably because the software that set the flag was removed.

    In 2012, nearly a decade after 'Power Peg' was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time the flag had remained unused, and someone made the fateful decision to reuse it and replace the old 'Power Peg' code with the new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn't. To quote:

    "Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight's technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review." (para 15)

    "On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. Although one part of Knight's order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS." (para 16)

    SMARS routed millions of orders into the market over a 45-minute period and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time Knight stopped sending the orders, it had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight's shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading.

    Obviously, I'm interested in all this because, at one time, I used to write trading systems for the City of London. And obviously the US SEC is in a far better position than any of us to work out the failings of Knight's IT department; the report makes for painful reading. I can't help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was 'all-or-nothing', and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek tragedy. All the way along one wants to shout 'No! Not that way!' and 'Aargh! Don't do it!'. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate.

    All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight's mistakes weren't made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they're obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.
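    To make the 'all-or-nothing' point concrete: even a crude post-deployment check that every server is running identical code would have flagged the odd one out. A minimal sketch of such a check (mine, not from the SEC order; the server share paths and file name are entirely hypothetical):

        using System;
        using System.IO;
        using System.Linq;
        using System.Security.Cryptography;

        class DeployVerifier
        {
            static string HashOf(string path)
            {
                // Hash the deployed artifact so byte-identical builds compare equal.
                using (var sha = SHA256.Create())
                using (var stream = File.OpenRead(path))
                    return BitConverter.ToString(sha.ComputeHash(stream));
            }

            static void Main()
            {
                // Hypothetical UNC paths to the same artifact on each of eight servers.
                var artifacts = Enumerable.Range(1, 8)
                    .Select(i => string.Format(@"\\server{0:D2}\deploy\router.dll", i))
                    .ToList();

                var hashes = artifacts.ToDictionary(p => p, HashOf);
                var expected = hashes[artifacts[0]];

                // Any mismatch means the rollout was not all-or-nothing: abort it.
                foreach (var pair in hashes.Where(kv => kv.Value != expected))
                    Console.WriteLine("MISMATCH: " + pair.Key);
            }
        }

    A real pipeline would verify configuration flags and running service versions as well, but even this much would have caught the eighth server before the market opened.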


  • 2D Particle Explosion

    - by TheBroodian
    I'm developing a 2D action game, and in said game I've given my primary character an ability he can use to throw a fireball. I'm trying to design an effect so that when said fireball collides (be it with terrain or with an enemy) the fireball explodes. For the explosion effect I've created a particle that, once placed into game space, will follow random yet autonomous behavior based on random variables. Here is my question: when I generate my explosion (essentially 90 of these particles) I get one of two behaviors:

    1) They are all generated with the same random variables and don't resemble an explosion at all; it's more like a large mass of clumped sprites that all follow the same randomly generated path.

    2) If I assign each particle a unique seed for its random number generator, they are a little bit more spread out, yet clumping is still visible (they seem to fork out into 3 different directions).

    Does anybody have any tips for producing particle-based 2D explosions? I'll include the code for my particle and the event I'm generating them in.

    Fire particle class:

        public FireParticle(xTile.Dimensions.Location StartLocation, ContentManager content)
        {
            worldLocation = StartLocation;
            fireParticleAnimation = new FireParticleAnimation(content);
            random = new Random();
            int rightorleft = random.Next(0, 3);
            int upordown = random.Next(1, 3);
            int xVelocity = random.Next(0, 101);
            int yVelocity = random.Next(0, 101);
            Vector2 tempVector2 = new Vector2(0, 0);
            if (rightorleft == 1)
            {
                tempVector2 = new Vector2(xVelocity, tempVector2.Y);
            }
            else if (rightorleft == 2)
            {
                tempVector2 = new Vector2(-xVelocity, tempVector2.Y);
            }
            if (upordown == 1)
            {
                tempVector2 = new Vector2(tempVector2.X, -yVelocity);
            }
            else if (upordown == 2)
            {
                tempVector2 = new Vector2(tempVector2.X, yVelocity);
            }
            velocity = tempVector2;
            scale = random.Next(1, 11);
            upwardForce = -10;
            dead = false;
        }

        public FireParticle(xTile.Dimensions.Location StartLocation, ContentManager content, int seed)
        {
            worldLocation = StartLocation;
            fireParticleAnimation = new FireParticleAnimation(content);
            random = new Random(seed);
            int rightorleft = random.Next(0, 3);
            int upordown = random.Next(1, 3);
            int xVelocity = random.Next(0, 101);
            int yVelocity = random.Next(0, 101);
            Vector2 tempVector2 = new Vector2(0, 0);
            if (rightorleft == 1)
            {
                tempVector2 = new Vector2(xVelocity, tempVector2.Y);
            }
            else if (rightorleft == 2)
            {
                tempVector2 = new Vector2(-xVelocity, tempVector2.Y);
            }
            if (upordown == 1)
            {
                tempVector2 = new Vector2(tempVector2.X, -yVelocity);
            }
            else if (upordown == 2)
            {
                tempVector2 = new Vector2(tempVector2.X, yVelocity);
            }
            velocity = tempVector2;
            scale = random.Next(1, 11);
            upwardForce = -10;
            dead = false;
        }
        #endregion

        #region Update and Draw
        public void Update(GameTime gameTime)
        {
            elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
            fireParticleAnimation.Update(gameTime);
            Vector2 moveAmount = velocity * elapsed;
            xTile.Dimensions.Location newPosition = new xTile.Dimensions.Location(
                worldLocation.X + (int)moveAmount.X, worldLocation.Y + (int)moveAmount.Y);
            worldLocation = newPosition;
            velocity.Y += upwardForce;
            if (fireParticleAnimation.finishedPlaying)
            {
                dead = true;
            }
        }

        public void Draw(SpriteBatch spriteBatch)
        {
            spriteBatch.Draw(
                fireParticleAnimation.image.Image,
                new Rectangle((int)drawLocation.X, (int)drawLocation.Y, scale, scale),
                fireParticleAnimation.image.SizeAndsource,
                Color.White * fireParticleAnimation.image.Alpha);
        }

    Fireball explosion event:

        public override void Update(GameTime gameTime)
        {
            if (enabled)
            {
                float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
                foreach (Heart_of_Fire.World_Objects.Particles.FireParticle particle in explosionParticles.ToList())
                {
                    particle.Update(gameTime);
                    if (particle.Dead)
                    {
                        explosionParticles.Remove(particle);
                    }
                }
                collisionRectangle = new Microsoft.Xna.Framework.Rectangle((int)wrldPstn.X, (int)wrldPstn.Y, 5, 5);
                explosionCheck = exploded;
                if (!exploded)
                {
                    coreGraphic.Update(gameTime);
                    tailGraphic.Update(gameTime);
                    Vector2 moveAmount = velocity * elapsed;
                    moveAmount = horizontalCollision(moveAmount, layer);
                    moveAmount = verticalCollision(moveAmount, layer);
                    Vector2 newPosition = new Vector2(wrldPstn.X + moveAmount.X, wrldPstn.Y + moveAmount.Y);
                    if (hasCollidedHorizontally || hasCollidedVertically)
                    {
                        exploded = true;
                    }
                    wrldPstn = newPosition;
                    worldLocation = new xTile.Dimensions.Location((int)wrldPstn.X, (int)wrldPstn.Y);
                }
                if (explosionCheck != exploded)
                {
                    for (int i = 0; i < 90; i++)
                    {
                        explosionParticles.Add(new World_Objects.Particles.FireParticle(
                            new Location(
                                collisionRectangle.X + random.Next(0, 6),
                                collisionRectangle.Y + random.Next(0, 6)),
                            contentMgr));
                    }
                }
                if (exploded && explosionParticles.Count() == 0)
                {
                    //enabled = false;
                }
            }
        }
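    For anyone hitting the same problem: the clumping described above is the classic symptom of creating many Random instances in quick succession; time-seeded instances constructed in the same tick produce identical sequences, and sequential seeds produce correlated ones. A common fix, sketched here as my own suggestion rather than anything from the original post, is to share one Random across all particles and pick each velocity as a random angle and speed in polar coordinates (Vector2 is XNA's; the helper name is hypothetical):

        // One shared generator for the whole explosion, not one per particle.
        private static readonly Random rng = new Random();

        // Hypothetical helper: pick a velocity uniformly over all directions.
        private static Vector2 RandomExplosionVelocity(float maxSpeed)
        {
            double angle = rng.NextDouble() * Math.PI * 2.0; // any direction
            double speed = rng.NextDouble() * maxSpeed;      // any magnitude
            return new Vector2(
                (float)(Math.Cos(angle) * speed),
                (float)(Math.Sin(angle) * speed));
        }

    Passing the shared generator (or a velocity computed from it) into the FireParticle constructor removes the correlated seeds, and the polar pick avoids the axis-aligned forking that the rightorleft/upordown branches produce.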


  • The Best BPM Journey: More Exciting Destinations with Process Accelerators

    - by Cesare Rotundo
    Oracle Open World (OOW) earlier this month was a great occasion to talk with our BPM customers. It was interesting to hear definite patterns emerging from those conversations: "BPM is a journey", "experiences to share", "our organization now understands what BPM is", and my favorite (with some caveats): "BPM is like wine tasting; once you start, you want to try more".

    These customers have started their journey, climbed up the learning curve, and reached a vantage point that allows them to see their next BPM destination. They see the next few processes they are going to tackle and improve with BPM. These processes/destinations target both horizontal processes, where BPM replaces or coordinates manual activities, and critical industry processes that the company needs to improve to compete and deliver increasing value. Each new destination generates value, allowing the organization to reduce the cost of manual processes that were not supported by apps/custom development and to increase the efficiency of end-to-end processes partially covered by apps/custom dev.

    The question we wanted to answer is how to help organizations experience deeper success with BPM, by increasing their awareness of the potential for reaching new targets and by equipping them with the right tools. We decided that we needed to identify destinations and plot routes showing the fastest path to those destinations. In the end we want to enable customers to reach "Process Excellence": continuously set new targets and consistently and efficiently reach them.

    The result is Oracle Process Accelerators (PAs), solutions built using the rich functionality in Oracle BPM Suite. PAs offer a rapidly expanding list of exciting destinations. Our launch of the latest installment of Process Accelerators at Oracle Open World includes new industry-focused solutions such as Public Sector Incident Reporting and Financial Services Loan Origination, and improves other horizontal PAs, including Travel Request Management, Document Routing and Approval, and Internal Service Requests. Just before OOW we had extended the Oracle deployment of Travel Request Management, riding the enthusiastic response from early adopters among travelers (employees), management and support (approvers). "Getting there first" means being among the first to extract value from the PA approach, while acquiring deeper insights into the customers' perspective. This is especially noteworthy when it comes to PAs, a set of solutions designed to be quickly deployed and iteratively improved by customers. The OOW launch has generated immediate feedback from customers, non-customers, analysts, and partners.
    They all confirmed that both business and IT benefit from PAs when it comes to exploring the potential for BPM to improve their business processes. PAs help customers visualize what can be done with BPM, and PAs are made to be extended: you can see your destination, change the path to fit your needs, and deploy. We're discovering new destinations/processes that the market wants us to support, generic enough across industries and within industries. We'll keep on building sets of requirements, delivering functional designs, constructing solutions using Oracle BPM, and testing them not only functionally but also for performance, scalability and clustering, making them robust and product-quality. Delivering BPM solutions with product-grade quality is the equivalent of following a tried-and-tested path on a map. Do you know of existing destinations in your industry? If yes, we can draw a path to innovative processes together.


  • Oracle Cloud Applications: The Right Ingredients Baked In

    - by yaldahhakim
    Eggs, flour, milk, and sugar. The magic happens when you mix these ingredients together. The same goes for the hottest technologies fast changing how IT impacts our organizations today: cloud, social, mobile, and big data. By themselves they're pretty good; combining them with a great recipe is what unlocks real transformation power.

    Choosing the right cloud can be very similar to choosing the right cake. First consider comparing the core ingredients that go into baking a cake and the core design principles in building a cloud-based application. For instance, if flour is the base ingredient of a cake, then rich functionality that spans complete business processes is the base of an enterprise-grade cloud. Cloud computing is more than just consuming an "application as a service" and having someone else manage it for you. Rather, the value of cloud is about making your business more agile in the marketplace and shortening the time it takes to deliver and adopt new innovation. It's also about improving not only the efficiency with which we communicate but the actual quality of the information shared as well.

    Data from different systems, like ingredients in a cake, must also be blended together effectively and evaluated through a consolidated lens. When this doesn't happen, for instance when data in your sales cloud doesn't seamlessly connect with your order management and other "back office" applications, the speed and quality of information can decrease drastically. It's like mixing ingredients in a strainer with a straw: you just can't bring it all together without losing something.

    Mixing ingredients is similar to bringing clouds together and co-existing cloud applications with traditional on-premise applications. This is where a shared-services platform built on open standards and Service-Oriented Architecture (SOA) is critical. It's essentially a cloud recipe that calls for not only great ingredients but also ingredients you can get locally, or most likely already have in your kitchen (or IT shop). Open standards are the best way to deliver a cost-effective, durable application integration strategy, regardless of where your apps are deployed. They're also the best way to build your own cloud applications, or extend the ones you consume from a third party. Just like using standard ingredients and tools you already have in your kitchen, a standards-based cloud enables your IT resources to ensure a cloud works easily with other systems. Your IT staff can also make changes using tools they are already familiar with.
    Or, even more ideal, enable business users to actually tailor their experience without having to call on IT for help at all. This frees IT resources to focus more on developing new, innovative services for the organization rather than on run-and-maintain work.

    Carrying the cake analogy forward, you need to add all the ingredients in before you bake it. The same is true with a modern cloud. To harness the full power of cloud, you can't leave out some of the most important ingredients and just layer them on top later. This is what a lot of our niche competitors have done when it comes to social, mobile, big data and analytics, and other key technologies impacting the way we do business. The transformational power of these technology trends comes from having a strategy from the get-go that combines them into a winning recipe and delivers them in a unified way.

    Looking at the ways Oracle's cloud is different from other clouds: not only is the breadth of functionality rich across functional pillars like CRM, HCM, ERP, etc., but it embeds social, mobile, and rich intelligence capabilities where they make the most sense across business processes. This strategy enables the Oracle Cloud to uniquely deliver on all three of these dimensions and to help our customers unlock the full power of these transformational technologies.

    Read the article

  • Proxied calls not working as expected

    - by AndyH
    I have been modifying an application to have a cleaner client/server split to allow for load splitting and resource sharing etc. Everything is written to an interface so it was easy to add a remoting layer to the interface using a proxy. Everything worked fine. The next phase was to add a caching layer to the interface and again this worked fine and speed was improved but not as much as I would have expected. On inspection it became very clear what was going on. I feel sure that this behavior has been seen many times before and there is probably a design pattern to solve the problem but it eludes me and I'm not even sure how to describe it. It is easiest explained with an example. Let's imagine the interface is interface IMyCode { List<IThing> getLots( List<String> ids ); IThing getOne( String id ); } The getLots() method calls getOne() and fills up the list before returning. The interface is implemented at the client which is proxied to a remoting client which then calls the remoting server which in turn calls the implementation at the server. At the client and the server layers there is also a cache. So we have: Client interface | Client cache | Remote client | Remote server | Server cache | Server interface If we call getOne("A") at the client interface, the call is passed to the client cache which faults. This then calls the remote client which passes the call to the remote server. This then calls the server cache which also faults and so the call is eventually passed to the server interface which actually gets the IThing. In turn the server cache is filled and finally the client cache also. If getOne("A") is again called at the client interface the client cache has the data and it gets returned immediately. If a second client called getOne("B") it would fill the server cache with "B" as well as its own client cache. Then, when the first client calls getOne("B") the client cache faults but the server cache has the data. This is all as one would expect and works well. Now let's call getLots( [ "C", "D" ] ). This works as you would expect by calling getOne() twice but there is a subtlety here. The call to getLots() cannot directly make use of the cache. Therefore the sequence is to call the client interface which in turn calls the remote client, then the remote server and eventually the server interface. This then calls getOne() to fill the list before returning. The problem is that the getOne() calls are being satisfied at the server when ideally they should be satisfied at the client. If you imagine that the client/server link is really slow then it becomes clear why the client call is more efficient than the server call once the client cache has the data. This example is contrived to illustrate the point. The more general problem is that you cannot just keep adding proxied layers to an interface and expect it to work as you would imagine. As soon as the call goes 'through' the proxy any subsequent calls are on the proxied side rather than the 'self' side. Have I failed to learn something, or learned it incorrectly? All this is implemented in Java and I haven't used EJBs. It seems that the example may be confusing. The problem has nothing to do with cache efficiency. It is more to do with an illusion created by the use of proxies, or AOP techniques in general. When you have an object whose class implements an interface there is an assumption that a call on that object might make further calls on that same object. 
    For example, public String getInternalString() throws UnknownHostException { return InetAddress.getLocalHost().toString(); } public String getString() throws UnknownHostException { return getInternalString(); } If you get an object and call getString() the result depends on where the code is running. If you add a remoting proxy to the class then the result could be different for calls to getString() and getInternalString() on the same object. This is because the initial call gets 'deproxied' before the actual method is called. I find this not only confusing, but I also wonder how I can control this behavior, especially as the use of the proxy may be by a third party. The concept is fine but the practice is certainly not what I expected. Have I missed the point somewhere?
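    One way around this (not from the original post) is to keep the fan-out on the client side of the proxy, so that getLots() never crosses the remoting boundary itself. A minimal sketch, assuming the IMyCode and IThing interfaces from the question; the ClientSideFanOut name and the plain HashMap standing in for the real client cache are hypothetical:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Hypothetical client-side wrapper: the fan-out of getLots() happens
        // *before* the remoting proxy, so each getOne() can be satisfied by
        // the client cache instead of travelling to the server.
        class ClientSideFanOut implements IMyCode {
            private final IMyCode remote;                               // the proxied remote interface
            private final Map<String, IThing> cache = new HashMap<>();  // stand-in client cache

            ClientSideFanOut(IMyCode remote) {
                this.remote = remote;
            }

            @Override
            public List<IThing> getLots(List<String> ids) {
                List<IThing> result = new ArrayList<>();
                for (String id : ids) {
                    result.add(getOne(id));                             // stays on the client side
                }
                return result;
            }

            @Override
            public IThing getOne(String id) {
                // cache miss -> one remote call; cache hit -> no proxy traversal at all
                return cache.computeIfAbsent(id, remote::getOne);
            }
        }

    Once the client cache is warm, getLots() never crosses the client/server link at all; the trade-off is that a cold list costs one round trip per id, so a real implementation might batch the misses into a single bulk call against the remote interface.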

    Read the article

  • OIM 11g - Multi Valued attribute reconciliation of a child form

    - by user604275
    This topic gives a brief description of how we can do reconciliation of a child form attribute which is also multi-valued, from a flat file. The format of the flat file is (an example): ManagementDomain1|Entitlement1|DIRECTORY SERVER,EMAIL ManagementDomain2|Entitlement2|EMAIL PROVIDER INSTANCE - UMS,EMAIL VERIFICATION In OIM there will be a parent form for the fields Management Domain and Entitlement. Reconciliation will assign Servers (which are multi-valued) to the corresponding Management Domain and Entitlement. In the flat file, multi-valued fields are separated by a comma (,). In the Design Console, create a form with 'Server Name' as a field and make it a child form. Open the corresponding Resource Object and add this field for reconciliation. While adding, choose the 'Multivalued' check box (please find the attached screenshot on how to add it: Child Table.docx). Open the process definition and add the child form fields for reconciliation. Then click the 'Create Reconciliation Profile' button on the Resource Object tab. The API methods used for child form reconciliation are:
    1. reconEventKey = reconOpsIntf.createReconciliationEvent(resObjName, reconData, false); ('false' here tells that we are creating the recon event for a child table.)
    2. reconOpsIntf.providingAllMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, true); (RECON_FIELD_IN_RO is the field that we added in the Resource Object for reconciliation; please refer to the screenshot.)
    3. reconOpsIntf.addDirectBulkMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, bulkChildDataMapList); where bulkChildDataMapList is coded as below:
        List<Map> bulkChildDataMapList = new ArrayList<Map>();
        for (int i = 0; i < stokens.length; i++) {
            Map<String, String> attributeMap = new HashMap<String, String>();
            String serverName = stokens[i].toUpperCase();
            attributeMap.put("Server Name", serverName);
            bulkChildDataMapList.add(attributeMap);
        }
    4. reconOpsIntf.finishReconciliationEvent(reconEventKey);
    5. reconOpsIntf.processReconciliationEvent(reconEventKey);
    Now we have to register the plug-in, import the metadata into MDS and then create a scheduled job whose execution will run the reconciliation.
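    Put together, the flow for one flat-file line might look like this minimal sketch (hedged: the line parsing, resObjName, reconData and RECON_FIELD_IN_RO are assumed to be set up as described above, and reconOpsIntf is the ReconOperationsIntf service used in the steps):

        // One flat-file line: "ManagementDomain1|Entitlement1|DIRECTORY SERVER,EMAIL"
        String[] fields = line.split("\\|");
        String[] stokens = fields[2].split(",");          // the multi-valued servers

        long reconEventKey = reconOpsIntf.createReconciliationEvent(resObjName, reconData, false);
        reconOpsIntf.providingAllMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, true);

        List<Map> bulkChildDataMapList = new ArrayList<Map>();
        for (String token : stokens) {
            Map<String, String> attributeMap = new HashMap<String, String>();
            attributeMap.put("Server Name", token.toUpperCase());
            bulkChildDataMapList.add(attributeMap);
        }
        reconOpsIntf.addDirectBulkMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, bulkChildDataMapList);

        reconOpsIntf.finishReconciliationEvent(reconEventKey);   // close the event
        reconOpsIntf.processReconciliationEvent(reconEventKey);  // apply the matching rules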

    Read the article

  • UK OUG Conference Highlights and Insights

    - by Richard Bingham
    As per my preemptive post, this was the first time the annual conference organized by the UK Oracle User Group (UKOUG) was split into two events, one for Oracle Applications and another in December for Oracle Technology. Apps13, as it was branded, was hailed as a success, with over 1000 registered attendees and three days of sessions, exhibition, round-tables and many other types of content. As this poster on their stand illustrates, the UKOUG is a strong community with popular participants from both big and small Oracle partners and customers. The venue was a more intimate setting than previous years also, allowing everyone to casually bump into those they hoped to. It gave a real feeling of an Apps Community. The main themes over the days were CRM and Customer Experience, HCM, and FIN/SCM. This allowed people to attend just one focused day if they wanted. In addition the Apps Transformation stream ran across all three days, offering insights, advice, and details on the newer product solutions like Fusion Applications. Here are some of the key take-aways I got from the conference, specific to my role in Fusion Applications Developer Relations: User Experience continues to be a significant reason for adopting some of the newer application products available, with immediately obvious gains in user productivity and satisfaction reported by customers. And this doesn't stop with the baked-in UX either, with the Design Patterns proving popular and indeed currently being extended to include things like ADF Mobile and customizing the Simplified UI. More on this to come from us soon. The executive sessions emphasized the "it's a journey" phrase, illustrating that modern business applications are powered by technologies such as Cloud, Mobile, Social and Big Data and these can be harnessed to help propel your organization forward. Indeed the emphasis is away from the traditional vendor-prescribed linear applications road map, and towards plotting a course based on business priorities supported by a broad range of integrated solutions. To help with this several conference sessions demoed the new "Applications Navigator" tool, developed in partnership with OUG members, which offers a visual framework to help organizations plan their Oracle Applications investments around business and technology imperatives. Initial reaction was positive, especially as customers do not need to decipher Oracle's huge product catalog, and it embeds the best blend of proven and integrated applications solutions. We'll share more on this when it is generally available. Several sessions focused around explanations and interpretation of Oracle OpenWorld 2013, helping highlight the key Oracle Applications messages and directions. With a relatively small percentage of conference attendees also at OpenWorld (from a show of hands) this was a popular way to distill the information available down into specific items of interest for the community. Please note the original OpenWorld 2013 content is still available for download but will not remain available forever (via the Oracle website OpenWorld Content Catalog > pick a session > see the PDF download). With the release of E-Business Suite 12.2 the move to develop and deploy on the Fusion Middleware stack becomes a reality for many Oracle Applications customers. 
    This, coupled with recent E-Business Suite features such as the Integrated SOA Gateway and the E-Business Suite SDK for Java, illustrates how the gap between the technologies and techniques involved in extending E-Business Suite and Fusion Applications is quickly narrowing. We'll see this merging continue to evolve going forward. Getting started with Oracle Cloud Applications is actually easier than many customers expected, with a broad selection of both large and medium-sized organizations explaining how they added new features to their existing Oracle Applications portfolios. New functionality available from Fusion HCM and CX provides popular extensions that do not have to disrupt those core business services. Coexistence is the buzzword here, and the available integration is also simpler than many expected, commonly involving an initial setup data load, then regular incremental synchronizations, often without a need for real-time constant communication between systems. With much of this pre-built already, the implementation process is also quite rapid. With most people dressed in suits, we wanted to get the conversations going without the traditional English reserve, so we decided to make ourselves a bit more obvious, as the photo below shows. This seemed to be quite successful and helped those interested identify and approach us. Keep a lookout for something similar again. In fact if you're in the UK there is an "Apps Transformation Day" planned by the UKOUG for the 19th March 2014, with more details to follow. Again, something we'll be sure to participate in. I am hoping to attend the second half of the UKOUG annual conference, Tech13, which focuses more on Oracle technology and where there is likely to be a larger attendance of those interested in the lower-level aspects of applications customization and development. If you're going, let me know and maybe we can meet up.

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming). Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise. So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on the current state of the entry. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry -- a pessimistic approach. The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be. 
    (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective). If the read is not correct it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system). In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks). The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
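    To make the optimistic pattern concrete, here is a minimal generic sketch (my own illustration using java.util.concurrent rather than Coherence-specific API, with hypothetical names): the read records an assumption about the old value, and the update is rejected, and retried, exactly when that read turns out to be stale.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class OptimisticIncrement {
            // Generic stand-in for a cache of counters keyed by id.
            private final ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<>();

            /** Read optimistically, then commit only if our assumption still holds. */
            public void increment(String key) {
                while (true) {
                    Integer old = cache.get(key);                 // the "fuzzy read"
                    Integer updated = (old == null) ? 1 : old + 1;
                    boolean committed = (old == null)
                            ? cache.putIfAbsent(key, updated) == null
                            : cache.replace(key, old, updated);   // fails on a stale read
                    if (committed) {
                        return;                                   // assumption held
                    }
                    // stale read detected: retry against the current value
                }
            }
        }

    The replace() call is the explicit statement of assumptions: if another thread mutated the entry after the fuzzy read, the compare fails and the loop retries against the current value.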

    Read the article

  • Navigation in a #WP7 application with MVVM Light

    - by Laurent Bugnion
    In MVVM applications, it can be a bit of a challenge to send instructions to the view (for example a page) from a viewmodel. Thankfully, we have good tools at our disposal to help with that. In his excellent series “MVVM Light Toolkit soup to nuts”, Jesse Liberty proposes one approach using the MVVM Light messaging infrastructure. While this works fine, I would like to show here another approach using what I call a “view service”, i.e. an abstracted service that is invoked from the viewmodel, and implemented on the view. Multiple kinds of view services In fact, I use view services quite often, and even started standardizing them for the Windows Phone 7 applications I work on. If there is interest, I will be happy to show other such view services, for example: Animation services, responsible for starting and stopping animations on the view. Dialog service, in charge of displaying messages to the user and gathering feedback. Navigation service, in charge of navigating to a given page directly from the viewmodel. In this article, I will concentrate on the navigation service. The INavigationService interface In most WP7 apps, the navigation service is used in quite a straightforward way. We want to: Navigate to a given URI. Go back. Be notified when a navigation is taking place, and be able to cancel. The INavigationService interface is quite simple indeed: public interface INavigationService { event NavigatingCancelEventHandler Navigating; void NavigateTo(Uri pageUri); void GoBack(); } Obviously, this interface can be extended if necessary, but in most of the apps I worked on, I found that this covers my needs. The NavigationService class It is possible to nicely pack the navigation service into its own class. To do this, we need to remember that all the PhoneApplicationPage instances use the same instance of the navigation service, exposed through their NavigationService property. In fact, in a WP7 application, it is the main frame (RootFrame, of type PhoneApplicationFrame) that is responsible for this task. So, our implementation of the NavigationService class can leverage this. First, the class grabs the PhoneApplicationFrame and stores a reference to it. It also registers a handler for the Navigating event, and forwards the event to the listening viewmodels (if any). Then, the NavigateTo and the GoBack methods are implemented. They are quite simple, because they are in fact just a gateway to the PhoneApplicationFrame. The whole class is as follows: public class NavigationService : INavigationService { private PhoneApplicationFrame _mainFrame; public event NavigatingCancelEventHandler Navigating; public void NavigateTo(Uri pageUri) { if (EnsureMainFrame()) { _mainFrame.Navigate(pageUri); } } public void GoBack() { if (EnsureMainFrame() && _mainFrame.CanGoBack) { _mainFrame.GoBack(); } } private bool EnsureMainFrame() { if (_mainFrame != null) { return true; } _mainFrame = Application.Current.RootVisual as PhoneApplicationFrame; if (_mainFrame != null) { // Could be null if the app runs inside a design tool _mainFrame.Navigating += (s, e) => { if (Navigating != null) { Navigating(s, e); } }; return true; } return false; } } Exposing URIs I find that it is a good practice to expose each page’s URI as a constant. In MVVM Light applications, a good place to do that is the ViewModelLocator, which already acts like a central point of setup for the views and their viewmodels. 
    Note that in some cases, it is necessary to expose the URL as a string, for instance when a query string needs to be passed to the view. So for example we could have: public static readonly Uri MainPageUri = new Uri("/MainPage.xaml", UriKind.Relative); public const string AnotherPageUrl = "/AnotherPage.xaml?param1={0}&param2={1}"; Creating and using the NavigationService Normally, we only need one instance of the NavigationService class. In cases where you use an IOC container, it is easy to simply register a singleton instance. For example, I am using a modified version of a super simple IOC container, and so I can register the navigation service as follows: SimpleIoc.Register<INavigationService, NavigationService>(); Then, it can be resolved where needed with: SimpleIoc.Resolve<INavigationService>(); Or (more frequently), I simply declare a parameter on the viewmodel constructor of type INavigationService and let the IOC container do its magic and inject the instance of the NavigationService when the viewmodel is created. On supported platforms (for example Silverlight 4), it is also possible to use MEF. Or, of course, we can simply instantiate the NavigationService in the ViewModelLocator, and pass this instance as a parameter of the viewmodels’ constructor, injected as a property, etc… Once the instance has been passed to the viewmodel, it can be used, for example with: NavigationService.NavigateTo(ViewModelLocator.ComparisonPageUri); Testing Thanks to the INavigationService interface, navigation can be mocked and tested when the viewmodel is put under unit test. Simply implement and inject a mock class, and assert that the methods are called as they should be by the viewmodel. Conclusion As usual, there are multiple ways to code a solution answering your needs. I find that view services are a really neat way to delegate view-specific responsibilities such as animation, dialogs and of course navigation to other classes through an abstracted interface. In some cases, such as the NavigationService class exposed here, it is even possible to standardize the implementation and pack it in a class library for reuse. I hope that this sample is useful! Happy coding. Laurent

    Read the article

  • Collaborative Whiteboard using WebSocket in GlassFish 4 - Text/JSON and Binary/ArrayBuffer Data Transfer (TOTD #189)

    - by arungupta
    This blog has published a few posts on using the JSR 356 Reference Implementation (Tyrus) as it is integrated in GlassFish 4 promoted builds. TOTD #183: Getting Started with WebSocket in GlassFish TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket TOTD #186: Custom Text and Binary Payloads using WebSocket One of the typical use cases for WebSocket is online collaborative games. This Tip Of The Day (TOTD) explains a sample that can be used to build such games easily. The application is a collaborative whiteboard where different shapes can be drawn in multiple colors. The shapes drawn on one browser are automatically drawn on all other peer browsers that are connected to the same endpoint. The shape, color, and coordinates of the image are transferred using a JSON structure. A browser may opt out of sharing the figures. Alternatively any browser can send a snapshot of their existing whiteboard to all other browsers. Take a look at this video to understand how the application works, and at the underlying code. The complete sample code can be downloaded here. The code behind the application is also explained below. The web page (index.jsp) has an HTML5 Canvas as shown: <canvas id="myCanvas" width="150" height="150" style="border:1px solid #000000;"></canvas> And some radio buttons to choose the color and shape. By default, the shape, color, and coordinates of any figure drawn on the canvas are put in a JSON structure and sent as a message to the WebSocket endpoint. The JSON structure looks like: { "shape": "square", "color": "#FF0000", "coords": { "x": 31.59999942779541, "y": 49.91999053955078 }} The endpoint definition looks like: @WebSocketEndpoint(value = "websocket", encoders = {FigureDecoderEncoder.class}, decoders = {FigureDecoderEncoder.class}) public class Whiteboard { As you can see, the endpoint has a decoder and an encoder registered that decode JSON to a Figure (a POJO class) and vice versa. The decode method looks like: public Figure decode(String string) throws DecodeException { try { JSONObject jsonObject = new JSONObject(string); return new Figure(jsonObject); } catch (JSONException ex) { throw new DecodeException("Error parsing JSON", ex.getMessage(), ex.fillInStackTrace()); }} And the encode method looks like: public String encode(Figure figure) throws EncodeException { return figure.getJson().toString();} FigureDecoderEncoder implements both decoder and encoder functionality, but that's purely for convenience. The recommended design pattern is to keep them in separate classes. In certain cases, you may even need only one of them. On the client side, the Canvas is initialized as: var canvas = document.getElementById("myCanvas"); var context = canvas.getContext("2d"); canvas.addEventListener("click", defineImage, false); The defineImage method constructs the JSON structure as shown above and sends it to the endpoint using websocket.send(). An instant snapshot of the canvas is sent using binary transfer with WebSocket. The WebSocket is initialized as: var wsUri = "ws://localhost:8080/whiteboard/websocket"; var websocket = new WebSocket(wsUri); websocket.binaryType = "arraybuffer"; The important part is to set the binaryType property of WebSocket to arraybuffer. This ensures that any binary transfers using WebSocket are done using ArrayBuffer, as the default type seems to be blob. 
    The actual binary data transfer is done using the following: var image = context.getImageData(0, 0, canvas.width, canvas.height); var buffer = new ArrayBuffer(image.data.length); var bytes = new Uint8Array(buffer); for (var i=0; i<bytes.length; i++) { bytes[i] = image.data[i]; } websocket.send(bytes); This comprehensive sample shows the following features of the JSR 356 API: Annotation-driven endpoints Send/receive text and binary payload in WebSocket Encoders/decoders for custom text payload In addition, it also shows how images can be captured and drawn using HTML5 Canvas in a JSP. How could this be turned into an online game? Imagine drawing a Tic-tac-toe board on the canvas with two players playing and others watching. Then you can build access rights and controls within the application itself. Instead of sending a snapshot of the canvas on demand, a new peer joining the game could automatically be transferred the current state as well. Do you want to build this game? I built a similar game a few years ago. Does somebody want to rewrite the game using the WebSocket API? :-) Many thanks to Jitu and Akshay for helping with the WebSocket internals! Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds) Subsequent blogs will discuss the following topics (not necessarily in that order) ... Error handling Interface-driven WebSocket endpoint Java client API Client and Server configuration Security Subprotocols Extensions Other topics from the API
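    For instance, the broadcast at the heart of the whiteboard, forwarding each received frame to every connected peer, could look like this minimal sketch. Note it is hedged: it uses the final JSR 356 annotation names (@ServerEndpoint/@OnMessage), which differ slightly from the early-draft @WebSocketEndpoint style shown above, and it omits the encoder/decoder registration for brevity.

        import java.io.IOException;
        import javax.websocket.OnMessage;
        import javax.websocket.Session;
        import javax.websocket.server.ServerEndpoint;

        // Minimal broadcast endpoint: every JSON frame received from one peer
        // is forwarded to all open sessions connected to the same endpoint.
        @ServerEndpoint("/websocket")
        public class BroadcastEndpoint {

            @OnMessage
            public void onMessage(String figureJson, Session session) throws IOException {
                for (Session peer : session.getOpenSessions()) {
                    if (peer.isOpen()) {
                        peer.getBasicRemote().sendText(figureJson); // includes the sender
                    }
                }
            }
        }

    Echoing back to the sender as well keeps all canvases, including the originating one, driven by the same code path, which is one way to avoid divergence between peers.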

    Read the article

  • Windows 8/Surface Launch Event Summary

    - by Tim Murphy
    Today was a big day for Microsoft with two separate launch events.  The first was for Windows 8 and all of its hardware partners.  The second was specifically to introduce the Microsoft Windows 8 Surface tablet.  Below are some of the take-aways I got from the webcasts. Windows 8 Launch The three general areas that Microsoft focused on were the release of the OS itself, the public unveiling of the Windows Store and the new devices available from its hardware partners. The release of the OS focused on the fact that it will be available at midnight tonight for both new PCs and for upgrades.  I can’t say that this interested me that much since it was already known to most people.  I think what they did show well was how easy the OS really is to use. The Windows Store is also not a new feature to those of us who have been running the pre-release versions of Windows 8 or have owned Windows Phone 7 for the past 2 years.  What was interesting is that the Windows Store launches with more apps available than any other platform's store at its respective launch.  I think this says a lot about how Microsoft focuses on the ability of developers to create software and make it available.  They of course were sure to emphasize that the Windows Store has better monetary terms for developers than its competitors. They also showed off the fact that Xbox Music streaming is available to all Windows 8 users for free.  Couple this with the Bing suite of apps that give you news, weather, sports and finance right out of the box and I think most people will find the environment a joy to use. I think the hardware demo, while quick and furious, really showed where Windows shines: CHOICE!  They made a statement that over 1000 devices have been certified for Windows 8.  They showed tablets, laptops, desktops, all-in-ones and convertibles.  Since these devices have industry-standard connectors they allow a much wider variety of accessories and devices to be used with them. Steve Ballmer then came on stage and tried to see how many times he could use the word “magical”.  He focused on how the Windows 8 OS is designed to integrate with SkyDrive, Skype and Outlook.com.  He also reinforced that they think Windows 8 is the best choice for the Enterprise when it comes to protecting data and integrating across devices including Windows Phone 8. With that we were left to wait for the second event of the day. Surface Launch The second event of the day started with kids with magnets.  Ok, they were adults, but who doesn’t like playing with magnets?  Steven Sinofsky detached and reattached the Surface keyboard repeatedly, clearly enjoying himself.  It turns out that there are 4 magnets in the cover, 2 for alignment and 2 as connectors. They then went on to give us the details on the display.  The 10.6” display is optically bonded to the case and is optimized to reduce glare.  I think this came through very well in the demonstrations. The properties of the case were also a great selling point.  The VaporMg allowed them to drop the device on stage, on purpose, and continue working.  Of course they had to bring out the skateboards made from Surface devices. “It just has to feel right” was the reason they gave for many of their design decisions, from the weight and size of the device to the way the kickstand and camera work together.  While this gave you the feeling that the whole process was trial and error, you could tell that a lot of science went into the specs.  
    This included making sure that the magnets were strong enough to hold the cover on and still let a 3-year-old remove the cover without effort. I am glad that they also decided that a USB port would be part of the spec, since it gives so many options.  They made the point that this allows Surface to leverage over 420 million existing devices.  That works for me. The last feature that I really thought was important was the microSD port.  Being stuck with the onboard memory has been an aggravation of mine with many of the devices in the market today. I think they did a good job of really getting the audience to understand why you want this platform and this particular device.  They used personal examples like creating a video of a birthday party and being in it, or the fact that the device was being used to live blog the event and control the lights and presentation.  They showed very well that it was not only fun but very capable of getting real work done.  Handing out tablets to the crowd didn’t hurt either.  In the end I really wanted a Surface even though I really have no need for one on a daily basis.  Great job Microsoft! del.icio.us Tags: Windows 8,Win8,Windows 8 Launch

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example the ZFS rootblock is copied to three locations. Other filesystems have similar provisions to protect their metadata. However you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, is pointing to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on them and the device does block-level dedup internally, the device may just deduplicate your redundant metadata down to a single block on the non-volatile storage. When this block is corrupted, you have essentially three corrupted copies. Three hits with one bullet. This is indeed an interesting problem: a device doing deduplication doesn't know if a block is important metadata or just a data block. This is the reason why I like deduplication as it's done in ZFS. It's an integrated part, and so important parts don't get deduplicated away. A disk accessed by a block-level interface doesn't know anything about the importance of a block. A metadata block is nothing different to its inner mechanisms than a normal data block, because there is no way to tell that this one is important and that those redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this in regard to the Sandforce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader. It is relevant whenever you are using a device with block-level deduplication. It's just the point that you have to activate it by command for most implementations, whereas certain devices do this by default or by design and you don't know about it. However, I'm not perfectly sure about that: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys if they have activated dedup on their boxes without telling anybody else, in order to speak less often with the storage sales rep. The problem is even more interesting with ZFS. You may use ditto blocks to store multiple copies of important data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device is doing dedup internally, it may remove your redundancy before it hits the non-volatile storage. You've won nothing. Just spend your disk quota on the LUNs in the SAN and make your disk admin happy because of the good dedup ratio. However, you can only fall into this specific "deduped ditto block" trap when your pool consists of just a single device, because ZFS writes ditto blocks on different disks when there is more than just one disk. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller. 
    However, I have one problem with the article and its specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs this isn't the use case. Most implementations of SSDs in conjunction with ZFS are hybrid storage pools, so rotating-rust disks are used as the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter if one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, and in HSP implementations this is the already-mentioned rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and where you use LUNs for your pool. However, as mentioned before, on those devices it's a user-made decision to do so, and so it's less probable that you are deduplicating your redundancies. Other filesystems lacking a capability similar to hybrid storage pools are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. However, in the end Robin is correct: it's yet another reason why protecting your data by creating redundancies, by dispersing it over several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.

    Read the article

  • If I were in a Silverlight focus group, here are ten things I would say.

    - by mbcrump
    Silverlight is a great product right off the shelf. I use it, love it and spend a lot of time helping the community understand it. This, however, doesn’t mean that I don’t think it can get better. If I were invited to a Microsoft focus group about Silverlight, here are 10 things I would say: We need more navigation templates. I’ve found (4) templates that Microsoft has released (Cosmo, Windows 7, Accent and JetPack). This number needs to be around 16. In order to get more people developing for Silverlight, we need to give them a variety of templates to get them off the ground quickly. Silverlight needs to ship with the next version of Windows. At least version 4 needs to be pre-installed on Windows going forward. It’s small, runs in its own sandbox and I cannot find a reason for it not to be included. Silverlight needs to run on more platforms. iOS and Android are the key here. I think Microsoft should shoot for Android first since I believe Android will take the lead in the mobile market (at least for the short term). It would also be great to see Microsoft use Silverlight as the focus on their new tablets / “AppleTV”. I would even invest in getting it working with Kinect. When creating a new project in Silverlight, we should have the option to create a unit test. Most Silverlight developers are not unit testing. If this is surprising to you then you need to get out and talk to more developers. I partially blame this on Microsoft. When you create a new ASP.NET MVC application, you simply put a check to create a unit test project. We need the same thing for Silverlight. We should steer the developer in the right direction. Design patterns such as MVVM need to be easier to implement in Silverlight solutions. I’d go so far as to say that MVVM Light should ship with Visual Studio. With the project/item templates and code snippets, Laurent puts you in the right direction. This is the way that it should have been: easy for the 9-5 developer to grasp. I believe the majority of developers use code-behind because that’s what is in all the demos provided by Microsoft. They are not trying to write sucky code; it is that they simply don’t know a better way. The XAP files should be obfuscated and unused references deleted by default when in “Release” mode. A better Silverlight experience starts with a smaller XAP file. The less a user has to download, the better, even with the majority of people on broadband. I would also recommend built-in obfuscation by Microsoft. People are paranoid that anyone can rename the .xap to .zip and run it through Reflector. Get rid of the boring install experiences. Here is a great write-up on what I’m talking about. The default “Install Silverlight” and “Loading” screens suck. They suck bad. We need a choice of templates that a professional designer has created. Silverlight needs to support more image formats. For example: it would be great to use .gifs without converting them to .pngs. Switching between Blend 4 and VS2010 to develop a Silverlight application is a pain. Probably one of the biggest issues, and one that I can’t think of a good solution for. It would be nice if VS2012 had the best of both worlds and you never had to leave VS. We need reporting controls with SSRS included with the Silverlight Toolkit. I can’t think of another control that we need built into the toolkit. It would also be helpful to have export to .xls, .pdf and .doc included with the control. I hope that this post will at least get a few people talking. 
    Who knows, Microsoft could be working on these things right now. Thanks for reading!

    Read the article

  • Part 4 of 4 : Tips/Tricks for Silverlight Developers.

    - by mbcrump
    Part 1 | Part 2 | Part 3 | Part 4 I wanted to create a series of blog posts that gets right to the point and is aimed specifically at Silverlight developers. The most important things I want this series to answer are: What is it? Why do I care? How do I do it? I hope that you enjoy this series. Let’s get started: Tip/Trick #16) What is it? Find out version information about Silverlight and which user agent it is running under by going to http://issilverlightinstalled.com/scriptverify/. Why do I care? I’ve had those users where it's just easier to give them a site and say copy/paste the line that says User Agent in order to troubleshoot a Silverlight problem. I’ve also been debugging my own Silverlight applications and needed an easy way to determine if the plugin is disabled or not. How do I do it: Simply navigate to http://issilverlightinstalled.com/scriptverify/ and hit the Verify button. An example screenshot is located below: Results from Chrome 7 Results from Internet Explorer 8 (With Silverlight Disabled) Tip/Trick #17) What is it? Use lambdas whenever you can. Why do I care? It is my personal opinion that code is easier to read using lambdas, after you get past the syntax. How do I do it: For example, you may write code like the following: void MainPage_Loaded(object sender, RoutedEventArgs e) { //Check and see if we have a newer .XAP file on the server Application.Current.CheckAndDownloadUpdateAsync(); Application.Current.CheckAndDownloadUpdateCompleted += new CheckAndDownloadUpdateCompletedEventHandler(Current_CheckAndDownloadUpdateCompleted); } void Current_CheckAndDownloadUpdateCompleted(object sender, CheckAndDownloadUpdateCompletedEventArgs e) { if (e.UpdateAvailable) { MessageBox.Show( "An update has been installed. To see the updates please exit and restart the application"); } } To me this style forces me to look for the other method to see what the code is actually doing. The style located below is much easier to read in my opinion and does the exact same thing. void MainPage_Loaded(object sender, RoutedEventArgs e) { //Check and see if we have a newer .XAP file on the server Application.Current.CheckAndDownloadUpdateAsync(); Application.Current.CheckAndDownloadUpdateCompleted += (s, args) => { if (args.UpdateAvailable) { MessageBox.Show( "An update has been installed. To see the updates please exit and restart the application"); } }; } Tip/Trick #18) What is it? Prevent development web service references from breaking when Visual Studio auto-generates a new port number. Why do I care? We have all been there: we are developing a Silverlight application and all of a sudden our development web services break. We check and find out that the local port number that Visual Studio assigned has changed and now we need to update all of our service references. We need a way to stop this. How do I do it: This can actually be prevented with just a few mouse clicks. Right-click on your web solution and go to Properties. Click the tab that says Web. You just need to click the radio button and specify a port number. Now you won’t be bothered with that anymore. Tip/Trick #19) What is it? You can disable the Close button on a ChildWindow. Why do I care? I wouldn’t blog about it if I hadn’t seen it: devs trying to override keystrokes to prevent users from closing a ChildWindow. How do I do it: A property exists on the ChildWindow called “HasCloseButton”; simply change it to false and your close button is gone. 
    You can delete the “Cancel” button and add some logic to the OK button if you want the user to respond before proceeding. Tip/Trick #20) What is it? Clean up your XAML. Why do I care? By removing unneeded namespaces, not naming all of your controls and getting rid of designer markup you can improve code quality and readability. How do I do it: (This is a 3-in-one tip) Remove unused designer markup: 1) Have you ever wondered what the following code snippet does? xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" This code is telling the designer to do something special with this page in design mode, specifically the width and the height of the page. When it's running in the browser this information is not used and is actually ignored by the XAML parser. In other words, if you don’t need it then delete it. 2) If you are not using a namespace then remove it. In the code sample below, I am using ReSharper, which tells me the ones that I'm not using via the grayed-out lines. If you don’t have ReSharper you can look in your XAML and manually remove the unneeded namespaces. 3) Don’t name a control unless you actually need to refer to it in procedural code. If you name a control you will take a slight performance hit that is totally unnecessary if it's not being referenced. <TextBlock Height="23" Text="TextBlock" /> That is the end of the series. I hope that you enjoyed it and please check out Part 1 | Part 2 | Part 3 if you're hungry for more.

    Read the article

  • Viewing the NetBeans Central Registry (Part 2)

    - by Geertjan
    Jens Hofschröer, who has one of the very best NetBeans Platform blogs (if you more or less understand German), and who wrote, some time ago, the initial version of the Import Statement Organizer, as well as being the main developer of a great gear design & manufacturing tool on the NetBeans Platform in Aachen, commented on my recent blog entry "Viewing the NetBeans Central Registry", where the root Node of the Central Registry is shown in a BeanTreeView, with the words: "I wrapped that Node in a FilterNode to provide the 'position' attribute and the 'file extension'. All Children are wrapped too. Then I used an OutlineView to show these two properties. Great tool to find wrong layer entries." I asked him for the code he describes above and he sent it to me. He discussed it here in his blog, while all the code involved can be read below. The result is as follows, where you can see that the OutlineView shows information that my simple implementation (via a BeanTreeView) kept hidden: And so here is the definition of the Node. class LayerPropertiesNode extends FilterNode { public LayerPropertiesNode(Node node) { super(node, isFolder(node) ? Children.create(new LayerPropertiesFactory(node), true) : Children.LEAF); } private static boolean isFolder(Node node) { return null != node.getLookup().lookup(DataFolder.class); } @Override public String getDisplayName() { return getLookup().lookup(FileObject.class).getName(); } @Override public Image getIcon(int type) { FileObject fo = getLookup().lookup(FileObject.class); try { DataObject data = DataObject.find(fo); return data.getNodeDelegate().getIcon(type); } catch (DataObjectNotFoundException ex) { Exceptions.printStackTrace(ex); } return super.getIcon(type); } @Override public Image getOpenedIcon(int type) { return getIcon(type); } @Override public PropertySet[] getPropertySets() { Sheet.Set set = Sheet.createPropertiesSet(); set.put(new PropertySupport.ReadOnly<Integer>( "position", Integer.class, "Position", null) { @Override public Integer getValue() throws IllegalAccessException, InvocationTargetException { FileObject fileEntry = getLookup().lookup(FileObject.class); Integer posValue = (Integer) fileEntry.getAttribute("position"); return posValue != null ? posValue : Integer.valueOf(0); } }); set.put(new PropertySupport.ReadOnly<String>( "ext", String.class, "Extension", null) { @Override public String getValue() throws IllegalAccessException, InvocationTargetException { FileObject fileEntry = getLookup().lookup(FileObject.class); return fileEntry.getExt(); } }); PropertySet[] original = super.getPropertySets(); PropertySet[] withLayer = new PropertySet[original.length + 1]; System.arraycopy(original, 0, withLayer, 0, original.length); withLayer[withLayer.length - 1] = set; return withLayer; } private static class LayerPropertiesFactory extends ChildFactory<FileObject> { private final Node context; public LayerPropertiesFactory(Node context) { this.context = context; } @Override protected boolean createKeys(List<FileObject> list) { FileObject folder = context.getLookup().lookup(FileObject.class); FileObject[] children = folder.getChildren(); List<FileObject> ordered = FileUtil.getOrder(Arrays.asList(children), false); list.addAll(ordered); return true; } @Override protected Node createNodeForKey(FileObject key) { AbstractNode node = new AbstractNode(org.openide.nodes.Children.LEAF, key.isFolder() ? 
Lookups.fixed(key, DataFolder.findFolder(key)) : Lookups.singleton(key)); return new LayerPropertiesNode(node); } } } Then here is the definition of the Action, which pops up a JPanel, displaying an OutlineView: @ActionID(category = "Tools", id = "de.nigjo.nb.layerview.LayerViewAction") @ActionRegistration(displayName = "#CTL_LayerViewAction") @ActionReferences({ @ActionReference(path = "Menu/Tools", position = 1450, separatorBefore = 1425) }) @Messages("CTL_LayerViewAction=Display XML Layer") public final class LayerViewAction implements ActionListener { @Override public void actionPerformed(ActionEvent e) { try { Node node = DataObject.find(FileUtil.getConfigRoot()).getNodeDelegate(); node = new LayerPropertiesNode(node); node = new FilterNode(node) { @Override public Component getCustomizer() { LayerView view = new LayerView(); view.getExplorerManager().setRootContext(this); return view; } @Override public boolean hasCustomizer() { return true; } }; NodeOperation.getDefault().customize(node); } catch (DataObjectNotFoundException ex) { Exceptions.printStackTrace(ex); } } private static class LayerView extends JPanel implements ExplorerManager.Provider { private final ExplorerManager em; public LayerView() { super(new BorderLayout()); em = new ExplorerManager(); OutlineView view = new OutlineView("entry"); view.addPropertyColumn("position", "Position"); view.addPropertyColumn("ext", "Extension"); add(view); } @Override public ExplorerManager getExplorerManager() { return em; } } }
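    If you only want the ordered entries and their position attributes without any UI, a minimal headless sketch along the same lines (my own illustration, using just the standard FileObject/FileUtil API) could be:

        import java.util.Arrays;
        import org.openide.filesystems.FileObject;
        import org.openide.filesystems.FileUtil;

        // Dump each entry of a layer folder with its 'position' attribute,
        // in the same order the Central Registry resolves it.
        public class LayerDump {
            public static void dump(FileObject folder) {
                for (FileObject fo : FileUtil.getOrder(Arrays.asList(folder.getChildren()), false)) {
                    Object position = fo.getAttribute("position"); // null if unset
                    System.out.println(fo.getPath() + " [position=" + (position != null ? position : 0) + "]");
                    if (fo.isFolder()) {
                        dump(fo); // recurse into subfolders
                    }
                }
            }

            public static void main(String[] args) {
                // Only meaningful inside a running NetBeans Platform application,
                // where the layer-backed config filesystem is populated.
                dump(FileUtil.getConfigRoot());
            }
        }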

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer [closed]

    - by Superbest
    I don't have much formal training in programming and I have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which ended up both being difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them. Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium-sized programs (development time of weeks or months and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources). I have found this tutorial which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as: What parts of my application and what sorts of things aren't necessarily worth testing? How fine-grained should my tests be? Should they test every method and property separately, or work with a larger scope? What is a good naming convention for test methods? (since apparently the name of the method is the only way I will be able to tell from a glance at the test results table what works in my program and what doesn't) Is it bad to have many asserts in one test method? Since apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing. If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now? If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset or how to simulate it? Closely related to the previous point, if a class is such that its main operation happens in a state that is arrived at by the program after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing that class and all the other code that brings it to that state along with it? In general, what kind of approach should I use for test initialization? (hopefully that is the correct term; I mean preparing classes for testing by filling them in with appropriate data) How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas? Is there anything like design patterns concerning testing specifically? 
    I guess the main themes in what I'd like to learn more about are: (1) what are the overarching principles that should be followed (or at least considered) in every testing effort, and (2) what are popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly I've found ReSharper's suggestions on member naming style and other things to be quite helpful in keeping my codebases sane. I see many resources both online and in print that talk about testing in the context of large applications (years of work, 10s of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real-world problems. Speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors. Summary So my question is: What are some resources to learn about unit testing, for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory, and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and then amending discrepancies if it doesn't.

    Read the article

  • The architecture and technologies to use for a secure, fast, reliable and easily scalable web application

    - by DSoul
    ^ For actual questions, skip to the lists down below.

    I understand that this is a vague topic, but please hear me out before you turn the other way and disregard me. I am currently doing research for a web application (I don't know if "application" is the correct word for it, but I will proceed with that for now) that one day might need to be everything mentioned in the title. I am bound by nothing: every language, OS and framework is acceptable, but only if it proves its usefulness. And if you are going to say that scalability and speed depend on the code I write for this application, then I agree, but I am just trying to find something that won't stand in my way later on. I have done quite a bit of reading on this subject, but I still don't have a clear picture of what suits my needs, so I come to you for directions. I know you must all be wondering what I'm building, but I assure you that it doesn't matter. I have heard of the twelve-factor app, though; if you have any similar guidelines to suggest, please go ahead. For the sake of keeping your answers as open as possible, I'm not going to describe my experience with anything mentioned in this question.

    ^ Skippers, start here

    First off, the weights of the requirements are roughly as follows (on a scale of 10):

    - Security: 10
    - Speed: 5
    - Reliability (concurrency): 7.5
    - Scalability: 10

    Speed and concurrency are not top priorities, in the sense that the program can be CPU-intensive (and therefore slow) and accept only a modest number of concurrent users, but both of these factors must be improvable by scaling the system.

    Anyway, here are my questions:

    1. How many layers should the application have, so that it is future-proof and can best fulfill the aforementioned requirements? For now, what I have in mind is the most common version:
    - A completely separated front end, which might be a web page, an MMI application, or both.
    - Some middleware handling communication between the front end and the back end. This is probably a server that communicates with the front end via HTTP; how communication with the back end should be handled probably depends on the back end.
    - The back end: something that handles data through resources such as a DB and performs various computations with the data. This, as the highest-priority part of the software, must be easy to spread across multiple computers later on and have no known security holes. I think the middleware should ideally send a request to a queue, from which one of the back-end processes takes the request, chops it up into smaller parts, and puts those parts back onto the same queue as the initial request, after which the parts are handled by other back-end processes. Something *map-reduce*-y, so to say. (A rough sketch of this queue pattern appears at the end of this question.)

    2. What frameworks, languages, etc. should these layers use? The exact technologies are not that important at this moment, so you can ignore this part for now. I've been pointed to node.js for the middleware. Do you know any better alternatives, or have any reasons why I should (not) use node.js for this particular job? I actually have no good idea what to use here; there are too many options out there, so please direct me. This part (and I think the back end, too) depends a lot on the OS, so suggest any OSs alongside the technologies/frameworks. Initially, all computers hosting the back end (or just one, for starters) are going to be virtual machines.
    Please do give suggestions on any part of the question you feel you have comprehensive knowledge of and/or experience with. Also point out if you feel that any part of the current set-up means instant (or even eventual) failure, or if I have missed a very important aspect to consider. I'm not looking for a definitive answer on how to achieve my goals, because there certainly isn't one, given that I haven't provided you with all the required information; I'm just looking for recommendations and directions on what to look into. Also, bear in mind that this isn't something I have to get done quickly in order to sell it and let it be rewritten by the new owner (which, I've been told multiple times, is what I should aim for). I have all the time in the world, and I really just want to learn to build something really high-end. Also, excuse me if my language isn't the best; I'm not a native speaker. Anyway, thanks in advance to anyone who takes the time to help me out here.

    PS. When I do come up with a good architecture/design for this project, I will certainly make it an open project and keep you up to date with its development, including what you could have told me earlier, etc. For obvious reasons the very same question got closed on SO, but could you still help me here?
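    As a rough illustration of the queue pattern described above (the middleware enqueues a request; a worker splits it into parts and puts the parts back onto the same queue), here is a minimal single-process sketch. It is written in Java with an in-memory queue purely for illustration; the real system would run many workers against a shared, durable queue across machines, and every name here is made up:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class WorkQueueSketch {

            // A task either does real work or splits itself into smaller
            // tasks that go back onto the same queue.
            interface Task {
                void run(BlockingQueue<Task> queue) throws InterruptedException;
            }

            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<Task> queue = new LinkedBlockingQueue<Task>();

                // The middleware's initial request: a composite task that
                // fans out into three parts (a crude map step).
                queue.put(new Task() {
                    public void run(BlockingQueue<Task> q) throws InterruptedException {
                        for (int i = 0; i < 3; i++) {
                            final int part = i;
                            q.put(new Task() {
                                public void run(BlockingQueue<Task> ignored) {
                                    System.out.println("processed part " + part);
                                }
                            });
                        }
                    }
                });

                // A single worker loop; a real deployment would run many
                // such workers, each pulling from the shared queue.
                while (!queue.isEmpty()) {
                    queue.take().run(queue);
                }
            }
        }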

    Read the article

  • From Spring to Java EE 6: migration best practices from Java Developers Workshop 2012 Summer

    - by ???02
    (Translated from Japanese; the source text is partly corrupted by mis-encoding, so only the recoverable content is given here.)

    Java EE 6 now provides, as standards, much of the functionality that developers once adopted the Spring Framework to get, and interest in migrating existing Spring applications to Java EE 6 is growing. At the Java Developers Workshop 2012 Summer, held in August for Java developers working in enterprise IT, a session titled "Best Practices for Migrating Spring to Java EE 6" presented concrete guidance for such migrations. This article reports on that session.

    Spring grew out of the J2EE 1.4 era and the ideas in its founder's book "J2EE Design and Development". In the roughly eight years since, Java EE has absorbed many of the approaches Spring pioneered, while many production Spring applications are now five to ten years old. For applications that will continue to evolve, the session argued, Java EE 6 is worth considering as a standards-based foundation, though each application's current state should be weighed before deciding to migrate.

    The session cited four advantages of Java EE 6:
    1. Lightweight deployment: a Java EE 6 application packaged as a WAR and deployed on GlassFish can be very small, on the order of 100KB, because the container supplies the infrastructure that would otherwise be bundled with the application.
    2. Dependency Injection: the DI style Spring popularized is standardized in Java EE 6 as CDI (Contexts and Dependency Injection).
    3. Aspect Oriented Programming: the cross-cutting concerns commonly handled with Spring AOP can be covered by Java EE 6 Interceptors.
    4. Tooling: Java EE 6 technologies such as EJB no longer require special IDE support; standard IDEs such as NetBeans and Eclipse handle them out of the box.

    For integration testing during and after migration, the session recommended Arquillian, which runs JUnit-based tests inside a real container such as GlassFish or JBoss.

    The migration itself was presented as five steps:
    Step 1: Modernize within Spring. Older Spring applications built on XML configuration, proprietary O/R mapping and Spring Web MVC can first move to Spring 3.x, which supports Java EE 6 APIs such as JPA and JSF; adopting those APIs while still on Spring shortens the later migration.
    Step 2: Replace aging presentation and persistence layers with their Java EE 6 counterparts, such as JSF for presentation and JPA for persistence.
    Step 3: Run Spring and Java EE 6 side by side. A single WAR can contain both Spring Beans and CDI beans or EJB Session Beans, and EJBs can be injected into Spring-managed components, so the two stacks can coexist during the transition; in CDI, injection points are expressed with @Inject.
    Step 4: Replace Spring's internals. Spring DAOs built on Spring JDBC are replaced with EJB and JPA equivalents. Where Spring's JDBC Templates are still wanted (they remain a convenient thin layer over JDBC when full O/R mapping is overkill), they can be kept and exposed to CDI through a producer (a "Template Producer") so that a JDBC Template can be injected like any other CDI bean.
    Step 5: Remove Spring. Finally, the Spring dependencies are removed from pom.xml and the application ships as a pure Java EE 6 WAR.

    Two questions followed the session. One concerned the long lifespans, around ten years, of enterprise applications and of the platforms beneath them. The other concerned Spring's position as a de facto standard versus the JCP-governed Java EE standards; the speaker pointed to Hibernate and JPA as an example of how de facto technologies find their way into the standards.

    The session materials are available from the Oracle Technology Network content index, under "Java Developers Workshop 2012 Summer - Java EE". Downloading them requires an oracle.com account, which can be registered free of charge.
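    Step 4's "Template Producer" idea can be sketched as follows. This is a minimal, hypothetical example, not code from the session; it assumes Spring's JdbcTemplate plus a container-managed DataSource whose JNDI name (jdbc/AppDS) is invented for illustration:

        import javax.annotation.Resource;
        import javax.enterprise.context.ApplicationScoped;
        import javax.enterprise.inject.Produces;
        import javax.sql.DataSource;
        import org.springframework.jdbc.core.JdbcTemplate;

        // CDI producer that exposes Spring's JdbcTemplate as an injectable
        // bean, so code migrated to Java EE 6 can keep using it via @Inject.
        @ApplicationScoped
        public class JdbcTemplateProducer {

            // DataSource managed by the application server, looked up by
            // the (illustrative) JNDI name jdbc/AppDS.
            @Resource(lookup = "jdbc/AppDS")
            private DataSource dataSource;

            @Produces
            @ApplicationScoped
            public JdbcTemplate produceJdbcTemplate() {
                return new JdbcTemplate(dataSource);
            }
        }

    Any CDI bean in the same WAR could then declare "@Inject JdbcTemplate jdbc;" and receive the produced instance, without depending on the Spring container itself.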

    Read the article
