Search Results

Search found 2471 results on 99 pages for 'tyler smart'.


  • Developer’s Life – Summary of Superhero Articles

    - by Pinal Dave
    Earlier this year, I wrote an article series where I talked about a developer's life and compared it with superheroes. I have received an amazing response to this series and quite a lot of emails suggesting that I should write more blog posts about them. Currently I am not planning to write more, but I will soon continue with another series. In this blog post, I have summarized the entire series. Let me know if you want me to write about any superhero; I will see what I can do about that hero. Developer's Life – Every Developer is a Captain America Captain America was first created as a comic book character in the 1940's as a way to boost morale during World War II.  Aimed at a children's audience, his legacy faded away when the war ended.  However, he has recently had a major reboot to become a popular movie character that deals with modern issues. Developer's Life – Every Developer is the Incredible Hulk The Incredible Hulk is possibly one of the scariest superheroes out there.  All superheroes are meant to be "out of this world" and awe-inspiring, but I think most people will agree when I say The Hulk takes this to the next level.  He is the result of an industrial accident, which is scary enough in its own right.  Plus, when mild-mannered Bruce Banner is angered, he goes completely out of control and transforms into a destructive monster that he cannot control and has no memories of. Developer's Life – Every Developer is a Wonder Woman We have focused a lot lately on this "superhero series."  I love fantasy books and movies, and I feel like there is a lot to be learned from them.  As I am writing this series, though, I have noticed that every superhero I write about is a man.  So today, I would like to talk about the major female superhero – Wonder Woman. Developer's Life – Every Developer is a Harry Potter Harry Potter might not be a superhero in the traditional sense, but I believe he still has a lot to teach us and show us about life as a developer.  If you have been living under a rock for the last 17 years, you might not know that Harry Potter is the main character in an extremely popular series of books and movies documenting the education and tribulations of a young wizard (and his friends). Developer's Life – Every Developer is Like Transformers Transformers may not be superheroes – they don't wear capes, they don't have amazing powers outside of their size and folding ability, they're not even human (technically).  Part of their enduring popularity is that while we are enjoying over-the-top movies, we are learning about good leadership and strong personal skills. Developer's Life – Every Developer is an Iron Man Iron Man is another superhero who is not naturally "super," but relies on his brain (and money) to turn him into a fighting machine.  While traditional superheroes are still popular, a three-movie franchise and incorporation into the new Avengers series shows that Iron Man is popular enough on his own. Developer's Life – Every Developer is a Sherlock Holmes I have been thinking a lot about how developers are like superheroes, and I have written two blog posts now comparing them to Spiderman and Superman.  I have a lot of love and respect for developers, and I hope that they are enjoying these articles, and others are learning a little bit about the profession.  There is another fictional character who, while not technically a superhero, is very powerful, and I also think stands as a good example of a developer. That character is Sherlock Holmes.
    Sherlock Holmes is a British detective, first made popular at the turn of the 19th century by author Sir Arthur Conan Doyle.  The original Sherlock Holmes was a brilliant detective who could solve the most mind-boggling crime through simple observations and deduction. Developer's Life – Every Developer is a Chhota Bheem Chhota Bheem is a cartoon character that is extremely popular where I live.  He is my daughter's favorite character.  I like to say that children love Chhota Bheem more than their parents – it is lucky for us he is not real!  Children love Chhota Bheem because he is the absolute "good guy."  He is smart, loyal, and strong.  He and his friends live in Dholakpur and fight off their many enemies – and always win – in every episode.  In each episode, they learn something about friendship, bravery, and being kind to others.  Chhota Bheem is a good role model for children, and I think that he is a good role model for developers as well. Developer's Life – Every Developer is a Batman Batman is one of the darkest superheroes in the fantasy canon.  He does not come to his powers through any sort of magical coincidence or radioactive insect, but through a lot of psychological scarring caused by witnessing the death of his parents.  Despite his dark back story, he possesses a lot of admirable abilities that I feel bear comparison to developers. Developer's Life – Every Developer is a Superman I enjoyed comparing developers to Spiderman so much that I have decided to continue the trend and encourage some of my favorite people (developers) with another favorite superhero – Superman.  Superman is probably the most famous superhero – and one of the most inspiring. Developer's Life – Every Developer is a Spiderman I have to admit, Spiderman is my favorite superhero.  The most recent movie was just released in theaters, so it has been at the front of my mind for some time. Spiderman was my favorite superhero even before the latest movie came out, but of course I took my whole family to see the movie as soon as I could!  Every one of us loved it, including my daughter.  We all left the movie thinking how great it would be to be Spiderman.  So, with that in mind, I started thinking about how we are like Spiderman in our everyday lives, especially developers. I would like to know which superhero is your favorite hero! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Developer, Superhero

    Read the article

  • Adobe Photoshop Vs Lightroom Vs Aperture

    - by Aditi
    Adobe Photoshop is the standard choice for photographers, graphic artists and Web designers. Adobe Photoshop Lightroom and Apple's Aperture are in the same league, but the usage is vastly different. Although Photoshop is the most popular tool and the most widely used by photographers, in many ways it's less relevant to photographers than ever before, as Lightroom and Aperture are aimed squarely at photographers for photo processing. With this write-up we are going to help you choose what is right for you and why. Adobe Photoshop Adobe Photoshop is the best-liked tool for detailed photo editing and design work. Photoshop provides great features for rollovers and image slicing. Adobe Photoshop includes comprehensive optimization features for producing the highest quality Web graphics with the smallest possible file sizes. You can also create startling animations with it. Designers and editors know how important precise masking is; Photoshop lets you do that with various detailing tools. Art history brush, contact sheets, and the history palette are some of the smart features that add to its viability. Download Whether you're producing printed pages or moving images, you can work more efficiently and produce better results because of its smooth integration across other Adobe applications. By supporting layer effects, it allows you to quickly add drop shadows, inner and outer glows, bevels, and embossing to layers. It also provides a seamless Web graphics workflow. Photoshop is hands-down the BEST for editing. Photoshop Cons: • Slower, less precise editing features in Bridge • Processing lots of images requires actions and can be slower than exporting images from Lightroom • Much slower with editing and processing a large number of images Aperture Apple Aperture is aimed at the professional photographer who shoots predominantly raw files. It helps them to manage their workflow and perform their initial raw conversion in a better way. Aperture provides adjustment tools such as a histogram to modify color and white balance, but most of the editing of photos is left for Photoshop. It gives users the option of seeing their photographs laid out like slides or negatives on a light table. It boasts stars, color-coding and easy techniques for filtering and picking images. Aperture has moved a few steps ahead of Photoshop on workflow, but most of the editing work has been left to Photoshop, as it features seamless Photoshop integration. Aperture Pros: Aperture is a step up from the iPhoto software that comes with every Mac, and fairly easy to learn. Adjustments are made in a logical order from top to bottom of the menu. You can store the images in a library or any folder you choose. Aperture also works really well with direct Canon files. It is just $79 if you buy it through Apple's App Store. Moving forward, it will run on the iPad, and possibly the iPhone – Adobe products like Lightroom and Photoshop may never offer these options. It has a much nicer and simpler user interface. Lightroom Lightroom does a smashing job of basic fixing and editing. It is a more advanced tool for photographers. They can use it to achieve startling photographic effects. Lightroom has many advanced features, which make it one of the best tools for photographers and put it far ahead of the other two. They are: Nondestructive editing – nothing is actually changed in an image until the photo is exported. Better controls for organizing your photos – Lightroom helps to gather a group of photos to use in a slideshow.
    Lightroom has larger Compare and Survey views of images. A quickly customizable interface – simple keystrokes allow you to perform different tasks, and all Lightroom controls are kept available in panels right next to the photos. An always-available History palette – it doesn't go away when you close Lightroom. You gain more colors to work with compared to Photoshop, and with more precise control. Local control, or adjusting small parts of a photo without affecting anything else, has long been an important part of photography. In Lightroom 2, you can darken, lighten, and affect color and change sharpness and other aspects of specific areas in the photo simply by brushing your cursor across the areas. Photoshop has far more power in its Cloning and Healing Brush tools than Lightroom, but Lightroom offers simple cloning and healing that's nondestructive. Lightroom supports the RAW formats of more cameras than Aperture. Lightroom provides the option of storing images outside the application in the file system. It costs less than Photoshop. Download Why is Photoshop more advanced than Lightroom? There are countless image-processing plug-ins on the market for doing specialized processing in Photoshop. For example, if your image needs sophisticated noise reduction, you can use the Noiseware plug-in with Photoshop to do a much better job of noise removal than Lightroom can do. Lightroom's advantages over Aperture 3: It will always have better integration with Photoshop. Lightroom is backed by a bigger and more active user community (so there is an abundant supply of tutorials, etc.). A better noise reduction tool. The lens-distortion correction tool is perfect, especially for photographers. Lightroom Cons: • Have to import images to work on them • Slows down with over 10,000 images in the catalog • For processing just one or two images this is a slower workflow Photoshop Pros: • ACR has the same RAW processing controls as Lightroom • ACR Histogram is specialized to the chosen color space (Lightroom is locked into the ProPhoto RGB color space with an sRGB tone curve) • Don't have to import images to open in Bridge or ACR • Ability to customize processing of RAW images with Photoshop Actions Pricing and Availability Get Lightroom Get Photoshop The latest version of Photoshop can be purchased from the Adobe store or an Adobe authorized reseller and costs US$999. The latest version of Aperture can be bought for US$199 from the Apple Online Store or the Mac App Store. You can buy the latest version of Lightroom from the Adobe store or an Adobe authorized reseller for US$299. Related posts: Adobe Photoshop CS5 vs Photoshop CS5 Extended, Web-based Alternatives to Photoshop, 10 Free Alternatives for Adobe Photoshop Software

    Read the article

  • MySQL 5.5 brings in new ways to authenticate users

    - by Georgi Kodinov
    Ever wanted to use your server's OS for authenticating MySQL users? Or the corporate LDAP repository? Options like the above are plentiful nowadays, and providing hard-coded support for protocol X or service Y is not the best possible idea. MySQL 5.5 has taken a step in the right direction by providing an infrastructure that allows one to make the server understand different authentication protocols by creating a set of simple plugins (one for the client and one for the server). So now you can easily extend MySQL to search for and authenticate users in your favorite user directory. In fact, the API supplied is so versatile that we took the opportunity to re-design the current "native" authentication mechanism into a built-in, always-on plugin! OK, let me give you an example: Imagine we have a bunch of users defined in your OS, e.g. we have a user joro with his respective password. And we have a MySQL instance running on the same computer. It would not be unexpected to need to let joro access and/or modify MySQL data. The first step is to define him as a MySQL user. And there's a problem right there: MySQL's CREATE USER joro@localhost IDENTIFIED BY 'joros_password' statement needs a password. And this is a password in no way related to the password that joro has set up in the OS. What's worse: if joro changes his OS password, this will in no way be reflected in MySQL. So he'll need to change his MySQL password in a separate step. Not very convenient, especially when you have a lot of users. This is a laborious setup for joro's DBA as well: he'll have to disable joro's access in both MySQL and the OS should he decide that joro's out of the "nice" list. Now MySQL 5.5 comes to the rescue: Imagine that the smart DBA has created a MySQL server plugin that will check if the name of the user logging in is a valid and enabled OS name and if the password supplied to the mysql client matches the OS password, and has called this plugin 'auth_os'. Now all that's left to do is to define joro as a MySQL user that will be authenticated externally. This is done by the following command: CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os'; Now joro can log in to MySQL using his current OS password. Note: joro is still a valid MySQL user, so you can grant privileges to him just like you would for all other users. What's better: you can have users that authenticate using different mechanisms in the same server. So you can, e.g., safely experiment with external authentication for selected users while keeping your current user base operational. What happens under the hood when joro logs in? The server will find out from the user definition that it needs to use a non-default authentication and will ask the client to "switch" to using the appropriate client-side plugin (if of course the client is not already using it). If the client can't do this (e.g. because it's an old client or doesn't have the necessary plugin available) the server will reject the login. Otherwise the server will let the server-side plugin decide (while possibly talking to the client-side plugin and the OS user directory) if this is a valid login or not. If it is, the login process will continue as usual, while if it's not, the login will get rejected. There's a lot more that MySQL 5.5 can do for you than just the simple case above.
    Stay tuned for more advanced use cases like mapping groups of external users to a single MySQL user (so you won't have to have a 1-to-1 mapping between your external user directory and your MySQL user repository) or ways to control the process as a DBA. Or you can simply skip ahead and read the relevant topics in MySQL's excellent online documentation. Or take a look at the example plugins in plugin/auth. Or take a look at the test suite in mysql-test/t/plugin_auth.test. Changelog entry: http://dev.mysql.com/doc/refman/5.5/en/news-5-5-7.html Primary new sections: Pluggable authentication; Proxy users; Client plugin C API functions. Revised sections: New PROXY privilege; New proxies_priv grant table; Passwords might be external; New external_user and proxy_user system variables; New --default-auth and --plugin-dir mysql options; New MYSQL_DEFAULT_AUTH and MYSQL_PLUGIN_DIR options for mysql_options(); CREATE USER has an IDENTIFIED WITH clause to specify the auth plugin; GRANT has a PROXY privilege and an IDENTIFIED WITH clause to specify the auth plugin; The data structure for writing client plugins.
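    To make the simple case concrete, here is a minimal SQL sketch of the setup described above. The 'auth_os' plugin, its shared-object name, and the proxy mapping are illustrative assumptions, not shipped artifacts:
        -- Install the (hypothetical) server-side authentication plugin.
        INSTALL PLUGIN auth_os SONAME 'auth_os.so';
        -- joro authenticates against the OS; no MySQL password is stored.
        CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os';
        GRANT SELECT, INSERT ON appdb.* TO 'joro'@'localhost';
        -- A taste of the proxy-user mapping teased above: many external
        -- users act with the privileges of one MySQL account.
        CREATE USER 'external'@'localhost' IDENTIFIED WITH 'auth_os';
        CREATE USER 'app_rw'@'localhost' IDENTIFIED BY 'app_rw_password';
        GRANT PROXY ON 'app_rw'@'localhost' TO 'external'@'localhost';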

    Read the article

  • jQuery 1.4 Opacity and IE Filters

    - by Rick Strahl
    Ran into a small problem today with my client-side jQuery library after switching to jQuery 1.4: a problem with a shadow plugin that I use to provide drop shadows for absolute elements – for Mozilla and WebKit browsers the -moz-box-shadow and -webkit-box-shadow CSS attributes are used, but for IE a manual element is created to provide the shadow that underlays the original element, along with a blur filter to provide the fuzziness in the shadow. Some of the key pieces are: var vis = el.is(":visible"); if (!vis) el.show(); // must be visible to get .position var pos = el.position(); if (typeof shEl.style.filter == "string") sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')'); sh.show() .css({ position: "absolute", width: el.outerWidth(), height: el.outerHeight(), opacity: opt.opacity, background: opt.color, left: pos.left + opt.offset, top: pos.top + opt.offset }); This has always worked in previous versions of jQuery, but with 1.4 the original filter no longer works. It appears that applying the opacity after the original filter wipes out the original filter. IOW, the opacity filter is not applied incrementally, but absolutely, which is a real bummer. Luckily the workaround is relatively easy: just switch the order in which the opacity and filter are applied. If I apply the blur after the opacity I get my correct behavior back, with both opacity and blur working: sh.show() .css({ position: "absolute", width: el.outerWidth(), height: el.outerHeight(), opacity: opt.opacity, background: opt.color, left: pos.left + opt.offset, top: pos.top + opt.offset }); if (typeof shEl.style.filter == "string") sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')'); While this works, it still causes problems in other areas where opacity is implicitly set in code, such as for fade operations or, in the case of my shadow component, the style/property watcher that keeps the shadow and main object linked. Both of these may set the opacity explicitly, and that is still broken, as it will effectively kill the blur filter. This seems like a really strange design decision by the jQuery team, since clearly the jQuery css function does the right thing for setting filters. Internally, however, the opacity setting doesn't use .css(), instead hardcoding the filter, which given jQuery's usual flexibility and smart code seems really inappropriate. The following is from jQuery.js 1.4: var style = elem.style || elem, set = value !== undefined; // IE uses filters for opacity if ( !jQuery.support.opacity && name === "opacity" ) { if ( set ) { // IE has trouble with opacity if it does not have layout // Force it by setting the zoom level style.zoom = 1; // Set the alpha filter to set the opacity var opacity = parseInt( value, 10 ) + "" === "NaN" ? "" : "alpha(opacity=" + value * 100 + ")"; var filter = style.filter || jQuery.curCSS( elem, "filter" ) || ""; style.filter = ralpha.test(filter) ? filter.replace(ralpha, opacity) : opacity; } return style.filter && style.filter.indexOf("opacity=") >= 0 ? (parseFloat( ropacity.exec(style.filter)[1] ) / 100) + "": ""; } You can see here that the style is explicitly set in code rather than relying on $.css() to assign the value, resulting in the old filter getting wiped out.
    jQuery 1.3.2 looks a little different: // IE uses filters for opacity if ( !jQuery.support.opacity && name == "opacity" ) { if ( set ) { // IE has trouble with opacity if it does not have layout // Force it by setting the zoom level elem.zoom = 1; // Set the alpha filter to set the opacity elem.filter = (elem.filter || "").replace( /alpha\([^)]*\)/, "" ) + (parseInt( value ) + '' == "NaN" ? "" : "alpha(opacity=" + value * 100 + ")"); } return elem.filter && elem.filter.indexOf("opacity=") >= 0 ? (parseFloat( elem.filter.match(/opacity=([^)]*)/)[1] ) / 100) + '': ""; } Offhand I'm not sure why the latter works better, since it too is assigning the filter. However, when checking with the IE script debugger I can see that there are actually a couple of filter tags assigned when using jQuery 1.3.2, but only one when I use jQuery 1.4. Note also that the jQuery 1.3 compatibility plugin for jQuery 1.4 doesn't address this issue either.
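    For reference, here is the workaround wrapped up as a small helper – a minimal sketch only; the function name is mine, and it assumes the same Microsoft Blur filter and the el/pos/opt variables from the plugin code above:
        // Set opacity first, then (re)apply the IE blur filter, since in
        // jQuery 1.4 the hardcoded alpha filter replaces existing filters.
        function applyShadowStyle(sh, el, pos, opt) {
            sh.css({
                position: "absolute",
                width: el.outerWidth(),
                height: el.outerHeight(),
                opacity: opt.opacity,   // wipes any existing IE filter
                background: opt.color,
                left: pos.left + opt.offset,
                top: pos.top + opt.offset
            });
            if (typeof sh[0].style.filter == "string")
                sh.css("filter",        // the blur, applied last, survives
                       'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')');
            return sh;
        }
    Resources: ww.jquery.js (shadow plug-in $.fn.shadow) © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery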

    Read the article

  • Dynamic Grouping and Columns

    - by Tim Dexter
    Some good collaboration between myself and Kan Nishida (Oracle BIP Consulting) over at bipconsulting on a question that came in yesterday to an internal mailing list: is there a way to allow columns to be placed into a template dynamically? This would be similar to the Answers column selector. A customer has said Crystal can do this and I am trying to see how BI Pub can do the same. Example: a report has Regions as a dimension in a table; they want the user to select a parameter that will insert either Units or Dollars without having to create multiple templates. Now whether Crystal can actually do it or not is another question: can Publisher? Yes we can! Kan took the first stab. His approach was to allow swapping out columns in a table in the report. Some quick steps: 1. Create a parameter from the BIP server UI. 2. Declare the parameter in the RTF template. You can check this post to see how you can declare the parameter from the server. http://bipconsulting.blogspot.com/2010/02/how-to-pass-user-input-values-to-report.html 3. Use the parameter value to condition whether a particular column needs to be displayed or not. You can use <?if@column:.....?> syntax for a column-level IF condition. The if@column syntax is covered in the user documentation. This would allow a developer to create a report with the parameter, or multiple parameters, to allow the user to pick a column to be included in the report. I took a slightly different tack: with the mention of the column selector in the Answers report, I took that to mean that the user wanted to select more of a dimensional column and then have the report recalculate all its totals and subtotals based on that selected column. This is a little bit more involved and needs some smart XSL and XPath expressions, but it is still very doable. The user can select a column as a parameter that is passed to the template rather than the query. The parameter value that is actually passed is the element name that you want to regroup the data by. Inside the template we then reference that parameter value in our for-each-group loop. That's where we need the trixy XSL/XPath code to get the regrouping to happen. At this juncture, I need to hat-tip Klaus for his article on dynamic sorting that he wrote back in 2006. I basically took his sorting code and applied it to the for-each loop. You can follow both of Kan's first two steps above, i.e.: Create a parameter from the BIP server UI – this just needs to be based on a 'list'-type list of values with name/value pairs, e.g. Department/DEPARTMENT_NAME, Job/JOB_TITLE, etc. The user picks the 'friendly' value and the server passes the element name to the template. Declare the parameter in the RTF template – been here before lots of times, right? <?param@begin:group1;'"DEPARTMENT_NAME"'?> I have used a default value so that I can test the functionality inside the template builder (notice the single and double quotes). The next step is to use the template builder to build a re-grouped report layout. It does not matter if it's hard-coded right now; we will add in the dynamic piece next. Once you have a functioning template that is re-grouping correctly, open up the for-each-group field and modify it to use the parameter: <?for-each-group:ROW;./*[name(.) = $group1]?> 'group1' is my grouping parameter, declared above. We need the XPath expression to find the column in the XML structure we want to group by that matches the one passed by the parameter. It's essentially looking through the data tree for a match.
    We can show the actual grouping value in the report output with a similar XPath expression: <?./*[name(.) = $group1]?> In my example, I took things a little further so that I could have a dynamic label for the parameter value. For instance, if I am using MANAGER as the parameter I want to show: Manager: Tim Dexter. My XML elements are readable, e.g. DEPARTMENT_NAME. It's a simple case of replacing the underscore with a space and then 'initcapping' the result: <?xdoxslt:init_cap(translate($group1,'_',' '))?> With this in place, the user can now select a grouping column in the BIP report viewer and the layout will re-group the data and any calculations based on that column. I built a group-above report, but you could equally build the group-left version to truly mimic the Answers column selector.
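    Putting the pieces together, the working fields in the RTF template end up looking something like this – a sketch using the element names from my example, where SALARY stands in for whatever measure column you are totalling:
        <?param@begin:group1;'"DEPARTMENT_NAME"'?>
        <?for-each-group:ROW;./*[name(.) = $group1]?>
          <?xdoxslt:init_cap(translate($group1,'_',' '))?>: <?./*[name(.) = $group1]?>
          <?sum(current-group()/SALARY)?>
        <?end for-each-group?>
    If you are interested you can get an example report, sample data and layout template here. Of course, you can combine Klaus' dynamic sorting, Kan's conditional column approach and this dynamic grouping to build a real kick-ass report for users that will keep them happy for hours.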

    Read the article

  • Knockoutjs - stringify to handling observables and custom events

    - by Renso
    Goal: Once your viewmodel has been built and populated with data, at some point the goal of it all is to persist the data to the database (or some other media). Regardless of where you want to save it, your client-side viewmodel needs to be converted to a JSON string and sent back to the server. Environment considerations: jQuery 1.4.3+ Knockoutjs version 1.1.2   How to: So let's set the stage: you are using Knockoutjs and you have a viewmodel with some Knockout dependencies. You want to make sure it is in the proper JSON format and, via an ajax post, send it to the server for persistence. First order of business is to deal with the viewmodel (JSON) object. To most, the JSON stringifier sounds familiar. The JSON stringifier converts JavaScript data structures into JSON text. JSON does not support cyclic data structures, so be careful not to give cyclical structures to the JSON stringifier. You may ask: is this the best way to do it? What about those observables and other Knockout properties that I don't want to persist, or whose actual value I want persisted rather than their function, etc.? Not sure if you were aware, but KO already has a method, ko.utils.stringifyJson() – it's mostly just a wrapper around JSON.stringify (which is native in some browsers, and can be made available by referencing json2.js in others). What it does that the regular stringify does not is automatically convert an observable, dependentObservable, or observableArray to its underlying value for the JSON. Hold on! There is a new feature in this version of Knockout, ko.toJSON. It is part of the core library and it will clone the view model's object graph, so you don't mess it up after you have stringified it, and unwrap all its observables. It's smart enough to avoid reference cycles. Since you are using the MVVM pattern, it assumes you are not trying to reference DOM nodes from your view. Wait a minute – I can already see this info on the http://knockoutjs.com/examples/contactsEditor.html website, so why mention it all here? First off, this is a much nicer blog, no orange! At this time, you may want to have a look at that page and see what I am talking about. See the save event, how they stringify the view model's contacts only? That's cool, but what if your view model is a representation of the object you want to persist, meaning it has no property that represents the JSON object you want to persist – it is the view model itself? The example at http://knockoutjs.com/examples/contactsEditor.html assumes you have a list of contacts you may want to persist. In the example here, you want to persist the view model itself.
    The viewmodel here looks something like this:     var myViewmodel = {         accountName: ko.observable(""),         accountType: ko.observable("Active")     };     myViewmodel.isItActive = ko.dependentObservable(function () {         return myViewmodel.accountType() == "Active";     });     myViewmodel.clickToSaveMe = function() {         SaveTheAccount();     }; Here is the function in charge of saving the account: function SaveTheAccount() {     $.ajax({         data: ko.toJSON(viewmodel),         url: $('#ajaxSaveAccountUrl').val(),         type: "POST",         dataType: "json",         async: false,         success: function (result) {             if (result && result.Success == true) {                 $('#accountMessage').html('<span class="fadeMyContainerSlowly">The account has been saved</span>').show();                 FadeContainerAwaySlowly();             }         },         error: function (xmlHttpRequest, textStatus, errorThrown) {             alert('An error occurred: ' + errorThrown);         }     }); //ajax }; Try running this and your browser will eventually freeze up or crash. Firebug will tell you that you have a repetitive call to the first function in your model that keeps firing infinitely.  What is happening is that Knockout serializes the view model to a JSON string by traversing the object graph and firing off the functions, again and again. Not sure why it does that, but it does. So what is the workaround? Nullify your function calls and then post it:         viewmodel.clickToSaveMe = null;         var lightweightModel = viewmodel;         data: ko.toJSON(lightweightModel), So then I traced the JSON string on the server and found it having issues with primitive types. C#, by the way. So I changed ko.toJSON(model) to ko.toJS(model), and that solved my problem. Of course you could just create a property on the viewmodel for the account itself, so you only have to serialize that property and not the entire viewmodel. If that is an option then that would be the way to go. If your view model contains other properties that you also want to post then that would not be an option, and then you'll know what to watch out for.
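    To make that last option concrete, here is a minimal sketch of the 'serialize just a property' approach (the account property and the other names here are illustrative):
        var myViewmodel = {
            // Keep the persistable state in one child object...
            account: {
                accountName: ko.observable(""),
                accountType: ko.observable("Active")
            },
            // ...and keep handlers outside it, where the stringifier never sees them.
            clickToSaveMe: function () { SaveTheAccount(); }
        };
        function SaveTheAccount() {
            $.ajax({
                data: ko.toJSON(myViewmodel.account), // only data, no functions
                url: $('#ajaxSaveAccountUrl').val(),
                type: "POST",
                dataType: "json"
            });
        }
    Hope this helps.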

    Read the article

  • My History with Agile

    - by Robert May
    I'm going to write my history with Agile here.  That way, in future posts, I can refer back to it, instead of typing it out in the post that contains information you may actually want to read.  Note that I'm actually a pretty senior developer, and do lots of technical interviews.  I'm an Agile fan because of the difference it makes in people's lives and the improvement in quality it brings, and I'll sacrifice my own technological advancement to help teams. Management History I started management pretty early in my career, starting with the first job that I ever had.  I actually do NOT have a CS or similar degree.  I have a Bachelor of Business Administration with an emphasis in Computer Information Systems. My first management gigs were around call center work and were very schedule oriented.  I didn't understand the true value of teams, and I'm ashamed to admit that I actually installed a fingerprint scanner as a time clock in this job.  I shudder to think of the impact that I had on the team spirit.  I didn't even trust them enough to fill out their time cards correctly.  How sad. I was managing nearly 100 people in this position, with the help of a great set of subordinates. I did try to come up with reward programs for the team, but again, I didn't understand the concept of team, so instead of letting the team determine how the rewards should work, I mandated from on high, which isn't a good thing. I was told by people whom I respected a lot that I wasn't the type who would be a good manager.  They said it because I was a computer geek, since they don't understand good management either, but in retrospect, they were right about me then.  I was too green. After my first job, I went on to other jobs and, with the exception of one job, I've managed people at them all.  The rest of the management story is important for understanding Agile, so I'll save it for my next post. Technical History I've been in software development for many, many years.  I technically started programming on a Commodore 64 in BASIC.  I didn't know that I was programming, but I was sure having fun.  That was followed by batch files, Gorilla hacking (I always had to win), WordPerfect macro programming and other things that taught me the basics. My first "real" job was with a telephone company, and that's where I made my first database application in DataEase, wrote my first VBA app and started using real programming tools, like Turbo Pascal and VB3-VB5, and semi-real tools like RPG and VisualRPG.  I wrote my first web page in 1994, and built my first data-driven web page in 1995 using perlDB.  You really can do anything with Perl.  At this time, I also started a Linux-based internet service provider that is still in operation today.  One of the people I worked with is now a Microsoft employee building and designing frameworks you probably know well.  Smart guy.  I also built my first ASP applications connecting to SQL Server 6.5, set up Exchange 5.5 for the company, and did lots of other system administration work.  I'm a programmer by choice, mostly because I don't really like PC support. From there, I went on to a large state agency.  I got to see and maintain true waterfall projects.  Five years of maintaining the 200 VB COM+ (MTS, actually) DLLs that were used to calculate a single number is a long time.  That was all Microsoft DNA technologies.  SQL Server and VB6 were the tools of choice, although .NET started to be a factor near the end of my employment.
    I did some heavy XML work at this job and even wrote an XSD parser and validator in VB6 that was a shim until MSXML 3.0 came out.  Prior to 3.0, XSDs weren't supported, and I didn't want to write DTDs. Ironically, jobs after this were more generic.  I pretty much settled in on the .NET Framework and revisions of it.  Lots of WPF, some Silverlight, lots of ASP.NET, some SQL Azure, lots of SQL Server, some Oracle, but I don't think that I was as passionate about development and technologies.  I was more into the management of development.  I like people. Technorati Tags: Agile, history

    Read the article

  • BI&EPM in Focus June 2014

    - by Mike.Hallett(at)Oracle-BI&EPM
    Applications Webcast Centre – A Library of Discussion and Research for Best Practice: Achieving Reliable Planning, Budgeting and Forecasting Talent Analytics and Big Data – Is HR ready for the challenge Enterprise Data – The cost of non-quality Customers Josephine Niemiec from ADP talks about Oracle Hyperion Workforce Planning at Collaborate 2014 (link) Video Chris Nelms from Ameren talks about Oracle BI Spend and Procurement Analytics at Collaborate 2014 (link) Video Leggett & Platt Leverages Oracle Hyperion EPM and Demantra (link) Video Pella Corporation Accelerates Close Cycle by Cutting Time for Financial Consolidation from Three Days to Less Than One Day (link) Secretaría General de Administración de Justicia en España Enhances Citizen Services with Near-Real-Time Business Intelligence Gleaned from 500 Databases  (link) Bellco Credit Union Speeds Budget Development by 30%—Gains Insight into Specific Branch and Financial Product Profitability  (link)  Video QDQ media Speeds up Financial Reporting by 24x, Gains Business Agility, and Integrates Seamlessly into Corporate Accounting System  (link) Westfield Group Maximizes Shopping Mall Revenue, Shortens Year-End Financial Consolidation by 75%  (link)  IL&FS Transportation Networks Shortens Financial Consolidation and Reporting Cycle by Eight Days, Gains In-Depth Insight into Business Performance   (link) Angel Trains Optimizes Rail Operations for Purchasing, Sourcing, and Project Management to Meet Challenges of Evolving Rail Industry  (link) Enterprise Performance Management June 11, at Oracle Utrecht, NL: Morning session: Explore Planning and Budgeting in the Cloud (link) June 12, London: PureApps Presents: Best Practice Financial Consolidation and Reporting Workshop (link) July 3, Koln: Oracle Hyperion Business Analytics Roundtable (link) Blog: What's Your Tax Strategy? Automate the Operational Transfer Pricing Process (link) YouTube Video: Automate Tax Reporting with Oracle Hyperion Tax Provision (link) YouTube Video: Introducing Oracle Hyperion Planning’s Tablet Optimized Interface (link) OracleEPMWebcasts @ YouTube (link) Partner webcasts: Wednesday, 4 June, 5.00 GMT - Case Study:  Lessons Learned from Edgewater Ranzal's Internal Implementation of Oracle Planning & Budgeting Cloud Service (PBCS) - Learn more and register here! Thursday, 5 June, 4.00 GMT - Achieving Accountable Care Using Oracle Technology - Learn more and register here! Tuesday, 17 June, 4.00 GMT - Optimizing Performance for Oracle EPM Systems - Learn more and register here! 
    Oracle University Blog: The Coolest Features Available with Oracle Hyperion 11.1.2.3 – Training from OU to help you to best use them (link) Support: Proactive Support: EPM Hyperion Planning 11.1.2.3.500 Using RMI Service [Blog] Proactive Support: Planning and Budgeting Cloud Service Videos (link) Planning and Budgeting Cloud Service (PBCS) 11.1.2.3.410 Patch Bundle [Doc ID 1670981.1] Hyperion Analytic Provider Services 11.1.2.2.106 Patch Set Update [Doc ID 1667350.1] Hyperion Essbase 11.1.2.2.106 Patch Set Update [Doc ID 1667346.1] Hyperion Essbase Administration Services 11.1.2.2.106 Patch Set Update [Doc ID 1667348.1] Hyperion Essbase Studio 11.1.2.2.106 Patch Set Update [Doc ID 1667329.1] Hyperion Smart View 11.1.2.5.210 Patch Set Update [Doc ID 1669427.1] Using HPCM, HSF or DRM Communities (link) Business Intelligence June 12, Birmingham, UK: Oracle Big Data at Work - Use Cases and Architecture (link) June 17, London: Oracle at Cloud & Big Data World Forums (link) June 17, Partner Webcast: Transform your Planning Capabilities with Peloton's CloudAccelerator for Oracle PBCS (link) June 19, London: Oracle at the Whitehall Media Big Data Analytics Conference and Exhibition (link) June 19, London: Partner Event - Agile BI Conference by Peak Indicators [link] June 25, Munich: Oracle Special Day at the TDWI 2014 Conference (link) July 15, London: Oracle Endeca Information Discovery Workshop (link) July 16, London: BI Applications Workshop – Financial Analytics & Procurement Analytics (link) July 17, London: BI Applications Workshop – HR Analytics (link) Milan, Italy: L'Osservatorio Big Data Analytics & Business Intelligence with Politecnico di Milano (link) OBIA 11.1.1.8.1 - Now Available [Blog] What's New in OBIA 11.1.1.8.1 [Blog] BI Blog: A closer look at Oracle BI Applications 11.1.1.8.1 release (link) Press Release: BI Applications Deliver Greater Insight into Talent and Procurement (link) Support Blog: OBIA 11.1.1.8.1 Upgrade Guide & Documentation (link) YouTube Video: Glenn Hoormann of Ludus talks to us about Oracle Business Intelligence and ERP at Collaborate 2014 (link) YouTube Video: Performance Architects talks about key BI and Mobile trends, including Endeca at Collaborate 2014 (link) Big Data Blog: 3 Keys for Using Big Data Effectively for Enhanced Customer Experience (link) Big Data Lite Demo VM 3.0 Now Available on OTN BI Blog: Data Relationship Governance - Workflow in a Bottle (link) MDM Blog: Register for Product Data Management Weekly Cloudcasts (link) MDM Blog: Improve your Customer Experience with High Quality Information (link) MDM Blog: Big Data Challenges & Considerations (link) Oracle University: Oracle BI Applications 11g: Implementation using ODI (link) Proactive Support: Monthly Index [Blog] My Oracle Support: Partner Accreditation for Business Analytics Support [Blog] OBIEE 11g Test-to-Production (T2P) / Clone Procedures Guide [Blog]

    Read the article

  • The way I think about Diagnostic tools

    - by Daniel Moth
    All software has issues, or as we like to call them, "bugs". That is not a discussion point, just a mere fact. It follows that an important skill for developers is to be able to diagnose issues in their code. Of course we need to advance our tools and techniques so we can prevent bugs getting into the code (e.g. unit testing), but beyond designing great software, diagnosing bugs is an equally important skill. To diagnose issues, the most important assets are good techniques, skill, experience, and maybe talent. What also helps is having good diagnostic tools, and what helps further is knowing all the features that they offer and how to use them. The following classification is how I like to think of diagnostics. Note that, as with any attempt to bucketize anything, you run into overlapping areas and blurry lines. Nevertheless, I will continue sharing my generalizations ;-) It is important to identify at the outset if you are dealing with a performance or a correctness issue. If you have a performance issue, use a profiler. I hear people saying "I am using the debugger to debug a performance issue", and that is fine, but do know that a dedicated profiler is the tool for that job. The fact that you don't need one all the time, that profilers typically cost more, and that you are not as familiar with them as you are with the debugger doesn't mean you shouldn't invest in one rather than trying to exclusively use the wrong tool for the job. Visual Studio has a profiler and a concurrency visualizer (for profiling multi-threaded apps). If you have a correctness issue, then you have several options - that's next :-) This is how I think of identifying a correctness issue: Do you want a tool to find the issue for you at design time? The compiler is such a tool - it gives you an exact list of errors. Compilers now also offer warnings, which is their way of saying "this may be an error, but I am not smart enough to know for sure". There are also static analysis tools, which go a step further than the compiler in identifying issues in your code, sometimes with the aid of code annotations and other times just by pointing them at your raw source. An example is FxCop, and there is much more in Visual Studio 11 Code Analysis. Do you want a tool to find the issue for you with code execution? Just like static tools, there are also dynamic analysis tools that, instead of statically analyzing your code, analyze what your code does dynamically at runtime. Whether you have to set up some unit tests to invoke your code at runtime, or have to manually run your app (and interact with it) under the tool, or have to use a script to execute your binary under the tool… that varies. The result is still a list of issues for you to address after the analysis is complete, or a pause of the execution when the first issue is encountered. If a code path was not taken, no analysis for it will exist, obviously. An example is the GPU Race detection tool that I'll be talking about on the C++ AMP team blog. Another example is the MSR concurrency CHESS tool. Do you want you to find the issue at design time using a tool? Perform a code walkthrough on your own or with colleagues. There are code review tools that go beyond just diffing sources, and they help you with that aspect too. For example, there is a new one in Visual Studio 11, and searching with my favorite search engine yielded this article based on the Developer Preview. Do you want you to find the issue with code execution? Use a debugger - let's break this down further next.
    This is how I think of debugging: There is post-mortem debugging. That means your code has executed and you did something in order to examine what happened during its execution. This can vary from manual printf and other tracing statements to trace events (e.g. ETW) to taking dumps. In all cases, you are left with some artifact that you examine after the fact (after code execution) to discern what took place, hoping it will help you find the bug. Learn how to debug dump files in Visual Studio. There is live debugging. I will elaborate on this in a separate post, but this is where you inspect the state of your program during its execution, and try to find what the problem is. More from me in a separate post on live debugging. There is a hybrid of live plus post-mortem debugging. This is, for example, what tools like IntelliTrace offer. If you are a tools vendor interested in the diagnostics space, it helps to understand where in the above classification your tool excels, where its primary strength is, so you can market it as such. Then it helps to see which of the other areas above your tool touches on, and how you can make it even better there. Finally, see what areas your tool doesn't help with at all, and evaluate whether it should or should continue to stay clear. Even though the classification helps us think about this space, the reality is that the best tools are either extremely excellent in only one of these areas, or more often very good across a number of them. Another approach is to offer a toolset covering all areas, with appropriate integration and hand-off points from one to the other. Anyway, with that brain dump out of the way, in follow-up posts I will dive into live debugging, and specifically live debugging in Visual Studio - stay tuned if that interests you. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Recent Innovations to ILOM

    - by B.Koch
    by Josh Rosen If you are wondering how Oracle can make some of the most advanced, reliable, and fault-tolerant servers on the market, look no further than Oracle Integrated Lights Out Manager, or ILOM.  We build ILOM into every server we create, from Oracle x86 Systems such as the X3-2 to the SPARC T-Series family. Oracle ILOM is an embedded service processor, but it's really more than that.  It's a computer within a computer.  It's smart, it's tightly integrated into all aspects of the server's operation, and it's a big reason why Oracle servers are used for some of the most mission-critical workloads out there. To understand the value of ILOM, there is no better place to start than its fault management capability.  We have taken the sophisticated fault management architecture from Solaris, developed and refined over a decade, and built it into each and every ILOM. ILOM detects a potential issue at its earliest stage, watching low-level sensors.  If the root cause of a problem is not clear from a single error reading, ILOM will look for other clues and combine multiple pieces of information to correctly identify a failing component. ILOM provides peace of mind. We tailor our fault management for each new server platform that we produce.  You can rest assured that it's always actively keeping the server healthy.  And if there is a problem, you can be confident it will let you know by sending you a notification by e-mail or trap. We also heard IT managers tell us they needed a Ph.D. in computer engineering to manage today's servers. It doesn't have to be that way.  Thanks to the latest innovations to Oracle ILOM, we present hardware inventory and status in a way that makes sense – to anyone.  Green means everything is healthy and red means something is wrong.  When a component needs to be replaced, a clear message indicates where the problem is and points you at a knowledge article about that problem.  It's that simple. Simpler management and simple interfaces mean reduced complexity and lower costs to manage.  And we know that's really important. ILOM does all this while also providing the advanced service processor features you depend on for managing enterprise-class systems.  You can remotely control the server power, interact with a virtual video console for the server, and mount media on the server remotely.  There is no need to spend money on a KVM switch to get this functionality. And when people hear how advanced ILOM is, they can't believe ILOM is free.  All features are enabled and included with each Oracle server that you buy.  There are no advanced licenses you need to purchase or features to unlock. Configuring ILOM has also never been easier.  It is now possible to configure almost all aspects of the server directly from ILOM.  This includes changing BIOS settings, persistently modifying boot order, and optimizing power settings -- all directly from ILOM. But Oracle's innovation does not stop with ILOM.  Oracle has engineered Oracle Enterprise Manager Ops Center to integrate directly with ILOM, providing centralized management across all of our servers. Ops Center will discover each of your Oracle servers over the network by searching for ILOMs.  When it finds one, it knows how to communicate with ILOM to monitor and configure that server from application to disk. Since every server that Oracle produces, from x86 Systems to SPARC T-Series up and down the line, comes with Oracle ILOM, you can manage all Oracle servers in the same way.
    And while all of our servers may have different components on the inside, each with their specialized functions, the way you integrate them and the way you monitor and manage them is exactly the same. Oracle ILOM is state-of-the-art.  If you are looking for a server that makes systems management simple and is easy to integrate and maintain, check out the latest advances to Oracle ILOM. Josh Rosen is a Principal Product Manager at Oracle and previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products, ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Weekend With #iPad

    - by andrewbrust
    Saturday morning, I got up, got dressed and took a 7-minute walk up to the Apple Store in New York's Meatpacking District to pick up my reserved iPad.  This precinct, which borders Greenwich Village (where I live and grew up) was, when I was a kid, a very industrial and smelly neighborhood during the day and a rough neighborhood at night.  So imagine my sense of irony as I walked up Hudson Street towards 14th Street, to go wait in line with a bunch of hipsters to buy an iPad on launch day. Numerous blue T-shirt-clad Apple Store workers were on hand to check people in to the line specifically identified for people who had reserved an iPad.  Other workers passed out water and all of them, I kid you not, applauded people as they got their chance to go into the store and buy their devices.  They also cheered people and yelled "congratulations" as they left.  The event had all the charm of a mass wedding officiated by Reverend Sun Myung Moon.  Once inside, a nice dude named Trey, with lots of tattoos on his calves, helped me and I acquired my device in short order.  Another guy helped me activate the device, which was comical, because that has to be done through iTunes, which I hadn't logged into in a while. Turns out my user id was my email address from the company I sold 5 1/2 years ago.  Who knew?  Regardless, I got the device working, packed up and left the store, shuddering as I was cheered and congratulated.  By this time (about 10:30am) the line for reserved units, and even walk-ins, was gone.  The iPhone launch this was not. As much as I detested the Apple Store experience, I must say the device is really nice.  The screen is bright, the colors are bold, and the experience is ultra-smooth.  I quickly tested Safari, YouTube, Google Maps, and then installed a few apps, including the New York Times Editors' Choice and a couple of Twitter clients. Some initial raves: Google Maps and Street View on the iPad is just amazing.  The screen is full-size like a PC or Mac, but it's right in front of you and responding to taps and flicks and pinches, and it's really engulfing.  Video and photos are really nice on this device, despite the fact that 16:9 and anamorphic aspect ratio content is letterboxed.  It still looks amazing.  And apps that are designed especially for the iPad, including The Weather Channel and Gilt and Kayak, just look stunning.  The richness, the friendly layout, the finger-friendly UIs, and the satisfaction of not having a keyboard between you and the information you're managing, while you sit on a couch or an easy chair, is just really a beautiful thing.  The mere experience of seeing these apps' splash screens causes a shiver and goosebumps.  Truly.  The iPad is not a desktop machine, and it's not a pocket device.  That doesn't mean it's useless though.  It's the perfect "couchtop" computer. Now some downsides: the WiFi radio seems a bit flaky.  More than a few times, I have had to toggle the WiFi off and back on to get it to connect properly.  Worse yet, the iPad is totally bamboozled by the fact that I have four WiFi access points in my house, each with the same SSID.  My laptops are smart enough to roam from one to the other, but the iPad seems to maintain an affinity for the downstairs access point, even if I'm turning it on two flights up.  Telling the iPad to "forget" my WiFi network and then re-associate with it doesn't help. More downers: as you might expect, there are far more applications developed for the iPhone than the iPad.
And although iPhone apps run on the iPad, that provides about the same experience as watching standard def on a big HD flat panel, complete with the lousy choice of thick black borders or zooming the picture in to fill the screen.  And speaking of iPhone Apps, I can’t get the Sonos one to work.  Ideally, they’d have a dedicated iPad app and it would work on the first try.  And the iPad is just as bad as any netbook when it comes to being a magnet for fingerprints.  The lack of multi-tasking is quite painful too – truly, I don’t mind if only one app can be active at once, but the lack of ability to switch between apps, and the requirement to return to the home screen and re-launch a previous app to switch back, is already old and I’ve had the thing less than 48 hours. These are just initial impressions.  I’ll have a fuller analysis soon, after I’ve had some more break-in time with my new toy.  I’ll be thinking not just about the iPad and iPhone but also about Android, the 2.1 update for which was pushed to my Droid today, and Windows Phone 7, whose “hub” concept I now understand the value of.  This has been a great year for alternative computing devices, and I see no net downside for Apple, Google or Microsoft.  Exciting times.

    Read the article

  • JavaOne Afterglow by Simon Ritter

    - by JuergenKress
    Last week was the eighteenth JavaOne conference and I thought it would be a good idea to write up my thoughts about how things went. Firstly, thanks to Yoshio Terada for the photos; I didn't bother bringing a camera with me, so it's good to have some pictures to add to the words. Things kicked off full-throttle on Sunday.  We had the Java Champions and JUG leaders breakfast, which was a great way to meet up with a lot of familiar faces and start talking all things Java.  At midday the show really started with the Strategy and Technical Keynotes.  This was always going to be a tougher job than in some years because there was no big shiny ball to reveal to the audience.  With the Java EE 7 spec having been finalised a few months ago and Java SE 8, Java ME 8 and JDK8 not due until the start of next year, there was not going to be any big announcement.  I thought both keynotes worked really well, each focusing on the things most important to Java developers: Strategy One of the things that is becoming more and more prominent in many companies' marketing is the Internet of Things (IoT).  We've moved from the conventional desktop/laptop environment to much more mobile connected computing with smart phones and tablets.  The next wave of the internet is not just billions of people connected, but 10s or 100s of billions of devices connected to the network, all generating data and providing much more precise control of almost any process you can imagine.  This ties into the ideas of Big Data and Cloud Computing, but implementation is certainly not without its challenges.  As Peter Utzschneider explained, it's about three Vs: Volume, Velocity and Value.  All these devices will create huge volumes of data at very high speed; to avoid being overloaded these devices will need some sort of processing capabilities that can filter the useful data from the redundant.  The raw data then needs to be turned into useful information that has value.  To make this happen will require applications on devices, at gateways and on the back-end servers, all very tightly integrated.  This is where Java plays a pivotal role: write once, run everywhere becomes essential, and having nine million developers fluent in the language makes it the de facto lingua franca of IoT.  There will be lots more information on how this will become a reality, so watch this space. Technical How do we make the IoT a reality, technically?  Using the game of chess, Mark Reinhold, with the help of people like John Ceccarelli, Jasper Potts and Richard Bair, showed what you could do.  Using Java EE on the back end, Java SE and JavaFX on the desktop, and Java ME Embedded and JavaFX on devices, they showed a complete end-to-end demo. This was really impressive, using 3D features from JavaFX 8 (that's included with JDK8) to make a 3D animated Duke chess board.  Jasper also unveiled the "DukePad", a home-made tablet using a Raspberry Pi, touch screen and accelerometer. Although the Raspberry Pi doesn't have earth-shattering CPU performance (about the same level as a mid-1990s Pentium), it does have really quite good GPU performance, so the GUI works really well.  The plans are all open-sourced and available here.  One small but very significant announcement was that Java SE will now be included with the NOOBS and Raspbian Linux distros provided by the Raspberry Pi Foundation (these can be found here).  No more hassle having to download and install the JDK after you've flashed your SD card OS image.  The finale was the Raspberry Pi-powered chess-playing robot.
Really very, very cool.  I talked to Jasper about this, and he told me each of the chess pieces had been 3D printed and that he then had to use acetone to give them a glossy finish (not sure what his wife thought of him spending hours in the kitchen in a gas mask!).  The way the robot arm worked was very impressive, as it did not have any positioning data (like a potentiometer connected to each motor) but relied purely on carefully calibrated timings to get the arm to the right place.  Having done things like this myself in the past, I know how easily a small error gets magnified into a very big mistake. Some pictures from the keynote showed the "DukePad" architecture, a nice clear perspex case so you can see the innards, and the very nice 3D chess set (Maya's obviously a great tool). Read the full article here. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community – please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: Simon Ritter,Java One,OOW,Oracle OpenWorld,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Gene Hunt Says:

    - by BizTalk Visionary
    “She's as nervous as a very small nun at a penguin shoot” “He's got fingers in more pies than a leper on a cookery course” “You so much as belch out of line and I'll have your scrotum on a barbed wire plate” “Let's go play slappyface” “You're surrounded by armed barstewards” “Right, get out and find this murdering scum right now!” [pause] “Scratch that, we start 9am sharp tomorrow, it's beer-o-clock.” “So then Cartwright, you're such a good Detective.... Go and Detect me a packet of Garibaldies” “You're not the one who is going to have to knit himself a new arsehole after 25 years of aggressive male love in prison” “A dream for me is Diana Dors and a bottle of chip fat.” “They reckon you've got concussion - but personally, I couldn't give a tart's furry cup if half your brains are falling out. Don't ever waltz into my kingdom playing king of the jungle.” “You great... soft... sissy... girlie... nancy... french... bender... Man-United supporting POOF!!” “Drugs eh? What's the point. They make you forget, make you talk funny, make you see things that aren't there. My old grandma got all of that for free when she had a stroke.” “He's Dead! It's quite serious!” “Fanny in the flat...Nice Work” “SoopaDoopa” “Tits in a Jumper!” “Drop your weapons! You are surrounded by armed bastards!” “It's 1973, almost dinnertime. I'm 'avin 'oops!” “Trust the Gene Genie!” “I wanna hump Britt Ekland...What're we gonna do...!” “Was that 'E' and you don't know the rest?! or you going 'Eeee, I Dunno'” “Good Girl! Prostate probe and no jelly.” “Give over, it's nothing like Spain!” “I'll come over your houses and stamp on all your toys!” “The Wizard will sort it out. It's cos of the wonderful things he does” “Cartwright can jump up and down on his knackers!” “It's not a windup love, he really thinks like this!” “Women! You can't say two words to them” “I was thinking, maybe, a Berni Inn!” “If I wanted a bollocking for drinking too much...!” “Shhhh...hear that...that's the sound of this case being closed!” “Chicken!? In a basket!?” “Seems a large quantity of cocaine...” “You probably thought he kept his cock in his keks!” “The tail-end of Ray's demotion speech!” “Stephen Warren is gay!?” “You're a smart boy, use your initiative!” “Don't be such a Jessie!” “I find the idea of a bird brushing her teeth...!” “Never been tempted to the Magic talcum powder?” “Make sure she's got nice tits!” “You're more likely to find an ostrich with a plum up its arse!” “Drink this lot under the table and have a pint on the way home!” “Never be a female Prime Minister!” “Pub? Pub! pub!.....Pub!” “Thou shalt not suck off rent boys!” “The number for the special clinic is on the notice board!” “If me uncle had tits, would he be me auntie!” “Got your vicars in a twist!” “We Done?!” “Your mate's got balls...If they were any bigger he'd need a wheelbarrow!” “The Ending - from 'I want to go home' to the end music.”

    Read the article

  • XNA Notes 005

    - by George Clingerman
    Another week and another crazy amount of activity going on in the XNA community. I’m fairly certain I missed over half of it. In fact, if I am missing things, make sure to email me and I’ll try and make sure I catch it next week! ([email protected]). Also, if you’ve got any advice, things you like/don’t like about the way these XNA Notes are going, let me know. I always appreciate feedback (currently spammers are leaving me the nicest comments, so you guys have work to do!) Without further ado, here’s this week’s notes! Time Critical XNA News The XNA Team Blog reminds us that February 7th is the last day to submit XNA 3.1 games to peer review! http://blogs.msdn.com/b/xna/archive/2011/01/31/7-days-left-to-submit-xna-gs-3-1-games-on-app-hub.aspx XNA MVPs Chris Williams kicks off the marketing campaign for our book http://geekswithblogs.net/cwilliams/archive/2011/01/28/143680.aspx Catalin Zima posts the comparison cheat sheet for why Angry Birds is different than Chickens Can’t Fly http://www.amusedsloth.com/2011/02/comparison-cheat-sheet-for-chickens-cant-fly-and-angry-birds/ Jim Perry congratulates the developers selected by Game Developer Magazine for Best Xbox LIVE Indie Games of 2010 http://machxgames.com/blog/?p=24 @NemoKrad posts his XNAKUUG talks for all to enjoy http://twitter.com/NemoKrad/statuses/33142362502864896 http://xna-uk.net/blogs/randomchaos/archive/2011/02/03/xblig-uk-2011-january-amp-february-talk.aspx George  (that’s me!) preps for his XNA talk coming up on the 8th http://twitter.com/clingermangw/statuses/32669550554124288 http://www.portlandsilverlight.net/Meetings/Details/15 XNA Developers FireFly posts the last tutorial in his XNA Tower Defense tutorial series http://forums.create.msdn.com/forums/p/26442/451460.aspx#451460 http://xnatd.blogspot.com/2011/01/tutorial-14-polishing-game.html @fredericmy posts the main differences when porting a game from Windows Phone 7 to Xbox 360 http://fairyengine.blogspot.com/2011/01/main-differences-when-porting-game-from.html @ElementCy creates a pretty rad video of a Minecraft-type terrain created using XNA http://www.youtube.com/watch?v=Waw1f7wnl9I Andrew Russell gets the first XNA badge on gamedev.stackexchange http://twitter.com/_AndrewRussell/statuses/32322877004972032 http://gamedev.stackexchange.com/badges?tab=tags And his funding for ExEn has passed $7,000 – only $3,000 to go http://twitter.com/_AndrewRussell/statuses/33042412804771840 Subodh Pushpak blogs about his Windows Phone 7 XNA talk http://geekswithblogs.net/subodhnpushpak/archive/2011/02/01/windows-phone-7-silverlight--xna-development-talk.aspx Slyprid releases the latest version of Transmute and needs more people to test http://twitter.com/slyprid/statuses/32452488418299904 http://forgottenstarstudios.com/ SpynDoctorGames celebrates the 1-year anniversary of Your Doodles Are Bugged! Congrats! http://twitter.com/SpynDoctorGames/statuses/32511689068908544 Noogy (creator of Dust: An Elysian Tail) prepares his conversion to XNA 4.0 http://twitter.com/NoogyTweet/statuses/32522008449253376 @philippedasilva posts about the Indiefreaks Game Framework v0.2.0.0 Input management system http://twitter.com/philippedasilva/statuses/32763393957957632 http://indiefreaks.com/2011/02/02/behind-smart-input-system-feature/ Mommy’s Best Games debates what to do about High Scores with their new update http://mommysbest.blogspot.com/2011/02/high-score-shake-up.html @BinaryTweedDeej wants to know if there’s anything the community needs to make XNA games for the PC. Give him some feedback! 
http://twitter.com/BinaryTweedDeej/status/32895453863354368 @mikebmcl promises to write us all a book (I can’t wait to read it!) http://twitter.com/mikebmcl/statuses/33206499102687233 @werezompire is going to live, LIVE, thanks to all the generosity and support from the community! http://twitter.com/werezompire/statuses/32840147644977153 Xbox LIVE Indie Games (XBLIG) Making money in Xbox 360 indie game development. Is it possible? http://www.bitmob.com/articles/making-money-in-xbox-360-indie-game-development-is-it-possible @AlejandroDaJ posts some thoughts about the bitmob article http://twitter.com/AlejandroDaJ/statuses/31068552165330944 http://www.apathyworks.com/blog/view.php?id=00215 Kobun gets my respect as an XBLIG champion. I’m not sure who Kobun is, but if you’ve ever read through the comment sections any time Kotaku writes about XBLIGs, you’ll see a lot of confusion and disinformation in there. Kobun has been waging a secret war battling that lack of knowledge, and he does it well. Also, he’s running a pretty kick ass site for Xbox LIVE Indie Game reviews http://xboxindies.teamkobun.com/ @radiangames releases his last Xbox LIVE Indie Game...for now http://bit.ly/gMK6lE Playing Avaglide with the Kinect controller http://www.youtube.com/watch?v=UqAYbHww53o http://www.joystiq.com/2011/01/30/kinect-hacks-take-to-the-skies-with-avaglide/ Luke Schneider of Radiangames interviewed in Edge magazine http://www.next-gen.biz/features/radiangames-venture Digital Quarters posts thoughts on why XBLIG’s online requirement kills certain genres http://digitalquarters.blogspot.com/2011/02/thoughts-why-xbligs-online-requirement.html Mommy’s Best Games shares the news that several XBLIGs were featured in the March 2011 issue of Famitsu 360 http://forums.create.msdn.com/forums/p/33455/451487.aspx#451487 NaviFairy continues with his Indie-Game-A-Day http://gaygamer.net/2011/02/indie_game_a_day_epic_dungeon.html http://gaygamer.net/2011/02/indie_game_a_day_break_limit_r.html and more every day...that’s kind of the point! Keep your eye on this series! VVGTV continues with its awesome reviews/promotions for XBLIGs http://vvgtv.com/ http://vvgtv.com/2011/02/03/iredia-atrams-secret-xblig-review-2/ http://vvgtv.com/2011/02/02/poopocalypse-coming-soon-to-xblig/ ….and even more, you get the point. Magicka is an indie game doing really well on Steam AND it’s made using XNA http://www.magickagame.com/ http://twitter.com/Magickagame/statuses/32712762580799488 GameMarx reviews Antipole http://www.gamemarx.com/reviews/73/antipole-is-vvvvvvery-good.aspx Armless Octopus reviews Alpha Squad http://www.armlessoctopus.com/2011/01/28/xbox-indie-review-alpha-squad/ An interesting article about Kodu that Jim Perry found http://twitter.com/MachXGames/statuses/32848044105924608 http://www.develop-online.net/news/36915/10-year-old-Jordan-makes-games-The-UK-needs-more-like-her XNA Game Development Sgt. 
Conker posts about the Natur beta, a new book, and whether you can make money with XBLIG http://www.sgtconker.com/ http://www.sgtconker.com/2011/01/a-new-book-on-the-block-and-a-new-natur-beta/ http://www.sgtconker.com/2011/01/making-money-in-xbox-360-indie-game-development-is-it-possible/ Tips for setting up SVN http://bit.ly/fKxgFh @bsimser found tons of royalty-free music and soundfx for your XNA games http://twitter.com/bsimser/statuses/31426632933711872 Post on the new features in the next Sunburn Editor http://www.synapsegaming.com/blogs/fivesidedbarrel/archive/2011/01/28/new-editor-features-prefabs-components-and-more.aspx @jasons_novaleaf posts source code for light pre-pass optimizations for #xna http://twitter.com/jasons_novaleaf/statuses/33348855403642880 http://jcoluna.wordpress.com/2011/02/01/xna-4-0-light-pre-pass-optimization-round-one/ I’ve been learning about doing an A.I. for turn-based games, and this article was a great resource. http://www.gamasutra.com/view/feature/1535/designing_ai_algorithms_for_.php?print=1

    Read the article

  • Do You Develop Your PL/SQL Directly in the Database?

    - by thatjeffsmith
    I know this sounds like a REALLY weird question for many of you. Let me make one thing clear right away though: I am NOT talking about creating and replacing PL/SQL objects directly in a production environment. Do we really need to talk about developers in production again? No, what I am talking about is a developer doing their work from start to finish in a development database. These are generally available to a development team for building the next and greatest version of your databases and database applications. And of course you are using a third-party source control system, right? Last week I was in Tampa, FL, presenting at the monthly Suncoast Oracle User’s Group meeting. Had a wonderful time, great questions and back-and-forth. My favorite heckler was there, @oraclenered, AKA Chet Justice.  I was in the middle of talking about how it’s better to do your PL/SQL work in the Procedure Editor when Chet pipes up: Don’t do it that way, that’s wrong. Just press play to edit the PL/SQL directly in the database. Or something along those lines. I didn’t get what the heck he was talking about. I had been showing how the Procedure Editor gives you much better feedback and support when working with PL/SQL. After a few back-and-forths I got to what Chet’s main objection was, and again I’m going to paraphrase: You should develop offline in your SQL worksheet. Don’t do anything in the database until it’s done. I didn’t understand. Were developers expected to be able to internalize and mentally model the PL/SQL engine, see where their errors were, etc., in these offline scripts? No, please give Chet more credit than that. What is the ideal Oracle development environment? If I were back in the ‘real world’ of database development, I would do all of my development outside of the ‘dev’ instance. My development process looks a little something like this: Do I have a program that already does something like this – copy and paste. Has some smart person already written something like this – copy and paste. Start typing in the white-screen-of-panic and bungle along until I get something that half-works. Tweak, debug, test until I have fooled my subconscious into thinking that it’s ‘good’. As you might understand, I don’t want my co-workers to see the evolution of my code. It would seriously freak them out, and I probably wouldn’t have a job anymore (don’t remind me that I already worked myself out of development). So here’s what I like to do: Run a Local Instance of Oracle on my Machine and Develop My Code Privately. I take a copy of development – that’s what source control is for, after all – and run it where no one else can see it. I now get to be my own DBA. If I need a trace – no problem. If I want to run an ASH report, no worries. If I need to create a directory or run some DataPump jobs, that’s all on me. Now when I get my code ‘up to snuff,’ then I will check it into source control and compile it into the official development instance. So my teammates suddenly go from seeing no program to a mostly complete program. Is this right? Maybe not, but it doesn’t seem wrong to me either. And after talking to Chet in the car on the way to the local cigar bar, it seems that he’s of the same opinion. So what’s so wrong with coding directly in a development instance? I think ‘wrong’ is a bit strong here. But there are a few pitfalls that you might want to look out for. A few come to mind – and I’m sure Chet could add many more, as my memory fails me at the moment. 
But here goes: The development instance isn’t properly backed up – you'd hate to lose that work. Development is wiped once a week and copied over from Prod – don’t laugh. Someone clobbers your code. You accidentally-on-purpose clobber someone else’s code. The more developers you have in a single fish pond, the greater the chance something ‘bad’ will happen. This Isn’t One of Those Posts Where I Tell You What You Should Be Doing I realize many shops won’t be open to allowing developers to stage their own local copies of Oracle. But I would at least be aware that many of your developers are probably doing this anyway – with or without your tacit approval. SQL Developer can do local file tracking, but you should be using source control too! I will say that I think it’s imperative that you control your source code outside the database, even if your development team is comprised of a single developer. Store your source code in a file, and control that file in something like Subversion. You would be shocked at the number of teams that do not use a source control system. I know I continue to be shocked no matter how many times I meet another team running by the seat of their pants. I’d love to hear how your development process works. And of course I want to know how SQL Developer and the rest of our tools can better support your processes. And one last thing: if you want a fun and interactive presentation experience, be sure to have Chet in the room.

    Read the article

  • Backpacks and Booth Paint: TechEd 2012

    - by The Un-T Guy
    Arriving in the parking lot of the Orange County Convention Center, I immediately knew I was in the right place. As far as the eye could see, the acres of asphalt were awash in backpacks, quirky (to be kind) outfits, and bad haircuts. This was the place. This was Microsoft Mecca v2012 for geeks and nerds, the Central Florida event of the year, a gathering of high-tech professionals whose skills I both greatly respect and, frankly, fear a little. I was wholly and completely out of my element, a dork in a vast sea of geek jumbo. It was like wearing Dockers and a golf shirt walking into a RenFaire, but one with really crappy costumes and no turkey legs...save those attached to some of the attendees. Of course the corporate whores...errrr, vendors were in place, ready to parlay the convention's fre-nerd-ic energy into millions of dollars by convincing the big-brained and under-sexed in the crowd (i.e., virtually all of them...present company excluded, of course) that their product or service was the only thing standing between them and professional success, industry fame, and clear skin. "With KramTech 2012," they seemed to scream, "you will be THE ROCK STAR of your company's IT department!" As car shows and tattoo parlors learned long ago, tech companies seem to believe that the best way to attract the attention of this crowd is through the hint of the promise of sex. They recruit and deploy an army of "sales reps" whose primary qualifications appear to be long hair, short skirts, high heels, and a vagina. Unlike their distant cousins in the car and body art industries, however, this sub-species of booth paint (semi-gloss decoration that adds nothing to the substance of the product) seems torn between committing to being all-out sex objects and recognizing that they are in the presence of intelligent, discerning people. People who are smart enough to know exactly what these vendors are doing. Also unlike their distant car show and tattoo shop cousins, these young women (what…are there no gay tech professionals who could use some eye candy?) seem to realize that while IT remains a male-dominated field, there are ever-increasing numbers of intelligent, capable, strong professional women – women who’ve battled to make it in this field through hard work and work performance rather than a hard body and performing after work. This is not to say that all of the young female sales reps are there only because of their physical attributes. Many are competent, intelligent, and driven -- not to mention attractive. They're working hard on the front lines of delivering the next generation of technology. The distinction is pretty clear, however, between these young professionals and the booth paint. The former enthusiastically deliver credible information about the products they’re hawking. The latter are positioned in the aisles, uncomfortably avoiding eye contact as they struggle to operate the badge readers. Surprisingly, not all of the women in attendance seemed to object to the objectification of their younger sisters. One IT professional woman who came of age in the industry (mostly in IT marketing) said, “I have no problem with it. I was a ‘booth babe’ for years and it doesn’t bother me at all.” Others, however, weren’t quite so gracious. One woman I spoke with, an IT manager from Cheyenne, Wyoming, said it was demeaning and frankly, as more and more women grow into IT management positions, not a great marketing idea. 
“Using these young women is, to me, no different than vendors giving out t-shirts to attract attention. It’s sad because it’s still hard for a woman to be respected in the IT field and this just perpetuates the outdated notion that IT is a male-dominated field.” She went on to say that decisions by vendors to employ these young women in this “inappropriate way” could impact her purchasing decisions. “I might be swayed toward a vendor who has women on staff who are intelligent and dynamic rather than the vendors who use the ‘decoration’ girls.” So in many ways, the IT industry is no different than most other industries as it struggles to maximize performance by finding and developing talent – all of the talent, not just the 50% with a penis. Women in IT, like their brethren, struggle to find their niche in the field, to grow professionally, and to reach for the brass ring, overcoming obstacles as they climb the mountain of professional success in a never-ending cycle of economic uncertainty. But as (generally) well-educated and highly trained professionals, they are probably better positioned than those in many other industries. Besides, they’ve got one other advantage over their non-IT counterparts as they attempt their ascent to the summit: They’ve already got the backpacks.

    Read the article

  • Center Pictures and Other Objects in Office 2007 & 2010

    - by Matthew Guay
    Sometimes it can be difficult to center a picture in a document just by dragging it around. Today we show you how to center pictures, images, and other objects perfectly in Word and PowerPoint. Note: For this tutorial we’re using Office 2010, but the steps are nearly identical in 2007. Centering a Picture in Word First let’s insert a picture into our document.  Click the Insert tab, and then click Picture. Once you select the picture you want, it will be added to your document.  Usually, pictures are added wherever your cursor was in the document, so in a blank document it will be added at the top left. Also notice that Picture Tools show up in the Ribbon after inserting an image. Note: The following menu items are available in the Picture Tools Format tab, which is displayed when you select the object or image you’re working with. How do we align the picture just like we want?  Click Position to get some quick placement options, including centered in the middle of the document or at the top.    However, for more advanced placement, we can use the Align tool.  If Word isn’t maximized, you may only see the icon without the “Align” label. Notice the tools were grayed out in the menu by default.  To be able to change the alignment, we need to first change the text wrap settings. Click the Wrap Text button, and choose any option other than “In Line with Text”.  Your choice will depend on the document you’re writing; just choose the option that works best in the document.   Now, select the Align tools again.  You can now position your image precisely with these options. Align Center will position your picture in the center of the page width-wise. Align Middle will put the picture in the middle of the page height-wise. This works the same with text boxes.  Simply click the Align button in the Format tab, and you can center it in the page. And if you’d like to align several objects together, simply select them all, click Group, and then select Group from the menu.   Now, in the Align tools, you can center the whole group on your page for a heading, or whatever you want to use the pictures for. These steps also work the same in Office 2007. Center objects in PowerPoint This works similarly in PowerPoint, except that pictures are automatically set for square wrapping, so you don’t have to change anything.  Simply insert the picture or other object of your choice, click Align, and choose the option you want. Additionally, if one object is already aligned like you want, drag another object near it and you will see a Smart Guide to help you align or center the second object with the first.  This only works with shapes in the PowerPoint 2010 beta, but will work with pictures, text boxes, and media in the final release this summer. Conclusion These are good methods for centering images and objects in Word and PowerPoint.  From designing perfect headers to emphasizing your message in a PowerPoint presentation, this is something we’ve found useful and hope you will too. Since we’re talking about Office here, it’s worth mentioning that Microsoft has announced the Technology Guarantee Program for Office 2010. Essentially, this means that if you purchase a version of Office 2007 between March 5th and September 30th of this year, you’ll be able to upgrade to Office 2010 for free when it’s released! 
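    As an aside, the same centering can be scripted. Below is a hypothetical C# sketch using the Word interop API – the file path is a placeholder, and it assumes a reference to Microsoft.Office.Interop.Word plus C# 4's optional COM parameters. It simply mirrors the Wrap Text, Align Center, and Align Middle steps described above:

    using Word = Microsoft.Office.Interop.Word;

    class CenterPictureSketch
    {
        static void Main()
        {
            var app = new Word.Application();
            Word.Document doc = app.Documents.Add();

            // Insert the picture inline, then convert it to a floating shape
            // so it can be positioned (the scripted equivalent of changing
            // the Wrap Text setting away from "In Line with Text").
            Word.InlineShape inline = doc.InlineShapes.AddPicture(@"C:\temp\picture.jpg");
            Word.Shape shape = inline.ConvertToShape();
            shape.WrapFormat.Type = Word.WdWrapType.wdWrapSquare;

            // Center on the page: Left controls the width-wise position,
            // Top controls the height-wise position.
            shape.RelativeHorizontalPosition = Word.WdRelativeHorizontalPosition.wdRelativeHorizontalPositionPage;
            shape.RelativeVerticalPosition = Word.WdRelativeVerticalPosition.wdRelativeVerticalPositionPage;
            shape.Left = (float)Word.WdShapePosition.wdShapeCenter;
            shape.Top = (float)Word.WdShapePosition.wdShapeCenter;

            app.Visible = true; // leave Word open so you can inspect the result
        }
    }

    Treat this as a sketch rather than the supported path – the Ribbon route above is that; the wdShapeCenter constants are just the scripted spelling of the same Align options.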

    Read the article

  • Windows Workflow Foundation (WF) and things I wish were more intuitive

    - by pjohnson
    I've started using Windows Workflow Foundation, and so far I've run into a few things that aren't incredibly obvious. Microsoft did a good job of providing a ton of samples, which is handy because you need them to get anywhere with WF. The docs are thin, so I've been bouncing between samples and downloadable labs to figure out how to implement various activities in a workflow. Code separation or not? You can create a workflow and activity in Visual Studio with or without code separation, i.e. just a .cs "Component"-style object with a Designer.cs file, or a .xoml XML markup file with code behind (beside?) it. Absent any obvious advantage to one or the other, I used code separation for workflows and any complex custom activities, and skipped code separation for custom activities that just inherit from the Activity class and thus don't have anything special in the designer. So far, so good. Workflow Activity Library project type - What's the point of this separate project type? So far I don't see much advantage to keeping your custom activities in a separate project. I prefer to have as few projects as needed (and no fewer). The Designer's Toolbox window seems to find your custom activities just fine no matter where they are, and the debugging experience doesn't seem to be any different. Designer Properties - This is about the designer, and not specific to WF, but nevertheless something that's hindered me a lot more in WF than in Windows Forms or elsewhere. The Properties window does a good job of showing you property values when you hover the mouse over the values. But it doesn't do the same to let you find out what a control's type is. So maybe if I named all my activities "x1" and "x2" instead of helpful, self-documenting names like "listenForStatusUpdate", I could easily see enough of the type to determine what it is; but with any names longer than those, all I get of the type is "System.Workflow.Act" or "System.Workflow.Compone". Even hitting the dropdown doesn't expand any wider, like the debugger quick watch "smart tag" popups do when you scroll through members. The only way I've found around this in VS 2008 is to widen the Properties dialog, losing precious designer real estate, then shrink it back down when you're done to see what you were doing. Really? WF Designer - This is about the designer, and I believe it is specific to WF. I should be able to edit the XML in a .xoml file, or drag and drop using the designer. With WPF (at least in VS 2010 Ultimate), these are side by side, and changes to one instantly update the other. With WF, I have to right-click on the .xoml file, choose Open With, and pick XML Editor to edit the text. It looks like this is one way WF didn't get the same attention WPF got during .NET Fx 3.0 development. Service - In the WF world, this is simply a class that talks to the workflow about things outside the workflow, not to be confused with how the term "service" is used in every other context I've seen in the Windows and .NET world, i.e. an executable that waits for events or requests from a client and services them (Windows service, web service, WCF service, etc.). ListenActivity - Such a great concept, yet so unintuitive. It seems you need at least two branches (EventDrivenActivity instances), one for your positive condition and one for a timeout. The positive condition has a HandleExternalEventActivity, and the timeout has a DelayActivity followed by however you want to handle the delay, e.g. a ThrowActivity. 
The timeout is simple enough; wiring up the HandleExternalEventActivity is where things get fun. You need to create a service (see above), and an interface for that service (this seems more complex than it should be -- why not have activities just wire to a service directly?). And you need to create a custom EventArgs class that inherits from ExternalDataEventArgs -- you can't create an ExternalDataEventArgs event handler directly, even if you don't need to add any more information to the event args. ExternalDataEventArgs isn't marked as an abstract class, and there's no compiler error, warning, or any other indication that you're doing something wrong -- until you run it, find that it always times out, and get to check every place mentioned here to see why. Your interface and service need an event that consumes your custom EventArgs class, and a method to fire that event. You need to call that method from somewhere. Then you get to hope that you did everything just right, or that you can step through code in the debugger before your Delay timeout expires. Yes, it's as much fun as it sounds. (A minimal sketch of this wiring appears at the end of this post.) TransactionScopeActivity - I had the bright idea of putting one in as a placeholder, then filling in the database updates later. That caused this error: The workflow hosting environment does not have a persistence service as required by an operation on the workflow instance "[GUID]". ...which is about as helpful as "Object reference not set to an instance of an object" and even more fun to debug. Google led me to this Microsoft Forums hit, and from there I figured out it didn't like that the activity had no children. Again, a Validator on TransactionScopeActivity would have pointed this out to me at design time, rather than handing me a nearly useless error at runtime. Easily enough, I disabled the activity and that fixed it. I still see huge potential in my work where WF could make things easier and more flexible, but there are some seriously rough edges at the moment. Maybe I'm just spoiled by how much easier and more intuitive development elsewhere in the .NET Framework is.
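For what it's worth, here is a minimal sketch of the wiring described above. The names (IStatusService, StatusUpdatedEventArgs, RaiseStatusUpdated) are mine, invented for illustration, and the shape is distilled from the SDK samples rather than anything authoritative:

    using System;
    using System.Workflow.Activities;

    // The custom EventArgs must derive from ExternalDataEventArgs and be
    // [Serializable], even when it adds no data of its own.
    [Serializable]
    public class StatusUpdatedEventArgs : ExternalDataEventArgs
    {
        public StatusUpdatedEventArgs(Guid instanceId) : base(instanceId) { }
    }

    // The contract a HandleExternalEventActivity binds to, via its
    // InterfaceType and EventName properties in the designer.
    [ExternalDataExchange]
    public interface IStatusService
    {
        event EventHandler<StatusUpdatedEventArgs> StatusUpdated;
    }

    // The "service" (in the WF sense) that raises the event from outside
    // the workflow; correlation happens through the instance ID in the args.
    public class StatusService : IStatusService
    {
        public event EventHandler<StatusUpdatedEventArgs> StatusUpdated;

        public void RaiseStatusUpdated(Guid workflowInstanceId)
        {
            EventHandler<StatusUpdatedEventArgs> handler = StatusUpdated;
            if (handler != null)
                handler(null, new StatusUpdatedEventArgs(workflowInstanceId));
        }
    }

    // At host startup, the service gets registered through an
    // ExternalDataExchangeService, roughly:
    //   ExternalDataExchangeService exchange = new ExternalDataExchangeService();
    //   workflowRuntime.AddService(exchange);
    //   exchange.AddService(new StatusService());

That's four types and an attribute to deliver one event, which is exactly the complaint above.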

    Read the article

  • Oracle Delivers Latest Release of Oracle Enterprise Manager 12c

    - by Scott McNeil
    Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption News Summary IT organizations are adopting private clouds as a stepping-stone to business-driven, self-service IT. Successful implementations hinge on the ability to efficiently deploy and manage cloud services at enterprise scale. Having a complete cloud management solution integrated with an enterprise-class technology stack is a fundamental requirement for IT. Oracle Enterprise Manager 12c Release 4 meets that requirement by helping businesses become more agile and responsive, while reducing cost, complexity, and risk. News Facts Oracle Enterprise Manager 12c Release 4, available today, lets organizations rapidly adopt Oracle-based, enterprise-scale private clouds. New capabilities provide advanced technology stack management, secure database administration, and enterprise service governance, enabling Oracle customers and partners to maximize database and application performance and drive innovation using self-service IT platforms. The enhancements have been driven by customers and the growing Oracle Enterprise Manager Ecosystem, comprised of more than 750 Oracle PartnerNetwork (OPN) Specialized partners. Oracle and its partners and customers have built over 140 plug-ins and connectors for Oracle Enterprise Manager. Watch the video highlights. Automation for Broader Cloud Services Oracle Enterprise Manager 12c Release 4 allows for a rapid enterprise-wide adoption of database, middleware and infrastructure services in the private cloud, driven by an enhanced API-enabled service catalog. The release features “push button” style provisioning of complete environments such as SOA and Oracle Active Data Guard, and fast data cloning that enables rapid deployment and testing of enterprise applications. Out-of-the-box capabilities to detect data and configuration vulnerabilities provide enhanced cloud service governance along with greater operational control through a flexible and extensible showback mechanism. Enhanced Database Management A new performance warehouse enables predictive database diagnostics and trend analysis and helps identify database problems before they occur. New enterprise data-governance capabilities enhance security by helping systematically discover and protect sensitive data. Step-by-step orchestration of upgrades with the ability to rollback changes enables faster adoption of Oracle Database 12c. Expanded Fusion Middleware Management A new consolidated view of Oracle Fusion Middleware 12c deployments with a guided management capability lets administrators apply best management practices to diverse middleware environments and identify performance issues quickly. A Java VM Diagnostics as a Service feature allows governed access to diagnostics data for IT workers across multiple disciplines for accelerated DevOps resolutions of defects and performance optimization. New automated provisioning for SOA lets middleware administrators perform mass SOA provisioning with ease. Superior Enterprise-Grade Management Private roles and preferred credentials have been added to Oracle Enterprise Manager to provide additional fine-grained security for organizations with complex access control requirements. A new security console provides a single point of control for managing the security of Oracle Enterprise Manager environments. 
Support for the latest industry-standard SNMP v3 protocol, including encryption, enables more secure heterogeneous management. “Smart monitoring” adapts to observed environmental changes and adds self-management capabilities to help Oracle Enterprise Manager run at peak performance, while demanding less IT supervision. Supporting Quotes “Lawrence Livermore National Laboratory has a strong tradition of technology breakthroughs and leadership. As a member of Oracle’s Customer Advisory Board for Oracle Enterprise Manager, we have consistently provided feedback and guidance in the areas of enterprise-scale cloud, self-diagnosability, and secure administration for the product,” said Tim Frazier, CIO, NIF and Photon Sciences, Lawrence Livermore National Laboratory. “We intend to take advantage of the Release 4 features that support enterprise-scale availability and fine-grained security capabilities for private cloud deployments.” “IDC's most recent CloudTrack survey shows that most enterprises plan to adopt hybrid cloud architectures over the next three years,” said Mary Johnston Turner, Research Vice President, Enterprise System Management Software, IDC. “These organizations plan to deploy a wide range of workloads into cloud environments including mission critical database and middleware services that require high levels of fault tolerance and disaster recovery. Such capabilities were traditionally custom configured for each application but cloud offers the possibility to incorporate such properties within the service definition, enabling organizations to adopt cloud without compromise. With the latest release of Oracle Enterprise Manager 12c, Oracle is providing customers with an out-of-the-box experience for delivering highly-resilient cloud services for databases and applications.” “Since its inception, Oracle has been leading the way in innovative, scalable and high performance solutions for the enterprise. With this release of Oracle Enterprise Manager, we are extending this leadership by providing enterprise-scale capabilities for planning, delivering, and managing private clouds. We call this ‘zero-to-cloud – accelerated.’ These enhancements help our customers to expedite their adoption of cloud computing and prepare them for the next generation of self-service IT,” said Prakash Ramamurthy, senior vice president of Systems and Cloud Management at Oracle. Supporting Resources Oracle Enterprise Manager 12c Video: Cerner Delivers High Performance Private Cloud Video: BIAS Achieves Outstanding Results with Private Cloud Press Release Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • Oracle Social Network Developer Challenge Winners

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. Now that OpenWorld 2012 has wrapped, I have time to tell you all about what happened. Maybe you recall that Noel (@noelportugal) and I were running a modified hackathon during the show, the Oracle Social Network Developer Challenge. Without further ado, congratulations to Dimitri Gielis (@dgielis) and Martin Giffy D’Souza (@martindsouza) on their winning entry, an integration between Oracle APEX and Oracle Social Network that integrates feedback and bug submission with Oracle Social Network Conversations, allowing developers, end-users and project leaders to view and discuss the feedback on their APEX applications from within Oracle Social Network. Update: Bob Rhubart of OTN (@brhubart) interviewed Dimitri and Martin right after their big win. Money quote from Dimitri when asked what he’d buy with the $500 in Amazon gift cards: “Oracle Social Network.” Nice one. In their own words: From the developer's perspective, it’s important to get feedback soon; after a first iteration, when end users start to test, they can give feedback on the application. Previously it stopped there, and it was up to the developer to communicate further by email, phone, etc. With OSN, every piece of feedback and communication gets logged, and other people can see the discussion immediately as well. From the end users' perspective, they can now communicate more efficiently, not only with the developers but also between themselves. Maybe many end users (in different locations) would like to change some behaviour; by using OSN they can see the entry somebody put in, with a screenshot, and they can just start to chat about it. Some key technical end users can lighten the tasks of the development team by looking at the feedback first and starting to communicate with their peers. The project manager now has the ability to really see what communication has taken place in certain areas and can make decisions on that. Later, if things come up again, he can always go back in OSN and see what was said at that moment in time. Integrating OSN in the APEX applications enhances the user experience, makes the lives of the developers easier, and gives a better overview to project managers. Incidentally, you may already know Dimitri and Martin, since both are Oracle ACE Directors. I ran into Martin at the ACE Director briefings on the Friday before the conference started, and at that point he wasn’t sure he’d have time to enter the Challenge. After some coaxing, he and Dimitri agreed to give it a go and banged out their entry on Tuesday night, or more accurately, very early Wednesday morning, the day of the Challenge judging. I think they said it took them about four hours of hardcore coding to get it done, very much like a traditional hackathon, which is essentially a code sprint from idea to finished product. Here are some screenshots of the workflow they built. I love this idea, i.e. closing the loop between web developers and users, a very common pain point, and so did our judges. 
Speaking of, special thanks to our panel of three judges: Reggie Bradford (@reggiebradford), serial entrepreneur, founder of Vitrue and SVP of Cloud Product Development at Oracle; Robert Hipps (@roberthipps), VP of Development for Oracle Social Network and my former boss; and Roland Smart (@rsmartx), VP of Social Marketing and the brains behind the Oracle Social Developer Community. Finally, thanks to everyone who made this possible, including: The three other teams, from HarQen (@harqen), TEAM Informatics (@teaminformatics) and Fishbowl Solutions (@fishbowle20) featuring Friend of the ‘Lab John Sim (@jrsim_uix), who finished and presented entries. I’ll be posting the details of their work this week. The one guy who finished an entry but couldn’t make the judging, Bex Huff (@bex). Bex rallied from a hospitalization due to an allergic reaction during the show; he’s fine, don’t worry. I’ll post details of his work next week, too. The 40-plus people who registered to compete in the Challenge. Noel, for all his hard work, sample code, and flying monkey target – more on that to come. The Oracle Social Network development team for supporting this event. Everyone in legal and the beta program office for their help. And finally, the Oracle Technology Network (@oracletechnet) for hosting the event and providing countless hours of operational and moral support. Sorry if I’ve missed some people, since this was a huge team effort. This event was a big success, and we plan to do similar events in the future. Stay tuned to this channel for more. 

    Read the article

  • .NET 4: “Slim”-style performance boost!

    - by Vitus
    The RTM version of .NET 4 and Visual Studio 2010 is available, and now we can do some tests with it. Parallel Extensions is one of the most valuable parts of .NET 4.0. It’s a set of good tools for easily consuming multicore hardware power. And it also contains some “upgraded” sync primitives – the Slim versions. For example, it includes an updated variant of the widely known ManualResetEvent. For people who don’t know about it: you can sync the concurrent execution of pieces of code with this primitive. An instance of ManualResetEvent can be in two states: signaled and non-signaled. Transitions between them are made by calling the Set() and Reset() methods. A short explanation:

        Thread 1                      Thread 2            Time
        mre.Reset(); mre.WaitOne();   //code execution    0
        //waiting                     //code execution    1
        //waiting                     //code execution    2
        //waiting                     //code execution    3
        //waiting                     mre.Set();          4
        //code execution              //…                 5

    The upgraded version of this primitive is ManualResetEventSlim. The idea is to decrease the performance cost in the case where only one thread uses it. The main concept is the “hybrid sync schema”, which can be done as follows:

        internal sealed class SimpleHybridLock : IDisposable
        {
            private Int32 m_waiters = 0;
            private AutoResetEvent m_waiterLock = new AutoResetEvent(false);

            public void Enter()
            {
                // First caller takes the lock without touching the kernel object.
                if (Interlocked.Increment(ref m_waiters) == 1) return;
                // Contention: fall back to the "heavy" kernel-mode wait.
                m_waiterLock.WaitOne();
            }

            public void Leave()
            {
                // Nobody waiting: leave without touching the kernel object.
                if (Interlocked.Decrement(ref m_waiters) == 0) return;
                // Wake one waiter.
                m_waiterLock.Set();
            }

            public void Dispose() { m_waiterLock.Dispose(); }
        }

    It’s a sample from Jeffrey Richter’s book “CLR via C#”, 3rd edition. The SimpleHybridLock primitive has two public methods: Enter() and Leave(). You can put your concurrency-critical code between calls to these methods, and it will be executed by only one thread at a time. The code is really simple: the first thread to call Enter() increments the counter and returns immediately. A second thread also increments the counter, and then suspends until m_waiterLock becomes signaled. So, if we don’t have concurrent access to our lock, the “heavy” methods WaitOne() and Set() will not be called. That can give a performance bonus. ManualResetEventSlim uses a similar idea. Of course, it has smarter techniques inside, like checking for recursive calls, and so on. I wanted to know the real difference between the classic ManualResetEvent implementation and the new Slim one. 
I wrote a simple “benchmark”:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class Program
    {
        static void Main(string[] args)
        {
            ManualResetEventSlim mres = new ManualResetEventSlim(false);
            ManualResetEventSlim mres2 = new ManualResetEventSlim(false);

            ManualResetEvent mre = new ManualResetEvent(false);

            long total = 0;
            int COUNT = 50;

            for (int i = 0; i < COUNT; i++)
            {
                mres2.Reset();
                Stopwatch sw = Stopwatch.StartNew();

                ThreadPool.QueueUserWorkItem((obj) =>
                {
                    //Method(mres, true);   // swap in to test the Slim version
                    Method2(mre, true);
                    mres2.Set();
                });
                //Method(mres, false);      // swap in to test the Slim version
                Method2(mre, false);

                mres2.Wait();
                sw.Stop();

                Console.WriteLine("Pass {0}: {1} ms", i, sw.ElapsedMilliseconds);
                total += sw.ElapsedMilliseconds;
            }

            Console.WriteLine();
            Console.WriteLine("===============================");
            Console.WriteLine("Done in average=" + total / (double)COUNT);
            Console.ReadLine();
        }

        private static void Method(ManualResetEventSlim mre, bool value)
        {
            for (int i = 0; i < 9000000; i++)
            {
                if (value) { mre.Set(); } else { mre.Reset(); }
            }
        }

        private static void Method2(ManualResetEvent mre, bool value)
        {
            for (int i = 0; i < 9000000; i++)
            {
                if (value) { mre.Set(); } else { mre.Reset(); }
            }
        }
    }

I use two concurrent threads (the main thread and one from the thread pool) for setting and resetting the ManualResetEvents, run the test COUNT times, and calculate the average execution time. Here are the results, obtained on my dual-core notebook with a T7250 CPU and Windows 7 x64 (the original post shows two timing screenshots, one for ManualResetEvent and one for ManualResetEventSlim). The difference is obvious and serious – about 10 times! So, I think the preferable way is to use ManualResetEventSlim, because calling Set() and Reset() will not always invoke the “heavy” methods that work with Windows kernel-mode objects. It’s a small and nice improvement! ;)
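As a footnote to those numbers, the Slim event also composes better with the rest of .NET 4. A small, hedged sketch (the spin count of 100 is an arbitrary illustration, not a recommendation):

    using System.Threading;

    class SlimUsageSketch
    {
        static void Main()
        {
            // Optional spin count: spin this many times before falling back
            // to a lazily created kernel wait handle.
            ManualResetEventSlim done = new ManualResetEventSlim(false, 100);

            ThreadPool.QueueUserWorkItem(_ => done.Set());

            // Wait accepts a timeout and a CancellationToken together,
            // which the classic WaitOne does not.
            CancellationTokenSource cts = new CancellationTokenSource();
            if (done.Wait(1000, cts.Token))
            {
                // Signaled within one second.
            }

            // If an API insists on a real WaitHandle (e.g. WaitHandle.WaitAll),
            // the Slim event can still produce one – at the cost of allocating
            // the kernel object it otherwise avoids.
            WaitHandle kernelHandle = done.WaitHandle;
        }
    }

So the Slim event is not just faster in the uncontended case; it also plugs into the new .NET 4 cancellation model.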

    Read the article

  • The standards that fail us and the intellectual bubble

    - by Jeff
    There has been a great deal of noise in the techie community about standards, and a sudden and unexplainable hate for Flash. This noise isn't coming from consumers... the countless soccer moms, teens and your weird uncle Bob; it's coming from the people who build (or at least claim to build) the stuff those consumers consume. If you could survey the position of consumers on the topic, they'd likely tell you that they just want stuff on the Web to work.The noise goes something like this: Web standards are the correct and right thing to use across the Intertubes, and anything not a part of those standards (Flash) is bad. Furthermore, the more recent noise is centered around the idea that HTML 5, along with JavaScript, is the right thing to use. The arguments against Flash are, well, the truth is I haven't seen a good argument. I see anecdotal nonsense about high CPU usage and things I'd never think to check when I'm watching Piano Cat on YouTube, but these aren't arguments to me. Sure, I've seen it crash a browser a few times, but it's totally rare.But let's go back to standards. Yes, standards have played an important role in establishing the ubiquity of the Web. The protocols themselves, TCP/IP and HTTP, have been critical. HTML, which has served us well for a very long time, established an incredible foundation. JavaScript did an OK job, and thanks to clever programmers writing great frameworks like jQuery, is becoming more and more useful. CSS is awful (there, I said it, I feel SO much better), and I'll never understand why it's so disconnected and different from anything else. It doesn't help that it's so widely misinterpreted by different browsers. Still, there's no question that standards are a good thing, and they've been good for the Web, consumers and publishers alike.HTML 4 has been with us for more than a decade. In Web years, that might as well be 80. HTML 5, contrary to popular belief, is not a standard, and likely won't be for many years to come. In fact, the Web hasn't really evolved at all in terms of its standards. The tools that generate the standard markup and script have, but at the end of the day, we're still living with standards that are more than ten years old. The "official" standards process has failed us.The Web evolved anyway, and did not wait for standards bodies to decide what to do next. It evolved in part because Macromedia, then Adobe, kept evolving Flash. In the earlier days, it mostly just did obnoxious splash pages, but then it started doing animation, and then rich apps as they added form input. Eventually it found its killer app: video. Now more than 95% of browsers have Flash installed. Consumers are better for it.But I'll do it one better... I'll go out on a limb and say that Flash is a standard. If it's that pervasive, I don't care what you tell me, it's a standard. Just because a company owns it doesn't mean that it's evil or not a standard. And hey, it pains me to say that as a developer, because I think the dev tools are the suck (more on that in a minute). But again, consumers don't care. They don't even pay for Flash. The bottom line is that if I put something Flash-based on the Internet, it's likely that my audience will see it.And what about the speed of standards owned by a company? Look no further than Silverlight. Silverlight 2 (which I consider the "real" start to the story) came out about a year and a half ago. Now version 4 is out, and it has come a very long way in its capabilities. 
If you believe Riastats.com, more than half of browsers have it now. It didn't have to wait for standards bodies and nerds drafting documents; it's out today. At this rate, Silverlight will be on version 6 or 7 by the time HTML 5 is a ratified standard.Back to the noise: one of the things that has continually disappointed me about this profession is the number of people who get stuck in an intellectual bubble, color it with dogmatic principles, and completely ignore the actual marketplace where this stuff all has to live. We aren't machines; binary thinking that forces us to choose between "open standards" and "proprietary lock-in" (the most loaded b.s. FUD term evar) isn't smart at all. The truth is that the <object> tag has allowed us to build incredible stuff on top of the old standards, and consumers have benefitted greatly. Consumer desire, capitalism, and yes, standards ratified by nerds who think about this stuff for years have all played a role in the broad adoption of the Interwebs.We could all do without the noise. At the end of the day, I'm going to build stuff for the Web that's good for my users, and I'm not going to base my decisions on a techie bubble religion. Imagine what the brilliant minds behind the noise could do for the Web if they joined me in that pursuit.

    Read the article

  • Oracle WebCenter at the Enterprise 2.0 Conference

    - by Brian Dirking
    We had a great week at the E20 Conference, presenting in four sessions – Andy MacMillan gave a session titled Today’s Successful Enterprises are Social Enterprises and was on a panel that Tony Byrne moderated; Christian Finn spoke on a Unified Communications panel, Unified Communications + Social Computing = Best of Both Worlds?; and Mark Bennett spoke on a panel on The Evolution of Talent Management. The key areas of focus this year were sentiment analysis, adoption and community building, the benefits of failure, and social’s role in process applications. Sentiment analysis. This was focused not on external audiences but more on employee sentiment. Tim Young showed his internal "NikoNiko" project, where employees use smilies to report their current mood. The result was a dashboard that showed the company mood by department. Since the goal is to improve productivity, people can see which departments are running into issues and try to address them. A company might otherwise wait until the end-of-quarter financials to find out that there was a problem and product didn’t ship. This is a way to identify issues immediately. Tim is great – he had the crowd laughing as soon as he hit the stage, with his proposed hashtag for his session: by making it 138 characters long, people couldn’t say much behind his back. And as I tweeted during his session, I loved his comment that complexity diffuses energy – it sounds like something Sun Tzu would say. Another example of employee sentiment analysis was CubeVibe. Founder and CEO Aaron Aycock, in his 3-minute pitch-or-die session, talked about how engaged employees perform better. It was too bad he got gonged – he was just picking up speed – but CubeVibe did win the vote. Congratulations to them. Internal adoption, community building, and involvement. On this topic I spoke to Terri Griffith, and she said there is some good work going on at University of Indiana regarding this, and hinted that she might be blogging about it in the near future. This area holds lots of interest for me. Amongst our customers, CPAC stands out as an organization that has successfully built a community. So, I wonder – what are the building blocks? A strong leader? A common or unifying purpose? A certain level of engagement? I imagine someone has created an equation that says “for a community to grow at 30% per month, there must be an engagement level x to the square root of y, where x equals current community size, and y equals the expected growth rate, and the result is how many engagements the average user must contribute to maintain that growth.” Does anyone have a framework like that? The net result of everyone’s experience is that there is nothing to do but start early and fail often. Kevin Jones made this the focus of his keynote. He talked about the types of failure and what they mean. And he showed his famous kids-at-work video: Kevin’s blog also has this post: Social Business Failure #8: Workflow Integration. This is something that we’ve been working on at Oracle. Since so much of business is based in enterprise applications such as ERP and CRM (and since Oracle offers E-Business Suite, Siebel, PeopleSoft, and JD Edwards, as well as Fusion Applications), it makes sense that the social capabilities of Oracle WebCenter are built right into these applications. There are two types of social collaboration – ad hoc, and exception handling. 
When you are in a business process and encounter an exception, you immediately look for 1) the document that tells you how to handle it, or 2) the person who can tell you how to handle it. With WebCenter built into these processes, people either search their content management system or engage in expertise location and conversation. The great thing is, THEY DON'T HAVE TO LEAVE THE APPLICATION TO DO IT. Oracle has built the social capabilities right into the applications and business processes. I don't think enough folks were able to see that at the event, but I expect that over the next six months folks will become very aware of it. WebCenter also provides the ad-hoc collaboration, search, and expertise location that folks need when they are innovating or collaborating. We demonstrated Oracle Social Network. It's built on our Oracle WebCenter product to provide social collaboration inside and outside of your company. When we showed it to people, there were a number of areas that they commented on that were different from the other products being shown at the conference: screenshots from within the product; many authors working on documents simultaneously; flagging people for follow-up; the direct ability to call out to people; and the ability to see presence – not just whether someone is online, but which conversation they are actively in. Great stuff – the conference was full of smart people whom we enjoy spending time with. We'll keep in touch in the meantime, but we look forward to seeing you in Boston.

    Read the article

  • Adopt-a-JSR for Java EE 7 - Getting Started

    - by arungupta
    Adopt-a-JSR is an initiative started by JUG leaders to encourage JUG members to get involved in a JSR, in order to increase grass-roots participation. This allows JUG members to provide early feedback to specifications before they are finalized in the JCP. The standards in turn become more complete and developer-friendly after getting feedback from a wide variety of audiences. adoptajsr.org provides more details about the logistics and the benefits for you and your JUG. A similar activity was conducted for OpenJDK as well. Markus Eisele also provides a great introduction to the program (in German).

Java EE 7 (JSR 342) is scheduled to go final in Q2 2013. There are several new JSRs being included in the platform (e.g. WebSocket, JSON, and Batch), a few existing ones are getting an overhaul (e.g. JAX-RS 2 and JMS 2), and several others are getting minor updates (e.g. JPA 2.1 and Servlet 3.1). Each Java EE 7 JSR can leverage your expertise, and each would love your JUG to adopt it.

What does it mean to adopt a JSR? Your JUG identifies a particular JSR, or multiple JSRs, of interest to the JUG members; this is mostly done by polling/discussing on your local JUG members list. Your JUG then downloads and reviews the specification(s) and javadocs for clarity and completeness. The complete set of Java EE 7 specifications, their download links, and EG archives are listed here; glassfish.org/adoptajsr provides the specific areas where the different specification leads are looking for feedback. Your JUG can then think of a sample application that can be built using the chosen specification(s); an existing use case (from work or a personal hobby project) may be chosen to be implemented instead. This is where your creativity and uniqueness come into play. Most of the implementations are already integrated in GlassFish 4, and others will be integrated soon. You can also explore the integration of multiple technologies and provide feedback on the simplicity and ease of use of the programming model. Especially look for integration with existing Java EE technologies and see if you find any discrepancies. Report any missing features that could be included in a future release of the specification. The most important part is to provide feedback by filing bugs on the corresponding spec or RI project. Anything that is not clear, either in the spec or in the implementation, should be filed as a bug. This is what ensures that specification and implementation leads are getting the required feedback and improving the quality of the JSR's final deliverable.

How do I get started? A simple way to get started is to follow S.M.A.R.T., as explained below.

Specific: Identify who will be involved and what you would like to accomplish. For example, building a sample app would provide real-world validation of the API, but because of time constraints you may decide that only reviewing the specification and javadocs can be accomplished. Establish a time frame by which the activities need to be complete.

Measurable: Define metrics for success. For example, this could be the number of bugs filed; remember, the quality of bugs is more important than the quantity. Define your end goal, for example reviewing 4 chapters of the specification or completing the sample application. Create a dashboard that will highlight your JUG's contribution to this effort.

Attainable: Make sure JUG members understand the time commitment required for providing feedback.
This can vary based upon the level of involvement (any is good!) and the number of specifications picked. adoptajsr.org defines different categories of involvement. Once again, any level of involvement is good; just reviewing a chapter, a section, or the javadocs for your use case is helpful.

Relevant: Pick JSRs that JUG members are willing and able to work on. If the JUG members are not interested, they might lose motivation half-way through. The "able" part is tricky, as you can always stretch yourself and learn a new skill ;-)

Time-bound: Define a timetable of activities with clearly defined tasks. A tentative timetable may look like:

Dec 25: Discuss and agree upon the specifications with the JUG
Jan 1: Start Adopt-a-JSR for Java EE 7
Jan 15: Initial spec reading complete; keep thinking through the application that will be implemented
Jan 22: Early design of the sample application is ready
Jan 29: JUG members agree upon the application
Next 4 weeks: Implement the application

Of course, you'll need to alter this based upon your commitment. Maintaining an activity dashboard will help you monitor and track the progress. Make sure to keep filing bugs throughout the process! 12 JUGs from around the world (SouJava, Campinas JUG, Chennai JUG, London Java Community, BeJUG, Morocco JUG, Peru JUG, Indonesia JUG, Congo JUG, Silicon Valley JUG, Madrid JUG, and Houston JUG) have already adopted one of the Java EE 7 JSRs. I'm already helping some JUGs bootstrap and would love to help your JUG too. What are you waiting for?

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2

    - by pinaldave
    Social media has created an Always Connected World for us. Recently I enrolled myself as a student to learn new technologies. I had decided to focus on learning and not to stay connected to the internet while in the learning sessions. On the second day of the event, after the learning was over, I noticed lots of notifications from a friend on my various social media handles. He had contacted me on Twitter, Facebook, Google+, LinkedIn, and YouTube, as well as by SMS, WhatsApp, Skype messages, and, not to forget, a few emails. I called him right away. The problem was very unique – let us hear it in his own words.

"Pinal – we are in big trouble and we are not able to figure out what is going on. Our product details table is continuously losing rows. Lots of rows have disappeared since morning and we are unable to find out why the rows are getting deleted. We have made sure that no DELETE command is executed on the table. As a matter of fact, we have removed the code referencing the table from every single place. We have done so many crazy things out of desperation, but no luck. The rows keep getting deleted in a random pattern. Do you think we have an intrusion or virus problem?"

After describing the problem, he had pasted a few rants about why I was not available during the day. I think it would not be smart to post those exact words here (for many reasons). My immediate reaction was to get online with him. The problem was unique to him, and his team had been all out trying to fix it since morning. As he said, they had done quite a lot out of desperation. I started asking questions about auditing, policy management, and profiling the data. Very soon I realized that the problem was not as advanced as it looked: there was no intrusion, SQL injection, or virus issue. Long story short – it was a very simple issue of a foreign key created with ON UPDATE CASCADE and ON DELETE CASCADE.

CASCADE allows deletions or updates of key values to cascade through the tables defined to have foreign key relationships that can be traced back to the table on which the modification is performed. ON DELETE CASCADE specifies that if an attempt is made to delete a row with a key referenced by foreign keys in existing rows in other tables, all rows containing those foreign keys are also deleted. ON UPDATE CASCADE specifies that if an attempt is made to update a key value in a row, where the key value is referenced by foreign keys in existing rows in other tables, all of the foreign key values are also updated to the new value specified for the key. (Reference: BOL) In simple words – when ON DELETE CASCADE is specified and data is deleted from Table A, any rows in other tables that reference it through the foreign key are deleted as well.

In my friend's case, they had two tables, Products and ProductDetails, with foreign key referential integrity on the product ID between the tables. Now, as fall came around, they were updating their catalogue, and while doing so they were deleting products which were no longer available. As the changes cascaded, the corresponding rows were also deleted from the other table. This is CORRECT. As a matter of fact, there is no error of any kind; SQL Server is behaving exactly how it should behave. The problem was a misunderstanding and an inappropriate implementation of the business logic.
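To make the behavior concrete, here is a minimal T-SQL sketch of the scenario. The Products and ProductDetails table names come from the story above, but the columns and sample data are illustrative assumptions, not the customer's actual schema; the last few statements sketch the soft-delete alternative discussed below.

    -- Hypothetical repro: two tables joined by a cascading foreign key.
    CREATE TABLE Products
    (
        ProductID INT PRIMARY KEY,
        Name      NVARCHAR(100) NOT NULL
    );

    CREATE TABLE ProductDetails
    (
        DetailID    INT PRIMARY KEY,
        ProductID   INT NOT NULL
            REFERENCES Products (ProductID)
            ON UPDATE CASCADE
            ON DELETE CASCADE,
        Description NVARCHAR(400)
    );

    INSERT INTO Products (ProductID, Name)
    VALUES (1, N'Widget'), (2, N'Gadget');

    INSERT INTO ProductDetails (DetailID, ProductID, Description)
    VALUES (10, 1, N'Widget details'), (20, 2, N'Gadget details');

    -- Removing a product from the catalogue...
    DELETE FROM Products WHERE ProductID = 2;

    -- ...silently removes its detail rows too: only DetailID 10 remains.
    SELECT * FROM ProductDetails;

    -- A soft delete avoids the cascade entirely: flag the row instead
    -- of deleting it (column name is an assumption for illustration).
    ALTER TABLE Products ADD IsDiscontinued BIT NOT NULL DEFAULT (0);

    UPDATE Products SET IsDiscontinued = 1 WHERE ProductID = 1;

    -- The catalogue query simply filters out the flagged rows.
    SELECT ProductID, Name FROM Products WHERE IsDiscontinued = 0;

Running the DELETE above reproduces the "disappearing rows" exactly, even though no DELETE was ever issued against ProductDetails.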
What they needed was a Product Master table, a Current Product Catalogue table, and a Product Order Details History table. However, they were using only two tables, and the relation between them was built using foreign keys without a proper understanding of the consequences. If there had to be only two tables, they should have used a soft delete, which does not actually delete the record but just hides it in the original product table. This workaround would have saved them from the cascading delete issue. I will write a detailed post on the design implications in the future, as a few lines cannot cover every issue related to designing, and it is also outside the scope of this blog post. More about designing in future blog posts. Once they learned their mistake, they were happy that there was no intrusion, but trust me, sometimes we are our own enemy, and this is a great example of it. In tomorrow's blog post we will go over their code and workarounds. Feel free to share your opinions, experiences, and comments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
