Search Results

Search found 11590 results on 464 pages for 'world history'.

Page 12 of 464

  • Hello World - My Name is Christian Finn and I'm a WebCenter Evangelist

    - by Michael Snow
    Good Morning World! I'd like to introduce a new member of the Oracle WebCenter Team, Christian Finn. We decided to let him do his own intro today. Look for his guest posts next week; he'll be a frequent contributor to the WebCenter blog and a voice of the community. Hello (Oracle) World! Hi everyone, my name is Christian Finn. It’s a coder’s tradition to have “hello world” be the first output from a new program or in a new language. While I have left my coding days far behind, it still seems fitting to start my new role here at Oracle by saying hello to all of you—our customers, partners and my colleagues. So by way of introduction, a little background about me. I am the new senior director for evangelism on the WebCenter product management team. Not only am I new to Oracle, but the evangelism team is also brand new. Our mission is to raise the profile of Oracle in all of the markets/conversations in which WebCenter competes—social business, collaboration, portals, Internet sites, and customer/audience engagement. This is all pretty familiar turf for me because, as some of you may know, until recently I was the director of product management at Microsoft for Microsoft SharePoint Server and several other SharePoint products. And prior to that, I held management roles at Microsoft in marketing, channels, learning, and enterprise sales. Before Microsoft, I got my start in the industry as a software trainer and Lotus Notes consultant. I am incredibly excited to be joining Oracle at this time because of the tremendous opportunity that lies ahead to improve how people and businesses work. Of all the vendors offering a vision for social business, Oracle is unique in having best-of-breed strength in market (or coming soon) in all three critical areas: customer experience management; the middleware and back-end applications that run your business; and the social, collaboration, and content technologies that are the connective tissue between them. Everyone else can offer one or two of the above, but not all three unified together. So it is a great time to come on board, and there’s a fantastic team of people hard at work on building great products for you. In the coming weeks and months you’ll be hearing much more from us. For now, we’ll kick things off with some blog posts here on the WebCenter blog. Enjoy the reads and please share your thoughts with me on Twitter at @cfinn.

    Read the article

  • In what year in history were computers first used to store porn? [closed]

    - by Emil H
    Of course this sounds like a joke question, but it's meant seriously. I remember being told by an old system administrator back in the early nineties about people asking about good FTPs for porn, and that as a joke they would always tell them to connect to 127.0.0.1. They would come back saying that there was a lot of porn at that address, but that oddly enough it seemed like they already had it all. Point being, it seems like it's been around for quite a while. Anyway. Considering that a considerable portion of the internet is devoted to porn these days, it would be interesting to know if someone has any kind of idea as to when and where the phenomenon first arose. There must be some mention of this in old hacker folklore? (Changed to CW to emphasize that this isn't about rep, but about genuine curiosity. :)

    Read the article

  • Does deleting a branch in git remove it from the history?

    - by Ken Liu
    Coming from svn, just starting to become familiar with git. When a branch is deleted in git, is it removed from the history? In svn, you can easily recover a branch by reverting the delete operation (reverse merge). Like all deletes in svn, the branch is never really deleted, it's just removed from the current tree. If the branch is actually deleted from the history in git, what happens to the changes that were merged from that branch? Are they retained?
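
    A quick way to convince yourself of this from the command line (a sketch; the branch name and commit hash below are hypothetical):

      # merge a feature branch into master, then delete the branch
      git checkout master
      git merge feature-x        # the merged commits become part of master's history
      git branch -d feature-x    # deletes only the branch pointer, not the commits

      # the merged commits are still reachable (and listed) from master
      git log --oneline master

      # even an unmerged, deleted branch is usually recoverable for a while via the reflog
      git reflog                     # find the old branch tip, e.g. abc1234
      git branch feature-x abc1234   # recreate the branch pointing at that commit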

    Read the article

  • How to integrate history.js to my ajax site?

    - by vzhen
    I have a left div and a right div on my website, along with navigation buttons - let's say button_1, button_2. Clicking these buttons changes the right div's content using ajax, so the left div doesn't change. That much I have done, but I have a problem: the URL doesn't change. I found that history.js is able to solve this issue, but I cannot find any tutorial about history.js + ajax. Can anyone help me out? Thanks in advance.
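
    For what it's worth, here is a minimal sketch of the usual pattern with the jQuery build of History.js (the element IDs, class names and URLs are made up for illustration):

      // On click: push a new URL instead of navigating away.
      $('.nav-button').on('click', function (e) {
          e.preventDefault();
          var url = $(this).attr('href');                        // e.g. /page/2
          History.pushState({ url: url }, document.title, url);  // updates the address bar
      });

      // On every state change (pushState or back/forward), ajax the right div's content.
      History.Adapter.bind(window, 'statechange', function () {
          var state = History.getState();
          $('#right-div').load(state.data.url || state.url);
      });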

    Read the article

  • History tables pros, cons and gotchas - using triggers, sproc or at application level.

    - by Nathan W
    I am currently playing around with the idea of having history tables for some of my tables in my database. Basically I have the main table and a copy of that table with a modified date and an action column to store what action was performed, e.g. Update, Delete or Insert. So far I can think of three different places where you can do the history table work:
    1. Triggers on the main table for update, insert and delete (database).
    2. Stored procedures (database).
    3. Application layer (application).
    My main question is: what are the pros, cons and gotchas of doing the work in each of these layers? One advantage I can think of for the trigger approach is that integrity is always maintained no matter what program is implemented on top of the database.
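
    For the trigger option, a minimal SQL Server sketch of the idea (the table and column names here are made up):

      -- History copy of a hypothetical Customer table, plus the audit columns
      CREATE TABLE CustomerHistory (
          CustomerId   INT,
          Name         NVARCHAR(100),
          ModifiedDate DATETIME    NOT NULL DEFAULT GETDATE(),
          AuditAction  VARCHAR(10) NOT NULL  -- 'Insert', 'Update' or 'Delete'
      );
      GO
      CREATE TRIGGER trg_Customer_History
      ON Customer
      AFTER INSERT, UPDATE, DELETE
      AS
      BEGIN
          -- rows in 'inserted' cover inserts and the new side of updates
          INSERT INTO CustomerHistory (CustomerId, Name, AuditAction)
          SELECT CustomerId, Name,
                 CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'Update' ELSE 'Insert' END
          FROM inserted;

          -- rows that exist only in 'deleted' (no matching 'inserted' rows) are deletes
          IF NOT EXISTS (SELECT 1 FROM inserted)
              INSERT INTO CustomerHistory (CustomerId, Name, AuditAction)
              SELECT CustomerId, Name, 'Delete' FROM deleted;
      END;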

    Read the article

  • javascript "window.history.forward(1);" not working.

    - by Ray L.
    Hi, I'm trying to prevent the back button from working on one of my ASP.NET MVC pages. I've read in a couple of places that if I put "window.history.forward(1);" in my page it will prevent the back button from working on a given page. This is what I did in my page:

      <script type="text/javascript">
          $(document).ready(function () {
              window.history.forward(1);
          });
      </script>

    It doesn't seem to be working. Am I using this incorrectly, or is this approach wrong? Thanks.
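
    For reference, the variant of this trick that is usually cited places the script on the page you don't want users to return to, and also re-runs it when the page is restored from the back/forward cache (a sketch only; browsers are not obliged to honour it, so treat it as a deterrent rather than a guarantee):

      <script type="text/javascript">
          function preventBack() { window.history.forward(); }
          window.onload = preventBack;
          // pages restored from the bfcache don't fire onload again, so hook pageshow too
          window.onpageshow = function (evt) { if (evt.persisted) preventBack(); };
      </script>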

    Read the article

  • Is it possible for PHP to generate a fresh page on every Javascript history.go(-1) ?

    - by Ho
    Hello, I have a PHP page (a.php) which is already sending these headers:

      <?php
      header('Cache-Control: no-cache, no-store, max-age=0, must-revalidate');
      header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');
      header('Pragma: no-cache');
      ?>

    The PHP page (a.php) has a link to another page (b.html), and on b.html there is JavaScript code to go back:

      <script type="text/javascript">
      history.go(-1);
      </script>

    It seems to me that, when the browser is "going back" to a.php, the content isn't fresh at all. Would you please advise me whether generating a completely fresh page on history.go(-1) is possible? Thank you.
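
    One workaround you sometimes see, since browsers often serve back/forward navigations out of the back-forward cache regardless of the PHP cache headers, is to have a.php force a real reload of itself when it is restored from that cache (a sketch; behaviour varies between browsers):

      <script type="text/javascript">
      // placed on a.php: if the page was restored from the bfcache, request it again
      window.onpageshow = function (evt) {
          if (evt.persisted) {
              window.location.reload();
          }
      };
      </script>

    Alternatively, b.html could navigate with window.location.href = 'a.php' instead of history.go(-1), which makes the browser issue a normal request that the no-cache headers apply to.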

    Read the article

  • Procedural Generation of tile-based 2d World

    - by Matthias
    I am writing a 2d game that uses tile-based top-down graphics to build the world (i.e. the ground plane). Built manually, this works fine. Now I want to generate the ground plane procedurally at run time. In other words: I want to place the tiles (their textures) at random on the fly. Of course I cannot create an endless ground plane, so I need to restrict how far from the player character (on which the camera focuses) I procedurally generate the ground floor. My approach would be like this: I have a 2d grid that stores all tiles of the floor at their correct x/y coordinates within the game world. When the player moves the character, and therefore also the camera, I constantly check whether there are empty locations in my x/y map within a maximum distance from the character, i.e. cells in my virtual grid that have no tile set. In such a case I place a new tile there. Therefore the player would always see the ground plane without gaps or empty spots. I guess that would work, but I am not sure whether it is the best approach. Is there a better alternative, maybe even a best practice for my case?
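
    A minimal sketch of that idea in C++ (all names and numbers are made up; a real game would also want to evict or persist tiles that fall far outside the radius):

      #include <cstdlib>
      #include <map>
      #include <utility>

      // Sparse ground plane: (tileX, tileY) -> tile/texture id.
      static std::map<std::pair<int, int>, int> world;
      static const int GEN_RADIUS = 20;   // how many tiles around the player to keep filled
      static const int TILE_KINDS = 8;    // number of different ground textures

      // Call this whenever the player (and camera) moves to a new tile.
      void generateAround(int playerTileX, int playerTileY) {
          for (int y = playerTileY - GEN_RADIUS; y <= playerTileY + GEN_RADIUS; ++y) {
              for (int x = playerTileX - GEN_RADIUS; x <= playerTileX + GEN_RADIUS; ++x) {
                  std::pair<int, int> key(x, y);
                  if (world.find(key) == world.end()) {
                      world[key] = std::rand() % TILE_KINDS;   // fill the empty cell with a random tile
                  }
              }
          }
      }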

    Read the article

  • Writing a "Hello World" Device Driver for kernel 2.6 using Eclipse

    - by Isaac
    Goal: I am trying to write a simple device driver on Ubuntu. I want to do this using Eclipse (or a better IDE that is suitable for driver programming). Here is the code:

      #include <linux/module.h>

      static int __init hello_world( void )
      {
          printk( "hello world!\n" );
          return 0;
      }

      static void __exit goodbye_world( void )
      {
          printk( "goodbye world!\n" );
      }

      module_init( hello_world );
      module_exit( goodbye_world );

    My effort: After some research, I decided to use Eclipse CDT for developing the driver (while I am still not sure if it supports multi-threaded debugging tools). So I: installed Ubuntu 11.04 desktop x86 on a VMWare virtual machine; installed eclipse-cdt and linux-headers-2.6.38-8 using Synaptic Package Manager; created a C Project named TestDriver1 and copy-pasted the above code into it; and changed the default build command, make, to the following customized build command: make -C /lib/modules/2.6.38-8-generic/build M=/home/isaac/workspace/TestDriver1
    The problem: I get an error when I try to build this project using Eclipse. Here is the log for the build:

      **** Build of configuration Debug for project TestDriver1 ****
      make -C /lib/modules/2.6.38-8-generic/build M=/home/isaac/workspace/TestDriver1 all
      make: Entering directory '/usr/src/linux-headers-2.6.38-8-generic'
      make: *** No rule to make target 'vmlinux', needed by 'all'.  Stop.
      make: Leaving directory '/usr/src/linux-headers-2.6.38-8-generic'

    Interestingly, I get no error when I use the shell instead of Eclipse to build this project. To use the shell, I just create a Makefile containing obj-m += TestDriver1.o and use the above make command to build. So, something must be wrong with the Eclipse Makefile. Maybe it is looking for the vmlinux architecture (?) or something while the current architecture is x86. Maybe it's because of VMWare? As I understand it, Eclipse creates the makefiles automatically, and modifying them manually would cause errors in the future OR make managing the makefile difficult. So, how can I compile this project in Eclipse?
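
    For reference, the project Makefile that makes the shell build work typically looks like the sketch below (paths match the ones quoted above; recipe lines must start with a tab). A guess at the Eclipse failure: the managed build appends the "all" target to the kernel Makefile invocation, which the headers tree cannot satisfy, whereas a small project Makefile like this one asks the kernel build for "modules" from its own default target, so pointing the Eclipse build at the project directory and letting it run plain "make" is the usual workaround.

      obj-m += TestDriver1.o

      all:
              make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

      clean:
              make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean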

    Read the article

  • Oracle Enterprise Manager sessions on the last day of the Oracle Open World

    - by Anand Akela
    Hope you had a very productive Oracle Open World so far. Hopefully, many of you attended the customer appreciation event yesterday night at Treasure Island. We still have many Enterprise Manager related sessions today, Thursday, the last day of Oracle Open World 2012. Download the Oracle Enterprise Manager 12c OpenWorld schedule (PDF).
    Oracle Enterprise Manager Cloud Control 12c (and Private Cloud) sessions (Time | Title | Location):
    11:15 AM - 12:15 PM | Application Performance Matters: Oracle Real User Experience Insight | Palace Hotel - Sea Cliff
    11:15 AM - 12:15 PM | Advanced Management of JD Edwards EnterpriseOne with Oracle Enterprise Manager | InterContinental - Grand Ballroom B
    11:15 AM - 12:15 PM | Spark on SPARC Servers: Enterprise-Class IaaS with Oracle Enterprise Manager 12c | Moscone West - 3018
    11:15 AM - 12:15 PM | Pinpoint Production Applications’ Performance Bottlenecks by Using JVM Diagnostics | Marriott Marquis - Golden Gate C3
    11:15 AM - 12:15 PM | Bringing Order to the Masses: Scalable Monitoring with Oracle Enterprise Manager 12c | Moscone West - 3020
    12:45 PM - 1:45 PM | Improving the Performance of Oracle E-Business Suite Applications: Tips from a DBA’s Diary | Moscone West - 2018
    12:45 PM - 1:45 PM | Advanced Management of Oracle PeopleSoft with Oracle Enterprise Manager | Moscone West - 3009
    12:45 PM - 1:45 PM | Managing Sun Servers and Oracle Engineered Systems with Oracle Enterprise Manager | Moscone West - 2000
    12:45 PM - 1:45 PM | Strategies for Configuring Oracle Enterprise Manager 12c in a Secure IT Environment | Moscone West - 3018
    12:45 PM - 1:45 PM | Using Oracle Enterprise Manager 12c to Control Operational Costs | Moscone South - 308
    2:15 PM - 3:15 PM | My Oracle Support: The Proactive 24/7 Assistant for Your Oracle Installations | Moscone West - 3018
    2:15 PM - 3:15 PM | Functional and Load Testing Tips and Techniques for Advanced Testers | Moscone South - 307
    2:15 PM - 3:15 PM | Oracle Enterprise Manager Deployment Best Practices | Moscone South - 104

    Read the article

  • Future Air Plane – A new world

    - by Rekha
    For the first time in my life, I wished I had more years to live. The world has evolved from the caveman's life to the man who is almost The Creator. When I was about 12 years old, I was taken to the Chennai Planetarium on a school excursion. That day we were made to lie down in a dark room and the ceiling was full of stars and planets. All of those were just videos, but the day still stands out in my mind. The same kind of experience, for real, is waiting for our future generations. Even though English movies have gone beyond imagination, we still have the chance to bring those imaginations to reality. You must be wondering why all this hype. Recently Airbus unveiled news of a transparent airplane for 2050. This airplane will have a transparent body, letting passengers view the sky from all sides while flying high above the ground. And it will have all possible technologies under one roof, giving immense pleasure to the passengers. The journey would be an unforgettable one for each of us. Image and News Credit: Daily Telegraph. This article, titled Future Air Plane – A new world, was originally published at Tech Dreams.

    Read the article

  • How To Access Your eBook Collection Anywhere in the World

    - by Jason Fitzpatrick
    If you have an eBook reader it’s likely you already have a collection of eBooks you sync to your reader from your home computer. What if you’re away from home or not sitting at your computer? Learn how to download books from your personal collection anywhere in the world (or just from your backyard). You have an eBook reader, you have an eBook collection, and when you remember to sync your books to the collection on your computer everything is rosy. What about when you forget or when the syncing process for your device is a bit of a hassle? (We’re looking at you, iPad.) Today we’re going to show you how to download eBooks to your eBook reader from anywhere in the world using a cross-platform solution.

    Read the article

  • Google Glasses–A new world in front of your eyes

    - by Gopinath
    Google is getting into a whole new business that would help us see the world in a new dimension and free us from all the gadgets we carry today. Google Glasses is a tiny wearable computer that brings information in front of your eyes and lets you interact with it using voice commands. It’s a kind of glasses (spectacles) that you can wear to see and interact with the world in a new way. With Google Glasses, for example, you can look at a beautiful location and through voice instruct it to capture a photograph and share it with your friends. You don’t need a camera to capture the beautiful scene, and you don’t need an app to upload and share it. All you need is Google Glasses. By the way, these glasses are not heavy head-mountable stuff; they are very tiny and look beautiful too. Check out the embedded video demo released by Google to see them in action, and for sure you are going to be amazed. Last December, the 9to5Google blog posted details about this secret project, and the NY Times says that these glasses would be available to everyone at an affordable cost, anywhere between $250 and $600. It is powered by the Android OS and contains a GPS, motion sensor, camera, and voice input & output devices. Check out Project Glass for more details.

    Read the article

  • UPK Basics Hands On Lab at Oracle Open World Latin America

    - by user581320
    Oracle Open World Latin America 2012 will be in Sao Paulo, Brazil, December fourth through the sixth. There's so much to see and learn from at Oracle OpenWorld: keynotes, technical sessions, Oracle and partner demonstrations, hands-on labs, networking events, and more. I will be presenting a hands-on lab at the show this year, Introduction to Oracle User Productivity Kit - Learn the Basics, in the afternoon on Tuesday, December 4th. This nonstop one-hour lab covers topics from Getting Started with UPK to the basics of creating an outline and some typical content, concluding with publishing some of the many outputs UPK is capable of. If you are planning on attending the show, come by the lab and see what UPK is all about. I’ll be in Sao Paulo all week to fulfill my need to extend California’s summer by another week (trip bonus) and to meet and discuss all things UPK with our customers and partners. If you’re not registered for the show there is still time. Check out the Oracle Open World Latin America 2012 web site for all the details. I look forward to seeing you in Sao Paulo! Peter Maravelias, Principal Product Strategy Manager, Oracle UPK

    Read the article

  • Woman Is the World's First Computer Programmer? [closed]

    - by Sveta Bondarenko
    This week, on 10th December, we celebrate the 197th birth anniversary of Ada Lovelace, often considered the world's first computer programmer. Ada became famous not only as the daughter of the Romantic poet Lord Byron but also as an outstanding 19th century mathematician. Her work on the Analytical Engine is recognized as containing the first algorithm intended to be processed by a machine. Women have always played a crucial role in the evolution of computer science, but unfortunately, they are often considered not as good at programming and engineering as men. Even though the fair sex makes up a growing portion of computer and Internet users, there is still a large gender gap in the field of Computer Science. But all is not lost! According to one study, women's enrollment in computer science rose from 7 percent in 1995 to 42 percent in 2000. And it is still increasing. Soon women will take a well-deserved position among the world's top computer programmers. After all, a number of notable female computer pioneers such as Ada Lovelace, Grace Hopper, and Anita Borg have proven that women make great computer scientists. But will women make great contributions to the modern technologies industry? Or is the successful and famous female computer programmer just a pipe dream?

    Read the article

  • Efficient mapping layout in 2D side-scroller, and collisions between character and the world

    - by Jack
    I haven't touched Visual Studio for a couple of months now, but I was playing a game from the '90s today and had an epiphany: I was looking for something I didn't need, and I wasn't using what I knew correctly. One of those realizations was collision, so let me tell you a bit about the project I was working on. The project's graphics look like Mario or Dangerous Dave, etc. - you get the idea, old-school pixels. So anyway, I remember trying to think of something other than AABB for the character's shape, but I couldn't think of anything. Perhaps I could get a suggestion for this? Another thing is the world - I don't want it to be just a linear world, I want mountains, etc. My idea is to use triangles, and I have no idea yet what to do if I want just part of the cube, say 3/4 or 2/4 or whatever. Hard-coding such things seems inefficient. P.S. I am not looking at the precision level offered by Box2D. Actually, I remember trying to implement it at first, but I failed as my understanding of C++ wasn't advanced enough, as will be mentioned below. P.P.S. I am programming in C++, and I haven't done it for a couple of months now. I have no means of testing it either, as my PC is broken down, and this one can barely run games from the late '90s, not to speak of a compiler or a program with inefficient resource management... I am also not an expert (obviously); I don't even know if I can consider myself an average programmer. In short, I am simply curious about my thoughts and my past experience when programming the game. I may come back to it when my PC is fixed; I'm already filing a note about these things.
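
    As a reference point for the AABB part of the discussion, the overlap test itself is tiny (a sketch; the struct and field names are made up):

      // Axis-aligned bounding box: position of the top-left corner plus size.
      struct AABB { float x, y, w, h; };

      // Two boxes overlap if they overlap on both the x axis and the y axis.
      bool overlaps(const AABB& a, const AABB& b) {
          return a.x < b.x + b.w && b.x < a.x + a.w &&
                 a.y < b.y + b.h && b.y < a.y + a.h;
      }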

    Read the article

  • Heading Out to Oracle Open World

    - by rickramsey
    In case you haven't figured it out by now, Oracle reserves an awful lot of announcements for Oracle Open World. As a result, the show is always a lot of fun for geeks. What will the Oracle Solaris team have to say? Will the Oracle Linux team have any surprises? And what about Oracle hardware? For my part, I'll be one of the lizards at the OTN Lounge with the OTN crew, handing out t-shirts to system admins and developers, or anyone who is willing to impersonate one. I understand, not everyone can have the raw animal magnetism of a sysadmin, or the debonair sophistication of a C++ developer, so some of you have no choice but to pretend. I won't judge. I'll also be doing video interviews of as many techie people as I can corner. I've got more than 30 interviews already scheduled. Most of them will be 3-5 minutes long. I'll be asking our best technical minds what's cool about their latest technologies and what impact it will have on system admins or system developers. I'll be posting those videos here: Find OTN Systems Videos from Oracle Open World Here! We've got some great topics in mind. A dummies guide to hardware-assisted cryptography with Glenn Brunette. ZFS deduplication. The momentum building around Oracle Solaris 11, with Lynn Rohrer, plus conversations with partners who have deployed Oracle Solaris 11. Migrating to Oracle Database with SQL Developer. The whole database cloud thing. Oracle VM and, of course, Oracle Linux. So even if you can't be part of the fun, keep an eye out for the videos on our YouTube channel. - Rick

    Read the article

  • about freelancer in third world countries

    - by MaKo
    Hello guys, one question that has been bugging me... First of all, I want to say that I actually come from a third world country, so I am all for opportunities for everybody. So here comes my question: I have been working as a programmer on iPhone apps (a noob in the company), now in my new "first" world country (immigration can be good!), but I seem to be getting more and more advertisements from sites like freelancer.com, etc. So I would like to know what you think about all this. Will the jobs be getting cheaper? If a project can be done for, say, 10% of the cost overseas, what is stopping employers from doing just that? Is it worth it? How about the quality of a local job versus an overseas job? And all the other aspects I can't think of? I just want to know if all these years of learning are going to pay off, or if in the near future all programming jobs will just go to cheaper labor (sweat shops?). OK, I hope my ramblings make sense. Cheers ;)

    Read the article

  • Facebook Payments & Credits vs. Real-World & Charities

    - by Adam Tannon
    I am having a difficult time understanding Facebook's internal "e-commerce microcosm" and what it allows Facebook App developers to do (and what it restricts them from doing). Two use cases: (1) I'm an e-commerce retailer selling clothes and coffee mugs (real-world goods) on my website; I want to write a Facebook App that allows Facebook users to buy my real-world goods from inside of Facebook using real money ($ USD). (2) I'm a high school student trying to raise money for my senior class trip and want to build a Facebook App that allows Facebook users to donate to our class using real money ($ USD). Are these two scenarios possible? If not, why (what Facebook policies prohibit me from doing so)? If so, what APIs do I use: Payments or Credits? And how (specifically) would it work? Do Facebook users have to first buy "credits" (which are mapped to $ USD values under the hood) and pay/donate with credits, or can they whip out their credit card and pay/donate right through my Facebook App? I think that last question really summarizes my confusion: can Facebook users enter their credit card info directly into Facebook Apps, or do you have to go through the Payments/Credits APIs as a "middleman"?

    Read the article

  • SQL SERVER – History of SQL Server Database Encryption

    - by pinaldave
    I recently met Michael Coles and Rodney Landrum, the authors of the one-of-a-kind book Expert SQL Server 2008 Encryption, at SQLPASS in Seattle. During the conversation we ended up discussing how Microsoft is evolving encryption technology. The same discussion led to talking about the history of encryption tools in SQL Server. Michael pointed me to page 18 of his book on encryption. He explicitly gave me permission to reproduce the relevant part of that history from his book.
    Encryption in SQL Server 2000: Built-in cryptographic encryption functionality was nonexistent in SQL Server 2000 and prior versions. In order to get server-side encryption in SQL Server you had to resort to purchasing or creating your own SQL Server XPs. Creating your own cryptographic XPs could be a daunting task owing to the fact that XPs had to be compiled as native DLLs (using a language like C or C++) and the XP application programming interface (API) was poorly documented. In addition there were always concerns around creating well-behaved XPs that “played nicely” with the SQL Server process.
    Encryption in SQL Server 2005: Prior to the release of SQL Server 2005 there was a flurry of regulatory activity in response to accounting scandals and attacks on repositories of confidential consumer data. Much of this regulation centered on the need for protecting and controlling access to sensitive financial and consumer information. With the release of SQL Server 2005 Microsoft responded to the increasing demand for built-in encryption by providing the necessary tools to encrypt data at the column level. This functionality prominently featured the following:
    - Support for column-level encryption of data using symmetric keys or passphrases.
    - Built-in access to a variety of symmetric and asymmetric encryption algorithms, including AES, DES, Triple DES, RC2, RC4, and RSA.
    - Capability to create and manage symmetric keys.
    - Key creation and management.
    - Ability to generate asymmetric keys and self-signed certificates, or to install external asymmetric keys and certificates.
    - Implementation of a hierarchical model for encryption key management, similar to the ANSI X9.17 standard model.
    - SQL functions to generate one-way hash codes and digital signatures, including SHA-1 and MD5 hashes.
    - Additional SQL functions to encrypt and decrypt data.
    - Extensions to the SQL language to support creation, use, and administration of encryption keys and certificates.
    - SQL CLR extensions that provide access to .NET-based encryption functionality.
    Encryption in SQL Server 2008: Encryption demands have increased over the past few years. For instance, there has been a demand for the ability to store encryption keys “off-the-box,” physically separate from the database and the data it contains. Also there is a recognized requirement for legacy databases and applications to take advantage of encryption without changing the existing code base. To address these needs SQL Server 2008 adds the following features to its encryption arsenal:
    - Transparent Data Encryption (TDE): Allows you to encrypt an entire database, including log files and the tempdb database, in such a way that it is transparent to client applications.
    - Extensible Key Management (EKM): Allows you to store and manage your encryption keys on an external device known as a hardware security module (HSM).
    - Cryptographic random number generation functionality.
    - Additional cryptography-related catalog views and dynamic management views.
    - SQL language extensions to support the new encryption functionality.
    The encryption book covers all these tools across its various chapters in one simple story. If you are interested in how encryption evolved and reached the stage where it is today, this book is a must for everyone. You can read my earlier review of the book over here. Reference: Pinal Dave (http://blog.sqlauthority.com)
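
    As a quick taste of the SQL Server 2005-era column-level functionality listed above, here is a minimal sketch (the key name, password and value are made up; in practice you would usually protect the key with a certificate in the key hierarchy rather than a plain password):

      -- create and open a symmetric key, then encrypt and decrypt a value
      CREATE SYMMETRIC KEY DemoKey
          WITH ALGORITHM = AES_256
          ENCRYPTION BY PASSWORD = 'Str0ng_P@ssphrase!';

      OPEN SYMMETRIC KEY DemoKey DECRYPTION BY PASSWORD = 'Str0ng_P@ssphrase!';

      DECLARE @secret VARBINARY(256);
      SET @secret = ENCRYPTBYKEY(KEY_GUID('DemoKey'), N'4111-1111-1111-1111');

      SELECT CONVERT(NVARCHAR(100), DECRYPTBYKEY(@secret)) AS decrypted_value;

      CLOSE SYMMETRIC KEY DemoKey;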

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Book Review: Oracle ADF Real World Developer’s Guide

    - by Frank Nimphius
    Recently PACKT Publishing published "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman, a product manager in our team. Though it is already the sixth book dedicated to Oracle ADF, it contains a lot of great information that none of the previous books covered, making it a safe buy even for those who own the other books published by Oracle Press (McGraw-Hill) and PACKT Publishing. More than half of the "Oracle ADF Real World Developer’s Guide" is dedicated to Oracle ADF Business Components, in a depth and clarity that lets you feel the expertise Jobinesh has gained in this area. If you enjoy Jobinesh's blog (http://jobinesh.blogspot.co.uk/) about Oracle ADF, then, no matter how much of an Oracle ADF expert you are, this book will make you happy, as it provides the detailed information you always wished you had. If you are new to Oracle ADF, this book alone won't get you flying, but if you have some Java background, it accelerates your learning big, big, big time.

Chapter 1 is an introduction to Oracle ADF that not only explains the layers but also how ADF compares to plain Java EE solutions (page 13). If you are new to Oracle JDeveloper and ADF, then by the end of this chapter you will know how to start JDeveloper and begin your ADF development.

Chapter 2 starts with what Jobinesh is really good at: ADF Business Components. In this chapter you learn about the architecture ingredients of ADF Business Components: view objects, view links, associations, entities, row sets, query collections and application modules. This chapter also provides an introduction to ADF BC SDO services, as well as sequence diagrams of what happens when you execute queries or commit updates.

Chapter 3 is dedicated to entity objects and is one of many chapters in this book you will enjoy and never want to miss. Jobinesh explains the artifacts that make up an entity object, how to work with entities and resource bundles, and many advanced topics, including inheritance, change history tracking, custom properties, validation and cursor handling.

Chapter 4, you guessed it, is all about view objects. As with entities, you learn about the XML files and classes that make up a view object, as well as how to define and work with queries. List of values, inheritance, polymorphism, bind variables and data filtering are the interesting, and important, topics that follow. Again the chapter provides helpful sequence diagrams so you can understand what happens internally within a view object.

Chapter 5 focuses on advanced view object and entity object topics, like lifecycle callback methods and when you want to override them. This chapter is a good digest of Jobinesh's blog entries (which most ADF developers have in their bookmark list). Really worth reading!

Chapter 6 is about application modules. Besides explaining what application modules are, this chapter covers important topics like properties, passivation, activation, application module pooling, and how and where to write custom logic. In addition you learn about the application module lifecycle and request sequence.

Chapter 7 is about the ADF binding layer. If you are new to Oracle ADF and got lost in the more advanced ADF Business Components chapters, this chapter is where you get back into the game. In very easy terms, Jobinesh explains what the ADF binding layer is, how it fits into the JSF request lifecycle and which metadata files are involved.

Chapter 8 then goes into building data-bound web user interfaces. In this chapter you get the basics of JavaServer Faces (e.g. managed beans) and learn about the interaction between the JSF UI and the ADF binding layer. Later, this chapter provides advanced solutions for working with tree components and lists of values.

Chapter 9 introduces bounded task flows and the ADF Controller. This is a chapter you will want to read if you are new to ADF or have just started. Experts won't find anything new here, which doesn't mean it isn't worth reading (I, for example, enjoyed the controller discussion very much).

Chapter 10 provides advanced coverage of bounded task flows and talks about contextual events.

Chapter 11 is another highlight and explains error handling, trains, transactions and more. I can only recommend you read this chapter. I am aware of many documents that cover exception handling in Oracle ADF (and my Oracle Magazine article for January/February 2013 does the same), but none that covers it in such great depth.

Chapter 12 covers ADF best practices, which is a great round-up of all the tips provided in this book (without Jobinesh repeating himself). It's all cool stuff that helps you with your ADF projects.

In summary, "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman is a great book and a great addition for all Oracle ADF developers and those who want to become one. Frank

    Read the article

  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop.

One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices and each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled? In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns.

The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup. It's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten amid all the other demands on their time. Sometimes, they're just too busy firefighting. But what if it was simple to do? That was our inspiration for SQL Backup 7.

So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board: "Make backup verification so easy to do that no DBA has an excuse for not doing it." It's the missing piece that completes the puzzle.

Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement. The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict, so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time? Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run.

Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup. We think we've done that. When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data, so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it. Post by Brian Harris
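For a feel of what such a verification restore involves when done by hand, here is a rough sketch using Python and pyodbc against a local SQL Server instance. The database name, file paths and logical file names are hypothetical, and this is not how SQL Backup itself is implemented; it only shows the restore-then-check idea the post describes.

```python
# Hand-rolled backup verification sketch: restore the latest full backup to a
# scratch database, then run an integrity check on it. All names and paths
# below are assumptions for illustration only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes;",
    autocommit=True,  # RESTORE must run outside a user transaction
)
cursor = conn.cursor()

# 1. Restore the backup to a throwaway verification database.
cursor.execute(r"""
    RESTORE DATABASE [AdventureWorks_Verify]
    FROM DISK = N'C:\Backups\AdventureWorks_Full.bak'
    WITH MOVE 'AdventureWorks'     TO N'C:\Verify\AdventureWorks_Verify.mdf',
         MOVE 'AdventureWorks_log' TO N'C:\Verify\AdventureWorks_Verify.ldf',
         REPLACE
""")
while cursor.nextset():  # drain informational messages until the restore completes
    pass

# 2. The backup only counts as 'good' once an integrity check passes.
cursor.execute("DBCC CHECKDB ([AdventureWorks_Verify]) WITH NO_INFOMSGS")
while cursor.nextset():
    pass

print("Backup restored and integrity check passed.")
```

Scheduling that pair of steps (and cleaning up the scratch database afterwards) is exactly the chore the scheduled verification restores in SQL Backup 7 are meant to take off the DBA's plate.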

    Read the article

  • 2012 – The End Of The World Review

    - by Tim Murphy
    The end of the world must be coming. Not because the Mayan calendar says so, but because Microsoft is innovating more than Apple. It has been a crazy year, with pundits declaring not that the end of the world is coming, but that the end of Microsoft is coming. Let's take a look at what 2012 has brought us.

The beginning of the year is a blur. I managed to get to TechEd in June, which was the first time I got to take a deep dive into Windows 8 and many other things that had been announced in 2011. The promise I saw in these products was really encouraging. The thought of being able to run Windows 8 from a thumb drive, or having Hyper-V native to the OS, told me that good things were coming, at least for developers.

I finally got my feet wet with Windows 8 with the developer preview just prior to the RTM. While the initial experience was a bit of a culture shock, I quickly grew to love it. The media still seem to hold little love for the "reimagined" platform, but I think that once people spend some time with it they will enjoy the experience, and what the FUD mongers say will fade into the background. With the launch of the OS we finally got a look at the Surface. I think this is a bold entry into the tablet market. While I wish it was a little more affordable, I am already starting to see them in the wild being used by non-techies.

I was waiting for Windows Phone 8 at least as much as Windows 8, probably more. The new hardware, better marketing and new OS features are, I think, finally going to push us to the point of having a real presence in the smartphone market. I am seeing a number of iPhone users picking up a Nokia Lumia 920 and getting rid of their brand new iPhone 5. The only real debacle I saw around the launch was when they held back the SDK from general developers.

Shortly after the launch events came Build 2012. I was extremely disappointed that I didn't make it to this year's Build. Even if they weren't handing out Surface and Lumia devices, I think the atmosphere and content were something that really needed to be experienced in person. Hopefully there will be a Build next year, and its schedule will be announced soon. As you would expect, Windows 8 and Windows Phone 8 development were the mainstay of the conference, but improvements in Azure also played a key role. This movement of services to the cloud will continue, and we need to understand where it best fits into the solutions we build.

Lower on the radar this year were Office 2013, SQL Server 2012, and Windows Server 2012. Their glory stolen by the consumer OS and hardware announcements, these new releases are no less important. Companies will see significant improvements in performance and capabilities if they upgrade. At TechEd they showed some of the new features of Windows Server 2012 around hardware integration and Hyper-V performance, which absolutely blew me away. It is our job to bring these important improvements to our companies' attention so that they can be leveraged.

Personally, the consulting business in 2012 was the busiest it has been in a long time. More companies were ready to attack new projects after several years of putting them on the back burner. I also worked to bring back momentum to the Chicago Information Technology Architects Group. Both the community and clients are excited about the new technologies that came out in 2012, and now it is time to deliver.

What does 2013 have in store? I don't see it being quite as exciting as 2012. Microsoft will be releasing the Surface Pro in January, and it seems that we will see more frequent OS updates for Windows. There are rumors that we may see a Surface phone in 2013. It has also been announced that there will finally be a rework of the Xbox next fall. The new year will also be a time for us in the development community to take advantage of these new tools and devices. After all, it is what we build on top of these platforms that will attract more consumers and corporations to using them.

Just as I am 99.999% sure that the world is not going to end this year, I am also sure that Microsoft will move on and that most of this negative backlash from the media is actually fear and jealousy. In the end I think we have a promising year ahead of us. del.icio.us Tags: Microsoft,Pundits,Mayans,Windows 8,Windows Phone 8,Surface

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >