Search Results

Search found 3376 results on 136 pages for 'arun david'.

  • IDC's Sally Hudson on Securing Mobile Access

    - by Naresh Persaud
    After the launch of Identity Management 11g R2, Oracle Magazine writer David Baum sat down with Sally Hudson, research director of security products at International Data Corporation (IDC) to get her perspective on securing mobile access.  Below is an excerpt from the interview. The complete article can be found here. "We’re seeing a much more diverse landscape of devices, computing habits, and access methods from outside of the corporate network. This trend necessitates a total security picture with different layers and end-point controls. It used to be just about keeping people out. Now, you have to let people in. Most organizations are looking toward multifaceted authentication—beyond the password—by using biometrics, soft tokens, and so forth to do this securely. Corporate IT strategies have evolved beyond just identity and access management to encompass a layered security approach that extends from the end point to the data center. It involves multiple technologies and touch points and coordination, with different layers of security from the internals of the database to the edge of the network." ( Sally Hudson, Oracle Mag Sept/Oct 2012) As the landscape changes you can find out how to adapt by reading Oracle's strategy paper on providing identity services at Internet scale. 

    Read the article

  • How Do Top Performing High Tech Companies Measure Online Marketing Success?

    - by Charles Knapp
    You might expect a focus on Net Promoter scores, open rates, and click metrics. The real answers from top performers may surprise you. I've been working for a few months with Aberdeen Group and colleagues from IBM and Oracle to survey high technology firms worldwide on best practices in marketing and channel sales effectiveness.  Now, we will share the results of our original customer research in a new white paper and webcast. Register today to learn how leading High Tech companies are increasing their Return on Marketing Investment (ROMI) and growing channel sales revenue. Discover how top performing high tech companies manage and use customer data, measure marketing spend effectiveness, and support internal and channel sales. Learn how best in class high tech companies use enterprise data throughout their customer lifecycle -- messaging to leads, selling to prospects, and serving customers. Our speakers will be: Peter Ostrow, Research Director - Sales Effectiveness, Aberdeen Group David Lasher, Global Business Services Partner, IBM Jonathan Oomrigar, Vice President, Global High Technology Business Unit, Oracle Reserve your place now! This global webinar is on Tuesday, November 15, 10-11 am PST / 1-2 pm EST / 6-7 GMT / 7-8 CET

    Read the article

  • Spring to Java EE, Part Three - new tech article on otn/java

    - by Janice J. Heiss
    In a new article up on otn/java, Java EE expert David Heffelfinger continues his series exploring the relative strengths and weaknesses of Java EE and Spring. Here, he demonstrates how easy it is to develop the data layer of an application using Java EE, JPA, and the NetBeans IDE instead of the Spring Framework.In the first two parts of the series, he generated a complete Java EE application by using JavaServer Faces (JSF) 2.0, Enterprise JavaBeans (EJB) 3.1, and Java Persistence API (JPA) 2.0 from Spring’s Pet Clinic MySQL schema, thus showing how easy it is to develop an application whose functionality equaled that of the Spring sample application.In his new article, Heffelfinger tweaks the application to make it more user friendly.From the article:“The generated application displays primary keys on some of the pages, and these keys are surrogate primary keys—meaning that they have no business value and are used strictly as a unique identifier—so there is no reason why they should be visible to the user. In addition, we will modify some of the generated labels to make them more user-friendly.”He concludes the article with a summary:“The Java EE version of the application is not a straight port of the Spring version. For example, the Java EE version enables us to create, update, and delete veterinarians as well as veterinary specialties, whereas the Spring version of the application enables us only to view veterinarians and specialties. Additionally, the Spring version has a single page for managing/viewing owners, pets, and visits, whereas the Java EE version of the application has separate pages for each of these entities.The other thing we should keep in mind is that we didn’t actually write a lot of the code and markup for the Java EE version of the application, because the bulk of it was generated by the NetBeans wizard.” Have a look at the complete article here.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2, 2012

    - by Bob Rhubart
    ADF Mobile - Login Functionality | Andrejus Baranovskis "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis. Big Data: Running out of Metric System | Andrew McAfee Do very large numbers make your brain hurt? Better stock up on aspirin. According to Andrew McAfee: "It seems safe to say that before the current decade is out we’ll need to convene a 20th conference to come up with some more prefixes for extraordinarily large quantities not to describe intergalactic distances or the amount of energy released by nuclear reactions, but to capture the amount of digital data in the world." Cloud computing will save us from the zombie apocalypse | Cloud Computing - InfoWorld "It's just a matter of time before we migrate our existing IT assets to public cloud systems," says InfoWorld cloud blogger David Linthicum. "Additionally, it's a short window until the dead rise from the grave and attempt to eat our brains." Is it Halloween or something? Thought for the Day "A computer lets you make more mistakes faster than any invention in human history—with the possible exceptions of hand guns and tequila." — Mitch Ratcliffe

    Read the article

  • Should I use the cool, new and awesome stuff [closed]

    - by Ieyasu Sawada
    I'm in the field of Web Development. I follow a lot of awesome people on Twitter (Paul Irish, Alex Sexton, David Walsh, Rebecca Murphey, etc.). By following these people I'm constantly updated on the new and cool stuff in web development (backbone.js, angular.js, require.js, ember.js, jasmine, etc.). Now I'm creating a web application for a client, and because of the different tools, libraries and plugins that I'm aware of, I don't even know anymore when I need to use them, whether I even need them, or how to implement them in a sane way where I won't have to repeat myself (DRY). What's your advice on what I really need to do in order to become better? Do I really need to use these cool new tools, or should I just stick with what I know for now and try to make my code better? Should I unfollow these people so as not to pollute my mind with stuff that I can't really use yet because I don't have the necessary experience? What sort of things should I really be focusing on, for someone like me who has only about 2 years of experience in the field of web development?

    Read the article

  • @CodeStock 2012 Review: Leon Gersing ( @Rubybuddha ) - "You"

    "YOU"Speaker: Leon GersingTwitter: @Rubybuddha Site: http://about.me/leongersing I honestly had no idea what I was getting in to when I sat down in to this session. I basically saw the picture of the speaker and knew that it would be a good session. I was completely wrong; it was the BEST SESSION of CodeStock 2012.  In fact it was so good, I texted another coworker attending the conference to get over and listen to Leon. Leon took on the concept of growth in the software development community. He specifically referred David Hansson in his ability to stick to his beliefs when the development community thought that he was crazy for creating Ruby on Rails. If you do not know this story Ruby on Rails is one of the fastest growing web languages today. In addition, he also touched on the flip side of this argument in that we must be open to others ideas and not discard them so quickly because we all come from differing perspectives and can add value to a project/team/community. This session left me with two very profound concepts/quotes: “In order to learn you must do it badly in front of a crowed and fail.” - @Rubybuddha I can look back on my career so far and say that he is correct; I think I have learned the most after failing, especially when I achieved this failure in front of other. “Experts must be able to fail.” - @Rubybuddha I think we can all learn from our own mistakes but we can also learn from others. When respected experts fail it is a great learning opportunity for the entire team as well as the person who failed. When expert admit mistakes and how they worked through them can be great learning tools for other developers so that they know how to avoid specific scenarios and if they do become stuck in the same issue they will know how to properly work their way out of them.

    Read the article

  • Light mask map and camera for static lights in XNA Platformer

    - by JiminyCricket
    Using the example for some basic light maps found here : http://blog.josack.com/2011/07/xna-2d-dynamic-lighting.html, I've managed to create a lightmap texture using individual lightmaps and display it over a 2D tiled world as in the Platformer example. I'm using the very basic 2D camera example as found here : http://www.david-amador.com/2009/10/xna-camera-2d-with-zoom-and-rotation/, and the problem is that the lightmap texture scrolls with the player sprite. This looks pretty good and would be excellent for lighting the player sprite as it moves. But, I also want to be able to place static lights (or some initial position for the lights) that do not move with the player or camera. When I turn off the camera or give it a static position, it works as a series of static lights so I believe it's probably caused by the camera transformation matrix following the player around. I'm using RenderTarget2Ds, one for the main game screen after all the backgrounds and tiles are rendered, and one for the "lightmap" which consists of a black background and a bunch of lighting textures which are merged with it using additive blending. For now, I'm doing all of this in PlatformerGame.cs where the camera transformation and position is set and the level.Draw() call is made. I can't figure out how to separate the drawing of the lightmap and the camera following the player. I was thinking it would be better to render the shadows and lighting directly in the drawing of the level itself, but I'm not sure how to do that either because this technique requires RenderTarget2Ds and calling SpriteBatch.Begin()/End().
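
    A rough sketch of the three-pass setup described above is shown below; it is illustrative only, and names such as sceneTarget, lightTarget, lightTexture, staticLights and camera.GetTransformation() are placeholders rather than anything from the original code. The idea it illustrates is that the lights are drawn into the light map using the same world/camera transform as the level, so they stay anchored to the world, while the final composite is done in screen space.

    ```csharp
    // Sketch only: assumes this runs inside the game's Draw method and that the
    // render targets and light textures have already been created.
    Matrix view = camera.GetTransformation();

    // 1) Draw the level and sprites into the scene target, using the camera transform.
    GraphicsDevice.SetRenderTarget(sceneTarget);
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                      null, null, null, null, view);
    level.Draw(gameTime, spriteBatch);
    spriteBatch.End();

    // 2) Build the light map with additive blending. Using the same camera
    //    transform here means each light is positioned in world coordinates,
    //    so static lights stay put while the camera scrolls past them.
    GraphicsDevice.SetRenderTarget(lightTarget);
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive,
                      null, null, null, null, view);
    foreach (Vector2 lightWorldPosition in staticLights)
        spriteBatch.Draw(lightTexture, lightWorldPosition, Color.White);
    spriteBatch.End();

    // 3) Composite in screen space: draw the scene, then multiply the light map over it.
    //    (In real code, create this BlendState once rather than every frame.)
    var multiply = new BlendState
    {
        ColorSourceBlend = Blend.DestinationColor,
        ColorDestinationBlend = Blend.Zero
    };
    GraphicsDevice.SetRenderTarget(null);
    spriteBatch.Begin();
    spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
    spriteBatch.End();
    spriteBatch.Begin(SpriteSortMode.Deferred, multiply);
    spriteBatch.Draw(lightTarget, Vector2.Zero, Color.White);
    spriteBatch.End();
    ```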

    Read the article

  • Using position function for accessing particular node when using While Activity in SOA 11.1.1.5

    - by AJ
    Hi, if you are using a While activity in SOA Suite 11.1.1.5 and, within the loop, you need to access a repeating node of an XML payload, you can use the XPath expression below to address the node. Here is the XML that I am using for this example (only the element values survived publishing; a reconstructed sketch follows below): <?xml version='1.0' encoding='UTF-8'?> David DemoJob 1 2012-04-15 40000 0 10 Steve TestJob 1 2012-04-15 40000 0 10 Here you can see that the Emp node repeats, i.e. the EmpCollection node contains multiple employees. Now, inside the loop, one of the Assign activities needs to access a particular node: the first node on the first iteration, the second node on the second, and so on. You need to make use of the position() function, like this: bpws:getVariableData('Receive1_Read_InputVariable','body','/ns4:EmpCollection/ns4:Emp[position()=$loopCounter]/ns4:job') Please note: loopCounter is a variable of type xsd:int that we created and initialized to 1 before the loop. The loop runs as many times as there are Emp nodes present at runtime; for that, the While activity can use the XPath expression ora:countNodes('Receive1_Read_InputVariable','body','/ns4:EmpCollection/ns4:Emp')=bpws:getVariableData('loopCounter') Do let me know in case of any issues or concerns. Cheers AJ
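
    A possible shape for that payload is sketched below. Only EmpCollection, Emp, and job are named in the original post; the other element names (ename, mgr, hiredate, sal, comm, deptno) and the namespace are assumptions added purely for illustration.

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Reconstructed sketch; element names other than EmpCollection/Emp/job are guesses. -->
    <EmpCollection xmlns="http://example.com/emp">
      <Emp>
        <ename>David</ename>
        <job>DemoJob</job>
        <mgr>1</mgr>
        <hiredate>2012-04-15</hiredate>
        <sal>40000</sal>
        <comm>0</comm>
        <deptno>10</deptno>
      </Emp>
      <Emp>
        <ename>Steve</ename>
        <job>TestJob</job>
        <mgr>1</mgr>
        <hiredate>2012-04-15</hiredate>
        <sal>40000</sal>
        <comm>0</comm>
        <deptno>10</deptno>
      </Emp>
    </EmpCollection>
    ```

    With a payload of this shape, the position()-based expression above returns the job value of whichever Emp element the loop is currently processing.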

    Read the article

  • Today's Links (6/24/2011)

    - by Bob Rhubart
    Fusion Applications - How we look at the near future | Domien Bolmers Bolmers recaps a Logica pow-wow around Fusion Applications. Who invented e-mail? | Nicholas Carr IT apparently does matter to Nicholas Carr as he shares links to Errol Morris's 5-part NYT series about the origins of email. David Sprott's Blog: Service Oriented Cloud (SOC) "Whilst all the really good Cloud environments are Service Oriented," says Sprott, "it’s very much the minority of consumer SaaS that is today." Fast, Faster, JRockit | René van Wijk Oracle ACE René van Wijk tells you "everything you ever wanted to know about the JRockit JVM, well quite a lot anyway." Creating an XML document based on my POJO domain model – how will JAXB help me? | Lucas Jellema "I thought that adding a few JAXB annotations to my existing POJO model would do the trick," says Jellema, "but no such luck." Announcing Oracle Environmental Accounting and Reporting | Theresa Hickman Oracle Environmental Accounting and Reporting is designed to help companies track and report greenhouse emissions. Yoga framework for REST-like partial resource access | William Vambenepe Vambenepe says: "A tweet by Stefan Tilkov brought Yoga to my attention, 'a framework for supporting REST-like URI requests with field selectors.'" InfoQ: Pragmatic Software Architecture and the Role of the Architect "Joe Wirtley introduces software architecture and the role of the architect in software development along with techniques, tips and resources to help one get started thinking as an architect."

    Read the article

  • MySQL Makes The Cover of Oracle Magazine!

    - by bertrand.matthelie(at)oracle.com
    In case you haven't seen it yet, MySQL made the cover of the January/February edition of Oracle Magazine! Published 6 times per year and distributed to over half a million IT managers, DBAs and developers, Oracle Magazine contains technology-strategy articles, sample code, tips, Oracle & partner news, and more. If you're thinking that this is yet another indicator that Oracle is very serious about MySQL, you're absolutely right! I encourage you to read David Kelly's "Open For Business" article posted online here. "First released in 1995 and purchased by Sun in 2008, MySQL has quickly graduated from the realm of hobbyists to the world of business, becoming the leading open source database for many Web applications and an integral part of the LAMP (Linux, Apache, MySQL, PHP) Web application stack. Almost a year after Oracle's acquisition of Sun, MySQL plays an even bigger role in enterprises of all sizes worldwide." ... Enjoy the article!

    Read the article

  • Arrow ECS Oracle OpenWorld update

    - by mseika
    A date to mark in your diaries now! On Tuesday 23rd October we will be holding our post-Oracle OpenWorld update. Covering all the important announcements from Oracle OpenWorld, this must-attend event will be held in the Royal Exchange, London. The full agenda and speaker line-up is being finalised, but we will cover all the major strategy and product announcements from Oracle OpenWorld, FY13 Channel Strategy and Partnering with Arrow ECS. Oracle OpenWorld a Channel Perspective, David Tweddle - Head of UK Alliance and Channel Hardware Announcements, Christopher Lindsay - Oracle Hardware EMEA Software Announcements, speaker to be confirmed Arrow ECS Focus and Strategy, Simon Rushbrook - Business Development Manager, Arrow ECS Summary and close, Nick Tinsley - Sales Director Who should attend? The event is subject to Oracle NDA and is aimed at sales and pre-sales personnel within your organisation. The event will commence at 12.30 with lunch; presentations will start at 1.30 and will close at 4.30, followed by drinks and networking. Full details to follow shortly, but save the date and register now. Date & Time Tuesday 23rd October 2012, 12.30pm - 4.30pm Post-event drinks will be served from 4.30pm Location London Office, Entrance 2, Fourth Floor, The Royal Exchange, London, EC3V 3LN Click here for directions >

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-12

    - by Bob Rhubart
    15 Lessons from 15 Years as a Software Architect | Ingo Rammer In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years. Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things." Oracle Identity Manager 11g R2 Catalog | Daniel Gralewski Oracle Fusion Middleware A-Team blogger Daniel Gralewski shares a detailed overview of the new Catalog feature, one of the most talked about features in the latest release of Oracle Identity Manager 11g. Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions -- inside and outside the enterprise." Oracle Solaris 8 P2V with Oracle database 10.2 and ASM | Orgad Kimchi Orgad Kimchi's technical post illustrates the migration of "a Solaris 8 physical system, with Oracle database version 10.2.0.5 with ASM file-system located on a SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain." Thought for the Day "The hardest single part of building a software system is deciding precisely what to build. " — Fred Brooks Source: SoftwareQuotes.com

    Read the article

  • Twitter Tuesday - Top 10 @ArchBeat Tweets - August 12-18, 2014

    - by Bob Rhubart-Oracle
    Man in gray hat: "You know, more than three thousand people follow @OTNArchBeat on Twitter. I wonder which tweets were the most popular over the last seven days." Man in brown hat: "Shut up! I think I see a UFO!" Man in gray hat: "That's OK. I'll just read this blog post." RT @java: "Programmers are creative people and typically delight in contriving clever ways to solve problems." -Casimir Saternos in @OracleJavaMag Aug 18, 2014 at 12:54 PM The Offer Still Stands: Produce your own episode of the OTN ArchBeat Podcast. Click for details. Aug 13, 2014 at 02:03 PM Binge-Ready! Watch the Top 10 OTN ArchBeat Videos featuring @stewartbryson @stenvesterli @gurcanorhan Aug 13, 2014 at 11:49 AM Oracle Announces First Java 9 Features | InfoQ Aug 18, 2014 at 12:20 PM Getting Started with the #Coherence Memcached Adaptor | David Felcey Aug 18, 2014 at 10:19 AM #WebLogic Data Source Connection Labeling | Steve Felts Aug 14, 2014 at 10:03 AM How to introduce #DevOps into a moribund corporate culture | ZDNet Aug 15, 2014 at 11:23 AM Sample Chapter: Installing Oracle #WebLogic Server 12c and Using the Management Tools | Sam Alapati Aug 14, 2014 at 11:09 AM Building a Responsive #WebCenter Portal Application | @JayJayZheng Aug 12, 2014 at 11:04 AM #OEM12c Cloud Control authorization with Active Directory | Jeroen Gouma Aug 14, 2014 at 10:16 AM

    Read the article

  • Should I use a config file or database for storing business rules?

    - by foiseworth
    I have recently been reading The Pragmatic Programmer, which states that: Details mess up our pristine code—especially if they change frequently. Every time we have to go in and change the code to accommodate some change in business logic, or in the law, or in management's personal tastes of the day, we run the risk of breaking the system—of introducing a new bug. Hunt, Andrew; Thomas, David (1999-10-20). The Pragmatic Programmer: From Journeyman to Master (Kindle Locations 2651-2653). Pearson Education (USA). Kindle Edition. I am currently programming a web app that has some models with properties that can only take one of a set of values, e.g. (not the actual example, as the web app's data is confidential): light-type = sphere / cube / cylinder The light type can only be one of the above three values, but according to TPP I should always code as if they could change and place their values in a config file. As there are several instances of this throughout the app, my question is: should I store possible values like these in: a config file: 'light-types' = array(sphere, cube, cylinder), 'other-type' = value, 'etc' = etc-value; a single table in a database with one line for each config item; a database with a table for each config item (e.g. table: light_types; columns: id, name); or some other way? Many thanks for any assistance / expertise offered.

    Read the article

  • Oracle Buys BigMachines - Adds Leading Configure, Price and Quote (CPQ) Cloud to the Oracle Cloud to Enable Smarter Selling

    - by Richard Lefebvre
    News Facts Oracle today announced that it has entered into an agreement to acquire BigMachines, a leading cloud-based Configure, Price and Quote (CPQ) solution provider. BigMachines’ CPQ Cloud accelerates the conversion of sales opportunities into revenue by automating the sales order process with guided selling, dynamic pricing, and an easy-to-use workflow approval process, accessible anywhere, on any device. Companies that use sales automation technology often rely on manual, cumbersome and disconnected processes to convert opportunities into orders. This creates errors, adds costs, delays revenue, and degrades the customer experience. BigMachines’ CPQ cloud extends sales automation to include the creation of an optimal quote, which enables sales personnel to easily configure and price complex products, select the best options, promotions and deal terms, and include up sell and renewals, all using automated workflows. In combination with Oracle’s enterprise-grade cloud solutions, including Marketing, Sales, Social, Commerce and Service Clouds, Oracle and BigMachines will create an end-to-end smarter selling cloud solution so sales personnel are more productive, customers are more satisfied, and companies grow revenue faster. More information on this announcement can be found at http://www.oracle.com/bigmachines Supporting Quotes “The fundamental goals of smarter selling are to provide sales teams with the information, access, and insights they need to maximize revenue opportunities and execute on all phases of the sales cycle,” said Thomas Kurian, Executive Vice President, Oracle Development. “By adding BigMachines’ CPQ Cloud to the Oracle Cloud, companies will be able to drive more revenue and increase customer satisfaction with a seamlessly integrated process across marketing and sales, pricing and quoting, and fulfillment and service.” “BigMachines has developed leading CPQ solutions that serve companies of all sizes across multiple industries,” said David Bonnette, BigMachines’ CEO. “Together with Oracle, we expect to provide a complete cloud solution to manage sales processes and deliver exceptional customer experiences.” Supporting Resources About Oracle and BigMachines General Presentation Customer and Partner Letter FAQ

    Read the article

  • Why is the camera not following the player? [on hold]

    - by Homer_Simpson
    I use the following code to create Parallax Scrolling: http://www.david-gouveia.com/portfolio/2d-camera-with-parallax-scrolling-in-xna/ Parallax Scrolling is working but I don't know how to focus the camera on the player. If the player moves, then the camera doesn't follow the player. The player leaves the screen when I'm moving it. I use the following code(as described in the tutorial), but it's not working: // Updates my camera to lock on the character _camera.LookAt(player.Playerposition); What can I do so that the player is always in the center of the screen/camera? My player class: public class Player { Texture2D Playertex; public Vector2 Playerposition = new Vector2(400, 240); private Game1 game1; public Player(Game1 game) { game1 = game; } public void Load(ContentManager content) { Playertex = content.Load<Texture2D>("8bitmario"); TouchPanel.EnabledGestures = GestureType.HorizontalDrag; } public void Update(GameTime gameTime) { while (TouchPanel.IsGestureAvailable) { GestureSample gs = TouchPanel.ReadGesture(); switch (gs.GestureType) { case GestureType.HorizontalDrag: Playerposition.X += 3f; break; } } } public void Render(SpriteBatch batch) { batch.Draw(Playertex, new Vector2(Playerposition.X - Playertex.Width / 2, Playerposition.Y - Playertex.Height / 2), Color.White); } } In Game1, I update the player and camera class: protected override void Update(GameTime gameTime) { // Updates my character's position player.Update(gameTime); // Updates my camera to lock on the character _camera.LookAt(player.Playerposition); base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); foreach (Layer layer in _layers) layer.Draw(spriteBatch); spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, _camera.GetViewMatrix(new Vector2(0.0f, 0.0f))); player.Render(spriteBatch); spriteBatch.End(); base.Draw(gameTime); }

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 28 - November 3, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of Oct 28 - Nov 3, 2012. Eventually, 90% of tech budgets will be outside IT departments | ZDNet Another interesting post from ZDNet blogger Joe McKendrick about changing roles in IT. ADF Mobile - Login Functionality | Andrejus Baranovskis "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis. Mobile Development Platform Strategy Chart: ADF Mobile, WebCenter Sites, Portal, Content and Social "Unlike desktop web focused efforts, the world of mobile has undergone change at a feverish pace," says social enterprise expert John Brunswick. His extensive post charts various resources that will help you keep up. ADF Essentials - The Bare Necessities | Floyd Teter The experiment is over… And now Oracle ACE Director Floyd Teter shares his impressions after spending some time with Oracle ADF Essentials, the free version of Oracle ADF. A review of Oracle SOA Suite 11g Administrator’s Handbook | RedStack "More so than any other single piece of content that I have seen on the topic, it provides the information that a SOA administrator needs to know in order to successfully configure, manage, monitor, troubleshoot and backup an Oracle SOA environment." So says Oracle Fusion Middleware A-Team solution architect Mark Nelson of Oracle SOA Suite 11g Administrator’s Handbook, by Ahmed Aboulnaga and Arun Pareek. Expanding the Oracle Enterprise Repository with functional documentation Capgemini middleware specialist Marc Kuijpers shares information on how Oracle Enterprise Repository can be configured "to contain functional assets, i.e. functional designs, use cases and a logical data model" to aid in SOA governance efforts. Podcast: Are You Future Proof? - Part 2 In Part 2, practicing architects and Oracle ACE Directors Ron Batra (AT&T), Basheer Khan (Innowave Technology), and Ronald van Luttikhuizen discuss re-tooling one’s skill set to reflect changes in enterprise IT, including the knowledge to steer stakeholders around the hype to what’s truly valuable. Easy way to access JPA with REST (JSON / XML) | Edwin Biemond Oracle ACE Edwin Biemond shows you "what is possible with JPA-RS, how easy it is and howto setup your own EclipseLink REST service." Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Meade's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic: when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post. 2012 IOUG Virtualization SIG – Online Symposium on Nov 7 and Nov 8 | Kai Yu Oracle ACE Director Kai Yu shares information on this week's IOUG Virtualization SIG online event. Does that make it a virtual virtualization event? Thought for the Day "If McDonalds were run like a software company, one out of every hundred Big Macs would give you food poisoning — and the response would be, 'We’re sorry, here’s a coupon for two more.'" — Mark Minasi Source: SoftwareQuotes.com

    Read the article

  • 2014 Conferences - JFokus, JavaLand & GeeCon!

    - by Heather VanCura
    There has been a delay in publishing these past event summaries from early 2014--JFokus in February, JavaLand in March, and GeeCon in May. As we plan for Devoxx UK next week, I found these summaries that did not make it past 'draft' stage.  We had some great successes with the first three events of 2014, a Java developer conference trifecta! Participation topics included Java, the JCP program overall and the Adopt-a-JSR programs.   First up in February was JFokus in Stockholm. The energy and talent in Stockholm is amazing and the conference organizers do a stellar job running it and welcoming the speakers of this event.  I enjoyed the city walk and speaker dinner, as well as many opportunities to interact with conference speakers and attendees, both during and after the conference hours. Reza Rehman invited me to speak during his Java EE 7 lab session about the Adopt-a-JSR program, and I gave a quickie session on the JCP and Adopt-a-JSR.  There was also a late night Birds of a Feather (BoF) session held jointly with Cecelia Borg, Martijn Verburg and Reza Rehman.  This was an interactive conversation with a focus on the Java EE community survey results and encouraging more community participation and collaboration in Java development.  The Java 8 keynote by Georges Saab and Mark Reinhold was also very entertaining. I was sorry to miss FOSDEM happening the previous weekend this year in Brussels, but I hope to attend in 2015.  Favorite take home gift -- Lambdas cap! In March, the inaugural version of the JavaLand conference happened inside Phantasialand, an amusement park in Germany. Markus Eisele suggested having an Early Adopters area at the conference, which I was keen to implement. In 2013 at Devoxx Belgium we held some activities in the Hackergarten area around Lambdas and Java EE 7, so this was a great opportunity to expand on a more interactive conference format, and Andreas Badelt from the program committee helped in the planning for this area.  Daniel Bryant and Mani Sarkar from the London Java Community led some general Adopt-a-JSR discussions and Adopt OpenJDK activities.  JCP Spec Leads, Anatole Tresch from Credit Suisse, leading JSR 354, Money & Currency API, and Ed Burns from Oracle, leading JSR 344, JavaServer Faces 2.2, attended to engage with conference attendees on their JSRs.  Favorite - Stephen Chin's roller coaster video. In May, GeeCon in Krakow was another awesome conference!  The conference organizers were warm and welcoming and I enjoyed time getting to know the other speakers at the event. There was a JCP and Adopt-a-JSR participation session as well as a moderated panel session on Early Adopters.  We had an amazing panel -- Daniel Bryant, Arun Gupta, Tomasz Borek, and Peter Lawrey. The panel discussed the Adopt-a-JSR and Adopt OpenJDK programs, and how the participants work together to get involved and contribute to both the Java SE and Java EE platforms.  It was an interesting discussion and sparked some new ideas on how Java User Groups in Poland and around the world can contribute in a significant and meaningful way to create better and more practical Java standards today and in the future.  Favorite take home gift - GeeCon mug!   These were some of the highlights of the events--looking forward to Devoxx UK next week.  I will publish these details tomorrow!

    Read the article

  • named type not used for constructor injection

    - by nmarun
    Hi, I have a simple console application with the following setup: public interface ILogger { void Log(string message); } class NullLogger : ILogger { private readonly string version; public NullLogger() { version = "1.0"; } public NullLogger(string v) { version = v; } public void Log(string message) { Console.WriteLine("NULL " + version + " : " + message); } } The configuration details are below: <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole" /> My calling code looks as below: IUnityContainer container = new UnityContainer(); UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity"); section.Containers.Default.Configure(container); ILogger nullLogger = container.Resolve<ILogger>(); nullLogger.Log("hello"); This works fine, but once I give a name to this type, something like: <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole" name="NullLogger" /> the above calling code does not work, even if I explicitly register the type using container.RegisterType<ILogger, NullLogger>(); I get the error: {"Resolution of the dependency failed, type = \"UnityConsole.ILogger\", name = \"\". Exception message is: The current build operation (build key Build Key[UnityConsole.NullLogger, null]) failed: The parameter v could not be resolved when attempting to call constructor UnityConsole.NullLogger(System.String v). (Strategy type BuildPlanStrategy, index 3)"} Why doesn't Unity look into named instances? To get it to work, I'll have to do: ILogger nullLogger = container.Resolve<ILogger>("NullLogger"); Where is this behavior documented? Arun
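
    For what it's worth, a Unity container can hold a default (unnamed) registration and a named one for the same interface side by side. The snippet below is a hypothetical sketch, not taken from the question; it also pins the constructor with InjectionConstructor, since by default Unity picks the constructor with the most parameters, which is what produces the "parameter v could not be resolved" error above.

    ```csharp
    // Hypothetical sketch; assumes the ILogger/NullLogger types from the question
    // and the Microsoft.Practices.Unity namespace.
    IUnityContainer container = new UnityContainer();

    // Default (unnamed) mapping; InjectionConstructor() forces the parameterless
    // constructor so Unity does not try to satisfy NullLogger(string v).
    container.RegisterType<ILogger, NullLogger>(new InjectionConstructor());

    // Named mapping for the same interface, passing a version string explicitly.
    container.RegisterType<ILogger, NullLogger>("NullLogger", new InjectionConstructor("2.0"));

    ILogger byDefault = container.Resolve<ILogger>();              // unnamed registration
    ILogger byName = container.Resolve<ILogger>("NullLogger");     // named registration
    ```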

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules where the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess)  are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup in the form of JSON or XML typically. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up from low level APIs/custom engines to high level HTML generation engine. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs. 
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own, but it's not built in). Prior to Web API, Microsoft's main push for HTTP services was WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF-agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third-party tools that provided much better integration and ease of use. With the release of Web API, for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit, as Web API addresses just about all the features I implemented on my own and much more.
ASP.NET Web API provides a better HTTP Experience
ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than a bolted-on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like:
- Strong support for URL routing to produce clean URLs using familiar MVC-style routing semantics
- Content negotiation based on Accept headers for request and response serialization
- Support for a host of output formats including JSON, XML, ATOM
- Strong default support for REST semantics, but they are optional
- Easily extensible Formatter support to add new input/output types
- Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations
- Convention-based design that drives you into doing the right thing for HTTP services
- Very extensible, based on an MVC-like extensibility model of Formatters and Filters
- Self-hostable in non-Web applications
- Testable using testing concepts similar to MVC
Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available, in a straightforward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API.
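
To make those conventions concrete, here is a minimal, hypothetical controller (the Product type, route and data are invented for illustration and are not from the article). With the default api/{controller}/{id} route, GET requests map onto the Get* methods by convention, and the response is serialized as JSON or XML according to the client's Accept header:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Web.Http;

// Hypothetical example only.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductsController : ApiController
{
    private static readonly List<Product> Products = new List<Product>
    {
        new Product { Id = 1, Name = "Widget" },
        new Product { Id = 2, Name = "Gadget" }
    };

    // GET api/products (content negotiation picks JSON, XML, etc. from the Accept header)
    public IEnumerable<Product> GetAllProducts()
    {
        return Products;
    }

    // GET api/products/1 (returns 404 if the id is unknown)
    public Product GetProduct(int id)
    {
        var product = Products.FirstOrDefault(p => p.Id == id);
        if (product == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return product;
    }
}
```
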
The Routing and core infrastructure of Web API are very similar to how MVC works providing many of the benefits of MVC, but with focus on HTTP access and manipulation in Controller methods rather than HTML generation in MVC. There’s much improved support for content negotiation based on HTTP Accept headers with the framework capable of detecting automatically what content the client is sending and requesting and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications and being able to choose the right data format for what fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort. No Brainers for Web API There are a few scenarios that are a slam dunk for Web API. If your primary focus of an application or even a part of an application is some sort of API then Web API makes great sense. HTTP ServicesIf you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service breaking out the logic into controllers as needed. Because the primary interface is the service there's no confusion of what should go where (MVC or API). Perfect fit. Primary AJAX BackendsIf you're building rich client Web applications that are relying heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling the data from the server with AJAX which means the business logic required for data retrieval and data acceptance and validation too lives in the Web API. Perfect fit. Generic HTTP EndpointsAnother good fit are generic HTTP endpoints that to serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server, or an upload handler in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via Controller methods) or separated out as more full featured APIs. Granted this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries and Web API addresses this nicely. Great fit. Mixed HTML and AJAX Applications: Not a clear Choice  For all the commonality that Web API and MVC share they are fundamentally different platforms that are independent of each other. A lot of people have asked when does it make sense to use MVC vs. 
Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich client functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also MVC supports JSON and XML result data fairly easily as well, so there's some confusion about where that 'trigger point' is of when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data, Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box it'd be overkill to separate out that logic into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET) so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily and that functionality is not going away, so just because Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC obviously either, since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API or application in general, MVC is still a better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single Controllers so that you don't have to make any trade-offs, but at the moment that's not the case. Some Issues To think about Web API is similar to MVC but not the Same Although Web API looks a lot like MVC it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than in MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data. Code Duplication I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API, which might need something different altogether. More often than not though the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API you'll end up creating the filter twice. AJAX Data or AJAX HTML On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and where each fits. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that.
On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this, the overhead ended up being actually fairly small if you keep the 'data' requests small and atomic. Performance was often made up by the lack of client side rendering of HTML. Server rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about especially for AJAX or SPA style applications. Summary Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available it doesn't mean that other tools or tech that came before it should be discarded or even upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed! Resources ASP.NET Web API AspConf Ask the Experts Session (first 5 minutes) © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api.

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in an heterogeneous environment, using software from different vendors to suit their purposes and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this, post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivery updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:

-- Basic email check test
ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
AS
BEGIN
    SET NOCOUNT ON

    Declare @Output VarChar(max)
    Set @Output = ''

    SELECT @Output = @Output + Email + Char(13) + Char(10)
    FROM dbo.Email
    WHERE Email NOT LIKE '%_@__%.__%'

    If @Output > ''
    Begin
        Set @Output = Char(13) + Char(10) + @Output
        EXEC tSQLt.Fail @Output
    End

END;

Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you'll see the following: Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):

Checking that the Tests are Running as Integration Tests

After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, we can see that the SPROC for the new test has been moved over. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

<!-- tSQLt tests -->
<!-- Optional -->
<!-- To run tSQLt tests in source control for the database, enter true. -->
<enableTsqlt>true</enableTsqlt>

Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  
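One small aside that isn't covered in the walkthrough above: the same tSQLt tests that Bamboo runs can also be executed by hand from an SSMS query window, which is handy when you want to reproduce a broken build locally before committing a fix. A minimal sketch, assuming the RedGateApp database and the MyChecks test class created earlier (check the tSQLt documentation if your version differs):

-- Run the tests by hand against the development copy of the database.
USE RedGateApp;
GO

-- Run just the tests in the MyChecks test class created above...
EXEC tSQLt.Run 'MyChecks';

-- ...or run every tSQLt test installed in this database.
EXEC tSQLt.RunAll;

If either call reports a failure, you see the same failure message that turned up in the Bamboo build log, without waiting for a full CI round trip.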

    Read the article

  • Dynamic XAP loading in Task-It - Part 1

Download Source Code

NOTE 1: The source code provided is running against the RC versions of Silverlight 4 and Visual Studio 2010, so you will need to update to those bits to run it. NOTE 2: After downloading the source, be sure to set the .Web project as the StartUp Project, and Default.aspx as the Start Page.

In my MEF intro post, MEF to the rescue in Task-It, I outlined a couple of issues I was facing and explained why I chose MEF (the Managed Extensibility Framework) to solve these issues.

Other posts to check out

There are a few other resources out there around dynamic XAP loading that you may want to review (by the way, Glenn Block is the main dude when it comes to MEF): Glenn Block's 3-part series on a dynamically loaded dashboard, and Glenn and John Papa's Silverlight TV video on dynamic XAP loading. These provide some great info, but didn't exactly cover the scenario I wanted to achieve in Task-It – and that is dynamically loading each of the app's pages the first time the user enters a page.

The code

In the code I provided for download above, I created a simple solution that shows the technique I used for dynamic XAP loading in Task-It, but without all of the other code that surrounds it. Taking all that other stuff away should make it easier to grasp. Having said that, there is still a fair amount of code involved. I am always looking for ways to make things simpler, and to achieve the desired result with as little code as possible, so if I find a better/simpler way I will blog about it, but for now this technique works for me.

When I created this solution I started by creating a new Silverlight Navigation Application called DynamicXAPLoading. I then added the following line to my UriMappings in MainPage.xaml:

<uriMapper:UriMapping Uri="/{assemblyName};component/{path}" MappedUri="/{assemblyName};component/{path}"/>

In the section of MainPage.xaml that produces the page links in the upper right, I kept the Home link, but added a couple of new ones (page 1 and page 2). These are the pages that will be dynamically (lazy) loaded:

<StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}">
    <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/>
    <Rectangle Style="{StaticResource DividerStyle}"/>
    <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/>
    <Rectangle Style="{StaticResource DividerStyle}"/>
    <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/>
</StackPanel>

In App.xaml.cs I added a bit of MEF code. In Application_Startup I call a method called InitializeContainer, which creates a PackageCatalog (a MEF thing), then I create a CompositionContainer and pass it to the CompositionHost.Initialize method. This is boiler-plate MEF stuff that allows you to do 'composition' and import 'packages'. You're welcome to do a bit more MEF research on what is happening here if you'd like, but for the purpose of this example you can just trust that it works. :-)
private void Application_Startup(object sender, StartupEventArgs e)
{
    InitializeContainer();
    this.RootVisual = new MainPage();
}

private static void InitializeContainer()
{
    var catalog = new PackageCatalog();
    catalog.AddPackage(Package.Current);
    var container = new CompositionContainer(catalog);
    container.ComposeExportedValue(catalog);
    CompositionHost.Initialize(container);
}

Infrastructure

In the sample code you'll notice that there is a project in the solution called DynamicXAPLoading.Infrastructure. This is simply a Silverlight Class Library project that I created just to move stuff I considered application 'infrastructure' code into a separate place, rather than cluttering the main Silverlight project (DynamicXapLoading). I did this same thing in Task-It, as the amount of this type of code was starting to clutter up the Silverlight project, and it just seemed to make sense to move things like Enums, Constants and the like off to a separate place. In the DynamicXapLoading.Infrastructure project you'll see 3 classes:

- Enums - There is only one enum in here called ModuleEnum. We'll use these later.
- PageMetadata - We will use this class later to add metadata to a new dynamically loaded project.
- ViewModelBase - This is simply a base class for view models that we will use in this, as well as future samples. As mentioned in my MVVM post, I will be using the MVVM pattern throughout my code for reasons detailed in the post.

By the way, the ViewModelExtension class in there allows me to do strongly-typed property changed notification, so rather than OnPropertyChanged("MyProperty"), I can do this.OnPropertyChanged(p => p.MyProperty). It's just a less error-prone approach, because if you don't spell "MyProperty" correctly using the first method, nothing will break, it just won't work.

Adding a new page

We currently have a couple of pages that are being dynamically (lazy) loaded, but now let's add a third page.

1. First, create a new Silverlight Application project. In this example I call it Page3. In the future you may prefer to use a different name, like DynamicXAPLoading.Page3, or even DynamicXAPLoading.Modules.Page3. It can be whatever you want. In my Task-It application I used the latter approach (with 'Modules' in the name). I do think of these applications as 'modules', but Prism uses the same term, so some folks may not like that. Use whichever naming convention you feel is appropriate, but for now Page3 will do. When you change the name to Page3 and click OK, you will be presented with the Add New Project dialog. It is important that you leave the 'Host the Silverlight application in a new or existing Web site in the solution' checked, and the .Web project will be selected in the dropdown below. This will create the .xap file for this project under ClientBin in the .Web project, which is where we want it.

2. Uncheck the 'Add a test page that references the application' checkbox, and leave everything else as is.

3. Once the project is created, you can delete App.xaml and MainPage.xaml.

4. You will need to add references in your new project to the following:
- DynamicXAPLoading.Infrastructure.dll (this is a Project reference)
- DynamicNavigation.dll (this is in the Libs directory under the DynamicXAPLoading project)
- System.ComponentModel.Composition.dll
- System.ComponentModel.Composition.Initialization.dll
- System.Windows.Controls.Navigation.dll

If you have installed the latest RC bits you will find the last 3 DLLs under the .NET tab in the Add Reference dialog.
They live in the following location, or if you are on a 64-bit machine like me, it will be Program Files (x86).       C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Libraries\Client Now let's create some UI for our new project. 5. First, create a new Silverlight User Control called Page3.dyn.xaml 6. Paste the following code into the xaml: <dyn:DynamicPageShim xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     xmlns:my="clr-namespace:Page3;assembly=Page3">     <my:Page3Host /> </dyn:DynamicPageShim> This is just a 'shim', part of David Poll's technique for dynamic loading. 7. Expand the icon next to Page3.dyn.xaml and delete the code-behind file (Page3.dyn.xaml.cs). 8. Next we will create a control that will 'host' our page. Create another Silverlight User Control called Page3Host.xaml and paste in the following XAML: <dyn:DynamicPage x:Class="Page3.Page3Host"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     xmlns:Views="clr-namespace:Page3.Views"      mc:Ignorable="d"     d:DesignHeight="300" d:DesignWidth="400"     Title="Page 3">       <Views:Page3/>   </dyn:DynamicPage> 9. Now paste the following code into the code-behind for this control: using DynamicXAPLoading.Infrastructure;   namespace Page3 {     [PageMetadata(NavigateUri = "/Page3;component/Page3.dyn.xaml", Module = Enums.Page3)]     public partial class Page3Host     {         public Page3Host()         {             InitializeComponent();         }     } } Notice that we are now using that PageMetadata custom attribute class that we created in the Infrastructure project, and setting its two properties. NavigateUri - This tells it that the assembly is called Page3 (with a slash beforehand), and the page we want to load is Page3.dyn.xaml...our 'shim'. That line we added to the UriMapper in MainPage.xaml will use this information to load the page. Module - This goes back to that ModuleEnum class in our Infrastructure project. However, setting the Module to ModuleEnum.Page3 will cause a compilation error, so... 10. Go back to that Enums.cs under the Infrastructure project and add a 3rd entry for Page3: public enum ModuleEnum {     Page1,     Page2,     Page3 } 11. Now right-click on the Page3 project and add a folder called Views. 12. Right-click on the Views folder and create a new Silverlight User Control called Page3.xaml. We won't bother creating a view model for this User Control as I did in the Page 1 and Page 2 projects, just for the sake of simplicity. Feel free to add one if you'd like though, and copy the code from one of those other projects. Right now those view models aren't really doing anything anyway...though they will in my next post. :-) 13. 
Now let's replace the xaml for Page3.xaml with the following: <dyn:DynamicPage x:Class="Page3.Views.Page3"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     mc:Ignorable="d"     d:DesignHeight="300" d:DesignWidth="400"     Style="{StaticResource PageStyle}">       <Grid x:Name="LayoutRoot">         <ScrollViewer x:Name="PageScrollViewer" Style="{StaticResource PageScrollViewerStyle}">             <StackPanel x:Name="ContentStackPanel">                 <TextBlock x:Name="HeaderText" Style="{StaticResource HeaderTextStyle}" Text="Page 3"/>                 <TextBlock x:Name="ContentText" Style="{StaticResource ContentTextStyle}" Text="Page 3 content"/>             </StackPanel>         </ScrollViewer>     </Grid>   </dyn:DynamicPage> 14. And in the code-behind remove the inheritance from UserControl, so it should look like this: namespace Page3.Views {     public partial class Page3     {         public Page3()         {             InitializeComponent();         }     } } One thing you may have noticed is that the base class for the last two User Controls we created is DynamicPage. Once again, we are using the infrastructure that David Poll created. 15. OK, a few last things. We need a link on our main page so that we can access our new page. In MainPage.xaml let's update our links to look like this: <StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}">     <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 3" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage3}"/> </StackPanel> 16. Next, we need to add the following at the bottom of MainPageViewModel in the ViewModels directory of our DynamicXAPLoading project: public ModuleEnum ModulePage3 {     get { return ModuleEnum.Page3; } } 17. And at last, we need to add a case for our new page to the switch statement in MainPageViewModel: switch (module) {     case ModuleEnum.Page1:         DownloadPackage("Page1.xap");         break;     case ModuleEnum.Page2:         DownloadPackage("Page2.xap");         break;     case ModuleEnum.Page3:         DownloadPackage("Page3.xap");         break;     default:         break; } Now fire up the application and click the page 1, page 2 and page 3 links. What you'll notice is that there is a 2-second delay the first time you hit each page. That is because I added the following line to the Navigate method in MainPageViewModel: Thread.Sleep(2000); // Simulate a 2 second initial loading delay The reason I put this in there is that I wanted to simulate a delay the first time the page loads (as the .xap is being downloaded from the server). 
You'll notice that after the first hit to the page, though, there is no delay...that's because the .xap has already been downloaded. Feel free to comment out this 2-second delay, or remove it if you'd like. I just wanted to show how subsequent hits to the page would be quicker than the initial one. By the way, you may want to display some sort of BusyIndicator while the .xap is loading. I have that in my Task-It application, but for the sake of simplicity I did not include it here. In the future I'll blog about how I show and hide the BusyIndicator using events (I'm currently using the eventing framework in Prism for that, but may move to the one in the MVVM Light Toolkit some time soon). Whew, that felt like a lot of steps, but it does work quite nicely. As I mentioned earlier, I'll try to find ways to simplify the code (I'd like to get away from having things like hard-coded .xap file names) and will blog about it in the future if I find a better way. In my next post, I'll talk more about what is actually happening with the code that makes this all work.
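A footnote on the view model side: the Navigate/DownloadPackage code in MainPageViewModel is only referred to above, not shown in full. The sketch below is one way such a helper could look – it is an assumption, not the article's actual implementation. In particular it uses the DeploymentCatalog class from the final Silverlight 4 MEF bits rather than the preview-style PackageCatalog used earlier, and the AggregateCatalog field, the NavigationService parameter and the page URI passed in are all made up for illustration, so verify names and signatures against the Silverlight 4 documentation before relying on them.

// Rough sketch only – assumes DeploymentCatalog (Silverlight 4 MEF) and an
// AggregateCatalog that was handed to CompositionHost.Initialize at startup.
using System;
using System.ComponentModel.Composition.Hosting;
using System.Windows.Navigation;
using DynamicXAPLoading.Infrastructure; // for ModuleEnum

public class MainPageViewModel
{
    private readonly AggregateCatalog catalog;      // assumed: the catalog MEF was initialized with
    private readonly NavigationService navigation;  // assumed: taken from the ContentFrame

    public MainPageViewModel(AggregateCatalog catalog, NavigationService navigation)
    {
        this.catalog = catalog;
        this.navigation = navigation;
    }

    // Called by NavigateCommand with the ModuleEnum value bound in MainPage.xaml.
    public void Navigate(ModuleEnum module)
    {
        switch (module)
        {
            case ModuleEnum.Page3:
                DownloadPackage("Page3.xap", "/Page3;component/Page3.dyn.xaml");
                break;
            // ... cases for Page1 and Page2 look the same ...
        }
    }

    private void DownloadPackage(string xapName, string pageUri)
    {
        // Download the XAP on first use; its exports become visible to MEF
        // once the DeploymentCatalog is added to the aggregate catalog.
        var deploymentCatalog = new DeploymentCatalog(xapName);
        deploymentCatalog.DownloadCompleted += (s, e) =>
        {
            if (e.Error == null)
            {
                navigation.Navigate(new Uri(pageUri, UriKind.Relative));
            }
        };
        catalog.Catalogs.Add(deploymentCatalog);
        deploymentCatalog.DownloadAsync();
    }
}

Whatever the exact API, the idea is the same: download the XAP the first time it is needed, add its catalog so MEF can see the new exports, and only then navigate to the dynamically loaded page.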

    Read the article

  • VS 2012 Code Review &ndash; Before Check In OR After Check In?

    - by Tarun Arora
"Is Code Review Important and Effective?" There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:

- Bugs are found faster
- Forces developers to write readable code (code that can be read without explanation or introduction!)
- Optimization methods/tricks/productive programs spread faster
- Programmers as specialists "evolve" faster
- It's fun

"Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." Wikipedia

Nowhere does the definition mention whether it's better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check in and after check in. Let's weigh the pros and cons of the approaches independently.

Code Review Before Check In or Code Review After Check In?

Approach 1 – Code Review before Check in

The developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.

Image 1 – code review before check in

Pros
- Everything that gets committed to source control is reviewed.
- Minimizes the chances of smelly code making its way into the code base.
- Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.

Cons
- Development code freeze – since the changes aren't in source control yet, further development can only be done off-line.
- The changes have not been through a CI build, so it's hard to say whether the code abides by all build quality standards.
- Inconsistent! Cumbersome to track the actual code review process. Not every change to the code base is worth reviewing, so a lot of effort is invested for very little gain.

Approach 2 – Code Review after Check in

The developer checks in, and random code reviews are performed on the checked-in code.

Image 2 – Code review after check in

Pros
- The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis issues before check in. Code is cleaner and smell free even before the code review.
- No offline development; developers can continue to develop against the source control.

Cons
- Bad code can easily make its way into the code base.
- Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

Approach 3 – Hybrid Approach

The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient.

Image 3 – Hybrid Approach

1. Code review high impact check ins.
It is not possible to review everything, and by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check in hasn't even been through a green CI build either.

2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate the manual code review of individuals who keep making it to this list of shame more often.

3. During Merge. I would prefer eliminating some of the other code issues during merge from the Main branch to the release branch. In a scrum project this is still easier because cherry-picking the merges is a possibility and the size of code being reviewed is still limited.

Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete.

What do the experts say?

So I asked a few experts what they thought of "Code Review quality gate before Checking in code?"

Terje Sandstrom | Microsoft ALM MVP

You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin's. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time, would be great to be on CI's – but…., so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

David V. Corbin | Visual Studio ALM Ranger

I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics).

- The "pre-checkin" is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is no right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on per scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches. Or can be rejected. Pre Checkin Reviews are used when 1. There is a high risk factor associated 2. Reviewers are generally (most of times) have immediate availability. 3. Team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion, I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check in this is not very practical as a hard rule because this will disturb the flow of the team very often or it will lead to reduce the checkin frequency of the devs which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associate with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project) 2. When a project actually do it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences In the last years I have advocated following approach for code review - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - Dev lead on project continuously browse code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has HUUGE benefits - You can easily tweak to reach the level you desire together with customer - It is easy to measure for both developers/management - It is 100% consistent across code base - It gets validated all the time so you never end up getting hammered by a customer review in the end - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication You need to track this at least during nightly builds and make sure team sees total # issues. Do not allow #issues it to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means -  You have to have clean compile (or CA wont run) so this is extra benefit = very few broken builds - You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues which you REALLY do not want in your app Tip: Place your custom CA rules files as part of solution. That  way it works when you do branching etc. (path to CA file is relative in VS) Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues in start it is IMO a MUCH better (and much cheaper) approach from helicopter perspective Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (means peer reviews) in distributed team I use http://www.vsanywhere.com/default.aspx  David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale and its a great practice for all in the team to learn to perform and accept. In my experience you soon learn who's code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging, when a file has been changed in both branches. It offers a good opportunity to actually review to changed code. So my advice is: Merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tools exist: Ndepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject on the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html  I’d like to add a few more points: - Before/After checking is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talk/sit together or peer review, there’s no need to enforce a before-checkin policy. The peer peer-programming and regular feedback during development can take care of most of the review requirements as long as the team isn’t under stress. 
- Under stress, enforce pre-checkin reviews, it might sound strange, if you’re already under time or budgetary constraints, but it is under such conditions most real issues start to be created or pile up. - Use tools to catch most common errors, Code Analysis/FxCop was already mentioned. HP Fortify, Resharper, Coderush etc can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I’ve written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It’s much easier. But more importantly make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It’s still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn’t make it into the important branch. So my philosophy: - Use tooling as much as possible. - Make sure the team understands the tooling and the importance of the things it flags. It’s too easy to just click suppress all to ignore the warnings. - Under stress, tighten process, it’s under stress that the problems of late reviews will really surface - Most importantly if you do reviews do them as early as possible, but never later than needed. In other words, pre-checkin/post checking doesn’t really matter, as long as the review is done before the code is released. It’ll just be much more expensive to fix any review outcomes the later you find them. --- I would love to hear what you think!
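To make Jakob's "missing dispose" point concrete, here is the kind of code that the Code Analysis rule he is talking about (CA2000, "Dispose objects before losing scope") flags, together with the trivial fix. The repository class and query are hypothetical, made up purely for illustration and not taken from any project discussed above:

// Hypothetical example of a CA2000-style 'missing dispose' issue and its fix.
using System.Data.SqlClient;

public class CustomerRepository
{
    // Flagged: the connection and command are never disposed if ExecuteScalar throws,
    // and are not deterministically released even when the call succeeds.
    public int GetCustomerCountLeaky(string connectionString)
    {
        var connection = new SqlConnection(connectionString);
        connection.Open();
        var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Customer", connection);
        return (int)command.ExecuteScalar();
    }

    // Clean: 'using' guarantees Dispose is called, so the analyser stays quiet and
    // connections go back to the pool promptly under load.
    public int GetCustomerCount(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Customer", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}

Issues in this category are exactly the kind of thing tooling catches far more reliably than a human reviewer skimming a changeset, whichever side of the check-in the review happens on.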

    Read the article

  • Syntax error on token "QUOTE", VariableDeclaratorId expected after this token

    - by user356812
I've posted a bigger chunk of the code below. You can see that initially QUOTE was procedural-coded in place. I'm trying to learn how to use declarative design so I want to do the same thing but by using resources. It seems like I need to access the string.xml thru the @R.id tag and identify QUOTE with that string value. But I don't know enough to negotiate this. Any tips? Thanks!

public class circle extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(new GraphicsView(this));
    }

    static public class GraphicsView extends View {
        //private static final String QUOTE = "Happy Birthday to David.";
        private final String QUOTE = getString(R.string.quote);

        .....

        @Override
        protected void onDraw(Canvas canvas) {
            // Drawing commands go here
            canvas.drawPath(circle, cPaint);
            canvas.drawTextOnPath(QUOTE, circle, 0, 20, tPaint);

    Read the article

< Previous Page | 120 121 122 123 124 125 126 127 128 129 130 131  | Next Page >