Search Results

  • ArchBeat Link-o-Rama Top 20 for June 17-23, 2012

    - by Bob Rhubart
    The most popular items shared via my social networks for the week of June 17-23, 2012.
    - Simple Made Easy | Rich Hickey
    - InfoQ: Current Trends in Enterprise Mobility
    - Cloud Bursting between AWS and Rackspace | High Scalability
    - A Distributed Access Control Architecture for Cloud Computing
    - Congrats to @MNEMONIC01 on his new book: Oracle WebLogic Server 12c: First Look
    - BI Architecture Master Class for Partners - Oracle Architecture Unplugged
    - Guide to integration architecture | Stephanie Mann
    - Oracle Data Integrator 11g - Faster Files | David Allan
    - Recap: EMEA User Group Leaders Meeting Latvia May 2012
    - Starting a cluster | Mark Nelson
    - SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount
    - Enterprise 2.0 Conference: Building Social Business | Oracle WebCenter Blog
    - FY13 Oracle PartnerNetwork Kickoff - Tues June 26, 2012 | @oraclepartners
    - Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6? | Ricardo Ferreira
    - Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12
    - Hibernate4 and Coherence | Rene van Wijk
    - Eclipse and Oracle Fusion Development - Free Virtual Event, July 10th
    - Why building SaaS well means giving up your servers | GigaOM
    - Personas - what, why & how | Mascha van Oosterhout
    - Oracle Public Cloud Architecture | Tyler Jewell
    Thought for the Day: "There is only one thing more painful than learning from experience and that is not learning from experience." — Archibald MacLeish (May 7, 1892 – April 20, 1982) Source: SoftwareQuotes.com

    Read the article

  • 2D Collision masks for handling slopes

    - by JiminyCricket
    I've been looking at the example at http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and am trying to figure out how to adjust the sprite once a collision has been detected. As David suggested at XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision, I made a few sensor points (feet, sides, bottom center, etc.) and can easily detect when these points actually collide with non-transparent portions of a second texture (a simple slope). I'm having trouble with the algorithm for how I would actually adjust the sprite position based on a collision. Say I detect a collision with the slope at the sprite's right foot. How can I scan the slope texture data to find the Y position to place the sprite's foot so it is no longer inside the slope? The way it is stored as a 1D array in the example is a bit confusing; should I try to store the data as a 2D array instead? For test purposes, I'm thinking of just using the slope texture alpha itself as a primitive and easy collision mask (no grass bits or anything besides a simple non-linear slope). Then, as in the example, I find the coordinates of any collisions between the slope texture and the sprite's sensors and mark these special sensor collisions as having occurred. Finally, in the case of moving up a slope, I would scan for the first transparent pixel above the right foot collision point (in the texture's Y values at that X) and set that as the new height of the sprite. I'm also a little unclear on when I should make these adjustments. Collisions are checked on every game.Update(), so would I quickly change the position of the sprite before the next update is called? I also noticed several people mention that it's best to separate collision checks horizontally and vertically; why is that exactly? I'm open to any suggestions if this is an inefficient or inaccurate way of handling this. I wish MSDN had provided an example of something like this; I didn't know it would be so much more complex than NES Mario-style pure box platforming!
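
    A minimal sketch of the indexing and upward scan described above (XNA's GetData<Color>() fills a 1D array; the helper names here are hypothetical):

        // The 1D array from texture.GetData<Color>() stores rows top-to-bottom:
        // index = y * texture.Width + x, so a separate 2D array isn't needed.
        static bool IsOpaque(Color[] mask, int width, int x, int y)
        {
            return mask[y * width + x].A > 0;
        }

        // Starting at the foot sensor's collision point, walk upward until the
        // mask becomes transparent; that row is where the foot can rest.
        static int? FindSurfaceY(Color[] mask, int width, int x, int startY)
        {
            for (int y = startY; y >= 0; y--)
            {
                if (!IsOpaque(mask, width, x, y))
                    return y; // first transparent pixel above the slope surface
            }
            return null; // the whole column is solid
        }

    Resolving horizontal movement and then vertical movement as two separate passes is what lets a scan like this answer "how far up?" without also having to guess "how far back?".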

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-20

    - by Bob Rhubart
    - New book: Oracle WebLogic Server 12c: First Look
      Congratulations to Michel Schildmeijer on the publication of his new book.
    - Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12
      These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Submission deadline: July 17. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco.
    - ODTUG Kscope12 - June 24-28 - San Antonio, TX
      Kscope12, sponsored by ODTUG, is your home for Application Express, BI and Oracle EPM, Database Development, Fusion Middleware, and MySQL training by the best of the best!
    - Eclipse and Oracle Fusion Development - Free Virtual Event, July 10th
      Get more out of Eclipse with these useful resources.
    - How to Create Multiple Internal Repositories for Oracle Solaris 11 | Albert White
      Albert White shows you how to create and manage internal repositories for release, development, and support versions of Solaris 11.
    - Social Technology and the Potential for Organic Business Networks | Michael Fauscette
      "An organic business network driven company is the antithesis of a hierarchical, rigid, reactive, process-constrained, and siloed organization."
    - Cloud Bursting between AWS and Rackspace | High Scalability
      Nati Shalom explains "cloud bursting," an interesting hybrid cloud model.
    - Born-again cloud advocates finally see the light | David Linthicum
      "I can't help but wish that we keep an open mind about the next technology evolution when it begins and get religion earlier," says Linthicum.
    - How to know that a method was run, when you didn’t write that method | RedStack
      Middleware A-Team blogger Mark Nelson shares a useful tip for those working with ADF.
    Thought for the Day: "There does not now, nor will there ever exist, a programming language in which it is the least bit hard to write bad programs." — L. Flon Source: SoftwareQuotes.com

    Read the article

  • links for 2011-03-18

    - by Bob Rhubart
    - Events Overview (tags: ping.fm entarch)
    - No description available. (tags: ping.fm)
    - Andrejus Baranovskis: SOA & E2.0 Partner Community Forum Slides
      Oracle ACE Director Andrejus Baranovskis shares slides from his presentation at the SOA & E2.0 Partner Community Forum in the Netherlands. (tags: oracle otn oracleace soa enterprise2.0 webcenter)
    - ODTUG Kaleidoscope 2011 - The Premier Conference for Oracle Fusion Middleware
      AMIS Technology blog: Oracle ACE Director Lucas Jellema shares information on what he considers "the best event for anyone doing, dabbling in or considering doing Oracle Fusion Middleware." (tags: oracle otn oracleace odtug fusionmiddleware)
    - Mark Rittman: ODTUG K-Scope 2011 Early Bird Deadline is Closing
      "The deadline for Early Bird registrations for Kscope is fast approaching [March 25]. If you want to attend at the discounted rate, sign up soon." - Oracle ACE Director Mark Rittman (tags: oracle otn oracleace odtug)
    - Master Data Management and Cloud Computing (Oracle Master Data Management)
      "Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to or in conjunction with a move to the Cloud can insure that the data flowing into the enterprise from the Cloud is clean and governed." - David Butler (tags: oracle otn mdm cloud)

    Read the article

  • JOIN THE ORACLE Fusion Middleware Summer Camps

    - by mseika
    For Specialized partners who are working on the following projects and opportunities, we offer these advanced summer camps:
    - BPM Suite 11g
    - ADF 11g
    - WebCenter Portal
    - WebLogic 12c
    - SOA Suite 11g
    - ADF for BPM Suite 11g
    - WebCenter Sites 11g
    All training sessions will be delivered by HQ product management and our PTS team. The sessions will take place in July in Lisbon, Portugal and Munich, Germany. Participation is limited to two people per company and bootcamp. Registration is handled on a first come, first served basis; please pay attention to the skill requirements, the prerequisites and the follow-up! We will not accept people onto the training who do not match the criteria!
    Lisbon: Monday, July 9th 11:00 - Friday, July 13th 16:00 (Lisbon time)
    - ADF 11g advanced training by Grant Ronald and Frank Nimphius
    - WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata
    - WebLogic 12c training by Cosmin Tudor
    Munich: Monday, July 16th 11:00 - Wednesday, July 18th 16:00 (CET)
    - ADF for BPM Suite 11g advanced training by David Read
    - WebCenter Sites 11g advanced training by Product Management & PTS
    Cost: free of charge; cancellation or no-show fee 2.000€. Bootcamps are limited to 20 persons, first come, first served. For details and registration please visit the Lisbon registration page and the Munich registration page.
    Quotes from the 2011 summer camps:
    "From zero to hero with this BPM workshop" - Steven Boon, Ordina (LinkedIn)
    "This is the training that prepares for real projects and POCs" - Jon Petter Hjulstad, eVita (blog & twitter)
    SOA & BPM Partner Community registration: please first log in at http://partner.oracle.com and then visit http://www.oracle.com/goto/emea/soa. If you have any questions, please feel free to contact the Oracle Partner Business Center at any time!
    Best regards, Jürgen Kress, Oracle EMEA SOA & BPM Partner Adoption. Tel. +49 89 1430 1479, E-Mail: [email protected]

    Read the article

  • IDC's Sally Hudson on Securing Mobile Access

    - by Naresh Persaud
    After the launch of Identity Management 11g R2, Oracle Magazine writer David Baum sat down with Sally Hudson, research director of security products at International Data Corporation (IDC), to get her perspective on securing mobile access. Below is an excerpt from the interview; the complete article can be found here. "We’re seeing a much more diverse landscape of devices, computing habits, and access methods from outside of the corporate network. This trend necessitates a total security picture with different layers and end-point controls. It used to be just about keeping people out. Now, you have to let people in. Most organizations are looking toward multifaceted authentication—beyond the password—by using biometrics, soft tokens, and so forth to do this securely. Corporate IT strategies have evolved beyond just identity and access management to encompass a layered security approach that extends from the end point to the data center. It involves multiple technologies and touch points and coordination, with different layers of security from the internals of the database to the edge of the network." (Sally Hudson, Oracle Magazine, Sept/Oct 2012) As the landscape changes, you can find out how to adapt by reading Oracle's strategy paper on providing identity services at Internet scale.

    Read the article

  • How Do Top Performing High Tech Companies Measure Online Marketing Success?

    - by Charles Knapp
    You might expect a focus on Net Promoter scores, open rates, and click metrics. The real answers from top performers may surprise you. I've been working for a few months with Aberdeen Group and colleagues from IBM and Oracle to survey high technology firms worldwide on best practices in marketing and channel sales effectiveness.  Now, we will share the results of our original customer research in a new white paper and webcast. Register today to learn how leading High Tech companies are increasing their Return on Marketing Investment (ROMI) and growing channel sales revenue. Discover how top performing high tech companies manage and use customer data, measure marketing spend effectiveness, and support internal and channel sales. Learn how best in class high tech companies use enterprise data throughout their customer lifecycle -- messaging to leads, selling to prospects, and serving customers. Our speakers will be: Peter Ostrow, Research Director - Sales Effectiveness, Aberdeen Group David Lasher, Global Business Services Partner, IBM Jonathan Oomrigar, Vice President, Global High Technology Business Unit, Oracle Reserve your place now! This global webinar is on Tuesday, November 15, 10-11 am PST / 1-2 pm EST / 6-7 GMT / 7-8 CET

    Read the article

  • Spring to Java EE, Part Three - new tech article on otn/java

    - by Janice J. Heiss
    In a new article up on otn/java, Java EE expert David Heffelfinger continues his series exploring the relative strengths and weaknesses of Java EE and Spring. Here, he demonstrates how easy it is to develop the data layer of an application using Java EE, JPA, and the NetBeans IDE instead of the Spring Framework. In the first two parts of the series, he generated a complete Java EE application by using JavaServer Faces (JSF) 2.0, Enterprise JavaBeans (EJB) 3.1, and Java Persistence API (JPA) 2.0 from Spring’s Pet Clinic MySQL schema, thus showing how easy it is to develop an application whose functionality equaled that of the Spring sample application. In his new article, Heffelfinger tweaks the application to make it more user-friendly. From the article: “The generated application displays primary keys on some of the pages, and these keys are surrogate primary keys—meaning that they have no business value and are used strictly as a unique identifier—so there is no reason why they should be visible to the user. In addition, we will modify some of the generated labels to make them more user-friendly.” He concludes the article with a summary: “The Java EE version of the application is not a straight port of the Spring version. For example, the Java EE version enables us to create, update, and delete veterinarians as well as veterinary specialties, whereas the Spring version of the application enables us only to view veterinarians and specialties. Additionally, the Spring version has a single page for managing/viewing owners, pets, and visits, whereas the Java EE version of the application has separate pages for each of these entities. The other thing we should keep in mind is that we didn’t actually write a lot of the code and markup for the Java EE version of the application, because the bulk of it was generated by the NetBeans wizard.” Have a look at the complete article here.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2, 2012

    - by Bob Rhubart
    - ADF Mobile - Login Functionality | Andrejus Baranovskis
      "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis.
    - Big Data: Running out of Metric System | Andrew McAfee
      Do very large numbers make your brain hurt? Better stock up on aspirin. According to Andrew McAfee: "It seems safe to say that before the current decade is out we’ll need to convene a 20th conference to come up with some more prefixes for extraordinarily large quantities not to describe intergalactic distances or the amount of energy released by nuclear reactions, but to capture the amount of digital data in the world."
    - Cloud computing will save us from the zombie apocalypse | Cloud Computing - InfoWorld
      "It's just a matter of time before we migrate our existing IT assets to public cloud systems," says InfoWorld cloud blogger David Linthicum. "Additionally, it's a short window until the dead rise from the grave and attempt to eat our brains." Is it Halloween or something?
    Thought for the Day: "A computer lets you make more mistakes faster than any invention in human history—with the possible exceptions of hand guns and tequila." — Mitch Ratcliffe

    Read the article

  • Should I use the cool, new and awesome stuff [closed]

    - by Ieyasu Sawada
    I'm in the field of Web Development. I follow a lot of awesome people on Twitter (Paul Irish, Alex Sexton, David Walsh, Rebecca Murphey, etc.). By following these people I'm constantly updated on the new and cool stuff in web development (backbone.js, angular.js, require.js, ember.js, jasmine, etc.). Now I'm creating a web application for a client, and because of the different tools, libraries and plugins that I'm aware of, I don't even know anymore when I need to use them, whether I even need them, or how to implement them in a sane way where I won't have to repeat myself (DRY). What's your advice on what I really need to do in order to become better? Do I really need to use these cool new tools, or should I just stick with what I know for now and try to make my code better? Should I unfollow these people so as not to pollute my mind with stuff that I can't really use now because I don't have the necessary experience to use it? What sort of things should I really be focusing on for someone like me who has only about 2 years of experience in the field of web development?

    Read the article

  • @CodeStock 2012 Review: Leon Gersing ( @Rubybuddha ) - "You"

    "YOU"Speaker: Leon GersingTwitter: @Rubybuddha Site: http://about.me/leongersing I honestly had no idea what I was getting in to when I sat down in to this session. I basically saw the picture of the speaker and knew that it would be a good session. I was completely wrong; it was the BEST SESSION of CodeStock 2012.  In fact it was so good, I texted another coworker attending the conference to get over and listen to Leon. Leon took on the concept of growth in the software development community. He specifically referred David Hansson in his ability to stick to his beliefs when the development community thought that he was crazy for creating Ruby on Rails. If you do not know this story Ruby on Rails is one of the fastest growing web languages today. In addition, he also touched on the flip side of this argument in that we must be open to others ideas and not discard them so quickly because we all come from differing perspectives and can add value to a project/team/community. This session left me with two very profound concepts/quotes: “In order to learn you must do it badly in front of a crowed and fail.” - @Rubybuddha I can look back on my career so far and say that he is correct; I think I have learned the most after failing, especially when I achieved this failure in front of other. “Experts must be able to fail.” - @Rubybuddha I think we can all learn from our own mistakes but we can also learn from others. When respected experts fail it is a great learning opportunity for the entire team as well as the person who failed. When expert admit mistakes and how they worked through them can be great learning tools for other developers so that they know how to avoid specific scenarios and if they do become stuck in the same issue they will know how to properly work their way out of them.

    Read the article

  • Today's Links (6/24/2011)

    - by Bob Rhubart
    - Fusion Applications - How we look at the near future | Domien Bolmers
      Bolmers recaps a Logica pow-wow around Fusion Applications.
    - Who invented e-mail? | Nicholas Carr
      IT apparently does matter to Nicholas Carr as he shares links to Errol Morris's 5-part NYT series about the origins of email.
    - David Sprott's Blog: Service Oriented Cloud (SOC)
      "Whilst all the really good Cloud environments are Service Oriented," says Sprott, "it’s very much the minority of consumer SaaS that is today."
    - Fast, Faster, JRockit | René van Wijk
      Oracle ACE René van Wijk tells you "everything you ever wanted to know about the JRockit JVM, well quite a lot anyway."
    - Creating an XML document based on my POJO domain model – how will JAXB help me? | Lucas Jellema
      "I thought that adding a few JAXB annotations to my existing POJO model would do the trick," says Jellema, "but no such luck."
    - Announcing Oracle Environmental Accounting and Reporting | Theresa Hickman
      Oracle Environmental Accounting and Reporting is designed to help companies track and report greenhouse emissions.
    - Yoga framework for REST-like partial resource access | William Vambenepe
      Vambenepe says: "A tweet by Stefan Tilkov brought Yoga to my attention, 'a framework for supporting REST-like URI requests with field selectors.'"
    - InfoQ: Pragmatic Software Architecture and the Role of the Architect
      "Joe Wirtley introduces software architecture and the role of the architect in software development along with techniques, tips and resources to help one get started thinking as an architect."

    Read the article

  • Light mask map and camera for static lights in XNA Platformer

    - by JiminyCricket
    Using the example for some basic light maps found here : http://blog.josack.com/2011/07/xna-2d-dynamic-lighting.html, I've managed to create a lightmap texture using individual lightmaps and display it over a 2D tiled world as in the Platformer example. I'm using the very basic 2D camera example as found here : http://www.david-amador.com/2009/10/xna-camera-2d-with-zoom-and-rotation/, and the problem is that the lightmap texture scrolls with the player sprite. This looks pretty good and would be excellent for lighting the player sprite as it moves. But, I also want to be able to place static lights (or some initial position for the lights) that do not move with the player or camera. When I turn off the camera or give it a static position, it works as a series of static lights so I believe it's probably caused by the camera transformation matrix following the player around. I'm using RenderTarget2Ds, one for the main game screen after all the backgrounds and tiles are rendered, and one for the "lightmap" which consists of a black background and a bunch of lighting textures which are merged with it using additive blending. For now, I'm doing all of this in PlatformerGame.cs where the camera transformation and position is set and the level.Draw() call is made. I can't figure out how to separate the drawing of the lightmap and the camera following the player. I was thinking it would be better to render the shadows and lighting directly in the drawing of the level itself, but I'm not sure how to do that either because this technique requires RenderTarget2Ds and calling SpriteBatch.Begin()/End().
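
    One approach worth sketching (hypothetical names; it assumes the camera exposes the same view matrix used for the world layer): draw the light sprites into the lightmap render target with that camera transform applied, giving each static light a fixed world position. The lights are then transformed exactly like the tiles, so they stay anchored to the world, while the player's light simply uses the player's world position.

        // Build the lightmap in world space (a sketch, not the tutorial's exact API).
        GraphicsDevice.SetRenderTarget(lightMapTarget);
        GraphicsDevice.Clear(Color.Black);

        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive,
                          null, null, null, null, camera.GetViewMatrix());
        foreach (Vector2 lightPos in staticLightWorldPositions)
            spriteBatch.Draw(lightTexture, lightPos, Color.White); // fixed in the world
        spriteBatch.Draw(lightTexture, playerPosition, Color.White); // follows the player
        spriteBatch.End();

        GraphicsDevice.SetRenderTarget(null);
        // ...then combine lightMapTarget with the main scene target as before.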

    Read the article

  • Using position function for accessing particular node when using While Activity in SOA 11.1.1.5

    - by AJ
    If you are using the While activity in SOA Suite 11.1.1.5 and within the loop you have a requirement to access a repeating node of the XML, you may need an XPath expression like the one below to access the node. Here is the XML I am using for this example (only the job element is shown expanded; the remaining Emp values are listed in comments):

        <?xml version='1.0' encoding='UTF-8'?>
        <EmpCollection>
           <Emp>
              <job>DemoJob</job>
              <!-- other Emp values: David, 1, 2012-04-15, 40000, 0, 10 -->
           </Emp>
           <Emp>
              <job>TestJob</job>
              <!-- other Emp values: Steve, 1, 2012-04-15, 40000, 0, 10 -->
           </Emp>
        </EmpCollection>

    Here you can notice that the Emp node repeats, i.e. the EmpCollection node will contain multiple employees. Now, in one of the loop's Assign activities, you need to access a particular node: for example, the first time the loop runs you want to access the first node, the second time the second node, and so on. You need to make use of the position() function, like so:

        bpws:getVariableData('Receive1_Read_InputVariable','body','/ns4:EmpCollection/ns4:Emp[position()=$loopCounter]/ns4:job')

    Please note: here loopCounter is a variable of type xsd:int that we created and initialized to 1 prior to the loop. The loop will run depending on the number of Emp nodes present at runtime. For that, in the While activity you can use the XPath expression below:

        bpws:getVariableData('loopCounter') <= ora:countNodes('Receive1_Read_InputVariable','body','/ns4:EmpCollection/ns4:Emp')

    Do let me know in case of any issues or concerns. Cheers, AJ

    Read the article

  • Java EE @ Devoxx UK

    - by delabassee
    Devoxx UK is taking place next week (12th and 13th June) in London. As with any Devoxx conference, this UK edition will have a nice mix of content and an impressive list of speakers, and Java EE will be well covered too:
    - Apache TomEE, Java EE Web Profile and more on Tomcat (David Blevins)
    - Myths, Tales and Voodoo - About Java EE and Testing (Adam Bien)
    - 50 new features of Java EE 7 (Antonio Goncalves & Arun Gupta)
    - Java EE 7 Hands-on Lab (Arun Gupta)
    In addition, there will be 2 BoFs related to Java EE on Thursday evening: the first BoF will be about the Java EE platform and the second one will be about the Java EE Reference Implementation, i.e. GlassFish. I will participate in the Java EE Community BoF, where we will discuss Java EE in general, but with all the recent activity, I suspect that a large portion of the BoF will be spent discussing the current plans for Java EE 8. Right after, and in the same room, I will join Steve Millidge of C2B2 for the GlassFish is here to stay! BoF. The goal is to discuss GlassFish: the current status, the plans for the next release, how the community can contribute, etc. It should be mentioned that attending those BoFs is completely free; just make sure to register here. So if you are in London next week, mind the Geek and see you at Devoxx UK!

    Read the article

  • MySQL Makes The Cover of Oracle Magazine!

    - by bertrand.matthelie(at)oracle.com
    In case you haven't seen it yet, MySQL made the cover of the January/February edition of Oracle Magazine! Published 6 times per year and distributed to over half a million IT managers, DBAs and developers, Oracle Magazine contains technology-strategy articles, sample code, tips, Oracle & partner news, and more. If you're thinking that this is yet another indicator that Oracle is very serious about MySQL, you're absolutely right! I encourage you to read David Kelly's "Open For Business" article posted online here. "First released in 1995 and purchased by Sun in 2008, MySQL has quickly graduated from the realm of hobbyists to the world of business, becoming the leading open source database for many Web applications and an integral part of the LAMP (Linux, Apache, MySQL, PHP) Web application stack. Almost a year after Oracle's acquisition of Sun, MySQL plays an even bigger role in enterprises of all sizes worldwide." ... Enjoy the article!

    Read the article

  • Arrow ECS Oracle OpenWorld update

    - by mseika
    A date to mark in your diaries now! On Tuesday 23rd October we will be holding our post-Oracle OpenWorld event. Covering all the important announcements from Oracle OpenWorld, this must-attend event will be held in the Royal Exchange, London. The full agenda and speaker line-up is being finalised, but we will cover all the major strategy and product announcements from Oracle OpenWorld, the FY13 Channel Strategy and Partnering with Arrow ECS.
    - Oracle OpenWorld: a Channel Perspective, David Tweddle - Head of UK Alliance and Channel
    - Hardware Announcements, Christopher Lindsay - Oracle Hardware EMEA
    - Software Announcements, speaker to be confirmed
    - Arrow ECS Focus and Strategy, Simon Rushbrook - Business Development Manager, Arrow ECS
    - Summary and close, Nick Tinsley - Sales Director
    Who should attend? The event is subject to the Oracle NDA and is aimed at sales and pre-sales personnel within your organisation. The event will commence at 12.30 with lunch; presentations will start at 1.30 and will close at 4.30, followed by drinks and networking. Full details to follow shortly, but save the date and register now.
    Date & Time: Tuesday 23rd October 2012, 12.30pm - 4.30pm. Post-event drinks will be served from 4.30pm.
    Location: London Office, Entrance 2, Fourth Floor, The Royal Exchange, London, EC3V 3LN. Click here for directions >

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-12

    - by Bob Rhubart
    - 15 Lessons from 15 Years as a Software Architect | Ingo Rammer
      In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years.
    - Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena
      Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things."
    - Oracle Identity Manager 11g R2 Catalog | Daniel Gralewski
      Oracle Fusion Middleware A-Team blogger Daniel Gralewski shares a detailed overview of the new Catalog feature, one of the most talked about features in the latest release of Oracle Identity Manager 11g.
    - Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld
      "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions -- inside and outside the enterprise."
    - Oracle Solaris 8 P2V with Oracle database 10.2 and ASM | Orgad Kimchi
      Orgad Kimchi's technical post illustrates the migration of "a Solaris 8 physical system, with Oracle database version 10.2.0.5 with ASM file-system located on a SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain."
    Thought for the Day: "The hardest single part of building a software system is deciding precisely what to build." — Fred Brooks Source: SoftwareQuotes.com

    Read the article

  • Twitter Tuesday - Top 10 @ArchBeat Tweets - August 12-18, 2014

    - by Bob Rhubart-Oracle
    Man in gray hat: "You know, more than three thousand people follow @OTNArchBeat on Twitter. I wonder which tweets were the most popular over the last seven days." Man in brown hat: "Shut up! I think I see a UFO!" Man in gray hat: "That's OK. I'll just read this blog post."
    - RT @java: "Programmers are creative people and typically delight in contriving clever ways to solve problems." -Casimir Saternos in @OracleJavaMag (Aug 18, 2014 at 12:54 PM)
    - The Offer Still Stands: Produce your own episode of the OTN ArchBeat Podcast. Click for details. (Aug 13, 2014 at 02:03 PM)
    - Binge-Ready! Watch the Top 10 OTN ArchBeat Videos featuring @stewartbryson @stenvesterli @gurcanorhan (Aug 13, 2014 at 11:49 AM)
    - Oracle Announces First Java 9 Features | InfoQ (Aug 18, 2014 at 12:20 PM)
    - Getting Started with the #Coherence Memcached Adaptor | David Felcey (Aug 18, 2014 at 10:19 AM)
    - #WebLogic Data Source Connection Labeling | Steve Felts (Aug 14, 2014 at 10:03 AM)
    - How to introduce #DevOps into a moribund corporate culture | ZDNet (Aug 15, 2014 at 11:23 AM)
    - Sample Chapter: Installing Oracle #WebLogic Server 12c and Using the Management Tools | Sam Alapati (Aug 14, 2014 at 11:09 AM)
    - Building a Responsive #WebCenter Portal Application | @JayJayZheng (Aug 12, 2014 at 11:04 AM)
    - #OEM12c Cloud Control authorization with Active Directory | Jeroen Gouma (Aug 14, 2014 at 10:16 AM)

    Read the article

  • Should I use a config file or database for storing business rules?

    - by foiseworth
    I have recently been reading The Pragmatic Programmer, which states that:
    Details mess up our pristine code—especially if they change frequently. Every time we have to go in and change the code to accommodate some change in business logic, or in the law, or in management's personal tastes of the day, we run the risk of breaking the system—of introducing a new bug.
    Hunt, Andrew; Thomas, David (1999-10-20). The Pragmatic Programmer: From Journeyman to Master (Kindle Locations 2651-2653). Pearson Education (USA). Kindle Edition.
    I am currently programming a web app that has some models with properties that can only come from a set of values, e.g. (not an actual example, as the web app data is confidential):
    light-type = sphere / cube / cylinder
    The light type can only be one of the above three values, but according to TPP I should always code as if they could change and place their values in a config file. As there are several instances of this throughout the app, my question is: should I store possible values like these in:
    - a config file: 'light-types' = array(sphere, cube, cylinder), 'other-type' = value, 'etc' = etc-value
    - a single table in a database with one line for each config item
    - a database with a table for each config item (e.g. table: light_types; columns: id, name)
    - some other way?
    Many thanks for any assistance / expertise offered.
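
    For the config-file option, a minimal sketch (C# here purely for illustration; the file name, JSON format and class names are assumptions, not from the question):

        // enums.json (hypothetical): { "light-types": [ "sphere", "cube", "cylinder" ] }
        using System.Collections.Generic;
        using System.IO;
        using System.Text.Json;

        public static class EnumConfig
        {
            private static Dictionary<string, List<string>> _values;

            public static IReadOnlyList<string> Get(string key)
            {
                // Load once, on first use; changing the file needs a restart, not a rebuild.
                _values ??= JsonSerializer.Deserialize<Dictionary<string, List<string>>>(
                    File.ReadAllText("enums.json"));
                return _values[key]; // e.g. Get("light-types") -> sphere, cube, cylinder
            }
        }

    The database variants trade that restart for a data update, which matters mostly if non-developers change the values or if the app can't be redeployed easily.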

    Read the article

  • Oracle Buys BigMachines - Adds Leading Configure, Price and Quote (CPQ) Cloud to the Oracle Cloud to Enable Smarter Selling

    - by Richard Lefebvre
    News Facts
    - Oracle today announced that it has entered into an agreement to acquire BigMachines, a leading cloud-based Configure, Price and Quote (CPQ) solution provider.
    - BigMachines’ CPQ Cloud accelerates the conversion of sales opportunities into revenue by automating the sales order process with guided selling, dynamic pricing, and an easy-to-use workflow approval process, accessible anywhere, on any device.
    - Companies that use sales automation technology often rely on manual, cumbersome and disconnected processes to convert opportunities into orders. This creates errors, adds costs, delays revenue, and degrades the customer experience.
    - BigMachines’ CPQ Cloud extends sales automation to include the creation of an optimal quote, which enables sales personnel to easily configure and price complex products, select the best options, promotions and deal terms, and include up-sell and renewals, all using automated workflows.
    - In combination with Oracle’s enterprise-grade cloud solutions, including Marketing, Sales, Social, Commerce and Service Clouds, Oracle and BigMachines will create an end-to-end smarter selling cloud solution so sales personnel are more productive, customers are more satisfied, and companies grow revenue faster.
    - More information on this announcement can be found at http://www.oracle.com/bigmachines
    Supporting Quotes
    “The fundamental goals of smarter selling are to provide sales teams with the information, access, and insights they need to maximize revenue opportunities and execute on all phases of the sales cycle,” said Thomas Kurian, Executive Vice President, Oracle Development. “By adding BigMachines’ CPQ Cloud to the Oracle Cloud, companies will be able to drive more revenue and increase customer satisfaction with a seamlessly integrated process across marketing and sales, pricing and quoting, and fulfillment and service.”
    “BigMachines has developed leading CPQ solutions that serve companies of all sizes across multiple industries,” said David Bonnette, BigMachines’ CEO. “Together with Oracle, we expect to provide a complete cloud solution to manage sales processes and deliver exceptional customer experiences.”
    Supporting Resources
    - About Oracle and BigMachines
    - General Presentation
    - Customer and Partner Letter
    - FAQ

    Read the article

  • Why is the camera not following the player? [on hold]

    - by Homer_Simpson
    I use the following code to create Parallax Scrolling: http://www.david-gouveia.com/portfolio/2d-camera-with-parallax-scrolling-in-xna/ Parallax Scrolling is working, but I don't know how to focus the camera on the player. If the player moves, the camera doesn't follow the player; the player leaves the screen when I'm moving it. I use the following code (as described in the tutorial), but it's not working:

        // Updates my camera to lock on the character
        _camera.LookAt(player.Playerposition);

    What can I do so that the player is always in the center of the screen/camera? My player class:

        public class Player
        {
            Texture2D Playertex;
            public Vector2 Playerposition = new Vector2(400, 240);
            private Game1 game1;

            public Player(Game1 game)
            {
                game1 = game;
            }

            public void Load(ContentManager content)
            {
                Playertex = content.Load<Texture2D>("8bitmario");
                TouchPanel.EnabledGestures = GestureType.HorizontalDrag;
            }

            public void Update(GameTime gameTime)
            {
                while (TouchPanel.IsGestureAvailable)
                {
                    GestureSample gs = TouchPanel.ReadGesture();
                    switch (gs.GestureType)
                    {
                        case GestureType.HorizontalDrag:
                            Playerposition.X += 3f;
                            break;
                    }
                }
            }

            public void Render(SpriteBatch batch)
            {
                batch.Draw(Playertex, new Vector2(Playerposition.X - Playertex.Width / 2, Playerposition.Y - Playertex.Height / 2), Color.White);
            }
        }

    In Game1, I update the player and camera class:

        protected override void Update(GameTime gameTime)
        {
            // Updates my character's position
            player.Update(gameTime);

            // Updates my camera to lock on the character
            _camera.LookAt(player.Playerposition);

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            foreach (Layer layer in _layers)
                layer.Draw(spriteBatch);

            spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, _camera.GetViewMatrix(new Vector2(0.0f, 0.0f)));
            player.Render(spriteBatch);
            spriteBatch.End();

            base.Draw(gameTime);
        }
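
    If the camera is the parallax version from the linked tutorial, one likely culprit (an assumption based on that tutorial's approach, where the camera translation is multiplied by the parallax factor) is the Vector2(0.0f, 0.0f) passed to GetViewMatrix: a zero factor cancels the camera's movement entirely for that layer, so LookAt has no visible effect on the player. Drawing the player's layer with a factor of one applies the full camera transform:

        // Sketch: the player belongs to the non-parallax (factor 1.0) layer.
        spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null,
                          _camera.GetViewMatrix(Vector2.One)); // was new Vector2(0.0f, 0.0f)
        player.Render(spriteBatch);
        spriteBatch.End();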

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules were the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care.

    What's a Web API?
    HTTP 'APIs' (Microsoft's new terminology for a service I guess) are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app-like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup, typically in the form of JSON or XML. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server.

    But .NET already has various HTTP Platforms
    The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up, from low level APIs/custom engines to high level HTML generation engines. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs.
    Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF-agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API, for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit, as Web API addresses just about all the features I implemented on my own and much more.

    ASP.NET Web API provides a better HTTP Experience
    ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it's a brand new platform rather than a bolted-on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like:
    - Strong support for URL Routing to produce clean URLs using familiar MVC style routing semantics
    - Content Negotiation based on Accept headers for request and response serialization
    - Support for a host of supported output formats including JSON, XML, ATOM
    - Strong default support for REST semantics, but they are optional
    - Easily extensible Formatter support to add new input/output types
    - Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations
    - Convention based design that drives you into doing the right thing for HTTP Services
    - Very extensible, based on MVC like extensibility model of Formatters and Filters
    - Self-hostable in non-Web applications
    - Testable using testing concepts similar to MVC
    Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straightforward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API.
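
    As a minimal sketch of what that looks like in practice (illustrative only; the Album type, data source and route are assumptions, not code from this article):

        using System.Collections.Generic;
        using System.Linq;
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        public class AlbumsController : ApiController
        {
            // GET api/albums - the result is serialized as JSON, XML, etc.
            // based on the request's Accept header; no per-format code needed.
            public IEnumerable<Album> GetAlbums()
            {
                return AlbumData.Current; // hypothetical data source
            }

            // GET api/albums/5 - HttpResponseMessage gives full control
            // over status codes and headers when you need it.
            public HttpResponseMessage GetAlbum(int id)
            {
                var album = AlbumData.Current.FirstOrDefault(a => a.Id == id);
                if (album == null)
                    return Request.CreateResponse(HttpStatusCode.NotFound);
                return Request.CreateResponse(HttpStatusCode.OK, album);
            }
        }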
    The Routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with focus on HTTP access and manipulation in Controller methods rather than HTML generation in MVC. There's much improved support for content negotiation based on HTTP Accept headers, with the framework capable of detecting automatically what content the client is sending and requesting, and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications, and being able to choose the right data format for what fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort.

    No Brainers for Web API
    There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even a part of an application, is some sort of API then Web API makes great sense.

    HTTP Services
    If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion over what should go where (MVC or API). Perfect fit.

    Primary AJAX Backends
    If you're building rich client Web applications that rely heavily on AJAX callbacks to serve their data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling the data from the server with AJAX, which means the business logic required for data retrieval and data acceptance and validation also lives in the Web API. Perfect fit.

    Generic HTTP Endpoints
    Another good fit are generic HTTP endpoints that serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server, or an upload handler, in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via Controller methods) or be separated out as more full featured APIs. Granted this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries and Web API addresses this nicely. Great fit.

    Mixed HTML and AJAX Applications: Not a clear Choice
    For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when does it make sense to use MVC vs.
    Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX functionality for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also, MVC supports JSON and XML result data fairly easily as well, so there's some confusion over where that 'trigger point' is of when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data, Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box it'd be overkill to separate out that logic into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET) so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily, and that functionality is not going away, so just because Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC obviously either, since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API or application in general, MVC is still a better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single Controllers so that you don't have to make any trade offs, but at the moment that's not the case.

    Some Issues To think about
    Web API is similar to MVC but not the Same
    Although Web API looks a lot like MVC, it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data.

    Code Duplication
    I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API, it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API, which might need something different altogether. More often than not though, the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice.

    AJAX Data or AJAX HTML
    On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and its place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says:
    I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-)
    You know what - I can totally relate to that.
    On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this, the overhead ended up being actually fairly small if you keep the 'data' requests small and atomic. Performance was often made up by the lack of client side rendering of HTML. Server rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about especially for AJAX or SPA style applications.

    Summary
    Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available it doesn't mean that other tools or tech that came before it should be discarded or even upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

    Resources
    - ASP.NET Web API
    - AspConf Ask the Experts Session (first 5 minutes)

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process:
    - Putting your database in to a source control system, and,
    - Running a continuous integration process, each time changes are checked in.
    However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:
    - Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?),
    - Deploying the database changes with a managed, automated approach,
    - Monitoring what you’ve just put live, to make sure you haven’t broken anything.
    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment, followed by a final post dealing with the last item: monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!

    Background on Continuous Delivery
    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:
    - Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
    - Versioning Databases – Branching and Merging, by Scott Allen
    In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
    That is possible (and is what Martin calls Continuous Deployment), but again it's more than I describe in this article. And I doubt I need to remind DBAs or developers to Proceed with Caution!

    Integration Testing

    Back to something practical. The next stage, building on our setup from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/, or it can be downloaded separately from www.tsqlt.org – though I'll provide a step-by-step guide below for setting it up.

    Getting tSQLt set up via SQL Test

    1. Go to http://www.red-gate.com/products/sql-development/sql-test/ and click the blue Download button to download the Red Gate SQL Test product, if not already installed.
    2. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed.
    3. Open SSMS. You should now see SQL Test under the Tools menu. Clicking this link will give you the basic SQL Test dialogue.
    4. As yet, though we've installed the SQL Test product, we haven't installed the tSQLt test framework on any particular database. To do this, we need to add our RedGateApp database using this dialogue: click the "+ Add Database to SQL Test…" link, select the RedGateApp database, and click the Add Database link.
    5. In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option. SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database, but we won't be using them in this particular simple example.
    6. Once you've clicked the OK button, the changes described in the dialogue will be made to your database.

    We've now installed the framework; however, we haven't actually created any tests, so that will be the next step. But before we proceed: we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that your build still runs with the new additions! (And a quick check of the RedGateAppCI database shows that the changes have been made.)

    Creating and Testing a Unit Test

    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data, and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

    Adding Reference Data to our Database

    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
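    As an illustration only, the reference data used here amounts to something like the script below. The table and column names (dbo.Email, with id and Email columns) match the steps that follow, but the addresses are placeholder values I've assumed; in the walkthrough the rows are entered through the SSMS edit grid rather than scripted:

        -- Illustrative sketch only: in the walkthrough this data is entered via the SSMS edit grid.
        -- Assumes a dbo.Email table with columns id (int) and Email (varchar); addresses are placeholders.
        INSERT INTO dbo.Email (id, Email)
        VALUES (1, 'john.lennon@beatles.com'),
               (2, 'paul.mccartney@beatles.com'),
               (3, 'george.harrison@beatles.com'),
               (4, 'ringo.starr@beatles.com');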
    First of all, I need to add some data to my solitary table. This can be done a number of ways, but I'll do it in SSMS for simplicity: I open the table and add the reference data to it.

    Currently this reference data just exists in the database. For proper integration testing, it needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table: right-click the table, select Design, then right-click on the first "id" row and click "Set Primary Key". NB: once this change is made, click Save to save the change to the table.

    Then, to source control this reference data, right-click on the table (dbo.Email) and select the option to link its static data. In the next screen, link the data in the Email table by selecting it from the list and clicking "Save and close". We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo.

    NB: From here on, I won't show screenshots for the GitHub side of things – it's the same each time. Whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).

    An interesting point to note here: when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control…"), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but the mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mentioned, the test I'm going to use here is a very simple one: are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) – it's just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article.

    In SSMS, select "SQL Test" from the Tools menu, then click on "+ New Test". In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in.
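    For reference, the generated placeholder is roughly along the lines of the sketch below – the exact template text varies between versions of SQL Test and tSQLt, so treat this as an illustration rather than a verbatim copy:

        -- A sketch of the kind of placeholder SQL Test generates for a new test.
        -- The exact wording differs between versions; illustrative only.
        CREATE PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            --Assemble: set up any tables or data the test needs.

            --Act: run the code under test.

            --Assert: fail until the test is actually implemented.
            EXEC tSQLt.Fail 'TODO: Implement this test.';
        END;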
    We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use is below. This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @Output VARCHAR(MAX);
            SET @Output = '';

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE Email NOT LIKE '%_@__%.__%';

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output;
                EXEC tSQLt.Fail @Output;
            END
        END;

    Once this script is entered, hit Execute to add the stored procedure to the database.

    Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, what we also need to do is check that the test is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before and you'll see the test fail. Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.)

    The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for RedGateAppCI shows that the SPROC for the new test has been moved over to that database. However, this is not exactly what we were after. We didn't want to just copy objects from one database to another, but to actually run the tests as part of the build/integration test process – i.e. to continuously check any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did above, with the invalid email address) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows – sadly – a successful build!

    To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document to make the following change, save it, then commit and sync the change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number) – superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on the build page (below the fold). The interesting part of the log is towards the bottom. Pulling out this part:

        21-Jun-2013 11:35:19    Build FAILED.
        21-Jun-2013 11:35:19
        21-Jun-2013 11:35:19    "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
        21-Jun-2013 11:35:19    (sqlCI target) ->
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS I correct the invalid email address, check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This fixed the build – it worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment: we've tested our database changes – how can we now deploy these to target sites?
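    One closing aside: once the tSQLt framework is installed on a database, the tests can also be run directly from T-SQL, without the SQL Test GUI or the build server, which is handy for a quick local check before committing. A minimal sketch, using the MyChecks test class from above:

        -- Run every test in the MyChecks test class:
        EXEC tSQLt.Run 'MyChecks';

        -- Or run all tSQLt tests in the database:
        EXEC tSQLt.RunAll;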
