Search Results

Search found 2575 results on 103 pages for 'canonical cover'.


  • Visual Studio 2010 RC – Silverlight 4 and WCF RIA Services Development - Updates from MIX Anno

    - by Harish Ranganathan
    MIX is happening and there is a lot of excitement around the various releases such as the Windows Phone 7 Developer Preview, the IE9 Platform Preview and a few other announcements that have been made.  Clearly, the Windows Phone 7 Developer Preview has generated the most interest and opened a plethora of opportunities for .NET developers.  It also takes mobile development to a new generation and doesn't force developers to learn a different programming language. Along with this, a few other releases are out.  The much anticipated Silverlight 4 RC is out, and its corresponding templates are also out there for you to download.  When VS 2010 RC was released, it was a bit of a disappointment that it didn't support SL4 development or SL4 Business Application Development (a.k.a. WCF RIA Services).  There were a few workarounds, though nothing concrete.  Earlier I had written about how the WCF RIA Services Preview does work with ASP.NET development using VS 2010 RC. However, with the release of the SL4 RC and the corresponding tooling updates, one can develop for both SL4 as well as SL4 + WCF RIA Services using VS 2010 RC.  This is important and keeps the continuum going until VS 2010 RTMs.  So, the purpose of this post is to quickly give the updates and links to install the relevant tools: the Silverlight 4 RC Runtime (Windows Runtime or the Mac Runtime), the Silverlight 4 RC Developer Tools for Visual Studio 2010 RC (Silverlight 4 Tools for Visual Studio 2010, which installs the Runtime automatically as well), and Expression Blend 4 Beta. If you install the SL4 RC Developer Tools, they also install the WCF RIA Services Preview automatically.  You just need to install the WCF RIA Services Toolkit, which can be downloaded from Install the WCF RIA Services Toolkit. Of course, you can also just install WCF RIA Services for VS 2010 RC separately (without SL4 Tools) from here. Kindly note, all the above-mentioned links are with respect to the Visual Studio 2010 RC edition.  If you are developing with VS 2008, then you can just target SL3 (as I write this, there seems to be no official support for developing SL4 with VS 2008) and the related tools can be downloaded from http://www.silverlight.net/getstarted/ Basically you need to download the SL3 Runtime, SDK, Expression Blend 3 and the Silverlight Toolkit.  All the links for the downloads are available on the above-mentioned page. Also, a version of WCF RIA Services that is supported in VS 2008 is available for download at WCF RIA Services Beta for VS 2008. I know there are far too many things to keep in mind, so I put together a flowchart that could help depict it pictorially.  Note that this is just my own imagination and doesn't cover all scenarios; for example, if you are developing for neither Web Forms nor Silverlight, you end up nowhere, whereas in an actual scenario you may want to develop Desktop, Services, Console, Game applications and what not.  So keep in mind this is just Web. Cheers!!!

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost of fixing them. The worst case scenario, of course, is for a bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (alternatively, it can be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were to be identified at this stage, the testers would simply not 'waste their time' touching the build at all. Such a 'broken build' will trigger an alert, and the development team's number one priority should be to resolve the issue. How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following: 1) Your application code is being compiled and tested; you therefore need a database to be maintained at the corresponding version. 2) Just as a valid application should compile, so should the database; it should therefore be possible to build a new database from scratch. 3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version. I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci If you have any questions, feel free to contact me directly or post a comment to this blog post.
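    To make point 2 concrete, here is a minimal, hypothetical C# sketch (not Red Gate tooling) of a CI step that rebuilds a throwaway database from the schema scripts held in source control. The server name, database name and script folder are placeholders, and the scripts are assumed to be plain batches without GO separators.

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class DatabaseCiBuild
        {
            // Rebuild a throwaway database from the schema scripts kept in source control.
            // If any script fails to run, the exception fails the CI step.
            static void Main()
            {
                using (var master = new SqlConnection(@"Server=(local);Database=master;Integrated Security=true"))
                {
                    master.Open();
                    Run(master, "IF DB_ID('CiScratch') IS NOT NULL DROP DATABASE CiScratch");
                    Run(master, "CREATE DATABASE CiScratch");
                }

                using (var scratch = new SqlConnection(@"Server=(local);Database=CiScratch;Integrated Security=true"))
                {
                    scratch.Open();
                    // Scripts are assumed to contain no GO batch separators.
                    foreach (string file in Directory.GetFiles(@".\schema", "*.sql"))
                    {
                        Run(scratch, File.ReadAllText(file));
                    }
                }
                Console.WriteLine("Database built from scratch - schema is valid.");
            }

            static void Run(SqlConnection connection, string sql)
            {
                using (var command = new SqlCommand(sql, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }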

    Read the article

  • Live Debugging

    - by Daniel Moth
    Based on my classification of diagnostics, you should know what live debugging is NOT about - at least according to me :-) and in this post I'll share how I think of live debugging. These are the (outer) steps to live debugging: 1) Get the debugger in the picture. 2) Control program execution. 3) Inspect state. 4) Iterate between 2 and 3 as necessary. 5) Stop debugging (and potentially start a new iteration going back to step 1). Step 1 has two options: start with the debugger attached, or execute your binary separately and attach the debugger later. You might say there is a 3rd option, where the app notifies you that there is an issue, referred to as JIT debugging. However, that is just a variation of the attach, because that is when you start the debugging session: when you attach. I'll be covering in future posts how this step works in Visual Studio. Step 2 is about pausing (or breaking) your app so that it makes no progress and remains "frozen". A sub-variation is to pause only parts of its execution, or in other words to freeze individual threads. I'll be covering in future posts the various ways you can perform this step in Visual Studio. Step 3 is about seeing what the state of your program is when you have paused it. Typically it involves comparing the state you are finding with a mental picture of what you thought the state would be, or simply checking invariants about the intended state of the app against the actual state of the app. I'll be covering in future posts the various ways you can perform this step in Visual Studio. Step 4 is necessary if you need to inspect more state - rinse and repeat. Self-explanatory, and it will be covered as part of steps 2 & 3. Step 5 is the most straightforward, with 3 options: detach the debugger; terminate your binary through the normal way that it terminates (e.g. close the main window); or terminate the debugging session through your debugger, with the result that it terminates the execution of your program too. In a future post I'll cover the ways you can detach or terminate the debugger in Visual Studio. I found an old picture I used to use to map the steps above onto Visual Studio 2010. It is basically the Debug menu with colored rectangles around each menu item mapping it to one of the first 3 steps (step 5 was merged with step 1 for that slide). Here it is in case it helps: Stay tuned for more... Comments about this post by Daniel Moth welcome at the original blog.
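    As an aside, here is a small illustrative C# snippet (not from the post above) showing the "attach later" variation of step 1 and the programmatic equivalent of step 2, using the standard System.Diagnostics.Debugger API:

        using System;
        using System.Diagnostics;

        class JitAttachDemo
        {
            static void Main()
            {
                // Step 1, "attach later" variation: if no debugger is attached yet,
                // ask the operating system to offer a just-in-time debugging session.
                if (!Debugger.IsAttached)
                {
                    Debugger.Launch();
                }

                // Step 2: break execution programmatically, as if a breakpoint had been hit,
                // so that state can be inspected (step 3) in the attached debugger.
                Debugger.Break();

                Console.WriteLine("Continuing after the break.");
            }
        }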

    Read the article

  • Understanding the JSF Lifecycle and ADF Optimized Lifecycle

    - by Steven Davelaar
    While coaching ADF development teams over the years, I have noticed that many developers lack a basic understanding of JavaServer Faces, in particular the JSF lifecycle and how ADF optimizes this lifecycle in specific situations. As a result, ADF developers who are tasked with building a seemingly simple ADF page can get extremely frustrated by the - in their eyes - unexpected or illogical behavior of ADF.  They start to play with the immediate property and the partialTriggers property in a trial-and-error manner. Often, they play with these properties until their specific issue is solved, unaware of other more severe bugs that might be introduced by the values they choose for these properties. So, I decided to submit a presentation for the UKOUG entitled "What you need to know about JSF to be successful with ADF".  The abstract was accepted, and I started putting together the presentation and demo application. I built up a demo application step by step, trying to cover the top JSF-related issues and challenges I encountered over the years in a simple "Hello World" demo. This turned out to be both a very time-consuming and very interesting journey. I had never thought I would learn so much myself in preparing this presentation. I never thought I would end up with potentially controversial conclusions like "Never set immediate=true on an editable component".  I did not realize the sometimes immense implications of the ADF optimized lifecycle beforehand. I never thought that "Hello World" demos could get so complex. But as I went on I was confident this was valuable material, even for experienced ADF developers with a good understanding of JSF. When I finished, I realized the original title and abstract were misleading, as was the target audience. Yes, it was covering the JSF lifecycle, but not the other aspects of JSF you need to know for ADF development. Yes, it was covering some JSF basics as mentioned in the abstract, but all in all it had become a pretty advanced presentation. At the same time, the issues discussed are very common, and novice ADF developers might easily run into them while building their first pages. I ran out of time, so I decided to just present what I had, apologizing at the beginning for the misleading title and showing a second slide with a better title, "18 invaluable lessons about ADF-JSF interaction". I think the presentation was well received overall, although people who don't like it or don't understand it usually don't come and tell you afterwards.... I am still struggling with the title; for this blog post I used yet another title. Anyway, you can download the presentation-that-still-lacks-a-good-title here. The finished JDev 11.1.1.6 demo app can be downloaded here.  The 18 lessons mentioned in the presentation are summarized here. As mentioned on the last slide, print out the lessons and learn them by heart; I am pretty sure it will save you lots of time and frustration!

    Read the article

  • MySQL - Powering Online Media & Entertainment

    - by bertrand.matthelie(at)oracle.com
    If you're reading news, watching videos, or playing games online, you're probably relying on MySQL to do so.   Facebook, YouTube, BBC News, Zynga, thePlatform and many other leading Media & Entertainment organizations chose MySQL to power their online news, gaming, social networking, advertising or other applications.   During the past decade, the Media & Entertainment industry experienced a spectacular transformation.  The mobile Internet is becoming the dominant media platform, and the boundaries between the different types of media (i.e. Print, TV, Radio, Internet) have increasingly blurred as we've gradually come to perform more and more of our daily activities online.   To better understand how MySQL can help you win in the fast-paced world of Media & Entertainment, check out our whitepaper "MySQL - Powering The Online Media & Entertainment Industry", in which we cover: the key trends shaping the evolution of the media & entertainment industry; their implications, and the requirements they place on the infrastructure of information & entertainment services providers; and how you can leverage Oracle's MySQL technologies to quickly and cost-effectively deliver new highly scalable and highly available online media & entertainment applications.   You're welcome to download it here.

    Read the article

  • Case Management Patterns with Oracle Unified Business Process Management Suite

    - by Ajay Khanna
    Contributed by Heidi Buelow, Oracle Product Management. Case Management was a hot topic all week at Oracle OpenWorld, so I was excited to share our current features and upcoming plans at the session Thursday morning on Case Management Patterns with Oracle Unified Business Process Management Suite.  My colleague Ravi Rangaswamy, the Case Management Development Manager, and I, Heidi Buelow, the Case Management Product Manager, discussed case management use case patterns with an interested audience.  We also talked about the current BPM Suite offering for Case Management and showed a demo of our upcoming release, where Case Management becomes a first-class component in a BPM composite application. Case Management use case patterns cover a wide range of horizontal applications such as Accounts Payable, Dispute Resolution, Call Center, and Employee Onboarding, and many vertical applications in domains and industries such as Public Sector services, Insurance claims, and Healthcare.  Really, it is any use case where the resolution of a request may require a knowledge worker making decisions using experienced judgement in the current situation.  This allows for expedited care and customer satisfaction, both being highly valued for consumer loyalty, regulatory compliance, and efficient resolution. Today, BPM Suite provides the tools for creating Case Management applications using BPMN 2.0, Business Rules, and rich BAM and Case Analytics.  The Process Composer gives business users the agility to change rules and processes.  The case manager and case workers have the flexibility they need.  With integrated content management and the concept of a BPM Process Spaces instance (case) space, the current release enables case management use case applications. In the next release, Case Management becomes a first-class component. By this we mean that Case is a separate component in the composite.  We are adding case attributes such as milestones, case events, case stakeholders, and more, providing a rich toolset for the use cases that require a flexible Case Management approach.  Activities become available according to the conditions that you specify, and information can be protected by the permissions indicated.  In BPM Studio, you design a Case and associate all of the attributes and activities that are needed, yet at runtime you have the flexibility to add and change these as needed. We enjoyed sharing Case Management and it was well received by the audience.  The presentation is available online, and we have viewlets of the demo that will be available at release time.

    Read the article

  • Correcting Grammar for Microsoft Products and Technology

    I see book authors, editors, bloggers, press, team members, and occasionally even a VP misspell our products, technologies, and features so often that I thought I would build and maintain a list of the correct capitalization and spelling of the most commonly misspelled Microsoft products and technologies. Sources: internal site (brandtools) and the Microsoft Trademarks Web site. Last updated: April 27, 2010
    Incorrect → Correct
    .net or .Net → .NET
    .Net framework 4.0, .NET framework 4.0 → .NET Framework
    AdCenter, Ad Center, Adcenter → adCenter
    Ado.net, ADO.Net → ADO.NET
    Asp.net, ASP.Net → ASP.NET
    Asp.Net ajax, Asp.NET Ajax → ASP.NET AJAX
    Asp.Net Mvc → ASP.NET MVC
    Biz Spark, Bizspark → BizSpark
    Clear Type, Clear type, Cleartype → ClearType
    Directaccess, Direct Access → DirectAccess
    Direct Show, Directshow → DirectShow
    Direct X → DirectX
    Dream Spark, Dreamspark → DreamSpark
    Home Group, Home group → HomeGroup
    HotMail, Hot Mail → Hotmail
    Info Path, Infopath → InfoPath
    Intellisense → IntelliSense
    Iron Ruby → IronRuby
    Kin → KIN
    Linq → LINQ
    MSN Messenger → Windows Live Messenger
    One Note, Onenote → OneNote
    Open type, Opentype → OpenType
    PlayTo, Play to → Play To
    Power Point, Powerpoint → PowerPoint
    Powershell, Power Shell → PowerShell
    Sea Dragon, Seadragon → SeaDragon
    Sharepoint, Share Point → SharePoint
    Silver Light, SilverLight → Silverlight
    Skydrive, Sky Drive → SkyDrive
    Sql Server → SQL Server
    Visual Basic .net (the .net was removed in the 2005 version) → Visual Basic
    Visual C# Express 2010 or Visual Basic Express 2010 or Visual C++ Express 2010 → Visual version 2010 Express, as in Visual C# 2010 Express, Visual Basic 2010 Express
    Visual Studio 2010 Team Foundation Server → Visual Studio Team Foundation Server 2010
    Visual Studio Ultimate 2010 or Visual Studio Professional 2010 → Visual Studio 2010 version, as in Visual Studio 2010 Ultimate, Visual Studio 2010 Professional
    WebSite Spark, Website spark → Website Spark
    Win 32 → Win32
    Windows Mobile (except when referring to previous versions like 5.0 or 6), Windows phone 7 Series → Windows Phone
    Xaml → XAML
    XBOX, xbox → Xbox
    Xbox Live, XBOX Live → Xbox LIVE
    Caveats: These guidelines don't apply to URLs (ex: www.asp.net); code namespaces, variables, and classes should follow the .NET Framework naming guidelines. This list only covers capitalization/spacing rules; it doesn't cover the correct usage of (tm) or similar symbols or the correct word usage rules. For those, refer to the trademark Web site. Also note that I have no idea why we are so inconsistent, say, on keeping features/brands two words versus one word, or on the order of product/version/year.

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
    A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on github with private repositories. We want to use github's native issues/pull-requests features (pull requests are basically exactly what we want as far as code reviews and feature discussions go). We found the tool hub by defunkt, which has a cool little feature of being able to convert an existing issue to a pull request and automatically associate your current branch with it. I'm wondering if it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid workflow (as it's basically what every open source project on github does), but we want to be sure that we can track issues and pull requests from ONE source, the organization's repository. So I have a few questions: Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed. In that case, we would want to institute a policy like that for those developers only. So, which is better? All developers in a single repository, or a fork for everyone? Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even one for less-privileged devs only), will the pull-request feature of hub operate on the pull requests from the upstream master repository (the organization's repository), or does it have different behavior? EDIT I did some testing with issues, forks, and pull requests and found the following: if you create an issue on your organization's repository, then fork the repository from your organization to your own github account, make some changes, and merge to your fork's master branch, when you try to run hub -i <issue #> you get an error, "User is not authorized to modify the issue." So, apparently that workflow won't work.

    Read the article

  • Transparent Technology from Amazon

    - by David Dorf
    Amazon has been making some interesting moves again, this time in the augmented humanity area.  Augmented humanity is about helping humans overcome their shortcomings using technology.  Putting a powerful smartphone in your pocket helps you in many ways, like navigating streets, communicating with far-off friends, and accessing information.  But the interface for smartphones is somewhat limiting and unnatural, so companies have been looking for ways to make the technology more transparent and therefore easier to use. When Apple helped us drop the stylus, we took a giant leap forward in simplicity.  Using touchscreens with intuitive gestures was part of the iPhone's original appeal.  People don't want to know that technology is there -- they just want the benefits.  So what's the next leap beyond the touchscreen to make smartphones even easier to use? Two natural ways we interact with the world around us are by using sight and voice.  Google and Apple have been using both in their mobile platforms for limited use cases.  Nobody actually wants to type a text message, so why not just speak it?  And if you want more information about a book, why not just snap a picture of the cover?  That's much more accurate than trying to key in the title and/or author. So what's Amazon been doing?  First, Amazon released a new iPhone app called Flow that allows iPhone users to see information about products in context.  Yes, it's an augmented reality app that uses the phone's camera to view products and overlays data about the products on the screen.  For the most part it requires the barcode to be visible to correctly identify the product, but I believe it can also recognize certain logos as well.  Download the app and try it out, but don't expect perfection.  It's good enough to demonstrate the concept, but it's far from accurate enough.  (MobileBeat did a pretty good review.)  Extrapolate to the future and we might just have a heads-up display in our eyeglasses. The second interesting area is voice response, for which Siri is getting lots of attention.  Amazon may have purchased a voice recognition company called Yap, although the deal is not confirmed.  But it would make perfect sense, especially with the Kindle Fire in Amazon's lineup. I believe over the next 3-5 years the way in which we interact with smartphones will mature, and they will become more transparent yet more important to our daily lives.  This will, of course, impact the way we shop, making information more readily accessible than it already is.  Amazon seems to be positioning itself to be at the forefront of this trend, so we should be watching them carefully.

    Read the article

  • Impulsioned jumping

    - by Mutoh
    There's one thing that has been puzzling me, and that is how to implement a 'faux-impulsed' jump in a platformer. If you don't know what I'm talking about, then think of the jumps of Mario, Kirby, and Quote from Cave Story. What do they have in common? Well, the height of your jump is determined by how long you keep the jump button pressed. Knowing that these characters' 'impulses' are built not before their jump, as in actual physics, but rather while in mid-air - that is, you can very well lift your finger midway to the max height and it will stop, even if with deceleration between that and the full stop; which is why you can simply tap for a hop and hold it for a long jump - I am mesmerized by how they keep their trajectories as arcs. My current implementation works as follows: While the jump button is pressed, gravity is turned off and the avatar's Y coordinate is decremented by the constant value of the gravity. For example, if things fall at Z units per tick, it will rise Z units per tick. Once the button is released or the limit is reached, the avatar decelerates by an amount that would make it cover X units until its speed reaches 0; once it does, it accelerates until its speed matches gravity - sticking to the example, I could say it accelerates from 0 to Z units/tick while still covering X units. This implementation, however, makes jumps too diagonal, and unless the avatar's speed is faster than the gravity, which would make it way too fast in my current project (it moves at about 4 pixels per tick and gravity is 10 pixels per tick, at a framerate of 40FPS), it also makes them more vertical than horizontal. Those familiar with platformers would notice that the characters' arced jumps almost always allow them to jump further even if they aren't as fast as the game's gravity, and when they don't, if not played right, they prove to be very counter-intuitive. I know this because I can attest that my implementation is very annoying. Has anyone ever attempted similar mechanics, and maybe even succeeded? I'd like to know what's behind this kind of platformer jumping. If you haven't ever had any experience with this beforehand and want to give it a go, then please, don't try to correct or enhance my explained implementation, unless I was on the right track - try to make up your solution from scratch. I don't care if you use gravity, physics or whatnot; as long as it shows how these pseudo-impulses work, it does the job. Also, I'd like its presentation to avoid language-specific coding; like, sharing us a C++ example, or Delphi... As much as I'm using the XNA framework for my project and wouldn't mind C# stuff, I don't have much patience to read others' code, and I'm certain game developers of other languages would be interested in what we achieve here, so don't mind sticking to pseudo-code. Thank you beforehand.
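    For what it's worth, here is one common way to sketch this in C# (XNA-style per-tick update): apply the full impulse the moment the jump starts, then reduce gravity only while the button is held and the avatar is still rising, so hold duration controls height while the trajectory stays an arc. Horizontal movement is handled elsewhere and stays independent, which is what preserves the arc shape. The tuning constants are made-up values, not taken from the question.

        // Hypothetical per-tick update for a variable-height jump in screen coordinates
        // (Y grows downward, ground at Y = 0). Tuning constants are made-up values.
        class Jumper
        {
            const float Gravity = 0.5f;    // downward acceleration per tick
            const float JumpSpeed = -8f;   // upward velocity applied when the jump starts
            const int MaxHoldTicks = 12;   // how long holding the button keeps damping gravity

            float velocityY;
            int holdTicks;
            bool onGround = true;

            public float Y { get; private set; }

            public void Update(bool jumpHeld)
            {
                if (onGround && jumpHeld)
                {
                    velocityY = JumpSpeed;   // the full impulse is applied up front...
                    holdTicks = 0;
                    onGround = false;
                }

                // ...but while the button is held and the avatar is still rising, gravity is
                // reduced, so a longer press produces a higher jump with the same arc shape.
                bool damped = jumpHeld && velocityY < 0f && holdTicks < MaxHoldTicks;
                velocityY += damped ? Gravity * 0.4f : Gravity;
                holdTicks++;

                Y += velocityY;
                if (Y >= 0f)                 // landed (or never left the ground)
                {
                    Y = 0f;
                    velocityY = 0f;
                    onGround = true;
                }
            }
        }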

    Read the article

  • ATG Live Webcast March 21 Reminder: Network, WAN, and PC Performance Tuning (Performance Series Part 3 of 3)

    - by BillSawyer
    A quick reminder about tomorrow's webcast: Andy Tremayne, Senior Architect, Applications Performance, and co-author of Oracle Applications Performance Tuning Handbook from Oracle Press, and Uday Moogala, Senior Principal Engineer, Applications Performance, will discuss network performance for E-Business Suite. Andy and Uday will cover tuning the client and tuning the network. They will share real-life examples of network performance, and show you tools and techniques that you can use to estimate or simulate performance on your own network. The agenda for the Performance Tuning - Part 3 of 3 webcast includes the following topics: Tuning the Client; Tuning the Network.
    Date: Thursday, March 21, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Andy Tremayne, Senior Architect, Applications Performance; Uday Moogala, Senior Principal Engineer, Applications Performance
    Webcast Registration Link (Preregistration is optional but encouraged)
    To hear the audio feed: Domestic Participant Dial-In Number: 877-697-8128; International Participant Dial-In Number: 706-634-9568; Additional International Dial-In Numbers Link; Dial-In Passcode: 99341
    To see the presentation, the Direct Access Web Conference details are: Website URL: https://ouweb.webex.com; Meeting Number: 591264961
    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And, you can find our archive of our past webcasts and training here.

    Read the article

  • SQL SERVER – Validating Spatial Object with IsValidDetailed Function

    - by pinaldave
    What do you prefer – an error, or a warning indicating an error may happen along with the reason for it? While writing the previous statement I remembered the movie "Minority Report". This blog post is not about that movie, but I will still cover its concept in a single statement: "Let us predict the future and prevent the crime which is about to happen." (Please feel free to correct me if I am wrong about the movie concept; I really do not want to hurt your sentiments if you are a dedicated fan.) Let us switch to the SQL Server world. Spatial data types are an interesting concept. I love writing about spatial data types because they allow me to be creative with shapes (just like toddlers). When working with spatial datatypes, all is good when the spatial object works fine. However, when the spatial object has an issue or is created with invalid coordinates, it used to give a simple error saying there is an issue with the object, without providing much information. This made it very difficult to debug. If the spatial object was used in a big procedure, and that big procedure errored out because of the invalid spatial object, it was indeed very difficult to debug. I always wished that more information was provided regarding what the problem with the spatial datatype is. SQL Server 2012 has introduced the new function IsValidDetailed(). This function has made my life very easy. In simple words, this function will check whether the spatial object passed is valid or not. If it is valid, it will report that it is valid. If the spatial object is not valid, it will return the answer that it is not valid along with the reason. This makes it very easy to debug the issue and make the necessary correction.
    DECLARE @p GEOMETRY = 'Polygon((2 2, 6 6, 4 2, 2 2))'
    SELECT @p.IsValidDetailed()
    GO
    DECLARE @p GEOMETRY = 'Polygon((2 2, 3 3, 4 4, 5 5, 6 6, 2 2))'
    SELECT @p.IsValidDetailed()
    GO
    DECLARE @p GEOMETRY = 'Polygon((2 2, 4 4, 4 2, 2 3, 2 2))'
    SELECT @p.IsValidDetailed()
    GO
    DECLARE @p GEOMETRY = 'CIRCULARSTRING(2 2, 4 4, 0 0)'
    SELECT @p.IsValidDetailed()
    GO
    DECLARE @p GEOMETRY = 'CIRCULARSTRING(2 2, 4 4, 0 0)'
    SELECT @p.IsValidDetailed()
    GO
    DECLARE @p GEOMETRY = 'LINESTRING(2 2, 4 4, 0 0)'
    SELECT @p.IsValidDetailed()
    GO
    Here is the resultset of the above queries. You can see some valid queries and some invalid ones. If the query is invalid, it also gives the reason along with the error message. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Spatial Database, SQL Spatial

    Read the article

  • Play Your Position Until the Play Breaks Down…then Do Whatever it Takes.

    - by AjarnMark
    If I didn’t know better, I would think that K. Brian Kelley (blog | twitter) has been listening in on conversations with my boss. In his recent blog post Successful Teams: Knowing When to Step Out of Your Role, Brian describes quite clearly a philosophy that my boss has been trying to get across to everyone in the department.  We have been using sports analogies, like how important it is to play your position until the play breaks down (such as a fumble), and then do whatever it takes to cover each other / recover the ball / win.  While we like having very skilled people who could do a lot of different tasks, it is important that you first do your assigned tasks, and only once those are complete, or failure of the larger mission is probable, do you consider walking away from them to help someone else with their responsibilities. The thing that you cannot afford, especially on a lean team, is the really nice guy who is always trying to help out other people, but in doing so is never quite getting his own responsibilities taken care of.  Yes, if the Running Back drops the football, you want any member of the team in the vicinity to jump on it, whether that is the leading blocker or the Quarterback.  But until the fumble happens, you want the leading blocker to focus on doing his job and block for the Running Back.  If the blocker is doing any job other than his primary responsibility, you’re probably going to lose. This sounds logical enough, but it is really easy to go astray with the best of intentions.  This is especially true on a small, tight-knit team, where it is really easy to get sucked into someone else’s task or problem, doubly so if you think you can do it better or faster than they can.  Now you are really setting yourself up for failure.  The right thing is to let the other person do the job, even if it seems less efficient in the short run, so that you can focus on the tasks which require your expertise.  Don’t break formation… don’t abandon your assignment until it is clear that mission failure is imminent, and even then, as Brian writes, it should be with the agreement of the mission leader. Thanks, Brian, for putting it so well.  This has been distributed throughout our department.

    Read the article

  • Tuxedo Load Balancing

    - by Todd Little
    A question I often receive is how Tuxedo performs load balancing.  This is often asked by customers that see an imbalance in the number of requests handled by servers offering a specific service. First of all, let me say that Tuxedo really does load or request optimization instead of load balancing.  What I mean by that is that Tuxedo doesn't attempt to ensure that all servers offering a specific service get the same number of requests, but instead attempts to ensure that requests are processed in the least amount of time.   Simple round-robin "load balancing" can be employed to ensure that all servers for a particular service are given the same number of requests.  But the question I ask is, "to what benefit"?  Instead, Tuxedo scans the queues (which may or may not correspond to servers, based upon SSSQ - Single Server Single Queue - or MSSQ - Multiple Server Single Queue) to determine on which queue a request should be placed.  The scan is always performed in the same order, and during the scan if a queue is empty the request is immediately placed on that queue and request routing is done.  However, should all the queues be busy, meaning that requests are currently being processed, Tuxedo chooses the queue with the least amount of "work" queued to it, where work is the sum of all the requests queued weighted by their "load" value as defined in the UBBCONFIG file.  What this means is that under light loads, only the first few queues (servers) process all the requests, as an empty queue is often found before reaching the end of the scan.  Thus the first few servers in the scan handle most of the requests.  While this sounds non-optimal, in fact it capitalizes on the underlying operating system and hardware behavior to produce the best possible performance.  Round-robin scheduling would spread the requests across all the available servers, thus requiring all of them to be in memory, and they would likely not share much in the way of hardware or memory caches.  Tuxedo's approach maximizes the various caches and thus optimizes overall performance.  Hopefully this makes sense and now explains why you may see a few servers handling most of the requests.  Under heavy load, meaning enough load to keep all servers that can handle a request busy, you should see a relatively equal number of requests processed.  In my next post I'll try and cover how this applies to servers in a clustered (MP) environment, because the load balancing there is a little more complicated. Regards, Todd Little, Oracle Tuxedo Chief Architect
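    To make the scanning rule concrete, here is a small illustrative sketch of the selection logic as described above. This is not Tuxedo source code, just the rule restated in C#; the load values stored per request stand in for the "load" weights from UBBCONFIG.

        using System.Collections.Generic;
        using System.Linq;

        // Each entry in Pending is the "load" value of a queued request.
        class RequestQueue
        {
            public Queue<int> Pending { get; } = new Queue<int>();
            public int QueuedWork => Pending.Sum();   // total queued work, weighted by load
        }

        static class Router
        {
            public static RequestQueue Choose(IList<RequestQueue> queues)
            {
                RequestQueue leastBusy = queues[0];
                foreach (RequestQueue q in queues)        // always scanned in the same order
                {
                    if (q.Pending.Count == 0)
                    {
                        return q;                         // an idle queue gets the request immediately
                    }
                    if (q.QueuedWork < leastBusy.QueuedWork)
                    {
                        leastBusy = q;
                    }
                }
                return leastBusy;                         // all busy: least weighted queued work wins
            }
        }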

    Read the article

  • Why no more macro languages?

    - by Muhammad Alkarouri
    In this answer to a previous question of mine about scripting languages' suitability as shells, DigitalRoss identifies the difference between macro languages and "parsed typed" languages, in terms of string treatment, as the main reason that scripting languages are not suitable for shell purposes. Macro languages include nroff and m4, for example. What are the design decisions (or compromises) needed to create a macro programming language? And why are most of the mainstream languages parsed rather than macro? This very similar question (and the accepted answer) covers fairly well why parsed typed languages, take C for example, suffer from the use of macros. I believe my question here covers different ground: Macro languages, or those working on a textual level, are not wholly failures. Arguably, they include bash, Tcl and other shell languages. And they work in a specific niche such as shells, as explained in my links above. Even m4 had a fairly long run of success, and some of the web template languages can be regarded as macro languages. It is quite possible that macros and parsed typing do not go well together and that is why macros "break" common languages. In the answer to the linked question, a macro like #define TWO 1+1 would have been covered by the common rules of the language rather than conflicting with those of the host language. And issues like "macros are not typed" and "code doesn't compile" are not relevant in the context of a language designed as untyped and interpreted, with little concern for efficiency. The question about the design decisions needed to create a macro language pertains to a hobby project I am currently working on: designing a new shell. Taking the previous question in context should clarify the difference between adding macros to a parsed language and my objective. I hope the clarification shows that the linked question doesn't cover this question, which has two parts: If I want to create a macro language (for a shell or a web template, for example), what limitations and compromises (and guidelines, if they exist) need to be considered? (Probably answerable by a link or reference.) Why have no macro languages succeeded in becoming mainstream except in particular niches? What makes typed languages successful in large-scale programming, while "stringly-typed" languages succeed in shells and one-liner-like environments?
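    As a toy illustration of what "working on a textual level" means (and of the #define TWO 1+1 pitfall mentioned above), here is a hypothetical macro expander in C# that does pure string substitution; a parsed, typed language would instead bind TWO to the value 2 before any multiplication happens.

        using System;
        using System.Collections.Generic;

        // A toy macro expander: definitions are rewritten as text, with no parsing at all.
        class ToyMacroExpander
        {
            readonly Dictionary<string, string> macros = new Dictionary<string, string>();

            public void Define(string name, string body)
            {
                macros[name] = body;
            }

            public string Expand(string input)
            {
                foreach (KeyValuePair<string, string> macro in macros)
                {
                    input = input.Replace(macro.Key, macro.Value);   // purely textual substitution
                }
                return input;
            }

            static void Main()
            {
                var expander = new ToyMacroExpander();
                expander.Define("TWO", "1+1");

                // Prints "1+1*3": an arithmetic evaluator reads that as 1 + (1*3) = 4,
                // whereas a parsed, typed language binding TWO to the value 2 would give 6.
                Console.WriteLine(expander.Expand("TWO*3"));
            }
        }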

    Read the article

  • Part 1 Basic Webtrends REST Examples

    - by GeekAgilistMercenary
    In this entry I just want to cover some examples of how to connect to the Webtrends DX Web Services.  The DX Web Services use REST as the architecture, providing simple URI-based end points to connect to.  With the Webtrends SDK you can connect to these services with your account information.  Here are the basic steps to retrieve a profile list, the reports from one of those profiles, and then the report you want from that report list. The first step is to create a Webtrends User.
    WebTrends.Sdk.Account.User webtrendsUser = new Account.User();
    webtrendsUser.UserName = username;
    webtrendsUser.Password = password;
    webtrendsUser.AccountName = account;
    After you create the Webtrends User, simply request a profile list by getting a list of ProfileDefinition objects.
    List<WebTrends.Sdk.Profile.ProfileDefinition> profiles = WebTrends.Sdk.Factory.NavigationFactory.BuildListing(webtrendsUser);
    Next you will want to grab a report list based on the profile you are in and your credentials.
    List<WebTrends.Sdk.Report.ReportDefinition> reports = WebTrends.Sdk.Factory.NavigationFactory.BuildListing(profiles[i], webtrendsUser);
    In the code above, i would equate to the specific profile you want from the retrieved list of profiles in the profiles list.  The common scenario is that one has pulled the profiles into a drop down, combo, or list box that the user can select from.  Then, when the user selects a specific profile, that profile object can be used to pull the list of ReportDefinitions. Once we have the report definitions, all sorts of criteria can be added together to query for a specific report.  This is also where things can get a little tricky.  For instance, take a look at the code below.
    WebTrends.Sdk.Factory.ReportFactory.CreateDimensionalReport(report.ID.ToString(), profiles[i].ID.ToString(), "2010m01", webtrendsUser);
    The CreateDimensionalReport method takes 4 parameters for this particular overload: the report ID, the profile ID, the Webtrends date format, and the Webtrends User object.  There are a number of other overloads available within this factory method that allow for passing the specific REST URI and other criteria to retrieve the report of your choice.  In the near future we will be adding some more to this method as well, which will provide more flexibility without needing to use the full REST URI. I will have more on this, so to all you coders out there using Webtrends DX Services, I hope this is helpful!  Enjoy. Original Entry
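    Putting the snippets above together, a minimal end-to-end sketch might look like the following. The credential parameters and the index i are placeholders, error handling is omitted, and the SDK calls are used exactly as shown above rather than taken from the SDK documentation.

        using System.Collections.Generic;

        class WebtrendsReportFetch
        {
            static void FetchJanuary2010Report(string username, string password, string account)
            {
                // 1. Authenticate with account credentials.
                var webtrendsUser = new WebTrends.Sdk.Account.User();
                webtrendsUser.UserName = username;
                webtrendsUser.Password = password;
                webtrendsUser.AccountName = account;

                // 2. List the profiles visible to this account.
                List<WebTrends.Sdk.Profile.ProfileDefinition> profiles =
                    WebTrends.Sdk.Factory.NavigationFactory.BuildListing(webtrendsUser);

                // 3. List the reports defined on the chosen profile (index 0 here; normally user-selected).
                int i = 0;
                List<WebTrends.Sdk.Report.ReportDefinition> reports =
                    WebTrends.Sdk.Factory.NavigationFactory.BuildListing(profiles[i], webtrendsUser);

                // 4. Pull one dimensional report for January 2010 ("2010m01" is the Webtrends period format).
                WebTrends.Sdk.Factory.ReportFactory.CreateDimensionalReport(
                    reports[0].ID.ToString(),
                    profiles[i].ID.ToString(),
                    "2010m01",
                    webtrendsUser);
            }
        }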

    Read the article

  • Solving Inbound Refinery PDF Conversion Issues, Part 1

    - by Kevin Smith
    Working with Inbound Refinery (IBR) and PDF Conversion can be very frustrating. When everything is working smoothly you kind of forget it is even there. Documents are checked into WebCenter Content (WCC), sent to IBR for conversion, converted to PDF, returned to WCC, and voilà - your Office documents have a nice PDF rendition available for viewing. Then a user checks in a bunch of password-protected Word files, the conversions fail, your IBR queue starts backing up, users start calling asking why their documents have not been released yet, and you spend a frustrating afternoon trying to recover and get things running properly again. Password-protected documents are one cause of PDF conversion failures, and I will cover those in a future blog post, but there are many other problems that can cause conversions to fail, especially when working with the WinNativeConverter and using the native applications, e.g. Word, to convert a document to PDF. There are other conversion options like PDFExportConverter, which uses Oracle OutsideIn to convert documents directly to PDF without the need for the native applications. However, to get the best fidelity to the original document the native applications must be used. Many customers have tried PDFExportConverter, but have stayed with the native applications for conversion since the conversion results from PDFExportConverter were not as good as when the native applications are used. One problem I ran into recently, which at least has an easy solution, is Word documents that display a Show Repairs dialog when the document is opened. If you open the problem document yourself you will see this dialog. This will cause the conversion to time out. Any time the native application displays a dialog that requires user input, the conversion will time out. The solution is to add a BulletProofOnCorruption setting to the registry for the user running Word on the IBR server. See this support note from Microsoft for details. The support note says to set the registry key under HKEY_CURRENT_USER, but since we are running IBR as a service the correct location is under HKEY_USERS\.DEFAULT. Also, since in our environment we were using Office 2007, the correct registry key to use was: HKEY_USERS\.DEFAULT\Software\Microsoft\Office\11.0\Word\Options Once you have done this, restart the IBR managed server and resubmit your problem document. It should now be converted successfully. For more details on IBR see the Oracle® WebCenter Content Administrator's Guide for Conversion.
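    If you prefer to script the change rather than edit the registry by hand, a small hypothetical C# one-off (run as an administrator on the IBR server) could write the value. The "11.0" segment reflects the Office version used in the environment above, so adjust it to match yours, and the DWORD value of 1 is assumed per the Microsoft support note referenced earlier.

        using Microsoft.Win32;

        class SetBulletProofOnCorruption
        {
            static void Main()
            {
                // Relative to HKEY_USERS; the "11.0" segment should match the Office version
                // installed on the IBR server (this is the key that worked in the post above).
                const string keyPath = @".DEFAULT\Software\Microsoft\Office\11.0\Word\Options";

                using (RegistryKey options = Registry.Users.CreateSubKey(keyPath))
                {
                    options.SetValue("BulletProofOnCorruption", 1, RegistryValueKind.DWord);
                }
            }
        }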

    Read the article

  • Bay Area Coherence Special Interest Group Next Meeting July 21, 2011

    - by csoto
    Date: Thursday, July 21, 2011
    Time: 4:30pm - 8:15pm ET (note that parking at 475 Sansome closes at 8:30pm)
    Where: Oracle Office, 475 Sansome Street, San Francisco, CA Google Map
    We will be providing snacks and beverages. Register! - Registration is required for building security.
    Presentation Line Up:
    5:10pm - Batch Processing Using Coherence in Oracle Group Policy Administration - Paul Cleary, Oracle
    Oracle Insurance Policy Administration (OIPA) is a flexible, rules-based policy administration solution that provides full record keeping for all policy lifecycle transactions. One component of OIPA is Cycle processing, which is the batch processing of pending insurance transactions. This presentation introduces OIPA and Cycle processing, describing the unique challenges of processing a high volume of transactions within strict time windows. It then reviews how OIPA uses Oracle Coherence and the Processing Pattern to meet these challenges, describing implementation specifics that highlight the simplicity and robustness of the Processing Pattern.
    6:10pm - Secure, Optimize, and Load Balance Coherence with F5 - Chris Akker, F5
    F5 Networks, Inc., the global leader in Application Delivery Networking, helps the world’s largest enterprises and service providers realize the full value of virtualization, cloud computing, and on-demand IT. Recently, F5 and Oracle partnered to deliver a novel solution that integrates Oracle Coherence 3.7 with F5 BIG-IP Local Traffic Manager (LTM). This session will introduce F5 and how you can leverage BIG-IP LTM to secure, optimize, and load balance application traffic generated from Coherence*Extend clients across any number of servers in a cluster and to hardware-accelerate CPU-intensive SSL encryption.
    7:10pm - Using Oracle Coherence to Enable Database Partitioning and DC Level Fault Tolerance - Alexei Ragozin, Independent Consultant, and Brian Oliver, Oracle
    Partitioning is a very powerful technique for scaling database-centric applications. One tricky part of a partitioned architecture is routing requests to the right database. The routing layer (routing table) should know the right database instance for each attribute which may be used for routing (e.g. account id, login, email, etc.): it should be fast, it should be fault tolerant, and it should scale. All of the above makes Oracle Coherence a natural choice for implementing such routing tables in partitioned architectures. This presentation will cover synchronization of the grid with multiple databases, conflict resolution, cross-cluster replication and other aspects related to implementing a robust partitioned architecture.
    Additional Info:
    - Download Past Presentations: The presentations from the previous meetings of the BACSIG are available for download here. Click on the presentation titles to download the PDF files.
    - Join the Coherence online community on our Oracle Coherence Users Group on LinkedIn.
    - Contact BACSIG with any comments, questions, presentation proposals and content suggestions.

    Read the article

  • ADF Code Guidelines

    - by Chris Muir
    During Oracle OpenWorld 2012 the ADF Product Management team announced a new OTN website, the ADF Architecture Square.  While OOW represents a great opportunity to let customers know about new and exciting developments, the problem with making announcements during OOW is that customers are bombarded with so many messages that it's easy to miss something important. So in this blog post I'd like to highlight that, as part of the ADF Architecture Square website, one of the initial core offerings is a new document entitled ADF Code Guidelines. Now the title of this document should hopefully make it obvious what the document contains, but what's the purpose of the document, and why did Oracle create it? Personally, having worked as an ADF consultant before joining Oracle, one thing I noted amongst ADF customers who had successfully deployed production systems was that they all approached software development in a professional and engineered way, and all of these customers had their own guideline documents on ADF best practices, conventions and recommendations.  These documents, designed to be consumed by their own staff to ensure ADF applications were "built right", typically sourced their guidelines from the team's own expert learnings and the huge amount of ADF technical collateral that is publicly available - from manuals and whitepapers, presentations and blog posts, some written by Oracle and some written by independent sources. Now this is all good and well for the teams that have gone through this effort, gathering all the information and putting it into structured documents; kudos to them.  But for new customers who want to break into the ADF space, who have project pressures to deliver ADF solutions without necessarily working on assembling best practices, creating such a document is understandably (regrettably?) a low priority.  So, recognising this hurdle, at Oracle we've devised the ADF Code Guidelines.  This document sets out ADF code guidelines, practices and conventions for applications built using ADF Business Components and ADF Faces Rich Client (release 11g and greater).  The guidelines are summarized from a number of Oracle documents and other 3rd party collateral, with the goal of giving developers and development teams a short circuit on producing their own best practices collateral. The document is not a finished product, but a living document that will be extended to cover new information as discovered or as the ADF framework changes. Readers are encouraged to discuss the guidelines on the ADF EMG and provide constructive feedback to me (Chris Muir) via the ADF EMG Issue Tracker. We hope you'll find the ADF Code Guidelines useful and look forward to providing updates in the near future. Image courtesy of paytai / FreeDigitalPhotos.net

    Read the article

  • Devoxx 2011: Java EE 6 Hands-on Lab Delivered

    - by arungupta
    With Alexis's help, I delivered a Java EE 6 hands-on lab to a packed room of about 40+ attendees at Devoxx 2011. The lab was derived from the OTN Developer Days 2012 version but added a lot more content to showcase several Java EE 6 technologies. The problem statement from the lab document states: This hands-on lab builds a typical 3-tier Java EE 6 Web application that retrieves customer information from a database and displays it in a Web page. The application also allows new customers to be added to the database. String-based and type-safe queries are used to query and add rows to the database. Each row in the database table is published as a RESTful resource and is then accessed programmatically. Typical design patterns required by a Web application, like validation, caching, observer, and partial page rendering, and cross-cutting concerns like logging, are explained and implemented using different Java EE 6 technologies. The lab covered Java Persistence API 2, Servlet 3, Enterprise JavaBeans 3.1, JavaServer Faces 2, Java API for RESTful Web Services 1.1, Contexts and Dependency Injection 1.0, and Bean Validation 1.0 over 47 pages of detailed self-paced instructions. Here is the complete Table of Contents: The lab can be downloaded from here and requires only the NetBeans IDE "All" or "Java EE" version, which includes GlassFish anyway. All the feedback received from the lab has been incorporated into the instructions and bugs filed (Updated 49559, 205232, 205248, 205256). 80% of the attendees could easily complete the lab, and some even completed it in much less than 3 hours. That indicates that either more content needs to be added to the lab or the intellectual level of the attendees at the conference was pretty high. I think the lab has enough content for 3 hours, but we moved at a much faster pace, so I conclude the latter. Truly a joy to conduct a lab for 40 Devoxxians! Another related lab that might be handy for folks is "Develop, Deploy, and Monitor your Java EE 6 applications using GlassFish 3.1 Cluster". It explains how to: create a 2-instance GlassFish cluster; front-end it with a Web server and a load balancer; demonstrate session replication and failover; and monitor the application using JavaScript. The complete lab instructions and source code are available and you can try them. I plan to continue evolving the contents of the Java EE 6 hands-on lab to cover more technologies and features and will announce them on this blog. Let me know what else you would like to see in future versions.

    Read the article

  • IRM and Consumerization

    - by martin.abrahams
    As the season of rampant consumerism draws to its official close on 12th Night, it seems a fitting time to discuss consumerization - whereby technologies from the consumer market, such as the Android and iPad, are adopted by business organizations. I expect many of you will have received a shiny new mobile gadget for Christmas - and will be expecting to use it for work as well as leisure in 2011. In my case, I'm just getting to grips with my first Android phone. This trend developed so much during 2010 that a number of my customers have officially changed their stance on consumer devices - accepting consumerization as something to embrace rather than resist. Clearly, consumerization has significant implications for information control, as corporate data is distributed to consumer devices whether the organization is aware of it or not. I daresay that some DLP solutions can limit distribution to some extent, but this creates a conflict between accepting consumerization and frustrating it. So what does Oracle IRM have to offer the consumerized enterprise? First and foremost, consumerization does not automatically represent great additional risk - if an enterprise seals its sensitive information. Sealed files are encrypted, and that fundamental protection is not affected by copying files to consumer devices. A device might be lost or stolen, and the user might not think to report the loss of a personally owned device, but the data and the enterprise that owns it are protected. Indeed, the consumerization trend is another strong reason for enterprises to deploy IRM - to protect against this expansion of channels by which data might be accidentally exposed. It also enables encryption requirements to be met even though the enterprise does not own the device and cannot enforce device encryption. Moving on to the usage of sealed content on such devices, some of our customers are using virtual desktop solutions such that, in truth, the sealed content is being opened and used on a PC in the normal way, and the user is simply using their device for display purposes. This has several advantages:
    - The sensitive documents are not actually on the devices, so device loss and theft are even less of a worry.
    - The enterprise has another layer of control over how and where content is used, as access to the virtual solution involves another layer of authentication and authorization - defence in depth.
    - It is a generic solution, which means the enterprise does not need to actively support the ever-expanding variety of consumer devices - the enterprise just manages virtual access to traditional systems using something like Citrix or Remote Desktop services.
    - It is a tried and tested way of accessing sealed documents. People have been using Oracle IRM in conjunction with Citrix and Remote Desktop for several years.
    For some scenarios, we also have the "IRM wrapper" option that provides a simple app for sealing and unsealing content on a range of operating systems. We are busy working on other ways to support the explosion of consumer devices, but this blog is not a proper forum for talking about them at this time. If you are an Oracle IRM customer, we will be pleased to discuss our plans and your requirements with you directly on request. You can be sure that the blog will cover the new capabilities as soon as possible.

    Read the article

  • Do you care about your Oracle System Support experience?

    - by user12244613
    It has been a while since I blogged about Systems Support within Oracle. I want to take this opportunity to raise awareness of how Oracle is communicating with its systems customers. Previously, every item to be communicated was sent independently via an email message; however, not all messages appear to be getting the attention they require. In an effort to ensure Oracle is reaching all of our Sun and Oracle System customers, we have created the Oracle Systems Support Newsletter. This monthly newsletter will contain a summary of customer-support information that is relevant to you and will cover topics that impact your support experience. For example:
    1. Did you know that sending explorer content to email addresses with @sun.com is going away soon? For more information, review Document 1362484.1.
    2. Are you an Auto Service Request (ASR) user? If yes, here are the latest changes:
       · ASR Manager accepts My Oracle Support User Name (email address) and password. [Doc ID 1345484.1]
       · ASR IP Address for secure file transfer has changed. [Doc ID 1338575.1]
       · ASR No Heartbeat Status - find out how to resolve it. [Doc ID 1346328.1]
    3. Did you notice we have changed the Service Request options for Hardware and introduced a new problem category called "Automated Diagnosis"? This service streamlines the data you send in and then automatically provides an update of known issues found in your My Oracle Support Service Request. This feature also fast-tracks hardware failures by sending parts as soon as the data is analyzed. Have you used this new feature? If yes, tell us about it – take the 5-minute survey.
    4. Are you being proactive, or are you still 'fire fighting' in reactive mode? If you are being proactive for your Oracle System products, you might have used Oracle Sun System Analysis. Did you find it helpful? Can we improve it? You tell us – take the 5-minute survey.
    5. Are you aware that if you attach files to your Service Request, it enables the support engineer to start work straight away? For a summary of products and files, review the Newsletter.
    6. Are you struggling to find patches, firmware, or product downloads? If yes, these types of issues are all addressed in the Newsletter.
    If this is the type of information you want to know about each month, then take time to read the Newsletter link and bookmark it in My Oracle Support so you can stay informed. Thanks for your time.

    Read the article

  • ApiChange Corporate Edition

    - by Alois Kraus
    In my initial announcement I could only cover a small subset of what ApiChange can do for you. Let's look at how ApiChange can help you fix bugs caused by wrong usage of an API in a fraction of the time it would normally take. Software gets tested and some bugs show up. One bug could be: we get way too many log messages during our test run. Now you have the task of finding the most frequent messages and eliminating those log calls from the source code. But what about the myriad other log calls? How can we check that the distribution of log calls is roughly equal across all developers? And if it is not, how can we contact the developer to check his code? ApiChange can help you connect these loose ends. It combines several information silos into one cohesive view. The picture below shows how it is able to fill the gaps. The public version currently "only" parses the binaries and pdbs to give you the following columns for a -whousesmethod query: If you have Rational ClearCase (a source control system) in your development shop and an Active Directory in place, then ApiChange will try to determine, from the source file that was resolved via the pdb, the last check-in user, who should be present in your Active Directory. From there it is only a small hop to an LDAP query against your AD domain or the GC (Global Catalog) to get, from the user name, the full name, email, phone number, department, and so on. ApiChange will append this additional data to all of your query results that contain source files if you add the -fileinfo option. As I said, this is currently not enabled by default, since the AD domain needs to be configured; at the moment these are only some hard-coded values in the SiteConstants.cs source file of ApiChange.Api.dll. Once you have this data you can generate metrics based on source file, developer, assembly, and so on, and add additional data by drag and drop directly into pivot tables inside Excel. This allows you, for example, to generate a report that lists the source files with the most log calls in descending order, along with the developer name and email, in the pivot table. Armed with this knowledge you can take meaningful measures, e.g. asking the developer whether the huge number of log calls in a source file can be optimized. I am aware that this is a very specific scenario, but it is a huge time saver when you are able to fill the missing gaps of information. ApiChange does this in an extensible way:

        namespace ApiChange.ExternalData
        {
            public interface IFileInformationProvider
            {
                UserInfo GetInformationFromFile(string fileName);
            }
        }

    It defines an interface where you can implement your custom information provider to close the gap between the source control system and the real person I have to email to ask whether their code needs a closer inspection.
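
    To make that extension point concrete, here is a minimal sketch of a custom provider that enriches a check-in user with details from Active Directory via LDAP. Treat it as illustrative only: the UserInfo property names, the LookupLastCheckInUser helper, the LDAP path, and the way ApiChange registers providers are assumptions on my part, not part of the documented API; only the IFileInformationProvider signature above is given.

        using System.DirectoryServices;

        namespace ApiChange.ExternalData
        {
            // Hypothetical sketch: assumes UserInfo exposes these properties and that the
            // last check-in user has been resolved from source control (e.g. ClearCase).
            public class AdFileInformationProvider : IFileInformationProvider
            {
                private readonly string _ldapPath; // e.g. "LDAP://DC=mycompany,DC=com" (assumed)

                public AdFileInformationProvider(string ldapPath)
                {
                    _ldapPath = ldapPath;
                }

                public UserInfo GetInformationFromFile(string fileName)
                {
                    // Placeholder: a real provider would ask ClearCase for the last check-in user.
                    string checkInUser = LookupLastCheckInUser(fileName);

                    using (var root = new DirectoryEntry(_ldapPath))
                    using (var searcher = new DirectorySearcher(root))
                    {
                        // Look up the AD account and request only the attributes we need.
                        searcher.Filter = "(&(objectClass=user)(sAMAccountName=" + checkInUser + "))";
                        searcher.PropertiesToLoad.Add("displayName");
                        searcher.PropertiesToLoad.Add("mail");
                        searcher.PropertiesToLoad.Add("telephoneNumber");
                        searcher.PropertiesToLoad.Add("department");

                        SearchResult result = searcher.FindOne();
                        if (result == null)
                        {
                            return null; // user not found in AD
                        }

                        return new UserInfo
                        {
                            FullName   = GetProperty(result, "displayName"),
                            Email      = GetProperty(result, "mail"),
                            Phone      = GetProperty(result, "telephoneNumber"),
                            Department = GetProperty(result, "department")
                        };
                    }
                }

                private static string GetProperty(SearchResult result, string name)
                {
                    return result.Properties.Contains(name)
                        ? result.Properties[name][0].ToString()
                        : string.Empty;
                }

                private static string LookupLastCheckInUser(string fileName)
                {
                    // Assumed helper: replace with a real ClearCase query in practice.
                    return "jdoe";
                }
            }
        }

    The sketch deliberately keeps the two lookups separate - resolving the check-in user from source control, then enriching it from AD - so either half can be swapped for whatever systems your shop actually runs.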

    Read the article
