Search Results

Search found 896 results on 36 pages for 'analyze'.


  • Review: A Quick Look at Reflector

    - by James Michael Hare
    I, like many, was disappointed when I heard that Reflector 7 was not free, and perhaps that’s why I waited so long to try it and just kept using my version 6 (which continues to be free). But though I resisted for so long, I longed for the better features that were being developed, and began to wonder if I should upgrade. Thus, I began to look into the features offered in Reflector 7.5 to see what was new.

    Multiple Editions

    Reflector 7.5 comes in three flavors, each building on the features of the previous one:

    - Standard – Contains just the standalone application ($70)
    - VS – Same as Standard but adds the Reflector Object Browser for Visual Studio ($130)
    - VSPro – Same as VS but adds the ability to set breakpoints and step into decompiled code ($190)

    So let’s examine each of these features.

    The Standalone Application (Standard, VS, VSPro editions)

    Popping open Reflector 7.5 and looking at the GUI, we see many of the same familiar features, with a few new ones as well. Most notably, the disassembler window is now tabbed and has navigation buttons. This makes it much easier to back out of a deep dive through many layers of decompiled code to a previous point.

    There is also now an analyzer which can be used to determine dependencies for a given method, property, type, etc. For example, if we select System.Net.Sockets.TcpClient and hit the Analyze button, we get a window with expandable nodes showing what a given type uses, what uses it, who exposes it, and who instantiates it. Now obviously, for low-level types (like DateTime) this list would be enormous, but this can give a lot of information on how a given type is connected to the larger code ecosystem.

    One of the other things I like about Reflector 7.5 is that it does a much better job of displaying iterator blocks than Reflector 6 did. For example, if you take a look at the Enumerable.Cast() extension method in System.Linq and dive into the CastIterator, Reflector 6 shows you the raw compiler-generated state machine, while Reflector 7.5 shows the iterator logic much more clearly. This is a big improvement in the quality of their code disassembler, and for me it was one of the main reasons I decided to take the plunge and get version 7.5.
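    For readers who haven’t met iterator blocks before, here is a rough sketch of what a CastIterator-style method looks like in source form (an approximation of the BCL logic for illustration, not the exact decompiled output):

        using System.Collections;
        using System.Collections.Generic;

        static class CastSketch
        {
            // Roughly what Enumerable.Cast<T>'s iterator does: a plain
            // yield-return loop. The C# compiler expands these few lines into
            // a hidden state-machine class, and that state machine is what a
            // decompiler has to untangle to show you readable code again.
            public static IEnumerable<TResult> CastIterator<TResult>(this IEnumerable source)
            {
                foreach (object obj in source)
                    yield return (TResult)obj;
            }
        }

    Reflector 6 essentially showed you the expanded state machine; 7.5 folds it back into something close to the sketch above.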
    The Reflector Object Browser (VS, VSPro editions)

    If you have the .NET Reflector VS or VSPro edition, you’ll find a Reflector Object Browser window available in Visual Studio where you can select and decompile any assembly right in the IDE. For example, if you want to take a peek at how System.Collections.Generic.List<T> works, you can either select List<T> in the Reflector Object Browser or, even simpler, select a usage of it in your code and CTRL + Click to dive in. It takes you right to a source window with the decompiled source.

    Setting Breakpoints and Stepping Into Decompiled Code (VSPro)

    If you have the VSPro edition, in addition to everything above, you also get the ability to set breakpoints in decompiled code and step through it as if it were your own. This can be a handy feature when you need to see why your code’s use of a BCL or other third-party library isn’t working as you expect.

    Summary

    Yes, Reflector is no longer free, and yes, that’s a bit of a bummer. But it always was, and still is, a very fine tool. If you still have Reflector 6, you aren’t forced to upgrade any longer, but the nicer disassembler (especially for iterator blocks) and the handy VS integration are worth at least considering an upgrade for. So I leave it up to you: these are some of the features of Reflector 7.5; what are your thoughts? Technorati Tags: .NET, Reflector

    Read the article

  • Analysis Services (SSAS) - Unexpected Internal Error when processing (ProcessUpdate). Workaround/Resolution

    - by James Rogers
    Many implementations require the use of ProcessUpdate to support Type 1 slowly changing dimensions. ProcessUpdate drops all of the affected indexes and aggregations in partitions affected by data that changes in the dimension on which the ProcessUpdate is being performed. Twice now I have had situations where the processing fails with "Internal error: An unexpected exception occurred." Any subsequent ProcessUpdate processing will also fail with the same error. According to Microsoft, the issue is corrupt indexes for the dimension(s) being processed in the partitions of the affected measure group. I cannot guarantee that the following will correct your problem, but it did in my case and saved us quite a bit of downtime.

    Workaround: Run ProcessIndexes on the entire cube that is being processed and throwing the error. This corrected the problem on both 2008 and 2008 R2.

    Pros:
    - Does not require a complete rebuild of the data (ProcessFull) for either the dimension or the cube.
    - User access can continue while the ProcessIndexes is underway.

    Cons:
    - Can take a long time, especially on large cubes with many partitions, dimensions, and/or aggregations.
    - Query performance is usually severely impacted due to the memory and CPU requirements for aggregation and index building.

        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
            <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
              <Object>
                <DatabaseID>MyDatabase</DatabaseID>
                <CubeID>MyCube</CubeID>
              </Object>
              <Type>ProcessIndexes</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>

    The cube where the corruption exists can be found by running Profiler while the ProcessUpdate is executing. The first partition that displays the "The Job has ended in failure." message in the TextData column will be part of the cube/measure group that has the corruption. You can try to run ProcessIndexes on just that measure group. This may correct the problem and save additional time if you have other large measure groups in the cube that are not affected by the corruption.
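    For the measure-group-scoped run, the batch looks much the same, with the object reference narrowed by a MeasureGroupID (a sketch; MyMeasureGroup stands in for the actual ID in your own database):

        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
            <Process>
              <Object>
                <DatabaseID>MyDatabase</DatabaseID>
                <CubeID>MyCube</CubeID>
                <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
              </Object>
              <Type>ProcessIndexes</Type>
            </Process>
          </Parallel>
        </Batch>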
    Remember to execute your normal ProcessUpdate batch after the successful completion of the ProcessIndexes; the ProcessIndexes does not pick up data changes.

    Things that did not work:
    - ProcessClearIndexes – why this doesn't work when ProcessIndexes does is unclear at this point.
    - ProcessFull on the partition in question. In my latest case, this would clear up the problem for that partition, but the next partition the ProcessUpdate touched that had data in it would generate an error. This leads me to believe the corruption will exist in all partitions in the affected measure group that have data in them.

    NOTE: I experienced this problem in both a SQL 2008 and a SQL 2008 R2 Analysis Services environment, in separate builds from the same relational database. This leads me to believe that some data condition in the tables used for the dimension processing caused the corruption, since the two environments were on physically separate hardware. I am waiting on Microsoft to analyze the dumps to give us more insight into what actually caused the corruption and will update this post accordingly.

    Read the article

  • Create Panoramic Photos with Windows Live Photo Gallery

    - by Matthew Guay
    Have you ever wanted to capture the view from a mountain or the full size of a building? Here’s how you can stitch multiple shots together into the perfect panoramic picture for free with Windows Live Photo Gallery.

    Getting Started

    First, make sure you have Windows Live Photo Gallery installed (link below). Live Photo Gallery is part of the Windows Live Essentials suite; you can select other programs to install along with it if you want. Make sure to uncheck setting your home page to MSN and setting your search provider to Bing if you don’t want them changed.

    Now, make sure you have pictures that will work well for a panorama. These need to be taken from the same spot, and the edges of the pictures need to overlap so the program can find where the pictures connect. Here we have taken pictures inside a building with a cell phone camera.

    Make your Panorama

    Open Live Photo Gallery and find the pictures you want to use in your panorama. It will automatically index and display all of the photos in your Pictures folder, or Library if you’re using Windows 7. If your pictures are saved elsewhere, add that folder to Photo Gallery: click File, Include a folder in the gallery, and select the correct folder at the prompt.

    Now select all of the pictures that you will use in your panorama. You can easily do this by clicking the checkbox on each picture that appears when you hover over it. Once all of the pictures are selected, click Make in the menu bar and select Create panoramic photo… Alternately, right-click on any of the pictures you’ve selected and click Create panoramic photo…

    Live Photo Gallery will analyze your photos and composite them together to create a panorama. The amount of time it takes will vary depending on the number of photos, the size of the pictures, and your computer’s speed. When it’s finished making the panorama, you’ll be prompted to enter a file name and save the picture.

    Your new panorama picture will open as soon as it’s saved. Depending on your shots, the picture may have quite a bit of black space around the edges where each picture didn’t cover the exact same amount of area. To correct this, click Fix on the menu bar, and then select Crop Photo in the sidebar that opens. Select the center of the picture with the crop tool, and click Apply when you’ve got the selection you want. Live Photo Gallery automatically saves your picture changes, and you can revert back to the original picture if you wish.

    Now you’ve got a nice panoramic shot, trimmed and ready to print, share, and more.

    Conclusion

    Panoramic shots are great ways to capture your whole surroundings, whether it’s a sports stadium, mall, or scenic mountain view. They can also be a great way to capture more with low-resolution cameras.
    Link: Download Windows Live Photo Gallery

    Read the article

  • Measuring Code Quality

    - by DotNetBlues
    Several months back, I was tasked with measuring the quality of code in my organization. Foolishly, I said, "No problem." I figured that Visual Studio has a built-in code metrics tool (Analyze -> Calculate Code Metrics) and that would be a fine place to start. I was right, but also very wrong.

    Visual Studio calculates five primary metrics: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class Coupling, and Lines of Code. The first two are figured at the method level, the next two at (primarily) the class level, and the last is a simple count.

    The first question any reasonable person should ask is "Which one do I look at first?" The first question any manager is going to ask is, "What one number tells me about the whole application?" My answer to both, in a way, was "Maintainability Index." Why? Because each of the other numbers represents one element of quality, while MI is a composite number that includes Cyclomatic Complexity. I'd be lying if I said no consideration was given to the fact that it's abstract enough that it's harder for some surly developer (I've been known to resemble that remark) to start arguing why high coupling or deep inheritance is no big deal, or how complex requirements are to blame for complex code.

    I should also note that I don't think there is one magic-bullet metric that will tell you objectively how good a code base is. There are a ton of different metrics out there, and each one was created with a specific purpose in mind and has a pet theory behind it. When you've got a group of developers who aren't accustomed to measuring code quality, picking a 0-100 scale, non-controversial metric that can be easily generated by tools you already own really isn't a bad place to start.

    That sort of answers the question a developer would ask, but what about the management question: how do you dashboard this stuff when Visual Studio doesn't roll the numbers up to the solution level? Since VS does roll the MI up to the project level, I thought I could just figure out what sort of weighting Microsoft used to roll method scores up to the class level and then to the namespace and project levels. I was a bit surprised by the answer: there is no weighting. That means that a class with one 1300-line method (which will score a 0 MI) and one empty constructor (which will score a 100 MI) will have an overall MI of a respectable 50. Throw in a couple of DTOs that are nothing more than getters and setters (which tend to score 95 or better) and the project ends up looking really, really healthy. The next poor bastard who has to work on the application is probably not going to be singing the praises of its maintainability, though. For the record, that 1300-line method isn't hypothetical, either.

    So, what does one do with that? Well, I decided to weight the average by the Lines of Code per method. For our example above, the formula for the class's MI becomes ((1300 * 0) + (1 * 100)) / 1301 = 0.077, rounded to 0. Sounds about right. Continue the pattern for namespace, project, solution, and even multi-solution application MI scores. This can be done relatively easily by using the "export to Excel" button and running a quick formula against the data (a short code sketch of this roll-up follows at the end of this post).

    On the short list of follow-up questions would be, "How do I improve my application's score?" That's an answer for another time, though.
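    If you'd rather script the roll-up than drive it through Excel, the LOC-weighted average is a one-liner over the exported rows. A minimal C# sketch (the MethodMetric shape is illustrative, standing in for whatever columns your export produces):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class MethodMetric
        {
            public string Name;
            public int MaintainabilityIndex; // 0-100, as reported by Visual Studio
            public int LinesOfCode;
        }

        static class Rollup
        {
            // LOC-weighted Maintainability Index: each method's MI counts in
            // proportion to its size, so one huge 0-MI method can no longer be
            // averaged away by a crowd of trivial members.
            public static double WeightedMI(IEnumerable<MethodMetric> methods) =>
                methods.Sum(m => (double)m.MaintainabilityIndex * m.LinesOfCode)
                / methods.Sum(m => m.LinesOfCode);

            static void Main()
            {
                var metrics = new[]
                {
                    new MethodMetric { Name = "MonsterMethod", MaintainabilityIndex = 0,   LinesOfCode = 1300 },
                    new MethodMetric { Name = ".ctor",         MaintainabilityIndex = 100, LinesOfCode = 1    },
                };
                // Prints ~0.077 for the example above, instead of the unweighted 50.
                Console.WriteLine(WeightedMI(metrics));
            }
        }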

    Read the article

  • SQL SERVER – What are Actions in SSAS and How to Make a Reporting Action

    - by Pinal Dave
    Actions are used for customized browsing and drilling of data for the end user. An action is an event that a user can raise while accessing cube data. Actions are used in cube browsers like Excel and are triggered when a user in a client tool clicks on a particular member, level, dimension, cell, or even the cube itself. For example, a user might be able to see a Reporting Services report, open a web page, or drill through to detailed information related to the cube data. Analysis Services supports three types of actions: Report, Drill-through, and Standard.

    In this blog post, I will explain the Reporting action. The objective of this action is to return a report with details of the product where the sales amount is greater than 10,000 in a cube browser analysis. You need to create a basic cube first with the facts and dimensions you want in the analysis. Following are the steps to create a reporting action.

    1.) Go to SQL Server Data Tools and open the Analysis Services project. Navigate to Actions and click on New Reporting Action.
    2.) Specify the name of the action and choose the target type as attribute members, since we have to create the action on the members of an attribute.
    3.) Specify the target object of your report action. The target object is the dimension or attribute on which you want the report to appear. In our case it is the product name.
    4.) Next, define the condition on which you want the report link to appear. This is optional. In this example we specify a condition which checks whether the sales amount is greater than 10,000, so that the link appears only for those products where the condition is met (see the sample expression after this post).
    5.) Next, specify the server name on which the report is present, the report path, and the report format in which you want the report to appear.
    6.) Additionally, you can specify parameters. As with the conditional expression, each parameter should be a valid MDX expression, and the parameter name should be the same as the one defined in the report.
    7.) Deploy your solution after you are done specifying parameters, and go to the cube browser.
    8.) Click on the Analyze in Excel button; this will open your cube in Excel.
    9.) Make an analysis which shows product names and their sales amounts.
    10.) Right-click on a product where the sales amount is greater than 10,000 and you will see the reporting action link.
    11.) Clicking on the link will take you to the URL of your Reporting Services report. I created this report using the report project wizard in SQL Server Data Tools.

    So, this is how we can launch reports from a cube browser. Similarly, you can open web pages, run applications, and perform a number of other tasks. Koenig Solutions offers SSAS training which covers all of Analysis Services, including Reporting actions, in great detail. In my next blog post I will talk about drill-through actions.

    Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSAS
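    As a footnote to step 4: the condition is written as an MDX expression evaluated against the clicked member. A sketch of the condition used in this example (the measure name is assumed and must match your own cube's metadata):

        [Measures].[Sales Amount] > 10000

    The parameters in step 6 follow the same rule; each parameter value is just an MDX expression.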

    Read the article

  • Windows 8 Camp – Ways to Prepare

    - by Lori Lalonde
    When Windows 8 was announced at the BUILD conference back in September, it created quite a buzz among the developer community. By the spring of 2012, Windows 8 Developer Camps started popping up everywhere imaginable. I received a lot of questions from CTTDNUG members about whether or not we would be hosting one locally. If you recall my post about the Windows Phone/Azure Developer Workshop that CTTDNUG hosted back in March, you’ll remember that the biggest hurdle in planning this type of event was finding the right venue. It took some time, but I finally found a venue that was available and provided the prerequisites needed to ensure this camp is a success.

    I am very excited that CTTDNUG will be hosting a Windows 8 Camp this summer in the Kitchener/Waterloo area. In fact, it’s coming up in less than 2 weeks. Clearly other developers are excited as well, because our registration numbers show that the event is already 70% full! On top of that, I was fortunate enough to book two well-known evangelists to present and teach at this full-day developer camp: Andrei Marukovich and Atley Hunter. This was the icing on the cake. With the content provided by Microsoft, and two local experts that live and breathe Windows 8 development, I know that I, along with the other developers attending this event, will have the opportunity to maximize our learning potential and hit the ground running.

    If you plan on attending a Windows 8 Developer Camp soon and want to ensure you get the most “bang for your buck” (figuratively speaking, since these camps are free), there are some things you can do to prepare before the big day:

    1) Install the prerequisites on your own device before the big day. I can’t stress this enough. Otherwise, you will spend valuable time during the hands-on period downloading and installing what is needed, rather than digging into the development and using that time to ask the experts on hand about programming challenges, issues, and questions you may have with respect to your development. Prerequisites: Windows 8 Release Preview, Visual Studio 2012 RC, and the Windows 8 SDK Samples.

    2) Purchase, download, and read Charles Petzold’s newest book, Programming Windows, 6th Edition. This is a great introduction to the type of content you will be learning about during the camp. Doing some light reading beforehand might raise some questions about the concepts discussed in the book, which will give you the opportunity to write them down and bring them with you to the camp. The experts on hand will be able to answer them for you.

    3) Make use of the freebies that are available. Telerik has recently released a preview of their RadControls for Metro; you can sign up to receive a license code that lets you install the preview for free and start playing around with it. Syncfusion also offers a free download of their Metro Studio package, which is a collection of Metro-style icons that you can customize and use in your own applications. Last but not least, once you’ve installed the Windows 8 Release Preview on your own device, go to the Windows 8 Store and download a handful of the free apps that are available. Testing out other Metro apps may give you ideas for what you can do in your own apps and lets you analyze the features you like: application flow, the types of animations used, the concepts leveraged, how live tiles were used, and so on.

    I hope you found these tips useful as you embark on a new development journey! Although this post focused on how to prepare for a Windows 8 camp, the same ideas apply to whichever developer camp/workshop/event you attend. Learning does not begin and end on the day of the event. Attending a developer camp is just one step of many to master whatever technology you are interested in. It is a continuous process, which is fully maximized when you do your homework beforehand, actively participate during, and follow up by putting what you learned into practice afterwards. Happy coding!

    Read the article

  • Surviving MATLAB and R as a Hardcore Programmer

    - by dsimcha
    I love programming in languages that seem geared towards hardcore programmers. (My favorites are Python and D.) MATLAB is geared towards engineers and R is geared towards statisticians, and it seems like these languages were designed by people who aren't hardcore programmers and don't think like hardcore programmers. I always find them somewhat awkward to use, and to some extent I can't put my finger on why. Here are some issues I have managed to identify:

    - (Both): The extreme emphasis on vectors and matrices, to the extent that there are no true primitives.
    - (Both): The difficulty of basic string manipulation.
    - (Both): Lack of, or awkwardness in, support for basic data structures like hash tables and "real" (i.e. type-parametric and nestable) arrays.
    - (Both): They're really, really slow even by interpreted-language standards, unless you bend over backwards to vectorize your code.
    - (Both): They seem not to be designed to interact with the outside world. For example, both are fairly bulky programs that take a while to launch and seem not to be designed to make simple text-filter programs easy to write. Furthermore, the lack of good string processing makes file I/O in anything but very standard forms near impossible.
    - (Both): Object orientation has a very bolted-on feel. Yes, you can do it, but it doesn't feel much more idiomatic than OO in C.
    - (Both): No obvious, simple way to get a reference type. No pointers or class references. For example, I have no idea how you roll your own linked list in either of these languages.
    - (MATLAB): You can't put multiple top-level functions in a single file, encouraging very long functions and cut-and-paste coding.
    - (MATLAB): Integers apparently don't exist as a first-class type.
    - (R): The basic built-in data structures seem way too high-level and poorly documented, and never seem to do quite what I expect given my experience with similar but lower-level data structures.
    - (R): The documentation is spread all over the place and virtually impossible to browse or search. Even D, which is often knocked for bad documentation and is still fairly alpha-ish, is substantially better as far as I can tell.
    - (R): At least as far as I'm aware, there's no good IDE for it. Again, even D, a fairly alpha-ish language with a small community, does better.

    In general, I also feel like MATLAB and R could easily be replaced by plain old libraries in more general-purpose languages, if sufficiently comprehensive libraries existed. This is especially true in newer general-purpose languages that include lots of features for library writers.

    Why do R and MATLAB seem so weird to me? Are there any other major issues that you've noticed that may make these languages come off as strange to hardcore programmers? When their use is necessary, what are some good survival tips?

    Edit: I'm seeing one issue in some of the answers I've gotten. I have a strong personal preference, when I analyze data, for having one script that incorporates the whole pipeline. This implies that a general-purpose language needs to be used. I hate having to write a script to "clean up" the data and spit it out, then another to read it back in a completely different environment, and so on. I find using MATLAB/R for some of my work and a completely different language with a completely different address space and way of thinking for the rest to be a huge source of friction. Furthermore, I know there are glue layers that exist, but they always seem to be horribly complicated and a source of friction.

    Read the article

  • Have you ever wondered...?

    - by diana.gray
    I've often wondered why folks do the same thing over and over. For some of us, it's because we "don't get it," and there's an abundance of TV talk shows that will help us analyze the why of it. Dr. Phil is all too eager to ask "...and how's that working for you?" But I'm not referring to being stuck in a destructive pattern or denial. I'm really talking about doing something over and over because you have found a joy, a comfort, a boost of energy from an activity or event. For example, how many times have I planted bulbs in November or December only to be amazed by their reach, colors, and fragrance in early spring? Or baked fresh cookies and allowed the aroma to fill the house? Or kissed a sleeping baby held gently in my arms and been reminded of how tiny and fragile we all are?

    I've often wondered why it is that I get so much out of something I've done so many times. I think it's because I've changed. The activity may be the same, but in the preceding days, months, and years I've had new experiences, challenges, joys, and sorrows that have shaped me. I'm different.

    The same is true about attending the Professional Businesswomen of California (PBWC) conference. Although the conference is an annual event held at San Francisco's Moscone Center, I still enjoy being with 3,000 other women like me. Yes, we work at different companies and in different industries, have different lifestyles, and are at different stages in our professional careers and personal lives; but we are all alike in that we bring the NEW me each year that we attend. This year I can cheer when Safra Catz, President of Oracle, encourages us to trust our intuition; that "if something doesn't make sense, it doesn't make sense." And I can warmly introduce myself to Lisa Askins, Cheryl Melching's business partner at Center Stage Group, when I would have been too intimidated to do so last year. This year I can commit to new challenges such as "no whining, no excuses and no gossip" as suggested by Roxanne Emmerich, a goal that I would have wavered on last year. I can also embrace the suggestion given by Dr. Ian Smith to "spend one hour each day" on me, giving myself time to rejuvenate.

    A friend, when asked if she was attending PBWC this year, said "I've attended the conference several times and there's nothing new!" My perspective is that WE are what makes PBWC's annual conference new. We are far different in 2010 than we were in 2009. We are learning, growing, developing, and shedding, and that's what makes the conference fresh, vibrant, rewarding, and lasting. It is the diversity of women coming together that makes it new. By sharing our experiences, we discover. By meeting with one another professionally and personally, we connect. And by applying the wisdom learned, we shine. We are reNEW-ed. It shows in our fresh ideas, confident interactions, strategic decisions, and successful businesses. This refreshed approach is what our companies want and need, what our families depend on, and what our communities and nation look to for creative solutions to pressing concerns.

    Thanks Oracle for your continued support, and thanks PBWC for providing an annual day to be reNEW-ed.

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services. But as usual, you lack the time and resources you need to develop the services properly. So you googled or bing’ed, found this blog post, and began crying in gratitude. Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about. So just close your eyes and count to 3 … now open them … and here it is….

    Not.

    While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand. The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API. And that legacy API is built in the image of the silo applications. Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping.

    The Straight and Narrow Path

    This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory. Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide. Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch. According to Lafe, building their service inventory is a three-phase process:

    1. Model a business process. This requires intense collaboration between the IT and business wings of the organization, of course. The FBI uses IBM WebSphere tools to model the process with BPMN.
    2. Identify candidate services to facilitate the business process.
    3. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process. The FBI uses ActiveVOS for orchestration services.

    The 12 Step Program to End Your Legacy API Addiction

    Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows the technology environment to influence the inventory blueprint. For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing. Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services. Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat. (Astute readers will note that Erl’s process, being restricted to analysis and modeling, does not include the implementation phase that concludes the FBI service development methodology.)
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • Part 7: EBS Modifications and Flagged Files in R12

    - by volker.eckardt(at)oracle.com
    Let me build on my previous blog post and explain the procedure for flagged files a bit better, illustrating it with screenshots. "Flagged files" is a concept within the Oracle E-Business Suite (EBS) Release 12 whereby you flag a standard deployment file, let's say a Forms file, a package, or a Java class file. When you run the patch analysis, the list of flagged files is checked, and in case one of these files gets patched, the analysis report will tell you. Note: This functionality is also available in Release 11, where it is implemented and known as "applcust.txt".

    You can flag as many files as you want, in whatever relationship they are with your customizations. In addition to the flag itself you can add a comment. You should use this comment to point to your customization reference (here XXAR_RPT_066 or XXAP_CUST_030). Consider the following two cases:

    1. You have created your own report, based on a standard report. In this case you will flag the report file itself, and the key views used. When a patch updates one of these files, you will be informed and can initiate a proper review and testing. (Ex.: the first line for ARXCTA.rdf.)
    2. You have created an extensive personalization, and because it is business critical you would like to be informed if the page definition gets updated. In this case you register the PG.xml file as a flagged file. (Ex.: the second line below for CreateExtBankAcctPG.xml.)

    The menu path to register flagged files is the following: (R) System Administrator > (M) Oracle Applications Manager > Site Map > Maintenance > Register Flagged Files

    Your DBA should now run the patch analysis every time he is going to apply a new patch: (R) System Administrator > (M) Oracle Applications Manager > Patch Wizard > Task "Recommend/Analyze Patches"

    The screenshot above shows the impact summary. For this blog entry the number "2" titled "Flagged Files Changed" is our focus. When you click the "2" you will get a screen similar to the first one in this blog, showing you exactly the files which will get patched if you continue and apply this patch in this environment right now. Note: It is also shown that just 20% of all patch files will get applied. This situation might be different in case your environments are on a different patch level. For sure the customization impact might then be different as well.

    The flagging step can be done directly in Oracle Applications Manager; our developers are responsible for it. To transport such a flag and comment we use an FNDLOAD script. It is suggested to put the flagged files data file directly into your CEMLI patch. This way the flagged-file registration is executed at the same time the patch gets applied.

    Process Steps:

    Developer:
    - Builds the CEMLI
    - Reviews the code and identifies key standard objects referenced
    - Determines the standard object files and flags them
    - Creates the FNDLOAD file and adds it to the CEMLI patch

    DBA:
    - Executes the patch analysis in a representative environment for every new Oracle standard patch
    - Checks and retrieves the flagged files and comments
    - Sends the flagged file list back to the development team for analysis / retest

    Developer:
    - Analyzes / updates / retests the affected CEMLIs

    Prerequisite: The patch analysis has to be executed in an environment where flagged files have been registered. (If you run the patch analysis in a vanilla or outdated environment (compared to your PROD), the analysis will not be very helpful!)

    When to start with flagged files? Start right now, utilizing this feature.
    It is an investment that improves production stability and helps fulfil your SLA!

    Summary

    Flagged files is a very helpful EBS R12 technique when analyzing patches. Implement a procedure within your development process to maintain such flags. Let the DBA run the patch analysis in an environment with a patch and customization level similar to your current production.

    Related Links: EBS Patching Procedures - Chapter 2-13 - Registered Flagged Files

    Read the article

  • Identify Codecs & Technical Information About Video Files

    - by DigitalGeekery
    Have you ever wanted to play an audio or video file but didn’t have the proper codec installed? Today we’ll show you how to determine codecs, along with a host of other technical details about your media files, with MediaInfo.

    Installation

    Download and install MediaInfo; you can find the download link at the bottom of the page. Note: When installing MediaInfo there is a recommended software bundle which you can opt out of by selecting the Do not install option. Each recommended software choice may be different; in this example it offers Spyware Terminator. The cool thing, though, is they use Open Candy, which opts you out of the install. Just double-check to make sure you’re not installing extra crapware.

    Using MediaInfo

    The first time you run MediaInfo it will display the Preferences window. There are various options, such as language, output format, and whether or not you want MediaInfo to check for new versions. Click OK.

    Select a file or folder to analyze by clicking on the File or Folder icons on the left of the application window or by selecting File > Open from the menu. You can also drag and drop a file directly onto the application. MediaInfo will display details of your media file.

    In Basic view, you’ll see basic information. Notice in the example below the video and audio codecs, along with the file size, the running time of the media file, and even the application used to create the video file (Writing application). You can switch to some of the other views by selecting View from the menu and choosing from the dropdown list.

    Sheet view presents the information a bit more clearly. You can see in the example below that the video and audio codecs are listed in clearly identified columns. (AVC is often more commonly referred to as H.264.)

    Tree view is perhaps the most detailed. You can see from the example below that the codec used for this AVI file is XviD. Scrolling down even further, you’ll see additional information like video and audio bit rates, frame rate, aspect ratio, and more.

    In Basic view (and also in Sheet view) you can click to find a player for your file. In this instance, with an MP4 file, it took me to the download page for QuickTime. This is by no means the only media player for this file, but if you are stuck for how to play a media file, this will forward you to a solution that works. You can do the same thing with the video codec: click Go to the web site of this video codec to find a download.

    MediaInfo is a simple but powerful tool that can be used to discover the details of a media file, or just to find a compatible codec. It works with most any video file type and is available for Windows, Mac, and Linux. Some Mac and Linux versions, however, are currently command line only.
    Download MediaInfo

    Read the article

  • Data Security Through Structure, Procedures, Policies, and Governance

    Security Structure and Procedures

    One of the easiest ways to implement security is through the use of structure, in particular the structure in which data is stored. The preferred method for this is the use of user roles; these roles allow specific access to be granted based on the role a user plays in relation to the data they are manipulating. Typical data access actions are defined by the CRUD principle:

    - Create new data
    - Read existing data
    - Update existing data
    - Delete existing data

    Based on the actions assigned to a role, a user can manipulate data as needed to perform daily business operations. An example of this can be seen in a hospital, where doctors have been assigned Create, Read, Update, and Delete access to their patients' prescriptions, so that a doctor can prescribe new prescriptions and adjust any existing ones as necessary. A nurse, however, will only have Read access on the patients' prescriptions, so that they will know what medicines to give to the patients. Notice that nurses do not have access to prescribe new prescriptions or to update or delete existing ones, because only the patient's doctor has access to perform those actions.

    With user roles comes responsibility: companies need to constantly monitor data access to ensure that each role has the most appropriate access level and that users are not exposed to inappropriate data. This also prevents rogue employees from gaining access to critical business information that could be destroyed, altered, or stolen. It is important that all data access is monitored because of this threat.
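    The doctor/nurse example maps naturally onto a permission check in code. Here is a minimal C# sketch of the idea (the role names and flags enum are illustrative, not taken from any particular framework):

        using System;

        [Flags]
        enum Crud { None = 0, Create = 1, Read = 2, Update = 4, Delete = 8 }

        static class PrescriptionAccess
        {
            // Role-to-permission map for prescription data, per the hospital
            // example: doctors get full CRUD, nurses get Read only.
            static Crud PermissionsFor(string role) => role switch
            {
                "Doctor" => Crud.Create | Crud.Read | Crud.Update | Crud.Delete,
                "Nurse"  => Crud.Read,
                _        => Crud.None,
            };

            public static bool Can(string role, Crud action) =>
                (PermissionsFor(role) & action) == action;
        }

        static class Demo
        {
            static void Main()
            {
                Console.WriteLine(PrescriptionAccess.Can("Doctor", Crud.Update)); // True
                Console.WriteLine(PrescriptionAccess.Can("Nurse", Crud.Update));  // False
                Console.WriteLine(PrescriptionAccess.Can("Nurse", Crud.Read));    // True
            }
        }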
    Security Governance

    Current data governance laws regarding security include:

    - Health Insurance Portability and Accountability Act (HIPAA)
    - Sarbanes-Oxley Act
    - Database Breach Notification Act

    The US Department of Health and Human Services describes HIPAA's Privacy Rule as legislation that protects the privacy of individually identifiable health information. Currently, HIPAA sets the national standards for securing electronic protected health records. Additionally, its confidentiality provisions protect identifiable information being used to analyze patient safety events and improve patient safety.

    In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance, and it set specific deadlines for compliance. The Sarbanes-Oxley Act is not a set of standard business rules and does not specify how a company should retain its records; rather, it outlines which pieces of data are to be stored as well as the storage duration.

    The Database Breach Notification Act requires companies, in the event of a data breach containing personally identifiable information, to notify all California residents whose information was stored on the compromised system at the time of the event, according to Gregory Manter. He further explains that this act is only California legislation. However, it does affect "any person or business that conducts business in California, and that owns or licenses computerized data that includes personal information," regardless of where the compromised data is located. This means any business that maintains even limited interactions with California residents will find itself subject to the Act's provisions.

    Security Policies

    All companies must work in accordance with the appropriate city, county, state, and federal laws. One way to ensure that a company is legally compliant is to enforce security policies that adhere to the appropriate legislation in the area or areas it serves. These types of policies need to be mandated by a company's Security Officer. For smaller companies, these policies need to come from executives, directors, and owners.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for December 9-15, 2012

    - by Bob Rhubart
    You click, we listen. The following list reflects the Top 10 most popular items posted on the OTN ArchBeat Facebook page for the week of December 9-15, 2012.

    1. DevOps Basics II: What is Listening on Open Ports and Files – WebLogic Essentials | Dr. Frank Munz
    "Can you easily find out which WebLogic servers are listening to which port numbers and addresses?" asks Dr. Frank Munz. The good doctor has an answer—and a tech tip.

    2. Using OBIEE against Transactional Schemas Part 4: Complex Dimensions | Stewart Bryson
    "Another important entity for reporting in the Customer Tracking application is the Contact entity," says Stewart Bryson. "At first glance, it might seem that we should simply build another dimension called Dim – Contact, and use analyses to combine our Customer and Contact dimensions along with our Activity fact table to analyze Customer and Contact behavior."

    3. SOA 11g Technology Adapters – ECID Propagation | Greg Mally
    "Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few," says Oracle Fusion Middleware A-Team member Greg Mally. "Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective." Greg's post focuses on technical tips for dealing with one of these challenges.

    4. Podcast: DevOps and Continuous Integration
    In Part 1 of a 3-part program, panelists Tim Hall (Senior Director of product management for Oracle Enterprise Repository and Oracle's Application Integration Architecture), Robert Wunderlich (Principal Product Manager for Oracle's Application Integration Architecture Foundation Pack) and Peter Belknap (Director of product management for Oracle SOA Integration) discuss why DevOps matters and how it changes development methodologies and organizational structure.

    5. Good To Know - Conflicting View Objects and Shared Entity | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis shares his thoughts -- and a sample application -- dealing with an "interesting ADF behavior" encountered over the weekend.

    6. Cloud Deployment Models | B. R. Clouse
    Looking out for the cloud newbies... "As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective," says B. R. Clouse. "This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes."

    7. Service governance morphs into cloud API management | David Linthicum
    "When building and using clouds, the ability to manage APIs or services is the single most important item you can provide to ensure the success of the project," says David Linthicum. "But most organizations driving a cloud project for the first time have no experience handling a service-based architecture and don't see the need for API management until after deployment. By then, it's too late."

    8. Oracle Fusion Middleware Security: Password Policy in OAM 11g R2 | Rob Otto
    Rob Otto continues the Oracle Fusion Middleware A-Team "Oracle Access Manager Academy" series with a detailed look at OAM's ability to support "a subset of password management processes without the need to use Oracle Identity Manager and LDAP Sync."

    9. Understanding the JSF Lifecycle and ADF Optimized Lifecycle | Steven Davelaar
    Could you call that a surprise ending? Oracle WebCenter & ADF Architecture Team (A-Team) member Steven Davelaar learned a lot more than he expected while creating a UKOUG presentation entitled "What you need to know about JSF to be successful with ADF."

    10. Expanding on requestaudit - Tracing who is doing what...and for how long | Kyle Hatlestad
    "One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing," says Oracle Fusion Middleware A-Team architect Kyle Hatlestad. Get up close and technical in his post.

    Thought for the Day

    "There is no code so big, twisted, or complex that maintenance can't make it worse." — Gerald Weinberg

    Source: SoftwareQuotes.com

    Read the article

  • Cloud Apps News @#OOW12

    - by Natalia Rachelson
    All eyes were on Oracle this past week, and the news cycle was in full swing. What better time to make some key announcements that were guaranteed to create buzz... and so we did. The name of the game was Cloud! Here are the key Cloud announcements for Apps: Fusion Tap, which enables mobility across all Cloud Apps; HCM customer momentum in the Cloud; and our very first ERP Cloud Services customer.

    Oracle Unveils Oracle Fusion Tap for the iPad
    Oracle Fusion Tap - Productivity Amplified Anywhere, Anytime

    "Both the enterprise and technology providers must recognize the need to innovate and adapt for the increasing mobility of the workforce - not just for sales teams, but across the organization," said Carter Lusher, Research Fellow and Chief Analyst of Enterprise Applications Ecosystem, Ovum. "A mobile application that quickly and powerfully allows employees to make connections, analyze data, and complete activities at any time and wherever they may be located drives new levels of business value and enhances efficiency. Frankly, mobile access is no longer a 'nice to have' but a 'must have.'"

    "The mobile workforce is a business reality, and Oracle Fusion Tap is an example of how Oracle delivers mobile and cloud innovations that fundamentally improve productivity and how we work," said Chris Leone, Senior Vice President of Application Development, Oracle. "With Oracle Fusion Tap users will have an all-in-one, easily extensible app that puts mission-critical data and colleague connection at their fingertips."

    The entire release is available here: http://www.oracle.com/us/corporate/press/1855392

    Customers Live on Oracle Fusion Human Capital Management
    Oracle HCM Cloud Service Helps Power HR's Contribution to the Business

    More than 25 of the 100-plus customers who have selected Oracle Fusion Human Capital Management (HCM) are already live. Ardent Leisure, Peach Aviation, Toshiba Medical Systems, and Zillow have deployed Oracle HCM Cloud Service and are using it to transform their HR operations. They join companies such as Principal Financial Group and Elizabeth Arden, who are already using Oracle HCM Cloud Service to help manage international growth and deliver pervasive, role-based, configurable solutions to their employees. With these recent go-lives, Oracle takes a leading position in successfully bringing live HCM customers in the cloud.

    "As a technology company, Zillow looked to a partner who could scale with us. Zillow has gone live on Oracle HCM Cloud Service, which will give us the ability to automate and streamline HR operations for our employees in the near future," said Sarah Bilton, Senior Director HR, Zillow.

    Read the entire release here: http://www.oracle.com/us/corporate/press/1859573

    Lending Club Selects Oracle ERP Cloud Service to Help Increase Insight and Efficiencies
    Oracle ERP Cloud Service Provides an Open Architecture, Best-of-Breed Decision-Making, and Scalability in the Cloud

    Lending Club, the leading platform for investing in and obtaining personal loans, has selected Oracle ERP Cloud Service to help improve decision-making and workflow, implement robust reporting, and take advantage of the inherent scalability and cost savings provided by the cloud. With more than 76,000 borrowers and 90,000 investors, Lending Club utilizes technology and innovation to reduce the cost of traditional banking and offer borrowers better rates and investors better returns. After an extensive search, Lending Club selected Oracle ERP Cloud Service due to the breadth and depth of capabilities and ongoing innovation of Oracle ERP Cloud Service, as well as Oracle's open architecture, industry leadership, and commitment to partners.

    "Lending Club is an innovative, data-intensive, high-growth company and we needed a solution and partner that could match us," said Carrie Dolan, CFO, Lending Club. "We conducted a thorough review of our options, and Oracle ERP Cloud Service was the clear winner in terms of capabilities and business value as well as commitment to us as a customer."

    Read the entire release here: http://www.oracle.com/us/corporate/press/1859020

    Read the article

  • Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting Team

    - by JuergenKress
    In the last 12 years Link Consulting has built its presence in specific areas such as governance and architecture, in terms of practices and methodologies, products, know-how, and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience. It combines our architecture management solution with OER to deliver a product specialized for SOA Governance, bringing together the best of two worlds in a solution that enables SOA Governance projects, initiatives, and programs.

    Enterprise Architecture Management System

    The Enterprise Architecture Management System (EAMS) is an automation-based solution that enables the efficient management of enterprise architectures. The solution uses configured enterprise repositories and takes advantage of their features to provide automation capabilities to users. EAMS provides capabilities to create, customize, and analyze repository data, architectural blueprints, reports, and analytic charts.

    Oracle Enterprise Repository

    Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of OER include:

    - Asset Management
    - Asset Lifecycle Management
    - Usage Tracking
    - Service Discovery
    - Version Management
    - Dependency Analysis
    - Portfolio Management

    EAMS - OER Edition

    The solution takes the advantages and features of both products and combines them in a symbiotic tool that enhances the quality of SOA Governance initiatives and programs. EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations, and the assets in OER and other related tools, catalogs, and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts, and queries.

    The SOA Blueprints

    The solution comes with a set of predefined architectural representations that help the organization better perceive their SOA landscape. More blueprints can easily be created to accommodate the organization's needs in terms of detail, audience, and metadata.

    Charts & Dashboards

    The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets.

    Time Based Visualization

    All representations are time-bound, and with EAMS - OER you can truly govern SOA with a complete view of the past, present, and future. The solution delivers gap analysis and a project-oriented approach, taking into consideration the As-Was, As-Is, and To-Be. Differentiating factors of the time-based visualization:

    - Extensive automation and maintenance of architectural representations
    - Organization-wide solution; easy access and navigation to and between all architectural artifacts and representations
    - Flexible meta-model, customization, and extensibility capabilities
    - Lifecycle management and enforcement of the time dimension over all repository content
    - Profile-based customization
    - Comprehensive visibility
    - Architectural alignment
    - Friendly and striking user interfaces

    For more information on EAMS visit us here. For more information on SOA visit us here.

    SOA & BPM Partner Community

    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum

    Technorati Tags: Link Consulting, OER, OSR, SOA Governance, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article

  • The Best Data Integration for Exadata Comes from Oracle

    - by maria costanzo
    Oracle Data Integrator and Oracle GoldenGate offer unique and optimized data integration solutions for Oracle Exadata. For example, customers that choose to feed their data warehouse or reporting database with near real-time data throughout the day can do so without decreasing the performance or availability of source and target systems. And if you ask why real-time, the short answer is: in today's fast-paced, always-on world, business decisions need to use more relevant, timely data to be able to act fast and seize opportunities. A longer response to the "why real-time" question can be found in a related blog post.

    If we look at the solution architecture, as shown in the diagram below, Oracle Data Integrator and Oracle GoldenGate are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components.

    Oracle Data Integrator (ODI) is the best bulk data loading solution for Exadata. ODI is the only ETL platform that can leverage the full power of Exadata, integrate directly on the Exadata machine without any additional hardware, and by far provides the simplest setup and fastest overall performance on an Exadata system. We regularly see customers achieve a 5-10x performance boost when they move their ETL to ODI on Exadata; for some companies the gain is even higher. For example, a large insurance company ran a proof of concept comparing ODI against a traditional ETL tool (one of the market leaders) on Exadata. The same process that took 5 hours and 11 minutes to complete with the competing ETL product took 7 minutes and 20 seconds with ODI: 42 times faster when running on Exadata. This shows that Oracle's own data integration offering helps you get the most out of your Exadata investment with a truly optimized solution.

    GoldenGate is the best solution for streaming data from heterogeneous sources into Exadata in real time. Oracle GoldenGate can also be used together with Data Integrator for hybrid use cases that demand non-invasive capture and high-speed real-time replication. Oracle GoldenGate captures real-time data feeds from heterogeneous sources non-invasively and delivers them to the staging area on the target Exadata system. ODI runs directly on Exadata, using the database engine's power to perform in-database transformations (see the sketch at the end of this excerpt). Enterprise Data Quality is integrated with Oracle Data Integrator and enables ODI to load trusted data into the data warehouse tables. Only Oracle can offer all these technical benefits wrapped into a single intelligent data warehouse solution that runs on Exadata. Compared to traditional ETL with add-on CDC, this solution offers:

    - Non-invasive data capture from heterogeneous sources, avoiding any performance impact on the source
    - No mid-tier; set-based transformations use the power of the database
    - Mini-batches throughout the day, or bulk processing nightly, which means maximum availability for the data warehouse
    - An integrated solution with Enterprise Data Quality that enables leveraging trusted data in the data warehouse

    In addition to Starwood Hotels and Resorts, Morrison Supermarkets, the United Kingdom's fourth-largest food retailer, has seen the power of this solution for its new BI platform and shared its story with us. Morrisons needed to analyze data across a large number of manufacturing, warehousing, retail, and financial applications with the goal of achieving a single view into operations for improved customer service. The retailer deployed Oracle GoldenGate and Oracle Data Integrator to bring new data into Oracle Exadata in near real-time and replicate the data into reporting structures within the data warehouse, extending visibility into operations. Using Oracle's data integration offering for Exadata, Morrisons produced financial reports in seconds, rather than minutes, and improved staff productivity and agility. You can read more about Morrisons' success story here and hear from Starwood here. From an Irem Radzik article.
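    The "no mid-tier; set-based transformations" point above is the crux of the ELT approach described here: instead of streaming rows through a separate ETL server, the tool generates set-based SQL that executes inside the target database. A minimal sketch of that pattern over JDBC; the staging and target tables (stg_orders, dw_order_fact), their columns, and the connection details are all hypothetical, not from the article:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.Statement;

      public class SetBasedLoad {
          public static void main(String[] args) throws Exception {
              // Hypothetical connection details; in an ODI/Exadata setup the
              // generated SQL runs inside the target database engine itself.
              try (Connection con = DriverManager.getConnection(
                      "jdbc:oracle:thin:@//exadata-scan:1521/dwh", "dw_user", "dw_pass")) {
                  con.setAutoCommit(false);
                  try (Statement st = con.createStatement()) {
                      // One set-based statement instead of a row-by-row middle
                      // tier: the database aggregates and loads in a single pass.
                      st.executeUpdate(
                          "INSERT /*+ APPEND */ INTO dw_order_fact " +
                          "    (order_day, store_id, total_amount) " +
                          "SELECT TRUNC(o.order_ts), o.store_id, SUM(o.amount) " +
                          "FROM stg_orders o " +   // staging rows fed by GoldenGate
                          "GROUP BY TRUNC(o.order_ts), o.store_id");
                  }
                  con.commit();
              }
          }
      }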

    Read the article

  • The Future of Air Travel: Intelligence and Automation

    - by BobEvans
    Remember those white-knuckle flights through stormy weather where unexpected plunges in altitude result in near-permanent relocations of major internal organs? Perhaps there's a better way, according to a recent Wall Street Journal article: "Pilots of a Honeywell International Inc. test plane stayed on their initial flight path, relying on the company's latest onboard radar technology to steer through the worst of the weather. The specially outfitted Boeing 757 barely shuddered as it gingerly skirted some of the most ferocious storm cells over Fort Walton Beach and then climbed above the rest in zero visibility."

    Or how about the multifaceted check-in process, which might not wreak havoc on liver location but nevertheless makes you wonder if you've been trapped in some sort of covert psychological-stress test? Another WSJ article, called "The Self-Service Airport," says there's reason for hope there as well: "Airlines are laying the groundwork for the next big step in the airport experience: a trip from the curb to the plane without interacting with a single airline employee. At the airport of the near future, 'your first interaction could be with a flight attendant,' said Ben Minicucci, chief operating officer of Alaska Airlines, a unit of Alaska Air Group Inc."

    And in the topsy-turvy world of air travel, it's not just the passengers who've been experiencing bumpy rides: the airlines themselves are grappling with a range of challenges, some beyond their control and some not, that make profitability increasingly elusive in spite of heavy demand for their services. A recent piece in The Economist illustrates one of the mega-challenges confronting the airline industry via a striking set of contrasting and very large numbers: while the airlines pay $7 billion per year to third-party computerized reservation services, the airlines themselves earn a collective profit of only $3 billion per year.

    In that context, the anecdotes above point unmistakably to the future that airlines must pursue if they hope to be able to manage some of the factors outside of their control (e.g., weather) as well as all of those within their control (operating expenses, end-to-end visibility, safety, load optimization, etc.): more intelligence, more automation, more interconnectedness, and more real-time awareness of every facet of their operations.

    Those moves will benefit both passengers and the air carriers, says the WSJ piece on The Self-Service Airport: "Airlines say the advanced technology will quicken the airport experience for seasoned travelers—shaving a minute or two from the checked-baggage process alone—while freeing airline employees to focus on fliers with questions. 'It's more about throughput with the resources you have than getting rid of humans,' said Andrew O'Connor, director of airport solutions at Geneva-based airline IT provider SITA."

    Oracle is attempting to help airlines gain control over these challenges by blending together a range of its technologies into a solution called the Oracle Airline Data Model, which suggests the following steps:

    - To retain and grow their customer base, airlines need to focus on the customer experience.
    - To personalize and differentiate the customer experience, airlines need to effectively manage their passenger data.
    - The Oracle Airline Data Model can help airlines jump-start their customer-experience initiatives by consolidating passenger data into a customer data hub that drives real-time business intelligence and strategic customer insight.
    - Oracle's Airline Data Model brings together multiple types of data that can jump-start your data-warehousing project with rich out-of-the-box functionality.
    - Oracle's Intelligent Warehouse for Airlines brings together the powerful capabilities of Oracle Exadata and the Oracle Airline Data Model to give you real-time strategic insights into passenger demand, revenues, sales channels and your flight network.

    The airline industry aside, the bullet points above offer a broad strategic outline for just about any industry, because the customer experience is becoming pre-eminent in each, and there is simply no way to deliver world-class customer experiences unless a company can capture, manage, and analyze all of the relevant data in real time.

    I'll leave you with two thoughts from the WSJ article about the new in-flight radar system from Honeywell. First, studies show that a single episode of serious turbulence can rack up $150,000 in additional costs for an airline, so it certainly behooves the carriers to gain the intelligence to avoid turbulence as much as possible. And second, it's back to that top-priority customer-experience thing and the value that ever-increasing levels of intelligence can deliver. As the article says: "In the cabin, reporters watched screens showing the most intense parts of the nearly 10-mile wide storm, which churned some 7,000 feet below, in vibrant red and other colors. The screens also were filled with tiny symbols depicting likely locations of lightning and hail, which can damage planes and wreak havoc on the nerves of white-knuckle flyers."

    (Bob Evans is senior vice-president, communications, for Oracle.)

    Read the article

  • Get Fanatical About Your Followers

    - by Mike Stiles
    In the fourth of our series of discussions with Aberdeen's Trip Kucera, we touch on what fans of your brand have come to expect in exchange for their fandom.

    Spotlight: Around the Oracle Social office, we live for football. So when we think of a true "fan" of a brand, something on the level of a football fan is what comes to mind. But are brands trying to invest fans on that same level?

    Trip: Yeah, if you're a football fan, this is definitely your time of year. And if you've been to any NFL games recently, especially if you hadn't been for a few years previously, you may have noticed that from the cup holders to in-stadium Wi-Fi, there's an increasing emphasis being placed on "fan-focused" accommodations. That's what they're known as in the stadium business.

    Spotlight: How are brands doing in that fan-focused arena?

    Trip: Remember, fan is short for "fanatical." Brands can definitely learn from the way teams have become fanatical about their fans, or in the social media world, their followers. Many companies consider a segment of their addressable social audience as true fans; I've even heard the term "super-fans" used. So just as fans know and can tell you nearly everything about their favorite team, our research shows that there's a lot of value in getting to know your social audience, your followers, at a deeper level.

    Spotlight: So did your research show there's a lot to be gained by making fandom a two-way street?

    Trip: Aberdeen's new social relationship management research suggests that companies should develop capabilities to better analyze their social audience at a more granular level. Countless "ripped from the headlines" examples, from "United Breaks Guitars" to the most recent British Airways social fiasco we talked about a few weeks ago, show how social can magnify the impact of a single customer voice.

    Spotlight: So how do the companies who are executing social most successfully do that?

    Trip: Leaders, which are the top-performing companies in Aberdeen's study, are showing the value of identifying and categorizing your social audience. You should certainly treat every customer as if they have 10,000 followers, because they just might, but you can also proactively engage with high-value customers and high-value influencers. Getting back to the football analogy, it's like how teams strive to give every guest a great experience, but they really roll out the red carpet for those season ticket and luxury box holders.

    Spotlight: I'm not allowed in luxury boxes, so you'll have to tell me what that's like. But what is the brand equivalent of rolling out the red carpet?

    Trip: Leaders are nearly three times more likely than Followers to have a process in place that identifies key social influencers for engagement, and more than twice as likely to identify customer advocates for social outreach. This is the kind of knowledge that gives companies the ability to better target social messaging and promotions, like we talked about in our last discussion, as well as a basis for understanding how to measure the impact of their social media programs. I'll give you an example. I hosted an event at one of my favorite restaurants recently. I had mentioned them in a Tweet several weeks before the event, and on the day of the event, they Tweeted out that they were looking forward to seeing me that evening for the event. It's a small thing, but it had a big impact and I'd certainly go back as a result.

    Spotlight: So what specifically can brands use and look at to determine where their potential super-fans are?

    Trip: Social graph analysis, which looks at both demographic/psychographic trends and behavioral connections, can surface important brand value. Aberdeen's PR and Brand Management research indicated that top-performing companies are more than three times more likely than Followers both to determine demographic trends through social listening (44% vs. 13%) and to identify meaningful customer segments through social (44% vs. 12%). This kind of brand-level insight can complement and enrich traditional market research. But perhaps even more importantly, it can serve as an early warning system for customer experience failures.

    @mikestiles
    Photo: freedigitalphotos.net

    Read the article

  • Profiling Silverlight Applications after installing Visual Studio 2010 Service Pack 1

    - by mbcrump
    Introduction. Now that the dust has settled and everyone has downloaded and installed Visual Studio 2010 Service Pack 1, it's time to talk about a new feature that will help Silverlight developers profile their applications. Let's take a look at what the official documentation says about it:

    Performance Wizard for Silverlight (taken from the VS2010 SP1 KB): Visual Studio 2010 SP1 enables you to tune Silverlight application performance by profiling the code. A traditional code profiler cannot tune the rendering performance for Silverlight applications. Many higher-level profilers are added to Visual Studio 2010 SP1 so that you can better determine which parts of the application consume time.

    So, how do you do it? After you finish installing VS2010 SP1, make sure the update took by going to Help -> About. You should see SP1Rel under Visual Studio 2010, as shown below. Now that we have verified you are on the most current release, let's load up a Silverlight application. I'm going to take my hobby Silverlight project that I created a month or so ago. The reason I'm picking this project is that I didn't focus much on performance; it was just built for fun and to see what I could do with Silverlight. I believe this makes it the perfect application to profile.

    After the project is loaded, click on Analyze, then Launch Performance Wizard. Go ahead and click on CPU Sampling (recommended). You will notice that it asks which application to target. By default, it will select the .Web project in a Silverlight application. Go ahead and leave the default Web project checked. We are going to leave the client as Internet Explorer. Now, go ahead and click Finish.

    Now your Silverlight application will launch. While your application is running, you will see the following inside of Visual Studio 2010. Here is where you will need to attach your Silverlight application to the web application that is currently being profiled. Simply click on the Attach/Detach button below and find your application to attach to the profiler. In my case, I am using IE8 and could find it by the title.

    After you close your browser, you will notice it generated a report. These files end with .VSP. If you click on the .VSP, you will see it generated the following report. We could turn off "Just My Code", but it may pick up things that we didn't want to profile, as shown below. One other feature to note is that you may want to export the data to a CSV or XML file. You can do that from the toolbar by clicking the button highlighted below.

    Conclusion. The profiler for Silverlight is a great addition to an already great product. So before you ship a Silverlight application, run it through the profiler and see what comes up. Since it's included and free, I can't see a reason not to do this. Thanks again for reading, and I hope you subscribe to my blog or follow me on Twitter for more Silverlight/WP7 fun.

    Read the article

  • Using Content Analytics for More Effective Engagement

    - by Kellsey Ruppel
    Using Content Analytics for More Effective Engagement: Turning High-Volume Content into Templates for Success. By Mitchell Palski, Oracle WebCenter Sales Consultant.

    Many organizations use Oracle WebCenter Portal to develop these basic types of portals:

    - Intranet portals used for collaboration, employee self-service, and company communication
    - Extranet portals used by customers and partners for self-service and support
    - Team collaboration portals that allow users to share documents and content, track activity, and engage in discussions

    Portals are intended to provide a personalized, single point of interaction with web-based applications and information. The user experiences that a portal is capable of displaying should be relevant to an individual user or class of users (a group or role). The components of a portal that would vary based on a user's identity include:

    - Web content such as images, news articles, and on-screen instruction
    - Social tools such as threaded discussions, polls/surveys, and blogs
    - Document management tools to upload, download, and edit files
    - Web applications that present data visualizations and data entry modules

    These collections of content, tools, and applications make up valuable workspaces. The challenge a development team may have is defining which combinations are the most effective for its users. No one wants to create and manage a workspace that goes unused or (even worse) that is used but is ineffective. Oracle WebCenter Portal gives you the capabilities not only to rapidly develop variations of portals, but also to identify which portals are the most effective and should be re-used throughout an enterprise.

    Capturing Portal Analytics. Oracle WebCenter Portal provides an analytics service that allows administrators and business users to track and analyze portal usage. These analytics are captured in the form of usage tracking metrics, behavior tracking, and user profile correlation. The out-of-the-box task reports that come with Oracle WebCenter Portal include: WebCenter Portal Traffic, Page Traffic, Login Metrics, Portlet Traffic, Portlet Response Time, Portlet Instance Traffic, Portlet Instance Response Time, Search Metrics, Document Metrics, Wiki Metrics, Blog Metrics, Discussion Metrics, Portal Traffic, and Portal Response Time.

    By determining the usage and behavior tracking metrics that are associated with specific user profiles (including groups and roles), your administrators will be able to identify the components of your solution that are the most valuable. Your first step as an administrator should be to identify the specific pages and/or components that are used most frequently. Next, determine the user(s) or user group(s) that are accessing those high-use elements of a portal. It is also important to look for patterns in high usage and see whether they correlate to a specific schedule (a query of the shape sketched at the end of this excerpt). One of the goals of any development team (especially those following Agile methodologies) should be to develop reusable web components to minimize redundant development. Oracle WebCenter Portal provides the tools to capture the successful workspaces that have already been developed and identified so that they can be reused for similar user demographics.

    Re-using Successful Portals. When creating a new portal in Oracle WebCenter, developers have the option to base that portal on a template that includes:

    - Pre-seeded data such as pages, tools, user roles, and look-and-feel assets
    - Specific sub-sets of page layouts, tools, and other resources to standardize what is added to a portal's pages
    - Any custom components that your team creates during development cycles

    Once you have identified a successful workspace and its most valuable components, leverage Oracle WebCenter's ability to turn that custom portal into a portal template. By creating a template from your already successful portal, you are empowering your enterprise by providing a starting point for future initiatives. Your new projects, new teams, and new web pages can benefit from lessons learned and adjustments that have already been made to optimize user experiences, instead of starting from scratch.

    For a complete explanation of how to work with portal templates, be sure to read the Fusion Middleware documentation available online.

    Read the article

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "Real Programmers Don't Use VSS", and so on. BUT, then in the workplace I've worked at two companies: one, a very well known public-facing high-traffic website, and the other a high-end Financial Services web-based hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions.. OLD LIKE YOU WOULDN'T BELIEVE.. which leads me to the conclusion that if it works, "LEAVE IT", and that maybe there's some value in old technology? At least enough value to overrule a rewrite!? Right??

    Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks" they would beg to differ, most likely give me this look like I don't know -ish, and I'd never gain back their respect and my credibility (well, that'll be hard to blow.. lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, yet does not agree with any benefits from having source control integrated, and still uses the infamous Visual Source Safe.

    I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er, and from a purist standpoint I see the beauty in it (I need a bit more out of my source control, but it surely suffices). So, why are such smarties not running away from Visual Source Safe? Surely if it was so bad, it would have been realized by now, and I would not be sitting here with this simple old check-in, check-out, version-resistant, label-intensive system. But here I am... I would love to drop an argument that would be the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS.

    UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS.

    UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual Source Safe, it might be time to migrate over to TFS. I was able to convince them and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless: I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again painless, except it took 16 hours to migrate 5 years' worth of everything.. and now we're on TFS.. much more integrated.. much cooler.. so all in all, VSS served its purpose for years without a hiccup. There were no horror stories, and Visual Source Safe as source control worked just fine. So to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS (it's really not super difficult, and a new "wizard"-type converter is due out any day now, so migrating should be painless). But from my experience, it worked just fine and got the job done.

    Read the article

  • Objective C - displaying data in NSTextView

    - by Leo
    Hi, I'm having difficulties displaying data in a TextView in iPhone programming. I'm analyzing incoming audio data (from the microphone). To do that, I create an object "analyzer" from my SignalAnalyzer class, which performs analysis of the incoming data. What I would like to do is display each new piece of incoming data in a TextView in real time. So when I push a button, I create the object "analyzer", which analyzes the incoming data. Each time there is new data, I need to display it on the screen in a TextView.

    My problem is that I'm getting an error because (I think) I'm trying to send a message to the parent class (the one taking care of displaying stuff in my TextView: it has a TextView instance variable linked in Interface Builder). What should I do in order to tell my parent class what it needs to display? Or how should I design my classes so that new data is displayed automatically? Thank you for your help.

    PS: Here is my error:

    2010-04-19 14:59:39.360 MyApp[1421:5003] void WebThreadLockFromAnyThread(), 0x14a890: Obtaining the web lock from a thread other than the main thread or the web thread. UIKit should not be called from a secondary thread.
    2010-04-19 14:59:39.369 MyApp[1421:5003] bool _WebTryThreadLock(bool), 0x14a890: Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now...
    Program received signal: "EXC_BAD_ACCESS".
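    The log is the real clue: the analyzer is calling UIKit from its audio-processing thread, and UIKit may only be touched from the main thread. The usual fix is twofold: give the analyzer a delegate or listener so it never references the view directly, and have the view-owning class marshal each update onto the main thread (on iOS, via performSelectorOnMainThread:withObject:waitUntilDone: or by dispatching a block onto the main queue). Since the other code on this page is Java, here is a sketch of the same pattern in Java/Swing; the Listener interface and every name in it are illustrative, not from the question:

      import javax.swing.JTextArea;
      import javax.swing.SwingUtilities;

      // Analogue of the poster's SignalAnalyzer: it produces data on a worker
      // thread and reports it through a listener instead of touching any UI.
      class SignalAnalyzer {
          interface Listener { void onNewData(String data); }

          private final Listener listener;

          SignalAnalyzer(Listener listener) { this.listener = listener; }

          void start() {
              new Thread(() -> {
                  for (int i = 0; ; i++) {
                      String sample = "sample " + i;  // stand-in for real audio analysis
                      listener.onNewData(sample);     // note: still on the worker thread
                      try { Thread.sleep(500); } catch (InterruptedException e) { return; }
                  }
              }).start();
          }
      }

      // The view-owning class marshals every update onto the UI thread before
      // touching the text view -- the step the crash log says is missing.
      class AnalyzerScreen {
          private final JTextArea textView = new JTextArea();

          void begin() {
              new SignalAnalyzer(data ->
                  SwingUtilities.invokeLater(() -> textView.append(data + "\n"))
              ).start();
          }
      }

    The shape is identical in Objective-C: the parent class adopts a small delegate protocol, and its implementation forwards each string to the main thread before appending it to the text view.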

    Read the article

  • Tiff Analyzer

    - by Kevin
    I am writing a program to convert some data, mainly a bunch of Tiff images. Some of the Tiffs seem to have a minor problem with them. They show up fine in some viewers (Irfanview, client's old system) but not in others (client's new system, Windows Picture and Fax Viewer). I have manually looked at the binary data and all the tags seem OK. Can anyone recommend an app that can analyze it and tell me what, if anything, is wrong with it? Also, for clarity's sake, I'm only converting the data about the images, which is stored separately in a database, and copying the images; I'm not editing the images myself, so I'm pretty sure I'm not messing them up.

    UPDATE: For anyone interested, here are the tags from a good and a bad file:

    BAD
    Tag                       Type      Length  Value
    256 Image Width           SHORT     1       1652
    257 Image Length          SHORT     1       704
    258 Bits Per Sample       SHORT     1       1
    259 Compression           SHORT     1       4
    262 Photometric           SHORT     1       0
    266 Fill Order            SHORT     1       1
    273 Strip Offsets         LONG      1       210 (d2 Hex)
    274 Orientation           SHORT     1       3
    277 Samples Per Pixel     SHORT     1       1
    278 Rows Per Strip        SHORT     1       450
    279 Strip Byte Counts     LONG      1       7264 (1c60 Hex)
    282 X Resolution          RATIONAL  1       <194 200 / 1 = 200.000
    283 Y Resolution          RATIONAL  1       <202 200 / 1 = 200.000
    284 Planar Configuration  SHORT     1       1
    296 Resolution Unit       SHORT     1       2

    GOOD
    Tag                       Type      Length  Value
    254 New Subfile Type      LONG      1       0 (0 Hex)
    256 Image Width           SHORT     1       1193
    257 Image Length          SHORT     1       788
    258 Bits Per Sample       SHORT     1       1
    259 Compression           SHORT     1       4
    262 Photometric           SHORT     1       0
    266 Fill Order            SHORT     1       1
    270 Image Description     ASCII     45      256
    273 Strip Offsets         LONG      1       1118 (45e Hex)
    274 Orientation           SHORT     1       1
    277 Samples Per Pixel     SHORT     1       1
    278 Rows Per Strip        LONG      1       788 (314 Hex)
    279 Strip Byte Counts     LONG      1       496 (1f0 Hex)
    280 Min Sample Value      SHORT     1       0
    281 Max Sample Value      SHORT     1       1
    282 X Resolution          RATIONAL  1       <301 200 / 1 = 200.000
    283 Y Resolution          RATIONAL  1       <309 200 / 1 = 200.000
    284 Planar Configuration  SHORT     1       1
    293 Group 4 Options       LONG      1       0 (0 Hex)
    296 Resolution Unit       SHORT     1       2
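    Comparing the two dumps, the differences that stand out are Orientation (3, i.e. rotated 180 degrees, in the bad file versus the normal 1 in the good one) and the bad file's missing Group 4 Options tag (293) even though both use Compression = 4; whether either of those explains the picky viewers is speculation, but they are the natural places to start. For spot-checking files in bulk without a dedicated analyzer, here is a minimal sketch of a first-IFD tag dumper; it prints raw entry values only (no decoding of out-of-line RATIONAL or ASCII data), and note that in a big-endian file an inline SHORT sits in the high bytes of the 4-byte value field:

      import java.io.RandomAccessFile;
      import java.nio.ByteBuffer;
      import java.nio.ByteOrder;

      public class TiffTagDump {
          public static void main(String[] args) throws Exception {
              try (RandomAccessFile f = new RandomAccessFile(args[0], "r")) {
                  // Header: byte-order mark ("II" or "MM"), magic 42, then the
                  // 4-byte offset of the first IFD.
                  byte[] header = new byte[8];
                  f.readFully(header);
                  ByteOrder order = header[0] == 'I' ? ByteOrder.LITTLE_ENDIAN
                                                     : ByteOrder.BIG_ENDIAN;
                  long ifdOffset =
                      ByteBuffer.wrap(header, 4, 4).order(order).getInt() & 0xFFFFFFFFL;

                  // IFD: a 2-byte entry count followed by 12-byte entries.
                  f.seek(ifdOffset);
                  byte[] countBytes = new byte[2];
                  f.readFully(countBytes);
                  int count = ByteBuffer.wrap(countBytes).order(order).getShort() & 0xFFFF;

                  byte[] entries = new byte[count * 12];
                  f.readFully(entries);
                  ByteBuffer eb = ByteBuffer.wrap(entries).order(order);
                  for (int i = 0; i < count; i++) {
                      int tag  = eb.getShort() & 0xFFFF;    // e.g. 274 = Orientation
                      int type = eb.getShort() & 0xFFFF;    // 3 = SHORT, 4 = LONG, 5 = RATIONAL
                      long len = eb.getInt() & 0xFFFFFFFFL;
                      long val = eb.getInt() & 0xFFFFFFFFL; // inline value or file offset
                      System.out.printf("tag %5d  type %2d  count %6d  value/offset %d%n",
                                        tag, type, len, val);
                  }
              }
          }
      }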

    Read the article

  • HQL Query Not Running

    - by Sarang
    select new UpdateCountDataBean(count(elements(am.actionId)) as noOfUpdates, am.pname as name)
    from ActivityMaster am
    group by am.pname

    UpdateCountDataBean is a bean class I created myself, while ActivityMaster is a POJO entity class. Running the query throws:

    java.lang.NullPointerException
        at org.hibernate.hql.ast.tree.MethodNode.handleElements(MethodNode.java:158)
        at org.hibernate.hql.ast.tree.MethodNode.resolveCollectionProperty(MethodNode.java:109)
        at org.hibernate.hql.ast.tree.CollectionFunction.resolve(CollectionFunction.java:22)
        at org.hibernate.hql.ast.HqlSqlWalker.processFunction(HqlSqlWalker.java:835)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.collectionFunction(HqlSqlBaseWalker.java:2558)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.aggregateExpr(HqlSqlBaseWalker.java:2907)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.count(HqlSqlBaseWalker.java:2483)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectExpr(HqlSqlBaseWalker.java:1971)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.aliasedSelectExpr(HqlSqlBaseWalker.java:2057)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.constructor(HqlSqlBaseWalker.java:2226)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectExpr(HqlSqlBaseWalker.java:1952)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectExprList(HqlSqlBaseWalker.java:1825)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectClause(HqlSqlBaseWalker.java:1394)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:553)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:281)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:229)
        at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:228)
        at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:160)
        at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:111)
        at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:77)
        at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:56)
        at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:72)
        at org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:133)
        at org.hibernate.impl.AbstractSessionImpl.createQuery(AbstractSessionImpl.java:112)
        at org.hibernate.impl.SessionImpl.createQuery(SessionImpl.java:1623)
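    The top frame, MethodNode.handleElements, points straight at the elements() call: in Hibernate 3.x, elements() nested inside an aggregate such as count() in the select clause is a known trouble spot for the HQL AST translator, and the "as noOfUpdates" / "as name" aliases are not valid inside a constructor expression anyway. The usual workaround is to join the collection explicitly and count the joined alias. A sketch, assuming actionId is a mapped collection on ActivityMaster and that UpdateCountDataBean has a matching (Long, String) constructor:

      import java.util.List;

      import org.hibernate.Session;

      public class UpdateCounts {
          // UpdateCountDataBean is the poster's own bean (same package assumed).
          @SuppressWarnings("unchecked")
          public static List<UpdateCountDataBean> fetch(Session session) {
              // Join the collection explicitly instead of count(elements(...)).
              // The left join keeps ActivityMaster rows whose collection is
              // empty: they come back with a count of 0, since count(act)
              // ignores nulls. An inner join would drop them entirely.
              String hql =
                  "select new UpdateCountDataBean(count(act), am.pname) " +
                  "from ActivityMaster am left join am.actionId act " +
                  "group by am.pname";
              return session.createQuery(hql).list();
          }
      }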

    Read the article

  • What's wrong with this HQL query?

    - by ManBugra
    Did I encounter a Hibernate bug, or do I have an error I don't see?

    select enty.number
    from EntityAliasName enty
    where enty.myId in (
        select cons.myId from Consens cons
        where cons.number in (
            select ord.number from Orders ord
            where ord.customer = :customer
              and ord.creationDate < (
                  select max(ord.creationDate) from Orders ord
                  where ord.customer = :customer)))

    What I get is the following:

    org.hibernate.util.StringHelper.root(StringHelper.java:257)
    Caused by: java.lang.NullPointerException
        at org.hibernate.util.StringHelper.root(StringHelper.java:257)
        at org.hibernate.persister.entity.AbstractEntityPersister.getSubclassPropertyTableNumber(AbstractEntityPersister.java:1391)
        at org.hibernate.persister.entity.BasicEntityPropertyMapping.toColumns(BasicEntityPropertyMapping.java:54)
        at org.hibernate.persister.entity.AbstractEntityPersister.toColumns(AbstractEntityPersister.java:1367)
        at org.hibernate.hql.ast.tree.FromElement.getIdentityColumn(FromElement.java:320)
        at org.hibernate.hql.ast.tree.IdentNode.resolveAsAlias(IdentNode.java:154)
        at org.hibernate.hql.ast.tree.IdentNode.resolve(IdentNode.java:100)
        at org.hibernate.hql.ast.tree.FromReferenceNode.resolve(FromReferenceNode.java:117)
        at org.hibernate.hql.ast.tree.FromReferenceNode.resolve(FromReferenceNode.java:113)
        at org.hibernate.hql.ast.HqlSqlWalker.resolve(HqlSqlWalker.java:854)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.propertyRef(HqlSqlBaseWalker.java:1172)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.propertyRefLhs(HqlSqlBaseWalker.java:5167)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.propertyRef(HqlSqlBaseWalker.java:1133)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectExpr(HqlSqlBaseWalker.java:1993)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectExprList(HqlSqlBaseWalker.java:1932)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectClause(HqlSqlBaseWalker.java:1476)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:580)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:288)
        at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:231)
        at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:254)
        at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:185)
        at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:136)
        at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:101)
        at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:80)
        at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:94)
        at org.hibernate.impl.SessionFactoryImpl.checkNamedQueries(SessionFactoryImpl.java:484)
        at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:394)
        at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1341)

    Using: Hibernate 3.3.2.GA / PostgreSQL
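    Before calling it a bug, two things in the query deserve suspicion: the alias ord is declared twice (once in the middle subquery and again inside the max() subquery), and triple-nested in (...) subqueries are exactly where the 3.3.x HQL parser is known to fall over with NullPointerExceptions like this one while resolving select-clause paths such as enty.number. A sketch of an equivalent query with unique aliases and the two in clauses flattened into theta-style joins (add distinct if the joins fan out into duplicate numbers):

      import java.util.List;

      import org.hibernate.Session;

      public class LatestOrderNumbers {
          public static List<?> fetch(Session session, Object customer) {
              // Unique aliases throughout; only the max() subquery stays nested.
              String hql =
                  "select enty.number " +
                  "from EntityAliasName enty, Consens cons, Orders o " +
                  "where enty.myId = cons.myId " +
                  "and cons.number = o.number " +
                  "and o.customer = :customer " +
                  "and o.creationDate < (" +
                  "    select max(o2.creationDate) from Orders o2 " +
                  "    where o2.customer = :customer)";
              return session.createQuery(hql)
                            .setParameter("customer", customer)
                            .list();
          }
      }

    If the rewritten query still dies with the same trace, the next thing to check is the mapping of EntityAliasName.number itself; this NPE also shows up when a select-clause path references a property the entity mapping does not actually declare.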

    Read the article

< Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >