Search Results

Search found 6716 results on 269 pages for 'blackberry enterprise'.


  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic.

    My Approach to Data Mining Projects

    It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer's data in advance. I don't know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest starting with a proof-of-concept (POC) project. A POC takes between 5 and 10 working days, and involves personnel from the customer's site - either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question and the meaning of the data at hand, while the IT expert should be familiar with the structure of the data, know how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert, the most time-consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project.

    If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach, as it establishes a common background for all people involved: an understanding of how the algorithms work, an understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer's SME (i.e. the analyst) and the IT expert assigned to the project learn how to prepare the data in an efficient manner. Together, this knowledge and expertise allow us to focus immediately on the most interesting attributes and to identify any additional, calculated ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and quality of the input data.

    I favor the customer's decision to assign additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems.

    The data overview is performed simultaneously with the data preparation. The logic behind this task is the same - again I show the teams involved the expected results, how to achieve them, and what they mean. This is also done in multiple cycles, as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance: a simple star schema and a simple OLAP cube that will, first of all, simplify data discovery and interpretation of the results, and will also prove useful in the following tasks.
    The presence of the customer's SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of project without the customer's SME.

    After the data preparation, and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is available if the initial training was also performed, because it allows the customer's personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities for deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.

    Even from a POC, the customer receives lots of benefits, all at the sole risk of spending the money and time for a single 5 to 10 day project:
    - The customer learns the basic patterns of fraud and fraud detection
    - The customer learns how to do the entire cycle with their own people, relying on me only for the most complex problems
    - The customer's analysts learn how to perform much more in-depth analyses than they ever thought possible
    - The customer's IT experts learn how to perform data extraction and preparation much more efficiently than they did before
    - All of the attendees of the training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
    - The POC output for a smaller company, or for a subsidiary of a larger company, can actually be considered a finished, production-ready solution
    - It is possible to utilize the results of a POC project done at the subsidiary level as a finished POC project for the entire enterprise

    Typically, the project also results in several important "side effects":
    - Improved data quality
    - Improved employee job satisfaction, as employees are able to proactively contribute to the central knowledge about fraud patterns in the organization
    - Because more minds eventually get involved across the enterprise, the company should expect more and better fraud detection patterns

    After the POC project is completed as described above, the actual project does not need months of engagement from my side. This is possible because of our preference to transfer the knowledge to the customer's employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.
    I usually expect to perform the following tasks:
    - Establish the final infrastructure to measure the efficiency of the deployed models
    - Deploy the models in additional scenarios:
      - Through reports
      - By including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings
    - Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
    - Create smart ETL applications that divert suspicious data for immediate or later inspection
    - I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary while a central system is available as well
    - Of course, for the actual project, I would repeat the data and model preparation as needed

    It is virtually impossible to tell in advance how much time the deployment will take before we decide, together with the customer, exactly what the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still take only an additional 5 to 10 days.

    The approximate timeline for the POC project is as follows:
    - 1-2 days of training
    - 2-3 days for data preparation and data overview
    - 2 days for creating and evaluating the models
    - 1 day for initial preparation of the continuous learning infrastructure
    - 1 day for presentation of the results and discussion of further actions

    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one hour more on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, in my experience, is to restrict the time spent on the project in advance, in agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and of improving the models and predictions over time, without the need for constant engagement with me.
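    To give a flavor of the derived-variable and outlier work mentioned above: the whitepaper itself works in Transact-SQL and the SQL Server data mining tools against real customer data, but purely as an illustration of the idea, here is a minimal, hypothetical sketch (in Java, with made-up transaction amounts) of a crude univariate outlier check by z-score.

        import java.util.Arrays;

        public class OutlierSketch {
            public static void main(String[] args) {
                // Hypothetical transaction amounts; in a real POC these would
                // come from the customer's extracted data.
                double[] amounts = {120.0, 95.5, 130.2, 88.0, 9750.0, 110.3, 101.7};

                double mean = Arrays.stream(amounts).average().orElse(0.0);
                double variance = Arrays.stream(amounts)
                        .map(a -> (a - mean) * (a - mean))
                        .average().orElse(0.0);
                double stdDev = Math.sqrt(variance);

                // Flag anything more than two standard deviations from the
                // mean - a deliberately simple test, just to show the idea.
                for (double a : amounts) {
                    double z = (a - mean) / stdDev;
                    if (Math.abs(z) > 2.0) {
                        System.out.printf("Possible outlier: %.2f (z = %.2f)%n", a, z);
                    }
                }
            }
        }

    Running this flags the 9750.00 transaction. In practice one would compute many such derived indicators per entity, which is exactly the multi-day preparation effort described above.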

    Read the article

  • The challenge of communicating externally with IRM secured content

    - by Simon Thorpe
    I am often asked by customers how they should handle sending IRM-secured documents to external parties. Their concern is that using IRM to secure sensitive information they need to share outside their business is troubled by the inability of third parties to install the software which enables them to gain access to the information. It is a very legitimate question, and one I've had to answer many times in the past 10 years whilst helping customers plan successful IRM deployments.

    The operating system does not provide the required level of content security

    The problem arises from what IRM delivers: persistent security for your sensitive information, wherever it resides and whenever it is in use. Oracle IRM gives customers an array of features that help ensure sensitive information in an IRM document or email is always protected and only accessed by authorized users using legitimate applications. Examples of such functionality are:
    - Control of the clipboard, either by disabling it completely in the opened document or by allowing the cutting and pasting of information between secured IRM documents but not into insecure applications.
    - Protection against programmatic access to the document. Office documents and PDF documents can be accessed by other applications and scripts. With Oracle IRM we have to protect against this to ensure content cannot be leaked by someone writing a simple program.
    - Securing of decrypted content in memory. At some point during the process of opening and presenting a sealed document to an end user, we must decrypt it and give it to the application (Adobe Reader, Microsoft Word, Excel, etc.). This process must be secure so that someone cannot simply get access to the decrypted information.

    The operating system alone just doesn't have the functionality to deliver these types of features. This is why for every IRM technology there must be some extra software installed, and typically this software requires administrative rights to install. The fact is that if you want very strong security and access control over a document you are going to send to someone beyond your network infrastructure, there must be some software to provide that functionality.

    Simple installation with Oracle IRM

    The software used to control access to Oracle IRM sealed content is called the Oracle IRM Desktop. It is a small, free piece of software, roughly 12MB in size. This software delivers functionality for everything a user needs to work with an Oracle IRM solution. It provides the functionality for all formats we support, the storage and transparent synchronization of user rights and, unique to Oracle, the ability to search inside sealed files stored on the local computer. At Oracle we've made every technical effort to ensure that installing this software is as simple as possible. In situations where the user's computer is part of the enterprise, this software is typically deployed using existing technologies such as Systems Management Server from Microsoft or Active Directory Group Policies. However, when sending sealed content externally, you cannot automatically install software on the end user's machine. You need to rely on them to download and install it themselves. Again, we've made every effort for this manual install process to be as simple as we can. Starting with the small download size of the software itself through to the simple installation process, most end users are able to install and access sealed content very quickly.
    You can see for yourself how easily this is done by walking through our free and easy self-service demonstration of using sealed content.

    How to handle objections and ensure there is value

    However, the fact still remains that end users may object to installing the software, or may simply be unable to install it themselves due to lack of permissions. This is often a problem with any technology that requires specialized software to access a new type of document. At Oracle, over the past 10 years, we've learned many ways to get over this barrier of getting software deployed by external users. First, and I would say of most importance, the content MUST have some value to the person you are asking to install software. Without some type of value proposition you are going to find it very difficult to get past objections to installing the IRM Desktop. Imagine if you were going to secure the weekly campus restaurant menu and send this to contractors. Their initial response will be, "why on earth are you asking me to download some software just to access your menu!?". A valid objection... there is no value to the user in doing this. Now consider the scenario where you are sending one of your contractors their employment contract, which contains their address, social security number and bank account details. Are they likely to take 5 minutes to install the IRM Desktop? You bet they are, because there is real value in doing so and they understand why you are doing it. They want their personal information to be securely handled, and a quick download and install of some software is a small task in comparison to dealing with the loss of this information.

    Be clear in communicating this value

    So when sending sealed content to people externally, you must be clear in communicating why you are using an IRM technology and why they need to install some software to access the content. Do not try to avoid the issue; you must be clear and upfront about it. In doing so you will significantly reduce the "I didn't know I needed to do this..." responses and also gain respect for being straightforward. One customer I worked with called me, 6 months after their initial deployment of Oracle IRM, panicking that a partner they had started to share their engineering documents with refused to install any software to access this highly confidential intellectual property. I explained that they had to communicate to the partner why they were doing this. I told them to go back with the statement that "the company takes protecting its intellectual property seriously and has decided to use IRM to control access to engineering documents", and that if the partner didn't respect this decision, they would find another company that would. The result? A few days later the partner had made the Oracle IRM Desktop part of the approved list of software in their company.

    Companies are successful when sending sealed content to third parties

    We have many, many customers who send sensitive content to third parties. Some customers actually sell access to Oracle IRM protected content, and therefore 99% of their users are external to their business; one in particular has sold content to hundreds of thousands of external users. Oracle itself uses the technology to secure M&A documents, payroll data and security assessments which go beyond the traditional enterprise security perimeter.
    Pretty much every company that deploys Oracle IRM will at some point be sending those documents to people outside of the company; these customers must be successful, otherwise Oracle IRM wouldn't be successful. Because our software is used by a wide variety of companies, some of whom use it to sell content, I've often run into people I'm sharing a sealed document with who already have the IRM Desktop installed from accessing content from another company.

    The future

    In summary, I would say that yes, this is a hurdle that many customers are concerned about, but we see much evidence that in practice people leap that hurdle with relative ease, as long as they are good at communicating the value of using IRM and also take measures to ensure end users can easily go through the process of installation. We are constantly developing new ideas to reduce this hurdle, and maybe one day the operating systems will give us rich enough security functionality to need no software installation at all. Until then, Oracle IRM is by far the easiest solution to balance security and usability for your business. If you would like to evaluate it for yourselves, please contact us.

    Read the article

  • Design for complex ATG applications

    - by Glen Borkowski
    Overview

    Needless to say, some ATG applications are more complex than others. Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model. The really complex applications have to support multiple sites, multiple languages, multiple catalogs, multiple currencies, a couple of different development teams, multiple business teams, and a highly complex business model (and processes to go along with it). While it's still important to implement a proper design for simple applications, it's absolutely critical to do this for the complex applications. Why? It's all about time and money. If you are unable to manage your complex applications in an efficient manner, the cost of managing them will increase dramatically, as will the time to get things done (time to market). On the positive side, your competition is most likely in the same situation, so you just need to be more efficient than they are.

    This article is intended to discuss a number of key areas to think about when designing complex applications on ATG. Some of this can get fairly technical, so it may help to get some background first. You can get enough of the required background information from this post. After reading that, come back here and follow along.

    Application Design

    Of all the various types of ATG applications out there, the most complex tend to be the ones in the telecommunications industry - especially the ones which operate in multiple countries. To get started, let's assume that we are talking about an application like that. One that has these properties:
    - Operates in multiple countries - must support multiple sites, catalogs, languages, and currencies
    - The organization is fairly loosely coupled - a single brand, but different businesses across different countries
    - There is some common functionality across all sites in all countries
    - There is some common functionality across different sites within the same country
    - Sites within a single country may have some unique functionality - relative to other sites in the same country
    - Complex product catalog (mostly in terms of bundles, eligibility, and compatibility)

    At this point, I'll assume you have read through the required reading and have a decent understanding of how ATG modules work...

    Code / configuration - assemble into modules

    When it comes to defining your modules for a complex application, there are a number of goals:
    - Divide functionality between the modules in a way that maps to your business
    - Group common functionality 'further down in the stack of modules'
    - Provide a good balance between shared resources and autonomy for countries / sites

    Now I'll describe a high level approach to how you could accomplish those goals... Let's start from the bottom and work our way up. At the very bottom, you have the modules that ship with ATG - the 'out of the box' stuff. You want to make sure that you are leveraging all the modules that make sense in order to get the most value from ATG as possible - and less stuff you'll have to write yourself.
    On top of the ATG modules, you should create what we'll refer to as the Corporate Foundation Module, described as follows:
    - Sits directly on top of the ATG modules
    - Used by all applications across all countries and sites - this is the foundation for everyone
    - Contains everything that is common across all countries / all sites
    - Once established and settled, will change less frequently than other 'higher' modules
    - Encapsulates as many enterprise-wide integrations as possible
    - Provides a means of code sharing, and therefore less development / testing - faster time to market
    - Contains a 'reference' web application (described below)

    The next layer up could be multiple modules for each country (you could replace this with region if that makes more sense). We'll define those modules as follows:
    - Sits on top of the corporate foundation module
    - Contains what is unique to all sites in a given country
    - Responsible for managing any resource bundles for this country (to handle multiple languages)
    - Overrides / replaces corporate integration points with any country-specific ones

    Finally, we will define what should be a fairly 'thin' (in terms of functionality) set of modules for each site, as follows:
    - Sits on top of the module for the country it resides in
    - Contains what is unique for a given site within a given country
    - Will mostly contain configuration, but could also define some unique functionality as well
    - Contains one or more web applications

    The graphic below should help to indicate how these modules fit together:

    Web applications

    As described in the previous section, there are many opportunities for sharing (minimizing costs) as it relates to the code and configuration aspects of ATG modules. Web applications are also contained within ATG modules; however, sharing web applications can be a bit more difficult because this is what the end customer actually sees, and since each site may have some degree of unique look & feel, sharing becomes more challenging. One approach that can help is to define a 'reference' web application at the corporate foundation layer to act as a solid starting point for each site. Here's a description of the 'reference' web application:
    - Contains minimal / sample reference styling, as this will mostly be addressed at the site-level web app
    - Focuses on functionality - ensures that core functionality is revealed via this web application
    - Each individual site can use this as a starting point
    - There may be multiple types of web apps (i.e. B2C, B2B, etc.)

    There are some techniques to share web application assets - i.e. multiple web applications defined in the web.xml - and it's worth investigating, but that is out of scope here.

    Reference infrastructure

    In this complex environment, it is assumed that there is not a single infrastructure for all countries and all sites. It's more likely that different countries (or regions) could have their own solution for infrastructure. In this case, it will be advantageous to define a reference infrastructure which contains all the hardware and software that make up the core environment. Specifications and diagrams should be created to outline what this reference infrastructure looks like, as well as its baseline cost and the incremental cost to scale up with volume. Having some consistency in terms of infrastructure will save time and money as new countries / sites come online. Here are some properties of the reference infrastructure:
    - Standardized approach to the setup of hardware
    - Type and number of servers
    - Defines application server, operating system, database, etc. - including vendor and specific versions
    - Consistent naming conventions
    - Provides a consistent base of terminology and understanding across environments
    - Defines which ATG services run on which servers (production, staging, BCC / preview)
    - Each site can change as required to meet scale requirements

    Governance / organization

    It should be no surprise that the complex application we're talking about is backed by an equally complex organization. One of the more challenging aspects of efficiently managing a series of complex applications is to ensure the proper level of governance and organization. Here are some ideas and goals to work towards:
    - Establish a committee to make enterprise-wide decisions that affect all sites
      - Representation should be evenly distributed
      - Should have a clear communication procedure
      - Focus on high level business goals
    - Evaluate feature / function gaps and how they relate to the ATG release schedule / roadmap
      - Determine when to upgrade and ensure value will be realized
    - Determine how to manage the various levels of modules
      - Who is responsible for maintaining the corporate / country / site layers
      - Determine a procedure for controlling what goes in the corporate foundation module
    - Standardize on source code control, database, hardware, OS versions, J2EE app servers, development procedures, etc.
      - Only use tested / proven versions - this is something that should be centralized so that every country / site does not have to worry about compatibility between versions
    - Create an innovation team
      - Quickly develop new features, perform proofs of concept
      - All teams can benefit from their findings

    Summary

    At this point, it should be clear why the topics above (design, governance, organization, etc.) are critical to being able to efficiently manage a complex application. To summarize, it's all about competitive advantage... You will need to reduce costs and improve time to market with the goal of providing a better experience for your end customers. You can reduce cost by reducing development time, reducing the time allocated to testing (you don't have to test the corporate foundation module over and over again - do it once), and optimizing operations. With an efficient design, you can improve your time to market and your business will be more flexible and agile. Over time, you'll find that you're becoming more focused on offering functionality that is new to the market (creativity), and this will be rewarded - you're now a leader.

    In addition to the above, you'll realize soft benefits as well. Your staff will be operating in a culture based on sharing. You'll want to reward efforts to improve and enhance the foundation, as this will benefit everyone. This culture will inspire innovation, which can only lend itself to your competitive advantage.

    Read the article

  • From J2EE to Java EE: what has changed?

    - by Bruno.Borges
    See original @Java_EE tweet on 29 May 2014. Yeap, it has been 8 years since the term J2EE was replaced, and still some people refer to it (mostly recruiters, luckily!). But then comes the question: what has changed besides the name? Our community friend Abhishek Gupta worked on this question and provided an excellent response titled "What's in a name? Java EE? J2EE?". But let me give you a few highlights here so you don't lose yourself with YATO (yet another tab opened):
    - J2EE used to be an infrastructure and resources provider only, requiring developers to depend on external 3rd-party frameworks to implement application requirements or improve productivity
    - J2EE used to require hundreds of lines of XML code to define just a dozen resources like EJBs, MDBs, Servlets, and so on
    - J2EE used to support only EARs (Enterprise Archives), with a bunch of other archives like JARs and WARs, just to run a simple Web application
    And so on, and so on! It was a great technology, but it still required a lot of work to get something up and running. Remember xDoclet? Remember Struts? The old days of pure Hibernate code? Or when Ajax became a trending topic and we were all implementing it with the DWR Servlet? Still, we J2EE developers survived, and learned, and helped evolve the platform to a whole new level of DX (Developer Experience).

    A new DX for J2EE suggested a new name. One that referred to the platform as the Enterprise Edition of Java, because "Java is why we're here", quoting Bill Shannon. The release of Java EE 5 included so many features that clearly showed developers the platform was going after all those DX gaps:
    - Radical simplification of the persistence model with the introduction of JPA
    - Support for annotations, following the launch of Java SE 5.0
    - Updated XML APIs with the introduction of StAX
    - Drastic simplification of the EJB component model (with annotations!)
    - Convention over Configuration and Dependency Injection
    A few bullets, you may say, but they represented a whole new DX and a vision for upcoming versions. Clearly, the release of Java EE 5 helped drive the future of the platform by reducing the number of XML files and Java interfaces, simplifying configuration, providing convention over configuration, etc.!

    We then saw the release of Java EE 6, with even more great features like Managed Beans, CDI, Bean Validation, improved JSP and Servlets APIs, JASPIC, the possibility to deploy plain WARs, and so many other improvements that it is difficult to list them in one sentence. And we've gotta give Spring Framework some credit here: thanks to Rod Johnson and team, concepts like Dependency Injection fit perfectly into the Java EE Platform. Clearly, Spring used to be one of the most inspiring frameworks for the Java EE platform, and it is great to see things like Pivotal and Spring supporting the JSR 352 Batch API standard! Cooperation to keep improving DX to the maximum in the server-side Java landscape.

    The masterpiece result of these previous releases is seen and called today Java EE 7, which, by providing a new and improved JavaServer Faces release, new features for Web development like the WebSockets API, improved JAX-RS, and JSON-P, but also the Batch API and so many other great improvements, has increased developer productivity and brought innovation to server-side Java developers. Java EE is not just a new name (which was introduced back in May 2006!) but a new Developer Experience for server-side Java developers.
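    To make that simplification concrete, here is a minimal sketch of the post-Java EE 5 programming model described above: a JPA entity plus a stateless session bean, declared entirely with annotations, with no deployment-descriptor XML. The class and field names are illustrative only, and the two classes would each live in their own file.

        import javax.ejb.Stateless;
        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.PersistenceContext;

        // Customer.java - a JPA entity; one annotation replaces the
        // J2EE-era XML mapping files.
        @Entity
        public class Customer {
            @Id
            @GeneratedValue
            private Long id;
            private String name;
            // getters and setters omitted for brevity
        }

        // CustomerService.java - a stateless session bean; no home or
        // remote interfaces, and no ejb-jar.xml entry required.
        @Stateless
        public class CustomerService {
            @PersistenceContext
            private EntityManager em;

            public Customer save(Customer c) {
                return em.merge(c);
            }
        }

    In the J2EE days, the same two components would have needed home and component interfaces plus entries in ejb-jar.xml and vendor-specific descriptors - exactly the "hundreds of lines of XML" the post refers to.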
    To show you why we are here and where we are going (see the Java EE 8 update), we wanted to share with you a draft of the new Java EE logos that the evangelist team created to help you spread the word about Java EE. You can get access to these images at the Java EE Platform Facebook Album, or the Google+ Java EE Platform Album, whichever is better for you, but don't forget to like and/or +1 those social network profiles :-)

    A message to all job recruiters: stop using J2EE and start using Java EE if you want to find great Java EE 5, Java EE 6, or Java EE 7 developers. To not only save you, recruiter, valuable characters when tweeting that job opportunity, but also to match the correct term, we invite you to replace long terms like "Java/J2EE", or even worse "#Java #J2EE #JEE" and all those awkward combinations, with the only acceptable hashtag: #JavaEE.

    And to prove that Java EE is catching on among developers and even recruiters, and that J2EE is past, let me highlight the job trends! The image below is from the Indeed.com trends page, for the following keywords: J2EE, Java/J2EE, Java/JEE, JEE. As you can see, J2EE is indeed going away, while JEE saw some increase. Perhaps because some people are just too lazy to type "Java", but at the same time they are aware that J2EE (the '2') is past. We shall forgive that for a while :-)

    Another proof that J2EE is going away comes from its trending statistics at Google. People have been showing less and less interest in the term J2EE. See the chart below.

    Recruiter, if you still need proof that J2EE is past, that Java EE is trending, that other job recruiters are seeking Java EE developers, and that the developer community is aware of the new term, perhaps these other charts can show you what term you should be using. See for example the Job Trends for Java EE at Indeed.com and notice where it started... 2006! 8 years ago :-) Last but not least, the Google Trends for the Java EE term (including the still wrong but forgivable JavaEE term) show us that the new term is catching up very well. J2EE is past. Oh, and don't worry about the curves going down. We developers like to be hipsters sometimes, and today only AngularJS, NodeJS, and Big Data are going up. Java EE and other traditional server-side technologies, such as Spring, or even technologies from other platforms such as Ruby on Rails, PHP, and Grails, are pretty much consolidated, and the curves... well, they are consolidated too.

    So if you are a Java EE developer, drop that J2EE from your résumé, and let recruiters know that this term is past. Embrace Java EE, and enjoy a new developer experience for server-side Java developers.

    Java EE on Twitter | Java EE on Google+ | Java EE on Facebook

    Read the article

  • Mobile BI Comes of Age

    - by rich.clayton(at)oracle.com
    One of the hot topics in the Business Intelligence industry is mobility. More specifically, the question is how business can be transformed by the iPhone and the iPad. In June 2003, Gartner predicted that Mobile BI would be obsolete and that the technology was headed for the 'trough of disillusionment'. I agreed with them at that time. Many vendors like MicroStrategy and Business Objects jumped into the fray, attempting to show how PDAs like Palm Pilots could be integrated with BI. Their investments resulted in interesting demos with no commercial traction. Why? Because wireless networks and mobile operating systems were primitive, immature and slow.

    In my opinion, Apple's iOS has changed everything in Mobile BI. Yes, Blackberry, Android, Symbian and all the rest have their place in the market, but I believe that increasingly consumers (not IT departments) influence BI decision-making processes. Consumers are choosing the iPhone and the iPad. The number of iPads I see in business meetings now is staggering. Some use them for email and note taking, and others are starting to use corporate applications. The possibilities for Mobile BI are countless, and I would expect to see iPads enterprise-wide over the next few years. These new devices will provide just-in-time access to critical business information.
    Front-line managers interacting with customers, suppliers, patients or citizens will have information literally at their fingertips. I've experimented with several mobile BI tools. They look cool, but like their Executive Information System (EIS) predecessors of the 1990s, these tools lack a backbone and a plausible integration strategy. EIS was a viral technology in the early 1990s. Executives from every industry and job function were showcasing their dashboards to fellow co-workers and colleagues at the country club. Just like the iPad, every senior manager wanted one. EIS wasn't a device, however; it was a software application. EIS quickly faded into the software sunset because it lacked integration with corporate information systems. BI servers replaced EIS because the technology focused on the heavy data lifting of integrating, normalizing, aggregating and managing large, complex data volumes. The devices are here to stay. The cute stand-alone mobile BI tools, not so much.

    If all you're looking to do is put Excel files on your iPad, there are plenty of free tools on the market. You'll look cool at your next management meeting, but after a few weeks the cool factor will fade away and you'll be wondering how you will ever maintain it. If, however, you want secure, consistent, reliable information on your iPad, you need an integration strategy and a way to model the data. BI server technologies like the Oracle BI Foundation are a market-leading approach to tackling that issue.

    I liken the BI mobility frenzy to buying classic cars. Classic cars have two buying groups - teenagers, and middle-aged folks looking to tinker. Teenagers look at the pin-stripes and the paint job, while middle-agers (like me) kick the tires a bit and look under the hood to check out the quality and reliability of the engine. Mobile BI tools sure look sexy, but they don't go very far without an engine and a transmission - or an integration strategy.

    The strategic question in Mobile BI is: can these startups build a motor and transmission faster than Oracle can re-paint the car? Oracle has a great engine and a transmission that connects to all enterprise information assets. We're working on the new paint job and are excited about the possibilities. Just as vertical integration worked in the automotive business, it too works in the technology industry.

    Read the article

  • The Faces in the Crowdsourcing

    - by Applications User Experience
    By Jeff Sauro, Principal Usability Engineer, Oracle

    Imagine having access to a global workforce of hundreds of thousands of people who can perform tasks or provide feedback on a design quickly and almost immediately. Distributing simple tasks not easily done by computers to the masses is called "crowdsourcing" and, until recently, was an interesting concept that, due to practical constraints, wasn't used often.

    Enter Amazon.com. For five years, Amazon has hosted a service called Mechanical Turk, which provides an easy interface to the crowds. The service has almost half a million registered, global users performing a quarter of a million human intelligence tasks (HITs). HITs are submitted by individuals and companies in the U.S. and pay from $.01 for simple tasks (such as determining if a picture is offensive) to several dollars (for tasks like transcribing audio). What do we know about the people who toil away in this digital crowd? Can we rely on the work done in this anonymous marketplace?

    A rendering of the actual Mechanical Turk (from Wikipedia)

    Knowing who is behind Amazon's Mechanical Turk is fitting, considering the history of the actual Mechanical Turk. In the late 1700s, a mechanical chess-playing machine awed crowds as it beat master chess players in what was thought to be a mechanical miracle. It turned out that the creator, Wolfgang von Kempelen, had a small person (also a chess master) hiding inside the machine, operating the arms to provide the illusion of automation.

    The field of human-computer interaction (HCI) is quite familiar with gathering user input and incorporating it into all stages of the design process. It makes sense, then, that Mechanical Turk was a popular discussion topic at the recent Computer-Human Interaction (CHI) usability conference sponsored by the Association for Computing Machinery in Atlanta. It is already being used as a source for input on Web sites (for example, Feedbackarmy.com) and in behavioral research studies. Two papers shed some light on the faces in this crowd. One paper tells us about the shifting demographics, from mostly stay-at-home moms to young men in India. The second paper discusses the reliability and quality of work from the workers.

    Just who exactly would spend time doing tasks for pennies? In "Who are the crowdworkers?" University of California researchers Ross, Silberman, Zaldivar and Tomlinson conducted a survey of Mechanical Turk worker demographics and compared it to a similar survey done two years before. The initial survey reported workers consisting largely of young, well-educated women living in the U.S. with annual household incomes above $40,000. The more recent survey reveals a shift in demographics, largely driven by an influx of workers from India. Indian workers went from 5% to over 30% of the crowd, and this block is largely male (two-thirds), with a higher average education than U.S. workers; 64% report an annual income of less than $10,000 (keeping in mind $1 has a lot more purchasing power in India). This shifting demographic certainly has implications, as language and culture can play critical roles in the outcome of HITs. Of course, the demographic data came from paying Turkers $.10 to fill out a survey, so there is some question about a self-selection bias (characteristics which cause Turkers to take this survey may be unrepresentative of the larger population), not to mention whether we can really trust the data we get from the crowd.
    Crowds can perform tasks or provide feedback on a design quickly and almost immediately for usability testing. (Photo attributed to victoriapeckham, Flickr)

    While having immediate access to a global workforce is nice, one major problem with Mechanical Turk is the incentive structure. Individuals and companies that deploy HITs want quality responses for a low price. Workers, on the other hand, want to complete the task and get paid as quickly as possible, so that they can get on to the next task. Since many HITs on Mechanical Turk are surveys, how valid and reliable are these results? How do we know whether workers are just rushing through the multiple-choice responses, haphazardly answering?

    In "Are your participants gaming the system?" researchers at Carnegie Mellon (Downs, Holbrook, Sheng and Cranor) set up an experiment to find out what percentage of their workers were just in it for the money. The authors set up a 30-minute HIT (one of the lengthier ones for Mechanical Turk) and offered a very high $4 to those who qualified and $.20 to those who did not. As part of the HIT, workers were asked to read an email and respond to two questions that determined whether workers were likely rushing through the HIT and not answering conscientiously. One question was simple and took little effort, while the second question required a bit more work to find the answer. Workers were led to believe that factors other than these two questions were the qualifying aspect of the HIT. Of the 2000 participants, roughly 1200 (or 61%) answered both questions correctly. Eighty-eight percent answered the easy question correctly, and 64% answered the difficult question correctly. In other words, about 12% of the crowd were gaming the system, not paying enough attention to the question or making careless errors, and up to about 40% wouldn't put in more than a modest effort to get paid for a HIT. Young men, and those who considered themselves to be in the financial industry, tended to be the most likely to try to game the system. There wasn't a breakdown by country, but given the demographic information from the first article, we could infer that many of these young men come from India, which makes language and other cultural differences a factor.

    These articles raise questions about the role of crowdsourcing as a means of getting quick user input at low cost. While compensating users for their time is nothing new, the incentive structure and anonymity of Mechanical Turk raise some interesting questions. How complex a task can we ask of the crowd, and how much should these workers be paid? Can we rely on the information we get from these professional users, and if so, how can we best incorporate it into designing more usable products?

    Traditional usability testing will still play a central role in enterprise software. Crowdsourcing doesn't replace testing; instead, it makes certain parts of gathering user feedback easier. One can turn to the crowd for simple tasks that don't require specialized skills and get a lot of data fast. As more studies are conducted on Mechanical Turk, I suspect we will see crowdsourcing playing an increasing role in human-computer interaction and enterprise computing.
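    As a quick back-of-the-envelope check of where the 12% and "up to about 40%" figures above come from, using only the numbers reported in the paper (variable names are mine):

        public class TurkFigures {
            public static void main(String[] args) {
                int participants = 2000;
                double easyCorrect = 0.88;   // easy screening question
                double hardCorrect = 0.64;   // effortful screening question

                // ~12% missed even the low-effort question: the likely "gamers".
                System.out.printf("Gaming the system: ~%.0f%% (~%.0f workers)%n",
                        (1 - easyCorrect) * 100, (1 - easyCorrect) * participants);

                // ~36% missed the question that required modest effort,
                // matching the article's "up to about 40%".
                System.out.printf("No more than modest effort: ~%.0f%%%n",
                        (1 - hardCorrect) * 100);
            }
        }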
    References:

    Downs, J. S., Holbrook, M. B., Sheng, S., and Cranor, L. F. 2010. Are your participants gaming the system? Screening Mechanical Turk workers. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15, 2010). CHI '10. ACM, New York, NY, 2399-2402. Link: http://doi.acm.org/10.1145/1753326.1753688

    Ross, J., Irani, L., Silberman, M. S., Zaldivar, A., and Tomlinson, B. 2010. Who are the crowdworkers? Shifting demographics in Mechanical Turk. In Proceedings of the 28th International Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15, 2010). CHI EA '10. ACM, New York, NY, 2863-2872. Link: http://doi.acm.org/10.1145/1753846.1753873

    Read the article

  • The Future of Project Management is Social

    - by Natalia Rachelson
    A guest post by Kazim Isfahani, Director, Product Marketing, Oracle

    Rapid Ascent. Breakneck Speed. Lightning Fast. Perhaps even overwhelming. No matter which set of adjectives we use to describe it, social media's rise into the enterprise mainstream has been unprecedented. Indeed, the big 4 social media powerhouses (Facebook, Google+, LinkedIn, and Twitter) have nearly 2 billion users between them. You may be asking (as you should really) "That's all well and good for the consumer, but for me at my company, what's your point? Beyond the fact that I can check and post updates, that is." Good question, kind sir.

    Impact of Social and Collaboration on Project Management

    I'll dovetail this discussion to the project management realm, since that's what I'm writing about. Speed is a big challenge for project-driven organizations. Anything that can help speed up project delivery - be it a new product introduction effort or a geographical expansion project - is a good thing. So where does this whole social thing fit in, particularly since there is already a host of tools to help with traditional project execution? The fact is, companies have seen improvements in their productivity by deploying departmental collaboration and other social-oriented solutions. McKinsey's survey on social tools shows we have reached critical scale: 72% of respondents report that their companies use at least one, and over 40% say they are using social networks and blogs.

    We don't hear as much about the impact of social media technologies at the project and project manager level, but that does not mean there is none. Consider the new hire. The type of individual entering the workforce and executing on projects is a generation of worker expecting visually appealing, easy-to-use and easy-to-understand technology meshing hand-in-hand with business processes. Consider the project manager. The social era has enhanced the role that the project manager must play. Today's project manager must be a supreme communicator, an influencer, a sympathizer, a negotiator, and still manage to keep all stakeholders in the loop on project progress. Social tools play a significant role in this effort. Now consider the impact on the project team. The way that a project team functions has changed, with newer, social-oriented technologies making the process of information dissemination and team communications much more fluid. It's clear that a shift is occurring where "social" is intersecting with project management.

    The Rise of Social Project Management

    We refer to the melding of project management and social networking as Social Project Management.
    Social Project Management is based upon the philosophy that the project team is one part of an integrated whole, and that valuable and unique abilities exist within the larger organization. For this reason, Social Project Management systems should be integrated into the collaborative platform(s) of an organization, allowing communication to proceed outside the project boundaries. What makes Social Project Management "social" is an implicit awareness through which distributed teams build connected links in ways that were previously restricted to teams that were co-located. Just as critical, Social Project Management embraces the vision of seamless online collaboration within a project team, but also provides for (and enhances) the use of rigorous project management techniques. Social Project Management acknowledges that projects (particularly large projects) are a social activity: people doing work with people, for other people, with commitments to yet other people. The more people (larger projects), the more interpersonal the interactions, and the more social affects the project.

    The Epitome of Social - Fusion Project Portfolio Management

    If I take this one level further to discuss Fusion Project Portfolio Management, the notion of Social Project Management is on full display. With Fusion Project Portfolio Management, project team members have a single place for interaction on projects and access to any other resources working within the Fusion ERP applications. This gives team members the opportunity to be better informed, to participate more, and to provide better information. The application's visual appeal and highly graphical nature make it easy to navigate information. The project activity stream adds to the intuitive user experience.

    The goal of productivity is pervasive throughout Fusion Project Portfolio Management. Field research conducted with Oracle customers and partners showed that users needed a way to stay in the context of their core transactions and yet easily access social networking tools. This is manifested in the application, so that when a user executes a business process, they not only have the transactional application at their fingertips, but also things like e-mail, SMS, text, instant messaging, and chat - a number of different ways to interact with people and/or groups of people, both internal and external to the project and enterprise. But in the end, connecting people is relatively easy. The larger issue is finding a way to serve up relevant, system-generated, actionable information, in real time, that will allow for more streamlined execution of key business processes. Fusion Project Portfolio Management's design concept enables users to create project communities, establish discussion threads, and manage event calendars, as well as deliver project-based workspaces to organize communications within the context of a project - all within a secure business environment.

    We'd love to hear from you and get your thoughts and ideas about how Social Project Management is impacting your organization. To learn more about Oracle Fusion Project Portfolio Management, please visit this link.

    Read the article

  • 7 Good Reasons to Upgrade E-Business Suite to the cloud

    - by Lisa Schwartz
    As promised, here is blog Part 2: Why upgrade to Oracle E-Business Suite 12 in the cloud?

    7 Good Reasons to Upgrade to E-Business Suite 12 in the Cloud:

    1) Take advantage of new and improved features: from global sub-ledger accounting, to mobile access for supply chain management, to built-in extensions for information search and discovery. If you haven't checked out the latest features yet, there are over 1000 EBS 12 enhancements.

    2) Plan now to address any ongoing Oracle Support considerations and regulatory compliance requirements. EBS Release 11 support is ending soon. Based upon that information alone, you should have an EBS upgrade strategy and planning well underway.

    3) Customizations got you worried? Expedite your next Oracle E-Business Suite upgrade: have Oracle identify all customizations, reduce un-needed customizations (EBS 12 has built in many of your customizations), and during the upgrade keep all customizations necessary to run your business.

    4) Migrating EBS to the cloud allows parallel migration and testing, so there are no extra hardware purchases for the testing and upgrade, and business disruption is minimized. Moving to the cloud also provides for smoother future upgrades, based on your own timeline.

    5) Oracle experts will upgrade and run your EBS applications for you in the cloud. Free your IT resources to develop new services and work on projects that are critical to business innovation and competitiveness. Your IT resources will not be inundated with upgrade tasks!

    6) Reallocate precious IT dollars to other projects, and eliminate CapEx costs.

    7) Oracle minimizes business risk by offering enterprise-class cloud services, under stringent SLAs, designed to run your business applications for you, such as:
    a. Enterprise-grade infrastructure
    b. World-class security and identity management
    c. Best practices in regulatory compliance: from classified federal government standards, to healthcare HIPAA standards, to meeting Financial Services requirements (PCI DSS)

    Next Step: To help you upgrade and get to the cloud in the shortest period of time, Oracle has a program called Oracle Upgrade Factory for Oracle E-Business Suite 12. It offers a unique approach, seamlessly bundling Managed Cloud Services and Oracle Consulting Services together for an entire Oracle E-Business Suite upgrade and migration to a managed private cloud. Read the Oracle Upgrade Factory Solution Brief here.

    Read the article

  • Translating Your Customizations

    - by Richard Bingham
This blog post explains the basics of translating the customizations you can make to Fusion Applications products, covering both composer-based customizations and the generic design-time customizations done via JDeveloper.

Introduction

Like most Oracle Applications, Fusion Applications installs on-premise with a US-English base language that, as of Release 7, can be supplemented with up to 22 additional language packs (in Oracle Cloud production environments, languages are pre-installed). As such, many organizations offer their users the option of working in their local language, and logically that should apply to any customizations as well.

Composer-based UI Customizations

Customizations made in Page Composer take into consideration the session LOCALE, as set in the user preferences screen, during all customization work, and store the customization in the MDS repository accordingly. As such, the new or changed values will only apply for the language under which the customization was made, and text for any other languages requires a separate upload. See the Resource Bundles section below, which incidentally also applies to custom UI changes done in JDeveloper. You may have noticed this when you select the "Select Text Resource" menu option while editing the text on a page. Using this ensures that the resource bundles are used, whereas if you define a static value in Expression Builder it will never be available for translation. Notice in the screenshot below that the "What's New" custom value I have already defined using the 'Select Text Resource' feature internally uses the adfBundle groovy function to pull the custom value for my key (RT_S_1) from the ComposerOverrideBundle. Figure 1 – Page Composer showing the override bundle being used.

Business Objects

Customizing the Business Objects available in the Applications Composer tool for the CRM products, such as adding additional fields, also operates using the session language. Translating the values for these fields into other installed languages requires loading additional resource bundles, again as described below.

Reports and Analytics

Most customizations to Reports and BI Analytics are essentially reorganizations and visualizations of existing number and text data from the system, and as such will use the appropriate values based on the user's session language. Where a translated value or string exists for that session language, it will be used without the need for additional work. Extending through the addition of brand-new reports and analytics requires another method of loading the translated strings, as part of what is known as 'localizing' the BI Catalog and Metadata. This time it is via an export/import of XML data through the BI Administrator's console, and is described in the OBIEE Admin Guide. Fusion Applications reports based on BI Publisher are already defined as a template per locale, and in addition provide an extra process for extracting the data for translation and reloading it. This again uses the standard resource bundle format. Loading a custom report is illustrated in this video from our YouTube channel, which shows the screen for both setting the template locale and running an export for translation.
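For reference, the expression that 'Select Text Resource' generates is a groovy/EL lookup into the override bundle. A minimal sketch of its shape, reusing the RT_S_1 key from Figure 1 (the exact bundle path shown is an assumption, not taken from an actual environment):

#{adfBundle['oracle.adf.view.page.editor.resource.ComposerOverrideBundle']['RT_S_1']}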
Fusion Applications Menus

Whilst the seeded Navigator and Global Menu values are fully translated when the additional language is installed, if they are customized then the changed or new menu item will apply universally, not currently per language. This is set to change in a future release with the new UI Text Editor feature described below.

More on Resource Bundles

As mentioned above, to provide translations for most of your customizations you need to add values to a resource bundle. This is an industry open-standard (OASIS) XML file format with the extension .xliff, which stores translated values for the strings used by ADF at run-time. The general process is that these values are exported from the MDS repository, manually edited, and then imported back in again. This needs to be done by an administrator, via either WLST commands or through Enterprise Manager as per the screenshot below, and is detailed in the Fusion Applications Extensibility Guide. For SaaS environments the Cloud Operations team can assist. Figure 2 – Enterprise Manager's MDS export used for getting resource bundles for manual translation, with re-import on the same screen. All customized strings are stored in an override bundle (xliff file) for each locale, suffixed with the language initials, with English ones being saved to the default. As such each language bundle can be easily identified and updated. Similarly, if you used JDeveloper to create your own applications as extensions to Fusion Applications, you would use the native support for resource bundles and add them into the faces-config.xml file for inclusion in your application. An example is this ADF customization video from our YouTube channel. JDeveloper also supports automatic synchronization between your underlying resource bundles and any translatable strings you add – very handy. For more information see the chapters on "Using Automatic Resource Bundle Integration in JDeveloper" and "Manually Defining Resource Bundles and Locales" in the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework.

FND Messages and Look-ups

FND Messages, as defined here, are not used for UI labels (those are known as 'strings'), but are the responses back to users as a result of an action, such as from a page submit. Each message is defined and stored in the related database table (FND_MESSAGES_B), with another (FND_MESSAGES_TL) holding any language-specific values. These come seeded with the additional language installs; however, if you customize the messages via the "Manage Messages" task in Functional Setup Manager, or add new ones, then currently (in Release 7) you'll need to repeat the change for each language. Figure 3 – An FND Message defined in an English user session. Similarly, Look-ups are stored in a translation table (FND_LOOKUP_VALUES_TL) where appropriate, and can be customized by setting the user's session language and making the change in the Setup and Maintenance task entitled "Manage [Standard|Common] Look-ups".

Online Help

All the seeded help is applied as part of each language pack install during the post-install provisioning process. If you are editing or adding custom online help, the Create Help screen provides a drop-down for which language your help customization will apply to. This is shown in the video below from our YouTube channel, and obviously you'll need to do it for each language in use.
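To make the per-language storage of FND Messages above concrete, a translated message row could in principle be inspected with a query along these lines; this is only a sketch, and the column names are assumptions based on the table names mentioned, not a verified Fusion schema:

-- hypothetical peek at per-language message rows (column names assumed)
SELECT message_name, language, message_text
FROM   fnd_messages_tl
WHERE  message_name = 'MY_CUSTOM_MESSAGE';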
What is Coming for Translations?

Currently planned for Release 8 is something called the User Interface (UI) Text Editor. This tool will allow the editing of all the text shown on the pages and forms of Fusion Applications. It will provide a search based on a particular term or word, say "Worker", and will allow it to be adjusted, say to "Employee", which then updates all the Resource Bundles that contain it. In multi-language environments it will use the user's session language (locale) to know which Resource Bundles to apply the change to. This capability will also support customization sandboxes, to help ensure changes can be tested and approved. It is also interesting to note that the design currently allows any page-specific customizations done using Page Composer or Application Composer to overwrite the global changes done via the UI Text Editor, allowing special context-sensitive values to still be used.

Further Reading and Resources

The following short list provides the main resources for digging into more detail on translation support for both Composer and JDeveloper customization projects. There is a dedicated chapter entitled "Translating Custom Text" in the Fusion Applications Extensibility Guide, with good examples and steps for many tasks, especially administering resource bundles. Using localization formatting (numbers, dates, etc.) for design-time changes is well documented in the Fusion Applications Developer Guide. For more guidelines on general design-time globalization, see either the 'Internationalizing and Localizing Pages' chapter in the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework (Oracle Fusion Applications Edition) or the general Oracle Database Globalization Support Guide. The Oracle Architecture 'A-Team' published a recent post on customizing the user session timeout popup, using design-time changes to resource bundles; its detailed step-by-step examples are a useful illustration.
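As an illustration of the override bundles discussed above, a hand-edited translation entry in an exported .xliff file might look like the following. This is a minimal sketch following the OASIS XLIFF 1.1 structure; the file attributes and target value are assumptions for illustration rather than an actual Fusion export:

<?xml version="1.0" encoding="UTF-8"?>
<xliff version="1.1" xmlns="urn:oasis:names:tc:xliff:document:1.1">
  <file source-language="en" target-language="fr" original="ComposerOverrideBundle" datatype="xml">
    <body>
      <!-- one trans-unit per customized string; RT_S_1 reuses the key from Figure 1 -->
      <trans-unit id="RT_S_1">
        <source>What's New</source>
        <target>Nouveautés</target>
      </trans-unit>
    </body>
  </file>
</xliff>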

    Read the article

  • Slicing the EDG

    - by Antony Reynolds
Different SOA Domain Configurations

In this blog entry I would like to introduce three different configurations for a SOA environment. I have omitted load balancers and OTD/OHS as they introduce a whole new round of discussion. For each possible deployment architecture I have identified some of the advantages.

Super Domain

This is a single EDG-style domain for everything needed for SOA/OSB. It extends the standard EDG slightly but otherwise assumes a single "super" domain. This is basically the SOA EDG. I have broken out JMS servers and Coherence servers to improve scalability and reduce dependencies.

Key Points: Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if the rest of the domain is unavailable. JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers. Separate Coherence servers allow the OSB cache to be offloaded from OSB servers, and allow use of Coherence by other components as a shared infrastructure data grid service. The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.

Benefits: Single administration point (one Admin Server). Closely follows the EDG, with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches. The Coherence grid can be scaled independently of OSB/SOA. JMS queues provide for inter-application communication.

Drawbacks: Patching is an all-or-nothing affair. Startup time for SOA may be slow if a large number of composites are deployed.

Multiple Domains

This extends the EDG into multiple domains, allowing separate management and update of these domains. I see this type of configuration quite often with customers, although some don't have OWSM, others don't have separate Coherence, etc. SOA and BAM are kept in the same domain as little benefit is obtained by separating them.

Key Points: Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable. JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers. Separate Coherence servers allow the OSB cache to be offloaded from OSB servers, and allow use of Coherence by other components as a shared infrastructure data grid service. The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.

Benefits: Follows the EDG but in separate domains, with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches. The Coherence grid can be scaled independently of OSB/SOA. JMS queues provide for inter-application communication. The patch lifecycles of OSB/SOA/JMS are no longer lock-stepped. JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB. OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability. All domains use the same OWSM policy store (MDS-WSM).

Drawbacks: Multiple domains to manage and configure. Multiple Admin Servers (a single view requires use of Grid Control). Multiple Admin Servers/WSM clusters waste resources. Additional homes are needed to enjoy the benefits of separate patching. Cross-domain trust needs setting up to simplify cross-domain interactions.
Startup time for SOA may be slow if a large number of composites are deployed.

Shared Service Environment

This model extends the previous multiple-domain arrangement to provide a true shared service environment, allowing multiple additional SOA domains and/or other domains to take advantage of the shared services. Only one non-shared domain is shown, but there could be multiple, allowing groups of applications to share patching independently of other application groups.

Key Points: Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable. JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers. Separate Coherence servers allow the OSB cache to be offloaded from OSB servers, and allow use of Coherence by other components as a shared infrastructure data grid service. The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster. The shared SOA domain hosts Human Workflow tasks, BAM, and common "utility" composites. A single OSB domain provides the "Enterprise Service Bus". All domains use the same OWSM policy store (MDS-WSM).

Benefits: Follows the EDG but in separate domains, with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches. The Coherence grid can be scaled independently of OSB/SOA. JMS queues provide for inter-application communication. The patch lifecycles of OSB/SOA/JMS are no longer lock-stepped. JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB. OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability. All domains use the same OWSM policy store (MDS-WSM). Supports large numbers of deployed composites in multiple domains. Single URL for Human Workflow end users. Single URL for BAM end users.

Drawbacks: Multiple domains to manage and configure. Multiple Admin Servers (a single view requires use of Grid Control). Multiple Admin Servers/WSM clusters waste resources. Additional homes are needed to enjoy the benefits of separate patching. Cross-domain trust needs setting up to simplify cross-domain interactions. Human Workflow needs to be specially configured to point to the shared services domain.

Summary

The alternatives in this blog allow patching to have different impacts, depending on the model chosen. Each organization must decide the tradeoffs for itself. One extreme is to go for the shared services model and have one domain per SOA application; this requires a lot of administration of the multiple domains. The other extreme is to have a single super domain; this makes the entire enterprise susceptible to an outage at the same time due to patching or other domain-level changes. Hopefully this blog will help your organization choose the right model.
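Since cross-domain trust appears as a drawback in both multi-domain models, here is a minimal WLST sketch of enabling cross-domain security on one participating domain. The URL, credentials, and domain name are placeholders, and a credential-mapped CrossDomainConnector user is still required on each side; treat this as an outline rather than a complete recipe:

# run against each domain's Admin Server in turn (placeholder URL/credentials)
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
cd('SecurityConfiguration/my_domain')
cmo.setCrossDomainSecurityEnabled(true)
save()
activate()
disconnect()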

    Read the article

  • BI Applications overview

    - by sv744
Welcome to the Oracle BI Applications blog! This blog will talk about various features, the general roadmap, descriptions of functionality, and implementation steps related to Oracle BI Applications. In this first post we start with an overview of the BI Apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback.

The Oracle BI Applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, and front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources—including Siebel, Oracle, PeopleSoft, SAP, and others—into actionable intelligence for each business function and user role. This blog starts with the key benefits and characteristics of Oracle BI Applications; in a series of subsequent posts, each of these points will be explained in detail.

Why BI Apps? Demonstrate the value of BI to a business user: show reports, dashboards, and a model that can answer their business questions as part of the sales cycle. Demonstrate the technical feasibility of a BI project, significantly lower risk, and improve success. A build-vs-buy benefit: you don't have to start with a blank sheet of paper. Help consolidate disparate systems, and support data integration in M&A situations. Insulate BI consumers from changes in the OLTP. Present OLTP data and highlight issues of poor or missing data – and improve data quality and accuracy.

Prebuilt Integrations: BI Apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP. These are co-developed with input from functional experts in the BI and Applications teams, with out-of-the-box dimensional-model-to-source-model mappings and multi-source and multi-instance support.

Rich Data Model: BI Apps have a very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective as well as reflecting source system complexities. A conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. customer/supplier 360. There are over 360 fact tables across 7 product areas (CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM – 56), supported by 300 physical dimensions. There is support for extensive calendars (Gregorian, enterprise, and ledger based), a conformed data model and metrics for real-time vs. warehouse-based reporting, and multi-tenant enablement.

Extensive BI-related transformations: BI Apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. All of these have been distilled into common patterns and abstracted logic which can be readily reused across different modules: Slowly Changing Dimension support. Hierarchy flattening support. Row/column hybrid hierarchy flattening. As Is vs.
As Was hierarchy support. Currency conversion: support for 3 corporate currencies plus CRM, ledger, and transaction currencies. UOM conversion. Internationalization/localization with dynamic data translations. Code standardization (domains). Historical snapshots. Cycle and process lifecycle computations. Balance facts. Equalization of GL accounting chartfields/segments. Standardized values for categorizing GL accounts. Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL. Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll and EBS/Fusion accruals. Complex event interpretation of source data, e.g.: what constitutes a transfer; deriving supervisors via the position hierarchy; deriving the primary assignment in PSFT; categorizing and transposing payroll balances to specific metrics, to support side-by-side comparison of measures such as fixed salary, variable salary, tax, bonus, and overtime payments; counting of events, e.g. converting events to fact counters so that the number of hires can easily be added up and compared alongside total transfers and terminations. Multi-pass processing of multiple sources, e.g. headcount, salary, promotion, and performance, to allow side-by-side comparison. Adding value to data to aid analysis through banding and additional domain classifications and groupings, to allow higher-level analytical reporting and data discovery. Calculation of complex measures, for example COGS, DSO, DPO, inventory turns, etc., and transfers within a hierarchy or out of/into a hierarchy relative to a viewpoint in the hierarchy.

Configurability and Extensibility support: BI Apps offer extensibility for various entities, either as automated extensibility or as part of the extension methodology: key flexfield and descriptive flexfield support, extensible attribute support (JDE), and conformed domains.

ETL Architecture: BI Apps offer a modular adapter architecture which allows support of multiple product lines in a single conformed model, across multiple sources and technologies. Orchestration creates a load plan taking into account task dependencies and the customer's deployment, generating a plan from many complex ETL tasks, with plan optimization allowing parallel ETL tasks. On Oracle: bitmap indexes and partition management. High availability support, with follow-the-sun support.
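To make the "As Is vs. As Was" support above concrete, here is a minimal SQL sketch of an "As Was" (Type 2 slowly changing dimension) join, picking the dimension row that was current on the fact's transaction date. The table and column names are hypothetical, not the actual BI Apps schema:

-- "As Was" reporting: join each fact row to the dimension version
-- that was in effect when the transaction happened (names hypothetical)
SELECT f.revenue, d.sales_region
FROM   w_revenue_f  f
JOIN   w_customer_d d
  ON   d.customer_id       = f.customer_id
 AND   f.transaction_date >= d.effective_from_dt
 AND   f.transaction_date <  d.effective_to_dt;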
TCO: BI Apps include several utilities and capabilities that help with overall total cost of ownership and ensure a rapid implementation: improved cost of ownership and lower cost to deploy; ongoing support for new versions of the source application; task-based setup flows; data lineage; functional setup performed in a web UI by a functional person; configuration; and test-to-production support.

Security: BI Apps support both data and object security, enabling implementations to quickly configure the application to their reporting security needs: fine-grained object security at the report/dashboard and presentation catalog level, data security integration with source systems, and extensibility to support external data security rules.

Extensive set of KPIs: over 7,000 base and derived metrics across all modules; time-series calculations (YoY, % growth, etc.); common currency and UOM reporting; cross-subject-area KPIs (analyzing HR vs. GL data, drilling from GL to AP/AR, etc.).

Prebuilt reports and dashboards: 3,000+ prebuilt reports supporting a large number of industries, hundreds of role-based dashboards, and dynamic currency conversion at the dashboard level.

Highly tuned performance: the BI Apps have been tuned over the years for both very performant ETL and dashboard performance, using best practices and advanced database features: an optimized data model for BI and analytic queries; prebuilt aggregates, and the ability for customers to easily create their own aggregates on warehouse facts, for scalable end-user performance; incremental extracts and loads; incremental aggregate builds; automatic table index and statistics management; parallel ETL loads; source system delete handling; low-latency extract with GoldenGate; micro-ETL support; bitmap indexes; partitioning support; and modularized deployment – start small and add other subject areas seamlessly.

Source-specific staging and real-time schema: support for a source-specific operational reporting schema for EBS, PSFT, Siebel, and JDE.

Application integrations: the BI Apps also allow for integration with source systems as well as other applications, providing added value through BI and enabling BI consumption during operational decision making: embedded dashboards for Fusion, EBS, and Siebel applications; Action Link support; marketing segmentation; the Sales Predictor Dashboard; and territory management.

External integrations: the BI Apps' data integration choices include support for loading external data, external data enrichment choices (UNSPSC, item class, etc.), and extensible spend classification.

Broad deployment choices: Exalytics support; databases: Oracle, Exadata, Teradata, DB2, MSSQL; ETL tool of choice: ODI (coming), Informatica.

Extensible and customizable: an extensible architecture and methodology to add custom and external content, upgradable across releases.

Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics, as well as suggestions on what other topics you would like us to cover.

    Read the article

  • Azure Task Scheduling Options

    - by charlie.mott
Currently, the Azure PaaS does not offer a distributed/resilient task scheduling service. If you do want to host a task scheduling product/solution off-premise (and ideally use Azure), what are your options?

PaaS

Option 1: Worker Roles

Use a worker role to schedule and execute actions at specific time periods. There are a few frameworks available to assist with this: http://azuretoolkit.codeplex.com, https://github.com/Lokad/lokad-cloud/wiki/TaskScheduler, and http://blog.smarx.com/posts/building-a-task-scheduler-in-windows-azure (this last one addresses a slightly different set of requirements: it's a more dynamic approach for queuing up tasks, but not repeatable tasks, e.g. daily). I found the Azure Toolkit option the simplest to implement.

Step 1: Create a domain entity implementing IJob for each job to schedule. In this sample, I asynchronously call a WCF service method.

namespace Acme.WorkerRole.Jobs
{
    using AzureToolkit;
    using ScheduledTasksService;

    public class UploadEmployeesJob : IJob
    {
        public void Run()
        {
            // Call Tasks Service
            var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService");
            client.UploadEmployees();
            client.Close();
        }
    }
}

Step 2: In the worker role Run method, add the jobs to the toolkit engine.

namespace Acme.WorkerRole
{
    using AzureToolkit.Engine;
    using Jobs;

    public class WorkerRole : WorkerRoleEntryPoint
    {
        public override void Run()
        {
            var engine = new CloudEngine();

            // Add scheduled jobs (using CronJob syntax - see http://www.adminschoice.com/crontab-quick-reference).

            // 1. Upload Employees job - 8.00 PM every weekday (Mon-Fri)
            engine.WithJobScheduler().ScheduleJob<UploadEmployeesJob>(c => { c.CronSchedule = "0 20 * * 1-5"; });
            // 2. Purge Data job - 10 AM every Saturday
            engine.WithJobScheduler().ScheduleJob<PurgeDataJob>(c => { c.CronSchedule = "0 10 * * 6"; });
            // 3. Process Exceptions job - every 5 minutes
            engine.WithJobScheduler().ScheduleJob<ProcessExceptionsJob>(c => { c.CronSchedule = "*/5 * * * *"; });

            engine.Run();
            base.Run();
        }
    }
}

Pros: The Azure Toolkit option is simple to implement.
Cons: For the Azure Toolkit option, you are limited to a single worker role; otherwise, the jobs will be executed multiple times, once for each worker role instance. You are also paying for a continuously running worker role, even if it just processes a single job once a week. If you only have a few scheduled tasks to run, calling asynchronous services hosted in different web roles, an extra-small worker role is likely to be sufficient; however, an extra-small worker role still costs $14.40/month (03/09/2012).

Option 2: Use a Scheduled Task on an Azure Web Role calling a console app

Set up a Windows Scheduled Task on the Azure Web Role. This calls a console application that calls the WCF service methods that run the task actions. This design is described here: http://www.ronaldwidha.net/2011/02/23/cron-job-on-azure-using-scheduled-task-on-a-web-role-to-replace-azure-worker-role-for-background-job/, http://www.voiceoftech.com/swhitley/index.php/2011/07/windows-azure-task-scheduler/, http://devlicio.us/blogs/vinull/archive/2011/10/23/moving-to-azure-worker-roles-for-nothing-and-tasks-for-free.aspx

Pros: Fairly easy to implement.
Cons: Supportability – I RDC'ed onto the Azure server and stopped the scheduled task. I then rebooted the machine and the task was re-started.
I also tried deleting the task and rebooting, and the same thing occurred. The only way to permanently guarantee that a task is disabled is to do a fresh deployment; I think this is a major supportability concern. Scalability – multiple instances would trigger multiple tasks, so you can only have one instance for the scheduled-task web role. The guidance implements setup of the scheduled task as part of a web role instance, but if you have more than one instance in a web role, the task will be triggered multiple times for each scheduled action (once per machine). Workaround: if we wanted to use scheduled tasks for another client with a scalable WCF service, we could include the console app and task scripts in a separate web role (e.g. an empty WCF service with no real purpose to it).

SaaS

Option 3: Azure Marketplace

I thought that someone might be offering this type of service via the Azure Marketplace. At the point of writing this blog post, I did not find anyone doing so. https://datamarket.azure.com/

Pros: none.
Cons: Nobody currently offers this on the Azure Marketplace.

Option 4: Online Job Scheduling Service Provider

There are plenty of online providers that offer this type of service on a pay-as-you-go approach; some of these are free for small usage. Many of these providers are listed here: http://en.wikipedia.org/wiki/Webcron

Pros: No bespoke development for the scheduler.
Cons: Reliance on a third party.

IaaS

Option 5: Set up Scheduling Software on Azure IaaS VMs

One of the job scheduling software offerings could be installed and configured on Azure VMs. A list of software options is available here: http://en.wikipedia.org/wiki/List_of_job_scheduler_software

Pros: An enterprise distributed/resilient task scheduling service.
Cons: VM setup and maintenance, plus software licence costs.

Option 6: VM Gallery

At the time of writing this blog post, I did not spot a VM in the gallery that included pre-installation of any of the above software options.

Pros: none.
Cons: No current VM template.

Summary

For my current project, which had a small handful of tasks to schedule with a limited project budget, I chose Option 1 (a worker role using the Azure Toolkit to schedule tasks). If I were building an enterprise-scale solution for the future, Options 4 and 5 would currently be worthy of consideration. Hopefully, Microsoft will include task scheduling in the future as part of their PaaS offerings.
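As a footnote to Option 2 above: the scheduled task itself is typically registered by an elevated startup task rather than by hand over RDC. A minimal sketch of such a startup script, where the task name, executable, and schedule are hypothetical placeholders:

REM startup.cmd - registered as an elevated startup task in the role's service definition
REM task name, executable, and schedule below are hypothetical
schtasks /create /tn "AcmeNightlyJob" /tr "%~dp0TaskRunner.exe" /sc daily /st 20:00 /f
exit /b 0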

    Read the article

JPA 2, EJB 3.1, and JSF 2 in action! Java EE 6 development with WebLogic Server 12c | WebLogic Channel | Oracle Japan

    - by ???02
Building the UI with JSF 2.0

Java EE 6 adopts JSF (JavaServer Faces) 2.0 as its standard presentation-layer framework, the role that third-party frameworks such as Struts, and plain JSP (JavaServer Pages) pages, have traditionally filled. In this step we build a screen that searches the EMPLOYEES table through the EJB created earlier and displays the results in an XHTML page, wired together by a JSF 2.0 managed bean.

First, enable JSF for the project in Oracle Enterprise Pack for Eclipse (OEPE): right-click the OOW project, choose Properties, open Project Facets, check JavaServer Faces, click Apply, and then OK.

Next, create the managed bean. Right-click the OOW project and choose New > Class to open the New Java Class dialog; enter managed as the Package and IndexBean as the Name, then click Finish. Implement IndexBean as follows (IndexBean.java):

package managed;

import java.util.ArrayList;
import java.util.List;
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;
import ejb.EmpLogic;
import model.Employee;

@ManagedBean
public class IndexBean {
  @EJB
  private EmpLogic empLogic;
  private String keyword;
  private List<Employee> results = new ArrayList<Employee>();

  public String getKeyword() {
    return keyword;
  }
  public void setKeyword(String keyword) {
    this.keyword = keyword;
  }
  public List getResults() {
    return results;
  }
  public void actionSearch() {
    results.clear();
    results.addAll(empLogic.getEmp(keyword));
  }
}

The bean holds the search keyword and the results list, and injects the session bean EmpLogic via the @EJB annotation. The actionSearch method delegates the search to EmpLogic and stores what comes back in results.

Now create the page that uses the bean. Right-click the WebContent folder of the OOW project and create index.xhtml with the following content (index.xhtml):

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
  xmlns:ui="http://java.sun.com/jsf/facelets"
  xmlns:h="http://java.sun.com/jsf/html"
  xmlns:f="http://java.sun.com/jsf/core">
<h:head>
  <title>Employee Search</title>
</h:head>
<h:body>
  <h:form>
    <h:inputText value="#{indexBean.keyword}" />
    <h:commandButton action="#{indexBean.actionSearch}" value="Search" />
    <h:dataTable value="#{indexBean.results}" var="emp" border="1">
      <h:column>
        <f:facet name="header"><h:outputText value="employeeId" /></f:facet>
        <h:outputText value="#{emp.employeeId}" />
      </h:column>
      <h:column>
        <f:facet name="header"><h:outputText value="firstName" /></f:facet>
        <h:outputText value="#{emp.firstName}" />
      </h:column>
      <h:column>
        <f:facet name="header"><h:outputText value="lastName" /></f:facet>
        <h:outputText value="#{emp.lastName}" />
      </h:column>
      <h:column>
        <f:facet name="header"><h:outputText value="salary" /></f:facet>
        <h:outputText value="#{emp.salary}" />
      </h:column>
    </h:dataTable>
  </h:form>
</h:body>
</html>

index.xhtml binds its input field to the managed bean IndexBean and invokes that bean's actionSearch method from the h:commandButton.

Next, configure the web application's deployment descriptor (web.xml), found under WebContent > WEB-INF in the OOW project. Add the Faces Servlet and a welcome-file-list inside the web-app element (web.xml):

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:javaee="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="3.0">
  <javaee:display-name>OOW</javaee:display-name>
  <servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>/faces/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>/faces/index.xhtml</welcome-file>
  </welcome-file-list>
</web-app>

That completes the JSF side of the application.

With Java EE 6's JPA 2.0, EJB 3.1, and JSF 2.0 now in place, deploy and run it: right-click the OOW project, choose Run As > Run on Server, select Oracle WebLogic Server 12c (12.1.1), and click Next. For the Domain Directory, click Browse and select C:\Oracle\Middleware\user_projects\domains\base_domain, then click Finish. Once the server is registered, open the Servers view in OEPE, right-click Oracle WebLogic Server 12c, choose Properties, and under WebLogic > Publishing select Publish as an exploded archive, then click OK. Finally, right-click the OOW project, choose Run As > Run on Server again, and click Finish. Entering a search term and clicking the button displays the matching employees' data, including firstName.

Creating a RESTful web service with JAX-RS

Java EE 5 standardized SOAP-based web services through JAX-WS; Java EE 6 adds JAX-RS, which makes it easy to publish RESTful web services. With JAX-RS, an existing session bean can be exposed as a web service with only a few annotations.

First, enable JAX-RS for the project: right-click the OOW project, choose Properties, open Project Facets, and check JAX-RS (Rest Web Services); a "Further configuration required" link appears, opening the Modify Faceted Project dialog for the JAX-RS settings. Next to JAX-RS Implementation Library, click Manage libraries to open the Preference (Filtered) dialog, click New, and in the New User Library dialog enter JAX-RS as the User library name and click OK. Back in the Preference (Filtered) dialog, click Add JARs and select C:\Oracle\Middleware\modules \com.sun.jersey.core_1.1.0.0_1-9.jar, then click OK. In the Modify Faceted Project dialog, choose the new JAX-RS library as the JAX-RS Implementation Library, enter com.sun.jersey.spi.container.servlet.ServletContainer as the JAX-RS servlet class name, and click OK; click OK again to close Project Facets.

Now turn the existing EmpLogic session bean into a RESTful web service by annotating it as follows (EmpLogic.java):

package ejb;

import java.util.List;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import model.Employee;

@Stateless
@LocalBean
@Path("/emprest")
public class EmpLogic {

  @PersistenceContext(unitName = "OOW")
  private EntityManager em;

  public EmpLogic() {
  }

  @GET
  @Path("/getname/{empno}")  // (1)
  @Produces("text/plain")    // (2)
  public String getEmpName(@PathParam("empno") long empno) {
    Employee e = em.find(Employee.class, empno);
    if (e == null) {
      return "no data.";
    } else {
      return e.getFirstName();
    }
  }
}

The @Path("/emprest") annotation publishes the bean as a RESTful web service reachable over plain HTTP, with the method-level @Path at (1) binding the URL template to the method parameter. The @Produces annotation at (2) declares the response content type: text/plain here, while application/xml would return an XML response and application/json a JSON one.

To try it out, run the application again (right-click the OOW project, Run As > Run on Server, then Finish) and open http://localhost:7001/OOW/jaxrs/emprest/getname/186 in a browser. The trailing path segment of the URL (186) is the employeeId, and the service returns that employee's firstName.

This three-part series has introduced Java EE 6 application development with WebLogic Server 12c and OEPE; do give the combination a try.
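A quick way to exercise the finished endpoint from a shell is curl, which should return the plain-text first name; the employee id 186 comes from the example URL above:

curl http://localhost:7001/OOW/jaxrs/emprest/getname/186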

    Read the article

  • Spring transaction : Transaction not active

    - by Videanu Adrian
I am developing an app using Struts2, Spring 3.1, JPA 2, and Hibernate. From Spring I use transactions and IoC. I have an AJAX code block that calls a Struts2 action every second (this happens for every user that is logged into the application; there are around 20-30 simultaneous users at a time). The action name is PopupAction:

public class PopupAction extends VActionBase implements ServletRequestAware {

    private static final long serialVersionUID = -293004532677112584L;
    private iIntermedService intermedService;
    private HttpServletRequest servletRequest;

    @Override
    public String execute() {
        Integer agentId = (Integer) session.get("USER_AGENT_ID");
        Intermed iObj;
        try {
            iObj = intermedService.getIntermed(agentId, locationsString);
        } catch (Exception e) {
            logger.error("Cannot get Intermed!!! " + e.getMessage());
            return ERROR;
        }
        return SUCCESS;
    }
}

and then I have the service class:

@Transactional(readOnly = true)
public class IntermedServiceImpl extends GenericIService<Intermed, Integer> implements iIntermedService {

    @Override
    public Intermed getIntermed(int agentId, String queueIds) throws Exception {
        Intermed intermedObj = null;
        // TODO - find a better implementation for this queueIds parameter!!!!
        try {
            String sql = "SELECT i FROM bla bla bla.....)";
            Query q = this.em.createQuery(sql);
            List<Intermed> iList = q.getResultList();
            if (iList.size() == 1) {
                intermedObj = (Intermed) iList.get(0);
                // get latest object from DB
                em.refresh(intermedObj);
            }
        } catch (Exception e) {
            e.printStackTrace();
            logger.error(e.getCause() + e.getMessage());
            throw e;
        }
        return intermedObj;
    }
}

Here is the Spring configuration:

<bean id="emfI" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="inboundDS" />
    <property name="persistenceUnitName" value="I2PU"/>
    <!-- GlassFish load-time weaving setup -->
    <property name="loadTimeWeaver">
        <bean class="org.springframework.instrument.classloading.glassfish.GlassFishLoadTimeWeaver"/>
    </property>
</bean>

<tx:annotation-driven transaction-manager="txManagerI" />

<tx:advice id="txManagerInboundAdvice" transaction-manager="txManagerI">
    <tx:attributes>
        <tx:method name="*" rollback-for="java.lang.Exception"/>
    </tx:attributes>
</tx:advice>

I have named the transaction managers because I have 3 data sources and 3 transaction managers. The problem is that my GlassFish logs are full of messages like these:

-- removed in order to be able to add more recent logs --

So the cause is: Caused by: java.lang.IllegalStateException: Transaction not active. But I have no idea what can cause this. Any help? Thanks.

Updates

So I have added to the @Transactional annotation the transaction manager name that it has to use, but this still has not solved my problem.
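For reference, this is roughly what the qualified annotation described in the update looks like; a minimal sketch reusing the txManagerI bean name from the configuration above:

// pin the service to one of the three transaction managers by bean name
@Transactional(value = "txManagerI", readOnly = true)
public class IntermedServiceImpl extends GenericIService<Intermed, Integer> implements iIntermedService {
    // ...
}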
I have captured a log from the time that the transaction is created until i got that exception: 2012-02-08T15:08:55.954+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractBeanFactory.java:245) - Returning cached instance of singleton bean 'txManagerVA' 2012-02-08T15:08:55.962+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:365) - Creating new transaction with name [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,readOnly; '',-java.lang.Exception 2012-02-08T15:08:55.967+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:368) - Opened new EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] for JPA transaction 2012-02-08T15:08:55.976+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:400) - Exposing JPA transaction as JDBC transaction [org.springframework.orm.jpa.vendor.HibernateJpaDialect$HibernateConnectionHandle@725b979b] 2012-02-08T15:08:55.977+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.jdbc.datasource.ConnectionHolder@4fb57177] for key [com.sun.gjc.spi.jdbc40.DataSource40@75fa4851] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.978+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.orm.jpa.EntityManagerHolder@112c6483] for key [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean@47d4f12f] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.979+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:272) - Initializing transaction synchronization 2012-02-08T15:08:55.980+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionAspectSupport.java:362) - Getting transaction for [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed] 2012-02-08T15:08:55.983+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (ExtendedEntityManagerCreator.java:423) - Starting resource local transaction on application-managed EntityManager [org.hibernate.ejb.EntityManagerImpl@46d002f4] 2012-02-08T15:08:55.984+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization@797add43] for key [org.hibernate.ejb.EntityManagerImpl@46d002f4] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.986+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (ExtendedEntityManagerCreator.java:400) - Joined local transaction 2012-02-08T15:08:55.991+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionAspectSupport.java:391) - Completing transaction for [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed] 2012-02-08T15:08:55.992+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:922) - Triggering beforeCommit synchronization 2012-02-08T15:08:55.994+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:935) - Triggering beforeCompletion 
synchronization 2012-02-08T15:08:56.001+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization@797add43] for key [org.hibernate.ejb.EntityManagerImpl@46d002f4] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.002+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:752) - Initiating transaction commit 2012-02-08T15:08:56.003+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:507) - Committing JPA transaction on EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] 2012-02-08T15:08:56.008+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:948) - Triggering afterCommit synchronization 2012-02-08T15:08:56.010+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:964) - Triggering afterCompletion synchronization 2012-02-08T15:08:56.011+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:331) - Clearing transaction synchronization 2012-02-08T15:08:56.012+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.orm.jpa.EntityManagerHolder@112c6483] for key [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean@47d4f12f] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.021+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.jdbc.datasource.ConnectionHolder@4fb57177] for key [com.sun.gjc.spi.jdbc40.DataSource40@75fa4851] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.021+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:593) - Closing JPA EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] after transaction 2012-02-08T15:08:56.022+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (EntityManagerFactoryUtils.java:343) - Closing JPA EntityManager 2012-02-08T15:08:56.023+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|ERROR [thread-pool-1-80(80)] (PopupAction.java:39) - Cannot get Intermed!!! 
Transaction not active; nested exception is java.lang.IllegalStateException: Transaction not active 2012-02-08T15:08:56.024+0200|SEVERE||_ThreadID=184;_ThreadName=Thread-5;|org.springframework.dao.InvalidDataAccessApiUsageException: Transaction not active; nested exception is java.lang.IllegalStateException: Transaction not active at org.springframework.orm.jpa.EntityManagerFactoryUtils.convertJpaAccessExceptionIfPossible(EntityManagerFactoryUtils.java:298) at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:106) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.convertException(ExtendedEntityManagerCreator.java:501) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.afterCommit(ExtendedEntityManagerCreator.java:481) at org.springframework.transaction.support.TransactionSynchronizationUtils.invokeAfterCommit(TransactionSynchronizationUtils.java:133) at org.springframework.transaction.support.TransactionSynchronizationUtils.triggerAfterCommit(TransactionSynchronizationUtils.java:121) at org.springframework.transaction.support.AbstractPlatformTransactionManager.triggerAfterCommit(AbstractPlatformTransactionManager.java:950) at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:796) at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723) at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202) at $Proxy325.getIntermed(Unknown Source) at xxx.vs.common.actions.PopupAction.execute(PopupAction.java:37) at sun.reflect.GeneratedMethodAccessor1581.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:453) at com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:292) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:255) at org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:256) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:176) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:265) at org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at 
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:138) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:190) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:75) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:90) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:243) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:100) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:141) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:145) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:171) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:176) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:192) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:187) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at xxx.vs.common.utils.AuthenticationInterceptor.intercept(AuthenticationInterceptor.java:78) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at 
com.googlecode.sslplugin.interceptors.SSLInterceptor.intercept(SSLInterceptor.java:128)
at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249)
at org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:54)
at org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:510)
at org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77)
at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:98)
at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:91)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:162)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:330)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:174)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:828)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:725)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1019)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:225)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.IllegalStateException: Transaction not active
at org.hibernate.ejb.TransactionImpl.commit(TransactionImpl.java:69)
at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.afterCommit(ExtendedEntityManagerCreator.java:478)
... 93 more

So again... any idea?

    Read the article

  • Help with Hibernate mapping

    - by GigaPr
    Hi, I have the following classes:

    public class RSS { private Integer id; private String title; private String description; private String link; private Date dateCreated; private Collection rssItems; private String url; private String language; private String rating; private Date pubDate; private Date lastBuildDate; private User user; private Date dateModified; public RSS() { } public Integer getId() { return id; } public void setId(Integer id) { this.id = id; } public String getTitle() { return title; } public void setTitle(String title) { this.title = title; } public void setDescription(String description){ this.description = description; } public String getDescription(){ return this.description; } public void setLink(String link){ this.link = link; } public String getLink(){ return this.link; } public void setUrl(String url){ this.url = url; } public String getUrl(){ return this.url; } public void setLanguage(String language){ this.language = language; } public String getLanguage(){ return this.language; } public void setRating(String rating){ this.rating = rating; } public String getRating(){ return this.rating; } public Date getPubDate() { return pubDate; } public void setPubDate(Date pubDate) { this.pubDate = pubDate; } public Date getLastBuildDate() { return lastBuildDate; } public void setLastBuildDate(Date lastBuildDate) { this.lastBuildDate = lastBuildDate; } public Date getDateModified() { return dateModified; } public void setDateModified(Date dateModified) { this.dateModified = dateModified; } public Date getDateCreated() { return dateCreated; } public void setDateCreated(Date dateCreated) { this.dateCreated = dateCreated; } public Collection getRssItems() { return rssItems; } public void setRssItems(Collection rssItems) { this.rssItems = rssItems; } }

    public class RSSItem { private RSS rss; private Integer id; private String title; private String description; private String link; private Date dateCreated; private Date dateModified; private int rss_id; public RSSItem() {} public Integer getId() { return id; } public void setId(Integer id) { this.id = id; } public String getTitle() { return title; } public void setTitle(String title) { this.title = title; } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } public String getLink() { return link; } public void setLink(String link) { this.link = link; } public Date getDateCreated() { return dateCreated; } public void setDateCreated(Date dateCreated) { this.dateCreated = dateCreated; } public Date getDateModified() { return dateModified; } public void setDateModified(Date dateModified) { this.dateModified = dateModified; } public RSS getRss() { return rss; } public void setRss(RSS rss) { this.rss = rss; } }

    which I mapped as:

    <hibernate-mapping> <class name="com.rssFeed.domain.RSS" schema="PUBLIC" table="RSS"> <id name="id" type="int"> <column name="ID"/> <generator class="native"/> </id> <property name="title" type="string"> <column name="TITLE" not-null="true"/> </property> <property name="lastBuildDate" type="java.util.Date"> <column name="LASTBUILDDATE"/> </property> <property name="pubDate" type="java.util.Date"> <column name="PUBDATE" /> </property> <property name="dateCreated" type="java.util.Date"> <column name="DATECREATED" not-null="true"/> </property> <property name="dateModified" type="java.util.Date"> <column name="DATEMODIFIED" not-null="true"/> </property> <property name="description" type="string"> <column name="DESCRIPTION" not-null="true"/> </property> <property name="link" type="string"> <column name="LINK" not-null="true"/> </property> <property name="url" type="string"> <column name="URL" not-null="true"/> </property> <property name="language" type="string"> <column name="LANGUAGE" not-null="true"/> </property> <property name="rating" type="string"> <column name="RATING"/> </property> <set inverse="true" lazy="false" name="rssItems"> <key> <column name="RSS_ID"/> </key> <one-to-many class="com.rssFeed.domain.RSSItem"/> </set> </class> </hibernate-mapping>

    <hibernate-mapping> <class name="com.rssFeed.domain.RSSItem" schema="PUBLIC" table="RSSItem"> <id name="id" type="int"> <column name="ID"/> <generator class="native"/> </id> <property name="title" type="string"> <column name="TITLE" not-null="true"/> </property> <property name="description" type="string"> <column name="DESCRIPTION" not-null="true"/> </property> <property name="link" type="string"> <column name="LINK" not-null="true"/> </property> <property name="dateCreated" type="java.util.Date"> <column name="DATECREATED"/> </property> <property name="dateModified" type="java.util.Date"> <column name="DATEMODIFIED"/> </property> <many-to-one class="com.rssFeed.domain.RSS" fetch="select" name="rss"> <column name="RSS_ID"/> </many-to-one> </class> </hibernate-mapping>

    But when I try to fetch an RSS I get the following error:

    Exception occurred in target VM: failed to lazily initialize a collection of role: com.rssFeed.domain.RSS.rssItems, no session or session was closed org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.rssFeed.domain.RSS.rssItems, no session or session was closed at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:358) at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:350) at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:97) at org.hibernate.collection.PersistentSet.size(PersistentSet.java:139) at com.rssFeed.dao.hibernate.HibernateRssDao.get(HibernateRssDao.java:47) at com.rssFeed.ServiceImplementation.RssServiceImplementation.get(RssServiceImplementation.java:46) at com.rssFeed.mvc.ViewRssController.handleRequest(ViewRssController.java:20) at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:809) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:476) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:431) at javax.servlet.http.HttpServlet.service(HttpServlet.java:734) at javax.servlet.http.HttpServlet.service(HttpServlet.java:847) at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:332) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:233) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619)

    What does it mean? Thanks
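    The trace points at HibernateRssDao.get() touching rss.getRssItems() after the Session that loaded the RSS has been closed (despite lazy="false" in the mapping, which is worth double-checking: the mapping file in effect on the classpath may not be the one shown). A minimal sketch of one common remedy, assuming the DAO holds an injected SessionFactory: force the collection to load while the Session is still open.

        import org.hibernate.Hibernate;
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;

        public class HibernateRssDao {
            private final SessionFactory sessionFactory; // assumed injected

            public HibernateRssDao(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            public RSS get(int id) {
                Session session = sessionFactory.openSession();
                try {
                    RSS rss = (RSS) session.get(RSS.class, id);
                    // Populate the lazy collection before the Session closes,
                    // so later size()/iteration never needs a live Session.
                    Hibernate.initialize(rss.getRssItems());
                    return rss;
                } finally {
                    session.close();
                }
            }
        }

    Alternatively, keeping the Session open for the view (Spring's OpenSessionInViewFilter) or switching the set to an eager join fetch achieves the same end; the sketch above is the smallest change.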

    Read the article

  • DirectAccess client can't connect

    - by odd parity
    I've set up a DirectAccess server on Windows Server 2012 at my workplace. I'm using a Windows 8 Enterprise client to connect to it. It works fine over a mobile connection, but it fails when connecting from home. I've ruled out the firewall/router as the culprit, as the issues persist when connecting the laptop directly to the cable modem. I'm not sure where to begin debugging this; does anyone have any pointers? Both the Teredo and IPHTTPS interfaces are up (although, as the server is behind a NAT and we only have one public IP, I understand that IPHTTPS is the only protocol that will be used). The IPHTTPS tunnel also seems to be connected: netsh interface httpstunnel show interfaces Interface IPHTTPSInterface (Group Policy) Parameters ------------------------------------------------------------ Role : client URL : https://redacted:443/IPHTTPS Last Error Code : 0x0 Interface Status : IPHTTPS interface active However, the DirectAccess link can't be activated; get-daconnectionstatus cycles between Status : Error Substatus : CouldNotContactDirectAccessServer and Status : Error Substatus : RemoteNetworkAuthenticationFailure Any suggestions on how to attack this are appreciated!
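    Since the IPHTTPS interface itself is healthy, the RemoteNetworkAuthenticationFailure substatus usually points one layer up: NRPT name resolution, or the IPsec authentication (machine certificate/Kerberos) that rides inside the tunnel. A short client-side check-list, offered as a troubleshooting sketch rather than a guaranteed fix, showing whether the name-resolution policy is applied and whether any IPsec main mode security associations are being built:

        netsh dnsclient show state
        netsh namespace show effectivepolicy
        netsh interface httpstunnel show interfaces
        netsh advfirewall monitor show mmsa
        Get-DAConnectionStatus

    Empty output from "show mmsa" while the tunnel is up would suggest certificate or Kerberos trouble rather than connectivity; running the same commands on the working mobile connection and comparing should localize the difference.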

    Read the article

  • CertMgr fails trying to import an SPC file

    - by nsr81
    We have an SPC file which came with the Cisco IP Communicator installer. It needs to be imported into the localMachine ROOT store. However, when certmgr.exe is run against this SPC file, it errors out; it doesn't matter whether it's run from within the installer or manually. The command I've tried is: certmgr.exe -add -all CDPcredentials.spc -s -r localMachine root The result displayed is: Error: Failed to save to the destination store CertMgr Failed There is no other information, no log file, nothing in the Event Viewer. It's almost as if the ROOT store is in a read-only state. I would also like to point out that I'm able to import single certificates, just not an SPC file, which contains multiple certificates. I have also tried different versions of the CertMgr utility. Running on Windows 7 Enterprise 64-bit. Any assistance would be appreciated.
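    "Failed to save to the destination store" on Windows 7 is very often a UAC problem: writing to the localMachine Root store requires an elevated prompt even for an administrator account. If elevation is already in place, certutil is worth trying as an alternative importer; a sketch under the assumption that the .spc is a standard PKCS#7 bundle, which certutil reads directly:

        rem From an elevated command prompt:
        certutil -addstore Root CDPcredentials.spc
        rem Confirm what landed in the store:
        certutil -store Root

    If certutil imports the bundle cleanly, the problem is certmgr.exe's handling of the multi-certificate container rather than the store itself.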

    Read the article

  • Sharepoint 2010, 404 error after installation

    - by Tommy Jakobsen
    Running Windows Server 2008 R2 Standard, SQL Server 2008 Enterprise, and Team Foundation Server 2010, I installed SharePoint Server 2010 (single server). It installed correctly, and the wizard configured it without errors. When accessing the SharePoint server through http://localhost/ I get a 404 error. I also get a 404 when trying to access the admin interface on port 42620. SharePoint, TFS and Reporting Services are the only applications on my IIS, and they are NOT sharing the same port, so that can't be the error. Do you have any ideas what the problem could be? Is there some way that I can debug this?
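    A 404 from every SharePoint URL straight after a clean install usually means the request reaches IIS but lands on the wrong site or on a stopped application pool rather than on SharePoint itself. Two quick appcmd checks, offered as a starting point rather than a diagnosis, list the bindings of every site and any pools that never started:

        %windir%\system32\inetsrv\appcmd list site
        %windir%\system32\inetsrv\appcmd list apppool /state:Stopped

    If the "SharePoint - 80" site's binding is claimed by another site (TFS's default site is a frequent suspect on a shared box) or its application pool is stopped, that would produce exactly this symptom.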

    Read the article

  • wildcard ssl certificate - exchange 2010 - POP/IMAP problem

    - by Sise
    Previously we requested a wildcard SSL certificate from GoDaddy for our main domain. One of the reasons was the newly established Exchange Server 2010. Usually you require the following names to be included in the certificate: FQDN (e.g. mail.whatever.com) Hostname (mail) Domain name (whatever.com) Autodiscover.whatever.com MX record With the wildcard certificate these are all covered (except the local hostname). During creation/import of the SSL certificate into Exchange 2010, Exchange first asks whether a wildcard certificate is used, and then encounters an error: because the certificate is a wildcard rather than one generated specifically for the FQDN, SSL for POP and IMAP cannot be provided. I couldn't find any workaround or solution for this on Google, so I hope someone here has an answer or solution for me! :) Exchange 2010 is running on Windows Server 2008 R2 Enterprise. Thanks in advance and best regards, sise
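    Exchange refuses to bind POP and IMAP to a wildcard automatically, but both services can be pointed at a concrete name that the wildcard covers. A sketch using the standard Exchange Management Shell cmdlets, with mail.whatever.com standing in for the real FQDN:

        Set-POPSettings  -X509CertificateName mail.whatever.com
        Set-IMAPSettings -X509CertificateName mail.whatever.com
        # The settings take effect once the services restart:
        Restart-Service MSExchangePOP3
        Restart-Service MSExchangeIMAP4

    SMTP, OWA and the other HTTPS services accept the wildcard through Enable-ExchangeCertificate as usual; only POP and IMAP need this explicit name.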

    Read the article

  • how do I resolve "user isn't assigned to any management roles" error in Exchange 2010 EMC?

    - by TheoJones
    Newly installed Exchange 2010 box (technically, a partially installed box, as this error is preventing me from completing the install). When I launch EMC or the Management PowerShell, I get this error: VERBOSE: Connecting to myserver.mydomain.internal [myserver.mydomain.internal] Processing data from remote server failed with the following error message: The user "mydomain\administrator" isn't assigned to any management roles. For more information, see the about_Remote_Troubleshooting Help topic. Failed to connect to any Exchange Server in the current site. Thing is, the logged-in administrator account (confirmed using 'whoami') is a member of the following groups: Administrators Delegated Setup Discovery Management Domain Admins Domain Users Enterprise Admins Exchange Organization Administrators GPO Creator Owners Organization Management Schema Admins Server Management Any ideas how I can get past this?
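    Because the check that fails is RBAC, group membership alone proves little: remote PowerShell looks for role assignments, and a partially completed setup can leave those unprovisioned even though the groups exist. One way to look under the hood, sketched on the assumption that the Exchange snap-in is present on the box, since loading it locally bypasses the remoting gate that is rejecting the connection:

        Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010
        # Is anyone actually in the role group as far as RBAC is concerned?
        Get-RoleGroupMember "Organization Management"
        # If not, add the account and retry EMC:
        Add-RoleGroupMember "Organization Management" -Member mydomain\administrator

    Re-running setup /PrepareAD is the supported way to re-create missing RBAC role assignments if the role group turns out to be empty.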

    Read the article

  • Exchange 2010 periodically stops responding to SMTP events with error 421 4.4.1 Connection timed out

    - by Michael Shimmins
    I'm after some help diagnosing why Exchange 2010 Enterprise stops responding to SMTP events. I can't find a pattern to it. It doesn't appear to be an actual timeout, as the server responds immediately with the error. To reproduce it I telnet into the server on port 25 and issue an EHLO. The server immediately replies with the 421: 421 4.4.1 Connection timed out Once this starts happening, I've found restarting the Exchange box is the only reliable way to get mail flowing again. Sometimes restarting the Transport service or the Mailbox Assistants service seems to fix it, but this could be coincidental, as it often has no effect.
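    An immediate 421 on EHLO is a classic signature of transport back pressure: when the volume holding the queue database, the version buckets, or memory cross their high-water marks, Exchange rejects inbound sessions outright instead of timing out. MSExchangeTransport logs events 15004-15007 whenever resource pressure changes, so this is easy to confirm or rule out. A quick check from PowerShell on the transport server:

        # Look for back-pressure transitions in the recent Application log
        Get-EventLog -LogName Application -Source MSExchangeTransport -Newest 500 |
            Where-Object { 15004..15007 -contains $_.EventID } |
            Format-Table TimeGenerated, EventID, EntryType -AutoSize

    If those events line up with the outages, the lasting fix is usually freeing space on (or relocating) the queue database volume rather than restarting services.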

    Read the article

  • RightFax 9.3 Available Disk Space?

    - by dkirk
    We are currently running RightFax 9.3 as our fax server. I was just in the RightFax Enterprise Fax Manager resolving another problem and noticed two little red exclamation marks in the lower-left-hand pane: one beside "Available disk space for fax images" and one beside "Available disk space for fax database". Both are labeled with 5%. What is this, and how can it be resolved? I have plenty of physical storage left on the server drives, so I am curious as to which space it is referring to.
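    Judging by their names, those two indicators track the volumes that host the fax image directory and the fax database respectively, so the 5% warning concerns a specific volume rather than total server storage; my assumption would be a smaller system or data partition crossing RightFax's low-space threshold. A quick per-volume free-space map narrows down which path it is complaining about:

        Get-WmiObject Win32_LogicalDisk -Filter "DriveType=3" |
            Select-Object DeviceID,
                @{Name='FreeGB';  Expression={ [math]::Round($_.FreeSpace / 1GB, 1) }},
                @{Name='FreePct'; Expression={ [math]::Round(100 * $_.FreeSpace / $_.Size) }}

    Any local volume sitting near 5% free is the likely culprit; compare it against the image and database paths configured in Enterprise Fax Manager.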

    Read the article

  • WSRM error on Server running SQL databases

    - by Adam
    I have a server running Windows Server 2008 Enterprise Edition with SQL Server 2005. There are no problems with the server in its day-to-day functions, but I am getting a warning in the event log every 5 minutes with the following: Windows System Resource Manager encountered the following error 0x80010117. User Name will not be logged in the subsequent event logs. Error 0x80010117 User Action Address the error condition, and then try again. This has been happening for over two weeks now and I cannot find anything online to help! Any help would be much appreciated. Thanks

    Read the article

  • Windows 2008 R2 DHCP server not responding to DHCP discover

    - by MartinSteel
    I've got two Windows Server 2008 R2 Enterprise servers, both running DNS and DHCP, called Cod and Lobster. DHCP is set up using the split-scope option introduced with 2008 R2, whereby both servers should respond, with the first response providing the lease. The setup is as follows: Cod - IP: 192.168.0.231 - Pool: 192.168.0.101 - 192.168.0.179, exclusion for 160-179. - Response Delay: 0ms - Authorised in Active Directory (re-authorised to confirm) - Windows Firewall disabled while testing Lobster - IP: 192.168.0.232 - Pool: 192.168.0.101 - 192.168.0.179, exclusion for 101-159. - Response Delay: 1000ms - Authorised in Active Directory All DHCP leases to clients are currently being issued by Lobster rather than Cod. Packet captures with Wireshark show the following (all to the broadcast address): Client - DHCP Discover Lobster - DHCP Offer (after 1s delay) Client - DHCP Request Lobster - DHCP Ack Client - DHCP Inform From my setup with two servers I'd expect to see a DHCP Offer coming from Cod almost immediately after the DHCP Discover. Does anybody have any idea what would prevent the DHCP server responding to the Discover?
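    Given that Lobster answers and Cod stays completely silent, the usual suspects on Cod are an unbound interface, an inactive scope, or a scope that believes it has no free addresses left. A couple of checks to run on Cod, sketched with the scope ID assumed to be 192.168.0.0:

        rem Which NICs is the DHCP Server service actually listening on?
        netsh dhcp server show bindings
        rem Is the scope active?
        netsh dhcp server scope 192.168.0.0 show state
        rem How many addresses does the server think are in use/free?
        netsh dhcp server show mibinfo

    The DHCP audit log (%windir%\System32\dhcp\DhcpSrvLog-*.log) will also record whether Cod ever saw the Discover at all, which separates a listening problem from a responding problem.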

    Read the article

  • How to change RDS licensing mode from 'per user/device' to 'Remote control for administrators' on Wi

    - by Prashant Mandhare
    We have installed Windows 2008 R2 Enterprise on a Dell server. This server is placed remotely in a data center and only an administrator is going to access it, for maintenance purposes. No multiple-user or client remote access is needed. Now, during the 'Remote Desktop Services' role installation, our network admin accidentally selected the 'per user/device' licensing mode, because of which the 120-day free trial period is now ticking. Since only an administrator is going to access this server remotely, we need the 'Remote control for administrators' licensing mode (like Windows 2003) on it. How can we change the licensing mode from 'per user/device' to 'Remote control for administrators' on the 2008 server? Also, will it be possible to make this change remotely using the RDP session itself, or do I need to do it from the physical console (if remote access is going to be disabled during the switch)?
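    On 2008 R2 the 2003-style "Remote Desktop for Administration" is not a licensing mode you select; it is what you get when the RD Session Host role service is not installed at all: plain Remote Desktop allows two concurrent administrative sessions with no licensing server. So the usual route is to remove the role service rather than reconfigure it, sketched here with the ServerManager module:

        Import-Module ServerManager
        # Confirm the RD Session Host role service is what was installed
        Get-WindowsFeature RDS-RD-Server
        # Removing it reverts the box to admin-only Remote Desktop
        Remove-WindowsFeature RDS-RD-Server -Restart

    This can be done over RDP; the -Restart will drop your session, but administrative Remote Desktop remains enabled after the reboot. It is still worth verifying that Remote Desktop is switched on in System Properties > Remote before starting, so the reboot cannot lock you out.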

    Read the article
