Search Results

Search found 36521 results on 1461 pages for 'aq advanced queue oracle support streams propagation schedule dblink troubleshoo'.

Page 390/1461 | < Previous Page | 386 387 388 389 390 391 392 393 394 395 396 397  | Next Page >

  • Difference between EJB Persist & Merge operations

    - by shantala.sankeshwar
    This article explains the difference between the EJB persist and merge operations, with scenarios.

    Use Case Description
    Users working with EJB persist and merge operations often have this question in mind: "If merge can create a new entity as well as modify an existing entity, why do we have two separate operations, persist and merge?" The reason is very simple. If we use the merge operation to create a new entity and the entity already exists, it does not throw any exception, but persist throws an exception if the entity already exists. Merge should be used to modify an existing entity. The SQL statement executed on a persist operation is an INSERT. In the case of merge, a SELECT statement executes first, followed by an UPDATE.

    Scenario 1: Persist operation to create a new Emp record
    Suppose we have a Java EE web application created with entities from the EMP table and a session bean exposed as a data control.
    - Drop the Emp object (expand SessionEJBLocal -> Constructors under Data Controls) as an ADF parameter form in a .jspx page.
    - Drop persistEmp(Emp) as an ADF command button and provide #{bindings.EmpIterator.currentRow.dataProvider} as the value for the emp parameter.
    - Run the page, provide values for Emp, and click the 'persistEmp' button. A new Emp record gets created.
    So when we execute the persist operation, only an INSERT statement is executed:

        INSERT INTO EMP (EMPNO, COMM, HIREDATE, ENAME, JOB, DEPTNO, SAL, MGR) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
          bind => [2, null, null, e2, null, 10, null, null]

    Scenario 2: Merge operation to modify an existing Emp record
    Suppose we have the same Java EE web application, with entities from the EMP table and a session bean exposed as a data control.
    - Drop the empFindAll() object as an ADF form on a .jspx page.
    - Drop the mergeEmp(Emp) operation as a command button and provide #{bindings.EmpIterator.currentRow.dataProvider} as the value for the emp parameter.
    - Run the page, modify values for an Emp record, and click the 'mergeEmp' button. The respective Emp record gets modified.
    So when we execute the merge operation, SELECT and UPDATE statements are executed:

        SELECT EMPNO, COMM, HIREDATE, ENAME, JOB, DEPTNO, SAL, MGR FROM EMP WHERE (EMPNO = ?)
          bind => [7566]
        UPDATE EMP SET ENAME = ? WHERE (EMPNO = ?)
          bind => [KINGS, 7839]
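
    The contrast is easy to see in plain JPA code. Below is a minimal sketch, assuming a container-managed EntityManager and a simple Emp entity; the bean and method names are illustrative, not the exact project code from the scenarios above.

        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @Stateless
        public class EmpSessionBean {

            @PersistenceContext
            private EntityManager em;

            // persist: issues only an INSERT; throws EntityExistsException
            // (possibly deferred to flush/commit) if the entity already exists.
            public void persistEmp(Emp emp) {
                em.persist(emp);
            }

            // merge: issues a SELECT to load the current state, then an UPDATE;
            // if the entity does not exist, it is created instead of failing.
            public Emp mergeEmp(Emp emp) {
                return em.merge(emp);
            }
        }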

    Read the article

  • MySQL Enterprise Backup 3.8.2 - Overview

    - by Priya Jayakumar
    MySQL Enterprise Backup (MEB) is the ideal solution for backing up MySQL databases. MEB 3.8.2 was released in June 2013, and its main goal is to improve usability. With this release, users can track backup progress both in terms of size and as a percentage of the total. The release also offers options to manage MEB's behavior if space on the secondary storage is completely exhausted during backup.

    The progress indicator is a (short) string that indicates how far the execution of a time-consuming MEB command has progressed. It consists of one or more "meters" that measure the progress of the command. Two options control the progress-reporting function of the mysqlbackup command: (1) --show-progress and (2) --progress-interval. The user can control the progress indicator by using the --show-progress option in any MEB operation. This option instructs MEB to periodically output short reports on the progress of time-consuming commands; its argument indicates where the output should be sent, for example stderr, stdout, a file, a fifo, or a table. With the --show-progress option, both the total size of the backup to be copied and the size already copied will be shown, along with the state of the operation, for example data or metadata being copied, or tables being locked. This gives the DBA much clearer information on the progress of the running backup. The interval between progress reports, in seconds, is controlled by the --progress-interval option. For more information, please refer to the progress-report-options documentation.

    MEB will also be accessible through a GUI in the next version of MySQL Workbench, which can be used as a front-end interface for MEB users to perform backup operations at the click of a button. This feature was highly requested by DBAs and will be very useful. Refer to http://insidemysql.com/mysql-workbench-6-0-a-sneak-preview/ for upcoming WorkBench release info.

    Along with the progress-report feature, some important issues are also addressed in MEB 3.8.2. A new command-line option, --on-disk-full, is introduced to abort or warn the user when a backup process encounters a full-disk condition; when no value is given, it aborts by default. A few issues related to incremental backup are also addressed in this release; please refer to the 3.8.2 documentation for more details. It would be good for MEB users to move to 3.8.2 to take incremental backups. Overall, the added usability and the important defects fixed in this release make MySQL Enterprise Backup 3.8.2 a promising release.
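
    As a rough illustration, a backup invocation combining the new options might look like the sketch below. Treat the exact argument forms as assumptions to verify against the MEB 3.8.2 manual; the paths and user name are placeholders.

        mysqlbackup --user=backup_admin --password \
          --backup-dir=/backups/full-2013-06-20 \
          --show-progress=stderr \
          --progress-interval=30 \
          --on-disk-full=abort \
          backup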

    Read the article

  • Why Ultra-Low Power Computing Will Change Everything

    - by Tori Wieldt
    The ARM TechCon keynote "Why Ultra-Low Power Computing Will Change Everything" was anything but low-powered. The speaker, Dr. Jonathan Koomey, knows his subject: he is a Consulting Professor at Stanford University, worked for more than two decades at Lawrence Berkeley National Laboratory, and has been a visiting professor at Stanford University, Yale University, and UC Berkeley's Energy and Resources Group. His current focus is creating a standard (computations per kilowatt-hour) and measuring computer energy consumption over time. The trends are impressive: the energy consumed per computation has halved every 1.5 years for the last 60 years, and battery life has made roughly a 10x improvement each decade since 1960. It's these improvements that have made laptops and cell phones possible.

    What does the future hold? Dr. Koomey said that in the past the race among chip manufacturers was to create the fastest computer, but the priorities have now changed. New computers are tiny, smart, connected, and cheap. "You can't underestimate the importance of a shift in industry focus from raw performance to power efficiency for mobile devices," he said. There is also a confluence of trends in computing, communications, sensors, and controls. The challenge is how to reduce the power requirements for these tiny devices. Alternate sources of power being explored include light, heat, motion, and even blood sugar. The University of Michigan has produced a miniature sensor that harnesses solar energy and could last for years without needing to be replaced. The University of Washington has created a sensor that scavenges power from existing radio and TV signals.

    Devices designed for a specific purpose are much more efficient than general-purpose computers. With all these sensors, instead of big data, developers should focus on nano-data: personalized information that will adjust the lights in a room, a machine, a variable sign, and so on. Dr. Koomey showed some examples:
    - The Proteus Digital Health Feedback System, an ingestible sensor that transmits when a patient has taken their medicine and is powered by their stomach juices. (Gives "powered by you" a whole new meaning!)
    - Streetline Parking Systems, which provide real-time data about available parking spaces. The information can be sent to your phone or update parking signs around the city to point to areas with available spaces. Less driving around looking for parking spaces!
    - The BigBelly trash system, which uses solar power, compacts trash, and sends a text message when it is full. This dramatically reduces the number of times a truck has to come to pick up trash, freeing up resources and slashing fuel costs. This is a classic example of the efficiency of moving "bits not atoms."

    But researchers are approaching the physical limits of sensors, Dr. Koomey explained. At the current rate of technology improvement, they'll reach the three-atom transistor by 2041. Once they hit that wall, it will force a revolution in the way we do computing. But wait: researchers at Purdue University and the University of New South Wales are both working on reliable one-atom transistors! Other researchers are working on "approximate computing" that will reduce computing requirements drastically. So it's unclear where the wall actually is. In the meantime, as Dr. Koomey promised, ultra-low power computing will change everything.

    Read the article

  • Where Facebook Stands Heading Into 2013

    - by Mike Stiles
    In our last blog, we looked at how Twitter is positioned heading into 2013. Now it's time to take a similar look at Facebook. 2012, for a time at least, seemed to be the era of Facebook-bashing. Between a far-from-smooth IPO, subsequent stock price declines, and anxiety over privacy, the top social network became a target for comedians, politicians, business journalists, and of course those who were prone to Facebook-bash even in the best of times. But amidst the "this is the end of Facebook" headlines, the company kept experimenting, kept testing, kept innovating, and pressed forward, committed as always to the user experience, while concurrently addressing monetization with greater urgency.

    Facebook enters 2013 with over 1 billion users around the world. Usage grew 41% in Brazil, Russia, Japan, South Korea and India in 2012. In the Middle East and North Africa, an average of 21 new signups happen per minute. Engagement and time spent on the site would impress the harshest of critics. Facebook, while not bulletproof, has become such an integrated daily force in users' lives that it's getting hard to imagine any future mass rejection.

    You want to see a company recognizing weaknesses and shoring them up. Mobile was a weakness in 2012, as Facebook was one of many caught by surprise at the speed of user migration to mobile. But new mobile interfaces, better mobile ads, speed upgrades, standalone Messenger and Pages mobile apps, and the big-dollar acquisition of Instagram were a few indicators that Facebook won't play catch-up any more than it has to.

    As a user, the cool thing about Facebook is, it knows you. The uncool thing about Facebook is, it knows you. The company is walking a delicate line between the public's competing desires for customized experiences and privacy. While the company works to make privacy options clearer and easier, Facebook's Paul Adams says data aggregation can move from acting on what a user is engaging with at the moment to a more holistic view of what they're likely to want at any given time. To help learn about you, there's Open Graph. Embedded through diverse partnerships, the idea is to surface what you're doing and what you care about, and help you discover things via your friends' activities. Facebook's Director of Engineering, Mike Vernal, says building mobile social apps connected to Facebook in such ways is the next wave of big innovation. Expect to see that fostered in 2013.

    The Facebook site experience is always evolving. Some users like that about Facebook; others can't wait to complain about it... on Facebook. The Facebook focal point, the News Feed, is not sacred and is seeing plenty of experimentation with the insertion of modules. From upcoming concerts, events, and suggested Pages you might like, to aggregated "most shared" content from social reader apps, plenty could start popping up between those pictures of what your friends had for lunch. As for which friends' lunches you see, that's a function of the mythic EdgeRank... which is also tinkered with. When Facebook changed it in September, Page admins saw reach go down and the high anxiety set in quickly. Engagement, however, held steady. The adjustment was about relevancy over reach. (And oh yeah, reach was something that could be charged for.) Facebook wants users to see what they're most likely to like, based on past usage and interactions. Adding to the "cream must rise to the top" philosophy, they're now even trying out ordering post comments based on the engagement the comments get.

    Boy, it's getting competitive out there for a social engager. Facebook has to make $$$. To do that, it must offer attractive vehicles to marketers. There is a myriad of ad units. But a key Facebook marketing concept is the Sponsored Story. It's key because it encourages content that's good, relevant, and performs well organically; if it is, marketing dollars can amplify it and extend its reach. Brands can expect the rollout of a search product and an ad network. That's a big deal. It takes, as Open Graph does, the power of Facebook's user data and carries it beyond the Facebook environment into the digital world at large. No one can target like Facebook can, and some analysts think it could double their roughly $5 billion revenue stream.

    As every potential revenue nook and cranny is explored, there are the users themselves. In addition to Gifts, Facebook thinks users might pay a few bucks to promote their own posts so more of their friends will see them. There's also word that classifieds could be purchased in News Feeds, though they won't be called classifieds. And that's where Facebook stands: a wildly popular destination, a part of our culture, with ever-increasing functionalities, the biggest of big data, revenue strategies that appeal to marketers without souring the user experience, new challenges as a now-public company, ongoing privacy concerns, and innovations that carry Facebook far beyond its own borders. Anyone care to write a "this is the end of Facebook" headline?
    @mikestiles
    Photo via stock.xchng

    Read the article

  • View Link inConsistency

    - by Abhishek Dwivedi
    What is View Link Consistency? When multiple instances (say VO1, VO2, VO3, etc.) of an EO-based VO are based on the same underlying EO, a new row created in one of these VO instances (say VO1) can be automatically added (without re-query) to the row sets of the others (VO2, VO3, etc.). This capability is known as view link consistency. The feature works for any VO for which it is enabled, regardless of whether it is involved in a view link or not.

    What causes View Link inConsistency? Unless jbo.viewlink.consistent is disabled for the VO (or globally), or setAssociationConsistent(false) is applied, any of the following can cause view link inconsistency:
    1. setWhereClause
    2. An unreferenced secondary EO
    3. findByViewCriteria()
    4. Using a view link accessor row set

    Why does this happen? There can be one of the following reasons.
    a. In cases 1 and 2, the view link consistency flag is disabled on that view object.
    b. As far as 3 is concerned, findByViewCriteria is used to retrieve a new row set to process programmatically without changing the contents of the default row set. In this case, unlike the previous cases, the view link consistency flag is not disabled, meaning that changes in the default row set are reflected in the new row set. However, the opposite does not hold true: for instance, if a row is deleted from the new row set, the corresponding row in the default row set does not get deleted. In one of my features, which involved deletion of rows, I resolved the view link inconsistency issue by replacing findByViewCriteria with applyViewCriteria (see the sketch below).
    c. Case 4 is similar to 3: whenever a view link accessor row set is retrieved, a new row set is created. Creating a new row set does not mean re-executing the query each time, only creating a new instance of a RowSet object with its default iterator reset to the "slot" before the first row. Also, please note that this new row set always originates from an internally created view object instance, not one that you added to the data model. This internal view object instance is created as needed and added with a system-defined name to the root application module. The very reason a distinct, internally created view object instance is used is to guarantee that it remains unaffected by developer-related changes to their own view object instances in the data model.
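
    The fix described in case b translates to a small code change. The following is a hedged sketch, assuming an application module implementation with a view object instance named "EmpView1" and the oracle.jbo ViewObject/ViewCriteria types; the instance name and filter are illustrative.

        // Inside an application module implementation class (sketch):
        public void filterDefaultRowSet() {
            ViewObject vo = findViewObject("EmpView1");   // instance name assumed

            ViewCriteria vc = vo.createViewCriteria();
            ViewCriteriaRow vcr = vc.createViewCriteriaRow();
            vcr.setAttribute("Deptno", "= 10");           // illustrative filter
            vc.addElement(vcr);

            // applyViewCriteria keeps the work in the default row set, where
            // view link consistency still applies; findByViewCriteria would
            // spawn a detached row set whose deletions do not flow back.
            vo.applyViewCriteria(vc);
            vo.executeQuery();
        }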

    Read the article

  • New Cloud Security Book: Securing the Cloud by Vic Winkler

    - by user12608550
    It's rare that I read a technical book straight through; I usually read key chapters and save the rest for later reference. But Winkler's book, written by an accomplished and highly experienced security professional, was worth a complete read, cover to cover. Of the recently published cloud security books, such as:
    - Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance, by Tim Mather, Subra Kumaraswamy, and Shahed Latif; O'Reilly Media Inc, 2009
    - Cloud Computing: Implementation, Management, and Security, by John Rittinghouse and James Ransome; CRC Press, 2010
    - Cloud Security: A Comprehensive Guide to Secure Cloud Computing, by Ronald Krutz and Russell Vines; Wiley Publishing Inc, 2010
    ...Securing the Cloud is the most useful and informative about all aspects of cloud security. Clearly, through his experience, the author has thought through many practical issues of securing large, virtualized IT installations. His Chapter 6 on Best Practices and Chapter 9 with its valuable checklists are worth the price of the book. If you are among the many new cloud computing professionals, Securing the Cloud is an essential reference for your work.

    Read the article

  • Workshop in Denver canceled - thanks to hurricane Isaac

    - by Mike Dietrich
    Yesterday Roy started his journey on time to travel to Denver, CO for today's Upgrade and Migration Workshop. But unfortunately, due to the remnants of hurricane Isaac moving up the East Coast and scrambling flight schedules, Roy's flight from NYC to Denver got canceled after a 3-hour delay leaving Manchester, NH, and there was no option to arrive in Denver this morning on time. So we apologize for canceling the workshop. The local marketing department will contact you regarding an alternative date. Sorry for any inconvenience!

    Read the article

  • Creating a Custom Validation Rule and Registering It

    - by FormsEleven
    What is a Validation Rule?
    A validation rule is a piece of code that performs a check ensuring that data meets given constraints. In an enterprise application development environment, developers often need the same validation logic in several places across projects. Instead of creating redundant validations, a custom validation rule provides a library of validation rules that can be registered and used across applications. A custom validation is encapsulated in a reusable component so that you do not have to write it every time you need to do input validation. Here is how we can easily implement a custom validation that rejects an employee name of "KING".

    Creating a custom validation rule:
    1. Create a generic application workspace "CustomValidator" with the project "Model".
    2. Create BC4J objects based on the EMP table.
    3. Create a custom validation rule. In the EmpNamerule class, update the validateValue(..) method as follows:

        public boolean validateValue(Object value) {
            EntityImpl emp = (EntityImpl) value;
            // Fail validation when Ename is "KING"
            if (emp.getAttribute("Ename").toString().equals("KING")) {
                return false;
            }
            return true;
        }

    Create an ADF Library: The next step is to create an ADF library, with a name such as testADFLibrary1.jar.

    Register the ADF Library: The next step is to register the ADF library so that it is available across applications.
    - Invoke the menu "Tools -> Preferences".
    - Select the option "Business Components -> Registered Rules" from the left pane.
    - Click the button "Pick Library". The dialog "Select Library" comes up with the user library added.
    - Add a new library that points to the above jar.
    - Check the checkbox "Register" and set the name for the rule.

    Sample Usage
    Here is how we can easily apply the validation rule so that the employee name cannot be "KING":
    - Create a new application with BC4J objects based on the EMP table.
    - Create a new validation under the Business Rules tab for Ename and select the above custom validation rule.
    - Run the Application Module tester.

    Read the article

  • CRM Evolution 2014: Mediocrity is the New Horrible in Customer Service

    - by Tuula Fai
    "Mediocrity is the new horrible in customer service," Blair McHaney, Gold's Gym Almost everyone knows that customers' expectations have risen. But, after listening to two days of presentations at CRM Evolution, I think it’s more accurate to say that customers' expectations have skyrocketed. Fortunately, most companies have gotten the message and are taking their customer service to a higher level. For those who've been hesitant to 'boldly go where their customer service organization has not gone before,' take heart. I’ve got some statistics that will encourage you to take those first few steps. Why should I change? By engaging customers online, ancestry.com achieved a 99.5% customer satisfaction score (CSAT) while improving retention and saving millions on greater efficiency, including a 38%-50% drop in inbound calls and emails.1 By empowering employees to delight customers, Gold’s Gym achieved a 77.5% Net Promoter Score (NPS) and 22% customer churn rate. No small feat when you consider the industry averages are 40% NPS and 45% churn.2 By adapting quickly to social media, brands like Verizon have benefited from social community members spending 2.5x-10x more than average customers.3 ‘The fierce urgency of now’ is upon us in customer service. You can take your customer service to a higher level! To find out more, click here CRM Evolution Customer Service Experience Footnotes: 1. Arvindh Balakrishnan, Is Your Customer Service Modern?2. Blair McHaney, Wire Your Organization with Customer Feedback3. Becky Carroll, The Power of Communities for Improving the Service Experience and Building Advocates

    Read the article

  • What's Old is New Again

    - by David Dorf
    Last night I told my son he could stream music to his tablet "from the cloud" (in this case, the Amazon Cloud). He paused, then said, "What is the cloud?" I replied, "A bunch of servers connected to the internet." Apparently he had visions of something much more magnificent. Another similar term is "big data." These marketing terms help to quickly convey topics but are oversimplifications that are open to many interpretations. At their core, those terms are shiny packages holding recycled ideas.

    I see many headlines declaring that big data changes everything, but it doesn't. Savvy retailers have been dealing with large volumes of data since the electronic cash register was invented. But there have been a few changes to the landscape that make big data a topic of conversation:

    1. Computing power has caught up to storage volumes. It's now possible to more thoroughly analyze the copious volumes of data retailers have been squirreling away. CPUs are faster, solid state drives are more plentiful, and new ways to store and search data are available. My iPhone has more power than the computer used in the Apollo mission to the moon.

    2. Unstructured data is everywhere. The Web used to be where retailers published product information, but now users are generating the bulk of the content in the form of comments, videos, and "likes." The variety of information available to retailers is huge, and its meaning difficult to discern.

    3. Everything is connected. Looking at a report from my router, there are no fewer than 20 active devices on my home network. We can track the location of mobile phones, tag products with RFID, and set our thermostats (I love my Nest) from a thousand miles away. Not only is there more data, but it's arriving at higher velocity.

    Careful readers will note the three Vs that help define so-called big data: volume, variety, and velocity. We now have more volume, more variety, and more velocity, and different technologies to deal with them. But at the heart, the objectives are still the same:
    - Informed decisions
    - Accurate forecasts
    - Improved optimizations

    So don't let the term "big data" throw you off the scent. Retailers still need to execute on the basics. But do take a fresh look at the data that's available and the new technologies to process it. The landscape will continue to change, and agile organizations will always be reevaluating their approaches. You can just add some more weapons to the arsenal.

    Read the article

  • Demo on Data Guard Protection From Lost-Write Corruption

    - by Rene Kundersma
    Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature in this blog as well, even though it has been a recommended best practice for some time.

    A lost write occurs when an I/O subsystem acknowledges the completion of a block write even though the write did not occur in persistent storage. On a subsequent block read on the primary database, the I/O subsystem returns the stale version of the data block, which might be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, faulty host bus adapters, disk controller failures, and volume manager errors. In the demo, a data block lost write occurs when an I/O subsystem acknowledges the completion of the block write while in fact the write did not occur in persistent storage.

    When a primary database lost-write corruption is detected by a Data Guard physical standby database, Redo Apply (MRP) stops and the standby signals an ORA-752 error to explicitly indicate that a primary lost write has occurred (preventing the corruption from spreading to the standby database).

    Links:
    - MOS note 1302539.1: "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration"
    - Demo
    - MAA Best Practices

    Rene Kundersma
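
    As a hedged pointer for readers who want to try this: the detection described above is driven by the DB_LOST_WRITE_PROTECT initialization parameter, which records block-read information in the redo stream so the standby can compare block versions during Redo Apply. A minimal setup sketch follows; verify the recommended values for your configuration against the MOS note above.

        -- On the primary: record buffer cache reads in the redo stream
        ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL SCOPE = BOTH;

        -- On the physical standby: enable the comparison during Redo Apply,
        -- so a stale primary block raises ORA-752 and stops MRP
        ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL SCOPE = BOTH;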

    Read the article

  • CRM vs VRM

    - by David Dorf
    In a previous post, I discussed the potential power of combining social, interest, and location graphs in order to personalize marketing and shopping experiences for consumers. Marketing companies have been trying to collect detailed information for that very purpose, a large majority of which comes from tracking people on the internet. But their approaches stem from the one-way nature of traditional advertising. With TV, radio, and magazines there is no opportunity to truly connect to customers, which has trained marketing companies to [covertly] collect data and segment customers into easily identifiable groups. To a large extent, we think of this as CRM.

    But what if we turned this viewpoint upside-down to accommodate the two-way nature of social media? The notion of marketing as conversations was the basis for the Cluetrain, an early attempt at drawing attention to the fact that customers are actually unique humans. A more practical implementation is Project VRM, which is a reverse CRM of sorts: instead of vendors managing their relationships with customers, customers manage their relationships with vendors.

    Your shopping experience is not really controlled by you; rather, it's controlled by the retailer and advertisers. And unfortunately, they typically don't give you a say in the matter. Yes, they might tailor the content for "female, age 25-35, interested in shoes," but that's not really the essence of you, is it? A better approach is to let consumers volunteer information about themselves. And why wouldn't they, if it means a better, more relevant shopping experience? I'd gladly list out my likes and dislikes in exchange for getting rid of all those annoying cookies on my hard drive.

    I really like this diagram from Beyond SocialCRM, as it captures the differences between CRM and VRM. The closest thing to VRM I can find is Buyosphere, a start-up that allows consumers to track their shopping history across many vendors, then share it appropriately. Also, Amazon does a pretty good job allowing its customers to edit their profile, which includes everything you've ever purchased from Amazon. You can mark items as gifts, or explicitly exclude them from its recommendation engine. This is a win-win for both the consumer and the retailer.

    So here is my plea to retailers: instead of trying to infer my interests from snapshots of my day, please just ask me. We'll both have a better experience in the long run.

    Read the article

  • Are you ready for the needed changes to your Supply Chain for 2013?

    - by Stephen Slade
    With the initiation of the Dodd-Frank Act, companies need to determine whether their products contain 'conflict minerals' sourced from certain global markets, such as the Democratic Republic of the Congo. The materials include metals such as gold, tin, tungsten, and tantalum. Companies with global sourcing also face new disclosure requirements in Feb '13 related to business being done in Iran: public companies are required to disclose to U.S. securities regulators if they or their affiliates are engaged in business in Iran, either directly or indirectly. Is your supply chain compliant? Do you have sourcing reports to validate it? Where are the materials in your chips and circuit boards coming from? In the next few weeks, responsible companies will be scrutinizing their supply chains, subsidiaries, JVs, and affiliates to search for exposure.

    Source: Brian Lane, attorney at Gibson, Dunn & Crutcher, as printed in the WSJ, Tues, Dec 11, 2012, p. B8

    Read the article

  • Have you ever thought about how much it costs you to qualify your sales opportunities?

    - by user812481
    Successful marketing comes from deep knowledge of your customers: who they are, what they buy and why, and how they prefer to be contacted. When customer data is spread across multiple systems, answering these questions becomes difficult and costly. You need a mix of best-in-class tools for sales force automation and marketing efficiency, bringing the key data together in a single point of access for a 360-degree view of your customers. Would you like to increase the ROI of your marketing campaigns by tailoring messages to different targets, making your initiatives more successful? Learn how to gain deeper knowledge of your target audience to create successful, focused, personalized campaigns, through videos in Italian and documents to share with your colleagues.

    Read the article

  • Webcast: Applications Integration Architecture

    - by LuciaC
    Webcast: Applications Integration Architecture - Overview and Best Practices
    Date: November 12, 2013
    Join us for an Overview and Best Practices live webcast on Applications Integration Architecture (AIA). We are covering the following topics in this webcast:
    - AIA Overview
    - AIA - Where it Stands
    - Pre-Install, Pre-Upgrade Concerns
    - Understanding the Dependency Certification Matrix
    - Documentation Information Center
    - Demonstration - How to evaluate a certified combination
    - Software Download/Installable
    - Demonstration - edelivery Download Overview
    - Reference Information
    - Q & A (15 Minutes)

    We will be holding 2 separate sessions to accommodate different timezones:
    - EMEA / APAC timezone session: Tuesday, 12-NOV-2013 at 09:00 UK / 10:00 CET / 14:30 India / 18:00 Japan / 20:00 AEDT. Details & Registration: Doc ID 1590146.1. Direct registration link.
    - USA timezone session: Wednesday, 13-NOV-2013 at 18:00 UK / 19:00 CET / 10:00 PST / 11:00 MST / 13:00 EST. Details & Registration: Doc ID 1590147.1. Direct registration link.

    If you have any questions about the schedules, or if you have a suggestion for an Advisor Webcast to be planned in the future, please send an e-mail to Ruediger Ziegler. Remember that you can access a full listing of all future webcasts as well as replays from Doc ID 740966.1.

    Read the article

  • Using Exception Handler in an ADF Task Flow

    - by anmprs
    Problem Statement: An exception thrown in a task flow gets wrapped in an exception that gives an unintelligible error message to the user.

    Figure 1

    Solution 1: Overwriting the error message with a user-friendly error message.

    Figure 2

    Steps to code:
    1. Generating an exception: write a method that throws an exception, and drop it in the task flow.
    2. Adding an exception handler: write a method (example below) to overwrite the error in the bean or data control, and drop the method in the task flow.

    Figure 3

    This method is marked as the exception handler either by Right-Click on the method > Mark Activity > Exception Handler, or by the button shown in this screenshot.

    Figure 4

    The final task flow should look like this. It will overwrite the exception with the error message shown in Figure 2. Note: there is no need for a control flow between the two method calls (as shown below).

    Figure 5

    Solution 2: Re-routing the task flow to display an error page.

    Figure 6

    Steps to code:
    1. This is the same as step 1 of Solution 1.
    2. Adding an exception handler: the exception handler is not always a method; in this case it is implemented on a task flow return. The task flow looks like this.

    Figure 7

    In the figure below you will notice that the task flow return points to a control flow 'error' in the calling task flow.

    Figure 8

    This control flow in turn goes to a view 'error.jsff', which contains the error message that one wishes to display. This can be seen in the figure below. ('withErrorHandling' is a call to the task flow in Figure 7.)

    Figure 9
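
    For Solution 1, the handler method is plain managed-bean code. Below is a minimal sketch, assuming a JSF/ADF managed bean; the class name, method name, and message text are illustrative placeholders, not the article's exact code.

        import javax.faces.application.FacesMessage;
        import javax.faces.context.FacesContext;

        public class ErrorHandlerBean {

            // Dropped onto the task flow and marked as the Exception Handler;
            // replaces the wrapped, unintelligible message with a friendly one.
            public void handleError() {
                FacesContext ctx = FacesContext.getCurrentInstance();
                ctx.addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR,
                        "The operation could not be completed. Please try again or contact support.",
                        null));
            }
        }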

    Read the article

  • Customisation / overriding of the Envelope ecs files

    - by Dheeraj Kumar M
    There are a few use cases where the requirement is to customise the envelope information (Interchange/Group ecs file). Such scenarios might apply to only a few customers; hence, in addition to the default seeded envelope definitions, the customised definitions also need to be uploaded. Here are the steps for achieving this:
    1. Create only the Interchange ecs file and save it.
    2. Create only the Group ecs file and save it.
    3. Use them in B2B.

    1. Create only the Interchange ecs file and save it:
    Open the document editor and select the required version and doctype. While creating the new ecs file, ensure that the checkbox for inserting the envelope is selected. Once created, delete the Group and TransactionSet nodes and retain only the Interchange nodes, including both header and trailer. Save this file.

    2. Create only the Group ecs file and save it:
    After creating the ecs file as described in the Interchange steps, delete the Interchange and TransactionSet nodes and retain only the Group nodes, including both header and trailer. Save this file.

    3. Use them in B2B:
    The newly created ecs files can be used in B2B in two ways.
    a. By overriding at the trading partner level: This is very useful when the configuration is complete and the customisation then needs to be incorporated. In this case, just select Trading Partner - Document, and select the document that needs to be customised. Upload the newly created Interchange and Group ecs files under the Interchange and Group tabs respectively, and re-deploy the associated agreement. The advantages of this approach are:
    - Flexibility to add customised envelope definitions per partner
    - Saving the rework of design-time effort
    b. By adding another document definition in the Administration - Document screen: This approach can be used if no configuration has been done at the trading partner level. Create the required document revision and override the Interchange and Group ecs files under the Interchange and Group tabs respectively. Add the document in Trading Partner - Document. Create and deploy the agreements.

    Read the article

  • Analytics in an Omni-Channel World

    - by David Dorf
    Retail has been around ever since mankind started bartering. The earliest transactions were very specific to the individuals buying and selling; then someone had the bright idea to open a store. Those transactions were a little more generic, but the store owner still knew his customers and what they wanted. As the chains rolled out, customer intimacy was sacrificed for scale, and retailers began to rely on segments and clusters. But thanks to the widespread availability of data and the technology to convert said data into information, retailers are getting back to details.

    The retail industry is following a maturity model for analytics that has progressed through five stages, each delivering more value than the previous one.

    Store Analytics
    Brick-and-mortar retailers (and pure-play catalogers as well) that collect anonymous basket-level data are able to get some sense of demand to help with allocation decisions. Promotions and foot traffic can be measured to understand marketing effectiveness, and perhaps focus groups can help test ideas. But decisions are influenced by the majority, using faceless customer segments and aggregated industry data points. Loyalty programs help a little, but in many cases the cost outweighs the benefits.

    Web Analytics
    The Web made it much easier to collect data on specific, yet still anonymous, consumers using cookies to track visits. Clickstreams and product searches are analyzed to understand the purchase journey, gauge demand, and better understand up-selling opportunities. Personalization begins to allow retailers to target consumers with recommendations.

    Cross-Channel Analytics
    This phase is a minor one, but it is where most retailers probably sit today. They are able to use information from one channel to bolster activities in another. However, there are technical challenges in combining data silos, so it's not an easy task. But for those retailers that are able to perform analytics on both sources of data, the payoff is pretty nice. Revenue per customer begins to go up as customers have a better brand experience.

    Mobile & Social Analytics
    Big data technologies are enabling a 360-degree view of the customer by incorporating psychographic data from social sites alongside traditional demographic data. Retailers can track individual preferences, opinions, hobbies, etc. in order to understand a consumer's motivations. Using mobile devices, consumers can interact with brands anywhere, anytime, accessing deep product information and reviews. Mobile, combined with a loyalty program, presents an opportunity to put shopping into geographic context: understanding paths to the store, patterns within the store, and serving as an always-on advertising conduit.

    Omni-Channel Analytics
    All this data, along with the proper technology, represents a new paradigm in which the clock is turned back and retail becomes very personal once again. Rich, individualized data better illuminates demand, allows for highly localized assortments, and helps tailor up-selling. Interactions with all channels help build an accurate profile of each consumer, and allow retailers to tailor the retail experience to meet the heightened expectations of today's sophisticated shopper. And of course this culminates in greater customer satisfaction and business profitability.

    Read the article

  • Another marriage plan in the database market: Sybase and SAP

    - by Fekete Zoltán
    Over the past year, Oracle has acquired more than 50 companies on the technology and applications side, most recently Sun, a company innovative in hardware and operating systems, Java, IDM, virtualization, and numerous other areas. Moreover, Oracle strengthens its portfolio with best-of-breed, that is, industry-leading, companies and solutions. For many years now, according to Gartner, Oracle has also been in the leaders' Magic Quadrant for data warehousing. The leading solution in this area is running Oracle Database on optimized hardware: Exadata / Database Machine. Oracle Database is optimized for transaction processing, for data warehouse workloads, and for running both in a single environment. SAP previously was rather dismissive of Oracle's best-of-breed acquisition strategy, saying it would lead nowhere. :) Now, from among the remaining independent companies, it has picked Sybase. Here is the BBC story. Does 5.8 billion dollars seem a bit much? Interestingly, according to the article, SAP's share price fell 40 cents on the news of the acquisition plan.

    Read the article

  • Linking to BIP reports from BIEE Analyses

    - by Tim Dexter
    Bryan found a great blog post from Fiston over on the OBIEEStuff blog. It covers the ability to link to a BIP report from a BIEE analysis, with the ability to pass parameters to it. I have double-checked, and you need to be on OBIEE 11.1.1.5 to see the 'Shared Report Link' mentioned in Fiston's post when you open a BIP report from the /analytics side of the house. Enjoy! OBIEE to BIP trick

    Read the article

  • Announcement: New Tutorial - Using ADF Faces and ADF Controller with OEPE

    - by Juan Camilo Ruiz
    We are happy to announce the publication of our newest tutorial, which explores some of the latest features added in our OEPE 12c release for ADF development. The tutorial walks you through the creation of an ADF application that uses the ADF Faces Rich Client components in combination with the ADF Controller, ADF Model, and JPA. By working through this tutorial you will use and understand various features added in OEPE 12c that are specific to ADF development, such as:
    - ADF task flow editor
    - Visual pageDefinition editor
    - ADF integration with AppXRay
    - Navigation across artifacts such as pages, pageDefinitions, managed beans, etc.
    - Property inspector for ADF Faces components
    Stay tuned for more exciting tutorials that explore these and many more OEPE features. And of course, your feedback is always welcome!

    Read the article

  • Short Season, Long Models - Dealing with Seasonality

    - by Michel Adar
    Accounting for seasonality presents a challenge for the accurate prediction of events. Examples of seasonality include:
    - Boxed cosmetics sets are more popular during Christmas. They sell at other times of the year, but they rise higher than other products during the holiday season.
    - Interest in a promotion rises around the time advertising on TV airs.
    - Interest in the Sports section of a newspaper rises when there is a big football match.

    There are several ways of dealing with seasonality in predictions.

    Time Windows
    If the length of the model time windows is short enough relative to the seasonality effect, then the models will see only seasonal data, and therefore will be accurate in their predictions. For example, a model with a weekly time window may be quick enough to adapt during the holiday season. In order for time windows to be useful in dealing with seasonality it is necessary that:
    - The time window is significantly shorter than the season changes
    - There is enough volume of data in the short time windows to produce an accurate model
    An additional issue to consider is that sometimes the season may have an abrupt end, for example the day after Christmas.

    Input Data
    If available, it is possible to include the seasonality effect in the input data for the model. For example, the customer record may include a list of all the promotions advertised in the area of residence. A model with these inputs will have to learn the effect of the input. It is possible to learn it specific to the promotion (and, by the way, learn about inter-promotion cross-feeding) by leaving the list of ads as it is; or it is possible to learn the general effect by having a flag that indicates if the promotion is being advertised. For inputs to properly represent the effect in the model it is necessary that the model sees enough events with the input present, for example by virtue of the model lifetime (or time window) being long enough to see several "seasons," or by having enough volume for the model to learn seasonality quickly.

    Proportional Frequency
    If we create a model that ignores seasonality, it is possible to use that model to predict how a specific person's likelihood differs from average. If we have a divergence from average, then we can transfer that divergence proportionally to the observed frequency at the time of the prediction.

    Definitions:
    - Ft = trailing average frequency of the event at time "t". The average is computed over a suitable period to achieve a statistically significant estimate.
    - F = average frequency as seen by the model.
    - L = likelihood predicted by the model for a specific person.
    - Lt = predicted likelihood proportionally scaled for time "t".

    If the model is good at predicting deviation from average, and this holds over the interesting range of seasons, then we can estimate Lt as:

        Lt = L * (Ft / F)

    Considering that:

        L = (L - F) + F

    Substituting, we get:

        Lt = [(L - F) + F] * (Ft / F)

    Which simplifies to:

        (i) Lt = (L - F) * (Ft / F) + Ft

    This latest expression can be understood as: "The adjusted likelihood at time t is the average likelihood at time t plus the effect from the model, which is calculated as the difference from average times the ratio of frequencies." The formula above assumes a linear translation of the proportion.

    It is possible to generalize the formula using a factor, which we will call "a", as follows:

        (ii) Lt = (L - F) * (Ft / F) * a + Ft

    It is also possible to use a formula that does not scale the difference, like:

        (iii) Lt = (L - F) * a + Ft

    While these formulas seem reasonable, they should be taken as hypotheses to be proven with empirical data. A theoretical analysis provides the following insights:
    - The Cumulative Gains Chart (lift) should stay the same, as at any given time the order of the likelihoods for different customers is preserved.
    - If F is equal to Ft, then the formula reverts to L.
    - If Ft = 0, then Lt in (i) and (ii) is 0.
    - It is possible for Lt to be above 1. If it is desired to avoid going over 1, for relatively high base frequencies it is possible to use a relative interpretation of the multiplicative factor. For example, if we say that Y is twice as likely as X, then we can interpret this sentence as:
      - If X is 3%, then Y is 6%
      - If X is 11%, then Y is 22%
      - If X is 70%, then Y is 85% - in this case we interpret "twice as likely" as "half as likely to not happen"
    Applying this reasoning to (i), for example, we would get:

        If (L < F) or (Ft < 1 / ((L/F) + 1))
        Then Lt = L * (Ft / F)
        Else Lt = 1 - (F / L) + (Ft * F / L)
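
    As a quick sanity check, the final rule above transcribes directly into code. The sketch below is just that, a transcription of the capped version of formula (i), assuming all quantities are probabilities in [0, 1] with F > 0; the method name is illustrative. (Note that (L - F) * (Ft / F) + Ft algebraically reduces to L * (Ft / F), which is why the first branch is a single multiplication.)

        // Lt: model likelihood l, rescaled from the model-average frequency f
        // to the seasonal trailing frequency ft, capped so it cannot exceed 1.
        static double adjustedLikelihood(double l, double f, double ft) {
            if (l < f || ft < 1.0 / ((l / f) + 1.0)) {
                // plain proportional scaling; equals (L - F) * (Ft / F) + Ft
                return l * (ft / f);
            }
            // "half as likely to not happen" interpretation, keeps Lt <= 1
            return 1.0 - (f / l) + (ft * f / l);
        }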

    Read the article

  • A Year of Upheaval for Procurement Professionals - New Report & Webinar

    - by DanAshton
    2013 will see significant changes in priorities and initiatives among procurement professionals as they balance the needs of their enterprises with efforts to add capabilities for long-term procurement success. In response, procurement managers will expand their organization's spend influence via supplier relationship management, sourcing, and category management. These findings are part of the new report, "2013 Procurement Key Issues: Going Deeper and Broader to Deliver Borderless Procurement Services," by The Hackett Group. The authors say that, compared to similar studies over the last five years, 2013 is registering the greatest year-over-year changes in priorities for both procurement performance and capability issues.

    Three Important Priorities
    The survey found that procurement professionals are focusing their attention in three key areas:
    - Cost reduction. Controlling expenses is always a high priority, but with 90 percent of the respondents now placing this at the top of their performance concerns, the Hackett analysts say this "clearly shows that, for better or worse, cost reduction is king" in 2013.
    - Technology innovation. Innovation has shot up significantly in the priority rankings and is now tied with spend influence for second among procurement professionals. Sixty-five percent of the survey participants said pursuing game-changing innovation and technology is a top procurement initiative.
    - Managing supply risk. This area registered a sharp rise in importance because of its role in protecting profits, Hackett says. Supplier compliance with performance milestones and regulatory requirements is receiving particular attention, with an emphasis on efficient management of cross-functional workflows. "These processes create headaches for suppliers and buyers alike, and can detract from strategic value creation when participants are bogged down in processing paper and spreadsheets," the report explains.

    For more insights into the current state of the procurement industry, download the full report, "2013 Procurement Key Issues: Going Deeper and Broader to Deliver Borderless Procurement Services," and watch a webcast featuring Chris Sawchuk, Global Procurement Advisory Practice Leader for The Hackett Group, and Chris Nelms, Managing Supervisor of Supply Chain Processes and Systems for Ameren.

    Read the article

  • Data thefts, data leaks, and other incidents in U.S. healthcare

    - by user645740
    I read the news on The New York Times blog that yet another data theft has occurred: hackers obtained the data of 4.5 million patients between April and June 2014, this time from the systems of Community Health Systems. The company operates 206 hospitals. The stolen data also includes birth dates, phone numbers, and so on, but this time no data concerning the patients' health status or treatment was taken. The article can be read here: Hack of Community Health Systems Affects 4.5 Million Patients: http://bits.blogs.nytimes.com/2014/08/18/hack-of-community-health-systems-affects-4-5-million-patients/

    In the USA, in accordance with a legal obligation, every security incident affecting at least 500 people must be published. These data can be viewed on the following page: http://www.hhs.gov/ocr/privacy/hipaa/administrative/breachnotificationrule/breachtool.html

    In 2014 alone there have been at least 75 incidents, and in total there are more than 1,080 incidents in the data set. In many cases the information leaked on paper, or an unencrypted USB drive or laptop went missing; hacking has also occurred quite a few times.

    Read the article

  • Update: GTAS and EBS

    - by jeffrey.waterman
    Provided below are updated target timeframes for patches for the upcoming legislative enhancements. Dates have been pushed out from those previously provided due to changes in Treasury mandatory dates; the mandatory dates for GTAS and IPAC have changed since the previous target dates were given. These are target dates, not commitments to deliver functionality.

    Target Timeframes for Customer Patches

    R12
    - GTAS Configuration (Apr 2012): Patch is available.
    - GTAS Key Processes (Oct/Nov 2012): Includes the GTAS processes necessary to create the GTAS interface file, migration of FACTS balances to GTAS, the GTAS Trial Balance, and the GTAS Transaction Register.
    - GTAS Reports (Nov/Dec 2012): GTAS Trial Balance; GTAS Transaction Register.
    - Capture of Trading Partner TAS/BETC (Apr/May 2013): Includes the modifications necessary to capture BETC and Trading Partner TAS/BETC on relevant transactions.
    - GTAS Other Processes (May/Jun 2013): Includes the GTAS Customer and Vendor update processes.
    - IPAC (Aug/Sep): Includes the modifications required to IPAC to accommodate componentized TAS and BETC.

    11i
    - GTAS Configuration (May 2012): Patch is available.
    - GTAS Key Processes (Nov/Dec 2012): Includes the GTAS processes necessary to create the GTAS interface file, migration of FACTS balances to GTAS, the GTAS Trial Balance, and the GTAS Transaction Register.
    - GTAS Reports (Dec/Jan 2012): GTAS Trial Balance; GTAS Transaction Register.
    - Capture of Trading Partner TAS/BETC (May/Jun 2013): Includes the modifications necessary to capture BETC and Trading Partner TAS/BETC on relevant transactions.
    - GTAS Other Processes (Jun/Jul 2013): Includes the GTAS Customer and Vendor update processes.
    - IPAC (Sep/Oct 2013): Includes the modifications required to IPAC to accommodate componentized TAS and BETC.

    Read the article

< Previous Page | 386 387 388 389 390 391 392 393 394 395 396 397  | Next Page >