Search Results

Search found 7633 results on 306 pages for 'nbsp'.

Page 37/306 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Knowledge Management 2.0 - Date Postponed to 30 June

    - by Claudia Costa
    In organisations, the concept of the intranet is evolving from a simple repository of documents and links into a collaborative platform where employees can consult, browse, publish, analyse, comment on and add value to their own knowledge and that of others. During this session we will present Oracle's products and value proposition for the evolution of the intranet and Knowledge Management 2.0 (also known as Social KM). Click here to register. Agenda (Oracle, Lagoas Park / 9:30-14:30): 09:15 - Welcome Coffee & Registration; 09:30 - Knowledge Management 2.0; 10:30 - KM 2.0 Demo with Oracle; 11:00 - Coffee Break; 11:30 - Oracle WebCenter Framework; 12:30 - Oracle WebCenter Spaces; 13:30 - Wrap-up. Prerequisites: each participant should bring a laptop with the following specification: 2GB RAM, with WiFi access; a hard disk with 25GB of free space (in case you want to save the virtual machine made available during the session). Click here to register. * We apologise for this change. If anything prevents you from taking part on this new date, please let us know.

    Read the article

  • surviveFocusChange=true

    - by Geertjan
    Here's a very cool thing that I keep forgetting about but that Jesse reminded me of in the recent blog entries on Undo/Redo: "surviveFocusChange=true". Look at the screenshot below. You see two windows with a toolbar button. The toolbar button is enabled whenever an object named "Bla" is in the Lookup. The "Demo" window has a "Bla" object in its Lookup and hence the toolbar button is enabled when the focus is in the "Demo" window, as shown below: Now the focus is in the "Output" window, which does not have a "Bla" object in its Lookup and hence the button is disabled: However, there are scenarios where you might like the button to remain enabled even when the focus changes. (One such scenario is the Undo/Redo scenario in this blog a few days ago, i.e., even when the Properties window has the focus the Undo/Redo buttons should be enabled.) Here you can see that the button is enabled even though the focus has switched to the "Output" window: How to achieve this? Well, you need to register your Action to have "surviveFocusChange" set to "true". It is, by default, set to "false": import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import org.openide.awt.ActionID; import org.openide.awt.ActionReference; import org.openide.awt.ActionReferences; import org.openide.awt.ActionRegistration; import org.openide.util.NbBundle.Messages; @ActionID(category = "File", id = "org.mymodule.BlaAction") @ActionRegistration(surviveFocusChange=true, iconBase = "org/mymodule/Datasource.gif", displayName = "#CTL_BlaAction") @ActionReferences({     @ActionReference(path = "Toolbars/Bla", position = 0) }) @Messages("CTL_BlaAction=Bla") public final class BlaAction implements ActionListener {     private final Bla context;     public BlaAction(Bla context) {         this.context = context;     }     @Override     public void actionPerformed(ActionEvent ev) {         // TODO use context     } } That's all. Now folders and files will be created in the NetBeans Platform filesystem from the annotations above when the module is compiled such that the NetBeans Platform will automatically keep the button enabled even when the user switches focus to a window that does not contain a "Bla" object in its Lookup. Hence, the same "Bla" object will remain available when switching from one window to another, until a new "Bla" object will be made available in the Lookup.
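    For context, the button only enables when some window publishes a "Bla" instance in its Lookup. The post does not show that side of the example, so here is a minimal, hedged sketch (the class name is made up for illustration) of how a TopComponent might expose a Bla object so the registered action above picks it up:

    import org.openide.util.lookup.AbstractLookup;
    import org.openide.util.lookup.InstanceContent;
    import org.openide.windows.TopComponent;

    public final class DemoTopComponent extends TopComponent {

        private final InstanceContent content = new InstanceContent();

        public DemoTopComponent() {
            // Expose this window's selection via its Lookup so that
            // context-sensitive actions such as BlaAction enable themselves.
            associateLookup(new AbstractLookup(content));
            // Publish a Bla instance; removing it with content.remove(...)
            // disables the toolbar button again, unless surviveFocusChange
            // keeps the last context alive across focus changes.
            content.add(new Bla());
        }
    }

    Swapping the object held in the InstanceContent is also how a new "Bla" object becomes available in the Lookup, at which point the action is instantiated again with the new context.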

    Read the article

  • Optimising Database Mirroring over WAN

    - by blakmk
    I recently got asked by our network guys about bottlenecks in the WAN that is used for mirroring to our DR site. They asked me to turn off encryption of Database Mirroring so that the Riverbed software they were using could optimise the packets sent over the WAN. I was a bit sceptical at first about the security risks, but it seems the Riverbed software has its own form of obfuscation, making the packets difficult to read. After reading an article by rusanu I realised that it could be done with minimal downtime, potentially reducing network traffic by 5-10% on its own. After turning off encryption I was pleasantly surprised to see that overall network traffic for mirroring dropped by a whopping 75%!

    Read the article

  • Why We Should Learn to Stop Worrying and Love Millennials

    - by HCM-Oracle
    By Christine Mellon Much is said and written about the new generations of employees entering our workforce, as though they are a strange specimen, a mysterious life form to be “figured out,” accommodated and engaged – at a safe distance, of course.  At its worst, this talk takes a critical and disapproving tone, with baby boomer employees adamantly refusing to validate this new breed of worker, let alone determine how to help them succeed and achieve their potential.   The irony of our baby-boomer resentments and suspicions is that they belie the fact that we created the very vision that younger employees are striving to achieve.  From our frustrations with empty careers that did not fulfill us, from our opposition to “the man,” from our sharp memories of our parents’ toiling for 30 years just for the right to retire, from the simple desire not to live our lives in a state of invisibility, came the seeds of hope for something better. One characteristic of Millennial workers that grew from these seeds is the desire to experience as much as possible.  They are the “Experiential Employee”, with a passion for growing in diverse ways and expanding personal and professional horizons.  Rather than rooting themselves in a single company for a career, or even in a single career path, these employees are committed to building a broad portfolio of experiences and capabilities that will enable them to make a difference and to leave a mark of significance in the world.  How much richer is the organization that nurtures and leverages this inclination?  Our curmudgeonly ways must be surrendered and our focus redirected toward building the next generation of talent ecosystems, if we are to optimize what future generations have to offer.   Accelerating Professional Development In spite of our Boomer grumblings about Millennials’ “unrealistic” expectations, the truth is that we have a well-matched set of circumstances.  We have executives-in-waiting who want to learn quickly and a concurrent, urgent need to ramp up their development time, based on anticipated high levels of retirement in the next 10+ years.  Since we need to rapidly skill up these heirs to the corporate kingdom, isn’t it a fortunate coincidence that they are hungry to learn, develop and move fluidly throughout our organizations??  So our challenge now is to efficiently operationalize the wisdom we have acquired about effective learning and development.   We have already evolved from classroom-based models to diverse instructional methods.  The next step is to find the best approaches to help younger employees learn quickly and apply new learnings in an impactful way.   Creating temporary or even permanent functional partnerships among Millennial employees is one way to maximize outcomes.  This might take the form of 2 or more employees owning aspects of what once fell under a single role.  While one might argue this would mean duplication of resources, it could be a short term cost while employees come up to speed.  And the potential benefits would be numerous:  leveraging and validating the inherent sense of community of new generations, creating cross-functional skills with broad applicability, yielding additional perspectives and approaches to traditional work outcomes, and accelerating the performance curve for incumbents through Cooperative Learning (Johnson, D. and Johnson R., 1989, 1999).  
This well-researched teaching strategy, where students support each other in the absorption and application of new information, has been shown to deliver faster, more efficient learning, and greater retention. Alternately, perhaps short term contracts with exiting retirees, or former retirees, to help facilitate the development of following generations may have merit.  Again, a short term cost, certainly.  However, the gains realized in shortening the learning curve, and strengthening engagement are substantial and lasting. Ultimately, there needs to be creative thinking applied for each organization on how to accelerate the capabilities of our future leaders in unique ways that mesh with current culture. The manner in which performance is evaluated must finally shift as well.  Employees will need to be assessed on how well they have developed key skills and capabilities vs. end-to-end mastery of functional positions they have no interest in keeping for an entire career. As we become more comfortable in placing greater and greater weight on competencies vs. tasks, we will realize increased organizational agility via this new generation of workers, which will be further enhanced by their natural flexibility and appetite for change. Revisiting Succession  For many years, organizations have failed to deliver desired succession planning outcomes.  According to CEB’s 2013 research, only 28% of current leaders were pre-identified in a succession plan. These disappointing results, along with the entrance of the experiential, Millennial employee into the workforce, may just provide the needed impetus for HR to reinvent succession processes.   We have recognized that the best professional development efforts are not always linear, and the time has come to fully adopt this philosophy in regard to succession as well.  Paths to specific organizational roles will not look the same for newer generations who seek out unique learning opportunities, without consideration of a singular career destination.  Rather than charting particular jobs as precursors for key positions, the experiences and skills behind what makes an incumbent successful must become essential in succession mapping.  And the multitude of ways in which those experiences and skills may be acquired must be factored into the process, along with the individual employee’s level of learning agility. While this may seem daunting, it is necessary and long overdue.  We have talked about the criticality of competency-based succession, however, we have not lived up to our own rhetoric.  Many Boomers have experienced the same frustration in our careers; knowing we are capable of shining in a particular role, but being denied the opportunity due to how our career history lined up, on paper, with documented job requirements.  These requirements usually emphasized past jobs/titles and specific tasks, versus capabilities, drive and willingness (let alone determination) to learn new things.  How satisfying would it be for us to leave a legacy where such narrow thinking no longer applies and potential is amplified? Realizing Diversity Another bloom from the seeds we Boomers have tried to plant over the past decades is a completely evolved view of diversity.  Millennial employees assume a diverse workforce, and are startled by anything less.  Their social tolerance, nurtured by wide and diverse networks, is unprecedented.  College graduates expect a similar landscape in the “real world” to what they experienced throughout their lives.  
They appreciate and seek out divergent points of view and experiences without needing any persuasion.  The face of our U.S. workforce will likely see dramatic change as Millennials apply their fresh take on hiring and building strong teams, with an inherent sense of inclusion.  This wonderful aspect of the Millennial wave should be celebrated and strongly encouraged, as it is the fulfillment of our own aspirations. Future Perfect The Experiential Employee is operating more as a free agent than a long term player, and their commitment will essentially last as long as meaningful organizational culture and personal/professional opportunities keep their interest.  As Boomers, we have laid the foundation for this new, spirited employment attitude, and we should take pride in knowing that.  Generations to come will challenge organizations to excel in how they identify, manage and nurture talent. Let’s support and revel in the future that we’ve helped invent, rather than lament what we think has been lost.  After all, the future is always connected to the past.  And as so eloquently phrased by Antoine Lavoisier, French nobleman, chemist and politico:  “Nothing is Lost, Nothing is Created, and Everything is Transformed.” Christine has over 25 years of diverse HR experience.  She has held HR consulting and corporate roles, including CHRO positions for Echostar in Denver, a 6,000+ employee global engineering firm, and Aepona, a startup software firm, successfully acquired by Intel. Christine is a resource to Oracle clients, to assist in Human Capital Management strategy development and implementation, compensation practices, talent development initiatives, employee engagement, global HR management, and integrated HR systems and processes that support the full employee lifecycle. 

    Read the article

  • JDeveloper 11.1.2 : Command Link in Table Column Work Around

    - by Frank Nimphius
    Just figured that in Oracle JDeveloper 11.1.2, clicking on a command link in a table does not mark the table row as selected, as was the behavior in previous releases of Oracle JDeveloper. For the time being, the following work around can be used to achieve the "old" behavior: to mark the table row as selected, you need to build and queue the table selection event in the code executed by the command link action listener. To queue a selection event, you need to know the rowKey of the row that the clicked command link is located in. To get to this information, you add an f:attribute tag to the command link as shown below

<af:column sortProperty="#{bindings.DepartmentsView1.hints.DepartmentId.name}" sortable="false"
   headerText="#{bindings.DepartmentsView1.hints.DepartmentId.label}" id="c1">
  <af:commandLink text="#{row.DepartmentId}" id="cl1" partialSubmit="true"
      actionListener="#{BrowseBean.onCommandItemSelected}">
    <f:attribute name="rowKey" value="#{row.rowKey}"/>
  </af:commandLink>
  ...
</af:column>

The f:attribute tag references #{row.rowKey}, which in ADF translates to JUCtrlHierNodeBinding.getRowKey(). This information can be used in the command link action listener to compose the RowKeySet you need to queue the selected row. For simplicity, I created a table "binding" reference to the managed bean that executes the command link action. The managed bean code that is referenced from the af:commandLink actionListener property is shown next:

public void onCommandItemSelected(ActionEvent actionEvent) {
  //get access to the clicked command link
  RichCommandLink comp = (RichCommandLink)actionEvent.getComponent();
  //read the added f:attribute value
  Key rowKey = (Key) comp.getAttributes().get("rowKey");
  //get the current selected RowKeySet from the table
  RowKeySet oldSelection = table.getSelectedRowKeys();
  //build an empty RowKeySet for the new selection
  RowKeySetImpl newSelection = new RowKeySetImpl();
  //RowKeySets contain List objects with key objects in them
  ArrayList list = new ArrayList();
  list.add(rowKey);
  newSelection.add(list);
  //create the selectionEvent and queue it
  SelectionEvent selectionEvent = new SelectionEvent(oldSelection, newSelection, table);
  selectionEvent.queue();
  //refresh the table
  AdfFacesContext.getCurrentInstance().addPartialTarget(table);
}
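    One detail the listener relies on but the post does not show is the "table" variable: it is a component binding from the af:table to the managed bean. A minimal sketch of that wiring, as an assumption rather than the post's actual code, could look like this:

    import oracle.adf.view.rich.component.rich.data.RichTable;

    public class BrowseBean {

        // Populated by the framework via binding="#{BrowseBean.table}"
        // on the af:table in the page (assumed wiring, not shown above).
        private RichTable table;

        public void setTable(RichTable table) {
            this.table = table;
        }

        public RichTable getTable() {
            return table;
        }

        // onCommandItemSelected(ActionEvent) from the post goes here.
    }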

    Read the article

  • "From Russia with Love" - My Oracle Russian Experience

    - by cwarticki
    Two weeks ago, I traveled to Moscow, Russia. I had the pleasure of meeting with many of our Oracle Partners and Customers in the region.  I also worked with our Oracle Russia team throughout the week, building many new friendships. The showcase for the week was an Oracle Support Strategy event for our Oracle Partners and Customers.  It was held at the Kateria-City Hotel, Moscow.  The Oracle Marketing team did an amazing job registering 100+ for the event, and nearly 100 were in attendance. During the event, I spoke about many different topics. Part was a hands-on workshop to personalize your MOS Dashboard and configure Hot-Topics Email alerts.  Customers learned how to subscribe to newsletters and other Oracle information.  It covered a multitude of Support Best Practices.  Additionally, I presented Platinum Services to the audience and my colleague Kristophe Hermans, from Oracle Belgium, spoke on Proactive Support. In addition, I had the distinct privilege to meet one-on-one with our customers representing OJSC VimpelCom, MTC-Rus and Sberbank.  Pictured with me is Valery Yourinsky, Director of Technology Consulting Dept, FORS Distribution (Oracle Platinum Partner). Finally, I spent 2.5 days with my Oracle colleagues from Oracle Russia. They are super, hard-working, dedicated, customer-service professionals. All of them! I owe them all a debt of gratitude. Next time, we meet in Florida - ok? I am very appreciative to all our Oracle partners, customers and colleagues.  Thanks for hosting me and showing me a wonderful time in your country.  I look forward to my return. Sincerely, Chris Warticki, Global Customer Management

    Read the article

  • New Sample Demonstrating the Traversing of Tree Bindings

    - by Duncan Mills
    A technique that I seem to use a fair amount, particularly in the construction of dynamic UIs, is the use of an ADF tree binding to encode a multi-level master-detail relationship, which is then expressed in the UI in some kind of looping form – usually a series of nested af:iterators, rather than the conventional tree or tree table. This technique exploits two features of the tree binding. First, the fact that a tree binding can return both a collectionModel and a treeModel; the collectionModel can be used directly by an iterator. Secondly, that the "rows" returned by the collectionModel themselves contain an attribute called .children. This attribute in turn gives access to a collection of all the children of that node, which can also be iterated over. Putting this together you can represent the data encoded into a tree binding in all sorts of ways. As an example I've put together a very simple sample based on the HT schema and uploaded it to the ADF Sample project. It produces this UI: The important code is shown here for a Region -> Country -> Location hierarchy:

<af:iterator id="i1" value="#{bindings.AllRegions.collectionModel}" var="rgn">
  <af:showDetailHeader text="#{rgn.RegionName}" disclosed="true" id="sdh1">
    <af:iterator id="i2" value="#{rgn.children}" var="cnty">
      <af:showDetailHeader text="#{cnty.CountryName}" disclosed="true" id="sdh2">
        <af:iterator id="i3" value="#{cnty.children}" var="loc">
          <af:panelList id="pl1">
            <af:outputText value="#{loc.City}" id="ot3"/>
          </af:panelList>
        </af:iterator>
      </af:showDetailHeader>
    </af:iterator>
  </af:showDetailHeader>
</af:iterator>

You can download the entire sample from here:
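    As a side note, the same traversal can be done in Java rather than in the page. The following is a hedged sketch, not part of Duncan's sample, that assumes the page definition exposes the tree binding as "AllRegions" and uses the JUCtrlHierBinding/JUCtrlHierNodeBinding classes that back ADF tree bindings:

    import java.util.List;
    import oracle.adf.model.BindingContext;
    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.jbo.uicli.binding.JUCtrlHierBinding;
    import oracle.jbo.uicli.binding.JUCtrlHierNodeBinding;

    public class RegionTreeWalker {

        // Walks Region -> Country -> Location, reading the same rows the
        // nested af:iterator tags render (assumed binding name "AllRegions").
        public void printRegions() {
            DCBindingContainer bindings =
                (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
            JUCtrlHierBinding tree =
                (JUCtrlHierBinding) bindings.findCtrlBinding("AllRegions");
            printChildren(tree.getRootNodeBinding(), "");
        }

        private void printChildren(JUCtrlHierNodeBinding node, String indent) {
            List children = node.getChildren();   // may be null on leaf nodes
            if (children == null) {
                return;
            }
            for (Object child : children) {
                JUCtrlHierNodeBinding childNode = (JUCtrlHierNodeBinding) child;
                // Dump the node's row; RegionName, CountryName or City live
                // here depending on the level of the hierarchy.
                System.out.println(indent +
                    java.util.Arrays.toString(childNode.getRow().getAttributeValues()));
                printChildren(childNode, indent + "  ");
            }
        }
    }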

    Read the article

  • Who keeps removing that file?

    - by mgerdts
    Over the years, I've had many times when some file gets removed and there's no obvious culprit.  With dtrace, it is somewhat easy to figure out:  #! /usr/sbin/dtrace -wqs syscall::unlinkat:entry /cleanpath(copyinstr(arg1)) == "/dev/null"/ {         stop();         printf("%s[%d] stopped before removing /dev/null\n", execname, pid);         system("ptree %d; pstack %d", pid, pid); } That script will stop the process trying to remove /dev/null before it does it.  You can allow it to continue by restarting (unstopping?) the command with prun(1) or killing it with kill -9.  If you want the command to continue automatically after getting the ptree and pstack output, you can add "; prun %d" and another pid argument to the system() call.

    Read the article

  • Sampling SQL server batch activity

    - by extended_events
    Recently I was troubleshooting a performance issue on an internal tracking workload and needed to collect some very low level events over a period of 3-4 hours.  During analysis of the data I found that a common pattern I was using was to find a batch with a duration that was longer than average and follow all the events it produced.  This pattern got me thinking that I was discarding a substantial amount of event data that had been collected, and that it would be great to be able to reduce the collection overhead on the server if I could still get all activity from some batches. In the past I've used a sampling technique based on the counter predicate to build a baseline of overall activity (see Mike's post here).  This isn't exactly what I want though, as there would certainly be events from a particular batch that wouldn't pass the predicate.  What I need is a way to identify streams of work and select, say, one in ten of them to watch, and SQL Server provides just such a mechanism: session_id.  session_id is a server-assigned integer that is bound to a connection at login and lasts until logout.  So by combining the session_id predicate source and the divides_by_uint64 predicate comparator we can limit collection, and still get all the events in batches for investigation.

CREATE EVENT SESSION session_10_percent ON SERVER
ADD EVENT sqlserver.sql_statement_starting(
    WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
ADD EVENT sqlos.wait_info (
    WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
ADD EVENT sqlos.wait_info_external (
    WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
ADD EVENT sqlserver.sql_statement_completed(
    WHERE (package0.divides_by_uint64(sqlserver.session_id,10)))
ADD TARGET ring_buffer
WITH (MAX_DISPATCH_LATENCY=30 SECONDS,TRACK_CAUSALITY=ON)
GO

There we go; event collection is reduced while still providing enough information to find the root of the problem.  By the way, the performance issue turned out to be an IO issue, and the session definition above was more than enough to show long waits on PAGEIOLATCH*.

    Read the article

  • ????????2012?11??: ????OSWatcher Black Box?????????(???)

    - by lanwang
    ????11?????????????(???)???????????OSWatcher Black Box????????;??OSWbb?????????????????;???????OSWatcher Black Box;???????????OSWatcher Black Box Analyzer (OSWbba)?????OSWbb??????  OSWatcher Black Box????????????????????????????,?????:”??OSW (OSWatcher Black Box) ????“??2012?6????????“?RAC????????????” WebEx??????(???)?????:2012?11?15?15:00(????)  ????: 250 409 927 ????????  1. ??????:https://oracleaw.webex.com/oracleaw/onstage/g.php?d=250409927&t=a2. ?????????????????,???????????,????????? InterCall????????Webex???????,???????????,??????:    - ????ID: 31151003   - ????????: 1080 044 111 82   - ?????????: 1080 074 413 29   - ????: 8009 661 55   - ????: 00801148720   - ????????????????MOS?? 1148600.1 ???????:????????????,??????????(31151003)??????(First Name and Last Name) ??????????????????????,???MOS??Doc ID 1456176.1

    Read the article

  • ??GoldenGate?LAG???

    - by Liu Maclean(???)
    GGSCI????LAG?? ????????????????Oracle?redo????online redo logfile? ? Replicat????????????????? ???????? ????,?????????????????LAG; ????????????????REPLICAT??apply???????????? OGG????RANGE??????????,????????REPLICATE??APPLY? OGG??MAXTRANSOPS???????? LAG?????????: ?Extract?????redolog????TRAIL?REMOTE HOST ????datapump???extract trail????????????REMOTE HOST ?collector?????????????????LOCAL TRAIL ?REPLICAT??LOCAL TRAIL???????? ?????????GGSCI?INFO?STATUS??????LAG,???SEND ???,LAG?????LAG?????: INFO??????LAG???SEND??????????? INFO?????LAG???MANAGER????????checkpoint SEND <OBJECT>, lag???LAG???<OBJECT>???????????? LAG?????????????????Kilobytes??? ????LAG??? ????????????? ? EXTRACT/PUMP/REPLICAT???????? ?2?????????, ???? LAG???EXTRACT??????? ??EXTRACT/PUMP/REPLICAT??????????????? REAL TIME,???LAG????? ?????????????? ????????REDO LOG?????????,?LAG???ER???????,?????????????? ??????,STOP EXTRACT?????????????????LAG,????EXTRACT?????,??EXTRACT????????? ????REDO LOG???? ?EXTRACT??????????????????? GGSCI (XIANGBLI-CN) 27> stop load2 Sending STOP request to EXTRACT LOAD2 … Request processed. GGSCI (XIANGBLI-CN) 28> start load2 Sending START request to MANAGER … EXTRACT LOAD2 starting GGSCI (XIANGBLI-CN) 31> info load2 EXTRACT    LOAD2     Last Started 2012-09-18 20:26   Status RUNNING Checkpoint Lag       00:04:34 (updated 00:00:08 ago) Log Read Checkpoint  Oracle Redo Logs 2012-09-18 20:21:32  Seqno 44, RBA 13750272 SCN 0.1845479 (1845479) GGSCI (XIANGBLI-CN) 35> lag load2 Sending GETLAG request to EXTRACT LOAD2 … Last record lag: 130 seconds. At EOF, no more records to process. GGSCI (XIANGBLI-CN) 36> info load2 EXTRACT    LOAD2     Last Started 2012-09-18 20:26   Status RUNNING Checkpoint Lag       00:00:00 (updated 00:00:02 ago) Log Read Checkpoint  Oracle Redo Logs 2012-09-18 20:27:33  Seqno 44, RBA 13817856 SCN 0.1845671 (1845671) ?????? Last record lag ? Checkpoint Lag ???? EXTRACT/PUMP/REPLICAT ?????????????(catch up), ???? ?????????????GB?redo???,??????EXTRACT/PUMP/REPLICAT ????????? ???INFO?LAG???checkpoint?,????????????Long Running Transactions (LRTs),??????????COMMIT? ????????????????????????COMMIT?????? ????EXTRACT/PUMP/REPLICAT???????????????????????commit????? ??REPLICAT????MAXTRANSOPS ?????LAG?

    Read the article

  • ????????2012?11??: ?SQLTXPLAIN???????SQL??(???)

    - by lanwang
    ????11?????????????(???)???????????OSWatcher Black Box????????;??OSWbb?????????????????;???????OSWatcher Black Box;???????????OSWatcher Black Box Analyzer (OSWbba)?????OSWbb??????  OSWatcher Black Box????????????????????????????,?????:”??OSW (OSWatcher Black Box) ????“??2012?6????????“?RAC????????????” WebEx??????(???)?????:2012?11?15?15:00(????)  ????: 250 409 927 ????????  1. ??????:https://oracleaw.webex.com/oracleaw/onstage/g.php?d=250409927&t=a2. ?????????????????,???????????,????????? InterCall????????Webex???????,???????????,??????:    - ????ID: 31151003   - ????????: 1080 044 111 82   - ?????????: 1080 074 413 29   - ????: 8009 661 55   - ????: 00801148720   - ????????????????MOS?? 1148600.1 ???????:????????????,??????????(31151003)??????(First Name and Last Name)

    Read the article

  • ?know How?Oracle WebLogic Server 11g? JRuby?JMX???????|WebLogic Channel|??????

    - by ???02
    ??Java SE???????????????JMX?????????????????????????????????????????????????????????????????????????????????????????????????????JRuby??????????JMX??????Oracle WebLogic Server 11g Release 1????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????¦????????????????????????????????????????    * Oracle Database 11g Release 2??     * JDK 1.6    * JRuby 1.5.6    * Oracle WebLogic Server 11g Release 1??    * ????·????????...

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself.  This post is designed to assist beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here]. ADF Business Components - triggers, default table values and instead-of views. Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects.  This is where unexpected problems can sneak in. The crime Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x] ...where x is the primary key value of the row at hand.  In a production environment with multiple users this error may be legit: one of the other users has updated the row since you queried it.  Yet in a development environment this error is just plain confusing.  If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users.  It must be the phantom ADF developer! [insert dramatic music here] The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page: A false conclusion What can possibly cause this issue if it isn't our phantom ADF developer?  Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user?  How can our phantom ADF developer even take out a lock if this is the case?  Maybe ADF has a bug, maybe ADF isn't implementing record locking at all?  Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record, so why do we see JBO-25014? Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy.  It is determined by the Application Module's jbo.locking.mode property.  The default (as of JDev 11.1.1.4.0, if memory serves me correctly) and recommended value is optimistic, and the other valid value is pessimistic. Next we need a mechanism to check that ADF is issuing the LOCK statements to the database.  We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console.  At runtime this option turns on instrumentation within the ADF BC layer, which, among a lot of extra detail displayed in the log window, will show the actual SQL statement issued to the database, including the LOCK statement we're looking to confirm.
    Setting our locking mode to pessimistic, opening the Business Components Browser or a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see, among others, the following log entries:

[421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[423] Where binding param 1: 1206

As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database.  Later when we commit the record we see:

[441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[445] Update binding param 1: N
[446] Where binding param 2: 1206
[447] BookingsView1 notify COMMIT ...
[448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[449] EntityCache close prepared statement

...and as a result the changes are saved to the database, and the lock is released. Let's see what happens when we use the optimistic locking mode, this time to change the same BOOKINGS record CHARGEABLE column again.  As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However when we save our records by issuing a commit, the following is recorded in the logs:

[509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[513] Where binding param 1: 1205
[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[517] Update binding param 1: Y
[518] Where binding param 2: 1205
[519] BookingsView1 notify COMMIT ...
[520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[521] EntityCache close prepared statement

Again, even though we're seeing the midtier delay the LOCK statement until commit time, it is in fact occurring on line 512, and released as part of the commit issued on line 519.  Therefore with either optimistic or pessimistic locking a lock is indeed issued. Our conclusion at this point must be, unless there's the unlikely cause the LOCK statement is never really hitting the database, or the even less likely cause the database has a bug, then ADF does in fact take out a lock on the record before allowing the current user to update it.  So there's no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.  Who is the phantom? At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't understand yet what's causing it.
This leads onto two further questions, how does ADF know another user has changed the row, and what's been changed anyway? To answer the first question, how does ADF know another user has changed the row, the Fusion Guide's section 4.10.11 How to Protect Against Losing Simultaneous Updated Data , that details the Entity Object Change-Indicator property, gives us the answer: At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.  The guide further suggests to make this solution more efficient: You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values. We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates, but rather it keeps a copy of the original record fetched, separate to the user changed version of the record, and it compares the original record against the one in the database when the lock is taken out.  If values don't match, be it the default compare-all-columns behaviour, or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error. This leaves one last question.  Now we know the mechanism under which ADF identifies a changed row, what we don't know is what's changed and who changed it? The real culprit What's changed?  We know the record in the mid-tier has been changed by the user, however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed.  This leaves us to conclude the database record has changed, but how and by who? There are three potential causes: Database triggers The database trigger among other uses, can be configured to fire PLSQL code on a database table insert, update or delete.  In particular in an insert or update the trigger can override the value assigned to a particular column.  The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. Why this causes the issue specific to our ADF use, is when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database.  However the cached record is instantly out of date as the database triggers have modified the record that was actually written to the database.  Thus when we update the record we just inserted or updated for a second time to the database, ADF compares its original copy of the record to that in the database, and it detects the record has been changed – giving us JBO-25014. This is probably the most common cause of this problem. Default values A second reason this issue can occur is another database feature, default column values.  
When creating a database table the schema designer can define default values for specific columns.  For example a CREATED_BY column could be set to SYSDATE, or a flag column to Y or N.  Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL.  The database in this case will overwrite the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only specifically occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario. Instead of trigger views I must admit I haven't double checked this scenario but it seems plausible, that of the Oracle database's instead of trigger view (sometimes referred to as instead of views).  A view in the database is based on a query, and dependent on the queries complexity, may support insert, update and delete functionality to a limited degree.  In order to support fully insertable, updateable and deletable views, Oracle introduced the instead of view, that gives the view designer the ability to not only define the view query, but a set of programmatic PLSQL triggers where the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer a very powerful feature, it can cause issues for our ADF application.  On inserting or updating a record in the instead of view, the record and it's data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to practically do anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view underlying query for fetching the data. Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of 1 developer on an isolated database.  The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion.  Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if 2 users modify the same record. At this point under project timelines pressure, the obvious fix for developers is to drop both database triggers and default values from the underlying tables.  However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems.  Dropping the database triggers or default value that the existing Oracle Forms  applications assumes and requires to be in place could cause unexpected behaviour and bugs in the Forms application.  Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and timely exercise, not ideal. Solving the mystery once and for all Luckily ADF has built in functionality to deal with this issue, though it's not a surprise, as Oracle as the author of ADF also built the database, and are fully aware of the Oracle database's feature set.  At the Entity Object attribute level, the Refresh After Insert and Refresh After Update properties.  
Simply selecting these instructs ADF BC after inserting or updating a record to the database, to expect the database to modify the said attributes, and read a copy of the changed attributes back into its cached mid-tier record.  Thus next time the developer modifies the current record, the comparison between the mid-tier record and the database record match, and JBO-25014: Another user has changed" is no longer an issue. [Post edit - as per the comment from Oracle's Steven Davelaar below, as he correctly points out the above solution will not work for instead-of-triggers views as it relies on SQL RETURNING clause which is incompatible with this type of view] Alternatively you can set the Change Indicator on one of the attributes.  This will work as long as the relating column for the attribute in the database itself isn't inadvertently updated.  In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back on the original issue will return.

    Read the article

  • rails use counts in different views

    - by Oluf Nielsen
    Hello, I guess this is going to be a pretty noob question, but... I have a scaffold called list, which has_many :wishes. With that information in my model, I can use this code in my list view: <%=h @list.wishes.count %>. Now I have made a controller called statusboard, and in that I have 3 actions: index, loggedin, loggedout. In loggedin, in the file app/views/statusboard/loggedin.html.erb, I want to display this: Howdy {Username}, you have made {count lists} lists, and {count wishes} wishes. Here is what I figured I should write in my file: Howdy {Username}, you have made <%=h @user.list.count %> lists, and <%=h @user.wishes.count %> wishes. My list model is like this:

class List < ActiveRecord::Base
  attr_accessible :user_id, :name, :description
  belongs_to :users
  has_many :wishes
end

and my wish model is like this:

class Wish < ActiveRecord::Base
  attr_accessible :list_id, :name, :price, :link, :rating, :comment
  belongs_to :list
end

and last, my user model is like this:

class User < ActiveRecord::Base
  # Include default devise modules. Others available are:
  # :token_authenticatable, :lockable and :timeoutable
  devise :database_authenticatable, :registerable, # :confirmable,
         :recoverable, :rememberable, :trackable, :validatable
  # Setup accessible (or protected) attributes for your model
  attr_accessible :email, :password, :password_confirmation
  has_many :lists
end

I hope someone can help me :-)! / Oluf Nielsen

    Read the article

  • Entity Association Mapping with Code First Part 1 : Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First has been released by data team at Microsoft. Entity Framework Code-First provides a pretty powerful code-centric way to work with the databases. When it comes to associations, it brings ultimate flexibility. I’m a big fan of the EF Code First approach and am planning to explain association mapping with code first in a series of blog posts and this one is dedicated to Complex Types. If you are new to Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around the relationship mapping.   What is Mapping?Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases. What is Relationship mapping?A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects. Types of RelationshipsThere are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types: One-to-one relationships: This is a relationship where the maximums of each of its multiplicities is one. One-to-many relationships: Also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one. Many-to-many relationships: This is a relationship where the maximum of both multiplicities is greater than one. The second category is based on directionality and it contains two types: Uni-directional relationships: when an object knows about the object(s) it is related to but the other object(s) do not know of the original object. To put this in EF terminology, when a navigation property exists only on one of the association ends and not on the both. Bi-directional relationships: When the objects on both end of the relationship know of each other (i.e. a navigation property defined on both ends). How Object Relationships Are Implemented in POCO domain models?When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that reference the other object (e.g. an Address property on User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of other object. How Relational Database Relationships Are Implemented? Relationships in relational databases are maintained through the use of Foreign Keys. A foreign key is a data attribute(s) that appears in one table and must be the primary key or other candidate key in another table. With a one-to-one relationship the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship we implement a foreign key from the “one table” to the “many table”. We could also choose to implement a one-to-many relationship via an associative table (aka Join table), effectively making it a many-to-many relationship. Introducing the ModelNow, let's review the model that we are going to use in order to implement Complex Type with Code First. It's a simple object model which consist of two classes: User and Address. Each user could have one billing address. The Address information of a User is modeled as a separate class as you can see in the UML model below: In object-modeling terms, this association is a kind of aggregation—a part-of relationship. 
Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole. Fine-grained domain models The motivation behind this design was to achieve Fine-grained domain models. In crude terms, fine-grained means “more classes than tables”. For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it’s much better to model this using an Address class, where User has the BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse and is more understandable. Complex Types: Splitting a Table Across Multiple Types Back to our model, there is no difference between this composition and other weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: A composed class is often a candidate Complex Type. But C# has no concept of composition—a class or property can’t be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on Address class) which make sense because when it comes to the database everything is going to be saved into one single table. How to implement a Complex Types with Code First Code First has a concept of Complex Type Discovery that works based on a set of Conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation: public class User{    public int UserId { get; set; }    public string FirstName { get; set; }    public string LastName { get; set; }    public string Username { get; set; }    public Address Address { get; set; }} public class Address {     public string Street { get; set; }     public string City { get; set; }            public string PostalCode { get; set; }        }public class EntityMappingContext : DbContext {     public DbSet<User> Users { get; set; }        } With code first, this is all of the code we need to write to create a complex type, we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API. Database SchemaThe mapping result for this object model is as follows: Limitations of this mappingThere are two important limitations to classes mapped as Complex Types: Shared references is not possible: The Address Complex Type doesn’t have its own database identity (primary key) and so can’t be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address). 
    No elegant way to represent a null reference There is no elegant way to represent a null reference to an Address. When reading from the database, EF Code First always initializes the Address object even if the values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns an initialized complex type when the owning entity object is retrieved from the database. Summary In this post we learned about fine-grained domain models, of which the complex type is just one example. Fine-grained modeling is fully supported by EF Code First and is known as the most important requirement for a rich domain model. A complex type is usually the simplest way to represent one-to-one relationships, and because the lifecycle is almost always dependent in such a case, it's either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and will learn about other ways to map a one-to-one association that do not have the limitations of complex types. References ADO.NET team blog Mapping Objects to Relational Databases Java Persistence with Hibernate
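    Since the post closes by citing Java Persistence with Hibernate, it may help Java readers to note that the JPA analogue of an EF Code First complex type is an embeddable component. A hedged sketch, with illustrative names not taken from the post:

    import javax.persistence.Embeddable;
    import javax.persistence.Embedded;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Embeddable
    class Address {
        String street;
        String city;
        String postalCode;
    }

    @Entity
    class User {
        @Id
        @GeneratedValue
        int userId;
        String firstName;
        String lastName;
        String username;

        // Like the EF complex type, the Address columns are flattened into
        // the owning table and the Address has no identity of its own.
        @Embedded
        Address address;
    }

    The same two limitations broadly apply: an embedded component cannot be shared by reference, and a row whose address columns are all NULL typically comes back as either a null or an empty Address, depending on the persistence provider.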

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post will show how we used Oracle R Enterprise to tackle a customer’s big calculation problem across a big data set. Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls.  R is great for doing advanced computations.  Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns.  This blog post shows how Oracle R Enterprise, by combining R with the Oracle Database, enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes. The problem: A financial services customer of mine has a need to calculate the historical internal rate of return (IRR) for its customers’ portfolios.  This information is needed for customer statements and the online web application.  In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations.  But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation. IRR - a problem that R is good at solving: Internal Rate of Return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly.  Rather, IRR is a calculation where approximation techniques need to be used.  In this blog post, we will discuss calculating the “money weighted rate of return”, but in the actual customer proof of concept we used R to calculate both money weighted rates of return and time weighted rates of return.  You can learn more about the money weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return First Steps - Calculating IRR in R We will start with calculating the IRR in standalone/desktop R.  In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale.  The first step we did was to get some sample data.  For a historical IRR calculation, you have balances and cash flows.  In our case, the customer provided us with several accounts’ worth of sample data in Microsoft Excel. The above figure shows part of the spreadsheet of sample data.  The data provides balances and cash flows for a sample account (BMV = beginning market value, FLOW = cash flow in/out of account, EMV = ending market value). Once we had the sample spreadsheet, the next step was to read the Excel data into R.  This is something that R does well.  R offers multiple ways to work with spreadsheet data.  For instance, one could save the spreadsheet as a .csv file.  In our case, the customer provided a spreadsheet file containing multiple sheets where each sheet provided data for a different sample account.  To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files.  We wrote ourselves a little helper function called getsheet() around the RODBC package.  Then we loaded all of the sample accounts into a data.frame called SimpleMWRRData. Writing the IRR function At this point, it was time to write the money weighted rate of return (MWRR) function itself.  The definition of MWRR is easily found on the internet or, if you are old school, you can look in an investment performance text book.
In the customer proof, we based our calculations off the ones defined in the The Handbook of Investment Performance: A User’s Guide by David Spaulding since this is the reference book used by the customer.  (One of the nice things we found during the course of this proof-of-concept is that by using R to write our IRR functions we could easily incorporate the specific variations and business rules of the customer into the calculation.) The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique.  For IRR, you need to find the value of the rate of return (r) that sets the Net Present Value of all the flows in and out of the account to zero.  With R, we solve this by defining our NPV function: where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of time (relative to the beginning), emv is the ending market value, and tend is the ending time. Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R’s optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum.  Here is an example of using optimize: where low and high are scalars that indicate the range to search for an answer.   To test this out, we need to set values for bmv, cf, t, emv, tend, low, and high.  We will set low and high to some reasonable defaults. For example, this account had a negative 2.2% money weighted rate of return. Enhancing and Packaging the IRR function With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs.  To account for this, our approach was to first try to find an answer for r within a narrow range, then if we did not find an answer, try calling optimize() again with a broader range.  See the R help page on optimize()  for more details about the search range and its algorithm. At this point, we can now write a simplified version of our MWRR function.  (Our real-world version is  more sophisticated in that it calculates rate of returns for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation.  In our actual customer proof, we also defined time-weighted rate of return calculations.  The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.)To simplify code deployment, we then created a new package of our IRR functions and sample data.  For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data.  We created the shell of the package by calling: To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory.  In those files, you need to at least provide a value for the “title” section. 
To simplify code deployment, we then created a new package of our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package, and to turn that package skeleton into something usable you need, at a minimum, to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the man subdirectory; in those files, you need to at least provide a value for the “title” section. Once that is done, you can change directory to the IRR directory and build the package from the command line. The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package.

Testing the myIRR package
Once the function was converted to an installable package, we tested it by loading the package and calling SimpleMWRR on a single account’s worth of data (a consolidated sketch of this step and the multi-account calculation appears at the end of this post).

Calculating IRR for All the Accounts
So far, we have shown how to calculate IRR for a single account. The real-world issue is: how do you calculate IRR for all of the accounts? This is the kind of situation where we can leverage the “Split-Apply-Combine” approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R’s “by” function to calculate the money-weighted rate of return for each account in our sample data set. (Other approaches to Split-Apply-Combine, such as plyr, can also be used; see http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.)

Recap and Next Steps
At this point, you’ve seen the power of R being used to calculate IRR. There were several good things:
- R could easily work with the spreadsheets of sample data we were given.
- R’s optimize() function provided a nice way to solve for IRR: it was fast and allowed us to avoid having to code our own iterative approximation algorithm.
- R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could be easily added to our IRR functions.
- The Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once.

However, there are several challenges yet to be conquered at this point in our story:
- The actual data that needs to be used lives in a database, not in a spreadsheet.
- The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network.
- The overall process needs to run fast, much faster than a single processor allows.
- The actual data needs to be kept secure, which is another reason not to move it from the database and across the network.
- The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes.

In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
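To round out the recap, here is a consolidated, hypothetical sketch of testing SimpleMWRR() on a single account and then using “by” across every account. The column names (ACCOUNT, BMV, FLOW, EMV) and the way times are derived are assumptions about the sample data’s layout; the myIRR package, SimpleMWRR(), SimpleMWRRData, and by() come from the walkthrough above.

library(myIRR)   # the package built above; provides SimpleMWRR and SimpleMWRRData

# Test on a single account (account and column names are assumptions)
one <- subset(SimpleMWRRData, ACCOUNT == "Account1")
SimpleMWRR(bmv  = one$BMV[1],
           cf   = one$FLOW,
           t    = seq_along(one$FLOW),
           emv  = tail(one$EMV, 1),
           tend = nrow(one))

# Split-Apply-Combine: one money-weighted rate of return per account
results <- by(SimpleMWRRData, SimpleMWRRData$ACCOUNT, function(acct)
  SimpleMWRR(bmv  = acct$BMV[1],
             cf   = acct$FLOW,
             t    = seq_along(acct$FLOW),
             emv  = tail(acct$EMV, 1),
             tend = nrow(acct)))
results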

    Read the article

  • The Oracle Retail Week Awards - in review

    - by user801960
The Oracle Retail Week Awards 2012 were another great success, building on the legacy of previous award ceremonies. Over 1,600 of the UK's top retailers gathered at the Grosvenor House Hotel and many of Europe's top retail leaders attended the prestigious Oracle Retail VIP Reception in the Grosvenor House Hotel's Red Bar. Over the years the Oracle Retail Week Awards have become a rallying point for the morale of the retail industry, and each nominated retailer served as a demonstration that the industry is fighting fit. It was an honour to speak to so many figureheads of UK - and global - retail. All of us at Oracle Retail would like to congratulate both the winners and the nominees for the awards. Retail is a cornerstone of the economy and it was inspiring to see so many outstanding demonstrations of innovation and dedication in the entries.

Winners 2012
The Market Force Customer Service Initiative of the Year - Winner: Dixons Retail: Knowhow; Highly Commended: Hughes Electrical: Digital Switchover
The Deloitte Employer of the Year - Winner: Morrisons
Growing Retailer of the Year - Winner: Hallett Retail - The Concessions People; Highly Commended: Blue Inc
The TCC Marketing/Advertising Campaign of the Year - Winner: Sainsbury's: Feed your Family for £50
The Brandbank Multichannel Retailer of the Year - Winner: Debenhams; Highly Commended: Halfords
The Ashton Partnership Product Innovation of the Year - Winner: Argos: Chad Valley; Highly Commended: Halfords: Private label bikes
The RR Donnelley Pure-play Online Retailer of the Year - Winner: Wiggle
The Hitachi Consulting Responsible Retailer of the Year - Winner: B&Q: One Planet Home
The CA Technologies Retail Technology Initiative of the Year - Winner: Oasis: Argyll Street flagship launch with iPad PoS
The Premier Tax Free Speciality Retailer of the Year - Winner: Holland & Barrett
Store Design of the Year - Winner: Next Home and Garden, Shoreham, Sussex; Highly Commended: Dixons Retail, Black concept store, Birmingham Bullring
Store Manager of the Year - Winner: Ian Allcock, Homebase, Aylesford; Highly Commended: Darren Parfitt, Boots UK, Melton Mowbray Health Centre
The Wates Retail Destination of the Year - Winner: Westfield, Stratford
The AlixPartners Emerging Retail Leader of the Year - Winner: Catriona Marshall, HobbyCraft, Chief Executive
The Wipro Retail International Retailer of the Year - Winner: Apple
The Clarity Search Retail Leader of the Year - Winner: Ian Cheshire, Chief Executive, Kingfisher
The Oracle Retailer of the Year - Winner: Burberry
Outstanding Contribution to Retail - Winner: Lord Harris of Peckham

Oracle Retail and "Your Experience Platform"
Technology is the key to providing that differentiated retail experience. More specifically, it is what we at Oracle call ‘the experience platform’ - a set of integrated, cross-channel business technology solutions, selected and operated by a retail business and IT team, and deployed in accordance with that organisation’s individual strategy and processes. This business systems architecture simultaneously:
- Connects customer interactions across all channels and touchpoints, and every customer lifecycle phase, to provide a differentiated customer experience that meets consumers’ needs and expectations;
- Delivers actionable insight that enables smarter decisions in planning, forecasting, merchandising, supply chain management, marketing, etc;
- Optimises operations to align every aspect of the retail business to gain efficiencies and economies, to align KPIs to eliminate strategic conflicts, and at the same time be working in support of customer priorities.
Working in unison, these three goals not only help retailers to successfully navigate the challenges of today but also to focus on delivering that personalised customer experience based on differentiated products, pricing, services and interactions that will help you to gain market share and grow sales.

    Read the article

  • Upcoming User Group Events in 2011

    - by john.orourke(at)oracle.com
At a recent customer event, someone asked me if Oracle had any plans to re-create the Hyperion Solutions Conference. Unfortunately the answer is no. With so many different product lines it would be challenging and costly for Oracle to run separate user conferences for every product line, and it would create too many events for customers with multiple products to attend. So Oracle OpenWorld is the company's main event for showcasing what's new and what's coming across all product lines. If customers find Oracle OpenWorld too overwhelming or if the timing is bad, there are a number of other conferences, which are run by Oracle user groups and include a number of sessions focused on Oracle Hyperion EPM and BI products. Here's a sneak preview of what's coming up for conferences in 2011 where you can network with other Hyperion users and learn what's new and what's coming in our products.

Alliance 2011: This conference is run by the Oracle Higher Education User Group (HEUG). It's being held March 27 - 30th in lovely Denver, Colorado (a great location and time for skiers!). This event is targeted at customers in Higher Education and Public Sector organizations and is expecting to draw over 3,500 attendees. There will be a number of sessions focusing on Oracle Hyperion EPM and BI products in the Budgeting track, as well as the Reporting & BI track. This includes product-focused sessions delivered by Oracle and partners, as well as case studies delivered by customers. Here's a link to the registration page where you can get more information: http://www.heug.org/p/cm/ld/fid=255

Collaborate 2011: This conference is run by three different user groups: OAUG, IOUG and Quest. It's being held April 10 - 14th in sunny Orlando, Florida (yes, sunshine and warmth!). This event is targeted to customers with Oracle E-Business Suite, PeopleSoft, JD Edwards, Hyperion, Primavera and other products and is expected to draw over 5,000 attendees. You'll find a number of sessions focused on Oracle Hyperion EPM and BI products in the BI/Data Warehousing/EPM track. This includes product-focused sessions delivered by Oracle, our partners, and customers as well as a number of customer case studies. There will also be an exhibit area with a number of demo pods focused on EPM and BI products. Here's a link to the conference web site where you can get more information: http://collaborate.oaug.org/ Also, please note that the OAUG has a Hyperion SIG that runs focused EPM/Hyperion events throughout the year. Here's a link to their web site where you can get more information: http://hyperionsig.oaug.org/

Kscope 2011: Formerly the Kaleidoscope conference, this one is run by the Oracle Developer Tools User Group (ODTUG). This conference is being held June 26 - 30th in Long Beach, CA (surf's up!). Historically, this event has focused on Oracle Development tools, but over the past few years the EPM and BI content has grown, with over 100 sessions planned this year. So this event is becoming a great venue for existing Hyperion customers to learn about the latest developments with Oracle Essbase, Hyperion Planning, Hyperion Financial Management, Oracle BI and other products. You'll also find hands-on workshops, product demonstrations, as well as EPM and BI Symposiums run by Oracle Development staff. Here's a link to the web site where you can get more details: http://www.kscope11.com/biepm

UKOUG Conference Series: EPM and Hyperion 2011: For Hyperion customers in the UK, the UKOUG has a Hyperion SIG that runs a focused conference for EPM and Hyperion products. The 2011 event is planned for June in London. Here's a link to the web site for this event where you can get more information: http://hyperion.ukoug.org/default.asp?p=8461

In addition to these conferences, you can also find Oracle EPM and BI content at regional user group meetings globally as well as at Marketing events run by Oracle. Check the events page at www.oracle.com for the details on upcoming Marketing and regional User Group events. So while Oracle will not be trying to replicate the Hyperion Solutions conference, the good news is that there are a number of other events available where customers can find out what's new and what's coming with Oracle EPM and BI products. And these events are running at different times of the year in different locations, so you can pick the event that makes the most sense for your company from a timing and location standpoint. I'll be delivering a number of sessions at the Alliance and Collaborate conferences and hope to see many of our loyal customers and partners at these events. And there's always Oracle OpenWorld coming up in October, for which the planning has already started. I look forward to seeing you in 2011.

    Read the article

  • Changing the sequencing strategy for File/Ftp Adapter

    - by [email protected]
The File/Ftp Adapter allows the user to configure the outbound write to use a sequence number. For example, if I choose address-data_%SEQ%.txt as the FileNamingConvention, then all my files will be generated as address-data_1.txt, address-data_2.txt, and so on. But where does this sequence number come from? The answer lies in the "control directory" for the particular adapter project (or scenario). In general, for every project that uses the File or Ftp Adapter, a unique directory is created for bookkeeping purposes. And since this control directory is required to be unique, the adapter uses a digest to make sure that no two control directories are the same. For example, for my FlatStructure sample, the control information for my project would go under FMW_HOME/user_projects/domains/soainfra/fileftp/controlFiles/[DIGEST]/outbound, where the value of DIGEST differs from one project to another. If you look under this directory, you will see a file control_ob.properties, and this is where the sequence number is maintained. Please note that the sequence number is maintained in binary form, so you might need a hex editor to view its content. You will also see another zero-byte file, SEQ_nnn, but ignore that for now; we'll get to it some other time. For now, please remember that this extra file is maintained as a backup.

One of the challenges faced by the adapter runtime is to guard all writes to the control files so that no two threads inadvertently try to update them at the same time. It does so with the help of a "mutex". For now, please remember that the mutex comes in different flavors:
- In-memory
- DB-based
- Coherence-based
- User-defined
Again, we will talk about these mutexes some other time. Please note that there might be scenarios, particularly under heavy load, where the mutex becomes a bottleneck. The adapter, however, allows you to change the configuration so that the adapter sequence value comes from a database sequence or a stored procedure; in that situation the mutex is actually bypassed, thereby resulting in better throughput. In later releases, the behavior of the adapter will default to using a database sequence. The simplest way to achieve this is to switch the JNDI connection factory for the outbound JCA file to "eis/HAFileAdapter".

But what does this do? Internally, the adapter runtime creates a sequence on the Oracle database. For example, if you do a "select * from user_sequences" in your soa-infra schema, you will see a new sequence being created with a name of the form SEQ_<GUID>__, where the GUID differs from one project to another. However, if you want to use your own sequence, you will need to add a new property called SequenceName to your JCA file. Please note that you will need to create this sequence in your soainfra schema beforehand.

But what if we use DB2 or MSSQL Server as the dehydration store? DB2 supports sequences natively but MSSQL Server does not. So the adapter runtime uses a natively generated sequence for DB2, but for MSSQL Server the adapter relies on a stored procedure that ships with the product. If you wish to achieve the same result for SOA Suite running DB2 as the dehydration store, simply change your connection factory JNDI name in the JCA file to eis/HAFileAdapterDB2, and for MSSQL, please use eis/HAFileAdapterMSSQL. And if you wish to use a stored procedure other than the one that ships with the product, you will need to rely on binding properties to override the adapter behavior; in particular, you will need to instruct the adapter that you wish to use a stored procedure. Please note that if you're using the File/Ftp Adapter in Append mode, the adapter runtime degrades the mutex to use pessimistic locks, as we don't want writers from different nodes to append to the same file at the same time.

    Read the article

  • Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g

    - by lukasz.romaszewski(at)oracle.com
Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g
February 24th, 12am CET

Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services.

During this in-person briefing, Data Integration Solution Specialists will be providing a technical overview and a walkthrough.

Agenda
· Oracle Data Integration Strategy overview
· A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator:
  o Oracle Data Profiling
  o Oracle Data Quality for Data Integrator
  o Live demo
  o Q&A

Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]

    Read the article

  • Nominations now open for the Oracle FMW Excellence Awards 2014

    - by Greg Jensen
2014 Oracle Excellence Award Nominations
Who Is the Innovative Leader for Identity Management?
• Is your organization leveraging one of Oracle’s Identity and Access Management solutions in your production environment?
• Are you a leading edge organization that has adopted a forward thinking approach to Identity and Access Management processes across the organization?
• Are you ready to promote and highlight the success of your deployment to your peers?
• Would you like a chance to win FREE registration to Oracle OpenWorld 2014?

Oracle is pleased to announce the call for nominations for the 2014 Oracle Excellence Awards: Oracle Fusion Middleware Innovation. The Oracle Excellence Awards for Oracle Fusion Middleware Innovation honor organizations using Oracle Fusion Middleware to deliver unique business value. This year, the awards will recognize customers across nine distinct categories, including Identity and Access Management. Oracle customers who feel they are pioneers in their implementation of at least one of the Oracle Identity and Access Management offerings in a production environment or active deployment should submit a nomination. If submitted by June 20th, 2014, you will have a chance to win a FREE registration to Oracle OpenWorld 2014 (September 28 - October 2) in San Francisco, CA. Top customers will be showcased at Oracle OpenWorld and featured in Oracle publications.

The Identity and Access Management Nomination Form

Additional benefits to nominees
Nominating your organization opens additional opportunities to partner with Oracle, such as:
• Promotion of your Customer Success Stories: provides a platform for you to share the success of your initiatives and programs with peer groups, raising the overall visibility of your team and your organization as a leader in security.
• Social Media promotion (Video, Blog & Podcast): reach the masses of Oracle’s customers through sharing of success stories, or customer-created blog content that highlights your thought leadership role in security, with co-authored articles on the Oracle Blog page that reaches close to 100,000 subscribers. There are numerous options to promote activities on Facebook and Twitter, and co-branded activities using video and audio.
• Live speaking opportunities to your peers: as a technology leader within your organization, you can represent your organization at Oracle sponsored events (online, in person or webcasts) to help share the success of your organization's efforts, building out your team and organization brand and success.
• Invitation to the IDM Architect Forum: Oracle is able to invite the right customers into the IDM Architect Forum, an invite-only group of customers that meets monthly to hear technology-driven presentations from their own peers (not from Oracle) on today’s trends. If you want to hear privately what some of the most successful companies in every industry are doing about security, this is the forum to be in. All presentations are private and remain within the forum, and only members can take advantage of the lessons gained from these meetings. To date, there are 125 members.

There are many more advantages to partnering with Oracle; however, it can start with the simple nomination form for the Identity and Access Management category of the 2014 Oracle Excellence Awards.

    Read the article

  • Heterogeneous Datacenter Management with Enterprise Manager 12c

    - by Joe Diemer
The following is a Guest Blog, contributed by Bryce Kaiser, Product Manager at Blue Medora.

When I envision a perfect datacenter, it would consist of technologies acquired from a single vendor across the entire server, middleware, application, network, and storage stack - Apps to Disk - that meets your organization’s every IT requirement with absolute best-of-breed solutions in every category. To quote a familiar motto, your datacenter would consist of "Hardware and Software, Engineered to Work Together". In almost all cases, practical realities dictate something far less than the IT Utopia mentioned above. You may wish to leverage multiple vendors to keep licensing costs down, a single vendor may not have an offering in the IT category you need, or your preferred vendor may quite simply not have the solution that meets your needs. In other words, your IT needs dictate a heterogeneous IT environment. Heterogeneity, however, comes with additional complexity. The following are two pretty typical challenges:
1) No end-to-end visibility into the enterprise-wide application deployment. Each vendor solution added to an infrastructure may bring its own tooling, creating different consoles for different vendor applications and platforms.
2) No visibility into performance bottlenecks. When multiple management tools operate independently, you lose diagnostic capabilities, including identifying cross-tier issues with database, hung requests, slowness, memory leaks and hardware errors/failures causing DB/MW issues.

As adoption of Oracle Enterprise Manager (EM) has increased, especially since the release of Enterprise Manager 12c, Oracle has seen an increase in the number of customers who want to leverage their investments in EM to manage non-Oracle workloads. Enterprise Manager provides a single pane of glass view into their entire datacenter. By creating a highly extensible framework via the Oracle EM Extensibility Development Kit (EDK), Oracle has provided the tooling for business partners such as my company Blue Medora, as well as customers, to easily fill gaps in the ecosystem and enhance existing solutions. As mentioned in the previous post on the Enterprise Manager Extensibility Exchange, customers have access to an assortment of Oracle and partner provided solutions through this Exchange, which is accessed at http://www.oracle.com/goto/emextensibility. Currently, there are over 80 Oracle and partner provided plug-ins across the EM 11g and EM 12c versions. Blue Medora is one of those contributing partners, and on the Exchange you will find 3 of our solutions, including our flagship plug-in for VMware.

Let's look at Blue Medora’s VMware plug-in as an example of what I'm trying to convey. Here is a common situation solved by true visibility into your entire stack:
Symptoms: My database is bogging down, however the database appears okay internally. Maybe it’s starved for resources? My OS tooling is showing everything is “OK”. Something doesn’t add up.
Root cause: Through the VMware plug-in we can see the problem is actually on the virtualization layer.
Solution: From within Enterprise Manager - the same tool you use for all of your database tuning - we can overlay the data of the database target, host target, and virtual machine target for a true picture of the root cause, all in a single console view.

Perhaps your monitoring conditions are more specific to your environment. No worries, Enterprise Manager still has you covered. With Metric Extensions you have the “Next Generation” of User-Defined Metrics, which easily bring the power of your existing management scripts into a single console while leveraging the proven Enterprise Manager framework. Simply put, Oracle Enterprise Manager boasts a growing ecosystem that provides the single pane of glass for your entire datacenter, from the database and beyond. Bryce can be contacted at [email protected]

    Read the article

  • Four Easy Ways to Save a Rocky CRM Relationship

    - by Divya Malik
Today, I am pleased to introduce our guest blogger Luke Christianson. Luke is an Application Sales rep based out of Minneapolis, MN. You can find him on LinkedIn and follow him on Twitter.

In any relationship, sooner or later, the excitement fades away. The honeymoon period gives way to the old routines you had before you committed to each other, and you eventually begin doing things apart from one another. I’m not talking about a marriage… Well, I guess I am. Commitment to a CRM tool and building a deep and lasting relationship is not much different from the basics of a traditional love story. After your controlled CRM pilot program, and maybe the National Sales Meeting where you couldn’t escape those three wonderful letters, CRM, you will soon find that if you haven’t designed an environment where it’s going to enable your reps to make more money, the relationship is doomed.

If you’re currently in a dysfunctional CRM relationship, here are 4 simple tips for re-engaging users and getting that spark back.

Shadow a Sales Rep: Chances are you can find out exactly what is preventing your sales reps from using the application by simply watching how they go about their day. Sales reps are driven by money, not by additional administrative duties. Your system needs to be set up so that they can get the information they need quickly, facilitate making key updates and run their business out of one easy-to-use application.

Increase your sales team’s productivity by 5% automatically: Cancel the weekly forecast calls with your reps and require them to update their opportunities in CRM. Something else that I’ve seen work extremely well: when you do Monthly or Quarterly reviews, do not let your sales reps bring anything into the room with them; no spreadsheets, notebooks, or computers. Everything they need to tell you should be able to be put into CRM and fully accessible by the Sales Manager at any time.

Tool time: Make sure the tools that you have selected meet both your short-term goals and your long-term goals. You need tools that can adapt like your business does. You probably can’t wait two months for an update to a picklist value or for the addition of a simple workflow rule. Do you feel the tools that are in place can create the experience you want for your users?

And finally, if all else fails...

Keep It Simple, Stupid: Do you really need to require 15 fields to create an Opportunity? Do you need to clutter the interface with different reports that don’t add daily value? Most CRM systems on the market today are flexible enough that your admin could clean up most of the unnecessary interface ‘noise’ in a few hours. If they're not, see #3.

    Read the article

  • MySQL Enterprise Backup 3.8.2 has been released!

    - by Hema Sridharan
MySQL Enterprise Backup v3.8.2, a maintenance release of the online MySQL backup tool, is now available for download from the My Oracle Support (MOS) website as our latest GA release. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. A brief summary of the changes in MySQL Enterprise Backup version 3.8.2 is given below.

A. Functionality Added or Changed:
- MySQL Enterprise Backup has a new --on-disk-full command line option. mysqlbackup could hang when the disk became full, rather than detecting the low space condition. mysqlbackup now monitors disk space when running backup commands, and users can now specify the action to take at a disk-full condition with the --on-disk-full option. For more details, refer to this page.
- MySQL Enterprise Backup has a new progress report feature, which periodically outputs short progress indicators on its operations to user-selected destinations (for example, stdout, stderr, a file, or other choices). For more details on progress report options, refer here.

B. Bugs Fixed:
- When --innodb-file-per-table=ON, if a table was renamed and backup-to-image was in progress, apply-log would fail when being run on the backup. (Bug #16903973)
- MySQL Server failed to start after a backup was restored if there had been online DDL transactions on partitioned tables during the time of backup. (Bug #16924499)
- apply-log failed if ALTER TABLE ... REORGANIZE PARTITION was applied to partitioned InnoDB tables during backup. (Bug #16721824, Bug #16903951)
- apply-incremental-backup might fail with an assertion error if the InnoDB tables being backed up were created in Barracuda format and with their KEY_BLOCK_SIZE values different from the innodb_page_size. This fix ensures that different KEY_BLOCK_SIZE values are handled properly during incremental backup and apply-incremental-backup operations.
- If a table was renamed following a full backup, a subsequent incremental backup could copy the .frm file with the new name, but not the associated .ibd file with the new name. After a restore, the InnoDB data dictionary could be in an inconsistent state. This issue primarily occurred if the table was not changed between the full backup and the subsequent incremental backup. (Bug #16262690)
- After a full backup, if a table was renamed and modified, apply-incremental-backup would crash when run on the backup directory. (Bug #16262609)
- The value of the binary log position in backup_variables.txt could be different from the output displayed during the backup-and-apply-log operation. (This issue did not occur if the backup and apply-log steps were done separately.) (Bug #16195529)
- When using the --only-innodb-with-frm option, MySQL Enterprise Backup tried to create temporary files at unintended locations in the file system, which might cause a failure when, for example, the user had no write privilege for those locations. This fix makes sure the paths for the temporary files are correct. (Bug #14787324)
- A backup process might hang when it ran into an LSN mismatch between a data file and the redo log. This fix makes sure the process does not hang and that it displays an error message showing the name of the problematic data file. (Bug #14791645)

Please post your questions / comments about Backup in the forums. Thanks, MEB Team

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >