Search Results

Search found 15168 results on 607 pages for 'enterprise manager 11g'.


  • Project Manager that wants to lock in time estimate with a signed contract

    - by sunpech
    At a previous employment, a project manager (PM) wasn't satisfied with the delivery time of the code on a project I was on. I was told by my project lead that the PM was considering having me sign a contract to lock in the time estimates I gave for tasks and delivery dates. The situation on the project was that we were working with new technologies, a new codebase, new coding standards, and very prone-to-change requirements. I was learning new things and applying them the best I could on requirements that kept on changing. The requirements grew by 2-3 times over the iterations, with my estimate-to-complete growing by roughly 5-8 times. The only things that didn't change were the estimates and delivery dates. Yes, I did end up missing most deadlines. And I was working on some very new technologies that no one else on the entire development team could really help out on, because they weren't familiar with them, at least not easily. It seemed to me then that the PM wanted his numbers to add up, and thus wanted me to sign a contract to "ensure" that I would always deliver working code on time. I suppose with a signed contract the PM could use it against me if I couldn't deliver on time. I believe what happened next was that other project managers and/or project leads defended me and didn't let this happen. My question is, should this raise a red flag about the manager? Is it common practice for a manager to lock in a software developer's time estimates with a signed contract? Or, in this case, to try to? Please note, I was a full-time employee, not an independent consultant. Update: I want to add that I did give new estimates weekly, but it seems the original estimates and delivery dates were what the PM was fixated on.

    Read the article

  • Use Enterprise Manager Cloud Control to monitor OBIEE 11.1.1.7.x Dashboards

    - by Torben Hein -Oracle
    (in via Senthil) If your OBIEE 11.1.1.7.x is set up in the following way:
    - The OBIEE repository is an Oracle Database and is set up as a data warehouse.
    - Usage tracking is enabled in OBIEE. (For information on how to enable usage tracking in OBIEE, refer to the following link: Setting Up Usage Tracking in Oracle BI 11g)
    - The OBIEE instance is discovered in EM Cloud Control. (For information on how to discover an OBIEE instance in Cloud Control, refer to the following link: Discovering Oracle Business Intelligence Instance and Oracle Essbase Targets)
    - The OBIEE repository is discovered in EM Cloud Control. (For information on how to discover an Oracle database, refer to the following link: Discovering, Promoting, and Adding Database Targets)
    then we've got news for you: KM Article: OBIEE 11g: How To Diagnose Slowly Performing Dashboards using Enterprise Manager Cloud Control (Doc ID 1668236.1) takes you step by step through monitoring the SQL query performance behind your OBIEE dashboard. This diagnostic approach:
    - will help you piece together information on BI dashboard performance, e.g. processing time from the different layers of the BI system, including the repository;
    - should enable you to get to the bottom of slow dashboards by using the wealth of information available in EM Cloud Control on OBIEE and the Oracle DB;
    - will NOT fix any performance issues on its own, but will help identify bottlenecks while processing dashboard requests.
    (layout and post: Torben, authorized: Lia)

    Read the article

  • Oracle Configuration Manager for HRMS / EBS Customers

    - by Robert Story
    Upcoming Webcast. Title: Oracle Configuration Manager for HRMS / EBS Customers. Date: April 9, 2010. Time: 11:00 am EDT, 8:00 am PDT, 8:30 pm IST. Product Family: EBS HRMS. Summary: The webcast will focus on the highlights and benefits of using Oracle Configuration Manager for HRMS / EBS customers. The one-hour session is recommended for functional / technical EBS HRMS users and system administrators. Along with key highlights of Oracle Configuration Manager, its usage, especially in debugging EBS and HRMS issues, will be discussed. Topics will include: OCM Overview, Data Collection and its usage, Key Benefits for HRMS / EBS customers, Change History, EBS HRMS Stat Pack, Deployed Customizations, Project Management and Milestones, Resources & References. A short, live demonstration (only if applicable) and a question and answer period will be included. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • 10 Innovations in PeopleSoft 9.2 - #2 Lower TCO With The Peoplesoft Update Manager

    - by John Webb
    With the new PeopleSoft Update Manager in PeopleSoft 9.2, the way you manage updates to your PeopleSoft systems puts you in control of all changes, on your schedule. You can selectively apply patches with reduced time, effort, and cost. Bundles and Maintenance Packs are no longer used. Instead, a tailored custom package is automatically generated based on the parameters you select from the latest PeopleSoft source image. You have access to all updates from Oracle on a cumulative basis and can select and search for specific updates such as new features, legal and regulatory changes, or a patch related to a specific issue, process, or object. Any prerequisites are automatically identified. The process of generating a change package is enabled through a new wizard with easy-to-follow steps and options. As changes are introduced to your test environment, the PeopleSoft Test Framework provides a closed-loop process to run regression test scripts against your changes. For a quick overview of the PeopleSoft Update Manager, check out the Video Feature Overview here: PeopleSoft Update Manager Video Feature Overview

    Read the article

  • How to maintain symlinks in linux file manager?

    - by MountainX
    I want to use symlinks extensively. However, if I move the target file, the symlink becomes broken (unlike on Windows). That's not acceptable to me, so I either need a solution or I won't be able to use symlinks the way I wish to. Is there a solution that will work with the Dolphin file manager? A command line solution is described on commandlinefu. In summary, it is something like one of these: lmv(){ for a in ${@:1:$(expr $# - 1)}; do [ -e "$a" -a -e "${@:$#:1}" ] && mv "$a" "${@:$#:1}" && ln -s "${@:$#:1}"/"$(basename "$a")" "$(dirname "$a")"; done; } or lmv(){ for a in ${@:1:$(expr $# - 1)}; do [ -e "$a" -a -e "${@:$#}" ] && mv "$a" "${@:$#}" && ln -s "${@:$#}"/"$(basename "$a")" "$(dirname "$a")"; done; } But about half the time I'm using a file manager (Dolphin), so I need a complete solution to this problem. Is a solution available for a GUI file manager? EDIT: The context of this question is that I'm searching for an alternative to hardlinks. I previously asked this question about the pitfalls of hardlinks.
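    For readability, here is a multi-line sketch of the same idea: move each file into a destination directory and leave a symlink behind at the old path. It is illustrative only; the function and argument names are not from the original post.

        # Sketch only: lmv FILE... DEST -- move files and leave symlinks at the old paths.
        lmv() {
            local dest="${@: -1}"              # last argument: destination directory
            local file
            [ -d "$dest" ] || return 1         # destination must be an existing directory
            for file in "${@:1:$#-1}"; do      # every argument except the last
                [ -e "$file" ] || continue     # skip names that do not exist
                mv "$file" "$dest/" &&
                    ln -s "$dest/$(basename "$file")" "$(dirname "$file")/$(basename "$file")"
            done
        }
        # Example usage: lmv notes.txt photo.jpg /data/archive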

    Read the article

  • Oracle Identity Manager ADF Customization

    - by Arda Eralp
    This blog entry includes an example of customizing the Oracle Identity Manager (OIM) Self Service screen. Before the customization, all users that can log in to OIM Self Service can see the "Administration" tab on the left menu. In this example we create a "Manager" role so that only users who have the Manager role can see the "Administration" tab.
    Step 1: Create the "Manager" role.
    Step 2: Create a sandbox.
    Step 3: Customize ADF: select "Customize" on the top menu; select "Source" instead of "Design" on top; select the "Administration" tab (blue rectangle) and edit the component; edit "visible" with the expression builder, using #{oimcontext.currentUser.roles['Manager'] != null}; apply.
    Step 4: Apply to All and publish the sandbox.
    Notes: the following objects can be used in expressions (those marked Boolean evaluate to true/false):
    - #{oimcontext.currentUser['ATTRIBUTE_NAME']}
    - #{oimcontext.currentUser['UDF_NAME']}
    - #{oimcontext.currentUser.roles}
    - #{oimcontext.currentUser.roles['SYSTEM ADMINISTRATORS'] != null} (Boolean)
    - #{oimcontext.currentUser.adminRoles['OrclOIMSystemAdministrator'] != null} (Boolean)

    Read the article

  • Ubuntu 12.04 Network Manager: unable to save manual setting to set up a static ip

    - by Andy
    I am fairly familiar with setting up servers, and Ubuntu is generally my flavor of choice, but I just installed 12.04 desktop and I am seeing some behavior in Network Manager that is really puzzling. The network connection works fine if I leave it set on DHCP, but I would like a static IP address for my new web server. When I go into Network Manager and edit the connection for the one and only NIC, I can select Manual from the dropdown menu, but as soon as I do, the Save button becomes greyed out. Even after filling out all fields for the connection it is still grey and I am unable to save the static IP connection information. Any thoughts would be greatly appreciated. I'm hoping there is just some new setting that I am unaware of. On another note, if I stop Network Manager and edit the interfaces file (and the appropriate hosts/routes/dns files), I do get a static IP address assigned and I can contact my server from the outside; however, the server cannot find the internet. It can't even ping its own IP, though I can ping the loopback interface. I'm really confused on this one. Hoping someone can offer some help.
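    For the second half of the question (static address works but no internet), a common culprit when Network Manager is bypassed is a missing default gateway or DNS entry in /etc/network/interfaces. A hypothetical stanza for Ubuntu 12.04 might look like this; all addresses are placeholders for illustration:

        # /etc/network/interfaces -- illustrative values only
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            # without a gateway line, outbound traffic has no default route
            gateway 192.168.1.1
            # picked up by resolvconf on Ubuntu 12.04
            dns-nameservers 8.8.8.8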

    Read the article

  • Software Manager who makes developers do Project Management

    - by hdman
    I'm a software developer working in an embedded systems company. We have a Project Manager, who takes care of the overall project schedule (including electrical, quality, software and manufacturing), hence his software schedule is very brief. We also have a Software Manager, who's my boss. He makes me write and maintain the software schedule, design documents (high and low level design), SRS, change management, verification plans and reports, release management, reviews, and of course the software. We only have one Test Engineer for the whole software team (10 members), and at any given time, there are a couple of projects going on. I'm spending 80% of my time making these documents. My boss comes from a process background, and believes what we need is better documentation to improve software: (1) He considers the design to be paramount; coding is "just writing the design down", it shouldn't take too long, and "all the code should be written before the hardware is ready". (2) He doesn't understand the difference between central and distributed version control, even after we told him it's easier to collaborate with a distributed model. (3) He doesn't understand code, and wants to understand every bug and its proposed solution. (4) He believes verification should be done by the developer, and validation by the Tester. The thing is, though, our verification only checks whether the implementation is correct (we don't write unit tests; it's never considered in the schedule), and validation is black box testing, so the unit tests are missing. I'm really confused. (1) Am I responsible for maintaining all these documents? It makes me feel like I'm doing the Software Project Management, in essence. (2) I don't really like creating documents; I want to solve problems and write code. In my experience, creating design documents only helps to an extent; it's never the solution to better or faster code. (3) I feel the boss doesn't really care about making better products, but only about being a good manager in the eyes of the management. What can I do?

    Read the article

  • Community TFS Build Manager available for Visual Studio 2012 RC

    - by Jakob Ehn
    I finally got around to pushing out a version of the Community TFS Build Manager that is compatible with Visual Studio 2012 RC. Unfortunately I had to do this as a separate extension; it references different versions of the TFS assemblies, and some properties and methods that the 2010 version uses are now obsolete in the TFS 2012 API. To download it, just open the Extension Manager, select Online and search for TFS Build. You can also download it from this link: http://visualstudiogallery.msdn.microsoft.com/cfdb84b4-285e-4eeb-9fa9-dad9bfe2cd10 The functionality is identical to the 2010 version; the only difference is that you can’t start it from the Team Explorer Builds node (since the Team Explorer has been completely rewritten and the extension APIs are not yet published). So, to start it you must use the Tools menu. We will continue shipping updates to both versions in the future, as long as the functionality is compatible with both TFS 2010 and TFS 2012. You might also note that the color scheme used for the build manager doesn’t look as good with the VS2012 theme. Hope you will enjoy the tool in Visual Studio 2012 as well. I want to thank all the people who have downloaded and used the 2010 version! For feedback, feature requests, and bug reports, please post to the CodePlex site: http://tfsbuildextensions.codeplex.com

    Read the article

  • OBIEE 11.1.1 - (Updated) Best Practices Guide for Tuning Oracle® Business Intelligence Enterprise Edition (Whitepaper)

    - by Ahmed Awan
    Applies To: This whitepaper applies to OBIEE releases 11.1.1.3, 11.1.1.5 and 11.1.1.6. Introduction: One of the most challenging aspects of performance tuning is knowing where to begin. To maximize Oracle® Business Intelligence Enterprise Edition performance, you need to monitor, analyze, and tune all the Fusion Middleware / BI components. This guide describes the tools that you can use to monitor performance and the techniques for optimizing the performance of Oracle® Business Intelligence Enterprise Edition components. Click to Download the OBIEE Infrastructure Tuning Whitepaper (Right click or option-click the link and choose "Save As..." to download this file). Disclaimer: All tuning information stated in this guide is only for orientation; every modification has to be tested and its impact should be monitored and analyzed. Before implementing any of the tuning settings, it is recommended to carry out end-to-end performance testing: obtain baseline performance data for the default configurations, make incremental changes to the tuning settings, and then collect performance data. Otherwise it may worsen system performance.

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle’s acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration! As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box. Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack – a ready set of standard processes with standard interfaces, to provide integrated:
    - Address verification and cleansing
    - Individual matching
    - Organization matching
    The services are suitable for either batch or real-time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries. [Figure: Excerpt from EDQ’s CDS Individual Name Standardization Dictionary] CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a ‘best of both worlds’ situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate ‘Identity Resolution’ and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products. Here is a brief guide to the main services provided in the pack:
    Address Verification and Standardization [Figure: EDQ’s CDS Address Cleaning Process]
    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch.
    The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically:
    - Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
    - Optimization of address verification by pre-standardizing data where required
    - Formatting of output addresses into the input address fields normally used by applications
    - Adding descriptions of the address verification and geocoding return codes
    The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.
    Duplicate Prevention
    Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records (‘candidates’) that may match in the application data (which has been pre-seeded with keys using the same service). The ‘driving record’ (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below: [Figure: EDQ Duplicate Prevention Architecture] Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).
    Batch Matching
    In order to allow customers to use different match rules in batch than in real-time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel’s Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel’s ‘Full Match’ (match all records against each other) and ‘Incremental Match’ (match a subset of records against all of their selected candidates) modes.
    The Future
    The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:
    - An out-of-the-box Data Quality Dashboard
    - Even more comprehensive international data handling
    - Address search (suggesting multiple results)
    - Integrated address matching
    The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • How to install Revolution R Enterprise?

    - by Abe
    Revolution R Enterprise is available as a Red Hat RPM file. Normally I would use alien to install an rpm file, but the instructions for installing this package have an install.py file that I am supposed to execute. When I run ./install.py, I get the following message: "rpm: please use alien to install rpm packages on Debian, if you are really sure use --force-debian switch. See README.Debian for more details." There is no README.Debian file in the directory, and although I am not proficient in python, I can tell that there are at least four different directories with *.rpm files in them. Has anyone had success with this? If possible, I'd prefer to install the Enterprise version instead of the community version from the Ubuntu repository so that I can test it out.
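    Since the installer shells out to rpm, one possible workaround (a sketch, not from the original question) is to skip install.py and install the bundled RPMs directly with alien. The glob below assumes the RPMs sit one directory level down, which matches the question but may need adjusting:

        # Sketch: install each bundled RPM via alien.
        # -i converts to a temporary .deb and installs it; --scripts also converts the package's install scripts.
        for pkg in */*.rpm; do
            sudo alien -i --scripts "$pkg"
        done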

    Read the article

  • Announcing MySQL Enterprise Backup 3.7.1

    - by Hema Sridharan
    The MySQL Enterprise Backup (MEB) Team is pleased to announce the release of MEB 3.7.1, a maintenance release version that includes bug fixes and enhancements to some of the existing features. The most important feature introduced in this release is Automatic Incremental Backup. New argument syntax for the --incremental-base option is introduced, which makes it simpler to perform automatic incremental backups. When the options --incremental and --incremental-base=history:last_backup are combined, the mysqlbackup command uses the metadata in the mysql.backup_history table to determine the LSN to use as the lower limit of the incremental backup. You no longer need to keep track of the actual LSN (as in the option --start-lsn=LSN) or even the location of the previous backup (as in the option --incremental-base=dir:directory_path). This release also includes various bug fixes related to some options used in MEB; the most important of them are listed below.
    1. The option --force now allows overwriting InnoDB data and log files in combination with the apply-log and apply-incremental-backup options, and replacing the image file in combination with the backup-to-image and backup-dir-to-image options.
    2. Resolved a bug that prevented MEB from interfacing with third-party storage managers to execute backup and restore jobs in combination with the SBT interface and associated --sbt* options for mysqlbackup.
    3. When MEB is run with the copy-back option, it now displays warnings as existing files are overwritten.
    For more information about other bug fixes, please refer to the change log at http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/meb-news.html. The complete MEB documentation is located at http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/index.html. You will find the binaries for the new release in My Oracle Support, https://support.oracle.com. Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. If you haven't looked at MEB 3.7.1 recently, please do so now and let us know how MEB works for you. Send your feedback to [email protected].
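    As a rough sketch of the new syntax described above, a full backup followed by an automatic incremental backup might look like this; the credentials and directory paths are placeholders, not from the announcement:

        # Full backup first (placeholder paths and user).
        mysqlbackup --user=backup_admin --password --backup-dir=/backups/full backup

        # Later: automatic incremental backup; the lower-limit LSN is read from
        # the mysql.backup_history table instead of being passed with --start-lsn.
        mysqlbackup --user=backup_admin --password \
            --incremental --incremental-base=history:last_backup \
            --incremental-backup-dir=/backups/incr-$(date +%Y%m%d) backup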

    Read the article

  • Oracle E-Business Suite Partners Get Plugged In - Oracle Enterprise Manager 12c

    - by Get_Specialized!
    Oracle E-Business Suite Plug-in, an integral part of Application Management Suite for Oracle E-Business Suite, is Generally Available. More information may be found in note 1434392.1 on My Oracle Support. Oracle E-Business Suite Plug-in can be accessed in a few ways:
    - Fresh install: Enterprise Manager Store, Oracle Software Delivery Cloud
    - Upgrade: Oracle Technology Network
    Please refer to the Application Management Pack for Oracle E-Business Suite Guide for further details. If you are a partner and have not yet joined the Oracle PartnerNetwork Enterprise Manager KnowledgeZone, be sure to sign up today to learn more about Oracle Application Management and how it can aid your customers and business.

    Read the article

  • Oracle's Vision for the Social-Enabled Enterprise

    - by Peggy Chen
    Join us for the Webcast: Mon., Sept. 10, 2012, 10 a.m. PT / 1 p.m. ET. Join the conversation: #oracle and #socbiz. Speakers: Mark Hurd, President, Oracle; Thomas Kurian, Executive Vice President, Product Development, Oracle; Reggie Bradford, Senior Vice President, Product Development, Oracle. Dear Colleague, Smart companies are developing social media strategies to engage customers, gain brand insights, and transform employee collaboration and recruitment. Oracle is powering this transformation with the most comprehensive enterprise social platform that lets you:
    - Monitor and engage in social conversations
    - Collect and analyze social data
    - Build and grow brands through social media
    - Integrate enterprisewide social functionality into a single system
    - Create rich social applications
    Join Oracle President Mark Hurd and senior Oracle executives to learn more about Oracle’s vision for the social-enabled enterprise. Register now for this Webcast.

    Read the article

  • MySQL Enterprise Backup 3.8.2 - Overview

    - by Priya Jayakumar
    MySQL Enterprise Backup (MEB) is the ideal solution for backing up MySQL databases. MEB 3.8.2 was released in June 2013. The main goal of the MySQL Enterprise Backup 3.8.2 release is to improve usability. With this release, users can see how much of the backup has completed, both in terms of size and as a percentage of the total. This release also offers options to manage the behavior of MEB in case the space on the secondary storage is completely exhausted during backup. The progress indicator is a (short) string that indicates how far the execution of a time-consuming MEB command has progressed. It consists of one or more "meters" that measure the progress of the command. Two options are introduced to control the progress reporting function of the mysqlbackup command: (1) --show-progress and (2) --progress-interval. The user can control the progress indicator by using the "--show-progress" option in any of the MEB operations. This option instructs MEB to output periodic short reports on the progress of time-consuming commands. The argument of this option indicates where the output should be sent; for example, it could be stderr, stdout, a file, a fifo or a table. With the "--show-progress" option, both the total size of the backup to be copied and the size already copied are shown. Along with this, the state of the operation, for example data or metadata being copied or tables being locked, is also reported. This gives the DBA clearer information on the progress of the backup. The interval between progress reports, in seconds, is controlled by the "--progress-interval" option. For more information on this, please refer to progress-report-options. MEB can also be accessed through a GUI from MySQL Workbench's next version. This can be used as the front-end interface for MEB users to perform backup operations at the click of a button. This feature was highly requested by DBAs and will be very useful. Refer to http://insidemysql.com/mysql-workbench-6-0-a-sneak-preview/ for Workbench upcoming release info. Along with the progress report feature, some important issues are also addressed in MEB 3.8.2. A new command line option "--on-disk-full" is introduced to abort or warn the user when a backup process encounters a full disk condition; when no argument is given, it aborts by default. A few issues related to incremental backup are also addressed in this release. Please refer to the 3.8.2 documentation for more details. It would be good for MEB users to move to 3.8.2 to take incremental backups. Overall, the added usability and the important defects fixed in this release make MySQL Enterprise Backup 3.8.2 a promising release.
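    As an illustrative sketch of how the options described above fit together, a backup that reports progress to stderr every 30 seconds and only warns (rather than aborting) on a full disk might be invoked like this; the user name, paths and interval are placeholders:

        # Backup with periodic progress reports and a warn-only full-disk policy.
        mysqlbackup --user=backup_admin --password \
            --backup-dir=/backups/full-$(date +%Y%m%d) \
            --show-progress=stderr --progress-interval=30 \
            --on-disk-full=warn backup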

    Read the article

  • Save the date For AutoVue Enterprise Visualization at Oracle OpenWorld 2013

    - by Gerald Fauteux
    Planning to attend Oracle OpenWorld 2013 (September 22–26, 2013)? If so, be sure to check out the various Enterprise Visualization activities that you can take advantage of while in San Francisco.
    Enterprise Visualization Sessions:
    - CON8992 - Qualcomm Streamlines Its Design and Manufacturing Process with AutoVue/Agile Products. Click here for the full session description. Customer speakers: Mary Legaspi, Staff Systems Administrator, and Ravi Sankaran, Sr. Staff Systems Analyst, Qualcomm
    - CON8741 - Visual Information Navigator: Next-Generation Interaction Paradigm. Click here for the full session description. Speakers: Rozita Naghshin, Sr. Principal Product Manager, Visual Information Navigator, and Thierry Bonfante, Senior Director Product Development, Oracle
    Other Activities: There will be an "Oracle's AutoVue & Visual Information Navigator" pod in the exhibit hall. Come and meet Oracle's Visualization experts at the SCM product lounge, and have exclusive 1 on 1 or group conversations with development and strategy experts. Check back shortly for the dates and times of these activities.

    Read the article

  • DB Enterprise User Security Integration With Directory Services

    - by Etienne Remillon
    Gain a better understanding of how to integrate Enterprise User Security (EUS) with various directories by attending this 1-hour Advisor Webcast! When: July 11, 2012 at 16:00 UK / 17:00 CET / 08:00 am Pacific / 9:00 am Mountain / 11:00 am Eastern. Enterprise User Security (EUS) is a DB feature to externalize and centrally manage DB users in a directory server. The webcast will briefly introduce EUS, followed by a detailed discussion about the various directory options that are supported, including integration with Microsoft Active Directory. We'll conclude with how to avoid common pitfalls when deploying EUS with directory services. Topics will include:
    - Understand EUS basics
    - Understand EUS and directory integration options
    - Avoid common EUS deployment mistakes
    Make sure to register and mark this date on your calendar! - Details and registration.

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post will show how we used Oracle R Enterprise to tackle a customer’s big calculation problem across a big data set.
    Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise, which brings R together with the Oracle Database, enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.
    The problem: A financial services customer of mine has a need to calculate the historical internal rate of return (IRR) for its customers’ portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation.
    IRR – a problem that R is good at solving: Internal Rate of Return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly. Rather, IRR is a calculation where approximation techniques need to be used. In this blog post, we will discuss calculating the "money weighted rate of return", but in the actual customer proof of concept we used R to calculate both money weighted rates of return and time weighted rates of return. You can learn more about the money weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return
    First Steps - Calculating IRR in R: We will start with calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step we did was to get some sample data. For a historical IRR calculation, you have balances and cash flows. In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel. [Figure: excerpt of the sample data spreadsheet] The above figure shows part of the spreadsheet of sample data. The data provides balances and cash flows for a sample account (BMV = beginning market value, FLOW = cash flow in/out of account, EMV = ending market value). Once we had the sample spreadsheet, the next step we did was to read the Excel data into R. This is something that R does well. R offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package. Then we loaded all of the sample accounts into a data.frame called SimpleMWRRData.
    Writing the IRR function: At this point, it was time to write the money weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet, or if you are old school you can look in an investment performance textbook.
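    The formulas and R snippets in the original post were screenshots and are not reproduced in this excerpt. As an illustrative sketch only, using the post's own symbols bmv, cf, t, emv and tend, one common formulation defines the money weighted rate of return as the rate r that satisfies:

        \mathrm{NPV}(r) \;=\; bmv\,(1+r)^{t_{end}} \;+\; \sum_{i} cf_i\,(1+r)^{t_{end}-t_i} \;-\; emv \;=\; 0

    Because this equation generally has no closed-form solution, the post searches numerically for the r that drives |NPV(r)| to zero, as described below.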
    In the customer proof, we based our calculations off the ones defined in The Handbook of Investment Performance: A User’s Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof of concept is that by using R to write our IRR functions we could easily incorporate the specific variations and business rules of the customer into the calculation.) The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the Net Present Value of all the flows in and out of the account to zero. With R, we solve this by defining an NPV function, where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of times (relative to the beginning), emv is the ending market value, and tend is the ending time. Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R’s optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum, searching between low and high, scalars that indicate the range to search for an answer. To test this out, we need to set values for bmv, cf, t, emv, tend, low, and high. We will set low and high to some reasonable defaults. For example, one sample account had a negative 2.2% money weighted rate of return.
    Enhancing and Packaging the IRR function: With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range, then, if we did not find an answer, try calling optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm. At this point, we can now write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In our actual customer proof, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.) To simplify code deployment, we then created a new package of our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package, and to turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the man subdirectory. In those files, you need to at least provide a value for the "title" section.
    Once that is done, you can change directory to the IRR directory and build and install the package from the command line. The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package.
    Testing the myIRR package: we then tested our IRR function once it was converted to an installable package.
    Calculating IRR for All the Accounts: So far, we have shown how to calculate IRR for a single account. The real-world issue is how do you calculate IRR for all of the accounts? This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R’s "by" function to calculate the money weighted rate of return for each account in our sample data set. (Other approaches to Split-Apply-Combine, such as plyr, can also be used; see http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.)
    Recap and Next Steps: At this point, you’ve seen the power of R being used to calculate IRR. There were several good things:
    - R could easily work with the spreadsheets of sample data we were given.
    - R’s optimize() function provided a nice way to solve for IRR: it was both fast and allowed us to avoid having to code our own iterative approximation algorithm.
    - R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could be easily added to our IRR functions.
    - The Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once.
    However, there are several challenges yet to be conquered at this point in our story:
    - The actual data that needs to be used lives in a database, not in a spreadsheet.
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network.
    - The overall process needs to run fast, much faster than a single processor.
    - The actual data needs to be kept secured, another reason not to move it from the database and across the network.
    - The process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes.
    In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.

    Read the article
