Search Results

Search found 46797 results on 1872 pages for 'common enterprise information model'.


  • Common Live Upgrade problems

    - by user12611829
    As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here.

    Live Upgrade copies over ZFS root clone

    This was introduced in Solaris 10 10/09 (u8), and the root of the problem is a duplicate entry in the source boot environment's ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like.

        # lucreate -n s10u9-baseline
        Checking GRUB menu...
        System has findroot enabled GRUB
        Analyzing system configuration.
        Comparing source boot environment file systems with the file system(s) you specified for the new boot environment.
        Determining which file systems should be in the new boot environment.
        Updating boot environment description database on all BEs.
        Updating system configuration files.
        Creating configuration for boot environment .
        Source boot environment is .
        Creating boot environment .
        Creating file systems on boot environment .
        Creating file system for in zone on .
        The error indicator ----- /usr/lib/lu/lumkfs: test: unknown operator zfs
        Populating file systems on boot environment .
        Checking selection integrity.
        Integrity check OK.
        Populating contents of mount point .
        This should not happen ------ Copying.
        Ctrl-C and cleanup

    If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or the one that alerted me to the problem: the root file system filling up, again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works.

        # patchadd -p | grep 121431
        Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone
        Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur
        # unzip 121431-58
        # patchadd 121431-58
        Validating patches...
        Loading patches installed on the system... Done!
        Loading patches requested to install. Done!
        Checking patches that you specified for installation. Done!
        Approved patches will be installed in this order: 121431-58
        Checking installed patches...
        Executing prepatch script...
        Installing patch packages...
        Patch 121431-58 has been successfully installed. See /var/sadm/patch/121431-58/log for details
        Executing postpatch script...
        Patch packages installed: SUNWlucfg SUNWlur SUNWluu
        # lucreate -n s10u9-baseline
        Checking GRUB menu...
        System has findroot enabled GRUB
        Analyzing system configuration.
        INFORMATION: Unable to determine size or capacity of slice .
        Comparing source boot environment file systems with the file system(s) you specified for the new boot environment.
        Determining which file systems should be in the new boot environment.
        INFORMATION: Unable to determine size or capacity of slice .
        Updating boot environment description database on all BEs.
        Updating system configuration files.
        Creating configuration for boot environment .
        Source boot environment is .
        Creating boot environment .
        Cloning file systems from boot environment to create boot environment .
        Creating snapshot for on .
        Creating clone for on .
        Setting canmount=noauto for in zone on .
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        File propagation successful
        Copied GRUB menu from PBE to ABE
        No entry for BE in GRUB menu
        Population of boot environment successful.
        Creation of boot environment successful.

    This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone.

        # cat /etc/lu/ICF.3
        s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608
        s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0
        s10u8-baseline:/vbox:pandora/vbox:zfs:0
        s10u8-baseline:/setup:pandora/setup:zfs:0
        s10u8-baseline:/export:pandora/export:zfs:0
        s10u8-baseline:/pandora:pandora:zfs:0
        s10u8-baseline:/panroot:panroot:zfs:0
        s10u8-baseline:/workshop:pandora/workshop:zfs:0
        s10u8-baseline:/export/iso:pandora/iso:zfs:0
        s10u8-baseline:/export/home:pandora/home:zfs:0
        s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0
        s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0

    Solaris 10 9/10 introduces new autoregistration file

    This one is actually mentioned in the Oracle Solaris 10 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like.

        # luupgrade -u -s /mnt -n s10u9-baseline
        System has findroot enabled GRUB
        No entry for BE in GRUB menu
        Copying failsafe kernel from media.
        61364 blocks
        miniroot filesystem is
        Mounting miniroot at
        ERROR: The auto registration file does not exist or incomplete.
        The auto registration file is mandatory for this upgrade.
        Use -k argument along with luupgrade command.
        autoreg_file is path to auto registration information file.
        See sysidcfg(4) for a list of valid keywords for use in this file.
        The format of the file is as follows.
        oracle_user=xxxx
        oracle_pw=xxxx
        http_proxy_host=xxxx
        http_proxy_port=xxxx
        http_proxy_user=xxxx
        http_proxy_pw=xxxx
        For more details refer "Oracle Solaris 10 9/10 Installation Guide: Planning for Installation and Upgrade".

    As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example.

        # echo "autoreg=disable" > /var/tmp/no-autoreg
        # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline
        System has findroot enabled GRUB
        No entry for BE in GRUB menu
        Copying failsafe kernel from media.
        61364 blocks
        miniroot filesystem is
        Mounting miniroot at
        #######################################################################
        NOTE: To improve products and services, Oracle Solaris communicates
        configuration data to Oracle after rebooting. You can register your
        version of Oracle Solaris to capture this data for your use, or the
        data is sent anonymously. For information about what configuration
        data is communicated and how to control this facility, see the
        Release Notes or www.oracle.com/goto/solarisautoreg.
        INFORMATION: After activated and booted into new BE ,
        Auto Registration happens automatically with the following Information
        autoreg=disable
        #######################################################################
        Validating the contents of the media .
        The media is a standard Solaris media.
        The media contains an operating system upgrade image.
        The media contains version .
        Constructing upgrade profile to use.
        Locating the operating system upgrade program.
        Checking for existence of previously scheduled Live Upgrade requests.
        Creating upgrade profile for BE .
        Checking for GRUB menu on ABE .
        Saving GRUB menu on ABE .
        Checking for x86 boot partition on ABE.
        Determining packages to install or upgrade for BE .
        Performing the operating system upgrade of the BE .
        CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.

    The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands-off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that.

    Technorati Tags: Oracle Solaris, Patching, Live Upgrade


  • Set Up Of Common Name Of SSL Certificate To Protect Plesk Panel

    - by Cbomb
    A PCI compliance scanner is balking that the self-signed SSL certificate protecting secure access to Plesk Panel contains a name mismatch between the location of the Plesk Panel and the name on the certificate; namely, the self-signed cert's name is "Parallels" and the domain to reach Plesk is 'ip address:8443'. So I figured I would go ahead and get a free SSL certificate to try to fiddle with this error. But when I generated the certificate, I used my server domain name as the site name. So if I visit 'domain name:8443', all is fine - no SSL warning. But if I visit 'ip address:8443' (which I believe is what the scanner does), I get the certificate name mismatch error; DigiCert's SSL checker says that the certificate name should be the IP address. Can I even generate a certificate whose common name is an IP address? I am tempted to say I should just do what the PCI scanner accepts, but what is really the correct common name to use? Has anybody run into this issue before?
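
    One way to see exactly which name the server is presenting to the scanner is to connect with SslStream and print the certificate subject. This is a minimal C# sketch; the IP address and port are placeholders for the scanned server:

        using System;
        using System.Net.Security;
        using System.Net.Sockets;

        class ShowPleskCert
        {
            static void Main()
            {
                string host = "203.0.113.10"; // placeholder: the IP the scanner hits

                using (var tcp = new TcpClient(host, 8443))
                using (var ssl = new SslStream(tcp.GetStream(), false,
                    (sender, cert, chain, errors) =>
                    {
                        // Subject shows the CN that gets compared against the URL
                        Console.WriteLine("Subject:       " + cert.Subject);
                        Console.WriteLine("Policy errors: " + errors);
                        return true; // accept regardless; we only want to inspect
                    }))
                {
                    // the name passed here is what gets matched to the CN/SAN
                    ssl.AuthenticateAsClient(host);
                }
            }
        }

    As to the underlying question: as I understand it, a certificate can carry an IP address in its Subject Alternative Name (which is presumably what DigiCert's checker is pointing at), but public CAs will generally only issue one for a public, dedicated IP, so getting the scanner to test by hostname is usually the more practical route.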


  • Information Spilling Across Object Boundaries

    - by Winston Ewert
    Many times my business objects tend to have situations where information needs to cross object boundaries too often. When doing OO, we want information to live in one object, and as much as possible all code dealing with that information should be in that object. However, business rules do not follow this principle, which gives me trouble.

    As an example, suppose that we have an Order which has a number of OrderItems, each of which refers to an InventoryItem which has a price. I invoke Order.GetTotal(), which sums the result of OrderItem.GetPrice(), which multiplies a quantity by InventoryItem.GetPrice(). So far so good.

    But then we find out that some items are sold with a two-for-one deal. We can handle this by having OrderItem.GetPrice() do something like InventoryItem.GetPrice( quantity ) and letting InventoryItem deal with this. However, then we find out that the two-for-one deal only lasts for a particular time period, and this time period needs to be based on the date of the order. Now we change OrderItem.GetPrice() to be InventoryItem.GetPrice( quantity, order.GetDate() ). But then we need to support different prices depending on how long the customer has been in the system: InventoryItem.GetPrice( quantity, order.GetDate(), order.GetCustomer() ). But then it turns out that the two-for-one deals apply not just to buying multiples of the same inventory item but to buying multiples of any item in an InventoryCategory. At this point we throw up our hands and just give the InventoryItem the order item and allow it to travel over the object reference graph via accessors to get the information it needs: InventoryItem.GetPrice( this ).

    TL;DR: I want information to stay encapsulated within objects, but business rules often force me to access information from all over the place in order to make particular decisions. Are there good techniques for dealing with this? Do others find the same problem?
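
    One technique worth naming for this shape of problem (my suggestion, not part of the original question) is a parameter object: gather the order-level facts the pricing rules keep asking for into one context, so the GetPrice signature stops growing as rules accrue. A rough C# sketch, with all names invented for illustration:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Customer { public DateTime MemberSince { get; set; } }
        public class InventoryCategory { public string Name { get; set; } }

        public class OrderLine
        {
            public InventoryCategory Category { get; set; }
            public int Quantity { get; set; }
        }

        // Everything a pricing rule might ask about travels in one object,
        // so InventoryItem.GetPrice keeps a stable signature.
        public class PricingContext
        {
            public DateTime OrderDate { get; set; }
            public Customer Customer { get; set; }
            public IReadOnlyList<OrderLine> Lines { get; set; }
        }

        public class InventoryItem
        {
            public decimal BasePrice { get; set; }
            public InventoryCategory Category { get; set; }

            public decimal GetPrice(int quantity, PricingContext ctx)
            {
                // example rule: two-for-one across the whole category in a deal window
                bool dealActive = ctx.OrderDate.Month == 12; // placeholder window
                int categoryQty = ctx.Lines.Where(l => l.Category == Category)
                                           .Sum(l => l.Quantity);
                int unitsCharged = (dealActive && categoryQty >= 2)
                    ? quantity - quantity / 2
                    : quantity;
                return unitsCharged * BasePrice;
            }
        }

    The Order would build one PricingContext per GetTotal() call, so pricing rules can reach order-wide facts without InventoryItem ever holding a reference back to the OrderItem graph.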


  • New MOS Community: Oracle Endeca Information Discovery

    - by inowodwo
    Effective November 22, the Oracle Endeca Community has been split into separate communities representing individual Oracle Endeca products. The Oracle Endeca Information Discovery Community will fall under the Business Intelligence (BI) category, and can be found here: https://communities.oracle.com/portal/server.pt/community/oracle_endeca_information_discovery/551. This community will focus on the Oracle Endeca Information Discovery (OEID) product, formerly known as Endeca Latitude and Endeca Discovery Framework. The previous Oracle Endeca Community has been renamed to Oracle Endeca Guided Search Community and will focus on discussions around the Oracle Endeca Guided Search product, formerly known as Endeca Infront and Endeca IAP. The Guided Search Community will continue to be located under the Oracle Commerce Category. Forum threads in the previous Oracle Endeca Community related to Oracle Endeca Information Discovery product have been moved to the new Oracle Endeca Information Discovery Community. We look forward to your continued involvement.


  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is Available Now !

    - by Anand Akela
    Oracle today announced the availability of Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2). It is now available for download on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011, and the first time an Enterprise Manager release is available on all platforms simultaneously. This is primarily a stability release which incorporates many of the issues and much of the feedback reported by early adopters. In addition, this release contains many new features and enhancements in areas across the board.

    New Capabilities and Features

    Enhanced management capabilities for enterprise private clouds: Introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided setup of the PaaS cloud, self-service provisioning, automatic scale out, and metering and chargeback.

    Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: Combining in-context multiple domain, patching and configuration file synchronizations.

    Integrated hardware-software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components.

    The latest management capabilities for business-critical applications include:

    Business Application Management: A new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application's business transactions, end-user experiences and the cloud infrastructure the monitored application is running on.

    Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context.

    Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc.

    New and Revised Plug-ins: Several plug-ins have been updated as part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality:

        Enterprise Manager for Oracle Database: 12.1.0.2 (revision)
        Enterprise Manager for Oracle Fusion Middleware: 12.1.0.3 (new)
        Enterprise Manager for Chargeback and Capacity Planning: 12.1.0.3 (new)
        Enterprise Manager for Oracle Fusion Applications: 12.1.0.3 (new)
        Enterprise Manager for Oracle Virtualization: 12.1.0.3 (new)
        Enterprise Manager for Oracle Exadata: 12.1.0.3 (new)
        Enterprise Manager for Oracle Cloud: 12.1.0.4 (new)

    Installation and Upgrade:

    - All major platforms have been released simultaneously (Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64).
    - Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2.
    - Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory).
    - Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade.

    Documentation: The Oracle Enterprise Manager Cloud Control Introduction document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation, including the Upgrade Guide, can be found on OTN.

    Please feel free to ask questions related to the new Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) at the Oracle Enterprise Manager Forum. You could also share your feedback on Twitter using hash tag #em12c or on Facebook.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter


  • Common mistakes made by new programmers without CS backgrounds [on hold]

    - by mblinn
    I've noticed that there seems to be a class of mistakes that new programmers without CS backgrounds tend to make that programmers with CS backgrounds tend not to. I'm not talking about not understanding source control, or how to design large programs, or a whole host of other things that both freshly minted CS graduates and non-CS graduates tend not to understand; I'm talking about basic mistakes that having a CS background will prevent a programmer from making. One obvious and well-trod example is that folks who don't have a basic understanding of formal languages will often try to parse arbitrary HTML or XML using regular expressions, and possibly summon Cthulhu in the process. Another fairly common one that I've seen is using common data structures in suboptimal ways, like using a vector and a search function as if it were a hash map. What sorts of other things along these lines would you look out for when on-boarding a batch of newly minted, non-CS programmers?
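
    To make that last example concrete, here is a small self-contained C# sketch of the cost difference between a linear scan and a hash lookup (exact timings will vary by machine):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        class LookupDemo
        {
            static void Main()
            {
                List<string> ids = Enumerable.Range(0, 100000)
                                             .Select(i => "user" + i)
                                             .ToList();
                Dictionary<string, string> index = ids.ToDictionary(id => id);

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < 1000; i++)
                    ids.Contains("user99999");      // linear scan, O(n) per call
                Console.WriteLine("List.Contains:          " + sw.ElapsedMilliseconds + " ms");

                sw.Restart();
                for (int i = 0; i < 1000; i++)
                    index.ContainsKey("user99999"); // hash lookup, O(1) on average
                Console.WriteLine("Dictionary.ContainsKey: " + sw.ElapsedMilliseconds + " ms");
            }
        }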


  • ISACA Information Security & Risk Management Conference, Nov 14-16

    - by Troy Kitch
    Please join Oracle, as a platinum sponsor, at this year's ISACA Information Security and Risk Management Conference in Las Vegas, Nov 14-16. This year's conference offers up to 32 CPE hours and is designed to meet the needs of information security, governance, compliance, and risk management professionals. The event builds on and includes the key elements of information security, governance, compliance and risk management practices, and offers a fresh perspective on current and future trends. As the provider of the world's most complete, open, and integrated business software and hardware systems, Oracle can uniquely safeguard your information throughout its entire lifecycle and is the recognized leader in Data Security, Identity Management, and Governance, Risk, and Compliance solutions. Also, attend the Oracle Megatrends Session, "Gone in 60 Seconds: Mitigating Database Security Risk," and stop by our booths, #100 and #102, to meet with Oracle Security Solution experts, see live product demos, and more. Learn more and register.


  • Oracle Payables Accounting Information Centres

    - by user793553
    Payables Accounting Information Centers

    Do you get errors when trying to create accounting in Payables? Do you have questions about Payables Accounting? The following new information centers include solutions to many of the issues and answers to your questions.

    Overview

    Information Center: Oracle Payables Accounting R12 (Doc ID 1476284.2)
    Under Hot Topics: details about recommended patches, new solution documents, GDF patches, announcements, and links to communities.
    Under Resources: popular troubleshooting documents, accounting documentation, popular knowledge documents, and links to other information centers.

    Use

    Information Center: Using Oracle Payables Accounting (Doc ID 1478842.2)
    Includes product documentation, Reconciliation, Upgrade, Performance, Undo Accounting, Trace and FND Debug, and Diagnostics.

    Troubleshoot

    Information Center: Troubleshooting Oracle Payables Accounting (Doc ID 1478863.2)
    Includes troubleshooting documents, Period Close, GL Transfer, Trial Balance, Budget, and Health Check.

    Do you have other questions? Post your question to the Oracle Payables Community:
    1. Log into My Oracle Support.
    2. Click on the 'Community' link at the top of the page.
    3. Click in the 'Find a Community' field and enter Payables.
    4. Double click on Payables in the list.
    OR Click Here


  • June 13th Webcast: Common Problems Associated with Product Catalog in Sales

    - by Oracle_EBS
    ADVISOR WEBCAST: Common Problems Associated with Product Catalog in Sales
    PRODUCT FAMILY: Oracle Sales
    June 13, 2012 at 12 pm ET, 10 am MT, 9 am PT

    This session is recommended for technical and functional users who are having problems with product categories and items not showing up in Sales products after setting up the Advanced Product Catalog.

    TOPICS WILL INCLUDE: Common problems associated with using Advanced Product Catalog in Sales. A short, live demonstration (only if applicable) and question and answer period will be included.

    Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. The current schedule can be found in Note 740966.1. Post-presentation recordings can be found in Note 740964.1.


  • How to Get Noticed on the Information Highway - The Basics of SEO

    The information highway (the internet) resembles a six-lane main road at rush hour. There is more and more information available every day. The question is how to stay on top of the information explosion and keep your products and services within view of your customers, and also within view of new markets and potential clients.


  • What is the best approach for creating a Common Information Model?

    - by Kaiser Advisor
    Hi, I would like to know the best approach to create a Common Information Model. Just to be clear, I've also heard it referred to as a canonical information model, semantic information model, and master data model - as far as I can tell, they are all referring to the same concept. I've heard in the past that a combined "top-down" and "bottom-up" approach is best. This has the advantage of incorporating both "ivory tower" architects and developers - the work will meet somewhere in the middle and usually be both logical and practical. However, this involves bringing in a lot of people with different skill sets. I've also seen a couple of references to the Distributed Management Task Force, but I can't glean much on best practices in terms of CIM development. This is something I'm quite interested in getting some feedback on, since having a strong CIM is a prerequisite to SOA. Thanks for your help! KA

    Update: I've heard another strategy that goes along with overall SOA implementation: get the business involved, and seek executive sponsorship. This would be part of the "top-down" effort.
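
    For what it's worth, the artifact a combined top-down/bottom-up effort typically converges on is a canonical type owned by the modeling team plus one translator per source system. A small illustrative C# sketch, with every name invented for the example:

        using System;

        // Canonical customer record: the "top-down" agreement.
        public class CanonicalCustomer
        {
            public string CustomerId { get; set; }
            public string LegalName { get; set; }
            public string CountryCode { get; set; } // e.g. ISO 3166-1 alpha-2
        }

        // The "bottom-up" work: one translator per source system.
        public interface ICanonicalTranslator<TSource>
        {
            CanonicalCustomer ToCanonical(TSource source);
        }

        // Hypothetical shape of one source system's record.
        public class CrmCustomer
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Country { get; set; }
        }

        public class CrmCustomerTranslator : ICanonicalTranslator<CrmCustomer>
        {
            public CanonicalCustomer ToCanonical(CrmCustomer s)
            {
                return new CanonicalCustomer
                {
                    CustomerId = "CRM-" + s.Id,
                    LegalName = s.Name,
                    CountryCode = s.Country // assumes the CRM stores ISO codes
                };
            }
        }

    The canonical type is where the architects' top-down agreement lives; the translators are where the developers' bottom-up knowledge of each system's quirks ends up.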


  • Enterprise Eclipse Provisioning - Or - How to share your standard Eclipse setup with other developers

    - by nikhil
    We use Eclipse as the IDE for developing all sorts of Java/J2EE applications in our 150-odd-person IT department. One of the common problems we have been seeing is that developers download and install different versions of Eclipse and plugins based on their personal likes and dislikes. We have been trying to bring some consistency to this and have standardized on the version and the plugins that developers should be using. So the problem now is how do we distribute this installation to the team? We have zipped the directories and shared them through a shared drive. But I am looking for a better solution using some kind of provisioning tool for Eclipse, with which people can install the IDE or get updates. Has anyone faced this problem? What are your solutions to this? How do you ensure a standard Eclipse environment across developers? I found Yoxos as a potential solution to this. Does anyone have any experience with it? Can p2 be used to do this?


  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger because the amount of data is much bigger - so much bigger that our approach in the previous post won't scale to meet the real-world needs.

    From our previous post, here are the challenges we need to conquer:

    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast - much faster than a single processor
    - The actual data needs to be kept secured - another reason to not want to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet. So our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also shows how we can access the database table IRR_DATA as if it was a normal R data.frame named IRR_DATA.

    If we go to sql*plus, we can also check out our new IRR_DATA table:

    At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA. So we proceeded to test our R function working with database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required!

    Going from Crawling to Walking

    Now that we have shown using our R code with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1

    Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT. Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function. Then the Oracle database combines all the individual results from the calls to the R function.

    This is significant because now the embedded R engine only needs to deal with the data for a single account at a time. Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts).

    Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet.

    Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE. Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof.

    Oracle R Enterprise provides a SQL equivalent to ore.groupApply which it refers to as "rqGroupEval". To use rqGroupEval via SQL, there is a bit of simple setup needed. Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipelined table function mechanisms. Here is the setup script:

    At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database. We do that via a call to ORE's rqScriptCreate.

    Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input. You can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement. The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by.
    The final argument is the name of the R function as you defined it when you called rqScriptCreate().

    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.

    For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features. Let's look at the SQL used in the customer proof:

    Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines.

    Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran this during the proof of concept (hint: you might need to right-click on these images to view them full-screen and see the entire image). From the above, you can notice a few things (numbers 1 thru 5 below correspond with the highlighted numbers on the images above):

    - The SQL completed in 110 seconds (1.8 minutes)
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    - We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    - We ran with 72 degrees of parallelism spread across 4 database servers
    - Most of our 110 seconds was spent in the "External Procedure call" event
    - On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
    - On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts)
    - On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods)
    - On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained.

    This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.

    In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.


  • Many-to-many delete in NHibernate: two parents with a common association

    - by Joshua Grippo
    I have 3 top-level entities in my app: Circuit, Issue, and Document. Circuits can contain Documents and Issues can contain Documents. When I delete a Circuit, I want it to delete the documents associated with it, unless a document is used by something else. I would like this same behavior with Issues. I have it working when the only association is in the same table in the db, but if it is in another table, then it fails due to foreign key constraints.

    Ex 1 (this cascades properly, because there is only a foreign constraint from Circuit to Document): Document1 exists. Circuit1 exists and contains a reference to Document1. If I delete Circuit1, then it deletes Document1 with it.

    Ex 2 (this cascades properly, because there is only a foreign constraint from Circuit to Document): Document1 exists. Circuit1 exists and contains a reference to Document1. Circuit2 exists and contains a reference to Document1. If I delete Circuit1, then it is deleted, but Document1 is not deleted because Circuit2 exists. If I then delete Circuit2, then Document1 is deleted.

    Ex 3 (this throws an error, because when it deletes the Circuit it sees that there are no other circuits that reference the document, so it tries to delete the document; however, it should not, because there is an Issue that has a foreign constraint to the document): Document1 exists. Circuit1 exists and contains a reference to Document1. Issue1 exists and contains a reference to Document1. If I delete Circuit1, then it fails, because it tries to delete Document1, but Issue1 still has a reference.

    DB: This thing won't let me upload an image, so here is the ERD for the DB: http://lh3.ggpht.com/_jZWhe7NXay8/TROJhOd7qlI/AAAAAAAAAGU/rkni3oEANvc/CircuitIssues.gif

    Model:

        public class Circuit
        {
            public virtual int CircuitID { get; set; }
            public virtual string CJON { get; set; }
            public virtual IList<Document> Documents { get; set; }
        }

        public class Issue
        {
            public virtual int IssueID { get; set; }
            public virtual string Summary { get; set; }
            public virtual IList<Model.Document> Documents { get; set; }
        }

        public class Document
        {
            public virtual int DocumentID { get; set; }
            public virtual string Data { get; set; }
        }

    Mapping files:

        <?xml version="1.0" encoding="utf-8"?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model">
          <class name="Circuit" table="Circuit">
            <id name="CircuitID">
              <column name="CircuitID" not-null="true"/>
              <generator class="identity" />
            </id>
            <property name="CJON" column="CJON" type="string" not-null="true"/>
            <bag name="Documents" table="CircuitDocument" cascade="save-update,delete-orphan">
              <key column="CircuitID"/>
              <many-to-many class="Document">
                <column name="DocumentID" not-null="true"/>
              </many-to-many>
            </bag>
          </class>
        </hibernate-mapping>

        <?xml version="1.0" encoding="utf-8"?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model">
          <class name="Issue" table="Issue">
            <id name="IssueID">
              <column name="IssueID" not-null="true"/>
              <generator class="identity" />
            </id>
            <property name="Summary" column="Summary" type="string" not-null="true"/>
            <bag name="Documents" table="IssueDocument" cascade="save-update,delete-orphan">
              <key column="IssueID"/>
              <many-to-many class="Document">
                <column name="DocumentID" not-null="true"/>
              </many-to-many>
            </bag>
          </class>
        </hibernate-mapping>

        <?xml version="1.0" encoding="utf-8"?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model">
          <class name="Document" table="Document">
            <id name="DocumentID">
              <column name="DocumentID" not-null="true"/>
              <generator class="identity" />
            </id>
            <property name="Data" column="Data" type="string" not-null="true"/>
          </class>
        </hibernate-mapping>

    Code:

        using (ISession session = sessionFactory.OpenSession())
        {
            var doc = new Model.Document() { Data = "Doc" };
            var circuit = new Model.Circuit() { CJON = "circ" };
            circuit.Documents = new List<Model.Document>(new Model.Document[] { doc });
            var issue = new Model.Issue() { Summary = "iss" };
            issue.Documents = new List<Model.Document>(new Model.Document[] { doc });
            session.Save(circuit);
            session.Save(issue);
            session.Flush();
        }

        using (ISession session = sessionFactory.OpenSession())
        {
            foreach (var item in session.CreateCriteria<Model.Circuit>().List<Model.Circuit>())
            {
                session.Delete(item);
            }
            // this flush fails, because there is a reference to a child document from issue
            session.Flush();

            foreach (var item in session.CreateCriteria<Model.Issue>().List<Model.Issue>())
            {
                session.Delete(item);
            }
            session.Flush();
        }


  • Backup software for Windows Server 2008 R2 Enterprise with 4 virtual machines (Exchange, SQL, AD, SharePoint)

    - by MadBoy
    What are the options for backup software for:

    HOST - Windows Server 2008 R2 Enterprise with Hyper-V
    VIRTUAL - Windows Server 2008 R2 Enterprise with Exchange 2010
    VIRTUAL - Windows Server 2008 R2 Enterprise with SQL Express / SharePoint
    VIRTUAL - Windows Server 2008 R2 Enterprise with Terminal Services (10 users working on it)
    VIRTUAL - Windows Server 2008 R2 Enterprise with AD/DNS

    What I'm looking at is the possibility of having an offsite backup thru FTP, and maybe a copy to USB/eSATA/LAN drives for easily taking backup data outside of the company. What I've been looking at:

    - Symantec Backup Exec System Recovery 2010 has an offsite backup, but I would need 5 licenses and it doesn't have granular recovery.
    - Symantec Backup Exec 2010 seems OK but a bit expensive.
    - Microsoft DPM 2010 requires full SQL Standard, and for each machine I would need 4 Enterprise licenses. But does it allow offsite backup without the need for an additional license and a server outside of the company (for doing a DPM backup of DPM)?

    What other options? This is a 10-person company, so the costs matter, but also convenience and security. Offsite backup is a requirement.


  • Custom Model binder not firing

    - by mare
    This is my custom model binder. I have my breakpoint set at BindModel, but it does not get hit with this controller action:

        public ActionResult Create(TabGroup tabGroup) ...

        public class BaseContentObjectCommonPropertiesBinder : DefaultModelBinder
        {
            public new object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
            {
                if (controllerContext == null)
                {
                    throw new ArgumentNullException("controllerContext");
                }
                if (bindingContext == null)
                {
                    throw new ArgumentNullException("bindingContext");
                }

                BaseContentObject obj = (BaseContentObject)base.BindModel(controllerContext, bindingContext);
                obj.Modified = DateTime.Now;
                obj.Created = DateTime.Now;
                obj.ModifiedBy = obj.CreatedBy = controllerContext.HttpContext.User.Identity.Name;
                return obj;
            }
        }

    My registration:

        // tried both of these two lines
        ModelBinders.Binders[typeof(TabGroup)] = new BaseContentObjectCommonPropertiesBinder();
        ModelBinders.Binders.Add(typeof(TabGroup), new BaseContentObjectCommonPropertiesBinder());
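
    The likely culprit is visible in the snippet itself: the new modifier hides DefaultModelBinder's virtual BindModel instead of overriding it, so when MVC dispatches through the IModelBinder interface it still lands on the base implementation and the breakpoint never hits. Declaring the method as an override makes the binder fire:

        public class BaseContentObjectCommonPropertiesBinder : DefaultModelBinder
        {
            // 'override' participates in virtual dispatch; 'new' does not
            public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
            {
                // ... same body as above ...
            }
        }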


  • How to read a Choice Field from the SharePoint 2010 Client Object Model

    - by Eric
    Hello, I'm using the SharePoint 2010 client object model. I'm trying to retrieve the contents of a custom list. Everything works fine except when I try to retrieve a choice field; when I do, I get a PropertyOrFieldNotInitializedException. Here is the code I'm using:

        ClientContext clientContext = new ClientContext("https://mysite");
        clientContext.FormsAuthenticationLoginInfo = new FormsAuthenticationLoginInfo("aaa", "bbb");
        clientContext.AuthenticationMode = ClientAuthenticationMode.FormsAuthentication;

        List list = clientContext.Web.Lists.GetByTitle("mylist");
        CamlQuery camlQuery = new CamlQuery();
        camlQuery.ViewXml = "<View/>";
        ListItemCollection listItems = list.GetItems(camlQuery);
        clientContext.Load(listItems);
        clientContext.ExecuteQuery();

        foreach (ListItem listItem in listItems)
        {
            listBoxControl1.Items.Add(listItem["Assigned_x0020_Company"]);
        }

    Thank you for your help!
    Eric
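
    One thing worth trying (an assumption on my part, not something confirmed in the post): request the field explicitly when loading the collection, so the client object model initializes it on every item instead of only the fields returned by the default view:

        clientContext.Load(listItems,
            items => items.Include(
                item => item["Assigned_x0020_Company"]));
        clientContext.ExecuteQuery();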


  • Correct way to perform an update using an ADO.NET Entity Model in .NET 4

    - by sf
    Hi, I just wanted to clarify whether this is the appropriate way to perform an update using a designer-generated ADO.NET Entity Model. I was wondering if there is a way to perform an update without creating the sqlMenu object.

        public class MenusRepository
        {
            public void UpdateMenu(Menu menu)
            {
                // _context is instantiated in the constructor
                Menu sqlMenu = (from m in _context.Menus
                                where m.MenuId == menu.MenuId
                                select m).FirstOrDefault();

                if (sqlMenu == null)
                    throw new ArgumentException("Can't update a Menu which does not exist");

                // associate values here
                sqlMenu.Name = menu.Name;

                _context.SaveChanges();
            }
        }
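
    If the goal is to skip the initial query entirely, one option in EF 4 is to attach the detached entity and mark it modified. This is a sketch, assuming _context is the designer-generated ObjectContext and menu arrives with a valid MenuId:

        public void UpdateMenuWithoutFetch(Menu menu)
        {
            _context.Menus.Attach(menu);
            // marks every property as modified, so all columns are written on save
            _context.ObjectStateManager.ChangeObjectState(menu, System.Data.EntityState.Modified);
            _context.SaveChanges();
        }

    The trade-off is that this updates all columns and skips the existence check, so a missing row surfaces as an OptimisticConcurrencyException from SaveChanges rather than the ArgumentException above.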

