Search Results: 'oracle solaris 11 express'

  • SOA & BPM @ Virtual Developer Day: Oracle Fusion Development–July 10th 2012

    - by JuergenKress
    Virtual Developer Day: Oracle Fusion Development. Register now for this FREE hands-on online workshop. Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development, plus live Q&A chats with Oracle technical staff. Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF’s integration with the Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete, productive development platform for your custom applications. Join us at this FREE virtual event and learn the latest in Fusion Development, including: Is Oracle ADF development faster and simpler than Forms, Apex or .Net? Mobile Application Development with ADF Mobile Oracle ADF development with Eclipse Oracle WebCenter Portal and ADF Development Application Lifecycle Management with ADF Building Process Centric Applications with ADF and BPM Oracle Business Intelligence and ADF Integration Live Q&A chats with Oracle technical staff Developer, lead, manager or architect – this event has something for everyone. Don’t miss this opportunity. Tuesday, July 10, 2012 9:00 a.m. PT – 1:00 p.m. PT 11:00 a.m. CT – 3:00 p.m. CT 12:00 p.m. ET – 4:00 p.m. ET 1:00 p.m. BRT – 5:00 p.m. BRT Agenda 9:00 a.m. Opening 9:30 a.m. Keynote: Oracle Fusion Development Track 1 Introduction to Fusion Development Track 2 What's New in Fusion Development Track 3 Fusion Development in the Enterprise 10:00 a.m. Is Oracle ADF Development Faster and Simpler than Oracle Forms, APEX or .Net? Mobile Application Development with ADF Mobile Oracle WebCenter Portal and ADF Development 11:00 a.m. Rich Web UI made simple – an ADF Faces Overview Oracle Enterprise Pack for Eclipse - ADF Development Building Process Centric Applications with ADF and BPM 12:00 noon Next Generation Controller for JSF Application Lifecycle Management for ADF Oracle Business Intelligence and ADF Integration Session abstracts Register online now for this FREE event! SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: OTN Virtual Developer Day,Education,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Upgrade from 9.10 to 11

    - by Fernando Costa
    I have Ubuntu 9.10 Karmic installed on my machine and I got a warning prompting me to upgrade to 10.04. That is fine with me and I would like to upgrade to 10.04, but here is my question: if I do, what will happen to my file system and all my files? My programs, my Apache configuration, all my servers? Does everything reset to defaults? Will I lose all my data? Because if the answer is yes and I would lose everything, why show such a warning at all? In that case the best solution would be to format everything and install a brand new Ubuntu 11. Otherwise I will stay on 9.10 Karmic and just update normally when prompted. What is the best thing to do in this situation? I appreciate any help!

    Read the article

  • EPM 11.1.2.2 Architecture: Essbase

    - by Marc Schumacher
    Since a lot of components exist to access or administer Essbase, there are also a couple of client tools available. End users typically use the Excel Add-In or SmartView nowadays. While the Excel Add-In talks to the Essbase server directly using various ports, SmartView connects to Essbase through Provider Services using the HTTP protocol. The ability to communicate over a single port is one of the major advantages of SmartView over the Excel Add-In. If you consider using the Excel Add-In going forward, please make sure you are aware of the Statement of Direction for this component. The Administration Services Console, Integration Services Console and Essbase Studio are clients that are mainly used by Essbase administrators or application designers. While Integration Services and Essbase Studio are used to set up Essbase applications by loading metadata or simply for data loads, Administration Services is used for all kinds of Essbase administration. All clients use only one or two ports to talk to their server counterparts, which makes them work through firewalls easily. Although the clients for Provider Services (SmartView) and Administration Services (Administration Services Console) use only a single port to communicate with their backend services, the backend services themselves need the configured Essbase port range to talk to the Essbase server. Any communication to repository databases is done using JDBC connections. Essbase Studio and Integration Services use different technologies to talk to the Essbase server: Integration Services uses the CAPI, Essbase Studio uses the JAPI. However, both use the configured port range on the Essbase server to talk to Essbase. Connections to data sources are based on either ODBC (Integration Services, Essbase) or JDBC (Essbase Studio). As for all other components discussed previously, when setting up firewall rules be aware that all services may need to talk to the external authentication sources; this is not only true for Shared Services.
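    To make the port-range remark above concrete, the Essbase agent port and the application-process port range are normally controlled in essbase.cfg. The values below are only a hypothetical sketch using commonly cited defaults, not settings taken from this article; adjust them to your environment and open the same range in the firewall:

      ; essbase.cfg (illustrative values only)
      AGENTPORT 1423           ; port the Essbase agent listens on
      SERVERPORTBEGIN 32768    ; start of the application server process port range
      SERVERPORTEND 33768      ; end of the range that clients and services must reach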

    Read the article

  • Invitation to Oracle Partner Day 2012

    - by A&C Redaktion
    INVITATION: ORACLE PARTNER DAY GERMANY, October 29, 2012: COMMERZBANK ARENA, Frankfurt. Dear Oracle Partner, full speed ahead! Our realignment and consolidation of resources, which you have surely followed closely, is now complete. We are convinced that together we will all benefit from these changes in the long run. Just one keyword: "One Oracle Red Stack". Oracle Alliances & Channels and its partners can now offer the complete Oracle product portfolio of Oracle Software Technology, Oracle Applications and Oracle Hardware, with the best possible support from our new organization. My name is Christian Werner. A few weeks ago I became responsible for all Alliances & Channels areas in Germany. Silvia Kaske now leads the entire A&C organization in Europe North as Senior Director Europe North Alliances & Channel Sales. Welcome aboard! What can you expect from this as an Oracle partner? We have concentrated all our forces: three separate units have become one. For you this means focused growth, bundled expertise and support infrastructure, aligned processes and shorter approval times. We believe the time is right to switch together to "full speed ahead". And the conditions for you are better than ever if you want to expand your customer base or open up new market segments. Cast off: come to the Oracle Partner Day if you want to know what will improve for you, on October 29, 2012 at the Commerzbank Arena in Frankfurt. Meet your newly organized A&C team. Experience the product announcements from Oracle OpenWorld in San Francisco (September 30 to October 4, 2012) first hand. Take advantage of the new speed-dating format with selected Oracle experts on site for concrete questions about sales and products. I am especially pleased that Jürgen Kunz, SVP Technology Northern Europe & Country Leader Germany, will be this year's keynote speaker. I cordially invite you and look forward to welcoming you in Frankfurt! Attendance is of course free of charge for you as an Oracle partner. Here you will find further information about the Oracle Partner Day as well as the simple registration option. I look forward to seeing you! Yours, Christian Werner, Senior Director Alliances & Channels Germany. By the way: the Oracle Day for end customers takes place directly after the Oracle Partner Day. As a partner you are of course welcome to attend this event together with your customers. Please register quickly, as places are limited. Here you will find further information about the Oracle Day.

    Read the article

  • The Spotlight is on You

    - by Claudia McDonald
    On the field or off the field, in ballet slippers or singing your heart out on stage, offering a stellar performance every time is key to holding the attention of your audience and having them come back hungry for more. Similarly, showing up to a new business meeting wearing pink tights and a tutu might be one way of holding the attention of your customer, but offering them an unmatched and ground-breaking software solution certainly will get their attention! Simply put, the Oracle Exastack program enables both ISVs and OEMs to rapidly build and deliver faster, more reliable applications. It comes as no surprise that the success of the Oracle Exastack program is centered on establishing Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud as the highest performance, lowest cost platforms available in the industry today. But here is where the real standing-ovation-worthy facts come in. The Oracle Exadata Database Machine is the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) workloads, making it the ideal platform for consolidating onto private clouds. The Oracle Exalogic Elastic Cloud, meanwhile, is an engineered hardware and software system tested and tuned by Oracle to provide the best foundation for cloud computing, while allowing Java applications, Oracle Applications and other enterprise applications to run with extreme performance. And the crowd goes wild, ladies and gentlemen! In just four months, our partners have already achieved over 150 Oracle Exastack Ready milestones for Oracle Solaris, Oracle Linux, Oracle Database and Oracle WebLogic Server. As Judson has said, “With the Oracle Exastack program, Oracle is helping partners test, tune and optimize their applications to deliver optimal performance and reliability, accelerating innovation and delivering superior value to customers.” And get this: not only are their applications running faster and more efficiently, they are actually being delivered at a lower cost to customers than ever before – extreme performance well deserving of three consecutive arabesques! If you haven’t already, check out what some of our partners are saying about the Oracle Exastack program in this video, and find out all that is available to you today. By participating in the Oracle Exastack program, partners now have the ability to achieve Oracle Exadata Optimized, Oracle Exalogic Optimized, Oracle Exadata Ready and Oracle Exalogic Ready status for their solutions. New Oracle Exastack labs provide OPN members with access to Oracle technical resources, on-premise facilities and remote lab environments. With Oracle Exastack Optimized, partners get faster and more reliable applications running on the Oracle Exadata Database Machine, as well as the long-awaited Oracle Exalogic Elastic Cloud. Savvy OPN members are leveraging the Oracle Exastack Optimized program toward their advancement to Platinum or Diamond level in OPN. Partners are achieving Oracle Exadata Ready and Oracle Exalogic Ready status, giving them a competitive advantage and signaling to customers that their applications readily support Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud to deliver extreme performance. Get your dancing shoes on, The OPN Communications Team

    Read the article

  • Can't install fglrx on Kernel 3.11.0-12

    - by byf-ferdy
    I'm running Ubuntu GNOME 13.10 on a Sony Vaio laptop. I tried to install the newest fglrx driver following the cchtml.com guide, but no matter which version I use (13.4 or even 13.9), installation of the generated .debs fails with this message: Error! Bad return status for module build on kernel: 3.11.0-12-generic (x86_64) This seems to be a confirmed bug on Launchpad. Currently I'm running the Radeon SI driver, but I really need 3D acceleration. What can I do to install any version of fglrx correctly, or to speed up the bug-fixing process? How long does fixing bugs like these usually take?
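    Not an answer from the thread, but a hedged debugging starting point: the "Bad return status for module build" message comes from DKMS, and the actual compiler error is written to a build log. Something along these lines (paths are the usual DKMS locations, not confirmed for this particular system) can show which kernel API change broke the build:

      # show which modules DKMS knows about and their build state
      dkms status
      # inspect the tail of the failed fglrx build log (the version directory may differ)
      sudo tail -n 50 /var/lib/dkms/fglrx/*/build/make.log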

    Read the article

  • installing Oracle Database 10g XE Server in Ubuntu 11.04, "E: Unable to locate package oracle-xe"

    - by Nguyen Phi Vu
    I have read many posts about installing Oracle Database 10g XE Server on Ubuntu, such as this one, but I get an error: E: Unable to locate package oracle-xe when executing the command sudo apt-get install oracle-xe At the previous step (sudo apt-get update), it also reports: E: Some index files failed to download. They have been ignored, or old ones used instead. Has anyone run into and solved this problem? I have searched for it but found no proper answer.
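    For context (not part of the original question): "Unable to locate package" simply means no configured repository provides oracle-xe. Guides for Oracle XE 10g usually have you add Oracle's Debian repository and import its signing key before installing. A rough sketch of those steps, using the repository line commonly cited in such guides (verify it against the guide you are following):

      # add Oracle's XE repository, then refresh the index and install
      echo "deb http://oss.oracle.com/debian unstable main non-free" | sudo tee -a /etc/apt/sources.list
      sudo apt-get update
      sudo apt-get install oracle-xe

    The "Some index files failed to download" warning also needs to be resolved first, since a broken update leaves apt unaware of newly added repositories.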

    Read the article

  • European Companies Can Now Subscribe to Oracle CRM on Demand Locally

    - by divya.malik
    Here is some important news that was announced by our colleagues in Oracle EMEA last week. Joining our other data centers in Austin, Colorado Springs and Sydney-Australia, Oracle announced a new data center in Europe. Oracle CRM On Demand customers can now have their content securely hosted locally in the region. Here is the press release, and to learn more about Oracle's CRM On Demand offering click here.

    Read the article

  • Mix metrics for May 11, 2010

    - by tim.bonnemann
    It's been a while, sorry about not keeping up. Here once again are our latest community metrics. Any questions or suggestions, please leave a comment. Thanks! Registered Mix users (weekly growth) 62,937 (+0.5%) Active users (percent of total) Last 30 days: 3,928 (6.2%) Last 60 days: 7,850 (12.5%) Last 90 days: 11,875 (18.9%) Traffic (30-day) Visits: 11,623 Page views: 57,846 Twitter Followers: 3,311 List mentions: 193 User-generated content (30-day) New ideas: 31 New questions: 72 New comments: 373 Groups There are currently 1,421 Mix groups (requires login).

    Read the article

  • Ubuntu 11.10, FF 11, ATI Catalyst v12.2, Driver Package version 8.95, WebGL doesn't work

    - by Victor S
    I'm running Ubuntu 11.10, FF 11, ATI Catalyst v12.2, Driver Package version 8.95, and WebGL doesn't work. This is a pretty capable setup and it definitely did work with FF previously. Note: I've downloaded and installed the ATI drivers from AMD. Not sure how to get WebGL working. Update, my video card info: 01:00.0 VGA compatible controller: ATI Technologies Inc Cypress [Radeon HD 5800 Series] (prog-if 00 [VGA controller]) Subsystem: ATI Technologies Inc Device 0b00 Flags: bus master, fast devsel, latency 0, IRQ 44 Memory at d0000000 (64-bit, prefetchable) [size=256M] Memory at fbee0000 (64-bit, non-prefetchable) [size=128K] I/O ports at d000 [size=256] Expansion ROM at fbec0000 [disabled] [size=128K] Capabilities: [50] Power Management version 3 Capabilities: [58] Express Legacy Endpoint, MSI 00 Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?> Capabilities: [150] Advanced Error Reporting Kernel driver in use: fglrx_pci Kernel modules: fglrx, radeon
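    One generic check worth trying, offered only as a sketch independent of this particular driver combination: Firefox reports its WebGL state on the about:support page, and a driver that Firefox has blocklisted can be overridden for testing through about:config:

      # in the Firefox address bar:
      about:support     # check the Graphics section for the WebGL renderer and any blocklist message
      about:config      # set webgl.force-enabled = true to bypass the driver blocklist (for testing only)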

    Read the article

  • Live Webcast, Dec. 6: Enterprise Clouds with Oracle VM

    - by Monica Kumar
    Mark your calendar! On Tuesday, Dec. 6th at 9 a.m. PT, we are hosting a live webcast with Oracle VM experts. Enterprise Clouds with Oracle VM Tuesday, Dec. 6 at 9 AM US PT The ability to create a cloud leveraging public or private infrastructure has been hampered by the lack of availability of practical, cost-effective choices for server virtualization. In this session, you will learn how Oracle provides a single virtualization solution for your entire infrastructure, and how Oracle Enterprise Manager and Oracle Virtual Assembly Builder help you manage Oracle Applications across the cloud. Also find out how virtualization was leveraged to transform IT for Oracle University and support more than 350,000 students in more than 40,000 classes each year. Those lessons have paved the path to private cloud computing inside Oracle. Speakers: Adam Hawley, Senior Director of Product Management, Oracle Dan Herrup, Principal Systems Engineer, Oracle Corporate Citizenship Register Now.

    Read the article

  • Oracle OpenWorld 2012 - What's New

    - by Cinzia Mascanzoni
    Oracle OpenWorld 2012 is now over. Here is a summary of the major announcements on Hardware and Technology. Oracle Unveils Expanded Oracle Cloud Services Portfolio Oracle Unveils New Partner Cloud Programs Oracle Announces Latest Release of Oracle Exalogic Elastic Cloud Oracle Announces Oracle Exadata X3 Database In-Memory Machine Oracle Outlines Opportunity to Transform Industries from Device to Data Center with Embedded Java Oracle Announces Oracle Solaris 11.1 Latest Release of Oracle Exalytics In-Memory Machine Software Enables Customers to View and Analyze Data at the Speed of Business New Release of Oracle Business Intelligence Enables Users to Quickly Access and Analyze Key Business Information, Anytime, Anywhere

    Read the article

  • Conventional Parallel Inserts do Exist in Oracle 11

    - by jean-pierre.dijcks
    Had an interesting chat with Greg about this topic, and a quick search turned up a link that discusses it in some detail (no reason for me to repeat it here). insert /*+ noappend parallel(t1) */ into t1 select /*+ parallel(t2) */ * from t2 produces a LOAD TABLE CONVENTIONAL operation and does give you a parallel insert without doing a direct-path insert. As this is missing from the official documentation it is probably something few people actually know exists, so kudos to Randolf Geist.
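    A quick way to confirm the behaviour for yourself (a sketch only; t1 and t2 reuse the example tables above) is to look at the execution plan and check for LOAD TABLE CONVENTIONAL instead of the direct-path LOAD AS SELECT row source:

      EXPLAIN PLAN FOR
        INSERT /*+ noappend parallel(t1) */ INTO t1
        SELECT /*+ parallel(t2) */ * FROM t2;

      -- the plan should show LOAD TABLE CONVENTIONAL rather than LOAD AS SELECT
      SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);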

    Read the article

  • No keyring secrets found for [ssid] /802-11-wireless-security, ubuntu 12.04

    - by acimer
    I'm on Ubuntu 12.04 64-bit, installed a couple of days ago. The issue I'm having is this: on startup it connects to my wireless network without a problem, but after a while I am disconnected and prompted to enter the key for the wireless network (which is already entered and saved), so I just click 'OK', but the wireless doesn't connect again. Restarting Network Manager doesn't help either. Only a reboot helps, after which Ubuntu connects to that wireless network without a problem. The terminal outputs this error message: ** Message: No keyring secrets found for cimermanovic /802-11-wireless-security; asking user. cimermanovic is the SSID name. Also, here are some error messages that Network Manager is giving: (nm-applet:31926): GdkPixbuf-CRITICAL **: gdk_pixbuf_scale_simple: assertion `dest_width 0' failed (nm-applet:31693): GdkPixbuf-CRITICAL **: gdk_pixbuf_scale_simple: assertion `dest_width 0' failed (nm-applet:30184): GdkPixbuf-CRITICAL **: gdk_pixbuf_scale_simple: assertion `dest_width 0' failed What should I do to fix this? Thanks!
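    A workaround often suggested for this class of problem (a sketch, not a confirmed fix for this exact bug) is to store the Wi-Fi secret as a system connection instead of in the user keyring, so NetworkManager can read it without prompting. In the connection editor, tick "Available to all users", then verify that the saved profile actually contains the key; the section name below matches the error message but may vary with the NetworkManager version:

      # the file name matches the connection/SSID name
      sudo cat /etc/NetworkManager/system-connections/cimermanovic
      # the [802-11-wireless-security] section should contain a psk= entry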

    Read the article

  • ubuntu 11, maximum resolution is a low 1024 x 768

    - by djturbojp7
    I just installed Ubuntu 11 and the maximum resolution it will let me set is 1024 x 768. My graphics are onboard, it's the Intel 82845G. I'm trying to increase the resolution and get smoother video. UPDATE: user1@pc1:~$ xrandr | grep maximum Screen 0: minimum 320 x 200, current 1024 x 768, maximum 2048 x 2048 user1@pc1:~$ gtf 1280 1024 59.9 # 1280x1024 @ 59.90 Hz (GTF) hsync: 63.49 kHz; pclk: 108.70 MHz Modeline "1280x1024_59.90" 108.70 1280 1360 1496 1712 1024 1025 1028 1060 -HSync +Vsync user1@pc1:~$ xrandr --newmode "1280x1024_59.90" 108.70 1280 1360 1496 1712 1024 1025 1028 1060 -HSync +Vsync X Error of failed request: BadName (named color or font does not exist) Major opcode of failed request: 149 (RANDR) Minor opcode of failed request: 16 (RRCreateMode) Serial number of failed request: 20 Current serial number in output stream: 20 user1@pc1:~$
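    For reference, not a guaranteed fix: the usual sequence after defining a mode with --newmode is to attach it to a specific output with --addmode and then select it. A sketch of the full sequence (the output name VGA1 is a placeholder; take the real name from xrandr -q):

      xrandr -q                                   # list outputs (e.g. VGA1, LVDS1)
      xrandr --newmode "1280x1024_59.90" 108.70 1280 1360 1496 1712 1024 1025 1028 1060 -HSync +Vsync
      xrandr --addmode VGA1 "1280x1024_59.90"     # attach the mode to the detected output
      xrandr --output VGA1 --mode "1280x1024_59.90"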

    Read the article

  • [Japanese] Oracle Linux 5.6 and Oracle Linux 6 DVDs now available

    - by Yusuke.Yamamoto
    [Japanese-language post; most of the original text was garbled by character encoding.] The post announces that installation DVD images for Oracle Linux 5.6 and Oracle Linux 6 are now available for download: Oracle Linux 6 DVDs Now Available (link) and Oracle Linux 5.6 DVDs Now Available (link). It also discusses the Unbreakable Enterprise Kernel (UEK) offered alongside the Red Hat compatible kernel, its use in Oracle Exadata/Exalogic, and Oracle on Linux. Related links: Oracle Linux Unbreakable Enterprise Kernel - nkjmkzk.net OCFS2 1.6 - nkjmkzk.net [Article] Fusion Applications and the Linux kernel, Oracle OpenWorld 2010 - Publickey

    Read the article

  • [Japanese] OTN seminar on demand: SQL tuning, Part 1 & 2

    - by Yusuke.Yamamoto
    [Japanese-language OTN seminar-on-demand entry; the body text was garbled by character encoding.] Session date: 2010/10/12; topic: SQL tuning, Parts 1 and 2. Recording and slides: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/OCSTuning12_10121330.wmv http://www.oracle.com/technology/global/jp/ondemand/otn-seminar/pdf/SQL.pdf

    Read the article

  • Calculating the Size (in Bytes and MB) of a Oracle Coherence Cache

    - by Ricardo Ferreira
    The concept and usage of data grids are becoming very popular in this days since this type of technology are evolving very fast with some cool lead products like Oracle Coherence. Once for a while, developers need an programmatic way to calculate the total size of a specific cache that are residing in the data grid. In this post, I will show how to accomplish this using Oracle Coherence API. This example has been tested with 3.6, 3.7 and 3.7.1 versions of Oracle Coherence. To start the development of this example, you need to create a POJO ("Plain Old Java Object") that represents a data structure that will hold user data. This data structure will also create an internal fat so I call that should increase considerably the size of each instance in the heap memory. Create a Java class named "Person" as shown in the listing below. package com.oracle.coherence.domain; import java.io.Serializable; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Random; @SuppressWarnings("serial") public class Person implements Serializable { private String firstName; private String lastName; private List<Object> fat; private String email; public Person() { generateFat(); } public Person(String firstName, String lastName, String email) { setFirstName(firstName); setLastName(lastName); setEmail(email); generateFat(); } private void generateFat() { fat = new ArrayList<Object>(); Random random = new Random(); for (int i = 0; i < random.nextInt(18000); i++) { HashMap<Long, Double> internalFat = new HashMap<Long, Double>(); for (int j = 0; j < random.nextInt(10000); j++) { internalFat.put(random.nextLong(), random.nextDouble()); } fat.add(internalFat); } } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } } Now let's create a Java program that will start a data grid into Coherence and will create a cache named "People", that will hold people instances with sequential integer keys. Each person created in this program will trigger the execution of a custom constructor created in the People class that instantiates an internal fat (the random amount of data generated to increase the size of the object) for each person. Create a Java class named "CreatePeopleCacheAndPopulateWithData" as shown in the listing below. package com.oracle.coherence.demo; import com.oracle.coherence.domain.Person; import com.tangosol.net.CacheFactory; import com.tangosol.net.NamedCache; public class CreatePeopleCacheAndPopulateWithData { public static void main(String[] args) { // Asks Coherence for a new cache named "People"... NamedCache people = CacheFactory.getCache("People"); // Creates three people that will be putted into the data grid. Each person // generates an internal fat that should increase its size in terms of bytes... Person pessoa1 = new Person("Ricardo", "Ferreira", "[email protected]"); Person pessoa2 = new Person("Vitor", "Ferreira", "[email protected]"); Person pessoa3 = new Person("Vivian", "Ferreira", "[email protected]"); // Insert three people at the data grid... people.put(1, pessoa1); people.put(2, pessoa2); people.put(3, pessoa3); // Waits for 5 minutes until the user runs the Java program // that calculates the total size of the people cache... 
try { System.out.println("---> Waiting for 5 minutes for the cache size calculation..."); Thread.sleep(300000); } catch (InterruptedException ie) { ie.printStackTrace(); } } } Finally, let's create a Java program that, using the Coherence API and JMX, will calculate the total size of each cache that the data grid is currently managing. The approach used in this example was retrieve every cache that the data grid are currently managing, but if you are interested on an specific cache, the same approach can be used, you should only filter witch cache will be looked for. Create a Java class named "CalculateTheSizeOfPeopleCache" as shown in the listing below. package com.oracle.coherence.demo; import java.text.DecimalFormat; import java.util.Map; import java.util.Set; import java.util.TreeMap; import javax.management.MBeanServer; import javax.management.MBeanServerFactory; import javax.management.ObjectName; import com.tangosol.net.CacheFactory; public class CalculateTheSizeOfPeopleCache { @SuppressWarnings({ "unchecked", "rawtypes" }) private void run() throws Exception { // Enable JMX support in this Coherence data grid session... System.setProperty("tangosol.coherence.management", "all"); // Create a sample cache just to access the data grid... CacheFactory.getCache(MBeanServerFactory.class.getName()); // Gets the JMX server from Coherence data grid... MBeanServer jmxServer = getJMXServer(); // Creates a internal data structure that would maintain // the statistics from each cache in the data grid... Map cacheList = new TreeMap(); Set jmxObjectList = jmxServer.queryNames(new ObjectName("Coherence:type=Cache,*"), null); for (Object jmxObject : jmxObjectList) { ObjectName jmxObjectName = (ObjectName) jmxObject; String cacheName = jmxObjectName.getKeyProperty("name"); if (cacheName.equals(MBeanServerFactory.class.getName())) { continue; } else { cacheList.put(cacheName, new Statistics(cacheName)); } } // Updates the internal data structure with statistic data // retrieved from caches inside the in-memory data grid... Set<String> cacheNames = cacheList.keySet(); for (String cacheName : cacheNames) { Set resultSet = jmxServer.queryNames( new ObjectName("Coherence:type=Cache,name=" + cacheName + ",*"), null); for (Object resultSetRef : resultSet) { ObjectName objectName = (ObjectName) resultSetRef; if (objectName.getKeyProperty("tier").equals("back")) { int unit = (Integer) jmxServer.getAttribute(objectName, "Units"); int size = (Integer) jmxServer.getAttribute(objectName, "Size"); Statistics statistics = (Statistics) cacheList.get(cacheName); statistics.incrementUnit(unit); statistics.incrementSize(size); cacheList.put(cacheName, statistics); } } } // Finally... print the objects from the internal data // structure that represents the statistics from caches... 
cacheNames = cacheList.keySet(); for (String cacheName : cacheNames) { Statistics estatisticas = (Statistics) cacheList.get(cacheName); System.out.println(estatisticas); } } public MBeanServer getJMXServer() { MBeanServer jmxServer = null; for (Object jmxServerRef : MBeanServerFactory.findMBeanServer(null)) { jmxServer = (MBeanServer) jmxServerRef; if (jmxServer.getDefaultDomain().equals(DEFAULT_DOMAIN) || DEFAULT_DOMAIN.length() == 0) { break; } jmxServer = null; } if (jmxServer == null) { jmxServer = MBeanServerFactory.createMBeanServer(DEFAULT_DOMAIN); } return jmxServer; } private class Statistics { private long unit; private long size; private String cacheName; public Statistics(String cacheName) { this.cacheName = cacheName; } public void incrementUnit(long unit) { this.unit += unit; } public void incrementSize(long size) { this.size += size; } public long getUnit() { return unit; } public long getSize() { return size; } public double getUnitInMB() { return unit / (1024.0 * 1024.0); } public double getAverageSize() { return size == 0 ? 0 : unit / size; } public String toString() { StringBuffer sb = new StringBuffer(); sb.append("\nCache Statistics of '").append(cacheName).append("':\n"); sb.append(" - Total Entries of Cache -----> " + getSize()).append("\n"); sb.append(" - Used Memory (Bytes) --------> " + getUnit()).append("\n"); sb.append(" - Used Memory (MB) -----------> " + FORMAT.format(getUnitInMB())).append("\n"); sb.append(" - Object Average Size --------> " + FORMAT.format(getAverageSize())).append("\n"); return sb.toString(); } } public static void main(String[] args) throws Exception { new CalculateTheSizeOfPeopleCache().run(); } public static final DecimalFormat FORMAT = new DecimalFormat("###.###"); public static final String DEFAULT_DOMAIN = ""; public static final String DOMAIN_NAME = "Coherence"; } I've commented the overall example so, I don't think that you should get into trouble to understand it. Basically we are dealing with JMX. The first thing to do is enable JMX support for the Coherence client (ie, an JVM that will only retrieve values from the data grid and will not integrate the cluster) application. This can be done very easily using the runtime "tangosol.coherence.management" system property. Consult the Coherence documentation for JMX to understand the possible values that could be applied. The program creates an in memory data structure that holds a custom class created called "Statistics". This class represents the information that we are interested to see, which in this case are the size in bytes and in MB of the caches. An instance of this class is created for each cache that are currently managed by the data grid. Using JMX specific methods, we retrieve the information that are relevant for calculate the total size of the caches. To test this example, you should execute first the CreatePeopleCacheAndPopulateWithData.java program and after the CreatePeopleCacheAndPopulateWithData.java program. 
The results in the console should be something like this: 2012-06-23 13:29:31.188/4.970 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence.xml" 2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational overrides from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml" 2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified 2012-06-23 13:29:31.266/5.048 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified Oracle Coherence Version 3.6.0.4 Build 19111 Grid Edition: Development mode Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved. 2012-06-23 13:29:33.156/6.938 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded Reporter configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/reports/report-group.xml" 2012-06-23 13:29:33.500/7.282 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/coherence-cache-config.xml" 2012-06-23 13:29:35.391/9.173 Oracle Coherence GE 3.6.0.4 <D4> (thread=Main Thread, member=n/a): TCMP bound to /192.168.177.133:8090 using SystemSocketProvider 2012-06-23 13:29:37.062/10.844 Oracle Coherence GE 3.6.0.4 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) joined cluster "cluster:0xC4DB" with senior Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) 2012-06-23 13:29:37.172/10.954 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Cluster with senior member 1 2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1 2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1 2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Started cluster Name=cluster:0xC4DB Group{Address=224.3.6.0, Port=36000, TTL=4} MasterMemberSet ( ThisMember=Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle) OldestMember=Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith) ActualMemberSet=MemberSet(Size=2, BitSetCount=2 Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith) Member(Id=2, 
Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle) ) RecycleMillis=1200000 RecycleSet=MemberSet(Size=0, BitSetCount=0 ) ) TcpRing{Connections=[1]} IpMonitor{AddressListSize=0} 2012-06-23 13:29:37.891/11.673 Oracle Coherence GE 3.6.0.4 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1 2012-06-23 13:29:39.203/12.985 Oracle Coherence GE 3.6.0.4 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1 2012-06-23 13:29:39.297/13.079 Oracle Coherence GE 3.6.0.4 <D4> (thread=DistributedCache, member=2): Asking member 1 for 128 primary partitions Cache Statistics of 'People': - Total Entries of Cache -----> 3 - Used Memory (Bytes) --------> 883920 - Used Memory (MB) -----------> 0.843 - Object Average Size --------> 294640 I hope that this post could save you some time when calculate the total size of Coherence cache became a requirement for your high scalable system using data grids. See you!
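    Assuming both classes are compiled with coherence.jar on the classpath, running the test amounts to launching the populate program first and then the size-calculation program within its five-minute wait window. A hypothetical command-line sketch (the classpath entries are illustrative):

      # terminal 1: create and populate the People cache, then sleep for 5 minutes
      java -cp classes:lib/coherence.jar com.oracle.coherence.demo.CreatePeopleCacheAndPopulateWithData
      # terminal 2: while the first JVM is still running, calculate the cache size
      java -cp classes:lib/coherence.jar com.oracle.coherence.demo.CalculateTheSizeOfPeopleCache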

    Read the article

  • We Need More Migration!

    - by rickramsey
    source Eva Mendez says, "Oye chico, do you really want to keep your data in that tired legacy file system when it could be enjoying encryption, compression, deduplication, snapshots, remote replication and other benefits provided by ZFS in Oracle Solaris 11? It's really not that hard to cross over. If you know how." "I don't know how, me dices? Esta bien, papacito. Go to OTN. Take my word for it. They know how." <blushing> Aw shucks, Eva. Anything for you! </blushing> The Best Way to Migrate Data From Legacy File Systems to ZFS To migrate data from a legacy filesystem to ZFS in Oracle Solaris 11, you need to install the shadow-migration package and enable the shadowd service. Then follow the simple procedure described by Dominic Kay. How to Update to Oracle Solaris 11 Using the Image Packaging System Oracle Solaris 11.1 has been released. You can upgrade using either Oracle's official Solaris release repository or, if you have a support contract, the Support repository. Peter Dennis explains how. How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11 How to use the Oracle Solaris 8 P2V (physical to virtual) Archiver tool, which comes with Oracle Solaris Legacy Containers, to migrate a physical Oracle Solaris 8 system with Oracle Database and an Oracle Automatic Storage Management file system into an Oracle Solaris 8 branded zone inside an Oracle Solaris 10 guest domain on top of an Oracle Solaris 11 control domain. - Ricardo Website Newsletter Facebook Twitter
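    As a quick taste of the shadow-migration procedure Dominic describes, here is a sketch assuming Oracle Solaris 11 defaults; the pool, dataset and legacy path names are made up for illustration:

      # install the shadow migration package and enable the shadowd service
      pkg install shadow-migration
      svcadm enable shadowd
      # create the new ZFS dataset with the shadow property pointing at the legacy data
      zfs create -o shadow=file:///export/legacy_data rpool/export/newdata
      # check migration progress
      shadowstat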

    Read the article

  • Oracle Installation issue on Oracle Linux 6.4

    - by Pradhyoth
    I am trying to install Oracle 11g (11.2.0.1.0) on Oracle Linux 6.4 (a remote server). I get the following errors while the Database Configuration Assistant is running: ORA-01092: ORACLE instance terminated. Disconnection forced ORA-48210: Relation Not Found Can someone help me with these errors? I can't figure out what exactly I have to do to solve this issue. I have done the same install on Oracle Linux 5.7 but never faced this issue before. The only problem during installation (which happens on 5.7 too) is that the required-packages check fails, but upon checking, it seems much higher versions are already installed. Also, I can't do a yum update because the system seems to have connectivity issues with public-yum.oracle.com; I can't ping it (even though the IP address gets resolved).
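    Not a solution from the thread, but generally the first diagnostic step for an ORA-01092 during DBCA is to read the DBCA logs and the instance alert log, which name the underlying failure. A hedged sketch using the default 11gR2 locations (your ORACLE_BASE and SID will differ; "orcl" is just a placeholder):

      # DBCA logs (default location)
      ls $ORACLE_BASE/cfgtoollogs/dbca/
      # instance alert log under the ADR
      tail -n 100 $ORACLE_BASE/diag/rdbms/orcl/orcl/trace/alert_orcl.log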

    Read the article

  • Information Indepth Newsletter - Linux Edition

    - by Paulo Folgado
    INFORMATION INDEPTH NEWSLETTERLinux Edition February 2011 Stay Connected:  NEWS Now Available: Oracle Linux 6 Get the latest release of Oracle Linux 6, which includes Unbreakable Enterprise Kernel.Download Oracle Linux 6 Read More Customers Succeed by Using Oracle Exadata with Oracle Linux Watch IT executives from Bank of America, Linkshare, and Johns Hopkins as they talk about the business challenges they faced and why they chose to use Oracle Linux along with Oracle Exadata as the solution. Watch Now Video Interview: Oracle Senior Vice President Wim Coekaerts Watch Wim Coekaerts, senior vice president, Linux and Virtualization Engineering, as he talks about use cases for Oracle VM Templates as well as the Unbreakable Enterprise Kernel for Linux.Watch Now Hot Off the Press: Migrate Your IBM AIX Environment to Oracle Linux This new white paper provides recommendations for planning and implementing the migration of applications from an IBM Power System running AIX to Oracle's Sun Fire X4800 Server with Intel Xeon 7560 Processor running Oracle Linux 5.5.Read More  Back to Top BLOGOSPHERE Just Launched: The Oracle Linux Blog Follow our new Oracle Linux blog  to hear the latest updates, product news, upcoming events, and all the latest happenings, directly from the Linux team at Oracle. Back to Top TECH DIVE NEW: Linux/Oracle Solaris CommandComparo Site from Oracle Technology NetworkThis site gives equivalent command syntax in Oracle Solaris 10 and Oracle Enterprise Linux 5 for common administrative tasks--focusing particularly on tasks that have tricky syntax or that you frequently need to double check. It acts as a quick reference for administrators who operate in these two OS environments. Free Download: Oracle Linux Release 5.6Did you know that by using Oracle Linux 5.5 or 5.6 along with the Unbreakable Enterprise Kernel, you can get all the benefits of Linux mainline kernel 2.6.32 and more, right now, without the need to reinstall or migrate to a new operating system such as RHEL6?Read Release NotesDownload Oracle Linux 5.6 LSB 4.0 Certification Completed for Oracle Linux 5.5Oracle Linux 5.5 with Unbreakable Enterprise Kernel successfully completed the LSB 4.0 certification.  Back to Top WEBCASTS Boost Your Linux Performance with Oracle's Enhancements in Infiniband and RDSRegister to hear Director of Kernel Engineering Chris Mason cover scalability and performance improvements in Linux environment. Get the Facts Oracle's Unbreakable Enterprise KernelSVP Wim Coekaerts and Senior Director Monica Kumar cover the facts about and benefits of using Unbreakable Enterprise Kernel.  View Other Webcasts on Demand   Back to Top EVENTS Collaborate 2011April 10-14 Orlando, Florida Cloud Summit Events, WorldwideVarious dates (check the city for date/time of event) Datacenter Efficiency Events WorldwideThese events include Linux and Oracle VM sessions.Various dates (check the city for date/time of event) Virtualization Events in North America Find an Oracle Event  Back to Top EDUCATION Get Oracle Linux Certified from Oracle University Oracle University offers courses in both Oracle Linux and the administration of Oracle Database on Linux.  Back to Top CUSTOMER SPOTLIGHT Pella Corporation Improves IT Performance and Efficiency with Oracle Linux and Oracle VM To improve IT performance and efficiency and lower operational costs, Pella Corporation, has standardized on Oracle VM and Oracle Linux. 
Read More Disney Store Deploys POS in 330 Stores and 7 Countries on Oracle Linux Disney Store is running 1,500 registers worldwide on a broad Oracle technology software stack including Oracle Database 11g, Oracle Fusion Middleware, and Oracle Linux. Read More Back to Top PARTNER SPOTLIGHT Emulex and Oracle Announce Data Integrity Features The Unbreakable Enterprise Kernel provides data integrity checking between Oracle Database applications and Emulex 8Gb/s LightPulse Fibre Channel Host Bus Adapters. Read More Dell Inc. Dell Inc. tested and validated configurations support Oracle Linux. Back to Top STAY IN TOUCH Follow @ORCL_Linux on Twitter for the latest penguin tweets Bookmark Oracle.com/Linux Read the Oracle Linux blog Back to Top  Oracle Information InDepth newsletters bring targeted news, articles, customer stories, and special offers to business people who want to find out how to streamline enterprise information management, measure results, improve business processes, and communicate a single truth to their constituents. Please send questions or comments to [email protected]. For answers to questions about subscribing, unsubscribing, and managing your Oracle e-mail communications preferences, please see the Oracle E-Mail Communications page. Copyright © 2011, Oracle Corporation and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor is it subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. 

    Read the article

  • Install and upgrade strategies for Oracle Enterprise Manager 12c - Upcoming Webcasts with live demos

    - by Anand Akela
    At Oracle OpenWorld 2011, we launched Oracle Enterprise Manager 12c, the only complete cloud management solution for your enterprise cloud. With the new release of Oracle Enterprise Manager Cloud Control 12c, the installation and upgrade process has been enhanced to provide a fast and smooth install experience. In the upcoming webcasts, Oracle Enterprise Manager experts will discuss the installation and upgrade strategies for Oracle Enterprise Manager Cloud Control 12c. These webcasts will include live demonstrations of the install and upgrade processes. In the webcast on November 17th, we will cover the installation steps and provide recommendations to set up a new Oracle Enterprise Manager Cloud Control 12c environment. We'll also provide a live demonstration of the complete installation process. Upgrading your Oracle Enterprise Manager environment can be a challenging and complex task, especially with large environments consisting of hundreds or thousands of targets. In the webcast on November 18th, we'll describe key facts that administrators must know before upgrading their Enterprise Manager system as well as introduce the different approaches for an upgrade. We'll also walk you through the key steps for upgrading an existing Enterprise Manager 11g (or 10g) Grid Control to Oracle Enterprise Manager Cloud Control 12c. In addition to the live webcasts on the Oracle Enterprise Manager Cloud Control 12c install and upgrade processes, please consider attending the replay of the Oracle Enterprise Manager Ops Center webcast with live Q&A. Schedule and registration links for the upcoming webcasts: Oracle Enterprise Manager Ops Center: Global Systems Management Made Easy (replay) on November 17 at 10 a.m. PT and December 1 at 10 a.m. PT; Oracle Enterprise Manager Cloud Control 12c Installation Overview on November 17 at 8 a.m. PT; Upgrade Smoothly to Oracle Enterprise Manager Cloud Control 12c on November 18 at 8 a.m. PT. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter Facebook YouTube Linkedin

    Read the article

  • Launch of the Oracle Applications France blog

    - by user816714
    Here it is at last! Welcome to our new Oracle Applications France blog. Why a blog? To be closer to you, dear users! Because we know that a CIO, an HR director or a business owner will not be looking for the same solutions, we are launching this communication channel to start a constructive and open dialogue. To carry out this mission, our marketing team will be at the controls and will offer you advice and testimonials as well as multimedia content in pictures and video. By following the Oracle Applications France blog you will go behind the scenes of the big Oracle Applications news. We will try to offer you a different look at our events, you will be the first to hear about our product launches, and you will meet our experts through exclusive interviews and analyses. Our mission? We want to become your guide and help you get a better grasp of the Oracle Applications offering. Within the extremely dense range of solutions that Oracle provides, we will help you find your way more quickly. So please do not hesitate to send us your feedback and questions, and to comment on our future posts on the Oracle Applications France blog. We also recommend following our best experts on social media. Here is a first selection: Want to become an expert on our CRM solutions? Follow this Twitter account: http://twitter.com/#!/OracleCRM Become a fan of this Facebook page: http://www.facebook.com/OracleCRM Watch our YouTube videos: http://www.youtube.com/OracleCRM And for English speakers, visit the Oracle CRM blog in English: http://blogs.oracle.com/crm Live and breathe product innovation, a.k.a. PLM? Follow this Twitter account: http://twitter.com/#!/agileplm Become a fan of this Facebook page: https://www.facebook.com/OracleAgilePLM Watch our YouTube videos: http://www.youtube.com/OracleAgilePLM And for English speakers, visit the Oracle PLM blog in English: http://blogs.oracle.com/plm/ Can't live without the Oracle Fusion Applications suite of business applications? Become a fan of this Facebook page: https://www.facebook.com/OracleApps Listen to our latest podcast in English: http://streaming.oracle.com/ebn/podcasts/media/10118954_Fusions_Applications_061011.mp3 And for English speakers, visit the Oracle Applications blog in English: http://blogs.oracle.com/applications/

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet.  We demonstrated the calculations against sample data for a small set of accounts.  While this worked fine, in the real-world the problem is much bigger because the amount of data is much bigger.  So much bigger that our approach in the previous post won’t scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer: The actual data that needs to be used lives in a database, not in a spreadsheet The actual data is much, much bigger- too big to fit into the normal R memory space and too big to want to move across the network The overall process needs to run fast- much faster than a single processor The actual data needs to be kept secured- another reason to not want to move it from the database and across the network And the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRR’s can be calculated as part of the data warehouse refresh processes In this post, we will show how we moved from sample data environment to working with full-scale data.  This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database At this point, we have some sample data and our IRR function.  We were at a similar point in our customer proof-of-concept exercise- we had sample data but we did not have the full customer data yet.  So our database was empty.  But, this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the).  The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create().  The code also shows how we can access the database table IRR_DATA as if it was a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table: At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA.  So, we now proceeded to test our R function working with database data. As our first test, we retrieved the data from a single account from the IRR_DATA table, pull it into local R memory, then call our IRR function.  This worked.  No SQL coding required! Going from Crawling to Walking Now that we have shown using our R code with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts.  In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series.  Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply().  You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via  “R CMD INSTALL myIRR”). 
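    The actual call appeared as a screenshot in the original post, so the following is only a hypothetical reconstruction of what an ore.groupApply invocation of this shape might look like; the argument order and the parallel flag are from my reading of the Oracle R Enterprise documentation, not the author's code:

      # split IRR_DATA by ACCOUNT inside the database, apply SimpleMWRR to each group,
      # and let ORE run the function in embedded R engines on the database server
      results <- ore.groupApply(
        IRR_DATA,                           # ore.frame proxy for the IRR_DATA table
        IRR_DATA$ACCOUNT,                   # grouping column
        function(dat) myIRR::SimpleMWRR(dat),
        parallel = TRUE)                    # assumption: allow parallel embedded R engines
      irr_by_account <- ore.pull(results)   # pull only the small combined result to the client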
The interesting thing about ore.groupApply is that the calculation is not actually performed in my desktop R environment from which I am running.  What actually happens is that ore.groupApply uses the Oracle database to perform the work.  And the Oracle database is what actually splits the IRR_DATA table by ACCOUNT.  Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function.  Then the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time.  Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care.  Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program.  Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server- this is both a performance benefit because network transmission of large amounts of data take time and a security benefit because it is harder to protect private data once you start shipping around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running ore.groupApply is rather nice, but it still has the drawback that I run this from a desktop R instance.  This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation.  But, this is not an issue for ORE.  Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations.  That is extremely exciting and the way we actually did these calculations in the customer proof. As part of Oracle R Enterprise, it provides a SQL equivalent to ore.groupApply which it refers to as “rqGroupEval”.  To use rqGroupEval via SQL, there is a bit of simple setup needed.  Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database’s pipeline table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table.  The next step is to define our R function to the database.  We do that via a call to ORE’s rqScriptCreate. Now we can test it.  The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax.  The first argument to irr_dataGroupEval is a cursor defining our input.  You can add additional where clauses and subqueries to this cursor as appropriate.  The second argument is any additional inputs to the R function.  The third argument is the text of a dummy select statement.  The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return.  The fourth argument is the column of the input table to split/group by.  
The final argument is the name of the R function as you defined it when you called rqScriptCreate(). The Real-World Results In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example.  For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so.  In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that.  And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.  For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine.  As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations.  To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features.  Let’s look at the SQL used in the customer proof: Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function.  That is all we need to do to enable Oracle to use parallel R engines. Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran this during the proof of concept (hint: you might need to right-click on these images to be able to view the images full-screen to see the entire image): From the above, you can notice a few things (numbers 1 thru 5 below correspond with highlighted numbers on the images above.  You may need to right click on the above images and view the images full-screen to see the entire image): The SQL completed in 110 seconds (1.8minutes) We calculated rate of returns for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation) We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation) We ran with 72 degrees of parallelism spread across 4 database servers Most of our 110seconds was spent in the “External Procedure call” event On average, we performed 8,200 executions of our R function per second (110s/911k accounts) On average, each execution was passed 110 rows of data (103m detail rows/911k accounts) On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods) On average, we processed over 900,000 rows of database data in R per second (103m detail rows/110s) R + Oracle R Enterprise: Best of R + Best of Oracle Database This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time.  While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues).  It also showed that we could calculate 5 time periods of rate of returns for almost a million individual accounts in less than 2 minutes. 
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world.  In that next post, instead of having our data in an Oracle database, our data will live in Hadoop and we will how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article
