Search Results

Search found 4640 results on 186 pages for 'numerical integration'.


  • Running a simple integration scenario using the Oracle Big Data Connectors on a Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data..), an English setter could be seen as the link to the right data.

    Data, data, data: we live in a world where data technology, based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on, plays a predominant role in our lives. More and more technologies are used to analyze and track our behavior, to detect patterns, and to propose "the best/right user experience", from the Google Ad services to telco companies and large consumer sites (like Amazon :) ). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to process, analyze and understand the trends and offer new services to users.

    Some of these "data feeds" are raw, unstructured data and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to the MapReduce parallel pattern and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers. Several Oracle products use the distribute/aggregate pattern for data calculation (Coherence, NoSQL Database, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar.

    Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or an equivalent Hadoop cluster. In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 instance and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS. The whole setup is fairly simple:

    - Install Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 on a Linux x64 server (or VirtualBox appliance).
    - Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
    - Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
    - Check the Java version of your Linux server with the command: java -version
        java version "1.7.0_40"
        Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
        Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)
    - Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1.
    - Modify your .bash_profile:
        export HADOOP_HOME=/u01/hadoop-1.2.1
        export HIVE_HOME=/u01/hive-0.11.0
        export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
      (also see my sample .bash_profile)
    - Set up SSH trust for the Hadoop processes. This is a mandatory step; in our case we have to establish a "local trust", as we are using a single-node configuration: copy the new public key to the list of authorized keys, then connect and test the SSH setup to your localhost (a sketch of this step follows below).

    We will run a "pseudo-Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes.
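    For illustration, here is a minimal sketch of the local SSH trust step. These are standard OpenSSH commands; the key type and file names are the usual defaults and may differ on your system:

        # Generate a key pair with an empty passphrase and trust it locally,
        # so the Hadoop start scripts can ssh to localhost without a prompt.
        ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys
        ssh localhost 'echo SSH trust OK'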
    We need to "fine tune" some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the files core-site.xml, hdfs-site.xml and mapred-site.xml. Check that the Hadoop binaries are referenced correctly from the command line by executing: hadoop version.

    As Hadoop is managing our "clustered" HDFS file system, we have to create "the mount point" and format it. The mount point is declared in core-site.xml. The layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce uses /mapred/..., HDFS uses the /dfs/... layout structure). Format the HDFS Hadoop file system, then start the Java components for the HDFS system. As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configuration.

    Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data :) ) and move it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step). You can create and use a Hive DB, but in our case we will make a simple integration of "raw data" through the creation of an external table against a local Oracle instance (on the same Linux box we run the one-node Hadoop HDFS cluster and one Oracle DB).

    Download some public "big data"; I use the site http://france.meteofrance.com/france/observations, from which I can get *.csv files for my big data simulations :). Here is the data layout of my example file:

    Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system (see picture below). Modify your environment in order to access the connector libraries, and make the following test:

        [oracle@dg1 bin]$ ./hdfs_stream
        Usage: hdfs_stream locationFile
        [oracle@dg1 bin]$

    Load the data into the Hadoop HDFS file system:

        hadoop fs -mkdir bgtest_data
        hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt
        hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt

        [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
        [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

    Check the content of the HDFS with the browser UI. Start the Oracle database, and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own DB SID; replace it with yours):

        #!/bin/bash
        export ORAENV_ASK=NO
        export ORACLE_SID=dg1
        . oraenv
        sqlplus /nolog <<EOF
        CONNECT / AS sysdba;
        CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
        CREATE USER BGUSER IDENTIFIED BY oracle;
        GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
        GRANT EXECUTE ON sys.utl_file TO BGUSER;
        GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_LOG_DIR AS '/u01/BG_TEST/logs';
        GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_DATA_DIR AS '/u01/BG_TEST/data';
        GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR TO BGUSER;
        EOF

    Put the following in a file named t3.sh and make it executable:

        hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
        oracle.hadoop.exttab.ExternalTable \
        -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
        -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
        -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
        -D oracle.hadoop.exttab.columnCount=7 \
        -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
        -D oracle.hadoop.connection.user=BGUSER \
        -D oracle.hadoop.exttab.printStackTrace=true \
        -createTable --noexecute

    Then test the creation of the external table with it:

        [oracle@dg1 samples]$ ./t3.sh
        ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
        Oracle SQL Connector for HDFS Release 2.2.0 - Production
        Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
        Enter Database Password:
        The create table command was not executed.
        The following table would be created.

        CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
        (
          "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000),
          "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000)
        )
        ORGANIZATION EXTERNAL
        (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY "BGT_DATA_DIR"
          ACCESS PARAMETERS
          (
            RECORDS DELIMITED BY 0X'0A'
            CHARACTERSET AL32UTF8
            STRING SIZES ARE IN CHARACTERS
            PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
            FIELDS TERMINATED BY 0X'2C'
            MISSING FIELD VALUES ARE NULL
            (
              "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000),
              "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000)
            )
          )
          LOCATION ( 'osch-20131022081035-74-1' )
        ) PARALLEL REJECT LIMIT UNLIMITED;

        The following location files would be created.
        osch-20131022081035-74-1 contains 1 URI, 54103 bytes
        54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results:

        The create table command succeeded.
        (The same CREATE TABLE statement as above is executed, this time with
        LOCATION ( 'osch-20131022081719-3239-1' ).)
        The following location files were created.
        osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
        54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    This is the view from SQL Developer. And finally, the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:

        SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";

          COUNT(*)
        ----------
              1151

    In a next post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured-data world can be integrated into an Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the related Oracle technologies:

    NoSQL:
    - Introduction to Oracle NoSQL Database
    - Using Oracle NoSQL Database
    Big Data:
    - Introduction to Big Data
    - Oracle Big Data Essentials
    - Oracle Big Data Overview
    Oracle Data Integrator:
    - Oracle Data Integrator 12c: New Features
    - Oracle Data Integrator 11g: Integration and Administration
    - Oracle Data Integrator: Administration and Development
    - Oracle Data Integrator 11g: Advanced Integration and Development
    Oracle Coherence 12c:
    - Oracle Coherence 12c: New Features
    - Oracle Coherence 12c: Share and Manage Data in Clusters
    Oracle GoldenGate:
    - Oracle GoldenGate 11g: Fundamentals for Oracle
    - Oracle GoldenGate 11g: Fundamentals for SQL Server
    - Oracle GoldenGate 11g: Fundamentals for DB2
    - Oracle GoldenGate 11g: Fundamentals for Teradata
    - Oracle GoldenGate 11g: Fundamentals for HP NonStop
    - Oracle GoldenGate 11g Management Pack: Overview
    - Oracle GoldenGate 11g: Troubleshooting and Tuning
    - Oracle GoldenGate 11g: Advanced Configuration for Oracle

    Other resources: Apache Hadoop: http://hadoop.apache.org/ is the homepage for these technologies. "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active googling will also give you more references.

    About the author: Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and other telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic/WebCenter, Content, BPM/SOA, Identity & Security, GoldenGate, Virtualization, Unified Communications Suite) throughout the EMEA region.

    Read the article

  • How to Avoid Your Next 12-Month Science Project

    - by constant
    While most customers immediately understand how the magic of Oracle's Hybrid Columnar Compression, intelligent storage servers and flash memory makes Exadata uniquely powerful against home-grown database systems, some people think that Exalogic is nothing more than a bunch of x86 servers, a storage appliance and an InfiniBand (IB) network, built into a single rack. After all, isn't this exactly what the High Performance Computing (HPC) world has been doing for decades?

    On the surface, this may be true. And some people tried exactly that: they tried to put together their own version of Exalogic, only to discover that there's a lot more to building a system than buying hardware and assembling it. IT is not Ikea. Why is that so? Could it be there's more going on behind the scenes than merely putting a bunch of servers, a storage array and an InfiniBand network into a rack? Let's explore some of the special sauce that makes Exalogic unique and hard to copy, so you can save yourself from your next 6- to 12-month science project that distracts you from doing real work that adds value to your company.

    Engineering Systems is Hard Work! The backbone of Exalogic is its InfiniBand network: 4 times the bandwidth of even 10 Gigabit Ethernet, and only about a tenth of its latency. What a potential for increased scalability and throughput across the middleware and database layers! But InfiniBand is a beast that needs to be tamed: it is true that Exalogic uses the standard, open-source OpenFabrics Enterprise Distribution (OFED) InfiniBand driver stack. Unfortunately, this software has been developed by the HPC community with maximum speed in mind (which is good) but, despite the name, without many other enterprise-class requirements (which is less good). Here are some of the improvements that Oracle's InfiniBand development team had to add to the OFED stack to make it enterprise-ready, simply because typical HPC users didn't need them:

    - More than 100 bug fixes in the pieces not related to the Message Passing Interface (MPI), the protocol that HPC users rely on most of the time but which is less useful in the enterprise.
    - Performance optimizations and tuning across the whole IB stack: from switches, Host Channel Adapters (HCAs) and drivers to low-level protocols, middleware and applications. Yes, even the standard HPC IB stack could be improved in terms of performance.
    - Ethernet over IB (EoIB): Exalogic uses InfiniBand internally to reach high performance, but it needs to play nicely with the datacenters around it. That's why Oracle added Ethernet over InfiniBand technology, which allows for creating many virtual 10GbE adapters inside Exalogic's nodes that are aggregated and connected to Exalogic's IB gateway switches. While this is an open standard, it's up to the vendor to implement it. In this case, Oracle integrated the EoIB stack with Oracle's own IB-to-10GbE gateway switches and made it fully virtualized from the beginning. This means that Exalogic customers can completely rewire their server infrastructure inside the rack without having to physically pull or plug a single cable - a must-have for every cloud deployment. Anybody who wants to match this level of integration would need to add an InfiniBand switch development team to their project. Or just buy Oracle's gateway switches, which conveniently ship with a whole server infrastructure attached!
    - IPv6 support for InfiniBand's Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), TCP/IP over IB (IPoIB) and EoIB protocols. Because no IPv6 = not very enterprise-class.
    - HA capability for SDP. High availability is not a big requirement for HPC, but for enterprise-class application servers it is. Every node in Exalogic's InfiniBand network is connected twice for redundancy. If any cable or port or HCA fails, there's always a replacement link ready to take over. This requires extra magic at the protocol level to work. So in addition to WebLogic's failover capabilities, Oracle implemented IB automatic path migration at the SDP level to avoid unnecessary failover operations at the middleware level.
    - Security, for example spoof protection. Another feature that is less important for traditional users of InfiniBand, but very important for enterprise customers.
    - InfiniBand partitioning and Quality of Service (QoS): one of the first questions we get from customers about Exalogic is: "How can we implement multi-tenancy?" The answer is to partition your IB network, which effectively creates many networks that work independently and that are protected at the lowest networking layer possible. In addition, QoS allows administrators to prioritize traffic flow in multi-tenant environments so they can keep their service levels where it matters most.
    - Resilient IB fabric management: InfiniBand is a self-managing network, so a lot of the magic lies in coming up with the right topology and in teaching the subnet manager how to properly discover and manage the network. Oracle's InfiniBand switches come with pre-integrated, highly available fabric management with seamless integration into Oracle Enterprise Manager Ops Center.

    In short: Oracle elevated the OFED InfiniBand stack into an enterprise-class networking infrastructure. Many years and multiple teams' worth of manpower went into the above improvements - this is something you can only get from Oracle, because no other InfiniBand vendor can give you these features across the whole stack!

    Exabus: Because it's not About the Size of Your Network, it's How You Use it! So let's assume that you somehow were able to get your hands on an enterprise-class IB driver stack. Or maybe you don't care and are just happy with the standard OFED one? Either way, the next step is to actually leverage that InfiniBand performance. Here are the choices: (1) use traditional TCP/IP on top of the InfiniBand stack, or (2) develop your own integration between your middleware and the lower-level (but faster) InfiniBand protocols.

    While more bandwidth is always a good thing, it's actually the low latency that enables superior performance for your applications on any networking infrastructure: the lower the latency, the faster the response travels through the network and the more transactions you can close per second. The reason InfiniBand is such a low-latency technology is that it gets rid of most, if not all, of the traditional networking protocol stack: data is literally beamed from one region of RAM in one server into another region of RAM in another server, with no kernel/driver/UDP/TCP or other networking stack overhead involved! Which makes option 1 a no-go: adding TCP/IP on top of InfiniBand is like adding training wheels to your racing bike. It may be OK in the beginning and for development, but it's not quite the performance IB was meant to deliver. Which leaves only option 2: integrating your middleware with fast, low-level InfiniBand protocols.
    And this is what Exalogic's "Exabus" technology is all about. Here are a few Exabus features that help applications leverage the performance of InfiniBand in Exalogic:

    - RDMA and SDP integration at the JDBC driver level (SDP), for Oracle WebLogic (SDP), Oracle Coherence (RDMA), Oracle Tuxedo (RDMA) and the new Oracle Traffic Director (RDMA) on Exalogic. Using these protocols, middleware components can communicate much faster with each other and with the Oracle database than over standard networking protocols.
    - Seamless integration of Ethernet over InfiniBand from Exalogic's gateway switches into the OS.
    - Oracle WebLogic optimizations for handling massive amounts of parallel transactions. Because if you have an 8-lane Autobahn, you also need to improve your on-ramps so you can feed it with many cars in parallel.
    - Integration of WebLogic with Oracle Exadata for faster performance, optimized session management and failover.

    As you see, "Exabus" is Oracle's word for all the InfiniBand enhancements Oracle put into Exalogic: OFED stack enhancements, protocols for faster IB access, and InfiniBand support and optimizations at the virtualization and middleware levels, all working together to deliver the full potential of InfiniBand performance. Who else has 100% control over their middleware so they can develop their own low-level protocol integration with InfiniBand? Even if you take an open-source approach, you're looking at years of development work to create, test and support a whole new networking technology in your middleware!

    The Extras: Less Hassle, More Productivity, Faster Time to Market. And then there are the other advantages of Engineered Systems that hold for Exalogic just as they do for every other Engineered System:

    - One simple purchasing process: no headaches due to endless RFPs and no "Will X work with Y?" uncertainties.
    - Everything has been engineered together: all kinds of bugs and problems have already been fixed at the design level that would otherwise only have manifested themselves after you had built the system from scratch.
    - Everything is built, tested and integrated at the factory. Less integration pain for you, faster time to market.
    - Every Exalogic machine worldwide is identical to Oracle's own machines in the lab: instant replication of any problems you may encounter, faster time to resolution.
    - Simplified patching, management and operations.
    - One throat to choke: imagine the finger-pointing hell of systems put together from several different vendors. Oracle's Engineered Systems have a single phone number that customers can call to get their problems solved.

    For more business-centric values, read The Business Value of Engineered Systems.

    Conclusion: Buy Exalogic, or Get Ready for a 6-12 Month Science Project. And here's the reason why it's not easy to "build your own Exalogic": there's a lot of work required to make such a system fly. In fact, anybody who starts to "just put together a bunch of servers and an InfiniBand network" is really looking at a 6-12 month science project. And the outcome is likely not to be very enterprise-class. And it won't have Exalogic's performance either. Because building an Engineered System is literally rocket science: it takes a lot of time, effort, resources and many iterations of design/test/analyze/fix to build such a system. That's why InfiniBand was reserved for HPC scientists for such a long time.
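    To give a feel for what the SDP integration looks like from the application side, here is a hedged sketch of the documented pattern for pointing a JDBC data source at an SDP listener. The (PROTOCOL=sdp) address and the oracle.net.SDP system property are the documented knobs; the host, port and service name below are made up:

        # Hypothetical server start arguments and connect string for SDP.
        JAVA_OPTIONS="$JAVA_OPTIONS -Doracle.net.SDP=true"
        JDBC_URL='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=sdp)(HOST=192.168.10.1)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=mydb)))'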
    And only Oracle can bring the power of InfiniBand in an enterprise-class, ready-to-use, pre-integrated version to customers, without the develop/integrate/support pain. For more details, check the new Exalogic overview white paper, which was updated only recently.

    P.S.: Thanks to my colleagues Ola, Paul, Don and Andy for helping me put together this article!

    Read the article

  • Hudson.. another Continuous Integration tool

    - by Narendra Tiwari
    In my previous posts I discussed CruiseControl.NET and its legacy support for .NET development. Hudson is yet another continuous integration tool. Like CCNet, Hudson is free, and it is built in Java.

    - CCNet has legacy support for .NET applications, whereas Hudson can easily be configured for both environments (.NET and Java).
    - One of the major differences between CCNet and Hudson is Hudson's richer GUI, which provides interactive screens for project configuration; in CCNet we have to play with a few XML configuration files.

    Both tools provide the basic features of continuous integration, e.g.:

    - Source control configuration
    - Code compilation/build
    - Ad hoc plugin tools configured to run along with compilation

    Support for ad hoc tools is bigger with CCNet: almost every source control plugin is available for CCNet, whereas Hudson supports a more limited set of source control servers. An interesting point is that a whole CI system has two major parts: the work performed by the build tool, and the rest. The build tool takes care of all the ad hoc plugin tools, so it does not matter if the CI tool has no plugin for a given tool: if that tool provides command-line support, it can be configured in the build tool, and the build tool in turn is configured with the CI tool. For example, if I have a build script configured in MSBuild, CCNet can easily be switched to Hudson. We need not change anything in the build script; we just configure MSBuild on Hudson, pass the path of the script file, and that's it... all is the same (see the sketch below).

    Hudson resources:
    - https://hudson.dev.java.net/
    - http://wiki.hudson-ci.org/display/HUDSON/Meet+Hudson
    - http://wiki.hudson-ci.org/display/HUDSON/Plugins
    - http://callport.blogspot.com/2009/02/hudson-for-net-projects.html

    Java support on CCNet:
    http://confluence.public.thoughtworks.org/display/CC/Getting+Started+With+CruiseControl?focusedCommentId=19988484#comment-19988484

    Please share your thoughts...
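    As an illustration of that tool-agnostic hand-off, here is a hedged sketch of the command-line build step such a Hudson job might run. The project file name and options are made up; on Windows this would live in an "Execute Windows batch command" step rather than a shell step:

        # Hudson build step: delegate the whole build to the MSBuild script,
        # so switching CI servers never requires touching the build logic.
        msbuild.exe Build.proj /t:Build /p:Configuration=Release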

    Read the article

  • Oracle SOA Suite for healthcare integration Dashboard By Nitesh Jain

    - by JuergenKress
    Oracle SOA Suite Healthcare came up with a new way of monitoring, where users can configure a dashboard and follow dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging of runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to refresh automatically at set intervals.

    A dashboard shows the following information:
    - Status: the current status of the endpoint, such as Running, Idle, Disabled, or Errors.
    - Messages Sent: the number of messages sent by the endpoint in the specified time period.
    - Messages Received: the number of messages received by the endpoint in the specified time period.
    - Errors: the number of messages with errors for the endpoint in the given time period.
    - Last Sent: the date and time the last message was sent from the endpoint.
    - Last Received: the date and time the last message was received from the endpoint.
    - Last Error: the date and time of the last error for the endpoint.

    It also shows a detailed view of a specific endpoint:
    - The document type.
    - The number of messages received per second.
    - The total number of messages processed in the specified time period.
    - The average size of each message.

    For more information please visit Nitesh Jain's blog. SOA & BPM Partner Community: for regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Suite,SOA heathcare,soa health,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • E-Business Integration with SSO using AccessGate

    - by user774220
    Moving away from the legacy Oracle SSO, Oracle E-Business Suite (EBS) came up with EBS AccessGate as the way forward to provide Single Sign-On with Oracle Access Manager (OAM). As opposed to an AccessGate in OAM terminology, EBS AccessGate has no specific connection with OAM with respect to configuration. Instead, EBS AccessGate uses the header variables sent from the SSO system to create the native user session, like any other SSO-enabled web application.

    E-Business Suite Integration with Oracle Access Manager. It is a known fact that E-Business Suite requires Oracle Internet Directory (OID) as the user repository to enable Single Sign-On, because E-Business Suite needs to be registered with OID for Single Sign-On. Additionally, E-Business Suite uses "orclguid" in OID to map the Single Sign-On user to the corresponding local user profile. During authentication, EBS AccessGate expects the SSO system to return orclguid and the EBS username (stored as a user attribute in the SSO user store) in two header variables, USER_ORCLGUID and USER_NAME respectively. The following diagram depicts the authentication flow once the SSO system returns the EBS username and orclguid after successful authentication.

    Topic to brainstorm: EBS AccessGate as a generic SSO enablement solution for E-Business Suite. Even though EBS AccessGate is suggested as an integration approach between OAM and Oracle E-Business Suite, this section attempts to look at EBS AccessGate as a generic solution approach to provide SSO to Oracle E-Business Suite using any web SSO solution. From the above points, the only dependency on the SSO system is that it should be able to return the corresponding orclguid from the OID which is configured with the E-Business Suite. This can be achieved by a variety of approaches:

    - By using the same OID referred to by E-Business Suite as the Single Sign-On user store.
    - If the SSO system is using a different user store, then either:
      - use DIP or OIM to sync orclguid from the E-Business Suite OID to the SSO user store, or
      - use OVD to provide an LDAP view where orclguid from the E-Business Suite OID is part of the user entity in the user store referred to by the SSO system.
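    For a concrete feel of the dependency on orclguid, here is a hedged sketch of fetching it from OID with a standard ldapsearch. The host, port, bind DN, search base and uid value are all made up; only the orclguid attribute name is the one the flow above relies on:

        # Look up the orclguid that the SSO system must hand to
        # EBS AccessGate in the USER_ORCLGUID header variable.
        ldapsearch -h oid.example.com -p 3060 \
          -D "cn=orcladmin" -w "$OID_PASSWORD" \
          -b "cn=Users,dc=example,dc=com" "(uid=jdoe)" orclguid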

    Read the article

  • Error when I try to install the desktop integration features for OpenOffice

    - by PENG TENG
    peng@peng-ThinkPad-SL410:~$ cd '/home/peng/Downloads/en-US/DEBS/desktop-integration'
    peng@peng-ThinkPad-SL410:~/Downloads/en-US/DEBS/desktop-integration$ sudo dpkg -i *.deb
    (Reading database ... 357248 files and directories currently installed.)
    Unpacking openoffice.org-debian-menus (from openoffice.org3.4-debian-menus_3.4-9593_all.deb) ...
    dpkg: error processing openoffice.org3.4-debian-menus_3.4-9593_all.deb (--install):
     trying to overwrite '/usr/bin/soffice', which is also in package libreoffice-common 1:3.6.2~rc2-0ubuntu3
    /usr/bin/gtk-update-icon-cache
    gtk-update-icon-cache: Cache file created successfully.
    /usr/bin/gtk-update-icon-cache
    gtk-update-icon-cache: Cache file created successfully.
    Processing triggers for menu ...
    Processing triggers for hicolor-icon-theme ...
    Processing triggers for gnome-icon-theme ...
    Processing triggers for shared-mime-info ...
    Unknown media type in type 'all/all'
    Unknown media type in type 'all/allfiles'
    Unknown media type in type 'uri/mms'
    Unknown media type in type 'uri/mmst'
    Unknown media type in type 'uri/mmsu'
    Unknown media type in type 'uri/pnm'
    Unknown media type in type 'uri/rtspt'
    Unknown media type in type 'uri/rtspu'
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for desktop-file-utils ...
    Processing triggers for gnome-menus ...
    Errors were encountered while processing:
     openoffice.org3.4-debian-menus_3.4-9593_all.deb

    Can anyone solve the problem?
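    The failing line is a dpkg file conflict: openoffice.org-debian-menus tries to install /usr/bin/soffice, which is already owned by libreoffice-common. A commonly suggested workaround, assuming you do not need LibreOffice alongside OpenOffice, is to remove the conflicting LibreOffice packages first and then retry:

        # Remove LibreOffice (it owns /usr/bin/soffice), then retry the
        # OpenOffice desktop-integration package.
        sudo apt-get remove --purge 'libreoffice*'
        sudo dpkg -i openoffice.org3.4-debian-menus_3.4-9593_all.deb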

    Read the article

  • Siebel BIP Integration

    - by Tim Dexter
    This post is more of a bookmark for me so that I stop bugging the brown stuff out of John, the Siebel-BIP product manager. I have had multiple customers over the past two weeks asking for help around the integration. What is it capable of? How can I allow my users to click a button to run a BIP report? How can I kick off a report from a Siebel workflow? Start right here - this is a great white paper explaining what's now available with the integration using the Siebel Report Business Service. Once you have consumed that from start to finish, get on over to Oracle Support and look for the following note, which has code samples and lots of other good stuff: Siebel BI Publisher Reports Business Service (8.1.1.7+) [ID 1425724.1]. The Reports Business Service enables BI Publisher reports to be executed from the Siebel application via a Workflow Process or through scripting. The report is generated in the background by connecting to the BI Publisher server. The report output is stored in the Siebel File System and accessed from the My BI Publisher Reports view. Alternatively, using the appropriate methods, the report can be attached to an entity or sent to a particular delivery channel.

    Read the article

  • Integrating with Oracle Fusion Applications: Discovering Integration Artifacts

    - by Lionel Dubreuil
    Oracle Enterprise Repository serves as the core element of the Oracle SOA Governance solution. An industry-leading metadata repository, Oracle Enterprise Repository provides a solid foundation for delivering governance throughout the service-oriented architecture (SOA) lifecycle by acting as the single source of truth for information surrounding SOA assets and their dependencies. For Fusion Applications, the use of OER has been extended to include other integration asset types, such as interface tables, and other technical information, such as data models, tables, views, lookups, profile options, et cetera. E-Business Suite users familiar with iRepository or eTRM will recognize the functionality in Fusion Applications OER. Oracle Enterprise Repository for Fusion Applications provides a common catalog of technical information, searchable using many different mechanisms. Customers can locate technical information by the name, description or keyword of the information they are looking for. They can also search by the type of asset they are trying to locate and/or by where the asset sits in the product taxonomy. And they can see how the asset dances in the choreography of some illustrative co-existence scenarios, which are laid out both as functional flow diagrams and as technical interaction diagrams. Rajesh Raheja, software architect at Oracle, has recently posted an article on this topic: visibility and control are the key tenets of SOA governance, and the first step in integrating with Oracle Fusion Applications is to find out what integration options are available. Oracle Enterprise Repository, an industry-leading metadata repository, provides this visibility. You can find his full blog post here.

    Read the article

  • RDA and Fusion Middleware Diagnostic Framework Integration

    - by Daniel Mortimer
    Further to my last blog entry, "FMw Diagnostic Framework: Automatic Capture of Diagnostic Data Upon First Failure!", I have spent some time exploring how Remote Diagnostic Agent (RDA) integrates with the Diagnostic Framework. By default, Remote Diagnostic Agent collects information about the last 10 incidents which have been captured in the Automatic Diagnostic Repository (ADR). This information can be located via RDA's Start Page menu system. See the screenshot below.

    Screenshot - Viewing Diagnostic Framework Incident Information in a RDA Package

    Note: in the next release of RDA - version 4.30 - the Diagnostic Repository menu label will have its own position in the WebLogic managed server sub-menu hierarchy, rather than being a child menu item of the logs menu.

    Diagnostic Framework is also capable of launching the RDA engine and including the output in an incident package. This is achieved via the command "IPS GENERATE PACKAGE". The RDA output is written to DOMAIN_HOME/servers/<server name>/adr/diag/ofm/<domain name>/<server name>/incpkg/pkg_[number]/seq_[number]/rda. The RDA-collected files are best viewed (as shown in the screenshot above) by opening the RDA Start Page - "DFW__start.htm" - from this directory in a browser. If you do not have a browser available on the host machine, zip the contents of the rda directory and transfer and extract them to a machine which has a browser.

    The explanation of the integration goes a little deeper. If you want to know more, read My Oracle Support document: Understanding RDA and FMW 11g Diagnostic Framework Integration [Document 1503644.1]
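    For orientation, incident packaging is driven from the ADR command interpreter; a hedged adrci sketch might look like the following, where the diagnostic home path, incident id and package number are made up and need to be read from your own ADR:

        # Create and generate an incident package; with the RDA integration
        # in place, the generated package also carries the rda directory.
        adrci exec="set home diag/ofm/mydomain/AdminServer; ips create package incident 1234"
        adrci exec="set home diag/ofm/mydomain/AdminServer; ips generate package 1 in /tmp"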

    Read the article

  • Database Vault integration available

    - by Anthony Shorten
    One of the major features of Oracle Utilities Application Framework V4.1 is the provision of a base solution for integration with the Database Vault product. Database Vault is part of Oracle's security portfolio of products and allows database user permissions to be locked down so that only appropriate users get appropriate access to the product data. By default, when you install the product database, administrators and SYSDBA users have full DML access (SELECT, INSERT, UPDATE and DELETE) to the schemas they own and, in the case of SYSDBA users, to all schemas on the database. This can be perceived as an issue. Database Vault provides an additional layer of security to disable inappropriate access. In Oracle Utilities Application Framework, a prebuilt Database Vault solution has been provided to restrict base DML access on product data to product users only. The solution is shipped with the database installation files and includes a set of SQL files to create, disable, enable and delete the Database Vault objects. The solution contains a Database Vault realm, rule sets, rules and command rules that can be used as is or extended to meet site-specific needs. The solution is consistent with the other Database Vault solutions provided for Oracle applications such as PeopleSoft, E-Business Suite, JD Edwards and Siebel; customers familiar with the Database Vault solutions for those products will recognize the similarities. For more details of the solution, refer to Database Vault Integration for Oracle Utilities Application Framework Based Products on My Oracle Support at KB Id: 1290700.1.
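    As a quick, hedged illustration of verifying such a solution once its SQL files have been run, you could list the realm from the standard Database Vault data dictionary views; the realm name pattern and the account used below are made up:

        # Check that the product's Database Vault realm exists and is enabled
        # (run as a user granted access to the DVSYS views).
        sqlplus dv_owner <<'EOF'
        SELECT name, enabled FROM dvsys.dba_dv_realm WHERE name LIKE '%Utilities%';
        EXIT;
        EOF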

    Read the article

  • YouTube: Chrome Dev Tools Integration with NetBeans IDE!

    - by Geertjan
    Some time ago my colleague David Konecny discussed the question "What works better for you? NetBeans IDE or Chrome Developer Tools?". It's a good read. David highlights the point that it's not a question of either/or but both, since the two tools are like the apple/pear dichotomy. However, good news! The two worlds are no longer divided in NetBeans IDE 7.4. Changes you make in Chrome Developer Tools (CDT) are automatically persisted to the related files in NetBeans IDE, as you can see in a new YouTube clip I made today. The new integration of CDT with NetBeans IDE has been mentioned in the NetBeans IDE 7.4 New & Noteworthy, while on Twitter this was sighted yesterday: Watch the movie above and within 5 minutes you too will see the simplicity and power of CDT integration with NetBeans IDE. In other news, I consider the above to be my favorite new feature (though it's a tough choice, since there are so many new features in NetBeans IDE 7.4), for the article "What is your favorite new NetBeans IDE 7.4 feature?"

    Read the article

  • Technical Integration Roadmap for OBI11g and Oracle Hyperion EPM System

    - by Mike.Hallett(at)Oracle-BI&EPM
    There is an excellent technical whitepaper on the integration roadmap for Oracle Business Intelligence Enterprise Edition and the Oracle Hyperion Enterprise Performance Management System (download at this link). This document lists the integration points among all current releases of Oracle BI EE and EPM System releases, with live links to other relevant documentation also provided. You may also be interested in the overall Hyperion EPM System Documentation Resources, which can be found from the Doc Portal. And there are two new tools for EPM on My Oracle Support (this needs your Oracle logon):

    Cumulative Feature Overview Tool: This new tool offers a simple way to determine the features developed between releases, to assist you in your upgrade implementations. The tool helps you to plan your upgrades by providing concise descriptions of new and enhanced solutions and functionality that are added between your current and target releases. With the Cumulative Feature Overview Tool, you can quickly and easily find information about new features for each EPM System product.

    Defects Fixed Finder Tool: This new tool provides an efficient way to review the defects fixed in patch set updates, patch set exceptions, and patch sets for major releases, starting with Release 11.1.1. The tool helps you plan patch implementations by providing concise descriptions of defects fixed after your current release. The Defects Fixed Finder enables you to easily find information about defects fixed for each EPM System product.

    Read the article

  • Oracle SOA Suite for healthcare integration Dashboard

    - by Nitesh Jain
    Oracle SOA Suite Healthcare came up with a new way of monitoring, where users can configure a dashboard and follow dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging of runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to refresh automatically at set intervals.

    A dashboard shows the following information:
    - Status: the current status of the endpoint, such as Running, Idle, Disabled, or Errors.
    - Messages Sent: the number of messages sent by the endpoint in the specified time period.
    - Messages Received: the number of messages received by the endpoint in the specified time period.
    - Errors: the number of messages with errors for the endpoint in the given time period.
    - Last Sent: the date and time the last message was sent from the endpoint.
    - Last Received: the date and time the last message was received from the endpoint.
    - Last Error: the date and time of the last error for the endpoint.

    It also shows a detailed view of a specific endpoint:
    - The document type.
    - The number of messages received per second.
    - The total number of messages processed in the specified time period.
    - The average size of each message.

    Read the article

  • Access Control Service: Walkthrough Videos of Web Application, SOAP, REST and Silverlight Integration

    - by Your DisplayName here!
    Over the weekend I worked a little more on my ACS2 sample. Instead of writing it all down, I decided to quickly record four short videos that cover the relevant features and code. Have fun ;)

    Part 1 – Overview: This video does a quick walkthrough of the solution and shows the web application part. This includes driving the sign-in UI via JavaScript (thanks Matias) as well as the registration logic I wrote about here. watch

    Part 2 – SOAP Service and Client: The sample app also exposes a WCF SOAP service. This video shows how to wire up the service to ACS and how to create a client that first requests a token from an IdP and then sends this token to ACS. watch

    Part 3 – REST Service and Client: This part shows how to set up a WCF REST service that consumes SWT tokens from ACS. Unfortunately there is currently no standard WIF plumbing for REST. For the service integration I had to combine a lot of code from different sources (kzu, zulfiq) as well as the WIF SDK and OAuth CTPs, but it is working. watch

    Part 4 – Silverlight and Web Identity Integration: This part took by far the most time to produce. The Silverlight client shows how to sign in to the application using a registered identity provider (including web identities) and how to use the resulting SWT token to call our REST service. This is designed to be a desktop (OOB) client application (thanks to Jörg for the UI magic). watch code download
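    For readers following along with the REST part: requesting an SWT token from ACS was done over the OAuth WRAP protocol. Here is a hedged curl sketch of such a request; the namespace, service identity, password and scope are made up, and the WRAPv0.9 endpoint shape is the one ACS documented at the time:

        # Ask ACS for an SWT token using a service identity; the response
        # carries a URL-encoded wrap_access_token to put in the
        # Authorization header of calls to the protected REST service.
        curl -d "wrap_name=myserviceidentity" \
             -d "wrap_password=secret" \
             -d "wrap_scope=http://localhost/restservice" \
             https://mynamespace.accesscontrol.windows.net/WRAPv0.9/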

    Read the article

  • Fusion HCM SaaS – Integration

    - by Kiran Mundy
    A typical implementation pattern we're seeing with Fusion Apps early adopters is implementing a few Fusion HCM applications that bring the most benefit to their company with the least disruption to existing programs and interfaces. Very often this ends up being Fusion Goals & Performance, Talent, Compensation or Benefits, often with Taleo for recruiting. The implementation picture looks like what you see below. Here, you can see that all the "downstream integrations" from the on-premise Core HR are unaffected, because the master for employee data is still your on-premise Core HR system - all updates and new hires are made here (although they may be fed in from Taleo to start with).

    As a second phase, when customers migrate Core HR to Fusion HCM, they have to come up with a strategy to manage integrations to all their downstream applications that require employee details. For customers coming from EBS HR, a short-term strategy that allows for minimal impact is to extract employee data from Fusion (via HCM Extract) and load the shared EBS HR tables (which are part of an EBS Financials install anyway), letting your downstream integrations continue to function based on this data, as shown below. If you are not coming from EBS HR and there are license implications, you may want to consider:

    - Creating an on-premise warehouse for extracting data from Fusion Apps.
    - Leveraging Fusion Apps web services (available to SaaS customers starting with R7) to directly retrieve/write data to Fusion Apps.

    Integration Tools

    File Based Loader: This is the primary mechanism for loading HCM data (both initial load and incremental updates) into Fusion HCM. Employee and related data can be uploaded into Fusion HCM using File Based Loader. Note that the ability to schedule File Based Loader to run on a pre-defined schedule will be available as a patch on top of Rel 5. Hr2Hr has been deprecated in favor of File Based Loader, but for existing customers using Hr2Hr, here are some sample scripts that show how to get more informative error messages. They can be run by creating data model SQL queries in BI Publisher. The scripts currently have hard-coded values for request id and loader batch id, which your developer will need to update to the correct values for you. The BI Publisher training session recorded on Apr 18th is available here (under "Recordings"); this will enable a somewhat technical resource to create a data model SQL query.

    Links to documentation & training:
    - Reference documentation for File Based Loader on docs.oracle.com
    - FBL 1.1 MOS Doc Id 1533860.1
    - Sample demo data files for File Based Loader
    - HCM SaaS Integrations ppt and recording

    EBS APIs: Loading information into EBS full or shared HCM. This could be candidate information being loaded from Taleo into EBS, or employee information being loaded from Fusion HCM into an EBS shared HR install (for downstream applications and EBS Financials).
    - Oracle HRMS Product Family Publicly Callable Business Process APIs (A Reference Consolidation) [ID 216838.1] - a guide to the EBS R12 Integration Repository accessible from an EBS instance.
    - EBS HRMS Publicly Callable Business Process APIs in Release 11i & 12 [ID 121964.1]

    HCM Extract: Fusion HCM Extract is the primary mechanism used to extract employee information from Fusion HCM.
    Refer to the "Configure Identity Sync" doc on MOS for additional mechanisms.

    Additional documentation (you'll need an oracle.com account to access):
    - HCM Extracts User Guides (Rel 4 & 5)
    - HCM Extract Entity/Attributes (Rel 5)
    - HCM Extract User Guide (Rel 5)
    If you don't have an oracle.com account, download the zipped HCM Extract Rel 5 docs (click on File --> Download on the next screen). You can also view the training recordings on Fusion HCM Extract.

    Benefits Extract: To set up the benefits extract, refer to the following guide; page 2-15 of the user documentation describes how to use it. Benefit enrollments can also be uploaded into Fusion Benefits; instructions are here, along with a sample upload file. However, if the defined benefits extract does not meet your requirements, you can use BI Publisher (link to the BI Publisher presentation recording from Apr 18th) to create your own version of the benefits extract. You can start with the data model query underlying the benefits extract.

    Payroll Interface: Fusion Payroll Interface enables you to capture personal payroll information, such as earnings and deductions, along with other data from Oracle Fusion Human Capital Management, and send that information to a third-party payroll provider. Documentation: the payroll interface guide, a sample file, and the DBIs used for the payroll interface.

    Usage Patterns always accessible @ http://www.finapps.com

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited to pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering a 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor lives up to its promises. Several tests were performed, mainly focused on:

    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted of reading large amounts of data -- tens of gigabytes --, processing them and writing them back to a file or an Oracle 11gR2 database table. They are CPU-, memory- and IO-bound tests. Given the main focus of this project -- CPU performance --, bottlenecks were removed as much as possible on the memory and IO subsystems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1. The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS as the filesystem and for volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        [...]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache, a.k.a. the L2ARC: once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant, with just a slow decline.

    For the database load tests, only the best IO configuration -- using external storage devices -- was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing the single-thread performance. Seven different tests were performed on both servers.
    Given that only one thread, and thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.

    1. Read File -> Filter -> Write File: read a file, filter the data, write the filtered data to a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB.
    2. Read File -> Load Database Table: read a file, insert into a single Oracle table.
    3. Average: read a file, compute the average of a numeric column, write the result to a new file.
    4. Division & Square Root: read a file, perform a division and square root on a numeric column, write the resulting data to a new file.
    5. Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    6. Transform: read a file, transform it, write the resulting data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    7. Sort: read a file, sort a numeric and an alphanumeric column, write the resulting data to a new file.

    The following table and graph present the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second); "Improvement" is the percentage improvement between the T5140 and the T4-1.

    Test                     T4-1 (Time s.)  T5140 (Time s.)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
    Read/Filter/Write        125             806              645%         100                16
    Read/Load Database       195             1111             570%         64                 11
    Average                  96              557              580%         130                22
    Division & Square Root   161             1054             655%         78                 12
    Oracle DB Dump           164             945              576%         76                 13
    Transform                159             1124             707%         79                 11
    Sort                     251             1336             532%         50                 9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" - Cedric Carbone, Talend CTO.
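    To make the first test scenario above concrete, here is a minimal, hedged sketch of the same "read -> filter on status 'A' -> write" job expressed over the sample file layout shown earlier; the file names are illustrative, and Cust_Status is field 8 in the semicolon-separated format:

        # Keep only the rows whose Cust_Status column is 'A', mirroring the
        # Read File -> Filter -> Write File benchmark scenario.
        awk -F';' '$8 == "A"' customers.csv > customers_active.csv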

    Read the article

  • Second Day of Data Integration Track at OpenWorld 2012

    - by Doug Reid
    On our second day at OpenWorld the Data Integration team was very active with customer meetings, product updates, product demonstrations, sessions, plus much more. If the volume of traffic by our demo pods is any indicator, this is a record year for attendance at OpenWorld. The DIS team has had a tremendous number of people stop by our demo pods to learn about the latest product releases or to speak to one of our product managers. For Oracle GoldenGate, there has been a great deal of interest in Integrated Capture and the Oracle GoldenGate Monitor plug-in for Enterprise Manager.
    Our customer panels this year have been very well attended, and on Tuesday we held the “Real World Operational Reporting with Oracle GoldenGate” customer panel. On the panel this year we had Michael Wells from Raymond James, Joy Mathew and Venki Govindarajan from Comcast, and Serkan Karatas from Turk Telekom. Our panelists have a great mix of experiences and all are passionate about using Oracle Data Integration products to solve very complex use cases. Each panelist was given ten minutes to give an overview of their use of our products, followed by a barrage of questions from the audience. Michael Wells spoke about using Oracle GoldenGate for heterogeneous real-time replication from HP (Tandem) NonStop to SQL Server, and emphasized the value of standard naming conventions when configuring GoldenGate, a practice that is immensely helpful when debugging a problem. Joy Mathew and Venki Govindarajan described how Comcast has used GoldenGate for over a decade and their experience replicating data from HP NonStop to Teradata. Serkan Karatas dove into the value of using Oracle GoldenGate for archiving data in extremely large databases, which in Turk Telekom's case resulted in a one-month ROI for the entire project. Thanks again to our panelists and audience participants for making the session interactive and informative.
    For Wednesday we have a number of sessions available to attendees, plus two hands-on labs, which I have listed below. If you are unable to attend our hands-on lab for Oracle GoldenGate Veridata, it is available online at youtube.com.
    Sessions:
    11:45 AM - 12:45 PM  Best Practices for High Availability with Oracle GoldenGate on Oracle Exadata - Moscone South 102
    1:15 PM - 2:15 PM  Customer Perspectives: Oracle Data Integrator - Marriott Marquis, Golden Gate C3
    1:15 PM - 2:15 PM  Oracle GoldenGate Case Study: Real-Time Operational Reporting Deployment at Oracle - Moscone West 2003
    1:15 PM - 2:15 PM  Data Preparation and Ongoing Governance with the Oracle Enterprise Data Quality Platform - Moscone West 3000
    3:30 PM - 4:30 PM  Best Practices for Conflict Detection and Resolution in Oracle GoldenGate for Active/Active - Moscone West 3000
    5:00 PM - 6:00 PM  Tuning and Troubleshooting Oracle GoldenGate on Oracle Database - Moscone South 102
    Hands-on Labs:
    10:15 AM - 11:15 AM  Introduction to Oracle GoldenGate Veridata - Marriott Marquis, Salon 1/2
    11:45 AM - 12:45 PM  Oracle Data Integrator and Oracle SOA Suite: Hands-on Lab - Marriott Marquis, Salon 1/2
    If you are at OpenWorld please join us in these sessions. For a full review of the data integration track at OpenWorld please see our Focus-On document.

    Read the article

  • Tech Talk: Managing Cloud Integration

    - by Tanu Sood
    Cloud computing solutions are widely hailed as a way to reduce capital expenditures, yet organizations are realizing they also need to consider all of the nuances of integrating cloud applications with existing information systems. Cloud integration, after all, has a direct impact on your costs, maintenance and upgrade efforts. Catch this conversation on Tech Talk with Oracle Vice President Amit Zavery to understand how Oracle Fusion Middleware provides a simple and consistent way to maintain integration interfaces across disparate cloud and on-premise applications. Simplify your IT infrastructure and seamlessly manage data and application integration across your applications with Oracle solutions. For other Fusion Middleware talks, subscribe to Fusion Middleware Radio today and visit us on oracle.com. Photo courtesy: www.cloudtweaks.com

    Read the article

  • how to open multiple projects into the CAST IRON integration tool?

    - by MIshal Shah
    Hi, I am learning the Cast Iron tool (http://www.castiron.com/), which is widely used nowadays for integration purposes, but I can only open one project at a time: if I want to open another project, I first have to close the one that is open. I often need to have two projects open at the same time, but I don't know how to do this. Can anybody give me an urgent solution for opening multiple projects at the same time and switching between them? Thanks, Mishal Shah

    Read the article

  • Importing long numerical identifiers into Excel

    - by Niels Basjes
    I have some data in a database that uses IDs in the form of 16-digit numbers. In some situations I need to export the data so that it can be manipulated in Excel, so I export the data into a file and import it into Excel. I've tried several file formats and I'm stuck. The problem I'm facing is that when Excel reads a file containing a cell that looks like a number, it treats it as a number. The catch is that (as far as I can tell) all numerical values in Excel are double-precision floating point, which has a precision of fewer than 16 digits. So my IDs get changed: very often the last digit is changed to a 0. So far I've only been able to convince Excel to keep an ID unchanged by breaking it myself: by adding a letter or symbol to the ID. This, however, means that in order to use the value again it must be "unbroken". Is there a way to create a file where I can specify that Excel must treat the value as text without changing it? Or is there a way to let Excel treat the value as a long (64-bit integer)?
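    For reference, IEEE-754 doubles represent integers exactly only up to 2^53 (9,007,199,254,740,992), so many 16-digit IDs cannot survive the round trip. A small Python sketch of the failure mode, and of one common workaround, writing the field as an Excel text formula in the CSV (the file name is made up for the example):

        # IEEE-754 doubles are exact only up to 2**53, so a 16-digit ID
        # above 9,007,199,254,740,992 gets rounded when Excel parses it.
        big_id = 9999999999999999              # 16 digits
        print(int(float(big_id)))              # -> 10000000000000000 (corrupted)

        # Workaround: emit the cell as ="..." so Excel keeps it as text.
        import csv

        with open("ids.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["id", "name"])
            writer.writerow(['="%d"' % big_id, "example row"])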

    Read the article

  • Agile SOA Governance: SO-Aware and Visual Studio Integration

    - by gsusx
    One of the major limitations of traditional SOA governance platforms is the lack of integration with the development process. Tools like HP-Systinet or SOA Software are designed to operate on a model in which architects dictate the governance procedures and policies and the rest of the team members follow along. Consequently, those procedures are frequently rejected by developers and testers, given that they can't incorporate them into their daily activities. Having SOA governance products...(read more)

    Read the article

  • BPEL Technology: A tool for Apps-to-Apps integration, using Oracle EBS Financials and Oracle's Retai

    Listen to Jeff Wexler, Senior Director of Oracle's Retail Industry Strategy, and Amlan Debnath, VP of Oracle's Server Technologies, speak with Cliff about their recent experience using Oracle's Business Process Execution Language (BPEL) to support integration between Oracle E-Business Suite Financials and Oracle's Retail Merchandising Industry Solution, and find out how to get more info about this technology.

    Read the article

  • Web application / Domain model integration using JSON capable DTOs [on hold]

    - by g-makulik
    I'm a bit confused about architectural choices in the web-application/Java/Python world. In the C/C++ world the available (open source) choices for implementing web applications are pretty limited, close to zero; with Java or Python the choices explode into a hard-to-sort-out mess of available 'frameworks' and application approaches. I want to sort out a clean MVC model, where the M stands for a fully blown (POCO/POJO driven) domain model (in the sense of M. Fowler's EAA patterns) implemented in a mature OO language (Java, C++).
    The background: I have a system with certain hardware components (which introduce system-immanent active behavior) and a configuration database for system metadata and HW-component configuration data (the HW components are usually even self-contained, since they are capable of persisting their configuration data anyway). For the configuration/status data exchange protocol with the HW components we have chosen the Google Protobuf format, which works well for the directly wired communication with these components. This protocol is already used successfully with a Java-based GUI application via a TCP/IP connection to the main system-controlling HW component. That application has some drawbacks and design flaws, for historical reasons.
    Now we want to develop an abstract model (domain model) for configuring and monitoring those HW components that represents a more use-case-oriented view of the overall system behavior. I have the feeling that a plain Java class model would fit best for this (a C++ implementation seems to carry too much implementation/integration overhead with viable language-bridge interfaces). Google Protobuf message definitions could still serve well to describe the DTO objects used to interact with a domain model API. But integrating Google Protobuf messages client-side, e.g. for data binding in the current view, doesn't seem to be a good choice. I'm thinking about some extra serialization features, e.g. JSON-based data exchange with the views/controllers. Most lightweight solutions seem to involve a Python-based presentation layer using JSON-based data transfer (though I'm not sure I'm fully informed about this). Is there a lightweight framework (applicable to a limited ARM Linux platform) that supports such an architecture for realizing a web application?
    UPDATE: According to my recent research and comments from colleagues, I've noticed that using Java (and some JVM) might not be the preferable choice for integration with Python on a limited Linux system like ours (running on an ARM9, with memory and MCU costs that are hard to argue away), but C/C++ modules would do well here (since they form the native interface to Python extensions, don't they?). I can imagine providing a domain model from an appropriate C/C++ API (though I still think it takes more effort and higher skill from the developers involved to do this in those languages). I'm still searching for a good approach that supports such an architecture. I'll appreciate any pointers!
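    One lightweight shape for the JSON-capable DTO idea, sketched in Python (the names here, DeviceConfigDTO and its fields, are invented for illustration, not taken from the system above): keep the DTOs as plain serializable objects so the view/controller layer can speak JSON. Note also that the official protobuf runtime can serialize generated messages to JSON directly via google.protobuf.json_format (MessageToJson / Parse), so the existing message definitions could double as JSON DTOs.

        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class DeviceConfigDTO:
            """Hypothetical DTO mirroring one HW component's configuration."""
            device_id: str
            sample_rate_hz: int
            enabled: bool

        def to_json(dto):
            # Serialize for a JSON-speaking view/controller.
            return json.dumps(asdict(dto))

        def from_json(payload):
            # Rebuild the DTO from JSON posted back by the UI.
            return DeviceConfigDTO(**json.loads(payload))

        dto = DeviceConfigDTO("hw-07", 1000, True)
        wire = to_json(dto)
        assert from_json(wire) == dto   # the round trip is lossless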

    Read the article

  • HyperV integration services v3.4 for 12.10?

    - by nlee
    Networking is sloooow with v3.1. How do I upgrade to Integration Services v3.4 on 12.10?
    modinfo output in 12.10:
    filename:   /lib/modules/3.5.0-17-generic/kernel/drivers/hv/hv_vmbus.ko
    version:    3.1
    license:    GPL
    srcversion: B1AA963EEFBAE322D970F14
    alias:      acpi*:VMBus:*
    alias:      acpi*:VMBUS:*
    depends:
    intree:     Y
    vermagic:   3.5.0-17-generic SMP mod_unload modversions

    Read the article

  • TFS and Project Integration

    - by Enrique Lima
    Recently there have been more and more requests on how to make TFS 2010 and Project 2010 work together. Most of the requests have been about working with Agile and Scrum projects and templates. Some guidance documents have become available, as well as labs and virtual machine configurations for working with the different scenarios:
    TechNet Virtual Lab: Microsoft Enterprise Project Management - Project and Portfolio Management with Project 2010
    Announcing Visual Studio Team Foundation Server 2010 and Project Server Integration Feature Pack Beta
    http://code.msdn.microsoft.com/P2010Scrum

    Read the article
