Search Results

Search found 17227 results on 690 pages for 'oracle hcm cloud'.

Page 409/690 | < Previous Page | 405 406 407 408 409 410 411 412 413 414 415 416  | Next Page >

  • [Japanese post: title lost to encoding damage]

    - by Kumiko Fujita
    [Body lost to encoding damage. What survives of this Japanese-language post is a list of Oracle technical seminar sessions, each offered as PDF, WMV, and MP4 downloads, covering Oracle DB 11g R2, Oracle ASM Cluster File System (ACFS), Oracle Enterprise Manager 12c, and SPARC servers running Solaris under OVM.]

    Read the article

  • One Active Directory, Multiple Remote Desktop Services (Server 2012 solution)

    - by Trinitrotoluene
    What I am trying to do is quite complex, so I figured I'd throw it out to a wider audience to see if anyone can find a flaw. What I am trying to do (as an MSP/VAR) is design a solution that will give multiple companies a session-based remote desktop (companies that need to be kept completely separate), using only a handful of servers. This is how I imagine it at the moment:
    CORE SERVER - Server 2012 Datacentre (all below are Hyper-V servers)
    - Server1: Cloud-DC01 (Active Directory Domain Services for mycloud.local)
    - Server2: Cloud-EX01 (Exchange Server 2010 running multi-tenant mode)
    - Server3: Cloud-SG01 (Remote Desktop Gateway)
    CORE SERVER 2 - Server 2012 Datacentre (all below are Hyper-V servers)
    - Server1: Cloud-DC02 (Active Directory Domain Services for mycloud.local)
    - Server2: Cloud-TS01 (Remote Desktop Session Host for Company A)
    - Server3: Cloud-TS02 (Remote Desktop Session Host for Company B)
    - Server4: Cloud-TS03 (Remote Desktop Session Host for Company C)
    What I thought about doing was setting up each organisation in its own OU (perhaps creating the OU structure based on the Exchange 2010 tenant OU structure so the accounts are linked). Each company would get a Remote Desktop Session Host server that would also serve as a file server. This server would be separated from the rest on its own range. The server Cloud-SG01 would have access to all these networks and route the traffic to the appropriate network when a client connects and authenticates, so they are pushed onto the correct server (based on session collections in 2012). I won't lie, this is something I have come up with quite quickly, so there may well be something gapingly obvious that I am missing. Any feedback would be appreciated.

    Read the article

  • Does the CFO Need to Become a Technologist?

    - by RED League Heroes-Oracle
    Today's technology is affecting functions across the entire enterprise. The CIO must look for new ways to be a strategic partner to the business, and the CMO constantly faces decisions about technology that can make the marketing function more efficient through data. Even the CFO role is not immune. "The CFO doesn't really have to be a technologist, but they have to understand how the power of technology can help them do their job," says Nicole Anasenes, CFO of software solutions specialist Infor. "The pressures on the CFO are not so different from what they have always been, but the interconnectedness of the world and the rate of change add to them." In today's business world, Anasenes says, the CFO cares about cost reduction, speed, and flexibility - all in a secure way. Technology, particularly cloud computing, is the key to improving in all three areas. It is important to keep in mind that the CIO and line-of-business leaders often hold different points of view. In general, the CIO tends to see the world through a lens of cost containment and security and will look for affordable storage. The other business leaders, by contrast, focus more on revenue-generating projects in the analytics space. In that context, CFOs should look to strengthen relationships across the enterprise and take advantage of technology. Taken from: http://www.cio.com/article/753147/Does_the_CFO_Need_to_Become_a_Technologist_?page=2&taxonomyId=3157

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: "A recent customer survey reveals the deleterious effects of data fragmentation," by Trevor Naidoo, December 2010.

    Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results.

    1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross-reference for all other sources and ensures consistent, high-quality master data throughout the organization.

    2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact.

    3. Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross-reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained, with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology.

    4. Approximately 50 percent of respondents spend manual effort cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be spelled out in inches or entered using the inch mark ("). These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts (a small SQL illustration of this idea follows the article); however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and through channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues.

    5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.

    Characteristics of Stellar MDM: When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:
    - enterprise-grade MDM performance
    - complete technology that can be rapidly deployed and addresses multiple business issues
    - end-to-end MDM process management with data quality monitoring and assurance
    - pre-built, business-relevant MDM applications with data stores and workflows
    These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.

    Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.
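    As promised in point 4 above, here is a minimal SQL sketch of the standardization idea. The table and column names (customer_staging, address_line) are illustrative assumptions, not from the article; real MDM tooling does far more, but the principle of normalizing variants before matching looks like this:

        -- Normalize "av"/"ave" to "AVENUE" so records from different
        -- sources can be matched on a common representation:
        SELECT customer_id,
               REGEXP_REPLACE(UPPER(address_line),
                              '(^| )AVE?($| )', '\1AVENUE\2') AS standardized_address
        FROM   customer_staging;

    Rows standardized this way can then be compared across sources and fed into a master data hub as a single, consistent version of the record.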

    Read the article

  • OraOps10.dll loading problem

    - by Rodnower
    Hello, I have an ASP.NET web service built on Windows 7 in 32-bit. All dependencies of this service are compiled in Release mode for x64. Now I have installed it on Windows 8 64-bit, and when I access the service I get the error "Could not load OraOps10.dll". I haven't managed to find anything on the internet about this problem with the Oracle client in the context of 32-bit/64-bit incompatibility. Do you have any ideas? Thank you very much.

    Read the article

  • What permissions do I need to run SQL*Loader?

    - by Jason Baker
    What permissions does a database user need to be able to run Oracle's SQL*Loader? For instance, since SQL*Loader will disable indexes and triggers, does it need ALTER permissions on those items? This seems like a simple question, but I can't find any documentation on this in the manual.
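    A hedged sketch of the grants in question, assuming a dedicated loader account and a target table owned by another schema (all names below are hypothetical, and this is not quoted from the Oracle documentation):

        -- Minimal grants for a conventional-path load:
        CREATE USER loader IDENTIFIED BY "change_me";
        GRANT CREATE SESSION TO loader;                    -- connect via sqlldr
        GRANT INSERT ON app_owner.target_table TO loader;  -- load the rows
        -- Direct-path options that disable and rebuild indexes or triggers
        -- on another schema's table may additionally call for ALTER on it:
        GRANT ALTER ON app_owner.target_table TO loader;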

    Read the article

  • Memory Usage for Databases on Linux

    - by Kyle Brandt
    With free's output, what we generally care about for application memory usage is the amount of free memory shown on the -/+ buffers/cache line. What about database applications such as Oracle: is it important to have a good amount of memory available for cache and buffers so the database runs well with all its IO? If that makes any sense, how do you figure out just how much?

    Read the article

  • OAS log files filling up hard drive

    - by Andrew Hampton
    We've had issues with log files for Oracle Application Server filling up the hard drive on our server. The files are in the /network/admin folder and are named server.log_XXXXX.trc and client.log_XXXXX.trc where XXXXX are 5 digits. The files are typically anywhere from 1-2MB in size but can be up to 100MB and thousands of them are created at a rate of about 5-10 per minute. Does anyone know how to disable these logs? Thanks!

    Read the article

  • Problem getting php_oci8 working on Linux RHEL 5

    - by Jonathan
    Hi All, I'm installing Oracle OCI8 on a Linux server here and I am having an issue where php_oci8.so does not seem to be able to find libclntsh.so.11.1. I've got the Instant Client installed and it shows up fine in ldconfig -p, but when I run ldd on php_oci8.so the library shows up as not found. Does anyone have any ideas as to what I can check?

    Read the article

  • Who are the GoldenGate Extract users?

    - by sharif
    I am setting up GoldenGate, and the installation guide is quite confusing as it refers to steps which have not been done or were already done previously. I am on step 4.8.1 of the Oracle installation guide. It is asking for an "Extract" user name. I do not recall creating one, other than the goldengate user. Also, what are the other four users it refers to in 4.6: Extract, Replicat, Manager, DEFGEN? What are the usernames for each of these in the db?
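    A hedged note for anyone landing here: Extract, Replicat, Manager, and DEFGEN are GoldenGate processes, not separate database accounts, and in a basic setup they all connect through the single GoldenGate database user created during installation. A quick sketch to confirm which accounts actually exist (the LIKE patterns are assumptions; adjust to your naming convention):

        SELECT username, account_status, created
        FROM   dba_users
        WHERE  username LIKE '%GG%'
           OR  username LIKE '%GOLDEN%'
        ORDER BY created;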

    Read the article

  • Rails: constraint violation on create but not on update

    - by justinbach
    Note: This is a "railsier" (and more succinct) version of this question, which was getting a little long. I'm getting Rails behavior on a production server that I can't replicate on the development server. The codebases are identical save for credentials and caching settings, and both are powered by Oracle 10g databases with identical schema (but different data). My Rails application contains a user model, which has_one registration; registration in turn has_and_belongs_to_many company_ownerships through a registration_ownerships table. Upon registering, users fill out data pertinent to all three models, including a series of checkboxes indicating what registration_ownerships might apply to their account. On the dev server, the registration process is seamless, no matter what data is entered. On production, however, if users check off any of the company ownership fields before submitting their registration, Oracle complains about a constraint violation on the primary key of the company_ownerships table (which is a two-field key based on company_ownership_id and registration_id) and users get the standard Rails 500 error screen. In every case, I've verified that no conflicting record on these two fields exists in the production database, so I don't know why the constraint is getting violated. To further confuse things, if a user registers without listing any ownerships and later goes back and modifies their account to reflect ownership data (which is done through the same interface), the application happily complies with their request and Oracle is well-behaved (this is both on production and dev). I've spent the past couple days trying to figure out what might be causing this problem and am reaching the end of my wits. Any advice would be greatly appreciated!

    Read the article

  • ORA-01157 / Can't connect to database

    - by Tom
    Hi everyone, this is a follow-up to this question. Let me start by saying that I am NOT a DBA, so I'm really, really lost with this. A few weeks ago, we lost contact with one of our SIDs. All the other services are working, but this one in particular is not. What we got when trying to connect was:
    ORA-01033: ORACLE initialization or shutdown in progress
    An attempt to alter database open ended up in:
    ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
    ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'
    I tried to shut down / restart the database, but got this:
    Total System Global Area 566231040 bytes
    Fixed Size 1220604 bytes
    Variable Size 117440516 bytes
    Database Buffers 444596224 bytes
    Redo Buffers 2973696 bytes
    Database mounted.
    ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
    ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'
    When nothing changed, I erased the dbf files (rm xxx_data.dbf xxx_index.dbf) and recreated them using touch xxx_data.dbf. I also tried to recreate the tablespaces using `CREATE TABLESPACE DATA DATAFILE XXX_DATA.DBF` and got "Database not open". As I said, I don't know how bad this is, or how far I am from regaining access to my database (well, to this SID at least; the others are working). I would imagine that a last resort would be to throw everything away and recreate it, but I don't know how to, and I was hoping there's a less destructive solution. Any help will be greatly appreciated. Thanks in advance.
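    For readers hitting the same errors, one possible recovery path is sketched below. It assumes the data in file 6 is expendable (here the file was already removed at the OS level) and that no backup exists to restore from; with RMAN backups, RESTORE and RECOVER would be the safer route. The tablespace name matches the asker's CREATE TABLESPACE attempt; the size is an assumption.

        -- Run as SYSDBA. Take the lost datafile offline so the database can open:
        STARTUP MOUNT;
        ALTER DATABASE DATAFILE 6 OFFLINE DROP;
        ALTER DATABASE OPEN;
        -- Then drop the broken tablespace and recreate it (its data is gone):
        DROP TABLESPACE data INCLUDING CONTENTS;
        CREATE TABLESPACE data
          DATAFILE '/u01/app/oracle/oradata/xxx/xxx_data.dbf' SIZE 500M REUSE;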

    Read the article

  • Trying to avoid needing two separate solutions for an x86 and x64 program

    - by Sean Anderson
    Hi all, I have a program which needs to function in both x86 and x64 environments. It is using Oracle's ODBC drivers. I have a reference to Oracle.DataAccess.DLL. This DLL is different depending on whether the system is x64 or x86, though. Currently, I have two separate solutions and I am maintaining the code in both. This is atrocious. I was wondering what the proper solution is? I have my platform set to "Any CPU," and it is my understanding that VS should compile my code to an intermediate language such that it should not matter whether I use the x86 or x64 version. Yet, if I attempt to use the x64 DLL I receive the error "Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format." I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to efficiently develop this program when it needs to work on x64. Thanks.

    Read the article

  • Issue with creating index organized table

    - by mtim
    I'm having a weird problem with an index-organized table. I'm running Oracle 11g Standard. I have a table src_table:

    SQL> desc src_table;
    Name            Null?    Type
    --------------- -------- ----------------------------
    ID              NOT NULL NUMBER(16)
    HASH            NOT NULL NUMBER(3)
    ........

    SQL> select count(*) from src_table;
    COUNT(*)
    ----------
    21108244

    Now let's create another table and copy 2 columns from src_table:

    SQL> set timing on
    SQL> create table dest_table(id number(16), hash number(20), type number(1));
    Table created.
    Elapsed: 00:00:00.01
    SQL> insert /*+ APPEND */ into dest_table (id,hash,type) select id, hash, 1 from src_table;
    21108244 rows created.
    Elapsed: 00:00:15.25
    SQL> ALTER TABLE dest_table ADD (CONSTRAINT dest_table_pk PRIMARY KEY (HASH, id, TYPE));
    Table altered.
    Elapsed: 00:01:17.35

    It took Oracle less than 2 minutes. Now the same exercise, but with an IOT:

    SQL> CREATE TABLE dest_table_iot (
      id NUMBER(16) NOT NULL,
      hash NUMBER(20) NOT NULL,
      type NUMBER(1) NOT NULL,
      CONSTRAINT dest_table_iot_PK PRIMARY KEY (HASH, id, TYPE)
    ) ORGANIZATION INDEX;
    Table created.
    Elapsed: 00:00:00.03
    SQL> INSERT /*+ APPEND */ INTO dest_table_iot (HASH,id,TYPE) SELECT HASH, id, 1 FROM src_table;

    The insert into the IOT takes 18 hours! I have tried it on 2 different instances of Oracle, running on Windows and Linux, and got the same results. What is going on here? Why is it taking so long?
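    A hedged guess at the cause, offered as an assumption rather than a confirmed diagnosis: the INSERT feeds the IOT rows in no particular order, so each row lands at a random spot in the primary-key B-tree, causing constant block splits and buffer churn, whereas the heap-table route builds its index afterwards in one sorted pass. Two sketches that usually help in that case:

        -- 1) Build the IOT directly from a sorted CTAS, so the B-tree is
        --    created in one pass over presorted data:
        CREATE TABLE dest_table_iot (
          hash, id, type,
          CONSTRAINT dest_table_iot_pk PRIMARY KEY (hash, id, type)
        ) ORGANIZATION INDEX
        AS SELECT hash, id, 1 FROM src_table ORDER BY hash, id;

        -- 2) Or keep the separate INSERT, but feed it rows presorted to
        --    match the primary-key order:
        INSERT INTO dest_table_iot (hash, id, type)
        SELECT hash, id, 1 FROM src_table ORDER BY hash, id;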

    Read the article

  • Inner or Left Outer Join?

    - by user1557856
    I'm having difficulty modifying a script for this situation and wondering if someone may be able to help. I have an address table and a phone table, both sharing the same column called id_number, so id_number = 2 on both tables refers to the same entity. Address and phone information used to be stored in one table (the address table), but it is now split into address and phone tables since we moved to Oracle 11g. There is a 3rd table called both_ids. This table also has an id_number column, in addition to an other_ids column storing SSNs and some other IDs. Before the table was split into address and phone tables, I had this script (written in Sybase):

    INSERT INTO sometable_3 (
    SELECT a.id_number, a.other_id,
    NVL(a1.addr_type_code,0) home_addr_type_code,
    NVL(a1.addr_status_code,0) home_addr_status_code,
    NVL(a1.addr_pref_ind,0) home_addr_pref_ind,
    NVL(a1.street1,0) home_street1,
    NVL(a1.street2,0) home_street2,
    NVL(a1.street3,0) home_street3,
    NVL(a1.city,0) home_city,
    NVL(a1.state_code,0) home_state_code,
    NVL(a1.zipcode,0) home_zipcode,
    NVL(a1.zip_suffix,0) home_zip_suffix,
    NVL(a1.telephone_status_code,0) home_phone_status,
    NVL(a1.area_code,0) home_area_code,
    NVL(a1.telephone_number,0) home_phone_number,
    NVL(a1.extension,0) home_phone_extension,
    NVL(a1.date_modified,'') home_date_modified
    FROM both_ids a, address a1
    WHERE a.id_number = a1.id_number(+)
    AND a1.addr_type_code = 'H');

    Now that we have moved to Oracle 11g, the address and phone information are split. How can I modify the above script to generate the same result in Oracle 11g? Do I have to first do an INNER JOIN between the address and phone tables and then do a LEFT OUTER JOIN to both_ids? I tried the following and it did not work:

    INSERT INTO .. SELECT ...
    FROM address a1
    INNER JOIN Phone t ON a1.id_number = t.id_number
    LEFT OUTER JOIN both_ids a ON a.id_number = a1.id_number
    WHERE a1.addr_type_code = 'H'
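    A hedged sketch of one ANSI rewrite, assuming the phone columns simply moved to the new phone table keyed on the same id_number. both_ids stays the driving table, and the 'H' filter moves into the ON clause so unmatched rows still survive the outer join, matching the old (+) behaviour. Only a few of the NVL columns are shown; the rest follow the same pattern, with phone columns taken from the phone alias:

        INSERT INTO sometable_3
        SELECT a.id_number,
               a.other_id,
               NVL(a1.addr_type_code, 0)  home_addr_type_code,
               NVL(a1.street1, 0)         home_street1,
               -- ...remaining address columns as before...
               NVL(t.area_code, 0)        home_area_code,
               NVL(t.telephone_number, 0) home_phone_number
        FROM   both_ids a
        LEFT OUTER JOIN address a1
               ON  a1.id_number = a.id_number
               AND a1.addr_type_code = 'H'
        LEFT OUTER JOIN phone t
               ON  t.id_number = a.id_number;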

    Read the article

  • SharePoint, Office365 and Azure – an Overview of what you can use today

    - by Sahil Malik
    SharePoint 2010 Training: more information. I will be speaking on cloud-related topics, giving an overview at one of my favorite user groups, CMAPonline, on January 24th. Here are the details:
    When - Tuesday, January 24, 2012, 7:00 PM
    Where - UMBC Building 21
    About - "SharePoint, Office365 and Azure – an Overview of what you can use today!"
    Everyone is talking about the cloud. Everyone is moving to the cloud. Microsoft's cloud offering is probably the most expansive of all. But how does it really compare with other offerings? What is the feature set of Google? Or Amazon? And in the jungle of Beta, what is currently proven and production-ready in the Microsoft spectrum? Most of all, how do you move from your current setup to a cloud-based setup? In this session, Sahil provides a manager- and architect-level overview demystifying all these topics and more. Read full article ....

    Read the article

  • MySQL at Mobile World Congress (on Valentine's Day...)

    - by mat.keep(at)oracle.com
    It is that time of year again when the mobile communications industry converges on Barcelona for what many regard as the premier telecommunications show of the year. Starting on February 14th, what better way for a Brit like me to spend Valentine's Day than with 50,000 mobile industry leaders (my wife doesn't tend to read this blog, so I'm reasonably safe with that statement). As ever, Oracle has an extensive presence at the show, and part of that presence this year includes MySQL. We will be running a live demonstration of the MySQL Cluster database on Booth 7C18 in the App Planet. The demonstration will show how the MySQL Cluster Connector for Java is implemented to provide native connectivity to the carrier-grade MySQL Cluster database from Java ME clients via Java SE virtual machines and Java EE servers. The demonstration will show how end-to-end Java services remain continuously available during both catastrophic failures and scheduled maintenance activities. The MySQL Cluster Connector for Java provides both a native Java API and a JPA plug-in that directly maps Java objects to relational tables stored in the MySQL Cluster database, without the overhead and complexity of having to transform objects to JDBC and then SQL. The result is 10x higher throughput and a simpler development model for Java engineers. Stop by the stand for a demonstration, and an opportunity to speak with the MySQL telecoms team, who will share experiences on how MySQL is being used to bring the innovation of the web to the carrier network. Of course, if you can't make it to Barcelona, you can still learn more about the MySQL Cluster Connector for Java from this whitepaper, and you are free to download it as part of MySQL Cluster Community Edition. Let us know via the comments if you have Java applications that you think will benefit from the MySQL Cluster Connector for Java. I can't promise that Valentine's Day at MWC will be the time you fall in love with MySQL Cluster... but I'm confident you will at least develop a healthy respect for it.

    Read the article

  • Java Champion Stephen Chin on New Features and Functionality in JavaFX

    - by janice.heiss(at)oracle.com
    In an Oracle Technology Network interview, Java Champion Stephen Chin, Chief Agile Methodologist for GXS and one of the most prolific and innovative JavaFX developers, provides an update on the rapidly developing changes in JavaFX.

    Chin expressed enthusiasm about recent JavaFX developments: "There is a lot to be excited about -- JavaFX has a new API face. All the JavaFX 2.0 APIs will be exposed via Java classes that will make it much easier to integrate Java server and client code. This also opens up some huge possibilities for JVM language integration with JavaFX."

    Chin also spoke about developments in Visage, the new language project created to fill the gap left by JavaFX Script: "It's a domain-specific language for writing user interfaces, which addresses the needs of UI developers. Visage takes over where JavaFX Script left off, providing a statically typed, declarative language with lots of features to make UI development a pleasure."

    "My favorite language features from Visage are the object literal syntax for quickly building scene graphs and the bind keyword for connecting your UI to the backend model. However, the language is built for UI development from the top down, including subtle details like null-safe dereferencing for exception-less code."

    Read the entire article.

    Read the article

  • Webinar: MySQL Enterprise Backup - Online "Hot" Backup for MySQL

    - by mike.frank(at)oracle.com
    Online backup has been one of the most requested features for MySQL. With MySQL Enterprise Backup, developers and DBAs have the tools they need to safely and rapidly back up and restore their databases. In this webinar we will go into the advantages of hot "online" backups. We will show how MySQL Enterprise Backup supports full, incremental, partial, and compressed backups that allow you to perform consistent point-in-time recovery, as well as saving both time and money.
    In this free webinar you will learn:
    * Backup strategies and methods
    * Comparison of backup types for MySQL
    * MySQL Enterprise Backup: Features
    * MySQL Enterprise Backup: Performance
    * MySQL Enterprise Backup: Architecture
    * MySQL Enterprise Backup: How it Works
    * MySQL Enterprise Backup: Script Examples
    English webinar: Mike Frank and Alex Roedling, Thursday, January 20, 2011, 09:00 Pacific time.
    Italian webinar: Luca Olivari, Thursday, January 20, 2011, 10:00 Central European time.
    Register now: English, Italian. On-demand French and German versions are available as well.
    Related articles:
    * Introducing our "Hot" MySQL Enterprise Backup (blogs.oracle.com)

    Read the article

  • Changing Endpoint URL for a Web Service Data Control

    - by vishal.s.jain(at)oracle.com
    When you move your application from development to production, there is, more often than not, a need to change the web service endpoint URL in your ADF application. If you are using a Web Service Data Control (WSDC), you can do this in more than one way. The following example illustrates how this can be done.
    At design time: If the application workspace is in your control, you can quickly do this by updating the definition in the DataControl.dcx file. Along with this, you will also need to change the endpoint in connections.xml, so invoke the Edit Connections dialog and change the endpoint URL there.
    At deployment: Another option is changing the endpoint at the EAR level, at deployment. When you select Deploy -> Application Server at the application level, it brings up a Deployment Configuration dialog in which you can edit the WSDL URL. Also change the Port URL.
    Post deployment: If you need to change this after deployment, you can do it through Oracle Enterprise Manager. For this, your application needs to be configured with a writable MDS repository; it is recommended you use a database MDS store during deployment. So have your application configured (by having an entry in adf-config.xml) and the server configured (by having an MDS store registered). Once done, you can configure the ADF Connection in EM for this application: change the WSDL location via 'Edit', change the Port using Advanced Connection Configuration, and change the Endpoint Address there. Apply the changes and you are done!

    Read the article

  • UPK Professional Customer Success Story: Medtronic

    - by Karen Rihs
    In case you missed the live event, be sure to listen to last week's UPK Customer iSeminar featuring Medtronic. This was the first iSeminar in our quarterly series to showcase UPK Professional (UPK and Knowledge Pathways). Donna Miller and Staci Gilbert gave viewers an inside look at samples of Medtronic's content as they shared their experiences, methodology and best practices for use of the solution. Here are some highlights of the call:
    • Medtronic initially purchased UPK Professional to support a multi-year, global SAP rollout for 9,000 end users located in 24 countries.
    • As time went on, they expanded their use of UPK Professional to include several of their other enterprise applications: PeopleSoft, Siebel CRM, Hyperion Financial Management, a number of SAP bolt-ons, Documentum, TrackWise, and many others.
    • In combination with their Saba LMS, UPK Professional has allowed Medtronic to create, deploy, track and certify consistent end user training for critical transactions and processes across their organization worldwide - essential for a company in a heavily regulated industry.
    • For key pieces of content or certain end user populations, some Medtronic business units localize/translate the global UPK content. Staci demonstrated examples of their SAP content which has been translated into Japanese.
    • In the live SAP environment, end users rely on UPK's context-sensitive in-application performance support. Medtronic has found this to be very helpful post go-live, giving just-in-time support so end users are confident in a new system or when performing tasks they don't often touch (at quarter or year end). UPK also serves as Medtronic's internal Google.
    • Medtronic has realized savings on many fronts: reduction in support calls due to in-application performance support, elimination of their training clients, and speedier training (1.5 days rather than 5-7 days) of temporary workers by moving from ILT to a blended solution that includes UPK simulations for eLearning.
    Thanks again to Donna and Staci for an exceptional presentation. They offered so many great examples for anyone who's looking for ways to get more out of UPK or interested in learning about UPK Professional: Knowledge Pathways.
    - Karen Rihs, Oracle UPK Outbound Product Management

    Read the article

  • Configuring Request-Reply in JMSAdapter

    - by [email protected]
    Request-Reply is a new feature in the 11g JMSAdapter that helps you achieve the following:
    - It allows you to combine request and reply in a single step. In prior releases of the Oracle SOA Suite, you would need to configure two distinct adapters.
    - It performs automatic correlation without you needing to configure BPEL "correlation sets". This works seamlessly in Mediator and BPMN as well.
    In order to configure JMSAdapter Request-Reply, follow these steps:
    1) Drag and drop a JMSAdapter onto the "External References" swim lane in your composite editor.
    2) Enter default values for the first few screens in the JMS Adapter wizard until you hit the screen where the wizard prompts you to enter the operation name. Select "Request-Reply" as the "Operation Type" and Asynchronous as the "Operation Name".
    3) Select the Request and Reply queues in the following screens of the wizard. The message will be enqueued in the "Request" queue and the reply will be returned in the "Reply" queue. The reason I have used such a selector is that the back-end system that reads from the request queue and generates the response in the response queue actually generates more than one response, and hence I must use a filter to exclude the unwanted responses.
    4) Select the message schema for the request as well as the response.
    5) Add an <invoke> activity in BPEL corresponding to the JMS Adapter partner link. Please note that I am setting an additional header as my third-party application requires this.
    6) Add a <receive> activity just after the <invoke> and select the "Reply" operation. Please make sure that the "Create Instance" option is unchecked.
    Your completed BPEL process will look something like this:

    Read the article
