Search Results



  • The Top 5 MDM Sessions You Can’t Miss at OpenWorld

    - by Mala Narasimharajan
    Sessions, demo pods, hands-on labs, and much more – but where should you focus? MDM has some excellent sessions planned for OOW. Here is a top 5 list of the sessions you just can't afford to miss.
    1. What's There to Know About Oracle's Master Data Management Portfolio and Roadmap? (October 3, 2012, 1:15 PM - 2:15 PM, Moscone West 3002/3004) Hear about product strategy, our vision for the future, and how Oracle MDM is positioned to excel in helping organizations make the most of their customer, partner, supplier, or product data.
    2. Oracle Customer MDM Applications: Implementation Best Practices, Data Governance, and ROI (October 3, 2012, 5:00 PM - 6:00 PM, Westin San Francisco, Metropolitan I) Customer successes provide solid examples of technology at work and how organizations derive value from it. Attend this session and hear from our customers how they built a business case, established governance, and are realizing the benefits of Oracle Customer Hub.
    3. Mastering Product Data: Strategies for Effective Product Information Management (October 2, 2012, 10:15 AM - 11:15 AM, Moscone West 3001) Product data is vital for any enterprise that needs to provide a consolidated representation of its products to partners, customers, and suppliers. Hear how our customers leverage product information to lead in their respective areas and how Oracle is critical to achieving this.
    4. Enabling Trusted Enterprise Product Data with Oracle Fusion Product Hub (October 2, 2012, 11:45 AM - 12:45 PM, Moscone West 2022) Learn how Oracle Fusion Product Hub is paving the way for providing organizations with trusted product data and helping them make the most of the information and infrastructure they already possess.
    5. Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation (October 1, 2012, 4:45 PM - 5:45 PM, InterContinental, Ballroom A) Hear how Data Relationship Management drives enterprise transformation, why any organization embarking on a master data management initiative needs it, plus best practices and lessons learned from our customers.
    Check out the Master Data Management Focus On document for all our sessions at OpenWorld 2012.

    Read the article

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase the agility and efficiency of an organization if process re-engineering is part of the plan. I have also stressed key enterprise requirements such as "broad and deep solutions," "running your mission critical applications," and "an automated and integrated set of capabilities." Let me walk you through some key cloud attributes such as "elasticity" and "self-service" and what they mean for an enterprise-class cloud. I will also talk about how we at Oracle have taken a very enterprise-centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs.

    Cloud Elasticity and Enterprise Application Requirements
    Easy and quick scalability for a short period of time is the signature of cloud-based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of "just-in-time" serving of compute resources is well known for mid-tier "stateless" servers such as web application servers and web servers that just need another machine to start and run on, but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as "stateless," and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where cloud meets grid computing. At Oracle, we have invested an enormous amount of time, energy, and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity - both vertically and horizontally - without requiring any application changes. A number of technology vendors take the rather simplistic route of starting up additional VMs, or removing unneeded ones, as the "Cloud Scale-Out" solution. This may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, but following a similar approach for the database tier - often called "database sharding" - requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters and Automatic Storage Management, on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, it just talks to a single database, and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.
For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g., Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations.

Assemble, Deploy and Manage Enterprise Applications for the Cloud
Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, and performance and availability monitoring, are also automated using Oracle Enterprise Manager. An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility into usage costs through either "showback" or "chargeback." With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels "applications-to-disk" in an enterprise private cloud - all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform, and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud, including planning and setup, service delivery, operations management, metering and chargeback, and more. Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud! Our cloud solution portfolio is also the broadest and deepest in the industry, covering public, private, and hybrid clouds as well as infrastructure, platform, and application clouds. It is no coincidence, therefore, that Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.

Summary
But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey. So to summarize, here are the key takeaways.
1. It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail.
2. Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.
3. Don't make the mistake of equating cloud with virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey.
4. Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers users with the same kind of control and transparency that they are used to when using a public cloud service.
5. Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and more so in a cloud environment. There are enormous costs associated with stitching a solution out of disparate components, and even more in maintaining it.
Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

    Read the article

  • Oracle & Active Directory : A love/hate relationship

    - by Frank
    Hi SO'ers, I'm currently trying to access Active Directory via the DBMS_LDAP API in PL/SQL (Oracle). The trouble is that I'm not able to connect with my own username and password or anonymously. However, in C# I can connect anonymously with this code: DirectoryEntry ldap = new DirectoryEntry("LDAP://Hostname"); DirectorySearcher searcher = new DirectorySearcher(ldap); searcher.Filter = "(SAMAccountName=username)"; SearchResult result = searcher.FindOne(); If I try to connect anonymously in Oracle, I only get the error (ORA-31202: LDAP client/server error) when I try to search (and the result code for the bind is SUCCESS)... my_session := dbms_ldap.init('HOST','389'); retval := dbms_ldap.simple_bind_s(my_session, '', ''); retval := dbms_ldap.search_s(my_session, ldap_base, dbms_ldap.scope_subtree, 'objectclass=*', my_attrs, 0, my_message); Why does the anonymous connection work in C# but not in PL/SQL? Do you have any other ideas for connecting to Active Directory via Oracle? Help me reunite them. Thanks. Edit: When I bind with anonymous credentials I get: ORA-31202: DBMS_LDAP: LDAP client/server error 00000000: LdapErr: DSID-0C090627, comment: In order to perform this operation a successful bind must be completed on the connection And if I try to connect with my credentials, which are supposed to be valid since I'm connected to the domain with them... I get: ORA-31202: DBMS_LDAP: LDAP client/server error Invalid credentials 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error
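    A likely explanation: the C# DirectoryEntry call without explicit credentials does not actually bind anonymously - it typically authenticates with the current Windows account - while the DBMS_LDAP bind with empty strings is a true anonymous bind, which Active Directory by default accepts but then refuses to let search (exactly what the first LdapErr message says). A minimal PL/SQL sketch of an authenticated simple bind follows; the host, base DN, and service account are hypothetical placeholders, and AD generally accepts a userPrincipalName (user@domain) or DOMAIN\user as the bind identity.

      DECLARE
        my_session  DBMS_LDAP.session;
        retval      PLS_INTEGER;
        my_attrs    DBMS_LDAP.string_collection;
        my_message  DBMS_LDAP.message;
        ldap_base   VARCHAR2(256) := 'DC=example,DC=com';  -- hypothetical base DN
      BEGIN
        DBMS_LDAP.use_exception := TRUE;
        my_session := DBMS_LDAP.init('HOST', '389');

        -- AD usually rejects searches on an anonymous bind; authenticate with a
        -- userPrincipalName (user@domain) instead of empty strings.
        retval := DBMS_LDAP.simple_bind_s(my_session,
                                          'svc_lookup@example.com',  -- hypothetical account
                                          'password_here');

        my_attrs(1) := 'sAMAccountName';
        retval := DBMS_LDAP.search_s(my_session,
                                     ldap_base,
                                     DBMS_LDAP.scope_subtree,
                                     '(sAMAccountName=username)',
                                     my_attrs,
                                     0,
                                     my_message);

        retval := DBMS_LDAP.unbind_s(my_session);
      END;
      /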

    Read the article

  • How to find if an Oracle APEX session is expired

    - by Mathieu Longtin
    I have created a single sign-on system for our Oracle APEX applications, roughly based on this tutorial: http://www.oracle.com/technology/oramag/oracle/09-may/o39security.html The only difference is that my master SSO login is in Perl, rather than another APEX app. It sets an SSO cookie, and the app can check whether it's valid with a database procedure. I have noticed that when I arrive in the morning, the whole system doesn't work. I reload a page from the APEX app, it then sends me to the SSO page because the session has expired, I log on, and get redirected back to my original APEX app page. This usually works, except first thing in the morning. It seems the APEX session is expired. In that case it seems to find the session, but then refuses to use it, and sends me back to the login page. I've tried my best to trace the problem. The "wwv_flow_custom_auth_std.is_session_valid" function returns true, so I'm assuming the session is valid. But nothing works until I remove the APEX session cookie. Then I can log back in easily. Does anybody know of another call that would tell me whether the session is expired or not? Thanks

    Read the article

  • Getting Oracle's MD5 to match PHP's MD5

    - by Zenshai
    Hi all, I'm trying to compare an MD5 checksum generated by PHP to one generated by Oracle 10g. However it seems I'm comparing apples to oranges. Here's what I did to test the comparison: //md5 tests //php md5 print md5('testingthemd5function'); print '<br/><br/>'; //oracle md5 $md5query = "select md5hash('testingthemd5function') from dual"; $stid = oci_parse($conn, $md5query); if (!$stid) { $e = oci_error($conn); print htmlentities($e['message']); exit; } $r = oci_execute($stid, OCI_DEFAULT); if (!$r) { $e = oci_error($stid); echo htmlentities($e['message']); exit; } $row = oci_fetch_row($stid); print $row[0]; The md5 function (seen in the query above) in Oracle uses the 'dbms_obfuscation_toolkit.md5' package(?) and is defined like this: CREATE OR REPLACE FUNCTION PORTAL.md5hash (v_input_string in varchar2) return varchar2 is v_checksum varchar2(20); begin v_checksum := dbms_obfuscation_toolkit.md5 (input_string => v_input_string); return v_checksum; end; What comes out on my PHP page is: 29dbb90ea99a397b946518c84f45e016 )Û¹©š9{”eÈOEà Can anyone help me in getting the two to match?
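    The usual explanation for this mismatch is that dbms_obfuscation_toolkit.md5 returns the 16 raw digest bytes (which is why the Oracle value renders as binary garbage), while PHP's md5() returns the lowercase hex encoding of those same bytes. Converting the Oracle result to hex should make the two match, assuming both sides hash the same byte sequence (compatible character encodings). A sketch of one way to do it, assuming EXECUTE has been granted on DBMS_CRYPTO; alternatively, keep dbms_obfuscation_toolkit and wrap its result in LOWER(RAWTOHEX(UTL_RAW.CAST_TO_RAW(...))).

      -- Sketch: return the lowercase hex digest so it matches PHP's md5().
      CREATE OR REPLACE FUNCTION md5hash (v_input_string IN VARCHAR2)
        RETURN VARCHAR2
      IS
      BEGIN
        RETURN LOWER(RAWTOHEX(
                 DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(v_input_string),
                                  DBMS_CRYPTO.HASH_MD5)));
      END;
      /

      -- SELECT md5hash('testingthemd5function') FROM dual;
      -- should now return 29dbb90ea99a397b946518c84f45e016, the same as PHP.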

    Read the article

  • Dropping all user tables/sequences in Oracle

    - by Ambience
    As part of our build process and evolving database, I'm trying to create a script which will remove all of the tables and sequences for a user. I don't want to recreate the user, as this will require more permissions than allowed. My script creates a procedure to drop the tables/sequences, executes the procedure, and then drops the procedure. I'm executing the file from sqlplus: drop.sql: create or replace procedure drop_all_cdi_tables is cur integer; begin cur:= dbms_sql.OPEN_CURSOR(); for t in (select table_name from user_tables) loop execute immediate 'drop table ' ||t.table_name|| ' cascade constraints'; end loop; dbms_sql.close_cursor(cur); cur:= dbms_sql.OPEN_CURSOR(); for t in (select sequence_name from user_sequences) loop execute immediate 'drop sequence ' ||t.sequence_name; end loop; dbms_sql.close_cursor(cur); end; / execute drop_all_cdi_tables; / drop procedure drop_all_cdi_tables; / Unfortunately, dropping the procedure causes a problem. There seems to be a race condition, and the procedure is dropped before it executes. E.g.: SQL*Plus: Release 11.1.0.7.0 - Production on Tue Mar 30 18:45:42 2010 Copyright (c) 1982, 2008, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Procedure created. PL/SQL procedure successfully completed. Procedure created. Procedure dropped. drop procedure drop_all_user_tables * ERROR at line 1: ORA-04043: object DROP_ALL_USER_TABLES does not exist SQL Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64 With the Partitioning, OLAP, Data Mining and Real Application Testing options Any ideas on how to get this working?
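    One way to sidestep the whole create/execute/drop sequence is to put the loops in an anonymous PL/SQL block, so there is no procedure to drop afterwards. It is also worth noting that in SQL*Plus a lone slash re-runs the statement buffer, which is probably why the output shows "Procedure created." twice: the slash after the EXECUTE line re-executes the CREATE PROCEDURE statement rather than anything new. A minimal sketch (the DBMS_SQL cursor calls in the original are not needed, since everything goes through EXECUTE IMMEDIATE):

      -- drop.sql: anonymous block, nothing to create or drop afterwards.
      -- Run from SQL*Plus as the schema owner.
      BEGIN
        FOR t IN (SELECT table_name FROM user_tables) LOOP
          EXECUTE IMMEDIATE 'drop table ' || t.table_name || ' cascade constraints';
        END LOOP;

        FOR s IN (SELECT sequence_name FROM user_sequences) LOOP
          EXECUTE IMMEDIATE 'drop sequence ' || s.sequence_name;
        END LOOP;
      END;
      /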

    Read the article

  • Return an Oracle Associative Array from a function

    - by Paul Johnson
    Does anybody know if it is possible to return an associative array as the result of an Oracle function? If so, do you have any examples? I have an Oracle package which contains an associative array declaration as defined below: TYPE EVENTPARAM IS TABLE OF NUMBER INDEX BY BINARY_INTEGER; This is then used in a stored procedure outside the package as follows: v_CompParams areva_interface.eventparam; The intention is to store an associative array of strings in the variable v_CompParams, returned from a Parse function in another package, the definition for which is as follows: PACKAGE STRING_MANIP IS TYPE a_array IS TABLE OF NUMBER INDEX BY BINARY_INTEGER; FUNCTION Parse (v_string VARCHAR2, v_delim VARCHAR2) RETURN a_array; FUNCTION RowCount(colln IN a_array) RETURN NUMBER; END; The code which implements this is: v_CompParams := STRING_MANIP.PARSE(v_CompID,v_Delim); Unfortunately it doesn't work; I get the error 'PLS-00382: expression is of wrong type'. I foolishly assumed that, since a_array derives from the same source Oracle type as the variable v_CompParams, there would be no problem casting between them. Any help much appreciated. Kind Regards Paul J.
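    A function can return an associative array, but PL/SQL treats two associative array types as distinct even when their declarations are textually identical, so a value returned as STRING_MANIP.a_array cannot be assigned to a variable declared as areva_interface.eventparam. The fix is usually to declare the variable with the package's own type. A minimal sketch under that assumption (v_CompID and v_Delim are hypothetical inputs):

      DECLARE
        v_CompID      VARCHAR2(100) := '10:20:30';  -- hypothetical delimited input
        v_Delim       VARCHAR2(1)   := ':';
        -- Declare the variable with the package's own type, not a structurally
        -- identical type defined elsewhere (such as areva_interface.eventparam).
        v_CompParams  STRING_MANIP.a_array;
      BEGIN
        v_CompParams := STRING_MANIP.Parse(v_CompID, v_Delim);
        DBMS_OUTPUT.put_line('Elements parsed: ' || STRING_MANIP.RowCount(v_CompParams));
      END;
      /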

    Read the article

  • Connecting Error to Remote Oracle XE database using ASP.NET

    - by imsatasia
    Hello, I have installed Oracle XE on my Development machine and it is working fine. Then I installed the Oracle XE client on my Test machine, which is also working fine, and I can access the Development machine's database from the browser. Now I want to create an ASP.NET application which can access that Oracle XE database. I tried it, but it always shows me an error on my Test machine when connecting to the database on the Development machine using ASP.NET. Here is my code for the ASP.NET application: protected void Page_Load(object sender, EventArgs e) { string connectionString = GetConnectionString(); OracleConnection connection = new OracleConnection(connectionString); connection.Open(); Label1.Text = "State: " + connection.State; Label1.Text = "ConnectionString: " + connection.ConnectionString; OracleCommand command = connection.CreateCommand(); string sql = "SELECT * FROM Users"; command.CommandText = sql; OracleDataReader reader = command.ExecuteReader(); while (reader.Read()) { string myField = (string)reader["nID"]; Console.WriteLine(myField); } } static private string GetConnectionString() { // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. return "User Id=System;Password=admin;Data Source=(DESCRIPTION=" + "(ADDRESS=(PROTOCOL=TCP)(HOST=myServerAddress)(PORT=1521))" + "(CONNECT_DATA=(SERVICE_NAME=)));";
 }
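    One thing that stands out in GetConnectionString() is the empty SERVICE_NAME in the connect descriptor. For a default Oracle XE install the service name is normally XE, and the HOST placeholder must be the Development machine's actual name or IP, with port 1521 reachable and remote access enabled on the XE listener. A sketch of what the connection string might look like; the host and credentials are the placeholders from the post, not verified values:

      User Id=System;Password=admin;
      Data Source=(DESCRIPTION=
        (ADDRESS=(PROTOCOL=TCP)(HOST=myServerAddress)(PORT=1521))
        (CONNECT_DATA=(SERVICE_NAME=XE)));

    Depending on the provider, the shorter EZConnect form (Data Source=myServerAddress:1521/XE) should also work.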

    Read the article

  • ways to avoid global temp tables in oracle

    - by Omnipresent
    We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1...); these tables got converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs. Now we are finding out that working with GTTs in Oracle is considered a last option because of performance and other issues. What other alternatives are there? Collections? Cursors? Our typical use of GTTs is like so: Insert into the GTT: INSERT INTO some_gtt_1 (column_a, column_b, column_c) (SELECT someA, someB, someC FROM TABLE_A WHERE condition_1 = 'YN756' AND type_cd = 'P' AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12' AND (lname LIKE (v_LnameUpper || '%') OR lname LIKE (v_searchLnameLower || '%')) AND (e_flag = 'Y' OR it_flag = 'Y' OR fit_flag = 'Y')); Update the GTT: UPDATE some_gtt_1 a SET column_a = (SELECT b.data_a FROM some_table_b b WHERE a.column_b = b.data_b AND a.column_c = 'C') WHERE column_a IS NULL OR column_a = ' '; and later on get the data out of the GTT. These are just sample queries; in actuality the queries are really complex, with lots of joins and subqueries. I have a three-part question: 1. Can someone show how to transform the above sample queries to collections and/or cursors? 2. Since with GTTs you can work natively with SQL, why go away from GTTs? Are they really that bad? 3. What should the guidelines be on when to use and when to avoid GTTs?
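    For the insert-then-update pattern shown above, one PL/SQL alternative is to BULK COLLECT the rows into a collection, adjust them in memory, and then use the collection directly. The sketch below is a hypothetical, trimmed-down rewrite of the first two steps; the record layout and the shortened WHERE clause are assumptions, and collections live in PGA memory, so this suits moderate row counts rather than millions of rows. Whether GTTs really are a last option is debatable; they remain a reasonable fit for large, set-based intermediate results.

      DECLARE
        TYPE row_rec IS RECORD (
          column_a  TABLE_A.someA%TYPE,
          column_b  TABLE_A.someB%TYPE,
          column_c  TABLE_A.someC%TYPE
        );
        TYPE row_tab IS TABLE OF row_rec;
        l_rows  row_tab;
      BEGIN
        -- Step 1: what the INSERT INTO some_gtt_1 ... SELECT did (predicates trimmed)
        SELECT someA, someB, someC
          BULK COLLECT INTO l_rows
          FROM TABLE_A
         WHERE condition_1 = 'YN756'
           AND type_cd = 'P';

        -- Step 2: what the UPDATE of the GTT did, done in memory instead
        FOR i IN 1 .. l_rows.COUNT LOOP
          IF l_rows(i).column_a IS NULL OR l_rows(i).column_a = ' ' THEN
            BEGIN
              SELECT b.data_a
                INTO l_rows(i).column_a
                FROM some_table_b b
               WHERE l_rows(i).column_b = b.data_b
                 AND l_rows(i).column_c = 'C';
            EXCEPTION
              WHEN NO_DATA_FOUND THEN NULL;  -- leave the value as-is
            END;
          END IF;
        END LOOP;
      END;
      /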

    Read the article

  • Oracle ODBC x64 - getting 0 when selecting a number(9) column

    - by MatsL
    I'm having a really weird problem with a third party web service that uses an ODBC connection to Oracle 10.2.0.3.0. I've written a .NET client that generates the same SQL as the web service so I can find out what's going on. The web service is hosted by IIS 6 running in x64 mode, so we use the Oracle x64 client. The Oracle client version is 10.2.0.1.0. I have a table that looks like this (I've removed some columns and names): SQL> describe tablename; Name Null? Type ----------------------------------------- -------- ---------------------------- KOD VARCHAR2(30) ORDNING NUMBER(5) AVGIFT NUMBER(9) In SQL*Plus I then issue the following statement: SELECT KOD as kod, AVGIFT as riskPoang FROM tablename Where upper(KODTYP) = 'OBJLIVSV_RISKVERKSAMTYP' ORDER BY ORDNING And I get the following result: KOD RISKPOANG ------------------------------ ---------- Hög risk 55 Mellan risk 35 Låg risk 15 Mycket låg risk 5 But when I execute the exact same SQL using the same DSN on the same machine I get this: Values Kod: Hög risk RiskPoäng: 0 Kod: Mellan risk RiskPoäng: 0 Kod: Låg risk RiskPoäng: 0 Kod: Mycket låg risk RiskPoäng: 0 If I first cast the number to varchar and then back again to number, like this: SELECT KOD as kod, to_number(to_char(AVGIFT, '99'), '9999999999') as riskPoang FROM tablename Where upper(KODTYP) = 'OBJLIVSV_RISKVERKSAMTYP' ORDER BY ORDNING I get the correct result: Values Kod: Hög risk RiskPoäng: 55 Kod: Mellan risk RiskPoäng: 35 Kod: Låg risk RiskPoäng: 15 Kod: Mycket låg risk RiskPoäng: 5 Has anyone else experienced this? It's incredibly annoying and I'm completely stuck and not sure what to do next. We have a third party web service that uses these tables, so I must get the original SQL statement to work since I can't modify its code. Any pointers are greatly appreciated! :-) Best regards, Mats

    Read the article

  • PreparedStatement question in Java against Oracle.

    - by fardon57
    Hi everyone, I'm working on modifying some code to use PreparedStatement instead of a normal Statement, for security and performance reasons. Our application currently stores information in an embedded Derby database, but we are going to move to Oracle soon. I've found two things about Oracle and PreparedStatement that I need your help with: 1. I've found this document saying that Oracle doesn't handle bind parameters inside IN clauses, so we cannot supply a query like: Select pokemon from pokemonTable where capacity in (?,?,?,?) Is that true? Is there any workaround? ... Why? 2. We have some fields which are of type TIMESTAMP. So with our current Statement, the query looks like this: Select raichu from pokemonTable where evolution = TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') What should be done for a PreparedStatement? Should I put 2500-12-31 or TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') into the array of parameters? Thanks for your help, I hope my questions are clear! Regards,

    Read the article

  • Oracle 10.1 and 11.2 produce different XML using the same statement

    - by MindFyer
    I am migrating a database from Oracle 10.1 to 11.2 and I have the following problem. The statement SELECT '<?xml version="1.0" encoding="utf-8" ?>' || (Xml).getClobVal() AS XmlClob FROM ( SELECT XmlElement( "Element1", ( SELECT XmlAgg(tpx.Xml) FROM ( SELECT XmlElement("Element3",XmlForest('content' as Element4)) AS Xml FROM dual ) tpx ) AS "Element2" ) AS Xml FROM dual ) on the original 10.1 database produces XML like this... <?xml version="1.0" encoding="utf-8"?> <Element1> <Element2> <Element3> <ELEMENT4>content</ELEMENT4> </Element3> </Element2> </Element1> On the new 11.2 system it looks like this... <?xml version="1.0" encoding="utf-8"?> <Element1> <Element3> <ELEMENT4>content</ELEMENT4> </Element3> </Element1> Is there some environment variable or setting I am missing that tells Oracle how to format its XML? There are hundreds of thousands of lines of PL/SQL in the database; it would be a mammoth task to rewrite if it turned out they had changed the way Oracle formats XML between versions. Hopefully someone has come across this before. Thanks
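    One workaround that does not depend on how a particular release treats column aliases inside XMLELEMENT is to build "Element2" explicitly with a nested XMLELEMENT call instead of relying on the AS "Element2" alias. A sketch (not verified against both versions) that should yield the Element1/Element2/Element3 nesting either way:

      SELECT '<?xml version="1.0" encoding="utf-8" ?>' || (Xml).getClobVal() AS XmlClob
      FROM (
        SELECT XMLElement("Element1",
                 XMLElement("Element2",          -- build Element2 explicitly
                   (SELECT XMLAgg(tpx.Xml)
                      FROM (SELECT XMLElement("Element3",
                                     XMLForest('content' AS Element4)) AS Xml
                              FROM dual) tpx)
                 )
               ) AS Xml
        FROM dual
      );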

    Read the article

  • Simple Oracle File repository with folder hierarchy

    - by Ope
    I have an application that stores a large number of files (XML and binary) in folder hierarchies. Currently the main method is storing them in the file system or using a legacy CMS, which we want to get rid of. The CMS supports Oracle, and a customer wants to keep the files in Oracle because of enterprise policies (backup etc.). The question is: is there a simple implementation of a file repository with a folder hierarchy for Oracle? What I am looking for is a small .NET component or example code (PL/SQL and/or .NET) that would provide the following:
    - Create, Delete, Exists for folders
    - CRUD for files
    - Move, and potentially Copy, for a file or directory
    - Access to files and folders with paths like "/root/folder1/folder2/file.xml"
    - The ability to get all the files and folders in a folder, and potentially also the entire directory tree
    - Tree traversal; getting the parent, all children, etc. needs to be fast
    I need the implementation in .NET, but if it were just the stored procedures, I could create the .NET calling code. I have pointers to generic articles on creating hierarchies in a database, so if I need to do it from scratch, I know where to start. What I am asking here is: is there already an implementation that I could take without doing this from scratch? It seems like such a generic requirement... If the answer is a CMS, document management system, or similar, it should be open source or at least quite cheap (a few hundred per server), and it should be possible to deploy it via XCopy - hopefully only a couple of DLLs. I do not need - or want - a full-featured big CMS with dozens of DLLs, and especially not an MSI installation. I have tried to google this, but the words "repository", "CMS", "file hierarchy", etc. give so many answers that the searches are pretty much useless. Thanks, OPe
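    There may not be an off-the-shelf XCopy-deployable component for this, but the storage side is small enough to hand-roll: a self-referencing FOLDERS table plus a FILES table with a BLOB column, with CONNECT BY (or a materialized path column) for traversal. A hypothetical minimal schema sketch; all names and column sizes are assumptions:

      CREATE TABLE folders (
        folder_id   NUMBER        PRIMARY KEY,
        parent_id   NUMBER        REFERENCES folders(folder_id),
        name        VARCHAR2(255) NOT NULL,
        CONSTRAINT uq_folder UNIQUE (parent_id, name)
      );

      CREATE TABLE files (
        file_id     NUMBER        PRIMARY KEY,
        folder_id   NUMBER        NOT NULL REFERENCES folders(folder_id),
        name        VARCHAR2(255) NOT NULL,
        content     BLOB,
        updated_at  DATE          DEFAULT SYSDATE,
        CONSTRAINT uq_file UNIQUE (folder_id, name)
      );

      -- Resolve paths such as /root/folder1/folder2 and list the directory tree:
      SELECT folder_id,
             SYS_CONNECT_BY_PATH(name, '/') AS full_path
        FROM folders
       START WITH parent_id IS NULL
      CONNECT BY PRIOR folder_id = parent_id;

    The .NET layer then reduces to ordinary ADO.NET/ODP.NET calls around these statements; Move becomes an UPDATE of parent_id or folder_id, and Copy an INSERT ... SELECT.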

    Read the article

  • Podcast Show Notes: Evolving Enterprise Architecture

    - by Bob Rhubart
    Back in March, Oracle ACE Directors Mike van Alst (IT-Eye) and Jordan Braunstein (Visual Integrator Consulting) and Oracle product manager Jeff Davies participated in an ArchBeat virtual meet-up. The resulting conversation quickly turned to the changing nature of enterprise architecture and the various forces driving that change. All four parts of that wide-ranging conversation are now available. Listen to Part 1 Listen to Part 2 Listen to Part 3 Listen to Part 4 As you'll hear, Mike, Jordan, and Jeff bring unique perspectives and opinions to this very lively conversation. These are three very sharp, very experienced guys, and as you might expect, they don't always walk in lock-step when it comes to EA. You can learn more about Mike, Jordan, and Jeff – and share your opinions with them – through the links below: Mike van Alst Blog | Twitter | LinkedIn | Business | Oracle Mix | Oracle ACE Profile Jordan Braunstein Blog | Twitter | LinkedIn | Business | Oracle Mix | Oracle ACE Profile Jeff Davies Homepage | Blog | LinkedIn | Oracle Mix (Also check out Jeff's book: The Definitive Guide to SOA: Oracle Service Bus) Up Next Next week's program features highlights from the panel discussion at the Oracle Technology Architect Day event held in Anaheim, CA on May 19. You'll hear from Oracle ACE Directors Basheer Khan and Floyd Teter, Oracle virtualization expert and former Sun Microsystems principal engineer Jeff Savit, Oracle security analyst Geri Born, and event MC Ralf Dossman, Director of SOA and Middleware in Oracle's Enterprise Solutions Group. Stay tuned: RSS

    Read the article

  • Restful Services, oData, and Rest Sharp

    - by jkrebsbach
    After a great presentation by Jason Sheehan at MDC about RestSharp, I decided to implement it. RestSharp is a .Net framework for consuming restful data sources via either Json or XML. My first step was to put together a Restful data source for RestSharp to consume.  Staying entirely withing .Net, I decided to use Microsoft's oData implementation, built on System.Data.Services.DataServices.  Natively, these support Json, or atom+pub xml.  (XML with a few bells and whistles added on) There are three main steps for creating an oData data source: 1)  override CreateDSPMetaData This is where the metadata data is returned.  The meta data defines the structure of the data to return.  The structure contains the relationships between data objects, along with what properties the objects expose.  The meta data can and should be somehow cached so that the structure is not rebuild with every data request. 2) override CreateDataSource The context contains the data the data source will publish.  This method is the conduit which will populate the metadata objects to be returned to the requestor. 3) implement static InitializeService At this point we can set up security, along with setting up properties of the web service (versioning, etc)   Here is a web service which publishes stock prices for various Products (stocks) in various Categories. namespace RestService {     public class RestServiceImpl : DSPDataService<DSPContext>     {         private static DSPContext _context;         private static DSPMetadata _metadata;         /// <summary>         /// Populate traversable data source         /// </summary>         /// <returns></returns>         protected override DSPContext CreateDataSource()         {             if (_context == null)             {                 _context = new DSPContext();                 Category utilities = new Category(0);                 utilities.Name = "Electric";                 Category financials = new Category(1);                 financials.Name = "Financial";                                 IList products = _context.GetResourceSetEntities("Products");                 Product electric = new Product(0, utilities);                 electric.Name = "ABC Electric";                 electric.Description = "Electric Utility";                 electric.Price = 3.5;                 products.Add(electric);                 Product water = new Product(1, utilities);                 water.Name = "XYZ Water";                 water.Description = "Water Utility";                 water.Price = 2.4;                 products.Add(water);                 Product banks = new Product(2, financials);                 banks.Name = "FatCat Bank";                 banks.Description = "A bank that's almost too big";                 banks.Price = 19.9; // This will never get to the client                 products.Add(banks);                 IList categories = _context.GetResourceSetEntities("Categories");                 categories.Add(utilities);                 categories.Add(financials);                 utilities.Products.Add(electric);                 utilities.Products.Add(electric);                 financials.Products.Add(banks);             }             return _context;         }         /// <summary>         /// Setup rules describing published data structure - relationships between data,         /// key field, other searchable fields, etc.         
/// </summary>         /// <returns></returns>         protected override DSPMetadata CreateDSPMetadata()         {             if (_metadata == null)             {                 _metadata = new DSPMetadata("DemoService", "DataServiceProviderDemo");                 // Define entity type product                 ResourceType product = _metadata.AddEntityType(typeof(Product), "Product");                 _metadata.AddKeyProperty(product, "ProductID");                 // Only add properties we wish to share with end users                 _metadata.AddPrimitiveProperty(product, "Name");                 _metadata.AddPrimitiveProperty(product, "Description");                 EntityPropertyMappingAttribute att = new EntityPropertyMappingAttribute("Name",                     SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);                 product.AddEntityPropertyMappingAttribute(att);                 att = new EntityPropertyMappingAttribute("Description",                     SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);                 product.AddEntityPropertyMappingAttribute(att);                 // Define products as a set of product entities                 ResourceSet products = _metadata.AddResourceSet("Products", product);                 // Define entity type category                 ResourceType category = _metadata.AddEntityType(typeof(Category), "Category");                 _metadata.AddKeyProperty(category, "CategoryID");                 _metadata.AddPrimitiveProperty(category, "Name");                 _metadata.AddPrimitiveProperty(category, "Description");                 // Define categories as a set of category entities                 ResourceSet categories = _metadata.AddResourceSet("Categories", category);                 att = new EntityPropertyMappingAttribute("Name",                     SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);                 category.AddEntityPropertyMappingAttribute(att);                 att = new EntityPropertyMappingAttribute("Description",                     SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);                 category.AddEntityPropertyMappingAttribute(att);                 // A product has a category, a category has products                 _metadata.AddResourceReferenceProperty(product, "Category", categories);                 _metadata.AddResourceSetReferenceProperty(category, "Products", products);             }             return _metadata;         }         /// <summary>         /// Based on the requesting user, can set up permissions to Read, Write, etc.         /// </summary>         /// <param name="config"></param>         public static void InitializeService(DataServiceConfiguration config)         {             config.SetEntitySetAccessRule("*", EntitySetRights.All);             config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;             config.DataServiceBehavior.AcceptProjectionRequests = true;         }     } }     The objects prefixed with DSP come from the samples on the oData site: http://www.odata.org/developers The products and categories objects are POCO business objects with no special modifiers. Three main options are available for defining the MetaData of data sources in .Net: 1) Generate Entity Data model (Potentially directly from SQL Server database).  This requires the least amount of manual interaction, and uses the edmx WYSIWYG editor to generate a data model. 
 This can be directly tied to the SQL Server database and generated from the database if you want a data access layer tightly coupled with your database. 2) Object model decorations.  If you already have a POCO data layer, you can decorate your objects with properties to statically inform the compiler how the objects are related.  The disadvantage is there are now tags strewn about your business layer that need to be updated as the business rules change.  3) Programmatically construct metadata object.  This is the object illustrated above in CreateDSPMetaData.  This puts all relationship information into one central programmatic location.  Here business rules are constructed when the DSPMetaData response object is returned.   Once you have your service up and running, RestSharp is designed for XML / Json, along with the native Microsoft library.  There are currently some differences between how Jason made RestSharp expect XML with how atom+pub works, so I found better results currently with the Json implementation - modifying the RestSharp XML parser to make an atom+pub parser is fairly trivial though, so use what implementation works best for you. I put together a sample console app which calls the RestSvcImpl.svc service defined above (and assumes it to be running on port 2000).  I used both RestSharp as a client, and also the default Microsoft oData client tools. namespace RestConsole {     class Program     {         private static DataServiceContext _ctx;         private enum DemoType         {             Xml,             Json         }         static void Main(string[] args)         {             // Microsoft implementation             _ctx = new DataServiceContext(new System.Uri("http://localhost:2000/RestServiceImpl.svc"));             var msProducts = RunQuery<Product>("Products").ToList();             var msCategory = RunQuery<Category>("/Products(0)/Category").AsEnumerable().Single();             var msFilteredProducts = RunQuery<Product>("/Products?$filter=length(Name) ge 4").ToList();             // RestSharp implementation                          DemoType demoType = DemoType.Json;             var client = new RestClient("http://localhost:2000/RestServiceImpl.svc");             client.ClearHandlers(); // Remove all available handlers             // Set up handler depending on what situation dictates             if (demoType == DemoType.Json)                 client.AddHandler("application/json", new RestSharp.Deserializers.JsonDeserializer());             else if (demoType == DemoType.Xml)             {                 client.AddHandler("application/atom+xml", new RestSharp.Deserializers.XmlDeserializer());             }                          var request = new RestRequest();             if (demoType == DemoType.Json)                 request.RootElement = "d"; // service root element for json             else if (demoType == DemoType.Xml)             {                 request.XmlNamespace = "http://www.w3.org/2005/Atom";             }                              // Return all products             request.Resource = "/Products?$orderby=Name";             RestResponse<List<Product>> productsResp = client.Execute<List<Product>>(request);             List<Product> products = productsResp.Data;             // Find category for product with ProductID = 1             request.Resource = string.Format("/Products(1)/Category");             RestResponse<Category> categoryResp = client.Execute<Category>(request);             Category category = categoryResp.Data;             // 
Specialized queries             request.Resource = string.Format("/Products?$filter=ProductID eq {0}", 1);             RestResponse<Product> productResp = client.Execute<Product>(request);             Product product = productResp.Data;                          request.Resource = string.Format("/Products?$filter=Name eq '{0}'", "XYZ Water");             productResp = client.Execute<Product>(request);             product = productResp.Data;         }         private static IEnumerable<TElement> RunQuery<TElement>(string queryUri)         {             try             {                 return _ctx.Execute<TElement>(new Uri(queryUri, UriKind.Relative));             }             catch (Exception ex)             {                 throw ex;             }         }              } }   Feel free to step through the code a few times and to attach a debugger to the service as well to see how and where the context and metadata objects are constructed and returned.  Pay special attention to the response object being returned by the oData service - There are several properties of the RestRequest that can be used to help troubleshoot when the structure of the response is not exactly what would be expected.

    Read the article

  • links for 2010-04-13

    - by Bob Rhubart
    Frederic Michiar: Manage a flexible and elastic Data Center with Oracle VM Manager Frederic Michiar shares a list of Oracle VM resources. (tags: otn oracle virtualization) Mona Rakibe: BAM Data Control in multiple ADF Faces Components "When two or more ADF Faces components must display the same data, and are bound to the same Oracle BAM data control definition, we have to make sure that we wrap each ADF Faces component in an ADF task flow, and set the Data Control Scope to isolated. " Mona Rakibe shows you how. (tags: oracle otn soa bam adf) Martin Widlake: Performance Tipping Points Martin Widlake offers "a nice example of a performance tipping point. This is where Everything is OK until you reach a point where it all quickly cascades to Not OK." (tags: oracle otn database architecture performance) Steve Chan: EBS Techstack Sessions at OAUG/Collaborate 2010 Steve Chan shares a list of Collaborate 2010 sessions featuring Oracle E-Business Suite Applications Technology Group staffers. (tags: oracle otn collaborate2010 ebs) @ORACLENERD: Developing in APEX Oracle ACE Chet Justice counts the ways... (tags: otn oracle oracleace apex) @bex: Almost Time For IOUG Collaborate 2010 Oracle ACE Director Bex Huff shares details on his Collaborate 2010 presentation, "The Top 10 Things Oracle UCM Customers Need To Know About WebLogic:" (tags: oracle otn oracleace collaborate2010 weblogic ucm enterprise2.0)

    Read the article

  • Metro: Understanding Observables

    - by Stephen.Walther
    The goal of this blog entry is to describe how the Observer Pattern is implemented in the WinJS library. You learn how to create observable objects which trigger notifications automatically when their properties are changed. Observables enable you to keep your user interface and your application data in sync. For example, by taking advantage of observables, you can update your user interface automatically whenever the properties of a product change. Observables are the foundation of declarative binding in the WinJS library. The WinJS library is not the first JavaScript library to include support for observables. For example, both the KnockoutJS library and the Microsoft Ajax Library (now part of the Ajax Control Toolkit) support observables. Creating an Observable Imagine that I have created a product object like this: var product = { name: "Milk", description: "Something to drink", price: 12.33 }; Nothing very exciting about this product. It has three properties named name, description, and price. Now, imagine that I want to be notified automatically whenever any of these properties are changed. In that case, I can create an observable product from my product object like this: var observableProduct = WinJS.Binding.as(product); This line of code creates a new JavaScript object named observableProduct from the existing JavaScript object named product. This new object also has a name, description, and price property. However, unlike the properties of the original product object, the properties of the observable product object trigger notifications when the properties are changed. Each of the properties of the new observable product object has been changed into accessor properties which have both a getter and a setter. For example, the observable product price property looks something like this: price: { get: function () { return this.getProperty(“price”); } set: function (value) { this.setProperty(“price”, value); } } When you read the price property then the getProperty() method is called and when you set the price property then the setProperty() method is called. The getProperty() and setProperty() methods are methods of the observable product object. The observable product object supports the following methods and properties: · addProperty(name, value) – Adds a new property to an observable and notifies any listeners. · backingData – An object which represents the value of each property. · bind(name, action) – Enables you to execute a function when a property changes. · getProperty(name) – Returns the value of a property using the string name of the property. · notify(name, newValue, oldValue) – A private method which executes each function in the _listeners array. · removeProperty(name) – Removes a property and notifies any listeners. · setProperty(name, value) – Updates a property and notifies any listeners. · unbind(name, action) – Enables you to stop executing a function in response to a property change. · updateProperty(name, value) – Updates a property and notifies any listeners. So when you create an observable, you get a new object with the same properties as an existing object. However, when you modify the properties of an observable object, then you can notify any listeners of the observable that the value of a particular property has changed automatically. Imagine that you change the value of the price property like this: observableProduct.price = 2.99; In that case, the following sequence of events is triggered: 1. 
The price setter calls the setProperty(“price”, 2.99) method 2. The setProperty() method updates the value of the backingData.price property and calls the notify() method 3. The notify() method executes each function in the collection of listeners associated with the price property Creating Observable Listeners If you want to be notified when a property of an observable object is changed, then you need to register a listener. You register a listener by using the bind() method like this: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Change the price observableProduct.price = 2.99; } }; app.start(); })(); In the code above, the bind() method is used to associate the price property with a function. When the price property is changed, the function logs the new value of the price property to the Visual Studio JavaScript console. The price property is associated with the function using the following line of code: // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); Coalescing Notifications If you make multiple changes to a property – one change immediately following another – then separate notifications won’t be sent. Instead, any listeners are notified only once. The notifications are coalesced into a single notification. For example, in the following code, the product price property is updated three times. However, only one message is written to the JavaScript console. Only the last value assigned to the price property is written to the JavaScript Console window: // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Change the price observableProduct.price = 3.99; observableProduct.price = 2.99; observableProduct.price = 1.99; Only the last value assigned to price, the value 1.99, appears in the console: If there is a time delay between changes to a property then changes result in different notifications. For example, the following code updates the price property every second: // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Add 1 to price every second window.setInterval(function () { observableProduct.price += 1; }, 1000); In this case, separate notification messages are logged to the JavaScript Console window: If you need to prevent multiple notifications from being coalesced into one then you can take advantage of promises. 
I discussed WinJS promises in a previous blog entry: http://stephenwalther.com/blog/archive/2012/02/22/windows-web-applications-promises.aspx Because the updateProperty() method returns a promise, you can create different notifications for each change in a property by using the following code: // Change the price observableProduct.updateProperty("price", 3.99) .then(function () { observableProduct.updateProperty("price", 2.99) .then(function () { observableProduct.updateProperty("price", 1.99); }); }); In this case, even though the price is immediately changed from 3.99 to 2.99 to 1.99, separate notifications for each new value of the price property are sent. Bypassing Notifications Normally, if a property of an observable object has listeners and you change the property then the listeners are notified. However, there are certain situations in which you might want to bypass notification. In other words, you might need to change a property value silently without triggering any functions registered for notification. If you want to change a property without triggering notifications then you should change the property by using the backingData property. The following code illustrates how you can change the price property silently: // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Change the price silently observableProduct.backingData.price = 5.99; console.log(observableProduct.price); // Writes 5.99 The price is changed to the value 5.99 by changing the value of backingData.price. Because the observableProduct.price property is not set directly, any listeners associated with the price property are not notified. When you change the value of a property by using the backingData property, the change in the property happens synchronously. However, when you change the value of an observable property directly, the change is always made asynchronously. Summary The goal of this blog entry was to describe observables. In particular, we discussed how to create observables from existing JavaScript objects and bind functions to observable properties. You also learned how notifications are coalesced (and ways to prevent this coalescing). Finally, we discussed how you can use the backingData property to update an observable property without triggering notifications. In the next blog entry, we’ll see how observables are used with declarative binding to display the values of properties in an HTML document.

    Read the article

  • Commit in SQL

    - by PRajkumar
    SQL Transaction Control Language Commands (TCL): COMMIT (Commit Transaction)
    As SQL developers, we use transaction control language commands very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. The COMMIT statement also erases all savepoints in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.
    Until you commit a transaction:
    - You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
    - You can roll back (undo) any changes made during the transaction with the ROLLBACK statement.
    Note: Many people think that typing COMMIT means the changed data has been written to the data files, but that is not quite right. COMMIT tells Oracle that your work is complete; the database verifies that the transaction is consistent, writes its redo from the log buffer to disk, and only then returns the commit acknowledgement to the user. The modified data blocks themselves are written from the buffer cache to the data files later by the database writer, not as part of the commit itself.
    Before a transaction that modifies data is committed, the following has occurred:
    - Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction.
    - Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before the transaction is committed.
    - The changes have been made to the database buffers of the SGA. These changes may go to disk before the transaction is committed.
    Note: The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, it can happen some time after the transaction commits.
    When a transaction is committed, the following occurs:
    1. The internal transaction table for the associated undo tablespace records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table.
    2. The log writer process (LGWR) writes the redo log entries in the SGA's redo log buffer to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction.
    3. Oracle releases locks held on rows and tables.
    4. Oracle marks the transaction complete.
    Note: The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency, application developers can specify that redo be written asynchronously and that transactions do not need to wait for the redo to be on disk.
    The syntax of the COMMIT statement is: COMMIT [WORK] [COMMENT 'your comment'];
    - WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent. Example (committing an insert): INSERT INTO table_name VALUES (val1, val2); COMMIT WORK;
    - COMMENT is also optional. This clause is supported for backward compatibility; Oracle recommends that you use named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction. Example, committing the current transaction and associating a comment with it: COMMIT COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637';
    - WRITE clause. Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, thus eliminating the wait for an I/O to the redo log. Use it to improve response time in environments with stringent response time requirements where the following conditions apply: the volume of update transactions is large, requiring that the redo log be written to disk frequently; the application can tolerate the loss of an asynchronously committed transaction; and the latency contributed by waiting for the redo log write contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order. Example, committing the same insert operation while instructing the database to buffer the change to the redo log without initiating disk I/O: COMMIT WRITE BATCH; Note: if you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user.
    WAIT | NOWAIT: Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit returns only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client; in this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput.
    · WRITE Clause. Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing commit latency, eliminating the wait for an I/O to the redo log. Use it to improve response time in environments with stringent response time requirements where the following conditions apply: the volume of update transactions is large, requiring that the redo log be written to disk frequently; the application can tolerate the loss of an asynchronously committed transaction; and the latency contributed by waiting for the redo log write contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH options in any order.
    Example: to commit the same insert operation and instruct the database to buffer the change to the redo log, without initiating disk I/O, use the following statement.
    COMMIT WRITE BATCH;
    Note: If you omit the WRITE clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, commit records are written to disk before control is returned to the user.

    WAIT | NOWAIT: Use these options to specify when control returns to the user.
    The WAIT parameter ensures that the commit returns only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client; in that case the client cannot tell whether or not the transaction committed. With WAIT, however, if the commit message is received, you can be sure that no data has been lost.
    The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput. Caution: with NOWAIT, a crash occurring after the commit message is received, but before the redo log records are written, can falsely indicate to a transaction that its changes are persistent.
    If you omit this clause, the transaction commits with the WAIT behavior.

    IMMEDIATE | BATCH: Use these options to specify when the redo is written to the log.
    The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log immediately. This option forces a disk I/O, so it can reduce transaction throughput.
    The BATCH parameter causes the redo to be buffered to the redo log along with other concurrently executing transactions. When sufficient redo information has been collected, a disk write of the redo log is initiated. This behavior is called "group commit", because redo for multiple transactions is written to the log in a single I/O operation.
    If you omit this clause, the transaction commits with the IMMEDIATE behavior.

    · FORCE Clause. Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction.
    · In a distributed database system, the FORCE 'string' [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to assign the transaction a specific system change number (SCN); if you omit integer, the transaction is committed using the current SCN.
    · The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where 'string' is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view V$CORRUPT_XID_LIST and to specify this clause.
    · Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause.
    Example: forcing an in-doubt transaction. The following statement manually commits a hypothetical in-doubt distributed transaction; query the DBA_2PC_PENDING data dictionary view to find the IDs of such transactions.
    COMMIT FORCE '22.57.53';
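    As a quick sketch of the WRITE and FORCE options in use: the orders table and its values below are hypothetical, the transaction ID is the one from the example above, and the effective commit behavior also depends on the COMMIT_WRITE initialization parameter noted earlier.
    -- Durable commit: wait until LGWR has forced the redo to disk (the default behavior).
    INSERT INTO orders (order_id, status) VALUES (1, 'NEW');
    COMMIT WRITE WAIT IMMEDIATE;

    -- Lower-latency commit: return immediately and let redo be group-committed later.
    -- Acceptable only if the application can tolerate losing this transaction in a crash.
    INSERT INTO orders (order_id, status) VALUES (2, 'NEW');
    COMMIT WRITE NOWAIT BATCH;

    -- Locating and forcing an in-doubt distributed transaction (requires appropriate privileges).
    SELECT local_tran_id, state FROM dba_2pc_pending;
    COMMIT FORCE '22.57.53';                     -- transaction ID taken from the query above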

    Read the article

  • links for 2010-05-03

    - by Bob Rhubart
    @ORACLENERD: Exadata + The Hartford Oracle ACE Chet "ORACLENERD" Justice went digging for information on Oracle Exadata, and shares the results. (tags: oracle otn oracleace hardware database exadata) @myfear: About the Java EE 6 Web Profile and the Future "If you look at the new web profile in more detail, you see that it is a specified minimal configuration targeted for small footprint servers that should support something called 'typical' web applications. It is thought of as a minimal specification, so a vendor is free to add additional services in their concrete implementation." -- Oracle ACE Director Markus Eisele (tags: oracle otn oracleace java) Edwin Biemond: WSM in FMW 11g Patch Set 2 and OSB 11g Oracle ACE Edwin Biemond of Whitehorses takes a look at the security aspects of Oracle Fusion Middleware and Oracle Service Bus 11g. (tags: oracle otn oracleace fusionmiddleware servicebus osb) James Taylor: Installing SOA Suite 11.1.1.3 James Taylor documents his attempt to implement a complete SOA Environment with SOA Suite, BPM and OSB on the WLS infrastructure. (tags: oracle otn soa soasuite fusionmiddleware) Eric Elzinga: Oracle Service Bus 11g Installation Eric Elzinga illustrates the Oracle Service Bus 11g installation process. (tags: otn oracle soa esb)

    Read the article

  • links for 2010-06-09

    - by Bob Rhubart
    Enterprise Architecture: From Incite comes Insight...: Why aren't we seeing more adoption of open source in large enterprises? (tags: ping.fm entarch opensource linux) Forms Modernization, Part 1: Motivation for change iAdvise blog (tags: ping.fm oracleace apex middleware oracle) OmniGraffle for iPad Now Supports VGA Output (Enterprise Architecture at Oracle) (tags: ping.fm entarch ipad oracle) SysAdmin access in Oracle VDI - Jaap's VDI Blog Space (tags: ping.fm virtualization sunray vdi) Securing Enterprise Data in AWS Oracle PeopleSoft Enterprise Consulting, Support and Training (tags: ping.fm cloud peoplesoft entarch) Enterprise Software Development with Java: ODTUG Kaleidoscope 2010 - preparations and sessions (tags: ping.fm oracle java oracleace) @toddbiske: Enterprise Architecture Must Assist Delivery "In most IT organizations, things get delivered through projects, and enterprise architects don’t typically play the role of project architect. At best, there is an indirect association with delivery." -- Todd Biske (tags: entarch enterprisearchitecture) @pevansgreenwood: The Rules of Enterprise IT "The rules of this game need to change if enterprise IT — as we know it — is to remain relevant in the future." -- Peter Evans Greenwood (tags: entarch enterprisearchitecture) @bex: Oracle UCM 11g Now Released! "Good news!" says Oracle ACE Director Bex Huff. "The 11g version of Oracle UCM is finally available! This version is a bit of a re-write to run on top of the WebLogic application server. Oracle has been talking about this release for some time, so I'm glad to see it finally available." (tags: oracle enteprise2.0 e20 oracleace) Marc Kelderman: SOA 11g Cloning Cloning an Oracle SOA Suite 11g environment is rather simple. Marc Kelderman shows you how. (tags: soa oracle)

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series. Read Part 1 and Part 2 first.
    Right-Time Marketing
    Real-time isn’t just about executing faster; it extends to interactions with customers as well. As an industry, we’ve spent many years analyzing all the data that’s been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn’t take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn’t translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua’s roots have been in B2B, we’ve been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they’ve spent searching can help us understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up. This isn’t so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle’s CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top-line higher, and by virtue of the cloud approach, keep costs at bay.
My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That’s when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as “Jane with the snotty kid so make sure we check her out fast,” but you suddenly became “time-starved female age 20-30 with kids.” I’m not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today’s sophisticated consumer demands scale, experience, and personal attention. To some extent we’ve delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn’t love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don’t be surprised if it’s looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming Congress keeps privacy safeguards reasonable, we’ll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer’s purchase journey, and consumers get higher levels of service with the retailer. When I call my bank, a couple things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I’m calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that’s the reason I’m calling. It puts all the relevant information on the customer service rep’s screen as it connects the call. When I complain about the fee, the rep immediately sees I’m a great customer and I travel lots, so she suggests switching me to their traveler’s card that doesn’t have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let’s combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that’s the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he’s heading to lunch. If he were near the mall location on a Saturday morning, that’s a completely different context. But on his way to lunch, we’ll let Thomas know that we’ve got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look up Thomas’ past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department. 
He scans one of the new ASICS shoes using the convenient QR Codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customers to higher-margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message; this time it’s a personalized offer for 10% off ASICS, good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and increase the customer’s lifetime value. The key here is acting at the moment the customer shows interest, using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time.
Conclusion
As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere…and commerce anytime as well.

    Read the article

  • Similar But Not The Same

    - by rickramsey
    A few weeks ago we published an article that explained how to use Oracle Solaris Cluster 3.3 5/11 to provide a virtual, multitiered architecture for Oracle Real Application Cluster (Oracle RAC) 11.2.0.2. We called it ... How to Deploy Oracle RAC on Zone Clusters Welllllll ... we just published another article just like it. Except that it's different. The earlier article was for Oracle RAC 11.2.0.2. This one is for Oracle RAC 11.2.0.3. This one describes how to do the same thing as the earlier one --create an Oracle Solaris Zone cluster, install and configure Oracle Grid Infrastructure and Oracle RAC in the zone cluster, and create an Oracle Solaris Cluster resource for Oracle RAC-- but for version 11.2.0.3 of Oracle RAC. Even though the objective is the same, and the version is only a dot-dot-dot release away, the process is quite different. So we decided to call it: How to Deploy Oracle RAC 11.2.0.3 on Zone Clusters Hope you can keep the different versions clear in your head. If not, let me know, and I'll try to make them easier to distinguish. - Rick Website Newsletter Facebook Twitter

    Read the article

  • links for 2010-06-01

    - by Bob Rhubart
    Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 -- Do we need measures in a Fact Table? Troubleshooting from Rittman Mead's Venkatakrishnan J. (tags: oracle otn businessintelligence datawarehouse) Grid container support : JavaFX Composer An overview of how JavaFX Composer supports the grid container. (tags: oracle sun javafx) John Brunswick: Site Studio Mobile Example - WCM Reuse The example highlighted in John Brunswick's post takes advantage of dynamic conversion capabilities in Oracle UCM that allow site content to be created and updated via MS Office documents.  (tags: oracle otn enterprise2.0) @glassfish: GlassFish 3 in the EC2 Cloud powering Dutch and Belgian community polls "The infrastructure is Amazon's Elastic Cloud Computing (EC2) environment because of the dynamic provisioning (elasticity) required by such an online service. Requests are handled directly by the grizzly layer of GlassFish with no extra front-end HTTP layer and shows great performance and scalability." -- The Aquarium (tags: oracle java sun glassfish cloud) James Morle: Flash Storage Will Be Cheap: The End of the World is Nigh "We now need technologies that look more like Oracle Exadata v2, with low-latency RDMA interfaces directly into the Operating System/Database. However, they need to easily and natively support other types of storage (unstructured data such as files, VMware datastores and so forth). The Exadata architecture lends itself well to changes in this area in both hardware trends and access protocols." -- James Morle (tags: oracle otn exadata database architecture virtualization) Java / Oracle SOA blog: HTTP binding in Soa Suite 11g PS2 (tags: ping.fm) Confessions of a Software Developer: Some Tips for Installing Oracle BPM 11g on Windows XP (tags: ping.fm) SOA and Java using Oracle technology: Book review: Oracle Coherence 3.5: Create internet scale applications using Oracle's high-performance data grid (tags: ping.fm)

    Read the article

  • links for 2011-01-10

    - by Bob Rhubart
    Clusterware 11gR2: Setting up an Active/Passive failover configuration (Oracle Luxembourg XPS on Database) Some think that expensive third-party cluster systems are necessary when it comes to protecting a system with an Active/Passive architecture with failover capabilities. Not true, according to Gilles Haro. (tags: oracle otn database) Atul Kumar: Part IX : Install OAM Agent - 11g WebGate with OAM 11g Part 9 of Atul's step by step guide to the installation of Oracle Identity Management. (tags: oracle oam identitymanagement security otn) Michel Schildmeijer: Oracle Service Bus: enable / disable proxy service with WLST Amis Technology's Michel Schildmeijer shares a process he found for enabling / disabling a proxy service within Oracle Service Bus 11g with WLST (WebLogic Scripting tool). (tags: oracle soa servicebus weblogic) @andrejusb: SOA & E2.0 Partner Community Forum XIII - in Utrecht, The Netherlands Oracle ACE Director Andrejus Baranovskis shares a nice plug for the SOA & E2.0 Partner Community Forum XIII coming up in March in the Netherlands. (tags: oracle oracleace otn soa enterprise2.0) Oracle Magazine Architect Column: Enterprise Architecture in Interesting Times Oracle ACE Directors Lonneke Dikmans, Ronald van Luttikhuizen, Mike van Alst, and Floyd Teter and Oracle enterprise architect Mans Bhuller share their thoughts on the forces that are shaping enterprise architecture. (tags: oracle otn architect entarch oraclemag) InfoQ: Deriving Agility from SOA and BPM - Ten Things that Separate the Winners from the Losers In this presentation from SOA Symposium 2010, Manas Deb and Clemens Utschig-Utschig discuss how to derive business agility from SOA and BPM, motivations for agility, developing and nurturing agility, influencers and dependencies, how SOA and BPM enable agility, pitfalls and recommendations for organizational culture, and pitfalls and recommendations for business and technical architectures. (tags: ping.fm)

    Read the article

  • links for 2011-02-03

    - by Bob Rhubart
    Webcast: Reduce Complexity and Cost with Application Integration and SOA Speakers: Bruce Tierney (Product Director, Oracle Fusion Middleware) and Rajendran Rajaram (Oracle Technical Consultant). Thursday, February 17, 2011. 10 a.m. PT/1 p.m. ET. (tags: oracle otn soa fusionmiddleware) William Vambenepe: The API, the whole API and nothing but the API William asks: "When programming against a remote service, do you like to be provided with a library (or service stub) or do you prefer 'the API, the whole API, nothing but the API?'" (tags: oracle otn API webservices soa) Gary Myers: Fluffy white Oracle clouds by the hour Gary says: "Pay-by-the-hour options are becoming more common, with Amazon and Oracle are getting even more intimate in the next few months. Yes, you too will be able to pay for a quickie with the king of databases (or queen if you prefer that as a mental image). " (tags: oracle otn cloudcomputing amazon ec2) Conversation as User Assistance (the user assistance experience) "To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies." -- Ultan O'Broin (tags: oracle otn enterprise2.0 userexperience) Webcast: Oracle WebCenter Suite – Giving Users a Modern Experience Thursday, February 10, 2011. 11 a.m. PT/2 p.m. ET. Speakers: Vince Casarez, Vice President of Enterprise 2.0 Product Management, Oracle; Erin Smith, Consulting Practice Manager – Portals, Oracle; Robert Wessa, Consulting Technical Director,  Enterprise 2.0 Infrastructure, Oracle.  (tags: oracle otn enterprise2.0 webcenter)

    Read the article

< Previous Page | 193 194 195 196 197 198 199 200 201 202 203 204  | Next Page >