Search Results

Search found 20172 results on 807 pages for 'oracle forms to adf'.


  • Get Your Enterprise Working With Oracle On Track Communication 1.0

    - by Josh Lannin
    The On Track Development team is very pleased to announce that today On Track is available for our customers to download and evaluate. To learn more about what On Track does, start with our whitepaper and datasheet. If you are a developer, take a look at our documentation and samples posted to our OTN page. For this first blog post, I’ll be speaking to several notable points about our product.

    Graceful Escalation via Conversations: On Track addresses the “Collaboration Problem” through a single guiding principle – graceful escalation – within the construct of a Conversation. In On Track, collaboration is based on a context (called a “Conversation”) that gracefully escalates in form, structure, and content, as dictated by the particular needs of a given collaboration. Within that context, On Track provides a rich set of tools to choose from. These tools provide for communication, coordination, content management, organization, decision making, and analysis -- all essential aspects of collaboration, but not all of them are essential all of the time. Every collaborative interaction will evolve differently. Some will evolve to represent work spreading over the course of years and involving a large, distributed team, while others may involve a few people and not evolve at all. Regardless, all collaborative contexts are built from the same parts, utilize the same concepts, and start the same way. The principle of graceful escalation is that you only use the tools and structure you need, so you only incur the complexity you need.

    Purposeful Collaboration: Through application integration, On Track Conversations bring enterprise application users the communication and collaboration capabilities required to complete business processes. By association with specific processes or business objects, Conversations extend the possible interactions, broaden participation to internal or external non-application users, and provide a sophisticated interaction experience, all the while enhancing the data set within the owning application. Purposeful collaboration not only needs to happen in the context of applications, it must also support a full range of real-time and long-running interactions to provide the greatest value.

    Multi-Client, Multi-Modal: This On Track 1.0 product release includes same-day availability of multiple clients, including iPhone and iPad applications which are now available on the Apple Store, a fully capable and accessible Outlook add-in, along with our browser web client. With each client we have sought to leverage the strengths of each device: our iPhone client supports picture and voice posts, the iPad is optimized for meeting-room situations and document viewing, and our Outlook add-in allows you to take emails in context and bring them into On Track. In addition to supporting a diverse array of clients, On Track provides a unified multi-modal experience, starting with basic messages and moving through to integrated documents with live annotations, snapshots, application sharing, and voice.

    Next Generation Web Architecture: We believe On Track will help move the bar higher for what users can expect from all web applications, most notably ones that involve real-time activity. On Track is built from the ground up with an innovative, real-time architecture that leverages the extensive push capabilities of our server. Whether you are receiving a new message, viewing where crowds of people are collaborating, or doing live annotation on a document with a set of people, that information comes to you immediately, without refreshes or moving back and forth between pages. We have leveraged this core architecture across the product experience and raised the user-experience bar for this type of application. These capabilities are also based on open standards and protocols, and are fully extensible by anyone, enabling sophisticated integrations to be created with a wide variety of both legacy and next-generation applications.

    Agile Product Development: As a product team we operate using continuous feedback and modified agile development methodologies. We have thousands of active internal Oracle users who have helped pilot our product for critical business functions, and the On Track product development team uses our product as our primary vehicle for all our collaboration. Additionally, we have been working with early-access customers who are adopting our technology and providing us valuable feedback, which our process has rapidly turned into improvements to our software. On Track's agility extends to our server as well, which is built to scale, and is very simple to install and configure.

    We are pleased to make this product announcement and encourage you to join us on Facebook or follow us on Twitter, as well as checking back here for the latest product information.

    Read the article

  • Transferring data from Salesforce using Apex Data Loader to Oracle

    - by Barret
    While attempting to transfer data from Salesforce to Oracle using the Apex Data Loader, I keep getting the following error:

        26937 [databaseAccountExtract] FATAL com.salesforce.dataloader.dao.database.DatabaseContext
          - Error getting value for SQL parameter: nkey__c. Please make sure that the value exists
            in the configuration file or is passed in. Database configuration: insertAccount.

    The database-conf.xml has the following beans:

        <bean id="insertAccount" class="com.salesforce.dataloader.dao.database.DatabaseConfig" singleton="true">
            <property name="sqlConfig" ref="insertAccountSql"/>
            <property name="dataSource" ref="dbDataSource"/>
        </bean>
        <bean id="insertAccountSql" class="com.salesforce.dataloader.dao.database.SqlConfig" singleton="true">
            <property name="sqlString">
                <value>
                    INSERT INTO VANTROPO.SF_ACCOUNTCHANNEL (nkey__c)
                    VALUES (@nkey__c@)
                </value>
            </property>
            <property name="sqlParams">
                <map>
                    <entry key="nkey__c" value="java.lang.String"/>
                </map>
            </property>
        </bean>

    The SDL (mapping file) has the following values:

        # Account Insert Mapping values for query from Salesforce (left) and insert/update to Oracle (right)
        # SalesforceFieldName=OracleFieldName
        nkey__c=NKEY__C

    Any help appreciated.
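
    For what it's worth, one quick way to separate a mapping/configuration problem from an Oracle-side problem is to run the same parameterized insert directly over JDBC. The sketch below is only a connectivity and permission sanity check, not a Data Loader fix; the host, port, SID and credentials are placeholders, and only the table name comes from the question.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class NkeyInsertCheck {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details - adjust host, port, SID and credentials.
                String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";
                try (Connection con = DriverManager.getConnection(url, "vantropo", "secret");
                     PreparedStatement ps = con.prepareStatement(
                             "INSERT INTO VANTROPO.SF_ACCOUNTCHANNEL (nkey__c) VALUES (?)")) {
                    ps.setString(1, "test-key-001"); // same java.lang.String type declared in sqlParams
                    ps.executeUpdate();              // auto-commit is on by default for DriverManager connections
                }
            }
        }

    If this insert succeeds, the problem is most likely in how the Data Loader resolves the nkey__c parameter from the mapping, rather than in the database itself.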

    Read the article

  • Perl DBI doesn't work with Oracle 11g

    - by John
    I am getting the following error connecting to an Oracle 11g database using a simple Perl script:

        failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows)
        and or NLS settings, permissions, etc. at

    The script is as follows:

        #!/usr/local/bin/perl
        use strict;
        use DBI;

        if ($#ARGV < 3) {
            print "Usage: perl testDbAccess.pl dataBaseUser dataBasePassword SID dataBasePort\n";
            exit 0;
        }

        my ($user, $pwd, $sid, $port) = @ARGV;
        my $host = `hostname`;
        my $dbh;
        my $sth;
        my $dbname = "dbi:Oracle:HOST=$host;SID=$sid;PORT=$port";

        openDbConnection();
        closeDbConnection();

        sub openDbConnection() {
            $dbh = DBI->connect ($dbname, $user ,$pwd , { RaiseError => 1})
                || die "Database connection not made: $DBI::errstr";
        }

        sub closeDbConnection() {
            #$sth->finish();
            $dbh->disconnect();
        }

    Anyone seen this problem before? /john
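
    One way to narrow this down: the OCIEnvNlsCreate failure is raised by the Oracle client libraries before the network is ever touched, so it usually points at the environment (ORACLE_HOME, LD_LIBRARY_PATH, NLS settings) seen by the Perl process rather than at the database. A thin-driver JDBC check needs no Oracle client installation at all, which makes it a handy cross-check; this is just a sketch with placeholder host, SID and credentials, not part of the original question. (Incidentally, the backticked `hostname` call in the script returns a trailing newline unless chomp-ed, which can corrupt the DSN.)

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ThinConnectCheck {
            public static void main(String[] args) throws Exception {
                // Placeholders: use the same host, port and SID the Perl script is given.
                String url = "jdbc:oracle:thin:@dbhost:1521:ORCL11";
                try (Connection con = DriverManager.getConnection(url, "user", "password");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT banner FROM v$version")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1)); // prints the database version banners
                    }
                }
            }
        }

    If the thin connection works, the listener and credentials are fine and the hunt can focus on the client-side OCI environment.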

    Read the article

  • Parsing SOAP XML in Oracle

    - by user258587
    Hi, I am new to Oracle and I am working on something that needs to parse a SOAP request and save the address into DB tables. I am using the XML parser in Oracle (XMLType) with XPath, but I am struggling because I can't figure out how to parse the SOAP request when it has multiple namespaces. Could anyone give me an example? Thanks in advance!

    Edit: it would be a typical SOAP request similar to the one below.

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:soap="http://soap.service.****.com">
            <soapenv:Header />
            <soapenv:Body>
                <soap:UpdateElem>
                    <soap:request>
                        <soap:att1>123456789</soap:att1>
                        <soap:att2 xsi:nil="true" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" />
                        <soap:att3>L</soap:att3>
                        .....
                    </soap:request>
                </soap:UpdateElem>
            </soapenv:Body>
        </soapenv:Envelope>

    I need to retrieve parameters att1, att2, ... and save them into a DB table.
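
    The usual stumbling block with a multi-namespace document is that every prefix used in the XPath has to be explicitly bound to its namespace URI, and the same idea applies whether the XPath runs inside the database or not. Purely as an illustration of those prefix bindings, here is the extraction done with the JDK's namespace-aware XPath API; the envelope literal is a trimmed copy of the sample above, with a made-up namespace URI standing in for the masked one.

        import java.io.StringReader;
        import java.util.Iterator;
        import javax.xml.XMLConstants;
        import javax.xml.namespace.NamespaceContext;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.xml.sax.InputSource;

        public class SoapAttExtractor {
            public static void main(String[] args) throws Exception {
                String soap =
                    "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\""
                  + " xmlns:soap=\"http://soap.service.example.com\">" // placeholder for the masked URI
                  + "<soapenv:Body><soap:UpdateElem><soap:request>"
                  + "<soap:att1>123456789</soap:att1><soap:att3>L</soap:att3>"
                  + "</soap:request></soap:UpdateElem></soapenv:Body></soapenv:Envelope>";

                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true); // essential, otherwise the prefixes mean nothing
                Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(soap)));

                XPath xpath = XPathFactory.newInstance().newXPath();
                xpath.setNamespaceContext(new NamespaceContext() { // bind each prefix to its URI
                    public String getNamespaceURI(String prefix) {
                        if ("soapenv".equals(prefix)) return "http://schemas.xmlsoap.org/soap/envelope/";
                        if ("soap".equals(prefix))    return "http://soap.service.example.com";
                        return XMLConstants.NULL_NS_URI;
                    }
                    public String getPrefix(String uri) { return null; }
                    public Iterator<String> getPrefixes(String uri) { return null; }
                });

                String att1 = xpath.evaluate(
                    "/soapenv:Envelope/soapenv:Body/soap:UpdateElem/soap:request/soap:att1", doc);
                System.out.println("att1 = " + att1); // expected: 123456789
            }
        }

    Inside the database the exercise is the same: supply the prefix-to-URI pairs along with the XPath (for example via the namespace argument that the XMLType extract-style functions accept) so that soapenv and soap resolve to the URIs declared in the envelope.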

    Read the article

  • Mono ASP.NET Oracle Connection

    - by bladepit
    Hello everybody, when I try to connect to Oracle I get the following error:

        libclntsh.so
        Description: HTTP 500. Error processing request.
        Stack Trace:
        System.DllNotFoundException: libclntsh.so
          at (wrapper managed-to-native) System.Data.OracleClient.Oci.OciCalls/OciNativeCalls.OCIEnvCreate (intptr&,System.Data.OracleClient.Oci.OciEnvironmentMode,intptr,intptr,intptr,intptr,int,intptr) <0x0005d
          at System.Data.OracleClient.Oci.OciCalls.OCIEnvCreate (intptr&,System.Data.OracleClient.Oci.OciEnvironmentMode,intptr,intptr,intptr,intptr,int,intptr) [0x00000] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient.Oci/OciCalls.cs:738
          at System.Data.OracleClient.Oci.OciEnvironmentHandle..ctor (System.Data.OracleClient.Oci.OciEnvironmentMode) [0x00013] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient.Oci/OciEnvironmentHandle.cs:35
          at System.Data.OracleClient.Oci.OciGlue.CreateConnection (System.Data.OracleClient.OracleConnectionInfo) [0x00000] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient/OciGlue.cs:86
          at System.Data.OracleClient.OracleConnectionPoolManager.CreateConnection (System.Data.OracleClient.OracleConnectionInfo) [0x00006] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient/OracleConnectionPoolManager.cs:57
          at System.Data.OracleClient.OracleConnectionPool.CreateConnection () [0x0000e] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient/OracleConnectionPool.cs:97
          at System.Data.OracleClient.OracleConnectionPool.GetConnection () [0x000ba] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient/OracleConnectionPool.cs:74
          at System.Data.OracleClient.OracleConnection.Open () [0x00061] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient/OracleConnection.cs:410
          at WebServer.Controllers.HomeController.Index () [0x00006] in /home/bhcweb/Projects/Controllers/HomeController.cs:19
          at (wrapper dynamic-method) System.Runtime.CompilerServices.ExecutionScope.lambda_method (System.Runtime.CompilerServices.ExecutionScope,System.Web.Mvc.ControllerBase,object[]) <0x00080
          at System.Web.Mvc.ActionMethodDispatcher.Execute (System.Web.Mvc.ControllerBase,object[]) <0x0001b
          at System.Web.Mvc.ReflectedActionDescriptor.Execute (System.Web.Mvc.ControllerContext,System.Collections.Generic.IDictionary`2<string, object>) <0x000fd>
          at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod (System.Web.Mvc.ControllerContext,System.Web.Mvc.ActionDescriptor,System.Collections.Generic.IDictionary`2) <0x0001c
          at System.Web.Mvc.ControllerActionInvoker/c_AnonStoreyB.<m_E () <0x00067
          at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter (System.Web.Mvc.IActionFilter,System.Web.Mvc.ActionExecutingContext,System.Func`1) <0x000c4

    What is my problem here? I have read that I have to set ORACLE_HOME and LD_LIBRARY_PATH. If I echo $ORACLE_HOME and $LD_LIBRARY_PATH, the path I have set comes out: /usr/lib/oracle/xe/app/oracle/product/10.2.0/client/lib. This is the path where libclntsh.so lives. Is this right? Best regards, bladepit

    Read the article

  • Ibator didn't generate Oracle varchar2 field

    - by bugbug
    I have a table APP_REQ_APPROVE_COMPARE with the following columns:

        "ID"               NUMBER NOT NULL ENABLE,
        "TRACK_NO"         VARCHAR2(20 BYTE) NOT NULL ENABLE,
        "REQ_DATE"         DATE NOT NULL ENABLE,
        "OFFCODE"          CHAR(6 BYTE) NOT NULL ENABLE,
        "COMPARE_CASE_ID"  NUMBER NOT NULL ENABLE,
        "VEHICLE_NAME"     VARCHAR2(100 BYTE),
        "ENGINE_NO"        VARCHAR2(100 BYTE),
        "BODY_NO"          VARCHAR2(100 BYTE),
        "HOLD_SHIP"        NUMBER,
        "OWNERSHIP"        VARCHAR2(200 BYTE),
        "RENT_NAME"        VARCHAR2(200 BYTE),
        "CONTRACT"         VARCHAR2(100 BYTE),
        "CONTRACT_NO"      VARCHAR2(100 BYTE),
        "CONTRACT_DATE"    DATE,
        "ISLAWBREAKERRENT" CHAR(1 BYTE) NOT NULL ENABLE,
        "MISTAKE_DETAIL"   VARCHAR2(4000 BYTE),
        "COMPARE_REASON"   VARCHAR2(4000 BYTE),
        "CREATE_BY"        NUMBER NOT NULL ENABLE,
        "CREATE_ON"        DATE DEFAULT SYSDATE NOT NULL ENABLE,
        "UPDATE_BY"        NUMBER,
        "UPDATE_ON"        DATE,

    When I generate a Java bean using Ibator, I don't find trackNo, vehicleName, ... (all the fields defined as VARCHAR2). What is the problem in my case? Here is my Ibator configuration file:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE ibatorConfiguration PUBLIC "-//Apache Software Foundation//DTD Apache iBATIS Ibator Configuration 1.0//EN"
            "http://ibatis.apache.org/dtd/ibator-config_1_0.dtd">
        <ibatorConfiguration>
            <classPathEntry location="/dos/connector/oracle_jdbc.jar"/>
            <ibatorContext id="autoPerson" defaultModelType="flat" targetRuntime="Ibatis2Java2">
                <jdbcConnection connectionURL="jdbc:oracle:thin:@192.168.42.144:1521:orcl"
                                driverClass="oracle.jdbc.driver.OracleDriver"
                                userId="user" password="password"/>
                <javaModelGenerator targetPackage="com.ko.model" targetProject="FormConfig">
                    <property name="enableSubPackages" value="true"/>
                    <property name="trimStrings" value="true"/>
                </javaModelGenerator>
                <sqlMapGenerator targetPackage="com.ko.map" targetProject="FormConfig">
                    <property name="enableSubPackages" value="true"/>
                </sqlMapGenerator>
                <daoGenerator targetPackage="com.ko.model.dao" type="SPRING" targetProject="FormConfig"
                              implementationPackage="com.ko.model.dao.impl">
                    <property name="enableSubPackges" value="true"/>
                    <property name="methodNameCalculator" value="extended"/>
                </daoGenerator>
                <table tableName="APP_REQ_APPROVE_COMPARE" domainObjectName="AppReqApproveCompare"/>
            <ibatorConfiguration>
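
    For comparison, with defaultModelType="flat" and trimStrings enabled, the VARCHAR2 columns would normally surface as plain String properties on the generated model class. The snippet below is a hand-written sketch of the expected shape (it is not actual Ibator output, and the exact Java types chosen for the NUMBER/DATE columns depend on the type resolver); it is mainly useful as a checklist of what should be appearing.

        import java.math.BigDecimal;
        import java.util.Date;

        // Hand-written sketch of the expected flat-model bean -- not generated code.
        public class AppReqApproveCompare {
            private BigDecimal id;      // ID           NUMBER
            private String trackNo;     // TRACK_NO     VARCHAR2(20)
            private Date reqDate;       // REQ_DATE     DATE
            private String vehicleName; // VEHICLE_NAME VARCHAR2(100)
            private String engineNo;    // ENGINE_NO    VARCHAR2(100)

            public String getTrackNo() { return trackNo; }
            public void setTrackNo(String trackNo) { this.trackNo = trackNo; }

            public String getVehicleName() { return vehicleName; }
            public void setVehicleName(String vehicleName) { this.vehicleName = vehicleName; }

            // ...remaining fields and accessors follow the same pattern...
        }

    If properties like trackNo and vehicleName are missing entirely while the NUMBER/DATE ones are present, the column metadata reported by the JDBC driver jar configured in classPathEntry would be worth checking before the model settings.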

    Read the article

  • Add new row in a databound form with a Oracle Sequence as the primary key

    - by Ranhiru
    I am connecting C# with Oracle 11g. I have a DataTable which I fill using an OracleDataAdapter:

        OracleDataAdapter da;
        DataTable dt = new DataTable();
        da = new OracleDataAdapter("SELECT * FROM Author", con);
        da.Fill(dt);

    I have a few text boxes that I have databound to various columns in the data table:

        txtAuthorID.DataBindings.Add("Text", dt, "AUTHORID");
        txtFirstName.DataBindings.Add("Text", dt, "FIRSTNAME");
        txtLastName.DataBindings.Add("Text", dt, "LASTNAME");
        txtAddress.DataBindings.Add("Text", dt, "ADDRESS");
        txtTelephone.DataBindings.Add("Text", dt, "TELEPHONE");
        txtEmailAddress.DataBindings.Add("Text", dt, "EMAIL");

    I also have a DataGridView below the text boxes, showing the contents of the DataTable:

        dgvAuthor.DataSource = dt;

    Now when I want to add a new row, I do

        bm.AddNew();

    where bm is defined in Form_Load as

        BindingManagerBase bm;
        bm = this.BindingContext[dt];

    And when the save button is clicked, after all the information is entered and validated, I do

        this.BindingContext[dt].EndCurrentEdit();
        try
        {
            da.Update(dt);
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }

    However, the problem is that when I normally enter a row into the database (using SQL*Plus), I use my_pk_sequence.nextval for the primary key. How do I specify that when I add a new row with this method? I catch this exception:

        ORA-01400: cannot insert NULL into ("SYSMAN"."AUTHOR"."AUTHORID")

    which is obvious, because nothing was specified for the primary key. How do I get around this? Thanx a lot in advance :)
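
    Setting the ADO.NET specifics aside, the two usual ways to handle a sequence-based primary key are (a) a BEFORE INSERT trigger on the table that fills AUTHORID from the sequence when it is null, or (b) selecting my_pk_sequence.NEXTVAL yourself and putting it into the new row before the adapter update runs. The sketch below shows the shape of option (b) in JDBC terms only because that is the form used elsewhere on this page; the connection details and column list are placeholders, while the sequence and table names come from the question. In the C# code the same idea means assigning the fetched value to the AUTHORID column of the new DataRow before calling da.Update(dt).

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class SequenceInsertSketch {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:oracle:thin:@dbhost:1521:ORCL"; // placeholder connection details
                try (Connection con = DriverManager.getConnection(url, "sysman", "password")) {
                    long newId;
                    // 1. Ask the sequence for the next primary-key value.
                    try (Statement st = con.createStatement();
                         ResultSet rs = st.executeQuery("SELECT my_pk_sequence.NEXTVAL FROM dual")) {
                        rs.next();
                        newId = rs.getLong(1);
                    }
                    // 2. Use that value explicitly instead of leaving the key null.
                    try (PreparedStatement ps = con.prepareStatement(
                            "INSERT INTO author (authorid, firstname, lastname) VALUES (?, ?, ?)")) {
                        ps.setLong(1, newId);
                        ps.setString(2, "Jane");
                        ps.setString(3, "Doe");
                        ps.executeUpdate();
                    }
                }
            }
        }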

    Read the article

  • Oracle doesn't remove cursors after closing result set

    - by Vladimir
    Note: we reuse single connection.

        ************************************************
        public Connection connection() {
            try {
                if ((connection == null) || (connection.isClosed())) {
                    if (connection != null) log.severe("Connection was closed !");
                    connection = DriverManager.getConnection(jdbcURL, username, password);
                }
            } catch (SQLException e) {
                log.severe("can't connect: " + e.getMessage());
            }
            return connection;
        }

        **************************************************
        public IngisObject[] select(String query, String idColumnName, String[] columns) {
            Connection con = connection();
            Vector<IngisObject> objects = new Vector<IngisObject>();
            try {
                Statement stmt = con.createStatement();
                String sql = query;
                ResultSet rs = stmt.executeQuery(sql); // oracle increases cursors count here
                while (rs.next()) {
                    IngisObject o = new IngisObject("New Result");
                    o.setIdColumnName(idColumnName);
                    o.setDatabase(this);
                    for (String column : columns) o.attrs().put(column, rs.getObject(column));
                    objects.add(o);
                }
                rs.close();   // oracle don't decrease cursor count here, while it's expected
                stmt.close();
            } catch (SQLException ex) {
                System.out.println(query);
                ex.printStackTrace();
            }
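
    Two things may be worth checking here. First, if executeQuery or the row loop throws, the statement and result set in this version are never closed, and every leaked statement really does hold a cursor; a defensive rewrite of the same method is sketched below (it assumes Java 7+ for try-with-resources and reuses the IngisObject pieces from the post). Second, the view typically used to count cursors, v$open_cursor, also lists cursors that are merely cached by the session after a close, so the count not dropping immediately after rs.close()/stmt.close() is not by itself proof of a leak; a steadily growing count, or an ORA-01000 error, is.

        // Sketch only: the same select, but cleanup is guaranteed even when an exception is thrown.
        public IngisObject[] select(String query, String idColumnName, String[] columns) {
            Vector<IngisObject> objects = new Vector<IngisObject>();
            try (Statement stmt = connection().createStatement();
                 ResultSet rs = stmt.executeQuery(query)) {   // both closed automatically
                while (rs.next()) {
                    IngisObject o = new IngisObject("New Result");
                    o.setIdColumnName(idColumnName);
                    o.setDatabase(this);
                    for (String column : columns) {
                        o.attrs().put(column, rs.getObject(column));
                    }
                    objects.add(o);
                }
            } catch (SQLException ex) {
                System.out.println(query);
                ex.printStackTrace();
            }
            return objects.toArray(new IngisObject[objects.size()]);
        }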

    Read the article

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third party hosting organisation. This is painful. So I want a local database which is in some way representative of production on which I can try out different things. Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
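
    The mechanism generally used for exactly this is DBMS_STATS: copy the production optimizer statistics into a statistics table, move that table across, and import it on the development side, so the development optimizer sees production-like numbers without production data volumes. You would normally drive it from SQL*Plus; the JDBC wrapper below is just a sketch in the same spirit (the schema name, statistics-table name and connection details are all placeholders), and the export half is something the hosting organisation could be asked to run.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class ExportSchemaStats {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details for the source (production) database.
                String url = "jdbc:oracle:thin:@prodhost:1521:PROD";
                try (Connection con = DriverManager.getConnection(url, "appowner", "password")) {
                    // 1. Create a holding table for the statistics.
                    try (CallableStatement cs = con.prepareCall(
                            "begin dbms_stats.create_stat_table(ownname => ?, stattab => ?); end;")) {
                        cs.setString(1, "APPOWNER");
                        cs.setString(2, "STATS_FOR_DEV");
                        cs.execute();
                    }
                    // 2. Copy the schema's optimizer statistics into that table.
                    try (CallableStatement cs = con.prepareCall(
                            "begin dbms_stats.export_schema_stats(ownname => ?, stattab => ?); end;")) {
                        cs.setString(1, "APPOWNER");
                        cs.setString(2, "STATS_FOR_DEV");
                        cs.execute();
                    }
                }
                // The STATS_FOR_DEV table can then be exported/imported like any other table and
                // loaded on development with dbms_stats.import_schema_stats.
            }
        }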

    Read the article

  • DBD::Oracle and utf8 issue

    - by goe
    Hi All, I have a problem where my Perl code using the latest DBD::Oracle on Perl v5.8.8 throws an exception when I try to insert characters like 'ñ'.

    Exception:

        DBD::Oracle::db do failed: ORA-01756: quoted string not properly terminated (DBD ERROR: OCIStmtPrepare)

    My $ENV{NLS_LANG} is set to 'AMERICAN_AMERICA.AL32UTF8'.

    These are the DB parameters based on "SELECT * FROM NLS_DATABASE_PARAMETERS":

        1  NLS_LANGUAGE             AMERICAN
        2  NLS_TERRITORY            AMERICA
        3  NLS_CURRENCY             $
        4  NLS_ISO_CURRENCY         AMERICA
        5  NLS_NUMERIC_CHARACTERS   .,
        6  NLS_CHARACTERSET         AL32UTF8
        7  NLS_CALENDAR             GREGORIAN
        8  NLS_DATE_FORMAT          DD-MON-RR
        9  NLS_DATE_LANGUAGE        AMERICAN
        10 NLS_SORT                 BINARY
        11 NLS_TIME_FORMAT          HH.MI.SSXFF AM
        12 NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
        13 NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZR
        14 NLS_TIMESTAMP_TZ_FORMAT  DD-MON-RR HH.MI.SSXFF AM TZR
        15 NLS_DUAL_CURRENCY        $
        16 NLS_COMP                 BINARY
        17 NLS_LENGTH_SEMANTICS     BYTE

    These are the Perl parameters based on "$db->ora_nls_parameters()":

        $VAR1 = {
            'NLS_LANGUAGE' => 'AMERICAN',
            'NLS_TIME_TZ_FORMAT' => 'HH.MI.SSXFF AM TZR',
            'NLS_SORT' => 'BINARY',
            'NLS_NUMERIC_CHARACTERS' => '.,',
            'NLS_TIME_FORMAT' => 'HH.MI.SSXFF AM',
            'NLS_ISO_CURRENCY' => 'AMERICA',
            'NLS_COMP' => 'BINARY',
            'NLS_CALENDAR' => 'GREGORIAN',
            'NLS_DATE_FORMAT' => 'DD-MON-RR',
            'NLS_DATE_LANGUAGE' => 'AMERICAN',
            'NLS_TIMESTAMP_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM',
            'NLS_TERRITORY' => 'AMERICA',
            'NLS_LENGTH_SEMANTICS' => 'BYTE',
            'NLS_NCHAR_CHARACTERSET' => 'AL16UTF16',
            'NLS_DUAL_CURRENCY' => '$',
            'NLS_TIMESTAMP_TZ_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM TZR',
            'NLS_NCHAR_CONV_EXCP' => 'FALSE',
            'NLS_CHARACTERSET' => 'AL32UTF8',
            'NLS_CURRENCY' => '$'
        };

    Here are some other strange facts:
    If I set NLS_LANG to 'AMERICAN_AMERICA.UTF8', the insert executes fine with the 'ñ' character.
    If I leave NLS_LANG as 'AMERICAN_AMERICA.AL32UTF8' but use 'Ñ', the insert runs fine as well.

    Read the article

  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    Weird, but I am investigating Oracle Coherence as a substitute for a distributed cache. My primary problem is that we don't have a distributed cache as such in our app right now. That's my major concern, and that's what I want to implement. So, let's say I take a machine and start a new (third) reading process: it will be able to connect to the cache, listen to the cache, and will have a full copy of the cache triplicated (as of now it's duplicated). Now that's a waste from a common-sense standpoint too. The size of the cache is 2 GB, and not being distributed is limiting us. That brings me to Coherence. But we don't have a database as a persistent store either; we have archival processes as our persistent store (90 days' worth of data). OK, now multiply that by somewhere around 2 GB * 90 (that's the bare minimum we want to keep). During my preliminary/intermediate analysis of Coherence as a solution, a (supposedly) brilliant thought crossed my mind: why not have this as persistent storage behind my distributed cache? Does Oracle Coherence support that? I would get rid of the archiving infrastructure too (I hate daemon archiving processes). For some strange reasons, I don't want to go to the DB to replace those flat files. What say you? Can Coherence be my savior? Any other stable alternatives? (Coherence is imposed on me by the big guys, FYI.)
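
    On the "can Coherence use my own files as the persistent store" part: Coherence lets you plug arbitrary persistence in behind a distributed cache by wiring a CacheStore implementation into a read-write backing map, so a file-based store is at least mechanically possible (whether it is wise at 2 GB x 90 days is a separate sizing question). Below is a minimal, illustrative sketch of such a store, one serialized file per key; the class name and directory are invented, and the cache-configuration XML that wires it in is not shown.

        import com.tangosol.net.cache.CacheStore;

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.util.Collection;
        import java.util.HashMap;
        import java.util.Map;

        // Illustrative only: persists each cache entry as a serialized file named after its key.
        public class SerializedFileCacheStore implements CacheStore {

            private final File dir;

            public SerializedFileCacheStore(String directory) {
                dir = new File(directory);
                dir.mkdirs();
            }

            private File fileFor(Object key) {
                return new File(dir, key.toString() + ".ser");
            }

            public Object load(Object key) {
                File f = fileFor(key);
                if (!f.exists()) return null;
                try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
                    return in.readObject();
                } catch (Exception e) {
                    throw new RuntimeException("load failed for " + key, e);
                }
            }

            public Map loadAll(Collection keys) {
                Map result = new HashMap();
                for (Object key : keys) {
                    Object value = load(key);
                    if (value != null) result.put(key, value);
                }
                return result;
            }

            public void store(Object key, Object value) {
                try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(fileFor(key)))) {
                    out.writeObject(value);
                } catch (IOException e) {
                    throw new RuntimeException("store failed for " + key, e);
                }
            }

            public void storeAll(Map entries) {
                for (Object o : entries.entrySet()) {
                    Map.Entry entry = (Map.Entry) o;
                    store(entry.getKey(), entry.getValue());
                }
            }

            public void erase(Object key) {
                fileFor(key).delete();
            }

            public void eraseAll(Collection keys) {
                for (Object key : keys) erase(key);
            }
        }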

    Read the article

  • Ajax and JSF 1.1 using hidden iframe with "proxy forms", what do you think about this development st

    - by Steel Plume
    Hi, currently I am still on the JSF 1.1 specs, so I am trying to simplify what is too complex for me :p, managing backing beans with conflicting navigation rules, external params breaking rules, and so on. For example, when I need a backing bean used by other "views" I simply look it up using FacesContext inside other backing beans, but often it's too wired up to JSF navigation/initialization rules to be really usable; and of course, the simpler it is, the more useful FacesContext becomes. So with only a bit of cross-browser JavaScript (simply a form copy and a read-write on a "proxy" form), I create a sort of proxy form inside the main user page (totally disassociated from JSF navigation rules, but using JSF taglibs). Ajax gives me flexibility in the user interaction, but data is always managed by JSF. Practically, I delegate all "fictitious" user actions to a hidden "iframe" which builds up all the needed forms according to JSF rules; then a JavaScript simply clones its form output and puts it into the user view level (CSS for showing/hiding real command buttons and making it pretty). The user plays around, and when he clicks submit, a script copies all "proxied" form values into the real JSF form inside the "iframe", which invokes the real submit of the form; what it returns is obviously dependent on your choice. Now JSF is really a pleasure :-p My real interest is to know what your alternative strategies are for using pure Ajax with JSF 1.1 without adopting a middle layer like Ajax4jsf and others: all good choices, but too much "plugin" and not enough spec.

    Read the article

  • Invoking a SOAP ( Web Services ) from ORACLE DB

    - by Mousarules
    Dears, kindly note that I'm trying to invoke a SOAP web service from the Oracle DB using PL/SQL. After doing some investigation, it appears that I have to use the UTL_HTTP package, but it didn't work for me! Kindly advise where exactly I should place the following SOAP request in PL/SQL so that it can be invoked. Is it possible?

    SOAP 1.1: the following is a sample SOAP 1.1 request and response. The placeholders shown need to be replaced with actual values.

        POST /gmgwebservice/service.asmx HTTP/1.1
        Host: bulk.umniah.com
        Content-Type: text/xml; charset=utf-8
        Content-Length: length
        SOAPAction: "http://tempuri.org/SendSMS"

        <SendSMS xmlns="http://tempuri.org/">
            <UserName>string</UserName>
            <Password>string</Password>
            <MessageBody>string</MessageBody>
            <Sender>string</Sender>
            <Destination>string</Destination>
        </SendSMS>

        HTTP/1.1 200 OK
        Content-Type: text/xml; charset=utf-8
        Content-Length: length

        <SendSMSResponse xmlns="http://tempuri.org/">
            <SendSMSResult>string</SendSMSResult>
        </SendSMSResponse>

    This web service belongs to a web site called Bulk Messaging; the site sends an SMS to a specific mobile number after some text boxes are filled in. I need this to be done from Oracle Forms when a specific action (a job) occurs, but I don't know how to use it inside my PL/SQL code. Hope that it's clear. Is there something else I have to mention?

    Read the article

  • Batch insert mode with hibernate and oracle: seems to be dropping back to slow mode silently

    - by Chris
    I'm trying to get batch inserts working with Hibernate into Oracle, according to what I've read here: http://docs.jboss.org/hibernate/core/3.3/reference/en/html/batch.html, but with my benchmarking it doesn't seem any faster than before. Can anyone suggest a way to prove whether Hibernate is using batch mode or not? I hear that there are numerous reasons why it may silently drop into normal mode (e.g. associations and generated ids), so is there some way to find out why it has gone non-batch?

    My hibernate.cfg.xml contains this line, which I believe is all I need to enable batch mode:

        <property name="jdbc.batch_size">50</property>

    My insert code looks like this:

        List<LogEntry> entries = ...a list of 100 LogEntry data classes...
        Session sess = sessionFactory.getCurrentSession();
        for (LogEntry e : entries) {
            sess.save(e);
        }
        sess.flush();
        sess.clear();

    My LogEntry class has no associations; the only interesting field is the id:

        @Entity
        @Table(name="log_entries")
        public class LogEntry {
            @Id
            @GeneratedValue
            public Long id;
            // ...other fields - strings and ints...
        }

    However, since it is Oracle, I believe the @GeneratedValue will use the sequence generator. And I believe that only the 'identity' generator will stop bulk inserts. So if anyone can explain why it isn't running in batch mode, or how I can find out for sure whether it is or isn't in batch mode, or find out why Hibernate is silently dropping back to slow mode, I'd be most grateful. Thanks
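
    A few notes, with the usual caveats. Identity-style id generation disables JDBC batching, but a sequence generator does not; giving the sequence an allocation size lets Hibernate assign ids in memory and keep batching, and hibernate.order_inserts=true can help when several entity types are mixed. For proving what is actually happening, a commonly used check is to enable DEBUG logging for the org.hibernate.jdbc category, whose batcher logs the executed batch size, which is a more direct test than wall-clock benchmarks. The mapping sketch below shows an explicit sequence generator; the sequence name is an assumption and must match whatever exists in the schema.

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.GenerationType;
        import javax.persistence.Id;
        import javax.persistence.SequenceGenerator;
        import javax.persistence.Table;

        @Entity
        @Table(name = "log_entries")
        public class LogEntry {

            @Id
            @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "log_entries_seq")
            @SequenceGenerator(name = "log_entries_seq",
                               sequenceName = "LOG_ENTRIES_SEQ", // assumed sequence name
                               allocationSize = 50)              // matches the JDBC batch size
            public Long id;

            // ...other fields - strings and ints...
        }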

    Read the article

  • Forms blank when rendering a partial when using a collection of objects. Help!

    - by dustmoo
    Alright, I know my title is a little obscure, but it best describes the problem I am having. Essentially, I have a list of users and want to be able to edit their information in-line using AJAX. Since the users are showing up in rows, I am using a partial to render the data and the forms (which will be hidden initially by the AJAX). However, when the rows are rendered, currently only the last item has its form fields populated. I suspect this has something to do with the fact that all the form fields have the same ids and it is confusing the DOM, but I don't know how to make sure the ids are unique. Here is a small example.

    In my view:

        <%= render :partial => 'shared/user', :collection => @users %>

    My partial (broken down to just the form); note that I am using the local variable "user":

        <% form_for user, :html => {:multipart => true} do |f| -%>
            <%= f.label :name, "Name*" %>
            <%= f.text_field :title, :class => "input" %>

            <%= f.label :Address, "Address" %>
            <%= f.text_field :address, :class => "input" %>

            <%= f.label :description, "Description*" %>
            <%= f.text_area :description, :class => "input" %>
        <% end -%>

    When the HTML is rendered, each form has a unique id (based on the id of the user) but the elements themselves all have the same id, and only the last user form is actually getting populated with values. Does anyone have any ideas?? :) Thanks in advance!

    Read the article

  • Oracle clients dead wait

    - by Macroideal
    Hi all, I ran into a problem yesterday. Maybe it's because it was April 1st... but it did exist. I have 3 PCs in a remote area: two clients and one Oracle server. My app runs separately on the two clients, connecting hourly to the Oracle database. My clients worked well before April 1st, but suddenly my app on the client machines went down. First of all, I did not change any configuration. I use libsqlora8 to connect to the server, and I end up in a dead loop inside the library. I tried sqlplus, but it just hangs in my shell terminal as if it had hit an infinite loop: no return until I pressed Ctrl+C. The reason, I guess, is an "infinite loop" somewhere. BTW, when I used my local PC to connect to the server, it worked well. Just from this phenomenon, we can see the problem lies in the client machines. I checked the configuration files on both the local machine and the client machines: they are identical. Have you met a problem like this before? I hope it's not due to April 1st.

    Read the article

  • Novo Suporte para Combinação e Minificação de Arquivos JavaScript e CSS (Série de posts sobre a ASP.NET 4.5)

    - by Leniel Macaferi
    Este é o sexto post de uma série de posts que estou escrevendo sobre a ASP.NET 4.5. Os próximos lançamentos do .NET e Visual Studio incluem vários novos e ótimos recursos e capacidades. Com a ASP.NET 4.5 você vai ver um monte de melhorias realmente emocionantes em formulários da Web ( Web Forms ) e MVC - assim como no núcleo da base de código da ASP.NET, no qual estas tecnologias são baseadas. O post de hoje cobre um pouco do trabalho que estamos realizando para adicionar suporte nativo para combinação e minificação de arquivos JavaScript e CSS dentro da ASP.NET - o que torna mais fácil melhorar o desempenho das aplicações. Este recurso pode ser utilizado por todas as aplicações ASP.NET, incluindo tanto a ASP.NET MVC quanto a ASP.NET Web Forms. Noções básicas sobre Combinação e Minificação Como mais e mais pessoas usando dispositivos móveis para navegar na web, está se tornando cada vez mais importante que os websites e aplicações que construímos tenham um bom desempenho neles. Todos nós já tentamos carregar sites em nossos smartphones - apenas para, eventualmente, desistirmos em meio à frustração porque os mesmos são carregados lentamente através da lenta rede celular. Se o seu site/aplicação carrega lentamente assim, você está provavelmente perdendo clientes em potencial por causa do mau desempenho/performance. Mesmo com máquinas desktop poderosas, o tempo de carregamento do seu site e o desempenho percebido podem contribuir enormemente para a percepção do cliente. A maioria dos websites hoje em dia são construídos com múltiplos arquivos de JavaScript e CSS para separar o código e para manter a base de código coesa. Embora esta seja uma boa prática do ponto de vista de codificação, muitas vezes isso leva a algumas consequências negativas no tocante ao desempenho geral do site. Vários arquivos de JavaScript e CSS requerem múltiplas solicitações HTTP provenientes do navegador - o que pode retardar o tempo de carregamento do site.  Exemplo Simples A seguir eu abri um site local no IE9 e gravei o tráfego da rede usando as ferramentas do desenvolvedor nativas do IE (IE Developer Tools) que podem ser acessadas com a tecla F12. Como mostrado abaixo, o site é composto por 5 arquivos CSS e 4 arquivos JavaScript, os quais o navegador tem que fazer o download. Cada arquivo é solicitado separadamente pelo navegador e retornado pelo servidor, e o processo pode levar uma quantidade significativa de tempo proporcional ao número de arquivos em questão. Combinação A ASP.NET está adicionando um recurso que facilita a "união" ou "combinação" de múltiplos arquivos CSS e JavaScript em menos solicitações HTTP. Isso faz com que o navegador solicite muito menos arquivos, o que por sua vez reduz o tempo que o mesmo leva para buscá-los. A seguir está uma versão atualizada do exemplo mostrado acima, que tira vantagem desta nova funcionalidade de combinação de arquivos (fazendo apenas um pedido para JavaScript e um pedido para CSS): O navegador agora tem que enviar menos solicitações ao servidor. O conteúdo dos arquivos individuais foram combinados/unidos na mesma resposta, mas o conteúdo dos arquivos permanece o mesmo - por isso o tamanho do arquivo geral é exatamente o mesmo de antes da combinação (somando o tamanho dos arquivos separados). Mas note como mesmo em uma máquina de desenvolvimento local (onde a latência da rede entre o navegador e o servidor é mínima), o ato de combinar os arquivos CSS e JavaScript ainda consegue reduzir o tempo de carregamento total da página em quase 20%. 
Em uma rede lenta a melhora de desempenho seria ainda maior. Minificação A próxima versão da ASP.NET também está adicionando uma nova funcionalidade que facilita reduzir ou "minificar" o tamanho do download do conteúdo. Este é um processo que remove espaços em branco, comentários e outros caracteres desnecessários dos arquivos CSS e JavaScript. O resultado é arquivos menores, que serão enviados e carregados no navegador muito mais rapidamente. O gráfico a seguir mostra o ganho de desempenho que estamos tendo quando os processos de combinação e minificação dos arquivos são usados ??em conjunto: Mesmo no meu computador de desenvolvimento local (onde a latência da rede é mínima), agora temos uma melhoria de desempenho de 40% a partir de onde originalmente começamos. Em redes lentas (e especialmente com clientes internacionais), os ganhos seriam ainda mais significativos. Usando Combinação e Minificação de Arquivos dentro da ASP.NET A próxima versão da ASP.NET torna realmente fácil tirar proveito da combinação e minificação de arquivos dentro de projetos, possibilitando ganhos de desempenho como os que foram mostrados nos cenários acima. A forma como ela faz isso, te permite evitar a execução de ferramentas personalizadas/customizadas, como parte do seu processo de construção da aplicação/website - ao invés disso, a ASP.NET adicionou suporte no tempo de execução/runtime para que você possa executar a combinação/minificação dos arquivos dinamicamente (cacheando os resultados para ter certeza de que a performance seja realmente satisfatória). Isto permite uma experiência de desenvolvimento realmente limpa e torna super fácil começar a tirar proveito destas novas funcionalidades. Vamos supor que temos um projeto simples com 4 arquivos JavaScript e 6 arquivos CSS: Combinando e Minificando os Arquivos CSS Digamos que você queira referenciar em uma página todas as folhas de estilo que estão dentro da pasta "Styles" mostrada acima. Hoje você tem que adicionar múltiplas referências para os arquivos CSS para obter todos eles - o que se traduziria em seis requisições HTTP separadas: O novo recurso de combinação/minificação agora permite que você combine e minifique todos os arquivos CSS da pasta Styles - simplesmente enviando uma solicitação de URL para a pasta (neste caso, "styles"), com um caminho adicional "/css" na URL. Por exemplo:    Isso fará com que a ASP.NET verifique o diretório, combine e minifique os arquivos CSS que estiverem dentro da pasta, e envie uma única resposta HTTP para o navegador com todo o conteúdo CSS. Você não precisa executar nenhuma ferramenta ou pré-processamento para obter esse comportamento. Isso te permite separar de maneira limpa seus estilos em arquivos CSS separados e condizentes com cada funcionalidade da aplicação mantendo uma experiência de desenvolvimento extremamente limpa - e mesmo assim você não terá um impacto negativo de desempenho no tempo de execução da aplicação. O designer do Visual Studio também vai honrar a lógica de combinação/minificação - assim você ainda terá uma experiência WYSWIYG no designer dentro VS. 
Combinando e Minificando os Arquivos JavaScript Como a abordagem CSS mostrada acima, se quiséssemos combinar e minificar todos os nossos arquivos de JavaScript em uma única resposta, poderíamos enviar um pedido de URL para a pasta (neste caso, "scripts"), com um caminho adicional "/js":   Isso fará com que a ASP.NET verifique o diretório, combine e minifique os arquivos com extensão .js dentro dele, e envie uma única resposta HTTP para o navegador com todo o conteúdo JavaScript. Mais uma vez - nenhuma ferramenta customizada ou etapas de construção foi necessária para obtermos esse comportamento. Este processo funciona em todos os navegadores. Ordenação dos Arquivos dentro de um Pacote Por padrão, quando os arquivos são combinados pela ASP.NET, eles são ordenados em ordem alfabética primeiramente, exatamente como eles são mostrados no Solution Explorer. Em seguida, eles são automaticamente reorganizados de modo que as bibliotecas conhecidas e suas extensões personalizadas, tais como jQuery, MooTools e Dojo sejam carregadas antes de qualquer outra coisa. Assim, a ordem padrão para a combinação dos arquivos da pasta Scripts, como a mostrada acima será: jquery-1.6.2.js jquery-ui.js jquery.tools.js a.js Por padrão, os arquivos CSS também são classificados em ordem alfabética e depois são reorganizados de forma que o arquivo reset.css e normalize.css (se eles estiverem presentes na pasta) venham sempre antes de qualquer outro arquivo. Assim, o padrão de classificação da combinação dos arquivos da pasta "Styles", como a mostrada acima será: reset.css content.css forms.css globals.css menu.css styles.css A ordenação/classificação é totalmente personalizável, e pode ser facilmente alterada para acomodar a maioria dos casos e qualquer padrão de nomenclatura que você prefira. O objetivo com a experiência pronta para uso, porém, é ter padrões inteligentes que você pode simplesmente usar e ter sucesso com os mesmos. Qualquer número de Diretórios/Subdiretórios é Suportado No exemplo acima, nós tivemos apenas uma única pasta "Scripts" e "Styles" em nossa aplicação. Isso funciona para alguns tipos de aplicação (por exemplo, aplicações com páginas simples). Muitas vezes, porém, você vai querer ter múltiplos pacotes/combinações de arquivos CSS/JS dentro de sua aplicação - por exemplo: um pacote "comum", que tem o núcleo dos arquivos JS e CSS que todas as páginas usam, e então arquivos específicos para páginas ou seções que não são utilizados globalmente. Você pode usar o suporte à combinação/minificação em qualquer número de diretórios ou subdiretórios em seu projeto - isto torna mais fácil estruturar seu código de forma a maximizar os benefícios da combinação/minificação dos arquivos. Cada diretório por padrão pode ser acessado como um pacote separado e endereçável através de uma URL.  Extensibilidade para Combinação/Minificação de Arquivos O suporte da ASP.NET para combinar e minificar é construído com extensibilidade em mente e cada parte do processo pode ser estendido ou substituído. Regras Personalizadas Além de permitir a abordagem de empacotamento - baseada em diretórios - que vem pronta para ser usada, a ASP.NET também suporta a capacidade de registrar pacotes/combinações personalizadas usando uma nova API de programação que estamos expondo.  O código a seguir demonstra como você pode registrar um "customscript" (script personalizável) usando código dentro da classe Global.asax de uma aplicação. 
A API permite que você adicione/remova/filtre os arquivos que farão parte do pacote de maneira muito granular:     O pacote personalizado acima pode ser referenciado em qualquer lugar dentro da aplicação usando a referência de <script> mostrada a seguir:     Processamento Personalizado Você também pode substituir os pacotes padrão CSS e JavaScript para suportar seu próprio processamento personalizado dos arquivos do pacote (por exemplo: regras personalizadas para minificação, suporte para Saas, LESS ou sintaxe CoffeeScript, etc). No exemplo mostrado a seguir, estamos indicando que queremos substituir as transformações nativas de minificação com classes MyJsTransform e MyCssTransform personalizadas. Elas são subclasses dos respectivos minificadores padrão para CSS e JavaScript, e podem adicionar funcionalidades extras:     O resultado final desta extensibilidade é que você pode se plugar dentro da lógica de combinação/minificação em um nível profundo e fazer algumas coisas muito legais com este recurso. Vídeo de 2 Minutos sobre Combinação e Minificacão de Arquivos em Ação Mads Kristensen tem um ótimo vídeo de 90 segundo (em Inglês) que demonstra a utilização do recurso de Combinação e Minificação de Arquivos. Você pode assistir o vídeo de 90 segundos aqui. Sumário O novo suporte para combinação e minificação de arquivos CSS e JavaScript dentro da próxima versão da ASP.NET tornará mais fácil a construção de aplicações web performáticas. Este recurso é realmente fácil de usar e não requer grandes mudanças no seu fluxo de trabalho de desenvolvimento existente. Ele também suporta uma rica API de extensibilidade que permite a você personalizar a lógica da maneira que você achar melhor. Você pode facilmente tirar vantagem deste novo suporte dentro de aplicações baseadas em ASP.NET MVC e ASP.NET Web Forms. Espero que ajude, Scott P.S. Além do blog, eu uso o Twitter para disponibilizar posts rápidos e para compartilhar links. Lidar com o meu Twitter é: @scottgu Texto traduzido do post original por Leniel Macaferi.

    Read the article

  • New Bundling and Minification Support (ASP.NET 4.5 Series)

    - by ScottGu
    This is the sixth in a series of blog posts I'm doing on ASP.NET 4.5. The next release of .NET and Visual Studio include a ton of great new features and capabilities.  With ASP.NET 4.5 you'll see a bunch of really nice improvements with both Web Forms and MVC - as well as in the core ASP.NET base foundation that both are built upon. Today’s post covers some of the work we are doing to add built-in support for bundling and minification into ASP.NET - which makes it easy to improve the performance of applications.  This feature can be used by all ASP.NET applications, including both ASP.NET MVC and ASP.NET Web Forms solutions. Basics of Bundling and Minification As more and more people use mobile devices to surf the web, it is becoming increasingly important that the websites and apps we build perform well with them. We’ve all tried loading sites on our smartphones – only to eventually give up in frustration as it loads slowly over a slow cellular network.  If your site/app loads slowly like that, you are likely losing potential customers because of bad performance.  Even with powerful desktop machines, the load time of your site and perceived performance can make an enormous customer perception. Most websites today are made up of multiple JavaScript and CSS files to separate the concerns and keep the code base tight. While this is a good practice from a coding point of view, it often has some unfortunate consequences for the overall performance of the website.  Multiple JavaScript and CSS files require multiple HTTP requests from a browser – which in turn can slow down the performance load time.  Simple Example Below I’ve opened a local website in IE9 and recorded the network traffic using IE’s built-in F12 developer tools. As shown below, the website consists of 5 CSS and 4 JavaScript files which the browser has to download. Each file is currently requested separately by the browser and returned by the server, and the process can take a significant amount of time proportional to the number of files in question. Bundling ASP.NET is adding a feature that makes it easy to “bundle” or “combine” multiple CSS and JavaScript files into fewer HTTP requests. This causes the browser to request a lot fewer files and in turn reduces the time it takes to fetch them.   Below is an updated version of the above sample that takes advantage of this new bundling functionality (making only one request for the JavaScript and one request for the CSS): The browser now has to send fewer requests to the server. The content of the individual files have been bundled/combined into the same response, but the content of the files remains the same - so the overall file size is exactly the same as before the bundling.   But notice how even on a local dev machine (where the network latency between the browser and server is minimal), the act of bundling the CSS and JavaScript files together still manages to reduce the overall page load time by almost 20%.  Over a slow network the performance improvement would be even better. Minification The next release of ASP.NET is also adding a new feature that makes it easy to reduce or “minify” the download size of the content as well.  This is a process that removes whitespace, comments and other unneeded characters from both CSS and JavaScript. The result is smaller files, which will download and load in a browser faster.  
The graph below shows the performance gain we are seeing when both bundling and minification are used together: Even on my local dev box (where the network latency is minimal), we now have a 40% performance improvement from where we originally started.  On slow networks (and especially with international customers), the gains would be even more significant. Using Bundling and Minification inside ASP.NET The upcoming release of ASP.NET makes it really easy to take advantage of bundling and minification within projects and see performance gains like in the scenario above. The way it does this allows you to avoid having to run custom tools as part of your build process –  instead ASP.NET has added runtime support to perform the bundling/minification for you dynamically (caching the results to make sure perf is great).  This enables a really clean development experience and makes it super easy to start to take advantage of these new features. Let’s assume that we have a simple project that has 4 JavaScript files and 6 CSS files: Bundling and Minifying the .css files Let’s say you wanted to reference all of the stylesheets in the “Styles” folder above on a page.  Today you’d have to add multiple CSS references to get all of them – which would translate into 6 separate HTTP requests: The new bundling/minification feature now allows you to instead bundle and minify all of the .css files in the Styles folder – simply by sending a URL request to the folder (in this case “styles”) with an appended “/css” path after it.  For example:    This will cause ASP.NET to scan the directory, bundle and minify the .css files within it, and send back a single HTTP response with all of the CSS content to the browser.  You don’t need to run any tools or pre-processor to get this behavior.  This enables you to cleanly separate your CSS into separate logical .css files and maintain a very clean development experience – while not taking a performance hit at runtime for doing so.  The Visual Studio designer will also honor the new bundling/minification logic as well – so you’ll still get a WYSWIYG designer experience inside VS as well. Bundling and Minifying the JavaScript files Like the CSS approach above, if we wanted to bundle and minify all of our JavaScript into a single response we could send a URL request to the folder (in this case “scripts”) with an appended “/js” path after it:   This will cause ASP.NET to scan the directory, bundle and minify the .js files within it, and send back a single HTTP response with all of the JavaScript content to the browser.  Again – no custom tools or builds steps were required in order to get this behavior.  And it works with all browsers. Ordering of Files within a Bundle By default, when files are bundled by ASP.NET they are sorted alphabetically first, just like they are shown in Solution Explorer. Then they are automatically shifted around so that known libraries and their custom extensions such as jQuery, MooTools and Dojo are loaded before anything else. So the default order for the merged bundling of the Scripts folder as shown above will be: Jquery-1.6.2.js Jquery-ui.js Jquery.tools.js a.js By default, CSS files are also sorted alphabetically and then shifted around so that reset.css and normalize.css (if they are there) will go before any other file. 
So the default sorting of the bundling of the Styles folder as shown above will be: reset.css content.css forms.css globals.css menu.css styles.css The sorting is fully customizable, though, and can easily be changed to accommodate most use cases and any common naming pattern you prefer.  The goal with the out of the box experience, though, is to have smart defaults that you can just use and be successful with. Any number of directories/sub-directories supported In the example above we just had a single “Scripts” and “Styles” folder for our application.  This works for some application types (e.g. single page applications).  Often, though, you’ll want to have multiple CSS/JS bundles within your application – for example: a “common” bundle that has core JS and CSS files that all pages use, and then page specific or section specific files that are not used globally. You can use the bundling/minification support across any number of directories or sub-directories in your project – this makes it easy to structure your code so as to maximize the bunding/minification benefits.  Each directory by default can be accessed as a separate URL addressable bundle.  Bundling/Minification Extensibility ASP.NET’s bundling and minification support is built with extensibility in mind and every part of the process can be extended or replaced. Custom Rules In addition to enabling the out of the box - directory-based - bundling approach, ASP.NET also supports the ability to register custom bundles using a new programmatic API we are exposing.  The below code demonstrates how you can register a “customscript” bundle using code within an application’s Global.asax class.  The API allows you to add/remove/filter files that go into the bundle on a very granular level:     The above custom bundle can then be referenced anywhere within the application using the below <script> reference:     Custom Processing You can also override the default CSS and JavaScript bundles to support your own custom processing of the bundled files (for example: custom minification rules, support for Saas, LESS or Coffeescript syntax, etc). In the example below we are indicating that we want to replace the built-in minification transforms with a custom MyJsTransform and MyCssTransform class. They both subclass the CSS and JavaScript minifier respectively and can add extra functionality:     The end result of this extensibility is that you can plug-into the bundling/minification logic at a deep level and do some pretty cool things with it. 2 Minute Video of Bundling and Minification in Action Mads Kristensen has a great 90 second video that shows off using the new Bundling and Minification feature.  You can watch the 90 second video here. Summary The new bundling and minification support within the next release of ASP.NET will make it easier to build fast web applications.  It is really easy to use, and doesn’t require major changes to your existing dev workflow.  It is also supports a rich extensibility API that enables you to customize it however you want. You can easily take advantage of this new support within ASP.NET MVC, ASP.NET Web Forms and ASP.NET Web Pages based applications. Hope this helps, Scott P.S. In addition to blogging, I use Twitter to-do quick posts and share links. My Twitter handle is: @scottgu

    Read the article

  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
    Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone. Plus, a focused solution will cost less in the long run than a vendor-supplied general-purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem. But this approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic enterprise-wide solution, with the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind. It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available and scalable database system on the market. With its Real Application Clusters (RAC) it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integration (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration, with its: BPEL Process Manager to orchestrate MDM connections to business processes; Identity Management for managing users; WS Manager for managing web services; Business Intelligence Enterprise Edition for analytics; and JDeveloper for creating or extending the MDM management application. Oracle utilizes these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas. But, even with a full stack of open, flexible MDM technologies, creating a robust MDM application can be a daunting task.
    For example, a basic MDM solution will need:

    - a set of data access methods that support master data as a service as well as direct real-time access as well as batch loads and extracts;
    - a data migration service for initial loads and periodic updates;
    - a metadata management capability for items such as business entity matrixed relationships and hierarchies;
    - a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements;
    - a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship;
    - a set of data quality functions that can manage structured and unstructured data;
    - a data quality interface to assist with preventing new errors from entering the system even when data entry is outside the MDM application itself;
    - a continuing data cleansing function to keep the data up to date;
    - an internal triggering mechanism to create and deploy change information to all connected systems;
    - a comprehensive role-based data security system to control and monitor data access, update rights, and maintain change history;
    - a flexible business rules engine for managing master data processes such as privacy and data movement;
    - a user interface to support casual users and data stewards;
    - a business intelligence structure to support profiling, compliance, and business performance indicators;
    - and an analytical foundation for directly analyzing master data.

    Oracle's pre-built MDM Hub solutions are full-featured 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years. Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture that it enables.

    Read the article

  • Oracle SQL Developer v3.2.1 Now Available

    - by thatjeffsmith
    Oracle SQL Developer version 3.2.1 is now available. I recommend that everyone upgrade to this release. It features more than 200 bug fixes, tweaks, and polish applied to the 3.2 edition. The high-profile bug fixes submitted by customers and users on our forums are listed in all their glory for your review. I want to highlight a few of the changes, though, as I recognize many of you lack the time and/or patience to 'read the docs.' That would include me, which is why I enjoy writing these kinds of blog posts. I'm lazy – just like you!

    No more artificial line breaks between CREATE OR REPLACE and your PL/SQL. In versions 3.2 and older, when you pulled up your stored procedural objects in our editor, you would see a line break inserted between the CREATE OR REPLACE and the body of your code. In version 3.2.1 we have removed that line break (the original post shows before-and-after screenshots from 3.1 and 3.2.1).

    Trivia – did you know? The database doesn't store the 'CREATE' or 'CREATE OR REPLACE' bit of your PL/SQL code. If we look at the USER_SOURCE view, we can see that the stored code begins with the object name, so the CREATE OR REPLACE bit is 'artificial.' The intent is to give you the code necessary to recreate your object and have it 'compile' into the database, so we pretty much HAVE to add the 'CREATE OR REPLACE.' From now on it will appear inline with the first line of your code.

    Exporting Tables & Views. When exporting data from your tables or views, previous versions of SQL Developer presented a 3-step wizard that let you choose your columns and apply data filters for what is exported. This was kind of redundant: the grids already allow you to select your columns and apply filters. Wouldn't it be more intuitive AND efficient to just make the grids behave in a What You See Is What You Get (WYSIWYG) fashion? In version 3.2.1, that is exactly what happens. The wizard now has only two steps, and the grid exports the data and columns as defined in the visible grid. Let the grid properties define what is actually exported! And here is what is pasted into my worksheet:

    "BREWERY"|"CITY"
    "3 Brewers Restaurant Micro-Brewery"|"Toronto"
    "Amsterdam Brewing Co."|"Toronto"
    "Ball Brewing Company Ltd."|"Toronto"
    "Big Ram Brewing Company"|"Toronto"
    "Black Creek Historic Brewery"|"Toronto"
    "Black Oak Brewing"|"Toronto"
    "C'est What?"|"Toronto"
    "Cool Beer Brewing Company"|"Toronto"
    "Denison's Brewing"|"Toronto"
    "Duggan's Brewery"|"Toronto"
    "Feathers"|"Toronto"
    "Fermentations! - Danforth"|"Toronto"
    "Fermentations! - Mount Pleasant"|"Toronto"
    "Granite Brewery & Restaurant"|"Toronto"
    "Labatt's Breweries of Canada"|"Toronto"
    "Mill Street Brew Pub"|"Toronto"
    "Mill Street Brewery"|"Toronto"
    "Molson Breweries of Canada"|"Toronto"
    "Molson Brewery at Air Canada Centre"|"Toronto"
    "Pioneer Brewery Ltd."|"Toronto"
    "Post-Production Bistro"|"Toronto"
    "Rotterdam Brewing"|"Toronto"
    "Steam Whistle Brewing"|"Toronto"
    "Strand Brasserie"|"Toronto"
    "Upper Canada Brewing"|"Toronto"

    JUST what I wanted.

    And One Last Thing. Speaking of export, sometimes I want to send data to Excel – and sometimes I want to send multiple objects to a single Excel file. In version 3.2.1 you can now do that. Let's export the bulk of the HR schema to Excel, with each table going to its own worksheet in the same workbook (select many tables, put them in a single Excel file). If you try this in previous versions of SQL Developer, it will just write the first table to the Excel file; this is one of the bugs we addressed in v3.2.1. Here is what the output Excel file looks like now: many tables, many worksheets, one Excel workbook. I have a sneaking suspicion that this will be a frequently used feature going forward. Excel seems to be the cornerstone of many of our popular features. Imagine that!
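    If you want to see the 'artificial' CREATE OR REPLACE for yourself, you can rebuild the DDL the same way a client tool might. The sketch below is illustrative only and is not SQL Developer's actual code: it assumes the python-oracledb driver, reachable (placeholder) connection details, and a hypothetical procedure named MY_PROC.

        import oracledb  # assumed driver; any Oracle client library would do

        # Hypothetical connection details - replace with your own.
        conn = oracledb.connect(user="hr", password="hr", dsn="localhost/orclpdb1")
        cur = conn.cursor()

        # USER_SOURCE stores the body starting at the object name - no CREATE OR REPLACE.
        cur.execute(
            "SELECT text FROM user_source "
            "WHERE name = :name AND type = 'PROCEDURE' ORDER BY line",
            name="MY_PROC",
        )
        stored_source = "".join(row[0] for row in cur)  # each TEXT line keeps its newline

        # A tool has to prepend the keyword itself to produce runnable DDL -
        # exactly the 'artificial' bit described above.
        ddl = "CREATE OR REPLACE " + stored_source
        print(ddl)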

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the operating system is being utilized. You can research historical usage as well as real-time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3 Download the presentation here. See also: Blog about Alert Monitoring and Problem Notification; Blog about Using Operational Profiles to Install Packages and other content.

    Here is a quick summary of what you can do with OS Analytics in Ops Center:
    - View historical charts and real-time values of CPU, memory, network and disk utilization
    - Find the top CPU and memory processes in real time or on a given historical day
    - Determine proper monitoring thresholds based on historical data
    - Drill down into process details

    Where to start: to start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real-time top 5 processes in each category. From that screen you can click each of the top 5 processes to see a more detailed view of that process. One of the cool things is that you can see the process tree for the process, along with port bindings and open file descriptors.

    Next, click the "Processes" tab to see real-time information for all the processes on the machine. An interesting column is the "Target" column: if you configured Ops Center to work with Enterprise Manager Cloud Control, the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty.

    The "Threshold" tab is particularly helpful: you can view historical trends of the different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values, or you can set your own. The different colors in the graph represent the currently set levels – red for critical, yellow for warning and blue for information – allowing you to quickly see how they are positioned against real data. It is important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU usage, try shorter, more detailed time frames such as one hour or one day.

    Applying new monitoring values: when first applying new values to monitored attributes, a popup will ask whether it is OK to take you out of the current Monitoring Policy. This is OK if you want either to have custom monitoring for a specific machine, or to use the current machine as a "gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you are done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the Actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".

    Visiting the past: under the "History" tab you can go back in time. This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?) but you were not around to look at it in real time. Looking at yesterday's data on one of the machines, you can see an interesting CPU spike happening at around 3:30 am along with some memory use, and in the bottom table you can see the top 5 CPU and memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here: you can also view historical data to determine which of the zones was the busiest at a given time.

    Under the hood: the data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/. An "os.zip" file exists for the main OS; inside you will find many small text files, named after the epoch time stamp at which they were taken. If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder named after each zone with its own "os.zip" in it. If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller. The actual script collecting the data can be viewed for debugging purposes as well; on Linux, the location is /opt/sun/xvmoc/private/os_analytics/collect. If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr. The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view them on each agent in /opt/sun/n1gc/lib/XVM.properties; find the section "Analytics configurable properties for OS and VSC" to view the Analytics-specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner
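    As a quick illustration of the "under the hood" layout described above, here is a small, hypothetical Python sketch that lists the historical samples inside an agent's os.zip and converts the epoch-based file names into readable timestamps. It assumes the path quoted above and that the file names are plain epoch seconds; adjust both if your agent differs.

        import zipfile
        from datetime import datetime, timezone

        # Path quoted in the post; adjust per agent if needed.
        ARCHIVE = "/var/opt/sun/xvm/analytics/historical/os.zip"

        with zipfile.ZipFile(ARCHIVE) as archive:
            for name in sorted(archive.namelist()):
                stem = name.rsplit("/", 1)[-1].split(".")[0]
                if stem.isdigit():  # assumed: file names are epoch seconds
                    taken = datetime.fromtimestamp(int(stem), tz=timezone.utc)
                    print(f"{name} -> collected {taken:%Y-%m-%d %H:%M:%S} UTC")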

    Read the article

  • Date Tracking in Oracle HRMS

    - by Manoj Madhusoodanan
    Update Date Track Modes

    To maintain employee data effectively, Oracle HCM uses a mechanism called date tracking. The main motive behind the date track modes is to maintain past, present and future data effectively. The various update date track modes are:
    - CORRECTION: Overwrites the data. No history is maintained.
    - UPDATE: Keeps the history; the new change takes effect as of the effective date.
    - UPDATE_CHANGE_INSERT: Inserts the record and preserves future changes.
    - UPDATE_OVERRIDE: Inserts the record and overrides future changes.

    All tables below show: Effective Start Date | Effective End Date | Employee Number | Marital Status | Object Version Number.

    Action: Created Employee # 22 on 01-JAN-2012. The record in PER_ALL_PEOPLE_F is as shown below.
    01-JAN-2012 | 31-DEC-4712 | 24 |  | 2

    Action: Updated record in CORRECTION mode.
    01-JAN-2012 | 31-DEC-4712 | 24 | Single | 3

    Action: Updated record in UPDATE mode effective 01-JUN-2012 with Marital Status = Married.
    01-JAN-2012 | 31-MAY-2012 | 24 | Single | 4
    01-JUN-2012 | 31-DEC-4712 | 24 | Married | 5

    Action: Updated record in UPDATE mode effective 01-SEP-2012 with Marital Status = Divorced.
    01-JAN-2012 | 31-MAY-2012 | 24 | Single | 4
    01-JUN-2012 | 31-AUG-2012 | 24 | Married | 6
    01-SEP-2012 | 31-DEC-4712 | 24 | Divorced | 7

    Action: Updated record in UPDATE_CHANGE_INSERT mode effective 01-MAR-2012 with Marital Status = Living Together.
    01-JAN-2012 | 29-FEB-2012 | 24 | Single | 8
    01-MAR-2012 | 31-MAY-2012 | 24 | Living Together | 9
    01-JUN-2012 | 31-AUG-2012 | 24 | Married | 6
    01-SEP-2012 | 31-DEC-4712 | 24 | Divorced | 7

    Action: Updated record in UPDATE_OVERRIDE mode effective 01-AUG-2012 with Marital Status = Divorced.
    01-JAN-2012 | 29-FEB-2012 | 24 | Single | 8
    01-MAR-2012 | 31-MAY-2012 | 24 | Living Together | 9
    01-JUN-2012 | 31-JUL-2012 | 24 | Married | 10
    01-AUG-2012 | 31-DEC-4712 | 24 | Divorced | 11

    Delete Date Track Modes

    The various delete date track modes are:
    - ZAP: Wipes all records.
    - DELETE: Deletes the current record.
    - FUTURE_CHANGE: Deletes current and future changes.
    - DELETE_NEXT_CHANGE: Deletes the next change.

    The element entry tables below show: Effective Start Date | Effective End Date | Element Entry Id | Object Version Number.

    Element Entry records are shown below.
    01-JAN-2012 | 12-OCT-2012 | 129831 | 3
    13-OCT-2012 | 19-OCT-2012 | 129831 | 5
    20-OCT-2012 | 31-DEC-4712 | 129831 | 6

    Action: Delete record in ZAP mode effective 14-JAN-2012.
    No rows

    Action: Delete record in DELETE mode effective 14-OCT-2012.
    01-JAN-2012 | 12-OCT-2012 | 129831 | 3
    13-OCT-2012 | 14-OCT-2012 | 129831 | 6

    Action: Delete record in FUTURE_CHANGE mode effective 14-JAN-2012.
    01-JAN-2012 | 31-DEC-4712 | 129831 | 4

    Action: Delete record in DELETE_NEXT_CHANGE mode effective 14-JAN-2012.
    01-JAN-2012 | 19-OCT-2012 | 129831 | 4
    20-OCT-2012 | 31-DEC-4712 | 129831 | 6
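    To make the UPDATE-mode behaviour above concrete, here is a small standalone Python sketch that mimics how an UPDATE splits the row covering the effective date: the covering row is end-dated the day before, and a new row carries the change out to the end of time. It is a simplified illustration of the tables shown above, not the HRMS API itself, and it assumes the effective date falls after the covering row's start date (as in the example).

        from dataclasses import dataclass, replace
        from datetime import date, timedelta

        END_OF_TIME = date(4712, 12, 31)  # 31-DEC-4712, the HRMS 'end of time'

        @dataclass
        class Row:
            start: date
            end: date
            marital_status: str

        def dt_update(rows, eff_date, new_status):
            """UPDATE mode: end-date the covering row and insert the change from eff_date."""
            out = []
            for r in sorted(rows, key=lambda r: r.start):
                if r.start <= eff_date <= r.end:
                    out.append(replace(r, end=eff_date - timedelta(days=1)))  # preserve history
                    out.append(Row(eff_date, r.end, new_status))              # new current row
                else:
                    out.append(r)
            return out

        rows = [Row(date(2012, 1, 1), END_OF_TIME, "Single")]
        rows = dt_update(rows, date(2012, 6, 1), "Married")
        rows = dt_update(rows, date(2012, 9, 1), "Divorced")
        for r in rows:
            print(r.start, r.end, r.marital_status)  # matches the UPDATE-mode tables above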

    Read the article

  • ERROR: Not enough space?

    - by dsmoljanovic
    Now this is a very unspecific question. I'm trying to figure out what this message would mean. Here is the story behind it: I'm installing Oracle Enterprise Manager Cloud Control (12c R3) on Solaris 10 (5/09). The installer opens up, I enter all needed information and at the last step click Install. It immediately crashes with only "ERROR: Not enough space" written to the log and console, and nothing else. Now, could this be a Java error or a Solaris error? I'm thinking it happens either when it starts to copy files or when it tries to launch a process that would do that. What space is it referring to? Disk (have enough), swap (also), memory (yep)... Any ideas are helpful.

    Edit: I found this exception in the oraInventory logs:

    oracle.sysman.oii.oiic.OiicInstallAPIException: Not enough space
        at oracle.sysman.oii.oiic.OiicAPIInstaller.initInstallSession(OiicAPIInstaller.java:2165)
        at oracle.sysman.oii.oiic.OiicAPIInstaller.initOUIAPISession(OiicAPIInstaller.java:790)
        at oracle.sysman.install.oneclick.EMGCOUIInstaller.prepareForInstall(EMGCOUIInstaller.java:676)
        at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext$1.run(EMGCSummaryDlgonNext.java:243)
        at java.lang.Thread.run(Thread.java:662)
        at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext.actionsOnClickofNext(EMGCSummaryDlgonNext.java:1067)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at oracle.sysman.install.oneclick.EMGCUtil.performonClickOfNextForClass(EMGCUtil.java:399)
        at oracle.sysman.install.oneclick.EMGCUtil.performPageLevelValidationsForSilentInstall(EMGCUtil.java:367)
        at oracle.sysman.install.oneclick.EMGCInstaller.prepareForSilentInstall(EMGCInstaller.java:1459)
        at oracle.sysman.install.oneclick.EMGCInstaller.main(EMGCInstaller.java:1553)

    Disk status:

    bash-3.00$ df -h /tmp
    Filesystem             size   used  avail capacity  Mounted on
    swap                   8.1G   2.7G   5.4G    33%    /tmp
    bash-3.00$ df -h /u01
    Filesystem             size   used  avail capacity  Mounted on
    /                      275G    28G   244G    11%    /

    Swap:

    root@gs12emcc # swap -s
    total: 18306040k bytes allocated + 3837808k reserved = 22143848k used, 5712664k available
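    While the thread never confirms which check fails, a quick pre-flight script can at least rule out the obvious file-system suspects before relaunching the installer. This is a generic sketch, not an Oracle-documented requirement: both the paths and the thresholds below are assumptions to adapt to your own environment.

        import shutil

        # Assumed locations and rough minimums - not official OEM 12c requirements.
        CHECKS = {"/tmp": 1.0, "/u01": 15.0}  # path -> desired free space in GB

        for path, need_gb in CHECKS.items():
            free_gb = shutil.disk_usage(path).free / 1024**3
            status = "ok" if free_gb >= need_gb else "LOW - possible 'Not enough space'"
            print(f"{path}: {free_gb:.1f} GB free (want >= {need_gb} GB) -> {status}")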

    Read the article

  • Is CRM Keeping Up with the Times?

    - by antonella.buonagurio(at)oracle.com
    Social Customer Relationship Management was born out of the revolution brought about by Web 2.0, an epochal shift in the way people communicate that has added incredible richness to the conversations between companies and consumers. Companies now have unprecedented tools for understanding their market; consumers, in turn, have the power to use new channels to express their needs and to communicate and share comments and experiences. But Web 2.0 is not the only factor affecting the strategic CRM choices every company must consider in order to sustain this new relationship with its consumers.

    Do you want to find out which forces (or factors) companies must take into account so that their customer relationship management processes keep pace with changed social and economic conditions? To learn more: the white paper produced by Oracle, Paul Gillin and IT Business Edge outlines a few of them:
    1. The business. How has it changed as a result of the multichannel experience now possible, of customer centricity, and of the social networks that dominate online relationships?
    2. Technology. To gain competitive advantage, companies today must equip themselves with the most innovative technologies to add value to their business and minimize infrastructure costs. Which ones, and what are the actual benefits?
    ...and more, by reading the white paper "Is your CRM solution keeping up with the times?"

    Read the article

  • New User of UPK?

    - by [email protected]
    The UPK Developer comes with a variety of manuals to help support your organization in the development and deployment of content. The Developer manuals can be found in the \Documentation\Language Code\Reference folder where the Developer has been installed. As of 3.5.x the documentation can also be accessed via the Start menu: Start\Programs\User Productivity Kit\Documentation\Reference.
    - Content Deployment.pdf: This manual provides information on how to deploy content to your audience.
    - Content Development.pdf: This manual provides information on how to create, maintain, and publish content using the Developer. The content of this manual also appears in the Developer help system.
    - Content Player.pdf: This manual provides instructions on how to view content using the Player. The content of this manual also appears in the Player help system.
    - In-Application Support Guide.pdf: This manual provides information on how to implement context-sensitive, in-application support for enterprise applications using Player content.
    - Installation & Administration.pdf: This manual provides instructions for installing the Developer in a single-user or multi-user environment, as well as information on how to add and manage users and content in a multi-user installation. An Administration help system also appears in the Developer for authors configured as administrators. This manual also provides instructions for installing and configuring Usage Tracking.
    - Upgrade.pdf: This manual provides information on how to upgrade from a previous version to the current version.
    - Usage Tracking Administration & Reporting.pdf: This manual provides instructions on how to manage users and usage tracking reports.
    - Kathryn Lustenberger, Oracle UPK Outbound Product Management

    Read the article

< Previous Page | 219 220 221 222 223 224 225 226 227 228 229 230  | Next Page >