Search Results

Search found 402 results on 17 pages for 'ora 4031'.

Page 8/17 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • emca fails with "Database instance is unavailable" though available

    - by Giri Mandalika
    The following example shows the symptoms of the failure and the exact error message. $ emca -repos create ... Password for SYSMAN user: Do you wish to continue? [yes(Y)/no(N)]: Y Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \ checkDbAvailabilityImpl WARNING: ORA-01034: ORACLE not available Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \ throwDBUnavailableException SEVERE: Database instance is unavailable. Fix the ORA error thrown and run EM Configuration Assistant again. Some of the possible reasons may be : 1) Database may not be up. 2) Database is started setting environment variable ORACLE_HOME with trailing '/'. Reset ORACLE_HOME and bounce the database. For eg. Database is started setting environment variable ORACLE_HOME=/scratch/db/ . Reset ORACLE_HOME=/scratch/db and bounce the database. Fix: Ensure that ORACLE_HOME points to the right location in the $ORACLE_HOME/bin/emca file. If ORACLE_HOME was copied over from another location rather than installed from scratch, the wrong ORACLE_HOME location likely ends up in several Enterprise Manager (EM) specific scripts and files. This usually happens when the directory structure on the target machine is not identical to the structure on the original/source machine, including the top-level directory where the Oracle RDBMS was originally installed using the installer.

    Read the article

  • ALERT: Error Processing US Wage Attachment Elements In Payroll Run After RUP Patches

    - by LuciaC
    Customers who have run the Upgrade Wage Attachments process after applying the 2012 RUP are reporting errors similar to those listed below when either running a quickpay or processing a payroll for employee(s) with involuntary deductions. Error: HR_51118_HRPROC_ERR_ON_ASG ASGNO 1115 APP-PAY-51118: Error was encountered when processing assignment 1115 HR_51119_HRPROC_ERR_OCC_ON_ET ETNAME: Garnishment 3 APP-PAY-51119: Error was encountered when processing Element Type Garnishment 3 HR_6881_HRPROC_ORA_ERR SQLERRMC ORA-01403: No data found SQL_NO 520 TABLE_NAME pay_input_values_f APP-PAY-06881:Error ORA-01403: no data found has occured in table pay_input_values_f at location 520 This issue was logged in Bug 14679161 - QUICK PAY ERROR AFTER RUP (2012) AND WAGE ATTACHMENT UPGRADE APP-PAY-06881. The following one-off patches have been released to My Oracle Support to resolve this issue*: 11i - Patch 14679161; 12.0 - Patch 14849394:R12.PAY.A; 12.1 - Patch 14849394:R12.PAY.B. * IMPORTANT: When (or whether) customers have run the Wage Attachment upgrade process determines the appropriate action to take. Any customer who is encountering the above error and/or has run the Wage Attachment upgrade process AFTER applying the 2012 RUP (applicable to their release level) should log a Service Request with Oracle Support for assistance with the steps needed to resolve the problem BEFORE applying the above patch. Any customer who has not yet run the Wage Attachment Upgrade process (either before or after applying the 2012 RUP) should follow the action plan documented in the patch readme. Customers who have already run the Wage Attachment Upgrade process BEFORE applying the 2012 RUP should apply the patch (applicable to their release) listed above. Be sure to run any post-install processes, such as the data install utility and HR global driver. See the patch readme for full details. Please consult Note 404478.1: Americas (US, CA, MX) HCM High Priority Alert for the latest Alert status.

    Read the article

  • Exadata???DiskGroup

    - by Liu Maclean(???)
    Exadata???Asm Diskgroup ???????: 1.??dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk ????active?griddisk [root@dm01db01 ~]# dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk dm01cel01: DATA_DM01_CD_00_dm01cel01 active dm01cel01: DATA_DM01_CD_01_dm01cel01 active dm01cel01: DATA_DM01_CD_02_dm01cel01 active dm01cel01: DATA_DM01_CD_03_dm01cel01 active dm01cel01: DATA_DM01_CD_04_dm01cel01 active dm01cel01: DATA_DM01_CD_05_dm01cel01 active dm01cel01: DATA_DM01_CD_06_dm01cel01 active dm01cel01: DATA_DM01_CD_07_dm01cel01 active dm01cel01: DATA_DM01_CD_08_dm01cel01 active dm01cel01: DATA_DM01_CD_09_dm01cel01 active dm01cel01: DATA_DM01_CD_10_dm01cel01 active dm01cel01: DATA_DM01_CD_11_dm01cel01 active dm01cel01: DBFS_DG_CD_02_dm01cel01 active dm01cel01: DBFS_DG_CD_03_dm01cel01 active dm01cel01: DBFS_DG_CD_04_dm01cel01 active dm01cel01: DBFS_DG_CD_05_dm01cel01 active dm01cel01: DBFS_DG_CD_06_dm01cel01 active dm01cel01: DBFS_DG_CD_07_dm01cel01 active dm01cel01: DBFS_DG_CD_08_dm01cel01 active dm01cel01: DBFS_DG_CD_09_dm01cel01 active dm01cel01: DBFS_DG_CD_10_dm01cel01 active dm01cel01: DBFS_DG_CD_11_dm01cel01 active dm01cel01: RECO_DM01_CD_00_dm01cel01 active dm01cel01: RECO_DM01_CD_01_dm01cel01 active dm01cel01: RECO_DM01_CD_02_dm01cel01 active dm01cel01: RECO_DM01_CD_03_dm01cel01 active dm01cel01: RECO_DM01_CD_04_dm01cel01 active dm01cel01: RECO_DM01_CD_05_dm01cel01 active dm01cel01: RECO_DM01_CD_06_dm01cel01 active dm01cel01: RECO_DM01_CD_07_dm01cel01 active dm01cel01: RECO_DM01_CD_08_dm01cel01 active dm01cel01: RECO_DM01_CD_09_dm01cel01 active dm01cel01: RECO_DM01_CD_10_dm01cel01 active dm01cel01: RECO_DM01_CD_11_dm01cel01 active dm01cel02: DATA_DM01_CD_00_dm01cel02 active dm01cel02: DATA_DM01_CD_01_dm01cel02 active dm01cel02: DATA_DM01_CD_02_dm01cel02 active dm01cel02: DATA_DM01_CD_03_dm01cel02 active dm01cel02: DATA_DM01_CD_04_dm01cel02 active dm01cel02: DATA_DM01_CD_05_dm01cel02 active dm01cel02: DATA_DM01_CD_06_dm01cel02 active dm01cel02: DATA_DM01_CD_07_dm01cel02 active dm01cel02: DATA_DM01_CD_08_dm01cel02 active dm01cel02: DATA_DM01_CD_09_dm01cel02 active dm01cel02: DATA_DM01_CD_10_dm01cel02 active dm01cel02: DATA_DM01_CD_11_dm01cel02 active dm01cel02: DBFS_DG_CD_02_dm01cel02 active dm01cel02: DBFS_DG_CD_03_dm01cel02 active dm01cel02: DBFS_DG_CD_04_dm01cel02 active dm01cel02: DBFS_DG_CD_05_dm01cel02 active dm01cel02: DBFS_DG_CD_06_dm01cel02 active dm01cel02: DBFS_DG_CD_07_dm01cel02 active dm01cel02: DBFS_DG_CD_08_dm01cel02 active dm01cel02: DBFS_DG_CD_09_dm01cel02 active dm01cel02: DBFS_DG_CD_10_dm01cel02 active dm01cel02: DBFS_DG_CD_11_dm01cel02 active dm01cel02: RECO_DM01_CD_00_dm01cel02 active dm01cel02: RECO_DM01_CD_01_dm01cel02 active dm01cel02: RECO_DM01_CD_02_dm01cel02 active dm01cel02: RECO_DM01_CD_03_dm01cel02 active dm01cel02: RECO_DM01_CD_04_dm01cel02 active dm01cel02: RECO_DM01_CD_05_dm01cel02 active dm01cel02: RECO_DM01_CD_06_dm01cel02 active dm01cel02: RECO_DM01_CD_07_dm01cel02 active dm01cel02: RECO_DM01_CD_08_dm01cel02 active dm01cel02: RECO_DM01_CD_09_dm01cel02 active dm01cel02: RECO_DM01_CD_10_dm01cel02 active dm01cel02: RECO_DM01_CD_11_dm01cel02 active dm01cel03: DATA_DM01_CD_00_dm01cel03 active dm01cel03: DATA_DM01_CD_01_dm01cel03 active dm01cel03: DATA_DM01_CD_02_dm01cel03 active dm01cel03: DATA_DM01_CD_03_dm01cel03 active dm01cel03: DATA_DM01_CD_04_dm01cel03 active dm01cel03: DATA_DM01_CD_05_dm01cel03 active dm01cel03: DATA_DM01_CD_06_dm01cel03 active dm01cel03: DATA_DM01_CD_07_dm01cel03 active dm01cel03: DATA_DM01_CD_08_dm01cel03 
    active dm01cel03: DATA_DM01_CD_09_dm01cel03 active dm01cel03: DATA_DM01_CD_10_dm01cel03 active dm01cel03: DATA_DM01_CD_11_dm01cel03 active dm01cel03: DBFS_DG_CD_02_dm01cel03 active dm01cel03: DBFS_DG_CD_03_dm01cel03 active dm01cel03: DBFS_DG_CD_04_dm01cel03 active dm01cel03: DBFS_DG_CD_05_dm01cel03 active dm01cel03: DBFS_DG_CD_06_dm01cel03 active dm01cel03: DBFS_DG_CD_07_dm01cel03 active dm01cel03: DBFS_DG_CD_08_dm01cel03 active dm01cel03: DBFS_DG_CD_09_dm01cel03 active dm01cel03: DBFS_DG_CD_10_dm01cel03 active dm01cel03: DBFS_DG_CD_11_dm01cel03 active dm01cel03: RECO_DM01_CD_00_dm01cel03 active dm01cel03: RECO_DM01_CD_01_dm01cel03 active dm01cel03: RECO_DM01_CD_02_dm01cel03 active dm01cel03: RECO_DM01_CD_03_dm01cel03 active dm01cel03: RECO_DM01_CD_04_dm01cel03 active dm01cel03: RECO_DM01_CD_05_dm01cel03 active dm01cel03: RECO_DM01_CD_06_dm01cel03 active dm01cel03: RECO_DM01_CD_07_dm01cel03 active dm01cel03: RECO_DM01_CD_08_dm01cel03 active dm01cel03: RECO_DM01_CD_09_dm01cel03 active dm01cel03: RECO_DM01_CD_10_dm01cel03 active dm01cel03: RECO_DM01_CD_11_dm01cel03 active
    If the existing griddisks need to be re-carved, 'cellcli -e drop griddisk' and 'cellcli -e create griddisk' can be used to drop and recreate them; take care not to drop the DBFS_DG griddisks.
    2. Run create diskgroup from an ASM instance. This requires the storage cell IP addresses, which are listed in cellip.ora:
    [root@dm01db02 ~]# cat /etc/oracle/cell/network-config/cellip.ora
    cell="192.168.64.131"
    cell="192.168.64.132"
    cell="192.168.64.133"
    SQL> create diskgroup DATA_MAC normal redundancy
      2  DISK
      3  'o/192.168.64.131/RECO_DM01_CD_*_dm01cel01'
      4  ,'o/192.168.64.132/RECO_DM01_CD_*_dm01cel02'
      5  ,'o/192.168.64.133/RECO_DM01_CD_*_dm01cel03'
      6  attribute
      7  'AU_SIZE'='4M',
      8  'CELL.SMART_SCAN_CAPABLE'='TRUE',
      9  'compatible.rdbms'='11.2.0.2',
     10  'compatible.asm'='11.2.0.2'
     11  /
    3. MOUNT the DISKGROUP: ALTER DISKGROUP DATA_MAC mount ;
    4. The new disk group can then be managed as a cluster resource with crsctl start/stop resource ora.DATA_MAC.dg.
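    A small hedged check once the MOUNT has succeeded, confirming the new disk group and its size from the ASM instance (standard v$asm_diskgroup columns, nothing Exadata-specific assumed):

      -- Sketch: verify the new disk group from the ASM instance.
      SELECT name, state, type, total_mb, free_mb
        FROM v$asm_diskgroup
       WHERE name = 'DATA_MAC';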

    Read the article

  • 2012?6?27?(?) 19:00~??????90?????!Oracle Database??????????

    - by Yuichi Hayashi
    ???????Oracle Database??????/??????????????????????????????????????????????????????????????????????????????Oracle Database????????????????????????????Oracle Database????????·?????????????? ¦ ???? 2012?6?27?(?) 19:00~20:30(18:45????)?20:30~21:30 ???????(????/??????) ¦ ?? ???????? ??(?????????)???? ¦ ?? ???????? DB????? ¦ ???? ??(?????) ¦ ?? 30?(????????????????????????) ¦ ?? ·?????????????2??????????????????????? ¦ ?? ????????(http://www.cosol.jp/) ¦ ??????? ???????? ??????? 03-3264-8800(9:00~18:00/??????)[email protected] ????????????? ?????????? Oracle DBA&Developer Days2010?2011??????????????????????????????Start! Oracle Database~Oracle Database?????????Step by Step~?????????????????????????? Oracle Database????????·?????????????????????????????? ·??????????? ·??????????? ·Oracle Enterprise Manager Database Control?????? ·Oracle Enterprise Manager Database Control?????? ·??????????? ·?????????????????? ·Oracle Net???????(listener.ora) ·?????????(tnsnames.ora) ·???????????(NetCA, Net Manager) ·??????? ·Oracle????????? ·???????(??????????SGA?? ?) ·???????????? ·?????????????? ·?????????????? ·????????????????? ·???????????????????(????????????) ·?????REDO????? ·UNDO??????(????????UNDO_RETENTION??) ???????????????????????????? ??????????????????????????????????????????????????????????????????????????????????????????????? ?????????????????????????? ?????????????

    Read the article

  • to_date function pl/sql

    - by christine33990
    undefine dates
    declare
      v_dateInput VARCHAR(10);
      v_dates DATE;
    begin
      v_dateInput := &&dates;
      v_dates := to_date(v_dateInput,'dd-mm-yyyy');
      DBMS_OUTPUT.put_line(v_dates);
    end;
    Not sure why, whenever I run this code with, for example, an input of 03-03-1990, this error shows up. Error report: ORA-01847: day of month must be between 1 and last day of month ORA-06512: at line 6 01847. 00000 - "day of month must be between 1 and last day of month" *Cause: *Action:
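    One frequent culprit worth ruling out here is the unquoted substitution variable: SQL*Plus substitutes &&dates as bare text, so an input of 03-03-1990 can be evaluated as arithmetic before TO_DATE ever sees the original string. A minimal hedged sketch of the fix (same block, substitution quoted so the literal '03-03-1990' reaches TO_DATE):

      -- Sketch: quote the substitution variable so the raw dd-mm-yyyy text
      -- is passed to TO_DATE instead of an arithmetic result.
      undefine dates
      DECLARE
        v_dateInput VARCHAR2(10);
        v_dates     DATE;
      BEGIN
        v_dateInput := '&&dates';   -- note the quotes
        v_dates     := TO_DATE(v_dateInput, 'dd-mm-yyyy');
        DBMS_OUTPUT.put_line(TO_CHAR(v_dates, 'dd-mm-yyyy'));
      END;
      /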

    Read the article

  • Logback DBAppender url

    - by pedro mendes
    I'm trying to use Logback's DBAppender. My logback.xml has the following appender:
    </appender>
    <appender name="DatabaseAppender" class="ch.qos.logback.classic.db.DBAppender">
      <connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
        <driverClass>oracle.jdbc.OracleDriver</driverClass>
        <url>jdbc:oracle:thin:@URL:PORT:SERVICEID</url>
        <user>USER</user>
        <password>PASS</password>
      </connectionSource>
    </appender>
    The URL given works with other Java classes in the same project, but with logback it fails, giving the following error: ORA-00904: "ARG3": invalid identifier at java.sql.SQLException: ORA-00904: "ARG3": invalid identifier where ARG3 is the <url>jdbc:oracle:thin:@URL:PORT:SERVICEID</url>
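    For what it's worth, ORA-00904 on "ARG3" usually points at the logging tables rather than the URL: DBAppender inserts into the tables created by logback's DDL scripts (LOGGING_EVENT, LOGGING_EVENT_PROPERTY, LOGGING_EVENT_EXCEPTION), and LOGGING_EVENT is expected to carry ARG0..ARG3 columns, so an "invalid identifier" suggests the table exists but was created from an older or incomplete script. A hedged check against the schema the connection actually lands in (table names here are the ones logback's DDL is expected to create):

      -- Sketch: confirm the logback tables and their ARG0..ARG3 columns exist
      -- in the schema used by DriverManagerConnectionSource.
      SELECT table_name, column_name
        FROM user_tab_columns
       WHERE table_name IN ('LOGGING_EVENT',
                            'LOGGING_EVENT_PROPERTY',
                            'LOGGING_EVENT_EXCEPTION')
       ORDER BY table_name, column_id;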

    Read the article

  • C# LINQ Oracle date Functions

    - by user1079925
    I am trying to generate a SQL statement to be used in Oracle 11g, using LINQ. The problem arises when using dates: the SQL generated by LINQ gives SELECT * FROM <table> WHERE start_date > '24/11/2012 00:00:00' and end_date < '28/11/2012 00:00:00' This causes an Oracle error: ORA-01830 - date format picture ends before converting entire input string. Adding TO_DATE to the query fixes the ORA-01830, as it converts the string to an Oracle date with an explicit format mask: SELECT * FROM <table> WHERE start_date > TO_DATE('24/11/2012 00:00:00','DD/MM/YYYY HH24:MI:SS') and end_date < TO_DATE('28/11/2012 00:00:00','DD/MM/YYYY HH24:MI:SS') So, is there a way to add TO_DATE to LINQ (for Oracle)? If not, please tell me how to work around this issue. Thanks
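    If the LINQ provider cannot be persuaded to emit TO_DATE, a hedged session-level workaround is to make Oracle's implicit string-to-date conversion match the literal format the provider generates, assuming those literals always look like '24/11/2012 00:00:00' (the table name below is a placeholder for the one in the question):

      -- Sketch: align the session date format with the literals LINQ emits,
      -- so the implicit conversion in the generated WHERE clause succeeds.
      ALTER SESSION SET NLS_DATE_FORMAT = 'DD/MM/YYYY HH24:MI:SS';

      SELECT *
        FROM some_table          -- hypothetical table name
       WHERE start_date > '24/11/2012 00:00:00'
         AND end_date   < '28/11/2012 00:00:00';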

    Read the article

  • I can't make this query work with SUM function

    - by Mehper C. Palavuzlar
    This query gives an error:
    select ep,
           case when ob is null and b2b_ob is null then 'a'
                when ob is not null or b2b_ob is not null then 'b'
                else null end as type,
           sum(b2b_d + b2b_t - b2b_i) as sales
    from table
    where ...
    group by ep, type
    Error: ORA-00904: "TYPE": invalid identifier When I run it with group by ep, the error message becomes: ORA-00979: not a GROUP BY expression The whole query works OK if I remove the lines sum(b2b_d+b2b_t-b2b_i) as sales and group by ..., so the problem should be related to SUM and GROUP BY. How can I make this work? Thanks in advance for your help.
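    The ORA-00904 appears because a column alias ("type") cannot be referenced in the same query's GROUP BY, and ORA-00979 follows once the alias is dropped because every non-aggregated expression must appear in the GROUP BY. Two hedged rewrites, keeping the column names from the question and a placeholder table name:

      -- Option 1 (sketch): repeat the CASE expression in the GROUP BY.
      SELECT ep,
             CASE WHEN ob IS NULL AND b2b_ob IS NULL THEN 'a'
                  WHEN ob IS NOT NULL OR b2b_ob IS NOT NULL THEN 'b'
             END AS type,
             SUM(b2b_d + b2b_t - b2b_i) AS sales
        FROM some_table
       GROUP BY ep,
             CASE WHEN ob IS NULL AND b2b_ob IS NULL THEN 'a'
                  WHEN ob IS NOT NULL OR b2b_ob IS NOT NULL THEN 'b'
             END;

      -- Option 2 (sketch): compute the alias in an inline view, then group by it.
      SELECT ep, type, SUM(sales_amt) AS sales
        FROM (SELECT ep,
                     CASE WHEN ob IS NULL AND b2b_ob IS NULL THEN 'a'
                          WHEN ob IS NOT NULL OR b2b_ob IS NOT NULL THEN 'b'
                     END AS type,
                     b2b_d + b2b_t - b2b_i AS sales_amt
                FROM some_table)
       GROUP BY ep, type;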

    Read the article

  • No_data_found exception is propagating in outer block also?

    - by Vineet
    In my code I enter a salary that is not available in the employees table, and then, in the exception block where I handle the NO_DATA_FOUND exception, I insert a duplicate employee_id into the primary key column of the employees table. What I do not understand is why the 'no data found' error still shows up at the end. OUTPUT coming: Enter some other sal ORA-01400: cannot insert NULL into ("SCOTT"."EMPLOYEES"."LAST_NAME") ORA-01403: no data found --This should not come according to logic This is the code:
    DECLARE
      v_sal number := &p_sal;
      v_num number;
    BEGIN
      BEGIN
        select salary INTO v_num from employees where salary = v_sal;
      EXCEPTION
        WHEN no_data_found THEN
          DBMS_OUTPUT.PUT_LINE('Enter some other sal');
          INSERT INTO employees (employee_id) values (100);
      END;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE(sqlerrm);
    END;
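    What appears to be happening: the INSERT inside the NO_DATA_FOUND handler itself fails with ORA-01400 (LAST_NAME is NOT NULL), and an exception raised inside an exception handler propagates to the enclosing block, so the outer WHEN OTHERS is what finally reports it, with the original ORA-01403 still visible on the error stack. A hedged sketch that keeps the failure local by giving the handler's INSERT its own sub-block:

      -- Sketch: isolate the INSERT so a failure inside the handler is reported
      -- on its own instead of escaping to the outer block.
      DECLARE
        v_sal number := &p_sal;
        v_num number;
      BEGIN
        BEGIN
          SELECT salary INTO v_num FROM employees WHERE salary = v_sal;
        EXCEPTION
          WHEN NO_DATA_FOUND THEN
            DBMS_OUTPUT.PUT_LINE('Enter some other sal');
            BEGIN
              INSERT INTO employees (employee_id) VALUES (100);  -- still needs all NOT NULL columns
            EXCEPTION
              WHEN OTHERS THEN
                DBMS_OUTPUT.PUT_LINE('Insert failed: ' || SQLERRM);
            END;
        END;
      END;
      /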

    Read the article

  • No_data_found exception is propagating to outer block also?

    - by Vineet
    In my code I enter a salary that is not available in the employees table, and then, in the exception block where I handle the NO_DATA_FOUND exception, I insert a duplicate employee_id into the primary key column of the employees table. What I do not understand is why the 'no data found' error still shows up at the end. OUTPUT coming: Enter some other sal ORA-01400: cannot insert NULL into ("SCOTT"."EMPLOYEES"."LAST_NAME") ORA-01403: no data found --This should not come according to logic This is the code:
    DECLARE
      v_sal number := &p_sal;
      v_num number;
    BEGIN
      BEGIN
        select salary INTO v_num from employees where salary = v_sal;
      EXCEPTION
        WHEN no_data_found THEN
          DBMS_OUTPUT.PUT_LINE('Enter some other sal');
          INSERT INTO employees (employee_id) values (100);
      END;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE(sqlerrm);
    END;

    Read the article

  • XML Return from an Oracle Stored Procedure

    - by Tequila Jinx
    Unfortunately most of my DB experience has been with MSSQL which tends to hold your hand a lot more than Oracle. What I'm trying to do is fairly trivial in tSQL, however, pl/sql is giving me a headache. I have the following procedure: CREATE OR REPLACE PROCEDURE USPX_GetUserbyID (USERID USERS.USERID%TYPE, USERRECORD OUT XMLTYPE) AS BEGIN SELECT XMLELEMENT("user" , XMLATTRIBUTES(u.USERID AS "userid", u.companyid as "companyid", u.usertype as "usertype", u.status as "status", u.personid as "personid") , XMLFOREST( p.FIRSTNAME AS "firstname" , p.LASTNAME AS "lastname" , p.EMAIL AS "email" , p.PHONE AS "phone" , p.PHONEEXTENSION AS "extension") , XMLELEMENT("roles", (SELECT XMLAGG(XMLELEMENT("role", r.ROLETYPE)) FROM USER_ROLES r WHERE r.USERID = USERID AND r.ISACTIVE = 1 ) ) , XMLELEMENT("watches", (SELECT XMLAGG( XMLELEMENT("watch", XMLATTRIBUTES(w.WATCHID AS "id", w.TICKETID AS "ticket") ) ) FROM USER_WATCHES w WHERE w.USERID = USERID AND w.ISACTIVE = 1 ) ) ) AS "RESULT" INTO USERRECORD FROM USERS u LEFT JOIN PEOPLE p ON p.PERSONID = u.PERSONID WHERE u.USERID = USERID; END USPX_GetUserbyID; When executed, it should return an XML document with the following structure: <user userid="" companyid="" usertype="" status="" personid=""> <firstname /> <lastname /> <email /> <phone /> <extension /> <roles> <role /> </roles> <watches> <watch id="" ticket="" /> </watches> </user> When I execute the query itself, replacing the USERID parameter with a string and removing the "into" clause, the query runs fine and returns the expected structure. However, when the procedure attempts to execute the query, passing the results of the XMLELEMENT function into the USERRECORD output parameter, I get the following exception: Error report: ORA-01422: exact fetch returns more than requested number of rows ORA-06512: at "USPX_GETUSERBYID", line 4 ORA-06512: at line 3 01422. 00000 - "exact fetch returns more than requested number of rows" *Cause: The number specified in exact fetch is less than the rows returned. *Action: Rewrite the query or change number of rows requested I'm baffled trying to nail this down, and unfortunately my google-fu hasn't helped. I've found plenty of Oracle SQL|XML examples, but none that deal with XML returns from a procedure. Note: I know that an alternate method of retrieving XML using DBMS methods exists, however, it's my understanding that that functionality is deprecated in favor of SQL|XML.
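    The usual suspect for ORA-01422 in this shape of procedure is the parameter name: inside the SQL statement, WHERE u.USERID = USERID resolves both sides to the column (column names win over PL/SQL identifiers within SQL), so the predicate is true for every row and the SELECT ... INTO fetches them all; the same applies to the r.USERID = USERID correlations in the subqueries. A hedged sketch of the fix, abbreviated to the part that matters, is simply to give the parameters non-column names:

      -- Sketch: rename the parameter so the WHERE clause compares the column
      -- to the input value instead of to itself.
      CREATE OR REPLACE PROCEDURE uspx_getuserbyid (
        p_userid     IN  users.userid%TYPE,
        p_userrecord OUT XMLTYPE)
      AS
      BEGIN
        SELECT XMLELEMENT("user",
                 XMLATTRIBUTES(u.userid AS "userid"),
                 XMLFOREST(p.firstname AS "firstname", p.lastname AS "lastname"))
          INTO p_userrecord
          FROM users u
          LEFT JOIN people p ON p.personid = u.personid
         WHERE u.userid = p_userid;   -- column vs. parameter, not column vs. itself
      END uspx_getuserbyid;
      /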

    Read the article

  • How to select line number in TextBox Multiline

    - by Alhambra Eidos
    Hi all, I have a large text in a System.Windows.Forms.TextBox control on my form (WinForms, VS 2008). I want to find a piece of text and select the line number where that text was found. For example, in the big text below I search for "ERROR en línea" and I want to select that line number in the multiline textbox. string textoLogDeFuenteSQL = @"SQL*Plus: Release 10.1.0.4.2 - Production on Mar Jun 1 14:35:43 2010 Copyright (c) 1982, 2005, Oracle. All rights reserved. ** MORE TEXT **** Conectado a: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production With the Partitioning, Data Mining and Real Application Testing options WHERE LAVECODIGO = 'CO_PREANUL' * ERROR en línea 2: ORA-00904: ""LAVECODIGO"": identificador no válido INSERT INTO COM_CODIGOS * ERROR en línea 1: ORA-00001: restricción única (XACO.INX_COM_CODIGOS_PK) violada"; ** MORE TEXT **** Any sample code about it? Thanks in advance,

    Read the article

  • Oracle: show parameters on error

    - by llappall
    When Oracle logs a parameterized SQL query failing, it shows "?" in place of the parameters, i.e. the query before replacing parameters. For example, "SELECT * FROM table where col like '?'" SQL state [99999]; error code [29902]; ORA-29902: error in executing ODCIIndexStart() routine ORA-20000: Oracle Text error: DRG-50901: text query parser syntax error on line 1, column 48 Is there a way to change logging so it shows the parameter values? The information above is absolutely useless unless I can see what the actual parsing problem was. In general, is there a way to set logs in Oracle to show parameters in parameterized query errors?
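    Oracle does not substitute bind values into the statement text it reports, but as a hedged workaround the values are often recoverable from the bind-capture views, assuming STATISTICS_LEVEL is TYPICAL or ALL, the cursor is still in the shared pool, and keeping in mind that capture is sampled rather than done on every execution:

      -- Sketch: pull captured bind values for the statement of interest.
      SELECT s.sql_id,
             b.name,
             b.position,
             b.datatype_string,
             b.value_string
        FROM v$sql s
        JOIN v$sql_bind_capture b
          ON b.sql_id = s.sql_id
         AND b.child_number = s.child_number
       WHERE s.sql_text LIKE 'SELECT * FROM table where col like%'
       ORDER BY b.position;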

    Read the article

  • Catching Oracle Errors in Django

    - by Dashdrum
    My Django app runs on an Oracle database. A few times a year, the database is unavailable because of a scheduled process or unplanned downtime. However, I can't see how to catch the error and give a useful message back to the requester. Instead, a 500 error is triggered, and I get an email (or hundreds) showing the exception. One example is: File "/opt/UDO/env/events/lib/python2.6/site-packages/django/db/backends/oracle/base.py", line 447, in _cursor self.connection = Database.connect(conn_string, **conn_params) DatabaseError: ORA-01035: ORACLE only available to users with RESTRICTED SESSION privilege I see a similar error with a different ORA number when the DB is down. Because the exception is thrown deep within the Django libraries, and can be triggered by any of my views or the built-in admin views, I don't know where any exception trapping code would go. Any suggestions?

    Read the article

  • .NET connecting to oracle problems with the connectionstring

    - by Oxymoron
    At the moment I'm trying to make a connection to a local server. Connecting via, say, TOAD works fine. When I try to connect using .NET I get ora-12154. Which puzzles me, since I'm using the connectionstring from my TNSNAMES.ora file: XE = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = myPC)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = XE) ) ) As follows: private string connectionString = "Data Source=(DESCRIPTION =" +" (ADDRESS_LIST=(ADDRESS = (PROTOCOL = TCP)(HOST = myPC)(PORT = 1521)))" +" (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = XE));" +"User Id=sys;Password=zsxyzabc;"; Any ideas?

    Read the article

  • Retrieving Oracle Cursor with JDBC

    - by BeginnerAmongBeginners
    I have been experiencing some frustrations trying to make a simple Oracle cursor retrieval procedure work with JDBC. I keep on getting an error of "[Oracle][ODBC][Ora]ORA-06553: PLS-306: wrong number or types of arguments in call to 'GETNAME'", but I cannot figure out what I am doing wrong. Here is my code in Java: CallableStatement stmt = connection.prepareCall("call getName(?)"); stmt.registerOutputParameter(1, OracleTypes.CURSOR); stmt.execute(); stmt.close(); con.close(); Here is my procedure in Oracle: CREATE OR REPLACE PROCEDURE getName(cur out SYS_REFCURSOR) IS BEGIN OPEN cur FOR SELECT name FROM customer; END; Thanks in advance. By the way, I am working with Oracle 10.2.0.
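    Before blaming the Java side, it can help to confirm the procedure itself returns the cursor cleanly; a quick hedged check from SQL*Plus is shown below. On the driver side, the "[Oracle][ODBC][Ora]" prefix in the error suggests an ODBC-based connection, which generally cannot bind a SYS_REFCURSOR out parameter, so the Oracle JDBC thin or OCI driver with OracleTypes.CURSOR is the usual route, and the cursor should also be read before closing the statement.

      -- Sketch: exercise the procedure from SQL*Plus to rule out the PL/SQL side.
      VARIABLE rc REFCURSOR
      EXECUTE getName(:rc)
      PRINT rc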

    Read the article

  • Why can't we use strong ref cursor with dynamic SQL Statement?

    - by Vineet
    Hi all, I am trying to use a strong ref cursor with a dynamic SQL statement, but it raises an error, whereas a weak cursor works. Please explain the reason, and please point me to any link about the Oracle server architecture that covers how compilation and parsing are done in the Oracle server. This is the error along with the code. ERROR at line 6: ORA-06550: line 6, column 7: PLS-00455: cursor 'EMP_REF_CUR' cannot be used in dynamic SQL OPEN statement ORA-06550: line 6, column 2: PL/SQL: Statement ignored
    declare
      type ref_cur_type IS REF CURSOR RETURN employees%ROWTYPE; --Creating a strong REF cursor, employees is a table
      emp_ref_cur ref_cur_type;
      emp_rec employees%ROWTYPE;
    BEGIN
      OPEN emp_ref_cur FOR 'SELECT * FROM employees';
      LOOP
        FETCH emp_ref_cur INTO emp_rec;
        EXIT WHEN emp_ref_cur%NOTFOUND;
      END LOOP;
    END;
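    The short version is that PLS-00455 is by design: a strong REF CURSOR's return type is checked against the query at compile time, and a dynamic string cannot be checked until run time, so OPEN ... FOR '<string>' is only allowed for weak cursor types. A hedged rewrite using the predefined weak SYS_REFCURSOR (the LAST_NAME column in the output line is an assumption about the employees table):

      -- Sketch: a weak (typeless) ref cursor works with a dynamic OPEN ... FOR.
      DECLARE
        emp_ref_cur SYS_REFCURSOR;           -- weak: no compile-time row type
        emp_rec     employees%ROWTYPE;
      BEGIN
        OPEN emp_ref_cur FOR 'SELECT * FROM employees';
        LOOP
          FETCH emp_ref_cur INTO emp_rec;
          EXIT WHEN emp_ref_cur%NOTFOUND;
          DBMS_OUTPUT.put_line(emp_rec.last_name);  -- assumes a LAST_NAME column
        END LOOP;
        CLOSE emp_ref_cur;
      END;
      /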

    Read the article

  • On configuring GC 10.2.0.5 to monitor LISTENER SCAN using UDMs ...

    - by [email protected]
    Hi, looks like Grid Control 10.2.0.5 is not fully prepared for monitoring the Grid Infrastructure (11gR2). Even though I'm pretty sure the upcoming version of GC (11g) will of course support all the new features of 11gR2, some customers are asking for some "hand-made" procedures for monitoring all the new stuff. I think one of the most critical components that can't be monitored are the SCAN listeners, so I have developed a little script for doing so using the GC User Defined Metrics (at host level). I am more than happy to share it with you:
    #!/bin/ksh
    ###
    ###    NAME
    ###     monitor_scan.sh
    ###
    ###    DESCRIPTION
    ###      SCAN Listener monitoring
    ###
    ###    RETURNS
    ###
    ###    NOTES
    ###
    ###    MODIFIED           (DD/MM/YY)
    ###      Oracle            25/03/10     - Creation
    ###
    export ORACLE_HOME=/opt/oracle/soft/11.2/grid
    RSC_KEY=$1
    AWK=/sbin/awk

    LISTENER_DOWN_COUNT=$(${ORACLE_HOME}/bin/crsctl status resource -w 'TYPE = ora.scan_listener.type' | grep OFFLINE | wc -l)
    if [ ${LISTENER_DOWN_COUNT} != 0 ]; then
      SCAN_DOWN_LIST=$(${ORACLE_HOME}/bin/crsctl status resource -w 'TYPE = ora.scan_listener.type' | $AWK \
       'BEGIN { FS="="; state = 0; }
        $1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
        state == 0 {next;}
        $1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
        $1~/STATE/ && state == 2 {appstate = $2; state=3;}
        state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}' | grep OFFLINE | awk '{ print $1 }')
      echo em_result=ALERT
      echo em_message=There are LISTENER SCAN with down status: [${SCAN_DOWN_LIST}]
    else
      echo em_result=NORMAL
      echo em_message=All SCAN Listener are UP
    fi
    Hope it helps, L

    Read the article

  • Customized Database Listener Names Now Supported for EBS

    - by sreelatha.mahendra(at)oracle.com
    The database listener name can now be configured using AutoConfig with the newly introduced context variable s_db_listener. Prior to this certification it was not possible to use AutoConfig-generated listener.ora files for managing listeners from SRVCTL when there were multiple RAC instances on the same server. To use this feature, E-Business Suite customers need to apply the following patches: 11.5.10CU2 - Roll Up Patch 9535311 (RUP-U) or higher; 12.0.x - R12.TXK.A.delta.7 or higher; 12.1.x - R12.TXK.B.delta.3 or higher.

    Read the article

  • Partner Lab Program

    - by [email protected]
    On June 1st the new PARTNER LAB PROGRAM calendar was published. On the web site you will find big news: we have revised the agendas and contents of all the seminars and enriched our offering with many webseminars of roughly an hour and a half each. Don't miss the opportunity to stay up to date on all the products that make up Oracle's technology offering: visit the Partner Lab site right away and sign up for the classroom seminars and the webseminars of interest to you.

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick-off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine. In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before  we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula: max memory / # of buffers = buffer size If it was that simple I wouldn’t be writing this post. I’ll take “boundary” for 64K Alex For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs: Note: This test was run on a 2 core machine using per_cpu partitioning which results in 5 buffers. (Seem my previous post referenced above for the math behind buffer count.) input_memory_kb total_regular_buffers regular_buffer_size total_buffer_size 637 5 130867 654335 638 5 130867 654335 639 5 130867 654335 640 5 196403 982015 641 5 196403 982015 642 5 196403 982015 This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes: 196403 – 130867 = 65536 bytes 65536 / 1024 = 64 KB The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of none, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example. 
    start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
    1                      191                  NULL                   NULL                 NULL
    192                    383                  3                      130867               392601
    384                    575                  3                      196403               589209
    576                    767                  3                      261939               785817
    768                    959                  3                      327475               982425
    960                    1151                 3                      393011               1179033
    1152                   1343                 3                      458547               1375641
    1344                   1535                 3                      524083               1572249
    1536                   1727                 3                      589619               1768857
    1728                   1919                 3                      655155               1965465
    1920                   2111                 3                      720691               2162073
    2112                   2303                 3                      786227               2358681
    2304                   2495                 3                      851763               2555289
    2496                   2687                 3                      917299               2751897
    2688                   2879                 3                      982835               2948505
    2880                   3071                 3                      1048371              3145113
    3072                   3263                 3                      1113907              3341721
    3264                   3455                 3                      1179443              3538329
    3456                   3647                 3                      1244979              3734937
    3648                   3839                 3                      1310515              3931545
    3840                   4031                 3                      1376051              4128153
    4032                   4096                 3                      1441587              4324761
    As you can see, there are 21 "steps" within this range and max_memory values below 192 KB fall below the 64K per buffer limit so they generate an error when you attempt to specify them. Max approximates True as memory approaches 64K The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (Those of you who read Take it to the Max (and beyond) know that max_memory is really only referring to the event session buffer memory.) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary, for example if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get "stepped up" for every 191 KB block of initial max_memory which isn't likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets "stepped up" when you break a boundary, the delta can get much larger because it's multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you've just broken through a 64K boundary and get "stepped up" to the next buffer size you'll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you're getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you'll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The "buffer steps" for any given hardware configuration should be static within each partition mode so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode.
    DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
    DECLARE @buf_size int, @part_mode varchar(8)
    SET @buf_size = 1            -- Set to the begining of your max_memory range (KB)
    SET @part_mode = 'per_cpu'   -- Set to the partition mode for the table you want to generate
    WHILE @buf_size <= 4096      -- Set to the end of your max_memory range (KB)
    BEGIN
        BEGIN TRY
            IF EXISTS (SELECT * from sys.server_event_sessions WHERE name = 'buffer_size_test')
                DROP EVENT SESSION buffer_size_test ON SERVER
            DECLARE @session nvarchar(max)
            SET @session = 'create event session buffer_size_test on server
                            add event sql_statement_completed
                            add target ring_buffer
                            with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
            EXEC sp_executesql @session
            SET @session = 'alter event session buffer_size_test on server
                            state = start'
            EXEC sp_executesql @session
            INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
        END TRY
        BEGIN CATCH
            INSERT @buf_size_output (input_memory_kb)
                SELECT @buf_size
        END CATCH
        SET @buf_size = @buf_size + 1
    END
    DROP EVENT SESSION buffer_size_test ON SERVER
    SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb, total_regular_buffers, regular_buffer_size, total_buffer_size
    from @buf_size_output
    group by total_regular_buffers, regular_buffer_size, total_buffer_size
    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike

    Read the article

  • Begin the Clone Wars Have!

    - by Antony Reynolds
    Creating a New Virtual Machine from an Existing Virtual Disk In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5.  That is great if you want the DB and FMW running in the same virtual image and it has served me well for some proof of concepts and also for some testing of different JVMs.  However I also wanted to run some testing of FMW with the database running on a separate physical machine.  So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image. What are my Options? There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware.  Some of the options include: Create new virtual disk images for each new VM. Clone the existing disk images and point the new VM at the cloned images. Point the new VM at the existing snapshots. #1 is too much like hard work, install OEL twice, install a database again, install FMW again, run RCU again!  Life is too short! #2 is probably the safest way of doing things.  VirtualBox allows you to clone a disk image for use in a separate machine.  However this of course duplicates the disk and means that it is now occupying 3 times the space, once for the original disk and twice more for the two clones I would need. #3 is the most space efficient way of doing things.  It does mean however that I can only run the new “cloned” images if I have access to the original image because that is where the base snapshots reside.  However this is not a problem for me as long as I remember to keep all threee images together.  So this is the approach we will follow. Snapshot, What Snapshot? As we are going to create new virtual machines based on existing snapshots we need to figure out which snapshot to use.  We do this by opening the “Media Manager” from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want – the snapshot name is identified in the “Attached to:” comment.  In my case I wanted the FMW installed snapshot because that had a database configured for FMW alongside the FMW software.  I made a note of the filename of that snapshot (actually I just noted the first 5 characters as that was all that was needed to uniquely identify the snapshot file). When we create the new machines we will point them at the snapshot filename we have just checked. Network or NotWork? Because we want the two new machines to communicate with each other when hosted in different physical machines we can’t use the default NAT networking mode without a lot of hassle.  But at the same time we need them to have fixed IP addresses relative to each other so that they can see each other whilst also being able to see the outside world. To achieve all these requirements I created two network adapters for each machine.  Adapter 1 was a standard NAT mapping.  This will allow each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VBox provided NAT gateway.  This is the same as the existing configuration. The second adapter I created as a bridged adapter.  This gives the virtual machine direct access to the host network card and by using fixed IP addresses each machine can see the other.  It is important to choose fixed IP addresses that are not routable across your internal network so you don’t get any clashes with other machines on your network.  
Of course you could always get proper fixed IP addresses from your network people, but I have serveral people using my images and as long as I don’t have two instances of the same VM on the same network segment this is easier and avoids reconfiguring the network every time someone wants a copy of my VM.  If it is available I would suggest using the 10.0.3.* network as 10.0.2.* is the default NAT network.  You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine.  If it times out then you are probably safe to use that. Creating the New VMs Now that I had collected the data that I needed I went ahead and created the new VMs. When asked for a “Boot Hard Disk” I used the “Choose a virtual hard disk file…” link to find the snapshot I had previously selected and set that to be the existing hard disk.  I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines because that snapshot had the database with the RCU completed that I wanted for my DB machine and it had the SOA software installed which I wanted for my FMW machine. After the initial creation of the virtual machine go into the network setting section and enable a second adapter which will be bridged.  Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters so that you can later set the bridged adapter to use fixed IP and the NAT adapter to use DHCP. We are now ready to start the VMs and reconfigure Linux. Reconfiguring Linux Because I now have two new machines I need to change their network configuration.  In particular I need to change the hostname, update the hosts file and change the network settings. Changing the Hostname I renamed both hosts by running the hostname command as root: hostname vboxfmw.oracle.com I also edited the /etc/sysconfig file and set the correct hostname in there. HOSTNAME=vboxfmw.oracle.com Changing the Network Settings I needed to change the network configuration to give the bridged network a fixed IP address.  I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS.  So I went in to the System->Preferences->Network Connections screen and explicitly set the “Device MAC address” for the two adapters. Having correctly mapped the Linux adapters to the VirtualBox adapters I then set the Bridged adapter to use fixed IP addressing rather than DHCP.  There is no need for additional routing or default gateways because we expect the two machine to be on the same LAN segment. Updating the Hosts File Having renamed the machines and reconfigured the network I then updated the /etc/hosts file to refer to the new machine name add a new line to the hosts file to provide an additional IP address for my server (the new fixed IP address) add a new line for the fixed IP address of the other virtual machine 10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager 10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager 10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager 127.0.0.1       localhost.localdomain   localhost ::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6 To make sure everything takes effect I restarted the server. 
Reconfiguring the Database on the DB Machine Because we changed the hostname the listener and the EM console no longer start so I need to modify the listener.ora to use the new hostname and I also need to rebuild the EM configuration because it also relies on the hostname. I edited the $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname:       (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521)) After changing the listener.ora I was able to start the listener using: lsnrctl start I also had to reconfigure the EM database control.  I first deconfigured it using the command: emca -deconfig dbcontrol db -repos drop This drops the repository and removes any existing registered dbcontrols. I then re-configured it using the following command: emca -config dbcontrol db -repos create This creates the EM repository and then configures and starts dbcontrol. Now my database machine is ready so I can close it down and take a snapshot. Disabling the Database on the FMW Machine I set up the database to start automatically by creating a service called “dbora”.  On the FMW machine I do not need the database running so I can prevent it auto-starting by running the following command: chkconfig –del dbora Note that because I am using a snapshot it is not a waste of disk space to have the DB installed but not used.  As long as I don’t run it, it won’t cost me anything. I can now close the FMW machine down and take a snapshot. Creating a New Domain The FMW machine is now ready to create a new domain.  When creating the domain I can point it at the second machine which is running the database.  I can potentially run these machines on two separate physical machines as long as I have the original virtual machine available to both of the physical machines. Gotchas in Snapshotting VirtualBox does not support the concept of linked machines in a network like some virtualization technologies so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them.  This is because we want to keep the database in sync with the middleware.  One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share, this would mean that all we would need to snapshot would be the database machine because that would hold all the state and configuration. The Sky’s the Limit We have covered a simple case of having just two machines.  I have a more complicated configuration in which two machine run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image.  Just remember what machine holds state and what are the consequences of taking a snapshot.

    Read the article

  • On the GoldenGate Replicat HANDLECOLLISIONS parameter

    - by Liu Maclean(???)
    HANDLECOLLISIONS?????goldengate????????REPLICAT??,???????????????????,???????????????????????????,??????????????????????????reperror????????discard??,????????????????,??????(????error mapping????,???????discard??),??????????????;?????????????????,????????? ??HANDLECOLLISIONS?????: target??delete??(missing delete),??????????discardfile target??update??(missing update) ????????=» update???INSERT ,???????????? ?????????=» ??????????discardfile ????????????target??,???replicat???UPDATE?????????????? ??1 target??delete??(missing delete) : C:\Users\ML>sqlplus / as sysdba SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 18 13:38:03 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> conn sender/oracle Connected. SQL> create table handlec(t1 int primary key,t2 int); Table created. SQL> insert into handlec values(1,2); 1 row created. SQL> insert into handlec values(3,2); 1 row created. SQL> insert into handlec values(4,2); 1 row created. SQL> commit; Commit complete. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 3 2 4 2 target : SQL> conn receiver/oracle Connected. SQL> create table handlec(t1 int primary key,t2 int); Table created. SQL> insert into handlec values(1,2); 1 row created. SQL> commit; SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 SQL> GGSCI (XIANGBLI-CN) 1> alter extract load2 , begin now EXTRACT altered. GGSCI (XIANGBLI-CN) 4> alter replicat rep2, begin now REPLICAT altered. GGSCI (XIANGBLI-CN) 13> add trandata sender.* Logging of supplemental redo data enabled for table SENDER.HANDLEC. Logging of supplemental redo log data is already enabled for table SENDER.TV. GGSCI (XIANGBLI-CN) 14> start mgr MGR is already running. GGSCI (XIANGBLI-CN) 15> start er * Sending START request to MANAGER ... EXTRACT LOAD2 starting Sending START request to MANAGER ... REPLICAT REP2 starting GGSCI (XIANGBLI-CN) 16> info all Program Status Group Lag at Chkpt Time Since Chkpt MANAGER RUNNING EXTRACT RUNNING LOAD2 00:00:00 00:00:01 REPLICAT RUNNING REP2 00:00:00 00:00:08 ***SOURCE?????TARGET????? SQL> delete handlec where t1=3; 1 row deleted. SQL> commit; Commit complete. ??SQL error 1403??,REPLICAT ABORT 2012-09-18 13:45:48 WARNING OGG-01004 Aborted grouped transaction on 'RECEIVER.HANDLEC', Database error 1403 (OCI Error ORA-01403: no data found, SQL ). 2012-09-18 13:45:48 WARNING OGG-01003 Repositioning to rba 1091 in seqno 3. 2012-09-18 13:45:48 WARNING OGG-01154 SQL error 1403 mapping SENDER.HANDLEC to RECEIVER.HANDLEC OCI Error ORA-01403: no data found, SQL . 2012-09-18 13:45:48 WARNING OGG-01003 Repositioning to rba 1091 in seqno 3. 
Source Context : SourceModule : [er.errors] SourceID : [er/errors.cpp] SourceFunction : [take_rep_err_action] SourceLine : [623] ThreadBacktrace : [8] elements : [D:\ogg\V34342-01\gglog.dll(??1CContextItem@@UEAA@XZ+0x3272) [0x000000018010BDD2]] : [D:\ogg\V34342-01\gglog.dll(?_MSG_ERR_MAP_TO_TANDEM_FAILED@@YAPEAVCMessage@@PEAVCSourceContext@@AEBV?$CQualDBObjName@$00@ggapp@gglib@ggs@@1W4MessageDisposition@CMessageFactory@@@Z+0x138) [0x00000001800AD508]] : [D:\ogg\V34342-01\replicat.exe(ERCALLBACK+0x6e1e) [0x0000000140099D5E]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x4411) [0x00000001400C9BE1]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x289cd) [0x00000001400EE19D]] : [D:\ogg\V34342-01\replicat.exe(CommonLexerNewSSD+0x9440) [0x00000001402AE980]] : [C:\windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007733652D]] : [C:\windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007746C521]] 2012-09-18 13:45:48 ERROR OGG-01296 Error mapping from SENDER.HANDLEC to RECEIVER.HANDLEC. *********************************************************************** * ** Run Time Statistics ** * *********************************************************************** Last record for the last committed transaction is the following: ___________________________________________________________________ Trail name : D:\ogg\V34342-01\ex\ze000003 Hdr-Ind : E (x45) Partition : . (x04) UndoFlag : . (x00) BeforeAfter: B (x42) RecLength : 9 (x0009) IO Time : 2012-09-18 13:45:38.000000 IOType : 3 (x03) OrigNode : 255 (xff) TransInd : . (x03) FormatType : R (x52) SyskeyLen : 0 (x00) Incomplete : . (x00) AuditRBA : 44 AuditPos : 3337232 Continued : N (x00) RecCount : 1 (x01) 2012-09-18 13:45:38.000000 Delete Len 9 RBA 1091 Name: SENDER.HANDLEC ___________________________________________________________________ Reading D:\ogg\V34342-01\ex\ze000003, current RBA 1091, 0 records Report at 2012-09-18 13:45:48 (activity since 2012-09-18 13:45:48) From Table SENDER.HANDLEC to RECEIVER.HANDLEC: # inserts: 0 # updates: 0 # deletes: 0 # discards: 1 Last log location read: FILE: D:\ogg\V34342-01\ex\ze000003 SEQNO: 3 RBA: 1091 TIMESTAMP: 2012-09-18 13:45:38.000000 EOF: NO READERR: 0 2012-09-18 13:45:48 ERROR OGG-01668 PROCESS ABENDING. 2012-09-18 13:45:48 INFO OGG-01237 Trace file D:\ogg\V34342-01\REP_TRACE1.TRC closed. 2012-09-18 13:45:48 INFO OGG-01237 Trace file D:\ogg\V34342-01\REP_TRACE2.TRC closed. CACHE OBJECT MANAGER statistics CACHE MANAGER VM USAGE vm current = 0 vm anon queues = 0 vm anon in use = 0 vm file = 0 vm used max = 0 ==> CACHE BALANCED CACHE CONFIGURATION cache size = 2G cache force paging = 3.41G buffer min = 64K buffer highwater = 8M pageout eligible size = 8M ================================================================================ ??skiptransaction???????? GGSCI (XIANGBLI-CN) 18> start rep2 skiptransaction Sending START request to MANAGER ... REPLICAT REP2 starting ??2 target??update??(missing update),???????? : ???????, ??source????????? SQL> update handlec set t1=5 where t1=4; 1 row updated. SQL> commit; Commit complete. ???target ????(miss update)??????? Database error 1403+OGG-01296 2012-09-18 13:49:30 WARNING OGG-01004 Aborted grouped transaction on 'RECEIVER.HANDLEC', Database error 1403 (OCI Error ORA-01403: no data found, SQL <UPDATE "RECEIVER"."HANDLEC" SET "T1" = :a1 WHERE "T1" = :b0>). 2012-09-18 13:49:30 WARNING OGG-01003 Repositioning to rba 1218 in seqno 3. 2012-09-18 13:49:30 WARNING OGG-01003 Repositioning to rba 1218 in seqno 3. 
Source Context : SourceModule : [er.errors] SourceID : [er/errors.cpp] SourceFunction : [take_rep_err_action] SourceLine : [623] ThreadBacktrace : [8] elements : [D:\ogg\V34342-01\gglog.dll(??1CContextItem@@UEAA@XZ+0x3272) [0x000000018010BDD2]] : [D:\ogg\V34342-01\gglog.dll(?_MSG_ERR_MAP_TO_TANDEM_FAILED@@YAPEAVCMessage@@PEAVCSourceContext@@AEBV?$CQualDBObjName@$00@ggapp@gglib@ggs@@1W4MessageDisposition@CMessageFactory@@@Z+0x138) [0x00000001800AD508]] : [D:\ogg\V34342-01\replicat.exe(ERCALLBACK+0x6e1e) [0x0000000140099D5E]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x4411) [0x00000001400C9BE1]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x289cd) [0x00000001400EE19D]] : [D:\ogg\V34342-01\replicat.exe(CommonLexerNewSSD+0x9440) [0x00000001402AE980]] : [C:\windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007733652D]] : [C:\windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007746C521]] 2012-09-18 13:49:30 ERROR OGG-01296 Error mapping from SENDER.HANDLEC to RECEIVER.HANDLEC. ??HANDLECOLLISIONS?,rep??????????discard?? GGSCI (XIANGBLI-CN) 23> view params rep2 replicat rep2 userid receiver , password oracle trace ./rep_trace1.trc trace2 ./rep_trace2.trc ASSUMETARGETDEFS HANDLECOLLISIONS map sender.*, target receiver.*; GGSCI (XIANGBLI-CN) 18> start rep2 SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 5 ????T1=5 T2 NULL?????? ,??update?????????????,??replicat??????????????update????????????????,?????T2 ?NULL ,????????????EXTRACT??PKUPDATE??? ????????FETCHOPTIONS FETCHPKUPDATECOLS ????????EXTRACT?????,???EXTRACT? ????extract???????????? ??????: SQL> conn receiver/oracle Connected. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 5 20 200 SQL> delete handlec where t1=5; 1 row deleted. SQL> commit; Commit complete. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 20 200 SQL> conn sender/oracle Connected. SQL> update handlec set t1=t1+1000 where t1=5; 1 row updated. SQL> commit; Commit complete. SQL> conn receiver/oracle Connected. SQL> SQL> SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 20 200 1005 2 ???????FETCHOPTIONS FETCHPKUPDATECOLS??????redo image???trail?,????primary key?????HANDLECOLLISIONS????target??????????? ??3 ????????????target??,???replicat???UPDATE??????????????: *** TARGET SQL> conn receiver/oracle Connected. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 9 5 target????? t1=10 t2=9??? ,????source???(10,100)??? >>SOURCE SQL> insert into handlec values(10,100); 1 row created. SQL> commit; >>TARGET SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 5 ???????source?insert??,???target???????????????HANDLECOLLISIONS?REPLICAT???UPDATE??????COLUMNS ?? HANDLECOLLISIONS?????goldengate????????REPLICAT??,???????????????????,???????????????????????????,??????????????????????????reperror????????discard??,????????????????,??????,??????????????;?????????????????,????????? ??HANDLECOLLISIONS?????: target??delete??(missing delete),??????????discardfile target??update??(missing update) ????????=» update???INSERT ,???????????? ?????????=» ??????????discardfile ????????????target??,???replicat???UPDATE?????????????? ?:???????????Insert/Delete??,????????????????Replicat?????abend,????? ???????????,??target??HANDLECOLLISIONS??update??,?????INSERT??????,???????????????,FETCHOPTIONS FETCHPKUPDATECOLS??????redo image???trail?,????primary key?????HANDLECOLLISIONS????target??????????? 
    You can also toggle HANDLECOLLISIONS on a running Replicat with the GGSCI send command:
    GGSCI (XIANGBLI-CN) 29> send rep2, NOHANDLECOLLISIONS
    Sending NOHANDLECOLLISIONS request to REPLICAT REP2 ...
    REP2 NOHANDLECOLLISIONS set for 1 tables and 0 wildcard entries

    Read the article

  • SSIS - Connect to Oracle on a 64-bit machine (Updated for SSIS 2008 R2)

    - by jorg
    We recently had a few customers where a connection to Oracle on a 64 bit machine was necessary. A quick search on the internet showed that this could be a big problem. I found all kind of blog and forum posts of developers complaining about this. A lot of developers will recognize the following error message: Test connection failed because of an error in initializing provider. Oracle client and networking components were not found. These components are supplied by Oracle Corporation and are part of the Oracle Version 7.3.3 or later client software installation. Provider is unable to function until these components are installed. After a lot of searching, trying and debugging I think I found the right way to do it! Problems Because BIDS is a 32 bit application, as well on 32 as on 64 bit machines, it cannot see the 64 bit driver for Oracle. Because of this, connecting to Oracle from BIDS on a 64 bit machine will never work when you install the 64 bit Oracle client. Another problem is the "Microsoft Provider for Oracle", this driver only exists in a 32 bit version and Microsoft has no plans to create a 64 bit one in the near future. The last problem I know of is in the Oracle client itself, it seems that a connection will never work with the instant client, so always use the full client. There are also a lot of problems with the 10G client, one of it is the fact that this driver can't handle the "(x86)" in the path of SQL Server. So using the 10G client is no option! Solution Download the Oracle 11G full client. Install the 32 AND the 64 bit version of the 11G full client (Installation Type: Administrator) and reboot the server afterwards. The 32 bit version is needed for development from BIDS with is 32 bit, the 64 bit version is needed for production with the SQLAgent, which is 64 bit. Configure the Oracle clients (both 32 and 64 bits) by editing  the files tnsnames.ora and sqlnet.ora. Try to do this with an Oracle DBA or, even better, let him/her do this. Use the "Oracle provider for OLE DB" from SSIS, don't use the "Microsoft Provider for Oracle" because a 64 bit version of it does not exist. Schedule your packages with the SQLAgent. Background information Visual Studio (BI Dev Studio)is a 32bit application. SQL Server Management Studio is a 32bit application. dtexecui.exe is a 32bit application. dtexec.exe has both 32bit and 64bit versions. There are x64 and x86 versions of the Oracle provider available. SQLAgent is a 64bit process. My advice to BI consultants is to get an Oracle DBA or professional for the installation and configuration of the 2 full clients (32 and 64 bit). Tell the DBA to download the biggest client available, this way you are sure that they pick the right one ;-) Testing if the clients have been installed and configured in the right way can be done with Windows ODBC Data Source Administrator: Start... Programs... Administrative tools... Data Sources (ODBC) ADITIONAL STEPS FOR SSIS 2008 R2 It seems that, unfortunately, some additional steps are necessary for SQL Server 2008 R2 installations: 1. Open REGEDIT (Start… Run… REGEDIT) on the server and search for the following entry (for the 32 bits driver): HKEY_LOCAL_MACHINE\Software\Microsoft\MSDTC\MTxOCI Make sure the following values are entered: 2. Next, search for (for the 64 bits driver): HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\MSDTC\MTxOCI Make sure the same values as above are entered. 3. Reboot your server.

    Read the article

  • Guaranteed Restore Points as Fallback Method

    - by Mike Dietrich
    Thanks to the great audience yesterday in the Upgrade & Migration Workshop in Utrecht. That was really fun and I was amazed by our new facilities (and the  "wellness" lights surrounding the plenum room's walls). And another reason why I like to do these workshops is that often I learn new things from you So credits here to Rick van  Ek who has highlighted the following topic to me. Yesterday (and in some previous workshops) I did mention during the discussion about Fallback Strategies that you'll have to switch on Flashback Database beforehand to create a guaranteed restore point in case you'll encounter an issue during the database upgrade. I knew that we've made it possible since Oracle Database 11.2 to switch Flashback Database on without taking the database into MOUNT status (you could switch it off anyway while the database is open before in all releases). But before Oracle Database 11.2 that did require MOUNT status. SQL> create restore point rp1 guarantee flashback database ; create restore point rp1 guarantee flashback database * ERROR at line 1: ORA-38784: Cannot create restore point 'RP1'. ORA-38787: Creating the first guaranteed restore point requires mount mode when flashback database is off. But Rick did mention that I won't need to switch Flashback Database On to create a guaranteed restore point. And he's right - in older releases I would have had to go into MOUNT state to define the restore point which meant to restart the database. But in 11.2 that's no necessary anymore. And the same will apply when you upgrade your pre-11.2 database (e.g. an Oracle Database 10.2.0.4) to Oracle Database 11.2. As soon as you start your "old" not-yet-upgraded database in your 11.2 environment with STARTUP UPGRADE you can define a guaranteed restore point. If you tail the alert.log you'll see that the database will start the RVWR (Recovery Writer) background process - you'll just have to make sure that you'd define the values for db_recovery_file_dest_size and db_recovery_file_dest. SQL> startup upgrade ORACLE instance started. Total System Global Area  417546240 bytes Fixed Size                  2228944 bytes Variable Size             134221104 bytes Database Buffers          272629760 bytes Redo Buffers                8466432 bytes Database mounted. Database opened. SQL> create restore point grpt guarantee flashback database; Restore point created.SQL> drop restore point grpt; And don't forget to drop that restore point the sooner or later as it is guaranteed - and will fill up your Fast Recovery Area pretty quickly Just on the side: in any case archivelog mode is required if you'd like to work with restore points. - Mike
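    For completeness, the point of the guaranteed restore point as a fallback is that, should the upgrade go wrong, the database can be flashed back to it. A hedged sketch of that path (run before the restore point is dropped, from the environment the database was started in; the name grpt is the one created above):

      -- Sketch: fall back to the guaranteed restore point after a failed upgrade.
      SHUTDOWN IMMEDIATE
      STARTUP MOUNT
      FLASHBACK DATABASE TO RESTORE POINT grpt;
      ALTER DATABASE OPEN RESETLOGS;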

    Read the article
