Search Results

Search found 19598 results on 784 pages for 'oracle global hr'.


  • Global menu stopped working for most applications

    - by ltorgo
    I have Ubuntu 12.04 32-bit and I had been enjoying global menus and HUD since the beginning. However, for some strange reason, I stopped getting global menus for most applications, and thus no HUD fun anymore. Actually, the only two applications where I still have global menus are Thunderbird and Firefox, so I'm not sure if it has something to do with those two... I've already browsed through some similar reports and tried the following: checking that indicator-appmenu is installed; using Unsettings to switch global menus off and on, with reboots in between; logging in as a different user to see if it was some configuration of my user; and checking with dconf-editor that com > canonical > indicator > appmenu > menu-mode = "global". Any hint/help would be appreciated! Thank you in advance

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive and complex, and it hinders IT responsiveness. The high costs of frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from the mainframe environment. Further, data centers are planning for cloud enablement based on principles of operating at significantly lower cost, with very low upfront investment, on commodity hardware and open, standards-based systems, and with hardware, infrastructure software, and business applications decoupled. These operating principles are in direct contrast with the principles of running a business on mainframes. By using technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses can quickly and safely migrate away from their IBM mainframe environments. Further, by running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, they can also begin cloud enabling their mainframe applications. In particular:

    - Oracle Tuxedo re-hosting tools and techniques provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost.
    - Oracle GoldenGate migrates data from mainframe systems to open systems, eliminating the risks associated with data migration.
    - Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability.
    - Running Oracle software on top of Oracle Exalogic empowers customers to start cloud enabling their mainframe applications.

    Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • script engine with no global environment (java)

    - by user1886930
    I am curious about how global variables are handled by script engines. I am looking for a script engine that does not preserve the state of global variables between invocations. Are there such engines out there? We are looking for a scripting language we can use under the script engine API for Java. When making multiple invocations of a script engine, top-level calls to the eval() method preserve the state of global variables, meaning that subsequent calls to eval() will see the global variables as they were left by the last invocation. Is there a script engine that does not preserve this state, or that provides the ability to reset it, so that global variables are at their initial state every time the script engine is invoked?
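    For what it's worth, the behavior in question is easy to demonstrate with the javax.script API, and one common workaround is to evaluate each script against a fresh Bindings object so that globals start clean on every invocation. A minimal sketch (the engine name is an assumption; use whatever JSR-223 engine is on your classpath):

        import javax.script.Bindings;
        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;

        public class FreshGlobalsDemo {
            public static void main(String[] args) throws Exception {
                // Engine name is an assumption: "nashorn" on JDK 8-14, "js" with GraalVM.
                ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

                // Default behavior: globals persist across eval() calls.
                engine.eval("var counter = 1;");
                System.out.println(engine.eval("counter + 1")); // 2 -- state was preserved

                // Workaround: give each invocation its own Bindings, so globals reset.
                Bindings first = engine.createBindings();
                engine.eval("var counter = 1;", first);

                Bindings second = engine.createBindings();
                // "counter" does not exist in the fresh bindings; typeof avoids an error.
                System.out.println(engine.eval("typeof counter", second)); // "undefined"
            }
        }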

    Read the article

  • `this` in global scope in ECMAScript 6

    - by Nathan Wall
    I've tried looking in the ES6 draft myself, but I'm not sure where to look: can someone tell me if "this" in ES6 necessarily refers to the global object? Will that object have the same members as the global scope? If you could answer for ES5 as well, that would be helpful. I know "this" in global scope refers to the global object in the browser and in most other ES environments, like Node. I just want to know whether that is behavior defined by the spec or extended behavior that implementers have added (and whether this behavior will continue in ES6 implementations). In addition, is the global object always the same thing as the global scope, or are there distinctions? Update - why I want to know: I am basically trying to figure out how to get the global object reliably in ES5 and ES6. I can't rely on window because that's specific to the browser, nor can I rely on global because that's specific to environments like Node. I know "this" in Node can refer to module in module scope, but I think it still refers to global in global scope. I want a cross-environment, ES5- and ES6-compliant way to get the global object (if possible). It seems like "this" in global scope does that in all the environments I know of, but I want to know whether it's part of the actual spec (and so reliable in any environment I may not be familiar with). I also need to know whether the global scope and the global object are the same thing per the spec. In other words, will all variables in global scope be the same as globalobject.variable_name?

    Read the article

  • The 'desktops' move to Oracle

    - by [email protected]
    The move to Oracle has been most interesting. Here we have an organization that is interested in what it is interested in, and not so much in things that aren't 'core'. The legacy Sun desktop products are things that Oracle is interested in. To that end there are some changes coming to policies and products - and from my perspective they are all good. Very good. One of the changes to the product suite is that we are now referred to as part of the Virtualization team, falling under Oracle's Chief Corporate Architect, Edward Screven. Edward says that the products were a 'gem' found inside the great pile of stuff that was Sun. Another change is that while StarOffice/Open Office has certainly been endorsed by Oracle - it also falls under Edward's purview - and there has been a push to use it as opposed to... well... you know. It is not, however, part of the Virtualization team's product suite any more. There are some other really interesting changes coming that you will hear about quite soon. The big message for today, though, is that Sun Rays, Secure Global Desktop, VirtualBox, and Oracle VDI software are all still alive and kicking and moving forward. In fact, at the Oracle earnings call last week, Charles Phillips announced more significant wins with Sun Rays in the US Federal Government space. He could have talked about all kinds of legacy Sun products, but chose to mention Sun Rays in the first quarterly statement since the acquisition of Sun - you should see this as a very good sign indeed. More soon - until then...

    Read the article

  • Oracle Tutor: Learn Tutor in the comfort of your own home or office

    - by emily.chorba(at)oracle.com
    The primary challenge for companies faced with documenting policies and procedures is realizing that they can do this documentation in-house, with existing resources, using Oracle Tutor. Procedure documentation is a critical success component when supporting corporate governance or other regulatory compliance initiatives, and when implementing or upgrading to a new business application. There are over 1000 Oracle Tutor customers worldwide that have used Tutor to create, distribute, and maintain their business procedures. This is easily accomplished because of Tutor's:

    - Ease of use by those who have to write procedures (Microsoft Word based authoring)
    - Ease of company-wide implementation (complex document management activities are centralized)
    - Ease of use by workers who have to follow the procedures (play script format)
    - Ease of access by remote workers (web-enabled)

    Oracle University is offering live virtual Tutor classes! The class lasts four days, starting on a Tuesday and finishing on the Friday. This course is an introduction to the Oracle Tutor suite of products. It focuses on the policy and procedure writing feature set of the Tutor applications. Participants will learn about writing procedures and maintaining these particular process document types, all using the Tutor method. The next three classes are scheduled for:

    - April 19 - 22
    - May 31 - June 3
    - July 5 - 8

    You will learn to:

    - Write procedures
    - Create procedure flowcharts
    - Write support documents
    - Create Impact Analysis reports
    - Create role-based Employee Manuals
    - Deploy online Employee Manuals on an intranet

    Enjoy learning Tutor in your local environment. Start the sign-up process from this link.

    Learn More

    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Know More About Oracle Row Lock

    - by Liu Maclean
    It is well known that Oracle implements row-level locking. A row lock lives in the data block itself: the row piece carries a lock byte (lb) that points at an entry in the block's ITL (Interested Transaction List), and the ITL entry records the XID of the owning transaction. There is no instance-wide list of row locks; a server process only discovers a row lock when it pins the block buffer and examines the row.

    Consider this scenario: process A updates row Z and neither commits nor rolls back. When process B later issues DML against row Z, it finds that the row's lock byte points at an active ITL entry, reads the owning XID from that entry, and queues itself on A's TX enqueue, waiting on "enq: TX - row lock contention". The interesting question is: if A releases the row lock on Z without ending its transaction - by rolling back to a savepoint taken before the update - does B stop waiting? The experiment below shows that it does not.

    SESSION A:

    SQL> select * from v$version;

    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
    PL/SQL Release 10.2.0.5.0 - Production
    CORE    10.2.0.5.0      Production
    TNS for Linux: Version 10.2.0.5.0 - Production
    NLSRTL Version 10.2.0.5.0 - Production

    SQL> select * from global_name;

    GLOBAL_NAME
    --------------------------------------------------------------------------------
    www.oracledatabase12g.com

    SQL> create table maclean_lock(t1 int);
    Table created.

    SQL> insert into maclean_lock values (1);
    1 row created.

    SQL> commit;
    Commit complete.

    SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from maclean_lock;

    DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
    ------------------------------------ ------------------------------------
                                   67642                                    1

    SQL> select distinct sid from v$mystat;

           SID
    ----------
           142

    SQL> select pid,spid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat));

           PID SPID
    ---------- ------------
            17 15636

    In SESSION A, take a savepoint first, then update the row:

    SQL> savepoint NONLOCK;
    Savepoint created.

    SQL> select * From v$Lock where sid=142;
    no rows selected

    SQL> set linesize 140 pagesize 1400
    SQL> update maclean_lock set t1=t1+2;
    1 row updated.

    SQL> select * From v$Lock where sid=142;

    ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
    ---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
    0000000091FC69F0 0000000091FC6A18        142 TM      55829          0          3          0          6          0
    00000000914B4008 00000000914B4040        142 TX     393232        609          6          0          6          0

    SQL> select dump(3,16) from dual;

    DUMP(3,16)
    --------------------------------------------------------------------------------
    Typ=2 Len=2: c1,4

    (The updated column value 3 is stored as c1 04; that lets us recognize it in the block dump.)

    ALTER SYSTEM DUMP DATAFILE 1 BLOCK 67642;

    Object id on Block? Y
    seg/obj: 0xda16  csc: 0x00.234718  itc: 2  flg: O  typ: 1 - DATA
        fsl: 0  fnx: 0x0 ver: 0x01

    Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
    0x01   0x000a.00f.000001e0  0x00800075.02a6.29  C---    0  scn 0x0000.00234711
    0x02   0x0007.018.000001fe  0x0080065c.017a.02  ----    1  fsc 0x0000.00000000

    data_block_dump,data header at 0x81d185c
    ===============
    tsiz: 0x1fa0
    hsiz: 0x14
    pbl: 0x081d185c
    bdba: 0x0041083a
         76543210
    flag=--------
    ntab=1
    nrow=1
    frre=-1
    fsbo=0x14
    fseo=0x1f9a
    avsp=0x1f83
    tosp=0x1f83
    0xe:pti[0]      nrow=1  offs=0
    0x12:pri[0]     offs=0x1f9a
    block_row_dump:
    tab 0, row 0, @0x1f9a
    tl: 6 fb: --H-FL-- lb: 0x2  cc: 1
    col  0: [ 2]  c1 04
    end_of_block_dump

    The block dump shows the row's lock byte (lb: 0x2) pointing at ITL slot 0x02, which belongs to the active transaction XID = 0x0007.018.000001fe (Lck = 1, no commit SCN), and col 0 holds c1 04 -- the updated value.

    Now SESSION B issues the same UPDATE and blocks on "enq: TX - row lock contention":

    SQL> select distinct sid from v$mystat;

           SID
    ----------
           140

    SQL> select pid,spid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat));

           PID SPID
    ---------- ------------
            24 15652

    SQL> alter system set "_trace_events"='10000-10999:255:24';
    System altered.

    (This enables KST tracing for PID 24; we will read the trace below.)

    SQL> update maclean_lock set t1=t1+2;
    -- session B hangs here

    SESSION C:

    SQL> select * From v$Lock where sid=142 or sid=140 order by sid;

    ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
    ---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
    0000000091FC6B10 0000000091FC6B38        140 TM      55829          0          3          0         84          0
    00000000924F4A58 00000000924F4A78        140 TX     458776        510          0          6         84          0
    00000000914B51E8 00000000914B5220        142 TX     458776        510          6          0        312          1
    0000000091FC69F0 0000000091FC6A18        142 TM      55829          0          3          0        312          0

    Session B (SID=140) is requesting the TX enqueue held by session A (SID=142) in X mode (REQUEST=6), and session A is the blocker (BLOCK=1).

    SQL> oradebug dump systemstate 266;
    Statement processed.

    Session B's waiter enqueue lock in the systemstate dump:

      SO: 0x924f4a58, type: 5, owner: 0x92bb8dc8, flag: INIT/-/-/0x00
      (enqueue) TX-00070018-000001FE    DID: 0001-0018-00000022
      lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
      req: X, lock_flag: 0x0, lock: 0x924f4a78, res: 0x925617c0
      own: 0x92b76be0, sess: 0x92b76be0, proc: 0x92a737a0, prv: 0x925617e0

    (TX-00070018-000001FE corresponds to ID1=458776, ID2=510 in v$lock.)

    Session A's owner enqueue lock:

      SO: 0x914b51e8, type: 40, owner: 0x92b796d0, flag: INIT/-/-/0x00
      (trans) flg = 0x1e03, flg2 = 0xc0000, prx = 0x0, ros = 2147483647 bsn = 0xed5 bndsn = 0xee7 spn = 0xef7
      efd = 3
      file:xct.c lineno:1179
      DID: 0001-0011-000000C2
      parent xid: 0x0000.000.00000000
      env: (scn: 0x0000.00234718  xid: 0x0007.018.000001fe  uba: 0x0080065c.017a.02  statement num=0  parent xid: xid: 0x0000.000.00000000  scn: 0x0000.00234718 0sch: scn: 0x0000.00000000)
      cev: (spc = 7818  arsp = 914e8310  ubk tsn: 1 rdba: 0x0080065c  useg tsn: 1 rdba: 0x00800069
            hwm uba: 0x0080065c.017a.02  col uba: 0x00000000.0000.00
            num bl: 1 bk list: 0x91435070)
            cr opc: 0x0 spc: 7818 uba: 0x0080065c.017a.02
      (enqueue) TX-00070018-000001FE    DID: 0001-0011-000000C2
      lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
      mode: X, lock_flag: 0x0, lock: 0x914b5220, res: 0x925617c0
      own: 0x92b796d0, sess: 0x92b796d0, proc: 0x92a6ffd8, prv: 0x925617d0
      xga: 0x8b7c6d40, heap: UGA
      Trans IMU st: 2 Pool index 65535, Redo pool 0x914b58d0, Undo pool 0x914b59b8
      Redo pool range [0x86de640 0x86de640 0x86e0e40]
      Undo pool range [0x86dbe40 0x86dbe40 0x86de640]
        ----------------------------------------
        SO: 0x91435070, type: 39, owner: 0x914b51e8, flag: -/-/-/0x00
        (List of Blocks) next index = 1
        index   itli   buffer hint   rdba       savepoint
        -----------------------------------------------------------
            0      2   0x647f1fc8    0x41083a     0xee7

    Now SESSION A rolls back to the savepoint:

    SQL> rollback to NONLOCK;
    Rollback complete.

    Rolling back to the savepoint undoes the UPDATE -- but watch the locks:

    SQL> select * From v$Lock where sid=142 or sid=140;

    ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
    ---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
    00000000924F4A58 00000000924F4A78        140 TX     458776        510          0          6        822          0
    0000000091FC6B10 0000000091FC6B38        140 TM      55829          0          3          0        822          0
    00000000914B51E8 00000000914B5220        142 TX     458776        510          6          0       1050          1

    Session A (SID=142) has released its TM lock, but ROLLBACK TO SAVEPOINT does not release the session's TX lock! Session 142 still holds TX ID1=458776 ID2=510; only a full ROLLBACK or COMMIT performs the ABORT TRANSACTION that frees the enqueue. Consequently session B (SID=140) is still waiting on session A, and its update remains hung.

    Dumping the same block again from cache:

    Object id on Block? Y
    seg/obj: 0xda16  csc: 0x00.2347b7  itc: 2  flg: O  typ: 1 - DATA
        fsl: 0  fnx: 0x0 ver: 0x01

    Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
    0x01   0x000a.00f.000001e0  0x00800075.02a6.29  C---    0  scn 0x0000.00234711
    0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000

    data_block_dump,data header at 0x745d85c
    ===============
    tsiz: 0x1fa0
    hsiz: 0x14
    pbl: 0x0745d85c
    bdba: 0x0041083a
         76543210
    flag=--------
    ntab=1
    nrow=1
    frre=-1
    fsbo=0x14
    fseo=0x1f9a
    avsp=0x1f83
    tosp=0x1f83
    0xe:pti[0]      nrow=1  offs=0
    0x12:pri[0]     offs=0x1f9a
    block_row_dump:
    tab 0, row 0, @0x1f9a
    tl: 6 fb: --H-FL-- lb: 0x0  cc: 1
    col  0: [ 2]  c1 02
    end_of_block_dump

    ITL slot 0x02 has been cleared, the row's lock byte is back to lb: 0x0, and col 0 is c1 02 -- the pre-update value 1. The row lock is gone. To prove it, open yet another session, D, and update the row:

    SESSION D:

    SQL> update maclean_lock set t1=t1+2;
    1 row updated.

    SQL> rollback;
    Rollback complete.

    Session D succeeds immediately, while session B is still hung waiting for session A.

    This demonstrates that Oracle's row-level locking actually involves two distinct structures: the row lock itself (the lock byte in the row piece plus the ITL entry in the block) and the TX enqueue lock. Because row locks are recorded in the blocks themselves, there is no central row-lock table to maintain -- one reason this design scales so well for OLTP. When process K wants a row held by process J, K does not wait on the row lock directly: it reads J's XID from the ITL, locates J's TX resource, and adds itself to that resource's Enqueue Waiter linked list, requesting the TX enqueue in X (exclusive) mode. When J releases its TX lock, J walks the waiter list and posts K, which can then acquire the TX lock and lock the row itself. That is exactly why releasing only the row lock via ROLLBACK TO SAVEPOINT leaves B waiting: B is queued on A's TX enqueue, not on the row.

    The KST trace confirms the sequence. Recall that session A is PID=17 and session B is PID=24, traced via "_trace_events"='10000-10999:255:24'.

    Session A (PID=17) acquires its TM lock in SX mode, binds its transaction to undo segment 7 -- XID 7.24.510 -- and acquires TX-00070018-000001fe in X mode (00070018-000001fe is simply XID 7-24-510 in hex):

    781F4B8A:007A569C    17   142 10704  83 ksqgtl: acquire TM-0000da15-00000000 mode=SX flags=GLOBAL|XACT why="contention"
    781F4B92:007A569D    17   142 10704  19 ksqgtl: SUCCESS
    781F4BB3:007A569E    17   142 10812   2 0x000000000041083A 0x0000000000000000 0x0000000000234717
    781F4BBA:007A569F    17   142 10812   3 0x0000000000000000 0x0000000000000000 0x0000000000000000
    781F4BC0:007A56A0    17   142 10812   4 0x0000000000000000 0x0000000000000000 0x0000000000000000
    781F4BD3:007A56A1    17   142 10812   5 0x000000000041083A 0x0000000000000000 0x0000000000000000
    781F4BFE:007A56A2    17   142 10811   1 0x000000000041083A 0x0000000000000000 0x0000000000234711 0x0000000000000002
    781F4C06:007A56A3    17   142 10811   2 0x000000000041083A 0x0000000000000000 0x0000000000234718 0x00007FA074EDA560
    781F4C26:007A56A4    17   142 10813   1 ktubnd: Bind usn 7 nax 1 nbx 0 lng 0 par 0
    781F4C43:007A56A5    17   142 10813   2 ktubnd: Txn Bound xid: 7.24.510
    781F4C4A:007A56A6    17   142 10704  83 ksqgtl: acquire TX-00070018-000001fe mode=X flags=GLOBAL|XACT why="contention"
    781F4C51:007A56A7    17   142 10704  19 ksqgtl: SUCCESS

    After the update, session A simply idles on SQL*Net waits:

    781F4CBF:007A56A8    17   142 10005   1 KSL WAIT BEG [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0
    781F4CCC:007A56A9    17   142 10005   2 KSL WAIT END [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0 time=13
    781F4CDE:007A56AA    17   142 10005   1 KSL WAIT BEG [SQL*Net message from client] 1650815232/0x62657100 1/0x1 0/0x0
    786BD85D:007A57E0    17   142 10005   2 KSL WAIT END [SQL*Net message from client] 1650815232/0x62657100 1/0x1 0/0x0 time=5016447
    786BD966:007A57E1    17   142 10005   1 KSL WAIT BEG [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0
    786BD96E:007A57E2    17   142 10005   2 KSL WAIT END [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0 time=8

    Session B (PID=24) likewise acquires an SX-mode TM lock, then, because of the row lock, requests X mode on the same TX-00070018-000001fe and waits:

    ksqgtl: acquire TM-0000da15-00000000 mode=SX flags=GLOBAL|XACT why="contention"
    ksqgtl: SUCCESS
    0x000000000041083A 0x0000000000000000 0x00000000002354F8
    0x0000000000000000 0x0000000000000000 0x0000000000000000
    0x0000000000000000 0x0000000000000000 0x0000000000000000
    0x000000000041083A 0x0000000000000000 0x00000000002354F8 0x0000000000000001
    0x000000000041083A 0x0000000000000000 0x00000000002354F8 0x0000000008A63780
    0x0000000000000001 0x0000000000800861 0x0000000000000241 0x0000000000000001
    0x000000000041083A 0x0000000000000001 0x0000000000000001
    0x000000000041083A 0x0000000000000000 0x00000000002354F9 0x0000000000000002
    ksqgtl: acquire TX-00070018-000001fe mode=X flags=GLOBAL|LONG why="row lock contention"
    C4048EBD:007F52B6    24   140 10005   2 KSL WAIT END [enq: TX - row lock contention] 1415053318/0x54580006 458776/0x70018 510/0x1fe time=2929879
    C4048ED4:007F52B7    24   140 10005   1 KSL WAIT BEG [enq: TX - row lock contention] 1415053318/0x54580006 458776/0x70018 510/0x1fe
    C43146CA:007F535E    24   140 10005   2 KSL WAIT END [enq: TX - row lock contention] 1415053318/0x54580006 458776/0x70018 510/0x1fe time=2930676

    While waiting, PID 24 periodically runs local deadlock detection on the enqueue via ksqcmi; no deadlock is found:

    C43146D9:007F535F    24   140 10704 134 ksqcmi: performing local deadlock detection on TX-00070018-000001fe
    C43146F8:007F5360    24   140 10704 150 ksqcmi: deadlock not detected on TX-00070018-000001fe

    Finally PID 17 issues a full ROLLBACK:

    D7A495BB:007F9D3E    17   142 10005   4 KSL POST SENT postee=24 loc='ksqrcl' id1=0 id2=0 name=   type=0
    D7A495D8:007F9D3F    17   142 10444  12 ABORT TRANSACTION - xid: 0x0007.018.000001fe

    PID 17 walks the Enqueue Waiter linked list of its TX resource, finds PID 24 waiting, posts it (KSL POST SENT) and releases the enqueue in ksqrcl. PID 24 receives the post, its pending ksqgtl on TX-00070018-000001fe succeeds, it releases that enqueue, binds its own transaction to undo segment 3 -- XID 3.11.582 -- and acquires its own TX-0003000b-00000246 in X mode:

    D7A49616:007F9D41    24   140 10005   3 KSL POST RCVD poster=17 loc='ksqrcl' id1=0 id2=0 name=   type=0 fac#=0 facpost=1
    D7A4961C:007F9D42    24   140 10704  19 ksqgtl: SUCCESS
    D7A4967D:007F9D43    24   140 10704 117 ksqrcl: release TX-00070018-000001fe mode=X
    D7A496A5:007F9D44    24   140 10813   1 ktubnd: Bind usn 3 nax 1 nbx 0 lng 0 par 0
    D7A496C2:007F9D45    24   140 10813   2 ktubnd: Txn Bound xid: 3.11.582
    D7A496C7:007F9D46    24   140 10704  83 ksqgtl: acquire TX-0003000b-00000246 mode=X flags=GLOBAL|XACT why="contention"
    D7A496E4:007F9D47    24   140 10704  19 ksqgtl: SUCCESS

    In short: the row lock lives in the block, the TX enqueue provides the queueing and posting mechanism, and a blocked DML is woken only when the holder's TX enqueue is released -- not when the row lock itself disappears.
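    To reproduce the waiting behavior programmatically rather than in several SQL*Plus windows, here is a minimal JDBC sketch. The connection string and credentials are hypothetical, and it assumes the maclean_lock table from above plus an Oracle JDBC driver on the classpath. One connection updates the row and holds its transaction open; a second connection attempts the same update and blocks on "enq: TX - row lock contention" until the first performs a full rollback.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class RowLockContentionDemo {
            // Hypothetical connection details -- adjust for your environment.
            private static final String URL  = "jdbc:oracle:thin:@//dbhost:1521/orcl";
            private static final String USER = "scott";
            private static final String PASS = "tiger";

            public static void main(String[] args) throws Exception {
                try (Connection holder = DriverManager.getConnection(URL, USER, PASS);
                     Connection waiter = DriverManager.getConnection(URL, USER, PASS)) {

                    holder.setAutoCommit(false);
                    waiter.setAutoCommit(false);

                    // "Session A": update the row and keep the transaction open.
                    try (Statement s = holder.createStatement()) {
                        s.executeUpdate("update maclean_lock set t1 = t1 + 2");
                        System.out.println("holder: row updated, transaction left open");
                    }

                    // "Session B": the same update in another thread blocks on
                    // enq: TX - row lock contention until the holder's TX ends.
                    Thread b = new Thread(() -> {
                        try (Statement s = waiter.createStatement()) {
                            long t0 = System.nanoTime();
                            s.executeUpdate("update maclean_lock set t1 = t1 + 2");
                            long ms = (System.nanoTime() - t0) / 1_000_000;
                            System.out.println("waiter: update completed after " + ms + " ms");
                            waiter.rollback();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                    b.start();

                    Thread.sleep(5000);  // let the waiter queue on the TX enqueue
                    holder.rollback();   // full rollback releases the TX enqueue and posts the waiter
                    System.out.println("holder: rolled back");
                    b.join();
                }
            }
        }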

    Read the article

  • Partner Webcast - Oracle VM Server for SPARC

    - by dmitry.nefedkin(at)oracle.com
    March 17th, 9am CET (10am EET)

    Oracle VM Server for SPARC (previously called Sun Logical Domains) provides highly efficient, enterprise-class virtualization capabilities for Oracle's SPARC T-Series servers. Oracle VM Server for SPARC allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by SPARC T-Series servers and the Oracle Solaris operating system. And all this capability is available at no additional cost.

    Agenda:
    - Overview of VM technologies from Oracle
    - LDoms introduction
    - Values and benefits
    - Feature details
    - LDoms demo
    - Q&A

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. To register, please click here. For any questions please contact [email protected].

    Read the article

  • Instructions on how to configure a WebLogic Cluster and use it with Oracle Http Server

    - by Laurent Goldsztejn
    On October 17th I delivered a webcast on WebLogic Clustering that included a demo with Apache as the proxy server. I realized that many steps are needed to set up the configuration I used during the demo. The purpose of this article is to go through these steps to show how quickly and easily one can define a new cluster and then proxy requests via an Oracle Http Server (OHS). The domain configuration wizard offers the option to create a cluster. The administration console or WLST, the WebLogic scripting tool, can also be used to define a new cluster. A cluster can be created at any time, but the servers that will participate in it cannot be in a running state.

    Cluster Creation using the configuration wizard

    Network and architecture requirements need to be considered when choosing between unicast and multicast. Multicast Vs. Unicast with WebLogic Clustering is of great help in making the best decision between the two messaging modes. In addition, Configure Cluster offers details on each field displayed above. After this initial configuration page, individual servers can be assigned to the newly created cluster, although servers can also be added to the cluster later. What is not recommended is for the Admin server to participate in a cluster, as the main purpose of the Admin server is to perform the bulk of the processing for the domain. Servers need to be stopped before being assigned to a cluster. There is also no minimum number of servers that have to participate in the cluster. At this point the configuration should be done and the cluster created successfully. This can easily be verified from the console. Each clustered managed server can be launched to join the cluster. At startup the following messages should be logged for each clustered managed server:

    <Notice> <WeblogicServer> <BEA-000365> <Server state changed to STARTING>
    <Notice> <Cluster> <BEA-000197> <Listening for announcements from cluster using messaging_mode cluster messaging>
    <Notice> <Cluster> <BEA-000133> <Waiting to synchronize with other running members of cluster_name>

    It's time to try sending requests to the cluster, and we will do this with the help of Oracle Http Server playing the role of a proxy server, to demonstrate load balancing.

    Proxy Server configuration

    The first step is to download the WebLogic Server Web Server Plugin, which will enhance the web server by handling requests aimed at the WebLogic cluster. For our test Oracle Http Server (OHS) will be used. However, plug-ins are also available for Apache Http Server, Microsoft Internet Information Server (IIS), Oracle iPlanet Webserver, or even WebLogic Server itself with the HttpClusterServlet. Once OHS is installed on the system, its configuration file, mod_wl_ohs.conf, needs to be altered to include the WebLogic proxy specifics. First of all, add the following directive to instruct Apache to load the WebLogic shared object module extracted from the plugin file just downloaded:

    LoadModule weblogic_module modules/mod_wl_ohs.so

    Then create an IfModule directive to encapsulate the following Location block, so that proxying is enabled by path (each request including /wls will be directed to the WebLogic cluster). You could also proxy requests by MIME type using MatchExpression in the Location block.

    <IfModule weblogic_module>
      <Location /wls>
        SetHandler weblogic-handler
        PathTrim /wls
        WebLogicCluster MS1_URL:port,MS2_URL:port
        Debug ON
        WLLogFile c:/tmp/global_proxy.log
        WLTempDir "c:/myTemp"
        DebugConfigInfo On
      </Location>
    </IfModule>

    SetHandler specifies the handler for the plug-in module. PathTrim instructs the plug-in to trim /wls from the URL before forwarding the request to the cluster. The list of WebLogic Servers defined in WebLogicCluster can contain a mixed set of clustered and single servers. However, the dynamic list returned for this parameter will only contain valid clustered servers, and may contain more servers if not all clustered servers are listed in WebLogicCluster.

    Testing proxy and load balancing

    It's time to start the OHS web server, which should at this point be configured correctly to proxy requests to the clustered servers. By default, round-robin is the load balancing strategy set by WebLogic. Load balancing can easily be tested by disabling cookies on your browser, given that a request containing a cookie attempts to connect to the primary server; if that attempt fails, the plug-in attempts to make a connection to the next available server in the list in a round-robin fashion. With cookies enabled, you could use two different browsers to test the load balancing with a JSP page that contains the following:

    <%@ page contentType="text/html; charset=iso-8859-1" language="java" %>
    <%
      String path = request.getContextPath();
      String getProtocol = request.getScheme();
      String getDomain = request.getServerName();
      String getPort = Integer.toString(request.getLocalPort());
      String getPath = getProtocol + "://" + getDomain + ":" + getPort + path + "/";
    %>
    <html>
    <body>
    Receiving Server <%=getPath%>
    </body>
    </html>

    Assuming that you name the JSP page Test.jsp and the webapp that contains it TestApp, your browsers should open the following URL:

    http://localhost/wls/TestApp/Test.jsp

    Each browser should connect to a different clustered server, and this simple JSP should confirm that. The webapp that contains the JSP needs to be deployed to the cluster. You can also verify that the load is correctly balanced by looking at the proxy log file. Each request generates a set of log entries that starts with:

    timestamp ================New Request:

    Each request is associated with a primary server, and a secondary server if one is available. For our test request, the following entries should appear in the log as well:

    Using Uri /wls/TestApp/Test.jsp
    After trimming path: '/TestApp/Test.jsp'
    The final request string is '/TestApp/Test.jsp'

    If an exception occurs, it is also logged in the proxy log file with the prefix:

    timestamp *******Exception type

    WeblogicBridgeConfig

    DebugConfigInfo enables runtime statistics and the production of configuration information. For security purposes, this parameter should be turned off in production. Opening http://webserver_host:port/path/xyz.jsp?__WebLogicBridgeConfig displays a proxy bridge page detailing the plugin configuration, followed by runtime statistics, which can help in diagnosing issues along with analysis of the proxy log file. In our example the URL would be:

    http://localhost/wls/TestApp/Test.jsp?__WebLogicBridgeConfig

    This entire plugin configuration should be very similar with other web servers; what varies is the name of the proxy server configuration file. So, as you can see, it only takes a few minutes to configure a WebLogic cluster and get servers to join it.
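    A quick way to observe the round-robin distribution without juggling two browsers is a cookie-less HTTP client. The sketch below is illustrative: it assumes the OHS listener on localhost port 80 and the Test.jsp/TestApp deployment described above. java.net.http.HttpClient sends no cookies unless a CookieHandler is configured, so every request looks like a brand-new client and should rotate across the cluster members.

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class ClusterRoundRobinCheck {
            public static void main(String[] args) throws Exception {
                // No CookieHandler is installed, so no session cookie is replayed and
                // the plug-in treats every request as coming from a new client.
                HttpClient client = HttpClient.newHttpClient();
                HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://localhost/wls/TestApp/Test.jsp")) // URL from the example above
                        .GET()
                        .build();

                for (int i = 0; i < 6; i++) {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    // Test.jsp echoes "Receiving Server http://host:port/...", which
                    // reveals the managed server that handled this request.
                    System.out.println("request " + (i + 1) + ": " + response.body().trim());
                }
            }
        }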

    Read the article

  • Oracle Enterprise Manager Ops Center : Using Operational Profiles to Install Packages and other Content

    - by LeonShaner
    Oracle Enterprise Manager Ops Center provides numerous ways to deploy content, such as through OS Update Profiles, as part of an OS Provisioning plan, or via combinations of those and other "Install Software" capabilities of Deployment Plans. This short "how-to" blog will highlight an alternative way to deploy content using Operational Profiles.

    Usually we think of Operational Profiles as a way to execute a simple "one-time" script to perform a basic system administration function, which can optionally be based on user input; however, Operational Profiles can be much more powerful than that. There is often more to performing an action than merely running a script -- sometimes configuration files, packages, binaries, and other scripts are needed to perform the action, and sometimes the user would like to leave such content on the system for later use.

    For shell scripts and other content written to be generic enough to work on any flavor of UNIX, converting the same scripts and configuration files into a Solaris 10 SVR4 package, a Solaris 11 IPS package, and/or a Linux RPM might be seen as three times the work, for little appreciable gain. That is where using an Operational Profile to deploy simple scripts and other generic content can be very helpful. The approach is so powerful that pretty much any kind of content can be deployed using an Operational Profile, provided the files involved are not overly large and it is not necessary to convert the content into UNIX variant-specific formats.

    The basic formula for deploying content with an Operational Profile is as follows:

    - Begin with a traditional script header: a UNIX shell script that will be responsible for decoding and extracting content, copying files into the right places, and executing any other scripts and commands needed to install and configure that content.
    - Include steps to make the script platform-aware, to do the right thing for a given UNIX variant, or print a "sorry" message if the operator has somehow tried to run the Operational Profile on a system the script is not designed for. Ops Center can constrain execution by target type, so such checks at this level are an added safeguard, but they are also useful with the generic target type of "Operating System", where the admin wants the script to "do the right thing," whatever the UNIX variant.
    - Include helpful output to show script progress, and any other informational messages that can help the admin determine what has gone wrong in case of a problem in script execution. Such messages will be shown in the job execution log.
    - Include the necessary "clean up" steps for normal and error exit conditions.
    - Set non-zero exit codes when appropriate -- a non-zero exit code will cause an Operational Profile job to be marked failed, which is the admin's cue to look into the job details for diagnostic messages in the output from the script.

    That first bullet deserves some explanation. If Operational Profiles are usually simple "one-time" scripts and binary content is not allowed, then how do the actual content, packages, binaries, and other scripts get delivered along with the script? More specifically, how does one include such content without needing to first create some kind of traditional package? All that is required is to simply encode the content and append it to the end of the Operational Profile. The header portion of the Operational Profile contains the commands to decode the embedded content that has been appended to the bottom of the script. The header code can do whatever else is needed, and finally clean up any intermediate files that were created during the decoding and extraction of the content.

    One way to encode binary and other content for inclusion in a script is to use the "uuencode" utility to convert the content into simple base64 ASCII text -- a form that is suitable to be appended to an Operational Profile. The behavior of the "uudecode" utility is such that it will skip over any parts of the input that do not fit the uuencoded "begin" and "end" clauses. For that reason, your header script will be skipped over, and uudecode will find your embedded content, which you uuencode and paste at the end of the Operational Profile. You can have as many "begin" / "end" clauses as you need -- just separate each embedded file by an empty line between "begin" and "end" clauses.

    Example: Install SUNWsneep and set the system serial number

    Script: deploySUNWsneep.sh (<- right-click / save to download)

    Highlights:

    #!/bin/sh
    # Required variables:
    OC_SERIAL="$OC_SERIAL" # The user-supplied serial number for the asset
    ...

    The above is a good practice, showing right up front what kind of input the Operational Profile will require. The right-hand side, where $OC_SERIAL appears in this example, will be filled in by Ops Center based on the user input at deployment time.

    The script goes on to restrict the use of the program to the intended OS type (Solaris 10 or older in this example, but other content might be suitable for Solaris 11 or Linux -- it depends on the content and the script that will handle it). A temporary working directory is created, and then we have the command that decodes the embedded content from "self", which in scripting terms is $0 (a variable that expands to the name of the currently executing script):

    # Pass myself through uudecode, which will extract content to the current dir
    uudecode $0

    At that point, whatever content was appended in uuencoded form at the end of the script has been written out to the current directory. In this example that yields a file, SUNWsneep.7.0.zip, which the rest of the script proceeds to unzip and pkgadd, followed by running "/opt/SUNWsneep/bin/sneep -s $OC_SERIAL", which is the command that stores the system serial for future use by other programs such as Explorer. Don't get hung up on the example having used a pkgadd command. The content started as a zip file, but it could have been a tar.gz or any other file. This approach simply decodes the file; the header portion of the script has to make sense of the file and do the right thing (i.e., it's up to you).

    The script goes on to clean up after itself, whether or not the above was successful. Errors are echoed by the script, and a non-zero exit code is set where appropriate. Second to last, we have:

    # just in case, exit explicitly, so that uuencoded content will not cause error
    OPCleanUP
    exit

    # The rest of the script is ignored, except by uudecode
    #
    # UUencoded content follows
    #
    # e.g. for each file needed,
    #  $ uuencode -m {source} {source} > {target}.uu5
    # then paste the {target}.uu5 files below
    # they will be extracted into the working dir at $TDIR

    The commentary above also describes how to encode the content. Finally we have the uuencoded content:

    begin-base64 444 SUNWsneep.7.0.zip
    UEsDBBQAAAAIAPsRy0Di3vnukAAAAMcAAAAKABUAcmVhZG1lLnR4dFVUCQADOqnVT7up
    ...
    VXgAAFBLBQYAAAAAAgACAJEAAADTNwEAAAA=
    ====

    That last line of "====" is the base64 uuencode equivalent of a blank line, followed by "end", and as mentioned you can have as many begin/end clauses as you need. Just separate each embedded file by a blank line after each ==== and before each begin-base64.

    Deploying the example Operational Profile looks like this (with the system serial number pasted into the required field). The job succeeded, and the job details show the kind of diagnostic messages that the example script produces and how Ops Center displays them. This same general approach could be used to deploy Explorer and other useful utilities and scripts.

    Please let us know what you think. Until next time...

    Leon

    --
    Leon Shaner | Senior IT/Product Architect
    Systems Management | Ops Center Engineering @ Oracle

    The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Extend Your Applications Your Way: Oracle OpenWorld Live Poll Results

    - by Applications User Experience
    Lydia Naylor, Oracle Applications User Experience Manager

    At OpenWorld 2012, I attended one of our team's very exciting sessions: "Extend Your Applications, Your Way". It was clear that customers were engaged by the topics presented. Not only did we see many heads enthusiastically nodding in agreement during the presentation, and witness a large crowd surround our speakers Killian Evers, Kristin Desmond, and Greg Nerpouni afterwards, but we can prove it…with data!

    Figure 1. Killian Evers, Kristin Desmond, and Greg Nerpouni of Oracle at the OOW 2012 session.

    At the beginning of our OOW 2012 journey, Greg Nerpouni, Fusion HCM Principal Product Manager, told me he really wanted to get feedback from the audience on our extensibility direction. Initially, we were thinking of doing a group activity at the OOW UX labs events that we hold every year, but Greg was adamant: he wanted "real-time" feedback. So, after a little tinkering, we came up with a way to use an online survey tool, a simple QR code (Quick Response code: a matrix barcode that can include information like URLs and can be read by mobile device cameras), and the audience's mobile devices to do just that.

    Figure 2. Actual QR code for the survey

    Prior to the session, we developed a short survey in Vovici (an online survey tool), with questions to gather feedback on certain points in the presentation, as well as demographic data from our participants. We used Vovici's feature to generate a mobile HTML version of the survey. At the session, attendees accessed the survey by simply scanning a QR code or typing in a TinyURL (a shorthand web address that is easily accessible through mobile devices). Killian, Kristin, and Greg paused at certain points during the session and asked participants to answer a few survey questions about what they had just presented.

    Figure 3. Session survey deployed on a mobile phone

    The nice thing about Vovici's survey tool is that you can see the data in real time as participants respond to questions - so we knew during the session that our direction was not only on track but hitting the mark, fulfilling Greg's request. We planned on showing the live polling results to the audience at the end of the presentation, but it ran just a little over time and we were gently nudged out of the room by the session attendants. We've included a quick summary below, and this link to the full results, for your enjoyment.

    Figure 4. Most important extensions to Fusion Applications

    So what did participants think of our direction for extensibility? A total of 94% agreed that it was an improvement upon their current process. The vast majority, 80%, concurred that the extensibility model accounts for the major roles involved: end user, business systems analyst, and programmer. Attendees suggested a few supporting roles such as systems administrator, data architect, and integrator. Customers and partners in the audience verified that Oracle's Fusion Composers allow them to make changes in the most common areas they need to: user interface, business processes, reporting, and analytics. Integrations were also suggested. All top 10 things customers can do on a page rated highly in importance, with all but two getting an average rating above 4.4 on a 5-point scale. The kinds of layout changes our composers allow customers to make align well with customers' needs. The most common were adding columns to a table (94%) and resizing regions and drag-and-drop content (both selected by 88% of participants).
We want to thank the attendees of the session for allowing us another great opportunity to gather valuable feedback from our customers! If you didn’t have a chance to attend the session, we will provide a link to the OOW presentation when it becomes available.
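    For anyone who wants to replicate the setup, generating such a QR code takes only a few lines with an open-source library like ZXing. This sketch is an assumption-laden illustration, not what the team used: it presumes the ZXing core and javase artifacts are on the classpath, and the survey URL is made up.

        import com.google.zxing.BarcodeFormat;
        import com.google.zxing.client.j2se.MatrixToImageWriter;
        import com.google.zxing.common.BitMatrix;
        import com.google.zxing.qrcode.QRCodeWriter;

        import java.nio.file.Path;

        public class SurveyQrCode {
            public static void main(String[] args) throws Exception {
                String surveyUrl = "https://example.com/s/oow-extensibility"; // illustrative URL
                // Encode the URL as a 300x300 QR code and write it out as a PNG.
                BitMatrix matrix = new QRCodeWriter()
                        .encode(surveyUrl, BarcodeFormat.QR_CODE, 300, 300);
                MatrixToImageWriter.writeToPath(matrix, "PNG", Path.of("survey-qr.png"));
                System.out.println("QR code written to survey-qr.png");
            }
        }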

    Read the article

  • Editor's Notebook - Social Aura: Insights from the Oracle Social Media Summit

    - by user462779
    Panelists talk social marketing at the Oracle Social Media Summit

    On November 14, I traveled to Las Vegas for the first-ever Oracle Social Media Summit. The two-day event featured an impressive collection of social media luminaries including: David Kirkpatrick (founder and CEO of Techonomy Media and author of The Facebook Effect), John Yi (Head of Marketing Partnerships, Facebook), Matt Dickman (EVP of Social Business Innovation, Weber Shandwick), and Lyndsay Iorio (Social Media & Communications Manager, NBC Sports Group) among others. It was also a great opportunity to talk shop with some of our new Vitrue and Involver colleagues, who have been returning great social media results even before their companies were acquired by Oracle.

    I was live tweeting the event from @OracleProfit, which was great for those who wanted to follow along with the proceedings from the comfort of their office or blackjack table. But I've also found over the years that live tweeting an event is a handy way to take notes: I can sift back through my record of what people said or thoughts I had at the time and organize the Twitter messages into some kind of summary account of the proceedings. I've had nearly a month to reflect on the presentations and conversations at the event, and a few key topics have emerged:

    David Kirkpatrick's comment during the opening presentation really set the stage for the conversations that followed. Especially if you are a marketer or publisher, the idea that you are in a one-way broadcast relationship with your audience is a thing of the past. "Rising above the noise" does not mean reaching for a megaphone, ALL CAPS, or exclamation marks. Hype will not motivate social media denizens to do anything but unfollow and tune you out. But knowing your audience, creating quality content and/or offers for them, treating them with respect, and making an authentic effort to please them: that's what I believe is now necessary. And Kirkpatrick's comment early in the day really made the point.

    Later in the day, our friends @Vitrue demonstrated this point by elaborating on a comment by Facebook's John Yi. If a social strategy is comprised of nothing more than cutting and pasting the same message into different social media properties, you're missing the opportunity to have an actual conversation. That's not shouting at your audience, but it does feel like an empty gesture.

    Walter Benjamin, perplexed by auraless Twitter messages

    Not to get too far afield, but 20th-century cultural critic Walter Benjamin has a concept that is useful for understanding the dynamics of the empty social media gesture: aura. In his work The Work of Art in the Age of Mechanical Reproduction, Benjamin struggled to understand the difference he perceived between the value of a hand-made art object (a painting, wood cutting, sculpture, etc.) and a photograph. For Benjamin, aura is similar to the "soul" of an artwork -- the intangible essence that is created when an artist picks up a tool and puts creative energy and effort into a work. I'll defer to Wikipedia: "He argues that the 'sphere of authenticity is outside the technical' so that the original artwork is independent of the copy, yet through the act of reproduction something is taken from the original by changing its context. He also introduces the idea of the 'aura' of a work and its absence in a reproduction." So make sure you put aura into your social interactions. Don't just mechanically reproduce them.
    Keeping aura in your interactions requires the intervention of an actual human being. That's why @NoahHorton's comment about content curation struck me as incredibly important. Maybe it's just my own prejudice, being in the content curation business myself. And it's not to totally discount machine-aided content management systems, content recommendation engines, and other tech-driven tools for building an exceptional content experience. It's just that without that human interaction -- that editor who reviews the analytics and responds to user feedback -- interactions over social media feel a bit empty. It is SOCIAL media, right? (We'll leave the conversation about social machines for another day.)

    At the end of the day, experimentation is key. Just like trying to find the right joke to tell at the beginning of your presentation or a good opening line at a cocktail party, social media messages and interactions can take some trial and error. Don't be afraid to try things, tinker with incomplete ideas, abandon things that don't work, and engage in the conversation. And make sure your heart is in it, otherwise your audience can tell. And finally:

    Read the article

  • ODI 11g – Oracle Multi Table Insert

    - by David Allan
    With the IKM Oracle Multi Table Insert you can generate Oracle-specific DML for inserting into multiple target tables from a single query result -- without reprocessing the query or staging its result. When designing to exploit this IKM you must split the problem into reusable parts: the select part goes in one interface (I named it SELECT_PART), then each target goes in a separate interface (INSERT_SPECIAL and INSERT_REGULAR). So my statement below is assembled from three interfaces:

    /* INSERT_SPECIAL interface */
    insert all
      when 1=1 And (INCOME_LEVEL > 250000) then
        into SCOTT.CUSTOMERS_NEW
          (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
        values
          (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)

    /* INSERT_REGULAR interface */
      when 1=1 then
        into SCOTT.CUSTOMERS_SPECIAL
          (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
        values
          (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)

    /* SELECT_PART interface */
    select
        CUSTOMERS.EMAIL EMAIL,
        CUSTOMERS.CREDIT_LIMIT CREDIT_LIMIT,
        UPPER(CUSTOMERS.NAME) NAME,
        CUSTOMERS.USER_MODIFIED USER_MODIFIED,
        CUSTOMERS.DATE_MODIFIED DATE_MODIFIED,
        CUSTOMERS.BIRTH_DATE BIRTH_DATE,
        CUSTOMERS.MARITAL_STATUS MARITAL_STATUS,
        CUSTOMERS.ID ID,
        CUSTOMERS.USER_CREATED USER_CREATED,
        CUSTOMERS.GENDER GENDER,
        CUSTOMERS.DATE_CREATED DATE_CREATED,
        CUSTOMERS.INCOME_LEVEL INCOME_LEVEL
    from SCOTT.CUSTOMERS CUSTOMERS
    where (1=1)

    Firstly I create a SELECT_PART temporary interface for the query to be reused, and in the IKM assignment I state that it defines the query, that it is not a target, and that it should not be executed.

    Then in my INSERT_SPECIAL interface, loading a target with a filter, I set define query to false, target table to true, and execute to false. This interface uses the SELECT_PART query-definition interface as a source.

    Finally, in my last interface, loading another target, I set define query to false again, target table to true, and execute to true -- this is the "go run it" indicator! To coordinate the statement construction you will need to create a package with the select and insert statements. With 11g you can now execute the package in simulation mode and preview the generated code, including the SQL statements. Hopefully this helps shed some light on how you can leverage the Oracle MTI statement.

    A similar IKM exists for Teradata. The ODI IKM Teradata Multi Statement supports this multi-statement request in 11g; here is an extract from the paper at www.teradata.com/white-papers/born-to-be-parallel-eb3053/:

    "Teradata Database offers an SQL extension called a Multi-Statement Request that allows several distinct SQL statements to be bundled together and sent to the optimizer as if they were one. Teradata Database will attempt to execute these SQL statements in parallel. When this feature is used, any sub-expressions that the different SQL statements have in common will be executed once, and the results shared among them."

    It works in the same way as the ODI MTI IKM: multiple interfaces orchestrated in a package, each interface contributing some SQL, with the last interface in the chain executing the multi statement.
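    To make the generated DML shape concrete, here is a hedged JDBC sketch that executes an Oracle multi-table INSERT ALL of the same form as the statement above, trimmed to two columns. The connection details are hypothetical. Note that because the WHEN clauses of INSERT ALL are evaluated independently, a high-income row satisfies both clauses and is inserted into both targets, just as in the statement above.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class MultiTableInsertDemo {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection details -- adjust for your environment.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                     Statement stmt = conn.createStatement()) {

                    // One pass over the source query feeds both targets; each WHEN
                    // clause routes rows independently, as in the IKM-generated DML.
                    int rows = stmt.executeUpdate(
                        "insert all" +
                        "  when 1=1 and (income_level > 250000) then" +
                        "    into customers_new (id, name) values (id, name)" +
                        "  when 1=1 then" +
                        "    into customers_special (id, name) values (id, name)" +
                        "  select id, upper(name) name, income_level from customers");
                    System.out.println(rows + " rows inserted across both targets");
                }
            }
        }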

    Read the article

  • Oracle Applications Global User Experience

    - by ultan o'broin
    Today, we're launching Oracle's first-ever blog for global user experience (UX) issues in applications. We'll be talking about how we design and develop applications for global use, looking at the cultural factors, internationalization (I18n), localization (L10n), and language used, for a start. We will also discuss how we study and work with real users so that our customers have applications that allow them to be productive regardless of where they are located in the world. In addition, we will tell you about any globally-related events we know of, and about product features, development frameworks, tools, and information relevant to our worldwide customers. And, of course, we hope to hear from you, too. If you have anything you want to know about our global user experience, a localization you'd like, or a cultural feature you think would be useful, then let us know. If you have any tips or guidelines you'd like to share in this space, then this blog is for you too! As far as global user experience is concerned, you don't have to be lost in translation. Hence the name of the blog!

    Read the article

  • Oracle MDM Maturity Model

    - by David Butler
    A few weeks ago, I discussed the results of a survey conducted by Oracle's Insight team. The survey was based on the data management maturity model that the Oracle Insight team has developed over the years as they analyzed customer IT organizations to help them get more out of everything they already have. I thought you might like to learn more about the maturity model itself. It can help you figure out where you stand when it comes to getting your organization's data management act together.

    The model covers maturity levels around five key areas: profiling data sources, defining a data strategy, defining a data consolidation plan, data maintenance, and data utilization.

    Profile data sources: Profiling data sources involves taking an inventory of all data sources across your IT landscape, then evaluating the quality of the data in each source system. This enables the scoping of what data to collect into an MDM hub and what rules are needed to ensure data harmonization across systems.

    Define data strategy: A data strategy requires an understanding of the data usage. Given that usage, various data governance requirements need to be developed, including data controls and security rules as well as data structure and usage policies.

    Define data consolidation strategy: Consolidation requires defining your operational data model, deciding how integration is to be accomplished, cross-referencing common data attributes from multiple systems, and developing synchronization policies.

    Data maintenance: The desired standardization needs to be defined, including what constitutes a 'match' once the data has been standardized. Cleansing rules are a part of this methodology. Data quality monitoring requirements also need to be defined.

    Utilize the data: What data gets published, and who consumes it, must be determined. How to get the right data to the right place in the right format, given its intended use, must be understood. Validating the data and ensuring security rules are in place and enforced are crucial aspects for full, no-risk data utilization.

    For each of the above data management areas, a maturity level needs to be assessed, and the level your organization wants to reach should be identified on the same scale. This results in a sound gap analysis your organization can use to create action plans to achieve the ultimate goals.

    Marginal is the lowest level. It is characterized by manually maintained trusted sources; lacking or inconsistent, silo'd structures with limited integration; and gaps in automation.

    Stable is the next leg up the MDM maturity staircase. It is characterized by tactical MDM implementations that are limited in scope and target a specific division. It includes limited data stewardship capabilities as well.

    Best Practice is a serious MDM maturity level, characterized by process automation improvements. The scope is enterprise-wide. It is a business solution that provides a single version of the truth, with closed-loop data quality capabilities. It is typically driven by an enterprise architecture group with both business and IT representation.

    Transformational is the highest MDM maturity level. At this level, MDM is quantitatively managed. It is integrated with Business Intelligence, SOA, and BPM, and MDM is leveraged in business process orchestration.
Take an inventory using this MDM Maturity Model and see where you are in your journey to full MDM maturity with all the business benefits that accrue to organizations who have mastered their data for the benefit of all operational applications, business processes, and analytical systems. To learn more, Trevor Naidoo and I have written the Oracle MDM Maturity Model whitepaper. It’s free, so go ahead and download it and use it as you see fit.
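    As a rough illustration of that gap-analysis step, here is a minimal sketch in Python of how an assessment might be scored. The five areas and four levels come from the post above, but the sample scores, the target state, and the idea of scoring levels numerically are illustrative assumptions; the whitepaper prescribes no particular tooling.

        # Hypothetical scoring sketch; sample scores and the assumed target
        # state are invented for illustration.
        LEVELS = ["Marginal", "Stable", "Best Practice", "Transformational"]

        def gap_analysis(current, target):
            """Return, per area, how many maturity levels separate current from target."""
            return {area: LEVELS.index(target[area]) - LEVELS.index(level)
                    for area, level in current.items()}

        current = {
            "Profile data sources": "Marginal",
            "Define data strategy": "Stable",
            "Define data consolidation plan": "Marginal",
            "Data maintenance": "Stable",
            "Data utilization": "Marginal",
        }
        target = {area: "Best Practice" for area in current}  # assumed goal state

        for area, gap in gap_analysis(current, target).items():
            print(f"{area}: {gap} level(s) to close")

    The output of a sketch like this is exactly the input an action plan needs: the areas with the largest gaps are the ones to prioritize.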

    Read the article

  • 13 MORE Things from the Oracle Social Summit You Should Know

    - by Mike Stiles
    In our previous blog, we started giving those of you who couldn’t make it just a sampling of the valuable takeaways from the first annual Oracle Social Summit, held Nov 14 and 15 in Las Vegas. And while yes, 13 items is a pretty healthy sampling, we wanted to go the extra mile and give you 13 more, an indication of just how much great information came out of it. Follow the arrow, and come on in as if you were there with us.
    1. Weber Shandwick takes a 70/20/10 approach when advising clients how to allocate resources to paid social opportunities: 70% of spend should go toward paid opportunities the agency and client both know work, 20% toward paid social the agency knows works, and 10% toward experimentation. (Matt Dickman – Weber Shandwick)
    2. By 2017, the technically competent CMO will spend more on IT than the CIO. (Gartner Study)
    3. CIOs are focused on infrastructure. As the roles of the CMO and CIO continue coming together, those CIOs have to make a very conscious decision to get CMOs what they need.
    4. It’s now harder for brands to differentiate based on product. The advantage will go to the brands that are successful in garnering customer trust.
    5. More and more, enterprise software is going to start looking like the software consumers are used to seeing and using.
    6. You will see brands prioritizing mobile and dropping investments in www, HTML, POS systems, etc.
    7. The social graph has to be added to brands’ customer data for a more holistic view. Customers will give you the information you need if the reward is appropriate.
    8. Viacom did a study that showed viewers are most honest on social. Not so much on surveys or other feedback vehicles.
    9. How are you determining your influencers? Influence isn’t about reach. It’s about getting people to change behavior.
    10. A mix of skills is becoming critically important on a social staff. It should be a mixture of several disciplines, not just a bunch of “social experts.”
    11. If senior management isn’t engaged, the social team is forced into guessing what might be considered a “success” by the C-suite.
    12. Mobile customization will be getting big investments from brands in 2013. Brands need to provide shoppers utility, not just information; 75% will use mobile this holiday season to avoid in-store madness.
    13. Data becomes information, information becomes insight, and insight becomes actionable.
    The Oracle Social Summit brought together brands, agencies, Oracle social experts and industry thought leaders to take a serious look at where social stands today, and where it’s headed in the near future. Given the speed of social’s evolution, attending such events (or at least reading nifty summary blogs) is a good investment in making sure your enterprise isn’t falling gradually behind.

    Read the article

  • Oracle Announces Release of PeopleSoft HCM 9.1 Feature Pack 2

    - by Jay Zuckert
    Big things sometimes come in small packages. Today Oracle announced the availability of PeopleSoft HCM 9.1 Feature Pack 2, which delivers a new HR self-service user experience that fundamentally changes the way managers and employees interact with the HCM system. Earlier this year we reviewed a number of new concept designs with our Customer Advisory Boards. With the accelerated feature pack development cycle we have adopted, these innovations are now available to all 9.1 customers without the need for an upgrade, and there are no new products that need to be licensed for the capabilities below. For more details on Feature Pack 2, please see the Oracle press release.
    Included in Feature Pack 2 is a new search-based, menu-free navigation that allows managers to search for employees by name and take actions directly from the secure search results. For example, a manager can now simply type in part of an employee’s first or last name and receive meaningful results from documents related to performance, compensation, learning, recruiting, career planning and more. Delivered actions can be initiated directly from these search results, and the actions are securely tied to HCM security and user role.
    The feature pack also includes new pages that will enable managers to be more productive by aggregating key employee data into a single page. The new Manager Dashboard and Talent Summary provide a consolidated view of data related to a manager’s team and individual team members, respectively. The Manager Dashboard displays information relevant to their direct reports, including team learning, objective alignment, alerts, and pending approvals requiring their attention. The Talent Summary provides managers with an aggregated view of talent management-related data for an individual employee, including performance history, salary history, succession options, total rewards, and competencies. The information displayed in both the Manager Dashboard and Talent Summary is configurable by system administrators and can be personalized by each of your managers.
    Other Feature Pack 2 enhancements allow organizations to administer Matrix or Dotted-Line Relationship Management, which addresses the challenge of tracking and maintaining project-based organizations that cut across the enterprise and geographic regions. From within the Company Directory and Org Viewer organization charts, managers now have access to manager self-service transactions from related actions. More than 70 manager and employee self-service transactions have been tied into the related action framework accessible from Org Viewer, Manager Dashboard, Talent Summary and Secure Enterprise Search (SES) results. In addition to making it easier to access manager self-service transactions, the feature pack delivers streamlined transaction pages, making everyday tasks such as promoting an employee faster and more efficient.
    With the delivery of PeopleSoft HCM 9.1 Feature Pack 2, Oracle continues to deliver on its commitment to our PeopleSoft customers. With this feature pack, HCM 9.1 customers will be able to deploy the newest functionality quickly, without a major release upgrade, and realize added value from their existing PeopleSoft investment. For customers newly deploying 9.1, a new download with all of Feature Pack 2 will be available early next year. This will also include recertified upgrade paths from 8.8, 8.9 and 9.0 for customers in the upgrade process.

    Read the article

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I've spent the last 15+ years working on database replication products, and the last 10 years working on the Oracle GoldenGate product, so most of my posting will probably be focused on OGG.
    One question that comes up all the time is around active-active replication with Oracle GoldenGate: how do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. I will delve into topology and deployment in a later blog; the simple architecture to picture is two systems, SystemA and SystemB, each capturing its changes and applying them to the other.
    The two most common resolution routines are host-based resolution and timestamp-based resolution.
    Host-based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, then the record from SystemA will overwrite it. If there is a conflict on SystemA, then it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host-based resolution will work just fine.
    Timestamp-based resolution, on the other hand, is a little trickier. In this case, you decide which record is overwritten based on timestamps: for example, does the older record get overwritten with the newer record, or vice versa? This method not only requires primary keys on every table, but it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated. Most homegrown applications can be customized to include these requirements, but it's a little more difficult with 3rd party applications, and might even be impossible for large ERP-type applications.
    If your database has these features - whether it’s primary keys for host-based resolution, or primary keys and timestamp columns for timestamp-based resolution - then your application could be a great candidate for active-active replication. But table structure is not the only requirement. The other consideration applies when there is a conflict; i.e., do I need to perform any notification or track down the user whose data was overwritten? In most cases, I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, then it can be done. Unless someone is constantly monitoring this exceptions table or has an automated process for dealing with exceptions, there will be a delay in getting a response back to the end user.
    Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that can reduce, or in some cases eliminate, the potential for conflicts. This makes the whole implementation that much easier and more foolproof. And I'll cover these in my next blog.
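    To make the two routines concrete, here is a minimal sketch in Python of the decision each one makes for a conflicting update. This is illustrative logic only, not Oracle GoldenGate configuration syntax; the row shape, column names, and the choice of SystemA as the trusted host are assumptions for the example.

        from datetime import datetime, timezone

        # Illustrative decision logic only -- not GoldenGate configuration.
        TRUSTED_HOST = "SystemA"  # assumption: SystemA always wins

        def resolve_host_based(local_row, incoming_row):
            """Host-based: changes from the trusted system always take precedence."""
            if incoming_row["source"] == TRUSTED_HOST:
                return incoming_row   # SystemA's change overwrites the conflict
            return local_row          # a conflicting change from SystemB is ignored

        def resolve_timestamp_based(local_row, incoming_row):
            """Timestamp-based: the row with the newer update timestamp wins."""
            if incoming_row["updated_at"] > local_row["updated_at"]:
                return incoming_row
            return local_row

        local = {"id": 42, "source": "SystemB", "qty": 10,
                 "updated_at": datetime(2012, 11, 1, 9, 0, tzinfo=timezone.utc)}
        incoming = {"id": 42, "source": "SystemA", "qty": 25,
                    "updated_at": datetime(2012, 11, 1, 8, 0, tzinfo=timezone.utc)}

        print(resolve_host_based(local, incoming))       # SystemA wins despite being older
        print(resolve_timestamp_based(local, incoming))  # SystemB's newer change is kept

    Note how the same conflict resolves differently under each routine; that difference is exactly why the table structure requirements above (primary keys, and a reliably maintained timestamp column) matter.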

    Read the article

  • PaaS, DBaaS and the Oracle Database Cloud Service

    - by yaldahhakim
    As with many widely hyped areas, there is much more variation within the broad spectrum of products referred to as “Cloud” than is immediately apparent. This variation is evident in one of the key misunderstandings about the Oracle Database Cloud Service. People could be forgiven for thinking that the Database Cloud Service is a Database-as-a-Service (DBaaS), but this is actually not true. The Database Cloud Service is a Platform-as-a-Service (PaaS), which presents a different user and developer interface and has a different set of qualities.
    A good way to think about the difference between these two varieties of Cloud offerings is that you, the customer, have to deal with things at the level of the offering, but not for anything below it. In practice, this means that you do not have to deal with hardware or system software, including installation and maintenance, for DBaaS; you also do not have much control over the configuration of these layers. For PaaS, you don’t have to deal with hardware, system software, or database software, and likewise do not have control over these levels in the stack.
    So you cannot modify configuration parameters for the database with the Database Cloud Service. Your interface is through SQL and PL/SQL, with Application Express (included in the Database Cloud Service), through JDBC for Java apps running in the Java Cloud Service, or through RESTful Web Services. You will notice what is not mentioned there: SQL*Net. You cannot access your Oracle Database Cloud Service by changing an entry in the TNSNames file and using SQL*Net, so the effort involved in migrating an existing Oracle Database in your data center to the Database Cloud Service may be prohibitive. The good news is that Application Express and the RESTful Web Services wizard in the Database Cloud Service allow you to develop new applications very quickly, and, of course, the provisioning of the entire Database Cloud Service takes only minutes.
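    As a sketch of what that access model looks like from a client, the Python snippet below fetches rows from a hypothetical RESTful endpoint. The host name, path, and response shape are invented for illustration and are not documented Database Cloud Service endpoints; the point is simply that data access goes over HTTP(S), not SQL*Net.

        import json
        import urllib.request

        # Hypothetical sketch: host, path, and response shape are assumptions.
        SERVICE_URL = "https://example-service.db.cloud.oracle.com"  # assumed host
        ENDPOINT = "/apex/hr/employees/"                             # assumed RESTful module

        request = urllib.request.Request(
            SERVICE_URL + ENDPOINT,
            headers={"Accept": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            payload = json.load(response)

        # Assumed response shape: a JSON object with an "items" array of rows.
        for row in payload.get("items", []):
            print(row)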

    Read the article

  • Oracle OpenWorld Recap - A Walk in the Clouds (and heat in San Francisco)!

    - by Di Seghposs
    Whether you were one of the 50,000 attendees in San Francisco or one of the million+ online attendees – we’d like to thank you for joining us at Oracle OpenWorld last week! With temperatures in the 80s and 90s, attendees traveled the overheated streets to join packed keynotes and general sessions – all to find the information they came in search of – Oracle solutions to address their business requirements and challenges.
    The buzz of this year’s OpenWorld was all about ‘The Cloud’. And the financial management team joined in the cloud buzz with Thomas Kurian’s keynote, which highlighted our ERP Cloud Service as the most complete cloud service on the market. Offering the full breadth of business operations, including Financial Management, Risk and Control Management, Project Portfolio Management, Procurement, Sourcing, and Inventory Management, Oracle ERP Cloud Service transforms the back office into a collaborative, efficient, and intuitive hub. And our product marketing expert on Financial Management, Annette Melatti, provided a glimpse of what the office of finance looks like in the 21st century and shared what’s next for Oracle’s financial solutions, discussing the future of Financial Management with Fusion Financials, E-Business Suite, PeopleSoft and the JD Edwards solutions.
    There were over 120 sessions from customers, partners, and Oracle experts that addressed financial management solutions, along with demo pods and Meet the Experts sessions. We hope you found what you were looking for! Missed any of the keynotes or general sessions? Watch them on demand here.
    At OpenWorld, we also announced that Lending Club, the leading platform for investing in and obtaining personal loans, has selected Oracle ERP Cloud Service to help improve decision-making, implement robust reporting, and take advantage of the cost savings provided by the cloud. The CFO of Lending Club, Carrie Dolan, had mentioned that they “are an innovative, data-intensive, high-growth company and needed a solution and partner that could match us. We conducted a thorough review of our options, and Oracle ERP Cloud Service was the clear winner in terms of capabilities and business value as well as commitment to us as a customer.” Read the entire release here.
    For now, it’s back to business as we gear up for the second half of our fiscal year and start planning for Oracle OpenWorld 2013!

    Read the article

  • The Deadline Cometh

    - by Oracle Staff
    Only Two More Days to Suggest a Session. We've received 2,906 votes for presentations since the launch of the Suggest a Session program. Have you voted? Have you suggested a presentation for Oracle OpenWorld 2010 or Oracle Develop 2010? If "yes," you've done well. If "no," you still have a chance to redeem yourself. Get all the information on the Suggest a Session process, timeline, and guidelines and submit your idea or vote. The last grain of sand runs out on June 20.

    Read the article

  • The Internet of Things Is Really the Internet of People

    - by HCM-Oracle
    By Mark Hurd - Originally Posted on LinkedIn
    As I speak with CEOs around the world, our conversations invariably come down to this central question: Can we change our corporate cultures and the ways we train and reward our people as rapidly as new technology is changing the work we do, the products we make and how we engage with customers? It’s a critical consideration given today’s pace of disruption, which already is straining traditional management models and HR strategies. Winning companies will bring innovation and vision to their employees and partners by attracting people who will thrive in this emerging world of relentless data, predictive analytics and unlimited what-if scenarios.
    So, where are we going to find employees who are as familiar with complex data as I am with orderly financial statements and business plans? I’m not just talking about high-end data scientists who most certainly will sit at or near the top of the new decision-making pyramid. Global organizations will need creative and motivated people who will devote their time to manipulating, reviewing, analyzing, sorting and reshaping data to drive business and delight customers. This might seem evident, but my conversations with business people across the globe indicate that only a small number of companies get it.
    In the past few years, executives have been busy keeping pace with seismic upheavals, including the rise of social customer engagement, the rapid acceleration of product-development cycles and the relentless move to mobile-first. But all of that, I think, is the start of an uphill climb to the top of a roller-coaster.
    Today, about 10 billion devices across the globe are connected to the Internet. In a couple of years, that number will probably double, and not because we will have bought 10 billion more computers, smart phones and tablets. This unprecedented explosion of Big Data is being triggered by the Internet of Things, which is another way of saying that the numerous intelligent devices touching our everyday lives are all becoming interconnected. Home appliances, food, industrial equipment, pets, pharmaceutical products, pallets, cars, luggage, packaged goods, athletic equipment, even clothing will be streaming data. Some data will provide important information about how to run our businesses and lead healthier lives. Much of it will be extraneous.
    How does a CEO cope with this unimaginable volume and velocity of data, much less harness it to excite and delight customers? Here are three things CEOs must do to tackle this challenge:
    1) Take care of your employees, take care of your customers. Larry Ellison recently noted that the two most important priorities for any CEO today revolve around people: taking care of your employees and taking care of your customers. Companies in today’s hypercompetitive business environment simply won’t be able to survive unless they’ve got world-class people at all levels of the organization. CEOs must demonstrate a commitment to employees by becoming champions for HR systems that empower every employee to fully understand his or her job, how it ties into the corporate framework, what’s expected of them, what training is available, and how they can use an embedded social network to communicate, collaborate and excel. Over the next several years, many of the world’s top industrialized economies will see a turnover in the workforce on an unprecedented scale.
    Across the United States, Europe, China and Japan, the “baby boomer” generation will be retiring and, by 2020, we’ll see turnovers in those regions ranging from 10 to 30 percent. How will companies replace all that brainpower, experience and know-how? How will CEOs perpetuate the best elements of their corporate cultures in the midst of this profound turnover? The challenge will be daunting, but it can be met with world-class HR technology. As companies begin replacing up to 30 percent of their workforce, they will need thousands of new types of data-native workers to exploit the Internet of Things in the service of the Internet of People. The shift in corporate mindset here can’t be overstated. The CEO has to be at the forefront of this new way of recruiting, training, motivating, aligning and developing truly 21st-century talent.
    2) Start thinking today about the Internet of People. Some forward-looking companies have begun pursuing the “democratization of data.” This allows more people within a company greater access to data that can help them make better decisions, move more quickly and keep pace with the changing interests and demands of their customers. As a result, we’ve seen organizations flatten out, growing numbers of well-informed people authorized to make decisions without corporate approval and a movement of engagement away from headquarters to the point of contact with the customer. These are profound changes, and I’m a huge proponent. As I think about what the next few years will bring as companies become deluged with unprecedented streams of data, I’m convinced that we’ll need dramatically different organizational structures, decision-making models, risk-management profiles and reward systems. For example, if a car company’s marketing department mines incoming data to determine that customers are shifting rapidly toward neon-green models, how many layers of approval, review, analysis and sign-off will be needed before the factory starts cranking out more neon-green cars? Will we continue to have organizations where too many people are empowered to say “No” and too few are allowed to say “Yes”? If so, how will those companies be able to compete in a world in which customers have more choices, instant access to more information and less loyalty than ever before? That’s why I think CEOs need to begin thinking about this problem right now, not in a year or two when competitors are already reshaping their organizations to match the marketplace’s new realities.
    3) Partner with universities to help create a new type of highly skilled worker. Several years ago, universities introduced new undergraduate as well as graduate-level programs in analytics and informatics as the business need for deeper insights into the booming world of data began to explode. Today, as the growth rate of data continues to soar, we know that the Internet of Things will only intensify that growth. Moreover, as Big Data fuels insights that can be shaped into products and services that generate revenue, the demand for data scientists and data specialists will go on unabated. Beyond that top-level expertise, companies are going to need data-native thinkers at all levels of the organization. Where will this new type of worker come from? I think it’s incumbent on the business community to collaborate with universities to develop new curricula designed to turn out graduates who can capitalize on the data-driven world that the Internet of Things is surely going to create.
These new workers will create opportunities to help their companies in fields as diverse as product design, customer service, marketing, manufacturing and distribution. They will become innovative leaders in fashioning an entirely new type of workforce and organizational structure optimized to fully exploit the Internet of Things so that it becomes a high-value enabler of the Internet of People. Mark Hurd is President of Oracle Corporation and a member of the company's Board of Directors. He joined Oracle in 2010, bringing more than 30 years of technology industry leadership, computer hardware expertise, and executive management experience to his role with the company. As President, Mr. Hurd oversees the corporate direction and strategy for Oracle's global field operations, including marketing, sales, consulting, alliances and channels, and support. He focuses on strategy, leadership, innovation, and customers.

    Read the article

  • In-House Generated Certificates Supported for Signing E-Business Suite JAR Files

    - by Elke Phelps (Oracle Development)
    The E-Business Suite uses Java Archive (JAR) files to deliver certain types of E-Business Suite content to desktop clients. Previously we announced the support of securing JAR files with 3072-bit certificates signed by a third-party Certificate Authority (CA). We now support securing JAR files with in-house generated certificates. The new steps to use an in-house Certificate Authority for securing JAR files are provided in: Enhanced Signing of Oracle E-Business Suite JAR Files (Note 1207184.1).
    This enhancement is great news for those of you familiar with the warning that is triggered when using a self-signed certificate. With in-house generated certificates now supported, that warning can be avoided.
    Oracle E-Business Suite Release 12 Certified Platforms:
    • Linux x86 (Oracle Linux 4, 5)
    • Linux x86 (RHEL 3, 4, 5)
    • Linux x86 (SLES 9, 10)
    • Linux x86-64 (Oracle Linux 4, 5)
    • Linux x86-64 (RHEL 4, 5)
    • Linux x86-64 (SLES 9, 10)
    • Oracle Solaris on SPARC (64-bit) (8, 9, 10)
    • IBM AIX on Power Systems (64-bit) (5.3, 6.1)
    • IBM Linux on System z (RHEL 5, SLES 9, SLES 10)
    • HP-UX Itanium (11.23, 11.31)
    • HP-UX PA-RISC (64-bit) (11.11, 11.23, 11.31)
    • Microsoft Windows Server (32-bit) (2003; 2008 for EBS 12.1 only)
    Oracle E-Business Suite Release 11i Certified Platforms:
    • Linux x86 (Oracle Enterprise Linux 4, 5)
    • Linux x86 (RHEL 3, 4, 5)
    • Linux x86 (SLES 8, 9, 10)
    • Linux x86 (Asianux 1.0)
    • Oracle Solaris on SPARC (64-bit) (8, 9, 10)
    • IBM AIX on Power Systems (64-bit) (5.3, 6.1)
    • HP-UX PA-RISC (64-bit) (11.11, 11.23, 11.31)
    • HP Tru64 (5.1b)
    • Microsoft Windows Server (32-bit) (2000, 2003)
    References: Enhanced Signing of Oracle E-Business Suite JAR Files (Note 1207184.1)
    Related Articles: Two New Options for Signing E-Business Suite JAR Files Now Available; What Are the Minimum Desktop Requirements for EBS?; Internet Explorer 9 Certified with Oracle E-Business Suite
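    For readers who have not signed a JAR before, the sketch below shows the generic JDK flow (keytool plus jarsigner) driven from Python. This is not the EBS-specific procedure from Note 1207184.1; the keystore name, alias, passwords, and JAR file are placeholders, and a true in-house CA flow would additionally issue a certificate against a CSR (keytool -certreq / -importcert) before signing.

        import subprocess

        # Generic JDK signing sketch -- NOT the EBS procedure in Note 1207184.1.
        KEYSTORE = "inhouse-demo.jks"   # placeholder keystore file
        ALIAS = "jar-signing-key"       # placeholder key alias
        PASSWORD = "changeit"           # placeholder password

        # One-time setup: create a 3072-bit RSA key pair in a local keystore.
        # (-genkeypair creates a self-signed certificate; an in-house CA would
        # replace it via keytool -certreq / -importcert.)
        subprocess.run([
            "keytool", "-genkeypair", "-alias", ALIAS, "-keyalg", "RSA",
            "-keysize", "3072", "-keystore", KEYSTORE,
            "-dname", "CN=In-House Signing Demo",
            "-storepass", PASSWORD, "-keypass", PASSWORD,
        ], check=True)

        # Sign the JAR with the key from the keystore.
        subprocess.run([
            "jarsigner", "-keystore", KEYSTORE, "-storepass", PASSWORD,
            "myapp.jar", ALIAS,   # placeholder JAR file
        ], check=True)

        # Verify the signature.
        subprocess.run(["jarsigner", "-verify", "myapp.jar"], check=True)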

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >