Search Results

Search found 288 results on 12 pages for 'accelerate'.


  • SQLite DB open time really long problem

    - by sxingfeng
    I am using SQLite in C++ on Windows, and I have a DB about 60 MB in size. When I open the SQLite DB, it takes about 13 seconds:

        sqlite3* mpDB;
        nRet = sqlite3_open16(szFile, &mpDB);

    If I close my application and reopen it, it then takes less than 1 second. At first I thought it was because of the disk cache, so I preload the 60 MB DB file before the SQLite open, reading the file using CFile. However, after preloading, the first open is still very slow.

        BOOL CQFilePro::PreLoad(const CString& strPath)
        {
            // read the whole file sequentially so it lands in the OS file cache
            boost::shared_array<BYTE> temp = boost::shared_array<BYTE>(new BYTE[PRE_LOAD_BUFFER_LENGTH]);
            int nReadLength;
            try
            {
                CFile file;
                if (file.Open(strPath, CFile::modeRead) == FALSE)
                {
                    return FALSE;
                }
                do
                {
                    nReadLength = file.Read(temp.get(), PRE_LOAD_BUFFER_LENGTH);
                } while (nReadLength == PRE_LOAD_BUFFER_LENGTH);
                file.Close();
            }
            catch (...)
            {
            }
            return TRUE;
        }

    My question is: what is the difference between the first open and the second open, and how can I accelerate the SQLite open process?
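
    One way to test the disk-cache theory from inside SQLite is to make the library itself touch every page of the file right after opening, so that everything afterwards runs against a warm cache. A minimal sketch (the cache_size value is illustrative):

        -- enlarge SQLite's own page cache (value is in pages; illustrative)
        PRAGMA cache_size = 20000;
        -- integrity_check reads every page of the database file,
        -- so it doubles as a one-shot warm-up
        PRAGMA integrity_check;

    If integrity_check right after a reboot takes roughly the missing 13 seconds, the time is going into page reads rather than into sqlite3_open16 itself.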

  • What are the drawbacks of this classing format?

    - by Keysle
    This is a 3-layer example of my classing format:

        function __(_){ return _.constructor; }

        // class
        var _ = ( CLASS = function(){
            this.variable = 0;
            this.sub = new CLASS.SUBCLASS();
        }).prototype;
        _.func = function(){
            alert('lvl' + this.variable);
            this.sub.func();
        };
        _.divePeak = function(){
            alert('lvl' + this.variable);
            this.sub.variable += 5;
        };

        // sub class
        _ = ( __(_).SUBCLASS = function(){
            this.variable = 1;
            this.sub = new CLASS.SUBCLASS.DEEPCLASS();
        }).prototype;
        _.func = function(){
            alert('lvl' + this.variable);
            this.sub.func();
        };

        // deep class
        _ = ( __(_).DEEPCLASS = function(){
            this.variable = 2;
        }).prototype;
        _.func = function(){
            alert('lvl' + this.variable);
        };

    Before you blow a gasket, let me explain myself. The purpose behind the underscores is to cut down the time needed to specify functions for a class and also to specify subclasses of a class. To me it's easier to read. I KNOW this does interfere with underscore.js if you intend to use it in your classes. I'm sure _.js can easily be switched over to another $ymbol, though... oh wait. But I digress. Why have classes within a class? Because solar.system() and social.system() mean two totally different things, but it's convenient to use the same name. Why use underscores to manage the definition of the class? Because "Solar.System.prototype" took me about 2 seconds to type out and 2 typos to correct. It also keeps all function names for all classes in the same column of text, which is nice for legibility. All I'm doing is presenting my reasoning behind this method and why I came up with it. I'm 3 days into learning OO JS and I am very willing to accept that I might have messed up.

  • PeopleSoft queries - performance

    - by DBa
    Hi, I'm facing a problem with PeopleSoft queries (using an Oracle backend database): when a rather complex query involving multiple records is set off by a user, PS does an enforced join of security records, producing SQL like this:

        select ....
          from ps_job a, PS_EMPL_SRCQRY a1,
               ps_table2 b, ps_sec_rcd2 b1,
               ps_table3 c, ps_sec_rcd3 c1
         where (...security joins a-a1, b-b1, c-c1...)
           and (...joins of a, b and c...)
           and a.setid_dept = 'XYZ';

    (Let's assume the last condition has high selectivity and there is an index on the column.) Obviously, due to the arrangement of the conditions, first a huge join is created and written to the temp segment, and when the last condition is finally applied, only a small subset is selected. A query formulated in this way is very likely to hit the preset timeout of the APPSRV, and even of the QRYSRV. If I were writing the query manually, I would move the most selective condition to the start, considerably limiting the amount of data being handled. Any ideas on how to make PS behave like this? Interestingly, merely rewriting the "Oracle-styled" SQL to ANSI SQL already seems to accelerate the queries – however, PS writes Oracle-style queries... Thanks in advance, DBa
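
    For comparison, here is the same statement sketched in ANSI join syntax with the selective predicate kept up front (the elided join conditions are left exactly as in the generated SQL above):

        SELECT ....
          FROM ps_job a
          JOIN PS_EMPL_SRCQRY a1 ON (...security join a-a1...)
          JOIN ps_table2 b       ON (...join of a and b...)
          JOIN ps_sec_rcd2 b1    ON (...security join b-b1...)
          JOIN ps_table3 c       ON (...join of b and c...)
          JOIN ps_sec_rcd3 c1    ON (...security join c-c1...)
         WHERE a.setid_dept = 'XYZ';

    This is presumably why the ANSI rewrite observed above runs faster: with each join condition attached to its join, the optimizer is more inclined to apply the selective ps_job filter first and carry only the small subset through the security joins.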

  • Architecture of finding movable geotagged objects

    - by itsme
    I currently have a Postgres DB filled with approx. 300,000 data sets of moving vehicles all over the world. My very frequently repeated query is: give me all vehicles in a 5/10/20-mile radius. Currently I spend around 600 to 1200 ms in the DB to prepare the set of located vehicle objects. I am looking to vastly improve this time, ideally by one or two orders of magnitude if possible. I am working in a Ruby on Rails 3.0beta environment, if that is relevant. Any ideas how to architect the whole system to accelerate this query? Is any NoSQL database able to deliver this kind of geolocation performance? I know MongoDB is working on an extension to facilitate this scenario but I haven't tried it yet. Any intelligent use of Redis to achieve this? One problem with SQL DBs here seems to be that I can't possibly use indexes, because my vehicles are mostly moving around, meaning I would have to constantly re-create DB indexes, which, by itself, is probably more expensive than just doing the search without an index. Looking forward to your thoughts. Thanks!
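
    For scale: a plain B-tree index is maintained incrementally on every UPDATE rather than rebuilt, so one workable baseline is a crude bounding-box prefilter over indexed coordinate columns, with the exact radius check done only on the small candidate set. A minimal sketch (the vehicles table, its lat/lon columns, and the box size are hypothetical; :qlat/:qlon stand for bind parameters):

        -- the index is updated in place as vehicle positions change
        CREATE INDEX idx_vehicles_lat_lon ON vehicles (lat, lon);

        -- cheap rectangular prefilter around the query point; the exact
        -- 5/10/20-mile distance test runs only over this candidate set
        SELECT id, lat, lon
          FROM vehicles
         WHERE lat BETWEEN :qlat - 0.3 AND :qlat + 0.3
           AND lon BETWEEN :qlon - 0.4 AND :qlon + 0.4;

    PostGIS (or Postgres geometric types with a GiST index) could then replace the hand-rolled distance check.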

  • Apps crashing with EXC_BAD_ACCESS when changing to a custom keyboard layout

    - by Adam Lindberg
    I have a custom layout installed (svdvorak_mac6.keylayout). After a reboot another keyboard layout was selected, so I selected my usual one instead. This led to a lot of apps suddenly starting to crash (Chrome, Skype, Adium, etc.). I can change to any other built-in layout for OS X, but as soon as I choose a custom-installed one (either from ~/Library/Keyboard Layouts/ or from /Library/Keyboard Layouts/) the apps crash. The only thing I can remember that I did before the reboot was to install Google's video chat plugin for the browser. Here's the crash report from Adium: Process: Adium [372] Path: /Applications/Adium.app/Contents/MacOS/Adium Identifier: com.adiumX.adiumX Version: 1.4.1 (1.4.1) Code Type: X86 (Native) Parent Process: launchd [182] Date/Time: 2010-12-22 22:39:47.833 +0100 OS Version: Mac OS X 10.6.5 (10H574) Report Version: 6 Interval Since Last Report: 257401 sec Crashes Since Last Report: 39 Per-App Interval Since Last Report: 1178959 sec Per-App Crashes Since Last Report: 8 Anonymous UUID: 7CBACDEB-FBAF-4CD5-9C15-7AEA8AC4B5EF Exception Type: EXC_BAD_ACCESS (SIGBUS) Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 com.apple.HIToolbox 0x99722b6d islGetInputSourceProperty + 1107 1 com.apple.HIToolbox 0x9972262c TSMGetInputSourceProperty + 526 2 com.apple.HIToolbox 0x9972787a _ISSendWindowServerKeyboardLayoutUpdate + 412 3 com.apple.HIToolbox 0x9972622b _TSMSetInputSourceSelected + 1429 4 com.apple.HIToolbox 0x99980209 TSMMessagePortCallBack + 574 5 com.apple.CoreFoundation 0x9226840c __CFMessagePortPerform + 540 6 com.apple.CoreFoundation 0x921d34db __CFRunLoopRun + 6523 7 com.apple.CoreFoundation 0x921d1464 CFRunLoopRunSpecific + 452 8 com.apple.CoreFoundation 0x921d1291 CFRunLoopRunInMode + 97 9 com.apple.HIToolbox 0x99717f58 RunCurrentEventLoopInMode + 392 10 com.apple.HIToolbox 0x99717d0f ReceiveNextEventCommon + 354 11 com.apple.HIToolbox 0x99717b94 BlockUntilNextEventMatchingListInMode + 81 12 com.apple.AppKit 0x9189478d _DPSNextEvent + 847 13 com.apple.AppKit 0x91893fce -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 156 14 com.apple.AppKit 0x91856247 -[NSApplication run] + 821 15 com.apple.AppKit 0x9184e2d9 NSApplicationMain + 574 16 com.adiumX.adiumX 0x0000322e 0x1000 + 8750 Thread 1: Dispatch queue: com.apple.libdispatch-manager 0 libSystem.B.dylib 0x968f5982 kevent + 10 1 libSystem.B.dylib 0x968f609c _dispatch_mgr_invoke + 215 2 libSystem.B.dylib 0x968f5559 _dispatch_queue_invoke + 163 3 libSystem.B.dylib 0x968f52fe _dispatch_worker_thread2 + 240 4 libSystem.B.dylib 0x968f4d81 _pthread_wqthread + 390 5 libSystem.B.dylib 0x968f4bc6 start_wqthread + 30 Thread 2: 0 libSystem.B.dylib 0x968f4a12 __workq_kernreturn + 10 1 libSystem.B.dylib 0x968f4fa8 _pthread_wqthread + 941 2 libSystem.B.dylib 0x968f4bc6 start_wqthread + 30 Thread 3: com.apple.CFSocket.private 0 libSystem.B.dylib 0x968ee0c6 select$DARWIN_EXTSN + 10 1 com.apple.CoreFoundation 0x92211c83 __CFSocketManager + 1091 2 libSystem.B.dylib 0x968fc85d _pthread_start + 345 3 libSystem.B.dylib 0x968fc6e2 thread_start + 34 Thread 4: 0 libSystem.B.dylib 0x968f4a12 __workq_kernreturn + 10 1 libSystem.B.dylib 0x968f4fa8 _pthread_wqthread + 941 2 libSystem.B.dylib 0x968f4bc6 start_wqthread + 30 Thread 5: 0 libSystem.B.dylib 0x968cf15a semaphore_timedwait_signal_trap + 10 1 libSystem.B.dylib 0x968fcce5 _pthread_cond_wait + 1066 2 libSystem.B.dylib 0x9692bac8 
pthread_cond_timedwait_relative_np + 47 3 ...apple.AddressBook.framework 0x9310043f -[ABRemoteImageLoader workLoop] + 283 4 com.apple.Foundation 0x97822bf0 -[NSThread main] + 45 5 com.apple.Foundation 0x97822ba0 __NSThread__main__ + 1499 6 libSystem.B.dylib 0x968fc85d _pthread_start + 345 7 libSystem.B.dylib 0x968fc6e2 thread_start + 34 Thread 6: 0 libSystem.B.dylib 0x968cf0fa mach_msg_trap + 10 1 libSystem.B.dylib 0x968cf867 mach_msg + 68 2 com.apple.CoreFoundation 0x921d237f __CFRunLoopRun + 2079 3 com.apple.CoreFoundation 0x921d1464 CFRunLoopRunSpecific + 452 4 com.apple.CoreFoundation 0x921d1291 CFRunLoopRunInMode + 97 5 com.apple.Foundation 0x9785b7d0 +[NSURLConnection(NSURLConnectionReallyInternal) _resourceLoadLoop:] + 329 6 com.apple.Foundation 0x97822bf0 -[NSThread main] + 45 7 com.apple.Foundation 0x97822ba0 __NSThread__main__ + 1499 8 libSystem.B.dylib 0x968fc85d _pthread_start + 345 9 libSystem.B.dylib 0x968fc6e2 thread_start + 34 Thread 0 crashed with X86 Thread State (32-bit): eax: 0x00000670 ebx: 0x9972272e ecx: 0x00000000 edx: 0x00000002 edi: 0xa0c3b214 esi: 0x00000004 ebp: 0xbfffe6c8 esp: 0xbfffe660 ss: 0x0000001f efl: 0x00010202 eip: 0x99722b6d cs: 0x00000017 ds: 0x0000001f es: 0x0000001f fs: 0x00000000 gs: 0x00000037 cr2: 0x00000000 Binary Images: 0x1000 - 0x1a0ff7 +com.adiumX.adiumX 1.4.1 (1.4.1) <136586E8-F3F5-99ED-DB1F-48C0027686CB> /Applications/Adium.app/Contents/MacOS/Adium 0x1f9000 - 0x23efe7 +AIUtilities ??? (???) <565A1BC2-4B50-6277-D127-AFBF01F90CE3> /Applications/Adium.app/Contents/Frameworks/AIUtilities.framework/Versions/A/AIUtilities 0x2af000 - 0x307ff7 +com.adiumX.AdiumPurple ??? (1.0) <F4C2A8E4-695E-7CCE-41F3-F8960AFDC149> /Applications/Adium.app/Contents/Frameworks/AdiumLibpurple.framework/Versions/A/AdiumLibpurple 0x34b000 - 0x3ddfe7 +Adium ??? (???) <774A171B-ED45-D221-6A37-486AA15C8BA5> /Applications/Adium.app/Contents/Frameworks/Adium.framework/Versions/A/Adium 0x439000 - 0x510ff7 +com.googlepages.openspecies.rtool.libglib 2.0.0 (2.0.0) <C620AA58-CFC4-855E-1F2F-F84D9335CD5D> /Applications/Adium.app/Contents/Frameworks/libglib.framework/Versions/2.0.0/libglib 0x53d000 - 0x53eff7 +com.googlepages.openspecies.rtool.libgmodule 2.0.0 (2.0.0) <11FF9396-454A-394B-1B12-D84AD535F6F6> /Applications/Adium.app/Contents/Frameworks/libgmodule.framework/Versions/2.0.0/libgmodule 0x542000 - 0x578fe7 +com.googlepages.openspecies.rtool.libgobject 2.0.0 (2.0.0) <D69FB8D0-D271-EC20-42DD-04FCC65A72BF> /Applications/Adium.app/Contents/Frameworks/libgobject.framework/Versions/2.0.0/libgobject 0x58e000 - 0x590ff7 +com.googlepages.openspecies.rtool.libgthread 2.0.0 (2.0.0) <5D4B8DC6-28E3-9285-8E2A-2D7A3CBE11C5> /Applications/Adium.app/Contents/Frameworks/libgthread.framework/Versions/2.0.0/libgthread 0x594000 - 0x59eff7 +com.googlepages.openspecies.rtool.libintl 8 (8) <343C9F94-8840-4465-64E4-86A0092AD69F> /Applications/Adium.app/Contents/Frameworks/libintl.framework/Versions/8/libintl 0x5a3000 - 0x5cbff7 +com.googlepages.openspecies.rtool.libmeanwhile 1 (1) <7B341D44-FA86-F7C3-E800-7D1169EB9CE2> /Applications/Adium.app/Contents/Frameworks/libmeanwhile.framework/Versions/1/libmeanwhile 0x5e0000 - 0x81bff7 +libpurple 8.5.0 (compatibility 8.0.0) <DEB5CE6C-2A4A-16CA-E0EF-DDE812865406> /Applications/Adium.app/Contents/Frameworks/libpurple.framework/Versions/0/libpurple 0x8c3000 - 0x978fe7 libcrypto.0.9.7.dylib 0.9.7 (compatibility 0.9.7) <AACC86C0-86B4-B1A7-003F-2A0AF68973A2> /usr/lib/libcrypto.0.9.7.dylib 0x9be000 - 0x9ccff7 +com.dpompa.fribidi ??? 
(1.0) <EA8AEBCF-DFE5-85FB-5C0E-EB3AB5B0A950> /Applications/Adium.app/Contents/Frameworks/FriBidi.framework/Versions/A/FriBidi 0x21d2000 - 0x21e1fe7 +AutoHyperlinks ??? (???) <A8B5F9E1-E259-F33F-9E60-F4E37B1ED500> /Applications/Adium.app/Contents/Frameworks/AutoHyperlinks.framework/Versions/A/AutoHyperlinks 0x21e7000 - 0x21f3ff7 +net.brockerhoff.RBSplitView.Framework 1.1.4 (1.1.4) <D92691AA-294F-A85D-E7E1-01AD0A0717D2> /Applications/Adium.app/Contents/Frameworks/RBSplitView.framework/Versions/A/RBSplitView 0x21fb000 - 0x220efff +org.andymatuschak.Sparkle 1.5 Beta (bzr) (340) <E0109DBE-F614-66D0-9B82-6151BC40DAD7> /Applications/Adium.app/Contents/Frameworks/Sparkle.framework/Versions/A/Sparkle 0x221c000 - 0x226cfef +com.adiumX.OTR ??? (1.0) <BAE9D6BD-60D5-B53B-19BC-C17287F55EE9> /Applications/Adium.app/Contents/Frameworks/OTR.framework/Versions/A/OTR 0x227d000 - 0x2280ff7 +org.boredzo.LMX ??? (1.0) <92632179-5CFB-EA6B-AAE7-5F4B98BF0CD9> /Applications/Adium.app/Contents/Frameworks/LMX.framework/Versions/A/LMX 0x2286000 - 0x228dff1 +net.oauth.OAuthConsumer ??? (0.1.1) <025882EC-04DA-763B-18F5-5266A5D185FD> /Applications/Adium.app/Contents/Frameworks/OAuthConsumer.framework/Versions/A/OAuthConsumer 0x2296000 - 0x22a6fe7 +com.googlepages.openspecies.rtool.libjson-glib 1.0.0 (1.0.0) <016CAFB1-DD85-3C9D-411C-C696D9D57213> /Applications/Adium.app/Contents/Frameworks/libjson-glib.framework/Versions/1.0.0/libjson-glib 0x2784000 - 0x2788ff3 com.apple.audio.AudioIPCPlugIn 1.1.6 (1.1.6) <F402CF88-D96C-42A0-3207-49759F496AE8> /System/Library/Extensions/AudioIPCDriver.kext/Contents/Resources/AudioIPCPlugIn.bundle/Contents/MacOS/AudioIPCPlugIn 0x278d000 - 0x2793ffb com.apple.audio.AppleHDAHALPlugIn 1.9.9 (1.9.9f12) <82BFF5E9-2B0E-FE8B-8370-445DD94DA434> /System/Library/Extensions/AppleHDA.kext/Contents/PlugIns/AppleHDAHALPlugIn.bundle/Contents/MacOS/AppleHDAHALPlugIn 0x15fda000 - 0x15fdcff7 apop.so ??? (???) <B365DF5B-6A00-9595-27FF-4811B12B1C19> /usr/lib/sasl2/apop.so 0x15fe0000 - 0x15fe9ff7 digestmd5WebDAV.so ??? (???) <FC8C0A3E-1BC3-5016-95E1-E7EF9FF37242> /usr/lib/sasl2/digestmd5WebDAV.so 0x15fee000 - 0x15ff0ff7 libanonymous.2.so ??? (???) <41A1E196-0AB4-1ADD-6362-BB53A0E57ABA> /usr/lib/sasl2/libanonymous.2.so 0x15ff4000 - 0x15ff6ff7 libcrammd5.2.so ??? (???) <032F08C3-2D26-F956-4799-1012A1BBCB71> /usr/lib/sasl2/libcrammd5.2.so 0x15ffa000 - 0x15ffcff7 login.so ??? (???) <4E0B45F7-243E-A3FD-AA75-EF653590BF17> /usr/lib/sasl2/login.so 0x16100000 - 0x16116ff7 dhx.so ??? (???) <B50D8278-4246-4086-E0AF-3CBE96AE9837> /usr/lib/sasl2/dhx.so 0x16123000 - 0x1612bff7 libdigestmd5.2.so ??? (???) <E8D78B02-D51C-F2CB-C4BA-AC9231ED8006> /usr/lib/sasl2/libdigestmd5.2.so 0x16130000 - 0x16135ff7 libgssapiv2.2.so ??? (???) <193995B9-1C15-BEB2-40B7-1598D82F29BB> /usr/lib/sasl2/libgssapiv2.2.so 0x1613a000 - 0x1613fff7 libntlm.so ??? (???) <F97C955D-E521-216F-E8F0-79E8C907217A> /usr/lib/sasl2/libntlm.so 0x16144000 - 0x1614bff7 libotp.2.so ??? (???) <3DF61F7F-4929-A37D-01CB-9A7A90E3B9B7> /usr/lib/sasl2/libotp.2.so 0x16152000 - 0x16154ff7 libplain.2.so ??? (???) <5CC9D89A-9656-EEE8-64AB-E61A22FA8465> /usr/lib/sasl2/libplain.2.so 0x16158000 - 0x1615cff7 libpps.so ??? (???) <C5A25A99-412E-AD7F-D6FD-C4CC07B7B2A5> /usr/lib/sasl2/libpps.so 0x16161000 - 0x16164ff7 mschapv2.so ??? (???) <34DFB657-5E2E-5B83-713B-F57ACFB1E091> /usr/lib/sasl2/mschapv2.so 0x16169000 - 0x1616bff7 shadow_auxprop.so ??? (???) <4073854F-B4C8-A0D4-C0FF-7A0C93BFC70E> /usr/lib/sasl2/shadow_auxprop.so 0x16170000 - 0x16172ff7 smb_lm.so ??? (???) 
<4B7A54D8-241D-CC8C-8759-4C7DC562369D> /usr/lib/sasl2/smb_lm.so 0x16177000 - 0x1617aff7 smb_nt.so ??? (???) <7B7D31B1-10A1-1AE9-E323-C19A3C52DC03> /usr/lib/sasl2/smb_nt.so 0x1617f000 - 0x16182ff7 smb_ntlmv2.so ??? (???) <3BFE5AA9-F215-36B5-E7D7-46BE1BFD63EA> /usr/lib/sasl2/smb_ntlmv2.so 0x194c1000 - 0x19639fe7 GLEngine ??? (???) <A4BBE58C-1211-0473-7B78-C3BA7AC29C9B> /System/Library/Frameworks/OpenGL.framework/Resources/GLEngine.bundle/GLEngine 0x1966b000 - 0x19a70fe7 libclh.dylib 3.1.1 C (3.1.1) <D1A3D8AD-0FED-4AD2-AB43-CF804B7BDBF9> /System/Library/Extensions/GeForceGLDriver.bundle/Contents/MacOS/libclh.dylib 0x19ae8000 - 0x19b0cfe7 GLRendererFloat ??? (???) <EFE5EC6D-74B2-37A2-92E4-526A2EF6B791> /System/Library/Frameworks/OpenGL.framework/Resources/GLRendererFloat.bundle/GLRendererFloat 0x8f0c8000 - 0x8f811ff7 com.apple.GeForceGLDriver 1.6.24 (6.2.4) <DCC16E52-B1F1-90E6-E839-D30DF4CBA468> /System/Library/Extensions/GeForceGLDriver.bundle/Contents/MacOS/GeForceGLDriver 0x8fe00000 - 0x8fe4162b dyld 132.1 (???) <39AC3185-E633-68AA-7CD6-1230E7F1CEF4> /usr/lib/dyld 0x90003000 - 0x90005fe7 com.apple.ExceptionHandling 1.5 (10) <03218275-EBEC-39AA-895A-BA72A5FDBB7A> /System/Library/Frameworks/ExceptionHandling.framework/Versions/A/ExceptionHandling 0x90006000 - 0x90074ff7 com.apple.QuickLookUIFramework 2.3 (327.6) <74706A08-5399-24FE-00B2-4A702A6B83C1> /System/Library/Frameworks/Quartz.framework/Versions/A/Frameworks/QuickLookUI.framework/Versions/A/QuickLookUI 0x90075000 - 0x900b2ff7 com.apple.SystemConfiguration 1.10.5 (1.10.2) <362DF639-6E5F-9371-9B99-81C581A8EE41> /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration 0x900fa000 - 0x901d5feb com.apple.DesktopServices 1.5.9 (1.5.9) <CED00AC1-924B-0E45-7D5E-1CEA8929F5BE> /System/Library/PrivateFrameworks/DesktopServicesPriv.framework/Versions/A/DesktopServicesPriv 0x901d6000 - 0x9021aff3 com.apple.coreui 2 (114) <2234855E-3BED-717F-0BFA-D1A289ECDBDA> /System/Library/PrivateFrameworks/CoreUI.framework/Versions/A/CoreUI 0x9021b000 - 0x9021bff7 com.apple.CoreServices 44 (44) <51CFA89A-33DB-90ED-26A8-67D461718A4A> /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices 0x90258000 - 0x90302fe7 com.apple.CFNetwork 454.11.5 (454.11.5) <D8963574-285A-3BD6-6B25-07D39C6F67A4> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CFNetwork.framework/Versions/A/CFNetwork 0x90303000 - 0x9033efeb libFontRegistry.dylib ??? (???) <4FB144ED-8AF9-27CF-B315-DCE5575D5231> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontRegistry.dylib 0x90342000 - 0x90366ff7 libJPEG.dylib ??? (???) <46AF3A0F-2B8D-87B9-62D4-0905678A64DA> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libJPEG.dylib 0x90367000 - 0x9036afe7 libmathCommon.A.dylib 315.0.0 (compatibility 1.0.0) <1622A54F-1A98-2CBE-B6A4-2122981A500E> /usr/lib/system/libmathCommon.A.dylib 0x9036b000 - 0x903d5fe7 libstdc++.6.dylib 7.9.0 (compatibility 7.0.0) <411D87F4-B7E1-44EB-F201-F8B4F9227213> /usr/lib/libstdc++.6.dylib 0x903d6000 - 0x903daff7 IOSurface ??? (???) 
<D849E1A5-6B0C-2A05-2765-850EC39BA2FF> /System/Library/Frameworks/IOSurface.framework/Versions/A/IOSurface 0x903db000 - 0x903edff7 com.apple.MultitouchSupport.framework 207.10 (207.10) <E1A6F663-570B-CE54-0F8A-BBCCDECE3B42> /System/Library/PrivateFrameworks/MultitouchSupport.framework/Versions/A/MultitouchSupport 0x90437000 - 0x90470ff7 libcups.2.dylib 2.8.0 (compatibility 2.0.0) <D6F24434-8217-DF72-2126-1953080680D7> /usr/lib/libcups.2.dylib 0x9049b000 - 0x904ccff7 libGLImage.dylib ??? (???) <78F59EAB-BBD4-7366-CA84-970547501978> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.dylib 0x904ec000 - 0x909a5ffb com.apple.VideoToolbox 0.484.20 (484.20) <E7B9F015-2569-43D7-5268-375ED937ECA5> /System/Library/PrivateFrameworks/VideoToolbox.framework/Versions/A/VideoToolbox 0x909a6000 - 0x90ab5fe7 com.apple.WebKit 6533.19 (6533.19.4) <A942073C-83DF-33ED-3D01-A24DE8AEAB3D> /System/Library/Frameworks/WebKit.framework/Versions/A/WebKit 0x90ab6000 - 0x90ae6ff7 com.apple.MeshKit 1.1 (49.2) <5A74D1A4-4B97-FE39-4F4D-E0B80F0ADD87> /System/Library/PrivateFrameworks/MeshKit.framework/Versions/A/MeshKit 0x90cc5000 - 0x90cfdff7 com.apple.LDAPFramework 2.0 (120.1) <131ED804-DD88-D84F-13F8-D48E0012B96F> /System/Library/Frameworks/LDAP.framework/Versions/A/LDAP 0x90cfe000 - 0x90f61fef com.apple.security 6.1.1 (37594) <1949216A-7583-B73A-6112-4D55CA5852E3> /System/Library/Frameworks/Security.framework/Versions/A/Security 0x90f62000 - 0x90f64ff7 com.apple.securityhi 4.0 (36638) <E7D83480-77BB-72F9-72F3-AEE198CE589F> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SecurityHI.framework/Versions/A/SecurityHI 0x90f65000 - 0x910e7fe7 libicucore.A.dylib 40.0.0 (compatibility 1.0.0) <35DB7644-0780-D2AB-F6A9-45F28D2D434A> /usr/lib/libicucore.A.dylib 0x910e8000 - 0x91217fe3 com.apple.audio.toolbox.AudioToolbox 1.6.5 (1.6.5) <0A0F68E5-4806-DB51-764B-D97554B801AD> /System/Library/Frameworks/AudioToolbox.framework/Versions/A/AudioToolbox 0x91218000 - 0x91538ff3 com.apple.CoreServices.CarbonCore 861.23 (861.23) <B08756E4-32C5-CC33-0268-7C00A5ED7537> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore 0x91539000 - 0x91578ff7 com.apple.ImageCaptureCore 1.0.3 (1.0.3) <7E02D104-F31C-CF72-71B4-DA5DF7B48337> /System/Library/Frameworks/ImageCaptureCore.framework/Versions/A/ImageCaptureCore 0x91579000 - 0x915b0fe7 libssl.0.9.8.dylib 0.9.8 (compatibility 0.9.8) <7DCB5938-3140-E71A-92BD-8C242F30C8F5> /usr/lib/libssl.0.9.8.dylib 0x915c4000 - 0x9163ffff com.apple.AppleVAFramework 4.10.12 (4.10.12) <89C4EBE2-FE27-3160-0BD1-D0C2ED5F3605> /System/Library/PrivateFrameworks/AppleVA.framework/Versions/A/AppleVA 0x91640000 - 0x91742fef com.apple.MeshKitIO 1.1 (49.2) <D0401AC5-1F92-2BBB-EBAB-58EDD3BA61B9> /System/Library/PrivateFrameworks/MeshKit.framework/Versions/A/Frameworks/MeshKitIO.framework/Versions/A/MeshKitIO 0x91743000 - 0x91744ff7 com.apple.audio.units.AudioUnit 1.6.5 (1.6.5) <BE4C2495-B758-AD22-DCC0-56A6791E948E> /System/Library/Frameworks/AudioUnit.framework/Versions/A/AudioUnit 0x91769000 - 0x91777fe7 libz.1.dylib 1.2.3 (compatibility 1.0.0) <33C1B260-ED05-945D-FC33-EF56EC791E2E> /usr/lib/libz.1.dylib 0x91778000 - 0x9179afef com.apple.DirectoryService.Framework 3.6 (621.9) <F2EEE9D7-D4FB-14F3-E647-ABD32754F557> /System/Library/Frameworks/DirectoryService.framework/Versions/A/DirectoryService 0x9184c000 - 0x9212cff7 com.apple.AppKit 6.6.7 (1038.35) <ABC7783C-E4D5-B848-BED6-99451D94D120> 
/System/Library/Frameworks/AppKit.framework/Versions/C/AppKit 0x9212d000 - 0x9218efe7 com.apple.CoreText 3.5.0 (???) <BB50C045-25F5-65B8-B1DB-8CDAEF45EB46> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreText.framework/Versions/A/CoreText 0x9218f000 - 0x92194ff7 com.apple.OpenDirectory 10.6 (10.6) <C1B46982-7D3B-3CC4-3BC2-3E4B595F0231> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/OpenDirectory 0x92195000 - 0x92310fe7 com.apple.CoreFoundation 6.6.4 (550.42) <C78D5079-663E-9734-7AFA-6CE79A0539F1> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x92311000 - 0x92351ff3 com.apple.securityinterface 4.0.1 (37214) <43CE8A8D-64E5-F36E-4900-FBB1BD6557F1> /System/Library/Frameworks/SecurityInterface.framework/Versions/A/SecurityInterface 0x92352000 - 0x92355ff7 libCGXType.A.dylib 545.0.0 (compatibility 64.0.0) <B624AACE-991B-0FFA-2482-E69970576CE1> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCGXType.A.dylib 0x924b9000 - 0x925fcfef com.apple.QTKit 7.6.6 (1756) <4D809734-4E1B-8E18-C825-86C5422FC3DC> /System/Library/Frameworks/QTKit.framework/Versions/A/QTKit 0x925fd000 - 0x9260dff7 libsasl2.2.dylib 3.15.0 (compatibility 3.0.0) <E276514D-394B-2FDD-6264-07A444AA6A4E> /usr/lib/libsasl2.2.dylib 0x92615000 - 0x92618ff7 libCoreVMClient.dylib ??? (???) <1F738E81-BB71-32C5-F1E9-C1302F71021C> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCoreVMClient.dylib 0x9262c000 - 0x92652ffb com.apple.DictionaryServices 1.1.2 (1.1.2) <43E1D565-6E01-3681-F2E5-72AE4C3A097A> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/DictionaryServices.framework/Versions/A/DictionaryServices 0x9294d000 - 0x9295dff7 com.apple.DSObjCWrappers.Framework 10.6 (134) <95DC4010-ECC4-3A75-5DEE-11BB2AE895EE> /System/Library/PrivateFrameworks/DSObjCWrappers.framework/Versions/A/DSObjCWrappers 0x9295e000 - 0x929a0ff7 libvDSP.dylib 268.0.1 (compatibility 1.0.0) <8A4721DE-25C4-C8AA-EA90-9DA7812E3EBA> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib 0x929a1000 - 0x929abfe7 com.apple.audio.SoundManager 3.9.3 (3.9.3) <DE0E0EF6-8190-3F65-6BDD-5AC9D8A025D6> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/CarbonSound.framework/Versions/A/CarbonSound 0x929ac000 - 0x92a5cff3 com.apple.ColorSync 4.6.3 (4.6.3) <0354B408-665F-8B3F-87FF-64E6322276F0> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ColorSync.framework/Versions/A/ColorSync 0x92a5d000 - 0x92c3ffff com.apple.imageKit 2.0.3 (1.0) <B4DB05F7-01C5-35EE-7AB9-41BD9D63F075> /System/Library/Frameworks/Quartz.framework/Versions/A/Frameworks/ImageKit.framework/Versions/A/ImageKit 0x92cf5000 - 0x92d4bff7 com.apple.MeshKitRuntime 1.1 (49.2) <CB9F38B1-E107-EA62-EDFF-02EE79F6D1A5> /System/Library/PrivateFrameworks/MeshKit.framework/Versions/A/Frameworks/MeshKitRuntime.framework/Versions/A/MeshKitRuntime 0x92d4c000 - 0x92d55ff7 com.apple.DiskArbitration 2.3 (2.3) <6AA6DDF6-AFC3-BBDB-751A-64AE3580A49E> /System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration 0x92dc6000 - 0x92df8fe3 libTrueTypeScaler.dylib ??? (???) 
<6E9D1A50-330E-F1F4-F93D-9ECC8A61B21A> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libTrueTypeScaler.dylib 0x92e7a000 - 0x92e88ff7 com.apple.opengl 1.6.11 (1.6.11) <286D1BC4-4CD8-3CD4-F723-5C196FE15FE0> /System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL 0x92e89000 - 0x92ec7ff7 com.apple.QuickLookFramework 2.3 (327.6) <66955C29-0C99-D02C-DB18-4952AFB4E886> /System/Library/Frameworks/QuickLook.framework/Versions/A/QuickLook 0x92ec8000 - 0x92f92fef com.apple.CoreServices.OSServices 357 (357) <3A26F553-722D-3536-EEDE-FB41FCDAA7FD> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/OSServices.framework/Versions/A/OSServices 0x92f93000 - 0x92f97ff7 libGIF.dylib ??? (???) <DA5758A4-71B0-DD6E-7402-B7FB15387569> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libGIF.dylib 0x92f98000 - 0x93099fe7 libxml2.2.dylib 10.3.0 (compatibility 10.0.0) <ED8E45C6-B078-15E8-938D-99D8FD1EAE64> /usr/lib/libxml2.2.dylib 0x9309a000 - 0x932a1feb com.apple.AddressBook.framework 5.0.3 (875) <759B660B-00F6-F08C-37CD-69468C774B5E> /System/Library/Frameworks/AddressBook.framework/Versions/A/AddressBook 0x932a2000 - 0x933aeff7 libGLProgrammability.dylib ??? (???) <8B308FAE-843F-EE76-0254-3374CBFFA7B3> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLProgrammability.dylib 0x933bf000 - 0x933c2ffb com.apple.help 1.3.1 (41) <6A5AD406-9D8E-5BAC-51E1-E09AB9A6D159> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Help.framework/Versions/A/Help 0x933c3000 - 0x9372eff7 com.apple.QuartzCore 1.6.3 (227.34) <CC1C1631-D8D1-D416-171E-A1683274E479> /System/Library/Frameworks/QuartzCore.framework/Versions/A/QuartzCore 0x9372f000 - 0x93740ff7 com.apple.LangAnalysis 1.6.6 (1.6.6) <3036AD83-4F1D-1028-54EE-54165E562650> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/LangAnalysis.framework/Versions/A/LangAnalysis 0x93741000 - 0x93755ffb com.apple.speech.synthesis.framework 3.10.35 (3.10.35) <9F5CE4F7-D05C-8C14-4B76-E43D07A8A680> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/SpeechSynthesis.framework/Versions/A/SpeechSynthesis 0x93790000 - 0x937e1ff7 com.apple.HIServices 1.8.1 (???) <51BDD848-32A5-2425-BE07-BD037A89630A> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/HIServices.framework/Versions/A/HIServices 0x937e2000 - 0x937e2ff7 liblangid.dylib ??? (???) 
<FCC37057-CDD7-2AF1-21AF-52A06C4048FF> /usr/lib/liblangid.dylib 0x937e3000 - 0x93a0eff3 com.apple.QuartzComposer 4.2 ({156.28}) <08AF01DC-110D-9443-3916-699DBDED0149> /System/Library/Frameworks/Quartz.framework/Versions/A/Frameworks/QuartzComposer.framework/Versions/A/QuartzComposer 0x93a17000 - 0x93a21ffb com.apple.speech.recognition.framework 3.11.1 (3.11.1) <7486003F-8FDB-BD6C-CB34-DE45315BD82C> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SpeechRecognition.framework/Versions/A/SpeechRecognition 0x93a22000 - 0x93a28fff com.apple.CommonPanels 1.2.4 (91) <CE92759E-865E-8A3B-1488-ECD497E4074D> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/CommonPanels.framework/Versions/A/CommonPanels 0x93b47000 - 0x93b94feb com.apple.DirectoryService.PasswordServerFramework 6.0 (6.0) <27F3FF53-F818-9836-2101-3E963FE0C0E0> /System/Library/PrivateFrameworks/PasswordServer.framework/Versions/A/PasswordServer 0x93b95000 - 0x93c30ff7 com.apple.ApplicationServices.ATS 4.4 (???) <ECB16606-4DF8-4AFB-C91D-F7947C26040F> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/ATS 0x93c31000 - 0x93c31ff7 com.apple.quartzframework 1.5 (1.5) <7DD4EBF1-60C4-9329-08EF-6E59731D9430> /System/Library/Frameworks/Quartz.framework/Versions/A/Quartz 0x93d51000 - 0x93d72fe7 com.apple.opencl 12.3 (12.3) <DEA600BF-4F54-66B5-DB2F-DC57FD518543> /System/Library/Frameworks/OpenCL.framework/Versions/A/OpenCL 0x93d73000 - 0x93e77fe7 libcrypto.0.9.8.dylib 0.9.8 (compatibility 0.9.8) <BDEFA030-5E75-7C47-2904-85AB16937F45> /usr/lib/libcrypto.0.9.8.dylib 0x93e78000 - 0x93ef8feb com.apple.SearchKit 1.3.0 (1.3.0) <7AE32A31-2B8E-E271-C03A-7A0F7BAFC85C> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SearchKit.framework/Versions/A/SearchKit 0x93ef9000 - 0x93f68ff7 libvMisc.dylib 268.0.1 (compatibility 1.0.0) <595A5539-9F54-63E6-7AAC-C04E1574B050> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib 0x93f69000 - 0x94122feb com.apple.ImageIO.framework 3.0.4 (3.0.4) <C145139E-24C4-5A3D-B17C-809D528354B2> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/ImageIO 0x94123000 - 0x94166ff7 com.apple.NavigationServices 3.5.4 (182) <8DC6FD4A-6C74-9C23-A4C3-715B44A8D28C> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/NavigationServices.framework/Versions/A/NavigationServices 0x941a0000 - 0x941fafe7 com.apple.CorePDF 1.3 (1.3) <EA168671-F44F-BFE4-AA7D-3801DA29A650> /System/Library/PrivateFrameworks/CorePDF.framework/Versions/A/CorePDF 0x941fb000 - 0x94258ff7 com.apple.framework.IOKit 2.0 (???) <A769737F-E0D6-FB06-29B4-915CF4F43420> /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit 0x94259000 - 0x9425dff7 libGFXShared.dylib ??? (???) <C3A805C4-C0E5-B300-430A-7E811395CB8E> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGFXShared.dylib 0x942ef000 - 0x9439dff3 com.apple.ink.framework 1.3.3 (107) <233A981E-A2F9-56FB-8BDE-C2DEC3F20784> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Ink.framework/Versions/A/Ink 0x9439e000 - 0x94b8d557 com.apple.CoreGraphics 1.545.0 (???) <1AB39678-00D5-FB88-3B41-93D78348E0DE> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/CoreGraphics 0x94b8e000 - 0x94bd7fe7 libTIFF.dylib ??? (???) 
<AC1FC806-F7F4-174B-375F-FE5D6008666C> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib 0x94e21000 - 0x94f4ffe7 com.apple.CoreData 102.1 (251) <87FE6861-F2D6-773D-ED45-345272E56463> /System/Library/Frameworks/CoreData.framework/Versions/A/CoreData 0x95ea3000 - 0x95ee6ff7 libGLU.dylib ??? (???) <F8580594-0B38-F3ED-A715-CB3776B747A0> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.dylib 0x95eef000 - 0x95f71ffb SecurityFoundation ??? (???) <A8D248DE-8670-970D-39E3-A9738CFDBEE1> /System/Library/Frameworks/SecurityFoundation.framework/Versions/A/SecurityFoundation 0x95f72000 - 0x95f7fff7 com.apple.NetFS 3.2.1 (3.2.1) <94A52A6D-F071-09D7-E80F-F633F17233FE> /System/Library/Frameworks/NetFS.framework/Versions/A/NetFS 0x95f80000 - 0x95f82ff7 libRadiance.dylib ??? (???) <10048B4A-2AE8-A4E2-21B8-C6E7A8C5B76F> snip... Superuser character limit :-(

  • Oracle Announces New Oracle Exastack Program for ISV Partners

    - by pfolgado
    Oracle Exastack Program Enables ISV Partners to Leverage a Scalable, Integrated Infrastructure to Deliver Their Applications Tuned and Optimized for High-Performance News Facts Enabling Independent Software Vendors (ISVs) and other members of Oracle Partner Network (OPN) to rapidly build and deliver faster, more reliable applications to end customers, Oracle today introduced Oracle Exastack Ready, available now, and Oracle Exastack Optimized, available in fall 2011 through OPN. The Oracle Exastack Program focuses on helping ISVs run their solutions on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud -- integrated systems in which the software and hardware are engineered to work together. These products provide partners with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments. Leveraging the new Oracle Exastack Program in which applications can qualify as Oracle Exastack Ready or Oracle Exastack Optimized, partners can use available OPN resources to optimize their applications to run faster and more reliably -- providing increased performance to their end users. By deploying their applications on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud, ISVs can reduce the cost, time and support complexities typically associated with building and maintaining a disparate application infrastructure -- enabling them to focus more on their core competencies, accelerating innovation and delivering superior value to customers. After qualifying their applications as Oracle Exastack Ready, partners can note to customers that their applications run on and support Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud component products including Oracle Solaris, Oracle Linux, Oracle Database and Oracle WebLogic Server. Customers can be confident when choosing a partner's Oracle Exastack Optimized application, knowing it has been tuned by the OPN member on Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud with a goal of delivering optimum speed, scalability and reliability. Partners participating in the Oracle Exastack Program can also leverage their Oracle Exastack Ready and Oracle Exastack Optimized applications to advance to Platinum or Diamond level in OPN. Oracle Exastack Programs Provide ISVs a Reliable, High-Performance Application Infrastructure With the Oracle Exastack Program ISVs have several options to qualify and tune their applications with Oracle Exastack, including: Oracle Exastack Ready: Oracle Exastack Ready provides qualifying partners with specific branding and promotional benefits based on their adoption of Oracle products. If a partner application supports the latest major release of one of these products, the partner may use the corresponding logo with their product marketing materials: Oracle Solaris Ready, Oracle Linux Ready, Oracle Database Ready, and Oracle WebLogic Ready. Oracle Exastack Ready is available to OPN members at the Gold level or above. Additionally, OPN members participating in the program can leverage their Oracle Exastack Ready applications toward advancement to the Platinum or Diamond levels in the OPN Specialized program and toward achieving Oracle Exastack Optimized status. 
Oracle Exastack Optimized: When available, for OPN members at the Gold level or above, Oracle Exastack Optimized will provide direct access to Oracle technical resources and dedicated Oracle Exastack lab environments so OPN members can test and tune their applications to deliver optimal performance and scalability on Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud. Oracle Exastack Optimized will provide OPN members with specific branding and promotional benefits including the use of the Oracle Exastack Optimized logo. OPN members participating in the program will also be able to leverage their Oracle Exastack Optimized applications toward advancement to Platinum or Diamond level in the OPN Specialized program. Oracle Exastack Labs and ISV Enablement: Dedicated Oracle Exastack lab environments and related technical enablement resources (including Guided Learning Paths and Boot Camps) will be available through OPN for OPN members to further their knowledge of Oracle Exastack offerings, and qualify their applications for Oracle Exastack Optimized or Oracle Exastack Ready. Oracle Exastack labs will be available to qualifying OPN members at the Gold level or above. Partners are eligible to participate in the Oracle Exastack Ready program immediately, which will help them meet the requirements to attain Oracle Exastack Optimized status in the future. Guidelines for Oracle Exastack Optimized, as well as Oracle Exastack Labs will be available in fall 2011. Supporting Quotes "In order to effectively differentiate their software applications in the marketplace, ISVs need to rapidly deliver new capabilities and performance improvements," said Judson Althoff, Oracle senior vice president of Worldwide Alliances and Channels and Embedded Sales. "With Oracle Exastack, ISVs have the ability to optimize and deploy their applications with a complete, integrated and cloud-ready infrastructure that will help them accelerate innovation, unlock new features and functionality, and deliver superior value to customers." "We view performance as absolutely critical and a key differentiator," said Tom Stock, SVP of Product Management, GoldenSource. "As a leading provider of enterprise data management solutions for securities and investment management firms, with Oracle Exadata Database Machine, we see an opportunity to notably improve data processing performance -- providing high quality 'golden copy' data in a reduced timeframe. Achieving Oracle Exastack Optimized status will be a stamp of approval that our solution will provide the performance and scalability that our customers demand." "As a leading provider of Revenue Intelligence solutions for telecommunications, media and entertainment service providers, our customers continually demand more readily accessible, enriched and pre-analyzed information to minimize their financial risks and maximize their margins," said Alon Aginsky, President and CEO of cVidya Networks. "Oracle Exastack enables our solutions to deliver the power, infrastructure, and innovation required to transform our customers' business operations and stay ahead of the game." Supporting Resources Oracle PartnerNetwork (OPN) Oracle Exastack Oracle Exastack Datasheet Judson Althoff blog Connect with the Oracle Partner community at OPN on Facebook, OPN on LinkedIn, OPN on YouTube, or OPN on Twitter

  • SQL Developer Blitz at ODTUG Kscope12

    - by thatjeffsmith
    Oracle Development Tools User Group (ODTUG) puts on an outstanding event, and I enjoy that the content comes FIRST. Yes, the after-event parties and entertainment are first class, but I look forward most to sitting in on some excellent sessions. For Kscope12 you would expect Oracle to have a large presence, and you would be absolutely correct! The APEX team will be there in full force, and we’ll have sessions on JDeveloper, ADF, and .NET. But what I want to talk about today is our awesome line-up of coverage for Oracle SQL Developer (Surprise!) DB and Developer’s Toolbox Symposium Kris Rice (@krisrice), Product Development Manager for SQL Developer, will speak at 10AM Sunday about SQL Developer Data Modeler. Our free data modeling solution allows one to reverse engineer a data dictionary to a model, modify it, and create a script of the changes. Collaboration is an important part of any development team; with built-in Subversion support, the modeler makes collaboration easy, not just possible. After the morning break, I’ll be talking about SQL Developer’s PL/SQL support. From creating your code, to debugging, tuning, testing, and documenting PL/SQL – SQL Developer fits the bill. Since I have a full hour, I should have time to do a little riff on using source control to version and manage your revisions too! At 3:15 Jagan Athreya will talk about the new integration between SQL Developer and Enterprise Manager Cloud Control 12c. Enabling developers to define changes in SQL Developer and allowing DBAs to promote these changes to Test and Production via Enterprise Manager will reduce errors, accelerate productivity, and help eliminate unplanned downtime. Get your SQL Developer groove on at ODTUG Kscope12! Presentations SQL Developer Tips and Tricks When: Monday June 25, Session 5, 4:15 pm – 5:15 pm I’ll take you through my favorite keyboard shortcuts, the top 10 preferences every user should tweak, and spotlight features that the average user probably hasn’t discovered yet. My goal for this session is for everyone to take away 1-2 tips they can implement immediately to save mucho time. I enjoy interacting with the audience so no two versions of this presentation are the same. Oracle SQL Developer and Data Modeler New Features When: Tuesday June 26, Session 6, 8:30 am – 9:30 am Ashley Chen, my PM-partner-in-crime, will be covering all the new features from our two latest updates. So if you’re new to SQL Developer, or you’ve been using an older version, stop by and see what new toys you have to play with. I also have a bet with Ashley that she will have more attendees than me, so be sure to show up so I can collect. Debugging PL/SQL With SQL Developer When: Wednesday June 27, Session 16, 3:00 pm – 4:00 pm Me again – sorry. This time I have an entire hour to JUST talk about PL/SQL and debugging! Should you use a watch with a break condition, or a breakpoint with a passcount? How does external debugging with a Perl script work? Can I just debug an anonymous PL/SQL block? So if debugging to you is just a DBMS_OUTPUT.PUT_LINE() call, stop by and see how our IDE can help you take things to the next level! Or is that level++? Hands-on Training SQL Developer Soup to Nuts When: Tuesday, 8:00 AM – 9:30 AM If you learn by doing, this is the session for you. Bring your own laptop or use one of the lab machines.
    We’ll give you a VirtualBox OEL image running 11gR2 EE Database with all the fixin’s (that’s Southern speak for Partitioning, Advanced Compression, Tuning & Diagnostic Packs, etc.), TimesTen, APEX and much more. All you have to do is log in and run through our lab exercises. You can start with a model and work your way up to debugging and testing your own application, or you can pick and choose your lessons to suit your needs. We’ll have people on hand to help you out and answer your questions. Booth Hours We’ll be in the vendor area and have our very own ‘demo pod’ for SQL Developer. Between Kris, Ashley, and me, we should be able to answer your questions or show you how to ‘do that thing’ in the tool. Or just stop by and say hello! We’ll be around the following hours’ish: Sunday, June 24, 2012 6:00 PM – 8:00 PM Monday, June 25, 2012 9:00 AM – 4:30 PM Tuesday, June 26, 2012 9:30 AM – 3:30 PM Wednesday, June 27, 2012 10:15 AM – 2:00 PM No Excuses – If You Have Questions, This is Your Chance to Get Your Answers! We’re doing just about everything outside of a scavenger hunt to bring information and value to our users. Let us know what you like, what you don’t like, and we’ll do our best to do more of the former and less of the latter!

  • Off The Beaten Path—Three Things Growing Midsize Companies are Thankful For

    - by Christine Randle
    By: Jim Lein, Senior Director, Oracle Accelerate Last Sunday I went on a walkabout. That’s when I just step out the door of my Colorado home and hike through the mountains for hours with no predetermined destination. I favor “social trails”, the unmapped routes pioneered by both animal and human explorers. These tracks are usually more challenging than established, marked routes, and you can’t be 100% sure of where you’re going to end up. But I’ve found the rewards to be much greater. For a while, I pondered how—depending upon your perspective—the current economic situation worldwide could be viewed as either a classic “the glass is half empty” or a “the glass is half full” scenario. Midsize companies buy Oracle to grow, and so I’m continually amazed and fascinated by the success stories our customers relate to me. Oracle’s successful midsize companies are growing via innovation, agility, and opportunity. For them, the glass isn’t half full—it’s overflowing. Growing Midsize Companies are Thankful for: Innovation The sun angling through the pine trees reminded me of a conversation with a European customer a year ago May. You might not recognize the name but, chances are, your local evening weather report relies on this company’s weather observation, monitoring and measurement products. For decades, the company was recognized in its industry for product innovation, but its recent rapid growth comes from tailoring end-to-end product and service solutions based on the needs of distinctly different customer groups across the industrial, public sector, and defense sectors. Hours after that phone call I was walking my dog in a local park and came upon a small white plastic box sprouting short antennas and dangling by a nylon cord from a tree branch. I cut it down. The name of that customer’s company was stamped on the housing. “It’s a radiosonde from a high-altitude weather balloon,” he told me the next day. “Keep it as a souvenir.” It sits on my fireplace mantle and elicits many questions from guests. Growing Midsize Companies are Thankful for: Agility In July, I had another interesting discussion with the CFO of an Asia-Pacific company which owns and operates a large portfolio of leisure assets. They are best known for their epic outdoor theme parks. However, their primary growth today is coming from a chain of indoor amusement centers in the USA, where billiards, bowling, and laser tag take the place of roller coasters, kiddy rides, and wave pools. With mountains and rivers right out my front door, I’m not much for theme parks, but I’ll take a spirited game of laser tag any day. This company has grown dramatically since first implementing Oracle ERP more than a decade ago. Their profitable expansion into a completely foreign market is derived from the ability to replicate proven and efficient best business practices across diverse operating environments. They recently went live on Oracle’s Fusion HCM and Taleo. Their CFO explained to me how, with thousands of employees in three countries, Fusion HCM and Taleo would enable them to remain incredibly agile by acting on trends linking individual employee performance to their management, establishing and maintaining those best practices. Growing Midsize Companies are Thankful for: Opportunity I have three GPS apps on my iPhone. I use them mainly to keep track of my stats—distance, time, and vertical gain.
    However, every once in a while I need to find the most efficient route back home before dark from my current location (notice I didn’t use the word “lost”). In August I listened in on an interview with the CFO of another European company that designs and delivers telematics solutions—the integrated use of telecommunications and informatics—for managing the mobile workforce. These solutions enable customers to achieve evolutionary step-changes in their performance and service delivery. Forgive the overused metaphor, but this is route optimization on steroids. The company’s executive team saw an opportunity in this emerging market and went “all in”. Consequently, they are being rewarded with tremendous growth results and market domination by providing the ability for their clients to collect and analyze performance information related to fuel consumption, service workforce safety, and asset productivity. This Thanksgiving, I’m thankful for health, family, friends, and a career with an innovative company that helps companies leverage top-tier software to drive and manage growth. And I’m thankful to have learned the lesson that good things happen when you get off the beaten path—both when hiking and when forging new routes through a complex world economy. Halfway through my walkabout on Sunday, after scrambling up a long stretch of scree-covered hill, I crested a ridge with an unobstructed view of 14,265 ft Mt Evans just a few miles to the west. There, nowhere near a house or a trail, someone had placed a wooden lounge chair. Its wood was worn and faded but it was sturdy. I had lunch and a cold drink in my pack. Opportunity knocked and I seized it. Happy Thanksgiving.

  • Oracle Database 12c Spatial: Vector Performance Acceleration

    - by Okcan Yasin Saygili-Oracle
    Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. This allows organizations to make better decisions, respond to customers more effectively, and reduce operational costs – increasing ROI and creating competitive advantage. Oracle Database, the industry’s most advanced database, includes native location capabilities, fully integrated in the kernel, for fast, scalable, reliable and secure spatial and massive graph applications. It is a foundation for deploying enterprise-wide spatial information systems and location-enabled business applications. Developers can extend existing Oracle-based tools and applications, since they can easily incorporate location information directly in their applications, workflows, and services. Spatial Features The geospatial data features of the Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. The Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence. Network Data Model Graph Features The Network Data Model graph explicitly stores and maintains a persistent data model with network connectivity and provides network analysis capabilities such as shortest path, nearest neighbors, within cost and reachability. It loads partitioned networks into memory on demand, overcoming the limitations of in-memory analysis. Partitioning massive networks into manageable sub-networks simplifies network analysis. RDF Semantic Graph Features RDF Semantic Graph has native support for World Wide Web Consortium standards. It has open, scalable, and secure features for storing RDF/OWL ontologies and data; native inference with OWL 2, SKOS and user-defined rules; and querying RDF/OWL data with SPARQL 1.1, Java APIs, and SPARQL graph patterns in SQL. Video: Oracle Spatial and Graph Overview Oracle Spatial is embedded in the Oracle Database product, so we can use the Oracle installer (OUI). The Oracle Universal Installer (OUI) is used to install Oracle Database software. OUI is a graphical user interface utility that enables you to view the Oracle software that is installed on your machine, install new Oracle Database software, and delete Oracle software that you no longer need to use. Online Help is available to guide you through the installation process. One of the installation options is to create a database. If you select database creation, OUI automatically starts Oracle Database Configuration Assistant (DBCA) to guide you through the process of creating and configuring a database. If you do not create a database during installation, you must invoke DBCA after you have installed the software to create a database. You can also use DBCA to create additional databases.
    For installing Oracle Database 12c you may check the Installing Oracle Database Software and Creating a Database tutorial under the Oracle Database 12c 2-Day DBA Series. You can always check if Spatial is available in your database using:

        select comp_id, version, status, comp_name from dba_registry where comp_id = 'SDO';

    One of the most notable improvements with Oracle Spatial and Graph 12c can be seen in performance increases in vector data operations. Enabling the Spatial Vector Acceleration feature (available with the Spatial option) dramatically improves the performance of commonly used vector data operations, such as sdo_distance, sdo_aggr_union, and sdo_inside. With 12c, these operations also run more efficiently in parallel than in prior versions through the use of metadata caching. For organizations that have been facing processing limitations, these enhancements enable developers to make a small set of configuration changes and quickly realize significant performance improvements. Results include improved index performance, enhanced geometry engine performance, optimized secondary filtering for Spatial operators, and improved CPU and memory utilization for many advanced vector functions. Vector performance acceleration is especially beneficial when using Oracle Exadata Database Machine and other large-scale systems. Oracle Spatial and Graph vector performance acceleration builds on general improvements available to all SDO_GEOMETRY operations in these areas: caching of index metadata, concurrent update mechanisms, and optimized spatial predicate selectivity and cost functions. These optimizations enable more efficient use of CPU, memory, and partitioning, resulting in substantial query performance improvements. Usage: To accelerate the performance of spatial operators, it is recommended that you set the SPATIAL_VECTOR_ACCELERATION database system parameter to the value TRUE. (This parameter is authorized for use only by licensed Oracle Spatial users, and its default value is FALSE.) You can set this parameter for the whole system or for a single session. To set the value for the whole system, do either of the following. Enter the following statement from a suitably privileged account:

        ALTER SYSTEM SET SPATIAL_VECTOR_ACCELERATION = TRUE;

    Or add the following to the database initialization file (xxxinit.ora):

        SPATIAL_VECTOR_ACCELERATION = TRUE;

    To set the value for the current session, enter the following statement from a suitably privileged account:

        ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE;

    Check out the complete list of new features on Oracle.com @ http://www.oracle.com/technetwork/database/options/spatialandgraph/overview/index.html Spatial and Graph Data Sheet (PDF) Spatial and Graph White Paper (PDF)
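
    As a concrete illustration of the operators this feature accelerates, a query using SDO_INSIDE might look like the following sketch (the customers table, its location column, and the :sales_region bind variable are hypothetical):

        ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE;

        -- find customers whose location falls inside a query polygon;
        -- the accelerated secondary-filter work is what gets faster
        SELECT c.customer_id
          FROM customers c
         WHERE SDO_INSIDE(c.location, :sales_region) = 'TRUE';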

  • SPARC M7 Chip - 32 Cores - Mind-Blowing Performance

    - by Angelo-Oracle
    The M7 Chip Oracle just announced its next-generation processor at the Hot Chips HC26 conference. As the Tech Lead in our Systems Division's Partner group, I had a front-row seat to the extraordinary price/performance advantage of Oracle's current T5- and M6-based systems. Partner after partner tested these systems and was impressed with their performance. Just read some of the quotes to see what our partners have been saying about our hardware. We just announced our next-generation processor, the M7. This has 32 cores (up from 16 cores in the T5 and 12 cores in the M6). With 20 nm technology, this is our most advanced processor. The processor has more cores than anything else in the industry today. After the Sun acquisition, Oracle released 5 processors in 4 years, and this is the 6th. The S4 core The M7 is built using the foundation of the S4 core. This is the next-generation core technology. Like its predecessor, the S4 has 8 dynamic threads. It increases the frequency while maintaining the pipeline depth. Each core has its own fine-grain power estimator that keeps the core within its power envelope at 250 nanosecond granularity. Each core also includes Software in Silicon features for application acceleration support. Each core includes features to improve Application Data Integrity, with almost no performance loss. The core also allows using part of the virtual address to store meta-data. User-level synchronization instructions are also part of the S4 core. Each core has 16 KB instruction and 16 KB data L1 caches. The Core Clusters The cores on the M7 chip are organized in sets of 4-core clusters. The core clusters share L2 cache. All four cores in the complex share 256 KB of 4-way set-associative L2 instruction cache, with over 1/2 TB/s of throughput. Two cores share 256 KB of 8-way set-associative L2 data cache, with over 1/2 TB/s of throughput. With this innovative core cluster architecture, the M7 doubles core execution bandwidth to maximize per-thread performance. The Chip Each M7 chip has 8 sets of these core clusters. The chip has 64 MB of on-chip L3 cache. This L3 cache is shared among all the cores and is partitioned into 8 x 8 MB chunks. Each chunk is an 8-way set-associative cache. The aggregate bandwidth for the L3 cache on the chip is over 1.6 TB/s. Each chip has 4 DDR4 memory controllers and can support up to 16 DDR4 DIMMs, allowing for 2 TB of RAM per chip. The chip also includes 4 internal links of PCIe Gen3 I/O controllers. Each chip has 7 coherence links, allowing 8 of these chips to be connected together gluelessly. Also, 32 of these chips can be connected in an SMP configuration. A potential system with 32 chips will have 1024 cores, 8192 threads, and 64 TB of RAM. Software in Silicon The M7 chip has many built-in application accelerators in silicon. These features will be exposed to our software partners using the SPARC Accelerator Program. The M7 has built-in logic to decompress data at the speed of memory access. This means that applications can work directly on compressed data in memory, increasing data access rates. The VA Masking feature allows the use of part of the virtual address to store meta-data. Realtime Application Data Integrity The Realtime Application Data Integrity feature helps applications safeguard against invalid or stale memory references and buffer overflows. The first 4 bits of the pointer can be used to store a version number, and this version number is also maintained in the memory and cache lines.
When a pointer accesses memory, the hardware checks that the two versions match; a SEGV signal is raised when there is a mismatch. This feature can be used by the database, applications, and the OS. M7 Database In-Memory Query Accelerator. The M7 chip also includes in-silicon query engines that accelerate tasks operating on in-memory columnar vectors. The Oracle Database In-Memory option stores data in column format, and the M7 query engines can speed up in-memory format conversion, value and range comparisons, and set-membership lookups. These engines can work on compressed data, which means we are not only accelerating query performance but also increasing the effective memory bandwidth for queries. SPARC Accelerated Program. At the Hot Chips conference we also introduced the SPARC Accelerated Program to give our partners and third-party developers access to the M7's SPARC application acceleration features. Please get in touch with us if you are interested in knowing more about this program.
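To make the pointer-versioning idea concrete, here is a minimal C++ sketch of the concept. To be clear, this is a software illustration only, not the actual SPARC/Solaris ADI interface: on the M7 the version comparison happens in hardware at cache-line granularity with essentially no overhead, whereas here a 4-bit version is packed into the unused top bits of a 64-bit pointer and checked explicitly against the version recorded for the allocation.

#include <cstdint>
#include <cstdlib>
#include <stdexcept>

// Illustrative only: mimics the M7 ADI idea of a 4-bit version carried in
// the upper bits of the address that must match the version stored with
// the memory itself.
constexpr int      kVersionShift = 60;
constexpr uint64_t kVersionMask  = uint64_t{0xF} << kVersionShift;

struct Allocation {
    void*   mem;
    uint8_t version;   // version the memory was tagged with
};

void* tag_pointer(void* p, uint8_t version) {
    uint64_t raw = reinterpret_cast<uint64_t>(p) & ~kVersionMask;
    return reinterpret_cast<void*>(raw | (uint64_t(version & 0xF) << kVersionShift));
}

void* check_and_strip(void* tagged, const Allocation& a) {
    uint8_t v = uint8_t((reinterpret_cast<uint64_t>(tagged) & kVersionMask) >> kVersionShift);
    if (v != a.version)   // the hardware would raise SEGV here
        throw std::runtime_error("version mismatch: stale or invalid pointer");
    return reinterpret_cast<void*>(reinterpret_cast<uint64_t>(tagged) & ~kVersionMask);
}

int main() {
    Allocation a{std::malloc(64), 0x3};
    void* p = tag_pointer(a.mem, a.version);
    check_and_strip(p, a);    // OK: versions match
    a.version = 0x4;          // memory re-tagged, e.g. freed and reused
    // check_and_strip(p, a); // would now throw: stale pointer detected
    std::free(a.mem);
    return 0;
}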

    Read the article

  • 3 Ways to Make Steam Even Faster

    - by Chris Hoffman
    Have you ever noticed how slow Steam’s built-in web browser can be? Do you struggle with slow download speeds? Or is Steam just slow in general? These tips will help you speed it up. Steam isn’t a game itself, so there are no 3D settings to change to achieve maximum performance. But there are some things you can do to speed it up dramatically. Speed Up the Steam Web Browser. Steam’s built-in web browser — used in both the Steam store and Steam’s in-game overlay to provide a web browser you can quickly use within games — can be frustratingly slow on many systems. Rather than the typical speed we’ve come to expect from Chrome, Firefox, or even Internet Explorer, Steam seems to struggle. When you click a link or go to a new page, there’s a noticeable delay before the new page appears — something that doesn’t happen in desktop browsers. Many people seem to have made peace with this slowness, accepting that Steam’s built-in browser is just bad. However, there’s a trick that will eliminate this delay on many systems and make the Steam web browser fast. This problem seems to arise from an incompatibility with the Automatically Detect Proxy Settings option, which is enabled by default on Windows. This is a compatibility option that very few people should actually need, so it’s safe to disable it. To disable this option, open the Internet Options dialog — press the Windows key to access the Start menu or Start screen, type Internet Options, and click the Internet Options shortcut. Select the Connections tab in the Internet Options window and click the LAN settings button. Uncheck the Automatically detect settings option here, then click OK to save your settings. If you experienced a significant delay every time a web page loaded in Steam’s web browser, it should now be gone. In the unlikely event that you encounter some sort of problem with your network connection, you could always re-enable this option. (For those who prefer to script this change, a hedged sketch appears at the end of this article.) Increase Steam’s Game Download Speed. Steam attempts to automatically select the nearest download server to your location. However, it may not always select the ideal download server. Or, in the case of high-traffic events like big seasonal sales and huge game launches, you may benefit from selecting a less-congested server. To do this, open Steam’s settings by clicking the Steam menu in Steam and selecting Settings. Click over to the Downloads tab and select the closest download server from the Download Region box. You should also ensure that Steam’s download bandwidth isn’t limited from here. You may want to restart Steam and see if your download speeds improve after changing this setting. In some cases, the closest server might not be the fastest. A server a bit farther away could be faster if your local server is more congested, for example. Steam once provided information about content server load, which allowed you to select a regional server that wasn’t under high load, but this information no longer seems to be available. Steam still provides a page that shows you the amount of download activity happening in different regions, including statistics about the difference in download speeds in different US states, but this information isn’t as useful. Accelerate Steam and Your Games. One way to speed up all your games — and Steam itself — is by getting a solid-state drive and installing Steam to it. Steam allows you to easily move your Steam folder — at C:\Program Files (x86)\Steam by default — to another hard drive. Just move it like you would any other folder. 
You can then launch the Steam.exe program as if you had never moved Steam’s files. Steam also allows you to configure multiple game library folders. This means that you can set up a Steam library folder on a solid-state drive and one on your larger magnetic hard drive. Install your most frequently played games to the solid-state drive for maximum speed and your less frequently played ones to the slower magnetic hard drive to save SSD space. To set up additional library folders, open Steam’s Settings window and click the Downloads tab. You’ll find the Steam Library Folders option here. Click the Add Library Folder button and create a new game library on another hard drive. When you install a game in Steam, you’ll be asked which library folder you want to install it to. With the proxy compatibility option disabled, the correct download server chosen, and Steam installed to a fast SSD, it should be a speed demon. There’s not much more you can do to speed up Steam, short of upgrading other hardware like your computer’s CPU. Image Credit: Andrew Nash on Flickr     
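If you prefer to script the proxy fix rather than click through the Internet Options dialog, the sketch below shows one way it might be done on Windows. A strong caveat: the binary layout of the DefaultConnectionSettings value (a flags byte at offset 8, with bit 0x08 meaning "automatically detect settings") is an assumption based on commonly reported behavior, not a documented API, so treat this as an experiment and test carefully before relying on it.

#include <windows.h>
#include <cstdio>
#include <vector>

// Sketch: clear the "Automatically detect settings" flag for the current
// user. ASSUMPTION: DefaultConnectionSettings is a REG_BINARY blob whose
// byte at offset 8 holds connection flags, with 0x08 = auto-detect. This
// layout is widely reported but not officially documented.
int main() {
    const char* subkey = "Software\\Microsoft\\Windows\\CurrentVersion\\"
                         "Internet Settings\\Connections";
    HKEY key = nullptr;
    if (RegOpenKeyExA(HKEY_CURRENT_USER, subkey, 0,
                      KEY_READ | KEY_WRITE, &key) != ERROR_SUCCESS) {
        std::fprintf(stderr, "could not open registry key\n");
        return 1;
    }
    DWORD size = 0;
    if (RegQueryValueExA(key, "DefaultConnectionSettings", nullptr, nullptr,
                         nullptr, &size) == ERROR_SUCCESS && size > 8) {
        std::vector<BYTE> blob(size);
        RegQueryValueExA(key, "DefaultConnectionSettings", nullptr, nullptr,
                         blob.data(), &size);
        blob[8] &= static_cast<BYTE>(~0x08);  // drop the auto-detect bit
        RegSetValueExA(key, "DefaultConnectionSettings", 0, REG_BINARY,
                       blob.data(), size);
    }
    RegCloseKey(key);
    return 0;  // restart Steam so it picks up the new proxy setting
}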

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best contact centres are those with high first-contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you: phone, web, chat, and social, to name a few. But regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer, and increases call-handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference. Contextual Workspaces: the screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if-then-else logic to display only the information the agent needs for the issue at hand. Assigned at the profile level, different levels of agent, from novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round. Agent Scripting: sometimes agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents capture the right level of information and guides them step by step, ensuring no mistakes, inconsistencies or missed opportunities. Guided Assistance: this is typically used to solve common troubleshooting issues, displaying a series of question-and-answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides for quickly crafting chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert (a toy sketch of the decision-tree idea appears at the end of this article). Desktop Workflow: take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes. 
This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win! I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort. Five top tips to take away: Start by working out “why” and “how” customers are contacting you. Implement a clean and relevant agent desktop to support your agents. If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction. Enhance your knowledge base with guides; agents can access them proactively, and they can be published on your web pages for customers to help themselves. Script any complex, critical or sensitive interactions to ensure consistency and accuracy. Desktop optimization is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your contact centre successful.   Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the 2-day Oracle RightNow Customer Portal Design and Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.   Useful resources: Review the Best Practice Guide. Review the tune-up guide.   About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class: RightNow Customer Service Administration (3 days), RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days), RightNow Analytics (2 days), and RightNow Chat Cloud Service Administration (2 days).
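Since Guided Assistance is described above as a decision tree of question-and-answer sets, here is a toy C++ sketch of that data structure, purely to illustrate the concept. It is not Oracle RightNow's implementation, and the questions and answers are made up.

#include <iostream>
#include <map>
#include <memory>
#include <string>

// Toy guided-assistance tree: each node is either a question whose answers
// lead to child nodes, or a leaf holding the suggested resolution.
struct Node {
    std::string text;                                   // question or resolution
    std::map<std::string, std::unique_ptr<Node>> next;  // answer -> child node
    bool leaf() const { return next.empty(); }
};

void run(const Node& root) {
    const Node* cur = &root;
    while (!cur->leaf()) {
        std::cout << cur->text << "\n";
        for (const auto& kv : cur->next) std::cout << "  - " << kv.first << "\n";
        std::string answer;
        std::getline(std::cin, answer);
        auto it = cur->next.find(answer);
        if (it == cur->next.end()) { std::cout << "Please pick a listed answer.\n"; continue; }
        cur = it->second.get();
    }
    std::cout << "Suggested resolution: " << cur->text << "\n";
}

int main() {
    Node root{"Does the device power on?", {}};
    auto yes = std::make_unique<Node>(); yes->text = "Reinstall the driver.";
    auto no  = std::make_unique<Node>(); no->text  = "Check the power cable and fuse.";
    root.next["yes"] = std::move(yes);
    root.next["no"]  = std::move(no);
    run(root);
    return 0;
}

The same structure could be published for self-service: walking the tree requires no agent, which is exactly the point the article makes about posting guides on support pages.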

    Read the article

  • Imaging: Paper Paper Everywhere, but None Should be in Sight

    - by Kellsey Ruppel
    Author: Vikrant Korde, Technical Architect, Aurionpro's Oracle Implementation Services team. My wedding photos are stored in several empty shoeboxes. Yes...I got married before digital photography was mainstream...which means I'm old. But my parents are really old. They have shoeboxes filled with vacation photos on slides (I doubt many of you have even seen a home slide projector...and I hope you never do!). Neither I nor my parents should have shoeboxes filled with any form of photographs whatsoever. They should obviously live in the digital world...with no physical versions in sight (other than a few framed on our walls). Businesses grapple with similar challenges. But instead of shoeboxes, they have file cabinets and warehouses jam-packed with paper invoices, legal documents, human resource files, material safety data sheets, incident reports, and the list goes on and on. In fact, regulatory and compliance rules govern many industries, requiring that this paperwork be available for any number of years. It's a real challenge...especially trying to find archived documents quickly, and many times with no backup. Which brings us to a set of technologies called Image Process Management (or simply Imaging or Image Processing) that are transforming these antiquated, paper-based processes. Oracle's WebCenter Content Imaging solution is a combination of the WebCenter suite, which offers a robust set of content and document management features, and the Business Process Management (BPM) suite, which helps to automate business processes through the definition of workflows and business rules. Overall, the solution provides an enterprise-class platform for end-to-end management of document images within transactional business processes. It's a solution that provides all of the capabilities needed - from document capture and recognition, to imaging and workflow - to effectively transform your ‘shoeboxes’ of files into digitally managed assets that comply with strict industry regulations. The terminology can be quite overwhelming if you're new to the space, so we've provided a summary of the primary components of the solution below, along with a short description of the two paths that can be executed to load images of scanned documents into Oracle's WebCenter suite. WebCenter Imaging (WCI): the electronic document repository that provides security, annotations, and search capabilities, and is the primary user interface for managing work items in the imaging solution. SOA & BPM Suites (workflow): provide business process management capabilities, including human tasks, workflow management, service integration, and all other standard SOA features. 
It's interesting to note that there are a number of 'jumpstart' processes available to help accelerate the integration of business applications, such as the accounts payable invoice processing solution for E-Business Suite that facilitates the processing of large volumes of invoices. WebCenter Enterprise Capture (WEC): expedites the capture of paper documents to digital images, offering high-volume scanning and importing from email, and allows for flexible indexing options. WebCenter Forms Recognition (WFR): automatically recognizes, categorizes, and extracts information from paper documents with greatly reduced human intervention. WebCenter Content: the backend content server that provides versioning, security, and content storage. There are two paths that can be executed to send data from WebCenter Capture to WebCenter Imaging, both of which are described below: 1. Direct Flow - This is the simplest and quickest way to push an image scanned from WebCenter Enterprise Capture (WEC) to WebCenter Imaging (WCI), using the bare minimum of metadata. The WEC activities are defined below: The paper document is scanned (or imported from email). The scanned image is indexed using a predefined indexing profile. The image is committed directly into the process flow. 2. WFR (WebCenter Forms Recognition) Flow - This is the more complex process, during which data is extracted from the image using a series of operations including Optical Character Recognition (OCR), Classification, Extraction, and Export. This process creates three files (TIFF, XML, and TXT), which are fed to the WCI Input Agent (the high-speed import/filing module). The WCI Input Agent directory is a standard ingestion method for adding content to WebCenter Imaging; the process for doing so is described below: WEC commits the batch using the respective commit profile. A TIFF file is created, passing data through the file name by including values separated by "_" (underscores). WFR completes OCR, classification, extraction, and export, and pulls the data from the image. In addition to the TIFF file, which contains the document image, an XML file containing the extracted data and a TXT file containing the metadata that will be filled in WCI are also created. All three files are exported to WCI's Input Agent directory. Based on previously defined "input masks", the WCI Input Agent picks up the seeding file (often the TXT file). Finally, the TIFF file is pushed into UCM and a unique web-viewable URL is created. Based on the mapping data read from the TXT file, a new record is created in the WCI application. Although these processes may seem complex, each Oracle component works seamlessly with the others to achieve a high-performing and scalable platform. The solution has been field-tested at some of the largest enterprises in the world and has transformed millions and millions of paper-based documents into more easily manageable digital assets. For more information on how an Imaging solution can help your business, please contact [email protected] (for U.S. West inquiries) or [email protected] (for U.S. East inquiries). About the Author: Vikrant is a Technical Architect in Aurionpro's Oracle Implementation Services team, where he delivers WebCenter-based Content and Imaging solutions to Fortune 1000 clients. With more than twelve years of experience designing, developing, and implementing Java-based software solutions, Vikrant was one of the founding members of Aurionpro's WebCenter-based offshore delivery team. He can be reached at [email protected].
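As a rough illustration of the underscore-delimited naming convention mentioned in the WFR flow above, the C++ sketch below packs a few index values into a TIFF file name and recovers them again. The field names and their order here are hypothetical; the real fields and their sequence come from the WEC commit profile, and richer metadata travels in the TXT seeding file rather than the file name.

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical example: pack index values into a file name, then recover
// them, mirroring how the commit profile passes data via "_"-separated
// values. Delimiter characters must not appear in the data itself.
std::string make_name(const std::vector<std::string>& fields) {
    std::ostringstream out;
    for (std::size_t i = 0; i < fields.size(); ++i)
        out << (i ? "_" : "") << fields[i];
    return out.str() + ".tiff";
}

std::vector<std::string> parse_name(std::string name) {
    name = name.substr(0, name.rfind(".tiff"));   // drop the extension
    std::vector<std::string> fields;
    std::istringstream in(name);
    for (std::string part; std::getline(in, part, '_'); )
        fields.push_back(part);
    return fields;
}

int main() {
    std::string f = make_name({"INV20120917", "ACME", "4500.00"});
    std::cout << f << "\n";                       // INV20120917_ACME_4500.00.tiff
    for (const auto& v : parse_name(f)) std::cout << v << "\n";
    return 0;
}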

    Read the article

  • career advice for PhD scientist seeking to program?

    - by C SD
    I'm largely a self-taught programmer. In fact, I first started programming about halfway through biophysics grad school, and even though I think I've done some pretty nice work, I've never worked as part of a 'serious' development team that had more than one or two other developers (and I wouldn't hesitate to call them equally inexperienced in software development as a profession). After finishing my PhD I applied to Google, on a lark, since I had some confidence in my abilities, if not necessarily my experience, and I was hoping to maybe slip in and absorb all the experience and talent I'd be surrounded with and become productive enough, quickly enough, that they wouldn't immediately regret their decision. I was excited to actually get invited to interview up at Mountain View (this was ~mid 2008). Overall, my memory of the interview was very positive, but after close to a three-month wait (is that normal?) they ended up turning me down. I wasn't too surprised or disappointed (aside from the uncomfortably long wait) given my unusual background and admitted lack of experience. I decided to continue as a postdoc, but focus on improving my skills rather than doing research. I've done about three years of that, and my honest assessment is that I've learned a ton more, but I really need more of a peer group to maintain or accelerate my growth. Google invited me to interview again about eight months ago, and the interview process went even better than the first time around (I thought), though they again declined to give me an offer. I have to admit this second rejection was much more discouraging. They had insisted I interview even after I mentioned to them that a move on my part was unlikely given that I had bought a house, gotten married, etc. since the first interview. I guess I was hoping they'd at least give me an offer that I could parlay into a more conventional, but still interesting, programming position close to home. So here I am, going on my third year out of grad school, a glorified postdoc, and I'm starting to get pretty discouraged. Even though I could technically get 'back on track' for a career in science, I have been focusing the vast majority of this time on gaining programming experience rather than on research and publications. The problem is, whenever I look, most job listings have requirements that seem impossibly grandiose, and I hesitate to apply. That, or the job/project seems incredibly dull. Ironically, applying to Google struck me as less intimidating. I suspect that either most people are just a lot less realistic than I am when it comes to assessing how long it will take for them to get up to speed, or they don't care; my fear is that I'm just woefully unqualified for any interesting, well-paying work. I.e., I'm confident I could switch fully back into C++ mode with a couple of weeks' work (I mostly use C, Python, and C# daily), but I don't list myself as being 'proficient' in C++ on my CV, or apply for jobs that 'require' such knowledge. The few applications for which I did feel I was a legitimately good match have not elicited a response. I suspect the following things are potential problems with my application/CV, and I would like feedback on them: I don't have a CS degree. My BS was in biochemistry and molecular biology, my PhD in biophysics. I took an undergrad and a grad-level CS course at UCSD and completely killed them, but I don't know how to translate that to my CV effectively. I have a PhD, but it's not in CS... 
I've been debating whether I should remove it from my CV, and whether or not it would then be misleading to list at least some of those years as some kind of 'programming' job (in many respects it was). I think there are sometimes strong stigmas associated with 'self-taught' programmers. I am certainly one of those. I even recognize that some of those stigmas hold a hint of truth, but I really do want to be an asset to a team. How do I communicate that even though I have been largely self-directing for ~8 years, I can still take marching orders when needed? Do I just say so outright? Should I just become a lot less scrupulous about the whole process? Anecdote: I have a friend who applied for positions where he completely fudged his qualifications to get past the first culling. He was much more honest and forthcoming about his actual qualifications when contacted, and he still managed to get invited to a couple of interviews and even got some offers. His balls are larger than mine though.

    Read the article

  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what’s new with XBRL and how it can actually benefit your organization, rather than just adding extra time and cost to financial reporting. On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC. The event, which attracted over 300 XBRL gurus and fans, was presented by XBRL US, The New York Society of Security Analysts’ Improved Corporate Reporting Committee, and Baruch College’s Robert Zicklin Center for Corporate Integrity. The event featured keynotes from the U.S. Securities and Exchange Commission (SEC) and the CFA Institute, as well as panels covering alternative research tools and data, corporate reporting to stakeholders, and a demonstration of XBRL analysis tools. The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge. Some of the key points made in the sessions included: The focus of XBRL tools is moving from production to consumption. As of February 2012, over 9,000 companies are reporting in XBRL, with over 10 million facts filed to date. XBRL taxonomy extensions have dropped from 27% to 11%, making comparisons easier. The SEC reports that XBRL makes it easier to analyze disclosures and focus on accounting issues. XBRL is helping standards-setters like the FASB speed their analysis of the impacts of proposed accounting rule changes. Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients. The most interesting part of the program, though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution. The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information. Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner of the challenge. All of the solutions are open-source tools, and most of them focus on consuming XBRL-based data. The 5 finalists included: Advanced XBRL Processing from Oxide Solutions – an XBRL viewer for taxonomies, filings and company data, with peer-comparison capabilities. Arrelle – an API for XBRL processes; supports SEC validations, RSS feeds to access filings, etc. Calcbench – an XBRL data analysis tool that can be embedded in other web applications; this tool can combine XBRL filings with real-time market data. XBRL to XL – allows the importing of XBRL data into Microsoft Excel for analysis and comparisons; users start on the web and populate Excel with XBRL data. XBurble – allows users to search and view XBRL filings, export to Excel, and merge for comparison, and includes a workflow interface. The winner of the $20,000 XBRL Challenge prize was Calcbench. More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge. XBRL for Sustainability Reporting – other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting. This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier.  
Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that do use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI, which will populate a database with Sustainability Reporting data and make this available to the public. For more information about this initiative, you can go to the GRI web site: www.globalreporting.org. So how does all of this benefit corporate filers and investors? Since its introduction, the consensus in the market is that XBRL has mainly benefited the regulators and investment analysts who need to consume and analyze large volumes of financial data. But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company versus its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff as well as individual investors. This could apply to financial results tagged in XBRL, as well as non-financial information such as Sustainability Reporting, which over the long term will likely be integrated with financial reporting. And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue as companies will be able to leverage one set of XBRL-based financial data for multiple regulatory filings. For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites: www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links: http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html Feel free to contact me if you have any questions or need more information: [email protected]

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation. A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme-performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general-purpose storage, and ultra-fast Infiniband switches, each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes; a modified version of Solaris 11 in the ZFS Storage Appliance; a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells; another Linux derivative in the Infiniband switches; etc. It has an Infiniband data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful. In another, complex. The system is highly engineered, but it's designed to run general-purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, operating system versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience, as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or systems integrator had assembled such a complex system themselves from the constituent components. For example, there are myriad network protocols which could be used with Infiniband, myriad ways the components could be interconnected, myriad tunable settings, etc. But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with third-party storage, issues with third-party multi-pathing products, etc. are simply taken out of the equation. All those opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer: Example 1: We found a node-eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (it mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up). We quickly established that an enhancement in SRU 12 enabled an 11gR2 process to query Infiniband's Subnet Manager, replacing a fallback mechanism it had used previously. Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction. 
In several daily joint debugging sessions between the Solaris, Infiniband, and 11gR2 teams, the issue was fully root-caused and evaluated, and a fix agreed upon. That fix went back into all Solaris releases the following Monday. From initial issue discovery to the fix being put back into all Solaris releases took just 10 days. Example 2: A customer reported sporadic performance degradation. The reasons were unclear and the information sparse. The SPARC SuperCluster Engineered Systems support team, which comprises both SPARC/Solaris and Database/Exadata experts, worked to root-cause the issue. A number of contributing factors were discovered, including tunable parameters. An intense collaborative investigation between the engineering teams traced the root cause to a CPU-bound networking thread which was being starved of CPU cycles under extreme load. Workarounds were identified. Modifications have been put back into 11gR2 to alleviate the issue, and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side. The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root-cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented. The customer is now extremely happy with performance and robustness. Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers. This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers. If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue. But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer. That is a key advantage of Engineered Systems which should not be underestimated. Indeed, the initial issue mitigation on the Database side followed by the final fix on the Solaris side highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams. It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • #OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the four announcements made by Larry Ellison today during the opening session of Oracle Open World 2012... To learn what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement. Oracle IaaS goes Public... and Private... Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications based on standard development, but also the underlying PaaS required to build the specifics, and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide an Infrastructure as a Service, leveraging our server, storage, OS, and virtualization technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to build their own Private Cloud rapidly, on either SPARC/Solaris or x86/Linux... The second announcement made today takes that proposition a big step further: for cautious customers (like banks, or other sensitive industries) who would like to benefit from the Cloud's "as a Service" value but don't want their data out in the Cloud, we propose to operate the same systems that provide our Public Cloud Infrastructure - Exadata, Exalogic & SuperCluster - behind their firewall, in a Private Cloud model. Oracle 12c Multitenant Database. This is another major announcement made today, on what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no additional cost, especially in terms of the memory needed on the server node, which is often THE limiting factor for consolidation. The principle can be compared to Solaris Zones: you have a Database Container, which "owns" the memory and the database background processes, and "Pluggable" Databases within this Database Container. This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it becomes available, as it is a major step forward in true database consolidation with multitenancy on a shared (optimized) infrastructure. X3H2M2, enabling the new Exadata X3 in-Memory Database. Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but since it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing we did with X3; at the same time we upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge); the memory, with 512 GB per node as well; and the new Flash Fire card, now bringing up to 22 TB of flash cache. All of this (4 TB of RAM plus 22 TB of flash) is used cleverly, not only for reads but also for writes, by the X3H2M2 algorithm, making a very big difference compared to traditional storage flash extensions. But what does this extra performance bring you on an already very efficient system? Double the performance of the fastest storage array on the market today (including flash), while dividing your storage price by 10 at the same time... Something to consider closely these days... 
Especially since we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And Andrew Mendelsohn, Senior Vice President, Database Server Technologies, came on stage to explain that the next step after I/O optimization for the database with Exadata was to accelerate the database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... The big theme of the day, and of the Oracle User Group conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data: one on finance and fraud profiling, and the other on practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to capture the scale of Oracle Open World... and you can understand why, with such rich content... and this is only the first day!

    Read the article

  • To Make Diversity Work, Managers Must Stop Ignoring Difference

    - by HCM-Oracle
    By Kate Pavao - Originally posted on Profit. Executive coaches Jane Hyun and Audrey S. Lee noticed something during their leadership development coaching and consulting: frustrated employees and overwhelmed managers. “We heard from voices saying, ‘I wish my manager understood me better’ or ‘I hope my manager would take the time to learn more about me and my background,’” remembers Hyun. “At the same token, the managers we were coaching had a hard time even knowing how to start these conversations.”  Hyun and Lee wrote Flex to address some of the fears managers have when it comes to leading diverse teams—such as being afraid of offending their employees by stumbling into sensitive territory—and also to provide a sure-footed strategy for becoming a more effective leader. Here, Hyun talks about what it takes to create innovative and productive teams in an increasingly diverse world, including the key characteristics successful managers share. Q: What does it mean to “flex”? Hyun: Flexing is the art of switching between leadership styles to work more effectively with people who are different from you. It’s not fundamentally changing who you are, but it’s understanding when you need to adapt your style in a situation so that you can accommodate people and make them feel more comfortable. It’s understanding the gap that might exist between you and others who are different, and then flexing across that gap to get the result that you're looking for. It’s up to all of us, not just managers, but also employees, to learn how to flex. When you hire new people into the organization, they're expected to adapt. The new people in the organization may need some guidance around how best to flex. They can certainly take the initiative, but if you can give them some direction around the important rules, and connect them with insiders who can help them figure out the most critical elements of the job, that will accelerate how quickly they can contribute to your organization. Q: Why is it important right now for managers to understand flexing? Hyun: The workplace is becoming increasingly younger, multicultural and female. The numbers bear it out. Millennials are entering the workforce and becoming a larger percentage of it, which is a global phenomenon. Thirty-six percent of the workforce is multicultural, and close to half is female. It makes sense to better understand the people who are increasingly a part of your workforce, and how to best lead and manage them as well. Q: What do companies miss out on when managers don’t flex? Hyun: There are high costs for losing people or failing to engage them. The estimated cost of replacing an employee is about 150 percent of that person’s salary. There are studies showing that employee disengagement costs the U.S. something like $450 billion a year. But voice is the biggest thing you miss out on if you don’t flex. Whenever you want innovation or increased productivity from your people, you need to figure out how to unleash these things. The way you get there is to make sure that everybody’s voice is at the table. Q: What are some of the common misassumptions that managers make about the people on their teams? Hyun: One is what I call the Golden Rule mentality: we assume when we go to the workplace that people are going to think like us and operate like us. 
But sometimes when you work with people from a different culture or a different generation, they may have a different mindset about doing something, or a different approach to solving a problem, or a different way to manage a situation. When we see something that’s different, we don't understand it, so we don't trust it. We have this hidden bias for people who are like us. That gets in the way of really looking at how we can tap our team members' best potential by understanding how their differences may help them be effective in our workplace. We’re trained, especially in the workplace, to make assumptions quickly, so that you can make the best business decision. But with people, it’s better to remain curious. If you want to build stronger cross-cultural, cross-generational, cross-gender relationships, before you make a judgment, share what you observe with that team member, and connect with him or her in ways that are mutually adaptive, so that you can work together more effectively. Q: What are the common characteristics you see in leaders who are successful at flexing? Hyun: One is what I call “adaptive ability”—leaders who are able to understand that someone on their team is different from them, and are willing to adapt their style accordingly. Another one is “unconditional positive regard,” which is basically acceptance of others, even in their vulnerable moments. This attitude of grace is critical and essential to a healthy environment in developing people. If you think about when people enter the workforce, they're only 21 years old. It’s quite a formative time for them. They may not have a lot of management experience, or experience managing complex or even global projects. Creating the best possible conditions for their development requires turning their mistakes into teachable moments and giving them an opportunity to really learn. Finally, these leaders are not rigid or constrained in a single mode or style. They have an insatiable curiosity about other people. They don’t judge when they see behavior that doesn’t make sense, or is different from their own. For example, maybe someone on their team is less aggressive than they are. The leader needs to remain curious and think, “Wow, I wonder how I can engage in a dialogue with this person to get their potential out in the open.”

    Read the article

  • Cepstral Analysis for pitch detection

    - by Ohmu
    Hi! I'm looking to extract pitches from a sound signal. Someone on IRC just explained to me how taking a double FFT achieves this. Specifically: take the FFT; take the log of the square of the absolute value (can be done with a lookup table); take another FFT; take the absolute value. I am attempting this using vDSP. I can't understand how I didn't come across this technique earlier; I did a lot of hunting and asking questions, several weeks' worth. More to the point, I can't understand why I didn't think of it. I am attempting to achieve this with the vDSP library; it looks as though it has functions to handle all of these tasks. However, I'm wondering about the accuracy of the final result. I have previously used a technique which scours the frequency bins of a single FFT for local maxima. When it encounters one, it uses a cunning technique (the change in phase since the last FFT) to place the actual peak within the bin more accurately. I am worried that this precision will be lost with the technique I'm presenting here. I guess the technique could be used after the second FFT to get the fundamental accurately, but it kind of looks like the information is lost in step 2. As this is a potentially tricky process, could someone with some experience just look over what I'm doing and check it for sanity? Also, I've heard there is an alternative technique involving fitting a quadratic over neighbouring bins. Is this of comparable accuracy? If so, I would favour it, as it doesn't involve remembering bin phases. So, questions: Does this approach make sense? Can it be improved? I'm a bit worried about the log-square component; there seems to be a vDSP function to do exactly that, vDSP_vdbcon. However, there is no indication it precalculates a log table; I assume it doesn't, as the FFT function requires an explicit pre-calculation function to be called and passed into it, and this function doesn't. Is there some danger of harmonics being picked up? Is there any cunning way of making vDSP pull out the maxima, biggest first? Can anyone point me towards some research or literature on this technique? The main question: is it accurate enough, and can the accuracy be improved? I have just been told by an expert that the accuracy is indeed not sufficient. Is this the end of the line? Pi. P.S. I get SO annoyed (npi) when I want to create tags, but cannot. :| I have suggested to the maintainers that SO keep track of attempted tags, but I'm sure I was ignored. We need tags for vDSP, Accelerate framework, and cepstral analysis.
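For anyone who wants to try the four steps end to end, here is a small, self-contained C++ sketch of the cepstral approach described above. It deliberately uses a naive O(n^2) DFT so it runs anywhere with no dependencies; in real code vDSP's (or FFTW's) FFT routines would replace dft(). The 8 kHz sample rate, 200 Hz test tone, and 1 kHz search ceiling are arbitrary choices for illustration.

#include <cmath>
#include <complex>
#include <iostream>
#include <vector>

// Cepstral pitch sketch: FFT -> log(|X|^2) -> FFT -> |.|, then take the
// strongest "quefrency" peak. Pitch in Hz = sample rate / quefrency.
using cd = std::complex<double>;
const double PI = std::acos(-1.0);

std::vector<cd> dft(const std::vector<cd>& x) {
    const std::size_t n = x.size();
    std::vector<cd> X(n);
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t t = 0; t < n; ++t)
            X[k] += x[t] * std::polar(1.0, -2.0 * PI * double(k * t) / double(n));
    return X;
}

int main() {
    const double fs = 8000.0, f0 = 200.0;   // sample rate and true pitch (test values)
    const std::size_t n = 1024;
    std::vector<cd> x(n);
    for (std::size_t t = 0; t < n; ++t)     // test tone plus a strong 2nd harmonic
        x[t] = std::sin(2 * PI * f0 * double(t) / fs)
             + 0.5 * std::sin(2 * PI * 2 * f0 * double(t) / fs);

    std::vector<cd> X = dft(x);             // step 1: first FFT
    std::vector<cd> logmag(n);
    for (std::size_t k = 0; k < n; ++k)     // step 2: log of squared magnitude
        logmag[k] = std::log(std::norm(X[k]) + 1e-12);

    std::vector<cd> C = dft(logmag);        // step 3: second FFT
    std::size_t qmin = std::size_t(fs / 1000.0);  // ignore pitches above 1 kHz
    std::size_t best = qmin;
    for (std::size_t q = qmin; q < n / 2; ++q)    // step 4: |.| and peak pick
        if (std::abs(C[q]) > std::abs(C[best])) best = q;

    std::cout << "estimated pitch ~ " << fs / double(best) << " Hz\n";  // ~200 Hz
    return 0;
}

Note that the quefrency peak is quantized to whole samples (here 8000/40 = 200 Hz, but a true pitch of 205 Hz would land on the same bin), which illustrates the accuracy concern raised in the question; interpolating over neighbouring quefrency bins is one common mitigation.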

    Read the article

  • Improving the Industry’s Best Cloud Project Portfolio Management (PPM) Solution – New Release of Instantis EnterpriseTrack

    - by Melissa Centurio Lopes
    By Yasser Mahmud, Vice President of Product Strategy & Industry Marketing, Oracle Primavera. We know that in today’s rapidly changing world, organizations and leaders must adapt to fierce competition, business climate change and customers consistently demanding more for less. And project portfolio management (PPM) initiatives are a key component to help organizations thrive and stand out among competitors. That’s why I’m excited to announce Instantis EnterpriseTrack 8.5. Since Oracle’s acquisition of Instantis late last year, we’ve been busy working to enhance the leading cloud PPM solution. Here’s what’s new: Perform more precise resource planning and management: Gain more precise capacity visibility for resource planning and project execution with resource calendars that capture vacation, LOA and part-time resource availability; ensure compliance and governance processes with activity labor cost capitalization; improve project labor cost estimation, tracking and administration with variable resource rates. Optimize project demand management and execution: Enhance productivity and analysis with a flexible project request staffing plan and simplified finance estimation; improve project status communication and execution with estimated time to complete (ETC) in timesheets and projects; achieve audit compliance and governance with field change history for key project and project request fields; enforce proper financial accounting processes with the new strict finance lock/close period option. Improve reporting and the user experience: Enhance user productivity and analysis with improved listing pages; improve program reporting with new program filters in listing pages and reports; run large-data-volume user-defined Excel reports with MS Excel 2010 support; accelerate user productivity and satisfaction with an improved user interface for project issues, risks, and scope changes; enjoy faster system response and improved user experience with optimized listing pages, resource planning, and application cache; deliver user self-service training on demand with UPK support. And if that wasn’t enough, we’ve also made additional improvements to timesheets, field change history and finance lock/close period. Learn more about Instantis EnterpriseTrack 8.5.

    Read the article

  • Countdown to Oracle Open World 2012: Mark Your Calendars

    - by Eric Bezille
    Now less than a month away from Oracle's major event, held as every year in San Francisco at the end of September and beginning of October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes to be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development) to get a foretaste. Oracle Strategy and Roadmaps. Of course, beyond the plenary sessions, which will give you a precise view of the strategy, for those attending in person I encourage you not to miss the deep-dive sessions taking place during the week. Here are a few selected highlights: "Accelerate your Business with the Oracle Hardware Advantage" with John Fowler, Monday, October 1, 3:15pm-4:15pm; "Why Oracle Software Runs Best on Oracle Hardware" with Bradley Carlile, head of benchmarks, Monday, October 1, 12:15pm-1:15pm; "Engineered Systems - from Vision to Game-changing Results" with Robert Shimp, Monday, October 1, 1:45pm-2:45pm; "Database and Application Consolidation on SPARC Supercluster" with Hugo Rivero, a manager in the hardware and software integration teams, Monday, October 1, 4:45pm-5:45pm; "Oracle’s SPARC Server Strategy Update" with Masood Heydari, head of SPARC server development, Tuesday, October 2, 10:15am-11:15am; "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap" with Markus Flier, head of Solaris development, Wednesday, October 3, 10:15am-11:15am; "Oracle Virtualization Strategy and Roadmap" with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday, October 1, 12:15pm-1:15pm; "Big Data: The Big Story" with Jean-Pierre Dijcks, head of Big Data product development, Monday, October 1, 3:15pm-4:15pm; "Scaling with the Cloud: Strategies for Storage in Cloud Deployments" with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday, October 1, 10:45am-11:45am. Customer experiences and testimonials. While Oracle Open World is an opportunity to engage directly with Oracle's development teams, it is also a chance to talk with customers and experts who have implemented our technologies and to benefit from their experience, for example: "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director, Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM Systems, on the business as well as the project and IT benefits, Wednesday, October 3, 1:15pm-2:15pm; "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday, October 2, 11:45am-12:45pm; "Creating a Maximum Availability Architecture with SPARC SuperCluster", with the testimonial of Carte Wright, Database Engineer at CKI, Wednesday, October 3, 11:45am-12:45pm; "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with the testimonial of Stephen Schleiger, Manager, Systems Engineering, at Navis, Monday, October 1, 1:45pm-2:45pm; "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who helped a major banking customer with the implementation, Monday, October 1, 4:45pm-5:45pm; "Oracle Exadata Customer Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday, October 2, 1:15pm-2:15pm; "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with the testimonial of Alan Powers, CTO of CSC, Thursday, October 4, 12:45pm-1:45pm; "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimonials from Syed Qadri, Lead DBA, and Michael Arnold, System Architect, US Cellular, Tuesday, October 2, 10:15am-11:15am; "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials from AT&T and Liberty Global, among others, Tuesday, October 2, 11:45am-12:45pm; "Data Warehouse and Big Data Customers’ View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, and Allianz Managed Operations and Services SE, Monday, October 1, 4:45pm-5:45pm; "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", the story of Oracle's internal IT transforming and migrating our entire storage infrastructure, Tuesday, October 2, 1:15pm-2:15pm. Exchanges with user groups and Oracle development teams. If you plan to arrive early enough, you can also meet the user groups as soon as Sunday, or the Oracle development teams every evening, on topics such as: "To Exalogic or Not to Exalogic: An Architectural Journey" with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday, September 30, 2:30pm-3:30pm; "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices" with Mark Rittman of Rittman Mead Consulting Ltd, Sunday, September 30, 10:30am-11:30am; "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed" with Rudy Verlinden & Eric Bartholomeus, IT infrastructure managers at Telenet, Sunday, September 30, 1:15pm-2:00pm; "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata" with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director of Viscosity, Sunday, September 30, 9:00am-10:00am; "Oracle’s Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies" with Jean-Pierre Dijcks and the Oracle development teams, Monday, October 1, 6:15pm-7:00pm. Test and evaluate the solutions. Finally, you can even try out the technologies at the Oracle DemoGrounds (1133 Moscone South for the Oracle Systems, OS, and virtualization area) and in the Hands-on Labs, such as: "Deploying an IaaS Environment with Oracle VM", Tuesday, October 2, 10:15am-11:15am; "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday, October 2, 11:45am-12:45pm (it is strongly recommended to complete the previous hands-on lab before taking this one); "x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday, October 3, 5:00pm-6:00pm; "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday, October 3, 1:15pm-2:15pm; "Oracle’s Pillar Axiom 600 Storage System: Power and Ease", Monday, October 1, 12:15pm-1:15pm; "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday, October 1, 1:45pm-2:45pm; "Managing Storage in the Cloud", Tuesday, October 2, 5:00pm-6:00pm; "Learn How to Write MapReduce on Oracle’s Big Data Platform", Monday, October 1, 12:15pm-1:15pm; "Oracle Big Data Analytics and R", Tuesday, October 2, 1:15pm-2:15pm; "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday, October 1, 10:45am-11:45am; "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday, October 1, 4:45pm-5:45pm; "Virtualizing Your Oracle Solaris 11 Environment", Tuesday, October 2, 1:15pm-2:15pm; "Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday, October 3, 3:30pm-4:30pm. In conclusion, a very rich week ahead, which will let you cover all the topics at the heart of your concerns, from strategy to implementation... It's a week that takes preparation, to tailor your agenda from the more than 2,000 sessions of which I have given you only a sample, and all of which you can find online.

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
Update: This blog post is based on SafePeak, which is available for free download. Today, I'd like to examine more closely one of my preferred technologies for accelerating SQL Server databases: SafePeak. SafePeak's software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I'd like to look more closely at some of these options, as some of these capabilities could help you address lagging database and application performance on your systems. To better understand the available options, it is best to start with the difference between the usual "Basic Caching" and SafePeak's "Dynamic Caching".

Basic Caching

Basic Caching (a stale, static cache) is the ability to put the results of a query into cache for a certain period of time. It is based on TTL (Time-to-live) and is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified by DML commands (update/insert/delete), the cache will still hold the same obsolete query data; in other words, Basic Caching really is a static, stale cache. As you can tell, this approach has its limitations.

Dynamic Caching

Dynamic Caching (a non-stale cache) is the ability to put the results of a query into cache while keeping the cache transaction-aware, watching for possible data modifications. Modifications can come as a result of: DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands, and complex cases of stored procedures with multiple levels of internal stored procedure logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) becomes the actual data update commands. Now that we have a basic understanding of the differences between "basic" and "dynamic" caching, let's dive in deeper.
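To make the distinction concrete, here is a minimal sketch of the two invalidation models. This is an illustration of the general technique only, not SafePeak's implementation (SafePeak works transparently, at the SQL traffic level); the class and method names are hypothetical.

import time

class TTLCache:
    """Basic caching: each result lives for a fixed TTL and may go stale
    if the underlying data changes before the TTL expires."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # query text -> (result, expiry timestamp)

    def get(self, query):
        entry = self.store.get(query)
        if entry and entry[1] > time.time():
            return entry[0]          # possibly stale data
        return None                  # missing or expired

    def put(self, query, result):
        self.store[query] = (result, time.time() + self.ttl)

class DynamicCache(TTLCache):
    """Dynamic caching: in addition to the TTL, track which tables each
    cached query reads, and evict the related entries the moment a
    write (insert/update/delete) touches one of those tables."""
    def __init__(self, ttl_seconds):
        super().__init__(ttl_seconds)
        self.readers = {}  # table -> set of query texts that read it

    def put(self, query, result, tables):
        super().put(query, result)
        for table in tables:
            self.readers.setdefault(table, set()).add(query)

    def on_write(self, table):
        """Called when a DML command (or a proc containing DML) hits a table."""
        for query in self.readers.pop(table, set()):
            self.store.pop(query, None)

cache = DynamicCache(ttl_seconds=300)
cache.put("SELECT name FROM customers", [("Alice",)], tables={"customers"})
cache.on_write("customers")                     # an UPDATE arrives: evict now
assert cache.get("SELECT name FROM customers") is None

The difference shows in the last two lines: a plain TTL cache would keep serving the old rows until the TTL expired, while the dynamic cache evicts them the moment the write arrives.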
SafePeak: A comprehensive and versatile caching platform

SafePeak comes with a wide range of caching options. Some of SafePeak's caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and data managers to achieve excellent performance acceleration and application scalability for a wide range of business cases and applications.

Automated caching of SQL Queries: Fully or semi-automated caching of all "read" SQL queries, containing any type of data, including Blobs, XMLs and Texts as well as all other standard data types. SafePeak automatically analyzes incoming queries, categorizes them into SQL Patterns, and identifies directly and indirectly accessed tables, views, functions and stored procedures.

Automated caching of Stored Procedures: Fully or semi-automated caching of all "read" stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. All procedures are analyzed in advance by SafePeak's metadata-learning process and their SQL schemas are parsed, resulting in a full understanding of the underlying code and object dependencies (tables, views, functions, sub-procedures). This enables automated or semi-automated (manually reviewed and activated by a mouse click) cache activation, with full understanding of the transaction logic for real-time cache invalidation.

Transaction-aware cache: Automated cache awareness for SQL transactions (SQL and in-procs).

Dynamic SQL Caching: Procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server load for parsing time and delivering high response times even in the most complicated use cases.

Fully Automated Caching: SQL Patterns (including SQL queries and stored procedures) that SafePeak categorizes as "read and deterministic" are automatically activated for caching.

Semi-Automated Caching: SQL Patterns categorized as "read and non-deterministic" are patterns of SQL queries and stored procedures that reference non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator, and usually most of them are activated manually for caching (point-and-click activation).

Fully Dynamic Caching: Automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of "write" commands (a DML or a stored procedure) to one of the relevant tables. This is the default setting.

Semi-Dynamic Caching: A manual cache configuration option for reducing the sensitivity of specific SQL Patterns to "write" commands to certain tables or views. An optimization technique for cases when the query data is either known to be static (like archived order details), or when the application's sensitivity to fresh data is not critical and the data can be stale for a short period of time (gaining better performance and reduced load).

Scheduled Cache Eviction: A manual cache configuration option for scheduling SQL Pattern cache eviction at certain times during the day. A very useful optimization technique when, for example, certain SQL Patterns can be cached but are time sensitive. Example: "select customers whose birthday is today", an SQL query with the getdate() function, which can and should be cached, but whose data stays relevant only until 00:00 (midnight); see the sketch after this list.

Parsing Exceptions Management: Stored procedures that were not fully parsed by SafePeak (due to overly complex dynamic SQL or unfamiliar syntax) are flagged as "Dynamic Objects" with the highest transaction-safety settings (such as full global cache eviction, and DDL Check = lock cache and check for schema changes). The SafePeak solution points the user to the Dynamic Objects that matter for cache effectiveness and provides an easy configuration interface, allowing you to improve cache hits and reduce global cache evictions. This is usually the first configuration step in a deployment.

Overriding Settings of Stored Procedures: Override the settings of stored procedures (or other object types) for cache optimization. For example, if a stored procedure SP1 performs an "insert" into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a logging or instrumentation table left by developers; by overriding the settings, a user can allow caching of the problematic stored procedure.

Advanced Cache Warm-Up: An XML-based list of queries and stored procedures (with lists of parameters) for periodic, automated pre-fetching into cache. An advanced tool for handling rarer but very performance-sensitive queries: pre-fetching them into cache delivers high performance for users' data access.

Configuration Driven by Deep SQL Analytics: All SQL queries are continuously logged and analyzed, providing users with deep SQL analytics and performance monitoring. Reduce troubleshooting from days to minutes with a heat map of database objects and SQL Patterns. The performance-driven configuration helps you focus on the settings that bring the highest performance gains, and SafePeak SQL Analytics allows continuous performance monitoring, easy identification of bottlenecks, and analysis of both real-time and historical data.

Cloud Ready: Available for instant deployment on Amazon Web Services (AWS).
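As a small illustration of the scheduled-eviction idea from the list above (again a hypothetical sketch, not SafePeak's configuration format), the birthday query can be cached with an expiry computed to land exactly at midnight:

import time
from datetime import datetime, timedelta

def seconds_until_midnight(now=None):
    """Seconds from `now` until the next 00:00."""
    now = now or datetime.now()
    midnight = (now + timedelta(days=1)).replace(hour=0, minute=0,
                                                 second=0, microsecond=0)
    return (midnight - now).total_seconds()

# Cache the birthday query with an expiry aligned to midnight, when
# "birthday = today" stops being true, rather than a fixed TTL.
store = {}
query = "SELECT * FROM customers WHERE birthday = CAST(GETDATE() AS date)"
store[query] = ([("Alice",), ("Bob",)],                  # placeholder rows
                time.time() + seconds_until_midnight())  # evict at 00:00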
As you can see, there are many options for configuring SafePeak's SQL Server database and application acceleration caching technology to fit a wide range of situations. If you're not familiar with the technology, they offer free-trial software you can download, which comes with a free "help session" to get you started. You can access the free trial here. SafePeak is also available on the Amazon cloud. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • The Birth of a Method - Where did OUM come from?

    - by user702549
It seemed fitting to start this blog entry with the OUM vision statement. The vision for the Oracle® Unified Method (OUM) is to support the entire Enterprise IT lifecycle, including support for the successful implementation of every Oracle product. Well, it's that time of year again; we just finished testing and packaging OUM 5.6. It will be released for general availability to qualifying customers and partners this month. Because of this, I've been reflecting on how the birth of Oracle's unified method, OUM, came about.

As the Release Director of OUM, I've been honored to package every method release. No, maybe you'd say that's not so special; of course, anyone can use packaging software to create an .exe file. But to me it is pretty special, because so many people work together to make each release come about. The rich content that results is what makes OUM's history worth talking about. To me, professionally speaking, working on OUM has been a labor of love. My youngest child was just 8 years old when OUM was born, and she's now in high school! Watching her grow and change has been fascinating; if you ask her, she's grown up hearing about OUM. My son would often walk into my home office and ask, "How is OUM today, Mom?" I am one of many people who take care of OUM, and I have watched the method mature over these last six years. Maybe that makes me a "Method Mom" (someone in one of my classes last year actually said this out loud), but there are so many others who collaborate on and care about OUM development.

I've thought about writing this blog entry for a long time, just to reflect on how far the method has come. Each release, as I prepare the OUM contributors list, I see how many people's experience and ideas it has taken to create this wealth of knowledge, process and task guidance, as well as templates and examples. If you're wondering how many people, just go into OUM, select the Resources button at the top of most pages of the method, and on that Resources page click the ABOUT link.

So now, back to my nostalgic moment as I finished the release 5.6 packaging. I reflected on all the things that happened to cause OUM to become not just a dream but a reality. Here are some key conditions that make each release of the method possible:

A vision to have one method instead of many, thereby focusing on deeper, richer content
People within Oracle's consulting organization willing to contribute to OUM, providing subject matter experts who are willing to write down and share what they know
Oracle's continued acquisition of software companies, and the need to assimilate high-quality existing materials from those companies
The need to bring together people from very different backgrounds and provide a common language to support Oracle product implementations that often involve multiple product families

What came first, and then what was the strategy? Initially, OUM 4.0 was based on Oracle's J2EE Custom Development Method (JCDM). It was a good "backbone" (work breakdown structure): it was Unified Process based and had good content around UML as well as custom software development, but it needed to be extended in order to achieve the OUM vision. What happened after that was to take in the best of the best: the legacy and acquired methods were scheduled for assimilation into OUM, one release after another. We built OUM incrementally.
We didn't want to lose any of the expertise reflected in AIM (Oracle's legacy Application Implementation Method), Compass (PeopleSoft's application implementation method) and so many more.

When was OUM born? OUM 4.1 was published on April 30, 2006. This release allowed Oracle's Advanced Technology groups to begin the very first implementations of Fusion Middleware. In the early days of the method we would prepare several releases a year. Our iterative release development cycle began then, and it continues to be refined with each method release; now we typically see one major release each year.

The OUM release development cycle is not unlike many Oracle implementation projects, in that we need to gather requirements, prioritize, prepare the content, test, package and then go production. Typically we develop an OUM release MoSCoW list (must have, should have, could have, and won't have) right after the prior release goes out. These are the high-level requirements. We break the timeframe into increments, with frequent checkpoints that help us assess the content and measure progress. We work as a team to prioritize what should be done in each increment, and yes, the team provides the estimates for what can be done within a particular increment. We sometimes hold method development workshops (physical or virtual) to accelerate content development on a particular subject area; that is where the best content results. As the written content nears the final stages, it goes through edit and evaluation in peer reviews, and then moves into the release staging environment. Then content freeze and testing of the method pack take place. This iterative cycle is run using the OUM artifacts that make sense, "fit for purpose": project plans, MoSCoW lists and test plans are just a few of the OUM work products we use on a method release project.

In 2007, OUM 4.3, 4.4 and 4.5 were published. With the release of 4.5, our custom BI method (Data Warehouse Method FastTrack) was assimilated into OUM. These early releases helped us align Oracle's unified method with other industry standards. Then in 2008 we made significant changes to the OUM "backbone" to support applications implementation projects, which went into the OUM 5.0 release. Now things started to get really interesting. Next came some major developments in the Envision focus area, in the area of Enterprise Architecture. We acquired some really great content from the former BEA, the Liquid Enterprise Method (LEM), along with some SMEs who were willing to work on bringing this content into OUM. The Service Oriented Architecture content in OUM is extensive and can help support the successful implementation of Fusion Middleware, as well as Fusion Applications.

Of course, we've also developed a wealth of OUM training materials; that work, too, helps to improve the method content. It is one thing to write "how to", and quite another to be able to teach people how to use the materials to improve the success of their projects. I've learned so much by teaching people how to use OUM.

What's next? So here, toward the end of 2012, what's in store in OUM 5.6? Well, I'm sure you won't be surprised that the answer is cloud computing. More details to come in the next couple of weeks! The best part of being involved in the development of OUM is seeing how many people have adopted OUM over these six years: clients, partners, and Oracle consultants. The content just gets better with each release.
I’d love to hear your comments on how OUM has evolved, and ideas for new content you’d like to see in the upcoming releases.

    Read the article

  • Why We Should Learn to Stop Worrying and Love Millennials

    - by HCM-Oracle
By Christine Mellon

Much is said and written about the new generations of employees entering our workforce, as though they are a strange specimen, a mysterious life form to be "figured out," accommodated and engaged, at a safe distance, of course. At its worst, this talk takes a critical and disapproving tone, with baby boomer employees adamantly refusing to validate this new breed of worker, let alone determine how to help them succeed and achieve their potential.

The irony of our baby-boomer resentments and suspicions is that they belie the fact that we created the very vision that younger employees are striving to achieve. From our frustrations with empty careers that did not fulfill us, from our opposition to "the man," from our sharp memories of our parents' toiling for 30 years just for the right to retire, from the simple desire not to live our lives in a state of invisibility, came the seeds of hope for something better.

One characteristic of Millennial workers that grew from these seeds is the desire to experience as much as possible. They are "Experiential Employees," with a passion for growing in diverse ways and expanding personal and professional horizons. Rather than rooting themselves in a single company for a career, or even in a single career path, these employees are committed to building a broad portfolio of experiences and capabilities that will enable them to make a difference and to leave a mark of significance in the world. How much richer is the organization that nurtures and leverages this inclination? Our curmudgeonly ways must be surrendered and our focus redirected toward building the next generation of talent ecosystems, if we are to optimize what future generations have to offer.

Accelerating Professional Development

In spite of our Boomer grumblings about Millennials' "unrealistic" expectations, the truth is that we have a well-matched set of circumstances. We have executives-in-waiting who want to learn quickly, and a concurrent, urgent need to shorten their development time, based on anticipated high levels of retirement in the next 10+ years. Since we need to rapidly skill up these heirs to the corporate kingdom, isn't it a fortunate coincidence that they are hungry to learn, develop and move fluidly throughout our organizations? So our challenge now is to efficiently operationalize the wisdom we have acquired about effective learning and development. We have already evolved from classroom-based models to diverse instructional methods. The next step is to find the best approaches to help younger employees learn quickly and apply new learnings in an impactful way.

Creating temporary or even permanent functional partnerships among Millennial employees is one way to maximize outcomes. This might take the form of two or more employees owning aspects of what once fell under a single role. While one might argue this would mean duplication of resources, it could be a short-term cost while employees come up to speed. And the potential benefits would be numerous: leveraging and validating the inherent sense of community of new generations, creating cross-functional skills with broad applicability, yielding additional perspectives and approaches to traditional work outcomes, and accelerating the performance curve for incumbents through Cooperative Learning (Johnson, D. and Johnson, R., 1989, 1999). This well-researched teaching strategy, in which students support each other in the absorption and application of new information, has been shown to deliver faster, more efficient learning and greater retention.

Alternatively, short-term contracts with exiting retirees, or former retirees, to help facilitate the development of the following generations may have merit. Again, a short-term cost, certainly; however, the gains realized in shortening the learning curve and strengthening engagement are substantial and lasting. Ultimately, creative thinking must be applied in each organization to accelerate the capabilities of our future leaders in unique ways that mesh with the current culture.

The manner in which performance is evaluated must finally shift as well. Employees will need to be assessed on how well they have developed key skills and capabilities, rather than on end-to-end mastery of functional positions they have no interest in keeping for an entire career. As we become more comfortable placing greater and greater weight on competencies rather than tasks, we will realize increased organizational agility via this new generation of workers, further enhanced by their natural flexibility and appetite for change.

Revisiting Succession

For many years, organizations have failed to deliver the desired succession planning outcomes. According to CEB's 2013 research, only 28% of current leaders were pre-identified in a succession plan. These disappointing results, along with the entrance of the experiential, Millennial employee into the workforce, may just provide the needed impetus for HR to reinvent succession processes.

We have recognized that the best professional development efforts are not always linear, and the time has come to fully adopt this philosophy in regard to succession as well. Paths to specific organizational roles will not look the same for newer generations who seek out unique learning opportunities, without consideration of a singular career destination. Rather than charting particular jobs as precursors for key positions, the experiences and skills behind what makes an incumbent successful must become essential in succession mapping. And the multitude of ways in which those experiences and skills may be acquired must be factored into the process, along with each individual employee's level of learning agility.

While this may seem daunting, it is necessary and long overdue. We have talked about the criticality of competency-based succession; however, we have not lived up to our own rhetoric. Many Boomers have experienced the same frustration in our careers: knowing we are capable of shining in a particular role, but being denied the opportunity because of how our career history lined up, on paper, with documented job requirements. These requirements usually emphasized past jobs, titles and specific tasks, rather than capabilities, drive and willingness (let alone determination) to learn new things. How satisfying would it be for us to leave a legacy where such narrow thinking no longer applies and potential is amplified?

Realizing Diversity

Another bloom from the seeds we Boomers have tried to plant over the past decades is a completely evolved view of diversity. Millennial employees assume a diverse workforce, and are startled by anything less. Their social tolerance, nurtured by wide and diverse networks, is unprecedented. College graduates expect a landscape in the "real world" similar to what they have experienced throughout their lives. They appreciate and seek out divergent points of view and experiences without needing any persuasion. The face of our U.S. workforce will likely see dramatic change as Millennials apply their fresh take on hiring and building strong teams, with an inherent sense of inclusion. This wonderful aspect of the Millennial wave should be celebrated and strongly encouraged, as it is the fulfillment of our own aspirations.

Future Perfect

The Experiential Employee operates more as a free agent than a long-term player, and their commitment will last essentially as long as a meaningful organizational culture and personal and professional opportunities keep their interest. As Boomers, we have laid the foundation for this new, spirited employment attitude, and we should take pride in knowing that. Generations to come will challenge organizations to excel in how they identify, manage and nurture talent. Let's support and revel in the future that we've helped invent, rather than lament what we think has been lost. After all, the future is always connected to the past. And as so eloquently phrased by Antoine Lavoisier, French nobleman, chemist and politico: "Nothing is Lost, Nothing is Created, and Everything is Transformed."

Christine has over 25 years of diverse HR experience. She has held HR consulting and corporate roles, including CHRO positions for Echostar in Denver, a 6,000+ employee global engineering firm, and Aepona, a startup software firm successfully acquired by Intel. Christine is a resource to Oracle clients, assisting with Human Capital Management strategy development and implementation, compensation practices, talent development initiatives, employee engagement, global HR management, and integrated HR systems and processes that support the full employee lifecycle.

    Read the article

  • Unlocking Productivity

    - by Michael Snow
Unlocking Productivity in Life Sciences with Consolidated Content Management

by Joe Golemba, Vice President, Product Management, Oracle WebCenter

As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this: information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of far-flung data is no simple job, but the alternative (wasted effort or even regulatory compliance issues) is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive.

Unstructured data matters

Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they typically find diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked in siloed systems, unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially holds information that can better inform specific efforts.

Historically, content management solutions with 21 CFR Part 11 capabilities (electronic records and signatures) were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee's typical business activity. Administrative safeguards, such as content de-duplication, can also be applied within ECM systems so documents are never recreated, eliminating redundant effort, ensuring one source of truth, and maintaining content standards in the organization.
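To make the de-duplication safeguard concrete, here is a small sketch of content-addressed check-in, one common way such a safeguard can be implemented. It is a generic illustration with hypothetical names, not the mechanism of Oracle WebCenter or any particular ECM product.

import hashlib

repository = {}  # content hash -> stored record

def check_in(data: bytes, metadata: dict):
    """Store a document only if its content is new: identical content
    hashes to the same key, so a duplicate upload is detected and the
    existing record is reused instead of recreated."""
    key = hashlib.sha256(data).hexdigest()
    if key in repository:
        return key, False                    # duplicate: nothing stored
    repository[key] = {"content": data, "meta": metadata}
    return key, True

key1, created = check_in(b"experimental protocol v7", {"author": "lab-3"})
key2, again = check_in(b"experimental protocol v7", {"author": "lab-9"})
assert key1 == key2 and created and not again   # second check-in de-duplicated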
Putting it in context

Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed, through contextual search. Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationships between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data returned during searches, as well as to prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee's day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement.

Of course, diverse data repositories, combined with the immense amounts of data present in an organization, require that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing a single, consolidated data warehouse is a relatively straightforward effort. However, organizations need the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing.
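The "tissue" example can be made concrete with a toy sketch of taxonomy-driven contextual search. The tree, tags, and documents below are hypothetical illustrations of the idea; real taxonomy engines layer indexing, caching, and security trimming on top.

from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """One term in a hierarchical taxonomy; children refine the term."""
    term: str
    children: list = field(default_factory=list)

    def descendants(self):
        """This term plus every term underneath it in the tree."""
        terms = {self.term}
        for child in self.children:
            terms |= child.descendants()
        return terms

# Two hypothetical branches of a life sciences taxonomy.
research = TaxonomyNode("tissue", [TaxonomyNode("biopsy sample"),
                                   TaxonomyNode("tissue culture")])
procurement = TaxonomyNode("lab supplies", [TaxonomyNode("tissue wipes")])

def contextual_search(documents, keyword, context):
    """Keyword match, narrowed to documents tagged under the searcher's
    branch of the taxonomy - the 'contextual filter'."""
    allowed = context.descendants()
    return [d for d in documents
            if keyword in d["text"] and allowed & set(d["tags"])]

docs = [{"text": "tissue staining protocol", "tags": ["biopsy sample"]},
        {"text": "tissue wipes reorder form", "tags": ["tissue wipes"]}]

print(contextual_search(docs, "tissue", research))     # the lab researcher's view
print(contextual_search(docs, "tissue", procurement))  # procurement's view

The same keyword returns a different result set depending on which branch of the tree supplies the context, which is exactly the behavior described above.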
Security and compliance

Beyond yielding efficiencies and supporting new insight, an enterprise search environment can also serve important security considerations and compliance initiatives. For example, these systems let organizations retain the relevance and the security of the indexed systems, so users can see only the results they have been granted access to. This is especially important as life sciences companies work in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the types of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide, with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures, is becoming increasingly critical for life sciences companies.

Making the move

Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools that can dramatically speed up the data migration process and reduce its cost through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process.

The ability to effectively leverage information has been gaining importance in the life sciences industry for years. The rapid adoption of enterprise content management, in operational processes as well as in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and accelerate time to market, in order to maintain profitability and stay competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, divining knowledge from the vast pool of information becomes increasingly difficult. Simple search engines and basic content management are increasingly unable to extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can focus on the right decisions that make our organizations more profitable.

More Information

Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

< Previous Page | 7 8 9 10 11 12  | Next Page >