Search Results

Search found 6479 results on 260 pages for 'audio analysis'.

  • Apps crashing with EXC_BAD_ACCESS when changing to a custom keyboard layout

    - by Adam Lindberg
    I have a custom layout installed (svdvorak_mac6.keylayout). After a reboot another keyboard layout was selected, so I selected my usual one instead. This led to a lot of apps suddenly starting to crash (Chrome, Skype, Adium etc.). I can change to any of the built-in layouts for OS X, but as soon as I choose a custom installed one (either from ~/Library/Keyboard Layouts/ or from /Library/Keyboard Layouts/) the apps crash. The only thing I remember doing before the reboot was installing Google's video chat plugin for the browser. Here's the crash report from Adium:

    Process:         Adium [372]
    Path:            /Applications/Adium.app/Contents/MacOS/Adium
    Identifier:      com.adiumX.adiumX
    Version:         1.4.1 (1.4.1)
    Code Type:       X86 (Native)
    Parent Process:  launchd [182]

    Date/Time:       2010-12-22 22:39:47.833 +0100
    OS Version:      Mac OS X 10.6.5 (10H574)
    Report Version:  6

    Interval Since Last Report:          257401 sec
    Crashes Since Last Report:           39
    Per-App Interval Since Last Report:  1178959 sec
    Per-App Crashes Since Last Report:   8
    Anonymous UUID:                      7CBACDEB-FBAF-4CD5-9C15-7AEA8AC4B5EF

    Exception Type:  EXC_BAD_ACCESS (SIGBUS)
    Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000000
    Crashed Thread:  0  Dispatch queue: com.apple.main-thread

    Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
    0   com.apple.HIToolbox       0x99722b6d islGetInputSourceProperty + 1107
    1   com.apple.HIToolbox       0x9972262c TSMGetInputSourceProperty + 526
    2   com.apple.HIToolbox       0x9972787a _ISSendWindowServerKeyboardLayoutUpdate + 412
    3   com.apple.HIToolbox       0x9972622b _TSMSetInputSourceSelected + 1429
    4   com.apple.HIToolbox       0x99980209 TSMMessagePortCallBack + 574
    5   com.apple.CoreFoundation  0x9226840c __CFMessagePortPerform + 540
    6   com.apple.CoreFoundation  0x921d34db __CFRunLoopRun + 6523
    7   com.apple.CoreFoundation  0x921d1464 CFRunLoopRunSpecific + 452
    8   com.apple.CoreFoundation  0x921d1291 CFRunLoopRunInMode + 97
    9   com.apple.HIToolbox       0x99717f58 RunCurrentEventLoopInMode + 392
    10  com.apple.HIToolbox       0x99717d0f ReceiveNextEventCommon + 354
    11  com.apple.HIToolbox       0x99717b94 BlockUntilNextEventMatchingListInMode + 81
    12  com.apple.AppKit          0x9189478d _DPSNextEvent + 847
    13  com.apple.AppKit          0x91893fce -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 156
    14  com.apple.AppKit          0x91856247 -[NSApplication run] + 821
    15  com.apple.AppKit          0x9184e2d9 NSApplicationMain + 574
    16  com.adiumX.adiumX         0x0000322e 0x1000 + 8750

    Thread 1:  Dispatch queue: com.apple.libdispatch-manager
    0   libSystem.B.dylib         0x968f5982 kevent + 10
    1   libSystem.B.dylib         0x968f609c _dispatch_mgr_invoke + 215
    2   libSystem.B.dylib         0x968f5559 _dispatch_queue_invoke + 163
    3   libSystem.B.dylib         0x968f52fe _dispatch_worker_thread2 + 240
    4   libSystem.B.dylib         0x968f4d81 _pthread_wqthread + 390
    5   libSystem.B.dylib         0x968f4bc6 start_wqthread + 30

    Thread 2:
    0   libSystem.B.dylib         0x968f4a12 __workq_kernreturn + 10
    1   libSystem.B.dylib         0x968f4fa8 _pthread_wqthread + 941
    2   libSystem.B.dylib         0x968f4bc6 start_wqthread + 30

    Thread 3:  com.apple.CFSocket.private
    0   libSystem.B.dylib         0x968ee0c6 select$DARWIN_EXTSN + 10
    1   com.apple.CoreFoundation  0x92211c83 __CFSocketManager + 1091
    2   libSystem.B.dylib         0x968fc85d _pthread_start + 345
    3   libSystem.B.dylib         0x968fc6e2 thread_start + 34

    Thread 4:
    0   libSystem.B.dylib         0x968f4a12 __workq_kernreturn + 10
    1   libSystem.B.dylib         0x968f4fa8 _pthread_wqthread + 941
    2   libSystem.B.dylib         0x968f4bc6 start_wqthread + 30

    Thread 5:
    0   libSystem.B.dylib         0x968cf15a semaphore_timedwait_signal_trap + 10
    1   libSystem.B.dylib         0x968fcce5 _pthread_cond_wait + 1066
    2   libSystem.B.dylib         0x9692bac8 pthread_cond_timedwait_relative_np + 47
    3   ...apple.AddressBook.framework  0x9310043f -[ABRemoteImageLoader workLoop] + 283
    4   com.apple.Foundation      0x97822bf0 -[NSThread main] + 45
    5   com.apple.Foundation      0x97822ba0 __NSThread__main__ + 1499
    6   libSystem.B.dylib         0x968fc85d _pthread_start + 345
    7   libSystem.B.dylib         0x968fc6e2 thread_start + 34

    Thread 6:
    0   libSystem.B.dylib         0x968cf0fa mach_msg_trap + 10
    1   libSystem.B.dylib         0x968cf867 mach_msg + 68
    2   com.apple.CoreFoundation  0x921d237f __CFRunLoopRun + 2079
    3   com.apple.CoreFoundation  0x921d1464 CFRunLoopRunSpecific + 452
    4   com.apple.CoreFoundation  0x921d1291 CFRunLoopRunInMode + 97
    5   com.apple.Foundation      0x9785b7d0 +[NSURLConnection(NSURLConnectionReallyInternal) _resourceLoadLoop:] + 329
    6   com.apple.Foundation      0x97822bf0 -[NSThread main] + 45
    7   com.apple.Foundation      0x97822ba0 __NSThread__main__ + 1499
    8   libSystem.B.dylib         0x968fc85d _pthread_start + 345
    9   libSystem.B.dylib         0x968fc6e2 thread_start + 34

    Thread 0 crashed with X86 Thread State (32-bit):
      eax: 0x00000670  ebx: 0x9972272e  ecx: 0x00000000  edx: 0x00000002
      edi: 0xa0c3b214  esi: 0x00000004  ebp: 0xbfffe6c8  esp: 0xbfffe660
       ss: 0x0000001f  efl: 0x00010202  eip: 0x99722b6d   cs: 0x00000017
       ds: 0x0000001f   es: 0x0000001f   fs: 0x00000000   gs: 0x00000037
      cr2: 0x00000000

    Binary Images:
       0x1000 -  0x1a0ff7  +com.adiumX.adiumX 1.4.1 (1.4.1) <136586E8-F3F5-99ED-DB1F-48C0027686CB> /Applications/Adium.app/Contents/MacOS/Adium
       [roughly 100 further entries for Adium's bundled frameworks, sasl2 plug-ins and system libraries followed before the post was cut off] snip... Superuser character limit :-(

  • Fake ISAPI Handler to serve static files with extensions that are rewritten by a URL rewriter

    - by developerit
    Introduction
    I often map the .html extension to the ASP.NET DLL in order to use a URL rewriter with .html extensions. Recently, in the new version of www.nouvelair.ca, we renamed all URLs to end with .html. This worked great, but failed when we used FCK Editor: static HTML files would no longer get served, because we had mapped the .html extension to the .NET Framework. What can we do to keep using the .html extension with our rewriter while preserving the normal IIS behavior for static HTML files?

    Analysis
    I thought this could be solved with a simple HTTP handler. We would map URLs of static files in our rewriter to this handler, which would read the static file and serve it, just as IIS would do.

    Implementation
    This is how I coded the class. Note that this may not be bulletproof. I only tested it once, and I am sure that the logic behind IIS is more complicated than this. If you find errors or think of possible improvements, let me know.

    Imports System.Web
    Imports System.Web.Services

    ' Author: Nicolas Brassard
    ' For: Solutions Nitriques inc. (http://www.nitriques.com)
    ' Date Created: April 18, 2009
    ' Last Modified: April 18, 2009
    ' License: CPOL (http://www.codeproject.com/info/cpol10.aspx)
    ' Files: ISAPIDotNetHandler.ashx, ISAPIDotNetHandler.ashx.vb
    ' Class: ISAPIDotNetHandler
    ' Description: Fake ISAPI handler to serve static files.
    '              Useful when you want to serve a static file that has a rewritten extension.
    '              Example: I often map the html extension to the asp.net dll in order to use
    '              a URL rewriter with .html. If you still want to serve static html files,
    '              add a rewriter rule to redirect html files to this handler.
    Public Class ISAPIDotNetHandler
        Implements System.Web.IHttpHandler

        Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
            ' Since we are doing the job IIS normally does with html files,
            ' we set the content type to match html. You may want to customize
            ' this with your own logic if you want to serve txt, xml or any other text file.
            context.Response.ContentType = "text/html"

            ' We begin a Try here. Any error that occurs results in a 404 Page Not Found error,
            ' replicating the behavior of IIS when it doesn't find the corresponding file.
            Try
                ' Declare a local variable containing the value of the query string
                Dim uri As String = context.Request("fileUri")

                ' If the value in the query string is null, throw an error to generate a 404
                If String.IsNullOrEmpty(uri) Then
                    Throw New ApplicationException("No fileUri")
                End If

                ' If the value in the query string doesn't end with .html, block access.
                ' Without this check there would be a HUGE security hole, since the handler
                ' could permit full read access to .aspx, .config, etc.
                If Not uri.ToLower.EndsWith(".html") Then
                    ' Throw an error to generate a 404
                    Throw New ApplicationException("Extension not allowed")
                End If

                ' Map the file on the server. If the file doesn't exist on the server,
                ' opening it below throws an exception and generates a 404.
                Dim fullPath As String = context.Server.MapPath(uri)

                ' Read the actual file
                Dim stream As IO.StreamReader = FileIO.FileSystem.OpenTextFileReader(fullPath)

                ' Write the file into the response
                context.Response.Output.Write(stream.ReadToEnd)

                ' Close and dispose the stream
                stream.Close()
                stream.Dispose()
                stream = Nothing
            Catch ex As Exception
                ' Set the status code of the response
                context.Response.StatusCode = 404 ' Page not found

                ' For testing and debugging only! This may cause a security leak:
                ' context.Response.Output.Write(ex.Message)
            Finally
                ' In all cases, flush and end the response
                context.Response.Flush()
                context.Response.End()
            End Try
        End Sub

        ' Automatically generated by Visual Studio
        ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable
            Get
                Return False
            End Get
        End Property
    End Class

    Conclusion
    As you can see, with our static files mapped to this handler using a query string (e.g. /ISAPIDotNetHandler.ashx?fileUri=index.html), you get the same behavior as if you had requested the URI /index.html. Finally, test this only in IIS with the .html extension mapped to aspnet_isapi.dll. URL rewriting will work in Cassini (the internal web server shipped with Visual Studio), but it is not the same as IIS, since there EVERY request is handled by .NET.

    Versions
    First release

  • SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison

    - by pinaldave
    Earlier, I wrote about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative. I received many emails asking for a performance analysis of paging, so here is a quick one. The real challenge of paging is all the unnecessary IO reads from the database; network traffic is one of the reasons paging has become such an expensive operation. I have seen many legacy applications where the complete resultset is brought back to the application and paged there. As you read earlier, SQL Server 2011 offers a better alternative to this age-old solution. This article is divided into two parts:

    Test 1: Performance comparison of two different pages using the SQL Server 2011 method. In this test, we will analyze the performance of two different pages, where one is at the beginning of the table and the other is at its end.

    Test 2: Performance comparison of the two different pages using a CTE (the earlier solution from SQL Server 2005/2008) and the new method of SQL Server 2011.

    Test 1: Retrieving a page from two different locations in the table. Run the following T-SQL script and compare the performance:

    SET STATISTICS IO ON;
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO

    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO

    You will notice that when we read the page from the beginning of the table, the database pages read are much lower than when the page is read from the end of the table. This is very interesting: as the OFFSET changes, page IO increases or decreases accordingly. In the typical search-engine scenario, people usually read only the first few pages, which means IO increases only as they navigate deeper. I am really impressed because, using the new method of SQL Server 2011, page IO is much lower when the first few pages of the navigation are fetched.
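    An aside that is not part of the original test scripts: in production code, the OFFSET/FETCH pattern benchmarked above usually lives behind a parameterized stored procedure rather than an ad-hoc batch. A minimal sketch, with an illustrative procedure name:

    -- Sketch only: wraps the OFFSET/FETCH paging pattern in a reusable procedure.
    CREATE PROCEDURE dbo.GetSalesOrderDetailPage
        @RowsPerPage INT = 10,
        @PageNumber  INT = 0   -- zero-based page index, as in the tests above
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET @PageNumber * @RowsPerPage ROWS
        FETCH NEXT @RowsPerPage ROWS ONLY;
    END
    GO

    -- Usage:
    -- EXEC dbo.GetSalesOrderDetailPage @RowsPerPage = 10, @PageNumber = 5;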
    Test 2: Retrieving a page from two different locations in the table and comparing to earlier versions. In this test, we will compare the queries of Test 1 with the earlier solution via a Common Table Expression (CTE), which we utilized in SQL Server 2005 and SQL Server 2008.

    Test 2A: Page early in the table

    -- Test with pages early in the table
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
    ;WITH CTE_SalesOrderDetail AS (
        SELECT *, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS RowNumber
        FROM Sales.SalesOrderDetail PC)
    SELECT *
    FROM CTE_SalesOrderDetail
    WHERE RowNumber >= @PageNumber*@RowsPerPage+1
      AND RowNumber <= (@PageNumber+1)*@RowsPerPage
    ORDER BY SalesOrderDetailID
    GO
    SET STATISTICS IO ON;
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO

    Test 2B: Page later in the table

    -- Test with pages later in the table
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
    ;WITH CTE_SalesOrderDetail AS (
        SELECT *, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS RowNumber
        FROM Sales.SalesOrderDetail PC)
    SELECT *
    FROM CTE_SalesOrderDetail
    WHERE RowNumber >= @PageNumber*@RowsPerPage+1
      AND RowNumber <= (@PageNumber+1)*@RowsPerPage
    ORDER BY SalesOrderDetailID
    GO
    SET STATISTICS IO ON;
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO

    From the resultsets, it is very clear that the pages read with the earlier solution are always much higher than with the new technique introduced in SQL Server 2011, even though we do not retrieve all the data to the screen. If you look carefully at both comparisons, the page IO is much lower with the new technique, both when we read a page from the beginning of the table and when we read one from the end. I consider this a big improvement, as paging is one of the most heavily used features in most applications. The solution introduced in SQL Server 2011 is very elegant because it also improves the performance of the query and, at large, of the database.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Netbeans 7.2 Missing Modules Warning

    - by el10780
    Every time I start NetBeans, when the splash screen reaches the point of loading the modules, I receive the following error message:

    Warning - could not install some modules:

        Editor Library 2 - None of the modules providing the capability org.netbeans.modules.editor.actions could be installed.
        Tags Based Editors Library - The module named org.netbeans.modules.editor.deprecated.pre65formatting/0-1 was needed and not found.
        Java Editor Library - The module named org.netbeans.modules.editor.deprecated.pre65formatting/0-1 was needed and not found.
        Preprocessor Bridge - None of the modules providing the capability org.netbeans.modules.java.preprocessorbridge.spi.JavaSourceUtilImpl could be installed.
        Freeform Ant Projects - The module named org.netbeans.modules.editor.indent.project/0-1 was needed and not found.
        Editor Code Templates - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        Static Analysis Core - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        Java Source - The module named org.netbeans.modules.editor.indent.project/0-1 was needed and not found.
        Eclipse Project Importer - The module named org.netbeans.modules.java.api.common/0-1 was needed and not found.
        Java Hints SPI - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        Java Refactoring - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        Java Editor - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
        Java Editor - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        Java Editor - The module named org.netbeans.modules.editor.deprecated.pre65formatting/0-1 was needed and not found.
        Java Hints UI - The module named org.netbeans.modules.code.analysis/0-1 was needed and not found.
        Java Hints UI - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        Legacy Java Hints SPI - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        Java Hints - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        Java Declarative Hints - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        Javadoc - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
        Javadoc - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        Common Scripting Language API (new) - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        XML Text Editor - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
        XML Text Editor - The module named org.netbeans.modules.editor.deprecated.pre65formatting/0-1 was needed and not found.
        CSS Editor - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
        HTML Editor - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
        JavaScript Editing - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
        JavaScript Hints - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
        Editing Files - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
        IDE Platform - The module named org.netbeans.modules.editor.macros/0-1 was needed and not found.
        Java SE Projects - The module named org.netbeans.modules.java.api.common/0-1 was needed and not found.

        86 further modules could not be installed due to the above problems.
    Whichever option I choose, Exit or Disable Modules and Continue, or even if I close the warning from the "X" button, the window closes and then NetBeans never starts. I have looked it up on the Internet, but I couldn't find a solution.

  • Oracle BI and XS Energy Drinks – Don’t Miss the Amway Presentation!

    - by Maria Forney
    Amway is a global leader in the direct sales industry, with $10.9B in annual sales in more than 100 countries and territories. The company has implemented a global BI framework that provides accurate, consistent, and timely insights to support global, regional and local analytical research, business planning, performance measurement and assessment. Oracle BI EE is used by 1,500 employees across Amway sales, marketing, finance, and supply chain business units, as well as Amway affiliates in Europe, Russia, South Africa, Japan, Australia, Latin America, Malaysia, Vietnam, and Indonesia.

    Last week, I spoke with Dan Arganbright, Lead Data Analyst with Amway Global Sales, and Mike Olson, IT Manager with the Amway BI Competency Center, about their upcoming presentation at Oracle OpenWorld in San Francisco. Scheduled during a prime speaking slot on Monday, October 1 at 12:15pm in Moscone West, 2007, Dan and Mike will discuss their experience building Amway's Distributor Consulting solution, powered by Oracle BI EE. You can find more information here.

    As background, Amway offers people an opportunity to own their own businesses, and offers consumers exclusive products in health and wellness, beauty and home care. The Amway internal Sales organization is charged with consulting leadership-level Distributors to help them with data insights and ultimately grow their business. Until recently, this was a resource-intensive process of gathering and formatting data. In some markets, it took over 40 hours to collect the data and produce the analysis needed for one consultation session.

    Amway began its global BI journey in 2006, and since then the company has migrated from having multiple technology providers and integration points to an integrated strategic vendor approach. Today, the company has standardized on Oracle technology for BI. Amway has achieved cost savings through the retirement of redundant technology platforms. In addition, Mike's organization has led the charge to align disparate BI organizations into a BI Competency Center. The following diagram highlights the simplicity of Amway's standardized architecture today.

    Amway has developed a BI solution, dubbed Distributor Consulting, using the Oracle technology stack to help Distributor leaders grow their businesses. The Distributor Consulting solution gives Sales staff over 40 metrics with which to deliver data-driven insights on the Distributors and organizations they support. Using Oracle BI EE, Exadata, and Oracle Data Integrator, Amway provides customized and personalized business intelligence, and the Oracle BI EE dashboards were developed by the Amway Sales organization, which demonstrates business empowerment of the technology.

    Amway is also leveraging the power of BI to drive business growth in all of its markets. A new set of Distributor Segmentation metrics is enabling a better understanding of distributor behaviors. A Global Scorecard that Amway developed provides key metrics at a market and global level for executive-level discussions. Product Analysis teams can now highlight repeat purchase rates, product penetration and the success of CRM campaigns.

    In the words of Dan and Mike, the addition of Exadata 11 months ago has been "a game changer." Amway has been able to dramatically reduce complexity, improve performance and increase business productivity and cost savings. For example, the number of indexes on the global data warehouse was reduced from more than 1,000 to less than 20. Pulling data for the highest-level distributors or the largest markets in the company can now be done in minutes instead of hours. As a result, IT has shifted from performance tuning and keeping the system operational to higher-value, business-focused activities.

        "The distributors that have been introduced to the BI reports have found them extremely helpful. Because they have never had this kind of information before, when they were presented with the reports, they wanted to take action immediately!" - Sales Development Manager in Latin America

    Without giving away more, the Amway case study presentation will be one of the unique customer sessions at OpenWorld this year. Speakers Dan Arganbright and Mike Olson have planned an interactive and entertaining session on Monday, October 1 at 12:15pm in Moscone West, 2007. I'll see you there!

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days, data integration often becomes a key aspect of business success. For business analysts it's very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., in order to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them: Skyvia. Skyvia is a cloud data integration service that integrates data across cloud CRMs and different relational databases. It is a completely online solution and does not require anything except a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server, other databases, and cloud CRMs.

    You can use Skyvia data import tools to load data from various sources into SQL Server (and SQL Azure). Skyvia supports cloud CRMs such as Salesforce and Microsoft Dynamics CRM, and databases such as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers.

    When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers; this allows synchronizing only the changed data (a minimal sketch of this pattern appears below). Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without adding any additional columns or changing the data structure. The only requirement for synchronization is that primary keys must be autogenerated.

    With Skyvia it's not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting: loading data from a single CSV file or source table into multiple related target tables. Or you may load data from several source CSV files or tables into several related target tables. In each case Skyvia preserves data relations and builds the corresponding relations between the target data automatically.

    When you often work with cloud CRM data, the native CRM reporting and analysis tools may not be enough for you, while a vast set of professional data analysis and reporting tools is available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to a SQL Server database and apply the corresponding SQL Server tools to the data. In this case you can use Skyvia data replication tools, which let you quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You need only specify the columns to copy data from; the target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need.
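    To make the change-tracking idea above concrete, here is a minimal, hypothetical T-SQL sketch of the tracking-table-plus-trigger pattern. The table, trigger, and column names are illustrative only; the objects Skyvia actually generates, and their structure, will differ.

    -- Hypothetical sketch: log which rows changed and how, so a later sync pass
    -- can move only the changed rows instead of rescanning the whole table.
    CREATE TABLE dbo.Customers_TrackingLog (
        LogId      INT IDENTITY(1,1) PRIMARY KEY,
        CustomerId INT NOT NULL,                              -- PK of the changed row
        Operation  CHAR(1) NOT NULL,                          -- 'I', 'U' or 'D'
        ChangedAt  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO

    CREATE TRIGGER dbo.trg_Customers_Track
    ON dbo.Customers
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Rows only in "inserted" are inserts, rows only in "deleted" are deletes,
        -- and rows present in both are updates.
        INSERT INTO dbo.Customers_TrackingLog (CustomerId, Operation)
        SELECT COALESCE(i.CustomerId, d.CustomerId),
               CASE WHEN d.CustomerId IS NULL THEN 'I'
                    WHEN i.CustomerId IS NULL THEN 'D'
                    ELSE 'U' END
        FROM inserted i
        FULL OUTER JOIN deleted d ON i.CustomerId = d.CustomerId;
    END
    GO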
    Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can be either downloaded manually or loaded to cloud file storages or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox.

    Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization: configure it once, schedule it, and benefit from automatic data integration with Skyvia. Currently, registration and use of Skyvia are completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
    Tagged: Cloud Computing

  • ORA-4031 Troubleshooting

    - by [email protected]
QUICKLINK: Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error; Note 1087773.1: ORA-4031 Diagnostics Tools [Video]

Have you observed an ORA-04031 error reported in your alert log? An ORA-4031 error is raised when memory is unavailable for use or reuse in the System Global Area (SGA). The error message indicates the memory pool that hit the error, along with high-level information about what kind of allocation failed and how much memory was unavailable. The challenge with ORA-4031 analysis is that the error and its associated trace describe a "victim" of the problem: the failing code ran into the memory limitation, but in almost all cases it was not part of the root problem.

Looking for the best way to diagnose? When an ORA-4031 error occurs, a trace file is generated and noted in the alert log if the process experiencing the error is a background process. User processes may experience errors without any report in the alert log or traces being generated. The V$SHARED_POOL_RESERVED view records misses for memory over the life of the database. Diagnostic scripts are available in Note 430473.1 to help in analysis of the problem, and Note 1087773.1 includes a training video on using and interpreting the script data.

11g Diagnosability
Starting with Oracle Database 11g Release 1, the Diagnosability infrastructure places traces and core files into a location controlled by the DIAGNOSTIC_DEST initialization parameter when an incident, such as an ORA-4031, occurs. In earlier versions, the trace file is written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). The trace file contains vital information about what led to the error condition. See Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video].

Oracle Configuration Manager (OCM)
Oracle Configuration Manager (OCM) works with My Oracle Support to enable a proactive support capability that helps you organize, collect and manage your Oracle configurations.
Oracle Configuration Manager Quick Start Guide
Note 548815.1: My Oracle Support Configuration Management FAQ
Note 250434.1: BULLETIN: Learn More About My Oracle Support Configuration Manager

Common Causes/Solutions
The ORA-4031 can occur for many different reasons. Some possible causes are:
- SGA components too small for the workload
- Auto-tuning issues
- Fragmentation due to application design
- Bugs/leaks in memory allocations
For more on the ORA-4031 and how it affects the SGA, see Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error.

Because of the multiple potential causes, it is important to gather enough diagnostics that an appropriate solution can be identified. Most commonly, however, the cause is associated with configuration tuning: ensuring that MEMORY_TARGET or SGA_TARGET is large enough to accommodate the workload avoids many scenarios. The default trace associated with the error provides only very high-level information about the memory problem and the "victim" that ran into it; the data in the default trace will not point to the root cause of the problem. When migrating from 9i to 10g and higher, it is necessary to increase the size of the Shared Pool due to changes in the basic design of the shared memory area. See Note 270935.1 Shared pool sizing in 10g.

NOTE: Diagnostics on the errors should be investigated as close to the time of the error(s) as possible. If you must restart a database, it is not feasible to diagnose the problem until the database has matured and/or started seeing the problems again.

Note 801787.1 Common Cause for ORA-4031 in 10gR2, Excess "KGH: NO ACCESS" Memory Allocation

***For reference to the content in this blog, refer to Note 1088239.1 Master Note for Diagnosing ORA-4031
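As a quick first check on the reserved area mentioned above, the statistics can be pulled with a simple query. This is a minimal sketch against the standard V$SHARED_POOL_RESERVED view; treat it as a starting point rather than a complete diagnostic:

-- Misses and failures against the shared pool reserved area;
-- these counters accumulate over the life of the instance.
SELECT requests,
       request_misses,
       request_failures,
       last_failure_size,
       free_space,
       max_free_size
  FROM v$shared_pool_reserved;

A REQUEST_FAILURES value that keeps climbing between checks suggests that allocations from the reserved area are failing and the shared pool sizing deserves a closer look.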

    Read the article

  • LexisNexis and Oracle Join Forces to Prevent Fraud and Identity Abuse

    - by Tanu Sood
Author: Mark Karlstrand

About the Writer: Mark Karlstrand is a Senior Product Manager at Oracle focused on innovative security for enterprise web and mobile applications. Over the last sixteen years Mark served as a director at a number of tech startups before joining Oracle in 2007. Working with a team of talented architects and engineers, Mark developed Oracle Adaptive Access Manager, a best-of-breed access security solution.

The world's top enterprise software company and the world leader in data-driven solutions have teamed up to provide a new integrated security solution to prevent fraud and misuse of identities. LexisNexis Risk Solutions, a Gold level member of Oracle PartnerNetwork (OPN), today announced it has achieved Oracle Validated Integration of its Instant Authenticate product with Oracle Identity Management.

Oracle provides the most complete Identity and Access Management platform. As the only identity management provider to offer advanced capabilities including device fingerprinting, location intelligence, real-time risk analysis, and context-aware authentication and authorization, Oracle's offering is unique in the industry. LexisNexis Risk Solutions provides the industry-leading Instant Authenticate dynamic knowledge-based authentication (KBA) service, which offers customers a secure and cost-effective means to authenticate new users or verify identity for password resets, lockouts and similar scenarios. Oracle and LexisNexis now offer an integrated solution that combines the power of the most advanced identity management platform and superior data-driven user authentication to stop identity fraud in its tracks and, in turn, offer significant operational cost savings.

The solution offers the ability to challenge users with dynamic knowledge-based authentication based on the risk of an access request or transaction, thereby offering an additional level beyond other authentication methods such as static challenge questions or one-time passwords when needed. For example, with Oracle Identity Management self-service, the forgotten-password reset workflow utilizes advanced capabilities including device fingerprinting, location intelligence, risk analysis and one-time password (OTP) via short message service (SMS) to secure this sensitive flow. Even when a user has lost or misplaced his/her mobile phone and, therefore, cannot receive the SMS, the new integrated solution eliminates the need to contact the help desk: the Oracle Identity Management platform dynamically switches to the LexisNexis Instant Authenticate service if the user is not able to authenticate via OTP. The advanced Oracle and LexisNexis integrated solution thus both improves the user experience and saves money by avoiding unnecessary help desk calls.

Oracle Identity and Access Management secures applications, Juniper SSL VPN and other web resources with a thoroughly modern, layered and context-aware platform. Users don't gain access just because they happen to have a valid username and password. An enterprise utilizing the Oracle solution has the ability to predicate access on the specific context of the current situation. The device, location, temporal data, and any number of other attributes are evaluated in real time to determine the specific risk at that moment. If the risk is elevated, a user can be challenged for additional authentication, refused access, or allowed access with limited privileges.

The LexisNexis Instant Authenticate dynamic KBA service plugs into the Oracle platform to provide an additional layer of security by validating a user's identity in high-risk access or transactions. The large and varied pool of data the LexisNexis solution utilizes to quiz a user makes this challenge mechanism even more robust. This strong combination of Oracle and LexisNexis user authentication capabilities greatly mitigates the risk of exposing sensitive applications and services on the Internet, which helps an enterprise grow its business with confidence.

Resources:
Press release: LexisNexis® Achieves Oracle Validated Integration with Oracle Identity Management
Oracle Access Management (HTML)
Oracle Adaptive Access Manager (pdf)

    Read the article

  • Oracle Data Integration 12c: Perspectives of Industry Experts, Customers and Partners

    - by Irem Radzik
As you may have seen from our recent blog posts on Oracle Data Integrator 12c and Oracle GoldenGate 12c, we are very excited to share with you the great new features the 12c release brings to Oracle's data integration solutions. And, fortunately, we are not alone in this sentiment. Since the press announcement on October 17th, which incorporates our customers' and experts' testimonials, we have seen positive comments in leading technology publications and social media as well. Here are some examples:

In CIO and PCWorld you can find Joab Jackson's article, Oracle Data Integrator 12c ready for real-time analysis, where he wrote about the tight integration between Oracle Data Integrator and Oracle GoldenGate. He noted "Heeding the call from enterprise customers who clamor for more immediacy in their data-driven reports, Oracle has updated its data-integration software portfolio so that it can more rapidly deliver data to data warehouses and analysis applications."

Integration Developer News' Vance McCarthy wrote the article Oracle Ships 'Future Proofs' Integration Tools for Traditional, Cloud, Big Data, Real-Time Projects and mentioned that "Oracle Data Integrator 12c and Oracle GoldenGate 12c sport a wide range of improvements to let devs more easily deliver data integration for cloud, analytics, big data and other new projects that leverage multiple datasets for business."

InformationWeek's Doug Henschen gave a great overview of several key features, including the new flow-based UI in Oracle Data Integrator. Doug said "Oracle Data Integrator 12c introduces a complete makeover of the job-building experience, while real-time oriented GoldenGate 12c introduces performance gains."

Database Trends and Applications' article Oracle Strengthens Data Integration with Release of Oracle Data Integrator 12c and Oracle GoldenGate 12c highlighted the productivity aspect of the new solution: "tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate's low overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training."

We are also thrilled about what our customers and partners have to say about our products and the new release. And we are equally excited to share those perspectives with you in our upcoming launch video webcast on November 12th. SolarWorld Industries America's Senior Database Manager, Russ Toyama, will join our executives in our studio in Redwood Shores to discuss GoldenGate's core benefits and the new release, while Surren Partharb, CTO of Strategic Technology Services for BT, and Mark Rittman, CTO of Rittman Mead, will provide their comments via interviews conducted in the UK. This interactive panel discussion in the video webcast will unveil the new release with the expertise of our development executives and great insight from our customers and partners. In addition, our product experts will be available online to answer chat questions.

This is really a great opportunity to learn how Oracle's data integration offering has changed the integration and replication technology space with the new release, and established itself as the new leader. If you have not registered for this free event yet, you can do so via this link. We will run the live event at 8am PT/4pm GMT, followed by a replay of the event with live chat for Q&A at 10am PT/6pm GMT.
The replay will be available on-demand for those who register but cannot attend either session on November 12th.

    Read the article

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds, if not thousands, of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, and construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.

Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site and location specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.

Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfields, service centers, inventory sites, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets; project sites, areas, wells, basins, pipelines, critical infrastructure, offshore platforms, compressor stations, gas stations, etc.

Any site can be classified into multiple hierarchies, like organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated to multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting and analysis, organizing daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, such as routes, pipelines and more.

The User Defined Attribute Framework provides the needed infrastructure to add single-row attribute groups, like well base attributes (well IDs, well type, well structure, key characterizing measures, and more) and well geometry, and multi-row attribute groups, like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partners' sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships.

Site Hub can store any type of unstructured data associated to a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as: CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more. For any site entity, Site Hub can associate all the related assets and equipment at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time.

Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can simply be associated to sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub. Site locations (addresses or geographical coordinates) are also managed with out-of-the-box address geo-coding capabilities, coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site-location-specific information, for example cadastral, ownership, jurisdictional, geological, seismic and more, and any site-centric, area-specific information, for example economic, political, risk, weather, logistic, traffic information and more.

Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context

    Read the article

  • Build-time dependency resolving coming to Entity Framework. Now, how about those BI tools too?

    - by jamiet
Three months ago I wrote a blog post entitled Some thoughts on Visual Studio database references and how they should be used for SQL Server BI, where I shared some thoughts on a feature available to database developers in Visual Studio 2010 that I would love to see added to SQL Server Integration Services (SSIS), Analysis Services (SSAS) and Reporting Services (SSRS). In there I said:

Over the past few weeks I have been making heavy use of the Database tools in Visual Studio 2010 and one of the features that has most impressed me has been database references. Database references allow you to have stored procedures in your database project that refer to objects (tables, views, stored procedures etc…) that exist in other database projects, and hence when you build your database project it is able to resolve those references. It occurred to me that similar functionality would be incredibly useful for SQL Server Integration Services (SSIS), Analysis Services (SSAS) & Reporting Services (SSRS) projects. After all, reports, packages and data source views are rife with references to database objects – why shouldn't we be able to have design-time dependency checking in our BI projects the same way that database and .Net developers do?

In that blog post I shared links to three Connect submissions where I requested this feature be added to SSIS, SSAS & SSRS. In addition I also submitted a request that the feature be extended to .Net projects so that any reference to a database object in a .Net assembly can be resolved at build time. That Connect submission is at [Entity FX] Use database references to constrain the EDM, and overnight it received this comment from Microsoft: "We have been working on this feature for a while and [it] will be available soon".

This is really good news - it improves the Microsoft developer ecosystem by ensuring invalid references to database objects get caught at build time (ideally as part of a Continuous Integration build) rather than run time. [Hopefully it might nip this code-first nonsense in the bud too (Ooo...way to incite flame comments :) ) ]. If you want to see this feature in action then check out a video from TechEd Europe last month entitled SQL Server Developer Tools Code-named "Juneau" where it is demo'd by Lance Delano and Tim Laverty.

The point of this blog post though is not just to draw attention to this forthcoming feature for .Net developers, it is to ask you to petition Microsoft to get this feature added to SSIS/SSAS/SSRS too. After all, we already know (from the video above) that the feature is coming to this new code-name Juneau development environment, plus we also know that Juneau will be the development environment for SSIS/SSAS/SSRS as well - is it really much of a stretch to expect the BI tools to have access to this great feature too? I don't think so, and if you agree with me then I urge you to vote and add a comment to the Connect submissions that are requesting this feature. They are at:

[SSAS] Declare Object Dependancies
[SSRS] Declare Object Dependancies
[SSIS] Declare Object Dependancies (Update: apparently someone at Microsoft has deemed it necessary to set this to private and I am not able to change it back even though I submitted it. You can still vote on the other two though.)

Let's close that SQL Developer Gap!

@Jamiet
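For readers who haven't used the feature, here is a hypothetical sketch of what a database reference looks like in practice in a Visual Studio database project. The procedure lives in one project but refers to a table in a referenced project via a SQLCMD variable; [$(WarehouseDb)] and the object names are invented for this illustration:

-- [$(WarehouseDb)] is the SQLCMD variable defined by the database
-- reference; it is resolved at build time, so a typo in the Orders
-- table name below becomes a build error instead of a runtime surprise.
CREATE PROCEDURE dbo.GetCustomerOrderCounts
AS
BEGIN
    SELECT c.CustomerID,
           (SELECT COUNT(*)
              FROM [$(WarehouseDb)].dbo.Orders o
             WHERE o.CustomerID = c.CustomerID) AS OrderCount
    FROM dbo.Customers c;
END

This is exactly the kind of cross-project reference that reports, packages and data source views contain today with no build-time checking at all.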

    Read the article

  • Is Oracle Policy Automation a Fit for My Agency? I'll bet it is.

    - by jeffrey.waterman
Recently, I stumbled upon a new(-ish) whitepaper now posted on the Oracle Technology Network around Oracle Policy Automation (OPA). This paper is certain to become a must-read for any customer interested in rules automation.

What is OPA? If you are not sitting in your favorite Greek restaurant waiting for that order of Saganaki to appear, OPA is Oracle's solution for the automated streamlining, standardizing, and maintenance of policy. It is a specialized rules platform that simplifies the automation of rules and policies, putting the analysis in the hands of the analysts, not the IT organization. In other words, OPA allows the organization to be more efficient by eliminating (or at a minimum, reducing the engagement of) the middle man from the process.

The whitepaper I mention above is titled "Is Oracle Policy Automation a Good Fit for My Business?". This short document walks the reader through use cases and advice to consider when deciding whether OPA is right for their agency. The paper outlines many different scenarios, uses of OPA in production today, and cases where OPA may not be a good fit. Many of the use case examples revolve around end user questionnaires or analyst research. What is often overlooked is OPA's ability to act as a rules engine behind the scenes. That is, it can take inputs from one source (e.g., personnel data), process that data in OPA and send the output (e.g., pay data with benefits deductions) to a second source. The rules have been automated, with no human intervention necessary to perform the analysis. A few of my customers have used the embedded OPA solution to improve transaction processing and reduce the time spent analyzing exceptions.

I suggest any reader whose organization is reliant on or deals with high complexity, volume or volatility in rules that are based on documentation – or which need to be documented – take a look at Oracle Policy Automation. You can find the white paper in the Oracle Policy Automation section of the Oracle Technology Network, and more information around OPA on oracle.com. Finally, you can send me a question any time at [email protected]

Thank you for reading. If you have any topics around Oracle Applications in the Federal or Public Sector industries you would like to see addressed in this blog, please leave suggestions in the comments section and I will do my best to address them in a future post.

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide more streamlined insight into individual records, or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services.

Example - Order History Constancy

Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see.

Creating the Queries

Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high-performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.

SELECT CustomerID, CompanyName,
    (SELECT COUNT(OrderID) FROM Orders
      WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
    DATEDIFF(mm,
        (SELECT MAX(OrderDate) FROM Orders
          WHERE Orders.CustomerID = Customers.CustomerID),
        getdate()) AS MonthsSinceLastOrder
FROM Customers

Creating Views

To turn this or any query into a view, just put CREATE VIEW [name] AS before it. If you want to change it later, use ALTER VIEW [name] AS.

Creating Computed Columns

If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification, or check out this article here.

Advanced Scoring and Ranking

One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow-up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen: we extract the count of unique months and then divide by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus. You could sort by this percentage to know where to start calling, or to find patterns describing your best customers.

- Number of orders
- First order date
- Last order date
- Percentage of months in which an order was placed since the first order
SELECT CustomerID,
    (SELECT COUNT(OrderID) FROM Orders
      WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
    (SELECT MAX(OrderDate) FROM Orders
      WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
    (SELECT MIN(OrderDate) FROM Orders
      WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
    DATEDIFF(mm,
        (SELECT MIN(OrderDate) FROM Orders
          WHERE Orders.CustomerID = Customers.CustomerID),
        getdate()) AS MonthsSinceFirstOrder,
    100 * (SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate) + DATEPART(mm, OrderDate))
             FROM Orders
            WHERE Orders.CustomerID = Customers.CustomerID)
        / DATEDIFF(mm,
            (SELECT MIN(OrderDate) FROM Orders
              WHERE Orders.CustomerID = Customers.CustomerID),
            getdate()) AS OrderPercent
FROM Customers

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database, so that you can find the most important columns to be optimized.

A first approach is to look in the DataDir of Analysis Services for the folder containing the database, then look for the biggest files in all its subfolders: the name of each file contains the name of the column it stores, so the largest files point to the most expensive columns. However, this heuristic process is not very efficient. A better approach is to use a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order.

SELECT *
FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
ORDER BY used_size DESC

You can look at the first rows in order to understand which are the most expensive columns in your tabular model. The interesting data provided are:

- TABLE_ID: the name of the object – it can also be a dictionary or an index
- COLUMN_ID: the name of the column the object belongs to – you may also see ID_TO_POS and POS_TO_ID, which refer to internal indexes
- RECORDS_COUNT: the number of rows in the column
- USED_SIZE: the memory used by the object

By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:

- Remove the column. Yes: if it contains data you will never use in a query, simply remove the column from the tabular model.
- Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. a temperature), round the number to the decimal precision that is relevant to you.
- Split the column. Create two or more columns that have to be combined together in order to produce the original value (see the sketch below). This technique is described in the VertiPaq optimization article.
- Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and moving to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq, and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.
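As a concrete example of the column-split option, here is a minimal sketch of how a high-cardinality integer key could be fed to the model as two lower-cardinality halves in the source query (the table and column names are invented for this illustration):

-- Split a high-cardinality TransactionID into two columns whose
-- dictionaries are far smaller; the original value can be rebuilt
-- in a calculation as TransactionIdHigh * 10000 + TransactionIdLow.
SELECT TransactionID / 10000 AS TransactionIdHigh,
       TransactionID % 10000 AS TransactionIdLow,
       Amount,
       TransactionDate
FROM dbo.FactTransactions

Each half has far fewer distinct values than the original column, so both dictionaries compress much better than the single wide one.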

    Read the article

  • Comparing Apples and Pairs

    - by Tony Davis
A recent study, High Costs and Negative Value of Pair Programming, by Capers Jones, pulls no punches in its assessment of the costs-to-benefits ratio of pair programming (two programmers working together at a single computer, rather than separately). He implies that pair programming is a method rushed into production on a wave of enthusiasm for Agile or Extreme Programming, without any real regard for its effectiveness. Despite admitting that his data represented a far from complete study of the economics of pair programming, his conclusions were stark: it was 2.5 times more expensive, resulted in a 15% drop in productivity, and offered no significant quality benefits.

The author provides a more scientific analysis than Jon Evans' Pair Programming Considered Harmful, but the theme is the same. In terms of upfront coding costs, pair programming is surely more expensive. The claim of productivity loss is dubious and contested by other studies. The third claim, though, did surprise me. The author's data suggests that if both the pair and the individual programmers employ static code analysis and testing, then there is no measurable difference in the resulting code quality, in terms of defects per function point. In other words, pair programming incurs a massive extra cost for no tangible return on investment.

There were, inevitably, many criticisms of his data and his conclusions, a few of which are persuasive. Firstly, that the driver/observer model of pair programming, on which the study bases its findings, is far from the most effective; for example, many find Ping-Pong pairing, based on use of test-driven development, far more productive. Secondly, that it doesn't distinguish between "expert" and "novice" pair programmers; that is, independently of other programming skills, how skilled an individual was at pair programming. Thirdly, that his measure of quality is too narrow. This point rings true, certainly at Red Gate, where developers don't pair program all the time, but use the method in short bursts, while tackling a tricky problem and needing a fresh perspective on the best approach, or more in-depth knowledge in a particular domain. All of them argue that pair programming, and collective code ownership, offer significant rewards, if not in terms of immediate "bug reduction", then in removing the likelihood of single points of failure, and in improving the overall quality and longer-term adaptability/maintainability of the design. There is also a massive learning benefit for both participants. One developer told me how he once worked in the same team over consecutive summers, the first time with no pair programming and the second time pair programming two-thirds of the time, and described the increased rate of learning the second time as "phenomenal".

There are a great many theories on how we should develop software (Scrum, XP, Lean, etc.), but woefully little scientific research into their effectiveness. For a group that spends so much time crunching other people's data, I wonder if developers spend enough time crunching data about themselves. Capers Jones' data may be incomplete, but it should cause a pause for thought, especially for any large IT departments, supporting commerce and industry, who are considering pair programming. It certainly shouldn't discourage teams from exploring new ways of developing software, as long as they also think about how to gather hard data to gauge their effectiveness.

    Read the article

  • How much is a subscriber worth?

    - by Tom Lewin
This year at Red Gate, we've started providing a way to back up SQL Azure databases and Azure storage. We decided to sell this as a service, instead of a product, which means customers only pay for what they use. Unfortunately for us, it makes figuring out revenue much trickier. With a product like SQL Compare, a customer pays for it, and it's theirs for good. Sure, we offer support and upgrades, but, fundamentally, the sale is a simple, upfront transaction: we've made this product, you need this product, we swap product for money and everyone is happy. With software as a service, it isn't that easy. The money and product don't change hands up front. Instead, we provide a service in exchange for a recurring fee. We know someone buying SQL Compare will pay us $X, but we don't know how long service customers will stay with us, or how much they will spend. How do we find this out? We use lifetime value analysis.

What is lifetime value?

Lifetime value, or LTV, is how much a customer is worth to the business. For Entrepreneurs has a brilliant write-up that we followed to conduct our analysis. Basically, it all boils down to this equation:

LTV = ARPU x ALC

To make it a bit less of an alphabet soup and a bit more understandable, we can write it out in full: the lifetime value of a customer equals the average revenue per customer per month, times the average time a customer spends with the service. Simple, right? A customer is worth the average spend times the average stay. If customers pay on average $50/month, and stay on average for ten months, then a new customer will, on average, bring in $500 over the time they are a customer!

Average spend is easy to work out; it's revenue divided by customers. The problem comes when we realise that we don't know exactly how long a customer will stay with us. How can we figure out the average lifetime of a customer, if we only have six months' worth of data? The answer lies in the fact that:

Average Lifetime of a Customer = 1 / Churn Rate

The churn rate is the percentage of customers that cancel in a month. If half of your customers cancel each month, then your average customer lifetime is two months. The problem we faced was that we didn't have enough data to make an estimate of one month's cancellations reliable (because barely anybody cancels)! To deal with this data problem, we can take data from the last three months instead. This means we have more data to play with. We can still use the equation above, we just need to multiply the final result by three (as we worked out how many three-month periods customers stay for, and we want our answer to be in months).

Now these estimates are likely to be fairly unreliable; when there's not a lot of data it pays to be cautious with inference. That said, the numbers we have look fairly consistent, and it's super easy to revise our estimates when new data comes in. At the very least, these numbers give us a vague idea of whether a subscription business is viable. As far as Cloud Services goes, the business looks very viable indeed, and the low cancellation rates are much more than just data points in LTV equations; they show that the product is working out great for our customers, which is exactly what we're looking for!
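To make the arithmetic concrete, here is a minimal SQL sketch of the calculation against a hypothetical subscriptions table (the table and its columns are invented for this illustration, not our actual schema): average monthly revenue per active customer, the three-month churn rate, and the resulting LTV.

-- Hypothetical schema: subscriptions(customer_id, monthly_fee, started_on, cancelled_on)
-- cancelled_on is NULL while a subscription is still active.

-- ARPU: average revenue per active customer per month
SELECT AVG(monthly_fee) AS arpu
FROM subscriptions
WHERE cancelled_on IS NULL;

-- Three-month churn: cancellations in the window, divided by the
-- customers who were active at the start of the window.
SELECT CAST(SUM(CASE WHEN cancelled_on >= DATEADD(mm, -3, GETDATE())
                     THEN 1 ELSE 0 END) AS float) / COUNT(*) AS churn_3m
FROM subscriptions
WHERE started_on < DATEADD(mm, -3, GETDATE())
  AND (cancelled_on IS NULL OR cancelled_on >= DATEADD(mm, -3, GETDATE()));

-- LTV = arpu * average lifetime in months
--     = arpu * (1 / churn_3m) * 3, per the equations above.

For example, with arpu = $50 and churn_3m = 0.15, the average lifetime is 3 / 0.15 = 20 months, giving an LTV of $1,000.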

    Read the article

  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user’s place on the path depends on the size and sophistication of their organisation. Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand. Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page. Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system — plus external information — may be incorporated into a data warehouse. This can become ‘one source of truth’ for the business’s operational activities. The warehouse will probably have a simple ‘star’ schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product). Reports can be generated from the warehouse with Excel, SSRS or other tools. Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables.  These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single ‘source of truth’ for key data. Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between data are too complex or the queries which aggregate across periods, regions etc are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model — or “cube” — that more simply represents key measures and aggregates them across specified dimensions. Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We’re interested in learning where people are in this story, so we’ve created a six-question survey to find out. Whether you’re at step one or step five, we’d love to know how you use BI so we can decide how to build tools that solve your problems. 
So if you have sixty seconds to spare, tell us on the survey!
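As a concrete illustration of the kind of query the step-three warehouse supports, here is a minimal star-schema sketch (the fact and dimension tables and their columns are invented for this example): unit sales and revenue aggregated by month and product.

-- Hypothetical star schema: a sales fact table joined to
-- date and product dimensions, aggregated for reporting.
SELECT d.CalendarYear,
       d.CalendarMonth,
       p.ProductName,
       SUM(f.UnitsSold) AS UnitsSold,
       SUM(f.Revenue)   AS Revenue
FROM FactSales f
JOIN DimDate    d ON f.DateKey    = d.DateKey
JOIN DimProduct p ON f.ProductKey = p.ProductKey
GROUP BY d.CalendarYear, d.CalendarMonth, p.ProductName
ORDER BY d.CalendarYear, d.CalendarMonth, p.ProductName;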

    Read the article

  • Synchronizing Asynchronous request handlers in Silverlight environment

    - by Eric Lifka
For our senior design project, my group is making a Silverlight application that utilizes graph theory concepts and stores the data in a database on the back end. We have a situation where we add a link between two nodes in the graph, and upon doing so we run analysis to re-categorize our clusters of nodes. The problem is that this re-categorization is quite complex and involves multiple queries and updates to the database, so if multiple instances of it run at once it quickly garbles data and breaks (by trying to re-insert already used primary keys). Essentially it's not thread safe, and we're trying to make it safe, and that's where we're failing and need help :).

The create link function looks like this:

private Semaphore dblock = new Semaphore(1, 1);

// This function is on our service reference and gets called
// by the client code.
public int addNeed(int nodeOne, int nodeTwo)
{
    dblock.WaitOne();
    submitNewNeed(createNewNeed(nodeOne, nodeTwo));
    verifyClusters(nodeOne, nodeTwo);
    dblock.Release();
    return 0;
}

private void verifyClusters(int nodeOne, int nodeTwo)
{
    // Run analysis of nodeOne and nodeTwo in graph
}

All copies of addNeed should wait for the first one that comes in to finish before another can execute. But instead they all seem to be running and conflicting with each other in the verifyClusters method. One solution would be to force our front-end calls to be made synchronously, and in fact, when we do that everything works fine, so the code logic isn't broken. But when it's launched, our application will be deployed within a business setting and used by internal IT staff (or at least that's the plan), so we'll have the same problem. We can't force all clients to submit data at different times, so we really need to get it synchronized on the back end. Thanks for any help you can give; I'd be glad to supply any additional information that you could need!

    Read the article

  • OCR: How to improve accuracy - existing libraries for removing non-text 'furniture', shapes, etc to

    - by Rob
I want to remove rectangles etc. that enclose text in a screenshot image, so that I can perform optical character recognition to get accurate text from the screenshot.

Background: I'm doing this to extract data from a legacy application for use with other applications. This is the only way to get at the data, as the associated files are in a closed, proprietary, binary format. I will be using AutoItScript to drive the application to show data in its UI, then I will screenshot this and feed it to tesseract. I've already had some success in automating the UI, and have been able to use tesseract to get plain ASCII text out of the bitmap. There are several AutoItScript forum articles discussing its use with tesseract/OCR, but not specifically for my question: http://www.autoitscript.com/forum/index.php?s=6c32c3ece12756e635a619cdf175eff9&showforum=2

What I need to do: There are thin, 1-pixel-wide rectangles that closely enclose some text; when the image is fed to tesseract, it misreads them as characters (for example, a vertical line of a rectangle is read as an I). Any thoughts on how to remove the rectangles, or best practices? I'm asking if there is a generic command-line-based toolset to overwrite rectangles, for example, in .png files. I could then pass the .png through this, then pass it to tesseract.

Details on the tesseract release/setup I've used are as follows: go here: http://code.google.com/p/tesseract-ocr/downloads/list - for the basic English generic character set, to get Tesseract up and running and recognising your bitmapped text as ASCII text, use tesseract-2.00.eng.tar.gz (current version at time of writing: "English language data for Tesseract (2.00 and up) Jul 2007 989 KB 84845").

Related questions I have already looked at on Stack Overflow:
http://stackoverflow.com/questions/1335581/how-to-give-best-chance-of-success-to-an-ocr-software
http://stackoverflow.com/questions/2296568/analysis-and-transformation-of-the-image-on-the-basis-of-this-analysis-for-better
http://stackoverflow.com/questions/2268028/reading-characters-off-of-the-screen

In these, my question is not completely answered or a commercial solution is being sold. I do not want to consider a commercial solution at this stage.

    Read the article

  • Text mining on large database (data mining)

    - by yox
Hello, I have a large database of resumes (CVs), and a table, skills, grouping all users' skills. Inside that table there's a field, skill_text, that describes each skill in full text. I'm looking for an algorithm/software/method to extract significant terms/phrases from that table in order to build a new table with standardized skills. Here are some example skills extracted from the DB:

Sectoral and competitive analysis
Business Development (incl. in international settings)
Specific structure and road design software - Microstation, Macao, AutoCAD (basic knowledge)
Creative work (Photoshop, In-Design, Illustrator)
checking and reporting back on campaign progress
organising and attending events and exhibitions
Development : Aptana Studio, PHP, HTML, CSS, JavaScript, SQL, AJAX
Discipline: One to one marketing, E-marketing (SEO & SEA, display, emailing, affiliate program) Mix marketing, Viral Marketing, Social network marketing.

The output should be something like:

Sectoral and competitive analysis
Business Development
Specific structure and road design software - Macao
AutoCAD
Photoshop
In-Design
Illustrator
organising events
Development
Aptana Studio
PHP
HTML
CSS
JavaScript
SQL
AJAX
Mix marketing
Viral Marketing
Social network marketing
emailing
SEO
One to one marketing

As you can see, only the skills remain, with no other surrounding text. I know this is possible using text mining techniques, but how do I do it? The database being really large is a good thing, because we can calculate term frequency and decide whether something is a real skill or just meaningless text. The big problem is: how do we determine that "blablabla" is a skill? Thanks

    Read the article

  • Agile and Scrum burning me down please help me figuring out the truth

    - by jadook
Hi all, a while back I installed MS-TFS 2008 and started preparing to use the Agile Process Guidance template shipped with TFS. With a little googling I worked through Mike Cohn's materials:

I watched his conference talk on YouTube (sponsored by Google):
http://www.youtube.com/watch?v=fb9Rzyi8b90
http://www.youtube.com/watch?v=jeT0pOVg0EI
I read his book "Agile Estimating and Planning"
I watched the video series on his website:
http://www.mountaingoatsoftware.com/presentations-tag/video-recorded

I was very happy absorbing the techniques he uses with teams, and thought agile and scrum were such a great software process/methodology, until I saw Mike answering a question regarding the architect role and talking about the requirements document... at that point everything started falling apart, for the following reasons:

Last year I was assigned to do the full analysis (including requirements gathering) for a big, very-high-priority project. Within 2 months of hard work, dedication and commitment, I delivered the whole analysis to the full satisfaction of the customer and my BOSS, with ZERO amendments. Later on, the project entered the architecting and development phases. Because the system included many competitive and exciting features, I requested that it be patented, and that is now in process...

So imagine you are the kind of person who loves facing all kinds of challenges and returning with excellent experience and results for the stakeholders and yourself. How fairly will agile and scrum processes credit and acknowledge your talent and passion, when the scrum master/coach treats the team as one unit that accomplishes user stories and converges through a trial-and-error approach??!!!!

With those dark thoughts about agile and scrum, I found many people who are "anti-agile", chief among them "Crispin Rogers Johnson": http://agile-crispin.blogspot.com/ - that guy makes a counter-statement to everything Mike Cohn talks about.

I really don't know what to do next, so any guidance will be appreciated. Thanks,

    Read the article

  • structured vs. unstructured data in db

    - by Igor
The question is one of design. I'm gathering a big chunk of performance data with lots of key-value pairs: pretty much everything in /proc/cpuinfo, /proc/meminfo, /proc/loadavg, plus a bunch of other stuff, from several hundred hosts. Right now, I just need to display the latest chunk of data in my UI. I will probably end up doing some analysis of the data gathered to figure out performance problems down the road, but this is a new application so I'm not sure what exactly I'm looking for performance-wise just yet.

I could structure the data in the db -- have a column for each key I'm gathering. The table would end up being O(100) columns wide, it would be a pain to put into the db, and I would have to add new columns if I start gathering a new stat. But it would be easy to sort/analyze the data just using SQL. Or I could just dump my unstructured data blob into the table: maybe three columns -- host id, timestamp, and a serialized version of my array, probably using JSON in a TEXT field.

Which should I do? Am I going to be sorry if I go with the unstructured approach? When doing analysis, should I just convert the fields I'm interested in and create a new, more structured table? What are the trade-offs I'm missing here?
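To make the two designs concrete, here is a minimal sketch of each in generic SQL (the table and column names are invented for illustration, and exact types vary by engine):

-- Option 1: wide, structured table, one column per stat.
CREATE TABLE host_stats_wide (
    host_id      INTEGER   NOT NULL,
    collected_at TIMESTAMP NOT NULL,  -- DATETIME on some engines
    load_avg_1m  REAL,
    mem_total_kb INTEGER,
    cpu_mhz      REAL,
    -- ...one column per key, on the order of 100 in total
    PRIMARY KEY (host_id, collected_at)
);

-- Option 2: narrow table with a serialized JSON blob.
CREATE TABLE host_stats_blob (
    host_id      INTEGER   NOT NULL,
    collected_at TIMESTAMP NOT NULL,
    stats_json   TEXT      NOT NULL,  -- e.g. {"load_avg_1m": 0.42, "mem_total_kb": 8388608, ...}
    PRIMARY KEY (host_id, collected_at)
);

The wide table makes "sort all hosts by load" a one-line query; the blob table makes ingestion trivial and never needs schema changes, at the cost of parsing JSON whenever a specific field is analyzed.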

    Read the article

  • rails + compass: advantages vs using haml + blueprint directly

    - by egarcia
I've got some experience using haml (+sass) on rails projects. I recently started using them with blueprintcss - the only thing I did was transform blueprint.css into a sass file, and started coding from there. I even have a rails generator that includes all this by default. It seems that Compass does what I do, and other things. I'm trying to understand what those other things are - but the documentation/tutorials weren't very clear. These are my conclusions:

- Compass comes with built-in sass mixins that implement common CSS idioms, such as links with icons or horizontal lists. My solution doesn't provide anything like that. (1 point for Compass).
- Compass has several command-line options: you can create a rails project, but you can also "install" it on an existing rails project. A rails generator could be personalized to do the same thing, I guess. (Tie).
- Compass has two modes of working with blueprint: "basic" and "semantic" usage. I'm not clear about the differences between the two. With my rails generator I only have one mode, but it seems enough. (Tie).
- Apparently, Compass is prepared to use other frameworks besides blueprint (e.g. YUI). I could not find much documentation about this, and I'm not interested in it anyway - blueprint is OK for me. (Tie).
- Compass' learning curve seems a bit steep and the documentation seems sparse. Learning could be a bit difficult. On the other hand, I know the ins and outs of my own system and can use it right away. (1 point for my system).

With this analysis, I'm hesitant to give Compass a try. Is my analysis correct? Am I missing any key points, or have I evaluated any of these points wrongly?

    Read the article

< Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >