Search Results

Search found 1406 results on 57 pages for 'candlestick chart'.


  • EXC_BAD_ACCESS and KERN_INVALID_ADDRESS after initiating a print sequence.

    - by Edward M. Bergmann
    Mac G4/1.5GHz/2GB/1TB+, OS X 10.4.11. The startup volume has been erased and everything reinstalled with updated software. The current problem occurs only when printing to an Epson Artisan 800 [USB as well as Ethernet connected] from Macromedia FreeHand 10.0.1.67. All other apps and printers work fine. Memory has been removed/swapped/reinstalled several times, and the CPU was changed from 1.5GHz to 1.3GHz. The page(s) will print, but the application quits within a second or two after selecting "Print." Apple has never replied, Epson hasn't a clue, and I am befuddled! Perhaps there is a guru out there who can see the bigger picture and understands how to interpret all of this. If so, it would be a terrific pleasure to get a handle on how to cure this problem, or at least some ammunition to fire in the right direction. I thank you in advance. FreeHand 10 on Mac OS X 10.4.11 unexpectedly quits after invoking a print command; the resulting crash report:

    Date/Time: 2010-04-20 14:23:18.371 -0700
    OS Version: 10.4.11 (Build 8S165)
    Report Version: 4

    Command: FreeHand 10
    Path: /Applications/Macromedia FreeHand 10.0.1.67/FreeHand 10
    Parent: WindowServer [1060]
    Version: 10.0.1.67 (10.0.1.67, Copyright © 1988-2002 Macromedia Inc. and its licensors. All rights reserved.)
    PID: 1217
    Thread: 0

    Exception: EXC_BAD_ACCESS (0x0001)
    Codes: KERN_INVALID_ADDRESS (0x0001) at 0x07d7e000

    Thread 0 Crashed:
    0 <<00000000>> 0xffff8a60 __memcpy + 704 (cpu_capabilities.h:189)
    1 FreeHand X 0x011d2994 0x1008000 + 1878420
    2 FreeHand X 0x01081da4 0x1008000 + 499108
    3 FreeHand X 0x010f5474 0x1008000 + 971892
    4 FreeHand X 0x010d0278 0x1008000 + 819832
    5 FreeHand X 0x010fa808 0x1008000 + 993288
    6 FreeHand X 0x01113608 0x1008000 + 1095176
    7 FreeHand X 0x01113748 0x1008000 + 1095496
    8 FreeHand X 0x01099ebc 0x1008000 + 597692
    9 FreeHand X 0x010fa358 0x1008000 + 992088
    10 FreeHand X 0x010fa170 0x1008000 + 991600
    11 FreeHand X 0x010f9830 0x1008000 + 989232
    12 FreeHand X 0x01098678 0x1008000 + 591480
    13 FreeHand X 0x010f7a5c 0x1008000 + 981596

    Thread 1:
    0 libSystem.B.dylib 0x90005dec syscall + 12
    1 com.apple.OpenTransport 0x9ad015a0 BSD_waitevent + 44
    2 com.apple.OpenTransport 0x9ad06360 CarbonSelectThreadFunc + 176
    3 libSystem.B.dylib 0x9002b908 _pthread_body + 96

    Thread 2:
    0 libSystem.B.dylib 0x9002bfc8 semaphore_wait_signal_trap + 8
    1 libSystem.B.dylib 0x90030aac pthread_cond_wait + 480
    2 com.apple.OpenTransport 0x9ad01e94 CarbonOperationThreadFunc + 80
    3 libSystem.B.dylib 0x9002b908 _pthread_body + 96

    Thread 3:
    0 libSystem.B.dylib 0x9002bfc8 semaphore_wait_signal_trap + 8
    1 libSystem.B.dylib 0x90030aac pthread_cond_wait + 480
    2 com.apple.OpenTransport 0x9ad11df0 CarbonInetOperThreadFunc + 80
    3 libSystem.B.dylib 0x9002b908 _pthread_body + 96

    Thread 4:
    0 libSystem.B.dylib 0x90053f88 semaphore_timedwait_signal_trap + 8
    1 libSystem.B.dylib 0x900707e8 pthread_cond_timedwait_relative_np + 556
    2 ...ple.CoreServices.CarbonCore 0x90bf9330 TSWaitOnSemaphoreCommon + 176
    3 ...ple.CoreServices.CarbonCore 0x90c012d0 TimerThread + 60
    4 libSystem.B.dylib 0x9002b908 _pthread_body + 96

    Thread 5:
    0 libSystem.B.dylib 0x9001f48c select + 12
    1 com.apple.CoreFoundation 0x907f1240 __CFSocketManager + 472
    2 libSystem.B.dylib 0x9002b908 _pthread_body + 96

    Thread 6:
    0 libSystem.B.dylib 0x9002188c access + 12
    1 ...e.print.framework.PrintCore 0x9169a620 CreateProxyURL(__CFURL const*) + 192
    2 ...e.print.framework.PrintCore 0x9169a4f8 CreateOriginalPrinterProxyURL() + 80
    3 ...e.print.framework.PrintCore 0x9169a034 CheckPrinterProxyVersion(OpaquePMPrinter*, __CFURL const*) + 192
    4 ...e.print.framework.PrintCore 0x91699d94 PJCPrinterProxyCreateURL + 932
    5 ...e.print.framework.PrintCore 0x916997bc PJCLaunchPrinterProxy(OpaquePMPrinter*, PMLaunchPCReason) + 32
    6 ...e.print.framework.PrintCore 0x91699730 PJCLaunchPrinterProxyThread(void*) + 136
    7 libSystem.B.dylib 0x9002b908 _pthread_body + 96

    Thread 0 crashed with PPC Thread State 64:
    srr0: 0x00000000ffff8a60 srr1: 0x000000000200f030 vrsave: 0x00000000ff000000
    cr: 0x24002244 xer: 0x0000000020000002 lr: 0x00000000011d2994 ctr: 0x00000000000003f6
    r0: 0x0000000000000000 r1: 0x00000000bfffea60 r2: 0x0000000000000000 r3: 0x00000000083bb000
    r4: 0x00000000083c0040 r5: 0x0000000000014d84 r6: 0x0000000000000010 r7: 0x0000000000000020
    r8: 0x0000000000000030 r9: 0x0000000000000000 r10: 0x0000000000000060 r11: 0x0000000000000080
    r12: 0x0000000007d7e000 r13: 0x0000000000000000 r14: 0x00000000005cbd26 r15: 0x0000000000000001
    r16: 0x00000000017b03a0 r17: 0x0000000000000000 r18: 0x000000000068fa80 r19: 0x0000000000000001
    r20: 0x0000000006c639c4 r21: 0x00000000006900f8 r22: 0x0000000006e09480 r23: 0x0000000006e0a250
    r24: 0x0000000000000002 r25: 0x0000000000000000 r26: 0x00000000bfffed2c r27: 0x0000000006e05ce0
    r28: 0x0000000000014d84 r29: 0x0000000000000000 r30: 0x0000000000014d84 r31: 0x00000000083bb000

    Binary Images Description:
    0x1000 - 0x2fff LaunchCFMApp /System/Library/Frameworks/Carbon.framework/Versions/A/Support/LaunchCFMApp 0x27f000 - 0x2ce3c7 CarbonLibpwpc PEF binary: CarbonLibpwpc 0x2ce3d0 - 0x2e66bd Apple;Carbon;Multimedia PEF binary: Apple;Carbon;Multimedia 0x2e7c00 - 0x2e998b Apple;Carbon;Networking PEF binary: Apple;Carbon;Networking 0x31ab10 - 0x31abb3 CFMPriv_QuickTime PEF binary: CFMPriv_QuickTime 0x31ac20 - 0x31ac97 CFMPriv_System PEF binary: CFMPriv_System 0x31af30 - 0x31b000 CFMPriv_CarbonSound PEF binary: CFMPriv_CarbonSound 0x31b080 - 0x31b153 CFMPriv_CommonPanels PEF binary: CFMPriv_CommonPanels 0x31b230 - 0x31b2eb CFMPriv_Help PEF binary: CFMPriv_Help 0x31b2f0 - 0x31b3ba CFMPriv_HIToolbox PEF binary: CFMPriv_HIToolbox 0x31b440 - 0x31b516 CFMPriv_HTMLRendering PEF binary: CFMPriv_HTMLRendering 0x31b550 - 0x31b602 CFMPriv_CoreFoundation PEF binary: CFMPriv_CoreFoundation 0x31b7f0 - 0x31b8a5 CFMPriv_DVComponentGlue PEF binary: CFMPriv_DVComponentGlue 0x31f760 - 0x31f833 CFMPriv_ImageCapture PEF binary: CFMPriv_ImageCapture 0x31f8c0 - 0x31f9a5 CFMPriv_NavigationServices PEF binary: CFMPriv_NavigationServices 0x31fa20 - 0x31faf6 CFMPriv_OpenScripting?MacBLib PEF binary: CFMPriv_OpenScripting?MacBLib 0x31fbd0 - 0x31fc8e CFMPriv_Print PEF binary: CFMPriv_Print 0x31fcb0 - 0x31fd7d CFMPriv_SecurityHI PEF binary: CFMPriv_SecurityHI 0x31fe00 - 0x31fee2 CFMPriv_SpeechRecognition PEF binary: CFMPriv_SpeechRecognition 0x31ff60 - 0x320033 CFMPriv_CarbonCore PEF binary: CFMPriv_CarbonCore 0x3200b0 - 0x320183 CFMPriv_OSServices PEF binary: CFMPriv_OSServices 0x320260 - 0x320322 CFMPriv_AE PEF binary: CFMPriv_AE 0x320330 - 0x3203f5 CFMPriv_ATS PEF binary: CFMPriv_ATS 0x320470 - 0x320547 CFMPriv_ColorSync PEF binary: CFMPriv_ColorSync 0x3205d0 - 0x3206b3 CFMPriv_FindByContent PEF binary: CFMPriv_FindByContent 0x320730 - 0x32080a CFMPriv_HIServices PEF binary: CFMPriv_HIServices 0x320880 - 0x320960 CFMPriv_LangAnalysis PEF binary: CFMPriv_LangAnalysis 0x3209f0 - 0x320ad6 CFMPriv_LaunchServices PEF binary: CFMPriv_LaunchServices 0x320bb0 - 0x320c87 CFMPriv_PrintCore PEF binary: CFMPriv_PrintCore 0x320c90 - 0x320d52 CFMPriv_QD PEF binary: CFMPriv_QD 0x320e50 - 0x320f39 CFMPriv_SpeechSynthesis PEF binary: CFMPriv_SpeechSynthesis
0x405000 - 0x497f7a PowerPlant Shared Library PEF binary: PowerPlant Shared Library 0x498000 - 0x4f0012 Player PEF binary: Player 0x4f1000 - 0x54a4c0 KodakCMSC PEF binary: KodakCMSC 0x6bd000 - 0x6eca10 <Unknown disk fragment> PEF binary: <Unknown disk fragment> 0x7fc000 - 0x7fdb8b xRes Palette Importc PEF binary: xRes Palette Importc 0x1008000 - 0x17770a5 FreeHand X PEF binary: FreeHand X 0x17770b0 - 0x17a6fe7 MW_MSL.Carbon.Shlb PEF binary: MW_MSL.Carbon.Shlb 0x17fb000 - 0x17fff03 Smudge PEF binary: Smudge 0x5ce6000 - 0x5cecebe PICT Import Export68 PEF binary: PICT Import Export68 0x5ced000 - 0x5d24267 PNG Import Exportr68 PEF binary: PNG Import Exportr68 0x5d25000 - 0x5d30dde Release To Layerswpc PEF binary: Release To Layerswpc 0x5d31000 - 0x5d37fd2 Roughen PEF binary: Roughen 0x5d38000 - 0x5d459ae Shadow PEF binary: Shadow 0x5d46000 - 0x5d4b0de Spiral PEF binary: Spiral 0x5d4c000 - 0x5d57f07 Targa Import Export8 PEF binary: Targa Import Export8 0x5d58000 - 0x5d8d959 TIFF Import Export68 PEF binary: TIFF Import Export68 0x5d93000 - 0x5da0f65 Color Utilities PEF binary: Color Utilities 0x5f62000 - 0x5f6e795 Mirror PEF binary: Mirror 0x5f6f000 - 0x5fbd656 HTML Export PEF binary: HTML Export 0x5fc8000 - 0x5fd442f Graphic Hose PEF binary: Graphic Hose 0x5fd5000 - 0x5fe4b5a BMP Import Exportr68 PEF binary: BMP Import Exportr68 0x5fe5000 - 0x60342d6 PDF Export PEF binary: PDF Export 0x6041000 - 0x6042f44 Fractalizej@ PEF binary: Fractalizej@ 0x6043000 - 0x6075214 Chart Tool™ PEF binary: Chart Tool™ 0x6076000 - 0x607d46d Bend PEF binary: Bend 0x607e000 - 0x60cda7b PDF Import PEF binary: PDF Import 0x60dc000 - 0x60e38f2 Photoshop ImportChartCursor PEF binary: Photoshop ImportChartCursor 0x60e4000 - 0x60eb9b1 3D Rotationp PEF binary: 3D Rotationp 0x60ec000 - 0x611b458 JPEG Import ExportANEL PEF binary: JPEG Import ExportANEL 0x611c000 - 0x613d89f GIF Import Export PEF binary: GIF Import Export 0x613e000 - 0x616d7f7 Flash Export PEF binary: Flash Export 0x616e000 - 0x6175d75 Fisheye Lens PEF binary: Fisheye Lens 0x6176000 - 0x6182343 IPTC File Info PEF binary: IPTC File Info 0x6184000 - 0x6193790 PEF binary: 0x6194000 - 0x61965e5 Photoshop Palette Import PEF binary: Photoshop Palette Import 0x6197000 - 0x619c5a4 Add PointsZ PEF binary: Add PointsZ 0x619d000 - 0x61ad92b Emboss PEF binary: Emboss 0x61ae000 - 0x61be6e1 AppleScript™ Xtrawpc PEF binary: AppleScript™ Xtrawpc 0x61bf000 - 0x61d16de Navigation PEF binary: Navigation 0x61d2000 - 0x61ff94e CorelDRAW 7-8 Import PEF binary: CorelDRAW 7-8 Import 0x620a000 - 0x620d7f1 Trap PEF binary: Trap 0x620e000 - 0x62149d4 Import RGB Color Table PEF binary: Import RGB Color Table 0x6215000 - 0x6217dfe Arc PEF binary: Arc 0x6218000 - 0x62211e3 Delete Empty Text Blocks PEF binary: Delete Empty Text Blocks 0x6222000 - 0x624c8da MIX Services PEF binary: MIX Services 0x7d0b000 - 0x7d37fff com.apple.print.framework.Print.Private 4.6 (163.10) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Print.framework/Versions/Current/Plugins/PrintCocoaUI.bundle/Contents/MacOS/PrintCocoaUI 0x7dbf000 - 0x7ddffff com.apple.print.PrintingCocoaPDEs 4.6 (163.10) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Print.framework/Versions/A/Plugins/PrintingCocoaPDEs.bundle/Contents/MacOS/PrintingCocoaPDEs 0x7f05000 - 0x7f39fff com.epson.ijprinter.pde.PrintSetting.EP0827MSA 6.36 /Library/Printers/EPSON/InkjetPrinter/PrintingModule/EP0827MSA_Core.plugin/Contents/PDEs/PrintSetting.plugin/Contents/MacOS/PrintSetting 0x7f49000 - 0x8044fff 
com.epson.ijprinter.IJPFoundation 6.54 /Library/Printers/EPSON/InkjetPrinter/Libraries/IJPFoundation.framework/Versions/A/IJPFoundation 0x809a000 - 0x80cdfff com.epson.ijprinter.pde.ColorManagement.EP0827MSA 6.36 /Library/Printers/EPSON/InkjetPrinter/PrintingModule/EP0827MSA_Core.plugin/Contents/PDEs/ColorManagement.plugin/Contents/MacOS/ColorManagement 0x80dd000 - 0x8110fff com.epson.ijprinter.pde.ExpandMargin.EP0827MSA 6.36 /Library/Printers/EPSON/InkjetPrinter/PrintingModule/EP0827MSA_Core.plugin/Contents/PDEs/ExpandMargin.plugin/Contents/MacOS/ExpandMargin 0x8120000 - 0x8153fff com.epson.ijprinter.pde.ExtensionSetting.EP0827MSA 6.36 /Library/Printers/EPSON/InkjetPrinter/PrintingModule/EP0827MSA_Core.plugin/Contents/PDEs/ExtensionSetting.plugin/Contents/MacOS/ExtensionSetting 0x8163000 - 0x8196fff com.epson.ijprinter.pde.DoubleSidePrint.EP0827MSA 6.36 /Library/Printers/EPSON/InkjetPrinter/PrintingModule/EP0827MSA_Core.plugin/Contents/PDEs/DoubleSidePrint.plugin/Contents/MacOS/DoubleSidePrint 0x81a6000 - 0x81bffff com.apple.print.PrintingTiogaPDEs 4.6 (163.10) /System/Library/Frameworks/Carbon.framework/Frameworks/Print.framework/Versions/A/Plugins/PrintingTiogaPDEs.bundle/Contents/MacOS/PrintingTiogaPDEs 0x838f000 - 0x8397fff com.apple.print.converter.plugin 4.5 (163.8) /System/Library/Printers/CVs/Converter.plugin/Contents/MacOS/Converter 0x78e00000 - 0x78e07fff libLW8Utils.dylib /System/Library/Printers/Libraries/libLW8Utils.dylib 0x79200000 - 0x7923ffff libLW8Converter.dylib /System/Library/Printers/Libraries/libLW8Converter.dylib 0x8fe00000 - 0x8fe52fff dyld 46.16 /usr/lib/dyld 0x90000000 - 0x901bcfff libSystem.B.dylib /usr/lib/libSystem.B.dylib 0x90214000 - 0x90219fff libmathCommon.A.dylib /usr/lib/system/libmathCommon.A.dylib 0x9021b000 - 0x90268fff com.apple.CoreText 1.0.4 (???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreText.framework/Versions/A/CoreText 0x90293000 - 0x90344fff ATS /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/ATS 0x90373000 - 0x9072efff com.apple.CoreGraphics 1.258.85 (???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/CoreGraphics 0x907bb000 - 0x90895fff com.apple.CoreFoundation 6.4.11 (368.35) /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x908de000 - 0x908defff com.apple.CoreServices 10.4 (???) 
/System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices 0x908e0000 - 0x909e2fff libicucore.A.dylib /usr/lib/libicucore.A.dylib 0x90a3c000 - 0x90ac0fff libobjc.A.dylib /usr/lib/libobjc.A.dylib 0x90aea000 - 0x90b5cfff IOKit /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit 0x90b72000 - 0x90b84fff libauto.dylib /usr/lib/libauto.dylib 0x90b8b000 - 0x90e62fff com.apple.CoreServices.CarbonCore 681.19 (681.21) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore 0x90ec8000 - 0x90f48fff com.apple.CoreServices.OSServices 4.1 /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/OSServices.framework/Versions/A/OSServices 0x90f92000 - 0x90fd4fff com.apple.CFNetwork 129.24 /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CFNetwork.framework/Versions/A/CFNetwork 0x90fe9000 - 0x91001fff com.apple.WebServices 1.1.2 (1.1.0) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/WebServicesCore.framework/Versions/A/WebServicesCore 0x91011000 - 0x91092fff com.apple.SearchKit 1.0.8 /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SearchKit.framework/Versions/A/SearchKit 0x910d8000 - 0x91101fff com.apple.Metadata 10.4.4 (121.36) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/Metadata.framework/Versions/A/Metadata 0x91112000 - 0x91120fff libz.1.dylib /usr/lib/libz.1.dylib 0x91123000 - 0x912defff com.apple.security 4.6 (29770) /System/Library/Frameworks/Security.framework/Versions/A/Security 0x913dd000 - 0x913e6fff com.apple.DiskArbitration 2.1.2 /System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration 0x913ed000 - 0x913f5fff libbsm.dylib /usr/lib/libbsm.dylib 0x913f9000 - 0x91421fff com.apple.SystemConfiguration 1.8.3 /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration 0x91434000 - 0x9143ffff libgcc_s.1.dylib /usr/lib/libgcc_s.1.dylib 0x91444000 - 0x914bffff com.apple.audio.CoreAudio 3.0.5 /System/Library/Frameworks/CoreAudio.framework/Versions/A/CoreAudio 0x914fc000 - 0x914fcfff com.apple.ApplicationServices 10.4 (???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/ApplicationServices 0x914fe000 - 0x91536fff com.apple.AE 312.2 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/AE.framework/Versions/A/AE 0x91551000 - 0x91623fff com.apple.ColorSync 4.4.13 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ColorSync.framework/Versions/A/ColorSync 0x91676000 - 0x91707fff com.apple.print.framework.PrintCore 4.6 (177.13) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/PrintCore.framework/Versions/A/PrintCore 0x9174e000 - 0x91805fff com.apple.QD 3.10.28 (???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/QD.framework/Versions/A/QD 0x91842000 - 0x918a0fff com.apple.HIServices 1.5.3 (???) 
/System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/HIServices.framework/Versions/A/HIServices 0x918cf000 - 0x918f0fff com.apple.LangAnalysis 1.6.1 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/LangAnalysis.framework/Versions/A/LangAnalysis 0x91904000 - 0x91929fff com.apple.FindByContent 1.5 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/FindByContent.framework/Versions/A/FindByContent 0x9193c000 - 0x9197efff com.apple.LaunchServices 183.1 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/LaunchServices 0x9199a000 - 0x919aefff com.apple.speech.synthesis.framework 3.3 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/SpeechSynthesis.framework/Versions/A/SpeechSynthesis 0x919bc000 - 0x91a02fff com.apple.ImageIO.framework 1.5.9 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/ImageIO 0x91a19000 - 0x91ae0fff libcrypto.0.9.7.dylib /usr/lib/libcrypto.0.9.7.dylib 0x91b2e000 - 0x91b43fff libcups.2.dylib /usr/lib/libcups.2.dylib 0x91b48000 - 0x91b66fff libJPEG.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libJPEG.dylib 0x91b6c000 - 0x91c23fff libJP2.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libJP2.dylib 0x91c72000 - 0x91c76fff libGIF.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libGIF.dylib 0x91c78000 - 0x91ce2fff libRaw.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libRaw.dylib 0x91ce7000 - 0x91d02fff libPng.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libPng.dylib 0x91d07000 - 0x91d0afff libRadiance.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libRadiance.dylib 0x91d0c000 - 0x91deafff libxml2.2.dylib /usr/lib/libxml2.2.dylib 0x91e0a000 - 0x91e48fff libTIFF.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib 0x91e4f000 - 0x91e4ffff com.apple.Accelerate 1.2.2 (Accelerate 1.2.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate 0x91e51000 - 0x91f36fff com.apple.vImage 2.4 /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage 0x91f3e000 - 0x91f5dfff com.apple.Accelerate.vecLib 3.2.2 (vecLib 3.2.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib 0x91fc9000 - 0x92037fff libvMisc.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib 0x92042000 - 0x920d7fff libvDSP.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib 0x920f1000 - 0x92679fff libBLAS.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib 0x926ac000 - 0x929d7fff libLAPACK.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib 0x92a07000 - 0x92af5fff libiconv.2.dylib 
/usr/lib/libiconv.2.dylib 0x92af8000 - 0x92b80fff com.apple.DesktopServices 1.3.7 /System/Library/PrivateFrameworks/DesktopServicesPriv.framework/Versions/A/DesktopServicesPriv 0x92bc1000 - 0x92df4fff com.apple.Foundation 6.4.12 (567.42) /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation 0x92f27000 - 0x92f45fff libGL.dylib /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib 0x92f50000 - 0x92faafff libGLU.dylib /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.dylib 0x92fc8000 - 0x92fc8fff com.apple.Carbon 10.4 (???) /System/Library/Frameworks/Carbon.framework/Versions/A/Carbon 0x92fca000 - 0x92fdefff com.apple.ImageCapture 3.0 /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/ImageCapture.framework/Versions/A/ImageCapture 0x92ff6000 - 0x93006fff com.apple.speech.recognition.framework 3.4 /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SpeechRecognition.framework/Versions/A/SpeechRecognition 0x93012000 - 0x93027fff com.apple.securityhi 2.0 (203) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SecurityHI.framework/Versions/A/SecurityHI 0x93039000 - 0x930c0fff com.apple.ink.framework 101.2 (69) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Ink.framework/Versions/A/Ink 0x930d4000 - 0x930dffff com.apple.help 1.0.3 (32) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Help.framework/Versions/A/Help 0x930e9000 - 0x93117fff com.apple.openscripting 1.2.7 (???) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/OpenScripting.framework/Versions/A/OpenScripting 0x93131000 - 0x93140fff com.apple.print.framework.Print 5.2 (192.4) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Print.framework/Versions/A/Print 0x9314c000 - 0x931b2fff com.apple.htmlrendering 1.1.2 /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HTMLRendering.framework/Versions/A/HTMLRendering 0x931e3000 - 0x93232fff com.apple.NavigationServices 3.4.4 (3.4.3) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/NavigationServices.framework/Versions/A/NavigationServices 0x93260000 - 0x9327dfff com.apple.audio.SoundManager 3.9 /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/CarbonSound.framework/Versions/A/CarbonSound 0x9328f000 - 0x9329cfff com.apple.CommonPanels 1.2.2 (73) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/CommonPanels.framework/Versions/A/CommonPanels 0x932a5000 - 0x935b3fff com.apple.HIToolbox 1.4.10 (???) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox 0x93703000 - 0x9370ffff com.apple.opengl 1.4.7 /System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL 0x93714000 - 0x93734fff com.apple.DirectoryService.Framework 3.3 /System/Library/Frameworks/DirectoryService.framework/Versions/A/DirectoryService 0x93787000 - 0x93787fff com.apple.Cocoa 6.4 (???) 
/System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa 0x93789000 - 0x93dbcfff com.apple.AppKit 6.4.10 (824.48) /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit 0x94149000 - 0x941bbfff com.apple.CoreData 91 (92.1) /System/Library/Frameworks/CoreData.framework/Versions/A/CoreData 0x941f4000 - 0x942b9fff com.apple.audio.toolbox.AudioToolbox 1.4.7 /System/Library/Frameworks/AudioToolbox.framework/Versions/A/AudioToolbox 0x9430c000 - 0x9430cfff com.apple.audio.units.AudioUnit 1.4 /System/Library/Frameworks/AudioUnit.framework/Versions/A/AudioUnit 0x9430e000 - 0x944cefff com.apple.QuartzCore 1.4.12 /System/Library/Frameworks/QuartzCore.framework/Versions/A/QuartzCore 0x94518000 - 0x94555fff libsqlite3.0.dylib /usr/lib/libsqlite3.0.dylib 0x9455d000 - 0x945adfff libGLImage.dylib /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.dylib 0x945b6000 - 0x945cffff com.apple.CoreVideo 1.4.2 /System/Library/Frameworks/CoreVideo.framework/Versions/A/CoreVideo 0x9477d000 - 0x9478cfff libCGATS.A.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCGATS.A.dylib 0x94794000 - 0x947a1fff libCSync.A.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCSync.A.dylib 0x947a7000 - 0x947c6fff libPDFRIP.A.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libPDFRIP.A.dylib 0x947e7000 - 0x94800fff libRIP.A.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libRIP.A.dylib 0x94807000 - 0x94b3afff com.apple.QuickTime 7.6.4 (1327.73) /System/Library/Frameworks/QuickTime.framework/Versions/A/QuickTime 0x94c22000 - 0x94c93fff libstdc++.6.dylib /usr/lib/libstdc++.6.dylib 0x94e09000 - 0x94f39fff com.apple.AddressBook.framework 4.0.6 (490) /System/Library/Frameworks/AddressBook.framework/Versions/A/AddressBook 0x94fcc000 - 0x94fdbfff com.apple.DSObjCWrappers.Framework 1.1 /System/Library/PrivateFrameworks/DSObjCWrappers.framework/Versions/A/DSObjCWrappers 0x94fe3000 - 0x95010fff com.apple.LDAPFramework 1.4.1 (69.0.1) /System/Library/Frameworks/LDAP.framework/Versions/A/LDAP 0x95017000 - 0x95027fff libsasl2.2.dylib /usr/lib/libsasl2.2.dylib 0x9502b000 - 0x9505afff libssl.0.9.7.dylib /usr/lib/libssl.0.9.7.dylib 0x9506a000 - 0x95087fff libresolv.9.dylib /usr/lib/libresolv.9.dylib 0x9acff000 - 0x9ad1dfff com.apple.OpenTransport 2.0 /System/Library/PrivateFrameworks/OpenTransport.framework/OpenTransport 0x9ad98000 - 0x9ad99fff com.apple.iokit.dvcomponentglue 1.7.9 /System/Library/Frameworks/DVComponentGlue.framework/Versions/A/DVComponentGlue 0x9b1db000 - 0x9b1f2fff libCFilter.A.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCFilter.A.dylib 0x9c69b000 - 0x9c6bdfff libmx.A.dylib /usr/lib/libmx.A.dylib 0xeab00000 - 0xeab25fff libConverter.dylib /System/Library/Printers/Libraries/libConverter.dylib Model: PowerMac3,1, BootROM 4.2.8f1, 1 processors, PowerPC G4 (3.3), 1.3 GHz, 2 GB Graphics: ATI Radeon 7500, ATY,RV200, AGP, 32 MB Memory Module: DIMM0/J21, 512 MB, SDRAM, PC133-333 Memory Module: DIMM1/J22, 512 MB, SDRAM, PC133-333 Memory Module: DIMM2/J23, 512 MB, SDRAM, PC133-333 Memory Module: DIMM3/J24, 512 MB, SDRAM, PC133-333 Modem: Spring, UCJ, V.90, 3.0F, APPLE VERSION 0001, 4/7/1999 Network 
Service: Built-in Ethernet, Ethernet, en0 PCI Card: SeriTek/1V2E2 v.5.1.3,11/22/05, 23:47:18, ata, SLOT-B PCI Card: pci-bridge, pci, SLOT-C PCI Card: firewire, ieee1394, 2x8 PCI Card: usb, usb, 2x9 PCI Card: usb, usb, 2x9 PCI Card: pcie55,2928, 2x9 PCI Card: ATTO,ExpressPCIPro, scsi, SLOT-D Parallel ATA Device: MATSHITADVD-ROM SR-8585 Parallel ATA Device: IOMEGA ZIP 100 ATAPI USB Device: Hub, Up to 12 Mb/sec, 500 mA USB Device: Hub, Up to 12 Mb/sec, 500 mA USB Device: USB2.0 Hub, Up to 12 Mb/sec, 500 mA USB Device: iMic USB audio system, Griffin Technology, Inc, Up to 12 Mb/sec, 500 mA USB Device: USB Storage Device, Generic, Up to 12 Mb/sec, 500 mA USB Device: USB2.0 MFP, EPSON, Up to 12 Mb/sec, 500 mA USB Device: DYMO LabelWriter Twin Turbo, DYMO, Up to 12 Mb/sec, 500 mA USB Device: USB 2.0 CD + HDD, DMI, Up to 12 Mb/sec, 500 mA USB Device: USB2.0 Hub, Up to 12 Mb/sec, 500 mA USB Device: USB2.0 Hub, Up to 12 Mb/sec, 500 mA USB Device: iMate, USB To ADB Adaptor, Griffin Technology, Inc., Up to 1.5 Mb/sec, 500 mA USB Device: Hub in Apple Pro Keyboard, Alps Electric, Up to 12 Mb/sec, 500 mA USB Device: Griffin PowerMate, Griffin Technology, Inc., Up to 1.5 Mb/sec, 100 mA

    Read the article

  • Announcing SonicAgile – An Agile Project Management Solution

    - by Stephen.Walther
    I’m happy to announce the public release of SonicAgile – an online tool for managing software projects. You can register for SonicAgile at www.SonicAgile.com and start using it with your team today. SonicAgile is an agile project management solution designed to help teams of developers coordinate their work on software projects. SonicAgile supports creating backlogs, scrumboards, and burndown charts. It includes support for acceptance criteria, story estimation, calculating team velocity, and email integration. In short, SonicAgile includes all of the tools that you need to coordinate work on a software project, get stuff done, and build great software. Let me discuss each of the features of SonicAgile in more detail.

    SonicAgile Backlog
    You use the backlog to create a prioritized list of user stories such as features, bugs, and change requests. Basically, all future work planned for a product should be captured in the backlog. We focused our attention on designing the user interface for the backlog. Because the main function of the backlog is to prioritize stories, we made it easy to prioritize a story by just dragging and dropping it from one location to another. We also wanted to make it easy to add stories from the product backlog to a sprint backlog. A sprint backlog contains the stories that you plan to complete during a particular sprint. To add a story to a sprint, you just drag the story from the product backlog to the sprint backlog. Finally, we made it easy to track team velocity – the average amount of work that your team completes in each sprint. Your team’s average velocity is displayed in the backlog, and when you add too many stories to a sprint – in other words, you attempt to take on too much work – you are warned automatically.

    SonicAgile Scrumboard
    Every workday, your team meets to have their daily scrum. During the daily scrum, you can use the SonicAgile Scrumboard to see (at a glance) what everyone on the team is working on. For example, a scrumboard might show that Stephen is working on the Fix Gravatar Bug story and that Pete and Jane have finished working on the Product Details Page story. Every story can be broken into tasks. For example, to create the Product Details Page, you might need to create database objects, do page design, and create an MVC controller. You can use the Scrumboard to track the state of each task. A story can also have acceptance criteria which clarify the requirements for the story to be done – for example, you can specify the acceptance criteria for the Product Details Page story. You cannot close a story – and remove the story from the list of active stories on the scrumboard – until all tasks and acceptance criteria associated with the story are done.

    SonicAgile Burndown Charts
    You can use burndown charts to track your team’s progress. SonicAgile supports Release Burndown, Sprint Burndown by Task Estimates, and Sprint Burndown by Story Points charts. In a Sprint Burndown by Story Points chart, the downward slope shows the progress of the team in closing stories; the vertical axis represents story points and the horizontal axis represents time.

    Email Integration
    SonicAgile was designed to improve your team’s communication and collaboration. Most stories and tasks require discussion to nail down exactly what work needs to be done. The most natural way to discuss stories and tasks is through email. However, you don’t want these discussions to get lost.
    When you use SonicAgile, all email discussions concerning a story or a task (including all email attachments) are captured automatically. At any time in the future, you can view all of the email discussion concerning a story or a task by opening the Story Details dialog.

    Why We Built SonicAgile
    We built SonicAgile because we needed it for our team. Our consulting company, Superexpert, builds websites for financial services, startups, and large corporations. We have multiple teams working on multiple projects. Keeping on top of all of the work that needs to be done to complete a software project is challenging. You need a good sense of what needs to be done, who is doing it, and when the work will be done. We built SonicAgile because we wanted a lightweight project management tool which we could use to coordinate the work that our team performs on software projects.

    How We Built SonicAgile
    We wanted SonicAgile to be easy to use, highly scalable, and have a highly interactive client interface. SonicAgile is very close to being a pure Ajax application. We built SonicAgile using ASP.NET MVC 3, jQuery, and Knockout. We would not have been able to build such a complex Ajax application without these technologies. Almost all of our MVC controller actions return JSON results (while developing SonicAgile, I would have given my left arm to be able to use the new ASP.NET Web API). The controller actions are invoked from jQuery Ajax calls from the browser (a rough sketch of this pattern appears at the end of this post). We built SonicAgile on Windows Azure. We are taking advantage of SQL Azure, Table Storage, and Blob Storage. Windows Azure enables us to scale very quickly to handle whatever demand is thrown at us.

    Summary
    I hope that you will try SonicAgile. You can register at www.SonicAgile.com (there’s a free 30-day trial). The goal of SonicAgile is to make it easier for teams to get more stuff done, work better together, and build amazing software. Let us know what you think!
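    The controller-action-returning-JSON pattern mentioned above looks roughly like this (a minimal sketch, not SonicAgile’s actual code; the controller, repository, and route are invented for illustration):

    using System.Web.Mvc;

    // ASP.NET MVC 3: an action that returns JSON for the client to bind with Knockout.
    public class StoriesController : Controller
    {
        private readonly IStoryRepository _repository; // hypothetical repository

        public StoriesController(IStoryRepository repository)
        {
            _repository = repository;
        }

        public ActionResult Backlog()
        {
            var stories = _repository.GetBacklog();
            // AllowGet is required when serving JSON in response to GET requests.
            return Json(stories, JsonRequestBehavior.AllowGet);
        }
    }

    // On the client, jQuery pulls the JSON into a Knockout observable array:
    //   $.getJSON('/Stories/Backlog', function (data) { viewModel.stories(data); });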

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 3

    - by rajbk
    We continue building our report in this three part series.

    Creating an ASP.NET report using Visual Studio 2010 - Part 1
    Creating an ASP.NET report using Visual Studio 2010 - Part 2

    Adding the ReportViewer control and filter drop downs. Open the source code for index.aspx and add a ScriptManager control. This control is required for the ReportViewer control. Add a DropDownList for the categories and suppliers, then add the ReportViewer control. The markup after these steps is shown below.

    <div>
    <asp:ScriptManager ID="smScriptManager" runat="server">
    </asp:ScriptManager>
    <div id="searchFilter">
    Filter by:
    Category : <asp:DropDownList ID="ddlCategories" runat="server" />
    and
    Supplier : <asp:DropDownList ID="ddlSuppliers" runat="server" />
    </div>
    <rsweb:ReportViewer ID="rvProducts" runat="server">
    </rsweb:ReportViewer>
    </div>

    The design view for index.aspx is shown below. The dropdowns will display the categories and suppliers in the database; changing a selection in either drop down will cause the report to be filtered by those selections. You will see how to do this in the next steps.

    Attach the RDLC to the ReportViewer control by clicking on the top right of the control, going to ReportViewer Tasks, and selecting Products.rdlc. Resize the ReportViewer control by dragging at the bottom right corner. I set mine to 800px x 500px. You can also set this value in source view.

    Defining the data sources. We will now define the data source used to populate the report. Go back to the “ReportViewer Tasks” and select “Choose Data Sources”. Select “New data source…”, select “Object”, and name your data source ID “odsProducts”. In the next screen, choose “ProductRepository” as your business object, then choose “GetProductsProjected” in the screen after that. The method requires a SupplierID and CategoryID. We will set these so that our data source gets the values from the drop down lists we defined earlier. Set the parameter source to be of type “Control” and set the ControlIDs to be ddlSuppliers and ddlCategories respectively. Your screen will look like this:

    We are now going to define the data source for our drop downs. Select the ddlCategories drop down and pick “Choose Data Source”. Pick “Object” and give it an ID of “odsCategories”. In the next screen, choose “ProductRepository”, then select the GetCategories() method. Select “CategoryName” and “CategoryID” in the next screen. We are done defining the data source for the Category drop down. Perform the same steps for the Suppliers drop down.

    Select each dropdown and set AppendDataBoundItems to true and AutoPostBack to true. AppendDataBoundItems is needed because we are going to insert an “All” list item with a value of empty. Go to each drop down and add this list item markup as shown below.

    Finally, double-click on each drop down in the designer and add the following code in the code behind. This, along with the AutoPostBack="true" attribute, refreshes the report any time a drop down is changed.

    protected void ddlCategories_SelectedIndexChanged(object sender, EventArgs e)
    {
        rvProducts.LocalReport.Refresh();
    }

    protected void ddlSuppliers_SelectedIndexChanged(object sender, EventArgs e)
    {
        rvProducts.LocalReport.Refresh();
    }

    Compile your report and run the page. You should see the report rendered. Note that the tool bar in the ReportViewer control gives you a couple of options, including the ability to export the data to Excel, PDF or Word.
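    The “All” list item for each drop down looks something like this (a reconstruction rather than the article’s verbatim markup; the empty Value is what lets the selection mean “no filter”):

    <asp:DropDownList ID="ddlCategories" runat="server"
        AppendDataBoundItems="true" AutoPostBack="true"
        OnSelectedIndexChanged="ddlCategories_SelectedIndexChanged">
        <asp:ListItem Text="All" Value="" />
    </asp:DropDownList>

    And the wizard-generated data source should come out roughly like the following (the parameter names are assumptions based on the GetProductsProjected signature described above; by default a ControlParameter converts the empty string from the “All” item to null before the method is called):

    <asp:ObjectDataSource ID="odsProducts" runat="server"
        TypeName="ProductRepository" SelectMethod="GetProductsProjected">
        <SelectParameters>
            <asp:ControlParameter Name="supplierId" ControlID="ddlSuppliers" PropertyName="SelectedValue" />
            <asp:ControlParameter Name="categoryId" ControlID="ddlCategories" PropertyName="SelectedValue" />
        </SelectParameters>
    </asp:ObjectDataSource>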
    Conclusion
    Through this three part series, we did the following: created a data layer for use by our RDLC; created an RDLC using the report wizard and defined a dataset for the report; used the report design surface to design our report, including adding a chart; used the ReportViewer control to attach the RDLC; connected our ReportViewer to a data source and took parameter values from the drop down lists; and used AutoPostBack to refresh the report when a dropdown selection changed. RDLCs allow you to create interactive reports including drill downs and grouping. For even more advanced reports you can use Microsoft® SQL Server™ Reporting Services with RDLs. With RDLs, the report is rendered on the report server instead of the web server. Another nice thing about RDLs is that you can define a parameter list for the report and it gets rendered automatically for you. RDLCs and RDLs both have their advantages and it’s best to compare them and choose the right one for your requirements. Download the VS2010 RTM sample project: NorthwindReports.zip

    Alfred Borden: Are you watching closely?

    Read the article

  • How to Sync Specific Folders With Dropbox

    - by Matthew Guay
    Would you like to sync specific folders with Dropbox instead of automatically syncing all of your folders to all of your computers? Here’s how, using the Selective Sync feature available in the latest beta version of Dropbox.

    Dropbox is a great tool for keeping your important files synced between your computers, and we have covered many interesting things you can do with your Dropbox account. But until now, there was no way to sync only certain folders with each computer; it was all or nothing. This could be frustrating if you wanted to store large files from one computer but didn’t want them on a computer with a smaller hard drive. The latest beta version of Dropbox allows you to selectively choose which folders to sync between computers. Please note: this feature is currently only available in the 0.8 beta version of Dropbox.

    Set up the new Beta
    Download the new beta version of Dropbox 0.8 (link below), choosing the correct download for your system, and run the installer as normal. It only took a couple of seconds to install, though it made the taskbar disappear briefly at the end of the installation in our tests. Strangely, the installer doesn’t let you know it’s finished installing; if you already had a previous version of Dropbox installed, it will simply start working from your system tray as before. If this is a new installation of Dropbox, you will be asked to enter your Dropbox account info or create a new account.

    Selectively Sync Folders
    By default, Dropbox will still sync all of your Dropbox folders to all of your computers. Once this beta is installed, you can choose individual folders or subfolders you don’t want to sync. Right-click the Dropbox icon in your system tray and select Preferences. Click the Advanced tab at the top, and then click the new Selective Sync button. Now uncheck any folders you don’t want to sync to this computer. These folders will still exist on your other machines and in the Dropbox web interface, but they will not be downloaded to this computer. The default view only shows the top-level folders in your Dropbox account. If you wish to sync certain folders but exclude their subfolders, click the Switch to Advanced View button. Expand any folder and uncheck any subfolders you don’t want to sync. Notice that the parent folder’s check box is now filled, showing that it is partially synced. Click OK when you’ve made the changes you want. Dropbox will then make sure you know these folders will stop syncing to this computer; click OK again if you’re sure you don’t want to sync these folders. Dropbox will clean up your folder and remove the files and folders you don’t want synced. Next time you open your Dropbox folder, you’ll notice that the folders we unchecked are no longer in this computer’s Dropbox folder. They are still in our online Dropbox account, and on any other computers we’re syncing with. If you add a new folder with the same name as a folder you stopped syncing, you’ll notice a grey minus icon over the folder. This folder will not sync with your other computers or your online Dropbox account. If you want to add these folders back to this computer’s Dropbox, just repeat the steps, this time checking the folders you want to sync. If you have any folders that were not syncing before, their names will have (Selective Sync Conflict) added to the end, and they will sync with all of your computers.

    Conclusion
    We’re excited that we can now choose exactly which folders we want synced on each computer.
    Since everything is still synced with the online Dropbox, we can still access any of the folders from anywhere. This makes your Dropbox much more versatile, and can help you keep the folders synced exactly the way you want.

    Links: Download the new Dropbox 0.8.64 beta · Sign up for Dropbox

    Read the article

  • SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

    - by Pinal Dave
    At conferences and at speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked. The question is: “Why is SQL Server using up all the memory and not releasing it even when idle?” Well, the answer can be long, and with the release of SQL Server 2014 this got even more complicated. SQL Server 2014 introduces the option of In-Memory OLTP, which is a completely new concept, and our dependency on memory has increased multifold. In reality, nothing much changes, but we now additionally have memory-optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to get a hang of all the innovations and the new features introduced in the next version. So today’s blog is around the report added to SSMS which gives a high-level view of this new feature. This report is available only from SQL Server 2014 onwards because the feature was introduced in SQL Server 2014; earlier versions of SQL Server Management Studio would not show the report in the list. If we try to launch the report on a database which does not have an In-Memory filegroup defined, then we would see a message in the report. To demonstrate, I have created a fresh database called MemoryOptimizedDB with no special filegroup. Here is the query used to identify whether a database has a memory-optimized filegroup or not:

    SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX'

    Once we add a filegroup using the command below, we would see a different version of the report.

    USE [master]
    GO
    ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA
    GO

    The report is still empty because we have not defined any memory-optimized table in the database. Total allocated size is shown as 0 MB. Now, let’s add the file location into the filegroup and also create a few in-memory tables. We have used the nomenclature of IMO to denote “In-Memory Optimized” objects.

    USE [master]
    GO
    ALTER DATABASE [MemoryOptimizedDB] ADD FILE
    ( NAME = N'MemoryOptimizedDB_IMO',
      FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO')
    TO FILEGROUP [IMO_FG]
    GO

    You may have to change the path based on your SQL Server configuration. Below is the script to create the table.

    USE MemoryOptimizedDB
    GO
    --Drop table if it already exists.
    IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL
    DROP TABLE dbo.SQLAuthority
    GO
    CREATE TABLE dbo.SQLAuthority
    (
    ID INT IDENTITY NOT NULL,
    Name CHAR(500) COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal',
    CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID),
    INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
    GO

    As soon as the above script is executed, both the table and the index are created. If we run the report again, we would see something like below. Notice that table memory is zero but the index is using memory. This is due to the fact that a hash index needs memory to manage the buckets created, so even if the table is empty the index consumes memory. More about the internals of how in-memory indexes and tables work will be reserved for future posts. Now, use the script below to populate the table with 10,000 rows:

    INSERT INTO SQLAuthority VALUES (DEFAULT)
    GO 10000

    Here is the same report after inserting 10,000 rows into our in-memory table. There are three sections in the whole report:
    - Total memory consumed by In-Memory objects
    - A pie chart showing memory distribution based on type of consumer – table, index, and system
    - Details of memory usage by each table

    The information for all three is taken from a single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system In-Memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query.

    USE MemoryOptimizedDB
    GO
    SELECT OBJECT_NAME(OBJECT_ID), *
    FROM sys.dm_db_xtp_table_memory_stats
    WHERE OBJECT_ID > 0
    GO

    This report helps the DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer for designing a solution. I am sure that in the future we will discuss the whole concept of In-Memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP series at Balmukund’s blog. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Memory, SQL Reports
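    If you want the actual per-table numbers behind the report’s pie chart, the same DMV exposes them directly. Here is a slightly fuller sketch of the query above (the column names are from the documented definition of sys.dm_db_xtp_table_memory_stats; adjust to taste):

    USE MemoryOptimizedDB
    GO
    -- Memory allocated/used per user table and its indexes, largest first.
    SELECT OBJECT_NAME(object_id) AS table_name,
           memory_allocated_for_table_kb,
           memory_used_by_table_kb,
           memory_allocated_for_indexes_kb,
           memory_used_by_indexes_kb
    FROM sys.dm_db_xtp_table_memory_stats
    WHERE object_id > 0 -- positive IDs are user tables; system tables are negative
    ORDER BY memory_allocated_for_table_kb DESC
    GO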

    Read the article

  • Using LogParser - part 2

    - by fatherjack
    Sample files: PersonAddress.csv · SalesOrderDetail.tsv

    In part 1 of this series we downloaded and installed LogParser and used it to list data from a CSV file. That was a good start, and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate. If we take the query from part 1 and add a value for the output parameter as -o:datagrid, so that the query becomes

    LOGPARSER "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid

    and run that, we get a different result: a pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn’t specify the columns we wanted returned by LogParser (we used SELECT *), it has added two columns to the recordset – filename and rownumber. This behaviour can be very useful, as we will see in future parts of this series. You can click Next 10 rows or All rows, or close the datagrid once you are finished reviewing the data.

    You may have noticed that the files I am working with are different file types – one is a csv (comma separated values) and the other is a tsv (tab separated values). If you want to convert a file from one to the other then LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv':

    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv

    Those familiar with SQL will not have to make a very big leap of faith to make adjustments to the above query to filter records in or out of the source file. Let’s get all the records from the same file where the Order Quantity (OrderQty) is more than 25:

    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv

    Or we could find all those records where the Order Quantity is equal to 25 and output them to an XML file:

    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml

    All the standard comparison operators are to be found in LogParser: >, <, =, LIKE, BETWEEN, OR, NOT, AND.

    Input and Output file formats
    LogParser has a pretty impressive list of file formats that it can parse, and a good selection of output formats that will let you generate output in a form usable by whatever process or application you may be using.

    It can read any of these input formats:
    IISW3C: parses IIS log files in the W3C Extended Log File Format.
    IIS: parses IIS log files in the Microsoft IIS Log File Format.
    BIN: parses IIS log files in the Centralized Binary Log File Format.
    IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format.
    HTTPERR: parses HTTP error log files generated by Http.sys.
    URLSCAN: parses log files generated by the URLScan IIS filter.
    CSV: parses comma-separated values text files.
    TSV: parses tab-separated and space-separated values text files.
    XML: parses XML text files.
    W3C: parses text files in the W3C Extended Log File Format.
    NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats.
    TEXTLINE: returns lines from generic text files.
    TEXTWORD: returns words from generic text files.
    EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files).
    FS: returns information on files and directories.
    REG: returns information on registry values.
    ADS: returns information on Active Directory objects.
    NETMON: parses network capture files created by NetMon.
    ETW: parses Enterprise Tracing for Windows trace log files and live sessions.
    COM: provides an interface to Custom Input Format COM Plugins.

    ...and write any of these output formats:
    NAT: formats output records as readable tabulated columns.
    CSV: formats output records as comma-separated values text.
    TSV: formats output records as tab-separated or space-separated values text.
    XML: formats output records as XML documents.
    W3C: formats output records in the W3C Extended Log File Format.
    TPL: formats output records following user-defined templates.
    IIS: formats output records in the Microsoft IIS Log File Format.
    SQL: uploads output records to a table in a SQL database.
    SYSLOG: sends output records to a Syslog server.
    DATAGRID: displays output records in a graphical user interface.
    CHART: creates image files containing charts.

    So, you can query data from any of the input types and really easily get it into a format that is usable by whatever process or application you may be using. To a DBA or network administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources and specifically outputting to SQL format. See you there!
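    As a taste of those input formats, the EVT format lets you query the Windows Event Log with exactly the same SQL syntax. This one is my own quick sketch rather than an example from the article (TimeGenerated, SourceName, EventID, and Message are fields documented for the EVT input format):

    logparser "SELECT TOP 10 TimeGenerated, SourceName, EventID, Message FROM System" -i:EVT -o:DATAGRID

    Swap System for Application or Security to read the other logs, or swap -o:DATAGRID for -o:CSV (adding an INTO clause naming a file) to save the results instead of viewing them.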

    Read the article

  • Create Chemistry Equations and Diagrams in Word

    - by Matthew Guay
    Microsoft Word is a great tool for formatting text, but what if you want to insert a chemistry formula or diagram? Thanks to a new free add-in for Word, you can now insert high-quality chemistry formulas and diagrams directly from the Ribbon in Word.

    Microsoft’s new Education Labs has recently released the new Chemistry Add-in for Word 2007 and 2010. This free download adds support for entering and editing chemistry symbols, diagrams, and formulas using the standard XML-based Chemical Markup Language. You can convert any chemical name, such as benzene, or formula, such as H2O, into a chemical diagram, standard name, or formula. Whether you’re a professional chemist, just taking chemistry in school, or simply curious about the makeup of citric acid, this add-in is an exciting way to bring chemistry to your computer. This add-in works great on Word 2007 and 2010, including the 64-bit version of Word 2010. Please note that the current version is still in beta, so only run it if you are comfortable running beta products.

    Getting Started
    Download the Chemistry add-in from Microsoft Education Labs (link below), and unzip the file. Then, run ChemistryAddinforWordBeta2.Setup.msi. It may inform you that you need to install the Visual Studio Tools for Office 3.0. Simply click Yes to download these tools. This will open the download in your default browser. Simply click Run, or save and then run it when it is downloaded. Now, click Next to install the Visual Studio Tools for Office as usual. When this is finished, run ChemistryAddinforWordBeta2.Setup.msi again. This time, you can easily install it with the default options. Once it’s finished installing, open Word to try out the Chemistry add-in. You will be asked if you want to install this customization, so click Install to enable it. Now you will have a new Chemistry tab in your Word ribbon, in both Word 2010 and Word 2007.

    Using the Chemistry Add-in
    It’s very easy to insert nice chemistry diagrams and formulas in Word with the Chemistry add-in. You can quickly insert a premade diagram from the Chemistry Gallery, or you can insert a formula from file: simply click “From File” and choose any Chemical Markup Language (.cml) formatted file to insert the chemical formula. You can also convert any chemical name to its chemical form. Simply select the word, right-click, select “Convert to Chemistry Zone” and then click on its name. Now you can see the chemical form in the sidebar if you click the Chemistry Navigator button, and can choose to insert the diagram into the document. Some chemicals will automatically convert to the diagram in the document, while others simply link to it in the sidebar. Either way, you can display exactly what you want. You can also convert a chemical formula directly to its chemical diagram. Here we entered H2O and converted it to a Chemistry Zone, which converted it directly to the diagram in the document. You can click the Edit button at the top, and from there choose to either edit the 2D model of the chemical or edit the labels. When you click Edit Labels, you may be asked which form you wish to display; here are the options for potassium permanganate. You can then edit the names and formulas, and add or remove any you wish. If you choose to edit the chemical in 2D, you can even edit the individual atoms and change the chemical you’re diagramming. This 2D editor has a lot of options, so you can get your chemical diagram to look just like you want.
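    For reference, the .cml files mentioned above are plain XML in Chemical Markup Language. A minimal molecule (water, hand-written here as an illustration, not a file shipped with the add-in) looks roughly like this:

    <molecule id="water">
      <atomArray>
        <atom id="a1" elementType="O"/>
        <atom id="a2" elementType="H"/>
        <atom id="a3" elementType="H"/>
      </atomArray>
      <bondArray>
        <bond atomRefs2="a1 a2" order="1"/>
        <bond atomRefs2="a1 a3" order="1"/>
      </bondArray>
    </molecule>

    Save something like that with a .cml extension and the “From File” button described above should be able to pull it in as a diagram.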
    And, if you need any help or want to learn more about the Chemistry add-in and its features, simply click the Help button in the Chemistry ribbon. This will open a Word document containing examples and explanations which can be helpful in mastering all the features of this add-in. All of this works perfectly whether you’re running it in Word 2007 or 2010, in the 32 or 64-bit editions.

    Conclusion
    Whether you’re using chemistry formulas every day or simply want to investigate a chemical makeup occasionally, this is a great way to do it with tools you already have on your computer. It will also help make homework a bit easier if you’re struggling with it in high school or college.

    Links: Download the Chemistry Add-in for Word · Introducing Chemistry Add-in for Word (MSDN blogs) · Chemical Markup Language (Wikipedia)

    Read the article

  • Edit Media Center TV Recordings with Windows Live Movie Maker

    - by DigitalGeekery
Have you ever wanted to take a TV program you've recorded in Media Center and remove the commercials or save clips of favorite scenes? Today we'll take a look at editing WTV and DVR-MS files with Windows Live Movie Maker. Download and install Windows Live Movie Maker (the download link can be found at the end of the article). WLMM is part of Windows Live Essentials, but you can choose to install only the applications you want. You'll also want to be sure to uncheck any unwanted settings, like setting Bing as your default search provider or MSN as your browser home page. Add your recorded TV file to WLMM by clicking the Add videos and photos button, or by dragging and dropping it onto the storyboard. You'll see your video displayed in the Preview window on the left and on the storyboard. Adjust the Zoom Time Scale slider at the lower right to change the level of detail displayed on the storyboard. You may want to start zoomed out and zoom in for more detailed edits.

Removing Commercials or Unwanted Sections

Note: Changes and edits made in Windows Live Movie Maker do not change or affect the original video file. To accomplish this, we will make cuts, or "splits," at the beginning and end of the section we want to remove, and then delete that section from our project. Click and drag the slider bar along the storyboard to scroll through the video. When you get to the end of a row on the storyboard, drag the slider down to the beginning of the next row. We've found it easiest and most accurate to get close to the end of the commercial break and then use the Play button and the Previous Frame and Next Frame buttons underneath the Preview window to fine-tune your cut point. When you find the right place to make your first cut, click the Split button on the Edit tab on the ribbon. You will see your video "split" into two sections. Now, repeat the process of scrolling through the storyboard to find the end of the section you wish to cut. When you are at the proper point, click the Split button again. Now we'll delete that section by selecting it and pressing the Delete key, selecting Remove on the Home tab, or right-clicking on the section and selecting Remove.

Trim Tool

This tool allows you to select a portion of the video to keep while trimming away the rest. Click and drag the sliders in the preview window to select the area you want to keep. The area outside the sliders will be trimmed away; the area inside is the section that is kept in the movie. You can also adjust the Start and End points manually on the ribbon. Delete any additional clips you don't want in the final output. You can also accomplish this by using the Set start point and Set end point buttons: clicking Set start point will eliminate everything before the start point, and Set end point will eliminate everything after the end point. And you're left with only the clip you want to keep.

Output your Video

Select the icon at the top left, then select Save movie. All of these settings will output your movie as a WMV file, but file size and quality will vary by setting. The Burn to DVD option also outputs a WMV file, but then opens Windows DVD Maker and prompts you to create and burn a DVD.

Conclusion

WLMM is one of the few applications that can edit WTV files, and it's the only one we're aware of that's free.
We should note that only WTV and DVR-MS files will appear in the Recorded TV library in Media Center, so if you want to view your WMV output file in WMC you'll need to add it to the Video or Movie library. Would you like to learn more about Windows Live Movie Maker? Check out our article on how to turn photos and home videos into movies with Windows Live Movie Maker. Need to add videos from a network location? WLMM doesn't allow this by default, but you can check out how to add network support to Windows Live Movie Maker.

Download Windows Live

    Read the article

  • World Record Oracle Business Intelligence Benchmark on SPARC T4-4

    - by Brian
Oracle's SPARC T4-4 server, configured with four SPARC T4 3.0 GHz processors, delivered the first and best result of 25,000 concurrent users on the Oracle Business Intelligence Enterprise Edition (BI EE) 11g benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 10. A SPARC T4-4 server running Oracle Business Intelligence Enterprise Edition 11g achieved 25,000 concurrent users with an average response time of 0.36 seconds with the Oracle BI server cache set to ON. The benchmark data clearly shows that the underlying hardware, the SPARC T4 server, and the Oracle BI EE 11g (11.1.1.6.0 64-bit) platform scale within a single system, supporting 25,000 concurrent users while executing 415 transactions/sec. The benchmark demonstrated the scalability of Oracle Business Intelligence Enterprise Edition 11g 11.1.1.6.0, which was deployed in a vertical scale-out fashion on a single SPARC T4-4 server. Oracle Internet Directory configured on the SPARC T4 server provided authentication for the 25,000 Oracle BI EE users with sub-second response time. A SPARC T4-4 with internal Solid State Drives (SSD) using the ZFS file system showed significant I/O performance improvement over traditional disk for the Web Catalog activity. In addition, ZFS helped get past the UFS limitation of 32,767 sub-directories in a Web Catalog directory. The multi-threaded 64-bit Oracle Business Intelligence Enterprise Edition 11g and the SPARC T4-4 server proved to be a successful combination, providing sub-second response times for end-user transactions and consuming only half of the available CPU resources at 25,000 concurrent users, leaving plenty of headroom for increased load. The Oracle Business Intelligence on SPARC T4-4 server benchmark results demonstrate that comprehensive BI functionality built on a unified infrastructure with a unified business model yields best-in-class scalability, reliability and performance. Oracle BI EE 11g is a newer version of the Business Intelligence Suite with richer and superior functionality. Results produced with the Oracle BI EE 11g benchmark are not comparable to results with the Oracle BI EE 10g benchmark. Oracle BI EE 11g is a more difficult benchmark to run, exercising more features of Oracle BI.

Performance Landscape

Results for the Oracle BI EE 11g version of the benchmark (not comparable to the Oracle BI EE 10g version):

  System                                   Number of Users   Response Time (sec)
  1 x SPARC T4-4 (4 x SPARC T4 3.0 GHz)    25,000            0.36

Results for the Oracle BI EE 10g version of the benchmark (not comparable to the Oracle BI EE 11g version):

  System                                     Number of Users
  2 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz)    50,000
  1 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz)    28,000

Configuration Summary

Hardware Configuration: SPARC T4-4 server; 4 x SPARC T4 processors, 3.0 GHz; 128 GB memory; 4 x 300 GB internal SSD.
Storage Configuration: Sun ZFS Storage 7120; 16 x 146 GB disks.
Software Configuration: Oracle Solaris 10 8/11; Oracle Solaris Studio 12.1; Oracle Business Intelligence Enterprise Edition 11g (11.1.1.6.0); Oracle WebLogic Server 10.3.5; Oracle Internet Directory 11.1.1.6.0; Oracle Database 11g Release 2.

Benchmark Description

Oracle Business Intelligence Enterprise Edition (Oracle BI EE) delivers a robust set of reporting, ad-hoc query and analysis, OLAP, dashboard, and scorecard functionality with a rich end-user experience that includes visualization, collaboration, and more.
The Oracle BI EE benchmark test used five different business user roles: Marketing Executive, Sales Representative, Sales Manager, Sales Vice-President, and Service Manager. These roles included a maximum of 5 different pre-built dashboards. Each dashboard page had an average of 5 reports, a mix of charts, tables and pivot tables, returning anywhere from 50 rows to approximately 500 rows of aggregated data. The test scenario also included drill-down into multiple levels from a table or chart within a dashboard. The benchmark test scenario uses a typical business user sequence of dashboard navigation, report viewing, and drill-down. For example, a Service Manager logs into the system using the Service Manager role and navigates to his own set of dashboards. The BI user selects the Service Effectiveness dashboard, which shows four distinct reports, Service Request Trend, First Time Fix Rate, Activity Problem Areas, and Cost Per Completed Service Call, spanning 2002 to 2005. The user then proceeds to view the Customer Satisfaction dashboard, which also contains a set of 4 related reports, and drills down on some of the reports to see the detail data. The BI user continues to view more dashboards – Customer Satisfaction and Service Request Overview, for example. After navigating through those dashboards, the user logs out of the application. The benchmark test is executed against a full production version of the Oracle Business Intelligence 11g Applications with a fully populated underlying database schema. The business processes in the test scenario closely represent a real-world customer scenario.

See Also

SPARC T4-4 Server oracle.com OTN
Oracle Business Intelligence oracle.com OTN
Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN
WebLogic Suite oracle.com OTN
Oracle Solaris oracle.com OTN

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • Team Leaders & Authors - Manage and Report Workflow using "Print an Outline" in UPK

    - by [email protected]
Did you know you can "print an outline?" You can print any outline or portion of an outline. Why might you want to "print an outline" in UPK? Have you ever wondered how many topics you have recorded, how many of your topics are ready for review, or, even better, how many topics are complete? Do you need to report your project status to management? Maybe you just like to have a copy of your outline to refer to during development. Included in this output is the outline structure as well as the layout defined in the Details View of the Outline Editor. To print an outline, you must open either a module or section in the Outline Editor. A set of default data columns is automatically included in the output; however, you can configure which columns you want to appear in the report by switching to the Details view and customizing the columns. (To learn more about customizing your columns refer to the Add and Remove Columns section of the Content Development.pdf guide.)

To print an outline from the Outline Editor:
1. Open a module or section document in the Outline Editor.
2. Expand the documents to display the details that you want included in the report.
3. On the File menu, choose Print and use the toolbar icons to print, view, or save the report to a file.

Personally, I opt to save my outline in Microsoft Excel. Using the delivered features of Microsoft Excel you can add columns of information, such as development notes, to your outline, or you can graph and chart your project status. As mentioned above, you can configure which columns you want to appear in the outline. When utilizing the Print an Outline feature in conjunction with the Managing Workflow features of the UPK multi-user instance, you as a team lead or author can better report project status. Read more about Managing Workflow below.

Managing Workflow:

The Properties toolpane contains special properties that allow authors to track document status or State as well as assign Document Ownership.

Assign Content State

The State property is an editable property for communicating the status of a document. This is particularly helpful when collaborating with other authors in a development team. Authors can assign a state to documents from the master list defined by the administrator. The default list of States includes (blank), Not Started, Draft, In Review, and Final. Administrators can customize the list by adding, deleting or renaming the values.

To assign a State value to a document:
1. Make sure you are working online.
2. Display the Properties toolpane.
3. Select the document(s) to which you want to assign a state. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
4. In the Workflow category, click in the State cell.
5. Select a value from the list.

Assign Document Ownership

In many enterprises, multiple authors often work together developing content in a team environment. Team leaders typically handle large projects by assigning specific development responsibilities to authors. The Owner property allows team leaders and authors to assign documents to themselves and other authors to track who is responsible for a specific document. You view and change document assignments for a document using the Owner property in the Properties toolpane.

To assign a document owner:
1. Make sure you are working online.
2. On the View menu, choose Properties.
3. Select the document(s) to which you want to assign document responsibility.
Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
4. In the Workflow category, click in the Owner cell.
5. Select a name from the list.

Is anyone out there already using this feature? Share your ideas with the group. Those of you new to this feature, give it a test drive and let us know what you think. - Kathryn Lustenberger, Oracle UPK & Tutor Outbound Product Management

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
What is PerformancePoint Services?

Often, the metrics that make up your key performance indicators are not simple values from a data source. In PerformancePoint Server 2007, you could create two kinds of KPI metrics: simple single-value metrics from any supported data source, or complex multiple-value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us see what it is. PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved along with numerous enhancements and additional functionality.

New PerformancePoint Services features

PerformancePoint Services can now utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework.

New features and enhancements of SharePoint 2010 PerformancePoint Services:

• With PerformancePoint Services functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You can also include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models that simplify access to report data.

• The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports, and ultimately in dashboards.

• You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks.
• Better Time Intelligence filtering capabilities that you can use to create dynamic time filters that are always up to date. Other improved filters make it easier for dashboard users to quickly focus on the information that is most relevant.

• The ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web Parts on the same page.

• Easier authoring and publishing of dashboard items using Dashboard Designer.

• SQL Server Analysis Services 2008 support.

• Increased support for accessibility compliance in individual reports and scorecards.

• The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web Part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web Part can be added to PerformancePoint dashboards or any SharePoint Server page.

• Analytic reports that help you better understand the underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting.

To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn't include this feature. There are still many choices in the SharePoint family of products, including SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007 and associated free SharePoint web parts and templates.

    Read the article

  • Review of ComponentOne Silverlight Controls (Free License Giveaway).

    - by mbcrump
ComponentOne has several great products that target Silverlight developers. One of them is their Silverlight Controls and the other is the XAP Optimizer. I decided that I would check out the controls and the XAP Optimizer and feature them on my blog. After talking with ComponentOne, they agreed to take part in my monthly Silverlight giveaway. The details are listed below:

Win a FREE developer's license of ComponentOne Silverlight Controls + XAP Optimizer! (The winner also gets a license to Silverlight Spy.) A random winner will be announced on March 1st, 2011! To be entered into the contest, do the following things: Subscribe to my feed. Leave a comment below with a valid email account (I WILL NOT share this info with anyone). Retweet the following: I just entered to win free #Silverlight controls from @mbcrump and @ComponentOne http://mcrump.me/fTSmB8 ! Don't change the URL, because this allows me to track the users that tweet this page. Don't forget to visit ComponentOne because they made this possible. MichaelCrump.Net provides Silverlight giveaways every month. You can also see the latest giveaway by bookmarking http://giveaways.michaelcrump.net.

Before we get started with the Silverlight Controls, here are a couple of links to bookmark: the live demos of the Silverlight Controls are located here, and the XAP Optimizer page is here. One thing that I liked about the help documentation is that you can grab a PDF that contains documentation for just a single control. This allows you to get the information you need without going through several hundred pages. You can also download the full documentation from their site.

ComponentOne Silverlight Controls

I recently built a hobby project and decided to use ComponentOne Silverlight Controls. The main reason for this is that the controls are heavily documented, they look great, and getting help was just a tweet or forum click away. So, the first question that you may ask is, "What is included?" Here is the official list; I wanted to show several of the controls that I think developers will use the most.

1) ComponentOne's Image Control – Display animated GIF images on your Silverlight pages as you would in traditional Web apps. Add attractive visuals with minimal effort.
2) HTML Host – Render HTML and arbitrary URI content from within Silverlight.
3) Chart3D – Create 3D surface charts with options for contour levels, zones, a chart legend and more.
4) PDFViewer – View PDF files in Silverlight!

That is just a fraction of the controls available. If you want to check out several of them in a "real" application, then check out my Silverlight page at http://michaelcrump.info. This brings me to the second part of the giveaway. The XAP Optimizer is designed to reduce the size of your XAP file; it also includes built-in obfuscation and signing. For my personal project, I decided to use the XAP Optimizer by ComponentOne. It was so easy to use: you basically give it your .XAP file and it provides an output file. If you prefer to prune unused references yourself, you can do so by selecting the manual option below. I went ahead and added obfuscation just to try it out, and it worked great.
You may notice from the screenshot below that I only obfuscated the assemblies that I built. Anyone can grab the other DLLs off the net, so we have no reason to obfuscate them. You also have the option to automatically sign your .xap with the SN.exe tool. So how did it turn out? Well, I reduced my XAP size from 2.4 to 1.8 with a single click of a button, and I added obfuscation with another click.

Screenshot of no obfuscation on my XAP file. Screenshot of obfuscation on my XAP file with XAP Optimizer.

So, with two button clicks, I reduced my XAP file and obfuscated my assembly. What else could you want? Well, they provide a nice HTML report that gives you an optimization summary. So what if you don't want to launch this tool every time you deploy a Silverlight application? The official documentation provides a way to do it in your build event in Visual Studio: click the Build Events tab on the left side of the Properties window, then enter the following command in the Post-build event command line:

  $Program Files\ComponentOne\XapOptimizer\XapOptimizer.exe /cmd /p:$(ProjectDir)$(ProjectName).xoproj

In the end, this is a great product. I love code that I don't have to write and utilities that just work. ComponentOne delivers with both the Silverlight Controls and the XAP Optimizer. Don't forget to leave a comment below in order to win a set of the controls! Subscribe to my feed

    Read the article

  • SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

    - by Pinal Dave
From June to mid-July, it was the fever of sports. First it was Wimbledon tennis, and then soccer fever was all over. There are huge numbers of fan followers, and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England in the latter part of July. Following these sports as the events unfold to the finals, there are a number of ways the statisticians can slice and dice the numbers. Taking a cue from soccer, I can surely say there is team performance against another team, and then there is how an individual member fares against a particular opponent. Such statistics give us a fair idea of how teams have fared against each other in the past or the recent past: head-to-head stats during the World Cup and during other neutral-venue games. All these statistics are just pointers. In reality, they don't reflect the calibre of the current team, because the individuals who performed in each of these games are totally different (a typical example being the Brazil vs. Germany semi-final match in FIFA 2014). So at times these numbers are misleading. It is worth investigating to get the next level of information.

Similar to these statistics, SQL Server Management Studio is also equipped with a number of reports, like a) the Object Execution Statistics report and b) the Batch Execution Statistics report. As discussed in the example, the team scorecard is like the Batch Execution Statistics, and individual stats are like Object Level Statistics. The analogy can be taken only this far; trust me, there is no correlation between SQL Server functioning and playing sports – it is like I think about diet all the time except while I am eating.

Performance – Batch Execution Statistics

Let us view the first report, which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics. Most of the values that are displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle). This report contains 3 distinctive sections, as outlined below.

Section 1: A graphical bar-graph representation of Average CPU Time, Average Logical Reads and Average Logical Writes for individual batches. The batch numbers are indicative, and the details of each batch are available in section 3 (detailed below).

Section 2: A pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%). This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided the plan is available in the cache.

Section 3: This is the section where we can find the SQL statements associated with each of the batch numbers. It also gives us the details of Average CPU / Average Logical Reads and Average Logical Writes in the system for the given batch, with object details. Expanding the rows, I will also get the # Executions and # Plans Generated for each of the queries.

Performance – Object Execution Statistics

The second report worth a look is Object Execution Statistics. This is a similar report to the previous one, but turned on its head by SQL Server objects. The report has 3 areas to look at, as above. Section 1 gives the Average CPU and Average IO bar charts for specific objects. Section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects. The final section details the various objects with the Avg. CPU, IO and other details, which are self-explanatory.
At a high level, both reports are based on queries against two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text) and build their values from calculations over the columns in them:

  SELECT *
  FROM sys.dm_exec_query_stats s1
  CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
  WHERE s2.objectid IS NOT NULL
    AND DB_NAME(s2.dbid) IS NOT NULL
  ORDER BY s1.sql_handle;

(A slightly expanded sketch of this query appears at the end of this excerpt.) This is one of the simplest forms of report, and in future blogs we will look at more complex reports. I truly hope that these reports can give DBAs and developers a hint about possible performance tuning areas. As a closing point I must emphasize that all the above reports pick up data from the plan cache. If a particular query consumed a lot of resources earlier, but its plan is not available in the cache, none of the above reports will show that bad query. Reference: Pinal Dave (http://blog.sqlauthority.com)
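For readers who want averages like those the report shows without opening SSMS, a query along these lines gets close (a minimal sketch based on the same two DMVs; the TOP clause and column selection are illustrative, not the report's exact query):

  -- Approximate the report's per-query averages from the plan cache.
  -- Averages are derived from the DMV's running totals.
  SELECT TOP (10)
         s2.text                                      AS query_text,
         s1.execution_count,
         s1.total_worker_time    / s1.execution_count AS avg_cpu_time_us,
         s1.total_logical_reads  / s1.execution_count AS avg_logical_reads,
         s1.total_logical_writes / s1.execution_count AS avg_logical_writes
  FROM sys.dm_exec_query_stats AS s1
  CROSS APPLY sys.dm_exec_sql_text(s1.sql_handle) AS s2
  ORDER BY s1.total_worker_time DESC;

As with the reports themselves, this only sees queries whose plans are currently in the plan cache.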

    Read the article

  • The Case of the Extra Page: Rendering Reporting Services as PDF

    - by smisner
I had to troubleshoot a problem with a mysterious extra page appearing in a PDF this week. My first thought was that it was likely caused by one of the most common problems people encounter when developing reports that eventually get rendered as PDF: blank pages inserted into the PDF document. The cause of the blank pages is usually related to sizing. You can learn more at Understanding Pagination in Reporting Services in Books Online. When designing a report, you have to be really careful with the layout of items in the body. As you move items around, the body will expand to accommodate the space you're using, and you might eventually tighten everything back up again, but the body doesn't automatically collapse. One of my favorite things to do in Reporting Services 2005 – which I dubbed the "vacu-pack" method – was to just erase the size property of the Body and let it auto-calculate the new size, squeezing out all the extra space. Alas, that method no longer works beginning with Reporting Services 2008. Even when you make sure the body size is as small as possible (with no unnecessary extra space along the top, bottom, left, or right side of the body), it's important to calculate body size plus header plus footer plus margins and ensure that the calculated height and width do not exceed the report's height and width (shown as the page in the illustration above). This won't matter if users always render reports online, but they'll get extra pages in a PDF document if the report's height and width are smaller than the calculated space.

Beginning the Investigation

In the situation that I was troubleshooting, I checked the properties:

  Item         Property             Value
  Body         Height               6.25in
               Width                10.5in
  Page Header  Height               1in
  Page Footer  Height               0.25in
  Report       Left Margin          0.1in
               Right Margin         0.1in
               Top Margin           0.05in
               Bottom Margin        0.05in
               Page Size - Height   8.5in
               Page Size - Width    11in

So I calculated the total width using Body Width + Left Margin + Right Margin and came up with a value of 10.7 inches. And then I calculated the total height using Body Height + Page Header Height + Page Footer Height + Top Margin + Bottom Margin and got 7.6 inches. Page sizing couldn't be the reason for the extra page in my report, because 10.7 inches is smaller than the report's width of 11 inches and 7.6 inches is smaller than the report's height of 8.5 inches. I had to look elsewhere to find the culprit.

Conducting the Third Degree

My next thought was to focus on the rendering size of the items in the report. I've adapted my problem to use the Adventure Works database. At the top of the report are two charts, and below each chart is a rectangle that contains a table. In the real-life scenario, there were some graphics present as a background for the tables, which fit within rectangles about 3 inches high, so the visual space of the rectangles matched the visual space of the charts – also about 3 inches high. But there was also a huge amount of white space at the bottom of the page, and, as I mentioned at the beginning of this post, a second page which was blank except for the footer that appeared at the bottom. Placing a textbox beneath the rectangles to see if it would appear on the first page resulted in the textbox's appearance on the second page. For some reason, the rectangles wanted a buffer zone beneath them. What's going on?

Taking the Suspect into Custody

My next step was to see what was really going on with the rectangle.
The graphic appeared to be correctly sized, but the behavior in the report indicated the rectangle was growing. So I added a border to the rectangle to see what it was doing. With the borders visible, I could see that the size of each rectangle was growing to accommodate the table it contains. The rectangle on the right is slightly larger than the one on the left because the table on the right contains an extra row. The rectangle is trying to preserve the whitespace that appears in the layout, as shown below.

Closing the Case

Now that I knew what the problem was, what could I do about it? Because of the graphic in the rectangle (not shown), I couldn't eliminate the use of the rectangles and just show the tables. But fortunately, there is a report property that comes to the rescue: ConsumeContainerWhitespace (accessible only in the Properties window). I set the value of this property to True. Problem solved. Now the rectangles remain fixed at the configured size and don't grow vertically to preserve the whitespace. Case closed.

    Read the article

  • Oracle Business Intelligence Advanced - Hands-on Workshop for Partners - 18 to 21 January

    - by Claudia Costa
Workshop Description

This FREE hands-on workshop highlights the strengths of OBIEE 11g by giving attendees hands-on experience with the BI 11g product. OBIEE 11g has adopted the standardized infrastructure of Fusion Middleware to provide robust server capability along with highly anticipated advanced visualization components like maps, Flash-based charts, scorecards and KPIs. This workshop focuses on new features and infrastructure components for BI practitioners who are familiar with either OBIEE 10g or previous BI releases. After taking this course, Oracle Business Intelligence 11g Advanced, you will gain insight into OBIEE 11g technology, reporting solutions and new features. The workshop provides opportunities to practice in an OBIEE 11g environment through hands-on activities. Participants will gain an in-depth understanding of the new architecture of OBIEE 11g, the security model, and installation/configuration, as well as reporting aspects like the new ROLAP/MOLAP-style hierarchical browsing, new chart types, the Action Framework and advanced visualization. If you are a Business Intelligence practitioner familiar with BI 10g, you cannot afford to miss this 3-day workshop. Register Now!

Presentations: Business Intelligence EE (OBIEE) 11g: Advanced Workshop
·         OBIEE 11g Overview
·         OBIEE 11g Architecture and Infrastructure
·         OBIEE 11g Installation, Configuration and Monitoring
·         OBIEE 11g Security Model and BI Components
·         OBIEE 11g Homepage Overview
·         New Visualizations: Master-Detail Events, Charts, Hierarchies
·         Report Building with OBIEE 11g and Catalog Management
·         Spatial Integration, Action Framework, Scorecards
·         OBIEE 11g Dashboards
·         OBIEE Integration Options

Lab Outline: Oracle Business Intelligence (OBIEE) 11g: Advanced Workshop

The labs cover OBIEE core functionality through hands-on activities and are based on an Oracle VirtualBox image with software and training samples pre-installed. This advanced course has a few labs that are optional during the workshop, to allow students to practice them on their own. The primary purpose of the workshop is to provide expertise in 11g features and infrastructure changes from 10g. Labs will allow you to explore concepts to:
·         Have a clear understanding of the OBIEE 11g architecture
·         Have a clear understanding of the OBIEE differentiators
·         OBIEE 11g Security Model
·         OBIEE 11g Environment Management
·         Report Building with OBIEE 11g
·         OBIEE 11g Dashboard and Homepage Environment
·         New Visualization features
·         Management of Reports, Dashboards and BI Catalog Objects

Audience
·         Business Intelligence Evangelists
·         Business Intelligence Application Developers or Consultants
·         Data Warehouse Developers
·         Enterprise Architects
·         Industry Solutions Architects

Prerequisites
·         Experience with and understanding of OBIEE 10g is required.
·         Good understanding of data modeling for reporting purposes
·         Strong experience with database technologies preferred

Equipment Requirements

This workshop requires attendees to provide their own laptops. Attendee laptops must meet the following minimum hardware/software requirements. The OBIEE 11g environment requires at least 3 GB of RAM (4 GB preferred), without which students will not be able to complete the labs. The workshop environment includes a VM image as well as software components that students will install on their laptops for the labs.
·         Minimum 3 GB RAM, 25 GB free disk space
·         Internet Explorer 7
·         VirtualBox (the latest version), downloadable from http://www.virtualbox.org
·         WINRAR or 7zip, downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/

Attendees will be given a VirtualBox image for the Oracle BI 11g Workshop containing the software along with the required toolset, database and data sets for the labs.

Agenda

This class duration is 3 days.
9:00am: Sign-in and technical setup
9:30am: Workshop starts
5:00pm: Workshop ends

Location: Hotel Holiday Inn Express - Porto Salvo - Lisboa

This class is free. Register early to confirm a seat!

Oracle BI Advanced 11g Hands-on Workshop - Schedule (Register Now!)
January 11-13, 2011: Kista, Sweden
January 18-20, 2011: Lisbon, Portugal
March 1-3, 2011: Reading, Berkshire, UK
March 15-17, 2011: Colombes, Paris, France
March 29-31, 2011: Amsterdam, Netherlands

Questions? For registration questions please send an email to [email protected]. For additional information, please contact Claudia Costa, phone: 214235027, or by email.

    Read the article

  • Memory Efficient Windows SOA Server

    - by Antony Reynolds
Installing a Memory-Efficient SOA Suite 11.1.1.6 on Windows Server

Well, 11.1.1.6 is now available for download, so I thought I would build a Windows Server environment to run it. I will minimize the memory footprint of the installation by putting all functionality into the Admin Server of the SOA Suite domain.

Required Software

64-bit JDK. SOA Suite: if you want 64-bit, choose "Generic" rather than "Microsoft Windows 32bit JVM" or "Linux 32bit JVM". This has links to all the required software. If you choose "Generic" then the Repository Creation Utility link does not show; you still need this, so change the platform to "Microsoft Windows 32bit JVM" or "Linux 32bit JVM" to get the software. Similarly, if you need a database, change the platform to get the link to XE for Windows or Linux. If possible I recommend installing a 64-bit JDK, as this allows you to assign more memory to individual JVMs. Windows XE will work, but it is better if you can use a full Oracle database because of the limitations on XE that sometimes cause it to run out of space with large or multiple SOA deployments.

Installation Steps

The following flow chart outlines the steps required to install and configure SOA Suite. The steps in the diagram are explained below.

64-bit? Is a 64-bit installation required? The Windows & Linux installers will install 32-bit versions of the Sun JDK and JRockit. A separate JDK must be installed for 64-bit.

Install 64-bit JDK. The 64-bit JDK can be either HotSpot or JRockit. You can choose either JDK 1.7 or 1.6.

Install WebLogic. If you are using 64-bit, install WebLogic using "java –jar wls1036_generic.jar". Make sure you include Coherence in the installation; the easiest way to do this is to accept the "Typical" installation.

SOA Suite Required? If you are not installing SOA Suite, you can jump straight ahead and create a WebLogic domain.

Install SOA Suite. Run the SOA Suite installer and point it at the existing Middleware Home created for WebLogic. Note: to run the SOA installer on Windows, the user must have admin privileges. I also found that on Windows Server 2008 R2 I had to start the installer from a command prompt with administrative privileges; granting it privileges when it ran caused it to ignore the jreLoc parameter.

Database Available? Do you have access to a database into which you can install the SOA schema? SOA Suite requires access to an Oracle database (it is supported on other databases, but I would always use an Oracle database).

Install Database. I use an 11gR2 Oracle database to avoid XE limitations. Make sure that you set the database character set to Unicode (AL32UTF8). I also disabled the new security settings because they get in the way for a developer database. Don't forget to check that the number of processes is at least 150 and the number of sessions is not set, or is set to at least 200 (in the DB init parameters).

Run RCU. The SOA Suite database schemas are created by running the Repository Creation Utility. Install the "SOA and BPM Infrastructure" component to support SOA Suite. If you keep the schema prefix as "DEV", the config wizard is easier to complete.

Run Config Wizard. The config wizard creates the domain which hosts the WebLogic server instances. To get a minimum-footprint SOA installation, choose the "Oracle Enterprise Manager" and "Oracle SOA Suite for developers" products. All other required products will be automatically selected.
The "for developers" installs target the appropriate components at the AdminServer rather than creating a separate managed server to house them. This reduces the number of JVMs required to run the system and hence the amount of memory required. This is not suitable for anything other than a developer environment, as it mixes the admin and runtime functions together in a single server. It also takes a long time to load all the required modules, making start-up a slow process. If it exists, I would recommend running the config wizard found in the "oracle_common/common/bin" directory under the Middleware home. This should have access to all the templates, including SOA. If you also want to run BAM in the same JVM as everything else, you need to "Select Optional Configuration" for "Managed Servers, Clusters and Machines". To target BAM at the AdminServer, delete the "bam_server1" managed server that is created by default. This will result in BAM being targeted at the AdminServer.

Installation Issues

I had a few problems when I came to test everything in my mega-JVM. The following applications were not targeted, so I needed to target them at the AdminServer:
b2bui
composer
Healthcare UI
FMW Welcome Page Application (11.1.0.0.0)

How Memory Efficient is It?

On a Windows 2008 R2 server running under VirtualBox, I was able to bring up both the 11gR2 database and SOA/BPM/BAM in 3 GB of memory. I allocated a minimum of 512 MB to the PermGen and a minimum of 1.5 GB for the heap. The settings from setSOADomainEnv are shown below:

  set DEFAULT_MEM_ARGS=-Xms1536m -Xmx2048m
  set PORT_MEM_ARGS=-Xms1536m -Xmx2048m
  set DEFAULT_MEM_ARGS=%DEFAULT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m
  set PORT_MEM_ARGS=%PORT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m

I arrived at these numbers by monitoring JVM memory usage in JConsole. Task Manager showed total system memory usage at 2.9 GB – just below the 3 GB I allocated to the VM. Performance is not stellar, but it runs, and I could run JDeveloper alongside it on my 8 GB laptop, so in that sense it was a result!

    Read the article

  • Create Advanced Panoramas with Microsoft Image Composite Editor

    - by Matthew Guay
Do you enjoy making panoramas with your pictures, but want more features than tools like Live Photo Gallery offer?  Here's how you can create amazing panoramas for free with the Microsoft Image Composite Editor. Yesterday we took a look at creating panoramic photos in Windows Live Photo Gallery. Today we take a look at a free tool from Microsoft that gives you more advanced features to create your own masterpiece.

Getting Started

Download Microsoft Image Composite Editor from Microsoft Research (link below), and install as normal.  Note that there are separate versions for 32 & 64-bit editions of Windows, so make sure to download the correct one for your computer. Once it's installed, you can proceed to create awesome panoramas and extremely large image combinations with it.  Microsoft Image Composite Editor integrates with Live Photo Gallery, so you can create more advanced panoramic pictures directly.  Select the pictures you want to combine, click Extras in the menu bar, and select Create Image Composite. You can also create a photo stitch directly from Explorer.  Select the pictures you want to combine, right-click, and select Stitch Images… Or, simply launch the Image Composite Editor itself and drag your pictures into its editor.  Either way you start an image composition, the program will automatically analyze and combine your images.  This application is optimized for multiple cores, and we found it much faster than other panorama tools such as Live Photo Gallery. Within seconds, you'll see your panorama in the top preview pane. From the bottom of the window, you can choose a different camera motion, which will change how the program stitches the pictures together.  You can also quickly crop the picture to the size you want, or use Automatic Crop to have the program select the maximum area with a continuous picture. Here's how our panorama looked when we switched the Camera Motion to Planar Motion 2. But the real tweaking comes in when you adjust the panorama's projection and orientation.  Click the box button at the top to change these settings. The panorama is now overlaid with a grid, and you can drag the corners and edges of the panorama to change its shape.  Or, from the Projection button at the top, you can choose different projection modes. Here we've chosen Cylinder (Vertical), which entirely removed the warp on the walls in the image.  You can pan around the image and get the part you find most important in the center.  Click the Apply button at the top when you're finished making changes, or click Revert if you want to switch to the default view settings. Once you've finished your masterpiece, you can export it easily to common photo formats from the Export panel on the bottom.  You can choose to scale the image or set it to a maximum width and height as well.  Click Export to disk to save the photo to your computer, or select Publish to Photosynth to post your panorama online. Alternately, from the File menu you can choose to save the panorama as a .spj file.  This preserves all of your settings in the Image Composite Editor so you can edit it further in the future if you wish.

Conclusion

Whether you're trying to capture the inside of a building or a tall tree, the extra tools in Microsoft Image Composite Editor let you make nicer panoramas than you ever thought possible.  We found the final results surprisingly accurate to the real buildings and objects, especially after tweaking the projection modes.
This tool can be both fun and useful, so give it a try and let us know what you've found it useful for. Works with 32 & 64-bit versions of XP, Vista, and Windows 7.

Link: Download Microsoft Image Composite Editor

    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
Submitted By: Rahul Kamath

Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, revamping organization structures to drive business transformation and operational efficiencies, restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

What is reference data? How does it relate to master data?

Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and the geo hierarchy are examples of reference data changes.1

The following table contains an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards.

  Types & Codes | Taxonomies | Relationships / Mappings | Standards
  Transaction Codes | Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS) | Product / Segment; Product / Geo | Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601)
  Lookup Tables (e.g., Gender, Marital Status, etc.) | Product Categories | City → State → Postal Codes | Currency Codes (e.g., ISO)
  Status Codes | Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense) | Customer / Market Segment; Business Unit / Channel | Country Codes (e.g., ISO 3166, UN)
  Role Codes | Market Segments | Country Codes / Currency Codes / Financial Accounts | Date/Time, Time Zones (e.g., ISO 8601)
  Domain Values | Universal Standard Products and Services Classification (UNSPSC), eCl@ss | International Classification of Diseases (ICD), e.g., ICD9 → ICD10 mappings | Tax Rates

Why manage reference data?

Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation for analyzing transaction data. Further, mapping reference data often requires human judgment.

Sample Use Cases of Reference Data Management

Healthcare: Diagnostic Codes

The reference data challenges in the healthcare industry offer a case in point.
Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings – either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon. (A minimal sketch of such a cross-walk follows at the end of this excerpt.)

Spend Management: Product, Service & Supplier Codes

Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

Change Management: Point of Sale Transaction Codes and Product Codes

In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

Multi-National Companies: Industry Classification Schemes

As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.

References

1. Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
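Reference data cross-walks such as the ICD-9 to ICD-10 mapping described above are often persisted as simple, slowly changing mapping tables. A minimal SQL sketch (all table and column names here are hypothetical, for illustration only):

  -- Hypothetical cross-walk table for ICD-9 -> ICD-10 mappings.
  CREATE TABLE icd9_icd10_xwalk (
      icd9_code       VARCHAR(10) NOT NULL,
      icd10_code      VARCHAR(10) NOT NULL,
      mapping_type    VARCHAR(20) NOT NULL,  -- e.g. 'exact', 'approximate', 'one-to-many'
      effective_from  DATE        NOT NULL,
      effective_to    DATE,                  -- NULL while the mapping is current
      PRIMARY KEY (icd9_code, icd10_code, effective_from)
  );

  -- "Cross-walking" a utilization report backwards from ICD-10 to ICD-9:
  SELECT x.icd9_code,
         COUNT(*) AS diagnosis_count
  FROM   fact_diagnosis f
  JOIN   icd9_icd10_xwalk x
         ON x.icd10_code = f.diagnosis_code
  GROUP BY x.icd9_code;

The effective dating is what lets downstream reports run against either code set for a chosen reporting time horizon.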

    Read the article

  • Short Season, Long Models - Dealing with Seasonality

    - by Michel Adar
Accounting for seasonality presents a challenge for the accurate prediction of events. Examples of seasonality include:
·         Boxed cosmetics sets are more popular during Christmas. They sell at other times of the year, but they rise higher than other products during the holiday season.
·         Interest in a promotion rises around the time advertising on TV airs.
·         Interest in the Sports section of a newspaper rises when there is a big football match.

There are several ways of dealing with seasonality in predictions.

Time Windows

If the length of the model time windows is short enough relative to the seasonality effect, then the models will see only seasonal data and will therefore be accurate in their predictions. For example, a model with a weekly time window may be quick enough to adapt during the holiday season. In order for time windows to be useful in dealing with seasonality it is necessary that:
·         The time window is significantly shorter than the season changes
·         There is enough volume of data in the short time windows to produce an accurate model

An additional issue to consider is that sometimes the season may have an abrupt end, for example the day after Christmas.

Input Data

If available, it is possible to include the seasonality effect in the input data for the model. For example, the customer record may include a list of all the promotions advertised in the area of residence. A model with these inputs will have to learn the effect of the input. It is possible to learn it specific to the promotion – and, by the way, learn about inter-promotion cross-feeding – by leaving the list of ads as it is; or it is possible to learn the general effect by having a flag that indicates if the promotion is being advertised. For inputs to properly represent the effect in the model, it is necessary that the model sees enough events with the input present – for example, by virtue of the model lifetime (or time window) being long enough to see several "seasons", or by having enough volume for the model to learn seasonality quickly.

Proportional Frequency

If we create a model that ignores seasonality, it is possible to use that model to predict how a specific person's likelihood differs from average. If we have a divergence from average, then we can transfer that divergence proportionally to the observed frequency at the time of the prediction.

Definitions:
·         Ft = trailing average frequency of the event at time "t". The average is done over a suitable period to achieve a statistically significant estimate.
·         F = average frequency as seen by the model.
·         L = likelihood predicted by the model for a specific person.
·         Lt = predicted likelihood proportionally scaled for time "t".

If the model is good at predicting deviation from average, and this holds over the interesting range of seasons, then we can estimate Lt as:

  Lt = L * (Ft / F)

Considering that:

  L = (L – F) + F

Substituting we get:

  Lt = [(L – F) + F] * (Ft / F)

Which simplifies to:

  (i)  Lt = (L – F) * (Ft / F)  +  Ft

This latest expression can be understood as: "The adjusted likelihood at time t is the average likelihood at time t plus the effect from the model, which is calculated as the difference from average times the proportion of frequencies." The formula above assumes a linear translation of the proportion.
It is possible to generalize the formula using a factor, which we will call "a", as follows:

(ii) Lt = (L - F) * (Ft / F) * a + Ft

It is also possible to use a formula that does not scale the difference, like:

(iii) Lt = (L - F) * a + Ft

While these formulas seem reasonable, they should be taken as hypotheses to be proven with empirical data. A theoretical analysis provides the following insights:
- The Cumulative Gains Chart (lift) should stay the same, as at any given time the order of the likelihoods for different customers is preserved.
- If F is equal to Ft then the formula reverts to L.
- If Ft = 0 then Lt in (i) and (ii) is 0.
- It is possible for Lt to be above 1.

If it is desired to avoid going over 1, for relatively high base frequencies it is possible to use a relative interpretation of the multiplicative factor. For example, if we say that Y is twice as likely as X, then we can interpret this sentence as:
- If X is 3%, then Y is 6%.
- If X is 11%, then Y is 22%.
- If X is 70%, then Y is 85% -- in this case we interpret "twice as likely" as "half as likely to not happen".

Applying this reasoning to (i), for example, we would get:

If (L < F) or (Ft < 1 / ((L / F) + 1))
Then Lt = L * (Ft / F)
Else Lt = 1 - (F / L) + (Ft * F / L)
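
Expressed as code, the piecewise adjustment above is compact. The following is a minimal sketch in C# of formula (i) with the bounded "half as likely to not happen" branch; the class and method names and the argument validation are assumptions added for illustration, not part of the original article:

    using System;

    static class SeasonalAdjustment
    {
        // Scales a model likelihood L to time t using the trailing frequency Ft
        // and the model-average frequency F, per formula (i); switches to the
        // "half as likely to not happen" interpretation to keep Lt within [0, 1].
        public static double Adjust(double l, double f, double ft)
        {
            if (f <= 0) throw new ArgumentOutOfRangeException(nameof(f), "F must be positive");

            // Note: L * (Ft / F) is algebraically identical to
            // (L - F) * (Ft / F) + Ft, i.e. formula (i).
            if (l < f || ft < 1.0 / ((l / f) + 1.0))
                return l * (ft / f);
            else
                return 1.0 - (f / l) + (ft * f / l);
        }
    }

    // Example: L = 6%, F = 3%, Ft = 9%  =>  Lt = 18% (proportionally scaled).
    // Console.WriteLine(SeasonalAdjustment.Adjust(0.06, 0.03, 0.09));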

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble
This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

Why Columnstore?
As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows -- DW, reporting, aggregations, grouping, scans, etc. -- SQL Server has never had a good mechanism -- until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

The Customer
DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

The App: DevCon Security Reporting: Optimized & Ad Hoc Queries
DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

SSRS, SSAS, & MDX
Conventional relational structures were unable to provide adequate performance for user interaction with the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

Ad Hoc Queries
Even though the fact table is relatively small -- only 22 million rows & 33GB -- the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

The Details
Classic vs. columnstore before-&-after metrics are impressive:

Scenario        Conventional Structures             Columnstore      Improvement
SSRS via SSAS   10 - 12 seconds                     1 second         >10x
Ad Hoc          5 - 7 minutes (300 - 420 seconds)   1 - 2 seconds    >100x

Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.
The Wins
Performance: Even prior to columnstore implementation, at 10 - 12 seconds canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data -- no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.

DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

Summary
For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, canned queries included, and for which results for ad hoc queries dropped from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
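
For readers who want to try the same approach, the general shape of the change is a single piece of DDL. The sketch below is illustrative only -- the table and column names are invented, not DevCon's actual schema -- and is shown executed from C# for consistency with the other code on this page. Note that in SQL Server 2012 a nonclustered columnstore index makes its table read-only, which is compatible with the nightly partition-switch refresh described above:

    using System.Data.SqlClient;

    class ColumnstoreSetup
    {
        // Hypothetical fact table & columns. In SQL Server 2012 a nonclustered
        // columnstore index renders the underlying table read-only, so it is
        // typically (re)built inside the nightly maintenance window.
        const string Ddl = @"
            CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactEvents
            ON dbo.FactEvents (EventDate, SiteId, SensorId, EventTypeId, DurationSeconds);";

        public static void Apply(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(Ddl, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery(); // runs the DDL against the DW back end
            }
        }
    }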

    Read the article

  • The Enterprise is a Curmudgeon

    - by John K. Hines
    Working in an enterprise environment is a unique challenge. There's a lot more to software development than developing software. A project lead or Scrum Master has to manage personalities and intra-team politics, has to manage accomplishing the task at hand while creating the opportunities and a reputation for handling desirable future work, and has to create a competent, happy team that actually delivers while being careful not to burn bridges or hurt feelings outside the team. Which makes me surprised to read advice like: "The enterprise should figure out what is likely to work best for itself and try to use it." - Ken Schwaber, The Enterprise and Scrum. The enterprises I have experience with are fundamentally unable to be self-reflective. It's like asking a Roman gladiator if he'd like to carve out a little space in the arena for some silent meditation. I'm currently wondering how compatible Scrum is with the top-down hierarchy of life in a large organization -- specifically, manufacturing-mindset, fixed-release, harmony-valuing large organizations. Now I understand why Agile can be a better fit for companies without much organizational inertia. Recently I've talked with nearly two dozen software professionals and their managers about Scrum and Agile. I've become convinced that a developer, team, organization, or enterprise can be Agile without using Scrum. But I'm not sure what process would be the best fit, in general, for an enterprise that wants to become Agile. It's possible I should read more than just the introduction to Ken's book. I do feel prepared to answer some of the questions I had asked in a previous post:

How can Agile practices (including but not limited to Scrum) be adopted in situations where the highest-placed managers in a company demand software within extremely aggressive deadlines?
Answer: In a very limited capacity at the individual level. The situation here is that the senior management of this company values any software release more than it values developer well-being, end-user experience, or software quality. Only if the developing organization is given an immediate refactoring opportunity does this sort of development make sense to a person who values sustainable software.

How can Agile practices be adopted by teams that do not perform a continuous cycle of new development, such as those whose sole purpose is to reproduce and debug customer issues?
Answer: It depends. For Scrum in particular, I don't believe Scrum is meant to manage unpredictable work. While you can easily adopt XP practices for bug fixing, the project-management aspects of Scrum require some predictability. My question here was meant toward those who want to apply Scrum to non-development teams. In some cases it works, in others it does not.

How can a team measure if its development efforts are both Agile and employ sound engineering practices?
Answer: I'm currently leaning toward measuring these independently. The Agile Principles are a terrific way to measure if a software team is agile. Sound engineering practices are those practices which help developers meet the principles. I think Scrum is being mistakenly applied as an engineering practice when it is essentially a project management practice. In my opinion, XP and Lean are examples of good engineering practices.

How can Agile be explained in an accurate way that describes its benefits to sceptical developers and/or revenue-focused non-developers?
Answer: Agile techniques will result in higher-quality, lower-cost software development. This comes primarily from finding defects earlier in the development cycle. If there are individual developers who do not want to collaborate, write unit tests, or refactor, then they are simply developers who are either working in an area where adding these techniques will not add value (i.e., they are experts) or who are satisfied with the status quo. In the first case they should be left alone. In the second case, the results of Agile should be demonstrated by other developers who are willing to receive recognition for their efforts. It all comes down to individuals, doesn't it? If you're working in an organization whose Agile adoption consists exclusively of Scrum, consider ways to form individual Agile teams to demonstrate its benefits. These can even be virtual teams that span people across org-chart boundaries. Once you can measure real value, whether it's Scrum, Lean, or something else, people will follow. Even the curmudgeons.

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
    I recently had the experience of being involved in a "test" Scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons that I learnt from it.

Don't let the Project Manager be the Product Owner
The first lesson learnt is to identify the correct product owner -- in this instance the project manager assumed the role of the product owner, which was a mistake. The product owner is the one who has the most to lose if the project fails. With a methodology that advocates removing the role of the project manager from the process, it is not in the interests of the person who is employed as a project manager to be the product owner -- in fact they have the most to gain should the project fail.

Know the time commitments of team members to the project
The second lesson learnt is to get a firm time commitment from the members of the team for the sprint and to hold them to it. In this project many of the issues we faced came from team members having to double up on supporting existing projects/systems and the Scrum project. In many situations they just didn't get round to doing any work on the Scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team -- in stand-up, team members would say they had done some work but would be very vague on how much time they had actually spent, using the black hole of their other legacy projects as an excuse. Putting up a time burn-down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan a sprint without knowing the actual time available from the members? By actual time I mean the result of getting them to go through all their appointments, lunch times and breaks and removing those from their time commitment -- this gets you to a realistic time that they can dedicate.

Make sure you meet your minimum team sizes
In a recent post I wrote about the difference between a partnership and a team. If you are going to do Scrum in a large organization, make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people tend to be sick more often, take more leave and generally not be around -- if you have a team size of two it is so easy to lose momentum on the project. The more people you have in the team (up to about 9), the more momentum the project will have when people are not around.

Swapping from one methodology to another can seem like waste to the customer
It sounds bad, but most customers don't care what methodology you use. Often they have bought into the "big plan upfront". If you can, avoid taking a project on midstream from a traditional approach unless the customer has not bought into the process. With this particular project they had a detailed upfront planning breakaway with the customer using the traditional approach, and then before the project started we moved onto a Scrum implementation -- this seemed like waste to the customer. We should have managed the customer's expectations properly.

Don't play the role of the scrum master if you can't be the scrum master
With this particular implementation I was the "scrum master". But all I did was go through the formal meetings of Scrum -- I attended stand-up, retrospectives and planning -- but I was not hands-on on the ground. I was not performing the most important role of removing blockages, and by the end of the project there were a number of blockages "cropping up".
What could have been a better approach was to take someone on the team, train them to be the scrum master, and be present to coach them. Alternatively, actually be on the team on a full-time basis and be the scrum master. Just going through the meetings of Scrum didn't mean we were doing Scrum.

So we failed with this one; if you fail, look at it from an agile perspective
As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure of it became depressing. Emotions were expressed by various people on the team that were not encouraging and that reinforced the failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed and then focussing on learning why, and how to avoid the failure in the future, can change how you feel emotionally about the team, the project and the organization.

    Read the article

  • Getting Started with ADF Mobile Sample Apps

    - by Denis T
    Getting Started with ADF Mobile Sample Apps

Installation Steps
1. Install JDeveloper 11.1.2.3.0 from Oracle Technology Network.
2. After installing JDeveloper, go to the Help menu, select "Check For Updates", find the ADF Mobile extension and install it. It will require you to restart JDeveloper.
3. For iOS development, be on a Mac and have Xcode installed. (Currently only Xcode 4.4 is officially supported. Xcode 4.5 support is coming soon.)
4. For Android development, have the Android SDK installed.
5. In the JDeveloper Tools menu, select "Preferences". In the Preferences dialog, select ADF Mobile. You can expand it to configure your platform preferences, such as the location of Xcode and the Android SDK.
6. In your /jdeveloper/jdev/extensions/oracle.adf.mobile/Samples folder you will find a PublicSamples.zip. Unzip this into the Samples folder so you have all the projects ready to go.
7. Open each sample application's .JWS file to open the corresponding workspace. Then from the "Application" menu, select "Deploy" and then select the deployment profile for the platform you wish to deploy to. Try deploying to the simulator/emulator on each platform first because it won't require signing. Note: If you wish to deploy to the Android emulator, it must be running before you start the deployment.

Sample Application Details (in recommended order of use)

1. HelloWorld -- The "hello world" application for ADF Mobile, which demonstrates the basic structure of the framework. This basic application has a single application feature that is implemented with a local HTML file. Use this application to ascertain that the development environment is set up correctly to compile and deploy an application. See also Section 4.2.2, "What Happens When You Create an ADF Mobile Application."

2. CompGallery -- This application is meant to be a runtime application and not necessarily to review the code, though that is available. It serves as an introduction to the ADF Mobile AMX UI components by demonstrating all of these components. Using this application, you can change the attributes of these components at runtime and see the effects of those changes in real time without recompiling and redeploying the application after each change. See generally Chapter 8, "Creating ADF Mobile AMX User Interface."

3. LayoutDemo -- This application demonstrates the user interface layout and shows how to create the various list and button styles that are commonly used in mobile applications. It also demonstrates how to create the action sheet style of a popup component and how to use various chart and gauge components. See Section 8.3, "Creating and Using UI Components" and Section 8.5, "Providing Data Visualization." Note: This application must be opened from the Samples directory, or the Default springboard option must be cleared in the Applications page of the adfmf-application.xml overview editor, then selected again.

4. JavaDemo -- This application demonstrates how to bind the user interface to Java beans. It also demonstrates how to invoke EL bindings from the Java layer using the supplied utility classes. See also Section 8.10, "Using Event Listeners" and Section 9.2, "Understanding EL Support."

5. Navigation -- This application demonstrates the various navigation techniques in ADF Mobile, including bounded task flows and routers. It also demonstrates the various page transitions. See also Section 7.2, "Creating Task Flows." Note: This application must be opened from the Samples directory, or the Default springboard option must be cleared in the Applications page of the adfmf-application.xml overview editor, then selected again.

6. LifecycleEvents -- This application implements lifecycle event handlers on the ADF Mobile application itself and its embedded application features. This application shows you where to insert code to enable the applications to perform their own logic at certain points in the lifecycle. See also Section 5.6, "About Lifecycle Event Listeners." Note: On iOS, the LifecycleEvents sample application logs data to the Console application, located at Applications > Utilities > Console.

7. DeviceDemo -- This application shows you how to use the DeviceFeatures data control to expose such device features as geolocation, e-mail, SMS, and contacts, as well as how to query the device for its properties. See also Section 9.5, "Using the DeviceFeatures Data Control." Note: You must run this application on an actual device because SMS and some of the device properties do not function on an iOS simulator or Android emulator.

8. GestureDemo -- This application demonstrates how gestures can be implemented and used in ADF Mobile applications. See also Section 8.4, "Enabling Gestures."

9. StockTracker -- This application demonstrates how data change events use Java to enable data changes to be reflected in the user interface. It also has a variety of layout use cases, gestures and basic mobile patterns. See also Section 9.7, "Data Change Events."

    Read the article

  • How to set text in Y-axis, instead of numbers, in a RadChart component from Telerik with Bar-type

    - by radbyx
    Hi, I have made a bar RadChart with SeriesOrientation="Horizontal". I have the text showing at the end of each bar, but instead I would like that text to be listed on the y-axis in place of the 1, 2, 3... numbers. It seems like I'm not allowed to set any text on the y-axis -- is there a property I can set? Here are my code snippets:

=== .ascx ===

<asp:UpdatePanel ID="UpdatePanel1" runat="server">
  <ContentTemplate>
    <telerik:RadChart ID="RadChart1" runat="server" Skin="WebBlue" AutoLayout="true"
        Height="350px" Width="680px" SeriesOrientation="Horizontal">
      <Series>
        <telerik:ChartSeries DataYColumn="UnitPrice" Name="Product Unit Price">
        </telerik:ChartSeries>
      </Series>
      <PlotArea>
        <YAxis>
          <Appearance>
            <TextAppearance TextProperties-Font="Verdana, 8.25pt, style=Bold" />
          </Appearance>
        </YAxis>
        <XAxis DataLabelsColumn="TenMostExpensiveProducts">
        </XAxis>
      </PlotArea>
      <ChartTitle>
        <TextBlock Text="Ten Most Expensive Products" />
      </ChartTitle>
    </telerik:RadChart>
  </ContentTemplate>
</asp:UpdatePanel>

=== .ascx.cs ===

protected void Page_Load(object sender, EventArgs e)
{
    RadChart1.AutoLayout = false;
    RadChart1.Legend.Visible = false;

    // Create a ChartSeries and assign its name and chart type
    ChartSeries chartSeries = new ChartSeries();
    chartSeries.Name = "Name";
    chartSeries.Type = ChartSeriesType.Bar;

    // Add new items to the series, passing a value and a label string
    chartSeries.AddItem(98, "Product1");
    chartSeries.AddItem(95, "Product2");
    chartSeries.AddItem(100, "Product3");
    chartSeries.AddItem(75, "Product4");
    chartSeries.AddItem(1, "Product5");

    // Add the series to the RadChart Series collection
    RadChart1.Series.Add(chartSeries);

    // add the RadChart to the page.
    // this.Page.Controls.Add(RadChart1);
    // RadChart1.Series[0].Appearance.LegendDisplayMode = ChartSeriesLegendDisplayMode.Nothing;
    // RadChart1.Series[0].DataYColumn = "Uptime";
    RadChart1.PlotArea.XAxis.DataLabelsColumn = "Name";
    RadChart1.PlotArea.XAxis.Appearance.TextAppearance.TextProperties.Font =
        new System.Drawing.Font("Verdana", 8);
    RadChart1.BackColor = System.Drawing.Color.White;
    RadChart1.Height = 350;
    RadChart1.Width = 570;
    RadChart1.DataBind();
}

I want the text "Product1", "Product2", etc. on the y-axis -- can anyone help? Thx.
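
[Editor's note] One direction worth trying, using only the RadChart members the question already exercises (DataYColumn, DataLabelsColumn, DataBind): bind the chart to a DataTable so the label column travels with the data rather than being added item by item. This is a hedged sketch, not a confirmed fix -- whether DataLabelsColumn places the labels on the y-axis when SeriesOrientation="Horizontal" should be verified against the Telerik documentation:

    // Code-behind fragment in the style of the snippet above.
    protected void Page_Load(object sender, EventArgs e)
    {
        var dt = new System.Data.DataTable();
        dt.Columns.Add("Name", typeof(string));
        dt.Columns.Add("UnitPrice", typeof(double));
        dt.Rows.Add("Product1", 98.0);
        dt.Rows.Add("Product2", 95.0);
        dt.Rows.Add("Product3", 100.0);

        RadChart1.DataSource = dt;
        RadChart1.Series[0].DataYColumn = "UnitPrice";      // bar lengths
        RadChart1.PlotArea.XAxis.DataLabelsColumn = "Name"; // category labels
        RadChart1.DataBind();
    }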

    Read the article

  • Qt vs .NET - plz no n00bs who don't know wtf they're talking about [closed]

    - by Pirate for Profit
    Man, in all these Qt vs. .NET discussions, 90% of these people don't know WTF they're talking about. Trying to get a real comparison chart going before we embark on a major fucking project. And yes I'm drunk, and yes I use cocaine.

Event Handling
In Qt's event handling system you just emit signals when something cool happens and then catch them in slots, for instance emit valueChanged(int percent, bool something); caught by void MyCatcherObj::valueChanged(int p, bool ok){} -- blocking them and disconnecting them when needed, doing it across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate crap is the bomb. I'm also talking about more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down IMO. Basically, the footprints make more sense and you can visualize the project easier without the clunky event handling system. I wish I could explain it better. The only thing is, I do love some of the ease of C# compared to C++, and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++.

Database Ease of Doing Crap
Also, what about datasets and database manipulations? I think .NET wins here but I'm not sure.

Threading/Concurrency
How do you guys think of the threading? In .NET, all I've ever done is make a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things.

Memory Usage
Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean it up when it's about to really lag? However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.
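
[Editor's note] For readers unfamiliar with the (object sender, CustomEventArgs e) pattern being compared above, here is a minimal sketch of the .NET side; the class and event names are illustrative, not from any particular framework:

    using System;

    // Hypothetical EventArgs carrying the same payload as the Qt signal
    // valueChanged(int percent, bool something) mentioned above.
    class ValueChangedEventArgs : EventArgs
    {
        public int Percent { get; }
        public bool Ok { get; }
        public ValueChangedEventArgs(int percent, bool ok) { Percent = percent; Ok = ok; }
    }

    class Worker
    {
        // The .NET counterpart of declaring a signal.
        public event EventHandler<ValueChangedEventArgs> ValueChanged;

        // The counterpart of emit: raise the event for all subscribers.
        public void Report(int percent, bool ok) =>
            ValueChanged?.Invoke(this, new ValueChangedEventArgs(percent, ok));
    }

    class Demo
    {
        static void Main()
        {
            var w = new Worker();
            // Subscribing is the counterpart of connect(): note the
            // (object sender, ValueChangedEventArgs e) handler shape.
            w.ValueChanged += (sender, e) => Console.WriteLine($"{e.Percent}% ok={e.Ok}");
            w.Report(42, true);
        }
    }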

    Read the article

< Previous Page | 48 49 50 51 52 53 54 55 56 57  | Next Page >