Search Results

Search found 7216 results on 289 pages for 'low cost'.


  • External HDD USB 3.0 failure

    - by Philip
    [ 2560.376113] usb 9-1: new high-speed USB device number 2 using xhci_hcd [ 2560.376186] usb 9-1: Device not responding to set address. [ 2560.580136] usb 9-1: Device not responding to set address. [ 2560.784104] usb 9-1: device not accepting address 2, error -71 [ 2560.840127] hub 9-0:1.0: unable to enumerate USB device on port 1 [ 2561.080182] usb 10-1: new SuperSpeed USB device number 5 using xhci_hcd [ 2566.096163] usb 10-1: device descriptor read/8, error -110 [ 2566.200096] usb 10-1: new SuperSpeed USB device number 5 using xhci_hcd [ 2571.216175] usb 10-1: device descriptor read/8, error -110 [ 2571.376138] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2571.744174] usb 10-1: new SuperSpeed USB device number 7 using xhci_hcd [ 2576.760116] usb 10-1: device descriptor read/8, error -110 [ 2576.864074] usb 10-1: new SuperSpeed USB device number 7 using xhci_hcd [ 2581.880153] usb 10-1: device descriptor read/8, error -110 [ 2582.040123] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2582.224139] hub 9-0:1.0: unable to enumerate USB device on port 1 [ 2582.464177] usb 10-1: new SuperSpeed USB device number 9 using xhci_hcd [ 2587.480122] usb 10-1: device descriptor read/8, error -110 [ 2587.584079] usb 10-1: new SuperSpeed USB device number 9 using xhci_hcd [ 2592.600150] usb 10-1: device descriptor read/8, error -110 [ 2592.760134] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2593.128175] usb 10-1: new SuperSpeed USB device number 11 using xhci_hcd [ 2598.144183] usb 10-1: device descriptor read/8, error -110 [ 2598.248109] usb 10-1: new SuperSpeed USB device number 11 using xhci_hcd [ 2603.264171] usb 10-1: device descriptor read/8, error -110 [ 2603.480157] usb 10-1: new SuperSpeed USB device number 12 using xhci_hcd [ 2608.496162] usb 10-1: device descriptor read/8, error -110 [ 2608.600091] usb 10-1: new SuperSpeed USB device number 12 using xhci_hcd [ 2613.616166] usb 10-1: device descriptor read/8, error -110 [ 2613.832170] usb 10-1: new SuperSpeed USB device number 13 using xhci_hcd [ 2618.848135] usb 10-1: device descriptor read/8, error -110 [ 2618.952079] usb 10-1: new SuperSpeed USB device number 13 using xhci_hcd [ 2623.968155] usb 10-1: device descriptor read/8, error -110 [ 2624.184176] usb 10-1: new SuperSpeed USB device number 14 using xhci_hcd [ 2629.200124] usb 10-1: device descriptor read/8, error -110 [ 2629.304075] usb 10-1: new SuperSpeed USB device number 14 using xhci_hcd [ 2634.320172] usb 10-1: device descriptor read/8, error -110 [ 2634.424135] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2634.776186] usb 10-1: new SuperSpeed USB device number 15 using xhci_hcd [ 2639.792105] usb 10-1: device descriptor read/8, error -110 [ 2639.896090] usb 10-1: new SuperSpeed USB device number 15 using xhci_hcd [ 2644.912172] usb 10-1: device descriptor read/8, error -110 [ 2645.128174] usb 10-1: new SuperSpeed USB device number 16 using xhci_hcd [ 2650.144160] usb 10-1: device descriptor read/8, error -110 [ 2650.248062] usb 10-1: new SuperSpeed USB device number 16 using xhci_hcd [ 2655.264120] usb 10-1: device descriptor read/8, error -110 [ 2655.480182] usb 10-1: new SuperSpeed USB device number 17 using xhci_hcd [ 2660.496121] usb 10-1: device descriptor read/8, error -110 [ 2660.600086] usb 10-1: new SuperSpeed USB device number 17 using xhci_hcd [ 2665.616167] usb 10-1: device descriptor read/8, error -110 [ 2665.832177] usb 10-1: new SuperSpeed USB device number 18 using xhci_hcd [ 2670.848110] usb 10-1: device 
descriptor read/8, error -110 [ 2670.952066] usb 10-1: new SuperSpeed USB device number 18 using xhci_hcd [ 2675.968081] usb 10-1: device descriptor read/8, error -110 [ 2676.072124] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2786.104531] xhci_hcd 0000:02:00.0: remove, state 4 [ 2786.104546] usb usb10: USB disconnect, device number 1 [ 2786.104686] xHCI xhci_drop_endpoint called for root hub [ 2786.104692] xHCI xhci_check_bandwidth called for root hub [ 2786.104942] xhci_hcd 0000:02:00.0: USB bus 10 deregistered [ 2786.105054] xhci_hcd 0000:02:00.0: remove, state 4 [ 2786.105065] usb usb9: USB disconnect, device number 1 [ 2786.105176] xHCI xhci_drop_endpoint called for root hub [ 2786.105181] xHCI xhci_check_bandwidth called for root hub [ 2786.109787] xhci_hcd 0000:02:00.0: USB bus 9 deregistered [ 2786.110134] xhci_hcd 0000:02:00.0: PCI INT A disabled [ 2794.268445] pci 0000:02:00.0: [1b73:1000] type 0 class 0x000c03 [ 2794.268483] pci 0000:02:00.0: reg 10: [mem 0x00000000-0x0000ffff] [ 2794.268689] pci 0000:02:00.0: PME# supported from D0 D3hot [ 2794.268700] pci 0000:02:00.0: PME# disabled [ 2794.276383] pci 0000:02:00.0: BAR 0: assigned [mem 0xd7800000-0xd780ffff] [ 2794.276398] pci 0000:02:00.0: BAR 0: set to [mem 0xd7800000-0xd780ffff] (PCI address [0xd7800000-0xd780ffff]) [ 2794.276419] pci 0000:02:00.0: no hotplug settings from platform [ 2794.276658] xhci_hcd 0000:02:00.0: enabling device (0000 -> 0002) [ 2794.276675] xhci_hcd 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 2794.276762] xhci_hcd 0000:02:00.0: setting latency timer to 64 [ 2794.276771] xhci_hcd 0000:02:00.0: xHCI Host Controller [ 2794.276913] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 9 [ 2794.395760] xhci_hcd 0000:02:00.0: irq 16, io mem 0xd7800000 [ 2794.396141] xHCI xhci_add_endpoint called for root hub [ 2794.396144] xHCI xhci_check_bandwidth called for root hub [ 2794.396195] hub 9-0:1.0: USB hub found [ 2794.396203] hub 9-0:1.0: 1 port detected [ 2794.396305] xhci_hcd 0000:02:00.0: xHCI Host Controller [ 2794.396371] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 10 [ 2794.396496] xHCI xhci_add_endpoint called for root hub [ 2794.396499] xHCI xhci_check_bandwidth called for root hub [ 2794.396547] hub 10-0:1.0: USB hub found [ 2794.396553] hub 10-0:1.0: 1 port detected [ 2798.004084] usb 1-3: new high-speed USB device number 8 using ehci_hcd [ 2798.140824] scsi21 : usb-storage 1-3:1.0 [ 2820.176116] usb 1-3: reset high-speed USB device number 8 using ehci_hcd [ 2824.000526] scsi 21:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6 [ 2824.002263] sd 21:0:0:0: Attached scsi generic sg2 type 0 [ 2824.003617] sd 21:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB) [ 2824.005139] sd 21:0:0:0: [sdb] Write Protect is off [ 2824.005149] sd 21:0:0:0: [sdb] Mode Sense: 1f 00 00 08 [ 2824.009084] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.009094] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.011944] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.011952] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.049153] sdb: sdb1 [ 2824.051814] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.051821] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.051825] sd 21:0:0:0: [sdb] Attached SCSI disk [ 2839.536624] usb 1-3: USB disconnect, device number 8 [ 2844.620178] usb 10-1: new SuperSpeed USB device number 2 using xhci_hcd [ 2844.640281] scsi22 : usb-storage 10-1:1.0 [ 
2850.326545] scsi 22:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6 [ 2850.327560] sd 22:0:0:0: Attached scsi generic sg2 type 0 [ 2850.329561] sd 22:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB) [ 2850.329889] sd 22:0:0:0: [sdb] Write Protect is off [ 2850.329897] sd 22:0:0:0: [sdb] Mode Sense: 1f 00 00 08 [ 2850.330223] sd 22:0:0:0: [sdb] No Caching mode page present [ 2850.330231] sd 22:0:0:0: [sdb] Assuming drive cache: write through [ 2850.331414] sd 22:0:0:0: [sdb] No Caching mode page present [ 2850.331423] sd 22:0:0:0: [sdb] Assuming drive cache: write through [ 2850.384116] usb 10-1: USB disconnect, device number 2 [ 2850.392050] sd 22:0:0:0: [sdb] Unhandled error code [ 2850.392056] sd 22:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2850.392061] sd 22:0:0:0: [sdb] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 2850.392074] end_request: I/O error, dev sdb, sector 0 [ 2850.392079] quiet_error: 70 callbacks suppressed [ 2850.392082] Buffer I/O error on device sdb, logical block 0 [ 2850.392194] ldm_validate_partition_table(): Disk read failed. [ 2850.392271] Dev sdb: unable to read RDB block 0 [ 2850.392377] sdb: unable to read partition table [ 2850.392581] sd 22:0:0:0: [sdb] READ CAPACITY failed [ 2850.392584] sd 22:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2850.392588] sd 22:0:0:0: [sdb] Sense not available. [ 2850.392613] sd 22:0:0:0: [sdb] Asking for cache data failed [ 2850.392617] sd 22:0:0:0: [sdb] Assuming drive cache: write through [ 2850.392621] sd 22:0:0:0: [sdb] Attached SCSI disk [ 2850.732182] usb 10-1: new SuperSpeed USB device number 3 using xhci_hcd [ 2850.752228] scsi23 : usb-storage 10-1:1.0 [ 2851.752709] scsi 23:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6 [ 2851.754481] sd 23:0:0:0: Attached scsi generic sg2 type 0 [ 2851.756576] sd 23:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB) [ 2851.758426] sd 23:0:0:0: [sdb] Write Protect is off [ 2851.758436] sd 23:0:0:0: [sdb] Mode Sense: 1f 00 00 08 [ 2851.758779] sd 23:0:0:0: [sdb] No Caching mode page present [ 2851.758787] sd 23:0:0:0: [sdb] Assuming drive cache: write through [ 2851.759968] sd 23:0:0:0: [sdb] No Caching mode page present [ 2851.759977] sd 23:0:0:0: [sdb] Assuming drive cache: write through [ 2851.817710] sdb: sdb1 [ 2851.820562] sd 23:0:0:0: [sdb] No Caching mode page present [ 2851.820568] sd 23:0:0:0: [sdb] Assuming drive cache: write through [ 2851.820572] sd 23:0:0:0: [sdb] Attached SCSI disk [ 2852.060352] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.076533] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.076538] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.196329] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.212593] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.212599] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.456290] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.472402] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.472408] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.624304] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.640531] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.640536] xhci_hcd 0000:02:00.0: 
xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.772296] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.788536] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.788541] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.920349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.936536] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.936540] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2853.072287] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2853.088565] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2853.088570] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.176339] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.192561] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.192567] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.320349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.336526] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.336531] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.468344] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.484551] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.484556] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.612349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.628540] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.628545] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.756350] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.772528] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.772533] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.848116] usb 10-1: USB disconnect, device number 3 [ 2884.851493] scsi 23:0:0:0: [sdb] killing request [ 2884.851501] scsi 23:0:0:0: [sdb] killing request [ 2884.851699] scsi 23:0:0:0: [sdb] Unhandled error code [ 2884.851702] scsi 23:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2884.851708] scsi 23:0:0:0: [sdb] CDB: Read(10): 28 00 00 5f 2b ee 00 00 3e 00 [ 2884.851721] end_request: I/O error, dev sdb, sector 6237166 [ 2884.851726] Buffer I/O error on device sdb1, logical block 6237102 [ 2884.851730] Buffer I/O error on device sdb1, logical block 6237103 [ 2884.851738] Buffer I/O error on device sdb1, logical block 6237104 [ 2884.851741] Buffer I/O error on device sdb1, logical block 6237105 [ 2884.851744] Buffer I/O error on device sdb1, logical block 6237106 [ 2884.851747] Buffer I/O error on device sdb1, logical block 6237107 [ 2884.851750] Buffer I/O error on device sdb1, logical block 6237108 [ 2884.851753] Buffer I/O error on device sdb1, logical block 6237109 [ 2884.851757] Buffer I/O error on device sdb1, logical block 6237110 [ 2884.851807] scsi 23:0:0:0: [sdb] Unhandled error code [ 2884.851810] scsi 23:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2884.851813] scsi 23:0:0:0: [sdb] CDB: Read(10): 28 00 00 5f 2c 2c 00 00 3e 00 [ 2884.851824] end_request: I/O error, dev sdb, 
sector 6237228 [ 2885.168190] usb 10-1: new SuperSpeed USB device number 4 using xhci_hcd [ 2885.188268] scsi24 : usb-storage 10-1:1.0
    Please help me with my problem. I got this after running dmesg.

    Read the article

  • Issue 15: Oracle PartnerNetwork Exchange @ Oracle OpenWorld

    - by rituchhibber
     ORACLE FOCUS Oracle PartnerNetwork Exchange @ Oracle OpenWorld Sylvie Michou, Senior Director, Partner Marketing & Communications and Strategic Programs RESOURCES -- Oracle OpenWorld 2012 Oracle PartnerNetwork Exchange @ OpenWorld Oracle PartnerNetwork Exchange @ OpenWorld Registration Oracle PartnerNetwork Exchange Specialization Test Fest Oracle OpenWorld Schedule Builder Oracle OpenWorld Promotional Toolkit for Partners Oracle Partner Events Oracle Partner Webcasts Oracle EMEA Partner News If you are attending our forthcoming Oracle OpenWorld 2012 conference in San Francisco from 30 September to 4 October, you will discover a new dedicated programme of keynotes and sessions tailored especially for you, our valued partners. Oracle PartnerNetwork Exchange @ OpenWorld has been created to enhance the opportunities for you to learn from and network with Oracle executives and experts. The programme also provides more informal opportunities than ever throughout the week to meet up with the people who are most important to your business: customers, prospects, colleagues and the Oracle EMEA Alliances & Channels management team. Oracle remains fully focused on building the industry's most admired partner ecosystem—which today spans over 25,000 partners. This new OPN Exchange programme offers an exciting change of pace for partners throughout the conference. Now it will be possible to enjoy a fully-integrated, partner-dedicated session schedule throughout the week, as well as key social events such as the Sunday night Welcome Reception, networking lunches from Monday to Thursday at the Howard Street Tent, and a fantastic closing event on the last Thursday afternoon. In addition to the regular Oracle OpenWorld conference schedule, if you have registered for the Oracle PartnerNetwork Exchange @ OpenWorld programme, you will be invited to attend a much anticipated global partner keynote presentation, plus more than 40 conference sessions aimed squarely at what's most important to you, as partners. Prominent topics for discussion will include: Oracle technologies and roadmaps and how they fit with partners' business plans; business development; regional distinctions in business practices; and much more. Each session will provide plenty of food for thought ahead of the numerous networking opportunities throughout the week, encouraging the knowledge exchange with Oracle executives, customers, prospects, and colleagues that will make this conference of even greater value for you. At Oracle we always work closely with our partners to deliver solution offerings that improve business value, simplify the IT experience and drive innovation and efficiencies for joint customers. The most important element of our new OPN Exchange is content that helps you get more from technology investments, more from your peer-to-peer connections, and more from your interactions with customers. To this end we've created some partner-specific tools which can be used by OPN members ahead of the conference itself. Crucially, a comprehensive Content Catalog already lists and organises details of every OPN Exchange session, speaker, exhibitor, demonstration and related materials. This Content Catalog can be used by all our partners to identify interesting content that you can add to your own personalised Oracle OpenWorld Schedule Builder, allowing more effective planning and pre-enrolment for vital sessions. 
There are numerous highlights that you will definitely want to include in those personal schedules. On Sunday morning, 30 September we will start the week with partner dedicated OPN Exchange sessions, following our Global Partner Keynote at 13:00 with Judson Althoff, SVP, Worldwide Alliances & Channels and Embedded Sales and senior executives, giving insight into Oracle's partner vision, strategy, and resources—all designed to help build and strengthen market opportunities for you. This will be followed by a number of OPN Exchange general sessions, the Oracle OpenWorld Opening Keynote with Larry Ellison, CEO, Oracle and concluded with the OPN Exchange AfterDark Welcome Reception, starting at 19:30 at the Metreon. From Monday 1 to Thursday 4 October, you can attend the OPN Exchange sessions that are most relevant to your business today and over the coming year. Oracle's top product and sales leaders will be on hand to discuss Oracle's strategic direction in 40+ targeted and in-depth sessions focussing on critical success factors to develop your business. Oracle's dedication to innovation, specialization, enablement and engineering provides Oracle partners with a huge opportunity to create new services and solutions, differentiate themselves and deliver extreme value to joint customers across the globe. Oracle will even be helping over 1000 partners to earn OPN Specialization certification during the Oracle OpenWorld OPN Exchange Test Fest, which will be providing all the study materials and exams required to drive Specialization for free at the conference. You simply need to check the list of current certification tracks available, and make sure you pre-register to reserve a seat in one of the ten sessions being offered free to OPN Exchange registered attendees. And finally, let's not forget those all-important networking opportunities, which can so often provide partners with valuable long-term alliances as well as exciting new business leads. The Oracle PartnerNetwork Lounge, located at Moscone South, exhibition hall, room 100 is the place where partners can meet formally or informally with colleagues, customers, prospects, and other industry professionals. OPN Specialized partners with OPN Exchange passes can also visit the OPN Video Blogging room to record and share ideas, and at the OPN Information Station you will find consultants available to answer your questions. "For the first time ever we will have a full partner conference within OpenWorld. OPN Exchange @ OpenWorld will kick-off on the first Sunday and run the entire week. We'll have over 40 sessions throughout that time and partners will hear from our top development executives, with special sessions dedicated to partnering throughout. It's going to be a phenomenal event, and we look forward to seeing our partners there." Judson Althoff, SVP, Oracle Worldwide Alliances & Channels and Embedded Sales So if you haven't done so already, please register for Oracle PartnerNetwork Exchange @ OpenWorld today or add OPN Exchange to your existing registration for just $100 through My Account. 
And if you have any further questions regarding partner activities at Oracle OpenWorld, please don't hesitate to contact the Oracle PartnerNetwork team at [email protected]. -- Oracle Extreme Performance Tour Oracle experts will be on hand to share the very latest information about: Oracle's SPARC Superclusters: the latest Engineered Systems from Oracle, delivering radically improved performance, faster deployment and greatly reduced operational costs for mixed database and enterprise application consolidation Oracle's SPARC T4 servers: with the newly developed T4 processor and Oracle Solaris providing up to five times the single threaded performance and better overall system throughput for expanded application versatility Oracle Database Appliance: a new way to take advantage of the world's most popular database, Oracle Database 11g, in a single, easy-to-deploy and manage system. It's a complete package engineered to deliver simple, reliable and affordable database services to small and medium size businesses and departmental systems. All hardware and software components are supported together and offer customers unique pay-as-you-grow software licensing to quickly scale from two to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades Oracle Exalogic: the world's only integrated cloud machine, featuring server hardware and middleware software engineered together for maximum performance with minimum set-up and operational cost Oracle Exadata Database Machine: the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) applications, making it the ideal platform for consolidating onto grids or private clouds. It is a complete package of servers, storage, networking and software that is massively scalable, secure and redundant Oracle Sun ZFS Storage Appliances: providing enterprise-class NAS performance, price-performance, manageability and TCO by combining third-generation software with high-performance controllers, flash-based caches and disks Oracle Pillar Axiom Quality-of-Service: confidently consolidate storage for multiple applications into a single datacentre storage solution Oracle Solaris 11: delivering secure enterprise cloud deployments with the ability to run hundreds of virtual applications with no overhead and co-engineered with other Oracle software products to provide the highest levels of security, manageability and performance Oracle Enterprise Manager 12c: Oracle's integrated enterprise IT management product, providing the industry's only complete, integrated and business-driven enterprise cloud management solution Oracle VM 3.0: the latest release of Oracle's server virtualisation and management solution, helping to move datacentres beyond server consolidation to improve application deployment and management. Register today and ensure your place at the Extreme Performance Tour! Extreme Performance Tour events are free to attend, but places are limited. To make sure that you don't miss out, please visit Oracle's Extreme Performance Tour website, select the city that you'd be interested in attending an event in, and then click on the 'Register Now' button for that city to secure your interest. Each individual city page also contains more in-depth information about your local event, including logistics, agenda and maybe even a preview of VIP guest speakers. -- Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. 
The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010 Sponsorship Opportunities at Oracle OpenWorld 2010 Advertising Opportunities at Oracle OpenWorld 2010 -- Back to the welcome page

    Read the article

  • What You Need to Know About Windows 8.1

    - by Chris Hoffman
    Windows 8.1 is available to everyone starting today, October 19. The latest version of Windows improves on Windows 8 in every way. It’s a big upgrade, whether you use the desktop or new touch-optimized interface. The latest version of Windows has been dubbed “an apology” by some — it’s definitely more at home on a desktop PC than Windows 8 was. However, it also offers a more fleshed out and mature tablet experience. How to Get Windows 8.1 For Windows 8 users, Windows 8.1 is completely free. It will be available as a download from the Windows Store — that’s the “Store” app in the Modern, tiled interface. Assuming upgrading to the final version will be just like upgrading to the preview version, you’ll likely see a “Get Windows 8.1” pop-up that will take you to the Windows Store and guide you through the download process. You’ll also be able to download ISO images of Windows 8.1, so you can perform a clean install to upgrade. On any new computer, you can just install Windows 8.1 without going through Windows 8. New computers will start to ship with Windows 8.1 and boxed copies of Windows 8 will be replaced by boxed copies of Windows 8.1. If you’re using Windows 7 or a previous version of Windows, the update won’t be free. Getting Windows 8.1 will cost you the same amount as a full copy of Windows 8 — $120 for the standard version. If you’re an average Windows 7 user, you’re likely better off waiting until you buy a new PC with Windows 8.1 included rather than spend this amount of money to upgrade. Improvements for Desktop Users Some have dubbed Windows 8.1 “an apology” from Microsoft, although you certainly won’t see Microsoft referring to it this way. Either way, Steven Sinofsky, who presided over Windows 8’s development, left the company shortly after Windows 8 was released. Coincidentally, Windows 8.1 contains many features that Steven Sinofsky and Microsoft refused to implement. Windows 8.1 offers the following big improvements for desktop users: Boot to Desktop: You can now log in directly to the desktop, skipping the tiled interface entirely. Disable Top-Left and Top-Right Hot Corners: The app switcher and charms bar won’t appear when you move your mouse to the top-left or top-right corners of the screen if you enable this option. No more intrusions into the desktop. The Start Button Returns: Windows 8.1 brings back an always-present Start button on the desktop taskbar, dramatically improving discoverability for new Windows 8 users and providing a bigger mouse target for remote desktops and virtual machines. Crucially, the Start menu isn’t back — clicking this button will open the full-screen Modern interface. Start menu replacements will continue to function on Windows 8.1, offering more traditional Start menus. Show All Apps By Default: Luckily, you can hide the Start screen and its tiles almost entirely. Windows 8.1 can be configured to show a full-screen list of all your installed apps when you click the Start button, with desktop apps prioritized. The only real difference is that the Start menu is now a full-screen interface. Shut Down or Restart From Start Button: You can now right-click the Start button to access Shut down, Restart, and other power options in just as many clicks as you could on Windows 7. Shared Start Screen and Desktop Backgrounds: Windows 8 limited you to just a few Steven Sinofsky-approved background images for your Start screen, but Windows 8.1 allows you to use your desktop background on the Start screen. 
This can make the transition between the Start screen and desktop much less jarring. The tiles or shortcuts appear to be floating above the desktop rather than off in their own separate universe. Unified Search: Unified search is back, so you can start typing and search your programs, settings, and files all at once — no more awkwardly clicking between different categories when trying to open a Control Panel screen or search for a file. These all add up to a big improvement when using Windows 8.1 on the desktop. Microsoft is being much more flexible — the Start menu is full screen, but Microsoft has relented on so many other things and you’d never have to see a tile if you didn’t want to. For more information, read our guide to optimizing Windows 8.1 for a desktop PC. These are just the improvements specifically for desktop users. Windows 8.1 includes other useful features for everyone, such as deep SkyDrive integration that allows you to store your files in the cloud without installing any additional sync programs. Improvements for Touch Users If you have a Windows 8 or Windows RT tablet or another touch-based device you use the interface formerly known as Metro on, you’ll see many other noticeable improvements. Windows 8’s new interface was half-baked when it launched, but it’s now much more capable and mature. App Updates: Windows 8’s included apps were extremely limited in many cases. For example, Internet Explorer 10 could only display ten tabs at a time and the Mail app was a barren experience devoid of features. In Windows 8.1, some apps — like Xbox Music — have been redesigned from scratch, Internet Explorer allows you to display a tab bar on-screen all the time, while apps like Mail have accumulated quite a few useful features. The Windows Store app has been entirely redesigned and is less awkward to browse. Snap Improvements: Windows 8’s Snap feature was a toy, allowing you to snap one app to a small sidebar at one side of your screen while another app consumed most of your screen. Windows 8.1 allows you to snap two apps side-by-side, seeing each app’s full interface at once. On larger displays, you can even snap three or four apps at once. Windows 8’s ability to use multiple apps at once on a tablet is compelling and unmatched by iPads and Android tablets. You can also snap two of the same apps side-by-side — to view two web pages at once, for example. More Comprehensive PC Settings: Windows 8.1 offers a more comprehensive PC settings app, allowing you to change most system settings in a touch-optimized interface. You shouldn’t have to use the desktop Control Panel on a tablet anymore — or at least not as often. Touch-Optimized File Browsing: Microsoft’s SkyDrive app allows you to browse files on your local PC, finally offering a built-in, touch-optimized way to manage files without using the desktop. Help & Tips: Windows 8.1 includes a Help+Tips app that will help guide new users through its new interface, something Microsoft stubbornly refused to add during development. There’s still no “Modern” version of Microsoft Office apps (aside from OneNote), so you’ll still have to head to the desktop to use Office apps on tablets. It’s not perfect, but the Modern interface doesn’t feel anywhere near as immature anymore. Read our in-depth look at the ways Microsoft’s Modern interface, formerly known as Metro, is improved in Windows 8.1 for more information. In summary, Windows 8.1 is what Windows 8 should have been. 
All of these improvements are on top of the many great desktop features, security improvements, and all-around battery life and performance optimizations that appeared in Windows 8. If you’re still using Windows 7 and are happy with it, there’s probably no reason to race out and buy a copy of Windows 8.1 at the rather high price of $120. But, if you’re using Windows 8, it’s a big upgrade no matter what you’re doing. If you buy a new PC and it comes with Windows 8.1, you’re getting a much more flexible and comfortable experience. If you’re holding off on buying a new computer because you don’t want Windows 8, give Windows 8.1 a try — yes, it’s different, but Microsoft has compromised on the desktop while making a lot of improvements to the new interface. You just might find that Windows 8.1 is now a worthwhile upgrade, even if you only want to use the desktop.     

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property.  While applying GZip and Deflate behavior is pretty easy there are a few caveats that you have to watch out for as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats let’s review GZip implementation for ASP.NET. ASP.NET GZip/Deflate Basics Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream and, if a filter is assigned, the data is also passed through the filter stream’s interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages .NET includes the GZipStream and DeflateStream stream classes which can be readily assigned to the Response.Filter property. With these two stream classes in place it’s almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes ‘Page’ in its name this code will work with any ASP.NET output). /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } } } As you can see the actual assignment of the Filter is as simple as: Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in Proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding). 
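    One note on the listing above: the version of GZipEncodePage() quoted here stops at the Content-Encoding header, while the proxy-caching concern in the last sentence is handled by one more line that the article quotes a little further down. A minimal sketch of that addition (assuming the GZipEncodePage() method shown above) would be:

        // Inside GZipEncodePage(), right after the Content-Encoding header is appended:
        // allow proxy servers to cache the encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");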
To use this utility function now is trivially easy: In any ASP.NET code that wants to compress its Response output you simply use: protected void Page_Load(object sender, EventArgs e) { WebUtils.GZipEncodePage(); Entry = WebLogFactory.GetEntry(); var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount, "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries"); if (entries == null) throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage); this.repEntries.DataSource = entries; this.repEntries.DataBind(); } Here I use an ASP.NET page, but the above WebUtils.GZipEncodePage() method call will work in any ASP.NET application type including HTTP Handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, output over a certain size is GZip encoded by default. The output that is generated is JSON or XML and if the output is over 5k in size I apply WebUtils.GZipEncodePage(): if (sbOutput.Length > GZIP_ENCODE_TRESHOLD) WebUtils.GZipEncodePage(); Response.ContentType = ControlResources.STR_JsonContentType; HttpContext.Current.Response.Write(sbOutput.ToString()); Ok, so you probably get the idea: Encoding GZip/Deflate content is pretty easy. Hold on there Hoss – Watch your Caching Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don’t support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic Http client accessing your content GZip/Deflate support is by no means guaranteed. For example, WinInet Http clients don’t support GZip out of the box – it has to be explicitly implemented. Other low-level HTTP clients on other platforms also don’t support GZip out of the box. The problem is that your application, your Web Server and Proxy Servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients* which is also very hard to debug and fix once in production. You already saw the issue of Proxy servers addressed in the GZipEncodePage() function: // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); This ensures that any Proxy servers also check for the Content-Encoding HTTP Header to cache their content – not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page the GZipped content will be cached (either by ASP.NET’s cache or in some cases by the IIS Kernel Cache). But what if the next client doesn’t support GZip? She’ll get served a cached GZip page that won’t decode and she’ll get a page full of garbage. Wholly undesirable. 
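    The article points out that the helper is not tied to pages. As an illustration only (this handler and its output are hypothetical, not code from the article), an HTTP handler would call the same helper before writing anything:

        // Hypothetical IHttpHandler that compresses its output with the helper above.
        public class StatusHandler : System.Web.IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(System.Web.HttpContext context)
            {
                // Must be called before any output is written, as noted above
                WebUtils.GZipEncodePage();

                context.Response.ContentType = "application/json";
                context.Response.Write("{\"status\":\"ok\"}");
            }
        }

    Either way, the caching problem described above remains: once a GZip-encoded response has been cached, it can be handed to a client that cannot decode it.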
To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustom() HttpApplication method in your global.asax file: public override string GetVaryByCustomString(HttpContext context, string custom) { // Override Caching for compression if (custom == "GZIP") { string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"]; if (string.IsNullOrEmpty(acceptEncoding)) return ""; else if (acceptEncoding.Contains("gzip")) return "GZIP"; else if (acceptEncoding.Contains("deflate")) return "DEFLATE"; return ""; } return base.GetVaryByCustomString(context, custom); } In a page that uses Output caching you then specify: <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %> to use that custom rule. It’s all Fun and Games until ASP.NET throws an Error Ok, so you’re up and running with GZip, you have your caching squared away and your pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that looks like this: Lovely isn’t it? What’s happened here is that I have WebUtils.GZipEncodePage() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncodePage) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied: HTTP/1.1 500 Internal Server Error Cache-Control: private Content-Type: text/html; charset=utf-8 Date: Sat, 30 Apr 2011 22:21:08 GMT Content-Length: 2138 Connection: close ?`I?%&/m?{J?J??t??` … binary output stripped here Notice: no Content-Encoding header and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top: protected void Application_Error(object sender, EventArgs e) { // Remove any special filtering especially GZip filtering Response.Filter = null; … } And voila I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for Page level errors handled in Page_Error or ASP.NET MVC Error handling methods in a controller. 
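    For the MVC case just mentioned, the same cleanup can be done at the controller level. This is only a sketch of the idea, not code from the article, and the base controller name is made up:

        // Sketch: clear a compression filter when an MVC action throws,
        // mirroring what the Application_Error handler above does.
        public class CompressionAwareController : System.Web.Mvc.Controller
        {
            protected override void OnException(System.Web.Mvc.ExceptionContext filterContext)
            {
                // Remove any special filtering, especially GZip filtering, so the
                // error page is not sent compressed without a Content-Encoding header
                filterContext.HttpContext.Response.Filter = null;
                base.OnException(filterContext);
            }
        }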
Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client as pointed out by Adam Schroeder in the comments: protected void Application_PreSendRequestHeaders() { // ensure that if GZip/Deflate Encoding is applied that headers are set // also works when error occurs if filters are still active HttpResponse response = HttpContext.Current.Response; if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip") response.AppendHeader("Content-encoding", "gzip"); else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate") response.AppendHeader("Content-encoding", "deflate"); } This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content accordingly. This is actually a better solution since this is generic – it’ll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared and this code actually handles that. Sweet, thanks Adam. It’s unfortunate that ASP.NET doesn’t natively clear out Response.Filters when an error occurs just as it clears the Response and Headers. I can’t see where leaving a Filter in place in an error situation would make any sense, but hey - this is what it is and it’s easy enough to fix as long as you know where to look. Riiiight! IIS and GZip I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I’ll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by. Related Content Built-in GZip/Deflate Compression in IIS 7.x HttpWebRequest and GZip Responses © Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET   IIS7  

    Read the article

  • SQL Server and Hyper-V Dynamic Memory - Part 1

    - by SQLOS Team
    SQL and Dynamic Memory Blog Post Series   Hyper-V Dynamic Memory is a new feature in Windows Server 2008 R2 SP1 that allows the memory assigned to guest virtual machines to vary according to demand. Using this feature with SQL Server is supported, but how well does it work in an environment where available memory can vary dynamically, especially since SQL Server likes memory, and is not very eager to let go of it? The next three posts will look at this question in detail. In Part 1 Serdar Sutay, a program manager in the Windows Hyper-V team, introduces Dynamic Memory with an overview of the basic architecture, configuration and monitoring concepts. In subsequent parts we will look at SQL Server memory handling, and develop some guidelines on using SQL Server with Dynamic Memory.   Part 1: Dynamic Memory Introduction   In virtualized environments memory is often the bottleneck for reaching higher VM densities. In Windows Server 2008 R2 SP1 Hyper-V introduced a new feature “Dynamic Memory” to improve VM densities on Hyper-V hosts. Dynamic Memory increases the memory utilization in virtualized environments by enabling VM memory to be changed dynamically when the VM is running.   This brings up the question of how to utilize this feature with SQL Server VMs as SQL Server performance is very sensitive to the memory being used. In the next three posts we’ll discuss the internals of Dynamic Memory, SQL Server Memory Management and how to use Dynamic Memory with SQL Server VMs.   Memory Utilization Efficiency in Virtualized Environments   The primary reason memory is usually the bottleneck for higher VM densities is that users tend to be generous when assigning memory to their VMs. Here are some memory sizing practices we’ve heard from customers:   ·         I assign 4 GB of memory to my VMs. I don’t know if all of it is being used by the applications but no one complains. ·         I take the minimum system requirements and add 50% more. ·         I go with the recommendations provided by my software vendor.   In reality correctly sizing a virtual machine requires significant effort to monitor the memory usage of the applications. Since this is not done in most environments, VMs are usually over-provisioned in terms of memory. In other words, a SQL Server VM that is assigned 4 GB of memory may not need to use 4 GB.   How does Dynamic Memory help?   Dynamic Memory improves the memory utilization by removing the requirement to determine the memory need for an application. Hyper-V determines the memory needed by applications in the VM by evaluating the memory usage information in the guest with Dynamic Memory. VMs can start with a small amount of memory and they can be assigned more memory dynamically based on the workload of applications running inside.   Overview of Dynamic Memory Concepts   ·         Startup Memory: Startup Memory is the starting amount of memory when Dynamic Memory is enabled for a VM. Dynamic Memory will make sure that this amount of memory is always assigned to the VMs by default.   ·         Maximum Memory: Maximum Memory specifies the maximum amount of memory that a VM can grow to with Dynamic Memory. ·         Memory Demand: Memory Demand is the amount determined by Dynamic Memory as the memory needed by the applications in the VM. In Windows Server 2008 R2 SP1, this is equal to the total amount of committed memory of the VM. 
·         Memory Buffer: Memory Buffer is the amount of memory assigned to the VMs in addition to their memory demand to satisfy immediate memory requirements and file cache needs.   Once Dynamic Memory is enabled for a VM, it will start with the “Startup Memory”. After the boot process Dynamic Memory will determine the “Memory Demand” of the VM. Based on this memory demand it will determine the amount of “Memory Buffer” that needs to be assigned to the VM. Dynamic Memory will assign the total of “Memory Demand” and “Memory Buffer” to the VM as long as this value is less than “Maximum Memory” and as long as physical memory is available on the host.   What happens when there is not enough physical memory available on the host?   Once there is not enough physical memory on the host to satisfy VM needs, Dynamic Memory will assign less than needed amount of memory to the VMs based on their importance. A concept known as “Memory Weight” is used to determine how much VMs should be penalized based on their needed amount of memory. “Memory Weight” is a configuration setting on the VM. It can be configured to be higher for the VMs with high performance requirements. Under high memory pressure on the host, the “Memory Weight” of the VMs are evaluated in a relative manner and the VMs with lower relative “Memory Weight” will be penalized more than the ones with higher “Memory Weight”.   Dynamic Memory Configuration   Based on these concepts “Startup Memory”, “Maximum Memory”, “Memory Buffer” and “Memory Weight” can be configured as shown below in Windows Server 2008 R2 SP1 Hyper-V Manager. Memory Demand is automatically calculated by Dynamic Memory once VMs start running.     Dynamic Memory Monitoring    In Windows Server 2008 R2 SP1, Hyper-V Manager displays the memory status of VMs in the following three columns:         ·         Assigned Memory represents the current physical memory assigned to the VM. In regular conditions this will be equal to the sum of “Memory Demand” and “Memory Buffer” assigned to the VM. When there is not enough memory on the host, this value can go below the Memory Demand determined for the VM. ·         Memory Demand displays the current “Memory Demand” determined for the VM. ·         Memory Status displays the current memory status of the VM. This column can represent three values for a VM: o   OK: In this condition the VM is assigned the total of Memory Demand and Memory Buffer it needs. o   Low: In this condition the VM is assigned all the Memory Demand and a certain percentage of the Memory Buffer it needs. o   Warning: In this condition the VM is assigned a lower memory than its Memory Demand. When VMs are running in this condition, it’s likely that they will exhibit performance problems due to internal paging happening in the VM.    So far so good! But how does it work with SQL Server?   SQL Server is aggressive in terms of memory usage for good reasons. This raises the question: How do SQL Server and Dynamic Memory work together? To understand the full story, we’ll first need to understand how SQL Server Memory Management works. This will be covered in our second post in “SQL and Dynamic Memory” series. 
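    As a rough illustration of the assignment rule described above (Memory Demand plus Memory Buffer, capped at Maximum Memory), here is a small sketch of the arithmetic. It is not an API from this post; it assumes the buffer is configured as a percentage of demand, as Hyper-V Manager expresses it, and it ignores the host-pressure case where Memory Weight comes into play:

        using System;

        static class DynamicMemorySketch
        {
            // Assigned memory when the host has memory to spare:
            // demand plus a buffer percentage, never exceeding the configured maximum.
            public static long AssignedMemoryMB(long demandMB, int bufferPercent, long maximumMB)
            {
                long bufferMB = demandMB * bufferPercent / 100;   // "Memory Buffer"
                return Math.Min(demandMB + bufferMB, maximumMB);  // capped at "Maximum Memory"
            }
        }

        // Example: AssignedMemoryMB(2048, 20, 4096) returns 2457 (2048 MB demand + ~20% buffer).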
Meanwhile if you want to dive deeper into Dynamic Memory you can check the below posts from the Windows Virtualization Team Blog:   http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx   http://blogs.technet.com/virtualization/archive/2010/03/25/dynamic-memory-coming-to-hyper-v-part-2.aspx   http://blogs.technet.com/virtualization/archive/2010/04/07/dynamic-memory-coming-to-hyper-v-part-3.aspx   http://blogs.technet.com/b/virtualization/archive/2010/04/21/dynamic-memory-coming-to-hyper-v-part-4.aspx   http://blogs.technet.com/b/virtualization/archive/2010/05/20/dynamic-memory-coming-to-hyper-v-part-5.aspx   http://blogs.technet.com/b/virtualization/archive/2010/07/12/dynamic-memory-coming-to-hyper-v-part-6.aspx   - Serdar Sutay   Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • MVP Summit 2011 summary and thoughts: The “I hope I don’t cross a line and lose my MVP status” post

    - by George Clingerman
    I've been wanting to write this post summarizing my thoughts about the MVP summit but have been dragging my feet since it's a very difficult one to write. However seeing Andy (http://forums.create.msdn.com/forums/t/77625.aspx) and Catalin (http://www.catalinzima.com/2011/03/mvp-summit-2011/) and Chris (http://geekswithblogs.net/cwilliams/archive/2011/03/07/144229.aspx) post about it has encouraged me to finally take the plunge. I'm going to have to write carefully though because I'm going to be dancing around a ton of NDA mine fields as well as having to walk the tight-rope of not sending the wrong message or having people read too much into what I'm saying. I want to note that most of what I'm about to say is just based on my observations, they're not thoughts that Microsoft has asked me to pass along and they're not things I heard Microsoft say. It's just me sharing what I think after going to the MVP summit. Let's start off with a short imaginary question and answer session.     Has the App Hub forums and XBLIG management been rather poor by Microsoft? Yes.     Do I think we're going to see changes to that overnight? No.     Will it continue to look bad from the outside? Somewhat. Confusing right? Well that's kind of how things are right now. Lots of confusion. XNA is doing AWESOME. Like, really, really awesome. As a result of that awesomeness, XNA is on three major platforms: Xbox 360, WP7 and PC. This means that internally Microsoft is really excited and invested in the technology. That's fantastic for XNA and really should show you the future the framework has. It's here to stay. So why are Xbox LIVE Indie Game developers feeling so much pain? The ironic thing is that pain is being caused by the success of XNA. When XNA was just a small thing, there was more freedom and more focus. It was just us and them. We were an only child. Now our family has grown and everyone has and wants some time with XNA. This gets XNA pulled in all directions and as it moves onto new platforms, it plays catch up trying to get those platforms up to speed to where Xbox LIVE Indie Games has grown. Forums, documentation, educational content. They all need to be there because Xbox LIVE Indie Games has all of that and more. Along with the catch up in features/documentation/awesomeness there's the catch up that the people on the team have to play. New platforms and new areas of development mean new players and those new guys don't have the history of being around from the beginning. This leads to a lack of understanding at times just how important some things are because they seem so small and insignificant (Rich Text defaulting for new forum profiles would be one things that jumps to mind). If you're not aware that the forums have become more than just a basic Q&A, if you're not aware that they're a central hub to a very active community, then you don't understand why that small change should be prioritized over something else. New people have to get caught up and figure out how to make a framework and central forum site work for everyone it's now serving. So yeah, a lot of our pain this last year has been simply that XNA is doing well and XBLIG is doing well so the focus was shifted to catch other things up. It hurts when a parent seems to not have any time for you and they're spending some much time with your new baby brother. Growing pains. All families and in our case our product family experience it to some degree. 
I think as WP7 matures we'll see the team figuring out how to give everyone the right amount of attention. While we're talking about some of our growing pains, it is also important to note (although not really an excuse) that the Xbox LIVE Arcade developers complain about many of the same things that we do. If you paid attention to talks and information coming out of GDC 2011, most of the XBLA guys were saying things that sounded eerily similar to what the XBLIG developers are saying (Scott Nichols from GayGamer.net noticed http://twitter.com/#!/NaviFairyGG/status/43540379206811650). Does this mean we should just accept the status quo since we're being treated exactly the same? No way. However it DOES show that the way we're being treated is no indication of the stability and future of the platform, it's just Microsoft dropping the communication ball on two playing fields. We're not alone and we're not even being treated worse. Not great, but also in a weird way a very good sign. Now on to a few tidbits I think I CAN share from the summit (I'm really crossing my fingers I'm not stepping over some NDA line I shouldn't be). First, I discovered that the XBLIG user base is bigger than I personally had originally estimated. I won't give the exact numbers (although we did beg Microsoft to release some of these numbers so maybe someday?) but it was much larger than my original guestimates and I was pleasantly surprised. Maybe some of you guys had the right number when you were guessing, but I know that mine was much too low. And even MORE importantly the number of users/shoppers is growing at a steady pace as well. Our market is growing! That was fantastic news and really something that I had to share. On to the community manager discussion. It was mentioned. I was mentioned. I blushed. Nothing more to report there other than the blush in my cheeks was a light crimson color. If I ever see a job description posted for that position I have a resume waiting in the wings. I can't deny that I think that would be my dream job... ...so after I finished blushing, the MVPs did make it very, very clear that the communication has to improve. Community manager or not the single biggest pain point with the Xbox LIVE Indie Game community has been a lack of communication. I have seen dramatic improvement in the team responding to MVPs and I'm even seeing more communication from them on the forums so I'm hoping that's a long term change. I really think they understood the issue, the problem remains how to open that communication channel in a way that is sustainable. I think they'll get it figured out and hopefully that's sooner rather than later. During the summit, you may have seen me tweeting about how I was "that guy" (http://twitter.com/#!/clingermangw/status/42740432471470081). You also may have noticed that Andy and Catalin both mentioned me in their summit write ups. I may have come on a bit strong while I was there...went a little out of character for myself. I've been agitated for a while with the way things have been and I've been listening to you guys and hearing you guys be agitated. I'm also watching some really awesome indie game developers looking elsewhere and leaving the platform. Some of them we might not have been able to keep even with changes, but others are only leaving because of perceptions and lack of communication from Microsoft. And that pisses me off. And I let Microsoft know that I was pissed off. You made your list and I took that list and verbalized it. 
I verbalized the hell out of it. [It was actually mentioned that I'm a lot nicer on the forums and in email than I am in person...I felt bad about that, but I couldn't stay silent]. Hopefully it did something, guys; I really did try hard to get the message across. Along with my agitation, I also brought some pride. I mentioned several things in person to the team that I was particularly proud of: from people in the community who are doing an awesome job, to the re-launch of XboxIndies that was going on that week, and even gamers like Steven Hurdle (http://writingsofmassdeduction.com/) who has purchased one XBLIG every day for over 100 days now. The community is freaking rocking it and I made sure to highlight that.

So in conclusion, I'd just like to say hang in there (you know, like that picture of the cat). If you've been worried about investing in Xbox LIVE Indie Games because you think it's on shaky ground, it's not. Dream Build Play being about the Xbox 360 should have helped a little to point that out. The team is really scrambling around trying to figure things out and make improvements all around. There are quite a few new gals and guys, it's going to take them time to catch up, and there are a lot of constantly shifting priorities. We all have one toy, one team, and we're fighting for time with it.

It's also time for the community to continue spreading our wings and going out on our own more often. The Indie Game Winter Uprising was a fantastic example of that. We took things into our own hands and it got noticed and Microsoft got behind it. They do every time we stand up and do something (look at how many Microsoft employees tweeted or wrote about the re-launch of XboxIndies.com, or the support I've gotten from them for my weekly XNA Notes). XNA is here to stay; it's time for us to stop being scared of that and figure out how to make our own games the successes they should be. There's definitely a list of things that need to be fixed and things that should be improved, and I think we should definitely keep vocal about that with Microsoft. Keep it short, focused and prioritized. There are also a lot of things we can do ourselves while we're waiting on them to fix and change things. Lots of ways we can compensate for particular weaknesses in the channel. The kind of stuff that we can step up and do ourselves. Do it on our own, you know, the way Indies always do. And I'm really looking forward to watching us do just that.

    Read the article

  • CodePlex Daily Summary for Monday, April 12, 2010

    CodePlex Daily Summary for Monday, April 12, 2010New Projects3 Hour Game Design Contest: The 3 Hour Game Design Contest is a programming contest for making simple games in 3 hours. 3 hours may not seem like enough time to make a game, b...BI Monkey SSIS ETL Framework: The BI Monkey SSIS ETL Framework is an ETL Execution, Control and Logging system for ETL projects using SSIS. It is supported by a SQL Server metad...Blend Sample Data Helpers: Helper behavior classes to generate sample images and data from Internet sources such as Flickr images. Bold TCP for Delphi 7: Open Sourcing the Bold TCP for Delphi 7.cfThreadingTools: This library project contains classes and extensions which will allow easy handling of multi-threaded UI-accesses.CuBiX_SDL: CuBiX_SDL : CuBiX est un projet personnel.Draglets: Draglets makes it easier for editors and CMS-developers to move and reorder content at their web sites. It's developed in ASP.NET, C# with WCF and ...DSQLT - Dynamic SQL Templates: DSQLT - Dynamic SQL Templates Use Stored Procedures as templates for dynamic SQL statements. Substitute parameters @0-@9 with values like objectna...Edtter: Edtter is a sample web application built on ASP.NET MVC 2 Framework. (Japanese Version Only)Forms Based Authentication Management - SharePoint2007FBA: This is my own update to Stacy Draper's FBABasic project for Forms Based Authentication in MOSS 2007. In additon to managing your fba user's roles,...Height Map to 3D World at XNA: Height Map to 3D World is a XNA project that developed firstly by Eric Grossinger and secondly improved by Karadeniz Technical University Computer ...HouseFly: A simple contact and note taking applicationITM 495 - iPhone Web App: School ProjectKaufleute: This will be finished laterLR: this project is about connecting toPowerShell Integration Services: A set of tools aimed at Extract Transform and Load tasks. Focused on getting the most common ETL tasks done without SSIS. Salient: A collection of, hopefully, useful libraries.Samurai.Validation: Extensible and flexible .Net object validation frameworkSamurai.Workflow: Samurai Workflow is a slim, easy-to-use workflow framework for WPF applications.SharePoint User Management WebPart: SharePoint User Management WebPartUrl shorte(ne)r: It's simple Url Shortener (like: http://tinyurl.com) Currently only Polish language is supported. In future will be provided multi language suppor...Yasbg: Yasbg (pronounced yas-bug) is Yet Another Static Blog Generator. It is made in C# using MarkdownSharp for markdown. Currently in alpha. New Releases.NET Extensions - Extension Methods Library: Release 2010.06: Added an universal approach for grouping extension methods like conversions. Conversion are now available on any data type (it's actually extension...3 Hour Game Design Contest: 3H-GDC mVII: This is the collection of game files for the 7th 3H-GDCB&W Port Scanner: Black`n`White Port Scanner 3.0: B&W Port Scanner 3 includes FTP Server detection tool, Better stability, Optimized memory management, Saving & Opening Result sets ... and more new...BI Monkey SSIS ETL Framework: Framework v1 Alpha: This Alpha release is not fully tested and some functionality is not operating as intended.Bluetooth Radar: Version 1.7: UI Changes Device UserControl Randomly placed devices.BugTracker.NET: BugTracker.NET 3.4.1: For the tasks/time tracking feature, added a way of viewing all the tasks at once, not just the tasks for one bug. 
Also added a way of exporting a...cfThreadingTools: cfThreadingTools 0.1.1.8: This is the first public available release. Following items are included: BaseTools-class which allows thread-safe setting of properties and callin...DeepZoom Pivot Constructor: DeepZoom Pivot Constructor v0.1: This is a test release of the library platform - Targets .NET 3.5 No samples yet, etc., but it works well :-)DSQLT - Dynamic SQL Templates: Initial release with License Included: nothing changed but license print procedure included the zip file contains database backup SQL script readmeForms Based Authentication Management - SharePoint2007FBA: SharePoint2007FBA 1.0.0.0: Downloads for the Project solution and the WSP package. Please read the Setup Guide. If you are unfamiliar with setting up Forms Based Authenticati...Foursquare BlogEngine Widget: foursquare widget for BlogEngine.NET version 0.3: To see the changes which have been made, visit http://philippkueng.ch/post/Foursquare-BlogEngineNET-widget-version-03.aspx For installation instruc...Framework Detector: FrameworkDetect Support .NET 4 v2: FrameworkDetect Support .NET 4Happy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: Happy Turtle 1.0.46860: This is the second beta release of the SVN based version incrementor. Please feel free to create a thread in the discussion tabs and provide feedb...Height Map to 3D World at XNA: 3DWorld: Just open .rar file and extract it any folder and run Proje2Dto3D.exe file.HTML Ruby: 6.20.2: Removed rubyLineSpace option Improved options panel Fixed ruby text font-size rendering issue with complex ruby annotation Removed more waste...HTML Ruby: 6.20.3: Removed unused code Temporary partial fix for Firefox 3.7a4pre nightly buildHTML Ruby: 6.21.0: Added support for current HTML5 ruby annotation format. All ruby annotations are converted to XHTML 1.1 complex ruby annotation.Kooboo HTML form: Kooboo HTML Form Module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Menu: Kooboo CMS Menu for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Meta: Kooboo Meta Module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo PageMenu: Kooboo CMS PageMenu for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Search: Kooboo CMS Search module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Numina Application/Security Framework: Numina.Framework Core 50212: Added bulk import user page Added General settings page for updating Company Name, Theme, and API Key Add/Edit application calls Full URL to h...Rawr: Rawr 2.3.14: - Rawr3: Tons of fixes for Rawr3 compatability and UI. - Significant performance improvements all around. - More fixes and improvements to Wowhea...Rich Ajax empowered Web/Cloud Applications: 6.4 beta 2: The first fully featured version of Visual webGui offering web/cloud development tool that puts all ASP.NET Ajax limits behind with enhanced perfor...SharePoint User Management WebPart: User Management Web part 1.0: Most of the organization have one SharePoint Site which is configured with windows authenticated which is for internal employees having AD authenti...SkeinLibManaged: SkeinLibManaged 1.1.0.0 (Beta): This is the compiled DLL with XML documentation, so there should be plenty of context sensitive help and Intellisense. 
This is the Release version,...VCC: Latest build, v2.1.30411.0: Automatic drop of latest buildVFPX: Code References 1.1 Beta: Visit the Code References Info Page for complete information about this release.VisioAutomation: VisioAutomation 2.5.0: VisioAutomation 2.5.0- General cleanup/bugfixes - Many low-level changes the the VisioAutomation extension methods - these are far fewer now - This...Visual Studio DSite: English To Spanish Translator (Visual C++ 2008): A simple english to spanish translator made in visual c 2008, using the Google Translate API.WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.10.00: Whats NewFile Browser: Inherits Folder Permissions from DotNetNuke Updated the Editor to Version 3.2.1 revision 5372 Added CkEditor jQuery Adap...Web/Cloud Applications Development Framework | Visual WebGui: 6.4 beta 2: The first fully featured version of Visual webGui offering web/cloud development tool that puts all ASP.NET Ajax limits behind with unique develope...WPF Data Virtualization: 1.0.0.0: First ReleaseYasbg: Yasbg Alpha: ReadmeYet Another Static Blog Generator is a command line utility that generates static html files for blogs. Currently, it is NOT feed enabled. I...異世界の新着動画: Ver. 10-04-12: ニコ生の仕様変更に対応 アンケート時間の設定追加Most Popular ProjectsWBFS ManagerRawrASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesFacebook Developer ToolkitMost Active ProjectsRawrnopCommerce. Open Source online shop e-commerce solution.AutoPocopatterns & practices – Enterprise LibraryShweet: SharePoint 2010 Team Messaging built with PexFarseer Physics EngineNB_Store - Free DotNetNuke Ecommerce Catalog ModuleIonics Isapi Rewrite FilterBlogEngine.NETBeanProxy

    Read the article

  • Top tweets SOA Partner Community – October 2013

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity Ronald Luttikhuizen ?My latest upload: SOA Made Simple | Introduction to SOA on @slideshare http://www.slideshare.net/rluttikhuizen/soa-made-simple-introduction-to-soa … via @SlideShare OTNArchBeat ?ArchBeat Link-o-Rama for October 4, 2013 #cloud #linux #oaam #soa http://pub.vitrue.com/y4SK Lucas Jellema ?My blog article shows news on the new SOA Suite 12c release - as it was publicly available during #oow13 see: http://technology.amis.nl/2013/09/27/oow13-soa-suite-12c/ … Yogesh Sontakke ?Introducing OER's new Express Workflows - Simplified Lifecycle Management. Blog post: http://bit.ly/16JKHCf @soacommunity #soagovernance SrinivasPadmanabhuni ?"@OTNArchBeat: SOA and User Interfaces - by @soacommunity @HajoNormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp " SOA Community ?SOA and User-Interfaces http://servicetechmag.com/I76/0913-2 article published part of #industrialSOA at Service Technology Magazine #soacommunity Estafet Limited ?@Estafet win @UKOUG Middleware Partner of the Year 2013 Yogesh Sontakke ?RT @VikasAatOracle: #Oracle #B2B - written by experts #soa #soacommunity #oraclesoa - time to get a copy ! @SOAScott Danilo Schmiedel ?Thanks a lot to Juergen @soacommunity for the super interesting and well-organized Partner Advisory Council yesterday! Such a Great Value! OTNArchBeat ?Case management supporting re-landscaping application portfolios | @leonsmiers http://pub.vitrue.com/MC5j Samantha Searle ?Apply for the #GartnerBPM 2014 Excellence Awards - find out how via this link http://ow.ly/ptaNQ #Gartner #bpm #process #entarch #cio OTNArchBeat ?SOA and User Interfaces - by @soacommunity @hajonormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp Dain Hansen ?Hybrid #cloud is on the rise, but is the IT department's culture standing in the way? http://add.vc/eJN #CloudIntegration #OracleSOA OTNArchBeat #SOASuite 11g ps6 - Download your log files directly from the Enterprise Manager | @whitehorsenl http://pub.vitrue.com/KrJ2 Whitehorses ?Whiteblog: SOA Suite 11g ps6 - Download your log files directly from the Enterprise Manager (http://goo.gl/2Gqiax ) Rajesh Raheja ?Cloud integration session recap #oow13 http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 … Vikas Anand ?@Ahmed_Aboulnaga thanks for the excellent summary and kind words. #oow13 #cloud #oraclesoa http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 … Luis Augusto Weir ?REST is also SOA. Check it out http://www.soa4u.co.uk/2013/09/restful-is-also-soa.html?m=1 … #soacommunity Graham ?“@OracleBPM & @soacommunity: 5 Ways to Modernize Applications with BPM #AppAdvantage" #oracleday http://bit.ly/15yC6e3 SOA Community ?#ACED director asked me for BPM references in FSI - ever visited my #SOACommunity workspace? https://beehiveonline.oracle.com/teamcollab/overview/SOA_Community_Workspace … #soacommunity #bpm OracleBlogs ?SOA Community Newsletter September 2013 http://ow.ly/2Aj6oK OTNArchBeat ?OOW13: First glimpses of the new #SOASuite12c | @LucasJellema http://pub.vitrue.com/2YgX sbernhardt ?Just published new blog entry on OOW 2013 wrap up. http://thecattlecrew.wordpress.com/2013/09/30/oracle-open-world-2013-wrap-up/ … #oow13 @OC_WIRE @soacommunity Emiel Paasschens ?Home with family after an overwhelming #OOW week in San Francisco with lot of info & meetings. 
Special thanx to @OracleBelux & @soacommunity Robert van Mölken ?Had a awesome week at #OOW13 in SF. Highlights were the @soacommunity Wine tour, @OracleBelux meet-ups and @OracleSOA CAB. Thanks to all :) SOA Community ?The place Oracle Fusion middleware comes from - Oracle 200 - TKs office - next Oracle 100 - SOA & BPM #soacommunity pic.twitter.com/qibFOQVbRo Oracle BPM ?5 Ways to Modernize Applications with BPM #AppAdvantage http://pub.vitrue.com/l2dn Simon Haslam ?Ha ha - how did we miss that! RT @lucasjellema: Post conference announcement of a new middleware appliance? #oow13 pic.twitter.com/3NvcjPfjXb OTNArchBeat ?The OTNArchBeat Daily is out! http://paper.li/OTNArchBeat/1329828521 … ? Top stories today via @lucasjellema @myfear @TylerJewell Packt Publishing ?Get 50% off ALL our DRM-free eBooks - this weekend only! Go to http://www.packtpub.com/ and use code BIG50, as often as you like! #BIG50 OracleBlogs ?Global Perspective: ACE Director from EMEA Weighs in on AppAdvantage http://ow.ly/2Afek2 orclateamsoa ?#orclateamsoa Blog: BPM Auditing Demystified - I've heard from a couple of customers recently asking about BPM aud... http://ow.ly/2AfbAn AMIS, Oracle & Java ?Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/gb4HLbUarR SOA Community ?Let us know what was best at #OOW @soacommunity save trip home - thanks for coming to #SF ;-) see you at #OOW2014 pic.twitter.com/xbWXjRapqh Lonneke Dikmans ?Nice @dschmied is talking about the different steps in his project. He starts with explaining the user interface design #oow13 #ux #acm Lonneke Dikmans ?Saving the best for the end: managing knowledge worker processes by @dschmied and Prasen.#oow13 #acm cool stuff: adaptive case management Luis Augusto Weir ?SOA Governance is more than just OER. Requires people, processes and tools. Check it out #SOA #soacommunity http://youtu.be/Ohn06smVKVw Lonneke Dikmans ?“@OracleSOA: #oow Join us for:Enterprise SOA Infrastructure Best Practices Thu 9/26 2:00 PM - 3:00 PM Moscone West - 2020 SOA Community ?Business Process Management (BPM) 11g PS6 Awareness Course http://wp.me/p10C8u-1as Ajay Khanna ?Detect, Analyze, Act - Fast! http://wp.me/p10C8u-1ao via @soacommunity #OracleBPM Simone Geib ?It took a while, but I finally reached 500 followers. Thanks everybody and especially @soacommunity :) SOA Community ?Functional Testing Business Processes In Oracle BPM Suite 11g by Arun Pareek http://wp.me/p10C8u-1aq SOA Community Distribute the September edition of the SOA Community newsletter READ it! Didn't receive it register http://www.oracle.com/goto/emea/soa #soacommunity SOA Community ?Detect, Analyze, Act - Fast! by Ajay Khanna http://wp.me/p10C8u-1ao Robert van Mölken ?Finalised my #OOW presentation #CON8736 and live demo on wednesday 25th at 11:45am. Also giving a short version at the SOA CAB on thursday. Rajesh Raheja ?"The AppAdvantage of Oracle Cloud & On-premises Integration" http://bit.ly/14RYHmZ SOA Community ?Additional new content SOA & BPM Partner Community http://wp.me/p10C8u-1aw Dain Hansen ?Right now #oow13 SOA, BPM - Customer Advisory Boards. 'No tweeting' says @SOASimone. Instagram of funny cats still ok. leonsmiers ?Case Management with Oracle BPM Suite our presentation on #oow13 http://www.slideshare.net/leonsmiers/oracle-open-world-2013-case-management-smiers-kitson … #capgemini @nkitson72 Mark Simpson ?Flextronics reduced cost of processing an invoice to <$1 from $7 due to BPM @OracleBPM #oow13 saving millions. 
Way less than industry avg. Holger Mueller ?#Siemens Shared Services CIO says that #Fusion #Middleware made the difference for #Oracle over #Workday. #Integration matters. #OOW13 oracleopenworld ?Miss any #oow13 keynotes, or simply want to rewatch? Check out the live streaming site for keynotes on demand: http://pub.vitrue.com/RG4D SOA Community ?Analyze your m2m data and act on it! Big data Pattern matching, fast data & soa #soacommunity #oow pic.twitter.com/48Q1z4ckh7 SOA Community ?Top tweets SOA Partner Community – September 2013 http://wp.me/p10C8u-1cR Simone Geib ?#oraclesoa hands on lab at #oow13 pic.twitter.com/IJJrqXIMiu Danilo Schmiedel #oow13 CON8436: Managing Knowledge Worker Processes. Come & get a free Adaptive Case Management poster @soacommunity pic.twitter.com/FRc2CSyLwb John Sim ?Great job again Jurgen @soacommunity helping bring Ace Community together! Danilo Schmiedel ?Excellent #OracleBPM Adaptive Case Management intro by @heidibuelowBPM and Prasen at the #oow13 demo ground.Last chance today @soacommunity SOA Community ?Thanks to all our #bpm #soa and #weblogic partners for the great middleware business #oow #soacommunity pic.twitter.com/dBwZ8DMHfH Whitehorses ?Thanks @soacommunity for the party tonight. Great to meet product management & see all the talented EMEA middleware specialists. #oow13 Danilo Schmiedel ?Great tool demo from Link Consulting about managing your SOA with OER #oow13 @soacommunity Torsten Winterberg ?“@soacommunity: thanks to @dschmied and @OC_WIRE for making it happen to have our case management poster as printed version hier at #oow13 Ronald Luttikhuizen ?These were the architects involved in the diagram excitement :) just after State of SOA podcast with @OTNArchBeat pic.twitter.com/5B8jIrVTA9 SOA Community ?Tanks to AVIO for the excellent #bpmn poster and the great bpm business - visit then at #OOW & get the poster pic.twitter.com/ebTg9pFY1C Dain Hansen ?Kurian introducing Oracle Platform-as-a-Service developments. #oow13 #OracleCloud pic.twitter.com/evJLTU53rx Bruce Tierney ?API Management "multi-level pie chart" at #oow13 by Oracle's Tim Hall pic.twitter.com/q12OIRdaue Dain Hansen ?This is not your Daddy's BAM @soacommunity: Is this BAM? Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/EvwqXW9U5j SOA Community ?Is this BAM? 
Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/LybHxyF362 SOA Community ?SOA governance by @Yogesh_Sontakke at demo point sr214 many good new features - key for soa projects #oow #soa pic.twitter.com/DFK0ummsK1 SOA Community ?Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/GDKcqDGhCF SOA Community ?Adaptive Case Management demo point at #OOW visit @heidibuelowBPM get a demo and cmmn notation poster #soacommunity pic.twitter.com/T7yEyI7tdn Lonneke Dikmans ?In case you missed it: http://blog.vennster.nl/2013/09/case-management-part-1.html?spref=tw … Lucas Jellema ?SOA Suite news: Cloud Adapters RightNow and SalesForce plus SDK to develop custom cloud adapters (CY13); REST/JSON support in SB/SCA (12c) Oracle SOA ?Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/UfPB Dain Hansen ?Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/4QWA Hajo Normann ?#BigData, eventing & real time #analytics suggest timely next actions in #oracleBPM & #oracleACM; #oow13 #FastData pic.twitter.com/aFVGrTXPqu Mark Simpson ?OEP CQL engine now used in BAM12c for event stream summary computation with temporal and pattern match features to feed dashboards. #oow13 Mark Simpson ?BAM12c virtually a new product. Analytics that senses ahead of time and also compares to historical trends to guide process or case #oow13 Andrejus Baranovskis ?Enabling UI Shell 12c/11g Multitasking Behavior http://fb.me/18l9vxQfA Amit Zavery ?Oracle Fusion Middleware Empowers Business Users, EVP Thomas Kurian's session summary http://onforb.es/18Ta1jf #oow13 #oraclemiddle #oracle Vikas Anand ?#oow13 #oracleopenworld BPM on display at Middleware keynote by Thomas Kurian pic.twitter.com/PMm719S0Ui SOA Community ?BPM composer - business user empowerment #oow #soacommunity #bpmsuite pic.twitter.com/0Qgl6oVh0h SOA Community ?Model your process in BPMN - make is executable and analyze & improve them #oow #soacommunity pic.twitter.com/jkLlObDdoi Bruce Tierney ?@demed and Thomas Kurian talk mobile and cloud at #oow13 pic.twitter.com/bAAeqn5a2V Amit Zavery ?Thomas Kurian showcasing all the new features of Oracle Fusion Middleware #oraclemiddle #oow13 SOA Community ?Demo time cloud adapters in #soasuite at Thomas Kurian keynote. Build and integrate mobile apps in minutes #oow pic.twitter.com/qTnCOJLLwS SOA Community ?Soa suite cloud adapters and mobile apps by @demed at Thomas Kurian keynote #oow #oracle #soacommunity pic.twitter.com/5aMLkNH4Ng Danilo Schmiedel ?First impressions from Oracle Open World 2013 http://wp.me/p2fG8x-77 @soacommunity @OC_WIRE SOA Community ?Good morning SFO let us know if you attend #OOW & #OPN keynote - #soacommunity pic.twitter.com/hzLYGDlRgE Simon Haslam ?Had a very useful @wlscommunity PAC meeting yesterday... & probably the best swag to date! pic.twitter.com/Lqus8ysbp7 Vikas Anand ?Oracle SOA Suite - Team Blog http://bit.ly/18I1Zj7 Rajesh Raheja ?Introducing new Cloud Connectivity Adapters #soa #demopod #oow13. I'll be there Sep 23 & 24 3-6pm to meetup http://bit.ly/18I1Zj7 leonsmiers ?..and again a very successful Oracle SOA/BPM partner council on the eve of #oow13. Thanks Jurgen! @soacommunity pic.twitter.com/aM1LMlb7Yw Vikas Anand ?#oow13 #soa #oep #exalogic Canon Delivers Fast Data with Oracle Event Processing (Oracle SOA Suite) http://bit.ly/1dwPeHb #soacommunity Rolf Scheuch ?The ACM poster is a big success. Great talks and .... 
I am soon out of posters! #bpmcon #ACM pic.twitter.com/TriaUyXRWK Oracle SOA ?British Telecom Sucess with Oracle B2B #oow #soa #b2b http://pub.vitrue.com/1RWi leonsmiers ?(Oracle) Case Management supporting re-platforming, a pre-read before our presentation at #oow13 http://leonsmiers.blogspot.com/2013/09/case-management-supporting-re.html … #capgemini #yammer SOA & BPM Partner CommunityFor regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: Twitter,SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.

    Read the article

  • SQL Server and Hyper-V Dynamic Memory - Part 1

    - by SQLOS Team
    SQL and Dynamic Memory Blog Post Series

Hyper-V Dynamic Memory is a new feature in Windows Server 2008 R2 SP1 that allows the memory assigned to guest virtual machines to vary according to demand. Using this feature with SQL Server is supported, but how well does it work in an environment where available memory can vary dynamically, especially since SQL Server likes memory, and is not very eager to let go of it? The next three posts will look at this question in detail. In Part 1 Serdar Sutay, a program manager in the Windows Hyper-V team, introduces Dynamic Memory with an overview of the basic architecture, configuration and monitoring concepts. In subsequent parts we will look at SQL Server memory handling, and develop some guidelines on using SQL Server with Dynamic Memory.

Part 1: Dynamic Memory Introduction

In virtualized environments memory is often the bottleneck for reaching higher VM densities. In Windows Server 2008 R2 SP1 Hyper-V introduced a new feature, “Dynamic Memory”, to improve VM densities on Hyper-V hosts. Dynamic Memory increases memory utilization in virtualized environments by enabling VM memory to be changed dynamically while the VM is running. This brings up the question of how to utilize this feature with SQL Server VMs, as SQL Server performance is very sensitive to the memory being used. In the next three posts we’ll discuss the internals of Dynamic Memory, SQL Server memory management, and how to use Dynamic Memory with SQL Server VMs.

Memory Utilization Efficiency in Virtualized Environments

The primary reason memory is usually the bottleneck for higher VM densities is that users tend to be generous when assigning memory to their VMs. Here are some memory sizing practices we’ve heard from customers:

- I assign 4 GB of memory to my VMs. I don’t know if all of it is being used by the applications but no one complains.
- I take the minimum system requirements and add 50% more.
- I go with the recommendations provided by my software vendor.

In reality, correctly sizing a virtual machine requires significant effort to monitor the memory usage of the applications. Since this is not done in most environments, VMs are usually over-provisioned in terms of memory. In other words, a SQL Server VM that is assigned 4 GB of memory may not need to use 4 GB.

How does Dynamic Memory help?

Dynamic Memory improves memory utilization by removing the requirement to determine the memory needs of an application up front. With Dynamic Memory, Hyper-V determines the memory needed by applications in the VM by evaluating the memory usage information in the guest. VMs can start with a small amount of memory and be assigned more memory dynamically based on the workload of the applications running inside.

Overview of Dynamic Memory Concepts

- Startup Memory: the starting amount of memory when Dynamic Memory is enabled for a VM. Dynamic Memory will make sure that this amount of memory is always assigned to the VM by default.
- Maximum Memory: the maximum amount of memory that a VM can grow to with Dynamic Memory.
- Memory Demand: the amount determined by Dynamic Memory as the memory needed by the applications in the VM. In Windows Server 2008 R2 SP1, this is equal to the total amount of committed memory of the VM.
- Memory Buffer: the amount of memory assigned to the VM in addition to its memory demand, to satisfy immediate memory requirements and file cache needs.

Once Dynamic Memory is enabled for a VM, it will start with the “Startup Memory”. After the boot process Dynamic Memory will determine the “Memory Demand” of the VM. Based on this memory demand it will determine the amount of “Memory Buffer” that needs to be assigned to the VM. Dynamic Memory will assign the total of “Memory Demand” and “Memory Buffer” to the VM as long as this value is less than “Maximum Memory” and as long as physical memory is available on the host.

What happens when there is not enough physical memory available on the host?

Once there is not enough physical memory on the host to satisfy VM needs, Dynamic Memory will assign less memory than needed to the VMs, based on their importance. A concept known as “Memory Weight” is used to determine how much each VM should be penalized relative to its needed amount of memory. “Memory Weight” is a configuration setting on the VM. It can be configured to be higher for VMs with high performance requirements. Under high memory pressure on the host, the “Memory Weight” of each VM is evaluated in a relative manner, and the VMs with lower relative “Memory Weight” will be penalized more than the ones with higher “Memory Weight”.

Dynamic Memory Configuration

Based on these concepts, “Startup Memory”, “Maximum Memory”, “Memory Buffer” and “Memory Weight” can be configured in Windows Server 2008 R2 SP1 Hyper-V Manager. Memory Demand is automatically calculated by Dynamic Memory once VMs start running.

Dynamic Memory Monitoring

In Windows Server 2008 R2 SP1, Hyper-V Manager displays the memory status of VMs in the following three columns:

- Assigned Memory represents the current physical memory assigned to the VM. In regular conditions this will be equal to the sum of “Memory Demand” and “Memory Buffer” assigned to the VM. When there is not enough memory on the host, this value can go below the Memory Demand determined for the VM.
- Memory Demand displays the current “Memory Demand” determined for the VM.
- Memory Status displays the current memory status of the VM. This column can represent three values for a VM:
  - OK: the VM is assigned the total of the Memory Demand and Memory Buffer it needs.
  - Low: the VM is assigned all of its Memory Demand and a certain percentage of the Memory Buffer it needs.
  - Warning: the VM is assigned less memory than its Memory Demand. When VMs are running in this condition, it’s likely that they will exhibit performance problems due to internal paging happening in the VM.

So far so good! But how does it work with SQL Server?

SQL Server is aggressive in terms of memory usage, for good reasons. This raises the question: how do SQL Server and Dynamic Memory work together? To understand the full story, we’ll first need to understand how SQL Server memory management works. This will be covered in the second post in the “SQL and Dynamic Memory” series.
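From inside the guest you can also watch these mechanics from SQL Server’s side of the fence. The query below is a minimal sketch (not taken from the original post) that uses two standard DMVs, sys.dm_os_sys_memory and sys.dm_os_process_memory, to compare the physical memory the VM currently sees with what the SQL Server process is actually using; on a VM with Dynamic Memory enabled, both figures move as the host adds or removes memory.

    -- Minimal sketch (not from the original post): observe, from inside the guest,
    -- how much physical memory the VM currently has and how much SQL Server is using.
    -- Requires VIEW SERVER STATE permission; these DMVs exist in SQL Server 2008 and later.
    SELECT
        osm.total_physical_memory_kb / 1024     AS vm_physical_memory_mb,  -- what the guest OS sees right now
        osm.available_physical_memory_kb / 1024 AS vm_available_memory_mb,
        osm.system_memory_state_desc,                                      -- e.g. available physical memory is high/low
        pm.physical_memory_in_use_kb / 1024     AS sql_working_set_mb,     -- memory the SQL Server process is using
        pm.memory_utilization_percentage
    FROM sys.dm_os_sys_memory AS osm
    CROSS JOIN sys.dm_os_process_memory AS pm;

Sampling this periodically while the host is under memory pressure is one way to correlate the OK/Low/Warning states shown in Hyper-V Manager with what SQL Server actually experiences inside the guest.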
Meanwhile if you want to dive deeper into Dynamic Memory you can check the below posts from the Windows Virtualization Team Blog:   http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx   http://blogs.technet.com/virtualization/archive/2010/03/25/dynamic-memory-coming-to-hyper-v-part-2.aspx   http://blogs.technet.com/virtualization/archive/2010/04/07/dynamic-memory-coming-to-hyper-v-part-3.aspx   http://blogs.technet.com/b/virtualization/archive/2010/04/21/dynamic-memory-coming-to-hyper-v-part-4.aspx   http://blogs.technet.com/b/virtualization/archive/2010/05/20/dynamic-memory-coming-to-hyper-v-part-5.aspx   http://blogs.technet.com/b/virtualization/archive/2010/07/12/dynamic-memory-coming-to-hyper-v-part-6.aspx   - Serdar Sutay   Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Introducing… SharePress!

    - by Bil Simser
    For those that follow me I’ve been away from blogging and twittering for a couple of months. This is the reason. For the last few months I’ve been working with a cross-functional team putting together a new product from the people that run WordPress, the free premiere blogging platform. The result is a new product we call SharePress, a highly extensible blogging and content management platform with the usability of WordPress and the power of SharePoint combined into a single product. SharePress gives you SharePoint sites that are SEO-friendly delivered with a Web 2.0 ease of use, leveraging all of the existing abilities of SharePoint and WordPress that we know today. The Reason Back in December I was approached by the WordPress team about building a new platform that took advantage of the power of SharePoint but the ease of WordPress. I’m no stranger to WordPress and it’s 5 minute no-holds-barred install (I’ve always wanted SharePoint to do this!) and I run my personal blog on WordPress as does my better half, Princess Jenn. There’s always been a pitch by so-called Web 2.0 applications to deliver the power of SharePoint but the ease of [insert product here] over the past year or so. I checked each and every one of them out, but they fell woefully short when it came to SharePoint’s document management, versioning, and customization. They try, but it’s never been up to par in my books. On the flipside, SharePoint has always been tops in collaboration in the Enterprise but it’s painful to develop web parts, UI customization can be tricky, and there’s just no user community for something as simple as themes and designs. The Product Enter SharePress. Is it SharePoint? Is it WordPress? It’s both, and neither. Everything you like about both products are there but this is a bold new product that is positioned to bring SharePoint to the masses while maintaining the fidelity of an Enterprise 2.0 collaboration platform. SharePress delivers on all fronts including: The ability to leverage any WordPress/Joomla/Drupal/DotNetNuke themes and skins inside of SharePoint Run any WordPress/Drupal/Joomla/DotNetNuke/SharePoint plug-in/module/web part/feature works out of the box with SharePress SEO-friendly URLs and pages Permalinks for all content All the features of SharePoint Server 2010 (including InfoPath, Excel, and Access services) included in the price Small deployment footprint. You decide how much to deploy and where. Independent Database Abstraction Layer (iDal) that allows you to deploy to SQL Server 2005/2008, MySQL, and PostgreSQL Portable Rendering Engine Layer (PREL) so you host .NET or PHP on Apache or IIS (version 7 or higher). The install feature is built around WordPress and it’s famous 5-minute install (actually, it’s never taken me more than 1 minute). SharePress installs with two screens after the files are uploaded to your server (which can be done entirely using FTP): After you enter two fields of information click “Install SharePress” and you’ll be done: No mess, no fuss, no complicated dependencies, and no server access required! How simpler could this be? The Technology WordPress plug-ins and themes working with SharePoint? Of course! The answer is IronPython which has now reached a maturity level capable of doing on the fly code language conversions. SharePress is a brand new product not built on top of any previous platform but leverages all the power of each of those applications through a patent pending technique called SharePress Multi-plAtfoRm Technology (SMART). 
SMART will convert PHP code on the fly into Python (using SWIG as an intermediate processor) which is then compiled to MSIL and then delivered back as an ASP.NET MVC application (output is C# or VB.NET, but you can build your own SMART converter to output a different language). Sound complicated? It is, but it’s all behind the scenes and you don’t have to worry about a thing. This image illustrates the technology stack and process: So users can load up out of the box PHP themes and plug-ins from the WordPress/Joomla/Drupal community into the SMART converter and output MSIL that is used by the SharePress engine and rendered on the fly to the end user. Supported PHP versions are 4.xx and 5.xx with version 6 support to come when it’s released. Similarly you can take any .NET application, DotNetNuke Module, SharePoint Web Part or event handler and feed it into the converter to output the same. Everything is reverse compiled into MSIL so it becomes technology agnostic. No source code access is needed and the SMART converter can handle obfuscated .NET assemblies that were built with .NET 1.0, 1.1, 2.0, 3.5, and 4.0. With this technology you can also with the flip of a switch have the output create PHP pages for you. This allows you to run SharePress on Unix based systems running PHP and MySQL, allowing you to deliver your SharePoint like experience to your users with a $0 infrastructure footprint. Here’s SharePress with the default WordPress post imported then a stock SharePoint collaboration site was imported. The site was then applied with the default Kubrick theme from WordPress. The Features Deploy any of the freely available 100,000 WordPress/Joomla/Drupal themes instantly to your runtime SharePress environment and preview or activate them right from your browser. Built-in Web 2.0 jQuery Enabled End User and Administrator Web Interface. Never have to remote into a server again! Run any SharePoint Web Part or Event Handler directly without modification or access to source code in SharePress. Use any WordPress/Joomla/Drupal plug-in directly in SharePress, no local admin or access to server. Just upload and activate. Upload and Activate any SharePoint Solution Package to any site remotely. No rebuilding. Changes made to sites require no compiling or rebuilding and are published immediately. Password Protected Content. You can give passwords to individual posts, articles, pages, documents, forms, and list items. A powerful polymorphic Captcha system backs the security interface and vendors can easily tie into smart card readers, fingerprint readers, and retina scanners for authorization and identification. OpenID, Windows Live, and Windows Authentication are supported out of the box. Infinitely customizable and extensible. You can leverage plug-ins from the open source community to do practically anything, all configured and uploaded via the browser. Additionally the developer API (available soon) allows you to build extensions in .NET, PHP, and Python with little effort. Easy Importing. We have importers for Blogger, WordPress, Drupal, Joomla, DotNetNuke, and SharePoint so you can populate your site quickly and easily with full metadata modeling and creation. Banner Management. It’s easy to setup banners for your web site complete with impression numbers, special URLs, and more. Menu Manager. 
The Menu Manager allows you to create as many menus as you want, each one can be associated to specific audiences or roles and then be styled across multiple contexts including the same menu delivered as a fly out, rollover, drop down, and just about any navigation you can think of. Collaborative ShareBook. Our exclusive book feature allows you to setup a “book” and then authorize individuals to contribute content. Permalinks. All content in SharePress has a permanent or “perma link” associated with it so people can link to it freely without fear of broken links. Apache or IIS, Unix / Linux / BSD / Solaris / Windows / Mac OS X support. Deliver SharePress the way *you* want from the platform *you* decide. Database Independence. We know people wanted to run on any database platform so SharePress is built on top of a database abstraction layer that allows you to run on SQL Server, MySQL, PostgreSQL. Other databases can be supported by writing a supporting database script consisting of fourteen function calls. The script can be written in Perl, Python, AWK, PowerShell, Unix Shell scripts, VBA, or simple DOS batch files. The Team SharePress is the work of a lot of people in both the WordPress and SharePoint community. I worked with a lot of SharePoint MVPs to create this new product as we really wanted to deliver the most compatible and feature rich system in a product that we would be proud of. Many thanks go out to Eli Bleeker, Todd Robillard, Scot Larson, Daniel Hillier, Shane Fox, Box Peran, Amanda English, and Bill Murray for doing the heavy lifting and all of their expertise and innovative thinking to get this product out. Licensing and Pricing SharePress is still in the final stages for pricing but we’re looking at a price point somewhere between $99-$100 to make it affordable for everyone. We plan to announce final pricing sometime in the next few weeks. There are no additional charges for Enterprise versions or additional features. Everything you see is what’s available and it’s just a matter of lighting up your site with whatever feature you want to enable. The product will not be open source but source code licenses will be available to ISVs who are interested in interfacing with the API at a low level. Cost will be $25,000 USD per developer and gives you complete access to the source code to the SharePress Foundation System and the .NET 4.0 Framework source code. Conclusion We hope you enjoy the launch of SharePress as the new premium blogging and content management platform for both Intranets and the Internet. We think we’ve build the best of breed solutions here and made it easy for anyone to get started with a minimal of infrastructure but allow the scalability of SharePress to shine through in the Enterprise 2.0 world. We encourage your feedback so please leave comments as to what you’re looking for in this system as we’re always evolving it to make it a better product for everyone.

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material. In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all. The problem is, predictably, that performance is not very good. The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found. Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here.

The Example

We’ll use data from the AdventureWorks database, copied to temporary unindexed tables. A script to create these structures is shown below:

    CREATE TABLE #Custs
    (
        CustomerID   INTEGER  NOT NULL,
        TerritoryID  INTEGER  NULL,
        CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
    );
    GO
    CREATE TABLE #Prods
    (
        ProductMainID   INTEGER      NOT NULL,
        ProductSubID    INTEGER      NOT NULL,
        ProductSubSubID INTEGER      NOT NULL,
        Name            NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
    );
    GO
    CREATE TABLE #OrdHeader
    (
        SalesOrderID     INTEGER      NOT NULL,
        OrderDate        DATETIME     NOT NULL,
        SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        CustomerID       INTEGER      NOT NULL
    );
    GO
    CREATE TABLE #OrdDetail
    (
        SalesOrderID    INTEGER       NOT NULL,
        OrderQty        SMALLINT      NOT NULL,
        LineTotal       NUMERIC(38,6) NOT NULL,
        ProductMainID   INTEGER       NOT NULL,
        ProductSubID    INTEGER       NOT NULL,
        ProductSubSubID INTEGER       NOT NULL
    );
    GO
    INSERT #Custs (CustomerID, TerritoryID, CustomerType)
    SELECT C.CustomerID, C.TerritoryID, C.CustomerType
    FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
    GO
    INSERT #Prods (ProductMainID, ProductSubID, ProductSubSubID, Name)
    SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
    FROM AdventureWorks.Production.Product P WITH (TABLOCK);
    GO
    INSERT #OrdHeader (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
    SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
    FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
    GO
    INSERT #OrdDetail (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
    SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
    FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);

The query itself is a simple join of the four tables:

    SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
           H.SalesOrderNumber, H.OrderDate, C.TerritoryID
    FROM #Prods P
    JOIN #OrdDetail D
        ON P.ProductMainID = D.ProductMainID
        AND P.ProductSubID = D.ProductSubID
        AND P.ProductSubSubID = D.ProductSubSubID
    JOIN #OrdHeader H
        ON D.SalesOrderID = H.SalesOrderID
    JOIN #Custs C
        ON H.CustomerID = C.CustomerID
    ORDER BY P.ProductMainID ASC
    OPTION (RECOMPILE, MAXDOP 1);

Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).

The Problem

The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan. The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  It’s presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
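For readers who want to confirm the hash and sort spills described above without reaching for Profiler, a minimal Extended Events sketch along the following lines should surface the same information. This is not part of the original post: the session name is an arbitrary placeholder, and the sqlserver.hash_warning and sqlserver.sort_warning events are assumed to be available (they are present from SQL Server 2012 onward).

-- Sketch: capture hash and sort spill warnings while re-running the test query.
-- SpillWatch is a placeholder name; drop the session when you are done.
CREATE EVENT SESSION SpillWatch ON SERVER
ADD EVENT sqlserver.hash_warning
    (ACTION (sqlserver.sql_text)),     -- which statement caused the hash spill
ADD EVENT sqlserver.sort_warning
    (ACTION (sqlserver.sql_text))      -- which statement caused the sort spill
ADD TARGET package0.ring_buffer        -- keep results in memory for a quick look
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO
ALTER EVENT SESSION SpillWatch ON SERVER STATE = START;

Querying the ring_buffer target afterwards shows one event per spill, which lines up with the Hash Warning and Sort Warnings Profiler events mentioned above.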

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. During all this time, he has not lost his focus on helping the larger community. I am honored that he has agreed to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan: SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server.” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further. The SQL Server is slow, we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_os_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters shows that Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Page counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem? In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs and it is a dual duo core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max Server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system.
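As a rough illustration of the quick checks described above (this sketch is mine, not Jonathan’s), the I/O stalls and memory counters he mentions can be pulled from two DMVs; the counter names are worth verifying against your own build:

-- Read and write stalls per database file (high read stalls on user databases,
-- high write stalls on tempdb in the scenario described above).
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms,
       vfs.num_of_reads,
       vfs.num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall_read_ms DESC;

-- The memory counters referenced above: Page Life Expectancy, Free List Stalls/sec,
-- and Buffer Cache Hit Ratio.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy',
                       N'Free list stalls/sec',
                       N'Buffer cache hit ratio');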
If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check, the plans are pretty big, I see some large index seeks, that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, and the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly! If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it, you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server and this leads to plan cache bloat, and up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to force complete flushing of the buffer cache, not just once initially, but then another time during the query’s execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the Memory counters for the instance; Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter the consistent bottoming out of Free List Pages, along with Free List Stalls/sec consistently spiking over 10, you have the perfect memory pressure scenario. All of a sudden it may not be that our disk subsystem is the problem, but is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems and most published documents pre-date the predominance of 64 bit computing and easy availability of larger amounts of memory in SQL Servers.
As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: There are two things really wrong with this server; first the plan cache is excessively consuming memory and bloated in size and we need to look at that and second we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency.  SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney, showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB  [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache.  If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause.  I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen. 
As the DBA, you have to sit back and listen to all that it’s telling you and then evaluate the big picture and how all the data you can gather from SQL about performance relates to each other. One of the greatest tools out there is actually free, in the form of Diagnostic Scripts for SQL Server 2005 and 2008, created by MVP Glenn Alan Berry. Glenn’s scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you. When I read Pinal’s blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I’d chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful.  Instead he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what’s really the problem. This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
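To make the two plan cache checks from the post concrete, here is a hedged sketch (my illustration, not code from the original article) that reports how much memory the plan cache clerks hold and how much of the cache is tied up in single-use adhoc plans:

-- Plan cache memory by cache store. The pages_kb column exists on SQL Server 2012
-- and later; older builds expose single_pages_kb and multi_pages_kb instead.
SELECT type, SUM(pages_kb) / 1024 AS cache_mb
FROM sys.dm_os_memory_clerks
WHERE type IN (N'CACHESTORE_SQLCP', N'CACHESTORE_OBJCP')
GROUP BY type;

-- Single-use adhoc plans (usecounts = 1) wasting space in the plan cache.
SELECT COUNT(*) AS single_use_plans,
       SUM(CAST(size_in_bytes AS BIGINT)) / 1024 / 1024 AS single_use_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
  AND objtype = N'Adhoc';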

    Read the article

  • Messing with the Team

    - by Robert May
    Good Product Owners will help the team be the best that they can be.  Bad product owners will mess with the team and won’t care about the team.  If you’re a product owner, seek to do good and avoid bad behavior at all costs.  Remember, this is for YOUR benefit and you have much power given to you.  Use that power wisely. Scope Creep The product owner has several tools at his disposal to inject scope into an iteration.  First, the product owner can use defects to inject scope.  To do this, they’ll tell the team what functionality that they want to see in a feature.  Then, after the feature is developed, the Product Owner will decide that they don’t really like how the functionality behaves.  To change it, rather than creating a new story, they’ll add a defect.  The functionality is correct, as designed, but the Product Owner doesn’t like it.  By creating the defect, the Product Owner destroys the trust that the team has of the product owner.  They may not be able to count the story, because the Product Owner changed the story in the iteration, and the team then ends up looking like they have low velocity for something over which they have no control.  This is bad.  One way to deal with this is to add “Product Owner Time” to the iteration.  This will slow the velocity, but then the ScrumMaster can tell stake holders that this time is strictly in place to deal with bad behavior of the Product Owner. Another mechanism often used to inject Scope is the concept of directed development.  Outside of planning, stand-ups, or any other meeting, the Product Owner will take a developer aside and ask them to complete a task for them.  This is bad!  The team should be allocating all of their time to development.  If the Product Owner asks for a favor, then time that would normally be used for development will be used for a pet project of the Product Owner and the team will not get credit for this work.  Selfish product owners do this, and I typically see people who were “managers” do this behavior.  Authoritarian command and control development environments also see this happen.  The best thing that can happen is for the team member to report the issue to the ScrumMaster and the ScrumMaster to get very aggressive with management and the Product Owner to try and stop the behavior.  This may result in the ScrumMaster being fired, but if the behavior continues, Scrum is doomed.  This problem is especially bad in cases where the team member’s direct supervisor is the Product Owner.  I don’t recommend that the Product Owner or ScrumMaster have a direct report relationship with team members, since team members need the ability to say no.  To work around this issue, team members need to say no.  If that fails, team members need to add extra time to the iteration to deal with the scope creep injection and accept the lower velocity. As discussed above, another mechanism for injecting scope is by changing acceptance tests after the work is complete.  This is similar to adding defects to change scope and is bad.  To get around, add time for Product Owner uncertainty to the iteration and make sure that stakeholders are aware of the need to add this time because of the Product Owner. Refusing to Prioritize Refusing to prioritize causes chaos for the team.  From the team’s perspective, things that are not important will be worked on while things that the team knows are vital will be ignored.  A poor Product Owner will often pick the stories for the iteration on a whim.  
This leads to the team working on many different aspects of the product and results in a lower velocity, since each iteration the team must switch context to the new area of development. The team will also experience confusion about priorities.  In one iteration, Feature X was the highest priority and had to be done.  Then, the following iteration, even though parts of Feature X still need to be completed, no stories to address them will be in the iteration.  However, three iterations later, Feature X will again become high priority. This will cause the team to not trust the Product Owner, and eventually, they’ll stop caring about the features they implement.  They won’t know what is important, so to insulate themselves from the ever changing chaos, they’ll become apathetic to all features.  Team members are some of the most creative people in a company.  By losing their engagement, the company is going to have a substandard product because the passion for the product won’t be in the team. Other signs that the Product Owner refuses to prioritize are that no one outside of the product owner will be consulted on priorities.  Additionally, the product, release, and iteration backlogs will be weak or non-existent. Dealing with this issue is not easy.  This really isn’t something the team can fix, short of taking over the role of Product Owner themselves.  An appeal to the stake holders might work, but only if the Product Owner isn’t a “manager” themselves.  The ScrumMaster needs to protect the team and do what they can to either get the Product Owner to prioritize or have the Product Owner replaced. Managing the Team A Product Owner who is also the “boss” of team members creates a Scrum team that is waiting to fail.  If your boss tells you to do something, failing to do that something can cause you to be fired.  The team needs the ability to tell the Product Owner NO.  If the product owner introduces scope creep, the team has a responsibility to tell the Product Owner no.  If the Product Owner tries to get the team to commit to more than they can accomplish in an iteration, the team needs the ability to tell the Product Owner no. If the Product Owner is your boss and determines your pay increases, you’re probably not going to ever tell them no, and Scrum will likely fail.  The team can’t do much in this situation. Another aspect of “managing the team” that often happens is the Product Owner tries to tell the team how to develop the stories that are in the iteration.  This is one reason why I recommend that Product Owners are NOT technical people.  That way, the team can come up with the tasks that are needed to accomplish the stories and the Product Owner won’t know better.  If the Product Owner is technical, the ScrumMaster will need to take great care to protect the team from the Product Owner changing how the team thinks they need to implement the stories. Product Owners can also try to manage the team through their body language.  If the team says a task is going to take 6 hours to complete, and the Product Owner disagrees, they will use some kind of sour body language to indicate this disagreement.  In weak teams, this may cause the team to revise their estimate down, which will result in them taking longer than estimated and may result in them missing the iteration.  The ScrumMaster will need to make sure that the Product Owner doesn’t send such messages and that the team ignores them and estimates what they REALLY think it will take to complete the tasks.
Forcing the team to deal with such items in the retrospective can be helpful. Absenteeism The team is completely dependent upon the Product Owner to develop features for the customer.  The Product Owner IS the voice of the customer and without them, the team will lack direction.  Being the Product Owner is a full time job!  If the Product Owner cannot dedicate daily time with the team, a different product owner should be found. The Product Owner needs to attend every stand-up, planning meeting, showcase, and retrospective that the team has.  The team also must be able to have instant communication with the product owner.  They must not be required to schedule meetings to speak with their product owner.  The team must be the highest priority task that the Product Owner has. The best way to work around an absent Product Owner is to appoint a new Product Owner in the team.  This person will be responsible for making the decisions that the Product Owner should be making and to act as the liaison to the absent Product Owner.  If the delegate Product Owner doesn’t have authority to make decisions for the team, Scrum will fail.  If the Product Owner is absent, the ScrumMaster should seek to have that Product Owner replaced by someone who has the time and ability to be a real Product Owner. Making it Personal Too often Product Owners will become convinced that their ideas are the ones that matter and that anyone who disagrees is making a personal attack on them.  Remember that Product Owners will inherently be at odds with many people, simply because they have the need to prioritize.  Others will frequently question prioritization because they only see part of the picture that Product Owners face. Product Owners must have a thick skin and a thick ego.  If they don’t, they tend to make things personal, which causes them to become emotional and causes them to take actions that can destroy the trust that team members have in the Product Owner. If a Product Owner is making things personal, the best thing that team members can do is reassure them that it’s not personal, but be firm about doing what is best for the Company and for the users.  The ScrumMaster should also spend significant time coaching the Product Owner on how to not react emotionally and how to accept criticism without becoming defensive. Conclusion I’m sure there are other ways that a Product Owner can mess with the team, but these are the most common that I’ve seen.  I would encourage all Product Owners to seek to be a good Product Owner.  If you find yourself behaving in any of the bad product owner ways, change your behavior today!  Your team will thank you. Remember, being Product Owner is very difficult!  Product Owner is one of the most difficult roles in Scrum.  However, it can also be one of the most rewarding roles in Scrum, since Product Owners literally see their ideas brought to life on the computer screen.  Product Owners need to be very patient, even in the face of criticism, and need to be willing to make tough decisions on priority, but then not become offended when others disagree with those decisions.  Companies should spend the time needed to find the right product owners for their teams.  Doing so will only help the company to write better software. Technorati Tags: Scrum, Product Owner

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms; this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, which ever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB?The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
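As an illustration (the session name and event below are placeholders of my choosing, not a recommendation from the post), a session intended for a quick demo might lower the latency from the 30 second default like this:

-- Sketch: a short-lived session with a 5 second dispatch latency so events show up
-- in the target almost immediately.
CREATE EVENT SESSION QuickLook ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);   -- default is 30 seconds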
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, it will drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple of executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size field. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.)
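Pulling those two checks together, a query along these lines (the session name is just the placeholder used earlier) shows whether a running session is dropping events, and how large the biggest dropped event was:

-- Sketch: check a running session for dropped events and oversized events.
SELECT s.name,
       s.dropped_event_count,
       s.largest_event_dropped_size
FROM sys.dm_xe_sessions AS s
WHERE s.name = N'QuickLook';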
Partitioning: moving your events to a sub-division Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used: None: 3 buffers (fixed); Node: 3 * number_of_nodes; CPU: 2.5 * number_of_cpus. Here are some examples of what this means for different Node/CPU counts: with 2 CPUs and 1 Node you get 3 buffers (None), 3 buffers (Node), or 5 buffers (CPU); with 6 CPUs and 2 Nodes you get 3, 6, or 15 buffers respectively; and with 40 CPUs and 5 Nodes you get 3, 15, or 100 buffers. Aside: Buffer size on multi-processor computers: As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration. The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things: Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory. Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session. Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff. Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short term troubleshooting will likely fall into the “reasonably expected impact” category. Hey buddy – I think you forgot something OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior; it’s more about event analysis. - Mike
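To close the loop, here is a minimal sketch of a session definition that sets several of the options discussed in this post explicitly. It is an illustration rather than anything from the original article: the session name, event, and file path are placeholders, and the event_file target assumes SQL Server 2012 or later (earlier builds used package0.asynchronous_file_target).

CREATE EVENT SESSION OptionDemo ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file (SET filename = N'C:\Temp\OptionDemo.xel')
WITH (
    MAX_MEMORY = 8 MB,                              -- total event buffer memory (default 4 MB)
    MAX_DISPATCH_LATENCY = 10 SECONDS,              -- default is 30 seconds
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, -- the recommended default
    MEMORY_PARTITION_MODE = PER_NODE,               -- one buffer set per NUMA node
    STARTUP_STATE = ON                              -- start automatically with the server
);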

    Read the article

  • Productivity Tips

    - by Brian T. Jackett
    A few months ago during my first end of year review at Microsoft I was doing an assessment of my year.  One of my personal goals to come out of this reflection was to improve my personal productivity.  While I hear many people say “I wish I had more hours in the day so that I could get more done” I feel like that is the wrong approach.  There is an inherent assumption that you are already being productive with the time you have, and thus that more time would let you get proportionally more done.    Instead of wishing I could add more hours to the day I’ve begun adopting a number of processes or behavior changes in my personal life to make better use of my time with the goal of improving productivity.  The areas of focus are as follows: Focus, Processes, Tools, Personal Health, and Email. Note: A number of these topics have spawned from reading Scott Hanselman’s blog posts on productivity, reading David Allen’s book Getting Things Done, and discussions with friends and coworkers who had great insights into this topic.   Focus Pre-reading / viewing: Overcome your work addiction Millennials paralyzed by choice Its Not What You Read Its What You Ignore (Scott Hanselman video)    I highly recommend Scott Hanselman’s video above and this post before continuing with this article.  It is well worth the 40+ minute price of admission for the video and a couple of minutes for the article.  One key takeaway for me was listing out my activities in an average week and realizing which ones held little or no value to me.  We all have a finite amount of time to work each day.  Do you know how much time and effort you spend on various aspects of your life (family, friends, religion, work, personal happiness, etc.)?  Do your actions and commitments reflect your priorities?    The biggest time consumers with little value for me were time spent on social media services (Twitter and Facebook), playing an MMO video game, and watching TV.  I still check up on Facebook, Twitter, Microsoft internal chat forums, and other services to keep contact with others but I’ve reduced that time significantly.  As for TV I’ve cut the cord and no longer subscribe to cable TV.  Instead I use Netflix, RedBox, and over the air channels but again with reduced time consumption.  With the time I’ve freed up I’m back to working out 2-3 times a week and reading 4 nights a week (both of which I had been neglecting previously).  I’ll mention a few tools for helping measure your time in the Tools section.   Processes    Do not multi-task.  I’ll say it again.  Do not multi-task.  There is no such thing as multi-tasking.  The human brain is optimized to work on one thing at a time.  When you are “multi-tasking” you are really doing 2 or more things at less than 100%, usually by a wide margin.  I take pride in my work and when I’m doing something less than 100% the results typically degrade rapidly.    Now there are some ways of bending the rules of physics for this one.  There is the notion of getting double the amount of work done in the same timeframe.  Some examples would be listening to podcasts / watching a movie while working out, using a treadmill as your work desk, or reading while in the bathroom.    Personally I’ve found good results in combining one task that does not require focus (making dinner, playing certain video games, working out) and one task that does (watching a movie, listening to podcasts).  I believe this is related to me being a visual and kinesthetic (using my hands or actually doing it) learner.
I’m terrible with auditory learning.  My fiance and I joke that sometimes we talk and talk to each other but never really hear each other.   Goals / Tasks    Goals can give us direction in life and a sense of accomplishment when we complete them.  Goals can also overwhelm us and give us a sense of failure when we don’t complete them.  I propose that you shift your perspective and not dwell on all of the things that you haven’t gotten done, but focus instead on regularly setting measurable goals that you can reasonably accomplish.    At the end of each time frame have a retrospective to review your progress.  Do not feel guilty about what you did not accomplish.  Feel proud of what you did accomplish and readjust your goals for the next time frame to more attainable goals.  Here is a sample schedule I’ve seen proposed by some.  I have not consistently set goals for each timeframe, but I do typically set 3 small goals a day (this blog post is #2 for today). Each day set 3 small goals Each week set 3 medium goals Each month set 1 large goal Each year set 2 very large goals   Tools    Tools are an extension of our human body.  They help us extend beyond what we can physically and mentally do.  Below are some tools I use almost daily or have found useful as of late. Disclaimer: I am not being sponsored to promote any of these products.  I just happen to like them and find them useful. Instapaper – Save internet links for reading later.  There are many tools like this but I’ve found this to be a great one.  There is even a “read it later” JavaScript button you can add to your browser so when you navigate to a site it will then add this to your list. Stacks for Instapaper – A Windows Phone 7 app for reading my Instapaper articles on the go.  It does require a subscription to Instapaper (nominal $3 every three months) but is easily worth the cost.  Alternatively you can set up your Kindle to sync with Instapaper easily but I haven’t done so. SlapDash Podcast – Apps for Windows Phone and Windows 8 (possibly other platforms) to sync podcast viewing / listening across multiple devices.  Now that I have my Surface RT device (which I love) this is making my consumption easier to manage. Feed Reader – Simple Windows 8 app for quickly catching up on my RSS feeds.  I used to have hundreds of unread items all the time.  Now I’m down to 20-50 regularly and it is much easier and faster to consume on my Surface RT.  There is also a free version (which I use) and I can’t see much difference between the free and paid versions currently. Rescue Time – Have you ever wondered how much time you’ve spent on websites vs. email vs. “doing work”?  This service tracks your computer actions and then lets you report on them.  This can help you quantitatively identify areas where your actions are not in line with your priorities. PowerShell – Windows automation tool.  It is now built into every client and server OS.  This tool has saved me days (and I mean the full 24 hrs worth) of time and effort in the past year alone.  If you haven’t started learning PowerShell and you administer any Windows OS or server product, you need to start today. Various blogging tools – I wrote a post a couple years ago called How I Blog about my blogging process and tools used.  Almost all of it still applies today.   Personal Health    Some of these may be common sense or debatable, but I’ve found them to help prioritize my daily activities. Get plenty of sleep on a regular basis.
Sacrificing sleep too many nights a week negatively impacts your cognition, attitude, and overall health. Exercise at least three days a week.  Exercise could be lifting weights, taking the stairs up multiple flights of stairs, walking for 20 mins, or a number of other “non-traditional” activities.  I find that regular exercise helps with sleep and improves my overall attitude. Eat a well balanced diet.  Too much sugar, caffeine, junk food, etc. are not good for your body.  This is not a matter of losing weight but taking care of your body and helping you perform at your peak potential.   Email    Email can be one of the biggest time consumers (i.e. wasters) if you aren’t careful. Time box your email usage.  Set a meeting invite for yourself if necessary to limit how much time you spend checking email. Use rules to prioritize your email.  Email from external customers, email from my manager, or email that includes me directly on the To line goes into my inbox.  Everything else goes a level down and I have 30+ rules to further sort it, mostly distribution lists. Use keyboard shortcuts (when available).  I use Outlook for my primary email and am constantly hitting Alt + S to send, Ctrl + 1 for my inbox, Ctrl + 2 for my calendar, Space / Tab / Shift + Tab to mark items as read, and a number of other useful commands.  Learn them and you’ll see your speed getting through emails increase. Keep emails short.  Few people like reading through long emails.  The first line should state exactly why you are sending the email, followed by 3-4 lines to support it.  Anything longer might be better suited as a phone call or in person discussion.   Conclusion    In this post I walked through various tips and tricks I’ve found for improving personal productivity.  It is a mix of re-focusing on the things that matter, using tools to assist in your efforts, and cutting out actions that are not aligned with your priorities.  I originally had a whole section on keyboard shortcuts, but with my recent purchase of the Surface RT I’m finding that touch gestures have replaced numerous keyboard commands that I used to need.  I see a big future in touch enabled devices.  Hopefully some of these tips help you out.  If you have any tools, tips, or ideas you would like to share feel free to add in the comments section.
-Frog Out   Links Scott Hanselman Productivity posts http://www.hanselman.com/blog/CategoryView.aspx?category=Productivity Overcome your work addiction http://blogs.hbr.org/hbsfaculty/2012/05/overcome-your-work-addiction.html?awid=5512355740280659420-3271   Millennials paralyzed by choice http://priyaparker.com/blog/millennials-paralyzed-by-choice   Its Not What You Read Its What You Ignore (video) http://www.hanselman.com/blog/ItsNotWhatYouReadItsWhatYouIgnoreVideoOfScottHanselmansPersonalProductivityTips.aspx   Cutting the cord – Jeff Blankenburg http://www.jeffblankenburg.com/2011/04/06/cutting-the-cord/   Building a sitting standing desk – Eric Harlan http://www.ericharlan.com/Everything_Else/building-a-sitting-standing-desk-a229.html   Instapaper http://www.instapaper.com/u   Stacks for Instapaper http://www.stacksforinstapaper.com/   Slapdash Podcast Windows Phone -  http://www.windowsphone.com/en-us/store/app/slapdash-podcasts/90e8b121-080b-e011-9264-00237de2db9e Windows 8 - http://apps.microsoft.com/webpdp/en-us/app/slapdash-podcasts/0c62e66a-f2e4-4403-af88-3430a821741e/m/ROW   Feed Reader http://apps.microsoft.com/webpdp/en-us/app/feed-reader/d03199c9-8e08-469a-bda1-7963099840cc/m/ROW   Rescue Time http://www.rescuetime.com/   PowerShell Script Center http://technet.microsoft.com/en-us/scriptcenter/bb410849.aspx

    Read the article

  • Mr Flibble: As Seen Through a Lens, Darkly

    - by Phil Factor
    One of the rewarding things about getting involved with Simple-Talk has been in meeting and working with some pretty daunting talents. I’d like to say that Dom Reed’s talents are at the end of the visible spectrum, but then there is Richard, who pops up on national radio occasionally, presenting intellectual programs, Andrew, master of the ukulele, with his pioneering local history work, and Tony with marathon running and his past as a university lecturer. However, Dom, who is Red Gate’s head of creative design and who did the preliminary design work for Simple-Talk, has taken the art photography to an extreme that was impossible before Photoshop. He’s not the first person to take a photograph of himself every day for two years, but he is definitely the first to weave the results into a frightening narrative that veers from comedy to pathos, using all the arts of Photoshop to create a fictional character, Mr Flibble.   Have a look at some of the Flickr pages. Uncle Spike The B-Men – Woolverine The 2011 BoyZ iN Sink reunion tour turned out to be their last Error 404 – Flibble not found Mr Flibble is not a normal type of alter-ego. We generally prefer to choose bronze age warriors of impossibly magnificent physique and stamina; superheroes who bestride the world, scorning the forces of evil and anarchy in a series noble and righteous quests. Not so Dom, whose Mr Flibble is vulnerable, and laid low by an addiction to toxic substances. His work has gained an international cult following and is used as course material by several courses in photography. Although his work was for a while ignored by the more conventional world of ‘art’ photography they became famous through the internet. His photos have received well over a million views on Flickr. It was definitely time to turn this work into a book, because the whole sequence of images has its maximum effect when seen in sequence. He has a Kickstarter project page, one of the first following the recent UK launch of the crowdfunding platform. The publication of the book should be a major event and the £45 I shall divvy up will be one of the securest investments I shall ever make. The local news in Cambridge picked up on the project and I can quote from the report by the excellent Cabume website , the source of Tech news from the ‘Cambridge cluster’ Put really simply Mr Flibble likes to dress up and take pictures of himself. One of the benefits of a split personality, however is that Mr Flibble is supported in his endeavour by Reed’s top notch photography skills, supreme mastery of Photoshop and unflinching dedication to the cause. The duo have collaborated to take a picture every day for the past 730-plus days. It is not a big surprise that neither Mr Flibble nor Reed watches any TV: In addition to his full-time role at Cambridge software house,Red Gate Software as head of creativity and the two to five hours a day he spends taking the Mr Flibble shots, Reed also helps organise the . And now Reed is using Kickstarter to see if the world is ready for a Mr Flibble coffee table book. Judging by the early response it is. At the time of writing, just a few days after it went live, ‘I Drink Lead Paint: An absurd photography book by Mr Flibble’ had raised £1,545 of the £10,000 target it needs to raise by the Friday 30 November deadline from 37 backers. 
Following the standard Kickstarter template, Reed is offering a series of rewards based on the amount pledged, ranging from a Mr Flibble desktop wallpaper for pledges of £5 or more to a signed copy of the book for pledges of £45 or more, right up to a starring role in the book for £1,500. Mr Flibble is unquestionably one of the more deranged Kickstarter hopefuls, but don’t think for a second that he doesn’t have a firm grasp on the challenges he faces on the road to immortalisation on 150 gsm stock. Under the section ‘risks and challenges’ on his Kickstarter page his statement begins: “An angry horde of telepathic iguanas discover the world’s last remaining stock of vintage lead paint and hold me to ransom. Gosh how I love to guzzle lead paint. Anyway… faced with such brazen bravado, I cower at the thought of taking on their combined might and die a sad and lonely Flibble deprived of my one and only true liquid love.” At which point, Reed manages to wrestle away the keyboard, giving him the opportunity to present slightly more cogent analysis of the obstacles the project must still overcome. We asked Reed a few questions about Mr Flibble’s Kickstarter adventure and felt that his responses were worth publishing in full: Firstly, how did you manage it – holding down a full time job and also conceiving and executing these ideas on a daily basis? I employed a small team of ferocious gerbils to feed me ideas on a daily basis. Whilst most of their ideas were incomprehensibly rubbish and usually revolved around food, just occasionally they’d give me an idea like my B-Men series. As a backup plan though, I found that the best way to generate ideas was to actually start taking photos. If I were to stand in front of the camera, pull a silly face, place a vegetable on my head or something else equally stupid, the resulting photo of that would typically spark an idea when I came to look at it. Sitting around idly trying to think of an idea was doomed to result in no ideas. I admit that I really struggled with time. I’m proud that I never missed a day, but it was definitely hard when you were late from work, tired or doing something socially on the same day. I don’t watch TV, which I guess really helps, because I’d frequently be spending 2-5 hours taking and processing the photos every day. Are there any overlaps between software development and creative thinking? Software is an inherently creative business and the speed that it moves ensures you always have to find solutions to new things. Everyone in the team needs to be a problem solver. Has it helped me specifically with my photography? Probably. Working within teams that continually need to figure out new stuff keeps the brain feisty I suppose, and I guess I’m continually exposed to a lot of possible sources of inspiration. How specifically will this Kickstarter project allow you to test the commercial appeal of your work and do you plan to get the book into shops? It’s taken a while to be confident saying it, but I know that people like the work that I do. I’ve had well over a million views of my pictures, many humbling comments and I know I’ve garnered some loyal fans out there who anticipate my next photo. For me, this Kickstarter is about seeing if there’s worth to my work beyond just making people smile. In an online world where there’s an abundance of freely available content, can you hope to receive anything from what you do, or would people just move onto the next piece of content if you happen to ask for some support? 
A book has been the single most requested thing that people have asked me to produce and it’s something that I feel would showcase my work well. It’s just hard to convince people in the publishing industry right now to take any kind of risk – they’ve been hit hard. If I can show that people would like my work enough to buy a book, then it sends a pretty clear message that publishers might hear, or it gives me confidence enough to invest in myself a bit more – hard to do when you’re riddled with self-doubt! I’d love to see my work in the shops, yes. I could see it being the thing that someone flips through idly as they’re Christmas shopping, recognizing that it’d be just the perfect gift for their difficult-to-buy-for friend or relative. That said, working in the software industry means I’m clearly aware of how I could use technology to distribute my work, but I can’t deny that there’s something very appealing about having a physical thing to hold in your hands. If the project is successful, is there a chance that it could become a full-time job? At the moment that seems like a distant dream, as should this be successful, there are many more steps I’d need to take to reach any kind of business viability. Kickstarter seems exactly that – a way for people to help kick-start me into something that could take off. If people like my work and want me to succeed with it, then taking a look at my Kickstarter page (and hopefully pledging a bit of support) would make my elbows blush considerably. So there it is. An opportunity to open the wallet just a bit to ensure that one of the more unusual talents sees the light in the format it deserves.

    Read the article

  • Auto-Suggest via &lsquo;Trie&rsquo; (Pre-fix Tree)

    - by Strenium
    Auto-Suggest (Auto-Complete) “thing” has been around for a few years. Here’s my little snippet on the subject. For one of my projects, I had to deal with a non-trivial set of items to be pulled via auto-suggest used by multiple concurrent users. Simple, dumb iteration through a list in local cache or back-end access didn’t quite cut it. Enter a nifty little structure, perfectly suited for storing and matching verbal data: “Trie” (http://tinyurl.com/db56g) also known as a Pre-fix Tree: “Unlike a binary search tree, no node in the tree stores the key associated with that node; instead, its position in the tree defines the key with which it is associated. All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. Values are normally not associated with every node, only with leaves and some inner nodes that correspond to keys of interest.” This is a very scalable, performing structure. Though, as usual, something ‘fast’ comes at a cost of ‘size’; fortunately RAM is more plentiful today so I can live with that. I won’t bore you with the detailed algorithmic performance here - Google can do a better job of such. So, here’s C# implementation of all this. Let’s start with individual node: Trie Node /// <summary> /// Contains datum of a single trie node. /// </summary> public class AutoSuggestTrieNode {     public char Value { get; set; }       /// <summary>     /// Gets a value indicating whether this instance is leaf node.     /// </summary>     /// <value>     ///     <c>true</c> if this instance is leaf node; otherwise, a prefix node <c>false</c>.     /// </value>     public bool IsLeafNode { get; private set; }       public List<AutoSuggestTrieNode> DescendantNodes { get; private set; }         /// <summary>     /// Initializes a new instance of the <see cref="AutoSuggestTrieNode"/> class.     /// </summary>     /// <param name="value">The phonetic value.</param>     /// <param name="isLeafNode">if set to <c>true</c> [is leaf node].</param>     public AutoSuggestTrieNode(char value = ' ', bool isLeafNode = false)     {         Value = value;         IsLeafNode = isLeafNode;           DescendantNodes = new List<AutoSuggestTrieNode>();     }       /// <summary>     /// Gets the descendants of the pre-fix node, if any.     /// </summary>     /// <param name="descendantValue">The descendant value.</param>     /// <returns></returns>     public AutoSuggestTrieNode GetDescendant(char descendantValue)     {         return DescendantNodes.FirstOrDefault(descendant => descendant.Value == descendantValue);     } }   Quite self-explanatory, imho. A node is either a “Pre-fix” or a “Leaf” node. “Leaf” contains the full “word”, while the “Pre-fix” nodes act as indices used for matching the results.   Ok, now the Trie: Trie Structure /// <summary> /// Contains structure and functionality of an AutoSuggest Trie (Pre-fix Tree) /// </summary> public class AutoSuggestTrie {     private readonly AutoSuggestTrieNode _root = new AutoSuggestTrieNode();       /// <summary>     /// Adds the word to the trie by breaking it up to pre-fix nodes + leaf node.     
/// </summary>     /// <param name="word">Phonetic value.</param>     public void AddWord(string word)     {         var currentNode = _root;         word = word.Trim().ToLower();           for (int i = 0; i < word.Length; i++)         {             var child = currentNode.GetDescendant(word[i]);               if (child == null) /* this character hasn't yet been indexed in the trie */             {                 var newNode = new AutoSuggestTrieNode(word[i], word.Count() - 1 == i);                   currentNode.DescendantNodes.Add(newNode);                 currentNode = newNode;             }             else                 currentNode = child; /* this character is already indexed, move down the trie */         }     }         /// <summary>     /// Gets the suggested matches.     /// </summary>     /// <param name="word">The phonetic search value.</param>     /// <returns></returns>     public List<string> GetSuggestedMatches(string word)     {         var currentNode = _root;         word = word.Trim().ToLower();           var indexedNodesValues = new StringBuilder();         var resultBag = new ConcurrentBag<string>();           for (int i = 0; i < word.Trim().Length; i++)  /* traverse the trie collecting closest indexed parent (parent can't be leaf, obviously) */         {             var child = currentNode.GetDescendant(word[i]);               if (child == null || word.Count() - 1 == i)                 break; /* done looking, the rest of the characters aren't indexed in the trie */               indexedNodesValues.Append(word[i]);             currentNode = child;         }           Action<AutoSuggestTrieNode, string> collectAllMatches = null;         collectAllMatches = (node, aggregatedValue) => /* traverse the trie collecting matching leafNodes (i.e. "full words") */             {                 if (node.IsLeafNode) /* full word */                     resultBag.Add(aggregatedValue); /* thread-safe write */                   Parallel.ForEach(node.DescendantNodes, descendandNode => /* asynchronous recursive traversal */                 {                     collectAllMatches(descendandNode, String.Format("{0}{1}", aggregatedValue, descendandNode.Value));                 });             };           collectAllMatches(currentNode, indexedNodesValues.ToString());           return resultBag.OrderBy(o => o).ToList();     }         /// <summary>     /// Gets the total words (leafs) in the trie. Recursive traversal.     /// </summary>     public int TotalWords     {         get         {             int runningCount = 0;               Action<AutoSuggestTrieNode> traverseAllDecendants = null;             traverseAllDecendants = n => { runningCount += n.DescendantNodes.Count(o => o.IsLeafNode); n.DescendantNodes.ForEach(traverseAllDecendants); };             traverseAllDecendants(this._root);               return runningCount;         }     } }   Matching operations and Inserts involve traversing the nodes before the right “spot” is found. Inserts need be synchronous since ordering of data matters here. However, matching can be done in parallel traversal using recursion (line 64). Here’s sample usage:   [TestMethod] public void AutoSuggestTest() {     var autoSuggestCache = new AutoSuggestTrie();       var testInput = @"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer nec odio. Praesent libero.                 Sed cursus ante dapibus diam. Sed nisi. Nulla quis sem at nibh elementum imperdiet. Duis sagittis ipsum. Praesent mauris.                 
Fusce nec tellus sed augue semper porta. Mauris massa. Vestibulum lacinia arcu eget nulla. Class aptent taciti sociosqu ad                 litora torquent per conubia nostra, per inceptos himenaeos. Curabitur sodales ligula in libero. Sed dignissim lacinia nunc.                 Curabitur tortor. Pellentesque nibh. Aenean quam. In scelerisque sem at dolor. Maecenas mattis. Sed convallis tristique sem.                 Proin ut ligula vel nunc egestas porttitor. Morbi lectus risus, iaculis vel, suscipit quis, luctus non, massa. Fusce ac                 turpis quis ligula lacinia aliquet. Mauris ipsum. Nulla metus metus, ullamcorper vel, tincidunt sed, euismod in, nibh. Quisque                 volutpat condimentum velit. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nam                 nec ante. Sed lacinia, urna non tincidunt mattis, tortor neque adipiscing diam, a cursus ipsum ante quis turpis. Nulla                 facilisi. Ut fringilla. Suspendisse potenti. Nunc feugiat mi a tellus consequat imperdiet. Vestibulum sapien. Proin quam. Etiam                 ultrices. Suspendisse in justo eu magna luctus suscipit. Sed lectus. Integer euismod lacus luctus magna. Quisque cursus, metus                 vitae pharetra auctor, sem massa mattis sem, at interdum magna augue eget diam. Vestibulum ante ipsum primis in faucibus orci                 luctus et ultrices posuere cubilia Curae; Morbi lacinia molestie dui. Praesent blandit dolor. Sed non quam. In vel mi sit amet                 augue congue elementum. Morbi in ipsum sit amet pede facilisis laoreet. Donec lacus nunc, viverra nec.";       testInput.Split(' ').ToList().ForEach(word => autoSuggestCache.AddWord(word));       var testMatches = autoSuggestCache.GetSuggestedMatches("le"); }   ..and the result: That’s it!
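    To round this out, here is a minimal, self-contained console sketch of how the classes above might be exercised outside of a unit test; the Program class and the tiny sample vocabulary are hypothetical additions, not part of the original post.

    using System;

    class Program
    {
        static void Main()
        {
            var trie = new AutoSuggestTrie();

            // Index a small vocabulary; AddWord trims and lower-cases each entry.
            foreach (var word in new[] { "lectus", "libero", "ligula", "lacinia", "mauris" })
                trie.AddWord(word);

            Console.WriteLine("Words indexed: " + trie.TotalWords);

            // GetSuggestedMatches walks down to the closest indexed prefix node and
            // returns the leaf words found beneath it, sorted alphabetically.
            var matches = trie.GetSuggestedMatches("li");
            Console.WriteLine("Suggestions for 'li': " + string.Join(", ", matches));
        }
    }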

    Read the article

  • Summary of Oracle E-Business Suite Technology Webcasts and Training

    - by BillSawyer
    Last Updated: November 16, 2011We're glad to hear that you've been finding our ATG Live Webcast series to be useful.  If you missed a webcast, you can download the presentation materials and listen to the recordings below. We're collecting other learning-related materials right now.  We'll update this summary with pointers to new training resources on an ongoing basis.  ATG Live Webcast Replays All of the ATG Live Webcasts are hosted by the Oracle University Knowledge Center.  In order to access the replays, you will need a free Oracle.com account. You can register for an Oracle.com account here.If you are a first-time OUKC user, you will have to accept the Terms of Use. Sign-in with your Oracle.com account, or if you don't already have one, use the link provided on the sign-in screen to create an account. After signing in, accept the Terms of Use. Upon completion of these steps, you will be directed to the replay. You only need to accept the Terms of Use once. Your acceptance will be noted on your account for all future OUKC replays and event registrations. 1. E-Business Suite R12 Oracle Application Framework (OAF) Rich User Interface Enhancements (Presentation) Prabodh Ambale (Senior Manager, ATG Development) and Gustavo Jiminez (Development Manager, ATG Development) offer a comprehensive review of the latest user interface enhancements and updates to OA Framework in EBS 12.  The webcast provides a detailed look at new features designed to enhance usability, including new capabilities for personalization and extensions, and features that support the use of dashboards and web services. (January 2011) 2. E-Business Suite R12 Service Oriented Architectures (SOA) Using the E-Business Suite Adapter (Presentation, Viewlet) Neeraj Chauhan (Product Manager, ATG Development) reviews the Service Oriented Architecture (SOA) capabilities within E-Business Suite 12, focussing on using the E-Business Suite Adapter to integrate EBS with third-party applications via web services, and orchestrate services and distributed transactions across disparate applications. (February 2011) 3. Deploying Oracle VM Templates for Oracle E-Business Suite and Oracle PeopleSoft Enterprise Applications Ivo Dujmovic (Director, ATG Development) reviews the latest capabilities for using Oracle VM to deploy virtualized EBS database and application tier instances using prebuilt EBS templates, wire those virtualized instances together using the EBS virtualization kit, and take advantage of live migration of user sessions between failing application tier nodes.  (February 2011) 4. How to Reduce Total Cost of Ownership (TCO) Using Oracle E-Business Suite Management Packs (Presentation) Angelo Rosado (Product Manager, ATG Development) provides an overview of how EBS sysadmins can make their lives easier with the Management Packs for Oracle E-Business Suite Release 12.  This session highlights key features in Application Management Pack (AMP) and Application Change Management Pack) that can automate or streamline system configurations, monitor EBS performance and uptime, keep multiple EBS environments in sync with patches and configurations, and create patches for your own EBS customizations and apply them with Oracle's own patching tools.  (June 2011) 5. Upgrading E-Business Suite 11i Customizations to R12 (Presentation) Sara Woodhull (Principal Product Manager, ATG Development) provides an overview of how E-Business Suite developers can manage and upgrade existing EBS 11i customizations to R12.  
Sara covers methods for comparing customizations between Release 11i and 12, managing common customization types, managing deprecated technologies, and more. (July 2011) 6. Tuning All Layers of E-Business Suite (Part 1 of 3) (Presentation) Lester Gutierrez, Senior Architect, and Deepak Bhatnagar, Senior Manager, from the E-Business Suite Application Performance team, lead Tuning All Layers of E-Business Suite (Part 1 of 3). This webcast provides an overview of how Oracle E-Business Suite system administrators, DBAs, developers, and implementers can improve E-Business Suite performance by following a performance tuning framework. Part 1 focuses on the performance triage approach, tuning applications modules, upgrade performance best practices, and tuning the database tier. This ATG Live Webcast is an expansion of the performance sessions at conferences that are perennial favourites with hardcore Apps DBAs. (August 2011)  7. Oracle E-Business Suite Directions: Deployment and System Administration (Presentation) Max Arderius, Manager Applications Technology Group, and Ivo Dujmovic, Director Applications Technology group, lead Oracle E-Business Suite Directions: Deployment and System Administration covering important changes in E-Business Suite R12.2. The changes discussed in this presentation include Oracle E-Business Suite architecture, installation, upgrade, WebLogic Server integration, online patching, and cloning. This webcast provides an overview of how Oracle E-Business Suite system administrators, DBAs, developers, and implementers can prepare themselves for these changes in R12.2 of Oracle E-Business Suite. (October 2011) Oracle University Courses For a general listing of all Oracle University courses related to E-Business Suite Technology, use the Oracle University E-Business Suite Technology course catalog link. Oracle University E-Business Suite Technology Course Catalog 1. R12 Oracle Applications System Administrator Fundamentals In this course students learn concepts and functions that are critical to the System Administrator role in implementing and managing the Oracle E-Business Suite. Topics covered include configuring security and user management, configuring flexfields, managing concurrent processing, and setting up other essential features such as profile options and printing. In addition, configuration and maintenance of an Oracle E-Business Suite through Oracle Applications Manager is discussed. Students also learn the fundamentals of Oracle Workflow including its setup. The System Administrator Fundamentals course provides the foundation needed to effectively control security and ensure smooth operations for an E-Business Suite installation. Demonstrations and hands-on practice reinforce the fundamental concepts of configuring an Oracle E-Business Suite, as well as handling day-to-day system administrator tasks. 2. R12.x Install/Patch/Maintain Oracle E-Business Suite This course will be applicable for customers who have implemented Oracle E-Business Suite Release 12 or Oracle E-Business Suite 12.1. This course explains how to go about installing and maintaining an Oracle E-Business Suite Release 12.x system. Both Standard and Express installation types are covered in detail. Maintenance topics include a detailed examination of the standard tools and utilities, and an in-depth look at patching an Oracle E-Business Suite system. 
After this course, students will be able to make informed decisions about how to install an Oracle E-Business Suite system that meets their specific requirements, and how to maintain the system afterwards. The extensive hands-on practices include performing an installation on a Linux system, navigating the file system to locate key files, running the standard maintenance tools and utilities, applying patches, and carrying out cloning operations. 3. R12.x Extend Oracle Applications: Building OA Framework Applications This class is a hands-on lab-intensive course that will keep the student busy and active for the duration of the course. While the course covers the fundamentals that support OA Framework-based applications, the course is really an exercise in J2EE programming. Over the duration of the course, the student will create an OA Framework-based application that selects, inserts, updates, and deletes data from a R12 Oracle Applications instance. 4. R12.x Extend Oracle Applications: Customizing OA Framework Applications This course has been significantly changed from the prior version to include additional deployments. The course doesn't teach the specifics of configuration of each product. That is left to the product-specific courses. What the course does cover is the general methods of building, personalizing, and extending OA Framework-based pages within the E-Business Suite. Additionally, the course covers the methods to deploy those types of customizations. The course doesn't include discussion of the Oracle Forms-based pages within the E-Business Suite. 5. R12.x Extend Oracle Applications: OA Framework Personalizations Personalization is the ability within an E-Business Suite instance to make changes to the look and behavior of OA Framework-based pages without programming. And, personalizations are likely to survive patches and upgrades, increasing their utility. This course will systematically walk you through the myriad of personalization options, starting with simple examples and increasing in complexity from there. 6. E-Business Suite: BI Publisher 5.6.3 for Developers Starting with the basic concepts, architecture, and underlying standards of Oracle XML Publisher, this course will lead a student through a progress of exercises building their expertise. By the end of the course, the student should be able to create Oracle XML Publisher RTF templates and data templates. They should also be able to deploy and maintain a BI Publisher report in an E-Business Suite instance. Students will also be introduced to Oracle BI Publisher Enterprise. 7. R12.x Implement Oracle Workflow This course provides an overview of the architecture and features of Oracle Workflow and the benefits of using Oracle Workflow in an e-business environment. You can learn how to design workflow processes to automate and streamline business processes, and how to define event subscriptions to perform processing triggered by business events. Students also learn how to respond to workflow notifications, how to administer and monitor workflow processes, and what setup steps are required for Oracle Workflow. Demonstrations and hands-on practice reinforce the fundamental concepts. 8. R12.x Oracle E-Business Suite Essentials for Implementers Oracle R12.1 E-Business Essentials for Implementers is a course that provides a functional foundation for any E-Business Suite Fundamentals course.

    Read the article

  • PASS Summit Feedback

    - by Rob Farley
    PASS Feedback came in last week. I also saw my dentist for some fillings... At the PASS Summit this year, I delivered a couple of regular sessions and a Lightning Talk. People told me they enjoyed it, but when the rankings came out, they showed that I didn’t score particularly well. Brent Ozar was keen to discuss it with me. Brent: PASS speaker feedback is out. You did two sessions and a Lightning Talk. How did you go? Rob: Not so well actually, thanks for asking. Brent: Ha! Sorry. Of course you know that's why I wanted to discuss this with you. I was in one of your sessions at SQLBits in the UK a month before PASS, and I thought you rocked. You've got a really good and distinctive delivery style.  Then I noticed your talks were ranked in the bottom quarter of the Summit ratings and wanted to discuss it. Rob: Yeah, I know. You did ask me if we could do this...  I should explain – my presentation style is not the stereotypical IT conference one. I throw in jokes, and try to engage the audience thoroughly. I find many talks amazingly dry, and I guess I try to buck that trend. I also run training courses, and find that I get a lot of feedback from people thanking me for keeping things interesting. That said, I also get feedback criticising me for my style, and that’s basically what’s happened here. For the rest of this discussion, let’s focus on my talk about the Incredible Shrinking Execution Plan, which I considered to be my main talk. Brent: I thought that session title was the very best one at the entire Summit, and I had it on my recommended sessions list.  In four words, you managed to sum up the topic and your sense of humor.  I read that and immediately thought, "People need to be in this session," and then it didn't score well.  Tell me about your scores. Rob: The questions on the feedback form covered the usefulness of the information, the speaker’s presentation skills, their knowledge of the subject, how well the session was described, the amount of time allocated, and the quality of the presentation materials. Brent: Presentation materials? But you don’t do slides.  Did they rate your thong? Rob: No-one saw my flip-flops in this talk, Brent. I created a script in Management Studio, and published that afterwards, but I think people will have scored that question based on the lack of slides. I wasn’t expecting to do particularly well on that one. That was the only section that didn’t have 5/5 as the most popular score. Brent: See, that sucks, because cookbook-style scripts are often some of my favorites.  Adam Machanic's Service Broker workbench series helped me immensely when I was prepping for the MCM.  As an attendee, I'd rather have a commented script than a slide deck.  So how did you rank so low? Rob: When I look at the scores that you got (based on your blog post), you got very few scores below 3 – people that felt strong enough about your talk to post a negative score. In my scores, between 5% and 10% were below 3 (except on the question about whether I knew my stuff – I guess I came as knowledgeable). Brent: Wow – so quite a few people really didn’t like your talk then? Rob: Yeah. Mind you, based on the comments, some people really loved it. I’d like to think that there would be a certain portion of the room who may have rated the talk as one of the best of the conference. Some of my comments included “amazing!”, “Best presentation so far!”, “Wow, best session yet”, “fantastic” and “Outstanding!”. 
I think lots of talks can be “Great”, but not so many talks can be “Outstanding” without the word losing its meaning. One wrote “Pretty amazing presentation, considering it was completely extemporaneous.” Brent: Extemporaneous, eh? Rob: Yeah. I guess they don’t realise how much preparation goes into coming across as unprepared. In many ways it’s much easier to give a written speech than to deliver a presentation without slides as a prompt. Brent: That delivery style, the really relaxed, casual, college-professor approach was one of the things I really liked about your presentation at SQLbits.  As somebody who presents a lot, I "get" it - I know how hard it is to come off as relaxed and comfortable with your own material.  It's like improv done by jazz players and comedians - if you've never tried it, you don't realize how hard it is.  People also don't realize how hard it is to make a tough subject fun. Rob: Yeah well... There will be people writing comments on this post that say I wasn't trying to make the subject fun, and that I was making it all about me. Sometimes the style works, sometimes it doesn't. Most of the comments mentioned the fact that I tell jokes, some in a nice way, but some not so much (and it wasn't just a PASS thing - that's the mix of feedback I generally get). One comment at PASS was: “great stand up comedian - not what I'm looking for at pass”, and there were certainly a few that said “too many jokes”. I’m not trying to do stand-up – jokes are my way of engaging with the audience while I demonstrate some of the amazing things that the Query Optimizer can do if you write your queries the right way. Some people didn’t think it was technical enough, but I’ve also had some people tell me that the concepts I’m explaining are deep and profound. Brent: To me, that's a hallmark of a great explanation - when someone says, "But of course it has to work that way - how could it work any other way?  It seems so simple and logical."  Well, sure it does when it's explained correctly, but now pick up any number of thick SQL Server books and try to understand the Redundant Joins concept.  I guarantee it'll take more than 45 minutes. Rob: Some people in my audiences realise that, but definitely not everyone. There's only so much you can tell someone that something is profound. Generally it's something that they either have an epiphany on or not. I like to lull my audience into knowing what's going on, and do something that surprises them. Gain their trust, build a rapport, and then show them the deeper truth of what just happened. Brent: So you've learned your lesson about presentation scores, right?  From here on out, you're going to be dry, humorless, and all your presentations will consist of you reading bullet points off the screen. Rob: No Brent, I’m not. I'm also not going to suggest that most presentations at PASS are like that. No-one tries to present like that. There's a big space to occupy between what "dry and humourless" and me. My difference is to focus on the relationship I have with the crowd, rather than focussing on delivering the perfect session. I want to see people smiling and know they're relaxed. I think most presenters focus on the material, which is completely reasonable and safe. I remember once hearing someone talking about product creation. They talked about mediocrity. They said that one of the worst things that people can ever say about your product is that it’s “good”. What you want is for 10% of the world to love it enough to want to buy it. 
If 10% the world gave me a dollar, I’d have more money than I could ever use (assuming it wasn’t the SAME dollar they were giving me I guess). Brent: It's the Raving Fans theory.  It's better to have a small number of raving customers than a large number of almost-but-not-really customers who don't care that much about your product or service.  I know exactly how you feel - when I got survey feedback from my Quest video presentation when I was dressed up in a Richard Simmons costume, some of the attendees said I was unprofessional and distracting.  Some of the attendees couldn't get enough and Photoshopped all kinds of stuff into the screen captures.  On a whole, I probably didn't score that well, and I'm fine with that.  It sucks to look at the scores though - do those lower scores bother you? Rob: Of course they do. It hurts deeply. I open myself up and give presentations in a very personal way. All presenters do that, and we all feel the pain of negative feedback. I hate coming 146th & 162nd out of 185, but have to acknowledge that many sessions did worse still. Plus, once I feel the wounds have healed, I’ll be able to remember that there are people in the world that rave about my presentation style, and figure that people will hopefully talk about me. One day maybe those people that don’t like my presentation style will stay away and I might be able to score better. You don’t pay to hear country music if you prefer western... Lots of people find chili too spicy, but it’s still a popular food. Brent: But don’t you want to appeal to everyone? Rob: I do, but I don’t want to be lukewarm as in Revelation 3:16. I’d rather disgust and be discussed. Well, maybe not ‘disgust’, but I don’t want to conform. Conformity just isn’t the same any more. I’m not sure I’ve ever been one to do that. I try not to offend, but definitely like to be different. Brent: Count me among your raving fans, sir.  Where can we see you next? Rob: Considering I live in Adelaide in Australia, I’m not about to appear at anyone’s local SQL Saturday. I’m still trying to plan which events I’ll get to in 2011. I’ve submitted abstracts for TechEd North America, but won’t hold my breath. I’m also considering the SQLBits conferences in the UK in April, PASS in October, and I’m sure I’ll do some LiveMeeting presentations for user groups. Online, people download some of my recent SQLBits presentations at http://bit.ly/RFSarg and http://bit.ly/Simplification though. And they can download a 5-minute MP3 of my Lightning Talk at http://www.lobsterpot.com.au/files/Collation.mp3, in which I try to explain the idea behind collation, using thongs as an example. Brent: I was in the audience for http://bit.ly/RFSarg. That was a great presentation. Rob: Thanks, Brent. Now where’s my dollar?

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms; this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, which ever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB?The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
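    By way of illustration (this DDL sketch is not from the original post; the session name, event, and target are placeholder choices), lowering the latency looks like this:

    -- Hypothetical session: ask for buffers to be processed at least every
    -- 5 seconds instead of waiting for the default 30-second latency.
    CREATE EVENT SESSION [low_latency_demo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (
        MAX_MEMORY = 4 MB,                 -- default total buffer size
        MAX_DISPATCH_LATENCY = 5 SECONDS   -- shortened from the 30-second default
    );

    ALTER EVENT SESSION [low_latency_demo] ON SERVER STATE = START;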
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, the engine drops events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple of executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over-collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size field. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped, based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.)
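    As a quick sketch of the checks described above (the column names come from the DMV mentioned in the post; the sessions returned are whatever you have running):

    -- Look for dropped events and for events too large to fit in a buffer.
    SELECT name,
           total_buffer_size,
           dropped_event_count,
           largest_event_dropped_size
    FROM sys.dm_xe_sessions;
    -- If dropped_event_count keeps climbing, raise MAX_MEMORY on the session;
    -- if largest_event_dropped_size exceeds your per-buffer size (MAX_MEMORY / 3
    -- by default), set MAX_EVENT_SIZE large enough to hold that event.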
Partitioning: moving your events to a sub-division Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:
    None: 3 buffers (fixed)
    Node: 3 * number_of_nodes
    CPU: 2.5 * number_of_cpus
    Here are some examples of what this means for different Node/CPU counts:
    Configuration      None       Node        CPU
    2 CPUs, 1 Node     3 buffers  3 buffers   5 buffers
    6 CPUs, 2 Nodes    3 buffers  6 buffers   15 buffers
    40 CPUs, 5 Nodes   3 buffers  15 buffers  100 buffers
    Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration. The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things:
    Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
    Are you over-collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
    Are you writing the file target to the same slow disk that you use for TempDB and your other high-activity databases? <kidding> <not really> It’s always worth considering the end-to-end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff.
    Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (eg. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
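    To tie the partitioning discussion back to the DDL, a partitioned definition might look like the following sketch (the session name, event, target, and sizes are illustrative only, not a recommendation from the original post):

    -- Illustrative only: more total memory, sub-divided into per-CPU buffer sets
    -- so that schedulers are not queuing up to write to the same free buffer.
    CREATE EVENT SESSION [partitioned_demo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (
        MAX_MEMORY = 32 MB,
        MEMORY_PARTITION_MODE = PER_CPU   -- roughly 2.5 buffers per CPU
    );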
You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short term troubleshooting will likely fall into the “reasonably expected impact” category. Hey buddy – I think you forgot something OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike

    Read the article

  • NRF Online Merchandising Workshop: Where Online Retailers Are Focusing for Holiday and Beyond

    - by Rose Spicer-Oracle
    Last month we attended the NRF Online Merchandising Workshop in LA, and it was a great opportunity to catch up with our customers, meet new retailers, and hear some great presentations from VF Corporation, Zazzle, Julep Beauty, Backcountry, eBags and more. The one-on-one conversations with merchants and the keynote presentations carry the same themes across companies of all sizes and across verticals. With only 125 days left (and counting) until Black Friday, these conversations provided some great insight into what’s top of mind for retailers during the most stressful time of their year, and a sneak peek into what they will deliver this holiday season. Some of the most popular topics were: When to start promoting for holiday: seems like a funny conversation to have in July, but a number of retailers said they already had their holiday shopping gift guides live on their site, and it was attracting a significant portion of their onsite traffic. When it comes to timing, most retailers were questioning when to begin their holiday promotions -- carefully balancing when to release pricing and specials, and knowing that customers are holding out for last-minute deals and price drops. Many retailers noted the frustrations around transparent pricing by Amazon and a few other mega-retailers last year, publishing their “lowest prices of the season” as early as October – assuring shoppers that those prices were the best they could get all season long. Many retailers felt their hands were forced to drop prices. Others kept their set pricing, with negative customer reaction causing some to miss their holiday goals. The pressure is on, and most retailers identified November 1 as their target start date for the holiday promotions blitz. Some are even waiting for the big guys to release their “lowest prices of the season” guides and will then follow suit. Attribution is tough – and a huge focus: understanding the path to conversion is a tough nut to crack, especially in the new omnichannel world where consumers use multiple touchpoints to make a single purchase, and internal management wants to see hard data. This has led many retailers to invest in attribution, carefully tracking their online marketing efforts to determine what gets “credit” for the sale, instead of giving credit to the “last click.” Retailers noted that it is very difficult to determine the numbers when online and offline worlds collide – like when a shopper uses digital channels for research and then makes a purchase in a store. As one of the presenters from The North Face mentioned in her keynote, a key to enabling better customer service and satisfaction when it comes to converged online and offline sales is training the in-store staff, and creating a culture where it eventually “doesn’t matter what group gets the credit” if they all add to the sale. No doubt, the area of attribution will be a big area of retail investment in the coming years.
How to plan for the converged world: planning to ensure inventory gets where it needs to be was another concern. In conversations with retailers, we advised them to analyze customer patterns: where shoppers purchase items, where the items were sourced from and even where items are returned. This analysis is very valuable in determining inventory plans. From there, retailers can more accurately plan and allocate inventory to support both the online and offline customer behavior. As we head into the holiday season, the need for accurate enterprise-wide inventory visibility, and providing that information to associates, is even more critical to the brand-wide customer experience.       Improving the search / navigation / usability of the site(s): Aside from some of the big ideas and standard holiday pricing pressure, most conversations we had centered around continuing to improve the basics of the site. Reinvesting in search and navigation came up time and time again (FitForCommerce blogged about what a big topic it was at the event as well). Obviously getting shoppers on their path quickly and allowing them to find what they need fast is critical, but it was definitely interesting to hear just how much effort is still going in to honing the search and navigation experience. Adding new elements to search and navigation like typeahed, inventive navigation refinements, and new navigation categories like gift guides, specialized boutiques and flash sales were top of mind, in addition to searchandising and making search-driven product recommendations. (Oracle can help!)       Reducing cart abandonment: always a hot topic that is top of mind for every online retailer. Getting shoppers to the cart is often less then half the battle; getting them to click “buy” and complete the transaction is much more difficult. While retailers carefully study the checkout process and where shoppers tend to bounce, they know that how they design their checkout page is critical. We’re all online shoppers in our personal lives and we know how frustrating it can be when total prices are not transparent (i.e. shipping, processing, taxes is not included until the very last possible screen before clicking that buy button). Online retailers are struggling with where in the checkout process to surface the total price to be charged to reduce cart abandonment, while not showing the total figure too early in the process that it keeps shoppers from getting to checkout altogether. Recent research shows that providing total pricing prior to the checkout process dramatically reduces cart abandonment – as it serves as a filter to those shopping within a specific price band. Much of the cart abandonment discussion leads us to…       The free shipping / free returns question: it’s no secret that because of Amazon and programs like Prime, consumers expect free shipping, much to the chagrin of the smaller retailer. The reality is that if you’re not a mega-retailer, shipping is an expensive part of doing business that doesn’t allow most retailers to keep their prices low and offer free shipping. This has many retailers venturing out on the “free returns” path, especially in apparel. A number of retailers we spoke with are testing a flat rate shipping fee with free returns to see if they can crack the price threshold where shoppers are willing to pay for shipping with an added service. But, free shipping remains king.      Social ads and retargeting: they are working, but do they turn off consumers? That’s the big question. 
Every retailer we spoke with during a roundtable on the topic said that social ads and retargeting (where that pair of boots you’re been eyeing on a site magically follows you around the Internet) work and are meeting campaign goals. The larger question many retailers are asking is if this type of tactic is turning off a large number of shoppers, even if these campaigns are meeting their early goals. Retailers also mentioned that Facebook ads are working very well for them, especially when it comes to new customer acquisition, serving as a complimentary a channel to SEO when it comes to engaging new customers. While there are always new things to experiment with in retail, standard challenges are top of mind as retailers scramble to get ready for holiday. It will undoubtedly be another record-breaking online shopping season, but as retailers get more and more advanced with each Black Friday, expect some exciting things. This excitement needs to be backed by sound solutions and optimized operations. Then again, consumers are expecting more than ever, so I don’t doubt that retailers are already thinking about the possibilities of holiday 2015… and beyond. Customers who read this article, also found value in the following stories: Personalization for Retail: http://blogs.oracle.com/retail/entry/personalization_for_retailShop Direct User Experience Focus Drives Sales:https://blogs.oracle.com/retail/entry/shop_direct_user_experience_focusMaking Waves: Australian Online Retailer SurfStitch: https://blogs.oracle.com/oracleretail/entry/surf_stitchWhat’s new in Oracle Commerce v11.1 for RetailWhat the Content+Commerce Equation is Missing

    Read the article

  • MVC Automatic Menu

    - by Nuri Halperin
    An ex-colleague of mine used to call his SQL script generator "Super-Scriptmatic 2000". It impressed our then boss little, but was fun to say and use. We called every batch job and script "something 2000" from that day on. I'm tempted to call this one Menu-Matic 2000, except it's waaaay past 2000. Oh well. The problem: I'm developing a bunch of stuff in MVC. There's no PM to generate mounds of requirements and there's no Ux Architect to create wireframe. During development, things change. Specifically, actions get renamed, moved from controller x to y etc. Well, as the site grows, it becomes a major pain to keep a static menu up to date, because the links change. The HtmlHelper doesn't live up to it's name and provides little help. How do I keep this growing list of pesky little forgotten actions reigned in? The general plan is: Decorate every action you want as a menu item with a custom attribute Reflect out all menu items into a structure at load time Render the menu using as CSS  friendly <ul><li> HTML. The MvcMenuItemAttribute decorates an action, designating it to be included as a menu item: [AttributeUsage(AttributeTargets.Method, AllowMultiple = true)] public class MvcMenuItemAttribute : Attribute {   public string MenuText { get; set; }   public int Order { get; set; }   public string ParentLink { get; set; }   internal string Controller { get; set; }   internal string Action { get; set; }     #region ctor   public MvcMenuItemAttribute(string menuText) : this(menuText, 0) { } public MvcMenuItemAttribute(string menuText, int order) { MenuText = menuText; Order = order; }       internal string Link { get { return string.Format("/{0}/{1}", Controller, this.Action); } }   internal MvcMenuItemAttribute ParentItem { get; set; } #endregion } The MenuText allows overriding the text displayed on the menu. The Order allows the items to be ordered. The ParentLink allows you to make this item a child of another menu item. An example action could then be decorated thusly: [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")] . All pretty straightforward methinks. The challenge with menu hierarchy becomes fairly apparent when you try to render a menu and highlight the "current" item or render a breadcrumb control. Both encounter an  ambiguity if you allow a data source to have more than one menu item with the same URL link. The issue is that there is no great way to tell which link a person click. Using referring URL will fail if a user bookmarked the page. Using some extra query string to disambiguate duplicate URLs essentially changes the links, and also ads a chance of collision with other query parameters. Besides, that smells. The stock ASP.Net sitemap provider simply disallows duplicate URLS. I decided not to, and simply pick the first one encountered as the "current". Although it doesn't solve the issue completely – one might say they wanted the second of the 2 links to be "current"- it allows one to include a link twice (home->deals and products->deals etc), and the logic of deciding "current" is easy enough to explain to the customer. 
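    To make the attribute usage concrete, here is a small hypothetical controller (not from the original post) showing a top-level item with a child item attached via ParentLink:

    using System.Web.Mvc;

    public class SessionController : Controller
    {
        // Renders as a top-level menu item linking to /Session/Index.
        [MvcMenuItem("Sessions", Order = 10)]
        public ActionResult Index()
        {
            return View();
        }

        // Renders as a child of "Sessions" because ParentLink matches /Session/Index.
        [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]
        public ActionResult Tracks()
        {
            return View();
        }
    }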
Now that we got that out of the way, let's build the menu data structure:

    public static List<MvcMenuItemAttribute> ListMenuItems(Assembly assembly)
    {
        var result = new List<MvcMenuItemAttribute>();
        foreach (var type in assembly.GetTypes())
        {
            if (!type.IsSubclassOf(typeof(Controller)))
            {
                continue;
            }
            foreach (var method in type.GetMethods())
            {
                var items = method.GetCustomAttributes(typeof(MvcMenuItemAttribute), false) as MvcMenuItemAttribute[];
                if (items == null)
                {
                    continue;
                }
                foreach (var item in items)
                {
                    if (String.IsNullOrEmpty(item.Controller))
                    {
                        item.Controller = type.Name.Substring(0, type.Name.Length - "Controller".Length);
                    }
                    if (String.IsNullOrEmpty(item.Action))
                    {
                        item.Action = method.Name;
                    }
                    result.Add(item);
                }
            }
        }
        return result.OrderBy(i => i.Order).ToList();
    }

Using reflection, the ListMenuItems method takes an assembly (you will hand it your MVC web assembly) and generates a list of menu items. It digs up all the types, and for each one that is an MVC Controller, digs up the methods. Methods decorated with the MvcMenuItemAttribute get plucked and added to the output list. Again, pretty simple. To make the structure hierarchical, a LINQ expression matches up all the items to their parent:

    public static void RegisterMenuItems(List<MvcMenuItemAttribute> items)
    {
        _MenuItems = items;
        _MenuItems.ForEach(i => i.ParentItem =
            items.FirstOrDefault(p => String.Equals(p.Link, i.ParentLink, StringComparison.InvariantCultureIgnoreCase)));
    }

The _MenuItems is simply an internal list to keep things around for later rendering. Finally, to package the menu building for easy consumption:

    public static void RegisterMenuItems(Type mvcApplicationType)
    {
        RegisterMenuItems(ListMenuItems(Assembly.GetAssembly(mvcApplicationType)));
    }

To bring this puppy home, a call in Global.asax.cs Application_Start() registers the menu. Notice the ugliness of reflection is tucked away from the innocent developer. All they have to do is call the RegisterMenuItems() and pass in the type of the application. When you use the new project template, global.asax declares a class public class MvcApplication : HttpApplication and that is why the Register call passes in that type.

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterRoutes(RouteTable.Routes);

        MvcMenu.RegisterMenuItems(typeof(MvcApplication));
    }

What else is left to do? Oh, right, render!
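One housekeeping note before the rendering code: the post references the _MenuItems backing list (and, below, an ItemFilter hook) without ever showing their declarations. A minimal sketch of those members, under the assumption that they live on the same static MvcMenu class that Application_Start calls above:

    using System;
    using System.Collections.Generic;

    // Hypothetical declarations; the methods shown in this post would sit alongside them.
    public static class MvcMenu
    {
        // Backing list populated by RegisterMenuItems and read by the rendering code.
        private static List<MvcMenuItemAttribute> _MenuItems = new List<MvcMenuItemAttribute>();

        // Optional hook checked by renderHierarchy; null means "show every item".
        public static Func<MvcMenuItemAttribute, bool> ItemFilter { get; set; }
    }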
    public static void ShowMenu(this TextWriter output)
    {
        var writer = new HtmlTextWriter(output);
        renderHierarchy(writer, _MenuItems, null);
    }

    public static void ShowBreadCrumb(this TextWriter output, Uri currentUri)
    {
        var writer = new HtmlTextWriter(output);
        string currentLink = "/" + currentUri.GetComponents(UriComponents.Path, UriFormat.Unescaped);

        var menuItem = _MenuItems.FirstOrDefault(m => m.Link.Equals(currentLink, StringComparison.CurrentCultureIgnoreCase));
        if (menuItem != null)
        {
            renderBreadCrumb(writer, _MenuItems, menuItem);
        }
    }

    private static void renderBreadCrumb(HtmlTextWriter writer, List<MvcMenuItemAttribute> menuItems, MvcMenuItemAttribute current)
    {
        if (current == null)
        {
            return;
        }
        var parent = current.ParentItem;
        renderBreadCrumb(writer, menuItems, parent);
        writer.Write(current.MenuText);
        writer.Write(" / ");
    }

    static void renderHierarchy(HtmlTextWriter writer, List<MvcMenuItemAttribute> hierarchy, MvcMenuItemAttribute root)
    {
        if (!hierarchy.Any(i => i.ParentItem == root))
            return;

        writer.RenderBeginTag(HtmlTextWriterTag.Ul);
        foreach (var current in hierarchy.Where(element => element.ParentItem == root).OrderBy(i => i.Order))
        {
            if (ItemFilter == null || ItemFilter(current))
            {
                writer.RenderBeginTag(HtmlTextWriterTag.Li);
                writer.AddAttribute(HtmlTextWriterAttribute.Href, current.Link);
                writer.AddAttribute(HtmlTextWriterAttribute.Alt, current.MenuText);
                writer.RenderBeginTag(HtmlTextWriterTag.A);
                writer.WriteEncodedText(current.MenuText);
                writer.RenderEndTag(); // link
                renderHierarchy(writer, hierarchy, current);
                writer.RenderEndTag(); // li
            }
        }
        writer.RenderEndTag(); // ul
    }

The ShowMenu method renders the menu out to the provided TextWriter. In previous posts I've discussed my partiality to using the well-debugged, time-tested HtmlTextWriter to render HTML rather than writing out angled brackets by hand. In addition, generating string and byte intermediaries (yes, StringBuilder being no exception) rather than writing out using the actual writer on the actual stream disturbs me. To carry out the rendering of a hierarchical menu, the recursive renderHierarchy() is used. You may notice that an ItemFilter is called before rendering each item. I figured that at some point one might want to exclude certain items from the menu based on security role or context or something. That delegate is the hook for such a future feature. To carry out rendering of a breadcrumb, recursion is used again, this time simply to unwind the parent hierarchy from the leaf node, then rendering on the return from the recursion rather than as we go along deeper. I guess I was stuck in LISP that day... recursion is fun though.
Now all that is left is some usage! Open your Site.Master or wherever you'd like to place a menu or breadcrumb, and plant one of these calls:

    <% MvcMenu.ShowBreadCrumb(this.Writer, Request.Url); %>

to show a breadcrumb trail (notice the lack of "=" after <% and the semicolon), and

    <% MvcMenu.ShowMenu(Writer); %>

to show the menu.
As mentioned before, the HTML output is nested <UL> <LI> tags, which should make it easy to style using abundant CSS to produce anything from static horizontal or vertical menus to dynamic drop-downs.
This has been quite a fun little implementation and I was pleased that the code size remained low. The main crux was figuring out how to pass parent information from the attribute to the hierarchy builder, because attributes have restricted parameter types. Once I settled on that implementation, the rest fell into place quite easily.
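As a usage sketch for the ItemFilter hook mentioned above (hypothetical, not something from the post): a filter can be assigned right after RegisterMenuItems in Application_Start, and because the delegate is evaluated at render time it can consult the current request, for example to hide an "Admin" branch from non-admin users. The "Admin" menu text and role name are illustrative, and the snippet assumes System.Web for HttpContext:

    // Hypothetical role-based filter: hide the "Admin" item (and, via the recursion,
    // everything beneath it) unless the current user is in the Admin role.
    MvcMenu.ItemFilter = item =>
        item.MenuText != "Admin" || HttpContext.Current.User.IsInRole("Admin");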

    Read the article

  • Begin the Clone Wars Have!

    - by Antony Reynolds
    Creating a New Virtual Machine from an Existing Virtual Disk
In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5.  That is great if you want the DB and FMW running in the same virtual image and it has served me well for some proof of concepts and also for some testing of different JVMs.  However I also wanted to run some testing of FMW with the database running on a separate physical machine.  So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image.
What are my Options?
There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware.  Some of the options include:
1. Create new virtual disk images for each new VM.
2. Clone the existing disk images and point the new VM at the cloned images.
3. Point the new VM at the existing snapshots.
#1 is too much like hard work: install OEL twice, install a database again, install FMW again, run RCU again!  Life is too short!
#2 is probably the safest way of doing things.  VirtualBox allows you to clone a disk image for use in a separate machine.  However this of course duplicates the disk and means that it is now occupying 3 times the space, once for the original disk and twice more for the two clones I would need.
#3 is the most space-efficient way of doing things.  It does mean however that I can only run the new "cloned" images if I have access to the original image because that is where the base snapshots reside.  However this is not a problem for me as long as I remember to keep all three images together.  So this is the approach we will follow.
Snapshot, What Snapshot?
As we are going to create new virtual machines based on existing snapshots we need to figure out which snapshot to use.  We do this by opening the "Media Manager" from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want – the snapshot name is identified in the "Attached to:" comment.  In my case I wanted the FMW installed snapshot because that had a database configured for FMW alongside the FMW software.  I made a note of the filename of that snapshot (actually I just noted the first 5 characters as that was all that was needed to uniquely identify the snapshot file). When we create the new machines we will point them at the snapshot filename we have just checked.
Network or NotWork?
Because we want the two new machines to communicate with each other when hosted in different physical machines we can't use the default NAT networking mode without a lot of hassle.  But at the same time we need them to have fixed IP addresses relative to each other so that they can see each other whilst also being able to see the outside world.
To achieve all these requirements I created two network adapters for each machine.  Adapter 1 was a standard NAT mapping.  This will allow each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VBox provided NAT gateway.  This is the same as the existing configuration.
The second adapter I created as a bridged adapter.  This gives the virtual machine direct access to the host network card and by using fixed IP addresses each machine can see the other.  It is important to choose fixed IP addresses that are not routable across your internal network so you don't get any clashes with other machines on your network.
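For reference, the same VM creation and adapter setup can also be scripted with the VBoxManage command line rather than the GUI wizard described here. This is only a rough sketch: the VM name, OS type, controller name, path to the snapshot .vdi and the host interface are placeholders, not values taken from the post, and your VirtualBox version may want slightly different options:

    # Create and register a new, empty VM (name and OS type are illustrative)
    VBoxManage createvm --name vboxfmw --ostype Oracle_64 --register

    # Attach the existing snapshot disk instead of creating a fresh one
    VBoxManage storagectl vboxfmw --name "SATA Controller" --add sata
    VBoxManage storageattach vboxfmw --storagectl "SATA Controller" \
        --port 0 --device 0 --type hdd --medium /path/to/Snapshots/<snapshot-file>.vdi

    # Adapter 1: NAT for outside access; Adapter 2: bridged for fixed VM-to-VM addresses
    VBoxManage modifyvm vboxfmw --nic1 nat --nic2 bridged --bridgeadapter2 eth0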
Of course you could always get proper fixed IP addresses from your network people, but I have several people using my images and as long as I don't have two instances of the same VM on the same network segment this is easier and avoids reconfiguring the network every time someone wants a copy of my VM.  If it is available I would suggest using the 10.0.3.* network as 10.0.2.* is the default NAT network.  You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine.  If it times out then you are probably safe to use that.
Creating the New VMs
Now that I had collected the data that I needed I went ahead and created the new VMs. When asked for a "Boot Hard Disk" I used the "Choose a virtual hard disk file…" link to find the snapshot I had previously selected and set that to be the existing hard disk.  I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines because that snapshot had the database with the RCU completed that I wanted for my DB machine and it had the SOA software installed which I wanted for my FMW machine.
After the initial creation of the virtual machine go into the network settings section and enable a second adapter which will be bridged.  Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters so that you can later set the bridged adapter to use a fixed IP and the NAT adapter to use DHCP.
We are now ready to start the VMs and reconfigure Linux.
Reconfiguring Linux
Because I now have two new machines I need to change their network configuration.  In particular I need to change the hostname, update the hosts file and change the network settings.
Changing the Hostname
I renamed both hosts by running the hostname command as root:

    hostname vboxfmw.oracle.com

I also edited the /etc/sysconfig/network file and set the correct hostname in there:

    HOSTNAME=vboxfmw.oracle.com

Changing the Network Settings
I needed to change the network configuration to give the bridged network a fixed IP address.  I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS.  So I went into the System->Preferences->Network Connections screen and explicitly set the "Device MAC address" for the two adapters.
Having correctly mapped the Linux adapters to the VirtualBox adapters I then set the bridged adapter to use fixed IP addressing rather than DHCP.  There is no need for additional routing or default gateways because we expect the two machines to be on the same LAN segment.
Updating the Hosts File
Having renamed the machines and reconfigured the network I then updated the /etc/hosts file to:
- refer to the new machine name
- add a new line to the hosts file to provide an additional IP address for my server (the new fixed IP address)
- add a new line for the fixed IP address of the other virtual machine

    10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager
    10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager
    10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager
    127.0.0.1       localhost.localdomain   localhost
    ::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6

To make sure everything takes effect I restarted the server.
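The fixed address above is set through the NetworkManager GUI; for reference, the resulting interface file on OEL 6 would look roughly like the sketch below. The device name, MAC address and netmask are placeholders (only the 10.0.3.102 address comes from the hosts file above):

    # /etc/sysconfig/network-scripts/ifcfg-eth1 -- bridged adapter, hypothetical values
    DEVICE=eth1
    HWADDR=08:00:27:xx:xx:xx
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.0.3.102
    NETMASK=255.255.255.0
    # No GATEWAY entry: the NAT adapter keeps DHCP and provides the default route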
Reconfiguring the Database on the DB Machine
Because we changed the hostname the listener and the EM console no longer start, so I need to modify the listener.ora to use the new hostname and I also need to rebuild the EM configuration because it also relies on the hostname.
I edited the $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname:

    (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521))

After changing the listener.ora I was able to start the listener using:

    lsnrctl start

I also had to reconfigure the EM database control.  I first deconfigured it using the command:

    emca -deconfig dbcontrol db -repos drop

This drops the repository and removes any existing registered dbcontrols.
I then re-configured it using the following command:

    emca -config dbcontrol db -repos create

This creates the EM repository and then configures and starts dbcontrol.
Now my database machine is ready so I can close it down and take a snapshot.
Disabling the Database on the FMW Machine
I set up the database to start automatically by creating a service called "dbora".  On the FMW machine I do not need the database running so I can prevent it auto-starting by running the following command:

    chkconfig --del dbora

Note that because I am using a snapshot it is not a waste of disk space to have the DB installed but not used.  As long as I don't run it, it won't cost me anything.
I can now close the FMW machine down and take a snapshot.
Creating a New Domain
The FMW machine is now ready to create a new domain.  When creating the domain I can point it at the second machine which is running the database.  I can potentially run these machines on two separate physical machines as long as I have the original virtual machine available to both of the physical machines.
Gotchas in Snapshotting
VirtualBox does not support the concept of linked machines in a network like some virtualization technologies, so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them.  This is because we want to keep the database in sync with the middleware.  One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share; this would mean that all we would need to snapshot would be the database machine because that would hold all the state and configuration.
The Sky's the Limit
We have covered a simple case of having just two machines.  I have a more complicated configuration in which two machines run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image.  Just remember which machine holds state and what the consequences of taking a snapshot are.

    Read the article

  • CodePlex Daily Summary for Wednesday, May 05, 2010

    CodePlex Daily Summary for Wednesday, May 05, 2010
New Projects
2010微软精英大挑战Heritage of Dragon项目: We are a group from Tongji University in Shanghai with shared interests, gathered here to build an open web platform. It is dedicated to using cloud-hosted, map-based services to showcase traditional handicrafts from all over China through text, pictures, video and interactive animation. By taking full advantage of the web, anyone can take part in adding, revising and improving the content through an open, collaborative wiki platform. The aim is to record, present, explore and pass on China's ancient...
AutoArchive: Auto archive your "my documents" to a remote machine. I'm writing this so my wife can put things in "my documents" and it'll automaticly archive i...
BigDoor .NET Client: A .NET client for the BigDoor Media API. The API enables secured virtual transactions with support for any number of currencies, transactions, awar...
bubujie: Dreamweaver Library
GeckoGit: GeckoGit is a combination of TortoiseSVN and AnkhSVN, but for Git repositories, and built on the GitSharp library.
Global: global, config, mail, http, rest, xml, serialization, helper, path, io
Industrial Dashboard Connected Grid webpart: This Sharepoint 2007/10 webpart provides a simple way to display grid based reports populated with data that comes from a SQL Server stored procedu...
IpControls: "IpControls" contains IPv4 and IPv6 text boxes, both as Windows Forms and WPF version. The IPv6 control automatically detects the older hybrid for...
LiteME: LiteME is short for LiteMapleStoryEmulator... it is v75, open-source, and still going through it's alpha stages. It is still in development!
Meditel PHP Class: A PHP class that lets you send SMS messages to any Meditel number using the free SMS service from the Meditel.ma site.
MoneySafe: Help people.
Mouse Zoom - Visual Studio Extension: Mouse Zoom is a Visual Studio 2010 extension that will cause the mouse zoom functionality to zoom at the mouse's cursor instead of at the top of th...
Multi-Language Words Memorizer: This .net application is designed for learning words and help foreign language learners by lots of automatic features. After you select a list of ...
Navigation for ASP.NET Web Forms: Navigation for ASP.NET Web Forms manages movement and data passing between aspx Pages in a unit testable manner. There is no Client-side logic, so ...
NazTek.Extension.Clr4: CLR 4.0 extensions and utility API
Opalis Community Releases: Sample workflows, objects, code and other items related to System Center's Opalis Integration Server, published by the Opalis team.
Power Video Player: Power Video Player is a slim feature-rich video/dvd player that meets everyday needs in video playback on PC with a bunch of advanced features on b...
SchemeEditor: <WPF> <.NET> <Editor> <Silverlight> <Scheme> <Graphics> <simulink> <schematic>
StyleCop+: StyleCop+ is a plug-in that extends original StyleCop features.
timemanager2010: Just another work time manager
TweetTunes: Updates Twitter with current song playing in iTunes - if your Twitter account is linked to Facebook - it will update that too The twittervb2 down...
WCF Discovery Library: WCF Discovery Library is a small collection of utilities that makes it easy to add WCF 4.0 Discovery features into your projects.
New Releases
AjaxControlToolkit additional extenders: ControlToolkitExtended: this build contains web example with BreadCrumbs
AnyCAD: AnyCAD Free Beta1: AnyCAD Free Beta1
Baccarat: Single player practice baccarat: This is a simple baccarat game for Windows Mobile. It is single player and is only a practice version, which will help users familiarize themselve...
BigDoor .NET Client: BigDoor .NET 2.0 Client (Alpha): Our first iteration of the .NET client. Please fork and or ask to be added if you want to make any contributions.
CBM-Command: 2010-05-04: Release Notes New Features Panel navigation now complete. Scroll up and down through directories using the up and down cursor keys. Switch between...
Directory Linker: Directory Linker 2.1: This release introduces XP support, more information about all features can be found at http://www.humblecoder.co.uk/?p=141
Extend SmallBasic: Teaching Extensions v.015: added high low quiz
Google AJAX Search Services for jQuery: jquery.gss-0.1.3.js: First official release - use at your own discretion. Thanks, Andrew
Industrial Dashboard Connected Grid webpart: Filtered Industrial Grid: Filtered Industrial Grid web part for SharePoint 2007/2010, First Release.
jQuery Library for SharePoint Web Services: SPServices 0.5.5: IMPORTANT NOTE: This release is in an alpha state. You should only download it if you know what you are getting and are interested in testing it f...
Meditel PHP Class: Meditel PHP Class: Zipped File : Example : exemplemeditel.php PHP Class : meditel.class.php
Multi-Language Words Memorizer: Memorizer 1.0: First release.
mwNSPECT: mwNSPECT Plugin DLL: mwNSPECT Mapwindow plugin dll. Place in your MapWindow or BASINS plugins directory. Presently only for testing form functionality (not including...
mwNSPECT: mwNSPECT Simple Installer: Simplistic mwNSPECT Mapwindow plugin installer using Inno setup. Installs all the files you'll need for NSPECT into the C:\NSPECT folder and insta...
MyWSAT - ASP.NET Membership Administration Tool: MyWSAT v3.5.3: MyWSAT 3.5.3 Update Notes - May 4th 2010 1.) Added the user search box and a-z navigation menu to all relevant user gridviews. 2.) Added a membersh...
Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.7: Upgraded UI-generation templates for special case of associative tables (2-column primary keys). Minor bugfix with template-editor.
Open NFSe: Open NFSe 2.0: Version for Belo Horizonte using Windows Services.
Power Video Player: PVP 1.1.3776: v1.1.3776 This is mainly a rebuild of version 1.1 under Ms-PL license and is the 1st version available at CodePlex.
PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT-3.1: The following error has been corrected: PCG ERROR: srcproj -- 3933 PCG ERROR: srcproj -- 2943 PCG ERROR: devproj -- 1474 PCG ERROR: mainprj -- 128...
Rehost Image: 1.3.9: Fixed locations saving for mac and linux platforms.
Robot Shootans: Robot Shootans 0.5.1 (Windows): This is the first public release of this game. Instructions on how to play are included in the game itself Known issues: Changing control style wh...
SchemeEditor: SchemeEditor Beta: First release. Wait for documentation & update for some new function
SharePoint Rsync List: SharePoint Rsync 0.9.0.0: Initial release of sprsync. Comments, questions, feedback, and code enhancements are welcome!
Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+01: Sw. Is Hw. Lib. 3.0.0.x+01 UNSUPPORTED, UNTESTED ALPHA RELEASE Code may disappear. This is just a preview of code that was in progress. Code is s...
Software Localization Tool: SharpSLT 1.0.1: Minor release: bug fixes slight changes in the UI
StyleCop+: StyleCop+ 0.6: Several important improvements made for Advanced Naming Rules: - Added new entities for fields and constants - Added new entities for methods (incl...
turing machine simulator: First version of turing machine: Overview: First version of turing simulator with example script (transaction function). Files: SimulatorGui.exe - main GUI of simulator TuringMach...
VCC: Latest build, v2.1.30504.0: Automatic drop of latest build
Vocabulary Training Center: Basic Edition 1.1: A release with medium large changes: New functionality: Multiple-choice questions added Grammatical questions added Evaluation changed accordin...
Web Service Software Factory: Web Service Software Factory 2010 RC: To use the Web Service Software Factory 2010, you need the following software installed on your computer: • Microsoft Visual Studio 2010 (Ultima...
Web Service Software Factory: WSSF2010 Guide: This is the help and guidance for Web Service Software Factory 2010
Windows Phone 7 Panorama control: panorama control v0.6 + samples: IMPORTANT NOTE: Please read the following bug + suggested workaround. I'll fix this in a new release shortly. Panorama Control source code + sampl...
WPF Behavior Library: WPF Behavior Library 0.2 Release: Drag & Drop Took away the ItemType and DataTemplate requirements Added functions for inheritors to be able to provide custom logic to handle movi...
Most Popular Projects
Rawr
WBFS Manager
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
patterns & practices – Enterprise Library
Windows Presentation Foundation (WPF)
iTuner - The iTunes Companion
DotNetNuke® Community Edition
ASP.NET
Most Active Projects
patterns & practices – Enterprise Library
AJAX Control Framework
HydroServer - CUAHSI Hydrologic Information System Server
Ionics Isapi Rewrite Filter
patterns & practices: Azure Security Guidance
Rawr
BlogEngine.NET
TinyProject
NB_Store - Free DotNetNuke Ecommerce Catalog Module
All-In-One Code Framework

    Read the article

< Previous Page | 275 276 277 278 279 280 281 282 283 284 285 286  | Next Page >