Search Results

Search found 1506 results on 61 pages for 'brian buck'.

  • Silverlight Firestarter Wrap Up and WCF RIA Services Talk Sample Code

    - by dwahlin
    I had a great time attending and speaking at the Silverlight Firestarter event up in Redmond on December 2, 2010. In addition to getting a chance to hang out with a lot of cool people from Microsoft such as Scott Guthrie, John Papa, Tim Heuer, Brian Goldfarb, John Allwright, David Pugmire, Jesse Liberty, Jeff Handley, Yavor Georgiev, Jossef Goldberg, Mike Cook and many others, I also had a chance to chat with a lot of people attending the event and hear about what projects they’re working on which was awesome. If you didn’t get a chance to look through all of the new features coming in Silverlight 5 check out John Papa’s post on the subject. While at the Silverlight Firestarter event I gave a presentation on WCF RIA Services and wanted to get the code posted since several people have asked when it’d be available. The talk can be viewed by clicking the image below. Code from the talk follows as well as additional links. I had a few people ask about the green bracelet on my left hand since it looks like something you’d get from a waterpark. It was used to get us access down a little hall that led backstage and allowed us to go backstage during the event. I thought it looked kind of dorky but it was required to get through security. Sample Code from My WCF RIA Services Talk (To login to the 2 apps use “user” and “P@ssw0rd”. Make sure to do a rebuild of the projects in Visual Studio before running them.) View All Silverlight Firestarter Talks and Scott Guthrie’s Keynote WCF RIA Services SP1 Beta for Silverlight 4 WCF RIA Services Code Samples (including some SP1 samples) Improved binding support in EntitySet and EntityCollection with SP1 (Kyle McClellan’s Blog) Introducing an MVVM-Friendly DomainDataSource: The DomainCollectionView (Kyle McClellan’s Blog) I’ve had the chance to speak at a lot of conferences but never with as many cameras, streaming capabilities, people watching live and overall hype involved. Over 1000 people registered to attend the conference in person at the Microsoft campus and well over 15,000 to watch it through the live stream.  The event started for me on Tuesday afternoon with a flight up to Seattle from Phoenix. My flight was delayed 1 1/2 hours (I seem to be good at booking delayed flights) so I didn’t get up there until almost 8 PM. John Papa did a tech check at 9 PM that night and I was scheduled for 9:30 PM. We basically plugged in my laptop backstage (amazing number of servers, racks and audio devices back there) and made sure everything showed up properly on the projector and the machines recording the presentation. In addition to a dedicated show director, there were at least 5 tech people back stage and at least that many up in the booth running lights, audio, cameras, and other aspects of the show. I wish I would’ve taken a picture of the backstage setup since it was pretty massive – servers all over the place. I definitely gained a new appreciation for how much work goes into these types of events. Here’s what the room looked like right before my tech check– not real exciting at this point. That’s Yavor Georgiev (who spoke on WCF Services at the Firestarter) in the background. We had plenty of monitors to reference during the presentation. Two monitors for slides (right and left side) and a notes monitor. The 4th monitor showed the time and they’d type in notes to us as we talked (such as “You’re over time!” in my case since I went around 4 minutes over :-)). 
Wednesday morning I went back on campus at Microsoft and watched John Papa film a few Silverlight TV episodes with Dave Campbell and Ryan Plemons. Next I had the chance to watch the dry run of the keynote with Scott Guthrie and John Papa. We were all blown away by the demos shown since they were even better than expected. Starting at 1 PM on Wednesday I went over to Building 35 and listened to Yavor Georgiev (WCF Services), Jaime Rodriguez (Windows Phone 7), Jesse Liberty (Data Binding), and Jossef Goldberg and Mike Cook (Silverlight Performance) give their different talks, and we all shared feedback with each other, which was a lot of fun. Jeff Handley from the RIA Services team came afterwards and listened to me give a dry run of my WCF RIA Services talk. He had some great feedback that I really appreciated getting. That night I hung out with John Papa and Ward Bell and listened to John walk through his keynote demos. I also got a sneak peek of the gift given to Dave Campbell for all his work with Silverlight Cream over the years: a poster signed by all of the key people involved with Silverlight. Thursday morning I got up fairly early to get to the event center by 8 AM for speaker pictures. It was nice and quiet at that point, although outside the room there was a huge line of people waiting to get in. At around 8:30 AM everyone was let in and the main room was filled quickly. Two other overflow rooms in the Microsoft conference center (Building 33) were also filled to capacity. At around 9 AM Scott Guthrie kicked off the event and all the excitement started! From there it was all a blur but it was definitely a lot of fun. All of the sessions for the Silverlight Firestarter were recorded and can be watched here (including the keynote). Corey Schuman, John Papa and I also released 11 lab exercises and associated videos to help people get started with Silverlight. Definitely check them out if you're interested in learning more!
Level 100: Getting Started
  Lab 01 - WinForms and Silverlight
  Lab 02 - ASP.NET and Silverlight
  Lab 03 - XAML and Controls
  Lab 04 - Data Binding
Level 200: Ready for More
  Lab 05 - Migrating Apps to Out-of-Browser
  Lab 06 - Great UX with Blend
  Lab 07 - Web Services and Silverlight
  Lab 08 - Using WCF RIA Services
Level 300: Take me Further
  Lab 09 - Deep Dive into Out-of-Browser
  Lab 10 - Silverlight Patterns: Using MVVM
  Lab 11 - Silverlight and Windows Phone 7

    Read the article

  • Talking JavaOne with Rock Star Martijn Verburg

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers at each JavaOne Conference. They are awarded by their peers, who, through conference surveys, recognize them for their outstanding sessions and speaking ability. Over the years many of the world’s leading Java developers have been so recognized. Martijn Verburg has, in recent years, established himself as an important mover and shaker in the Java community. His “Diabolical Developer” session at the JavaOne 2011 Conference got people’s attention by identifying some of the worst practices Java developers are prone to engage in. Among other things, he is co-leader and organizer of the thriving London Java User Group (JUG) which has more than 2,500 members, co-represents the London JUG on the Executive Committee of the Java Community Process, and leads the global effort for the Java User Group “Adopt a JSR” and “Adopt OpenJDK” programs. Career highlights include overhauling technology stacks and SDLC practices at Mizuho International, mentoring Oracle on technical community management, and running off shore development teams for AIG. He is currently CTO at jClarity, a start-up focusing on automating optimization for Java/JVM related technologies, and Product Advisor at ZeroTurnaround. He co-authored, with Ben Evans, "The Well-Grounded Java Developer" published by Manning and, as a leading authority on technical team optimization, he is in high demand at major software conferences. Verburg is participating in five sessions, a busy man indeed. Here they are:
  CON6152 - Modern Software Development Antipatterns (with Ben Evans)
  UGF10434 - JCP and OpenJDK: Using the JUGs’ “Adopt” Programs in Your Group (with Csaba Toth)
  BOF4047 - OpenJDK Building and Testing: Case Study—Java User Group OpenJDK Bugathon (with Ben Evans and Cecilia Borg)
  BOF6283 - 101 Ways to Improve Java: Why Developer Participation Matters (with Bruno Souza and Heather Vancura-Chilson)
  HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Kirk Pepperdine, Ellen Kraffmiller and Henri Tremblay)
When I asked Verburg about the biggest mistakes Java developers tend to make, he listed three:
  A lack of communication -- Software development is far more a social activity than a technical one; most projects fail because of communication issues and social dynamics, not because of a bad technical decision. Sadly, many developers never learn this lesson.
  No source control -- Developers simply storing code in local filesystems and emailing code in order to integrate
  Design-driven Design -- The need for some developers to cram every design pattern from the Gang of Four (GoF) book into their source code
All of which raises the question: If these practices are so bad, why do developers engage in them? “I've seen a wide gamut of reasons,” said Verburg, who lists them as:
  * They were never taught at high school/university that their bad habits were harmful.
  * They weren't mentored in their first professional roles.
  * They've lost passion for their craft.
  * They're being deliberately malicious!
  * They think software development is a technical activity and not a social one.
  * They think that they'll be able to tidy it up later.
A couple of key confusions and misconceptions beset Java developers, according to Verburg. “With Java and the JVM in particular I've seen a couple of trends,” he remarked. “One is that developers think that the JVM is a magic box that will clean up their memory, make their code run fast, as well as make them cups of coffee.
The JVM does help in a lot of cases, but bad code can and will still lead to terrible results! The other trend is to try and force Java (the language) to do something it's not very good at, such as rapid web development. So you get a proliferation of overly complex frameworks, libraries and techniques trying to get around the fact that Java is a monolithic, statically typed, compiled, OO environment. It's not a Golden Hammer!” I asked him about the keys to running a good Java User Group. “You need to have a ‘Why,’” he observed. “Many user groups know what they do (typically, events) and how they do it (the logistics), but what really drives users to join your group and to stay is to give them a purpose. For example, within the LJC we constantly talk about the ‘Why,’ which in our case is several whys:
  * Re-ignite the passion that developers have for their craft
  * Raise the bar of Java developers in London
  * We want developers to have a voice in deciding the future of Java
  * We want to inspire the next generation of tech leaders
  * To bring the disparate tech groups in London together
  * So we could learn from each other
  * We believe that the Java ecosystem forms a cornerstone of our society today -- we want to protect that for the future”
Looking ahead to Java 8, Verburg expressed excitement about Lambdas. “I cannot wait for Lambdas,” he enthused. “Brian Goetz and his group are doing a great job, especially given some of the backwards compatibility that they have to maintain. It's going to remove a lot of boilerplate and yet maintain readability, plus enable massive scaling.” Check out Martijn Verburg at JavaOne if you get a chance, and stay tuned for a longer interview yours truly did with Martijn, to be published on otn/java some time after JavaOne. Originally published on blogs.oracle.com/javaone.
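To make the boilerplate point concrete, here is a minimal, illustrative sketch (not from the interview) contrasting the pre-Java 8 anonymous-class idiom with the lambda and method-reference syntax as it eventually shipped in Java SE 8; the class name and sample data are made up for the example:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    public class LambdaSketch {
        public static void main(String[] args) {
            List<String> names = new ArrayList<>(Arrays.asList("Martijn", "Ben", "Heinz", "Kirk"));

            // Before Java 8: an anonymous inner class just to pass a bit of behavior around.
            Collections.sort(names, new Comparator<String>() {
                @Override
                public int compare(String a, String b) {
                    return Integer.compare(a.length(), b.length());
                }
            });

            // With Java 8 lambdas and method references: the same intent, most of the boilerplate gone.
            names.sort(Comparator.comparingInt(String::length));
            names.forEach(System.out::println);
        }
    }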

    Read the article

  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this was how you shopped for groceries: From the end of the aisle sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, “Got it!” Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf and again yell for the whole store to hear, “Got it!” Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle. Proceed to the next aisle and follow the same steps for the list of items you need from that aisle. Sounds ridiculous, doesn’t it? Not only is it horribly inefficient, it’s exhausting and can lead to wear out failures on your grocery cart, or worse, yourself. This is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling “Got it!” is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a “backhitch” where the drive has to back up on a tape to start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files. The good news is not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle’s StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D drive gives you the ability to write files in a NetApp NDMP backup environment the way you would normally shop for groceries. With grocery shopping, you essentially stream through aisles picking up items as you go, and then after checking out, yell, “Got it!”, though you might do that last step silently. With the T10000D, it has a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled in the T10000D tape drive, Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled: A tape mark received by the tape drive is treated as a buffered tape mark. A file sync received by the tape drive is treated as a no op command. Since buffered tape marks and no op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time. Oracle has emulated NetApp environments with a number of different file sizes and found the following when comparing the T10000D with the Tape Application Accelerator enabled versus LTO6 tape drives. Notice how the T10000D is not only monumentally faster, but also remarkably consistent? In addition, the writing of the 50 GB of files is done without a single backhitch. The LTO6 drive, meanwhile, will perform as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D tape drive reports back to the application, in this case NetApp, that the write was successful via a tape mark. So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn’t you always have it enabled? The reason is because tape drive buffers are meant to be just temporary data repositories so in the event of a power loss, there could be data loss in certain environments for the files that resided in the buffer. 
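As a rough analogy only (this is not how the drive firmware is implemented, and the class below is invented purely for illustration), the performance difference is similar to flushing an output stream after every small write versus letting a buffer absorb the writes and synchronizing once at the end:

    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    public class SyncCostSketch {
        // Writes many small "files" to one stream, optionally forcing a flush after each.
        static void writeFiles(OutputStream out, boolean syncEveryFile) throws IOException {
            byte[] smallFile = new byte[4096];
            for (int i = 0; i < 10_000; i++) {
                out.write(smallFile);
                if (syncEveryFile) {
                    // Analogous to a file sync the drive must honor immediately:
                    // empty the buffer, stop, backhitch, and start writing again.
                    out.flush();
                }
                // With the accelerator enabled, the per-file sync is treated as a no-op instead.
            }
            // One real synchronization point at the end, analogous to the final tape mark
            // reported back to the backup application.
            out.flush();
        }

        public static void main(String[] args) throws IOException {
            try (OutputStream out = new BufferedOutputStream(new FileOutputStream("stream.out"))) {
                writeFiles(out, false); // buffered behavior: stream the data, confirm once
            }
        }
    }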
Fortunately, we do have best practices, depending on your environment, to keep this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell “Got it!” out loud or just silently to yourself.
Customer Advisory Panel
One final link: Oracle has started up a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as desires for future product development. If you would like to participate in the program, go to this link at oracle.com.
Photo taken on Idaho's Sacajawea Historic Byway by Rick Ramsey.
- Brian Zents

    Read the article

  • Oracle Fusion Applications User Experience Design Patterns: Feeling the Love after Launch

    - by mvaughan
    By Misha Vaughan, Oracle Applications User Experience
In the first video by the Oracle Applications User Experience team on the Oracle Partner Network, Vice President Jeremy Ashley said that Oracle is looking to expand the ecosystem of support for Oracle’s applications customers as they begin to assess their investment and adoption of Oracle Fusion Applications. Oracle has made a massive investment to maintain the benefits of the Fusion Applications User Experience. This summer, the Applications User Experience team released the Oracle Fusion Applications user experience design patterns. Design patterns help create consistent experiences across devices. The launch has been very well received:
Angelo Santagata, Senior Principal Technologist and Fusion Middleware evangelist for Oracle, wrote this to the system integrator community: “The web site is the result of many years of Oracle R&D into user interface design for Fusion Applications and features a really cool web app which allows you to visualise the UI components in action.”
Grant Ronald, Director of Product Management, Application Development Framework (ADF), said: “It’s a science I don't understand, but now I don't have to ... Now you can learn from the UX experience of Fusion Applications.”
Frank Nimphius, Senior Principal Product Manager, Oracle (ADF), wrote about the launch of the design patterns for the ADF Code Corner, and Jürgen Kress, Senior Manager EMEA Alliances & Channels for Fusion Middleware and Service Oriented Architecture (SOA), shared the news with his Partner Community. Oracle Twitter followers also helped spread the message about the design patterns launch:
@bex – Brian Huff, founder and Chief Software Architect for Bezzotech, and Oracle ACE Director: “Nifty! The Oracle Fusion UX team just released new ADF design patterns.”
@maiko_rocha – Maiko Rocha, Oracle Consulting Solutions Architect and Oracle FMW engineer: “Haven't seen any other vendor offer such comprehensive UX Design Patterns catalog for free!”
@zirous_chad – Chad Thompson, Senior Solutions Architect for Zirous, Inc. and ADF Developer: “Wow - @ultan and company did a great job with the Fusion UX Patterns”
What is a user experience design pattern?
A user experience design pattern is a re-usable, usability-tested functional blueprint for a particular user experience. Some examples are guided processes, shopping carts, and search and search results. Ultan O’Broin discusses the top design patterns every developer should know. The patterns that were just released are based on thousands of hours of end-user field studies, state-of-the-art user interface assessments, and usability testing. To be clear, these are functional design patterns, not the technical design patterns that developers may be used to working with. Because we know there is a gap, we are putting together some training that will help close that gap.
Who should care?
This is an offering targeted primarily at Application Development Framework (ADF) developers. If you are faced with the following questions regarding Fusion Applications, you will want to know and learn more:
  • How do I build something that looks like Fusion Applications?
  • How do I build a next-generation application?
  • How do I extend a Fusion Application and maintain the user experience?
  • I don’t want to re-invent the wheel on the user interface, so where do I start?
  • I need to build something that will eventually co-exist with Fusion Applications. How do I do that?
These questions are relevant to partners with an ADF competency, individual practitioners or small consultancies with an ADF specialization, and customers who are trying to shift their IT staff over to supporting Fusion Applications.
Where can you find out more?
Online: Our Fusion User Experience design patterns maven is Ultan O’Broin. The Oracle Partner Network is helping our team bring this first e-seminar to you in order to go into more detail on what this means and how to take advantage of it:
  Webinar: Build a Better User Experience with Oracle: Oracle Fusion Applications Functional Design Patterns
  Sept 20, 2012, 10:30am-11:30am Pacific
  Dial-In: 1-877-664-9137 / Passcode 102546; International: 706-634-9619; http://www.intercall.com/national/oracleuniversity/gdnam.html
  Access the live event, or via webconference go to http://ouweb.webex.com and enter session number 598036234.
At a user group event: The Fusion User Experience Advocates (FXA) are also going to be getting some deep-dive training on this content and can share it with local user groups.
At OpenWorld: If you will be at OpenWorld this year, our own Ultan O’Broin will be visiting the ADF demopod to say hello, thanks to Shay Shmeltzer, Senior Group Manager for ADF outbound communication, and at the OTN lounge:
  Monday 10-10:45, Tuesday 2:15-2:45, Wednesday 2:15-3:30 – Oracle JDeveloper and Oracle ADF, Moscone South, Right - S-207
  “ADF Meet and Greet”, OTN Lounge, Wednesday 4:30
And I cannot talk about OpenWorld and ADF without mentioning Chris Muir’s ADF EMG event: the Year After the Year Of the ADF Developer – Sunday, Sept 30 of OpenWorld. Chris has played host to Ultan and the Applications user experience message for his online community and is now a seasoned UX expert. Expect to see additional announcements about expanded training on similar topics in the future.

    Read the article

  • Red Gate does Byte Night 2012

    - by red(at)work
    On the 5th of October 2012, a team of nine plucky Red Gaters braved the howling wind and the driving rain to sleep outside. No tents or mattresses were allowed – all we took for protection were sleeping bags, groundsheets, plastic sacks and Colin’s enormous fishing umbrella (a godsend in umbrella-y disguise). Why would we do such a thing? For Byte Night, an annual tech sector sleepout in support of Action for Children, who tackle the causes as well as the consequences of youth homelessness. Byte Night encourages technology professionals to do for one night a year what thousands of young people have to do every night – sleep rough.  We signed up for Byte Night in the warm, heady midst of the British summer, thinking it couldn’t possibly be all that bad. Even on the night itself – before the rain began to fall, sat in the comfort and warmth of a company canteen, drinking wine and eating chill and preparing to win the pub quiz – we were excited and optimistic about the night that lay ahead of us. All of that changed as soon as we stepped out into one of the worst rainstorms of the year. Brian, the team’s birthday boy, describes it best: Picture the scene: it’s 3 am on a Friday. I’m lying outside, fully clothed in a sleeping bag, wearing a raincoat, trussed up inside a large plastic pocket, on a ground sheet beneath a giant umbrella, wedged so tightly between two of my colleagues that I can’t move my arms. I’m wide awake, staring up at the grey sky beyond the edge of the umbrella; a limp, flickering white glow hints at a moon somewhere behind the drifting clouds. I haven’t slept since we first moved outside at 11 pm. Outside. Did I mention we were outside? I’m hung over. I need the loo. But there is no way on earth that I’m getting out of this sleeping bag. It’s cold. It’s raining. Not just raining, but chucking it down. It’s been doing this non-stop since 10pm. The rain sounds like a hyperactive drummer on the fishing umbrella, and the noise is loud and relentless. Puddles of water are forming all over the groundsheet, and, despite being ensconced inside the plastic pouch, I am wet. The fishing umbrella is protecting me from the worst of the driving rain, but not all of me is under it, and five hours of rain is no match for it. Everything is wet. My left side has become horribly damp. My trainers, which I placed next to my sleeping bag, are now completely soaked through. Mmm. That’ll be fun in the morning. My head is next to Colin’s head on one side, and a multi-pack of McCoy’s cheddar and onion crisps on the other. Don’t ask about the tub of hummus. That’s somewhere down by my ankles, abandoned to the night. Jess, who is lying next to me, rolls over onto her side. A mini waterfall cascades from her rain-pouch onto my face. Bah. I continue to stare into the heavens, willing the dawn to hurry up. Something lands on my face. It’s a mosquito. Great. Midnight, when this still seemed like fun – when we opened some champagne and my colleagues presented me with a caterpillar birthday cake, when everyone was drunk and jolly and full of stoic resolve – feels like a long time ago. Did I mention that today is my birthday? The remains of the caterpillar cake endure the same fate as the hummus, left out in the rain like a metaphor for sadness. It’s getting colder. I can see my breath. Silence has descended on the group, apart from the rustle of plastic. And the rain, obviously. Someone snores, and I envy whoever it is the sweet escape of sleep. 
I try to wriggle a bit further down inside my sleeping bag, but it doesn’t want to be wriggled into. Only 3 hours till dawn. 180 minutes. I begin to count them off, one at a time.  All nine of us got to go home in the morning, but thousands of children across the UK don’t have that luxury. If you’d like to sponsor the Red Gate Byte Night team, our JustGiving page can be found here.   Chris, before the outside bit actually happened. More photos from Byte Night Cambridge 2012 can be found here.

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we’re proud to have taken part in its beginnings and, as you’ll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or can be done with standard POSIX file operations. You may be thinking all of this sounds nice, but asking, “when will I actually use it?” As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few examples of use cases where LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there’s one place that has bandwidth, it’s your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive.
Tape storage should be the preferred format because, as IDC points out, “Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk" (See note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you’ve downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn’t have to care what technology the third party has. If it’s written back to an LTFS-formatted tape cartridge, it can be read. Some tape vendors like to claim LTFS is a “standard,” but beauty is in the eye of the beholder. It’s a specification at this point, not a standard. That said, we’re already seeing application vendors create functionality to write in an LTFS format based on the specification. And it’s my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it’s free, which allows tape to maintain all of its cost advantages over disk!
Note 1 - IDC Report, April 2011. “IDC’s Archival Storage Solutions Taxonomy, 2011”
- Brian Zents
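As an aside for developers: because an LTFS volume mounts as an ordinary file system, scripting against a loaded cartridge looks just like working with any directory. The sketch below is purely illustrative; the mount point and file paths are hypothetical, and it assumes the LTFS software has already presented the loaded cartridge at that path:

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class LtfsCopySketch {
        public static void main(String[] args) throws IOException {
            Path tape = Paths.get("/mnt/ltfs");                     // hypothetical LTFS mount point
            Path scene = Paths.get("/data/production/scene42.mov"); // hypothetical source file

            // Browse the cartridge like any directory; the index partition supplies
            // the listing without reading the data partition.
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(tape)) {
                for (Path entry : entries) {
                    System.out.println(entry.getFileName());
                }
            }

            // A standard copy call; the LTFS driver handles the tape layout underneath.
            Files.copy(scene, tape.resolve(scene.getFileName()), StandardCopyOption.REPLACE_EXISTING);
        }
    }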

    Read the article

  • Welcome to Jackstown

    - by fatherjack
    I live in a small town. The population count isn't that great, but let me introduce you to some of the population. We'll start with Martin the Doc: he fixes up anything that gets poorly, so much so that he could be classed as the doctor, the vet and even the garage mechanic. He's got a reputation that he can fix anything, and that hasn't been proved wrong yet. He's great friends with Brian (who gets called "Brains"), the teacher, who seems to have a sound understanding of any topic you care to pass his way. If he isn't sure he tells you, then goes to find out and comes back with a full answer real quick. It's good to have that sort of research capability close at hand. Brains is also great at encouraging anyone who needs a bit of support to get them up to speed and working on their jobs. Steve sees Brains regularly; that's because he is the librarian. He keeps all sorts of reading material, and nowadays there's even video to watch about any topic you like. Steve keeps scouring all sorts of places to get the content that's needed and he keeps it in good order so that whatever is needed can be found quickly. He also has to make sure that old stuff gets marked as probably out of date so that anyone reading it won't get misled. Over the road from him is Greg, the town crier. We don't have a newspaper here, so Greg keeps us all informed of what's going on "out of town" - what new stuff we might make use of and what won't work in a small place like this. If we are interested he goes ahead and gets people in to demonstrate their products and tell us about the details. Greg is pretty good at getting us discounts too. Now Greg's brother Ian works for the mayor's office in the "waste management department"; nowadays it's all about the recycling, but he still has to make sure that the stuff that can't be used any more gets disposed of properly. The type of waste he's dealing with decides how it needs to be treated, and he has to know a lot about the different methods and when to use which ones. There are two people that keep the peace in town: Brent is the detective, investigating wrongdoings and applying justice where necessary, and Bart is the diplomat who smooths things over when people have a dispute or disagreement. Brent is meticulous in his investigations and fair in the way he handles any situation he finds. Discretion is his byword. There's a rumour that Bart used to work for the United Nations, but whatever his history there is no denying his ability to get apparently irreconcilable parties working together to their combined benefit. Someone who works closely with Bart is Brad, the translator in town. He has several languages that he can converse in, but he can also explain things from someone's point of view and make them understandable to someone else. Keeping things on the straight and narrow from a legal perspective is Ben the solicitor, making sure we all abide by the rules. Two people who make for an interesting evening's conversation if you get them together are Aaron and Grant: Aaron is the local planning inspector and Grant is an inventor of some reputation. Anything being constructed around here needs Aaron's agreement. He's quite flexible in his rules, though, if you can justify what you want to do with solid logic, but he won't stand for any development going on without his inclusion. That gets a demolition notice and there's no argument. Grant, as I mentioned, is the inventor in town; if something can be improved or created then Grant is your man.
He mainly works on his own but isn't averse to getting specific advice and assistance from specialists from out of town if they can help him finish his creations. There aren't too many people left for you to meet in the town. There's Rob, an ex-professional sportsman. He played Hockey, Football, Cricket, you name it. He was in his element as goal keeper / wicket keeper and that shows in his personal life. He just goes about his business and people often don't even know that he's helped them. Really low profile, doesn't get any glory, but saves people from lots of problems, even disasters on occasion. There goes Neil, a bit of an odd person; some people say he's gifted with special clairvoyant powers, but personally I think he's got his ear to the ground and knows where to find out the important news as soon as it's made public. Anyone getting a visit from Neil is best off following his advice, though; he's usually spot on and you won't be caught by surprise if you follow his recommendations – wherever they come from. Poor old Andrew is the last person to introduce you to. Andrew doesn't show himself too often, but when he does it seems that people find a reason to blame him for their problems, whether he had anything to do with their predicament or not. In all honesty, without fail, and to his great credit, he takes it in good grace and never retaliates or gets annoyed when he's out and about. It pays off too, as it's very often the case that those who were blaming him recently suddenly find they need his help, and they readily forget the issues pretty rapidly. And then there's me. What do I do in town? Well, I'm just a DBA with a lot of hats. (Jackstown Pop. 1)

    Read the article

  • Five Things Learned at the BSR Conference in San Francisco on Nov 2nd-4th

    - by Evelyn Neumayr
    The BSR Conference 2011—“Redefining Leadership”—held from Nov 2nd to Nov 4th in San Francisco, with Oracle as one of the main sponsors, saw senior business executives, civil society representatives, and other experts from around the world gathering to share strategies and insights on the future of sustainability. The general conference sessions kicked off on November 2nd with a plenary address by former U.S. Vice President Al Gore. Other sessions were presented by CEOs of the caliber of Carl Bass (Autodesk), Brian Dunn (Best Buy), Carlos Brito (Anheuser-Busch InBev) and Ofra Strauss (Strauss Group). Here are five key highlights from the conference:
1. The main leadership challenge is integrating sustainability into core business functions and overcoming short-termism. The “BSR GlobeScan State of Sustainable Business Poll 2011” - a survey of nearly 500 business leaders from 300 member companies - shows that 84% of respondents are optimistic that global businesses will embrace CSR/sustainability as part of their core strategies and operations in the next five years but consider integrating sustainability into their core business functions the key challenge. It is still difficult for many companies that are committed to the sustainability agenda to find investors that understand the long-term implications, and as Al Gore said, “Many companies are given the signal by the investors that it is the short term results that matter and that is a terribly debilitating force in the market.”
2. Companies are required to address increasing compliance requirements and transparency in their supply chain, especially in relation to conflict minerals legislation and water management. The Dodd-Frank legislation, OECD guidelines, and the upcoming Securities and Exchange Commission (SEC) rules require companies to monitor upstream the sourcing of tin, tantalum, tungsten, and gold, but given the complexity of this issue companies need to collaborate and partner with peer companies in their industry as well as in other industries to understand how to address conflict minerals in their supply chains. The Institute of Public and Environmental Affairs’ (IPE) China Water Pollution Map enables the public to access thousands of environmental quality, discharge, and infraction records released by various government agencies. Empowered with this information, the public has the opportunity to place greater pressure on polluting companies to comply with environmental standards and create solutions to improve their performance.
3. A new standard for reporting on supply chain greenhouse gas emissions is available. The new “Scope 3” Supply Chain Greenhouse Gas Inventory Standard, released on October 4th 2011, is the only international greenhouse gas emissions standard that accounts for the full lifecycle of a company’s products. It provides a framework for companies to account for indirect emissions outside of energy use, such as transportation, manufacturing, and distribution, and it incorporates both upstream and downstream impacts of a product. With key investors now listing supplier vulnerability to rising energy prices and disruptions of service as a key concern, greenhouse gas (GHG) management isn’t just for leading companies but a necessity for any business.
4. Environmental, social, and corporate governance (ESG) reporting is becoming increasingly important to investors and other stakeholders. While European investors have traditionally driven the ESG agenda, U.S. investors are increasingly including ESG data in their analyses. This trend will likely increase as stakeholders continue to demand that an ESG lens be applied to their investments. Investors are increasingly looking to partner on sustainability, as they see the benefits of ESG providing significant returns on investment.
5. Software companies are offering an increasing variety of solutions to help drive changes and measure performance internally, in supply chains, and across peer companies. The significant challenge is how to integrate different software systems to facilitate decision-making based on a holistic understanding of trade-offs. Jon Chorley, Chief Sustainability Officer and Vice President, Supply Chain Management Product Strategy at Oracle, was a panelist in the “Trends in Sustainability Software” session and commented that, “How we think about our business decisions really comes down to how we think about cost. And as long as we don’t assign a cost to things that have an environmental impact or social impact, then we make decisions based on incomplete information. If we could include that in the process that determines ‘Is this product profitable?’ we would then have a much better decision.”
For more information on BSR visit www.bsr.org. You can also view highlights of the plenary session at http://www.bsr.org/en/bsr-conference/session-summaries/2011. Oracle is proud to be a sponsor of this BSR conference.
By Elena Avesani, Principal Product Strategy Manager, Oracle

    Read the article

  • Blog Buzz - Devoxx 2011

    - by Janice J. Heiss
    Some day I will make it to Devoxx – for now, I’m content to vicariously follow the blogs of attendees and pick up on what’s happening. I’ve been doing more blog "fishing," looking for the best commentary on 2011 Devoxx. There’s plenty of food for thought – and the ideas are not half-baked. The bloggers are out in full, offering useful summaries and commentary on Devoxx goings-on.
Constantin Partac, a Java developer and a member of Transylvania JUG, a community from Cluj-Napoca/Romania, offers an excellent summary of the Devoxx keynotes. Here’s a sample:
“Oracle Opening Keynote and JDK 7, 8, and 9 Presentation
  • Oracle is committed to Java and wants to provide support for it on any device.
  • JSE 7 for Mac will be released next week.
  • Oracle would like Java developers to be involved in JCP, to adopt a JSR and to attend local JUG meetings.
  • JEE 7 will be released next year.
  • JEE 7 is focused on cloud integration, some of the features are already implemented in glassfish 4 development branch.
  • JSE 8 will be release in summer of 2013 due to “enterprise community request” as they can not keep the pace with an 18 month release cycle.
  • The main features included in JSE8 are lambda support, project Jigsaw, new Date/Time API, project Coin++ and adding support for sensors.
JSE 9 probably will focus on some of these features:
  1. self tuning JVM
  2. improved native language integration
  3. processing enhancement for big data
  4. reification (adding runtime class type info for generic types)
  5. unification of primitive and corresponding object classes
  6. meta-object protocol in order to use type and methods define in other JVM languages
  7. multi-tenancy
  8. JVM resource management”
Thanks Constantin!
Ivan St. Ivanov, of SAP Labs Bulgaria, also commented on the keynotes with a different focus. He summarizes Henrik Stahl’s look ahead to Java SE 8 and JavaFX 3.0; Cameron Purdy on Java EE and the cloud; celebrated Java Champion Josh Bloch on what’s good and bad about Java; Mark Reinhold’s quick look ahead to Java SE 9; and Brian Goetz on lambdas and default methods in Java SE 8. Here’s St. Ivanov’s account of Josh Bloch’s comments on the pluses of Java:
“He started with the virtues of the platform. To name a few:
  • Tightly specified language primitives and evaluation order – int is always 32 bits and operations are executed always from left to right, without compilers messing around
  • Dynamic linking – when you change a class, you need to recompile and rebuild just the jar that has it and not the whole application
  • Syntax similarity with C/C++ – most existing developers at that time felt like at home
  • Object orientations – it was cool at that time as well as functional programming is today
  • It was statically typed language – helps in faster runtime, better IDE support, etc.
  • No operator overloading – well, I’m not sure why it is good. Scala has it for example and that’s why it is far better for defining DSLs. But I will not argue with Josh.”
It’s worth checking out St. Ivanov’s summary of Bloch’s views on what’s not so great about Java as well.
What's Coming in JAX-RS 2.0
Marek Potociar, Principal Software Engineer at Oracle and currently specification lead of Java EE RESTful web services API (JAX-RS), blogged on his talk about what's coming in JAX-RS 2.0, scheduled for final release in mid-2012. Here’s a taste:
“Perhaps the most wanted addition to the JAX-RS is the Client API, that would complete the JAX-RS story, that is currently server-side only.
In JAX-RS 2.0 we are adding a completely interface-based and fluent client API that blends nicely in with the existing fluent response builder pattern on the server-side. When we started with the client API, the first proposal contained around 30 classes. Thanks to the feedback from our Expert Group we managed to reduce the number of API classes to 14 (2 of them being exceptions)! The resulting API is compact while at the same time we still managed to create an API that reflects the method invocation context flow (e.g. once you decide on the target URI and start setting headers on the request, your IDE will not try to offer you a URI setter in the code completion). This is a subtle but very important usability aspect of an API…” Obviously, Devoxx is a great Java conference, one that comes this year at a time when much is brewing in the platform and much is eagerly anticipated.
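For readers who have not seen the new client API, here is a brief sketch of the fluent style being described, written against the JAX-RS 2.0 API as it eventually shipped (the proposal discussed in the post was still evolving at the time); the URLs and payload are hypothetical:

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.client.Entity;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    public class JaxRsClientSketch {
        public static void main(String[] args) {
            Client client = ClientBuilder.newClient();

            // Fluent call chain: choose the target URI, then the request headers, then the HTTP method.
            String events = client.target("http://example.com/api/events")
                                  .queryParam("conference", "devoxx")
                                  .request(MediaType.APPLICATION_JSON)
                                  .get(String.class);
            System.out.println(events);

            // Posting an entity follows the same shape.
            Response created = client.target("http://example.com/api/events")
                                     .request()
                                     .post(Entity.json("{\"name\":\"Devoxx 2011\"}"));
            System.out.println(created.getStatus());

            client.close();
        }
    }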

    Read the article

  • The JavaOne 2012 Sunday Technical Keynote

    - by Janice J. Heiss
    At the JavaOne 2012 Sunday Technical Keynote, held at the Masonic Auditorium, Mark Reinhold, Chief Architect, Java Platform Group, stated that they were going to do things a bit differently--"rather than 20 minutes of SE, and 20 minutes of FX, and 20 minutes of EE, we're going to mix it up a little," he said. "For much of it, we're going to be showing a single application, to show off some of the great work that's been done in the last year, and how Java can scale well--from the cloud all the way down to some very small embedded devices, and how JavaFX scales right along with it." Richard Bair and Jasper Potts from the JavaFX team demonstrated a JavaOne schedule builder application with impressive navigation, animation, pop-overs, and transitions. They noted that the application runs seamlessly on either Windows or Macs, running Java 7. They then ran the same application on an Ubuntu Linux machine--"it just works," said Bair. The JavaFX duo next put the recently released JavaFX Scene Builder through its paces -- dragging and dropping various image assets to build the application's UI, then fine tuning a CSS file for the finished look and feel. Among many other new features, in the past six months JavaFX has released support for H.264 and HTTP live streaming, "so you can get all the real media playing inside your JavaFX application," said Bair. And in their developer preview builds of JavaFX 8, they've now split the rendering thread from the UI thread, to better take advantage of multi-core architectures. Next, Brian Goetz, Java Language Architect, explored language and library features planned for Java SE 8, including Lambda expressions and better parallel libraries. These feature changes both simplify code and free up libraries to more effectively use parallelism. "It's currently still a lot of work to convert an application from serial to parallel," noted Goetz. Reinhold had previously boasted of Java scaling down to "small embedded devices," so Bair and Potts next ran their schedule builder application on a small embedded PandaBoard system with an OMAP4 chip set. Connected to a touch screen, the embedded board ran the same JavaFX application previously seen on the desktop systems, but now running on Java SE Embedded. (The systems can be seen and tried at four of the nearby JavaOne hotels.) Bob Vandette, Java Embedded Architect, then displayed a $25 Raspberry Pi ARM-based system running Java SE Embedded, noting the even greater need for the platform independence of Java in such highly varied embedded processor spaces. Reinhold and Vandette discussed Project Jigsaw, the planned modularization of the Java SE platform, and its deferral from the Java 8 release to Java 9. Reinhold demonstrated the promise of Jigsaw by running a modularized demo version of the earlier schedule builder application on the resource-constrained Raspberry Pi system--although the demo gods were not smiling down, and the application ultimately crashed. Reinhold urged developers to become involved in the Java 8 development process--getting the weekly builds, trying out their current code, and trying out the new features:
  http://openjdk.java.net/projects/jdk8
  http://openjdk.java.net/projects/jdk8/spec
  http://jdk8.java.net
From there, Arun Gupta explored Java EE. The primary themes of Java EE 7, Gupta stated, will be greater productivity and HTML 5 functionality (WebSocket, JSON, and HTML 5 forms).
Part of the planned productivity increase of the release will come from a reduction in writing boilerplate code--through the widespread use of dependency injection in the platform, along with default data sources and default connection factories. Gupta noted the inclusion of JAX-RS in the web profile, the changes and improvements found in JMS 2.0, as well as enhancements to Java EE 7 in terms of JPA 2.1 and EJB 3.2. GlassFish 4 is the reference implementation of Java EE 7, and currently includes WebSocket, JSON, JAX-RS 2.0, JMS 2.0, and more. The final release is targeted for Q2, 2013. Looking forward to Java EE 8, Gupta explored how the platform will provide multi-tenancy for applications, modularity based on Jigsaw, and cloud architecture. Meanwhile, Project Avatar is the group's incubator project for designing an end-to-end framework for building HTML 5 applications. Santiago Pericas-Geertsen joined Gupta to demonstrate their "Angry Bids" auction/live-bid/chat application using many of the enhancements of Java EE 7, along with an Avatar HTML 5 infrastructure, and running on the GlassFish reference implementation.Finally, Gupta covered Project Easel, an advanced tooling capability in NetBeans for HTML5. John Ceccarelli, NetBeans Engineering Director, joined Gupta to demonstrate creating an HTML 5 project from within NetBeans--formatting the project for both desktop and smartphone implementations. Ceccarelli noted that NetBeans 7.3 beta will be released later this week, and will include support for creating such HTML 5 project types. Gupta directed conference attendees to: http://glassfish.org/javaone2012 for everything about Java EE and GlassFish at JavaOne 2012.
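As a small illustration of the point Goetz made earlier in the keynote about serial-to-parallel conversion, here is a sketch using the stream and lambda APIs as they later shipped in Java SE 8 (not code shown at the keynote): with the new parallel libraries, switching a pipeline from serial to parallel is a one-call change rather than a rewrite.

    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    public class ParallelSketch {
        public static void main(String[] args) {
            List<Integer> data = IntStream.rangeClosed(1, 1_000_000)
                                          .boxed()
                                          .collect(Collectors.toList());

            // Serial pipeline: sum of squares of the even numbers.
            long serial = data.stream()
                              .filter(n -> n % 2 == 0)
                              .mapToLong(n -> (long) n * n)
                              .sum();

            // Parallel pipeline: identical logic; only the stream source changes,
            // and the library handles splitting the work across cores.
            long parallel = data.parallelStream()
                                .filter(n -> n % 2 == 0)
                                .mapToLong(n -> (long) n * n)
                                .sum();

            System.out.println(serial == parallel); // true
        }
    }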

    Read the article

  • Highlights from the Oracle Customer Experience Summit @ OpenWorld

    - by Richard Lefebvre
    The Oracle Customer Experience Summit was the first-ever event covering the full breadth of Oracle's CX portfolio -- Marketing, Sales, Commerce, and Service. The purpose of the Summit was to articulate the customer experience imperative and to showcase the suite of Oracle products that can help our customers create the best possible customer experience. This topic has always been a very important one, but now that there are so many alternative companies to do business with and because people have such public ways to voice their displeasure, it's necessary for vendors to have multiple listening posts in place to gauge consumer sentiment. They need to know what is going on in real time and be able to react quickly to turn negative situations into positive ones. Those can then be shared in a social manner to enhance the brand and turn the customer into a repeat customer.

The Summit was focused on Oracle's portfolio of products and entirely dedicated to customers who are committed to building great customer experiences within their businesses. Rather than DBAs, the attendees were business people looking to collaborate with other like-minded experts and find out how Oracle can help in terms of technology, best practices, and expertise. The event was at the Westin St. Francis Hotel in San Francisco as part of Oracle OpenWorld. We had eight hundred people attend, which was great for the first year. Next year, there's no doubt in my mind, we can raise that number to 5,000.

Alignment and Logic

Oracle's Customer Experience portfolio is made up of a combination of acquired and organic products owned by many people who are new to Oracle. We include homegrown Fusion CRM, as well as RightNow, Inquira, OPA, Vitrue, ATG, Endeca, and many others. The attendees knew of the acquisitions, so naturally they wanted to see how the products all fit together and hear the logic behind the portfolio. To tell them about our alignment, we needed to be aligned. To accomplish that, a cross functional team at Oracle agreed on the messaging so that every single Oracle presenter could cover the big picture before going deep into a product or topic. Talking about the full suite of products in one session produced overflow value for other products. And even though this internal coordination was a huge effort, everyone saw the value for our customers and for our long-term cooperation and success.

Keynotes, Workshops, and Tents of Innovation

We scored by having Seth Godin as our keynote speaker -- always provocative and popular. The opening keynote was a session orchestrated by Mark Hurd, Anthony Lye, and me. Mark set the stage by giving real-world examples of bad customer experiences, Anthony clearly articulated the business imperative for addressing these experiences, and I brought it all to life by taking the audience around the Customer Lifecycle and showing demos and videos, with partners included at each of the stops around the lifecycle. Brian Curran, a VP for RightNow Product Strategy, presented a session that was in high demand called The Economics of Customer Experience. People loved hearing how to build a business case and justify the cost of building a better customer experience. John Kembel, another VP for RightNow Product Strategy, held a workshop that customers raved about. It was based on the journey mapping methodology he created, which is a way to talk to customers about where they want to make improvements to their customers' experiences. He divided the audience into groups led by facilitators. Each person had the opportunity to engage with experts and peers and construct some real takeaways.

The conference hotel was across from Union Square so we used that space to set up Innovation Tents. During the day we served lunch in the tents and partners showed their different innovative ideas. It was very interesting to see all the technologies and advancements. It also gave people a place to mix and mingle and to think about the fringe of where we could all take these ideas.

Product Portfolio Plus Thought Leadership

Of course there is always room for improvement, but the feedback on the format of the conference was positive. Ninety percent of the sessions had either a partner or a customer teamed with an Oracle presenter. The presentations weren't dry, one-way information dumps, but more interactive. I just followed up with a CEO who attended the conference with his Head of Marketing. He told me that they are using John Kembel's journey mapping methodology across the organization to pull people together. This sort of thought leadership in these highly competitive areas gives Oracle permission to engage around the technology. We have to differentiate ourselves and it's harder to do on the product side because everyone looks the same on paper. But on thought leadership -- we can, and did, take some really big steps.

David Vap
Group Vice President
Oracle Applications Product Development

    Read the article

  • UIView with IrrlichtScene - iOS

    - by user1459024
    i have a UIViewController in a Storyboard and want to draw a IrrlichtScene in this View Controller. My Code: WWSViewController.h #import <UIKit/UIKit.h> @interface WWSViewController : UIViewController { IBOutlet UILabel *errorLabel; } @end WWSViewController.mm #import "WWSViewController.h" #include "../../ressources/irrlicht/include/irrlicht.h" using namespace irr; using namespace core; using namespace scene; using namespace video; using namespace io; using namespace gui; @interface WWSViewController () @end @implementation WWSViewController -(void)awakeFromNib { errorLabel = [[UILabel alloc] init]; errorLabel.text = @""; IrrlichtDevice *device = createDevice( video::EDT_OGLES1, dimension2d<u32>(640, 480), 16, false, false, false, 0); /* Set the caption of the window to some nice text. Note that there is an 'L' in front of the string. The Irrlicht Engine uses wide character strings when displaying text. */ device->setWindowCaption(L"Hello World! - Irrlicht Engine Demo"); /* Get a pointer to the VideoDriver, the SceneManager and the graphical user interface environment, so that we do not always have to write device->getVideoDriver(), device->getSceneManager(), or device->getGUIEnvironment(). */ IVideoDriver* driver = device->getVideoDriver(); ISceneManager* smgr = device->getSceneManager(); IGUIEnvironment* guienv = device->getGUIEnvironment(); /* We add a hello world label to the window, using the GUI environment. The text is placed at the position (10,10) as top left corner and (260,22) as lower right corner. */ guienv->addStaticText(L"Hello World! This is the Irrlicht Software renderer!", rect<s32>(10,10,260,22), true); /* To show something interesting, we load a Quake 2 model and display it. We only have to get the Mesh from the Scene Manager with getMesh() and add a SceneNode to display the mesh with addAnimatedMeshSceneNode(). We check the return value of getMesh() to become aware of loading problems and other errors. Instead of writing the filename sydney.md2, it would also be possible to load a Maya object file (.obj), a complete Quake3 map (.bsp) or any other supported file format. By the way, that cool Quake 2 model called sydney was modelled by Brian Collins. */ IAnimatedMesh* mesh = smgr->getMesh("/Users/dbocksteger/Desktop/test/media/sydney.md2"); if (!mesh) { device->drop(); if (!errorLabel) { errorLabel = [[UILabel alloc] init]; } errorLabel.text = @"Konnte Mesh nicht laden."; return; } IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode( mesh ); /* To let the mesh look a little bit nicer, we change its material. We disable lighting because we do not have a dynamic light in here, and the mesh would be totally black otherwise. Then we set the frame loop, such that the predefined STAND animation is used. And last, we apply a texture to the mesh. Without it the mesh would be drawn using only a color. */ if (node) { node->setMaterialFlag(EMF_LIGHTING, false); node->setMD2Animation(scene::EMAT_STAND); node->setMaterialTexture( 0, driver->getTexture("/Users/dbocksteger/Desktop/test/media/sydney.bmp") ); } /* To look at the mesh, we place a camera into 3d space at the position (0, 30, -40). The camera looks from there to (0,5,0), which is approximately the place where our md2 model is. */ smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0)); /* Ok, now we have set up the scene, lets draw everything: We run the device in a while() loop, until the device does not want to run any more. 
This would be when the user closes the window or presses ALT+F4 (or whatever keycode closes a window). */ while(device->run()) { /* Anything can be drawn between a beginScene() and an endScene() call. The beginScene() call clears the screen with a color and the depth buffer, if desired. Then we let the Scene Manager and the GUI Environment draw their content. With the endScene() call everything is presented on the screen. */ driver->beginScene(true, true, SColor(255,100,101,140)); smgr->drawAll(); guienv->drawAll(); driver->endScene(); } /* After we are done with the render loop, we have to delete the Irrlicht Device created before with createDevice(). In the Irrlicht Engine, you have to delete all objects you created with a method or function which starts with 'create'. The object is simply deleted by calling ->drop(). See the documentation at irr::IReferenceCounted::drop() for more information. */ device->drop(); } - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown); } @end Sadly the result is just a black View in the Simulator. :( Hope here is anyone who can explain me how i draw the scene in a UIView. Furthermore I'm getting this Error: Could not load sprite bank because the file does not exist: #DefaultFont How can i fix it ?

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP.

I came across XTP back in 2007 while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beyond this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... which is still not very helpful. In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of that category. The shortcut was "Oracle Coherence Cache helps to improve Performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career in Oracle, delivered by Brian Oliver.

The biggest mistake I made was to assume that the performance improvement with Coherence was about response time. That can be considered a legitimate assumption at the time, because after all, caches help to reduce latency on cached data access and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

The key mistake was my perception, or obsession, about how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally presented as Capacity (or Throughput), with two important dimensions, Speed (response time) and Volume (load):

Capacity (TPS) = Volume (T) / Speed (S)

To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because the Speed can be influenced by the Volume. For every system, the performance picture looks much the same: in any traditional system, increasing the Volume (transactions) will, at some point, also increase the Speed dimension (response time). The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of 100 entries. If you need to speed up the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in.

The primary objective of XTP is designing applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the distributed cache, because it can guarantee:
- constant data object access time, independent of the number of objects and the Coherence cluster size
- data object distribution by affinity for in-memory data grouping
- in-place data processing for parallel execution

To summarize, Oracle Coherence is indeed useful for improving your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample code and experiences I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial. Have fun!
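To make the constant-access-time point concrete, here is a minimal sketch of basic Coherence usage, assuming the classic com.tangosol.net NamedCache API; the cache name and keys are invented for the example:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CoherenceSketch {
        public static void main(String[] args) {
            // Join (or start) the cluster and obtain a named, partitioned cache.
            NamedCache orders = CacheFactory.getCache("orders");

            // put/get are key-based operations routed to the partition owner,
            // so their cost stays roughly constant as the cluster grows --
            // the scale-out behaviour described above.
            orders.put("order-42", "NEW");
            Object status = orders.get("order-42");
            System.out.println(status);

            CacheFactory.shutdown();
        }
    }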

    Read the article

  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop. One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices and each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled? In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns. The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup. It's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten in the midst all the other demands on their time. Sometimes, they're just too busy firefighting. But if it was simple to do? That was our inspiration for SQL Backup 7. So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board - "Make backup verification so easy to do that no DBA has an excuse for not doing it" It's the missing piece that completes the puzzle. Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement. The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict - so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time? 
Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run. Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup. We think we've done that. When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it. Post by Brian Harris

    Read the article

  • Revisiting ANTS Performance Profiler 7.4

    - by James Michael Hare
    Last year, I did a small review on the ANTS Performance Profiler 6.3, now that it’s a year later and a major version number higher, I thought I’d revisit the review and revise my last post. This post will take the same examples as the original post and update them to show what’s new in version 7.4 of the profiler. Background A performance profiler’s main job is to keep track of how much time is typically spent in each unit of code. This helps when we have a program that is not running at the performance we expect, and we want to know where the program is experiencing issues. There are many profilers out there of varying capabilities. Red Gate’s typically seem to be the very easy to “jump in” and get started with very little training required. So let’s dig into the Performance Profiler. I’ve constructed a very crude program with some obvious inefficiencies. It’s a simple program that generates random order numbers (or really could be any unique identifier), adds it to a list, sorts the list, then finds the max and min number in the list. Ignore the fact it’s very contrived and obviously inefficient, we just want to use it as an example to show off the tool: 1: // our test program 2: public static class Program 3: { 4: // the number of iterations to perform 5: private static int _iterations = 1000000; 6: 7: // The main method that controls it all 8: public static void Main() 9: { 10: var list = new List<string>(); 11: 12: for (int i = 0; i < _iterations; i++) 13: { 14: var x = GetNextId(); 15: 16: AddToList(list, x); 17: 18: var highLow = GetHighLow(list); 19: 20: if ((i % 1000) == 0) 21: { 22: Console.WriteLine("{0} - High: {1}, Low: {2}", i, highLow.Item1, highLow.Item2); 23: Console.Out.Flush(); 24: } 25: } 26: } 27: 28: // gets the next order id to process (random for us) 29: public static string GetNextId() 30: { 31: var random = new Random(); 32: var num = random.Next(1000000, 9999999); 33: return num.ToString(); 34: } 35: 36: // add it to our list - very inefficiently! 37: public static void AddToList(List<string> list, string item) 38: { 39: list.Add(item); 40: list.Sort(); 41: } 42: 43: // get high and low of order id range - very inefficiently! 44: public static Tuple<int,int> GetHighLow(List<string> list) 45: { 46: return Tuple.Create(list.Max(s => Convert.ToInt32(s)), list.Min(s => Convert.ToInt32(s))); 47: } 48: } So let’s run it through the profiler and see what happens! Visual Studio Integration First, let’s look at how the ANTS profilers integrate with Visual Studio’s menu system. Once you install the ANTS profilers, you will get an ANTS menu item with several options: Notice that you can either Profile Performance or Launch ANTS Performance Profiler. These sound similar but achieve two slightly different actions: Profile Performance: this immediately launches the profiler with all defaults selected to profile the active project in Visual Studio. Launch ANTS Performance Profiler: this launches the profiler much the same way as starting it from the Start Menu. The profiler will pre-populate the application and path information, but allow you to change the settings before beginning the profile run. So really, the main difference is that Profile Performance immediately begins profiling with the default selections, where Launch ANTS Performance Profiler allows you to change the defaults and attach to an already-running application. Let’s Fire it Up! 
So when you fire up ANTS either via Start Menu or Launch ANTS Performance Profiler menu in Visual Studio, you are presented with a very simple dialog to get you started: Notice you can choose from many different options for application type. You can profile executables, services, web applications, or just attach to a running process. In fact, in version 7.4 we see two new options added: ASP.NET Web Application (IIS Express) SharePoint web application (IIS) So this gives us an additional way to profile ASP.NET applications and the ability to profile SharePoint applications as well. You can also choose your level of detail in the Profiling Mode drop down. If you choose Line-Level and method-level timings detail, you will get a lot more detail on the method durations, but this will also slow down profiling somewhat. If you really need the profiler to be as unintrusive as possible, you can change it to Sample method-level timings. This is performing very light profiling, where basically the profiler collects timings of a method by examining the call-stack at given intervals. Which method you choose depends a lot on how much detail you need to find the issue and how sensitive your program issues are to timing. So for our example, let’s just go with the line and method timing detail. So, we check that all the options are correct (if you launch from VS2010, the executable and path are filled in already), and fire it up by clicking the [Start Profiling] button. Profiling the Application Once you start profiling the application, you will see a real-time graph of CPU usage that will indicate how much your application is using the CPU(s) on your system. During this time, you can select segments of the graph and bookmark them, giving them mnemonic names. This can be useful if you want to compare performance in one part of the run to another part of the run. Notice that once you select a block, it will give you the call tree breakdown for that selection only, and the relative performance of those calls. Once you feel you have collected enough information, you can click [Stop Profiling] to stop the application run and information collection and begin a more thorough analysis. Analyzing Method Timings So now that we’ve halted the run, we can look around the GUI and see what we can see. By default, the times are shown in terms of percentage of time of the total run of the application, though you can change it in the View menu item to milliseconds, ticks, or seconds as well. This won’t affect the percentages of methods, it only affects what units the times are shown. Notice also that the major hotspot seems to be in a method without source, ANTS Profiler will filter these out by default, but you can right-click on the line and remove the filter to see more detail. This proves especially handy when a bottleneck is due to a method in the BCL. So now that we’ve removed the filter, we see a bit more detail: In addition, ANTS Performance Profiler gives you the ability to decompile the methods without source so that you can dive even deeper, though typically this isn’t necessary for our purposes. When looking at timings, there are generally two types of timings for each method call: Time: This is the time spent ONLY in this method, not including calls this method makes to other methods. Time With Children: This is the total of time spent in both this method AND including calls this method makes to other methods. 
In other words, the Time tells you how much work is being done exclusively in this method, and the Time With Children tells you how much work is being done inclusively in this method and everything it calls. You can also choose to display the methods in a tree or in a grid. The tree view is the default and it shows the method calls arranged in terms of the tree representing all method calls and the parent method that called them, etc. This is useful for when you find a hot-spot method, you can see who is calling it to determine if the problem is the method itself, or if it is being called too many times. The grid method represents each method only once with its totals and is useful for quickly seeing what method is the trouble spot. In addition, you can choose to display Methods with source which are generally the methods you wrote (as opposed to native or BCL code), or Any Method which shows not only your methods, but also native calls, JIT overhead, synchronization waits, etc. So these are just two ways of viewing the same data, and you’re free to choose the organization that best suits what information you are after. Analyzing Method Source If we look at the timings above, we see that our AddToList() method (and in particular, it’s call to the List<T>.Sort() method in the BCL) is the hot-spot in this analysis. If ANTS sees a method that is consuming the most time, it will flag it as a hot-spot to help call out potential areas of concern. This doesn’t mean the other statistics aren’t meaningful, but that the hot-spot is most likely going to be your biggest bang-for-the-buck to concentrate on. So let’s select the AddToList() method, and see what it shows in the source window below: Notice the source breakout in the bottom pane when you select a method (from either tree or grid view). This shows you the timings in this method per line of code. This gives you a major indicator of where the trouble-spot in this method is. So in this case, we see that performing a Sort() on the List<T> after every Add() is killing our performance! Of course, this was a very contrived, duh moment, but you’d be surprised how many performance issues become duh moments. Note that this one line is taking up 86% of the execution time of this application! If we eliminate this bottleneck, we should see drastic improvement in the performance. So to fix this, if we still wanted to maintain the List<T> we’d have many options, including: delay Sort() until after all Add() methods, using a SortedSet, SortedList, or SortedDictionary depending on which is most appropriate, or forgoing the sorting all together and using a Dictionary. Rinse, Repeat! So let’s just change all instances of List<string> to SortedSet<string> and run this again through the profiler: Now we see the AddToList() method is no longer our hot-spot, but now the Max() and Min() calls are! This is good because we’ve eliminated one hot-spot and now we can try to correct this one as well. As before, we can then optimize this part of the code (possibly by taking advantage of the fact the list is now sorted and returning the first and last elements). We can then rinse and repeat this process until we have eliminated as many bottlenecks as possible. Calls by Web Request Another feature that was added recently is the ability to view .NET methods grouped by the HTTP requests that caused them to run. This can be helpful in determining which pages, web services, etc. are causing hot spots in your web applications. 
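Stepping back from the tool for a moment, the sort-on-every-insert lesson from the example above generalizes beyond .NET. As a rough Java analogue (this is not the article's C# sample): re-sorting a list after every insert repeats the full O(n log n) work each time, while a sorted container pays only O(log n) per insert:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.TreeSet;

    public class SortOnInsertSketch {
        public static void main(String[] args) {
            // Anti-pattern: sort the whole list again after every add.
            List<Integer> list = new ArrayList<>();
            for (int i = 10_000; i > 0; i--) {
                list.add(i);
                Collections.sort(list);   // O(n log n) on every insert
            }

            // Fix: let a sorted container keep order incrementally.
            TreeSet<Integer> set = new TreeSet<>();
            for (int i = 10_000; i > 0; i--) {
                set.add(i);               // O(log n) per insert
            }

            System.out.println(list.get(0) + " " + set.first());
        }
    }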
Summary If you like the other ANTS tools, you’ll like the ANTS Performance Profiler as well. It is extremely easy to use with very little product knowledge required to get up and running. There are profilers built into the higher product lines of Visual Studio, of course, which are also powerful and easy to use. But for quickly jumping in and finding hot spots rapidly, Red Gate’s Performance Profiler 7.4 is an excellent choice. Technorati Tags: Influencers,ANTS,Performance Profiler,Profiler

    Read the article

  • PASS Summit Feedback

    - by Rob Farley
    PASS Feedback came in last week. I also saw my dentist for some fillings... At the PASS Summit this year, I delivered a couple of regular sessions and a Lightning Talk. People told me they enjoyed it, but when the rankings came out, they showed that I didn’t score particularly well. Brent Ozar was keen to discuss it with me. Brent: PASS speaker feedback is out. You did two sessions and a Lightning Talk. How did you go? Rob: Not so well actually, thanks for asking. Brent: Ha! Sorry. Of course you know that's why I wanted to discuss this with you. I was in one of your sessions at SQLBits in the UK a month before PASS, and I thought you rocked. You've got a really good and distinctive delivery style.  Then I noticed your talks were ranked in the bottom quarter of the Summit ratings and wanted to discuss it. Rob: Yeah, I know. You did ask me if we could do this...  I should explain – my presentation style is not the stereotypical IT conference one. I throw in jokes, and try to engage the audience thoroughly. I find many talks amazingly dry, and I guess I try to buck that trend. I also run training courses, and find that I get a lot of feedback from people thanking me for keeping things interesting. That said, I also get feedback criticising me for my style, and that’s basically what’s happened here. For the rest of this discussion, let’s focus on my talk about the Incredible Shrinking Execution Plan, which I considered to be my main talk. Brent: I thought that session title was the very best one at the entire Summit, and I had it on my recommended sessions list.  In four words, you managed to sum up the topic and your sense of humor.  I read that and immediately thought, "People need to be in this session," and then it didn't score well.  Tell me about your scores. Rob: The questions on the feedback form covered the usefulness of the information, the speaker’s presentation skills, their knowledge of the subject, how well the session was described, the amount of time allocated, and the quality of the presentation materials. Brent: Presentation materials? But you don’t do slides.  Did they rate your thong? Rob: No-one saw my flip-flops in this talk, Brent. I created a script in Management Studio, and published that afterwards, but I think people will have scored that question based on the lack of slides. I wasn’t expecting to do particularly well on that one. That was the only section that didn’t have 5/5 as the most popular score. Brent: See, that sucks, because cookbook-style scripts are often some of my favorites.  Adam Machanic's Service Broker workbench series helped me immensely when I was prepping for the MCM.  As an attendee, I'd rather have a commented script than a slide deck.  So how did you rank so low? Rob: When I look at the scores that you got (based on your blog post), you got very few scores below 3 – people that felt strong enough about your talk to post a negative score. In my scores, between 5% and 10% were below 3 (except on the question about whether I knew my stuff – I guess I came as knowledgeable). Brent: Wow – so quite a few people really didn’t like your talk then? Rob: Yeah. Mind you, based on the comments, some people really loved it. I’d like to think that there would be a certain portion of the room who may have rated the talk as one of the best of the conference. Some of my comments included “amazing!”, “Best presentation so far!”, “Wow, best session yet”, “fantastic” and “Outstanding!”. 
I think lots of talks can be “Great”, but not so many talks can be “Outstanding” without the word losing its meaning. One wrote “Pretty amazing presentation, considering it was completely extemporaneous.” Brent: Extemporaneous, eh? Rob: Yeah. I guess they don’t realise how much preparation goes into coming across as unprepared. In many ways it’s much easier to give a written speech than to deliver a presentation without slides as a prompt. Brent: That delivery style, the really relaxed, casual, college-professor approach was one of the things I really liked about your presentation at SQLbits.  As somebody who presents a lot, I "get" it - I know how hard it is to come off as relaxed and comfortable with your own material.  It's like improv done by jazz players and comedians - if you've never tried it, you don't realize how hard it is.  People also don't realize how hard it is to make a tough subject fun. Rob: Yeah well... There will be people writing comments on this post that say I wasn't trying to make the subject fun, and that I was making it all about me. Sometimes the style works, sometimes it doesn't. Most of the comments mentioned the fact that I tell jokes, some in a nice way, but some not so much (and it wasn't just a PASS thing - that's the mix of feedback I generally get). One comment at PASS was: “great stand up comedian - not what I'm looking for at pass”, and there were certainly a few that said “too many jokes”. I’m not trying to do stand-up – jokes are my way of engaging with the audience while I demonstrate some of the amazing things that the Query Optimizer can do if you write your queries the right way. Some people didn’t think it was technical enough, but I’ve also had some people tell me that the concepts I’m explaining are deep and profound. Brent: To me, that's a hallmark of a great explanation - when someone says, "But of course it has to work that way - how could it work any other way?  It seems so simple and logical."  Well, sure it does when it's explained correctly, but now pick up any number of thick SQL Server books and try to understand the Redundant Joins concept.  I guarantee it'll take more than 45 minutes. Rob: Some people in my audiences realise that, but definitely not everyone. There's only so much you can tell someone that something is profound. Generally it's something that they either have an epiphany on or not. I like to lull my audience into knowing what's going on, and do something that surprises them. Gain their trust, build a rapport, and then show them the deeper truth of what just happened. Brent: So you've learned your lesson about presentation scores, right?  From here on out, you're going to be dry, humorless, and all your presentations will consist of you reading bullet points off the screen. Rob: No Brent, I’m not. I'm also not going to suggest that most presentations at PASS are like that. No-one tries to present like that. There's a big space to occupy between what "dry and humourless" and me. My difference is to focus on the relationship I have with the crowd, rather than focussing on delivering the perfect session. I want to see people smiling and know they're relaxed. I think most presenters focus on the material, which is completely reasonable and safe. I remember once hearing someone talking about product creation. They talked about mediocrity. They said that one of the worst things that people can ever say about your product is that it’s “good”. What you want is for 10% of the world to love it enough to want to buy it. 
If 10% the world gave me a dollar, I’d have more money than I could ever use (assuming it wasn’t the SAME dollar they were giving me I guess). Brent: It's the Raving Fans theory.  It's better to have a small number of raving customers than a large number of almost-but-not-really customers who don't care that much about your product or service.  I know exactly how you feel - when I got survey feedback from my Quest video presentation when I was dressed up in a Richard Simmons costume, some of the attendees said I was unprofessional and distracting.  Some of the attendees couldn't get enough and Photoshopped all kinds of stuff into the screen captures.  On a whole, I probably didn't score that well, and I'm fine with that.  It sucks to look at the scores though - do those lower scores bother you? Rob: Of course they do. It hurts deeply. I open myself up and give presentations in a very personal way. All presenters do that, and we all feel the pain of negative feedback. I hate coming 146th & 162nd out of 185, but have to acknowledge that many sessions did worse still. Plus, once I feel the wounds have healed, I’ll be able to remember that there are people in the world that rave about my presentation style, and figure that people will hopefully talk about me. One day maybe those people that don’t like my presentation style will stay away and I might be able to score better. You don’t pay to hear country music if you prefer western... Lots of people find chili too spicy, but it’s still a popular food. Brent: But don’t you want to appeal to everyone? Rob: I do, but I don’t want to be lukewarm as in Revelation 3:16. I’d rather disgust and be discussed. Well, maybe not ‘disgust’, but I don’t want to conform. Conformity just isn’t the same any more. I’m not sure I’ve ever been one to do that. I try not to offend, but definitely like to be different. Brent: Count me among your raving fans, sir.  Where can we see you next? Rob: Considering I live in Adelaide in Australia, I’m not about to appear at anyone’s local SQL Saturday. I’m still trying to plan which events I’ll get to in 2011. I’ve submitted abstracts for TechEd North America, but won’t hold my breath. I’m also considering the SQLBits conferences in the UK in April, PASS in October, and I’m sure I’ll do some LiveMeeting presentations for user groups. Online, people download some of my recent SQLBits presentations at http://bit.ly/RFSarg and http://bit.ly/Simplification though. And they can download a 5-minute MP3 of my Lightning Talk at http://www.lobsterpot.com.au/files/Collation.mp3, in which I try to explain the idea behind collation, using thongs as an example. Brent: I was in the audience for http://bit.ly/RFSarg. That was a great presentation. Rob: Thanks, Brent. Now where’s my dollar?

    Read the article

  • Does HTML 5 &ldquo;Rich vs. Reach&rdquo; a False Choice?

    - by andrewbrust
    The competition between the Web and proprietary rich platforms, including Windows, Mac OS, iPhone/iPad, Adobe’s Flash/AIR and Microsoft’s Silverlight, is not new. But with the emergence of HTML 5 and imminent support for it in the next release of the major Web browsers, the battle is heating up. And with the announcements made Wednesday at Google's I/O conference, it's getting kicked up yet another notch. The impact of this platform battle on companies in the media and advertising world, and the developers who serve them, is significant. The most prominent question is whether video and rich media online will shift towards pure HTML and away from plug-ins like Flash and Silverlight. In fact, certain features in HTML 5 make it suitable for development for line of business applications as well, further threatening those plug-in technologies. So what's the deal? Is this real or hype? To answer that question, I've done my own research into HTML 5's features and talked to several media-focused, New York area developers to get their opinions. I present my findings to you in this post. Before bearing down into HTML 5 specifics and practitioners’ quotes, let's set the context. To understand what HTML 5 can do, take a look at this video of Sports Illustrated’s HTML 5 prototype. This should start to get you bought into the idea that HTML 5 could be a game-changer. Next, if you happen to have installed the beta version of Google's Chrome 5 browser, take a look at the page linked to below, and in that page, click on any of the game thumbnails to see what's possible, without a plug-in, in this brave new world. (Note, although the instructions for each game tell you to press the A key to start, press the Z key instead.). Here's the link: http://www.kesiev.com/akihabara As an adjunct to what's enabled by HTML 5, consider the various transforms that are part of CSS 3. If you're running Safari as your browser, the following link will showcase this live; if not, you'll see a bitmap that will give you an idea of what's possible: http://webkit.org/blog/386/3d-transforms Are you starting to get the picture (literally)? What has up until now required browser plug-ins and other patches to HTML, most typically Flash, will soon be renderable, natively, in all major browsers. Moreover, it's looking likely that developers will be able to deliver such content and experiences in these browsers using one base of markup and script code (using straight JavaScript and/or jQuery), without resorting to browser-specific code and workarounds. If you're skeptical of this, I wouldn't blame you, especially with respect to Microsoft's Internet Explorer. However, i can tell you with confidence that even Microsoft is dedicated to full-on HTML 5 support in version 9 of that browser, which is currently under development. So what’s new in HTML 5, specifically, that makes sites like this possible?  The specification documents go into deep detail, and there’s no sense in rehashing them here, but a summary is probably in order.   Here is a non-authoritative, but useful, list of the major new feature areas in HTML 5: 2D drawing capabilities and 3D transforms. 2D drawing instructions can be embedded statically into a Web page; application interactivity and animation can be achieved through script.  As mentioned above, 3D transforms are technically part of version 3 of the CSS (Cascading Style Sheets) spec, rather than HTML 5, but they can nonetheless be thought of as part of the bundle.  
They allow for rendering of 3D images and animations that, together with 2D drawing, make HTML-based games much more feasible than they are presently, as the links above demonstrate. Embedded audio and video. A media player can appear directly in a rendered Web page, using HTML markup and no plug-ins. Alternately, player controls can be hidden and the content can play automatically. Major enhancements to form-based input. This includes such things as specification of required fields, embedding of text “hints” into a control, limiting valid input on a field to dates, email addresses or a list of values.  There’s more to this, but the gist is that line-of-business applications, with complicated input and data validation, are supported directly Offline caching, local storage and client-side SQL database. These facilities allow Web applications to function more like native apps, even if no internet connection is available. User-defined data. Data (or metadata – data about data) can easily be embedded statically and/or retrieved and updated with Javascript code. This avoids having to embed that data in a separate file, or within script code. Taken together, these features position HTML to compete with, and perhaps overtake, Adobe’s Flash/AIR (and Microsoft’s Silverlight) as a viable Web platform for media, RIAs (rich internet applications – apps that function more like desktop software than Web sites) and interactive Web content, including games. What do players in the media world think about this?  From the embedded video above, we know what Sports Illustrated (and, therefore, Time Warner) think.  Hulu, the major Internet site for broadcast TV content, is on record as saying HTML5 video does not pass muster with them, at least not yet.  YouTube, on the other hand, already has an experimental HTML 5-based version of their site.  TechCrunch has reported that NetFlix is flirting with HTML 5 too, especially as it pertains to embedded browsers in TV-based devices.  And the New York Times’ Web site now embeds some video clips without resorting to Flash.  They have to – otherwise iPhone, iPod Touch and iPad users couldn’t see them in the Mobile Safari browser. What do media-focused developers think about all this?  I talked to several to get their opinions. Michael Pinto is CEO and Founder of Very Memorable Design whose primary focus has been to help marketing directors get traction online.  The firm’s client roster includes the likes Time, Inc., Scholastic and PBS.  Pinto predicts that “More and more microsites that were done entirely in Flash will be done more and more using jQuery. I can also see slideshows and video now being done without Flash. However if you needed to create a game or highly interactive activity Flash would still be the way to go for the web.” A dissenting view comes from Jesse Erlbaum, CEO of The Erlbaum Group, LLC, which serves numerous clients in the magazine publishing sector.  When I asked Erlbaum whether he thought HTML 5 and jQuery/JavaScript would steal significant market share from Flash, he responded “Not at all!  In particular, not for media and advertising customers!  These sectors are not generally in the business of making highly functional applications, which is the one place where HTML5/jQuery/etc really shines.” Ironically, Pinto’s firm is a heavy user of Flash for its projects and Erlbaum’s develops atop the “LAMP” (Linux, Apache, MySQL and PHP/Perl) stack.  For whatever reason, each firm seems to see the other’s toolset as a more viable choice.  
But both agree that the developer tool story around HTML 5 is deficient. Pinto explains: “What's lost with [HTML 5 and Javascript] techniques is that there isn't a single widely favored easy-to-use tool of choice for authoring. So with Flash you can get up and running right away and not worry about what is different from one browser to the next.” Erlbaum agrees, saying: “HTML5/Javascript lacks a sophisticated integrated development environment (IDE) which is an essential part of Flash. If what someone is trying to make is primarily animation, it's a waste of time…to do this in Javascript. It can be done much more easily in Flash, and with greater cross-browser compatibility and consistency due to the ubiquity of Flash.”

Adobe (maker of Flash since its 2005 acquisition of Macromedia) likely agrees. And for better or worse, they've decided to address this shortcoming of HTML 5, even at the risk of diminishing their Flash platform. Yesterday Adobe announced that their hugely popular Dreamweaver Web design authoring tool would directly support HTML 5 and CSS 3 development. In fact, the Adobe Dreamweaver CS5 HTML5 Pack is downloadable now from Adobe Labs. Maybe Adobe is bowing to pressure from ardent Web professionals like Scott Kellum, Lead Designer at Channel V Media, a digital and offline branding firm serving the media and marketing sectors, among others. Kellum told me that HTML 5 “…will definitely move people away from Flash. It has many of the same functionalities with faster load times and better accessibility. HTML5 will help Flash as well: with the new caching methods you can now even run Flash apps offline.” Although all three Web developers I interviewed would agree that Flash is still required for more sophisticated applications, Kellum seems to have put his finger on why HTML 5 may nonetheless dominate. In his view, much of the Web development out there has little need for high-end capabilities: “Most people want to add a little punch to a navigation bar or some video and now you can get the biggest bang for your buck with HTML5, CSS3 and Javascript.”

I've already mentioned that Google's ongoing I/O conference, at the Moscone West center in San Francisco, is driving the HTML 5 news cycle, big time. And Google made many announcements of their own, including the open sourcing of their VP8 video codec, new enterprise-oriented capabilities for its App Engine cloud offering, and the creation of the Chrome Web Store, which the company says will make it easier to find and “install” Web applications, in a fashion similar to the way users procure native apps on various mobile platforms.

HTML 5 looks to be disruptive, especially to the media world. And even if the technology ends up disappointing, the chatter around it alone is causing big changes in the technology world. If the richness it promises delivers, then magazine publishers and non-text digital advertisers may indeed have a platform for creating compelling content that loads quickly, is standards-based and will render identically in (the newest versions of) all major Web browsers. Can this development in the digital arena save the titans of the print world? I can't predict, but it's going to be fun to watch, and the competitive innovation from all players in both industries will likely be immense.

    Read the article

  • The Red Gate and .NET Reflector Debacle

    - by Rick Strahl
    About a month ago Red Gate – the company who owns the NET Reflector tool most .NET devs use at one point or another – decided to change their business model for Reflector and take the product from free to a fully paid for license model. As a bit of history: .NET Reflector was originally created by Lutz Roeder as a free community tool to inspect .NET assemblies. Using Reflector you can examine the types in an assembly, drill into type signatures and quickly disassemble code to see how a particular method works.  In case you’ve been living under a rock and you’ve never looked at Reflector, here’s what it looks like drilled into an assembly from disk with some disassembled source code showing: Note that you get tons of information about each element in the tree, and almost all related types and members are clickable both in the list and source view so it’s extremely easy to navigate and follow the code flow even in this static assembly only view. For many year’s Lutz kept the the tool up to date and added more features gradually improving an already amazing tool and making it better. Then about two and a half years ago Red Gate bought the tool from Lutz. A lot of ruckus and noise ensued in the community back then about what would happen with the tool and… for the most part very little did. Other than the incessant update notices with prominent Red Gate promo on them life with Reflector went on. The product didn’t die and and it didn’t go commercial or to a charge model. When .NET 4.0 came out it still continued to work mostly because the .NET feature set doesn’t drastically change how types behave.  Then a month back Red Gate started making noise about a new Version Version 7 which would be commercial. No more free version - and a shit storm broke out in the community. Now normally I’m not one to be critical of companies trying to make money from a product, much less for a product that’s as incredibly useful as Reflector. There isn’t day in .NET development that goes by for me where I don’t fire up Reflector. Whether it’s for examining the innards of the .NET Framework, checking out third party code, or verifying some of my own code and resources. Even more so recently I’ve been doing a lot of Interop work with a non-.NET application that needs to access .NET components and Reflector has been immensely valuable to me (and my clients) if figuring out exact type signatures required to calling .NET components in assemblies. In short Reflector is an invaluable tool to me. Ok, so what’s the problem? Why all the fuss? Certainly the $39 Red Gate is trying to charge isn’t going to kill any developer. If there’s any tool in .NET that’s worth $39 it’s Reflector, right? Right, but that’s not the problem here. The problem is how Red Gate went about moving the product to commercial which borders on the downright bizarre. It’s almost as if somebody in management wrote a slogan: “How can we piss off the .NET community in the most painful way we can?” And that it seems Red Gate has a utterly succeeded. People are rabid, and for once I think that this outrage isn’t exactly misplaced. Take a look at the message thread that Red Gate dedicated from a link off the download page. Not only is Version 7 going to be a paid commercial tool, but the older versions of Reflector won’t be available any longer. Not only that but older versions that are already in use also will continually try to update themselves to the new paid version – which when installed will then expire unless registered properly. 
There have also been reports of Version 6 installs shutting themselves down and failing to work if the update is refused (I haven’t seen that myself so not sure if that’s true). In other words Red Gate is trying to make damn sure they’re getting your money if you attempt to use Reflector. There’s a lot of temptation there. Think about the millions of .NET developers out there and all of them possibly upgrading – that’s a nice chunk of change that Red Gate’s sitting on. Even with all the community backlash these guys are probably making some bank right now just because people need to get life to move on. Red Gate also put up a Feedback link on the download page – which not surprisingly is chock full with hate mail condemning the move. Oddly there’s not a single response to any of those messages by the Red Gate folks except when it concerns license questions for the full version. It puzzles me what that link serves for other yet than another complete example of failure to understand how to handle customer relations. There’s no doubt that that all of this has caused some serious outrage in the community. The sad part though is that this could have been handled so much less arrogantly and without pissing off the entire community and causing so much ill-will. People are pissed off and I have no doubt that this negative publicity will show up in the sales numbers for their other products. I certainly hope so. Stupidity ought to be painful! Why do Companies do boneheaded stuff like this? Red Gate’s original decision to buy Reflector was hotly debated but at that the time most of what would happen was mostly speculation. But I thought it was a smart move for any company that is in need of spreading its marketing message and corporate image as a vendor in the .NET space. Where else do you get to flash your corporate logo to hordes of .NET developers on a regular basis?  Exploiting that marketing with some goodwill of providing a free tool breeds positive feedback that hopefully has a good effect on the company’s visibility and the products it sells. Instead Red Gate seems to have taken exactly the opposite tack of corporate bullying to try to make a quick buck – and in the process ruined any community goodwill that might have come from providing a service community for free while still getting valuable marketing. What’s so puzzling about this boneheaded escapade is that the company doesn’t need to resort to underhanded tactics like what they are trying with Reflector 7. The tools the company makes are very good. I personally use SQL Compare, Sql Data Compare and ANTS Profiler on a regular basis and all of these tools are essential in my toolbox. They certainly work much better than the tools that are in the box with Visual Studio. Chances are that if Reflector 7 added useful features I would have been more than happy to shell out my $39 to upgrade when the time is right. It’s Expensive to give away stuff for Free At the same time, this episode shows some of the big problems that come with ‘free’ tools. A lot of organizations are realizing that giving stuff away for free is actually quite expensive and the pay back is often very intangible if any at all. Those that rely on donations or other voluntary compensation find that they amount contributed is absolutely miniscule as to not matter at all. 
    Yet at the same time I bet most of those clamoring the loudest on that Red Gate Reflector feedback page about Reflector no longer being free have probably NEVER made a donation to any open source project or free tool, ever. The expectation of free these days is just too great – which is a shame, I think. There's a lot to be said for paid software and having somebody to hold responsible because you gave them some money. There's an incentive –> payback –> responsibility model that seems to be missing from free software (not all of it, but a lot of it). While there certainly are plenty of bad apples in paid software as well, money tends to be a good motivator for people to continue working on and improving products. Reasons for giving stuff away are many, but often it's a naïve desire to share things while they're still simple. At first it might be no problem to volunteer time and effort, but as products mature the fun goes out of it, and as the reality of product maintenance kicks in, developers want to get something back for the time and effort they're putting into non-glamorous work. That's when products die or languish, and this is painful for all to watch. For Red Gate, however, I think there was always a pretty good payback from the Reflector acquisition in terms of marketing: visibility and possible positioning of their products, although they seem to have mostly ignored that option. On the other hand they started this off pretty badly even two and a half years back when they acquired Reflector from Lutz, with the same arrogant attitude that is evident in the latest episode. You really gotta wonder what the folks in management are thinking – the sad part is that, judging from advance emails that were circulating, they were fully aware of the shit storm they were inciting with this, and I suspect they are banking on the sheer numbers of .NET developers to still make them a tidy chunk of change from upgrades…

    Alternatives are coming
    For me personally the single license isn't a problem, but I actually have a tool that I sell (an interop Web Service proxy generation tool) to customers, and one of the things I recommend using with it has been Reflector, to view assembly information and to find which interop classes to instantiate from the non-.NET environment. It's been nice to use Reflector for this with its small footprint and zero-configuration installation. But now with V7 becoming a paid tool that option is not going to be available anymore. Luckily it looks like the .NET community is jumping to it and trying to fill the void. Amidst the Red Gate outrage a new tool called ILSpy has sprung up, providing at least some of the core functionality of Reflector as an open source project. It looks promising going forward, and I suspect there will be a lot more support for and interest in this project now that Reflector has gone over to the 'dark side'…

    © Rick Strahl, West Wind Technologies, 2005-2011
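    As a side note (not from the original post): for the interop scenario described above – figuring out the exact type signatures an assembly exposes – a few lines of System.Reflection can produce a rough, text-only approximation of what Reflector's tree view shows. This is only a minimal sketch; the command-line argument and the choice of BindingFlags are illustrative assumptions.

        using System;
        using System.Reflection;

        class DumpSignatures
        {
            static void Main(string[] args)
            {
                // Assumes the assembly path is passed on the command line (illustrative only).
                var assembly = Assembly.LoadFrom(args[0]);

                foreach (var type in assembly.GetExportedTypes())
                {
                    Console.WriteLine(type.FullName);

                    // DeclaredOnly keeps the listing to members defined on the type itself.
                    foreach (var method in type.GetMethods(
                        BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
                    {
                        Console.WriteLine("    " + method);   // ToString() includes the full signature
                    }
                }
            }
        }

    Of course this is nowhere near a replacement for Reflector's decompilation; it only lists metadata, which is sometimes all an interop caller needs.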

    Read the article

  • 8 Reasons Why Even Microsoft Agrees the Windows Desktop is a Nightmare

    - by Chris Hoffman
    Let's be honest: The Windows desktop is a mess. Sure, it's extremely powerful and has a huge software library, but it's not a good experience for average people. It's not even a good experience for geeks, although we tolerate it. Even Microsoft agrees about this. Microsoft's Surface tablets with Windows RT don't support any third-party desktop apps. They consider this a feature — users can't install malware and other desktop junk, so the system will always be speedy and secure.

    Malware is Still Common
    Malware may not affect geeks, but it certainly continues to affect average people. Securing Windows, keeping it secure, and avoiding unsafe programs is a complex process. There are over 50 different file extensions to keep track of that can contain harmful code. It's easy to have theoretical discussions about how malware could infect Mac computers, Android devices, and other systems. But Mac malware is extremely rare, and has generally been caused by problems with the terrible Java plug-in. Macs are configured to only run executables from identified developers by default, whereas Windows will run everything. Android malware is talked about a lot, but it is rare in the real world and is generally confined to users who disable security protections and install pirated apps. Google has also taken action, rolling out built-in antivirus-like app checking to all Android devices, even old ones running Android 2.3, via Play Services. Whatever the reason, Windows malware is still common while malware for other systems isn't. We all know it — anyone who does tech support for average users has dealt with infected Windows computers. Even users who can avoid malware are stuck dealing with complex and nagging antivirus programs, especially since it's now so difficult to trust Microsoft's antivirus products.

    Manufacturer-Installed Bloatware is Terrible
    Sit down with a new Mac, Chromebook, iPad, Android tablet, Linux laptop, or even a Surface running Windows RT and you can enjoy using your new device. The system is a clean slate for you to start exploring and installing your new software. Sit down with a new Windows PC and the system is a mess. Rather than being delighted, you're stuck reinstalling Windows and then installing the necessary drivers, or you're forced to start uninstalling useless bloatware programs one by one, trying to figure out which ones are actually useful. After uninstalling the useless programs, you may end up with a system tray full of icons for ten different hardware utilities anyway. The first experience of using a new Windows PC is frustration, not delight. Yes, bloatware is still a problem on Windows 8 PCs. Manufacturers can customize the Refresh image, preventing bloatware from easily being removed.

    Finding a Desktop Program is Dangerous
    Want to install a Windows desktop program? Well, you'll have to head to your web browser and start searching. It's up to you, the user, to know which programs are safe and which are dangerous. Even if you find a website for a reputable program, the advertisements on that page will often try to trick you into downloading fake installers full of adware. While it's great to have the ability to leave the app store and get software that the platform's owner hasn't approved — as on Android — this is no excuse for not providing a good, secure software installation experience for typical users installing typical programs.
    Even Reputable Desktop Programs Try to Install Junk
    Even if you do find an entirely reputable program, you'll have to keep your eyes open while installing it. It will likely try to install adware, add browser toolbars, change your default search engine, or change your web browser's home page. Even Microsoft's own programs do this — when you install Skype for Windows desktop, it will attempt to modify your browser settings to use Bing, even if you've specifically chosen another search engine and home page. With Microsoft setting such an example, it's no surprise so many other software developers have followed suit. Geeks know how to avoid this stuff, but there's a reason program installers continue to do it: it works and tricks many users, who end up with junk installed and settings changed.

    The Update Process is Confusing
    On iOS, Android, and Windows RT, software updates come from a single place — the app store. On Linux, software updates come from the package manager. On Mac OS X, typical users' software updates likely come from the Mac App Store. On the Windows desktop, software updates come from… well, every program has to create its own update mechanism. Users have to keep track of all these updaters and make sure their software is up to date. Most programs now have their act together and automatically update by default, but users who have old versions of Flash and Adobe Reader installed are vulnerable until they realize their software isn't automatically updating. Even if every program updates properly, the sheer mess of updaters is clunky, slow, and confusing in comparison to a centralized update process.

    Browser Plugins Open Security Holes
    It's no surprise that other modern platforms like iOS, Android, Chrome OS, Windows RT, and Windows Phone don't allow traditional browser plugins, or only allow Flash and build it into the system. Browser plugins provide a wealth of different ways for malicious web pages to exploit the browser and open the system to attack. Browser plugins are one of the most popular attack vectors because of how many users have out-of-date plugins and how many plugins, especially Java, seem to be designed without taking security seriously. Oracle's Java plugin even tries to install the terrible Ask toolbar when installing security updates. That's right — the security update process is also used to cram additional adware onto users' machines so unscrupulous companies like Oracle can make a quick buck. It's no wonder that most Windows PCs have an out-of-date, vulnerable version of Java installed.

    Battery Life is Terrible
    Windows PCs have bad battery life compared to Macs, iOS devices, and Android tablets, all of which Windows now competes with. Even Microsoft's own Surface Pro 2 has bad battery life. Apple's 11-inch MacBook Air, which has very similar hardware to the Surface Pro 2, offers double its battery life when web browsing. Microsoft has been fond of blaming third-party hardware manufacturers for their poorly optimized drivers in the past, but there's no longer any room to hide. The problem is clearly Windows. Why is this? No one really knows for sure. Perhaps Microsoft has kept piling Windows component on top of Windows component, and many older Windows components were never properly optimized.

    Windows Users Become Stuck on Old Windows Versions
    Apple's new OS X 10.9 Mavericks upgrade is completely free to all Mac users and supports Macs going back to 2007. Apple has also announced their intention that all new releases of Mac OS X will be free.
    In 2007, Microsoft had just shipped Windows Vista. Macs from the Windows Vista era are being upgraded to the latest version of the Mac operating system for free, while Windows PCs from the same era are probably still using Windows Vista. There's no easy upgrade path for these people. They're stuck using Windows Vista and maybe even the outdated Internet Explorer 9 if they haven't installed a third-party web browser. Microsoft's upgrade path is for these people to pay $120 for a full copy of Windows 8.1 and go through a complicated process that's actually a clean install. Even users of Windows 8 devices will probably have to pay money to upgrade to Windows 9, while updates for other operating systems are completely free.

    If you're a PC geek, a PC gamer, or someone who just requires specialized software that only runs on Windows, you probably use the Windows desktop and don't want to switch. That's fine, but it doesn't mean the Windows desktop is actually a good experience. Much of the burden falls on average users, who have to struggle with malware, bloatware, adware bundled in installers, complex software installation processes, and out-of-date software. In return, all they get is the ability to use a web browser and some basic Office apps that they could use on almost any other platform without all the hassle. Microsoft would agree with this, touting Windows RT and their new "Windows 8-style" app platform as the solution. Why else would Microsoft, a "devices and services" company, position the Surface — a device without traditional Windows desktop programs — as their mass-market device recommended for average people?

    This isn't necessarily an endorsement of Windows RT. If you're tech support for your family members and it comes time for them to upgrade, you may want to get them off the Windows desktop and tell them to get a Mac or something else that's simple. Better yet, if they get a Mac, you can tell them to visit the Apple Store for help instead of calling you. That's another thing Windows PCs don't offer — good manufacturer support.

    Image Credit: Blanca Stella Mejia on Flickr, Collin Anderson on Flickr, Luca Conti on Flickr

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, I had the task of interviewing new candidates as part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers, which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers who ultimately cause code to be reusable or not, regardless of the language or methodology.

    But it did get me thinking… we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have become jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand for all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It's not that I don't like reuse – it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet it can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse! Furthermore, the code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse, which includes standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times when I want to kick off a thread to handle a task, and when I want to rein that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method (a short usage sketch follows at the end of this post):

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.
    Some of them are standard OO principles, and some are kind-of home-grown litmus tests:

    - Single Responsibility Principle (SRP) – The only reason this extension method would need to change is if the Thread class itself changes (one responsibility).
    - Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and is itself very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    - Open-Closed Principle (OCP) – This class is inherently closed to change.
    - Small and Stable Problem Domain – This method only cares about System.Threading.Thread.
    - All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
    - Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go" and already unit tested (important!) and available through a standard release cycle (very important!).

    Okay, all seems good there, so now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }

                public string Name { get; set; }

                public Address HomeAddress { get; set; }

                public int Age { get; set; }

                public DateTime LastUsed { get; set; }

                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...

            }
        }

    Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases for reuse. Let's examine those same tests:

    - Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of an account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    - Stable Dependencies Principle (SDP) – This method depends on the data model beneath it, which in turn is largely dependent on the business definition of an account, which can be inherently very unstable.
    - Open-Closed Principle (OCP) – This class is not really closed for modification. Every time the account definition changes, you'd need to modify this class.
    - Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    - All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for? Neither is really a great solution.
    - Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate it as needed.

    It's no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to NHibernate and Entity Framework for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc).

    As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't "smell" right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: "Reuse is hard..." And that's when I realized that while reuse is an awesome goal and we should strive to make code maintainable, oftentimes you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.

    Think about the times you've worked at a company where, in a design session, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzword-able. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, "yes, this is the best design, I can extend it easily!" only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult, many times ends up being a futile exercise and, worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?

    - Generic Utility classes – these tend to be small classes that assist in a task and have no business context whatsoever.
    - Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc).
    - Simplification and Uniformity Frameworks – to some extent this is similar to an abstraction framework, but there may be one chosen provider and a development-shop mandate to perform certain complex tasks in a certain way. Or, perhaps, to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop's method of encryption).

    And what are less reusable?

    - Application and Business Layers – tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency. May be reused across applications but are also very volatile.
    - Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what's the big lesson? Reuse is hard. In fact it's damn hard. And much of the time I'm not convinced we should focus too hard on it. If you're designing a utility or framework, then by all means design it for reuse. But you must also really set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.
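    As promised above, here is a minimal usage sketch for the JoinOrAbort extension method. This is not from the original article; the worker method and the two-second timeout are illustrative assumptions, and it presumes the ThreadExtensions class shown earlier is in scope.

        using System;
        using System.Threading;

        class Program
        {
            // Hypothetical long-running worker, used only to exercise JoinOrAbort.
            static void DoWork()
            {
                Thread.Sleep(TimeSpan.FromSeconds(10));
            }

            static void Main()
            {
                var worker = new Thread(DoWork);
                worker.Start();

                // Wait up to two seconds for the thread to finish; abort it otherwise.
                bool finished = worker.JoinOrAbort(TimeSpan.FromSeconds(2));
                Console.WriteLine(finished ? "Thread joined cleanly." : "Thread was aborted.");
            }
        }

    (Thread.Abort works on the .NET Framework of the article's era; on newer runtimes it throws PlatformNotSupportedException, so treat this strictly as an illustration of the extension-method pattern.)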

    Read the article

  • When should I use Perl's AUTOLOAD?

    - by Robert S. Barnes
    In "Perl Best Practices" the very first line in the section on AUTOLOAD is: Don't use AUTOLOAD. However, all the cases he describes deal with OO or modules. I have a standalone script in which some command line switches control which versions of particular functions get defined. Now I know I could just take the conditionals and the evals and stick them naked at the top of my file before everything else, but I find it convenient and cleaner to put them in AUTOLOAD at the end of the file. Is this bad practice / style? If you think so, why, and is there another way to do it? As per brian's request, I'm basically using this to do conditional compilation based on command line switches. I don't mind some constructive criticism.

        sub AUTOLOAD {
            our $AUTOLOAD;
            (my $method = $AUTOLOAD) =~ s/.*:://s;    # remove package name

            if ($method eq 'tcpdump' && $tcpdump) {
                eval q(
                    sub tcpdump {
                        my $msg = shift;
                        warn gf_time()." Thread ".threads->tid().": $msg\n";
                    }
                );
            } elsif ($method eq 'loginfo' && $debug) {
                eval q(
                    sub loginfo {
                        my $msg = shift;
                        $msg =~ s/$CRLF/\n/g;
                        print gf_time()." Thread ".threads->tid().": $msg\n";
                    }
                );
            } elsif ($method eq 'build_get') {
                if ($pipelining) {
                    eval q(
                        sub build_get {
                            my $url  = shift;
                            my $base = shift;
                            $url = "http://".$url unless $url =~ /^http/;
                            return "GET $url HTTP/1.1${CRLF}Host: $base$CRLF$CRLF";
                        }
                    );
                } else {
                    eval q(
                        sub build_get {
                            my $url  = shift;
                            my $base = shift;
                            $url = "http://".$url unless $url =~ /^http/;
                            return "GET $url HTTP/1.1${CRLF}Host: $base${CRLF}Connection: close$CRLF$CRLF";
                        }
                    );
                }
            } elsif ($method eq 'grow') {
                eval q{ require Convert::Scalar qw(grow); };
                if ($@) {
                    eval q( sub grow {} );
                }
                goto &$method;
            } else {
                eval "sub $method {}";
                return;
            }

            die $@ if $@;
            goto &$method;
        }

    Read the article

  • How can a test script inform R CMD check that it should emit a custom message?

    - by mariotomo
    I'm writing an R package (delftfews) here at the office. We are using svUnit for unit testing. Our process for describing new functionality: we define new unit tests, initially marked as DEACTIVATED; one block of tests at a time, we activate them and implement the function described by the tests. Almost all the time we have a small number of DEACTIVATED tests, relative to functions that might be dropped or will be implemented.

    My problem/question is: can I alter doSvUnit.R so that R CMD check pkg emits a NOTE (i.e. a custom message "NOTE" instead of "OK") in case there are DEACTIVATED tests? As of now, we see only that the active tests don't give errors:

        .
        .
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         OK
        * checking PDF version of manual ... OK

    which is all right if all tests succeed, but less all right if there are skipped tests and definitely wrong if there are failing tests. In this case, I'd actually like to see a NOTE or a WARNING like the following:

        .
        .
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         NOTE 6 test(s) were skipped.
         WARNING 1 test(s) are failing.
        * checking PDF version of manual ... OK

    As of now, we have to open doSvUnit.Rout to check the real test results. I contacted two of the maintainers at r-forge and CRAN and they pointed me to the sources of R, in particular the testing.R script. If I understand it correctly, to answer this question we need to patch the tools package: scripts in the tests directory are called using a system call, output (stdout and stderr) goes to one single file, and there are two possible outcomes: ok or not ok. So I opened a change request on R, proposing something like bit-coding the return status: bit 0 for ERROR (as it is now), bit 1 for WARNING, bit 2 for NOTE. With my modification, it would be easy to produce this output:

        .
        .
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         NOTE - please check doSvUnit.Rout.
         WARNING - please check doSvUnit.Rout.
        * checking PDF version of manual ... OK

    Brian Ripley replied "There are however several packages with properly written unit tests that do signal as required. Please do take this discussion elsewhere: R-bugs is not the place to ask questions." and closed the change request. Does anybody have hints?

    Read the article

  • jQuery nested sortables jumpy behaviour

    - by sebbie
    I want to allow users to drag and drop UI elements. I have 'container' and 'control' elements; a control may be in a container, and containers may include other containers (this is an important requirement). I created a simple prototype using jQuery. HTML:

        <div class="one">
            <div class="control">Control 1</div>
            <div class="control">Control 2</div>
            <div class="control container">
                Container drag area
                <div class="control">Subcontrol 1</div>
                <div class="control">Subcontrol 2</div>
                <div class="control">Subcontrol 3</div>
                <div class="control">Subcontrol 4</div>
                <div class="control">Subcontrol 5</div>
                <div class="control">Subcontrol 6</div>
                <div class="control">Subcontrol 7</div>
                <div class="control">Subcontrol 8</div>
                <div class="control">Subcontrol 9</div>
            </div>
            <div class="control">Control 3</div>
        </div>

    Then I created sortables using jQuery UI:

        $('.one').sortable({
            items: 'div.control',
            placeholder: 'placeholder',
            forcePlaceholderSize: true
        });

    Now when I try to drag "Subcontrol 8" and place it between "Subcontrol 2" and "Subcontrol 3", for example, I get a jumpy effect; you can observe it here: http://jsbin.com/egipu4/2 The interesting thing is: when I remove the ability to drag the container, it becomes smooth and perfect (you can see this on the jsbin example below the "jumpy" example — you can't drag using the "Container drag area" span). I tried different "nested" plugins and techniques and googled for a long time, and the only one that worked was on this page: (StackOverflow doesn't allow me to post more than one link; google for "Brian Swartzfager's Blog: Nested List Sort Demo", it should be the first result, sorry!) But it only works great in jQuery 1.2 and a very old jQuery UI. If I include the latest jQuery (1.3/1.4) and UI (1.7/1.8) it gets jumpy as well. What am I doing wrong?

    Read the article

  • Code golf - hex to (raw) binary conversion

    - by Alnitak
    In response to this question asking about hex to (raw) binary conversion, a comment suggested that it could be solved in "5-10 lines of C, or any other language." I'm sure that for (some) scripting languages that could be achieved, and I would like to see how. Can we prove that comment true for C, too? NB: this doesn't mean hex to ASCII binary – specifically, the output should be a raw octet stream corresponding to the input ASCII hex. Also, the input parser should skip/ignore whitespace.

    edit (by Brian Campbell) May I propose the following rules, for consistency? Feel free to edit or delete these if you don't think they are helpful, but I think that since there has been some discussion of how certain cases should work, some clarification would be helpful.

    - The program must read from stdin and write to stdout (we could also allow reading from and writing to files passed in on the command line, but I can't imagine that would be shorter in any language than stdin and stdout).
    - The program must use only packages included with your base, standard language distribution. In the case of C/C++, this means their respective standard libraries, and not POSIX.
    - The program must compile or run without any special options passed to the compiler or interpreter (so 'gcc myprog.c' or 'python myprog.py' or 'ruby myprog.rb' are OK, while 'ruby -rscanf myprog.rb' is not allowed; requiring/importing modules counts against your character count).
    - The program should read integer bytes represented by pairs of adjacent hexadecimal digits (upper, lower, or mixed case), optionally separated by whitespace, and write the corresponding bytes to output. Each pair of hexadecimal digits is written with the most significant nibble first.
    - The behavior of the program on invalid input (characters besides [0-9a-fA-F \t\r\n], spaces separating the two characters in an individual byte, an odd number of hex digits in the input) is undefined; any behavior (other than actively damaging the user's computer or something) on bad input is acceptable (throwing an error, stopping output, ignoring bad characters, and treating a single character as the value of one byte are all OK).
    - The program may write no additional bytes to output.
    - Code is scored by fewest total bytes in the source file. (Or, if we wanted to be more true to the original challenge, the score would be based on the lowest number of lines of code; I would impose an 80-character limit per line in that case, since otherwise you'd get a bunch of ties for 1 line.)
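    For reference, here is a rough, un-golfed sketch in C# that follows the rules above — read hex pairs from stdin, skip whitespace, and write raw bytes to stdout. It isn't an answer to the C challenge itself, just an illustration of the expected behavior; invalid characters simply cause an exception, which the rules allow.

        using System;

        class HexToBin
        {
            static void Main()
            {
                var input  = Console.OpenStandardInput();
                var output = Console.OpenStandardOutput();
                int highNibble = -1;
                int c;

                while ((c = input.ReadByte()) != -1)
                {
                    if (char.IsWhiteSpace((char)c))
                        continue;                                           // whitespace may separate byte pairs

                    int nibble = Convert.ToInt32(((char)c).ToString(), 16); // one hex digit -> 0..15, throws on bad input

                    if (highNibble < 0)
                    {
                        highNibble = nibble;                                // first digit of the pair (most significant)
                    }
                    else
                    {
                        output.WriteByte((byte)((highNibble << 4) | nibble));
                        highNibble = -1;
                    }
                }
            }
        }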

    Read the article

< Previous Page | 55 56 57 58 59 60 61  | Next Page >