Search Results

Search found 908 results on 37 pages for 'the worst shady'.


  • Geek Fun: Virtualized Old School Windows – Windows 95

    - by Matthew Guay
    Last week we enjoyed looking at Windows 3.1 running in VMware Player on Windows 7.  Today, let’s upgrade our 3.1 to 95, and get a look at how most of us remember Windows from the 90’s. In this demo, we’re running the first release of Windows 95 (version 4.00.950) in VMware Player 3.0 running on Windows 7 x64.  For fun, we ran the 95 upgrade on the 3.1 virtual machine we built last week. Windows 95 So let’s get started.  Here’s the first setup screen.  For the record, Windows 95 installed in about 15 minutes or less in VMware in our test. Strangely, Windows 95 offered several installation choices.  They actually let you choose what extra parts of Windows to install if you wished.  Oh, and who wants to run Windows 95 on your “Portable Computer”?  Most smartphones today are more powerful than the “portable computers” of 95. Your productivity may vastly increase if you run Windows 95.  Anyone want to switch? No, I don’t want to restart … I want to use my computer! Welcome to Windows 95!  Hey, did you know you can launch programs from the Start button? Our quick spin around Windows 95 reminded us why Windows got such a bad reputation in the ‘90’s for being unstable.  We didn’t even get our test copy fully booted after installation before we saw our first error screen.  Windows in space … was that the most popular screensaver in Windows 95, or was it just me? Hello Windows 3.1!  The UI was still outdated in some spots.   Ah, yes, Media Player before it got 101 features to compete with iTunes. But, you couldn’t even play CDs in Media Player.  Actually, CD player was one program I used almost daily in Windows 95 back in the day. Want some new programs?  This help file about new programs designed for Windows 95 lists a lot of outdated names in tech.    And, you really may want some programs.  The first edition of Windows 95 didn’t even ship with Internet Explorer.   We’ve still got Minesweeper, though! My Computer had really limited functionality, and by default opened everything in a new window.  Double click on C:, and it opens in a new window.  Ugh. But Explorer is a bit more like more modern versions. Hey, look, Start menu search!  If only it found the files you were looking for… Now I’m feeling old … this shutdown screen brought back so many memories … of shutdowns that wouldn’t shut down! But, you still have to turn off your computer.  I wonder how many old monitors had these words burned into them? So there’s yet another trip down Windows memory lane.  Most of us can remember using Windows 95, so let us know your favorite (or worst) memory of it!  At least we can all be thankful for our modern computers and operating systems today, right?  

    Read the article

  • Ask How-To Geek: Learning the Office Ribbon, Booting to USB with an Old BIOS, and Snapping Windows

    - by Jason Fitzpatrick
    You’ve got questions and we’ve got answers. Today we highlight how to master the new Office interface, USB boot a computer with outdated BIOS, and snap windows to preset locations. Learning the New Office Ribbon Dear How-To Geek, I feel silly asking this (in light of how long the new Office interface has been out) but my company finally got around to upgrading from Windows XP and Office 2000, so the new interface is totally new to me. Can you recommend any resources for quickly learning the Office ribbon and the new changes? I feel completely lost after two decades of the old Office interface. Help! Sincerely, Where the Hell is Everything? Dear Where the Hell, We think most people were with you at some point in the last few years. “Where the hell is…” could possibly be the slogan for the new ribbon interface. You could browse through some of the dry tutorials online or even get a weighty book on the topic, but the best way to learn something new is to get hands-on. Ribbon Hero turns learning the new Office features and ribbon layout into a game. It’s no vigorous round of Team Fortress mind you, but it’s significantly more fun than reading a training document. Check out how to install and configure Ribbon Hero here. You’ll be teaching your coworkers new tricks in no time. Boot via USB with an Old BIOS Dear How-To Geek, I’m trying to repurpose some old computers by updating them with lightweight Linux distros, but the BIOS on most of the machines is ancient and creaky. How ancient? It doesn’t even support booting from a USB device! I have a large flash drive that I’ve turned into a master installation tool for jobs like this but I can’t use it. The computers in question have USB ports; they just aren’t recognized during the boot process. What can I do? USB Bootin’ in Boise Dear USB Bootin’, It’s great you’re working to breathe life into old hardware! You’ve run into one of the limitations of older BIOSes: USB was around, but nobody was thinking about booting off of it. Fortunately, if you have a computer old enough to have that kind of BIOS, it’s likely to also have a floppy drive or a CD-ROM drive. While you could make a bootable CD-ROM for your application, we understand that you want to keep using the master USB installer you’ve made. In light of that we recommend PLoP Boot Manager. Think of it like a boot manager for your boot manager. Using it you can create a bootable floppy or CD-ROM that will enable USB booting of your master USB drive. Make a CD and a floppy version and you’ll have everything you need in your toolkit for future computer refurbishing projects. Read up on creating bootable media with PLoP Boot Manager here. Snapping Windows to Preset Coordinates Dear How-To Geek, Once upon a time I had a company laptop that came with a little utility that snapped windows to preset areas of the screen. This was long before the snap-to-side features in Windows 7. You could essentially configure your screen into a grid pattern of your choosing and then windows would neatly snap into those grids. I have no idea what it was called or if it was any more than a gimmick from the computer manufacturer, but I’d really like to have it on my new computer! Bend and Snap in San Francisco Dear Bend and Snap, If we had to guess, we’d guess your company must have had a set of laptops from Acer, as the program you’re describing sounds exactly like Acer GridVista. Fortunately for you, the application was extremely popular and Acer released it independently of their hardware. 
    If, by chance, you’ve since upgraded to a multiple monitor setup, the app even supports multiple monitors—many of the configurations are handy for arranging IM windows and other auxiliary communication tools. Check out our guide to installing and configuring Acer GridVista here for more information. Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.

    Read the article

  • SanjayP&rsquo;s venture after Microsoft involves no Microsoft

    - by eddraper
    When I was at Microsoft, I always found Sanjay Parthasarathy to be a bright and passionate leader.  While he was a bit disconnected at times with what was really going on out in the trenches, I always thought he was a true believer in what we in Developer Platform and Evangelism (DPE) were doing.  He got it.  He had started DPE and kicked a lot of doors down up in Redmond to make it happen.  Back in the early 2000s, battles over platform choices at large customers were trench warfare… bayonets and hand grenades at the P-Code level.  This model was not at all suited to Microsoft’s org structure at the time.  While there were plenty of people fully able to have competitive conversations around Windows Server, or AD, or Exchange, or the desktop, there weren’t many that could have deep technical conversations around Java vs .NET and the platform “stack” as a cohesive, unified unit of value.  This task fell to DPE. Sanjay ended up leaving Microsoft a number of months before me in 2009 and I remember thinking these exact words: “holy shit, SanjayP left Microsoft.”  When SanjayP left DPE years before that, Sheila Gulati had stepped into his shoes and I thought we were starting to miss a beat.  Sheila had built an amazing business at Microsoft India, but I don’t recall being inspired by her as a leader.  SanjayP’s talks felt like the opening scene of “Patton” with George C. Scott pacing in front of the American flag.  Sheila was a voice on a con-call.  When she moved on in 2007, Walid Abu-Hadba was given the reins.  Personally, I don’t ever recall even seeing his face.  I think I might recall hearing his voice on some con-calls, but for all intents and purposes he was invisible to me.  Perhaps this was the beginning of my carelessness around seeking “visibility.” Fast forward to Build 2011.  First off, we have no PDC – we have Build.  Microsoft had made an 11 year investment by this time in building an organization to make its technology relevant to developers.  One would think such an org would be in the driver’s seat of such an event, but we see Windows product group people on the podiums.  Watching, I could see the messaging unfold… but no story.  It was like the old days.  Demos and PowerPoints by team members building the tech, and in many cases VPs.  The ensuing confusion is almost legendary now.  Windows 8 was, and is, a pretty big deal… but who is telling the story – not just features and benefits, but the story around how it all fits together? Having been out of Microsoft for two years now, and looking in, I can only conclude that the “DPE of old” has at best been emasculated, and at worst been completely marginalized by internal politics, or perhaps the eternal march of the corporate entropy generator that resides at all large companies.  I don’t think this is a good thing for anyone. And now, back to Sanjay, who is the father of Microsoft DPE… I noticed that he has moved back to India and is doing start-up work.  His current company Indix looks to be doing some interesting things with “big data” and here’s their stack: Nary a trace of anything Microsoft.  What could account for this?  I wonder….  Better availability of labor and expertise in India for this stack?  Donno, but even in India, leet R and Hadoop skills have to be hard to find. Technical superiority?  This, I sincerely doubt. This stack, with SanjayP’s name as CEO, leaves me with an unsettling feeling.  If he did believe, he no longer does.  One doesn’t place bets with real money on things they don’t believe in.  
Perhaps he never did believe, and was a corporate creature seeking to find a niche for himself after which he manipulated me and others.  Or perhaps… anger… be it passive aggression or an outright “in your face F*** you” to his former masters. I guess in the end, only he knows the true reason… But I have my theory...

    Read the article

  • XNA Notes 004

    - by George Clingerman
    The XNA community has been crazy busy again. It always makes me feel like such a slacker collecting all of these notes as I see the tremendous output from people all over the world, and it’s incredible and humbling. There are some amazingly skilled people working with XNA. On another note, I’m going to take a minute to get on my soapbox and say, if you are developing ANYTHING and are not using some sort of source/revision control, START IMMEDIATELY. This applies to teams of one. Projects for fun. And “I back up my hard drive” or “I use dropbox!” does NOT count as using source control. You’ll be doing yourself a HUGE favor if you find one, learn to use it and integrate it into your everyday workflow. I personally use Subversion. It’s hosted offsite at xp.dev.com and I use TortoiseSVN as my front end to interface with the repository. It’s simple and easy to use and has saved me from myself so many times. Honestly, get set up with some type of source control immediately. If you don’t understand how, grab another developer that does and have them walk you through setup and the basics of using it. Ok, I’m done. On to the notes… The XNA Team Only 14 days left to Submit XNA GS 3.1 Games! http://blogs.msdn.com/b/xna/archive/2011/01/24/14-days-left-to-submit-xna-gs-3-1-games-on-app-hub.aspx Shawn Hargreaves shares some great information on Exception Handling best practices on the XNA forums http://forums.create.msdn.com/forums/p/73333/448556.aspx#448556 http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx XNA MVPs @CatalinZima gives us a peek at Chicken’s Can’t Fly http://www.amusedsloth.com/games/chickens-cant-fly/ Screen-space deformations in XNA for WP7 from Catalin Zima http://twitter.com/CatalinZima/statuses/30313083767357440 http://www.amusedsloth.com/2011/01/screen-space-deformations-in-xna-for-windows-phone-7/ XNA Developers Going to GDC? Don’t miss the XNA panel hosted by a plethora of well known XNA community names! http://forums.create.msdn.com/forums/p/73576/448842.aspx#448842 MasterBlud does an interview with @Xalterax http://twitter.com/MasterBlud/statuses/28510774812999680 http://www.xboxhornet.com/wordpress/?p=7102 Luke Schneider of Radiangames posts about The Radiangames Style http://radiangames.com/?p=532 Holmade Games had a “vote for the new playable character” poll going on for Hurdle Turtle this past week http://holmadegames.blogspot.com/2011/01/new-level-pack-vote-for-your-favorite.html IGF v0.1.0.0 release post mortem http://indiefreaks.com/2011/01/24/v0-1-0-0-release-post-mortem/ James and Super Dunner post Good Morning Gato #46 and a look at the Vampire Smile box art http://www.ska-studios.com/2011/01/21/good-morning-gato-46/ http://www.ska-studios.com/2011/01/20/vampire-smiles-digital-box-art/ Alfredo Di Napoli creates Cow Pong using XNA and F#! 
http://alfredodinapoli.wordpress.com/2011/01/25/cow-pong-a-simple-xna-game-in-f/ Xbox LIVE Indie Games Signed In Podcast posts Episode #61 http://www.signedinpodcast.com/?p=559 Gamergeddon posts the January 23rd edition of XBLIG Round Up http://www.gamergeddon.com/2011/01/23/xbox-indie-games-round-up-january-23rd/ Indie Asylum posts Antipole Review http://www.indieasylum.com/reviews/38-xblig/112-antipole.html 1UPOrPosion Reviews OSR Unhinged http://www.1uporpoison.com/xblig/osr-unhinged/ DarkstarMatryx review Warbirds at Work http://www.darkstarmatryx.com/?p=185 Review of Aban Hawkins and the 1000 Spikes http://www.armlessoctopus.com/2011/01/24/xbox-indie-review-aban-hawkins-the-1000-spikes/ XboxHornet reviews Corrupted http://www.xboxhornet.com/wordpress/?p=7123 XBLIG 2010: The Best And The Worst http://www.gamasutra.com/blogs/JamieMann/20110121/6840/ Xbox LIVE Arcade Sales Analysis - an interesting read for XBLIG developers wondering how they’re doing compared to arcade.. http://www.gamerbytes.com/2011/01/xbla_sales_analysis_dec_2010.php Best of Indies for January 25th http://www.thisisfakediy.co.uk/articles/games/best-of-the-indies-25th-january-2011 Decimation X3 appears as an arcade machine in the wild! http://twitter.com/mdoucette/statuses/29605562484260864 XNA Game Development Guiseppe De Francesco (@PinoEire) announced Torque X 4.0 CEV is now in RC phase! http://www.garagegames.com/community/blogs/view/20779 DrMistry of mstargames shares his struggle (and mistakes) with learning to use the Content Pipeline http://www.mstargames.co.uk/mistryblogmain/35-genblog/181-pontent-cipeline-more-like-it.html New Tutorial posted XNA 2D Basic Collision Detection with Rotation from Ioannis Panagopoulos http://www.progware.org/Blog/post/XNA-2D-Basic-Collision-Detection-with-Rotation.aspx Sgt. Conker roars to life! Doing a much better (and prettier) job of collecting XNA news from around the interwebs. http://www.sgtconker.com/ http://www.sgtconker.com/2011/01/dedication-for-captain-boki/ http://www.sgtconker.com/2011/01/screen-space-deformations-in-xna-for-windows-phone-7/ http://www.sgtconker.com/2011/01/xna-4-0-light-pre-pass-2/ http://www.sgtconker.com/2011/01/indiefreaks-game-framework-0-1-0-0-released/ Offering a little free publicity for XBLIGs http://forums.create.msdn.com/forums/p/73465/448321.aspx#448321 Ben Kane writes about building loot tables from Excel using the Content Pipeline http://benkane.wordpress.com/2011/01/23/building-loot-tables-from-excel-using-the-content-pipeline/ Good tips on attracting a game artist AND an offer to create your cover art for FREE http://forums.create.msdn.com/forums/t/72998.aspx If you’re an XBLIG developer keeping your eye on places to release on the PC, might want to be watching the IndieCity blog. Seems like these guys are well on their way to constructing something worth watching. http://www.indiecity.com/blog/ DVMGames spotted a new crowd-funding site for Indies http://twitter.com/DVMGames/statuses/29947274767372289 http://www.8bitfunding.com/ Transmute continues to make progress and there’s a nice dev blog to follow along here http://forgottenstarstudios.com/blog/

    Read the article

  • SQL Authority News – Play by Play with Pinal Dave – A Birthday Gift

    - by Pinal Dave
    Today is my birthday. Personal Note When I was young, I was always looking forward to my birthday as on this day, I used to get gifts from everybody. Now that I am getting older, on each of my birthdays I have almost the same feeling, but the direction is different. Now on each of my birthdays, I feel like giving gifts to everybody. I have received lots of support, love and respect from everybody; and now I must return it. Well, on this birthday, I have a very unique gift for everybody – my latest course on SQL Server. How I Tune Performance I often get asked how I work on a normal day, and how I work when a performance tuning project is assigned to me. Lots of people have expressed their desire for me to explain and demonstrate my own method of solving performance problems when I am facing a real world problem. It is a pretty difficult task as in the real world, nothing goes as planned and usually planned demonstrations have no place there. The real world demands real solutions, and in a timely fashion. If a consultant goes to industry and does not demonstrate his/her capabilities in the very first few minutes, it does not matter how famous he/she is, the door is shown to them eventually. It is true, and in my early career I faced it quite commonly. I have learned the trick to be honest from the start and request absolutely transparent communication from the organization where I am to consult. Play by Play Play by Play is a very unique setup. It is not planned and it is a step by step course. It is like a reality show – a very real encounter with the problem and a real problem solving approach. I had a great time doing this course. Geoffrey Grosenbach (VP of Pluralsight) sits down with me to see what a SQL Server Admin does in the real world. This Play-by-Play focuses on SQL Server performance tuning and I go over optimizing queries and fine-tuning the server. The table of contents of this course is very simple. Introduction In the introduction I explain my basic strategies when I am approached by a customer for performance tuning. Basic Information Gathering In this module I explain how I gather various information for a performance tuning project. It is very crucial for a consultant to demonstrate to customers his capability of solving problems. Even to resolve a small problem which gives a big positive impact on performance, a consultant has to gather proper information from the start. I demonstrate in this module how one can collect all the important performance tuning metrics. Removing Performance Bottleneck In this module, I build upon the statistics collected in the previous module. I analyze various performance tuning measures and immediately start implementing various tweaks, which will start improving the performance of my server. This is a very effective method and it gives an immediate return on effort. Index Optimization Indexes are considered a silver bullet for performance tuning. However, that is not always true; there are plenty of examples where indexes even perform worse after being implemented. The key is to understand a few of the basic properties of the index and implement the right things at the right time. In this module, I describe in detail how to do index optimizations and what is right and wrong with indexes. If you are a DBA or developer, and if your application is running slow – this is a must-attend module for you. I have some really interesting stories to tell as well. 
    Optimize Query with Rewrite Every problem has more than one solution; in this module we will see another very famous, but hard to master, skill for performance tuning – Query Rewrite. There are a few do’s and don’ts for any query rewrite. I take a very simple example and demonstrate how a query rewrite can improve the performance of the query many times over. I also share some real world funny stories in this module. This course is hosted at Pluralsight. You will need a valid login for Pluralsight to watch the Play by Play: Pinal Dave course. You can also sign up for a FREE Trial of Pluralsight to watch this course. As today is my birthday, I will give a free code to 10 people (chosen randomly) who express their desire to learn from this course. Please leave your comment and I will send you a free code to watch this course for free. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Video

    Read the article

  • Talking JavaOne with Rock Star Martijn Verburg

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers at each JavaOne Conference. They are awarded by their peers, who, through conference surveys, recognize them for their outstanding sessions and speaking ability. Over the years many of the world’s leading Java developers have been so recognized. Martijn Verburg has, in recent years, established himself as an important mover and shaker in the Java community. His “Diabolical Developer” session at the JavaOne 2011 Conference got people’s attention by identifying some of the worst practices Java developers are prone to engage in. Among other things, he is co-leader and organizer of the thriving London Java User Group (JUG) which has more than 2,500 members, co-represents the London JUG on the Executive Committee of the Java Community Process, and leads the global effort for the Java User Group “Adopt a JSR” and “Adopt OpenJDK” programs. Career highlights include overhauling technology stacks and SDLC practices at Mizuho International, mentoring Oracle on technical community management, and running offshore development teams for AIG. He is currently CTO at jClarity, a start-up focusing on automating optimization for Java/JVM related technologies, and Product Advisor at ZeroTurnaround. He co-authored, with Ben Evans, "The Well-Grounded Java Developer" published by Manning and, as a leading authority on technical team optimization, he is in high demand at major software conferences. Verburg is participating in five sessions, a busy man indeed. Here they are: CON6152 - Modern Software Development Antipatterns (with Ben Evans) UGF10434 - JCP and OpenJDK: Using the JUGs’ “Adopt” Programs in Your Group (with Csaba Toth) BOF4047 - OpenJDK Building and Testing: Case Study—Java User Group OpenJDK Bugathon (with Ben Evans and Cecilia Borg) BOF6283 - 101 Ways to Improve Java: Why Developer Participation Matters (with Bruno Souza and Heather Vancura-Chilson) HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Kirk Pepperdine, Ellen Kraffmiller and Henri Tremblay) When I asked Verburg about the biggest mistakes Java developers tend to make, he listed three: A lack of communication -- Software development is far more a social activity than a technical one; most projects fail because of communication issues and social dynamics, not because of a bad technical decision. Sadly, many developers never learn this lesson. No source control -- Developers simply storing code in local filesystems and emailing code in order to integrate Design-driven Design -- The need for some developers to cram every design pattern from the Gang of Four (GoF) book into their source code All of which raises the question: If these practices are so bad, why do developers engage in them? “I've seen a wide gamut of reasons,” said Verburg, who lists them as: * They were never taught at high school/university that their bad habits were harmful. * They weren't mentored in their first professional roles. * They've lost passion for their craft. * They're being deliberately malicious! * They think software development is a technical activity and not a social one. * They think that they'll be able to tidy it up later. A couple of key confusions and misconceptions beset Java developers, according to Verburg. “With Java and the JVM in particular I've seen a couple of trends,” he remarked. “One is that developers think that the JVM is a magic box that will clean up their memory, make their code run fast, as well as make them cups of coffee. 
    The JVM does help in a lot of cases, but bad code can and will still lead to terrible results! The other trend is to try and force Java (the language) to do something it's not very good at, such as rapid web development. So you get a proliferation of overly complex frameworks, libraries and techniques trying to get around the fact that Java is a monolithic, statically typed, compiled, OO environment. It's not a Golden Hammer!” I asked him about the keys to running a good Java User Group. “You need to have a ‘Why,’” he observed. “Many user groups know what they do (typically, events) and how they do it (the logistics), but what really drives users to join your group and to stay is to give them a purpose. For example, within the LJC we constantly talk about the ‘Why,’ which in our case is several whys: * Re-ignite the passion that developers have for their craft * Raise the bar of Java developers in London * We want developers to have a voice in deciding the future of Java * We want to inspire the next generation of tech leaders * To bring the disparate tech groups in London together * So we could learn from each other * We believe that the Java ecosystem forms a cornerstone of our society today -- we want to protect that for the future” Looking ahead to Java 8, Verburg expressed excitement about Lambdas. “I cannot wait for Lambdas,” he enthused. “Brian Goetz and his group are doing a great job, especially given some of the backwards compatibility that they have to maintain. It's going to remove a lot of boilerplate and yet maintain readability, plus enable massive scaling.” Check out Martijn Verburg at JavaOne if you get a chance, and stay tuned for a longer interview yours truly did with Martijn, to be published on otn/java some time after JavaOne. Originally published on blogs.oracle.com/javaone.

    Read the article

  • An Alphabet of Eponymous Aphorisms, Programming Paradigms, Software Sayings, Annoying Alliteration

    - by Brian Schroer
    Malcolm Anderson blogged about “Einstein’s Razor” yesterday, which reminded me of my favorite software development “law”, the name of which I can never remember. It took much Wikipedia-ing to find it (Hofstadter’s Law – see below), but along the way I compiled the following list: Amara’s Law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. Brooks’ Law: Adding manpower to a late software project makes it later. Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic. Law of Demeter: Each unit should only talk to its friends; don't talk to strangers. Einstein’s Razor: “Make things as simple as possible, but not simpler” is the popular paraphrase, but what he actually said was “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience”, an overly complicated quote which is an obvious violation of Einstein’s Razor. (You can tell by looking at a picture of Einstein that the dude was hardly an expert on razors or other grooming apparati.) Finagle's Law of Dynamic Negatives: Anything that can go wrong, will—at the worst possible moment. - O'Toole's Corollary: The perversity of the Universe tends towards a maximum. Greenspun's Tenth Rule: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. (Morris’s Corollary: “…including Common Lisp”) Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law. Issawi’s Omelet Analogy: One cannot make an omelet without breaking eggs - but it is amazing how many eggs one can break without making a decent omelet. Jackson’s Rules of Optimization: Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet. Kaner’s Caveat: A program which perfectly meets a lousy specification is a lousy program. Liskov Substitution Principle (paraphrased): Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it Mason’s Maxim: Since human beings themselves are not fully debugged yet, there will be bugs in your code no matter what you do. Nils-Peter Nelson’s Nil I/O Rule: The fastest I/O is no I/O. Occam's Razor: The simplest explanation is usually the correct one. Parkinson’s Law: Work expands so as to fill the time available for its completion. Quentin Tarantino’s Pie Principle: “…you want to go home have a drink and go and eat pie and talk about it.” (OK, he was talking about movies, not software, but I couldn’t find a “Q” quote about software. And wouldn’t it be cool to write a program so great that the users want to eat pie and talk about it?) Raymond’s Rule: Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter.  Sowa's Law of Standards: Whenever a major organization develops a new system as an official standard for X, the primary result is the widespread adoption of some simpler system as a de facto standard for X. Turing’s Tenet: We shall do a much better programming job, provided we approach the task with a full appreciation of its tremendous difficulty, provided that we respect the intrinsic limitations of the human mind and approach the task as very humble programmers.  
Udi Dahan’s Race Condition Rule: If you think you have a race condition, you don’t understand the domain well enough. These rules didn’t exist in the age of paper, there is no reason for them to exist in the age of computers. When you have race conditions, go back to the business and find out actual rules. Van Vleck’s Kvetching: We know about as much about software quality problems as they knew about the Black Plague in the 1600s. We've seen the victims' agonies and helped burn the corpses. We don't know what causes it; we don't really know if there is only one disease. We just suffer -- and keep pouring our sewage into our water supply. Wheeler’s Law: All problems in computer science can be solved by another level of indirection... Except for the problem of too many layers of indirection. Wheeler also said “Compatibility means deliberately repeating other people's mistakes.”. The Wrong Road Rule of Mr. X (anonymous): No matter how far down the wrong road you've gone, turn back. Yourdon’s Rule of Two Feet: If you think your management doesn't know what it's doing or that your organisation turns out low-quality software crap that embarrasses you, then leave. Zawinski's Law of Software Envelopment: Every program attempts to expand until it can read mail. Zawinski is also responsible for “Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems.” He once commented about X Windows widget toolkits: “Using these toolkits is like trying to make a bookshelf out of mashed potatoes.”

    Read the article

  • Spolskism or Twitterism: A Doctor writes...

    - by Phil Factor
    "I never realized I had a problem. I just 'twittered' because it was a social thing to do. All my mates were doing it. It made me feel good to have 'followers'; it bolstered my self-esteem. Of course, you don't think of the long-term effects on your work and on the way you think. There's no denying that it impairs your judgment…" Yes, this story is typical. Hundreds of people are waking up to the long term effects of twittering, and seeking help. Dave, who wishes to remain anonymous, told our reporter… "I started using Twitter at work. Just a few minutes now and then, throughout the day. A lot of my colleagues were doing it and I thought 'Well, that's cool; it must be part of what I should be doing at work'. Soon, I was avidly reading every twitter that came my way, and counting the minutes between my own twitters. I tried to kid myself that it was all about professional development and getting other people to help you with work-related problems, but in truth I had become addicted to the buzz of the social network. The worse thing was that it made me seem busy even when I was really just frittering my time away. Inevitably, I started to get behind with my real work." Experts have identified the syndrome and given it a name: 'Twitterism', sometimes referred to as 'Spolskism', after the person who first drew attention to the pernicious damage to well-being that the practice caused, and who had the courage to take the pledge of rejecting it. According to one expert… "The occasional Twitter does little harm to the participant, and can be an adaptive way of dealing with stress. Unfortunately, it rarely stops there. The addictive qualities of the practice have put a strain on the caring professions who are faced with a flood of people making that first bold step to seeking help". Dave is one of those now seeking help for his addiction… "I had lost touch with reality. Even though I twittered my work colleagues constantly, I found I actually spoke to them less and less. Even when out socializing, I would frequently disengage from the conversation, in order to twitter. I stopped blogging. I stopped responding to emails; the only way to reach me was through the world of Twitter. Unfortunately, my denial about the harm that twittering was doing to me, my friends, and my work-colleagues was so strong that I truly couldn't see that I had a problem." Like other addictions, the help and support of others who are 'taking the cure' is important. There is a common bond between those who have 'been through hell and back' and are once more able to experience the joys of actually conversing and socializing, rather than the false comfort of solitary 'twittering'. Complete abstinence is essential to the cure. Most of those who risk even an occasional twitter face a headlong slide back into 'binge' twittering. Tom, another twitterer who has managed to kick the habit explains… "My twittering addiction now seems more like a bad dream. You get to work, and switch on the PC. You say to yourself, just open up the browser, just for a minute, just to see what people are saying on Twitter. The next thing you know, half the day has gone by. The worst thing is that when you're addicted, you get good at covering up the habit; I spent so much time looking at the screen and typing on the keyboard, people just assumed I was working hard.I know that I must never forget what it was like then, and what it's like now that I've kicked the habit. I now have more time for productive work and a real social life." 
Like many addictions, Spolskism has its most detrimental effects on family, friends and workmates, rather than the addict. So often nowadays, we hear the sad stories of Twitter-Widows; tales of long lonely evenings spent whilst their partners are engrossed in their twittering into their 'mobiles' or indulging in their solitary spolskistic habits in privacy, under cover of 'having to do work at home'. Workmates suffer too, when the addicts even take their laptops or mobiles into meetings in order to 'twitter' with their fellow obsessives, even stooping to complain to their followers how boring the meeting is. No; The best advice is to leave twittering to the birds. You know it makes sense.

    Read the article

  • Integrating Code Metrics in TFS 2010 Build

    - by Jakob Ehn
    The build process template and custom activity described in this post are available here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/CodeMetricsSample.zip Running code metrics has been available since VS 2008, but only from inside the IDE. Yesterday Microsoft finally released the Visual Studio Code Metrics Power Tool 10.0, a command line tool that lets you run code metrics on your applications.  This means that it is now possible to perform code metrics analysis on the build server as part of your nightly/QA builds (for example). In this post I will show how you can run the metrics command line tool, and also a custom activity that reads the output, appends the results to the build log, and fails the build if the metric values exceed certain (configurable) threshold values. The code metrics tool analyzes all the methods in the assemblies, measuring cyclomatic complexity, class coupling, depth of inheritance and lines of code. Then it calculates a Maintainability Index from these values, a measure of how maintainable each method is, between 0 (worst) and 100 (best). For information on how this value is calculated, see http://blogs.msdn.com/b/codeanalysis/archive/2007/11/20/maintainability-index-range-and-meaning.aspx. After this it aggregates the information and presents it at the class, namespace and module level as well. Running Metrics.exe in a build definition Running the actual tool is easy: just use an InvokeProcess activity last in the Compile the Project sequence, reference the metrics.exe file and pass the correct arguments, and you will end up with a result XML file in the drop directory. Here is how it is done in the attached build process template: In the above sequence I first assign the path to the code metrics result file ([BinariesDirectory]\result.xml) to a variable called MetricsResultFile, which is then sent to the InvokeProcess activity in the Arguments property. Here are the arguments for the InvokeProcess activity: Note that we tell metrics.exe to analyze all assemblies located in the Binaries folder. You might want to do some more intelligent filtering here; you probably don’t want to analyze all 3rd party assemblies, for example. Note also the path to metrics.exe; this is the default location when you install the Code Metrics power tool. You must of course install the power tool on all build servers. Using the standard output logging (in the Handle Standard Output/Handle Error Output sections), we get the following output when running the build: Integrating Code Metrics into the build Having the results available next to the build result is nice, but we want to have the results integrated in the build result itself, and also to affect the outcome of the build. The point of having QA builds that measure, for example, code metrics is to make it very clear how the code being built measures up to the standards of the project/company. Just having an XML file available in the drop location will not cause the developers to improve their code, but a (partially) failing build will! To do this, we need to write a custom activity that parses the metrics result file, logs it to the build log and fails the build if the values from the metrics are below/above some predefined threshold values. The custom activity performs the following steps: Parses the XML. I’m using Linq 2 XSD for this; since the XML schema for the result file is available, it is very easy to generate code that lets you query the structure using standard Linq operators. 
    Runs through the metric result hierarchy, logs the metrics for each level and also verifies the maintainability index and the cyclomatic complexity against the threshold values. The threshold values are defined in the build process template and are sent in as arguments to the custom activity. If the threshold values are exceeded, the activity either fails or partially fails the current build. For more information about the structure of the code metrics result file, read Cameron Skinner's post about it. It is very simple and easy to understand. I won’t go through the code of the custom activity here, since there is nothing special about it and it is available for download so you can look at it and play with it yourself. The threshold values for Maintainability Index and Cyclomatic Complexity are defined in the build process template, and can be modified per build definition: I have taken the default values for these settings from my colleague Terje Sandström’s post on Code Metrics - suggestions for appropriate limits. You’ll notice that this is quite an improvement compared to using code metrics inside the IDE, where the Red/Yellow/Green limits are fixed (and the default values are somewhat strange, see Terje’s post for a discussion on this). This is the first version of the code metrics integration with TFS 2010 Build; I will probably enhance the functionality and the logging (the “tree view” structure in the log becomes quite hard to read) soon. I will also consider adding it to the Community TFS Build Extensions site when it becomes a bit more mature. Another obvious improvement is to extend the data warehouse of TFS and push the metric results back to the warehouse to make them visible in the reports.
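    To make the parsing-and-checking step more concrete, here is a minimal stand-alone sketch of the kind of check the custom activity performs. This is not the code from the downloadable sample: it assumes the metrics.exe result file exposes <Metrics> containers with <Metric Name="..." Value="..."/> children (check the element names against your own result.xml), and it simply writes warnings to the console instead of logging to the build.

    ```csharp
    using System;
    using System.Globalization;
    using System.Linq;
    using System.Xml.Linq;

    public static class MetricsChecker
    {
        // Returns the number of threshold violations found in a metrics.exe result file.
        public static int CheckThresholds(string resultFile, int minMaintainability, int maxComplexity)
        {
            XDocument doc = XDocument.Load(resultFile);
            int violations = 0;

            // Assumption: every <Metrics> element holds the <Metric> values for the
            // node (module, namespace, type or member) that contains it.
            foreach (XElement metrics in doc.Descendants("Metrics"))
            {
                string owner = metrics.Parent != null && metrics.Parent.Attribute("Name") != null
                    ? metrics.Parent.Attribute("Name").Value
                    : "(unknown)";

                var values = metrics.Elements("Metric").ToDictionary(
                    m => (string)m.Attribute("Name"),
                    m => decimal.Parse((string)m.Attribute("Value"), NumberStyles.Any, CultureInfo.InvariantCulture));

                decimal maintainability;
                if (values.TryGetValue("MaintainabilityIndex", out maintainability) && maintainability < minMaintainability)
                {
                    Console.WriteLine("WARNING {0}: MaintainabilityIndex {1} is below {2}", owner, maintainability, minMaintainability);
                    violations++;
                }

                decimal complexity;
                if (values.TryGetValue("CyclomaticComplexity", out complexity) && complexity > maxComplexity)
                {
                    Console.WriteLine("WARNING {0}: CyclomaticComplexity {1} exceeds {2}", owner, complexity, maxComplexity);
                    violations++;
                }
            }

            return violations;
        }
    }
    ```

    Wired into a build activity, a non-zero return value from a check like this is what you would use to decide whether to fail or partially fail the build.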

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rains, wind and snow - have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers in a tunnel under the English Channel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then came along Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history. Such extreme events challenge insurance companies' ability to service their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront in setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field - as well as customer-focused self-service portals and multichannel alerts, are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which often can reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud. 
Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judged the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage - especially the end customer. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adapted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling them anywhere access, on the fly reassigning of resources and rapid training to augment the work force. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Get your content off Blogger.com

    - by Daniel Moth
    Due to blogger.com deprecating FTP users I've decided to move my blog. When I think of the content of a blog, 4 items come to mind: blog posts, comments, binary files that the blog posts linked to (e.g. images, ZIP files) and the CSS+structure of the blog. 1. Binaries The binary files you used in your blog posts are sitting on your own web space, so really blogger.com is not involved with that. Nothing for you to do at this stage, I'll come back to these in another post. 2. CSS and structure In the best case this exists as a separate CSS file on your web space (so no action for now) or, in the worst case, like me, your CSS is embedded in the HTML. In the latter case, simply navigate from your dashboard to "Template" then "Edit HTML" and copy and paste the contents of the box. Save that locally in a txt file and we'll come back to that in another post. 3. Blog posts and Comments The blog posts and comments exist in all the HTML files on your own web space. Parsing HTML files to extract that can be painful, so it is easier to download the XML files from blogger's servers that contain all your blog posts and comments. 3.1 Single XML file, but incomplete The obvious thing to do is go into your dashboard "Settings" and under the "Basic" tab look at the top next to "Blog Tools". There is a link there to "Export blog" which downloads an XML file with both comments and posts. The problem with that is that it only contains 200 comments - if you have more than that, you will lose the surplus. Also, this XML file has a lot of noise, compared to the better solution described next. (Note that a tool I will refer to in a future post deals with either kind of XML file.) 3.2 Multiple XML files First you need to find your blog ID. In case you don't know what that is, navigate to the "Template" as described in section 2 above. You will find references to the blog id in the HTML there, but you can also see it as part of the URL in your browser: blogger.com/template-edit.g?blogID=YOUR_NUMERIC_ID. Mine is 7 digits. You can now navigate to these URLs to download the XML for your posts and comments respectively: blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=1 blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=1 Note that you can only get 500 posts at a time and only 200 comments at a time. To get more than that you have to change the URL and download the next batch. To get you started, to get the XML for the next 500 posts and next 200 comments respectively you'd have to use these URLs: blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=501 blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=201 ...and so on and so forth. Keep all the XML files in the same folder on your local machine (with nothing else in there). 4. Validating the XML aka editing older blog posts The XML files you just downloaded really contain HTML fragments inside for all your blog posts. If you are like me, your blog posts did not conform to XHTML, so passing them to an XML parser (which is what we will want to do) will result in the XML parser choking. So the next step is to fix that. This can be no work at all for you, a huge time sink, or just a couple of hours of pain (which was my case). The process I followed was to attempt to load the XML files using XmlDocument.Load and wait for the exception to be thrown from my code. The exception would point to the exact offending line and column, which would help me fix the issue. 
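To automate that check, here is a minimal sketch of the same load-and-report loop - written in Java with the standard DocumentBuilder as an analogue to the XmlDocument.Load approach described above; the folder name and class name are illustrative assumptions, not part of the original post:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.SAXParseException;

public class FeedValidator {
    public static void main(String[] args) throws Exception {
        // Folder holding the posts/comments XML batches downloaded above (assumed name)
        File folder = new File("blogger-export");
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        for (File xml : folder.listFiles((dir, name) -> name.endsWith(".xml"))) {
            try {
                // Throws if a post contains non-well-formed (non-XHTML) markup
                factory.newDocumentBuilder().parse(xml);
                System.out.println("OK: " + xml.getName());
            } catch (SAXParseException e) {
                // The reported line/column points at the offending fragment
                System.out.printf("%s: line %d, column %d: %s%n",
                        xml.getName(), e.getLineNumber(), e.getColumnNumber(), e.getMessage());
            }
        }
    }
}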
Rather than fix it in the XML itself, I would go back and edit the offending blog post and fix it there - recommended! Then I'd repeat the cycle until the XML could be loaded in the XmlDocument. To give you an idea, some of the issues I encountered are: extra or missing quotes in img and href elements, direct usage of chevrons instead of encoding them as &lt;, missing closing tags, mismatched nested pairs of elements and capitalization of HTML elements. For a full list of things that may go wrong see this. 5. Opportunity for other changes I also found a few posts that did not have a category assigned so I fixed those too. I took the further opportunity to create new categories and tag some of my blog posts with those. Note that I did not remove/change categories of existing posts, but only added. In another post we'll see how to use the XML files you stored in the local folder… Comments about this post welcome at the original blog.

    Read the article

  • Tuning Red Gate: #4 of Some

    - by Grant Fritchey
    First time connecting to these servers directly (keys to the kingdom, bwa-ha-ha-ha. oh, excuse me), so I'm going to take a look at the server properties, just to see if there are any issues there. Max memory is set, cool, first possible silly mistake clear. In fact, these look to be nicely set up. Oh, I'd like to see the ANSI Standards set by default, but it's not a big deal. The default location for database data is the F:\ drive, where I saw all the activity last time. Cool, the people maintaining the servers in our company listen: parallelism threshold is set to 35 and optimize for ad hoc is enabled. No shocks, no surprises. The basic setup is appropriate. On to the problem database. Nothing wrong in the properties. The database is in SIMPLE recovery, but I think it's a reporting system, so no worries there. Again, I'd prefer to see the ANSI settings for connections, but that's the worst thing I can see. Time to look at the queries, tables, indexes and statistics, because all the information I've collected over the last several days suggests that we're not looking at a systemic problem (except possibly not enough memory), but at the traditional tuning issues. I just want to note that I started looking at the system, not the queries. So should you when tuning your environment. I know, from the data collected through SQL Monitor, what my top poor performing queries are, and the most frequently called, etc. I'm starting with the most frequently called. I'm going to get the execution plan for this thing out of the cache (although, with the cache dumping constantly, I might not get it). And it's not there. Called 1.3 million times over the last 3 days, but it's not in cache. Wow. OK. I'll see what's in cache for this database:

SELECT  deqs.creation_time,
        deqs.execution_count,
        deqs.max_logical_reads,
        deqs.max_elapsed_time,
        deqs.total_logical_reads,
        deqs.total_elapsed_time,
        deqp.query_plan,
        SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,
                  (deqs.statement_end_offset - deqs.statement_start_offset) / 2 + 1) AS QueryStatement
FROM    sys.dm_exec_query_stats AS deqs
        CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
        CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
WHERE   dest.dbid = DB_ID('Warehouse')
        AND deqs.statement_end_offset > 0
        AND deqs.statement_start_offset > 0
ORDER BY deqs.max_logical_reads DESC;

And looking at the most expensive operation, we have our first bad boy: multiple table scans against very large sets of data and a sort operation. A sort operation? It's an insert. Oh, I see, the table is a heap, so it's doing an insert, then sorting the data and then inserting into the primary key. First question: why isn't this a clustered index? Let's look at some more of the queries. The next one is deceiving. Here's the query plan: You're thinking to yourself, what's the big deal? Well, what if I told you that this thing had 8,036,318 reads? I know, you're looking at skinny little pipes. Know why? Table variable. Estimated number of rows = 1. Actual number of rows? Well, I'm betting several more than one, considering it's read 8 MILLION pages off the disk in a single execution. We have a serious and real tuning candidate. Oh, and I missed this, it's loading the table variable from a user defined function. Let me check, let me check. YES! A multi-statement table valued user defined function. And another tuning opportunity. 
This one's a beauty, seriously. Did I also mention that they're doing a hash against all the columns in the physical table? I'm sure that won't lead to scans of a 500,000 row table, no, not at all. OK. I lied. Of course it does. At least it's on the top part of the Loop, which means the scan is only executed once. I just did a cursory check on the next several poor performers: all calling the UDF. I think I found a big tuning opportunity. At this point, I'm typing up internal emails for the company. Someone just had their baby called ugly. In addition to a series of suggested changes that we need to implement, I'm also apologizing for being such an unkind monster as to question whether that third eye & those flippers belong on such an otherwise lovely child.

    Read the article

  • Take Advantage of Oracle's Ongoing Assurance Effort!

    - by eric.maurice
    Hi, this is Eric Maurice again! A few years ago, I posted a blog entry, which discussed the psychology of patching. The point of this blog entry was that a natural tendency existed for systems and database administrators to be reluctant to apply patches, even security patches, because of the fear of "breaking" the system. Unfortunately, this belief in the principle "if it ain't broke, don't fix it!" creates significant risks for organizations. Running systems without applying the proper security patches can greatly compromise the security posture of the organization because the security controls available in the affected system may be compromised as a result of the existence of the unfixed vulnerabilities. As a result, Oracle continues to strongly recommend that customers apply all security fixes as soon as possible. Most recently, I have had a number of conversations with customers who questioned the need to upgrade their highly stable but otherwise unsupported Oracle systems. These customers wanted to know more about the kind of security risks they were exposed to, by running obsolete versions of Oracle software. As per Oracle Support Policies, Critical Patch Updates are produced for currently supported products. In other words, Critical Patch Updates are not created by Oracle for product versions that are no longer covered under the Premier Support or Extended Support phases of the Lifetime Support Policy. One statement used in each Critical Patch Update Advisory is particularly important: "We recommend that customers upgrade to a supported version of Oracle products in order to obtain patches. Unsupported products, releases and versions are not tested for the presence of vulnerabilities addressed by this Critical Patch Update. However, it is likely that earlier versions of affected releases are also affected by these vulnerabilities." The purpose of this warning is to inform Oracle customers that a number of the vulnerabilities fixed in each Critical Patch Update may affect older versions of a specific product line. In other words, each Critical Patch Update provides a number of fixes for currently supported versions of a given product line (this information is listed for each bug in the Risk Matrices of the Critical Patch Update Advisory), but the unsupported versions in the same product line, while they may be affected by the vulnerabilities, will not receive the fixes, and are therefore vulnerable to attacks. The risk assumed by organizations wishing to remain on unsupported versions is amplified by the behavior of malicious hackers, who typically will attempt to, and sometimes succeed in, reverse-engineering the content of vendors' security fixes. As a result, it is not uncommon for exploits to be published soon after Oracle discloses vulnerabilities with the release of a Critical Patch Update or Security Alert. Let's consider now the nature of the vulnerabilities that may exist in obsolete versions of Oracle software. A number of severe vulnerabilities have been fixed by Oracle over the years. While Oracle does not test unsupported products, releases and versions for the presence of vulnerabilities addressed by each Critical Patch Update, it should be assumed that a number of the vulnerabilities fixed with the Critical Patch Update program do exist in unsupported versions (regardless of the product considered). 
The most severe vulnerabilities fixed in past Critical Patch Updates may result in full compromise of the targeted systems, down to the OS level, by remote and unauthenticated users (these vulnerabilities receive a CVSS Base Score of 10.0) or, almost as critically, may result in the compromise of the affected systems (without compromising the underlying OS) by remote and unauthenticated users (these vulnerabilities receive a CVSS Base Score of 7.5). Such vulnerabilities may result in complete takeover of the targeted machine (for the CVSS 10.0), or may allow the attacker to create a denial of service against the affected system or even hijack or steal all the data hosted by the compromised system (for the CVSS 7.5). The bottom line is that organizations should assume the worst case: that the most critical vulnerabilities are present in their unsupported version; therefore, it is Oracle's recommendation that all organizations move to supported systems and apply security patches in a timely fashion. Organizations that currently run supported versions but may be late in their security patch release level can quickly catch up because most Critical Patch Updates are cumulative. With a few exceptions noted in Oracle's Critical Patch Update Advisory, the application of the most recent Critical Patch Update will bring these products to current security patch level and provide the organization with the best possible security posture for their patch level. Furthermore, organizations are encouraged to upgrade to the most recent versions as this will greatly improve their security posture. At Oracle, our security fixing policies state that security fixes are produced for the main code line first, and as a result, our products benefit from the lessons learned in previous version(s). Our ongoing assurance effort ensures that we work diligently to fix the vulnerabilities we find, and aim at constantly improving the security posture our products provide by default. Patch sets include numerous in-depth fixes in addition to those delivered through the Critical Patch Update and, in certain instances, important security fixes require major architectural changes that can only be included in new product releases (and cannot be backported through the Critical Patch Update program). For More Information:
• Mary Ann Davidson is giving a webcast interview on Oracle Software Security Assurance on February 24th. The registration link for attending this webcast is located at http://event.on24.com/r.htm?e=280304&s=1&k=6A7152F62313CA09F77EBCEEA9B6294F&partnerref=EricMblog
• A blog entry discussing Oracle's practices for ensuring the quality of Critical Patch Updates can be found at http://blogs.oracle.com/security/2009/07/ensuring_critical_patch_update_quality.html
• The blog entry "To patch or not to patch" is located at http://blogs.oracle.com/security/2008/01/to_patch_or_not_to_patch.html
• Oracle's Support Policies are located at http://www.oracle.com/us/support/policies/index.html
• The Critical Patch Update & Security Alert page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html

    Read the article

  • Adventures in Scrum: Lesson 1 &ndash; The failed Sprint

    - by Martin Hinshelwood
    I recently had a conversation with a product owner that wanted to have the Scrum team broken up into smaller units so that less time was wasted on the Scrum Ceremonies! Their complaint was around the need in Scrum to have the entire "Team" (7+-2) involved in the sizing of the work during the "Sprint Planning Meeting". The standard flippant answer of all Scrum professionals, "Well that's not Scrum", does not get you any brownie points in these situations. The response could be "Well we are not doing Scrum then", which in turn leads to "We are doing Scrum… But, we have split the scrum team into units of 2/3 so that they can concentrate on a specific area of work". While this may work, it is not Scrum and should not be called so… It is just a form of Agile. Don't get me wrong at this stage, there is nothing wrong with Agile, just don't call it Scrum. The reason that the Product Owner wants to do this is that, in effect, through a number of miscommunications and failings in our implementation of Scrum, there was NO unit of potentially Shippable software at the end of the first sprint. It does not matter to them that most Scrum teams will fail the first Sprint, even those that are high performing teams. Remember, it is the product owner's money! We should NOT break up scrum teams into smaller units for the purpose of having fewer people tied up in the Scrum Ceremonies. "The amount of backlog the Team selects is solely up to the Team… Only the Team can assess what it can accomplish over the upcoming Sprint." - Scrum Guide, Scrum.org The entire team must accept the work, and in order to understand what they can accept they must be free to size it as a team. This both encourages common understanding and increases visibility on why team members think a task is of a particular size. This has the benefit of increasing the knowledge of the entire team in the problem domain. "A new Team often first realizes that it will either sink or swim as a Team, not individually, in this meeting. The Team realizes that it must rely on itself. As it realizes this, it starts to self-organize to take on the characteristics and behaviour of a real Team." - Scrum Guide, Scrum.org This paragraph goes to the why of having the whole team at the meeting: the goal of Scrum is to produce a unit of potentially shippable software at the end of every Sprint. In order to achieve this we need high performing teams, and this is what Scrum as a framework has been optimised to produce. I think that our Product Owner is understandably upset over losing two weeks' work and is losing sight of the end goal of Scrum in the failures of the moment. As the man spending the money, I completely understand his perspective, and I think that we should not have started Scrum on an internal project, but selected a customer that is open to the ideas and complications of Scrum. So, what should we NOT have done on our first Scrum project: Should not have had 3 interns as the only on-site resource – This led to bad practices as the experienced guys were not there helping and correcting as they usually would. Should not have had the only experienced guys offsite – With both the experienced technical guys in completely different time zones it was difficult to get time for questions. Helping the guys on site was just plain impossible. Should not have used a part-time ScrumMaster – Although the ScrumMaster attended all of the Ceremonies, because they are only in 2 full days of the week, it makes it difficult for the team to raise impediments as they go. 
Should not have used a proxy product owner – This was probably the worst decision that was made, mainly because the proxy product owner did not have the same vision as the product owner. While Scrum does not explicitly reject the idea of a Proxy Product Owner, I do not think it works very well in practice. The "single wringable neck" needs to contain both the Money and the Vision, as well as attend the required meetings. I will be bringing all of these things up at the Sprint Retrospective and we will learn from our mistakes and move on. Do, Inspect, then Adapt…   Technorati Tags: Scrum,Sprint Planing,Sprint Retrospective,Scrum.org,Scrum Guide,Scrum Ceremonies,Scrummaster,Product Owner Need Help? Professional Scrum Developer Training SSW has six Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools.

    Read the article

  • Why do my 512x512 bitmaps look jaggy on Android OpenGL?

    - by Milo Mordaunt
    This is sort of driving me nuts, I've googled and googled and tried everything I can think of, but my sprites still look super blurry and super jaggy. Example: Here: https://docs.google.com/open?id=0Bx9Gbwnv9Hd2TmpiZkFycUNmRTA If you click through to the actual full size image you should see what I mean, it's like it's taking an average of every 5*5 pixels or something, the background looks really blurry and blocky, but the ball is the worst. The clouds look all right for some reason, probably because they're mostly transparent. I know the pngs aren't top notch themselves but hey, I'm no artist! I would imagine it's a problem with either: a. How the pngs are made example sprite (512x512): https://docs.google.com/open?id=0Bx9Gbwnv9Hd2a2RRQlJiQTFJUEE b. How my Matrices work These are the relevant parts of the renderer:

public void onDrawFrame(GL10 unused) {
    if (world != null) {
        dt = System.currentTimeMillis() - endTime;
        world.update((float) dt);
        // Redraw background color
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        Matrix.setIdentityM(mvMatrix, 0);
        Matrix.translateM(mvMatrix, 0, 0f, 0f, 0f);
        world.draw(mvMatrix, mProjMatrix);
        endTime = System.currentTimeMillis();
    } else {
        Log.d(TAG, "There is no world....");
    }
}

public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    Matrix.orthoM(mProjMatrix, 0, 0, width / 2, 0, height / 2, -1.f, 1.f);
}

And this is what each Quad does when draw is called:

public void draw(float[] mvMatrix, float[] pMatrix) {
    Matrix.setIdentityM(mMatrix, 0);
    Matrix.setIdentityM(mvMatrix, 0);
    Matrix.translateM(mMatrix, 0, xPos, yPos, 0.f);
    Matrix.multiplyMM(mvMatrix, 0, mvMatrix, 0, mMatrix, 0);
    Matrix.scaleM(mvMatrix, 0, scale, scale, 0f);
    Matrix.rotateM(mvMatrix, 0, angle, 0f, 0f, -1f);
    GLES20.glUseProgram(mProgram);
    posAttr = GLES20.glGetAttribLocation(mProgram, "vPosition");
    texAttr = GLES20.glGetAttribLocation(mProgram, "aTexCo");
    uSampler = GLES20.glGetUniformLocation(mProgram, "uSampler");
    int alphaHandle = GLES20.glGetUniformLocation(mProgram, "alpha");
    GLES20.glVertexAttribPointer(posAttr, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, 0, vertexBuffer);
    GLES20.glVertexAttribPointer(texAttr, 2, GLES20.GL_FLOAT, false, 0, texCoBuffer);
    GLES20.glEnableVertexAttribArray(posAttr);
    GLES20.glEnableVertexAttribArray(texAttr);
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
    GLES20.glUniform1i(uSampler, 0);
    GLES20.glUniform1f(alphaHandle, alpha);
    mMVMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVMatrix");
    mPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uPMatrix");
    GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mvMatrix, 0);
    GLES20.glUniformMatrix4fv(mPMatrixHandle, 1, false, pMatrix, 0);
    GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, 4, GLES20.GL_UNSIGNED_SHORT, indicesBuffer);
    GLES20.glDisableVertexAttribArray(posAttr);
    GLES20.glDisableVertexAttribArray(texAttr);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
}

c. How my texture loading/blending/shaders setup works Here is the renderer setup:

public void onSurfaceCreated(GL10 unused, EGLConfig config) {
    // Set the background frame color
    GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    GLES20.glDisable(GLES20.GL_DEPTH_TEST);
    GLES20.glDepthMask(false);
    GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glEnable(GLES20.GL_DITHER);
}

Here is the vertex shader:

attribute vec4 vPosition;
attribute vec2 aTexCo;
varying vec2 vTexCo;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
void main() {
    gl_Position = uPMatrix * uMVMatrix * vPosition;
    vTexCo = aTexCo;
}

And here's the fragment shader:

precision mediump float;
uniform sampler2D uSampler;
uniform vec4 vColor;
varying vec2 vTexCo;
varying float alpha;
void main() {
    vec4 color = texture2D(uSampler, vec2(vTexCo));
    gl_FragColor = color;
    if (gl_FragColor.a == 0.0) {
        discard;
    }
}

This is how textures are loaded:

private int loadTexture(int rescource) {
    int[] texture = new int[1];
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inScaled = false;
    Bitmap temp = BitmapFactory.decodeResource(context.getResources(), rescource, opts);
    GLES20.glGenTextures(1, texture, 0);
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture[0]);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, temp, 0);
    GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
    temp.recycle();
    return texture[0];
}

I'm sure I'm doing about 20,000 things wrong, so I'm really sorry if the problem is blindingly obvious... The test device is a Galaxy Note, running a JellyBean custom ROM, if that matters at all. So the screen resolution is 1280x800, which means... The background is 1024x1024, so yeah it might be a little blurry, but shouldn't be made of lego. Thank you so much, any answer at all would be appreciated.
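One detail worth flagging in the loadTexture code above (an observation, not necessarily the cause of the artefacts in the screenshots): glGenerateMipmap builds the mipmap chain, but with GL_TEXTURE_MIN_FILTER set to GL_LINEAR those levels are never sampled, so minified sprites are filtered straight from the full 512x512 image. A sketch of switching to trilinear minification so the generated mipmaps are actually used:

// Inside loadTexture(), replacing the MIN_FILTER line above
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, temp, 0);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D); // the levels generated here now take effect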

    Read the article

  • Official and unofficial apps in the iOS, WP7, and Android marketplaces

    - by Bil Simser
    The last few months have seen people complaining about the lack of "official" apps in the Windows Phone marketplace. In fact a couple of months ago I wrote about this very thing here and if we really needed these official apps or could get by with third-party solutions. Recently a list of "Top 100 Mobile Apps" crossed my desk and it was curious. 40 iPhone apps, 40 Android apps, 10 WP7 apps, and 10 BlackBerry apps. Really? 10 for WP7? So I wondered if the media was just playing this up and maybe continuing to do what I think most vendors are doing which is treating Windows Phone as the red-headed step-child you keep in the basement while all along there's nothing wrong with them. I put together the list and went digging to see how many of the top 40 iOS and Android apps were also on the Windows Phone platform (sorry BlackBerry, you should just shut your doors right now). Here's the results. Note, these are all *free* apps. There might be other pay apps that have official representation across all mobile devices, I just chose to hunt these ones down because I'm cheap. In the top 40, I easily plucked out 20 that had official apps on all three platforms. These were: Amazon Mobile, ESPN Score Centre, Evernote, Facebook, Foursquare, Google Search, IMDB, Kindle, Shazam, Skype (yes, I know, in beta on WP7), SlackerRadio, The Weather Channel, TripIt, Twitter, Yelp, Flixster, Netflix, TuneIn Radio, Dictionary.com, Angry Birds, and Groupon. Hey, that's pretty good IMHO. 20 or so apps, all free, and all fully functional and supported (and in some cases, even better looking on the Windows Phone platform than the other platforms). A dozen or so more apps had official apps on some platforms but not all, so yes, there are gaps here. Here's a rundown of the hangers-on: Adobe Photoshop Express This looks great on the iOS platform and there's even an official version on droid. Hope Adobe brings this to WP7. There are other photo editing programs though if you go looking (maybe we can get Paint.NET to be ported to the phone?) BBC News A few apps offer news feeds but nothing official on the Windows Phone. The feeds are good but without video this app needs some WP7 love. Dropbox Again Windows Phone looses out here with no official app. There are a few third party ones that will help you along and offer most of the functionality that you need but no integration that an official app might bring. Epicurious Droid seems to be the trailer here as there are apps for it but nothing official (from what I can tell). Both iOS and WP7 have them. Flipboard It's sad with Flipboard as it's such a great newsreader. The only offiical app is for iOS but frankly the iPhone version looks horrible so without a tablet the experience here isn't that hot. Maybe with WP8. Currently there's nothing even remotely similar to this on the other platforms. Google+ Is anyone still using this? No official app for WP7 but some clones. Apparently there's no API so people are just screen scraping. Ugh. Mint.com This app has all kinds of buzz and a lot of votes on the application requests site. Official apps for iOS and droid. No WP7 love (yet). TED Quite a few TED apps on WP7 but nothing official. I think the third party ones suffice and some are pretty nice looking, taking advantage of the Metro interface and making for a good show. WebMD There's a third party app on WP7 here but nothing official. 
It seems to contain all the same information and functionality the official apps do so not sure if an official one is needed but its here for inclusion. The other apps in the top 40 were either very specific to the platform (for example all three of them have a "Find my Phone" app). There are others that are missing out on the WP7 platform like ooVoo, Words With Friends, and some of the Google apps (Google Voice for example). Since you can integrate your GMail account right into the Windows Phone (via linked inboxes) I'm not sure if there's a need for an official GMail app here. Looking at the numbers Windows Phone still gets the worst of the deal here with half a dozen highly popular "offical" apps that exist on the other mobile platforms and in some cases, nothing even remotely similar to the official app to compare. This doesn't include things like Instagram, PInterest, and others (don't get me started on those). Still, with over 20+ highly popular free apps all represented on all three mobile platforms I don't think it's a bad place to be in. The Windows Phone platform could get a little more love from the vendors missing here, or at least open up your APIs so the third party crowd can step in and take up the slack. P.S. these are just my observations and I might have got a few items wrong. Feel free to chime in with missing or incorrect information. I am after all human. Well, most of me is.

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs: Any record generated must be able to be connected to any other record in any other user table (excluding itself...the record, not the table). These "connections" are directional, and the list of connections a record has is user ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's field can also include aggregate information from its connections (like obtaining average, sum, etc) that must be updated on change from another record it's connected to. To conserve memory, only relevant information must be loaded at any one time (can't load the entire database in memory at load and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db. None of the user tables, connections or records are known at design time, as they are user generated. I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt on this, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became....onerous (ie...a huge spaghettified mess). Tracing dependencies using this method almost became impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing its own data and connections to other records. Doing this increases my memory footprint but also lets me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cycling recursive updates, etc. My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far: Store all the connections in one big table. If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB call to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming vs outgoing). 
This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal but it does mean that when I load a user table, I only need to load one 'connection' table and have all the information I need. This also presents a separate problem, that of connection object creation. Since each user table has a list of all connections, there are two opportunities for a connection object to be made. However, connections objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection. Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
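For that last concern - making sure only one connection object ever exists per underlying connection, whichever table loads it first - a minimal sketch of the kind of caching factory described above (the Connection class, its fields and the loader below are illustrative placeholders, not part of the original design):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hands out at most one in-memory Connection per stored connection row,
// no matter whether the outgoing or the incoming side asks for it first.
final class ConnectionFactory {
    private final Map<String, Connection> cache = new ConcurrentHashMap<>();

    Connection get(long fromTableId, long fromRecordId, long toTableId, long toRecordId) {
        String key = fromTableId + ":" + fromRecordId + "->" + toTableId + ":" + toRecordId;
        // computeIfAbsent faults the connection in from the backing store only on first request
        return cache.computeIfAbsent(key,
                k -> loadConnection(fromTableId, fromRecordId, toTableId, toRecordId));
    }

    private Connection loadConnection(long fromTableId, long fromRecordId, long toTableId, long toRecordId) {
        // Placeholder: read the row from whichever connection table layout ends up being used
        return new Connection(fromTableId, fromRecordId, toTableId, toRecordId);
    }
}

// Hypothetical value object mirroring the fields listed in the post (order columns omitted)
final class Connection {
    final long fromTableId, fromRecordId, toTableId, toRecordId;

    Connection(long fromTableId, long fromRecordId, long toTableId, long toRecordId) {
        this.fromTableId = fromTableId;
        this.fromRecordId = fromRecordId;
        this.toTableId = toTableId;
        this.toRecordId = toRecordId;
    }
}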

    Read the article

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts).  If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance.  There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong.  Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan.  Today, I am going to look at a case where poor cardinality estimation is Microsoft’s fault, and not yours. SQL Server 2005 SELECT th.ProductID, th.TransactionID, th.TransactionDate FROM Production.TransactionHistory AS th WHERE th.ProductID = 1 AND th.TransactionDate BETWEEN '20030901' AND '20031231'; The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007): There is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified.  The cardinality estimate of 45 rows at the Index Seek is exactly correct.  The table is not very large, there are up-to-date statistics associated with the index, so this is as expected. The estimate for the Key Lookup is also exactly right.  Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row.  The plan shows that the Key Lookup is expected to be executed 45 times.  The estimate for the Inner Join output is also correct – 45 rows from the seek joining to one row each time, gives 45 rows as output. The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates.  Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here.  All good so far. SQL Server 2008 onward The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan: The optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup.  This is a good optimization – it makes sense to filter rows out as early as possible.  Unfortunately, it has made a bit of a mess of the cardinality estimates. The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date.  Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup – clearly not right!  This misinformation also confuses SQL Sentry Plan Explorer: Plan Explorer shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total). 
Workarounds One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem of course): CREATE INDEX nc1 ON Production.TransactionHistory (ProductID) INCLUDE (TransactionDate); With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected: We could also force the use of the ultimate covering index (the clustered one): SELECT th.ProductID, th.TransactionID, th.TransactionDate FROM Production.TransactionHistory AS th WITH (INDEX(1)) WHERE th.ProductID = 1 AND th.TransactionDate BETWEEN '20030901' AND '20031231'; Summary Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal.  Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup. The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong.  It’s not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world) but it easily can be much worse, and there’s not much you can do about it.  Any decisions made by the optimizer after such a lookup could be based on very wrong information – which can only be bad news. If you think this situation should be improved, please vote for this Connect item. © 2012 Paul White – All Rights Reserved twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • How Estimates Became Quotes

    - by Lee Brandt
    It’s our fault. Well, not completely, but we haven’t helped the situation any. All of what follows comes from my own experiences which, from talking to lots of other developers about it, seems to be pretty much par for the course. Where We Started When we first started estimating, we estimated pretty clearly. We would try to imagine something we’d done that was similar to the project being estimated and we’d toss it about in our heads a bit and see how much bigger or smaller we thought this new thing was, and add or subtract accordingly. We wouldn’t spend too much time on it, because we wanted to get to writing the software. Eventually, we’d come across some huge problem that there was now way we could’ve known about ahead of time. Either we didn’t see this thing or, we didn’t realize that this particular version of a problem would be so… problematic. We usually call this “not knowing what we don’t know”. It’s unavoidable. We just can’t know. Until we wade in and start putting some code together, there are just some things we won’t know… and some things we don’t even know that we don’t know. Y’know? So what happens? We go over budget. Project managers scream and dance the dance of the stressed-out project manager, and there is nothing we can do (or could’ve done) about it. We didn’t know. We thought about it for a bit and we didn’t see this herculean task sitting in the middle of our nice quiet project, and it has bitten us in the rear end. We now know how to handle this in the future, though. We will take some more time to pick around the requirements and discover all those things we don’t know. We’ll do some prototyping, we’ll read some blogs about similar projects, we’ll really grill the customer with questions during the requirements gathering phase. We’ll keeping asking “what else?” until the shove us down the stairs. We’ll take our time and uncover it all. We Learned, But Good The next time comes, and you know what happens? We do it. We grill the customer for weeks and prototype and read and research and we estimate everything down to the last button on the last form. Know what that gets us? It gets us three months of wasted time, and our estimate will still be off. Possibly off by a factor of four. WTF, mate? No way we could be surprised by something! We uncovered every particle. We turned every stone. How is it we still came across unknowns? Because we STILL didn’t know what we didn’t know. How could we? We didn’t know to ask. The worst part is, we’ve now convinced the product that this is NOT an estimate. It is a solid number based on massive research and an endless number of questions that they answered. There is absolutely now way you don’t know everything there is to know about this project now. No way there is anything you haven’t uncovered. And their faith in that “Esti-Quote” goes through the roof. When the project goes over this time, they might even begin to question whether or not you know what you’re doing. Who could blame them? You drilled them for weeks about every little thing, and when they complained about all the questions, you told them you wanted to uncover everything so there would be no surprises. SO we set them up to faile Guess, Then Plan We had a chance. At the beginning we could have just said, “That’s just a gut-feeling estimate, based on my past experience with similar projects. 
There could still be surprises." If we spend SOME time doing SOME discovery and then bounce that against our own past experiences, we can come up with a fairly healthy estimate. We can then help the product owner understand that an estimate is a guess. Sure, it's an educated guess, but it is still a guess. If we get it right it will be almost completely luck. Then, we help them to plan the development by taking that guess (yes, they still need the guess for planning purposes) and start measuring early and often to see if we still think we are right. We should adjust the estimate and alert the product owner as soon as we see problems (bad news does not age well), and we should be able to see any problems immediately if we are constantly measuring our pace. In lean software, we start with that guess and begin measuring cycle times immediately. Then we can make projections based on those cycle times and compare them to our estimate. This constant feedback is the best way to ensure that there are no surprises at the END of the project. There will still be surprises, but we'll see them sooner and have a better understanding of how they will affect our overall timeline. What do you think?

    Read the article

  • "Yes, but that's niche."

    - by Geertjan
    JavaOne 2012 has come to an end though it feels like it hasn't even started yet! What happened, time is a weird thing. Too many things to report on. James Gosling's appearance at the JavaOne community keynote was seen, by everyone (which is quite a lot) of people I talked to, as the highlight of the conference. It was interesting that the software for the Duke's Choice Award winning Liquid Robotics that James Gosling is now part of and came to talk about is a Swing application that uses the WorldWind libraries. It was also interesting that James Gosling pointed out to the conference: "There are things you can't do using HTML." That brings me to the wonderful counter argument to the above, which I spend my time running into a lot: "Yes, but that's niche." It's a killer argument, i.e., it kills all discussions completely in one fell swoop. Kind of when you're talking about someone and then this sentence drops into the conversation: "Yes, but she's got cancer now." Here's one implementation of "Yes, but that's niche": Person A: All applications are moving to the web, tablet, and mobile phone. That's especially true now with HTML5, which is going to wipe away everything everywhere and all applications are going to be browser based. Person B: What about air traffic control applications? Will they run on mobile phones too? And do you see defence applications running in a browser? Don't you agree that there are multiple scenarios imaginable where the Java desktop is the optimal platform for running applications? Person A: Yes, but that's niche. Here's another implementation, though it contradicts the above [despite often being used by the same people], since JavaFX is a Java desktop technology: Person A: Swing is dead. Everyone is going to be using purely JavaFX and nothing else. Person B: Does JavaFX have a docking framework and a module system? Does it have a plugin system?  These are some of the absolutely basic requirements of Java desktop software once you get to high end systems, e.g., banks, defence force, oil/gas services. Those kinds of applications need a web browser and so they love the JavaFX WebView component and they also love the animated JavaFX charting components. But they need so much more than that, i.e., an application framework. Aren't there requirements that JavaFX isn't meeting since it is a UI toolkit, just like Swing is a UI toolkit, and what they have in common is their lack, i.e., natively, of any kind of application framework? Don't people need more than a single window and a monolithic application structure? Person A: Yes, but that's niche. In other words, anything that doesn't fit within the currently dominant philosophy is "niche", for no other reason than that it doesn't fit within the currently dominant philosophy... regardless of the actual needs of real developers. Saying "Yes, but that's niche", kills the discussion completely, because it relegates one side of the conversation to the arcane and irrelevant corners of the universe. You're kind of like Cobol now, as soon as "Yes, but that's niche" is said. What's worst about "Yes, but that's niche" is that it doesn't enter into any discussion about user requirements, i.e., there's so few that need this particular solution that we don't even need to talk about them anymore. Note, of course, that I'm not referring specifically or generically to anyone or anything in particular. 
Just picking up from conversations I've picked up on as I was scurrying around the Hilton's corridors while looking for the location of my next presentation over the past few days. It does, however, mean that there were people thinking "Yes, but that's niche" while listening to James Gosling pointing out that HTML is not the be-all and end-all of absolutely everything. And so this all leaves me wondering: How many applications must be part of a niche for the niche to no longer be a niche? And what if there are multiple small niches that have the same requirements? Don't all those small niches together form a larger whole, one that should be taken seriously, i.e., a whole that is not a niche?

    Read the article

  • Sending Outlook Invites

    - by Daniel Moth
    Sending an Outlook invite for a meeting (also referred to as S+ in Microsoft) is a simple thing to get right if you just run the quick mental check below, which is driven by visual cues in the Outlook UI. I know that some folks don’t do this often or are new to Outlook, so if you know one of those folks share this blog post with them and if they read nothing else ask them to read step 7. Add on the To line the folks that you want to be at the meeting. Indicate optional invitees. Click on the “To” button to bring up the dialog that lets you move folks to be Optional (you can also do this from the Scheduling Assistant). Set the Reminder according to the attendee that has to travel the most. 5 minutes is the minimum. Use the Response Options and uncheck the "Request Response" if your event is going ahead regardless of who can make it or not, i.e. if everyone is optional. Don’t force every recipient to make an extra click, instead make the extra click yourself - you are the organizer. Add a good subject Make the subject such that just by reading it folks know what the meeting is about. Examples, e.g. "Review…", "Finalize…", "XYZ sync up" If this is only between two people and what is commonly referred to as a one to one, the subject would be something like "MyName/YourName 1:1" Write the subject in such a way that when the recipient sees this on their calendar among all the other items, they know what this meeting is about without having to see location, recipients, or any other information about the invite. Add a location, typically a meeting room. If recipients are from different buildings, schedule it where the folks that are doing the other folks a favor live. Otherwise schedule it wherever the least amount of people will have to travel. If you send me an invite to come to your building, and there is more of us than you, you are silently sending me the message that you are doing me a favor so if you don’t want to do that, include a note of why this is in your building, e.g. "Sorry we are slammed with back to back meetings today so hope you can come over to our building". If this is in someone's office, the location would be something like "Moth's office (7/666)" where in parenthesis you see the office location. If some folks are remote in another building/country, or if you know you picked a time which wasn't free for everyone, add an Online option (click the Lync Meeting button). Add a date and time. This MUST be at a time that is showing on the recipients’ calendar as FREE or at worst TENTATIVE. You can check that on the Scheduling Assistant. The reality is that this is not always possible, so in that case you MUST say something about it in the Invite Body, e.g. "Sorry I can see X has a conflict, but I cannot find a better slot", or "With so many of us there are some conflicts and I cannot find a better slot so hope this works", or "Apologies but due to Y we must have this meeting at this time and I know there are some conflicts, hope you can make it anyway". When you do that, I better not be able to find a better slot myself for all of us, and of course when you do that you have implicitly designated the Busy folks as optional. Finally, the body of the invite. This has the agenda of the meeting and if applicable the courtesy apologies due to messing up steps 6 & 7. This should not be the introduction to the meeting, in other words the recipients should not be surprised when they see the invite and go to the body to read it. 
Notifying them of the meeting takes place via separate email where you explain the purpose and give them a heads up that you'll be sending an invite. That separate email is also your chance to attach documents, don’t do that as part of the invite. TIP: If you have sent mail about the meeting, you can then go to your sent folder to select the message and click the "Meeting" button (Ctrl+Alt+R). This will populate the body with the necessary background, auto select the mandatory and optional attendees as per the TO/CC line, and have a subject that may be good enough already (or you can tweak it). Long to write, but very quick to remember and enforce since most of it is common sense and the checklist is driven of the visual cues in the UI you use to send the invite. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • 24 Hours of PASS – first reflections

    - by Rob Farley
    A few days after the end of 24HOP, I find myself reflecting on it. I’m still waiting on most of the information. I want to be able to discover things like where the countries represented on each of the sessions, and things like that. So far, I have the feedback scores and the numbers of attendees. The data was provided in a PDF, so while I wait for it to appear in a more flexible format, I’ve pushed the 24 attendee numbers into Excel. This chart shows the numbers by time. Remember that we started at midnight GMT, which was 10:30am in my part of the world and 8pm in New York. It’s probably no surprise that numbers drooped a bit at the start, stayed comparatively low, and then grew as the larger populations of the English-speaking world woke up. I remember last time 24HOP ran for 24 hours straight, there were quite a few sessions with less than 100 attendees. None this time though. We got close, but even when it was 4am in New York, 8am in London and 7pm in Sydney (which would have to be the worst slot for attracting people), we still had over 100 people tuning in. As expected numbers grew as the UK woke up, and even more so as the US did, with numbers peaking at 755 for the “3pm in New York” session on SQL Server Data Tools. Kendra Little almost reached those numbers too, and certainly contributed the biggest ‘spike’ on the chart with her session five hours earlier. Of all the sessions, Kendra had the highest proportion of ‘Excellent’s for the “Overall Evaluation of the session” question, and those of you who saw her probably won’t be surprised by that. Kendra had one of the best ranked sessions from the 24HOP event this time last year (narrowly missing out on being top 3), and she has produced a lot of good video content since then. The reports indicate that there were nearly 8.5 thousand attendees across the 24 sessions, averaging over 350 at each one. I’m looking forward to seeing how many different people that was, although I do know that Wil Sisney managed to attend every single one (if you did too, please let me know). Wil even moderated one of the sessions, which made his feat even greater. Thanks Wil. I also want to send massive thanks to Dave Dustin. Dave probably would have attended all of the sessions, if it weren’t for a power outage that forced him to take a break. He was also a moderator, and it was during this session that he earned special praise. Part way into the session he was moderating, the speaker lost connectivity and couldn’t get back for about fifteen minutes. That’s an incredibly long time when you’re in a live presentation. There were over 200 people tuned in at the time, and I’m sure Dave was as stressed as I was to have a speaker disappear. I started chasing down a phone number for the speaker, while Dave spoke to the audience. And he did brilliantly. He started answering questions, and kept doing that until the speaker came back. Bear in mind that Dave hadn’t expected to give a presentation on that topic (or any other), and was simply drawing on his SQL expertise to get him through. Also consider that this was between midnight at 1am in Dave’s part of the world (Auckland, NZ). I would’ve been expecting just to welcome people, monitor questions, probably read some out, and in general, help make things run smoothly. He went far beyond the call of duty, and if I had a medal to give him, he’d definitely be getting one. On the whole, I think this 24HOP was a success. We tried a different platform, and I think for the most part it was a popular move. 
We didn’t ask the question “Was this better than LiveMeeting?”, but we did get a number of people telling us that they thought the platform was very good. Some people have told me I get a chance to put my feet up now that this is over. As I’m also co-ordinating a tour of SQLSaturday events across the Australia/New Zealand region, I don’t quite get to take that much of a break (plus, there’s the little thing of squeezing in seven SQL 2012 exams over the next 2.5 weeks). But I am pleased to be reflecting on this event rather than anticipating it. There were a number of factors that could have gone badly, but on the whole I’m pleased about how it went. A massive thanks to everyone involved. If you’re reading this and thinking you wish you could’ve tuned in more, don’t worry – they were all recorded and you’ll be able to watch them on demand very soon. But as well as that, PASS has a stream of content produced by the Virtual Chapters, so you can keep learning from the comfort of your desk all year round. More info on them at sqlpass.org, of course.

    Read the article

  • Private Cloud: Putting some method behind the madness

    - by Sudip Datta
    Finally, I decided to join the blogging community. And what could be a better time to start than the week after OpenWorld 2012? 50K+ attendees, demonstrations, speaker sessions and a whole lot of buzz on Oracle Cloud. It was raining clouds at this year's OpenWorld. I am not here to write about Oracle's cloud strategy in general, but about Enterprise Manager's cloud management capabilities. This year's OpenWorld was the first since we announced 12c Cloud Control, and we were happy to share the stage with quite a few early adopters. Stay tuned for videos from our customers and partners; I will post them as they get published.
I met a number of platform administrators in Oracle - DBAs, Middleware Admins, SOA Admins... The cloud has affected them all, at least to the point where it prompted more than just curiosity. Most IT infrastructures are already heavily virtualized (on VMware and on others, including Oracle VM), and some would claim they are already on "cloud" (at least their sysadmins told them so). But none of them were confident of the benefits, because their pain points continued to grow. Isn't cloud supposed to ease those? Instead, they were chasing hundreds of databases running on hundreds of VMs, often with about as much certainty as Heisenberg would allow. What happened to the age-old IT discipline around administration, compliance and configuration management?
VMs are great for what they are. I personally think they have opened the doors to new approaches by which an application stack gets provisioned and updated. In fact, Enterprise Manager 12c is possibly the only tool out there that can provision a full-fledged application as VM Assemblies. At this year's OpenWorld, customers talked about how they provisioned RAC and Siebel assemblies, which, as the techies out there know, are not trivial (hearing that provisioning time for Siebel came down from weeks to hours was gratifying indeed).
However, I do have an issue with a "one-size-fits-all" approach to cloud. In a week's span, I met several personas:
- Project owners requiring an EC2-like VM instance for their projects
- Admins needing the same for Sparc-Solaris
- DBAs requiring dedicated databases for new projects
- APEX developers needing just a ready-to-consume schema as a service
- Java developers looking for a runtime platform
- QA engineers needing a fast clone of their production environment
If you drill down further, you will end up peeling back more layers of detail. For example, the requirements for load testing and functional testing are very different. For load testing, the test environment should ideally be the same as production; you shouldn't run production on Exadata and load test on a VM - they will just not be good representations of one another. For functional testing it possibly does not matter.
DBAs seem to be the worst affected of the lot. It seems they have been asked to choose between agile provisioning and faster runtime performance. And in some cases it is really a Hobson's choice, because their infrastructure provider made no distinction between the OLTP application and the virtual desktop! Sad indeed.
When one looks at the portfolio of services that we already offer (vanilla IaaS, VM Assembly based PaaS, DBaaS) or have announced (Java PaaS, Instant Cloning, Schema-aaS), one could possibly think that we are trying to be the "renaissance man"! Well, I might have accepted that, had it not been for the various personas I described above.
Getting the use cases right is very important for an application such as cloud management. We iterate over these again and again and re-validate them in CABs (Customer Advisory Boards). We consider all the major aspects of tenancy: service placement, resource isolation (can a tenant execute an expensive SQL statement and run away with all the resources?), quota and security. We, in Engineering, keep reminding ourselves that we are dealing with enterprise clouds. We owe it to our customer base!
In the coming posts, I will drill down more into each of the services. In the meanwhile, here are some collateral and demos to get started with EM 12c: http://www.oracle.com/technetwork/oem/cloud-mgmt/index.html
Sudip Datta
The views expressed here are my own and do not necessarily reflect the views of Oracle.
Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter
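To make those tenancy aspects a little more concrete, here is a minimal sketch of what a quota check at service-placement time might look like. It is purely illustrative - this is not the Enterprise Manager 12c API, and the zone names, quota fields and request shape are all assumptions - but it shows the kind of bookkeeping a cloud management tool has to do before it says "yes" to a self-service request.

    # Illustrative only: a toy model of quota-aware service placement.
    # None of these classes or fields come from Enterprise Manager 12c;
    # they just sketch the placement/quota bookkeeping described above.
    from dataclasses import dataclass

    @dataclass
    class Quota:
        max_instances: int    # e.g. databases or VMs a tenant may own
        max_memory_gb: int    # total memory the tenant may consume

    @dataclass
    class Zone:
        name: str
        free_memory_gb: int   # capacity left in this placement zone

    @dataclass
    class Request:
        tenant: str
        memory_gb: int

    def place(request, tenant_usage, quota, zones):
        """Return a zone for the request, or raise if quota or capacity won't allow it."""
        instances, memory = tenant_usage  # what the tenant already consumes
        if instances + 1 > quota.max_instances:
            raise RuntimeError("tenant is over its instance quota")
        if memory + request.memory_gb > quota.max_memory_gb:
            raise RuntimeError("tenant is over its memory quota")
        # naive placement: among zones that fit, pick the one with the most headroom
        candidates = [z for z in zones if z.free_memory_gb >= request.memory_gb]
        if not candidates:
            raise RuntimeError("no zone has enough capacity")
        return max(candidates, key=lambda z: z.free_memory_gb)

    # Hypothetical usage:
    zones = [Zone("zone-a", 64), Zone("zone-b", 16)]
    quota = Quota(max_instances=5, max_memory_gb=48)
    chosen = place(Request("tenant-42", memory_gb=8),
                   tenant_usage=(2, 24), quota=quota, zones=zones)
    print(chosen.name)  # -> zone-a

A real implementation would of course add the resource-isolation and security dimensions mentioned above (for example, capping what any one tenant's sessions can consume at runtime, not just at provisioning time), but the shape of the decision is the same.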

    Read the article
