Source-control 'wet-work'?
- by Phil Factor
When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it
was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of
accident has become a rare event. If it weren’t for a deranged laptop, and
my distraction, the code wouldn’t have been lost this time. As always, I
sighed, had a soothing cup of tea, and typed it all in again. The new code I
hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details, so I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold in my head the details of all the columns and variables, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code, with its inevitable fumbling and half-baked ideas.
What a shame that technology is now so good that developers rarely experience the cleansing shock
of losing one’s code and having to rewrite it from scratch. If you’ve never
accidentally lost your code, then it is worth doing it deliberately once for the
experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have
been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and
the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ while changing trains at Reading station, before rewriting a much better version.
Now, any writer or artist is seduced by technology into altering or refining
their work rather than casting it dramatically in the bin or setting light to it on a bonfire, and rewriting it from
the blank page. It is easy to pick away at a flawed work, but the real creative
process is far more brutal.
Once, many years ago whilst running a software house that supplied commercial software to local
businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and
it was all hand-cut code. For us, it represented a breakthrough as it was for a
government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in
a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a
faulty tape drive. There were some fragments left on individual machines, but
they were all of different versions. The developers were in despair.
Strangely, I managed to rewrite the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant, universally applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it needed to support.
Yes, the code lacked architectural elegance and reusability. By dawn on Monday,
the application passed its integration tests. The developers rose to the occasion after I’d collapsed, and tidied up
what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the
delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest user-interface guidelines. (We switched to the Helvetica font to look more ‘Bauhaus’.) The client was so delighted that he forgave the new
bugs that had crept in. I still have the disk that crashed, up in the attic.
In IT, we have had mixed experiences of complete rewrites. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C, Borland made the same mistake with Arago and Quattro Pro, and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme
circumstances where no other course of action seemed possible. The rewrite
didn’t come out of the blue. I prefer to remember the rewrite of Minix by a young Linus Torvalds, or the rewrite of BitKeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision.
I’ll agree that one should often resist calls for a rewrite. One of the worst habits of more inexperienced programmers is to denigrate whatever code they inherit, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the belief of some motorcyclists that they are operating on infinite lives, or of the occasional squaddie that if he charges the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, what is wrong with considering a complete rewrite?
Rewrites are so painful in the early stages, until that point where one realises the payoff, that
even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control
systems, and disaster recovery systems, are just too good nowadays. If I
were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll
know I didn’t have the nerve to delete it and start again. There was a time when one prayed that unreliable hardware would deliver one from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God.
An old friend of mine, with long experience in the software industry, has for years nursed the idea of ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to hack into one’s own source control system and destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipe dream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on running all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? Alas, I can’t see many managers buying
into the idea.
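Purely for illustration, and far gentler than a true wet-job, here is roughly what such a quality gate might look like in practice: a minimal sketch of a git pre-commit hook, in Python, assuming flake8 is installed and using an entirely arbitrary threshold. It merely refuses sub-standard code rather than destroying it.

```python
#!/usr/bin/env python3
"""A tame take on the 'quality-gate wet-work' idea: a git pre-commit hook
that measures staged Python files with flake8 and refuses the commit if
they rack up too many style violations. Nothing is destroyed; the offending
code is simply turned away at the door.

Assumptions (not from the article): flake8 is available on the PATH, the
threshold of 25 violations is arbitrary, and this script lives at
.git/hooks/pre-commit with the executable bit set.
"""
import subprocess
import sys

MAX_VIOLATIONS = 25  # arbitrary quality bar for this sketch


def staged_python_files() -> list[str]:
    """Return the Python files staged for this commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]


def main() -> int:
    files = staged_python_files()
    if not files:
        return 0  # nothing to judge

    # flake8 prints one line per violation; we simply count them.
    result = subprocess.run(["flake8", *files], capture_output=True, text=True)
    violations = [line for line in result.stdout.splitlines() if line.strip()]

    if len(violations) > MAX_VIOLATIONS:
        print(f"Commit refused: {len(violations)} style violations "
              f"(limit {MAX_VIOLATIONS}). Consider the blank page.",
              file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

One could, in principle, replace the polite refusal with something more destructive, but, as I say, I can’t see many managers signing off on that.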
Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it
to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be
rewritten anyway. Was this an early case
of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.