In a previous post I wrote about the uncompromising artistry of Stanley Kubrick, who produced film classics at the cost of wildly unpredictable schedules and budgets. You can reach for similar brilliance in programming, but you had better do it on your own time or with a generous CFO. There is a different, more workable, and healthier attitude towards our craft: just keep at it, enjoy it, and don’t worry about making a dent in the universe. While reading Woody Allen’s autobiography over the holidays, I couldn’t resist drawing another cinematic parallel, this time with the veteran New York writer/director. Don’t worry, it will also be about coding.
Woody Allen (1935) is one of the most consistently prolific filmmakers in the business. He has written and directed more than fifty films in as many years, almost like clockwork, and at 85 he shows no intention of stopping. He doesn’t approach his oeuvre as a project with a culmination. His business is about keeping busy. He is in it for the long run, if only to act as an antidote to the unavoidable spectre of death and oblivion. But let’s not get into his glum outlook on life here.
Today I want to talk about the widespread Scrum practice of demoing new user-facing features to key stakeholders after every Sprint. I consider it a dubious practice because it can undermine a culture of quality. Neither the Scrum Guide nor common sense demands that you hold such a demo at regular intervals. The Agile 2 movement understands that turning teams into feature factories kills quiet reflection, adaptation and true agility.
There’s a lot that people think the Scrum Guide dictates which is in fact not in it. There are no story points, no poker planning sessions, and no Fibonacci ranges. Just as religions accrue articles of faith that are in no holy book, so Scrum has given rise to received practices based on wrong assumptions. One such belief is that you must hold a demo at the end of the Sprint. There is no such mandatory demo. There is a review, during which, in the guide’s own words, you inspect the outcome of the Sprint and determine future adaptations: “[..] The Sprint Review is a working session and the Scrum Team should avoid limiting it to a presentation.” There you have it, straight from the horse’s mouth.
SUMMARY: There’s plenty of free advice on how to become a better developer. You can safely skip much of it. You’ve probably heard it all before in different guises. Moreover, the scope of present-day IT is so huge that most advice has limited relevance. It’s rarely universally useful, and even when it is, knowing the golden tips is no guarantee that people live by them.
Blogs catering to developers are bursting with well-meaning bullet-point lessons on how you too can join the pick of the bunch. Tell-tale warning signs to help you spot slackers, sociopaths and generic time wasters in your team are another favourite. Many of these articles strike me as hunches based on shreds of personal and/or anecdotal evidence. Who are you to take this authoritative tone with me? I could think up my own Ten Commandments or Seven Deadly Sins easily enough. It would be a hodgepodge of stuff I experienced first-hand or read, ordered and filtered by my own subconscious preferences, hoping you, the reader, will find it useful. I won’t do it.
Pair programming has a long and respectable history. None other than Frederick Brooks of No Silver Bullet fame practised it as early as 1953. In terms of seniority it makes Scrum, CI/CD and BDD look like recent fads by comparison. But where many teams practise the latter with varying degrees of commitment and consequent success, pair programming has never been popular in any team I ever worked in. From personal experience I am not surprised. I must have racked up no more than a week of solid pairing during 22 years as a developer, which was more than enough for my taste. What’s more, I wasn’t convinced of the alleged benefits.
SUMMARY: Pushing for test coverage is a bad incentive. It favors quantity over quality. You should prioritize your code with regard to test-worthiness. I discuss three such categories. First, there is code that processes unmanaged and possibly malicious (user) input. You should test this for robustness. Secondly, there is complex business logic with possible implementation errors. Test this for correctness. Thirdly, there is code that leans on trusted frameworks but doesn’t perform at scale. Run performance tests under realistic production loads to capture these.
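To make the first two categories concrete, here is a minimal sketch in Python. The functions `parse_quantity` and `bulk_discount` are hypothetical examples of my own, not from any real codebase: the first handles untrusted user input and is tested for robustness, the second encodes a business rule and is tested for correctness at its boundaries. (The third category, performance, needs realistic load generation and doesn’t fit a toy example.)

```python
def parse_quantity(raw: str) -> int:
    """Parse a user-supplied quantity, rejecting malicious or malformed input."""
    raw = raw.strip()
    if not raw.isdigit():
        raise ValueError(f"not a non-negative integer: {raw!r}")
    value = int(raw)
    if value > 10_000:
        raise ValueError("quantity exceeds sane upper bound")
    return value


def bulk_discount(quantity: int) -> float:
    """Hypothetical business rule: 10% off at 100+ items, 20% off at 500+."""
    if quantity >= 500:
        return 0.20
    if quantity >= 100:
        return 0.10
    return 0.0


# Robustness: unmanaged input must never crash or slip through unchecked.
for bad in ["-1", "1e9", "DROP TABLE users", "", "99999999"]:
    try:
        parse_quantity(bad)
        assert False, f"accepted bad input {bad!r}"
    except ValueError:
        pass  # expected: every malformed input is rejected

# Correctness: boundary values are where implementation errors hide.
assert bulk_discount(99) == 0.0
assert bulk_discount(100) == 0.10
assert bulk_discount(499) == 0.10
assert bulk_discount(500) == 0.20
```

Note how few assertions this takes: a coverage metric would reward testing every trivial getter equally, while these handful of boundary and rejection cases target exactly where defects cluster.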