Fertile Questions
for the Robust Programs chapter of “How to Learn Computer Science”.
Welcome
Thank you for buying my book! This page discusses the content in the “Robust Programs” chapter and answers the “Fertile Questions” I asked there. There are no perfect answers, however; you may even disagree with mine, but the point of a fertile question is to make you think.
Here are the questions, and my suggested answers. Do you agree?
What proportion of a computing project should be testing?
As always, there is no right answer; I’m asking the question to make you think. As we gather from the book, testing wasn’t really considered a separate activity from programming until the 1960s: it was synonymous with debugging, back when programs were small and predictable. Testing became important when programs grew larger and less predictable, and by 1979, when Glenford Myers wrote “The Art of Software Testing”, the industry standard was to spend about half the time coding and half the time testing — as much time testing as coding!
That 50% fraction has fallen in recent decades thanks to automated testing techniques, though studies suggest 40% is still common. But the point is this: you should decide, when designing the code, what tests to run, and these should cover all branches of selection statements, all subroutines and as wide a range of inputs as possible. In a complex program this could take a while!
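Here is a minimal Python sketch of what “testing every branch” means in practice. The `grade` function and its bands are my own illustration, not from the book: the test plan, decided at design time, exercises each branch of the selection statement plus the boundary values where behaviour changes.

```python
# A hypothetical grading function, used only to illustrate branch coverage.
def grade(mark):
    """Convert a mark out of 100 into a band."""
    if mark < 0 or mark > 100:
        raise ValueError("mark must be between 0 and 100")
    if mark >= 70:
        return "distinction"
    elif mark >= 40:
        return "pass"
    else:
        return "fail"

# The test plan: one test per branch, plus boundary values.
tests = [
    (0, "fail"), (39, "fail"),                  # lowest branch and its boundary
    (40, "pass"), (69, "pass"),                 # middle branch boundaries
    (70, "distinction"), (100, "distinction"),  # top branch boundaries
]
for mark, expected in tests:
    assert grade(mark) == expected, f"grade({mark}) should be {expected}"

# Erroneous input must be rejected, not silently accepted.
try:
    grade(101)
    assert False, "expected a ValueError"
except ValueError:
    pass

print("all branches tested")
```

Notice that even this six-line function needs eight tests to cover its branches and boundaries — which is how testing comes to consume 40–50% of a project.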
Which is more effective: black-box or white-box testing? Why?
It depends. (You’ve probably guessed by now that these Fertile Questions are not a multiple-choice quiz!) The two types of testing are very different: they will discover very different types of error.
White-box testing, where an experienced programmer investigates the code you have written, can drive out syntax and runtime errors through coding skill alone. For example, the reviewer might spot an “array index out of range” error (going beyond the end of an array) that could take a while to surface through black-box testing, as it would only show up for certain inputs.
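To make that concrete, here is a toy Python example of my own (the function is hypothetical, not from the book). The off-by-one bug only crashes for certain inputs, so black-box testing might miss it for a while, but a white-box reviewer reading the code can spot the 1-based/0-based index mismatch straight away.

```python
# A hypothetical lookup routine with an off-by-one error.
def nth_smallest_buggy(items, n):
    """Return the nth smallest value, counting n from 1."""
    return sorted(items)[n]          # bug: Python indices start at 0, so this
                                     # should be sorted(items)[n - 1]

def nth_smallest(items, n):
    return sorted(items)[n - 1]      # corrected version

marks = [62, 41, 87]
print(nth_smallest(marks, 1))        # smallest mark: 41

# The buggy version gives wrong answers silently for most inputs,
# and only crashes when n equals the length of the list:
try:
    nth_smallest_buggy(marks, 3)     # index 3 is past the end of a 3-item list
except IndexError:
    print("array index out of range")
```

A black-box tester who never happened to ask for the 3rd smallest of a 3-item list would see no crash at all — just quietly wrong results.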
Other errors may only reveal themselves through black-box testing. In the case of the Therac-25 incident (see LEARN page 70), no amount of white-box testing could have revealed the fault, known as a “race condition”, but more black-box testing might have done so, such as a test operator bashing every key in different combinations and noting the results!
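A race condition is hard to show on the page, but here is a toy Python sketch in the same spirit (my own illustration — nothing to do with the actual Therac-25 code): two threads update shared state, and without a lock the read-modify-write steps can interleave and lose updates.

```python
# A toy race condition: several threads increment a shared counter.
import threading

counter = 0
lock = threading.Lock()

def increment(times, use_lock):
    global counter
    for _ in range(times):
        if use_lock:
            with lock:
                counter += 1
        else:
            counter += 1   # read-modify-write is not atomic: updates can be lost

def run(use_lock):
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(100_000, use_lock))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(use_lock=True))    # always 400000 with the lock
print(run(use_lock=False))   # may be lower: the race loses updates
```

The nasty part, just as with the Therac-25, is that the unlocked version often produces the right answer anyway — the fault only appears under particular timings, which is why hammering the system with unusual input sequences (black-box style) is sometimes the only way to flush it out.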
Can we make our programs foolproof?
Sadly, no. As I said in the book (page 69) we can expect 500 errors per 10,000 lines of code, so that’s about 1 error every 20 lines! Our objective must always be to reduce these errors. First we carefully design and specify the program, with structure diagrams, flowcharts and pseudocode. Then we use structured programming and maintainability techniques: using lots of subroutines and commenting everything. Then we perform “dry run” testing, walking through the algorithms without even coding them. Modern software developers also write test plans during design and development, not afterwards. Then we test our code, with both white- and black-box testing of every module and line of code, with every possible input (or as near as we can), and we repeat this process whenever anything changes. But we can never be sure there are zero bugs.
What are all the ways in which computer systems can fail?
Consider self-driving cars, the Mars rovers, or a social media app.
A computer system can fail when it encounters something the programmer didn’t anticipate. This is why a key objective of defensive design is anticipating misuse. Misuse can be accidental, such as a doctor entering a patient’s blood pressure incorrectly, or deliberate, such as a hacker trying to guess a password. The doctor’s program should first validate the input (no human has a blood pressure higher than 380, and anything over 180 is an emergency!), but it could also check a patient’s blood pressure against their medical history. If the new reading is way out of line with historical readings, it could raise an alarm, and the doctor would double-check. And password-guessing should be defeated by limiting attempts, adding a time delay or popping up a “Turing Test” CAPTCHA before allowing more attempts.
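The blood-pressure checks above might be sketched in Python like this. The function name, thresholds and messages are my own illustration (the 380 and 180 limits come from the paragraph above); a real medical system would of course be far more careful.

```python
# A sketch of the defensive checks described above: range validation
# first, then a plausibility check against the patient's history.
def check_blood_pressure(systolic, history):
    """Validate a systolic reading and compare it with past readings."""
    if not 40 <= systolic <= 380:          # range check: reject impossible values
        return "rejected: not a plausible reading"
    if systolic > 180:
        return "accepted: EMERGENCY - seek immediate attention"
    if history:
        average = sum(history) / len(history)
        if abs(systolic - average) > 40:   # way out of line with past readings
            return "accepted: alarm - please double-check"
    return "accepted"

print(check_blood_pressure(500, []))           # impossible value: rejected
print(check_blood_pressure(120, [118, 122]))   # normal: accepted
print(check_blood_pressure(165, [118, 122]))   # suspicious jump: raise an alarm
```

Note the layered defence: validation stops impossible data outright, while the history check catches data that is *possible* but *improbable* — exactly the accidental misuse a typo produces.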
In the case of the self-driving car, it’s impossible to “train” the program on everything it might encounter, hence current systems require a human driver to stay alert at all times and take over at a moment’s notice. A tyre blowout, other drivers’ erratic behaviour and extreme weather conditions all currently cause problems for self-driving cars.
The Mars rovers must operate without human input, as radio signals can take up to 20 minutes to reach Mars and another 20 minutes for the reply! So the programs must be pretty robust, allowing the machine to handle rocks in its path, wind, dust storms and unexpected ground conditions on its own.
A social media app must continue to be useful offline, e.g. allowing the user to browse recent posts and queue their own updates for when they are back on the internet. It must manage thousands of updates every second for busy timelines, and ensure security and privacy are never compromised. Imagine the horror if your DMs went public!
As I said in the book (page 71),
Modern robust programming includes anticipating misuse through authentication, sanitisation and validation, plus a formal development methodology such as agile, structured programming techniques focused on modular, maintainable code, and a rigorous testing regime.
How to Learn Computer Science, page 71
Remember, if you haven’t got the book yet, it’s available at Amazon and all good stores; check the home page for links. And if you enjoy my work, why not buy me a coffee?
