Welcome
Fertile Questions for the Issues chapter of “How to Learn Computer Science”.
Thank you for buying my book! This page discusses the content in the “Issues” chapter and answers the “Fertile Questions” I asked there. There are no perfect answers, however; you may well disagree with mine, but the point of a fertile question is to make you think.
Here are the questions, and my suggested answers. Do you agree?
Is AI a force for good?
In the LEARN book I tell the story of Palestinian construction worker Halawim Halawi, whose innocent Facebook post was mistranslated, leading to his arrest by the Israeli security forces. It goes without saying that they should not have relied on an AI-generated translation before being so heavy-handed, but human nature leads us to trust computers more than we should. I also tell the stories of PREDPOL, California’s predictive policing model which sent police to mostly black areas, and Amazon’s recruitment AI, which favoured male candidates. Badly trained AI models can amplify existing human prejudices and can make existing problems of injustice worse, not better.
However, it’s not all bad news. This Forbes article lists 10 ways AI is being used for good, including cancer screening, saving bees from decline, using augmented reality to help disabled people, powering machine-learning models to research climate change, and maximising crop yield to combat hunger in the developing world.
My answer to the fertile question is: yes, but only if we actively work to make AI a force for good, which includes being aware of its potential to worsen inequality rather than reduce it if we don’t correct for human biases. I recommend Algorithms of Oppression by Safiya Noble as further reading, and this article. And Raspberry Pi has a great intro to AI for schools here.
Are there any positives to being offline?
From the LEARN book:
On Facebook I have another several GB of data, including family photos, groups and pages that I like, and a friends list. I’m happy to trade a piece of my privacy for the convenience of chatting with friends and colleagues, but I admit I came close to deleting my account in the wake of the Cambridge Analytica data scandal. Like many, I decided to stay for the convenience of sharing family pictures with my relatives and chatting with computing teachers in various groups. To live in the modern world means to choose where to draw that dividing line between private and shared data; to choose which technology companies you trust and how much of your life you trust them with.
How to Learn Computer Science, p220, Alan Harrison
Living your life offline will protect you from some risks: fake news, cyberbullying, exposure to harmful content such as pornography, hate speech and radicalisation, and having your data exploited and used to sell you products and services. But it’s a trade-off: there are benefits to being online, such as connecting with people, enjoying the content others have made and discussing it with like-minded people, and sharing your own content, the things you make and do. It can be an incredibly positive experience if you make some art or music and others tell you how great it is! Most of us live our lives partly online and partly offline these days, and try to find a healthy balance.
I recommend we all try to find the balance that works for us: use tools like the phone’s “Digital Wellbeing” settings or apps like Forest and Study Bunny, and take regular phone breaks to connect with people in the real world or just to have time for offline hobbies. I play guitar and write books; what’s your favourite offline activity?
Can we trust the big tech companies to keep us safe?
In the LEARN book I give examples of big tech companies wielding great power without much accountability: for example, Google will down-rank abusive images only if it chooses to; it’s not a decision that “we the people” make through our elected leaders. Is this enough? Most countries have decided that we need some legislation around online content, but they disagree on fundamental issues of what content to police and how to police it. For now at least, the tech companies have much power and little regulation. I think the answer to the question is currently “no”, but some governments are working on tightening regulations. The EU’s “Digital Services Act” is an attempt to curb the worst influences of big tech, and Amnesty International says it “moves us towards an online world that better respects our human rights by effectively putting the brakes on Big Tech’s unchecked power”. Expect to see more legislation like this that attempts to ensure powerful internet companies submit to popular democratic control.
How dangerous is “fake news”?
We heard in the book that conspiracy-driven terrorism such as QAnon poses a new threat to public security, according to the FBI. The COVID pandemic was made more difficult to manage by the spread of misinformation, including “masks don’t work” and “vaccines kill”. It’s important that we consider the balance of freedoms here: freedom of speech and expression is important, but left unchecked in the internet age it can also endanger public safety. Fake news can fuel international and domestic terrorism, worsen public health crises and much more. It’s certainly not a small problem; it is a dangerous consequence of how easy it is to publish digital content.
What is the environmental impact of a smartphone?
There are many factors to consider:
- Carbon footprint of manufacture, transportation (of both raw materials inward and finished goods outward) and of charging the phone every day – this is about 85 kg of CO2 in the first year, roughly the same as driving a family car from London to Glasgow (see the rough check after this list).
- Consumption of rare elements such as lithium and molybdenum, which are finite resources; unscrupulous mining companies also often exploit the workers who extract them.
- The impact of e-waste: toxic substances like lead, cadmium and mercury dumped in landfill can pollute food and water supplies.
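As a rough sanity check on the London-to-Glasgow comparison in the first bullet, assume a typical petrol family car emits around 0.13 kg of CO2 per kilometre and that the drive is roughly 650 km; both figures are my own assumptions, not numbers from the book:

```python
# Back-of-envelope check: is ~85 kg of CO2 really about one London-to-Glasgow drive?
# The emission factor and distance are rough assumptions of mine, not figures from the book.
PHONE_FIRST_YEAR_KG_CO2 = 85      # manufacture + transport + a year of charging (book figure)
CAR_KG_CO2_PER_KM = 0.13          # assumed typical petrol family car (~130 g CO2 per km)
LONDON_TO_GLASGOW_KM = 650        # approximate road distance

drive_kg_co2 = CAR_KG_CO2_PER_KM * LONDON_TO_GLASGOW_KM
print(f"London to Glasgow drive: ~{drive_kg_co2:.0f} kg CO2")  # roughly 85 kg
print(f"Smartphone, first year:  ~{PHONE_FIRST_YEAR_KG_CO2} kg CO2")
```

Under those assumptions the two figures come out close, which is the point of the comparison: a year of owning a phone is not nothing, but most of its footprint is in the manufacturing, not the charging.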
We can reduce our impact on the environment by not changing our phones quite as often, and by lending support to the “right to repair” movement, which demands that manufacturers make devices we can fix and keep running for longer.
Why is the Investigatory Powers Act 2016 also known as the Snoopers’ Charter?
Every country has its security forces; in the UK these are MI5 and MI6, which work with the police and the National Crime Agency to detect and prevent serious crime and terrorism. These organisations need robust powers to do this, including the right to intercept our mail and tap our phones. But when criminals use digital technology such as encrypted email and instant messaging, this can make the security forces’ job more difficult.
To address these concerns, the UK gave the security forces new powers to intercept and collect our messages and internet histories, and even to hack into our computers, if they believe a crime has been committed. But critics of the law believe it goes too far. Civil rights organisation Liberty said it would “undermine everything that’s core to our freedom and democracy — our right to protest, to express ourselves freely and to a fair trial, our free press, privacy and cybersecurity”. Critics suggested it “legalised snooping (spying on people)”, hence the nickname “Snoopers’ Charter”.
Civil society needs a balance between individual freedoms and police powers, and that balance is often tricky to strike.
How can we prevent algorithms from being racist or sexist?
In the LEARN book, I wrote that in June 2020 – amid worldwide fury at the murder of George Floyd, a 46-year-old black man, by a white police officer in Minneapolis – IBM, Amazon and Microsoft announced they would pause sales of their AI-powered face recognition technology to police in the US. The algorithms misidentified dark-skinned women nearly 35% of the time, while nearly always getting it right for white men. Alison Powell, a data ethicist in the UK, explained: “Face-recognition systems have internal biases because they are primarily trained on libraries of white faces. They don’t recognise black and brown faces well.”
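One practical way to expose this kind of problem is to report error rates per demographic group rather than a single overall accuracy figure. Here is a minimal sketch of such an audit; the test records and group labels are invented for illustration, not data from any real system:

```python
# Minimal sketch of a per-group error-rate audit for a face-recognition system.
# The records below are invented for illustration; a real audit needs a labelled test set.
from collections import defaultdict

# Each record: (demographic group, correct identity, identity the system predicted)
test_results = [
    ("dark-skinned women", "person_A", "person_B"),   # a misidentification
    ("dark-skinned women", "person_C", "person_C"),
    ("white men", "person_D", "person_D"),
    ("white men", "person_E", "person_E"),
    # ...many more test records in a real audit...
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, predicted in test_results:
    totals[group] += 1
    if truth != predicted:
        errors[group] += 1

for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.0%} error rate over {total} trials")
```

A single headline accuracy number hides exactly the disparity the researchers found; breaking the results down by group makes it visible.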
In 2018 Amazon shelved an AI recruiting tool which had been trained on CVs (resumes) previously submitted to the company, most of which were from men. In effect, because of historic human bias in the tech industry, the system taught itself that male candidates were preferable!
These are just two examples in the book of bias in algorithms. To help guard against this we must ensure AI is designed to be inclusive and trained on good-quality data that reflects all of humanity. But more importantly, as Hannah Fry writes in her book Hello World: “Perhaps the answer is to build algorithms to be contestable from the ground up. Imagine that we designed them to support humans in their decisions, rather than instruct them. To be transparent about why they came to a particular decision, rather than just inform us of the result.”
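To make Fry’s idea a little more concrete, here is a toy sketch of a decision function that reports its reasons alongside the result, so a human can inspect and contest them. The rules and applicant fields are invented purely for illustration; a real system would need far richer explanations:

```python
# Toy sketch of a "contestable" decision: return the reasons alongside the result,
# so a human reviewer can see why a recommendation was made and challenge it.
# The rules and applicant fields are invented for illustration only.

def screen_application(applicant: dict) -> tuple[str, list[str]]:
    """Recommend an action and report which rules fired."""
    passed, failed = [], []
    (passed if applicant["years_experience"] >= 3 else failed).append(
        "3+ years of relevant experience")
    (passed if applicant["has_required_certificate"] else failed).append(
        "holds the required certificate")

    if failed:
        return "refer to a human reviewer", [f"did not satisfy: {r}" for r in failed]
    return "invite to interview", [f"satisfied: {r}" for r in passed]

decision, reasons = screen_application(
    {"years_experience": 2, "has_required_certificate": True})
print(decision)          # refer to a human reviewer
for reason in reasons:   # the human can now contest each reason
    print(" -", reason)
```

The point is simply that the output includes the “why”, not just the verdict, and that a borderline case is handed to a person rather than decided silently.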

