You can’t have missed this story:
I’ve read this story on several websites now and become enraged at the lack of technology insight provided. Why? Because how we discuss technology and its impact on our world matters. It really does. Tech is changing the world in ways we do not yet understand, so we need to explain it well, both in news stories like this and in the classroom.
So let me start with the headline of this blog. Alexa did not “tell a child to touch a live plug with a penny”. Alexa outsourced the query to an internet search, having found no adequate response in its database. The girl will have heard the following: “Here’s something I found online…” before Alexa proceeded to read a web page published by someone else, not Amazon.
In this way, the girl has basically performed an internet search. Nothing more, nothing less. The girl and her family, and the journalists who love a good “tech gone bad” story, lapped it up and reported it as a “fault with Alexa”. Amazon was quick to claim it had “quickly fixed an error”, which just means it changed the heuristic for web pages that mention the coin challenge, so they will no longer be selected as an appropriate response.
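For classroom discussion, the fallback behaviour described above can be sketched in a few lines of Python. To be clear, everything here is an illustrative assumption, not Amazon’s real implementation: the function names, the tiny knowledge base, and the blocklist are all invented for teaching purposes.

```python
# Hypothetical sketch of a voice assistant's fallback flow.
# NOT Amazon's actual code; all names and logic are illustrative.

BLOCKED_PHRASES = {"penny challenge", "outlet challenge"}  # assumed blocklist

KNOWLEDGE_BASE = {
    "what is the capital of france": "The capital of France is Paris.",
}

def web_search(query):
    # Stand-in for a real web search; returns the top result's text.
    return f"Here's something I found online about '{query}'."

def is_safe(snippet):
    # A crude heuristic of the kind Amazon likely tightened: reject
    # results that mention known dangerous "challenges".
    text = snippet.lower()
    return not any(phrase in text for phrase in BLOCKED_PHRASES)

def answer(query):
    key = query.lower().strip("?")
    if key in KNOWLEDGE_BASE:
        return KNOWLEDGE_BASE[key]   # curated, assistant-endorsed answer
    snippet = web_search(query)      # fallback: merely relay the web
    if not is_safe(snippet):
        return "Sorry, I can't help with that."
    return snippet
```

The point of the sketch is the branch structure: the curated answer and the relayed web snippet travel through the same `answer()` function, so the listener hears both in the same voice, which is exactly why the “messenger” distinction gets lost. Amazon’s “fix” corresponds to adding a phrase to `BLOCKED_PHRASES`, not to changing what the web page says.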
There are two issues with this reporting:
- The only popular web pages about the “penny challenge” or “outlet challenge” are those warning parents about it, such as this one from CBS News. Like the “Tide pod craze” of 2018, this is largely an attempt by teens to freak out other teens and adults; the actual number of children and young people putting themselves at risk is probably vanishingly small. That is why, as adults, we must be careful not to amplify the risk with unnecessary overreaction, which, through the Streisand effect, only encourages more children to take risks.
- Blaming Alexa here is shooting the messenger. Alexa simply relayed the content of a popular web page, but because we have learned to trust algorithms to make better decisions than humans in so many aspects of our lives (which is a big issue, too big for this column), we are more likely to take advice from an AI such as Alexa than a static webpage we found ourselves (or at least we believe we found ourselves!). Alexa also divorced the content from its context, which was almost certainly the CBS webpage or one similar that gave a warning of the dangers of the “challenge”.
I gave a talk at a recent CAS meeting about the dangers of too much specificity in online safety. We need to stop talking about individual cultural phenomena and platforms, and start talking about behaviours. In this case, what actually kept the family safe was a basic understanding of the risk of mains electricity. But this would also have been a non-story if the family had understood Alexa’s role here as a mere messenger, relaying web content that was itself potentially unsafe, and not endorsing that content in any way.
As for the “journalists” reporting this as an “Alexa told my kid to do something bad” story… do better. And if you’re a teacher, I urge you to use this blog content in your classroom to have a discussion about the dangers of trusting AIs. And if you want to understand more about the issues and impacts of Computing for GCSE / High-school level teaching, read my book.