Categories
AI, pedagogy, podcast, teaching and learning, tech

Podcast S2 Ep5: “Will AI revive the art of tinkering?”

My discussion with Miles Berry and Becci Peters is live on all good podcast platforms and here: pod.httcs.online/e/s2e05

Podcast thumbnail - Alan holding his two books.

The transcript follows below.

Alan: Hello, and welcome to How To Teach Computer Science, the podcast. My name’s Alan Harrison, and I wrote the books How to Teach Computer Science and How to Learn Computer Science, available in online bookstores. And if you like the podcast, you’ll love hearing me in person: visit httcs.online to find out more about my training and consultancy, and I could be speaking soon, live at your school on inset day, jokes optional. More details about this and book purchase links at httcs.online, that’s the initials of how to teach computer science dot online. Listeners to the pod get a special discount code too: just type HTTCSPOD on the checkout page at johncattbookshop.com to get 20% off everything. That’s everything, including classics such as Teaching WalkThrus by Tom Sherrington and the Huh series by Mary Myatt. And of course my two little books.

I’ve got no time for shenanigans today because I’ve got a 45 minute chat with two of the best people in computing education in the UK coming right up. 

If you are grateful for my blog, please buy my books here or buy me a coffee at ko-fi.com/mraharrisoncs, thanks!

Welcome to the podcast. And today I’ve got two brilliant guests. We’re going to talk about AI again, but it seems like it’s changing every day, so that’s good.

First of all, I’ve got Becci Peters from BCS. Morning, Becci. How are you? 

Becci: Good. Thanks. 

Alan: Great, thanks. Yeah. And also on the podcast today, we have Professor Miles Berry. How are you, Miles? 

Miles: I’m well, thank you, Alan. It’s lovely to be here and to see you too, Becci. 

Becci: You too, Miles. 

Alan: Good. Yeah great to have you both on to talk about, well, AI.

You might have heard about it. It’s in the news a lot at the moment. I’m trying to make this podcast something that teachers can listen to on the way to work and get something useful out of each day. And I just thought, can we cut through the noise today? Can we tell teachers listening to this what they need to know about AI? Miles, where should we start?

Miles: How long have you got there, Alan? Yeah. This is an impossible question to answer, but let’s at least make an attempt on this.

I think there are three aspects of this, just as we’ve got those three aspects, dimensions, whatever you call them, to our computing curriculum. So I would see those very much along the same lines of the foundations of AI, the applications of AI, and then the implications of AI, for us as individuals,

but also for our pupils and indeed for our society. And it might sound arrogant to suggest civilization, but who knows where we can go with this. So I think it’s worth teachers and indeed their pupils, their students having knowledge and skills around all three of those layers. At the moment, whenever we’re talking about AI, we seem to find ourselves talking about generative AI, but it is worth broadening the scope here and considering other aspects of machine learning, other aspects of artificial intelligence.

But the really cool stuff is all happening around generative AI in one form or another. So I think there is something there: teachers ought to know a little bit about what’s happening behind the screen, how these amazing machines do this amazing work, what it is that this is based on, a hand-waving notion of how the algorithms work, and that sort of unplugged understanding of what actually is going on here.

And then there’s a whole load of stuff around the applications of this. Very often this is what one sees on training courses and conferences and so on: look at all of these cool things that we can do with this. And this is very cool, and just having your eyes open to the different things that we can now use these tools to do is part and parcel of any sort of professional development, or indeed what we might want to do with our pupils. And then there ought to be also a stepping back and thinking about the implications of this. Yeah, saving a little bit of teacher time, a little bit of that sort of workload reduction, is no bad thing, but at what cost, and where do we spend the time saved?

Teachers still have to play a pivotal, vital role in the education of young people. What is the world that we are preparing them for going to be like? And of course, all of the sort of due diligence things around intellectual property and data protection and stuff around sustainability and stuff around bias.

I could go on, but I should stop. You might want to ask Becci the same question, or do I just pass on to Becci now?

Alan: Please do.

Miles: What do you think, Becci? What should they know about all of this?

Becci: I think you’re right. It is important to know about all the different aspects. I think, as you say, there’s all sorts of wonderful things that you can do with it.

So one of the things that I’ve been doing is making little short videos showing some of the free tools, because not every school’s got the budget to be able to buy into some of this stuff. So, showcasing some of the little things that you can do that will save a bit of time.

But it is worth noting that it’s not 100 percent accurate. Everything that you see that is generated by generative AI, take it with a pinch of salt, give it a once-over, and double check: one, do you want to use it in the first place? And two, does it need any kind of edits?

And then I think from the student point of view, they generally know more about AI than we do. TikTok is full of videos of different things that they can do, and that’s where they’re getting most of their knowledge, and that’s not how it should be. So think about teaching your students what it is, what the benefits of it are, but also what the risks of it are.

When should they and shouldn’t they use it? And if you need some free resources, CAS has some, so go check out the CAS AI website. 

Alan: Brilliant, I will do. One of the problems you mentioned there is the inaccuracy, the hallucinations and so on. So how can we ensure that teachers and students are being prudent with the tool and not picking up misconceptions, which we then have to iron out?
 

Becci: I think part of it is having that discussion with the students. Obviously, the age of your students determines what kind of AI they’re going to be allowed to use, though that doesn’t necessarily determine whether they’re actually using it.

We know that primary school kids are using it, but they’re not technically allowed to. The safe bet is that you as the teacher display something on the board while you’re all having a discussion, but you’re the one using it, so you’re not falling foul of any age issues, because most of the tools are 13 plus and some of them are 18 plus. Then you can have that discussion with the students and say, right, if I type in this prompt, this is what it gives me.

Now let’s discuss what it’s given back, whether that’s good or bad, and have a discussion about why, and really help them to understand what the dangers are of using it, and then have that conversation about when it’s appropriate. So if they’ve got some form of NEA, then they obviously cannot use AI.

And if they do, they need to be explicitly referencing that, and the safest way is just not to use it at all; the JCQ guidelines are so strict on that. Obviously they’re not going to have it in their exam, but if you’re setting some kind of homework task which is not NEA, there are no guidelines about whether it can or cannot be used.

Guaranteed, they will be trying to use it. So think carefully about the tasks you’re setting, and don’t just set “write this” or “answer these questions”, because they’ll just use AI to do it and they won’t think about it themselves.

Alan: Yeah, I think that’s important. Setting an essay homework, for instance, is probably dead as a means of getting them to think and explore, or as a means of assessment, because they are, yeah, then…

Miles: I’m going to get back to your question about how we should teach them to be able to tell. So the point of the essay is not the essay; it’s the process and not the product here. Assignments are not merely about assessment. We talk about summative and formative; I’d like to add another adjective into the mix there, constructive assessment, where we acknowledge really clearly that the point of the assessment is to provide an opportunity for learning to take place.

If you are going to set one of those eight-plus-mark questions as a homework, the point of this is not so you get an answer to the question; you can use the generative AI to get that answer. The point is for them to walk through the process: reading about this, bringing to mind all of their prior learning, marshalling their own argument.

We spoke before the call started about early morning activities. Respect to Alan, who ran to the gym before the call started. He could so easily have got in his car. Running there has so many advantages for him as a person, for the environment, and yeah, I suspect he’s a very safe driver, but there is far less danger of him killing somebody on his run than if he were driving.

Alan: No, just much more danger of me, much more danger of me slipping on the ice and breaking something personally, but there you go. 

Miles: Oh, that’s another weird thing. I think we’re torturing the metaphor if I take this too far. So, you know, there are occasions when the tools that we have, the technologies we as a society have built, make life easier for us. That doesn’t necessarily mean they make life better. And so there are occasions when, like running, like going to the gym, it is worth doing the hard work rather than taking the easy way out. We’ve got that message when it comes to personal fitness, present company excepted. But not necessarily yet with these cool shiny things: we become lazy, we take our eyes off the road and our hands off the wheel, because the machine is very good at doing much of this. So your question was around how we can teach them to tell, and this danger of hallucination?

And I think I come back to this notion of a knowledge-rich curriculum. That knowledge really does matter for this. Your ability to make sense of the response you get from the machine, to be able to tell whether that’s plausible or likely to be correct, and indeed your ability to even prompt well, is down to the knowledge you have of that particular domain.

So yes, it has read loads more books than any of us have, but we can only really make good use of these tools if we have the knowledge ourselves. And that includes the domain specific knowledge, which really does matter. But I think it also includes something around the knowledge of how the generative responses are, forgive me, generated.

And this sense of what the algorithm is here, I think, matters: that hallucination is built into the process because of the stochastic parrot nature of the way it is producing text, and that actually there are better ways of prompting. Retrieval augmented generation: give it the document to start with, and it’s way less likely to hallucinate as a result of that.
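For anyone who wants to try that grounding idea, here is a minimal sketch (my own illustration, not something discussed in the episode) of “give it the document to start with”: the source notes go into the prompt and the model is told to answer only from them and to show its working. It assumes the openai Python package and an API key in the environment; the model name, file name and question are placeholders.

```python
# A minimal sketch of grounding a prompt in a source document so the model has
# less room to hallucinate. Assumes the openai package and OPENAI_API_KEY are set;
# the model name, file name and question are illustrative only.
from openai import OpenAI

client = OpenAI()

source_notes = open("topic_notes.txt").read()  # e.g. your own revision notes

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer ONLY from the notes provided. If the notes do not cover "
                "the question, say so rather than guessing. Show your reasoning "
                "step by step before giving the final answer."
            ),
        },
        {
            "role": "user",
            "content": f"Notes:\n{source_notes}\n\nQuestion: How does a binary search work?",
        },
    ],
)
print(response.choices[0].message.content)
```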

Ask it to demonstrate its chain of thought, and again you’re likely to develop your own trust in this. Forgive me for a moment longer: I remember the days when Wikipedia came out. We started using this in schools and teachers were telling their pupils back then, you cannot trust Wikipedia.

It is made up by people. Now, here we are in 2025, and made up by people sounds like a really strong selling point for Wikipedia. But it developed a critical literacy of the content there, because you encouraged pupils to think, is this right? Is this just the result of some random person coming in and graffitiing a Wikipedia page?

This time it may be the machine that’s making stuff up, but again, returning to that sort of critical digital literacy about, okay, I can read this, but should I trust this? That will matter.


Alan: It’s interesting you bring up the example of Wikipedia there, Miles. I remember having this conversation with students who threw at me the “you can’t trust Wikipedia because anyone can edit it”, and there was a study done years ago where Wikipedia was pretty much on a par with Encyclopaedia Britannica for accuracy in most areas. The only pages you really can’t trust on Wikipedia are pop culture pages, which get updated by young people all of the time, K-pop bands that they love or hate and so on. And most of it is…

Miles: I know very little of this, Alan. Yeah, I remember the study, and the interesting thing was that the errors they had found on the Wikipedia pages were, I think, almost all corrected before publication. The errors they had found in the dead-tree printed encyclopedia were waiting for the next edition.

Alan: Yeah, exactly. You made the point there that perhaps something human-edited is now seen as of greater value than something AI-generated. Is that going to persist, do you think, or will the AIs just get better?

Becci: Well, they’ve already gotten a lot better, let’s face it.

Alan: Yeah, that’s true. 

Becci: We’re two and a half years in now, just, well, not quite, nearly, just over two years, they’ve already got significantly better than they were when they were first released to the world. 

Alan: Yes. I tried the one, can an anaconda fit in a shopping mall? And it said no, of course, anacondas are far too big to fit in a shopping mall. Stuff like that doesn’t happen anymore.

Miles: Stop putting your anacondas in shopping malls. It’s not a good idea. 

Alan: No, it genuinely did. 

Miles: I think there are things where we humans will continue to appreciate human added value. So I love the Suno thing, this “create me a song in the style of…”. I still enjoy listening to something which I have verifiable trust was the product of a human singer, of human artists.


And there are going to be a large number of areas where, yes, the machine may be better at this in some sort of measurable, qualitative, quantitative way. That doesn’t mean to say it’s something which we should just leave to the machines. I think teaching is going to be one of those things where, yes, the machine may be very good at setting tasks and marking work and so on, but there is a personal aspect to this.

And it is worth doing the thought experiment about what it is that makes us human beings, I want to say unique, but at least different from the AIs. It’s very good at faking loads of things, but there are, I’m sure, still things which for a little while longer yet are part of an almost uniquely human preserve, and some of that is around curiosity.

Some of that is, I think, around character. It has no set of moral values baked into the language model. Yes, guardrails are typically put in place, and I’m grateful for that, but that sense of, I’m doing this because this is the right thing to do. And there’s stuff in there around creativity.

And creativity is not just making something new, but it’s also about participation in a creative community. Yes, I am, of course, an enthusiast for these technologies, but I think it would be a shame if we lost sight of uniquely human value. 

Alan: Yeah, I’m thinking, when we talk about generative AI creating stuff, like you say, songs in the style of and so on, it makes me wonder if we will ever get those step changes in artistic style, or paradigm changes. Let’s say in music, rock and roll: when people first heard Elvis, there was

absolute gnashing of teeth among the old people, and the young were, yeah, this is for me. So that was a step change in musical taste. How is AI going to do that? It’s not, is it? We need the human input. And if you think about art, you think about the impressionist movement, which was absolutely rejected.

When Monet first exhibited at the Salon, it was like, what on earth is this? And now we all look at Monet and all of that with great affection, and that’s my favorite part of the National Gallery when I wander in, when I get a few minutes in London. But I can’t imagine that step change in some kind of art, a new paradigm emerging, if we’re leaving it all to AI, which is derivative, isn’t it?

Miles: I think you may be onto something. It’s worth bringing this home into the classroom, into schools, and thinking, okay, if we still value that sort of amazing human creativity of thinking in a way that has never been thought before, what should we do in the classroom, what should the education system do, to nurture that combination of creativity and curiosity and intention and determination?

These things, I’m sure, matter as we go forward. I don’t want to say never for the AIs, but I think you may be onto something. It’s worth looking at what’s going on in science. These technologies, AI rather than merely generative AI, have transformed so much of science. Have a look at what our friends at Google DeepMind

are doing with AlphaFold, identifying the structure of proteins given the amino acids, just by trying out the combinations, sorry, there’s more to it than that. Look at what they’re doing with their weather forecasting, where it’s better than our current atmospheric-model-based approaches to weather forecasting.

Science is changing because of this, but at the moment, as far as I can tell, it isn’t like commissioning original experimental research. It doesn’t have that sense of moving forward beyond the bounds of current knowledge and understanding, of coming up with new theory and new areas for exploration.

Maybe ChatGPT 6 will be there, but I suspect that might take a little while longer yet.

Alan: Coming back to where I started this conversation: let’s talk about the practical aspects of AI and what teachers can do. So I’ll come back to Becci and say, right, what can we do in the classroom that’s really valuable with the AI tools that we’ve got?


Becci: Obviously, you can use it with different aspects of lesson planning. If it’s a particularly stale topic, you might want to get some ideas about how you can make it a bit more engaging. It’s great at coming up with ideas, especially when you’re a really tired teacher and it’s that time of the day or the week or the year or whatever it might be.

And you’re just like, I can’t think of any ideas, I’ve run out of creativity. You just need to ask ChatGPT or whatever to come up with 10 ideas for teaching whatever topic it is you want to teach, and see what it comes up with. You can then ask for more detail on any of them, and it can plan the entire task for you. It’s quite good.

Marking and things, I don’t think it’s quite there yet. I think we’ll get there, but I don’t know when; there are people experimenting with it, but I don’t think it’s quite there yet. One of the things I was playing with this week that I really like is Brisk Teaching, a Google Chrome extension which is free, can do all sorts of wonderful things, and is specifically made for teachers. One of the things it can do, I learned about this at BETT actually, is this: if you’ve got your lesson materials on whatever topic it might be, you can create a “boost engagement” activity that Brisk just takes over for you.

Basically it takes your lesson materials, so maybe it’s your slides or your worksheet, whatever it might be, and it gives each student their own individual chatbot about that topic, which will talk to them and check they understand the content. But you as the teacher then get a breakdown of all the students who are doing this and what percentage of them are engaged in it.

And for each of the learning objectives in the lesson, it will give you a breakdown as to whether they’ve not done that bit at all, whether they partially understand it, or whether they’ve completely nailed it. I think it’s a really nice thing that you can do as homework, where

you know exactly what the students are doing, and you can see all of the conversations they’ve had with the chatbot as well. So in that sense it’s pretty safe: they’re going to use AI for their homework anyway, but they can’t cheat and get it to do the work for them.

They’re just going to have the conversation. You don’t have to mark it, because it’s going to do all that, but you can go in and have a look at the conversations and double check. If a student is showing all reds for all the learning objectives and you’re thinking, why is this student not getting it,

you can go in and have a look at that student’s conversation, see what the misconceptions are, and then obviously address it. So there are all sorts of cool things that you can do. There are a lot of these kinds of wrapper apps; I’m not going to name them, but there are a few of them about, and you can get free versions

or paid versions, and Brisk is one of them, and they are quite useful. But I do find that the generic generative AI is better, partly because as a teacher you’re having to learn how to prompt it effectively, and partly because you’re not restricted in what you can get it to do. Among the wrapper apps, I don’t know of anything that has that feature like Brisk does, where the students can have the conversation and you can track all the kids’ progress.

But all the generic things like make me a lesson plan, make me a worksheet, whatever, you can do all that with the generic stuff anyway, but you’re going to learn how to prompt it. So I feel like the generic way forward is definitely better. 

Miles: If your school is willing to fund the premium subscriptions to ChatGPT or the equivalent other language models, it’s worth playing with creating your own custom GPT or custom chatbot there. You can give it very specific system messages and a knowledge base, and then create a bot which your pupils, over the age of 13 of course, because terms and conditions still apply, can interact with. Again, check the intellectual property rules there.


Provide it with a version of an exam specification, provide it with example exam questions and the mark schemes and all of that sort of thing, check the terms and conditions, and allow it to enter into a conversation to support your pupils or to challenge your pupils. I love that idea of the customized one-to-one chatbot, and being able to pull the assessment data out of that is really powerful. This is, again, a thing which teachers could do for themselves in a way which is very specific to their particular context.

The assessment data is really powerful, but this is, again, a thing which teachers could do for themselves in a way which is very specific to their particular context. But in terms of a teacher’s own generative AI skills moving beyond the sort of basic prompt response window to fine tuning it, creating an language model based application is well worth experimenting with. I think some of the most exciting stuff happens when our pupils start interfacing with this. So whilst I have issues with getting ChatGPT or its equivalent to mark a pupil’s work, it’s a whole other matter if they ask for feedback on their work, because it’s their work.

They own the intellectual property in it, assuming they didn’t make ChatGPT or its equivalent do the work in the first place, and it empowers them to take more charge of that educational process. There are lovely examples of “read through my notes here, tell me if I’ve still got any misconceptions” or “identify my knowledge gaps”.

That sort of personal tutoring comes back to our human values about nurturing pupils’ own curiosity and trying to rekindle that joy in learning. So lots and lots of things are actually entirely achievable now because of this amazing technology.
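As a rough illustration of that custom chatbot idea, here is a minimal sketch, assuming the openai Python package; a real custom GPT is configured through the ChatGPT interface rather than in code, so treat this as the API-flavoured equivalent. The specification extract and the tutoring instructions are invented placeholders.

```python
# A sketch of a revision chatbot: the system message carries the "knowledge"
# (a made-up spec extract) and instructions to quiz and hint rather than answer.
from openai import OpenAI

client = OpenAI()

spec_extract = (
    "Topic: searching algorithms. Pupils should be able to describe and "
    "compare linear search and binary search."
)

system_message = (
    "You are a friendly revision tutor for GCSE Computer Science pupils. "
    f"Base everything on this specification extract: {spec_extract} "
    "Ask one question at a time, give hints rather than full answers, and "
    "gently point out misconceptions. Never write answers the pupil can copy."
)

chat = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": "Can you quiz me on binary search, please?"},
    ],
)
print(chat.choices[0].message.content)
```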

Alan: Yeah, I think the personalization is probably the most exciting feature of it, if we can capture that. Because of course, what do we want to achieve in the classroom? We want to make the learning relevant and accessible, and yet we have a classroom of 30 pupils, all with very different backgrounds and interests. So we do our best, and we wander the classroom and we try to know our children.

And of course, there’s that pressure: oh, you’ve got to have a relationship with all your children and know what they do. I remember reading something a few years ago from an American teacher, and he said, oh well, I have an index card on every student and I write down their favorite sports team and their favorite… and I’m thinking, an index card on every student? He said, when I have a meeting with that student coming up, I’ll get the index card out, and then I’ll say to the student, hey, great Bears game, or whatever it was, and I’ll relate to that student. That’s just not possible in any meaningful sense for a human to do. And I remember teaching, I think, 300 pupils in one year was the most that I saw. So we can’t do that, but AI can, of course.

Miles: It’s really good at summarizing data. You of course need to play by the rules of the Data Protection Act and GDPR and anonymize this data, unless you’re working in a very secure environment. But if you give it a spreadsheet full of how well kids have done on all of the end-of-lesson, end-of-topic tests that they’ve done, it will analyze that.

It will produce all of your lovely visualizations, but it will also look for the interesting patterns there: several of these pupils have still not got this particular idea, it would be worth revisiting this. Good teachers can do this for themselves, but it’s really hard to do when, as you’re saying, you’re teaching 300 kids, and the AI is very good at that sort of working with large amounts of data and coming up with the patterns and the exceptions.
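Here is a minimal sketch (my own example, not a tool mentioned in the episode) of that sort of pattern-spotting done directly in pandas rather than by prompting a chatbot: anonymised end-of-topic scores go in, and a shortlist of topics that several pupils still haven’t grasped comes out. The file and column names are assumptions.

```python
# Spot topics worth revisiting from an anonymised spreadsheet of test scores.
import pandas as pd

scores = pd.read_csv("end_of_topic_scores.csv")  # columns: pupil_id, topic, percent

summary = scores.groupby("topic")["percent"].agg(
    mean_score="mean",
    below_50=lambda s: (s < 50).sum(),  # pupils still struggling with this topic
)

# Topics with the most struggling pupils float to the top
print(summary.sort_values("below_50", ascending=False).head())
```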

Alan: We briefly skimmed over marking just now, and I had this conversation on LinkedIn last week where someone was advocating AI marking. I said, well, look, if you’ve already taken the grunt work out of marking, if you’re not taking the pile of books home and ticking everything and then writing “what went well” and “even better if” on every book, if you’ve replaced that with whole-class feedback, where you maybe skim the work and create a slide of misconceptions that you spotted and things that the class could improve, and then you give them the work back and say, right, these are all the things I’ve seen, go and improve your work. That’s what I ended up doing, and so 90 percent of the work was gone. So if you’ve already moved away from traditional marking to something like the valuable tasks I’ve just explained, whole-class feedback, there’s very little left to automate.

And what’s left is the human bit that we don’t want to automate. I’m frightened that we’re doing that thing; there’s a meme that went round: I seem to be doing the laundry and the cleaning while AI writes the music and the artwork. We’re in danger of going down that road where AI is doing all the fun stuff and we’re doing the grunt work, instead of the other way around. We’re taking the human out of the wrong bit of the process.

Miles: I am becoming more confident in its ability to award grades correctly. It does seem to be down to exactly how much detail you give it in the prompt; I have no hard data to go by here, but my feeling is that it’s pretty good at that.

It’s really good at giving detailed, personalized feedback to students. So at Roehampton, we’ve spun up a thing which allows a student to upload a draft of their academic assignment alongside the assignment brief and get really detailed feedback on how they’ve addressed the brief, the quality of their writing, and so on.


Way more so than me or, I think, almost any of my colleagues would do in advance of the assessment deadline. This seems like a really good use of the technology, saving some of our workload, but much more importantly improving the quality of our students’ writing. My colleague has put very good guardrails in place: it won’t rewrite sections and it won’t suggest a grade for the work. It will apparently give a recipe for chocolate cake if you want it to, but broadly speaking it’s staying within the bounds that it’s been given.

The whole business of marking their essays and giving them feedback on their essays, we’re saying we still have to do that work, because these are decisions of significant effect and a human has to be kept in the loop at that point. And the same applies for the awarding organizations, the exam boards, at the moment: other than for multiple choice items, Ofqual’s rules are that you have to have human oversight of the marking process for GCSE and A level.

I think rightly so. The other point I would make is about motivation. How many primary school kids or teenagers are going to want to write an essay, do a homework, fill in an exam paper, to get feedback from the robot at the end of the day? The motivation is, I want my teacher to see what I have learnt, what I can do.

The human aspect, my teacher has read my work and thinks this about it and suggests this as where I go next, I think is still our preserve. I did ask this question of a year group of 11-year-olds that I was working with at the start of a lovely term-long cross-curricular project around artificial intelligence.

That’s for another time. And their response was, it depends on the feedback. But if the AI gives us very warm and constructive feedback, we’d quite like to have that, please. A teacher just crossing out everything that we have spelt wrong, not so much. So their view may be rather different from my own view.

What do you reckon, Becci? 

Becci: I think it does depend on, as you say, what it is that’s being assessed and how that relates to the teacher. If it’s multiple choice questions, we don’t need AI for that anyway, but you do need tech for students to be able to get immediate feedback, and that’s great; it doesn’t necessarily need AI to do that. It depends on the questions, but if it’s something where the students write an open-ended answer, then yeah, you could use AI. But as you say, it depends at what stage. If it’s just a simple in-class task where they just need to do it, and then the whole-class feedback is generated and the teacher can view it, then I can see the benefit in that, especially if, as Alan said earlier, you’re teaching 300 kids in a week sort of thing.

I think where you’ve got the danger is when it comes to things like GCSEs, because that has a major impact. In one sense it would be great, because you would have so much data to train it on that maybe it would be fairly accurate, but I don’t think anybody would consent to it being AI only.

You still need that human oversight as well.

Alan: Yeah, I totally agree. Yeah, I’m just really frightened of taking the human out completely. 
 

Just coming back to a practical use of AI again, where it can add value. I was coding last week and I thought, oh, I wonder if I can code something in Flask, which is a Python web framework, and I thought, oh well, I’ll just ask Copilot. And within the hour I had an app running which had a built-in Python IDE and did some stuff like checking code for readability. And I thought, wow, and I did that in a couple of hours. This wouldn’t have been possible if I’d just sat reading books about it; it would have taken me about a year to get to this point. So I’ve now got this idea for an app and the basic code, and I’m going to finish it in the next few weeks, having used ChatGPT and Copilot to get to this point. So that made me think, could you-

Miles: You’ve got the knowledge already, and this helps, so this makes a big difference. VS Code’s Copilot integration is phenomenally good, and the integration between VS Code and the ChatGPT app running on the desktop is really good as well, so it will help do these things. And that, I think, is something which we should try bringing into the classroom: exposing pupils, over the age of 13, terms and conditions again, to working alongside these tools, which are so very good at helping with that software development process.

I think there is still foundational knowledge that you have that allowed you to make a start with this, to understand what it was trying to do, to tweak it in particular ways, to give it feedback.

Alan: I think you’re right. I hadn’t really thought about the level of knowledge I needed to be able to ask the right questions. And I hadn’t thought about how easy it was for me to take the code and put it together in a website with HTML, CSS and JavaScript and so on. I understood the basic structure of a website, so it wasn’t difficult for me to then plug the code into the right places. So I guess I’ve suffered the curse of knowledge there, haven’t I? I didn’t know what I already knew.
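For anyone curious what that kind of Copilot-scaffolded starting point looks like, here is a deliberately tiny Flask sketch (purely illustrative, not Alan’s actual app): one route serving a code box and one running a toy “readability” check. A real version would embed a proper editor and much richer checks; everything here is an assumption for illustration.

```python
# A toy Flask starting point: paste code into a form, get a trivial readability check.
from flask import Flask, request, render_template_string

app = Flask(__name__)

PAGE = """
<form method="post" action="/check">
  <textarea name="code" rows="10" cols="60"></textarea><br>
  <button type="submit">Check readability</button>
</form>
"""

@app.route("/")
def editor():
    return render_template_string(PAGE)

@app.route("/check", methods=["POST"])
def check():
    code = request.form.get("code", "")
    long_lines = sum(1 for line in code.splitlines() if len(line) > 79)
    return f"{long_lines} line(s) are longer than 79 characters."

if __name__ == "__main__":
    app.run(debug=True)
```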


Becci: So I saw somebody post on LinkedIn that they had no knowledge of code, and I don’t know what no knowledge of code means, whether they genuinely mean nothing or maybe the tiniest little bit, but they said that within a few hours they’d managed to create a website. Now, I haven’t seen the website.

I didn’t read the LinkedIn post that closely, but if it is possible to create something with no knowledge of the code, where does that take us? Maybe that’s a whole other podcast episode, Alan, but I think it’s really interesting. We always talk about this: you’ve got to have the domain knowledge. And I think that’s definitely true, but it does make me wonder, if you don’t have the domain knowledge, what can you make?

Alan: I think it is staggering how much you can make without really knowing anything about coding, and I think it is totally possible. But that brings me to something I was reading the other day, which is of course CT 2.0 from Matti Tedre and Peter Denning. CT 2.0 was Matti’s name for this new style of computational thinking, which isn’t thinking algorithmically, designing an algorithm to solve a problem.

It is deciding on what kind of model you need to put together, how to train it, and how to turn something like a neural network into a useful function. And computational thinking is going to change, because we’re moving from procedural algorithms to data-driven algorithms. How does that relate to what we just said? Sorry, I’ve gone off on one now.

Miles: No, no, not at all. I think we’ve still not quite fixed what we mean by computational thinking 1.0, so I’m just delighted we’ve released a new version of this; I’m very much an early adopter of these things. If your definition of computational thinking is, as some exam boards seem to promote, oh, it is abstraction and algorithms and decomposition and pattern recognition, learn these definitions and you will be fine on those questions, then you have missed something over the last, I don’t know, what is it, getting on for 20 years.

It is about the thinking that comes before the coding. It’s the stuff you do before you put your fingers on your trackpad or on the keyboard or whatever. And as long as we are thinking of computational thinking as the thinking that precedes the computation (thinking, computation, I don’t know), then we’re fine.

It’s just that the toolbox we will use to solve problems computationally isn’t so much sitting in front of an editor and typing lines of Python which exhibit repetition and iteration and sequence. It’s much more about finding really good representative training data and choosing the right machine learning…

I’m going to have to use a word here, aren’t I? Algorithm. So that may still be a little bit relevant, to make sense of that data and to build a model that links input to output. All of that I have to do in my head or on a whiteboard or on paper or in a notepad before I actually start gluing these pieces together, that is, writing, instructing the AI to build this system for me, or whatever the actual hands-on work looks like. That still is computational thinking.
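To make that procedural-versus-data-driven contrast concrete, here is a minimal sketch (my own example, not taken from the CT 2.0 paper): instead of hand-writing the rules, you choose representative training data and a learning algorithm, and the trained model becomes the thing that links input to output. The tiny made-up dataset and the choice of scikit-learn are assumptions.

```python
# Data-driven rather than procedural: the "program" is a model fitted to examples.
from sklearn.tree import DecisionTreeClassifier

# Each example: [hours_revised, previous_score]; label: passed the end-of-topic test?
X = [[1, 40], [2, 55], [5, 70], [6, 80], [0, 30], [4, 65]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# No hand-written if/else rules; the fitted tree maps new inputs to outputs
print(model.predict([[3, 60]]))
```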

That still is computational thinking. I’m more than happy for Matti Tedra to label this CT 2. 0 because that does recognize that the way we solve problems with computers isn’t quite how it was when Jeanette Wing wrote her paper back in 2006. Some of these ideas, pattern recognition, pattern CT 2. 0, I’d have thought. The other thing, bear with me, so Becci knows the barefoot thing well. The lovely Barefoot Computational Thinker’s diagram, there’s that whole left hand side, which is the list I’ve just given, the right hand side of that diagram or that illustration of collaboration and perseverance and yes, debugging, whatever that means now, and all of that remains just as important in CT 2. 0 as it did in CT 1. 0 or in CT 0. 1 alpha or whatever the first version might have been. 

Alan: Tinkering springs to mind, yes.

Miles: Thank you, yes. That was the word I was reaching for: tinkering. Yes, tinkering very much. Isn’t the AI great at encouraging that? Let’s just try this approach to problem solving.

Alan: So, me designing my app. I mean, it’s even got some tentative names, like Six Pack of Code or Six Hack, because I’m going to ask people to write code six different ways to solve the same problem. All of that has run…

Miles: …from where she says, I always try to believe six impossible things before breakfast. Yes, six impossible things. I think Lewis Carroll is out of copyright; you could have Six Impossible Things as your website.

Alan: Six Impossible Things. That’s the name of the app, you heard it here first. OK, brilliant. But it was just tinkering, and it’s going to result in something, who knows what? Becci, should we just raise the profile of tinkering in the classroom?

Becci: I think so. I think, as Miles says, those bits down the right-hand side of the poster, I’m going to have to Google it and remind ourselves what’s on it, but I do think those are the important skills.


We know that students need to learn how to use AI, but we know that they need to learn the human stuff more, the stuff that AI won’t be able to do. So that collaboration, those bits and pieces. Here we go, I found it: it’s tinkering, creating, debugging, persevering and collaborating. Yeah.

Miles: I got, I got most of them.

Becci: You did. You did very well, Miles. But yeah, so I think that those are, as you say, those are the important things. Those are the things that do still apply. Even if you’re, you’re making something with AI, you can still create something. You can still collaborate. You might be working with another person.

You may be working with AI, and that’s still collaborating. Still having that debugging: is it doing what I want it to do? Tinkering and keeping changing things, and then persevering because it’s not doing what you’ve asked it to. You can still do all those things without necessarily doing those bits on the left: the logic, evaluation, algorithms, patterns, decomposition and abstraction.

So it’s definitely still important. 

Alan: So for the purposes of the podcast, I am sharing that computational thinkers poster from Barefoot, and I will put a link to it in the podcast notes. Yeah, so I think those approaches to computational thinking are still very important, but as you say, Becci, perhaps things like abstraction, decomposition and algorithms matter less. Does that mean that we have to throw out our curriculum and start again? Miles, you probably have an opinion on curriculum.

Miles: So I am a firm and unashamed believer in a knowledge-rich curriculum, although I’m starting to pivot towards knowledge based, thinking rich as where we head with this. So you need to know stuff, I’m sorry about that, but I think there is still stuff. When we were sat around the table doing the current programs of study, current for a little while longer yet, the quote that stuck in my mind was the thing from William Morris about interior decor. He says, have nothing in your house unless you know it to be useful or believe it to be beautiful. And I think as a principle, what is it, this is the Marie Kondo approach to curriculum design: it should spark joy. The stuff which gets kids excited ought to be part and parcel of what we’re teaching in these lessons, promoting a love of learning. Curiosity, I come back to this; that still matters.

There are foundational things which I think it’s worth knowing how to do by hand before you start using the technology to speed up and automate the process. I suspect we will still be teaching kids pencil-and-paper arithmetic and their times tables, despite the ubiquity of devices which will do all of that for us now.

What’s the equivalent over here in computing land? Do kids need to know about a bubble sort? Do they need to know about the difference between linear search and binary search? I’m not going to argue yes because, if they get jobs as software engineers, it’s very important that they choose the right algorithm; that seems the wrong way round. This is not vocational training for the software industry, because they’re going to get the box to do a lot of that. But there is something in there, it’s your six impossible things thing: there are two ways, several ways, to find the right number in an ordered list.

And one of those is way quicker than the other. That still seems worth teaching. That said, the technology landscape has moved on massively since 2012, and some recognition that the world has changed is, I think, worth having when it comes to rethinking what goes into a computing curriculum. There is, in the Prime Minister’s, what is it, the

AI Action Plan, a thing which says, well, the manifesto talked about digital skills for all, and the AI Action Plan talks about AI and digital skills for all. I’d love to know which bit of AI isn’t digital, but we’ll leave that for another time. So there’s a thing in there about broadening the scope of what we mean by these essential skills for everybody to now probably include AI.

And there’s a thing about DfE having to talk to DSIT about this, and DfE ought to jolly well have a look at what’s happened in South Korea. Not everything that’s happened in South Korea, but what’s happened there around software education, bringing the AI in at that level. If we do a redraft of the programs of study, there are certainly things I’d like to see go, but that’s for another podcast, Alan. The stuff I would very much like to bring in is this understanding of how AI works, how to critically consider its impact, and also how to actually use it productively for meaningful tasks.

Alan: Becci, do you agree? Do we need to change the curriculum? And if so, what’s in and what’s out? No, that’s another podcast.

Becci: I’ll be brief. I agree with Miles. Some knowledge is definitely still important, but I think for me the problem is testing students on recalling knowledge. I don’t think that’s the important bit; the important bit is applying the knowledge. So for me, it’s a knowledge base, but then very skills-heavy, whether that’s digital skills, creative skills, or applying the knowledge that you have to a situation: the more real-world stuff that the students can do. If qualifications assessed that, then students would be well set up for qualifications and for life, and surely that’s the way that education should go.

Alan: Yeah, you hear it all the time, don’t you? Oh, why do we need to know this? We can just Google it. And of course, yes, you can Google facts, but you can’t Google wisdom, can you? You know, what’s the old saying: knowledge is knowing a tomato is a fruit, but wisdom is not putting a tomato in a fruit salad, or something.


Miles: Absolutely right. This is about that capability; it’s a combination of their knowledge and their skills as well, and perhaps some sort of wisdom about what the right thing to do is, and the courage to do it. And my worry, certainly when it comes to assessment and the current GCSE, at least with one of the boards, is this removal of practical programming from what is actually assessed; that seems such a shame in our subject.

It feels like we’ve become something a lot more like physics, with required but not assessed practical work, rather than something a lot closer to D&T or music or art and design, where actually making a thing is the way you demonstrate your capability within the domain.
 

Alan: Well, I think we’ve just about covered everything I wanted to cover, but I do, annoyingly, want to come back to practical tips just one more time. What can listeners to this podcast do in the classroom on Monday? Give us one tip.


Miles: Very brief, and exactly what you’ve just asked me: PRIMM. PRIMM is utterly cool, but creating a PRIMM resource takes, like, expertise and time and so on. If you give it a program and explain to it patiently what PRIMM means, it will come up with a whole worksheet for you, based on the code that you have written, or code that it can write for you, of course. It starts with, what do you think this code will do? And it ends with, okay, now go and make something yourself. It’s got PRIMM, it can write code, it can work with code. If you want to try PRIMM out but can’t find the time to make the resources, get GPT to make these resources for you.
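Here is a minimal sketch of that tip; the prompt wording is my own, and the openai package, model name and example program are assumptions, but it shows the shape of the idea: hand the model a short program, explain what PRIMM stands for, and ask for a worksheet with one section per stage.

```python
# Ask a chat model to turn a short program into a PRIMM-style worksheet.
from openai import OpenAI

client = OpenAI()

program = """\
total = 0
for n in range(1, 6):
    total += n
print(total)
"""

prompt = (
    "PRIMM stands for Predict, Run, Investigate, Modify, Make. Using the Python "
    "program below, write a worksheet with one section per stage: ask pupils to "
    "predict the output, run and check it, investigate what particular lines do, "
    "make two small modifications, and finish with a short 'make your own' task.\n\n"
    + program
)

worksheet = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(worksheet.choices[0].message.content)
```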

Alan: Brilliant, brilliant. Becci, what do you think teachers could do on Monday after hearing this? 

Becci: I think the easiest thing is load up one of the free versions and have a discussion with it on the board and involve the students in the discussion. Find out what it can do. Scrutinise the outputs that it’s giving you. You don’t need to have any knowledge necessarily to do that, you can just open it up, start to have that conversation, involve the students in the discussion and go from there. 

Alan: Brilliant. I think that’s been amazing and I’m very, very grateful for your time this morning. Thank you very much, we must do another podcast about all the things we didn’t get onto at some point in the future, but for now, thank you very much, Becci and Miles. 
 

Becci: Thanks. Bye now.
 

Alan: So that’s it for another pod. Hope you enjoyed that. Don’t forget, I don’t get paid for this unless you kind people want to reward me in some way. You can visit my website, httcs.online, to find out how. Maybe you want to gift me a WordPress subscription, buy me a coffee, or buy one of my books. It’s all good. And I’ll speak to you on the next episode. Bye.
 



By mraharrisoncs

Freelance consultant, teacher and author, professional development lead for the NCCE, CAS Master Teacher, Computer Science lecturer.