TRANSCRIPT: Tales4Teaching ep. 65 – Rethinking digital writing: how AI is changing the game
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio before quoting in print.
Intro: Welcome to Tales4Teaching, a podcast where we explore stories with purpose in higher education. We will share expert insights, engaging interviews, and thought-provoking discussions that will inspire your teaching.
Joan: On behalf of Deakin University, I would like to acknowledge the Traditional Custodians of the unceded land and waterways on which you are located. I acknowledge the Wadawurrung people of the Kulin Nation as the Traditional Owners of the land on which this podcast was recorded, and I pay my respects to Elders past, present, and future. My name is Joan Sutherland and this is Tales4Teaching, brought to you by Deakin Learning Futures. Welcome to today’s episode, Rethinking digital writing: how AI is changing the game. In this episode, we’ll be delving into the world of AI and digital writing with our special guest, Lucinda McKnight. Lucinda is an expert in the field of using AI in digital writing and will share her insights on the ethical considerations surrounding the use of AI in writing, and how we can ensure that the technology is used in a way that benefits students. Welcome, Lucinda, thanks for joining us.
Lucinda: Hello, thank you for having me on.
Joan: Oh, you’re welcome. To get us started, can you just tell us a bit about your role and your research on digital writing?
Lucinda: I’m an Australian Research Council Fellow in Deakin University’s Research Institute for Educational Impact, and I am researching digital writing, especially in secondary schools, in secondary English education. I have a three-year, Commonwealth-funded Australian Research Council project where I am exploring what teachers think digital writing is, how the nature of writing itself is changing, and where we’re going to go with this in the future. As you can imagine, I started this project about a year ago, so I’m a year into it already. I spent last year, and the year before, trying to interest people in AI and in AI writers, and people were not terribly interested. I think that COVID has meant that some really big developments in the world have been on the back burner in some ways, but now ChatGPT has arrived and everyone’s interested. So here we are.
Joan: And ChatGPT is just one tool, isn’t it? You talk about AI as a concept, I suppose, and the different tools associated with AI as well.
Lucinda: That’s right. So for the past couple of years I’ve been playing with lots of different AI writers. People may not realise it, but an AI writer could write your essay or your research paper for you even, say, two years ago or so. Human-quality text that maybe needed a little bit more tinkering from a human to really polish it up was already being produced. What ChatGPT has done is put a chatbot, a kind of talking-to-you interface, on that service.
Joan: That’s generated a lot of interest because it’s getting more accessible to people, I suppose, for them to understand it as well.
Lucinda: Yeah, that’s right. It’s more accessible, but also I think it’s caught people’s imaginations. It’s really enchanted them and made them feel like they’re talking to a person. Once there’s this real anthropomorphizing of the AI, and you imagine it as a person you can actually relate to, all of a sudden it becomes very compelling, I think.
Joan: It does indeed, and we’re seeing that a lot in education as we speak. So, to talk about bringing English and literacy skills into the 21st century, since the world is digital, how can educators teach digital writing to our future students, given your work in secondary schools?
Lucinda: I just want to say at the outset that this is relevant to tertiary education as well. I think it’s a shame we’ve got such a hard boundary between our sectors, because the whole idea of writing and teaching writing, doing writing, authentic writing, digital writing, runs all the way through primary, secondary, and tertiary, and then into industry. So it’s relevant in a whole lot of different ways. So how do we teach digital writing? Well, I’m working with a model that talks about three different dimensions of digital literacy, or literacy in general: operational, cultural, and critical. The operational is about how we actually use it. How would we use ChatGPT? What’s a really strong prompt to put into one of these AI writers that produces the kind of writing you want it to do for you? And I don’t know if you’re aware, but there’s a whole industry that has sprung up already in prompt engineering, where people can pay for good prompts to be written for them. So there’s the operational dimension, and then there’s the cultural: how are these things being used out in the world?
So how are journalists and poets and writers and people out there actually using all these really great AI-related tools and services in creative, productive, and effective sorts of ways in their writing? And how can that be translated through so that the writing we’re doing in school or at uni is meaningful, like writing in the real world? That dimension looks at maximizing the potential of these things.
At the same time, what do we have to keep in terms of what you might call old-fashioned writing or fundamental writing skills? What still needs to be taught? Because if learning to write and learning to think are intimately interwoven, then what do we still need to teach, and how, if we want to become experts at working with these AI tools and doing this hybrid kind of human-AI writing? What do humans still need to know and be able to do? And the critical dimension is about saying, hang on a moment, what does this all mean? How much is this service costing? Who trained the service? What was it trained on? Did it have copyright permission to use all the material on the internet that it read in order to be trained to become an AI writer? What are the ethical implications of using these things? How can they be weaponized? How can they share disinformation at scale, at massive proportions? So all of this critical dimension has to be part of the whole picture of learning about digital writing as well.
Joan: You mentioned earlier around making it meaningful as well from a digital writing perspective, can you expand on that a little bit more?
Lucinda: I think meaningful is going to be a term that’s used a lot in the future, because AI doesn’t really know what meaning means. Humans understand context. They understand audience, they understand who they’re actually writing for. They have a rhetorical purpose; they know what they want to achieve in the world. For example, we have a national standardized assessment program, NAPLAN, which asks students to do meaningless writing. It just gives them a prompt like, here’s a box, write a story about it, with no purpose and no audience. That kind of writing, I think, is dead in the water. We need to be requiring students to do human writing that has those things, and even if it’s hybrid writing with AI, humans need to be able to bring the skills of creativity and empathy, the things that humans are really good at, to the table.
Joan: We’re definitely going to see a rising need for creativity, aren’t we? And it’s just different modes, and AI is just one mode that we can use to help with that digital writing. I know you’ve been in this space for quite some time now. So can you explain a little bit about how digital writing has evolved with the rise of AI, given that it’s been around for some time?
Lucinda: That’s a really good question, because I think some people don’t realise that these AI writers like ChatGPT are really just an extension of predictive text. Instead of predicting a couple of words or a little phrase that might come next, they predict much lengthier text that might come next. You’ve already been using predictive text on your phone, in your emails and all that kind of thing. So digital writing has been evolving ever since, I would say, the early 1990s, when the Windows interface made writing in things like Microsoft Word so much more popular and taken up by people. Already from those early days we were using things like spell check, and more recently grammar check programs, things like Grammarly. All of these things are components of digital writing: writing with a machine or robot kind of assistance, so to speak.
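[Editor’s note: for readers who want to see the “extension of predictive text” point concretely, here is a minimal sketch that was not part of the episode. It assumes the open-source Hugging Face transformers library and the small gpt2 model, chosen purely for illustration; neither is named by the guest. The same next-token prediction loop gives a phone-style suggestion when run for a few tokens and an “AI writer” style continuation when run for many.]

```python
# Illustrative only: a language model repeatedly predicts the next token.
# Short generations resemble phone predictive text; long ones resemble an "AI writer".
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small open model, demo purposes only

prompt = "Digital writing in the classroom is"

# Phone-style predictive text: ask for just a few extra tokens.
short = generator(prompt, max_new_tokens=5)[0]["generated_text"]

# "AI writer": the same prediction loop, simply run for much longer.
long_text = generator(prompt, max_new_tokens=120)[0]["generated_text"]

print(short)
print(long_text)
```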
Joan: With those different types of technology, with AI specifically, what are the benefits of this technology in terms of creativity and originality? Given you just mentioned the different dimensions, can you expand on that a little bit?
Lucinda: I think that ChatGPT, for example, is a fantastic brainstorming tool. You can ask it to come up with 20 ideas for whatever, and it will come up with lots of ideas for you to think about and evaluate. Then you can also ask it to evaluate the ideas, give it some parameters, and ask it to come up with rationales for different things, so you can play with it in a dialogical sort of way. It’s like a partner you can work with, test out ideas with, and explore ideas with. You can ask it to come up with three different plans for writing an essay on something, or you can put some ideas into it, maybe the basics of ideas, and then you can decide for yourself, you can evaluate, which of these three different ways is better. This is something that’s talked about a lot, the idea of bringing evaluative judgement to bear, because, as Margaret Bearman in Deakin’s CRADLE institute always says, only humans can really, ultimately decide what quality is. Humans can do this kind of work, and then you can use the tool in all sorts of different ways. Honestly, the number of really exciting, fabulous sorts of things I’ve seen. There are people writing plays and performing plays with AI as one of the characters in real time. There are poets carving poetry out of AI-written text as it appears on the screen. There are artists and all sorts of people working with AI content generators that will create images. There’s that great opportunity to work multimodally in writing, thinking about writing as composing with images and sound and text. I could go on and on, but I’ve got to stop.
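[Editor’s note: a minimal sketch of the brainstorm-then-evaluate, dialogical pattern Lucinda describes, written against OpenAI’s Python client (openai v1 style). The model name, prompts, and workflow are illustrative assumptions, not the guest’s own setup. The key idea is that the conversation history is passed back in, so the model can be asked to evaluate the ideas it just generated against parameters you set.]

```python
# Illustrative sketch: brainstorm ideas, then ask the model to evaluate them in the same conversation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [{"role": "user",
            "content": "Come up with 20 ideas for a Year 10 unit on digital writing."}]
ideas = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
idea_text = ideas.choices[0].message.content
print(idea_text)

# Keep the dialogue going: feed the ideas back and ask for evaluation against parameters.
history.append({"role": "assistant", "content": idea_text})
history.append({"role": "user",
                "content": "Evaluate these ideas for a mixed-ability class with one lesson "
                           "per week, and give a rationale for your top three."})
review = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(review.choices[0].message.content)
```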
Joan: I want to go and play with some of these now! What I’m hearing, and I’ve heard this a lot lately, is that everyone got excited with ChatGPT, and hearing what you’re saying, it now comes around to that evaluative judgement and that critical thinking side of things. How do we teach that, so that people are using different AI tools effectively and not just using them to pop in a prompt and generate an essay, per se?
Lucinda: Well, I think we have to look in the classroom at different kinds of prompts, different inputs, different outputs, how they relate to each other, and how we can evaluate what’s good and what’s not. But I want to come back to the critical side of things too, because there’s also this idea about plagiarism and the fact that these things can be used for cheating. And I think that’s a major consideration on that critical side, about the ethical use of them.
My prediction is, and I think it’s already the case, that because they can be used in so many different ways in preparing a piece of writing, an essay or an assessment, say, you can’t disentangle the human and machine components. In the future, I think it will just be understood, like with spell check or grammar check, that we are all writing with these things. But for the moment, it’s really difficult to work out how to ethically say, well, I acknowledge that I used this prompt, I received this output, and I used this output in this particular way. So we’re at this stage where we’re wondering how to do what might be called human badging of the text, and thinking critically about ways to do this that are fair in assessment circumstances.
Joan: It’s an important critical point that you raise in relation to those dimensions. And it does make me think about fairness and equitable resources as well, and where the data is actually coming from. Thinking about some of the key challenges facing teaching digital writing when working with AI, how can we overcome these obstacles to enhance the learning experience and empower educators to use AI when teaching digital writing?
Lucinda: Well, I think the first thing that educators need is time. They need time and access to these things, to play with them, to test them out, to experiment with them. And they need that time because this extraordinary shift is a massive game changer in the Fourth Industrial Revolution. They need time to take this on board, because otherwise I think people might try to put their heads in the sand and say, I just don’t have time to deal with this, I’m going to go on teaching like I’ve always been teaching. But that’s not going to cut the mustard, because the sorts of tasks that were routinely set in the past are just all too easily done with AI, and perhaps even should be done with AI. Then also, I think teachers, and all educators, need a lot of support in terms of understanding the dimensions of what these things are, but also the fact that these services are taking your data as you’re playing with them, taking your data away and doing things with it that you are not aware of. So when you sign up to ChatGPT, you’re entering into a binding legal agreement with OpenAI, the company that makes it. Teachers need time to take this on board and understand all of this as well, so they can model ethical use of these kinds of tools for their students.
Joan: It’s definitely a great point you make in relation to the modelling and having time. I’m seeing a real need for behaviour change as well. And the motivating factor is that there are so many tools out there, and you’re thinking, what can I actually do with them? It’s also something to think about around the cost of tools. ChatGPT is just one of them, and we think, well, it’s a free tool, let’s delve in and use it. But what’s really important is that there’s always a cost to software or any tool that we actually use, and in this instance, that cost is your data. So being mindful of that and being aware of the terms of usage is a great message to highlight, not just for ChatGPT, but for anything that you’re actually entering data into.
Lucinda: Yeah, I think it’s Zygmunt Bauman, the philosopher, has a great saying that if you’re not paying for something, you’re the product.
Joan: Yes. And that’s totally true, isn’t it? So, in the work that you’ve done, you’ve created a self-audit for educators using AI in digital writing, and there are three priority tasks that you highlight before using ChatGPT specifically. Can you explain the importance of them before using AI in any educational context?
Lucinda: Sure. Well, number one was actually reading the terms of use really carefully and understanding exactly what they mean. A good example of that is that you must not input copyrighted material that you don’t have a licence to into ChatGPT. So things like putting in material from national curriculum documents, or anything at all that might be under a particular kind of licence that doesn’t specifically state it can be inputted into one of these things, you just can’t use. A good question to ask yourself, when you’re thinking about what goes into a prompt, is: did I write this? And if you didn’t write it, then big red flags go up. It’s that important. From what I’ve seen so far, there are a lot of people just rushing to put stuff in. And obviously this is one reason why we can’t input student work into these things: because we don’t own that student work, and OpenAI can take the IP of that work and share it around and do things with it that we might not be aware of. So we’ve got to be very, very careful about how we use these things, so we can once again model being careful with them and model how important the terms of use are.
Joan: I just wish they made the terms of use easy to read. They’re very cumbersome.
Lucinda: For everyone out there, the OpenAI terms of use are not too bad compared to some of those great long screens you have to spend days scrolling down to read.
Joan: Yeah. Just that you go, okay, yeah, I’ll just do it. The next one you have is educator considerations for ChatGPT.
Lucinda: Yeah, and I think number one there flows on from what we were just saying: it’s not recommended to be used for marking. A lot of educators have gone straight there. Oh, this can mark things for me. Well, yes, it could have a go at marking things for you; however, it’s too crude a tool, and OpenAI is really open and honest about this. They say in their advice to educators that it should not be used for making judgements like that about students. It comes back to what we were talking about earlier and the idea of evaluative judgement really resting with humans, needing to rest with humans. So I think that’s probably the most important thing there, but there are a number of other educator considerations too, around the limitations of things like ChatGPT, for example the finite nature of the corpus it’s trained on, the fact that it’s only trained on data up to 2021. So it’s going to be ignorant, so to speak, of recent developments that might be important.
Joan: The third one you had was around advice for legal and ethical use of ChatGPT. Is there anything that you’d like to highlight?
Lucinda: I think we’ve probably covered it to some extent, but I think that everyone using it needs to be dealing with whoever in their institution is responsible for legal advice and copyright advice, all those sorts of things, and touch base with them to get that support about anything they have in mind for doing with it. I think “don’t rush in” is important, and it’s so easy to rush in because, as you said, you just think, okay, can’t be bothered with that. We’ve all been there with these things. So it’s just so important to think about some of the larger dimensions and really focus on those terms of use, thinking, hang on a moment, this is actually not me using a tool. This is an actual exchange. I’m giving something away here as well as receiving something, and having that at the forefront of people’s minds.
Joan: There are different implications for any tool, aren’t there, in terms of usage. If you look at the terms of usage, there are definitely implications, as you mentioned: what you put in, and you’re getting something out of it as well. So where is that coming from? What is it, and how do you evaluate it? Your research has highlighted digital writing. How will AI impact digital writing in higher education in two areas, content creation and assessment?
Lucinda: Yeah, that is the big question that everybody’s asking. I think that Deakin University has taken an open, positive sort of approach to thinking about how we will be using this for content creation. If we all acknowledge that it’s being used out there in industry, it’s incumbent upon us as educators, tertiary educators, to be preparing students for the future. Therefore, we have to look at how it’s being used and work with students on coming up with ways that it can be used effectively. However, we’ve also got this great big question about assessment, because students are competing with each other; in secondary school they’re being ranked and things like that. So what is the role of these augmentation machines? Really, in a sense, they augment what humans are able to do. And if some humans are using augmentation machines and others are not, how can it possibly be fair? There are very, very big questions about those things. One of the things that’s been touted is being more explicit about how you’re using AI and then being marked on your effective use of AI. But it’s really difficult, because with everything you think, aha, I could try to get around students using ChatGPT like this. You think, for example, I’ll get them to write a critique of an AI-written piece of writing. But then you think, hang on a moment, students could just put an AI-written piece of writing into ChatGPT to get a critique.
So it’s kind of like that at every layer, because of its dexterity and its ability to do so many different things; you have to think really carefully. I think we will be looking at what Margaret Bearman calls grounding. Grounding means actually having to talk about real, embodied experiences that you have in a unit or in a study, and linking them to readings and quotes from tutors and all that kind of thing.
Joan: That’s making it meaningful for the individual, their own context and what that actually looks like to them and making it relevant I suppose.
Lucinda: That’s right. So in a utopian vision in relation to this stuff, that’s the way we’d go: towards more meaningful assessment, more contextual, more embedded, more related to students’ lives. The more dystopian version, I think, is that we go back to in-person, pen-and-paper type exams as the only way to be really sure that we’re actually assessing what people can do. But the thing is, if that’s not how anyone is writing out in the world anymore, then what’s the relevance and the meaningfulness of that, if it’s just a kind of empty hoop-jumping thing for assessment? It’s huge. I think the questions that you’ve asked there about content creation and assessment are the biggest questions for us.
Joan: They are, and I think they’ll take some time and energy, and just working out an iterative approach, I suppose, to see what works and what doesn’t as well.
Lucinda: I just think that there’s going to be no strict line drawn between where digital writing starts and where it doesn’t, because even things like making videos, writing scripts for little short-form videos and so on, I think of that as digital writing, and that would definitely be being done in primary school. But I’m also very firmly committed to teaching handwriting, and there’s research that shows there are cognitive benefits to using your hand to write. I think that needs to continue in parallel. And then you’ve got all this digital side of things, which means typing; fundamentally, typing is the most basic digital literacy at the moment. Then there’s using word processors effectively, using those spell check and predictive text features effectively. And we know that something like ChatGPT will be built into the next version of Microsoft Word. So if primary school students are using any kind of word processor, it’s highly likely it will have a writing coach built into it. So it will be: how do you work with a digital writing coach? That’s on the horizon.
Joan: Yeah. I think what you’re highlighting is the importance, especially from an educator’s perspective, of looking at how you can model the effective use of the different components of AI. But you mentioned earlier that they need time to play with and understand the different types of AI and how they can put them into their own contexts. Are there any other suggestions that you have for educators out there who might want to know more about digital writing and how to enhance it in their space?
Lucinda: Well, I think access to really good professional learning is important. I think schools and universities have got to get behind supporting staff with money and time to be able to go out and have that learning. There are lots of people doing very exciting things already in this space, and teachers are going to need time to be exposed to all of these new ideas. What we’ve covered today together is only a fraction of the dimensions of this, the implications of this, the way it’s evolving so quickly. So teachers and educators need time to take a breath, and they need to be aware that they need time to think deeply about all of this, because it is very significant. I think it’s a very significant shift, a little bit similar to the arrival of the internet and what that did to research and knowledge construction and our understanding of what knowledge actually is. This is just as big, and we need time to come to terms with it.
Joan: Time, the big thing that we all need more of, don’t we? But I think it’s definitely something where we all need to take a pause, as you mentioned, take a breath, and actually think about ways in which you can make a difference for yourself and build your own capability and knowledge in this area before just delving in and saying this is going to be one size fits all, because we know there are so many things on the horizon. But just having an understanding of AI and how it works in the educational context as well, right?
Lucinda: That’s right. It’s almost as if every day the implications of it are emerging, along with its affordances and also the issues in relation to it. So those two approaches, the creative and critical dimensions of it, are really important moving forward.
Joan: So, just to wrap up this podcast, a question around your research. I know there have been different versions of ChatGPT, but how has its release into the mainstream impacted the research that you spoke about at the beginning?
Lucinda: Yes. So the first stage of my research was meant to be a national survey about digital writing, about how teachers are teaching digital writing. It was meant to go out last year, but I ended up taking three months of long service leave, the survey got delayed, and in the meantime ChatGPT came out. I feel as if I was incredibly lucky to have had that delay, because even though the survey had material about AI writers written through it, the testing of the survey had shown that teachers didn’t really necessarily know what an AI writer was. Now, when the survey gets through its ethics process, it will be completely different. Although it would have been interesting to have a pre-ChatGPT, ground-zero sort of picture of where teachers were at with digital writing beforehand, it will now be a much richer survey, with much more data about teachers’ needs and ideas in this area.
Joan: Well, I totally agree. It reminds me of the transition teachers made when COVID hit and we had to go to remote teaching, and the change that actually happened. The information that you can get from them is so much richer because they’ve got different lived experiences that are meaningful to them, which they’ll be able to feed back into that process.
Lucinda: Yeah, that’s so true. It’s like, imagine if you did a survey on how you use Zoom for teaching during COVID compared to now. It’s that kind of paradigm shift.
Joan: Look, I really want to thank you for your time today, Lucinda. I’ve really, really loved this conversation, and the work you’re doing is amazing. I’m sorry that we didn’t get onto it sooner. But are there any closing comments that you’d like to share with the audience?
Lucinda: Only just that, as we’ve said, I really encourage teachers to get in there and play with it. Don’t put your head in the sand. Whatever level you’re teaching at, primary, secondary, tertiary, get in there and really have a go at using it, and don’t just be happy with what it puts out for you. Ask it again: how could this be improved, or what’s a different perspective you could give me on this? Really drill it, push it. Don’t just get it to write a lesson plan for you; ask it to write the most creative, wacky, unusual, impactful lesson plan it could possibly write. Then you can compare the outputs and learn how to use it more effectively.
Joan: Sounds like you’re gonna be a prompt engineer or could be anyway.
Lucinda: That’s how I’ll make my million dollars, putting myself out there as a prompt engineer. Well, one of our fabulous academics at Deakin working in assessment has already said that once you’ve written a really good prompt, it’s long enough that you should have actually done the assignment.
Joan: I love it. Well, thank you for your time and the expertise and knowledge that you brought to this conversation. I really loved it, and I have no doubt we’ll hear more from you later in the year.
Lucinda: Yeah.
Joan: Fantastic.
Lucinda: Thank you.
6 April 2023