
TRANSCRIPT: Tales4Teaching ep. 68 – Empowering educators to leverage AI in teaching and learning

Transcripts are generated using a combination of speech recognition software and human transcribers and may contain errors. Please check the corresponding audio before quoting in print.

Intro: Welcome to Tales4Teaching, a podcast where we explore stories with purpose in higher education. We will share expert insights, engaging interviews, and thought-provoking discussions that will inspire your teaching.

Joan: On behalf of Deakin University, I would like to acknowledge the Traditional Custodians of the unceded land and waterways on which you are located. I acknowledge the Wadawurrung people of the Kulin Nation as the Traditional Owners of the land on which this podcast was recorded, and I pay my respects to elders past, present, and future. My name is Joan Sutherland and this is Tales4Teaching, brought to you by Deakin Learning Futures.

Hello and welcome to today’s podcast. I’m really excited to be joined today by the Director of Digital Learning team here at Deakin University, Associate Professor Trish McCluskey. Hi Trish, welcome.

Trish: Hey Joan, how are you?

Joan: Very well. Thanks for joining us today on the podcast.

Trish: No worries.

Joan: Recently you were on an ABC radio interview talking about Deakin's perspective on AI. There's a lot of talk around AI and the surveillance of students and the detection of cheating. One of the things that really struck me about your interview, and about the messaging that we have, is how AI provides so many different opportunities. How do we actually talk about that? How do we effectively communicate the positive potential of AI to educators and students, and shift that focus from AI as a form of cheating to a tool for enhancing learning?

Trish: Yes, and I expect that people are probably getting sick of listening to me talking about AI at this stage, Joan, because we've done a number of interviews, and Deakin are certainly leading the way in terms of our approach and in seeing it as a positive good for student learning and success. For those who perhaps haven't heard the ABC interview: in every interview that we have done to date, the focus and the request has been on how are you going to stop students cheating with this introduction of generative AI? And the approach that we've been taking is trying to get people to shift their thinking and see generative AI, ChatGPT, as an opportunity for change: change in how we design learning, change in the world of work. Let's look at it as an opportunity to provide our students with a different skill set, a new skill set, that will make them invaluable wherever they go in their futures. So we try and step away from, yes, there will be cheating.

There has always been cheating, and this is no different. It's an arms race to try things like banning it, or tracking students and catching them cheating. We've got to move away from that surveillance mindset, from the idea that students are somehow intent on cheating and getting through their courses with minimal effort. In reality, most students are really curious and passionate and are enjoying their study. So what do we do? What do we do at the university level to try and change people's views of how generative AI can support us in our learning design and in supporting students to succeed? As many of our listeners will know, change in higher education is hard. We have a long traditional history.

We have very robust systems and processes, and to get some of those changes made requires a lot of time and a lot of paperwork. I've heard it said that changing a university is like shifting a graveyard, and that's not very far from the truth. We do have innovation in universities, and we have people who are really keen to change. But when you actually want to change the core structure and assurance of value of our courses, rightly so, we need to take a more bureaucratic approach to that.

Joan: You mentioned it is a huge opportunity for change. And I go back to COVID times, the beloved COVID, where there was a lot of shift in higher education. Although it was a hard institutional change, it did happen because it had to happen. And it seems like with AI coming on board, and it happening so quickly, there is a transformational shift that is actually happening, and everyone is talking about it. It's just a question of how that actually happens. So I suppose my question to you is, what steps can we take in higher education to ensure that we're equipping students with the knowledge and skills necessary to thrive, with AI part of their lives and part of our lives?

Trish: Yeah. Look, who knew that there would be a silver lining of the COVID cloud? And I’m trying not to be flippant about it because it was a very serious three years for all of us. However, we did prove to ourselves we can pivot. We can pivot pretty responsively and with quality outputs. So we know we can do this. If this had happened three years ago, it might have been a different story. So there’s the confidence to actually implement the change.

So what can we do? I think the first thing we have to do is start by demystifying the whole concept of AI. We have to talk about it. We have to communicate regularly. We have to listen to our students, who've been using it for probably a lot longer than we have. We need to look at the success stories, at where academics are using it in their learning and teaching. And more importantly, how is artificial intelligence, generative artificial intelligence in particular, being used in the workplaces where our students are going? So that means having a conversation with our colleagues in industry and the professions, asking: what do you see as the future of this, and how can we support this industry, this sector, this profession, by equipping students to succeed in that role? The other thing we need to do, I think, is focus on AI as a tool, not as a replacement. A lot of people are very anxious, and there's a lot of media coverage about AI replacing our jobs. A phrase that's being repeated often is: you will not be replaced by AI, but you might be replaced by someone using AI. So we need to make sure that our students and our staff are using it effectively and for the greatest impact.

Joan: Interesting you say that, because I interviewed Jesse McMeikan, who's a manager of industry projects, and he says exactly the same thing: that we need to work alongside AI and equip students with the ability to do that. It won't replace them, but they've got to be able to work with it effectively, essentially.

Trish: Exactly right. Yeah.

Joan: So obviously in higher education we're teaching students to be industry ready. How do we ensure AI is integrated into the curriculum so students learn to have the confidence to deal with it, to use it alongside their roles in the future and alongside their learning journey?

Trish: We'll need to take a multi-faceted approach to that. And as I've said previously, we need to work closer with industries and professions. I would like to see us introducing something like, remember back in the school days when you used to get your pen license, before you could graduate from a pencil to a proper ballpoint pen? So maybe we should introduce something fun like an AI license. We get students and staff to come in, present them with some problem sets, things that will support them in their daily work but also in their learning design, and coach them and work with them on how to actually craft prompts in a way that's going to get the best results, and to play and have fun. I've been using it every day since it came out at the end of November, and I was using a couple of other tools last year, like Jasper. It's amazing, because it is just everywhere. When you open up your emails now, or Microsoft Office, there are just continuous prompts. Your sentences get finished for you. I got an email this morning from someone inviting me to something, and the suggested reply was "Okay, that sounds like a plan", which sounds like me. That's the language that's even being used. So responding to emails, and even using your voice to generate some of the responses, is really good. I think we need to provide hands-on experience. Yes, practice, practice, practice.

Joan: It's a consistent message that's coming through around having a play. I know you use that terminology because you really do have to go in and have a play: see what works for you, what doesn't work for you, what the challenges are, but also what the benefits are.

Trish: Yeah.

Joan: So my next question to you is, how do you use it in your work? You mentioned that you're using these tools every day and you've been exploring them. What other tools have you been using, and how have they helped you in your work?

Trish: (inaudible) is recruiting for a position that shall remain nameless. I was pushed for time, so I took the key selection criteria for that position, fed them into GPT, and said, generate some good interview questions for this position description. We got ten perfect questions. They didn't sound like me, though, so I said, okay, can you make them a bit friendlier? Again, another ten questions, reframed. That saved me so much time. And the good thing is that because I know what good answers look like, and can use evaluative judgment, I could look at those immediately and say, those are perfect, I'm going to use those. When you're a professional and you've already developed the skills of critical evaluative judgment, you can do that. But remember that a number of our students are beginners. If they go in and try to use what's presented to them, it's not going to work, because they don't know how to critically evaluate what is presented.

These tools do make stuff up. There's an awful lot of it; people call it hallucinations, I prefer to think of it as confabulation, where they just fill in the blanks and create things. So a good exercise, and I'm thinking of the AI license here, would be giving people false information and saying, critique it and find out where generative AI has got it wrong. I use it in the car as I'm driving to Geelong, asking it to write lists for me. I've been exposed to it quite a bit because my son is a digital artist, using digital tools, especially things like Render, DALL-E, some of the more visually related tools, and coding. It's been fascinating to watch the development of it in the last six months.

Joan: And that’s what is baffling, isn’t it? It’s the last six months. Like you’re not talking about years. You’re talking about the last six months and how much it’s advancing day-by-day and different tools that are coming based upon the models behind it.

Trish: Yeah. At universities we're used to things being planned. We like to plan, we like to know, hey, there's a shiny new thing coming: how do we roll it out? How do we engage staff? How do we design resources for it? But it's here, it's all around us. So we're now in a much more reactive way of working rather than the controlled, responsive way we have worked in the past, which has been good fun, I think you'll agree.

Joan: Absolutely.

Trish: A bit hair-raising at times, but I think I'm…

Joan: Just touching on what you said around that critical judgment and evaluative judgment: that's the human component that we're talking about. We talk a lot about ChatGPT and other tools, but AI can't do that at this stage, and that's where the human comes in. So it's teaching people about that. How do you actually do that? How do you evaluate? How do you use your thinking skills, and how do you offload some of those tasks? As you mentioned with generating interview questions: does it need to be generated by you, or can it be done by something like ChatGPT? Just one other point I wanted to raise with you, actually, around the ethical considerations. This is getting bigger and bigger as time goes on, as would be expected, because it's a new technology. What ethical considerations need to be taken into account when implementing any of these AI tools?

Trish: I think there are ethical considerations in the use of any kind of digital technology. And this is one of the more serious issues: whilst they're fun and they're shiny, we do need to raise awareness around things like data privacy, because AI-driven tools collect and analyse vast amounts of student data, behaviours, and performance, and track them through algorithms. So teachers need to understand, and communicate to students, what data these tools collect, how it's used, and what measures we have in place at Deakin to ensure privacy. There's also a lot of bias. We know it's rubbish in, rubbish out, and a lot of what has gone into training these tools has been what exists out there on the Internet, on Reddit, in books, on Wikipedia. A lot of that is inherently biased. So I think that when we talk about using them, we have to give feedback to the tools.

So if you ask for a response, as I did, create me an image of an academic, and the academic comes back white, middle-aged, male, I have to go back and say, no, not all academics are like that, this didn't work. We can actually work together to feed back to the tools, because they're learning. They're learning every time we interact with them. But a lot of these new tools are black boxes. We don't know what's going in. We don't know how they work or how they're tracking their information. So I think that transparency needs to be made as available as possible and clarified with academics and students. Understanding how these tools make decisions can also help academics train students to manage expectations and be aware of where their data might be going and how it might be being used, including their voice. We just think of it as student work, but at the moment there's a real danger that if your voice is anywhere on the internet, it can be used for nefarious purposes, and you might not have any control over that. I don't want to get into doom and gloom, but we need awareness of some of the more difficult challenges these tools present.

Joan: And it is awareness, isn't it? Clarifying those points you made around data privacy, how it's actually being used, the bias. Like we're having the conversation, and I think that's really the important point, that we have that conversation, because no doubt there will be actions that come out of these considerations as well. So you can inform people, ask people to acknowledge that there are these biases and these issues, and consider how you can deal with them individually and collectively as well. So, finally, I'd love to thank you for sharing your wisdom around AI and your experience so far. Have you got any other recommendations around AI in higher education, or any other lasting thoughts that you'd like to leave us with?

Trish: Look, I would really like to finish with putting the human in the loop. Whilst we are talking about teaching and training academics on the use of tools, we also have to train and teach them about critical thinking, about human problem-solving, about teamwork, about some of the essential human qualities that AI tools don't have and that we need to amplify. We need to make sure that people are aware: this is what I bring. If it's a repetitive task or an activity that can be done by AI, it should be done by AI.

Joan: Yeah, absolutely. So use it where you can, and otherwise use your human elements. Thank you, Trish, and have a lovely day.

Trish: Thanks Joan. Take care.

Joan: Bye.

29 May 2023
