Professor Koopman has a new book on AI in the classroom.
The following transcript was generated using automated transcription software for the accessibility and convenience of our audience. While we strive for accuracy, the automated process may introduce errors, omissions, or misinterpretations. This transcript is intended as a helpful companion to the original audio and should not be considered a verbatim record. For the most accurate representation, please refer to the audio recording.
MICHAEL DUNNE: I'm Michael Dunne. Last week we talked about the ethics of AI and how this disruptive technology is indeed disrupting our social and cultural world today. Now we're going to bring you a discussion about how it's disrupting the classroom, and by doing so challenging student and instructor alike. After all, is using something like ChatGPT to write a term paper cheating, or is it using a highly functional tool to do basic work, freeing the student to apply their creativity to more important functions? And as a teacher, it's important to guard against over-reliance on such technology while at the same time teaching students to use a tool that will become an even bigger part of their lives going forward. Two professors join us today to talk about their real-world experience with artificial intelligence: Rebekah Hanley, clinical professor at the University of Oregon School of Law, and Colin Koopman, professor of philosophy and director of the New Media and Culture Program at the U of O. Thanks to you both for joining me.
REBEKAH HANLEY/COLIN KOOPMAN: Thank you. Thanks for having me.
DUNNE: So, I'm gonna start with a question for you both, and we'll just start with Rebekah: how is AI impacting education? And maybe you take that two ways: for students, but also for professors such as yourself, or even, you know, educators in the K-through-12 realm.
HANLEY: Our students are using it. Okay? Some are sharing with their faculty and peers that they're using it. Others are not. Faculty are experimenting with it, and so the technology is finding its way into some courses, for some activities. Other faculty are less inclined to explore the technology and what it can do for learning. But I think overall, there's a real tension and a real desire to figure out whether and how the technology can be used to productively enhance learning, and then to try to avoid, and help students avoid, the temptation to lean on it when there's the possibility that it's going to undermine or erode progress and learning.
DUNNE: Colin, same question to you.
KOOPMAN: Yeah, let me latch onto that word tension, okay? Because what I've seen with the advent of ChatGPT and other generative AI in the classroom and higher-education environment is the production of a lot of different tensions that are very difficult for those who are enmeshed in them to manage. So, take it from a student's perspective. A student has an essay due in a class where the professor has said you may not use AI to write your paper, because the point of this class is for you to learn the reflective, critical thinking skills involved in writing. What we really want from your essay is your thought expressed on paper, your critical thinking expressed on paper, and the critical thinking is what we really care about in this class. So it wouldn't make sense to use AI to do the critical thinking for you. It would be analogous to taking your robot to the gym to lift weights for you; it just doesn't do the job. So the professor might say you can't use it, but imagine a student has a paper due the next day. Maybe they have three or four papers due across all their other classes the next day with that same rule. There's just a lot of pressure on students sometimes to take this route of, well, this is the way that I can get this done. And they know full well, perhaps because their roommate or dorm mate has told them repeatedly, you can do it, and it's very unlikely that they're going to be able to catch you, right? So it creates this incentive structure, and a lot of students have found themselves kind of slipping into increased use of AI based on just sort of, as it were, dipping their toes right off the bat, right? So let's use it to generate an outline, maybe. But there's been a lot of research that shows that dipping the toes leads very quickly to sort of full-fledged, let's dive in and fully use it. So there's this tension right from the student side of, should I learn the things that I'm meant to be learning in this class?
Or is the A more important? Or, you know, maybe the B-minus is more important, because they need that to maintain a certain grade level for financial aid or for their student-athlete eligibility.
DUNNE: Rebekah, okay, I'm going to take myself as an example. I have been a lifelong, terrible speller, from primary school to now in my 50s, and I'm never going to be a good speller. And for the past decades, I haven't had to be, because spell check is my net. It provides me with a tool so that I don't have to become proficient in something. I'm asking this, or I'm setting this up, because as an educator, as somebody who also understands legal ramifications, but also just teaching ramifications: when does a tool become just something that a student can use, and everybody accepts that that's great? And when can it become something that's a crutch?
HANLEY: You ask the best questions, and they don't have clear answers. You know, before spell check was invented, you could have gone to a dictionary. Indeed, you could have spent a whole lot of time figuring out how to spell the words you were trying to write. It would have taken you a long time, because, of course, if you don't know how to spell them, they're hard to find. Sure, spell check expedited that process of figuring out the proper spelling of the word you're trying to use. I don't know if you feel like spell check is now a crutch, impeding the ability you otherwise would have had to gradually improve your spelling, or if that just is a skill that isn't that important, given that this tool exists.
DUNNE: Yeah, and I don't know how to answer that, because you're right: I don't need to learn how to spell. And I've reached a certain level in my life where it hasn't really hindered me, but it's something that I'll never really be good at. And so I'm going to shift to you, Colin, especially as a professor of philosophy, and I know that ethics is such a critical part of that. Where is the balance between, like you just talked about, a student who is told not to use something like AI, but for whom it's really important to get that grade, or to be able to do that? Or where do we leave students to kind of choose: well, that's a tool versus that's a cheat? I don't know.
KOOPMAN: I think we have to think about what we want our students to be learning in the classroom, and what the students themselves want to be learning, why they're there. And there's always a balance to be struck between those, which can sometimes be a tension. Sometimes a student is there, quite frankly, just for a degree, and they see it as a means to the end of a job. Other times, I would say most times, the students I meet at the U of O and in other higher-ed university environments are really there because they're passionate about learning. They're passionate about learning something, but they might find themselves in another class, including sometimes mine, hard to believe, where they're actually not passionate about that material, and that's where they might slip into using it as a crutch and not developing those skills.
DUNNE: You were gonna say something, Rebekah.
HANLEY: Yeah, I just wanted to explore a little bit more about skill development and what students are hoping to, or needing to, achieve when they're studying in school. And maybe I sit in a different position as someone in a professional school, but I am looking to help students prepare to be competent lawyers after graduation. So they need to develop some foundational legal analysis and legal writing skills so that they can operate, when and if they need to, without the benefit of AI, but also so that they can critically evaluate whatever output they might secure from a large language model while they're practicing. A problem that shockingly continues to occur in the legal profession is that lawyers are misusing generative AI and not critically evaluating the output. They are deferring inappropriately, and as a result they are submitting things to the court, for example, that include so-called hallucinations, or errors that have been introduced by the AI. They just fundamentally are either misunderstanding or otherwise misusing these tools, which can be very helpful to lawyers in practice, but only if they understand what the tools are and what their weaknesses and limitations are. And so I seek to help students develop the foundational skills that they will need to operate when they don't have the benefit of AI, but also to use AI responsibly and productively so that they can, you know, contribute as competent professionals.
DUNNE: Colin, talk about AI and the development of original and creative content. And what I'm asking here is, especially in academia, you know, we talked about using tools, but also maybe not overusing them. Is there a line for you as an instructor, where a student is developing their own thoughts and putting them down on paper, but perhaps utilizing AI to some degree? I guess what I'm asking is, and you talked about this a little bit, but I want to pull the thread: are there, you know, I don't know, things like table-of-contents notations where it's correct to use AI, but where students might cross a line in using AI and stop being the original owner of their words?
KOOPMAN: Yeah, there are certainly legitimate uses. I mean, take your example of spell check, or grammar-checking software, a lot of which now partially incorporates AI technology in the back end. Of course, there was grammar-checking software for years that didn't involve what we now call AI. There's also a lot of mystification around what is AI and what is not AI, what qualifies and what doesn't. I remember a few years ago, this is a small tangent, but I called my local garbage service to ask a question about, I think, a missed pickup, and the automated phone tree said, "Thank you for calling garbage company XYZ; we strive for great customer service, assisted by our new AI." And it was just an automated phone-tree computer that could hear a tone and then channel you in a certain direction. It certainly wasn't AI, but they were trying to jump on the AI bandwagon. Somebody had sold them this, right? So there are certainly legitimate uses, I think. But yeah, again, a lot of the reporting that I'm seeing shows that it's really hard. There's this tension for students, this balance they're trying to find, of how much can I use? How much should I use? And even with students who are very reflective about it and know better, there's just so much incentive for so many of them on the side of, well, I used it a little bit, and it's right there. You know, I used it to help me brainstorm an outline, for which it's maybe useful, maybe not. I do think its uses are overblown, and I hope we can talk about that in a little bit. But, you know, maybe it sort of served up B-minus-level content, or B-level content, and maybe that's all they really want in this class, unfortunately, not striving for the A. And they say, well, okay, now I'm gonna write the paper, but I'm tired. You know, there's just so much opportunity there for them to just sort of use it: well, let's have it help me write.
I'm not good at writing introductions. I'm not good at writing conclusions. Sure, well, I'm not good at writing the middle paragraphs either, right? And have it draft it, and then I'll rewrite the draft, right? Turns out the draft is great and my rewrite is worse, so I'm going to just go back to using the original draft. And I can see lots of cases where students have maybe inadvertently slipped into having AI write 75% of their paper and don't even really understand that that's what happened until after the fact. And maybe they're in a room talking to a professor who says, "This is not the kind of work that I see you write by hand when we do the handwritten assignments. What happened?" And then maybe it kind of hits them, and they weren't even intending to be dishonest and plagiarize a paper.
DUNNE: Okay, I'm gonna get back to you on the uses being overblown, because I think that's a fascinating topic. But I'm going to switch to you, Rebekah, because you're somebody who studies the law, and you're someone who teaches the law. With the use of AI, and especially the people who are developing it, are there concerns that we're sort of in, for lack of a better phrase, the wild, wild west of what AI is today, and perhaps where it's going? As a legal scholar, do you have concerns about the legality of use, about development? Take it wherever you'd like to go.
HANLEY: I have lots of concerns. I do think we're in this moment of transition, and it isn't totally clear what norms will evolve as the technology takes hold and becomes integrated into more and more of the platforms that we use for various things, including, but not limited to, producing written work products. You know, some of the questions that come up for me as I'm hearing Colin speak: this technology is here, and I think it's very unlikely that it's going to go away. I want to figure out what we do as educators, given that these incentives exist, the technology exists, and this reality exists where students are likely to turn to the technology for a little or a lot while they're completing their assignments. How do we respond in this moment? Do we change the way we assess? Do we change our assignments? Do we change our instructions? Do we change our expectations? How do we adapt? Because I'm not sure we can change the circumstances we find ourselves in.
DUNNE: Colin, you were going to weigh in here.
KOOPMAN: Yeah. So a good example of that change: I read reporting recently that the companies that produce little blue books, and these even kind of preceded my time in college, these are the blue books that you would hand-write your exam papers in, are now booming, because there's huge demand for blue books. And, for the first time ever in my lifetime as a teacher, I will be using these in my class in winter, not for all of my assignments, but for some of them. Partly, there's a real fairness concern in the background. There are the students who just sort of freely use AI, kind of brazenly and without any humility; I think that's a very small percentage, but it's non-zero. I think there's a sizable percentage who kind of slip into using it and just kind of feel the pressure. And then there's still a sizable percentage, at least at the U of O, who don't use it. And these students are committed to learning, and out of fairness to those students, you know, it's my duty to create a classroom environment that's structured in a way that they're not being, sort of after the fact, penalized for not using it, and for actually taking the risk of trying to write a paper that demonstrates their critical thinking skills.
DUNNE: Rebekah, you want to weigh in too?
HANLEY: Yeah, my first-year law students, I'm asking to just slow down and do more by hand, on paper. And so I purchased a traditional composition notebook for each of my students. I have them in class every day, and I have them writing every day, whether it's, you know, working through legal analysis, or reflecting on their process, or planning out their next steps. I have them writing by hand in class, where I can see them doing it, and then I can look at what they've written and how they've written it. But at the same time, I'm also teaching an upper-level class about generative AI, helping our upper-level students develop the literacy that they need, the understanding of the technology that I think they need to operate in the world that we're in right now. And just so that that doesn't sound old-fashioned to our listeners: research shows that retention and memory are much stronger when a person writes their notes by hand, as opposed to not taking notes at all, which is one contrast case, or taking notes on a computer. So I always take notes by hand, and I always have, in a composition book. I've always done that just because I know that that's how I'm going to retain it. So it's actually not a throwback to do that, although we're now in a position where I think we have to create more structures, kind of creating incentives for our students to do that. Saying: here's the composition book, and here's 20 minutes in class; you have to do it this way. Tell me if you get the same results that I get when I take notes by hand. Another strategy, it's not really an innovation, that some folks are using is returning to more oral examinations, which creates some challenges in terms of fairness and bias and scalability, and yet it is a way to really get at: what does this student know? What has this student learned? What is this student able to do?
And so I don't think that's being implemented widely, because there are a lot of, you know, challenges with it, sure. But it is a way to isolate the student's knowledge and skill from all that the technology brings to the table.
DUNNE: Okay, we're getting to the end, but Colin, you brought up the idea of the concept that there may be overblown uses for AI. Talk about that.
KOOPMAN: What I meant by that is, you know, I think we're in this moment where there's just a lot of overblown hype about what AI is capable of. Okay, I'm not an AI skeptic. I think these tools are really cool, but I think we're in the 1999, early-2000s moment with AI. And I don't just mean that we're in a stock-market bubble with respect to hyperinflated valuations of companies that, you know, aren't generating revenue; all of them are huge profit losers, just throwing all this money at capital expenditures to build these data centers. But all that aside, the hype and leverage of the stock market, which your listeners can get better advice on from financial professionals, of which I am not one, there's just this hype around what it's capable of doing and what it can't do. These are really cool tools, but I don't think we've yet arrived at the place where we see, well, this is what they're actually good for. And if we're not mindful of that, we're going to end up being led by the nose down a path by Gemini or Claude or, you know, some OpenAI chatbot, a path that's going to take us somewhere where we, individually but also as a society, don't want to go, right? So if you go back to that 1999 moment, there was the promise of the internet and the democratization of speech, and it turns out that actually the internet did not democratize speech. It divided us. It separated us as a society, and not just in the United States; you can find this in most of the Western liberal democracies. We're not connected as Facebook promised. We're divided. We're polarized more than ever, and the internet didn't deliver what it promised. It delivered something else. You know, it delivered lots of, you know, cool shopping experiences, perhaps. But it didn't deliver. I worry that AI is going to do the same thing.
DUNNE: I'd love to talk all day, but we've got to cut it off here. Rebekah Hanley, clinical professor at the University of Oregon School of Law, and Colin Koopman, professor of philosophy and director of the New Media and Culture Program at the U of O: I really appreciate both of you coming in and talking about this fascinating subject.
KOOPMAN/HANLEY: Thanks so much, Michael. Thanks.
DUNNE: That’s the show for today. All episodes of Oregon On The Record are available as a podcast at KLCC.org. Tomorrow on the show, you'll hear about an OSU study which showed that magic mushrooms sold at many retail establishments have no psilocybin at all. I'm Michael Dunne, and this has been Oregon On The Record from KLCC. Thanks for listening.