Universal Design for Learning and Large Language Models

Recently, as part of my series of conversations on LLMs and their impact on education, I got the chance to speak with Professor James Basham from the University of Kansas. Prof. Basham is a Senior Director for CAST (Center for Applied Special Technology), and his work focuses on modern, technology-powered approaches to teaching and learning. We spoke about a wide-ranging set of topics, from Universal Design for Learning frameworks to accessibility, policy, and more. I hope you enjoy this transcript of our conversation, and I’d once again like to thank Prof. Basham for making the time to speak with me.


Tell me about your experiences using ChatGPT in the second semester of the 2022–23 academic year. How did you use it in your coursework?

Well, my area of research is technology and innovation in education, so I had been personally using earlier versions of AI and large language models way before ChatGPT came out. And I've always integrated the technology into my classroom. So when this newer version came out, I definitely used it last semester. 

Last spring, I taught a Philosophy of Science doctoral seminar, and we actually used it as another student in the class. As we had classroom discussions and I asked the class questions to reflect on, all the human students would answer and we'd have some discussion, and then we would bring the AI into it, having it answer the same questions, and then reflecting on what it was saying. 

Also, because it's a doctoral seminar, we work a lot on writing papers, so the students were encouraged to go ahead and try it, but then go through, critique, and edit the text to make sure it was appropriate. It definitely changes the way you grade things, because there's a lot more fact-checking involved. I asked them to disclose if they had used it, and only one student used it in a draft, and then decided to toss the draft!

You and your colleagues at the CIDDL (Center for Innovation, Design and Digital Learning) wrote an article suggesting ways that teachers could change up their assignments to contend with the sudden impact of ChatGPT, particularly for term papers. Do you think that such assignments are going to change permanently? Are they going to go away and be replaced by something else?

I don't know if they're going to go away permanently, but I definitely think they'll change. If a professor or teacher solely uses a term paper as the only means of assessing a student's understanding of a topic, they're probably doing the wrong thing. For a long time, across many subject areas and teaching situations, we've known that a term paper is probably not the best way to measure someone's understanding of something. It's really an old-school approach.

Obviously, in the doctoral-level course I was just teaching, which was heavily focused on academic writing, it's a whole different scenario. But I teach an undergraduate course in the fall, and I’ve not had students write term papers in probably over 15 years. It just seems like an old-school way to assess knowledge, especially when there are so many other mediums available—from video and audio to even posters or cartoons.

I mean, there are so many different ways for people to communicate and demonstrate understanding in the modern world that we can allow students to personalize their responses, and then develop rubrics to measure them on the same topic. It gets more interesting when you look at other mediums; take video, for instance. If a student submitted a poorly framed video, they would be graded down for it. In a term paper, we often prioritize things like grammar and style as much as, if not more than, the actual content, which has always been funny to me. In other mediums like video, things like framing and communication style carry just as much weight as they do in a paper, but that has often been overlooked. So there are different ways that teachers and professors could utilize AI, but for many in the education system it's probably a kickstart, because they've traditionally just been assigning papers, and it's kind of laughable. 

A couple of years ago, I had a student who developed a VR app in a class focused on effective teaching methods. In that class, students can choose any way to communicate their end product, and they go through a proposal process for it. What this student proposed was to take someone through a VR world where they would engage with things in Spanish, and they created a whole simulation around that. I told them I wasn't expecting a commercially viable product, but that a prototype would be kind of neat. So they went ahead and developed it, and this is a person who's now teaching Spanish at a high school. They demonstrated that they could embed the teaching strategies we were working on in class within the app, and then reflect on those strategies and their methods in a director's cut. I thought it was very meaningful; it was mind-blowing to many in the class, and to many of my colleagues around here who still use papers as the primary product of measurement.

Do you think that this is going to cause a wider paradigm shift across high schools and colleges around America? Earlier, I spoke to Prof. Josh Brake from Harvey Mudd College, who likened it to the effect calculators had on math classes. Do you think that the effect will be similar, or will something different happen?

I think it's going to be similar, if not grander in some ways. I lived through that calculator era, where there was—and still is—this ongoing debate in many math classrooms about whether calculators should be allowed. When I first took a statistics course back in the day, we were allowed to have a calculator but were not allowed to use software to solve any problems. I can tell you long stories about a midterm taking six hours to complete in class because we were all running ANOVAs by hand, which is not fun.

I definitely think that, with the initial onset of AI, as the public begins to realize what the future holds, there are going to be a lot of divides, not just between high schools and universities, but even within these institutions. You're going to have some lone wolves going out and trying new things, and you're going to have some who keep using traditional methods.

A lot of the research I do is based on a framework called Universal Design for Learning (UDL), which was founded by CAST. UDL calls for multiple means in the way teachers and learners represent information, engage in the learning process, and express themselves. I think the schools globally that have adopted UDL, whether high schools, elementary schools, universities, or even corporate training sites, will find it less of a leap to adopt these new technologies.

I think schools that are still kind of coming along to this whole process, and may have even backtracked since the pandemic, are definitely going to have a tougher time adopting this, because it's not only going to change the way that students express themselves, it will actually change the way we go through the process of developing things like literacy. 

We're just looking at large language models at the moment, but new AIs are going to come out that will transform humanity as a whole. I think it's the role of the education system to help integrate that in an ethical way into the way that we educate our kids and help them think about what the future is.

Have you and CAST thought about ways that LLMs and other AI tools can fit into the UDL framework? I know it’s a little early, but I wanted to hear if you had any specific ideas about places in the framework where it could be integrated.

Well, it can definitely be integrated throughout the framework as an option. Often, when we see UDL implemented in schools, it's implemented in ways where students have options to take hold of their own learning, and to be self-advocates and go down their own path. So it's obviously useful as a simple tool.

I think the longer-term impact we'll start seeing with AI, which my colleagues and I hinted at in an academic piece we recently wrote, revolves around personalized learning. I think we're going to start seeing more personalized approaches to education. In 2016, I published a research piece demonstrating that students provided with a personalized learning environment can make significantly more progress in a single year than their counterparts.

I don't think it's going to be an immediate thing. People are going to be talking about it and working towards it. But I believe UDL provides a foundation for this kind of learning. This has been the focus of my doctoral students here at KU and our work for the last six, seven, eight years. The future of the UDL framework is to serve as this foundation for the future of learning.

And on the topic of the future, though I probably shouldn't speak too much on it, there's a National Educational Technology Plan, or NETP, that the Office of Educational Technology puts out. I was on the team that last wrote it; that version was published right before President Obama left office and republished during the Trump administration. We talked about personalized learning in that plan, and it's being rewritten now. I'm on the team helping with that rewrite, and personalized learning is going to be a significant component of it.

Another critical aspect we need to consider with AI going forward is around access and accessibility. We need to ensure everyone has access, and that accessibility is built into these systems from the very beginning. Those of us who've been studying this work for a long time and working within industry partnerships around technology have come to understand the importance of accessibility, which was not even a consideration when we first started. Now, it's a huge consideration. Even in places like Microsoft and Amazon, access and accessibility are crucial, not just from a consumer perspective, but in making sure we design products and product systems for all individuals from the very beginning.

Two questions here. Firstly, as much as you can tell me, what role would AI tools play in the new educational technology plan? Secondly, can you tell me a little about how you think AI tools could integrate into furthering the goal of accessibility by default?

Well, I can't really tell you much about the new educational technology plan, but I can say that the previous one didn't even mention AI. However, personalized learning has been a big thing in the tech world for a long time, and we've been trying to move towards it. I can say it's going to play a role. The Office of Educational Technology (OET) just put out a blueprint document to discuss the role of AI in education, and the team did a wonderful job summarizing some key points. So, I can say that AI will certainly play a role in the future of the national education technology perspective.

As for how AI can make things more accessible, the possibilities are almost hard to conceive. If we go down this line of personalization, not just in education but for technology as a whole, AI can facilitate a lot of that personalization. Within this, accessibility must be part of the consideration.

For example, I'm wearing reading glasses right now. At some point, could a computer adjust itself so that I wouldn't have to wear my glasses, or could another tool transform so that I could interact with it without needing reading glasses at all? That's just one aspect; articles have come out recently reporting progress in using AI to treat different cancers.

AI is going to transform society in numerous ways, with accessibility being one of them. If it knew exactly what I needed in order to be present in a situation, could it automatically turn on those tools? That's a pretty basic idea. Having a better understanding of accessibility, beyond what we know today, is something that still needs to be figured out. However, AI is definitely going to transform the way we provide access and direct accessibility for individuals with disabilities and other learning needs.

AI tools, and particularly LLMs, have grown and developed very quickly in the past few months. As the industry begins to pour billions into further development, what do you think about the prospect of these tools improving at a rate where educators and regulators simply can’t keep up? How could we deal with that in the future? 

I would argue that we've already experienced exponential growth in this field and it's evident that many people are struggling to keep up. Just look at the learning process that OpenAI has gone through with GPT. It's clear that machines can "learn" faster than humans. This isn't a question anymore, it's a foregone conclusion.

However, I think the bigger question that we need to grapple with revolves around our understanding of intelligence. This is a societal discussion that we need to have: Are these machines intelligent? How do we define intelligence? Generally, intelligence, when measured via a test, is about task mastery. If we stick to this definition, then yes, machines can probably achieve these tasks faster, quicker, and with a more robust dataset than a human can, especially from a recall perspective.

But we need to reach an understanding on what role machines play in human intelligence. This leads us into debates about how we integrate AI into education, and the policy implications of this integration. Conversations about regulation are already happening at various levels of government around the world. Some have suggested shutting down AI developments for a period of time, but is that feasible? Probably not at this point.

We need to figure out the ethical boundaries of AI. This likely involves reaching a universal understanding, much like what we've seen in the biotech space with CRISPR and genetic technologies. We have to acknowledge that not all will respect these boundaries, like the rogue scientist in China a few years back, but we have to establish them nonetheless.

These ethical boundaries may change over time as we learn more, but we must establish some consensus on how to proceed. Unfortunately, and perhaps your question was hinting at this, there are very few people in Washington D.C., and in other legislative bodies, who understand even the basics of what we're getting into. Many view AI as a narrow technology, which is a mindset that may have worked when AI was less advanced, but as we move towards more generalizable AI, we need a broader understanding.

Back in the day, we had an Office of Technology Assessment at the federal level that guided our understanding of technology. It was a non-partisan body that existed until the mid-'90s. I believe we need to return to something like that, a bipartisan or nonpartisan effort to understand what's happening and provide guidance on a global scale, not just in the U.S.

One of the key issues with AI tools, and ChatGPT and other LLMs in particular, has been the potential for the tool to enable widespread, turnkey cheating for students. Is that a concern you share? If so, what ideas do you have to tackle that threat?

It's kind of funny; I think the conversation should be whether it even is cheating or not. Is spellcheck cheating? That's one of the questions I always have. Back when I was in college, we weren't allowed to use spell check. I remember my English 101 professor said "I don't like the way these fonts look when they're printed, so you have to handwrite or use a typewriter to turn in your assignments", and I was like, "That's stupid."

So is it cheating? I would actually argue that if a student can cheat simply by using AI, it was probably a poorly designed learning experience from the very beginning. I think we have to really reflect on the purpose of what we are doing in the education system. What are we teaching? How are we teaching it? How are we assessing it? These are all critical features. So if my students can cheat using AI, it's my fault, because I've designed a poor environment. So I'd start by having people reflect on that.

If you had a magic wand to make any change to an AI tool like ChatGPT, regardless of technological feasibility, what would you change?

I'd actually make it more transparent. One of the issues right now is that there's no way to lift up the hood and see what's going on. Even the developers don't always understand how these more generalizable large language models are working, and I think that's part of the problem. We need greater transparency in the decisions they're making and how they're answering problems. It's a big problem that these large language models are putting out false information, or disinformation, even more so as we enter a new political season.

I think that greater transparency needs to be put forth in that process. That's probably where I'd start if I could wave a wand. That's probably a bigger ask; there are smaller asks, such as injecting something into an output that tells you whether it was written by an AI or a human. But if I got to wave the wand and do anything, I'd say let's figure out a way to make it transparent to human understanding. 
