The Complex Junction of AI and Society

The latest conversation I had in my series was with Kashyap Kompella, CEO of RPA2AI. He and his company advise companies, VCs, and governments on artificial intelligence, covering everything from investments and implementation to ethics and more. Since he has been an expert for many years not only in AI technology but also in AI policy, I was very excited to talk with him about how AI can affect society, particularly its role in education, its misuses, and how it should be governed.

As a disruptive technology, AI has begun seeping into educational spaces, particularly in the form of ChatGPT, OpenAI’s large language model (LLM)-based chatbot. The reaction has been mixed. Many believe that such tools are the future of education and should be integrated into schools, while others believe they threaten the way we learn and should be shut out. Kashyap raised an interesting point: tools like ChatGPT are already nearly ubiquitous, so learning to use them is a necessary skill for students. He told me, "If your perspective is that we should teach useful skills to kids, then tools like ChatGPT should be integrated into education." He also offered an example of ChatGPT proving useful in an educational setting: "When I teach MBA students, I'll say write an essay using ChatGPT and then critique it yourself. Can you point out the flaws? This helps you get to better prompting."

However, he also pointed out a potential pitfall: skill erosion. He told me that, once upon a time, he could remember the phone numbers of all of his close friends. Now he can’t conjure up most of them from memory and has to rely on his phone to remember them for him. Drawing the parallel to ChatGPT, he said, "Maybe if I use ChatGPT for 10 years, I won't be able to frame sentences without it." The counterpoint is that if ChatGPT can take over a skill like formally formatting an email today, it may free up our minds for more complex things it can’t yet do, like crafting a viscerally compelling argument or story. But much of our education is layered; we learn addition and subtraction as elementary schoolers so that when we move on to calculators, the operations we perform actually mean something and aren’t just magic boxes that eat numbers and spit out more numbers. The risk of reducing more menial skills to pure magic is very real with ChatGPT and other AI tools, and it should be taken into consideration.

In the vein of ChatGPT making things easier, there is one area where it could make an already dangerous activity even more potent: misinformation. Much has been said about how ChatGPT and other LLMs (coupled with increasingly strong image generation AI) could lead to a deluge of misinformation that is even harder to spot and stop than before. While this is a terrifying prospect in and of itself, Kashyap reminded me that all misinformation is a human endeavor, saying, "ChatGPT makes it easier to produce information and disinformation. But misinformation is spread using human nature and existing distribution channels, not fancy technology." After all, digitally created or enhanced misinformation has been around for as long as Photoshop and imaginative writing have. He said that, when it comes to misinformation, "What wins is distribution. It's easier to produce disinformation [with ChatGPT], but the real problem lies in the distribution."

Another broad issue with AI tools like ChatGPT is who is pulling the strings behind their development. All AI is built on training data; the outputs a model gives you are a direct result of the data it was trained on, and an LLM like ChatGPT is no exception. Kashyap told me, "Right now, who's making the decision whether ChatGPT can generate a specific thing or not? OpenAI is making that decision." This can (and in many cases demonstrably does) lead to cultural tunnel vision in its output, much like how a Google image search for a wedding will likely return results only for a Christian wedding. It’s a similar issue to social media companies choosing what content to block and what to allow; there is the potential to silence perspectives that shouldn’t be silenced. Kashyap said, "We need to evolve alternate mechanisms of governance for these kinds of issues, and for that, we need more education of the regulators."

But that raises the question: are regulators able to effectively regulate the current firestorm of AI? After all, this is one of the fastest-paced episodes of technological progress in recent history, likely rivaled only by the explosion of the World Wide Web. Kashyap agreed, saying, "Newer technologies are coming faster than we're adopting regulation. There are a lot of challenges around regulating the technology." These challenges include the education of regulators (as he alluded to earlier) as well as the inherent slowness of our regulatory system. On the former, Kashyap is already at work: as part of RPA2AI, he runs an initiative called AI Profs, through which he teaches lawyers, boards of directors, and governments about the big picture of AI technology, including its use cases and where it could be applied, to equip them to make better decisions. He also cautions companies against getting swept up in the white-hot race that has outrun regulators. He pointed to the age-old concept of FOMO, or fear of missing out, as one of the reasons many companies are adopting the technology today. For his part, he says it’s his job to counsel against joining this bandwagon: "If you have a longer term view, you probably should proceed with caution, not move fast and break things. I go to social media and see people saying, ‘If you don't use these five tools tomorrow, you're left behind’, but that's not the case."

Overall, as the general public has awoken to the power of AI, there has been a lot of progress, but also a lot of hype. My conversation with Kashyap taught me to separate the wheat from the chaff when it comes to the technology, focusing on the issues that matter (like effective regulation) rather than the ones that don’t (like jumping on a technology bandwagon just because everyone else is).
