Brian K. Smith (Photo: Peter Julian)
The promise and peril of artificial intelligence, chatbot technology in particular, has been a hot topic since OpenAI's launch of ChatGPT (Generative Pre-trained Transformer) last fall. The chatbot impressed many with its ability to generate detailed, human-like text, although critics noted its uneven factual accuracy. Journalists, artists, ethicists, academics, and public advocates raised concerns about how ChatGPT could negatively affect education, disrupt entire industries, and be used to sow political and social chaos.
By January, ChatGPT had reached more than 100 million monthly users, a faster adoption rate than that of Instagram or TikTok. On March 14, OpenAI released GPT-4, an upgrade of the model used in ChatGPT. Microsoft and Google have also introduced their own chatbots.
In the following Q&A, Brian K. Smith, the Honorable David S. Nelson Chair and associate dean for research at the Lynch School of Education and Human Development, talks about the potential of AI and ChatGPT, for better or worse. Smith's research interests include computer-based learning environments, human-computer interaction, and computer science education. He has also worked in artificial intelligence throughout his career.
OpenAI CEO Sam Altman met with Washington, D.C., lawmakers earlier this year to clarify misconceptions about ChatGPT by explaining its uses and limitations, but some legislators believe that the new technology warrants a dedicated regulatory agency. Is that wise?
Whether it's government, industry, academia, or some combination, people need to think about the societal implications of any technology. As many suggest, those implications could be bad, but they could also be positive. For example, much progress has been made using machine learning in breast cancer analysis. It'd be great to incentivize and celebrate these positive applications while continuing to look for and minimize possible biases and adverse effects. In the short term, that might mean a regulatory body. In the long term, we should educate future technologists to think as deeply about the societal impacts of their innovations as they do about technical knowledge.
Researchers warn that large language models like the one behind ChatGPT could be used by disinformation campaigns to spread propaganda more easily, and that as models become more accessible, easier to scale, and capable of composing more credible and persuasive text, they will be very effective in future influence operations. Is the danger legitimate? What could be done to mitigate the threat of the tool's weaponization in the wrong hands?
There are and will always be bad actors in the world, and they'll use whatever they can to do bad things. Will some bad people use ChatGPT to spread misinformation, write convincing phishing emails, etc.? Without a doubt. But I think we know a lot about how bad actors work with existing tools, and that knowledge goes a long way. We focus on the bad getting worse, but the good also gets better with new technologies.
In a survey of 1,000 college students, the online magazine Intelligent found that 30 percent had used ChatGPT on written assignments, and nearly 60 percent of those students used it on more than half of their assignments. Some universities worry about ChatGPT's impact on student work and assessments, given that it passed graduate-level exams at the University of Minnesota and Penn's Wharton School of Business, but they are declining to bar the chatbot, instead advising professors to set their own policies. What should colleges consider when it comes to ChatGPT?
Writing is a huge part of how students are assessed in education, so it's not surprising that there's concern about a program that generates reasonable essays, computer programs, language translations, etc. But ChatGPT is a technology that offers an opportunity to rethink what and how students learn, much like calculators, spell-checkers, Wikipedia, and similar tools. Changing education is challenging, so how do we do it? Boston College's Center for Teaching Excellence created an excellent resource that provides strategies for using ChatGPT to teach and to minimize cheating. Other universities are investigating similar ways to work with ChatGPT rather than trying to ban its use. The key is getting educators to start thinking together as a community to develop pedagogies that situate ChatGPT and other tools as intellectual partners rather than stuff to cheat with (it's not called "CheatGPT").
What do you mean when you talk about "tools as intellectual partners"?
People started talking about intelligence amplification, or augmentation, in the 1950s. The basic idea is that machines can assist us with cognitive tasks that would otherwise be difficult to perform alone. A calculator is a good example: it lets us offload things like computing square roots and multiplying big numbers by hand so we can focus on higher-level problem solving. You can imagine something similar with ChatGPT. I can prompt it to create a sample syllabus, a party invitation, or a Q&A for the Chronicle, and then iterate on the initial text to make it read in my voice and style and correct any errors it made along the way. In this scenario, ChatGPT is like a partner helping me brainstorm and improve ideas.
By the way, I didn't use it for this Q&A.
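For readers curious what this prompt-and-iterate loop looks like in practice, here is a minimal sketch. It assumes the OpenAI Python client and an API key; the model name and prompts are illustrative placeholders, not tools Smith mentions.

```python
# A minimal sketch of the prompt-and-iterate "intellectual partner" loop:
# ask the model for a first draft, then revise it with follow-up
# instructions until it reads in your own voice.
# Assumptions (not from the article): the OpenAI Python client
# (pip install openai), an OPENAI_API_KEY environment variable, and an
# illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{
    "role": "user",
    "content": "Draft a short sample syllabus for an introductory "
               "human-computer interaction course.",
}]

while True:
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    draft = response.choices[0].message.content
    print(draft)

    feedback = input("Revision notes (press Enter to accept the draft): ")
    if not feedback:
        break
    # Keep the draft in context and ask for a targeted revision.
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": feedback})
```

The point of keeping the running `messages` list is that each revision request sees the previous draft, so the model refines rather than starts over, which is the "iterate on the initial text" step Smith describes.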
In a TIME magazine article, proponents of generative AI said it will "reorient the way we work, unlock creativity and scientific discoveries, allow humanity to achieve previously unimaginable feats, and boost the global economy by over $15 trillion by 2030." But the article also raised multiple concerns, not the least of which is the existential risk posed by AI companies creating Artificial General Intelligence (AGI), a tool that "thinks and learns more efficiently than humans," potentially without human guidance or intervention. How can we guarantee that AIs are aligned with human values?
OpenAI did a lot of work creating "guardrails" to keep ChatGPT from spouting lots of crazy things. Unfortunately, that's become politicized, with some saying ChatGPT is "woke" because it might avoid talking about certain people and ideas. But ChatGPT and similar language systems are trained on billions of documents written by humans. Suppose those programs produce language that goes against human values. That'd be because people have expressed, and will continue to express, horrible things that oppose human values. We can't blame a computer for learning our bad habits; humans need to stop war, violence, discrimination, etc. Don't hate the chatbot, hate the game.
TIME cautioned that the big technology companies that will eventually control AIs would likely become not only the world's richest corporations, by charging whatever they want for commercial use, but could potentially morph into "geopolitical actors" that rival nation-states. Are these fears realistic? If so, what measures might be implemented to curb these developments?
This one's out of my league; I'm afraid I don't know anything about how AI might be used to create the Federal Kingdom of Microsoft or the Amazon Republic. It's an interesting scenario, but I'm hoping those companies might help us use AI to solve the significant challenges we face as a society. It won't do much good for Google to take over a continent when it floods due to climate events. I look to our students, past, present, and future, to help with this. Hopefully, they'll become the leaders of organizations that use AI for good rather than technological empire building.
Phil Gloudemans | University Communications | April 2023