Artificial intelligence platforms such as ChatGPT have caught the attention of researchers, students, and the public in recent weeks. For this dean's message, I have invited Chirag Shah, an Information School professor and expert in AI ethics, to share his thoughts on the future of generative AI.
— Anind K. Dey, Information School Dean and Professor
ChatGPT has caused quite an uproar. It’s an exciting AI chat system that leverages a model trained on huge amounts of text to provide short, natural-sounding responses and to complete complex tasks. It can write long essays, generate reports, develop insights, compose tweets, and provide customized plans for goals ranging from dieting to retirement planning.
Amid the excitement about what ChatGPT can do, many have quickly started pointing out problems with its use. Plagiarism and bias are the most immediate concerns, and there are longer-term questions about the technology’s implications for education, jobs, and even the creation and dissemination of human knowledge on a global scale.
We have entered a new era in which systems can not only retrieve the information we want, but generate conversations, code, images, music and even simple videos on their own. This is powerful technology that has the potential to change how the world works with information, and as with any revolutionary technology, its benefits are paired with risk and uncertainty.
Traditionally, we have had two ways to access information: directly and through algorithmic mediation. When we read newspapers, we are accessing information directly. When we use search engines or browse recommendations on Netflix’s interface, we are accessing algorithmically mediated information. In both cases, the information already existed. But now we are able to access a third type: algorithmically generated information that didn’t previously exist.
There could be great benefits to having AI create information. For example, what if an author working on a children’s book needed an illustration of astronauts playing basketball with cats in space? Chances are, no system could retrieve it. But if the author queries DALL-E, Imagen, or Stable Diffusion, for example, they will get a pretty good result that is generated rather than retrieved.
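To make that concrete, here is a minimal sketch of what such a query might look like in code, using the open-source Hugging Face diffusers library with a Stable Diffusion checkpoint. The specific model name and the assumption of a GPU are illustrative choices, not requirements:

```python
# A minimal sketch: generating (not retrieving) an image from a text prompt
# with Stable Diffusion via the Hugging Face diffusers library.
# The checkpoint name and GPU availability are assumptions for illustration.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; others work too
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# The illustration no search engine could retrieve:
prompt = "astronauts playing basketball with cats in space"
image = pipe(prompt).images[0]  # a brand-new image, synthesized on demand
image.save("astronauts_basketball_cats.png")
```

Nothing here is looked up in an index; the image is synthesized from the prompt, which is exactly what distinguishes this third type of information access from the first two.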
Generated information can be tailored to our specific need and context without our having to sift through sources. However, we have little understanding of how and why it was generated. We can be excited about an all-knowing AI system that is ready to chat with us 24/7, but we should also be wary of being unable to verify what the system tells us.
What if I asked you which U.S. president’s face is on the $100 bill? If you said “Benjamin Franklin,” you fell for a trick question. Benjamin Franklin was a lot of things: a Founding Father, scientist, inventor, and the first Postmaster General of the United States. But he was never a president. So you generated an answer to a question that has no valid answer. Various pieces of otherwise credible knowledge, such as presidents appearing on currency and Benjamin Franklin’s place as a historical figure, gave you a false sense of correctness when you were asked a leading question.
Similarly, algorithmically generated information systems combine sources and context to deliver an answer, but that answer isn’t always valid. Researchers are also concerned that these systems often can’t or won’t provide transparency about their sources, reveal their processes, or account for the biases that have long plagued the data and models behind AI.
Big tech companies and startups are quickly integrating these technologies, and that raises many pressing questions. Will this be the new generation of information access for all? Will we, or should we, eliminate many of the cognitive tasks and jobs that humans currently perform, given that AI systems could do them? How will this affect education and workforce training for the next generation? Who will oversee the development of these technologies? As researchers, it’s our job to help the public and policymakers understand the technology’s implications so that companies are held to a higher standard. We need to help ensure that these technologies benefit everyone and support the values we want to promote as a society.
Oh, and if that Benjamin Franklin question tripped you up, don’t feel bad. ChatGPT gets it wrong too!