Will ChatGPT and AI Help or Harm Us?
Columbia faculty reflect on the promises and perils of artificial intelligence and how it will shape the future.
Artificial intelligence has been all over the news. So we asked University experts from across disciplines—and ChatGPT itself—to weigh in on a simple question: Do recent developments in AI technologies like ChatGPT excite you, alarm you, or give you hope? Here’s what they said.
Existing Legal Institutions Are Ill-Equipped for Regulating AI
Mala Chatterjee is an associate professor of law.
I think that we are far away from understanding the full range of possibilities and risks created by these technological breakthroughs, and this itself is both exciting and disquieting.
While there are many ways in which these language models are profoundly valuable tools—in particular, for tasks involving the identification or execution of patterns and rules—they also have intrinsic limitations when it comes to anything requiring genuine understanding or factual accuracy. Given the presently opaque nature of their processes, and the complexity of the activities we might seek to offload onto them, it is also difficult to predict ex ante, or evaluate ex post, exactly how the models approach particular questions or what mistakes they might make. Moreover, our existing legal institutions are ill-equipped to regulate these systems, as they were designed for the linear and comparatively transparent actions and activities of persons.
It is thus important that we neither over- nor underestimate the potential of these models, or the challenges they will raise for us, and that we proceed very carefully in thinking about how best to integrate AI into our world and lives, notwithstanding the present flurry of reactions and rhetoric in the public sphere.
A Way to Generate Ideas
Lydia Chilton is an assistant professor of computer science.
I'm very excited about the emerging capabilities of generative AI.
Finally, a machine that can take in simple instructions and produce a basic output. “Basic” might sound derogatory, but basic can be very useful—writing email text based on bullet points, summarizing a report into a tweet, extracting all the names of people from a webpage, etc. These are annoying tasks that take people time, but don't really need our creativity or specialized cognitive skills.
More than just a writing aid, generative AI can also be a thinking aid. It can help generate ideas, and you can either find a good one or see a way to easily improve one. It can help get out a first draft—or, even better, suggest multiple first drafts for you to pick from. Then you can tell it which parts to improve and how to improve them. Generative AI isn't quite good enough to do a task to completion all on its own, but it's a great collaborator to get you started.
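To make one of these "basic but useful" tasks concrete, here is a minimal sketch of how drafting an email from bullet points might be scripted. It assumes the openai Python client is installed and an OPENAI_API_KEY environment variable is set; the model name and prompt are illustrative placeholders, not part of Chilton's remarks.

    # Hypothetical sketch: turn bullet points into email text.
    # Assumes the `openai` package (v1+) and OPENAI_API_KEY are available;
    # the model name below is a placeholder, not a recommendation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    bullets = (
        "- meeting moved to Thursday at 3 p.m.\n"
        "- please send slides by Wednesday\n"
        "- lunch will be provided"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {
                "role": "user",
                "content": "Write a short, polite email based on these notes:\n"
                + bullets,
            }
        ],
    )

    # The drafted email, ready for a human to review and edit
    print(response.choices[0].message.content)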
Put AI to Use on Real-World Tasks
Matei Ciocarlie is an associate professor of mechanical engineering.
I'm certainly excited.
Think of a physical robot operating in the real world: it needs both abstract, semantic intelligence (to understand conceptually how the world works) and embodied intelligence, or the skill to physically interact with the world. We are constantly making progress on the latter, but on the former we have had relatively little progress over the years. Large language models like ChatGPT just might bring exactly that to the table. A physically skilled robot can then take such models out of the purely virtual world of the Internet, and put them to good use on real-world physical tasks.
Why Would You Want a Computer to Think for You?
Gil Eyal is a professor of sociology.
I am alarmed by the ChatGPT episode, but perhaps for somewhat different reasons. I am less alarmed by what ChatGPT can actually do, or by what similar AI programs are projected to do, than by our own reaction of fascination, and the powers that we seem to be projecting onto ChatGPT.
Forty-five years ago, the philosopher Hubert Dreyfus wrote that, “our risk is not the advent of super-intelligent computers, but of sub-intelligent human beings.” What he meant was that the risky part about AI is not that machines become more similar to humans (to the point that they take over, as in sci-fi scenarios), but that humans become more like machines, or, more precisely, that humans adapt themselves to the machines and in the process lose some of the things that make us distinctively human.
We are so fascinated by ChatGPT’s ability to write a mediocre essay that we don’t ask the far more obvious question: Why would you ever want a computer to think for you? Why would you ever want to surrender the process of writing—which is a creative thinking process, something that you embark upon without fully knowing how it might end, what you may discover along the way, or how it may change you—to a machine?
Long-Term Optimism Tempered by Technology's Limitations
Jeff Goldsmith is an associate professor of biostatistics at Columbia Mailman School of Public Health.
I'm cautiously excited by the medium- and long-term implications of developments in AI. With additional refinements through continued research and development, tools like ChatGPT could produce coherent responses to a range of open-ended queries in a way that mimics interacting with knowledgeable humans. While these tools are not poised to generate new insights or demonstrate creativity, they will be broadly useful in the way that search engines are now—as a mechanism to synthesize and understand the vast amount of information available online.
This will be especially true when users have more experience crafting queries that are likely to produce valuable responses. I can envision, for instance, students using ChatGPT as an always-available TA that can support learning by answering basic questions about debugging code. In the shorter term, I worry that not understanding how AI “works” will lead to overconfidence in the ability of these tools and, occasionally, misleading or dangerous interactions.
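As a hypothetical illustration of that chatbot-TA use case, here is the kind of beginner bug a student might paste into ChatGPT, along with the fix a model can usually explain. The snippet is an invented example, not one from Goldsmith.

    # Hypothetical student question for a chatbot TA:
    # "Why does my average come out wrong?"
    #
    #     def average(numbers):
    #         total = 0
    #         for n in numbers:
    #             total += n
    #         return total / len(numbers) - 1  # bug: subtracts 1 from the result
    #
    # A chatbot can typically spot the confusion between `len(numbers) - 1`
    # (the last valid index) and the count itself, and suggest the fix:

    def average(numbers):
        total = 0
        for n in numbers:
            total += n
        return total / len(numbers)

    print(average([2, 4, 6]))  # prints 4.0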
My long-term optimism is tempered first by the inherent limitations of these emerging technologies. They are trained to produce plausible-sounding responses by learning from vast databases; in that training process, AI tools can "learn" the implicit biases of their training datasets in ways that are subtle and hard or impossible to remove. Depending on the settings in which AI tools are deployed, this can exacerbate real-world disparities at massive scales. Relatedly, I'm concerned about the business models that will underlie these technologies, which are costly to develop and maintain. Understanding and carefully regulating the financial incentives around AI tools will be critical as they become increasingly powerful.
Students Will Be Expected to Do More
Hod Lipson is a professor of mechanical engineering and data science and director of the Creative Machines Lab.
It is important to remember that ChatGPT and its cousins, like Bing Chat, are just the beginning of what will be a new era of ever-improving creative AI. While human creativity remains relatively constant over the generations, AI will continue to improve every year. So, whatever your bar is for the quality, accuracy, and usability of AI, that bar will be exceeded quickly. The question for us educators is not how to resist this tool or circumvent it.
The question is how we teach the next generation to use the power of these tools for good. How do we harness their benefits while understanding and mitigating their limitations? If education is all about standing on the shoulders of giants, then these are the ultimate shoulders to stand on.
In my engineering classes, I ask students to complete their assignments using AI tools. In my robotics class, they are encouraged to use generative-AI tools like Stable Diffusion to help generate engineering concepts; in my AI class, students are encouraged to use ChatGPT to help them write code. Of course, students must acknowledge their use of these tools, just like any other external source. On the curricular side, I am changing the content to take into account the fact that students now have access to these AI tools. I expect more, and students who do not learn to use these tools will find it harder to keep up.
One Concern Is an AI That Exceeds Human Intelligence
Vishal Misra is a professor of computer science.
ChatGPT has taken the world by storm, and I am both excited and alarmed by it. I have been working with these models for the past couple of years, and I know how much they can increase human productivity and help our lives—be it in education, research, or day-to-day work. At the same time, they can also be misused to spread disinformation, which has already been a problem for our society in recent years and will get much worse with these new generative models. The humanlike accuracy with which they generate text, voice, and video is scary.

Another concern with these rapid developments is the eventual arrival of Artificial General Intelligence (AGI), an AI smart enough that it may exceed human intelligence and wipe humanity out. However, given my knowledge of how these models work, I know we are still quite far from so-called AGI. The public success of ChatGPT has ensured that we will not be caught unaware when that day comes, and that we will approach its development in a thoughtful and measured way. That makes me very hopeful for the future.
I Might Be Biased
ChatGPT is an artificial intelligence chatbot developed by OpenAI.
As a Columbia faculty member, the best way to respond to the prompt from the university's news office for a quote on the recent developments in AI technologies like ChatGPT is to provide an honest and thoughtful response. Here's an example of a possible response:
“As a researcher in the field of AI, I find recent developments like ChatGPT incredibly exciting. These technologies have the potential to revolutionize many industries, from healthcare to finance, and could lead to significant advancements in fields like natural language processing and computer vision. At the same time, it's important to consider the ethical and societal implications of these technologies. As AI becomes more prevalent, we need to ensure that it is used in ways that benefit society as a whole and that we have safeguards in place to prevent unintended consequences. Overall, I am optimistic about the potential of AI technologies like ChatGPT, but we need to approach their development and deployment with caution and responsibility.”
This column is editorially independent of Columbia News.