Introducing Columbia AI

A new effort will promote Columbia’s work on artificial intelligence, with courses, curricula, events, digital tools, and more.

November 13, 2024

Artificial intelligence has swiftly moved into many corners of life, from how we plan trips to how we invest and receive health care. AI is also having an impact on Columbia University and its 17 schools. It’s affecting the University’s teaching and learning, its administrative operations, and its research—both on the foundational science of AI and in the areas where AI intersects with health, sustainability, policy, business, science, and the humanities.

Columbia AI is aimed at amplifying the impact of grassroots AI efforts across the University. As the initiative, which was announced in February, kicks into high gear, Columbia News spoke to its leaders: Shih-Fu Chang, dean of Columbia Engineering; Garud Iyengar, director of the Data Science Institute; and Jeannette Wing, executive vice president for research.

Here are some excerpts from the conversation about what Columbians can expect.

What Is Columbia AI?

Garud Iyengar

Iyengar: Columbia has deep AI expertise in many distinct areas—for example, the foundations of AI and its applications to finance, disease prediction, climate modeling, and robotics. Columbia AI is an effort to connect these areas of strength, forge new collaborations, and increase their impact. We aim to further integrate AI into research, the classroom, and administrative operations.

The Data Science Institute (DSI) is serving as the coordinating hub for Columbia AI, and its mandate is to leverage its connections across schools to seed new AI-related research and support existing projects. We anticipate offering grant proposal support and bringing researchers on board for interdisciplinary collaborations, especially those that bring all of our campuses together.

We have launched a new website, ai.columbia.edu. The site currently features news and events, with a listing of research centers to come soon, and is quickly expanding into a central repository of AI news, events, and resources, and a way to see how AI is transforming every corner of the University. Our goal is to create one place where Columbians and people outside the University can learn about the AI landscape here and get involved. We are planning a series of events across the University, as well as a University-wide AI event in March to showcase our interdisciplinary collaborations.

AI for Research

Jeannette Wing

Wing: Columbia University Information Technology (CUIT) is also building tools to help everyone at Columbia harness the power of AI. We have a licensing agreement with OpenAI that allows Columbia users who register with a license to use a version of ChatGPT that protects the information we share. That protection matters for many reasons, including ensuring that research-related queries don’t inadvertently share unpublished findings. We hope soon to have a Columbia version of GitHub, called GitLab, where tools like this one can be shared within Columbia and developed collaboratively.

Earlier this spring, New York State, private donors, the Simons Foundation, and six academic partners, including Columbia, committed to creating Empire AI, a state-of-the-art artificial intelligence computing center for academic AI research. It’s an effort that’s unique in the U.S. and probably the world. It will position New York State as a leader in AI.

Shih-Fu Chang

Chang: We’re introducing a major initiative as part of Columbia AI called “AI+X.” Columbia has deep expertise in the science of AI and, of course, broad leadership in a diverse range of fields, like medicine, media, law, and so on. We want to leverage our strength across all these fields and focus on how AI could advance them.

Under the “AI+X” umbrella, we’re introducing the AI Experts in Residence Corps: a close-knit cohort of researchers who are passionate about innovative applications of AI and who will collaborate with faculty in different disciplines to discover ways to leverage AI in a given field for the benefit of society.

AI Scholars on Rotation is a new program in which core faculty members with expertise in AI will embed themselves in different fields to explore new collaborations, like using AI to analyze ancient texts, or to develop visual art, or to conduct physics experiments, to name just a few examples.

Columbia Technology Ventures (CTV) will help turn innovations developed through these efforts into real-world solutions with its commercialization resources, such as the Columbia Lab-to-Market Accelerator Network, which aims to launch new startups.

AI for Education

Iyengar: The initiative aims to shape education at Columbia in two ways: through courses that expose all students to AI, and through AI-powered educational tools.

Arts and Sciences and Columbia Engineering, through their joint AI and Society Initiative, are developing courses that integrate AI in different ways. The first course, already underway this semester, is called AI in Context. It’s led by professors from Engineering, English, Music, and Philosophy. The course tackles AI from various perspectives, teaching students the basics of the technology, how to use it to generate music, writing, and literary analysis, and how to address the limits and the philosophical and ethical issues associated with existing AI technology.

We’ll soon be launching other interdisciplinary courses, for example, AI and Writing; AI and Ethics; and AI and Storytelling. We expect to introduce many more in the semesters and years to come. The goal is to make sure that every undergraduate student in the class of 2028 will have an opportunity to understand AI in the context of their own field of study.

Beyond the classroom, a group led by Vishal Misra of the computer science department in Columbia Engineering, along with the Center for Teaching and Learning, is testing various AI-driven educational tools that we plan to integrate into classes to offer interactive support. Think of these as 24/7, on-call, personalized tutors that adapt to each student’s learning style to provide the best support.

Chang: Columbia’s Center for Teaching and Learning has been working with us to answer critical questions about pedagogy in this new age of AI. How should instructors use AI? How do we teach critical thinking skills when students can simply pull answers from AI? What guidelines, best practices, and tools can we share on how to integrate AI into teaching, assessments, and the classroom? These are all important questions that we’re working on with the Center for Teaching and Learning, the Provost’s Office, and CUIT, which is scaling up the tools we create. The educational computing unit there has been a great partner in that effort.

AI for Administrative Operations

Wing: The initiative is collaborating with Chief Operating Officer Cas Holloway and CUIT to use AI to improve our research and business processes, streamlining work and freeing people to do more creative work. To take just one example, last year we did a project with Columbia Libraries in which we used generative AI to enhance the search capabilities of their website, CLIO. Our new CLIO+AI tool summarizes retrieved papers and explains why the papers it found are relevant to the user’s query. The tool’s automatic language translation capability also means users can access Columbia’s multilingual archived resources.

Creating Responsible Innovation

Wing: AI is a transformative technology reshaping how we live and work, but it also brings new risks that demand urgent attention. While the tech companies are focused on how to bring AI to consumers and businesses, academia can focus on how to make AI trustworthy. Columbia, in particular, promotes the responsible and ethical use of AI.

Tackling complex issues like algorithmic fairness, disparate environmental and health impacts, and workforce transformation requires the diverse expertise that universities have. At Columbia, journalists collaborate with engineers to combat misinformation, while scholars across fields such as business, policy, and law work to understand what it means to regulate AI, for example, to ensure safety and reduce inequities as AI technology is deployed. These are challenges that only a research university of Columbia’s breadth and caliber can address, and we see it as our responsibility to help shape the future of safe, ethical AI.

Collaborations With Government and the Private Sector

Iyengar: Another important area we’ll be working on is collaboration with industry. We have partnerships with Capital One, Amazon, Dream Sports, and Infosys already underway. The goal here is not to develop near-term products but to cooperatively address the more fundamental problems that companies do not have the know-how or the bandwidth to address.

For example, we’re working with industry partners to look at how you put guardrails around AI technology that can be used in financial services. There’s an opportunity to use AI to democratize services that are currently available only to high-net-worth individuals. High-quality market analyses are currently very expensive to produce, but with generative AI these reports can be produced at a fraction of the cost. However, one needs to make sure that the technology doesn’t hallucinate and generate false information, and doesn’t end up making unethical use of data.

As a University, we can play a unique role in helping companies explore ethical issues, like how to address fairness in AI and how to build a trustworthy relationship between humans and machine models.

Growing Existing AI Initiatives and Creating New Ones

Chang: AI has been around at Columbia for a long time, transforming fields such as genomic sequencing, cancer detection, and climate modeling. We have major interdisciplinary centers, like the Learning the Earth With Artificial Intelligence and Physics (LEAP) center, which combines machine learning and AI with physics to build much more precise climate models, and the Center for Smart Streetscapes (CS3), which uses crowdsourced data and AI to make city streets safer at a very fine-grained level, parking spot by parking spot and intersection by intersection.

Columbia AI will ensure that researchers at Columbia are aware of all the impressive work being done at the University, and it will also foster new opportunities for collaboration, whether to advance AI regulation, revolutionize cancer care and climate modeling, or make media safer and more trustworthy, to name just a few areas. In addition, we are working with a group of faculty leaders to develop grand visions and big ideas for AI: multiyear efforts aimed at achieving the critical societal impact we need.

Columbia is ready to lead AI-driven innovation—not just in research, but also in education and operations—bringing the whole University into the AI conversation, and having that conversation with the world beyond our doors.