Columbia Hosts Cybersecurity Briefing for Washington Policymakers

The event focused on opportunities and threats posed by the microelectronics supply chain and by artificial intelligence.

May 16, 2023

Columbia held a briefing for Washington policymakers this month to advise them on the promises and threats posed by rapidly evolving artificial intelligence technologies that have been introduced to the public over the last year.

Three Columbia experts led the briefing: Professor Salvatore Stolfo, who is credited with creating the field of machine learning applied to intrusion detection systems in the mid-1990s; Professor Simha Sethumadhavan, who specializes in the microelectronics supply chain and cybersecurity; and Professor Junfeng Yang, whose research focuses on making systems, including AI systems, reliable and secure. Jeannette Wing, Columbia’s executive vice president for research, moderated the discussion.

The briefing focused on threats to the supply chain of microelectronics hardware; threats within large language models that are at the heart of new applications such as ChatGPT; and threats in the datasets underlying large language models and AI algorithms. 

Below are some highlights from the discussion. The quotes have been lightly edited for clarity.

Simha Sethumadhavan

“Microelectronics need no introduction as they are everywhere, from cheap toys to national security systems, and are largely responsible for pretty much all the societal advances over the last five to seven decades. Assurance means an ability to provide high-quality evidence that chips have not been tampered with in any way, and this is important because tampering would have very serious and widespread catastrophic consequences.”

“Congress could require all hardware vendors to set aside some portion of the budget toward security. This would remove the first-mover disadvantage that exists today, because when companies want to implement security, it comes at some cost, and nobody wants these parts to cost more, especially when the customer does not realize what they're getting in return.”

“If there is one thing that you want to take away from this talk, it should be that there is an urgent need to enable design assurance for commodity microelectronics.”

Salvatore Stolfo

“The fundamental issue is: how can anyone verify the truthfulness of any data source consumed by these large models? There simply are no fact-checkers that can operate at the scale at which these systems operate.”

“What we can do is leverage our current legal and regulatory infrastructure that's been honed now for many decades in the finance and medical areas. These sectors can teach us how we can ensure that the information that is published by these AI companies is truthful.”

“Nobody's going to get this right at first. So, we have to revisit whatever is decided by Congress on regulating AI, learn from the mistakes, learn from what worked well, and improve, which is what we hope AI systems themselves will do. There has to be a revisiting of regulations on some sort of schedule that makes sense.”

Junfeng Yang

“There are many risks associated with large language models. The first one is misinformation: The model outputs may be biased or incorrect. The second one is disinformation: These models can be used by attackers for malicious purposes, such as launching a disinformation campaign. Lastly, these models are trained through a very complex process, and their own supply chain can be vulnerable when it comes to information provenance.”

“We should not pause AI or large language model research for six months, for two reasons. The first reason is that I believe the net impact of these models on our society will be a huge plus rather than a negative. They can be used to make a much more positive impact on many, many things that our society relies on. The second reason is that if you just pause in the U.S., other countries are going to keep doing their research.”

A full video of the event is available online.

Are you a reporter interested in speaking to a Columbia expert about artificial intelligence? Contact [email protected] to be connected.