Sam Altman: AI privacy safeguards can’t be established before ‘problems emerge’
The CEO of OpenAI said on Thursday that it’s too early to implement privacy regulations for artificial intelligence because the technology — and how it impacts society — is rapidly evolving.
“It's very difficult to predict all of this in advance,” said Sam Altman, who has run OpenAI since 2019, at a major privacy conference in Washington, D.C. “Dynamic response is the only way to responsibly figure out the right guardrails for new technology.”
“The right thing to do is to watch this incredible new wave fall out and respond very quickly as the problems emerge.”
Altman offered an example, noting that many OpenAI users discuss their most personal problems when interacting with the system. While doctors and lawyers offer clients confidentiality and privilege, meaning that nothing an individual tells them can be repeated except in extreme cases, there is no such protection for people who confide private and highly charged personal struggles to generative AI.
Altman offered no plan for confronting that issue, putting the onus on “society” rather than artificial intelligence companies to address a problem that raises profound privacy concerns.
“We don't have [privilege] yet for AI systems, and yet people are using it in a similar way, and that is a place where I think society will have to come up with a new sort of framework,” said Altman, who was speaking during an interview at the IAPP’s global summit focused on privacy.
“The way that I believe works best for that is a very tight feedback loop and sort of watching what's happening in people's lives and iterating in response to that.”
When asked what privacy means to him, Altman answered that he “would be too shy to say that in this room.”
Altman wasn’t the only person at the IAPP event discussing regulating artificial intelligence. A House Energy and Commerce Committee staffer helping to manage an all-Republican working group of lawmakers drafting a new federal privacy bill said AI regulation will be a big focus of the discussions.
The staffer, Evangelos Razis, said “the stakes of getting a pro-innovation regulatory agreement right” are high.
“We're not gun-shy around addressing risks where they come up, when they're clear,” Razis said, referring to how the working group is approaching its treatment of AI in the bill-writing effort.
However, he said, “the priority, the presumption, is how can we pour gasoline over fire?”
Suzanne Smalley is a reporter covering privacy, disinformation and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop and Reuters. Earlier in her career, Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.