AI could enable the cheap creation of deadly viruses and poses risks of job losses and surveillance, but it also offers significant medical and technological benefits, making it crucial for the US to maintain its lead over China. (Shutterstock)
- AI could enable the cheap and rapid creation of deadly viruses, warns Jason Matheny of the Rand Corp.
- AI promises medical and technological advances but also risks job losses and enhanced authoritarian surveillance.
- Maintaining the US lead in AI over China is crucial for national security, highlighted by President Biden's policies.
Opinion by Nicholas Kristof on July 27, 2024.
Here’s a bargain of the most horrifying kind: For less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.
That’s the conclusion of Jason Matheny, president of the Rand Corp., a think tank that studies security matters and other issues.
“It wouldn’t cost more to create a pathogen that’s capable of killing hundreds of millions of people versus a pathogen that’s only capable of killing hundreds of thousands of people,” Matheny said.
In contrast, he noted, it could cost billions of dollars to produce a new vaccine or antiviral in response.
I told Matheny that I’d been The New York Times’ Tokyo bureau chief when a religious cult called Aum Shinrikyo had used chemical and biological weapons in terror attacks, including one in 1995 that killed 13 people in the Tokyo subway. “They would be capable of orders of magnitude more damage” today, Matheny said.
I’m a longtime member of the Aspen Strategy Group, a bipartisan organization that explores global security issues, and our annual meeting this month focused on artificial intelligence. That’s why Matheny and other experts joined us — and then scared us.
Fears of Biological Weapons
In the early 2000s, some of us worried about smallpox being reintroduced as a bioweapon if the virus were stolen from the labs in Atlanta and in Russia’s Novosibirsk region that have retained it since the disease was eradicated. But with synthetic biology, it now wouldn’t have to be stolen.
Some years ago, a research team created a cousin of the smallpox virus, horsepox, in six months for $100,000, and with AI it could be easier and cheaper to refine the virus.
One reason biological weapons haven’t been much used is that they can boomerang. If Russia released a virus in Ukraine, it could spread to Russia. But a retired Chinese general has raised the possibility of biological warfare that targets particular races or ethnicities (probably imperfectly), which would make bioweapons much more useful. Alternatively, it might be possible to develop a virus that would kill or incapacitate a particular person, such as a troublesome president or ambassador, if one had obtained that person’s DNA at a dinner or reception.
Assessments of ethnic-targeting research by China are classified, but they may be why the U.S. Defense Department has said that the most important long-term threat of biowarfare comes from China.
AI Has a Hopeful Side
AI has a more hopeful side as well, of course. It holds the promise of improving education, reducing auto accidents, curing cancers and developing miraculous new pharmaceuticals.
One of the best-known benefits is in protein folding, which can lead to revolutionary advances in medical care. Scientists used to spend years or decades figuring out the shapes of individual proteins, and then a Google initiative called AlphaFold was introduced that could predict the shapes within minutes. “It’s Google Maps for biology,” said Kent Walker, president of global affairs at Google.
Scientists have since used updated versions of AlphaFold to work on pharmaceuticals including a vaccine against malaria, one of the greatest killers of humans throughout history.
So it’s unclear whether AI will save us or kill us first.
Scientists for years have explored how AI may dominate warfare, with autonomous drones or robots programmed to find and eliminate targets instantaneously. Warfare may come to involve robots fighting robots.
Robotic killers will be heartless in a literal sense, but they won’t necessarily be particularly brutal. They won’t rape, and they might be less prone than human soldiers to the rage that leads to massacres and torture.
AI Could Amplify Social Unrest
One great uncertainty is the extent and timing of job losses — for truck drivers, lawyers and perhaps even coders — that could amplify social unrest. A generation ago, American officials were oblivious to the way trade with China would cost factory jobs and apparently lead to an explosion of deaths of despair and to the rise of right-wing populism. May we do better at managing the economic disruption of AI.
Dictators have benefited from new technologies. Liu Xiaobo, the Chinese dissident who received a Nobel Peace Prize, thought that “the internet is God’s gift to the Chinese people.” It did not work out that way: Liu died in Chinese custody, and China has used AI to ramp up surveillance and tighten the screws on citizens.
AI may also make it easier to manipulate people, in ways that recall Orwell. A study released this year found that when GPT-4 had access to basic information about the people it engaged with, it was about 80% more likely to persuade someone than a human was with the same data. Congress was right to worry about manipulation of public opinion by the TikTok algorithm.
All this underscores why it is essential that the United States maintain its lead in artificial intelligence. As much as we may be leery of putting our foot on the gas, this is not a competition in which it is OK to be the runner-up to China.
President Joe Biden is on top of this, and limits he placed on China’s access to the most advanced computer chips will help preserve our lead. The Biden administration has recruited first-rate people from the private sector to think through these matters and issued an important executive order last year on AI safety, but we will also need to develop new systems in the coming years for improved governance.
“We’ve never had a circumstance in which the most dangerous, and most impactful, technology resides entirely in the private sector,” said Susan Rice, who was President Barack Obama’s national security adviser. “It can’t be that technology companies in Silicon Valley decide the fate of our national security and maybe the fate of the world without constraint.”
I think that’s right. Managing AI without stifling it will be one of our great challenges as we adopt perhaps the most revolutionary technology since Prometheus brought us fire.
—
Contact Kristof at Facebook.com/Kristof, Twitter.com/NickKristof or by mail at The New York Times, 620 Eighth Ave., New York, NY 10018.
This article originally appeared in The New York Times.
c.2024 The New York Times Company