July 12, 2023
In the July 11 edition of Gates Notes, his regular newsletter, American tech leader Bill Gates argues that while AI poses serious risks, those risks are manageable and can likely be overcome through careful international collaboration and planning, much as the risk of nuclear war was managed in previous decades. That effort matters, he writes, because the emerging technology promises benefits nearly unprecedented in human history.
"The risks created by artificial intelligence can seem overwhelming," Gates wrote. "What happens to people who lose their jobs to an intelligent machine? Could AI affect the results of an election? What if a future AI decides it doesn't need humans anymore and wants to get rid of us?
"These are all fair questions, and the concerns they raise need to be taken seriously," he added. "But there's a good reason to think that we can deal with them: This is not the first time a major innovation has introduced new threats that had to be controlled. We've done it before."
Citing threats like nuclear weapons and mitigations like treaties and arms reduction efforts, Gates argues that humans are likely to overcome AI's new risks through similar deliberate collaboration and cooperation, and that doing so is worthwhile because the technology's benefits far outweigh those risks. Pointing to the arrival of computers in the office and society's subsequent adaptation, he argues that AI can likewise make people more productive and give them back more time to live their lives outside of work.
In the face of very real threats like deepfakes, Gates notes that public and private institutions such as DARPA and Intel are already engineering countermeasures to detect and combat them (Intel claims 96% accuracy for its detector, FakeCatcher; DARPA calls its Semantic Forensics effort SemaFor).
Advocating for all citizens to become involved in the public dialogue shaping the future of AI, Gates closes on an optimistic note. "Finally, I encourage everyone to follow developments in AI as much as possible," he writes. "It's the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks. The benefits will be massive, and the best reason to believe that we can manage the risks is that we have done it before."