By Will Knight | 02.23.23
Hello, readers! After two weeks of wall-to-wall chatbots, it feels like a good time for a sober reminder that artificial intelligence is being used to do things far more physical and consequential than generating text.
Making military AI behave 🤖 💣
A new State Department proposal asks other nations to agree to limits on the power of military AI.
Last Thursday, the US State Department outlined a new vision for developing, testing, and verifying military systems—including weapons—that make use of AI.

The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy represents an attempt by the US to guide the development of military AI at a crucial time for the technology. The document does not legally bind the US military, but the hope is that allied nations will agree to its principles, creating a kind of global standard for building AI systems responsibly.

Among other things, the declaration states that military AI needs to be developed in accordance with international law, that nations should be transparent about the principles underlying their technology, and that high standards should be implemented for verifying the performance of AI systems. It also states that humans alone should make decisions around the use of nuclear weapons.

When it comes to autonomous weapons systems, US military leaders have often offered the reassurance that a human will remain "in the loop" for decisions about the use of deadly force. But the official policy, first issued by the DOD in 2012 and updated this year, does not require this to be the case.
Attempts to forge an international ban on autonomous weapons have so far come to naught. The International Committee of the Red Cross and campaign groups like Stop Killer Robots have pushed for an agreement at the United Nations, but some major powers—the US, Russia, Israel, South Korea, and Australia—have proven unwilling to commit. One reason is that many within the Pentagon see increased use of AI across the military, including in non-weapons systems, as vital—and inevitable. They argue that a ban would slow US progress and handicap its technology relative to adversaries such as China and Russia. The war in Ukraine has shown how rapidly autonomy, in the form of cheap, disposable drones that are becoming more capable thanks to machine learning algorithms that help them perceive and act, can provide an edge in a conflict.
Earlier this month, I wrote about onetime Google CEO Eric Schmidt's personal mission to amp up Pentagon AI to ensure the US does not fall behind China. It was just one story to emerge from months spent reporting on efforts to adopt AI in critical military systems, and how that is becoming central to US military strategy—even if many of the technologies involved remain nascent and untested in any crisis. Lauren Kahn, a research fellow at the Council on Foreign Relations, welcomed the new US declaration as a potential building block for more responsible use of military AI around the world.
A few nations already have weapons that operate without direct human control in limited circumstances, such as missile defenses that need to respond at superhuman speed to be effective. Greater use of AI might mean more scenarios where systems act autonomously, for example when drones are operating out of communications range or in swarms too complex for any human to manage.

Some proclamations around the need for AI in weapons, especially from companies developing the technology, still seem a little far-fetched. There have been reports of fully autonomous weapons being used in recent conflicts and of AI assisting in targeted military strikes, but these have not been verified, and in truth many soldiers may be wary of systems that rely on algorithms that are far from infallible. And yet if autonomous weapons cannot be banned, their development will continue. That will make it vital to ensure that the AI involved behaves as expected—even if the engineering required to fully enact intentions like those in the new US declaration is yet to be perfected.

Drop me a DM on Twitter or just reply to this email to let me know what you think of the newsletter so far—and what you'd like to see me write about next. And if you missed it, have a listen to Know It All: 1A and WIRED's guide to AI—a four-part special on AI from WIRED and National Public Radio's 1A show that went out earlier this week. See you next week!
|
|
I said we'd take a break from chatbots, but I can't resist including a link about the secret history of Bing's talkative new ChatGPT-based interface. Posts on Microsoft support forums show the bot was quietly tested in public last year, under the codename Sydney, but wasn't always polite.
|
|