World Leaders Debate Whether It's OK To Kill People With Terminator-Style Robots

It's been a very strange week in AI, what with the ongoing circus of Microsoft's Bing AI publicly melting down into a wonderful, home-wrecking Pinocchio chaos machine. Against that backdrop, The Hague gathered military leaders from 50 countries to discuss the "responsible" use of artificial intelligence in the military.

The timing? Absolutely impeccable. The substance of the Dutch-hosted summit, though? Worrying.

According to Reuters, leaders certainly gathered, and they certainly said some things. They reportedly signed an agreement to comply with "international legal obligations" in a way that "does not undermine international security, stability and accountability," which sounds good on the surface. But that already modest agreement was reportedly non-binding, and human rights advocates, per Reuters, cautioned that it contained no specific language regarding weapons "such as AI-guided drones, 'kill bots' that could kill without any human intervention, or the risk that AI could escalate military conflict."

In other words, while the whole point of the summit was for leaders to establish some firm ground rules, no firm details made it into any norms that might be established, leaving them unenforceable and barely even coherent. The United States, however, cordially requests that you please abide by them, thank you very much!

"We invite all states to join us in enforcing international norms," US Under Secretary of State for Arms Control Bonnie Jenkins said in a February 16 statement, according to Reuters, "as it relates to military development and use of AI" and autonomous weapons.

“We want to express,” said Jenkins, “that we are open to participation with any country that is interested in joining us.”

If the United States offered anything more concrete, per Reuters, it was Jenkins saying that "human accountability" and "appropriate levels of human judgment" should be leveraged to responsibly incorporate artificial intelligence into military operations. Which, sure, is all well and good.

But both human accountability and human judgment are fundamental expectations of any military already. AI systems don't just show up on the military's doorstep; at the end of the day, even if military-AI integration means humans won't be pulling as many triggers as they do today, humans are the ones building and deploying the AI systems that will do the killing. Humans are, after all, accountable for the results of AI systems, whether the machines perform as intended or go completely off the rails.

And again, without really defining any clear-cut rights and wrongs, especially when it comes to the use of specific and lethal AI-powered weapons, broader statements about judgment and accountability ultimately ring hollow.

Perhaps unsurprisingly, we’re not the only ones with questions.

The US statement "paves the way for states to develop AI for military purposes in any way they see fit as long as they can say it's 'responsible,'" Jessica Dorsey, an assistant professor of international law at Utrecht University in the Netherlands, told Reuters, also calling America's declaration a "missed opportunity" for the nation to show real leadership in the field of AI ethics.

"Where is the enforcement mechanism?" she asked.

To be fair, the US Department of Defense has written some guidelines for the use of American military AI.

But countries can break and remake their own rules at will. Establishing strong, distinct international safeguards and expectations, which seems to have been the goal of this summit, may be the best way to ensure AI accountability, especially given how unreliable these systems actually are in practice. Researchers have also warned that an international AI arms race could easily destroy civilization, so there's that. (Per Reuters, China's representative Jian Tan reportedly argued that international leaders should "oppose seeking absolute military advantage and hegemony through AI," adding that the United Nations should play a major role in facilitating the development of AI.)

In the most optimistic reading, the summit was a first step, albeit a baby step.

“We are moving into an area that we do not know, for which we have no guidelines, rules, frameworks or agreements,” Dutch Foreign Minister Wopke Hoekstra said before the event, according to Reuters. “But we’ll need them sooner rather than later.”

On that note, for everyone's sake, let's hope next year's meeting has a little – or a lot – more juice. If there's ever a time for pinky-promise-level agreements, it's not when deciding how to build war robots.

READ MORE: The US, China, and other nations argue for ‘responsible’ use of military AI (Reuters)

More about AI: Man "Sure" His AI Girlfriend Will Save Him When the Robots Take Over
