Autonomous weapons are here, but the world is not ready for them

This may be remembered as the year the world learned that lethal autonomous weapons had moved from a futuristic worry to a battlefield reality. It’s also the year policymakers failed to agree on what to do about it.

On Friday, the 120 countries participating in the United Nations Convention on Certain Conventional Weapons were unable to agree on whether to limit the development or use of lethal autonomous weapons. Instead, they vowed to continue and “intensify” discussions.

“It’s very frustrating, and a real missed opportunity,” says Neil Davison, senior scientific and policy advisor at the International Committee of the Red Cross, a Geneva-based humanitarian organization.

The failure to reach an agreement came about nine months after the United Nations reported the first use of a lethal autonomous weapon in armed conflict, during the Libyan civil war.

In recent years, more and more weapons systems have incorporated elements of autonomy. Some missiles, for example, can fly without specific instructions within a designated area, but they still generally rely on a person to launch an attack. Most governments say, for now at least, that they plan to keep a human “in the loop” when such technology is used.

But advances in artificial intelligence algorithms, sensors, and electronics have made it easier to build more sophisticated autonomous systems, raising the possibility that machines could decide on their own when to use lethal force.

A growing list of countries, including Brazil, South Africa, New Zealand, and Switzerland, argues that lethal autonomous weapons should be restricted by treaty, as chemical and biological weapons and landmines have been. Germany and France support restrictions on certain types of autonomous weapons, including those that could target humans. China supports only a very narrow set of restrictions.

Other countries, including the United States, Russia, India, the United Kingdom, and Australia, object to a ban on lethal autonomous weapons, arguing that they need to develop the technology to avoid being placed at a strategic disadvantage.

Killer robots have long captured the public’s imagination, inspiring lovable science fiction characters and dystopian visions of the future. The recent renaissance in artificial intelligence, and the creation of new types of computer programs capable of out-reasoning humans in particular fields, has prompted some of the biggest names in technology to warn of the existential threat posed by intelligent machines.

The issue became more pressing this year after a United Nations report said that a Turkish-made drone known as the Kargu-2 was used in the Libyan civil war in 2020. Forces allied with the Government of National Accord reportedly launched the drones against troops supporting Libyan National Army commander Major General Khalifa Haftar, and the drones targeted and attacked people autonomously.

“The logistical convoys and retreating forces of Haftar … were hunted down and remotely engaged by the combat drones,” the report stated. The systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.

The news reflects the speed with which autonomous technology is improving. “Technology is evolving much faster than the military-political debate,” says Max Tegmark, a professor at MIT and one of the founders of the Future of Life Institute, an organization dedicated to addressing the existential risks facing humanity. “And we are heading, by default, to the worst possible outcome.”

