30 Mar 2009

Killer robots may soon be allowed to shoot on their own

The unmanned bombers that frequently cause unintended civilian casualties in Pakistan are a step toward an even more lethal generation of robotic hunter-killers that would operate with little, if any, human control.

The Department of Defense is funding research into autonomous, or self-governing, armed robots that could find and destroy targets on their own. Onboard computer programs, not human beings, would decide whether to fire their weapons.

"The trend is clear: warfare will continue and autonomous robots will ultimately be deployed in its conduct," Ronald Arkin, a robotics expert at the Georgia Institute of Technology in Atlanta, wrote in a study commissioned by the Army.

"The pressure of an increasing battlefield tempo is forcing autonomy further and further toward the point of robots making the final, lethal decision," he predicted. "The time available to decide whether or not to shoot is becoming too short for remote humans to make intelligent, informed decisions."

Autonomous armed robotic systems are likely to be in operation by 2020, according to John Pike, a defense and intelligence expert and director of the security website GlobalSecurity.org in Washington.

That prospect alarms experts, who fear that machines will not be able to distinguish between civilians and legitimate targets in a war zone.

"We are sleepwalking into a new world where robots decide who, where and when to kill," said Noel Sharkey, an expert in robotics and artificial intelligence at the University of Sheffield, England.

Human operators thousands of miles away in Nevada, using satellite communications, control the current generation of missile-firing robotic aircraft, known as Predators and Reapers. Armed ground robots, such as the Army's Modular Advanced Armed Robotic System, also require a human decision-maker before they shoot.

Already, about 5,000 lethal and nonlethal robots are deployed in Iraq and Afghanistan. Besides targeting the Taliban and al-Qaida, they perform surveillance, disarm roadside bombs, transport supplies and carry out other military tasks. So far, none of these machines is autonomous; all are under human control.

The Pentagon's plans for its Future Combat Systems program envision increasing levels of independence for its robots.

"Fully autonomous engagement without human intervention should also be considered, under user-defined conditions," said a 2007 Army request for proposals for future robots.

For example, the Pentagon says that air-to-air combat may happen too fast to allow a remote operator to fire an unmanned aircraft's weapons.

"There's really no way that a system that is remotely controlled can effectively operate in an offensive or defensive air-combat environment," Dyke Weatherington, deputy director of the Pentagon's unmanned systems task force, said at a press conference on December 18, 2007. "The requirement for that is a fully autonomous system," he said. "That will take many years to get to."

Many warships already carry the autonomous, rapid-fire Phalanx system, which is designed to shoot down enemy missiles or aircraft that have penetrated outer defenses without waiting for a human decision.

At Georgia Tech, Arkin is finishing a three-year Army contract to find ways to ensure that robots are used appropriately. His idea is an "ethical governor": a computer system that would require robots to obey the internationally recognized laws of war and U.S. rules of engagement.

"Robots must be constrained to adhere to the same laws as humans or they should not be permitted on the battlefield," Arkin wrote. For example, a robot's computer "brain" would block it from firing a missile at a hospital, church, cemetery or cultural landmark, even if enemy forces were clustered nearby. The presence of women or children would likewise be off-limits for the robot.

Arkin contends that a properly designed robot could act with more restraint than human soldiers in the heat of battle and cause fewer casualties.

"Robots can be built that do not exhibit fear, anger, frustration or revenge, and that ultimately behave in a more humane manner than human beings, even in these harsh circumstances," he wrote.

Sharkey, the British critic of armed autonomous robots, said Arkin's ethical governor is "a good idea in principle. Unfortunately, it is doomed to failure at present because no robots or AI (artificial intelligence) systems can discriminate between a combatant and an innocent person. That sensing capability simply does not exist."

Selmer Bringsjord, an artificial intelligence expert at Rensselaer Polytechnic Institute in Troy, N.Y., is also concerned.

"I'm concerned. The stakes are very high," Bringsjord said. "If we give robots the power to do nasty things, we have to use logic to teach them not to do those things. If we can't figure this out, we shouldn't build these robots."
