Autonomous Drones With Guns And A Mission?

One afternoon last fall at Fort Benning, Ga., two model-size
planes took off, climbed to 800 and 1,000 feet, and began criss-crossing
the military base in search of an orange, green and blue tarp.
The automated, unpiloted planes worked on their own, with no human guidance, no hand on any control.



After 20 minutes, one of the aircraft, carrying a computer that
processed images from an onboard camera, zeroed in on the tarp and
contacted the second plane, which flew nearby and used its own sensors
to examine the colorful object. Then one of the aircraft signaled to an
unmanned car on the ground so it could take a final, close-up look. Target confirmed.
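
The article does not say how the Georgia Tech software is actually built, but the handoff it describes (one aircraft detects, a second cross-checks, a ground vehicle takes the final look) can be read as a simple confirmation pipeline. The Python sketch below illustrates only that logic; every class, value and threshold in it is hypothetical.

```python
# Purely illustrative sketch of the three-stage handoff described above;
# it is not the Georgia Tech software. All names, positions, thresholds
# and sensor calls are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Detection:
    lat: float
    lon: float
    confidence: float  # 0.0-1.0 score from an onboard image classifier


class SearchDrone:
    def scan(self) -> Optional[Detection]:
        # Stub: pretend the camera and classifier spotted the tarp.
        return Detection(lat=32.35, lon=-84.97, confidence=0.92)


class ConfirmDrone:
    def inspect(self, d: Detection) -> float:
        # Stub: fly nearby and re-score the object with this aircraft's sensors.
        return 0.88


class GroundVehicle:
    def close_look(self, d: Detection) -> bool:
        # Stub: drive over for the final, close-up look.
        return True


def confirm_target(searcher: SearchDrone, confirmer: ConfirmDrone,
                   ugv: GroundVehicle, threshold: float = 0.8) -> bool:
    """Detect, cross-check with a second aircraft, then confirm on the ground."""
    detection = searcher.scan()
    if detection is None or detection.confidence < threshold:
        return False  # nothing worth handing off
    if confirmer.inspect(detection) < threshold:
        return False  # the second aircraft disagrees
    return ugv.close_look(detection)  # the ground vehicle gets the final word


if __name__ == "__main__":
    if confirm_target(SearchDrone(), ConfirmDrone(), GroundVehicle()):
        print("Target confirmed.")
```

Running the sketch prints "Target confirmed."; the real difficulty, of course, sits in the perception code that these stubs wave away.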

This
successful exercise in autonomous robotics could presage the future of
the American way of war: a day when drones hunt, identify and kill the
enemy based on calculations made by software, not decisions made by
humans. Imagine aerial “Terminators,” minus beefcake and time travel.

The
Fort Benning tarp “is a rather simple target, but think of it as a
surrogate,” said Charles E. Pippin, a scientist at the Georgia Tech
Research Institute, which developed the software to run the
demonstration. “You can imagine real-time scenarios where you have 10 of
these things up in the air and something is happening on the ground and
you don’t have time for a human to say, ‘I need you to do these tasks.’
It needs to happen faster than that.”

The demonstration laid the
groundwork for scientific advances that would allow drones to search for
a human target and then make an identification based on
facial-recognition or other software. Once a match was made, a drone
could launch a missile to kill the target.

Military systems with some degree of autonomy — such as robotic, weaponized sentries — have been deployed in the demilitarized zone between South and North Korea and other potential battle areas. Researchers are uncertain how soon machines capable of collaborating and adapting intelligently in battlefield conditions will come online. It could take one or two decades, or longer. The U.S. military is funding numerous research projects on autonomy to develop machines that will perform some dull or dangerous tasks and to maintain its advantage over potential adversaries who are also working on such systems.

The killing of terrorism suspects and insurgents by armed drones, controlled by pilots sitting in bases thousands of miles away in the western United States, has prompted criticism that the technology makes war too antiseptic. Questions have also been raised about the legality of drone strikes in places such as Pakistan, Yemen and Somalia, which are not at war with the United States. This debate will only intensify as technological advances enable what experts call lethal autonomy.

The
prospect of machines able to perceive, reason and act in unscripted
environments presents a challenge to the current understanding of
international humanitarian law. The Geneva Conventions require
belligerents to use discrimination and proportionality, standards that
would demand that machines distinguish among enemy combatants,
surrendering troops and civilians.
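
In software terms, discrimination and proportionality amount to a classification step followed by threshold tests. The sketch below is not drawn from any real system; its categories, numbers and rules are invented purely to show how such legal constraints might be phrased as a decision procedure, and how much weight that places on the classifier being right.

```python
# Hypothetical illustration only: how "discrimination" and "proportionality"
# might be written down as a decision procedure. No real system, rule set
# or threshold is represented here.
from enum import Enum, auto


class Category(Enum):
    COMBATANT = auto()
    SURRENDERING = auto()
    CIVILIAN = auto()
    UNKNOWN = auto()


def may_engage(category: Category, classifier_confidence: float,
               expected_civilian_harm: int, military_value: float) -> bool:
    """Both tests must pass; any doubt defaults to not engaging."""
    # Discrimination: only a confidently identified combatant is a lawful target.
    if category is not Category.COMBATANT or classifier_confidence < 0.95:
        return False
    # Proportionality: expected incidental harm must not be excessive relative
    # to the anticipated military advantage (modelled here as a crude ratio).
    if expected_civilian_harm > 0 and military_value / expected_civilian_harm < 10.0:
        return False
    return True


# Example: a surrendering soldier is never a valid target, whatever the confidence.
print(may_engage(Category.SURRENDERING, 0.99, expected_civilian_harm=0, military_value=5.0))  # False
```

The hard part is not the branching; it is producing the inputs. Assigning the category and estimating the harm is exactly the judgment the Conventions assume a human will exercise.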



“The deployment of such systems would reflect a paradigm shift
and a major qualitative change in the conduct of hostilities,” Jakob
Kellenberger, president of the International Committee of the Red Cross,
said at a conference in Italy this month. “It would also raise a range
of fundamental legal, ethical and societal issues, which need to be
considered before such systems are developed or deployed.”
Drones flying over Afghanistan, Pakistan and Yemen can already
move automatically from point to point, and it is unclear what
surveillance or other tasks, if any, they perform while in autonomous
mode. Even when directly linked to human operators, these machines are
producing so much data that processors are sifting the material to
suggest targets, or at least objects of interest. That trend toward
greater autonomy will only increase as the U.S. military shifts from one
pilot remotely flying a drone to one pilot remotely managing several
drones at once.

But humans still make the decision to fire, and in the case of CIA
strikes in Pakistan, that call rests with the director of the agency. In
future operations, if drones are deployed against a sophisticated
enemy, there may be much less time for deliberation and a greater need
for machines that can function on their own.

The U.S. military has begun to grapple with the implications of emerging technologies.
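
The sifting described above, software flagging “objects of interest” out of a flood of sensor data while a person keeps the decision to fire, is essentially a filtered, ranked queue feeding a human operator. A minimal hypothetical sketch, with invented names, scores and threshold:

```python
# Hypothetical sketch of automated triage with a person kept in the loop.
# The names, scores and threshold are invented for illustration.
from dataclasses import dataclass
from typing import List


@dataclass
class SensorHit:
    drone_id: str
    description: str
    score: float  # classifier confidence, 0.0-1.0


def triage(hits: List[SensorHit], review_threshold: float = 0.6) -> List[SensorHit]:
    """Drop low-confidence clutter and rank the rest for a human operator."""
    return sorted((h for h in hits if h.score >= review_threshold),
                  key=lambda h: h.score, reverse=True)


def operator_review(queue: List[SensorHit]) -> None:
    # The software only suggests; the decision to act stays with a person.
    for hit in queue:
        print(f"[review] {hit.drone_id}: {hit.description} (score {hit.score:.2f})")


hits = [
    SensorHit("drone-1", "vehicle near checkpoint", 0.82),
    SensorHit("drone-2", "livestock", 0.31),
    SensorHit("drone-1", "group of people at compound", 0.67),
]
operator_review(triage(hits))
```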

“Authorizing
a machine to make lethal combat decisions is contingent upon political
and military leaders resolving legal and ethical questions,” according
to an Air Force treatise called Unmanned Aircraft Systems Flight Plan
2009-2047. “These include the appropriateness of machines having this
ability, under what circumstances it should be employed, where
responsibility for mistakes lies and what limitations should be placed
upon the autonomy of such systems.”

In the future, micro-drones
will reconnoiter tunnels and buildings, robotic mules will haul
equipment and mobile systems will retrieve the wounded while under fire.
Technology will save lives. But the trajectory of military research has
led to calls for an arms-control regime to forestall any possibility
that autonomous systems could target humans.

In Berlin last year,
a group of robotic engineers, philosophers and human rights activists
formed the International Committee for Robot Arms Control (ICRAC) and
said such technologies might tempt policymakers to think war can be
less bloody. Some experts also worry that hostile states or
terrorist organizations could hack robotic systems and redirect them.
Malfunctions also are a problem: In South Africa in 2007, a
semiautonomous cannon fatally shot nine friendly soldiers.

The
ICRAC would like to see an international treaty, such as the one banning
antipersonnel mines, that would outlaw some autonomous lethal machines.
Such an agreement could still allow automated antimissile systems. “The
question is whether systems are capable of discrimination,” said Peter
Asaro, a founder of the ICRAC and a professor at the New School in New
York who teaches a course on digital war. “The good technology is far
off, but technology that doesn’t work well is already out there. The
worry is that these systems are going to be pushed out too soon, and
they make a lot of mistakes, and those mistakes are going to be
atrocities.”

Research into autonomy, some of it classified, is
racing ahead at universities and research centers in the United States,
and that effort is beginning to be replicated in other countries,
particularly China.

“Lethal autonomy is inevitable,” said Ronald
C. Arkin, the author of “Governing Lethal Behavior in Autonomous
Robots,” a study that was funded by the Army Research Office. Arkin
believes it is possible to build ethical military drones and robots,
capable of using deadly force while programmed to adhere to
international humanitarian law and the rules of engagement. He said
software can be created that would lead machines to return fire with
proportionality, minimize collateral damage, recognize surrender, and,
in the case of uncertainty, maneuver to reassess or wait for a human
assessment.

In other words, rules as understood by humans can be
converted into algorithms followed by machines for all kinds of actions
on the battlefield.

“How a war-fighting unit may think — we are
trying to make our systems behave like that,” said Lora G. Weiss, chief
scientist at the Georgia Tech Research Institute.

Others, however, remain skeptical that humans can be taken out of the loop.

“Autonomy
is really the Achilles’ heel of robotics,” said Johann Borenstein, head
of the Mobile Robotics Lab at the University of Michigan. “There is a
lot of work being done, and still we haven’t gotten to a point where the
smallest amount of autonomy is being used in the military field. All
robots in the military are remote-controlled. How does that sit with the
fact that autonomy has been worked on at universities and companies for
well over 20 years?”

Borenstein said human skills will remain critical in battle far into the future.

“The
foremost of all skills is common sense,” he said. “Robots don’t have
common sense and won’t have common sense in the next 50 years, or
however long one might want to guess.”

Source: Peter Finn, “A Future for Drones: Automated Killing,” The Washington Post

Added: Nov-28-2011 
By: Roach1983
In: Science and Technology
Tags: Autonomous, Terminator, Skynet, Quadrotor, Drones, Military