Sunday, March 01, 2009

Kinda like the Terminator?


What could possibly go wrong with a big push to electronic warfare?
FORT LEAVENWORTH, Kan. – For the first time since the end of the Cold War, the Army is updating its plans for electronic warfare, calling for more use of high-powered microwaves, lasers and infrared beams to attack enemy targets and control angry crowds.

The new manual, produced at Fort Leavenworth and set for release Thursday, also is aimed at protecting soldiers against remote-controlled roadside bombs and other nontraditional warfare used by increasingly sophisticated insurgents.

"The war in Iraq began to make us understand that there are a lot of targets that we should be going after in the offensive or defensive mode to protect ourselves," said Col. Laurie Buckhout, chief of the Army's electronic warfare division in Washington, D.C.

The 112-page manual, a copy of which was obtained by The Associated Press before its release at the Association of the United States Army meeting in Fort Lauderdale, Fla., doesn't offer specifics on new equipment or gadgetry but lays out in broad terms the Army's fear that without new equipment and training, U.S. forces may be at a deadly disadvantage.
Maybe they should read this article by Nicholas Carr, "The Artificial Morality of the Robot Warrior":
Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn. The answer seems to be negative, since if we could predict that, we would simply program the robot in the first place, instead of requiring learning. Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer’s part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather than the carefully‐structured domain of a factory.
Troubling Failures.

The authors also note that “military robotics have already failed on the battlefield, creating concerns with their deployment (and perhaps even more concern for more advanced, complicated systems) that ought to be addressed before speculation, incomplete information, and hype fill the gap in public dialogue.”

They point to a mysterious 2008 incident when “several TALON SWORDS units—mobile robots armed with machine guns—in Iraq were reported to be grounded for reasons not fully disclosed, though early reports claim the robots, without being commanded to, trained their guns on ‘friendly’ soldiers; and later reports denied this account but admitted there had been malfunctions during the development and testing phase prior to deployment.”

They also report that in 2007 “a semi‐autonomous robotic cannon deployed by the South African army malfunctioned, killing nine ‘friendly’ soldiers and wounding 14 others.” These failures, along with some spectacular failures of robotic systems in civilian applications, raise “a concern that we … may not be able to halt some (potentially‐fatal) chain of events caused by autonomous military systems that process information and can act at speeds incomprehensible to us, e.g., with high‐speed unmanned aerial vehicles.”
I guess if your name isn't Sarah Connor, you don't have anything to worry about....

4 comments:

Sorghum Crow said...

I didn't read the whole thing, but seriously, what could go wrong?

ellroon said...

And if something does, you just hit this little reset button...

Steve Bates said...

"Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather than the carefully‐structured domain of a factory."

Isaac Asimov made a good living off of this fact for a while.

I find myself reminding people who are intrigued by the possibilities of robots for whatever purpose that computer processors, contrary to what everyone was told in the 1950s, are not anything like human brains... I mean, not even remotely similar. Digital devices with a somewhat human-brain-like architecture can be built, but the level of complexity achievable with current hardware is nowhere near that of the human brain. Some of the necessarily simplified experiments are pretty interesting, but I don't want to think of turning them loose even on a battlefield, let alone in a crowd control situation. I do not have a good feeling about this.

ellroon said...

Exactly, Steve.

My favorite quote, from the character Dr. Ian Malcolm in Jurassic Park, which I've used several times: "Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."