Saturday, November 01, 2008

DARPA's New Advanced Video Spying

DARPA Contract Description Hints At Advanced Video Spying

By Walter Pincus
Monday, October 20, 2008; A13
Courtesy of The Washington Post

Real-time streaming video of Iraqi and Afghan battle areas taken from thousands of feet in the air can follow actions of people on the ground as they dig, shake hands, exchange objects and kiss each other goodbye.

The video is sent from unmanned and manned aircraft to intelligence analysts at ground stations in the United States and abroad. They watch video in real time of people getting in and out of cars, loading trunks, dropping things or picking them up. They can even see vehicles accelerate, slow down, move together or make U-turns.

"The dynamics of an urban insurgency have resulted in a rapid increase in the number of activities visible in the video field of view," according to the Defense Advanced Research Projects Agency.

Although the exploits of the Predator, the Global Hawk and other airborne collectors of information have been widely publicized, there are few authoritative descriptions of what they can see on the ground.

But some insights into the capabilities of the Predator and other aircraft can be drawn from a DARPA paper that describes the tasks of a contractor that will develop a method of indexing and rapidly finding video from archived aerial surveillance tapes collected over past years.

"The U.S. military and intelligence communities have an ever increasing need to monitor live video feeds and search large volumes of archived video data for activities of interest due to the rapid growth in development and fielding of motion video systems," according to the DARPA paper, which was written in March but released last month.

Last month, Kitware, a small software company with offices in New York and North Carolina, teamed up with 19 other companies and universities and won the $6.7 million first phase of the DARPA contract, which is not expected to be completed before 2011.

During the Cold War, satellites and aircraft took still pictures that intelligence analysts reviewed one frame at a time to identify the locations of missile silos, airplane hangars, submarine pens and factories, said John Pike, director of GlobalSecurity.org, an expert in space and intelligence matters.

"Now with new full-motion video intelligence techniques, we are looking at people and their behavior in public," he said.

The resolution capability of the video systems ranges from four inches to a foot, depending on the collector and environmental conditions at the time, according to the DARPA paper. The video itself is also shaped by the angle to the ground from which it is shot, although there are 3-D capabilities that allow viewers on the ground to manipulate videos of objects so they can see them from different vantage points.

Systems also exist that allow tracking, moving-target detection of objects under forest or other cover and determination of exact geographic location. Development is underway of systems that allow recognition of faces and gait -- in other words, human identification.

Currently, because there are so many activities or objects to be watched for hints of suspicious behavior, "more analysts . . . watch the same, real-time video stream simultaneously," according to DARPA. "If any of the given activities or objects are spotted, the analyst issues an alert to the proper authorities."

Future collection systems are expected to provide even more imagery, cover areas greater than 16 square miles and make it more difficult "for a limited number of analysts to effectively monitor and scrutinize all potential activities within the streaming field of view," DARPA wrote.

Today's volume of intelligence data, beyond just streaming video, already "makes it very difficult to detect specific events in real time and too time intensive to search archived video," the DARPA paper said. The effort now underway is designed to find a way to index similar activities, then search for and retrieve them from archives. The proposed new system should be able to analyze real-time streaming video as it is received at a ground station and match it on command against archived video from more than one video library.
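
As a rough illustration of what such indexing and retrieval might look like, here is a minimal sketch in Python. The activity labels, clip metadata and class names are hypothetical assumptions made for illustration; they are not details drawn from the DARPA paper.

    from collections import defaultdict
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Clip:
        """A short segment of archived aerial video (hypothetical metadata)."""
        library: str      # which video archive the clip came from
        source: str       # identifier of the original recording
        start_sec: float  # offset of the clip within the recording
        end_sec: float

    class ActivityIndex:
        """Toy index mapping an activity label (e.g. 'u_turn', 'digging')
        to the archived clips in which that activity was tagged."""

        def __init__(self):
            self._by_activity = defaultdict(list)

        def add(self, activity: str, clip: Clip) -> None:
            self._by_activity[activity].append(clip)

        def search(self, activity: str,
                   libraries: Optional[set] = None) -> list:
            """Retrieve archived clips of a given activity, optionally
            restricted to particular video libraries."""
            clips = self._by_activity.get(activity, [])
            if libraries is None:
                return list(clips)
            return [c for c in clips if c.library in libraries]

In a real system the index would presumably be populated by automatic activity detectors rather than by hand, and queries would run across multiple archives at once, but the basic idea the paper describes -- tag activities as video arrives, then retrieve matching clips on demand -- is what the sketch tries to capture.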

One notion, described by DARPA, would be that an analyst with a standing alert to watch for U-turning cars could employ the new system to quickly match a real-time event with archived clips of cars making such turns before an attack.
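
Continuing the hypothetical sketch above (and reusing its ActivityIndex and Clip classes), the U-turn scenario might look something like this; the archive names and clip identifiers are invented for illustration.

    # Populate a toy index from two invented archives.
    index = ActivityIndex()
    index.add("u_turn", Clip("archive_a", "clip_0412", 412.0, 427.5))
    index.add("u_turn", Clip("archive_b", "clip_0088", 88.0, 101.0))

    def on_live_detection(activity: str) -> None:
        """Hypothetical callback fired when a live feed is tagged
        with an activity covered by a standing alert."""
        if activity == "u_turn":
            matches = index.search("u_turn")
            print(f"Alert: U-turn seen live; {len(matches)} similar archived clips retrieved.")

    on_live_detection("u_turn")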

National security and intelligence reporter Walter Pincus pores over the speeches, reports, transcripts and other documents that flood Washington and every week uncovers the fine print that rarely makes headlines -- but should. If you have any items that fit the bill, please send them to fineprint@washpost.com.
