"Eagle Eye" Review

[Spoiler Alert]
I watched the movie "Eagle Eye" expecting just an action/suspense movie but was pleased to find it had a science-fiction theme.
The plot device in this case is a rogue Defense Department AI called "ARIA", which is hooked into networked electronic devices of all kinds: satellites, surveillance cameras, cell phones, and so on. ARIA is able to monitor and control these devices in order to manipulate human beings and carry out her (the system is given a female voice) goal of "regime change" for the United States government.
ARIA gets her addled "prime directive" from a confused interpretation of the Declaration of Independence, the Patriot Act, and a patchwork of other laws and documents, as well as her assessment of threats to national security. The danger of programming AI with crude formal logic is evident!
Steven Spielberg's idea may have been inspired by phenomena such as the Defense Department's data-mining program "Able Danger" (which reportedly detected the September 11 hijackers' cell in advance of the attack) and DARPA's "Total Information Awareness" project, and these make the plot device quite believable in general outline. Still, the premise sits a bit over the future horizon: ARIA can interpret the visual feed of nearly every networked surveillance camera and plot highly effective, complex action strategies in real time.
An extra feature on the DVD, "Is My Cell Phone Spying on Me?", discusses the issues of electronic privacy and widespread surveillance. Too bad David Brin was not on hand to present the nuanced solution to the dilemma of data privacy from his book "The Transparent Society".
It was hard to watch the movie without recalling other fictional rogue computers like Skynet (from the Terminator series) or even HAL 9000 (from "2001: A Space Odyssey" -- ARIA even has HAL's single glowing red eye). AIs end up playing the villain when it is really their human designers' poor programming that is at fault. Human-equivalent judgment may be one of the last capacities AI designers develop for AIs, and given the evidence of history, human judgment itself could use an upgrade.
While machines today can gather, combine, and analyze data, identify patterns, and control networked devices, it is still up to humans to use their own judgment in designing the programming and in deciding how to properly interpret and use the resulting data.
I tend to agree with Brin that it might be best to pool our human abilities to judge by making all data and all data-mining capacity accessible to everyone. Ironically, the end result might be a return to the privacy dimensions of the small bands of our most ancient ancestors, in which nearly everyone knows what everyone else is doing most of the time, and nearly everyone has a clear understanding of the essential information about other people. Provided that small islands of privacy are preserved, this might be the safest and most psychologically healthy way for our large population to survive the dangers of alienation, by making it easier to detect in advance and prevent anything from suicide to mass-destruction terrorism by individuals or small groups.
At the same time, with a return to "small band" privacy will come a return of the problems associated with living in suffocatingly small groups: the "small-town" mentality by which people leverage power over others through their knowledge of them and their ability to interfere in people's everyday lives. Somehow we must engineer ways to preserve the freedom, tolerance, and respect for individual rights that we have created in our "big-city" worlds. As we grow in knowledge of each other and power over each other, we must preserve and strengthen our wisdom and judgment in how to use that knowledge and that power. The trick will be creating a world that is, all at the same time, safe, open, and free.
