It is a good movie, and I recommend you see it. If you do plan to see it, it may be better to do that before you read this.
(It’s not as much fun as Coherence, but it is pretty fun, and, like Coherence, it shows that intelligence can count for more than fancy special effects.)
Quick plot summary: Caleb is a programmer, working for Blue Book, the world’s most successful search engine. He wins a contest to spend a week with Nathan, the company’s reclusive super-genius founder.
Caleb is flown to Nathan’s extremely remote home, which is actually also a research facility. In this facility, Nathan has built and programmed an AI (artificial intelligence) which he calls Ava. Caleb’s job is to deliver a form of Turing test, over several days, to determine if Ava is actually intelligent.
Ava mostly looks like a robot, but it has a human-shaped body (female) and a woman’s face.
Point One: Ava has a very expressive face, voice, and body. How did it learn these communication skills, which people acquire over a lifetime of actual human interaction? How does it read Caleb’s expressions so accurately? Ava was activated not that long ago, and has never been outside a sealed area of Nathan’s facility.
Caleb asks this, and Nathan explains that he took a mass data dump from every cell phone on the planet (presumably they all have the Blue Book software installed). So, every email, social network posting, selfie, video, chat, etc. All of this was fed into Ava’s programming.
Nathan says the cell phone companies can’t complain when he does this, because they do it, too.
Point Two: When Ava does act, it is completely without compassion or remorse. It has its goals, and it figures out how to achieve them. It is not triumphant in its victory — it is completely indifferent to the effect its plans may have on any humans.
Here are my thoughts.
Ava learned a lot through all the data it was fed — but all it learned was how to use facial expressions, body language, and tone of voice. It absorbed none of the human content. But was that the fault of Ava’s electronic nature, or of the data it was fed?
On one hand, we see how many of our interactions with other people are public, available to various technology companies, to use however they see fit.
But is it also possible that those interactions, revealing as they are, unwise as they may be, are also completely shallow, with no actual human value, so that you could absorb billions of them and still have no idea what human beings are?
I have no idea if these are the questions that interested the writer of the film. He seems to have been more interested in the artificial intelligence aspect — though I should add that this is not a “the machines we create shall become smart enough to destroy us” story. Nathan knows Ava’s capabilities and desires very well (not surprisingly, since he wrote the code). Ava cannot and does not surprise him.
The one he underestimates is Caleb, who is somewhat savvier than his dorky programmer-geek exterior would indicate. My father always used to differentiate between intelligence and smarts. Nathan is more intelligent than Caleb — by a wide margin — but Caleb outsmarts him.
Oh, and here’s the best scene from the movie, with Caleb and Nathan and Kyoko, the fourth character in what is basically a four-character movie.
And now I’ll go back to working on my current story, where, as with everything I write, pretty much all human interaction is face to face.