The namesake of Brian Christian's new book, The Most Human Human, is an award he won in 2009 for basically acing the Turing Test: a five-minute instant-messaging conversation in which a human judge must determine whether the party on the other end is a machine or another human. Every year the Turing Test is formally administered, the Loebner Prize is awarded to the most human-like machine (or rather, its engineer), and one of the "confederates," the real people thrown in to talk to the judges, wins the odd honor of being crowned "the most human human."

The Turing Test is meant to be the metric for artificial intelligence, but Christian argues (by drawing on an impressive triumvirate of college degrees in Computer Science, Philosophy and Poetry) that it has actually become one for our own intelligence. He found the time to explain to us (appropriately over Turing-esque Gchat) the evolving idea of what it means to be human, why he's not worried we'll ever have to submit to machine overlords, and how winning tactics in the Turing Test are remarkably applicable to dating.

The Turing Test is a starting point for discussion in many of the reviews of your new book and in your interviews, but to avoid a mechanical answer, in the spirit of your book, how would you pitch it without mentioning the Turing Test? It's sort of an allegory about what it means to communicate with other people, what it is about ourselves that we value and place emphasis on, and how that has been changing over time, particularly as a result of the advance of the computer in the 20th century.

You mention that in the past philosophers strove to differentiate themselves from animals, but then we invented machines. How do you think that's changed our understanding of our own uniqueness? I think there's a great irony in the fact that the things philosophers tended to put so much of the emphasis on (factual recall, memory, mathematics, deductive logic, procedural thinking) were really the first things computers proved themselves to be good at. For me that was the red-alert moment for philosophers to scramble and revisit the definition of what human uniqueness is all about.

One way that you argue humans are still different from (and superior to) machines is that machines haven't made the "transition from imitator to innovator." Yeah, that's a great quote from Garry Kasparov. In the context of chess, what he's saying is that there's a huge body of opening theory every pro chess player has to memorize, but that in order to compete at the highest level you have to make a contribution to opening theory, not simply know it. It's kind of like academia. You can be a really good high school teacher merely by knowing a lot about science, but if you want to be a professor at a research university you have to actually be adding to science. You have to be changing the thing, not merely understanding it. I think there's something to be said about that in the way of conversation, but definitely in the way of art. Part of what we identify with in some of the great pioneers of art is not just their mastery but the effect they had on the field. To my mind that's part of what communicating successfully means: it's not only about having a firm grasp on how the language works, or how conversations tend to work, but about reaching a level where you have an aesthetic agenda to change those norms or push that envelope.

About meeting people, in your book you analogize speed dating to the Turing Test. How are they alike? Haha, right—well, you can definitely think of the Turing Test as a kind of speed date: you're thrown together with this total stranger and you have just five minutes to try to break through the formalities and get to a place where you're actually getting a sense of each other as distinct human beings. I was fascinated to learn, for example, that the inventor of speed dating—the Beverly Hills rabbi Yaacov Deyo—had to go so far as to ban certain questions, because they kept coming up again and again, and because they were so unproductive. Things like, "So what do you do?" and "So where are you from?"

For one, they're such popular questions that they're answerable without much original thought; for another thing, they are more "content" than "style," that is, they get more at what someone is like "on paper" than at their personality, quirks, mannerisms... In fact, typically what we love about people isn't their properties, it's their manner. One of the parallels that MHH draws is between this kind of conversation and high-level chess, which has this concept called "getting out of book." As all chess games open with the same position, it takes Grandmasters a long time to reach a position that neither has seen before—much of the energy of the game comes from that process of stepping out of known chess theory and into uncharted terrain. With conversation it's the same way—we typically start with pleasantries and familiar questions, feeling our way towards a line of discussion that gets both people thinking on their feet. Deyo's strategy of literally banning the familiar first questions seems to me a brilliant way to try to jump-start that process, and it was something I tried to bring into my Turing test conversations as well.

You warn readers about "low-entropy information" in conversation. Could you explain that concept in the context of picking someone up at a New York City bar? Claude Shannon, the founder of information theory, famously came up with a mathematically precise definition of what information is. In a nutshell, what Shannon says is, "The information of a sentence is how surprising it is," and he has rigorous mathematical ways of defining surprise. So information entropy pivots on how surprising a message is, and a number of factors affect our ability to be surprising. For example, when we're communicating over text message, we have these phones with predictive text trying to guess what words we're saying. They're making it easier for us to say common words, and conversely harder to say unexpected words.
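To make that concrete, here's a minimal sketch of Shannon's measure in Python. The example sentences are invented for illustration, and treating a single reply as its own little word distribution is a simplification of Shannon's actual model:

```python
import math
from collections import Counter

def surprisal(p):
    """Shannon's 'surprise' of an event with probability p, in bits."""
    return -math.log2(p)

def entropy(text):
    """Average surprisal of a text's word distribution, in bits per word."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return sum((n / total) * surprisal(n / total) for n in counts.values())

# A rote bar opener repeats itself; a quirkier line spreads its probability
# mass over more distinct words and so carries more bits per word.
print(entropy("so what do you do so where are you from"))      # ~2.72 bits/word
print(entropy("i spent the morning teaching my parrot bebop")) # 3.0 bits/word
```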

So ironically, it's kind of harder to be yourself when you're texting than it is when you're talking out loud. But it's also harder to speak idiosyncratically in a noisier environment, because it becomes harder for the other person to guess what the missing pieces are. This is actually the definition of a Cloze test, where you delete part of a sentence and ask the reader to guess what was there. So the noisier an environment is, the more generically we speak. That fact is problematic, because we use places like bars and clubs to meet people. I happen to think it's a big mistake.
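As a quick illustration of the Cloze idea (the deletion rate and sentences here are made up, not from the book), here's a sketch that blanks out words at random; the more predictable the sentence, the easier the blanks are to restore:

```python
import random

def cloze(sentence, rate=0.3, seed=1):
    """Blank out roughly `rate` of the words, Cloze-test style."""
    rng = random.Random(seed)
    return " ".join(
        "____" if rng.random() < rate else word
        for word in sentence.split()
    )

# A stock pleasantry survives heavy deletion; an idiosyncratic line doesn't.
print(cloze("So what do you do for a living"))
print(cloze("I spent the morning teaching my parrot to whistle Coltrane"))
```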

Without writing a clear formula for a human conversation (that would defeat the purpose of your book), you do explain what qualities make a conversation distinctly human: site-specificity, information entropy, avoiding stateless conversations. Taken to an extreme, though, in Neil Strauss' book "The Game" he talks about professional pick-up artists being "Social Robots." Is robot actually an accurate characterization of them? In the "Site-Specificity" chapter I talk about how there's often this intermediate step between humans doing a job and robots doing a job: namely, a step where the humans essentially become robots. Call centers are a good example of this now: instead of using their own judgment and instinct, many call center operators are simply the voicebox for computer software. They're typing what you say into a computer and reading back to you what the computer tells them to say. Essentially they're running AI software on their own minds, and this becomes a kind of Turing test failure.

I think the same thing is basically happening with some of the "social robots" Strauss describes in The Game, where they're internalizing huge pre-composed scripts and sets of if-then statements, like "if she says x, you automatically say y," and in doing so basically become chatbots. The fact that the chatbot software is running on a human brain rather than a machine becomes surprisingly irrelevant. So in some sense, the Turing test is less about processors vs. brains than about rigid, fixed method vs. flexible method, the "figuring out" process.
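A toy sketch of that kind of script (the lines and rules here are invented, not drawn from The Game or the book): a stateless lookup table whose reply depends only on the last input, with canned filler for everything else:

```python
# A rigid if-then "script": no memory of the conversation, no model of the
# other person. Whatever runs this table, brain or machine, behaves as a
# chatbot in the relevant sense.
SCRIPT = {
    "so what do you do?": "A little of this, a little of that. And you?",
    "where are you from?": "Everywhere and nowhere.",
}

def social_robot(line: str) -> str:
    """Reply from the script if the line matches; otherwise emit filler."""
    return SCRIPT.get(line.strip().lower(), "Interesting. Tell me more.")

print(social_robot("So what do you do?"))
print(social_robot("What moved you most this year?"))  # falls back to filler
```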

I got the sense that machines are at their most human when we are at our least. Yes, that's it. As the Oxford philosopher John Lucas puts it, modern-day Turing test deceptions often succeed "not because machines are so intelligent, but because humans, many of them at least, are so wooden." Part of what I found really useful and instructive about researching the Turing test was that it was full of cautionary tales for the times in our own lives when we literally fail to be as human as we could be. Keeping the Turing test and chatbots in mind, knowing how they work, becomes a kind of compass needle.

After an amusing anecdote about the Loebner Prize co-founder falling for a computer program he thought was a Russian woman, you state that online, "all communication is suspect." What's your remedy for that? Ironically we have this situation where the instincts of poetry become useful to IT. Poets are engaged in a battle against cliche, essentially: the constant effort to say the familiar things freshly. I think that's essentially what a confederate is doing in a Turing test when they go up against a program like Cleverbot, which is quite literally a cliche machine, reproducing other people's words without fully understanding them. It's also, I think, part of what we want to do in our emails to combat spam: a context-appropriate neologism or novel metaphor goes a long way towards showing that the words are being generated on the spot by a live mind.

After winning the "Most Human Human" award, do you find yourself assessing everyone's "humanness" in relation to yours? It's not so much a question of comparing one person to another; I think it's more a question of "How human are we both being in this interaction at this particular moment?" Partly that takes the form of a fun game: I use the example in the book of "phonagnosia," which is to imagine what life would be like if you couldn't recognize the timbre of anyone's voice. On the phone, for instance, you'd have to rely on the other person demonstrating either (a) knowledge or (b) verbal style, such that you were convinced it was really them you were talking to. So I sometimes find myself waiting to identify the first moment in a phone call when someone says something "so them," or I say something "so me." More broadly, the Turing test hangs over my conversations now in what I think is a very productive way: when I find myself operating in a chatbot-like fashion, I'm much more quickly able to identify what's missing from that conversation, to put a name to it and have a model for how to fix it. In this way, I think of chatbots as friendly rivals of a sort: they're not really threatening, at least not yet, but they keep us from resting on our laurels or growing complacent. That for me is one of the great life-affirming takeaways of the Turing test.

I noticed in my research that you're not on Twitter. Where do you stand on it? Do you feel the 140-character limit similarly diminishes information entropy? People are surprised that I'm a tech person and I'm not on Twitter. Typically Twitter is used to hyperlink to longer pieces of writing, so to some extent people escape the character limit by letting the tweet stand in for a longer article. I would say that going up against that kind of compression can lead people to a very content-driven model of what communication is about, whereas something I try to emphasize in the book is the influence of idiosyncratic style. So on the one hand, the intense compression can force people to get creative in how they condense their message; on the other hand, I think it does limit the talent. Text messaging in particular I certainly think is dangerous, because you not only have to deal with the compression, but also with the predictive text.

You, quite impressively, hold degrees in Computer Science, Philosophy and Poetry, which is fairly evident in your book. It's funny, I didn't necessarily have a master plan when I studied those things. I knew in college that the mind is an area where philosophy and computer science overlap: these very technical, abstract ways of thinking intersect at the mind. What fascinated me about the Turing Test was that it presented a place where using language expressively became a way to do battle with software, in a test with philosophical stakes. It was a very thrilling notion for me that here was a place where literature, programming and theorizing essentially all converged.