The People Who Would Teach Machines to Learn

Have you ever had an idea about how the human mind works? Douglas Hofstadter has had that idea. He’s also thought of all the arguments against it, and all the counterarguments to those arguments. He’s refined it and polished it, and if it was really special, it’s in one of his books, expressed with impeccable clarity and sparkling wit. If you haven’t attempted to read his book Gödel, Escher, Bach, try it now. For sheer intellectual scope, it’s a singular experience.

There was an article in The Atlantic profiling Dr. Hofstadter and his approach to AI research. It was well written, and I thought it gave unfamiliar readers a reasonable sense of Hofstadter’s work. In contrasting that work with machine learning research, however, it minimized the scope and quality of the work being done in that field.

In this little riposte, I’ll present my case that the machine learning research community is doing work that is just as high-minded as Hofstadter’s when viewed from the proper angle.  One caveat: It may at times sound like I am unnecessarily minimizing Hofstadter’s work. I can assure you I have neither the inclination nor the intelligence to offer that or any such opinion. All I contend here is that his approach to the problem he’s trying to solve isn’t the only one, and there are reasons to believe the machine learning approach is also valid.

Everybody Loves A Good Story

The Atlantic story goes something like this: Dr. Hofstadter has a singular ambition: to build a computer that thinks like a human. This leads to a lot of speculation, research, and programming, but little in the way of what academics call “results”. He doesn’t publish papers or attend conferences. He is content to work with his small group of graduate students on the “big questions” in artificial intelligence, trying to replicate mind-level cognition in computers, and to have his work largely ignored by the rest of the AI community.

Meanwhile, the rest of the AI community has enjoyed considerable scientific and commercial success, albeit on smaller questions. Techniques born in AI planning save billions of dollars and countless person-hours by automating logistics for many industries. Computer vision systems allow individual faces to be picked out from a database of tens of thousands. And machine learning has turned raw data into useful systems for speech recognition, autonomous vehicle navigation, medical diagnosis, and on and on.

But none of these systems comes close to mimicking the human mind, especially in its ability to generalize. And here, some would say, we have settled for less. We have used techniques inspired by cognition to craft some nice solutions, but we’ve fallen short of the original goal: to make a system that thinks.

So Dr. Hofstadter, unimpressed, continues to work on the big questions in relative solitude, while the rest of AI research, lured by the siren’s song of easy success, attacks lesser problems.

It’s a nice, clean story: the eccentric but brilliant loner versus the successful but small-minded establishment. I suppose if you’re reading a magazine, you’d prefer such a story to a messy reality. But while there’s certainly some truth to it, I don’t see it in nearly such black-and-white terms. Hofstadter’s way of working is to observe the human mind at work, then write a computer program that you can watch doing the same thing. This is, of course, a very direct approach, but I’m not convinced it’s the only one, or even necessarily the best.

In the other corner, we have machine learning, which the article frequently implies is one of the easier problems that Hofstadter avoids. Surely research into such a (relatively) simple problem would never contribute directly toward creating a human-level intelligence, would it?

I’m going to say that it just might. Here are three reasons why.

Accidental Intelligence

Remember where cognition came from. A system (evolution) did its best to produce a solution to the following optimization problem:

  1. Survive long enough to have as many children as possible
  2. (Optional) Assist your children in executing step 1

Out of this (this!) came a solution that, as a side effect, produced things like Brahms’ Fourth Symphony and the Riemann hypothesis.
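
To make the framing concrete, here is a toy sketch of selection plus mutation as an optimizer. Everything in it (the environment, the genome encoding, the parameters) is invented for illustration; the only point is that the loop optimizes reproductive success and nothing else.

    # Toy evolutionary loop: the only objective is offspring count.
    # The "environment" and all parameters here are invented for illustration.
    import random

    GENOME_LEN, POP_SIZE, GENERATIONS = 20, 50, 200

    def offspring_count(genome):
        # Hypothetical environment: genomes matching a hidden pattern leave
        # more children. The loop never sees the pattern directly.
        target = [i % 2 for i in range(GENOME_LEN)]
        return sum(g == t for g, t in zip(genome, target))

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Reproduce in proportion to survival success, with copying errors.
        weights = [offspring_count(g) + 1 for g in population]
        parents = random.choices(population, weights=weights, k=POP_SIZE)
        population = [mutate(p) for p in parents]

    print("best fitness:", max(map(offspring_count, population)), "/", GENOME_LEN)

Nothing in the loop asks for structure, yet structure is what gets selected.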

With this in mind, it doesn’t seem unreasonable to think that a really marvelous solution to problems as general and useful as those in machine learning might lead to a system that exhibits some awfully interesting ancillary behaviors, one of which might even be something like creativity.

Am I saying that this will happen, or that I have any idea how it might? Absolutely not. But look at how the human mind was created in the first place, and tell me honestly that you would have seen it coming. The question is whether the set of problems that machine learning researchers tackle is interesting enough to inspire solutions with mind-level complexity.

There are suggestions in recent research that they might be. In the right hands, the process of solving “simple” machine learning problems can produce deep insights about how information in our world is structured and how it can be used. I’m thinking of, for example, Josh Tenenbaum and Tom Griffiths’ ideas about inductive bias, Andrew Ng’s work on efficient sparse coding, and Carlos Guestrin’s formulation of submodular optimizations. These ideas say fundamental things about many problems that used to require human intelligence to solve. Do these things have any relationship to human cognition? Given the solutions they’ve led to, it would certainly be hasty to dismiss the idea out of hand.
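
As one concrete taste of that flavor of insight (a generic sketch, not Guestrin’s specific formulation): many subset-selection problems have a diminishing-returns structure called submodularity, and for any such objective the simple greedy rule below is guaranteed to get within 1 − 1/e (about 63%) of the best possible set, by a classic result of Nemhauser, Wolsey, and Fisher. The coverage objective and the tiny example are invented for illustration.

    # Greedy maximization of a monotone submodular function under a size limit.
    # Submodularity = diminishing returns: an item helps a small set at least
    # as much as it helps a larger one. For such objectives, greedy selection
    # achieves at least (1 - 1/e) of the optimal value.

    def coverage(selected, sets):
        # Example objective (invented): count of distinct elements covered.
        covered = set()
        for i in selected:
            covered |= sets[i]
        return len(covered)

    def greedy_select(sets, k):
        selected = []
        for _ in range(k):
            # Take the item with the largest marginal gain.
            best = max((i for i in range(len(sets)) if i not in selected),
                       key=lambda i: coverage(selected + [i], sets))
            selected.append(best)
        return selected

    regions = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
    print(greedy_select(regions, k=2))  # [2, 0] -- together they cover {1..7}

The payoff is that one theorem covers sensor placement, summarization, and a pile of other selection problems at once; that is what “saying fundamental things about many problems” looks like.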

The Opacity of the Mind

Several years ago, a colleague and I were developing a machine learning solution that could figure out if a photograph was oriented right-side up or not.  After much algorithmic toying and feature engineering, we reached a point where I said, “Well, it works, but this is totally unrelated to the way people do it”.

My colleague’s reply was “How do you know? You’re imagining that you can keep track of everything that happens in your own mind, but you could just be inventing a story after the fact that explains it in a simpler way.  When you recognize that a photo is upside-down, it probably happens far too fast for you to recognize all of the steps that happened in between.”
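
The actual features and algorithm from that project aren’t described here, but to give a sense of what “totally unrelated to the way people do it” can mean, here is a minimal sketch in the same genre, built on an invented brightness feature: outdoor photos tend to be brighter at the top (sky) than the bottom, and a classifier can exploit that statistic with nothing resembling human scene understanding.

    # A minimal sketch of the genre (not the system from the anecdote):
    # guess photo orientation from one crude global statistic. The feature
    # and threshold are invented; a real system would learn from examples.
    import numpy as np

    def upright_score(image):
        """image: 2D grayscale array with values in [0, 1]."""
        h = image.shape[0]
        # Outdoor photos are usually brighter at the top (sky) than the bottom.
        return image[: h // 2].mean() - image[h // 2 :].mean()

    def is_upright(image, threshold=0.0):
        return upright_score(image) > threshold

    # Fake "photo": bright upper half, dark lower half.
    photo = np.vstack([np.full((50, 100), 0.8), np.full((50, 100), 0.3)])
    print(is_upright(photo))        # True
    print(is_upright(photo[::-1]))  # False once flipped upside-down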

There’s a fundamental problem in trying to model human cognition by observing cognitive processes, which is that the observations themselves are results of the process.  If we create a computer program consistent with those observations, have we modeled the actual underlying process, or just the process we used to make the observations?  Hofstadter himself admits the difficulty of the problem, and says that his understanding of his own mind’s operation is based on “clues” like linguistic errors and game-playing strategies.  As of now, our understanding of our own subconscious is vague at best.

Rather than trying to match observations that might be red herrings, why not use actual performance as the similarity metric? Machine learning algorithms can learn to recognize handwritten digits nearly as well as humans can, and they often make the same mistakes. Isn’t it at least possible that the inner workings of such an algorithm bear some relationship to human cognition, regardless of whether that relationship is easily observed?
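
That setup is easy to reproduce. Here is a minimal sketch using scikit-learn’s small bundled digits dataset; the model choice and split are arbitrary, and the interesting part is the confusion matrix, which shows which digit pairs the algorithm mixes up.

    # Minimal digit-recognition sketch (model and split chosen arbitrarily).
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, confusion_matrix
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # small 8x8 digit images bundled with scikit-learn
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
    pred = model.predict(X_test)

    print("accuracy:", accuracy_score(y_test, pred))
    # Off-diagonal entries are the mistakes; comparing them with the digit
    # pairs humans confuse is exactly the performance-based test suggested
    # above.
    print(confusion_matrix(y_test, pred))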

To Learn is to Become Intelligent

Hofstadter works on the big questions, but generally in toy domains like word games, trying to make his machines solve them the way humans might. Machine learning research focuses on a more universal problem: how to make sense of large amounts of data with limited resources.

Consider the data that has been fed into the brain of a five-year-old. We can start with several years of high-resolution stereo video and audio. Hours of speech. Sensory input from her nose, mouth, and hands. Perhaps hundreds of thousands of actions taken and their consequences. Using this data, a newborn who cannot understand language is transformed into an intelligence that can play basic word games. Can there be any doubt that life is at least in part a big data problem?
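
A rough back-of-envelope tally of the visual stream alone makes the point; every figure below is an assumed round number, not a measurement.

    # Back-of-envelope estimate of five years of visual input.
    # All figures are assumptions chosen for illustration.
    years = 5
    waking_hours_per_day = 12
    seconds = years * 365 * waking_hours_per_day * 3600

    frames_per_second = 10        # assumed effective rate
    pixels_per_frame = 1_000_000  # assumed effective resolution per eye
    eyes = 2
    bytes_per_pixel = 1           # assumed grayscale-equivalent depth

    total = seconds * frames_per_second * pixels_per_frame * eyes * bytes_per_pixel
    print(f"{total / 1e15:.1f} petabytes of raw visual input")  # ~1.6 PB

Even with deliberately conservative numbers, the input is on the petabyte scale, before counting sound, touch, or the consequences of actions.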

Moreover, consider how important learning is to our perception of intelligence. Suppose you had two computer systems that could answer questions in fluent Chinese (with a nod to John Searle). The first was built by a fleet of programmers and turned on yesterday. The second was programmed five years ago, had no initial ability to speak Chinese, but slowly learned how by interacting with hundreds of Chinese speakers. Which system would you say has more in common with the human mind?

Big Answers from Small Questions

Is the machine learning research community suffering from a lack of ambition? I don’t buy it. Just because the community isn’t setting out, in general, to build a human-level intelligence doesn’t mean it’s headed in a different direction. After all, the systems we’ve built so far can translate between languages, recognize your speech, face, and handwriting, pronounce words they have never seen, and do it all by learning from examples.

If the machine learning community does create a human-level intelligence, it may not be any one person who had the “aha!” idea that allowed it to happen. It might be more like Minsky envisioned in “Society of Mind”: not a single trick but a collection of specialized processes, intelligent only in combination.

If that sounds like a cop-out, like an excuse not to look for big answers, consider the path followed by childhood cancer research: science has yet to find a single wonder drug or treatment that cures all childhood cancers, yet decades of piecemeal research on parts of the problem have driven cure rates from about 10% to nearly 90%. A bunch of people working on smaller problems may not look like much in situ, but the final product is what matters most.
