Randy Olson discusses modern approaches to Artificial Intelligence

Earlier today, I watched a YouTube video of an interview with David Eagleman, in which he discussed the approach most researchers are currently taking to the problem of Artificial Intelligence. To me, this is an extremely interesting topic to ponder.

He put into words a good portion of what has been on my mind about the field of AI.

I believe Eagleman is on the right track. Let’s start by looking up the definition of intelligence:

intelligence

1. capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.

Ignoring hard-coded solutions that exhibit intelligent behavior (that is hard-coded pseudo-intelligence; I don’t even consider it AI), some AI techniques (machine learning, NLP) have shown the ability to learn and understand the relationships between things. But can they reason about those facts? Can they understand the true meaning of those facts? Further, can they build on those facts to create new ideas they were never taught?

As far as I know, the answer to all three of those questions is “no.” (Please point me to the papers if you have found otherwise! I’d be very interested.)

Why? I highly doubt it’s because the people working on these problems are stupid. It’s more likely that we’ve been taking the wrong approach. As was mentioned in the video, the brain doesn’t solve problems by solving a sub-problem for every task and then combining the results. One solution can also be the solution to an entirely different problem, or pieces of two solutions can combine into the solution to a third. Breaking problems down into sub-problems and solving them individually is an inefficient way to create a true artificial general intelligence.

Most importantly, here’s something to consider: what is the only method by which we’ve seen intelligence created on Earth? It wasn’t designed by humans or by any other intelligence; it was produced by evolution over extremely long periods of time. Why, then, do we ignore this fact and set aside one of the most powerful creative tools available to us?

Below, I respond to a few criticisms of the video:

It’s very easy to say (just a hypothetical quote, not Eagleman’s) “No, no, we’re approaching this all wrong. We can’t do this in a piecemeal fashion. We need to approach this holistically.” That’s great and all… but how do you propose we do that in terms that are concrete enough that we can actually act on them (sadly, “watching how nature does it” is not enough)?

There are entire sub-fields of AI dedicated to this. Researchers in neuroevolution, for example, evolve artificial neural networks (ANNs, or “artificial brains,” if you will) to solve a specific task. A fairly recent advance in this field is the multi-objective evolution of those ANNs, whereby the ANNs are evolved to solve a set of tasks. From there, we can design experiments that ask, “What task challenges (or sets of task challenges) were ancient organisms faced with that required them to evolve intelligence to succeed?,” “What were the ‘building blocks’ of intelligence?,” etc. Indeed, this field promises to be extremely insightful: we can not only attempt to create a general AI, but also hypothesize about how general intelligence arose in the first place. (Which is why neuroscientists are also involved in this field.)
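
To make “evolving ANNs” concrete, here is a minimal sketch of the idea in Python. This is my own illustration, not anything from the video: a tiny fixed-topology 2-2-1 network whose weights form the genome, evolved by Gaussian mutation and truncation selection to solve XOR. A multi-objective variant would score each genome on several tasks at once (e.g., with an algorithm like NSGA-II) rather than on a single fitness value.

```python
# A minimal neuroevolution sketch (my illustration): evolve the weights of a
# fixed-topology 2-2-1 feed-forward network to solve XOR. Real systems such
# as NEAT also evolve the network topology; this sketch evolves weights only.
import math
import random

XOR_CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9  # 4 input->hidden weights, 2 hidden biases, 2 hidden->output weights, 1 output bias

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math range errors
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    # The flat genome w encodes the 2-2-1 network's weights and biases.
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Negative squared error over all XOR cases: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR_CASES)

def mutate(w, rate=0.9, scale=0.5):
    # Perturb most genes with Gaussian noise.
    return [wi + random.gauss(0, scale) if random.random() < rate else wi
            for wi in w]

population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # truncation selection: keep the 10 fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=fitness)
print([round(forward(best, x)) for x, _ in XOR_CASES])  # usually [0, 1, 1, 0]
```

Even this toy version usually finds a working XOR network within a few hundred generations, with no gradient information and no hand-coded rules about the task.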

Furthermore, are you absolutely certain that all of the sub-problems we are solving won’t aid in building that system?

No, though it’s impossible to prove that an approach won’t ever contribute something. Sure, we could probably create a general AI if we kept at it like this for another 1,000 years or so. (Just think about how many sub-problems human brains have to solve!) It’s more a question of which approach we think is more fruitful: should we continue following the approach that has accomplished relatively little in the past ~60 years¹, or should we try a new approach that has a much more solid philosophical grounding?

¹ Admit it: it’s ridiculous that, after ~60 years, the best AI has to offer is expensive machines that can drive a car or play Jeopardy/chess. Teenagers learn how to drive cars, and any trained person could win at Jeopardy given a database of information tailored the way Watson’s was.

I mean no disrespect towards David Eagleman, but I wonder how much he knows about programming, or specifically, modeling. I’m reasonably certain he got most of his facts right. But the reason we don’t have AI yet is not due to programmers’ methods; it is because our best computers cannot handle the load that a useful AI would require.

Indeed, I believe the true problem in this approach to AI right now is finding the proper way to design an artificial brain. Artificial neural networks? Markov brains? Something else?
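
For the unfamiliar, a Markov brain is a network of probabilistic logic gates wired over a vector of binary state nodes (sensors, hidden states, and motors). Here is a minimal sketch of one update step; this is my own illustration, with random gate wiring and probability tables, whereas in actual experiments those are exactly the things evolution shapes.

```python
# A minimal Markov brain sketch (my illustration): probabilistic logic gates
# read and write a vector of binary state nodes. In real experiments the gate
# wiring and probability tables are encoded in a genome and evolved.
import random

N_NODES = 8  # binary nodes: e.g., sensors, hidden states, and motors

class ProbabilisticGate:
    def __init__(self, inputs, outputs):
        self.inputs = inputs    # indices of the nodes this gate reads
        self.outputs = outputs  # indices of the nodes this gate writes
        # One row per input pattern; each row is a distribution over output patterns.
        n_rows, n_cols = 2 ** len(inputs), 2 ** len(outputs)
        self.table = [[random.random() for _ in range(n_cols)] for _ in range(n_rows)]
        for row in self.table:
            total = sum(row)
            row[:] = [p / total for p in row]  # normalize into probabilities

    def fire(self, states, next_states):
        # Pack the gate's input bits into a row index (first input = high bit).
        row = 0
        for i in self.inputs:
            row = (row << 1) | states[i]
        # Sample one output pattern from that row's distribution.
        out = random.choices(range(2 ** len(self.outputs)), weights=self.table[row])[0]
        for j, node in enumerate(reversed(self.outputs)):
            next_states[node] |= (out >> j) & 1  # gates OR into shared nodes

def step(states, gates):
    next_states = [0] * len(states)
    for gate in gates:
        gate.fire(states, next_states)
    return next_states

# Two hand-wired gates: sensors (0, 1) -> hidden (4, 5) -> motors (6, 7).
gates = [ProbabilisticGate([0, 1], [4, 5]), ProbabilisticGate([4, 5], [6, 7])]
states = [0] * N_NODES
states[0] = 1  # turn one sensor on
for _ in range(3):
    states = step(states, gates)
    print(states)
```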

With our current computational technology, we would likely need a highly distributed system to handle the computation required to simulate an artificial brain. But that raises the question: why do we need such powerful hardware to emulate the low-power, (relatively) small computing center within our heads? Are we modeling the brain correctly?
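
A rough back-of-envelope calculation shows the scale of the mismatch. Every figure below is an order-of-magnitude assumption, not a measurement:

```python
# Back-of-envelope estimate of the compute needed to naively simulate a brain.
# Every figure below is a rough order-of-magnitude assumption.
neurons = 1e11             # ~100 billion neurons
synapses_per_neuron = 1e4  # ~10,000 synapses per neuron
updates_per_second = 1e2   # ~100 Hz effective update rate
ops_per_synapse = 1        # one operation per synaptic event (very generous)

ops_per_second = neurons * synapses_per_neuron * updates_per_second * ops_per_synapse
print(f"{ops_per_second:.0e} ops/sec")  # ~1e+17

brain_watts = 20.0  # the biological original runs on roughly 20 watts
print(f"{ops_per_second / brain_watts:.0e} ops/sec per watt")
```

Even with these generous assumptions, a naive simulation lands around 10^17 operations per second, in the territory of the world’s largest supercomputers, while the brain itself draws about as much power as a dim light bulb.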

Original post: http://www.randalolson.com/2012/06/28/david-eagleman-are-we-taking-the-right-approach-to-artificial-intelligence/
