Intelligence

2016-03-16

A long and convoluted thought on "Intelligence", whatever that is.

I was reading about Intelligent Design theory recently and I noticed something weird. It's not something weird about the theory itself so much as something weird about intelligence generally. It also ties in with materialist philosophy and computer programming.

Part 1: Intelligent Design Theory

I was reading some of the book Darwin's Doubt by Stephen Meyer (who is allegedly a different person than Stephenie Meyer, who wrote Twilight). The title is an allusion to the sudden emergence of perhaps 17 new phyla during the Cambrian Explosion (depending on how you count them). He claims this was the largest contemporary challenge to Darwin's theory and still has not been satisfactorily explained. I am using the working assumption that the first claim is a matter of historical record, and thus reliable, while the second is a matter of ongoing debate among biologists, and thus subject to change without notice. The root problem seems to be that there just aren't enough surviving fossils from the Precambrian to be sure about what happened back then.

But that's tangential. The important part is that Stephen Meyer identifies a list of traits that whatever-it-was that caused the new phyla must have had. He claims that the cause must be capable of certain feats (he gives a list), and that, at the same time, something (Occam's razor suggests the same something) must be generating large amounts of certain things (another list). He then goes on to argue that intelligent agents, as evidenced by human history, have these powers.

OK. I'm with him so far. I don't think that anyone is seriously arguing that sufficiently smart and capable intelligent agents could not have designed life forms. The real question is two-fold: is there anything else that could make the new creatures and, if so, who dun it? I am not even going to try to answer that here. Instead, I will just assume that it was intelligence, because intelligence is what I need for my idea. And there I run into a problem: although it is not surprising in a book about biology, Stephen Meyer never asks the question of what exactly intelligence is.

To examine that question, I want to think a bit more about the intelligence we know best: our own.

Part 2: Humans as Robots

Note: If I had to guess, the proper term for this idea is Mechanism, but Wikipedia explains philosophy in such weird terminology that I'm frequently unsure exactly which inter-related theory I am thinking of.

Terminology aside, there was a time (possibly during the Industrial Revolution) when it was popular to describe humans as complex biological machines. For example, knees are like mechanical joints, blood vessels like pipelines, bones like support beams, and so on. Although the theory is now out of favor (in exactly that form), I can see the point. The biological machine is a lot more sophisticated than anything we can build, but the parallels are mostly pretty good.

Mostly. You see, there is one part of the body that doesn't have a good mechanical counterpart. That part is the brain, and it is much more like a computer. Admittedly it is a computer based on a very different architecture (parallel vs. sequential), but the connection is there. The most important thing about a computer is not how it works, however, but what it does. So let's follow this a bit farther.

I was reading somewhere vaguely recently (a year ago, maybe? in Scientific American? - a year is a long time) about the modern scientific understanding of the subconscious mind. The first idea was that Freud was right: there is one. The second idea was that he was spectacularly wrong about what it does. Instead of being a super-powerful lurker that secretly manipulates you in hopes of getting more sex, it just does boring, helpful things your conscious mind doesn't want to worry about. For example, it figures out who all the people in a group are from their faces, works out the best way to walk through a room without hitting things, and monitors conversations you aren't paying attention to for someone saying your name. Not nearly as cool (or scary), but a lot more practical.

And you know, I have seen stuff like that elsewhere. It's the same sort of thing that goes on a lot in AI research. Computer recognition of faces is now good enough to be kind of scary, at least in optimal conditions. This, incidentally, is why you have to sit just so and hold a neutral expression for ID photos - anything else messes up computer face-matching systems (source: Wikipedia). Pathfinding is also an old AI task that is finally (now that we finally have workable robots) being applied to real-world situations. And as for voice recognition...
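To make the pathfinding idea concrete, here is a minimal sketch of the room-crossing task: a breadth-first search over a little grid, where 0 is open floor and 1 is an obstacle. (The names and the grid are mine, made up for illustration - real robot pathfinding uses fancier versions of the same idea.)

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over a grid of 0 (open) and 1 (obstacle).
    Returns a list of (row, col) steps from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # also serves as the "visited" set
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Walk backwards through came_from to rebuild the route.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None  # no route exists around the obstacles

room = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
route = find_path(room, (0, 0), (0, 2))
```

Your subconscious presumably does nothing like this step-by-step bookkeeping, which is rather the point of what follows.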

Admittedly, the processing that your subconscious mind does is still a lot better, but the parallel is there. I don't have any real problem with saying that the subconscious mind runs on biological "AI". Actually, scrap that. There is a problem: AI stands for "artificial intelligence", and defining something as the natural form of the artificial form of something natural feels stupid. But apart from that, I am OK with it.

Here, unfortunately, is where the parallel seems to end. And we still haven't gotten to the sort of "intelligence" that we needed for Intelligent Design above.

Part 3: Human Intelligence vs. Artificial Intelligence

The conscious reasoning we do is not like what a computer does. At least, not to the best of my knowledge: I don't know very well how my conscious mind works.

I know what it "feels" like: Sometimes I have ideas sort of strung together. You sometimes see this in books, where it is called stream-of-consciousness. A more edited form is sentences. Sometimes I am trying to solve a problem. My mind builds me a chain of links from the problem to the solution, or sometimes the other way around. Sometimes thoughts just come out of nowhere, although a lot of these don't make sense and I'm not sure if they're the same sort of thing as the rest.

What are the common traits here? Well, for one thing, human thought is basically linear. There are branches and ties to earlier stuff, but they are the exception. Thought normally is a line with stuff added to it rather than a web (although you can get a web if you think about the same thing all the time). For another thing, it works by building bridges between abstraction and reality. When I encounter the real world, I have to run it through successive abstractifications before I can truly process it. Going the other way, invention (whether of mechanisms, philosophies, or merely sentences) works by successive de-abstractification. First I have the idea of a something that will move me where I want to go without walking. Then I think out the basic components: wheels, a mover, a place to sit. Then the detailed design (probably with the aid of the paper "memory" called diagrams). Finally, the mechanical device itself. And then I look upon my idea incarnate: a steam-powered automobile.

Here, I want to stress something: This is not what a computer does. It probably isn't obvious to people who just use computers, but it's pretty clear when programming them. The computer really just blindly follows instructions. It is possible to make a program to do problem-solving, but it doesn't work very well, and the reason that it doesn't work very well is that the computer is trying to solve the problem very differently. Specifically, the computer is trying to solve the problem as a giant search.

I should probably note here that "search" has a broader-than-normal meaning in computer programming. In the simplest form, you just try everything and stop when you find the right answer. The more sophisticated form involves only checking the likely possibilities. But the programmer has to put in what's likely, so we are really just storing human intelligence for the computer to use later. Designing rules that reliably solve a problem is much harder than just solving it, so we aren't really closing in on a solution like that. We are just hiding the problem, while making it bigger (or at least showing it to be bigger) in the process.
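Both senses of "search" can be shown in a toy example (entirely my own invention, not anything from Meyer or any real AI system): solving a tiny digit puzzle by trying everything, versus checking only the candidates a human-supplied rule admits.

```python
from itertools import permutations

# The "problem": find distinct digits a, b such that the two-digit
# numbers "ab" and "ba" sum to 132 (a tiny cryptarithm).

def brute_force():
    """Simplest form of search: try everything, stop at the first hit."""
    for a, b in permutations(range(10), 2):
        if (10 * a + b) + (10 * b + a) == 132:
            return a, b
    return None

def with_heuristic():
    """More sophisticated form: a rule prunes the candidates first.
    Since ab + ba == 11 * (a + b), only pairs with a + b == 12 can work.
    Note who supplied that rule: the programmer, not the computer."""
    candidates = [(a, b) for a, b in permutations(range(10), 2)
                  if a + b == 12]
    for a, b in candidates:
        if (10 * a + b) + (10 * b + a) == 132:
            return a, b
    return None
```

The second version checks far fewer possibilities, but only because the algebra was done by a human beforehand and stored in the code - which is exactly the "storing human intelligence for later" described above.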

The result of this is a phenomenon called the AI effect. Whenever someone gets a computer to do something previously impossible (like playing chess), that thing is no longer AI, it's just a search. And part of why this happens is that, when we know how to make a computer do something, we recognize that the "how" is wrong. We still do search, though, because we understand search. After more than 50 years, we still don't have a clue how to do the other kind of AI. As a result, we aren't really making intelligence, we are just getting better at faking it. And while the fakes are very useful, they aren't the real thing.

Part 4: Conclusion

And now, at last, I am going to tie this back to Intelligent Design theory. The connection is intelligence.

Intelligent Design theory claims that the appearance of new creatures closely resembles the behaviour of "intelligence". But Stephen Meyer never said exactly what the intelligence those agents have is. Humans run on "intelligence", but we don't know what that is either. And because we don't know what "intelligence" is, we can't put it in our computers. In fact, the one way we do know to make "intelligence" - despite being wildly popular and studied in incredible detail - is still so badly understood that we can't even agree on when the intelligence gets in, let alone how.

At this point, the problem that was bothering me about Intelligent Design theory before is coming into focus: The whole idea is a big cop-out! It says that the explanation for events we don't understand is something else we don't understand. But maybe that's too hasty; a connection is something, even if it's between two unknowns. If we formulate it on that basis, we get something like this:

There are two major mysteries in the world (that concern us here): thought and new life forms. Although superficially different, these two have a lot in common. Both involve new things entering the world. In both cases, the new things are far too complex to arise by unguided chance, but, in both cases, there is no known way to get to them in steps. They just spring into being, whole and unexplained: something from nothing. And because they are so similar, and both unexplained, perhaps they are manifestations of the same thing.

There. A hopefully-simple thought in far too many words.

P.S. Something from nothing... I didn't realize it until I phrased it that way, but there are other things like that. The Big Bang is the obvious example. I don't think there is any good explanation for the origin of life either. I have heard the same thing about eukaryotic cells and multicellular life, but I don't know enough biology to judge this. The origin of human "intelligence" probably should be on the list as well, given that, at present, it can only be derived from other human intelligence (which would still arguably be the case even if we someday learn how to make humans artificially).
