Monotheism and Superintelligent AI

Part 1: Monotheism

2015-07-23

Recently, a friend of mine (Daniel Gienow) asked me what effect the development of a god-like AI would have on monotheistic theology. I assume that it is mostly Christian theology in view here, although parts may generalize.

The first thing I would want to establish is: what is monotheism? And that isn't as simple as it sounds.

Let's consider the Norse religion. There are two types of gods: the Aesir (warrior gods) and the Vanir (fertility gods). There are giants (of various poorly-defined types), who, while not exactly gods, operate on a similar power level and intermarry with them. There are also other monsters with the power to threaten the gods, such as Fenrir (the great wolf whom the god Tyr bound at the cost of his right hand) and Jormungand (the Midgard Serpent that stretches around the entire world to bite its own tail).

The world ends at Ragnarok: most of these beings die, and Surt, fiery lord of Muspell (where the lava comes from), burns all nine worlds. I don't know what happens to Surt; he seems to just vanish at that point. Maybe some god took him down. Then the few surviving gods and two surviving humans rebuild, and all is well, for all time.

But let's imagine that things were different. Let's imagine that the only survivors were Surt the fire lord (giant?) and Lif and Lifthrasir, the two humans who survived by hiding somehow inside Yggdrasil. Lif and Lifthrasir have many descendants, and found a new people called the Lifsons. However, Surt makes sure the Lifsons remember that he is the only god. He also destroys all records that any other gods ever existed. So the question is, is the Lifson religion monotheistic?

I would have to say no. Surt is still a polytheistic god; he is merely one from an unusually small pantheon. To put it another way, he is the only god by happenstance, not by necessity. Other gods are possible; they once existed. Even if they never had, that would still be a coincidence, not an inevitability. And besides, if Lyra from Philip Pullman's His Dark Materials trilogy shows up (which is in her power), I would give her a decent chance of taking down Surt. Then we would have a pantheon of zero gods, but that wouldn't exactly make their religion into atheism [also see below].

I think the key problem Surt has here is that he is a contingent being, not a necessary one. There are conditions that cause Surt to come into being (unrecorded, unfortunately) and conditions necessary for his continued existence (e.g. not being chopped up by Freyr's magic sword). There are also other, less obvious conditions, such as the continued existence of space and time.

In contrast, the God of monotheism is a different sort of entity. He (or She or It or whatever) is typically assumed to be a necessary being in the philosophical sense. For those who aren't familiar with the term, it means that the existence of God (if He exists) is required by the basic rules of logic, in much the same way that they require 2 + 2 = 4. It would therefore be impossible to devise any (consistent) circumstances in which God would not exist. Note that because things like space and time are not necessary (as far as we know), God could exist without them.

So now we must ask, who is this (possible) necessary God? There are a number of depictions, one from each of the monotheistic traditions. Sorry to everyone on the list; I almost certainly am distorting/misrepresenting you. Ironically, the summary that gave me the most trouble was for Christianity, which is my religion. I am not sure if that is because I am less ignorant about it, or if it is just inherently more confusing. In any case, the point here is that there is a lot of variation.

In truth, I don't think even the name/title God is ideal. The word "god" (derivation uncertain) seems to be closer to an analogy than a description; we are not saying God is a polytheistic god, any more than calling Him King means that He is a male hereditary monarch. It's just that He is sort of like one in some ways. This seems to appear in other languages as well, with the Hebrew "el" (= Arabic "allah") possibly meaning "strong [one]" and the Chinese "shangdi" meaning "highest emperor". Thus we have the convention in English where a monotheistic God gets a capital and a polytheistic god does not. They are very different entities, and a god is probably more like a human than God.

The confusing terminology comes about because God is so very hard to understand and describe. This is probably because of inapplicable assumptions encoded into our thought processes and languages. Questions like "what did God do before He created the world?" seem sensible even though they involve a fundamental error (time is a property of the universe, and thus "before" is meaningless in this context). The Bible, possibly trying to address an older form of this, repeatedly describes God as "holy", which translates, not as "perfect and [morally] good", but as "other, alien". However you are imagining God, your understanding is almost certainly not strange enough. One of the best depictions of this idea I have encountered is in Clifford D. Simak's otherwise-forgettable novel "A Choice of Gods".

I should note that, in my understanding, all monotheisms refer to the same God. They may have very different understandings, but they are all talking about the same Being. Although the superficial properties vary, the core is the same. The significance of this is a matter of disagreement, with views ranging from "all understandings are equally valid" (pluralism) to "our way is best, but God will accept the others" (some forms of Hinduism) to "our way or nothing" (traditional Christianity).

This seems to have turned into "What is Monotheism?" Sorry about that. Maybe the AIs will get in next time. In the meantime, I would recommend the extremely long essay Meditations on Moloch by Scott Alexander. Link added in archived version.

Part 2: AI

2015-07-23

Continuing Daniel Gienow's question about what effect the development of a god-like AI would have on monotheism.

I don't think there is too much doubt about what an AI is: it's an intelligence (the I) that we made. Except that it doesn't count if we make it the easy way. Those are called babies, and theology already knows how to deal with them. So an AI is an intelligence that we made artificially (giving us the A). Except that doing the easy way artificially doesn't count either, because it's not artificial enough. Not even if we completely rebuild the DNA.

OK, let's just say that an AI has to be in a computer; building up the definition step by step is too confusing. Of course, we can't just build a big computer box and have someone inside to operate it. That was the Mechanical Turk, and it is considered a hoax. I read a story once where someone used necromancy to summon ghosts and imprison them in a computer, but that's not AI either (the ghosts included Adolf Hitler, Genghis Khan, and Vlad the Impaler, so it obviously didn't turn out well).

So an AI has to be a computer program. Of course, we have a problem here. Computer programs just do what we tell them to. Sometimes, if you set things up right, you can get some apparently-unpredictable results. There is emergent order, where simple rules combine to give complex patterns (fractals are a simple case). You can also toss together a bunch of complex things and see how they interact, although that usually doesn't work out so well.
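To make the "simple rules, complex patterns" point concrete, here is a minimal sketch (my own toy illustration, not any real AI technique): a one-dimensional cellular automaton known as Rule 90. Each cell's next state is just the XOR of its two neighbours, about as simple as a rule can get, and yet starting from a single live cell it draws the Sierpinski triangle, a fractal.

```python
# Rule 90 cellular automaton: emergent order from a trivial rule.

def step(cells):
    """New cell = left neighbour XOR right neighbour (wrapping at the edges)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def run(width=63, generations=32):
    """Start from a single live cell and return the history as text."""
    row = [0] * width
    row[width // 2] = 1  # one live cell in the middle
    lines = []
    for _ in range(generations):
        lines.append("".join("#" if c else " " for c in row))
        row = step(row)
    return "\n".join(lines)

if __name__ == "__main__":
    print(run())  # prints a Sierpinski triangle in ASCII
```

Nothing in the XOR rule mentions triangles, yet the fractal shows up anyway; that is the flavour of result people mean by emergent order.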

The real power lies in learning algorithms. In one formulation, these are programs (or parts of programs) that attempt to infer information from data. In another phrasing, they are programs that modify their own behaviour based on their experiences. The second formulation is the one that people care about here.
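A toy sketch of the second formulation (again my own illustration, not drawn from any particular AI system): an agent that chooses between two "levers" with unknown payoff rates and, purely from its own experience, shifts its behaviour toward whichever lever has paid off better.

```python
# A tiny epsilon-greedy learner: behaviour changes with experience.
import random

def learn(payoffs, trials=1000, epsilon=0.1, seed=0):
    """Pull one of two levers each trial. Mostly exploit the lever with the
    best observed average reward; occasionally explore at random.
    Returns (estimated payoff per lever, pull counts per lever)."""
    rng = random.Random(seed)
    totals = [0.0, 0.0]  # total reward seen per lever
    counts = [0, 0]      # times each lever was pulled
    for _ in range(trials):
        if rng.random() < epsilon or 0 in counts:
            lever = rng.randrange(2)  # explore
        else:
            # exploit: pick the lever with the better average so far
            lever = 0 if totals[0] / counts[0] > totals[1] / counts[1] else 1
        reward = 1.0 if rng.random() < payoffs[lever] else 0.0  # experience
        totals[lever] += reward
        counts[lever] += 1
    return [t / max(c, 1) for t, c in zip(totals, counts)], counts

if __name__ == "__main__":
    estimates, counts = learn(payoffs=[0.3, 0.7])
    print(estimates, counts)
```

The program's "policy" (which lever it prefers) is not written into the code; it emerges from the rewards it happens to receive, which is exactly the self-modifying-behaviour sense of learning.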

We don't know exactly how an AI would work, but it has got to learn. Why? Because we learn, and intelligence (science fiction "sapience") here means "kind of like us". Not shaped like us, or with exactly the same thought patterns as us, but like us in the way that matters. A person, not an animal. Something with free will, or at least a reasonable facsimile. Something that can choose.

We should probably ask how the AI could be that. This leads to a related question: How do we do it? In truth, we aren't really sure. There are a few ideas, but no consensus.

One rather-bleak theory is that we don't. We are really just automatons ourselves who think that we are full people. I understand this theory is popular among neurologists, which either means there is good evidence for it, or that neurology depends on that assumption. Personally, I am inclined to the second view. For one thing, the theory doesn't explain the "I" that is discussing the question.

Another theory is that there is something else going on in people that we haven't noticed yet. When we do, it will seem obvious in hindsight. If this seems strange, consider a historical parallel: After the Industrial Revolution, there was a school of thought that compared the human body to a machine. Bones were like bars, elbows were like joints, etc. The brain was a bit unclear, but I assume they thought something would come up. This "mechanical person" model had any number of supporters, and, indeed, it did explain a lot. But then computers were developed. Pretty soon the model had been updated to contain a computer-like thing (the brain). Now, I don't think anyone supports the pure-mechanical version any more. As soon as we had the concept of computers, it was obvious we needed it, even though we hadn't before. I can't help but think that "mind" now is what "brain" was then. What we have just doesn't seem complete.

A third theory is that people have these weird extra-physical things called souls that sort of hang around and control them (to some degree). If you have seen the movie Inside Out, it is like that, except with only one person in the control room. The soul somehow influences the body (presumably via the brain), but is not subject to the same deterministic physics as the rest of the world. Admittedly modern physics is not deterministic, but everyone seems to be quietly ignoring that.

In the first case, making an AI is conceptually simple. We just need to keep improving and expanding the sort of thing we have, and it will eventually merge seamlessly into the kind of AI we want.

The second theory is a bit more open-ended. We just sort of wait around and do science and engineering until we figure out the bit that is missing. Or bits; there is no guarantee that there is only one more. In the meantime, we improve the algorithms we have for practical things, like rocket science and web searches, and hope for a breakthrough.

But if souls are important (the third case), we have a problem. Our AIs don't have them, and we don't know how to get them. We don't even know how we get them for ourselves. One possibility is that they are somehow generated by the developing body (if we could pin down when, we could finally solve the abortion debate). Another is that they float around and attach on (i.e. reincarnation). But the most common belief is that they are being supplied by some outside agency. This would probably be God, although Philip José Farmer's Riverworld series has aliens behind it.

But how do we attach these to our AIs? In the first case, we may be able to duplicate the soul-generating process artificially. Likewise for the second case, all we need is another "sticky" thing for the souls to attach to (it must be sticky, or the soul would float away again). In the third case, there is less we can do. On the other hand, that doesn't mean it is hopeless. As Gregory Benford points out, there is no reason why God can't give AIs souls. It's true that they are just electronics, but we are just organics, and it's not like some atoms are somehow more worthy than others.

Sadly, it seems I never did write a part 3. This might be because I really don't know how superintelligent AI would relate to monotheism. At least I think I never did.
