Suffering and AI Theory


I recently came across an idea in Christian philosophy that may be relevant to AI research.

I was reading a book on suffering (The Problem of Pain, by C.S. Lewis), and the author proposed that one purpose of suffering in the lives of good people is to shake them up. As I understand this model, people are continually trying to improve their lives, but occasionally get caught in a local maximum. In that case, the only thing to do (assuming further improvement is desired) is knock them out of it and hope they end up somewhere better next time.

To me, this seems similar to optimization problems in AI. For non-specialists, a simple analogy is that we are trying to find the highest point in a mountain range while blindfolded and carrying a GPS (that we can read through the blindfold). We can find out how high we are at any point, but we can't just look around for the high spots. There are many algorithms for this, many of which try to find the answer by incremental improvement. Some examples are hill climbing (always go uphill until you reach a peak), genetic algorithms (simulated evolution, but way faster), and back-propagation for neural nets (a glorified application of the chain rule). These tend to work pretty well if you have a single pyramid-shaped "mountain".
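Hill climbing, the simplest of these, fits in a few lines of Python. This is a minimal toy sketch, not any particular library's implementation:

```python
def hill_climb(f, start, step=0.1, max_iters=10_000):
    """Greedy hill climbing: look one step to either side,
    move to whichever neighbor is higher, and stop when
    neither improves (i.e. we are standing on a local maximum)."""
    x = start
    for _ in range(max_iters):
        best = max(x - step, x + step, key=f)
        if f(best) > f(x):
            x = best
        else:
            break  # no neighbor is higher: a peak
    return x

# A single pyramid-shaped "mountain" with its peak at x = 2.
peak = hill_climb(lambda x: -abs(x - 2), start=0.0)
```

On a single pyramid this walks straight up to the summit; the interesting failures only show up on landscapes with more than one peak.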

However, all these algorithms have a problem on more rugged "landscapes": they find a local maximum and stay there. Some approaches, such as genetic algorithms, have a limited ability to avoid this, and some (such as hill climbing) are fast enough that you can run them several times and take the best answer. Overall, however, they do tend to get stuck at less-than-optimal solutions (i.e. they end up on top of a short mountain).
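The run-it-several-times trick is easy to sketch. Here is a toy Python illustration with a made-up rugged one-dimensional landscape; the specific function and parameters are just for demonstration:

```python
import math
import random

def climb(f, x, step=0.05, iters=500):
    # Inner search: move to a random nearby point whenever it is higher.
    for _ in range(iters):
        c = x + random.uniform(-step, step)
        if f(c) > f(x):
            x = c
    return x

# A rugged "landscape": lots of local peaks, global maximum at x = 0.
f = lambda x: math.cos(3 * x) - 0.1 * x * x

# Random restarts: run the climber from many random starting points
# and keep the best peak any run found.
best = max((climb(f, random.uniform(-10, 10)) for _ in range(30)), key=f)
```

Each individual run still gets stuck on whatever short mountain it started near; the restarts just buy you more lottery tickets.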

Based on my theological studies (all right, maybe that's too strong a word for reading a book while I was bored), I have another possible way to address this problem: First, we store the current position in case we do worse next time. We can't do this in real life, but there is no reason why computers can't. Then, we try "hurting" the search position by moving it somewhere nearby at random and optimizing it again. If that doesn't work (we get the same optimized position again), we try hurting it more (i.e. moving it farther). This would need some sort of end condition, obviously, but that sort of thing can be worked out.
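That procedure can be sketched in the same toy Python setting. The cap on the kick size is my own addition to keep the example bounded; the landscape and all the numbers here are illustrative assumptions:

```python
import math
import random

def climb(f, x, step=0.05, iters=500):
    # Inner optimizer: simple stochastic hill climbing.
    for _ in range(iters):
        c = x + random.uniform(-step, step)
        if f(c) > f(x):
            x = c
    return x

def perturb_and_reoptimize(f, start, rounds=40):
    """Remember the best point found, "hurt" it with a random kick,
    re-optimize from the kicked point, and kick harder whenever
    re-optimization fails to find anything better."""
    best = climb(f, start)
    kick = 0.5
    for _ in range(rounds):
        candidate = climb(f, best + random.uniform(-kick, kick))
        if f(candidate) > f(best):
            best = candidate            # escaped to a taller peak
            kick = 0.5                  # go back to gentle kicks
        else:
            kick = min(kick * 1.5, 10)  # stuck: hurt it more (capped)
    return best

# Same rugged landscape as before: many local peaks, global maximum at x = 0.
f = lambda x: math.cos(3 * x) - 0.1 * x * x
result = perturb_and_reoptimize(f, start=5.0)
```

For what it's worth, this is essentially what the optimization literature calls iterated local search (or, in a continuous setting, basin hopping): perturb the incumbent, re-optimize, and keep the better of the two.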

There. I had a "new" idea.
