Applying Lessons From AI To Our Behavior

Maybe you’ve heard of Sam Harris’s “moral landscape,” the analogy in which he urges us to think about morality in terms of human and animal well-being, viewing the experiences of conscious creatures as peaks and valleys on a landscape.

Now, in artificial intelligence, there is something called hill-climbing search: start somewhere, repeatedly move to the best neighboring state, and stop when no neighbor is better. The link here should be obvious.

However, there are several problems with hill-climbing search in its simplest form. First of all, it keeps just one current state in memory and only ever moves uphill. Unfortunately, this leaves the agent sitting at a local maximum with nowhere to go.
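To make the failure mode concrete, here is a minimal sketch in Python. The one-dimensional “wellbeing” landscape, its two peaks, and the step size are all made up for illustration:

```python
# A toy 1-D landscape with a low peak at x = 2 (height 4) and a
# higher peak at x = 8 (height 9). Purely illustrative.
def wellbeing(x):
    return max(0.0, 4 - (x - 2) ** 2) + max(0.0, 9 - (x - 8) ** 2)

def hill_climb(start, step=0.5):
    """Simplest hill climbing: remember only the current state, move to
    the best neighbor, stop as soon as no neighbor is an improvement."""
    current = start
    while True:
        best = max((current - step, current + step), key=wellbeing)
        if wellbeing(best) <= wellbeing(current):
            return current  # a peak, but possibly only a local one
        current = best

# Starting at x = 0, the climber settles on the lower peak at x = 2
# and never discovers the higher peak at x = 8.
print(hill_climb(0.0))  # -> 2.0
```

The climber’s only memory is `current`; once every neighbor looks worse, it stops, regardless of what lies past the valley.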

This becomes an extremely relevant problem if we take it as our mission to find a peak in the moral landscape. Humans can be very much like agents that keep just one current state in memory.

Imagine we stumbled upon a post-scarcity society on Earth where everyone is plugged into a highly pleasurable virtual reality of their own liking. Is it reasonable to assume that anyone will want to wake from the dream and search for something more? Probably not. There is nothing in their immediate perception that encourages them to stop what they are doing.

This is the problem that Nietzsche identified with the Utilitarians. He suspected that such philosophies would encourage no effort to produce something higher. And he suggested that there are higher peaks that can only be achieved while incurring great suffering.

A philosophy that only looks at the now is bound to hit a local maximum and stay there. This is a problem with Buddhism. It is true that we would all be happier if we could all renounce the world and meditate in community. That is absolutely true. I am convinced it is not a scam.

However, this would still be a horrible outcome for humanity and our descendants. It would mean no more feverish technological progress catalyzed by Asperger-y, neurotic people. There would be no competition, and no pressure to push the boundaries of medicine and science.

It sounds good from the point of view of “now” but from the point of view of the “big picture” it would mean we never cure aging, become integrated with an expanding galactic God, transcend our flesh to explore the vast realm of creativity and selfless joys in the virtual datascape, and so on. Our descendants would miss out on things we never knew existed.

Moreover, another problem with the hill-climbing algorithm is that random restarts cannot always be used, because an embodied agent cannot simply transport itself to a new state. This analogy also carries over to our situation if we don’t make sure to hold tightly to a drive that creates new knowledge. We need to be placed in new environments, naked against the strange, cold winds of the unknown. This causes us to suffer, or at minimum takes resources away from what could be producing good qualia, but if we cannot cast ourselves in a leap of faith from our peak, then we will never know just what we missed.
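For a search program, the standard patch is random-restart hill climbing: rerun the climber from random starting points and keep the best peak found. A minimal sketch on the same made-up landscape as before (an embodied agent, of course, cannot teleport like this):

```python
import random

# Toy 1-D landscape: a low peak at x = 2, a higher one at x = 8.
def wellbeing(x):
    return max(0.0, 4 - (x - 2) ** 2) + max(0.0, 9 - (x - 8) ** 2)

def hill_climb(current, step=0.5):
    """Greedy climb: move to the best neighbor until none is better."""
    while True:
        best = max((current - step, current + step), key=wellbeing)
        if wellbeing(best) <= wellbeing(current):
            return current
        current = best

def random_restart(restarts=20, seed=1):
    """Rerun the climber from random starting points and keep the best
    peak found. Trivial for a search program; impossible for an agent
    that cannot transport itself out of its current state."""
    rng = random.Random(seed)
    starts = (rng.uniform(0.0, 10.0) for _ in range(restarts))
    return max((hill_climb(s) for s in starts), key=wellbeing)
```

With twenty restarts, at least one start almost surely lands in the higher peak’s basin of attraction, so the search ends up near x = 8 instead of stuck on the local peak at x = 2.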

That applies even to the peaceful, thriving, post-scarcity economy in full-immersion realities. They should not stay there and say “good enough.” They should send some randomizing probes to explore new configurations until they stumble upon a higher peak, and then begin again. These probes would probably need to be conscious in order to report back on their newly charted territory, so they would necessarily be martyrs for the greater good in some way. The only way to avoid this Genesis-on-loop scenario is to have a fully developed science of consciousness, so that the peaks of experience can be specified physically down to atomic configuration without having to send already-sentient minds bouncing around to find them. Here is some work beginning an approach to a formal science of qualia.

A somewhat childish thought that I’ve had for a while: if we take Nick Bostrom’s simulation argument seriously, and thus assign some significant probability that we are nested several layers deep in the matrix, then it is easy to view us as doing a random walk to explore the environment. We have many copies, each trying out different actions at the quantum level, but over time these accumulate into noticeable differences. The being(s) outside the simulation may be looking for a solution, mapping the qualia landscape with us. Not good reasoning, but good for theologians rapidly losing stock value on their 1st-century desert aesthetic. Feel free to take that idea.

Now, what are the more practical lessons that we can derive and use today based on these observations?

  1. Do random stuff every so often. Learn random stuff. Pull up random Wikipedia articles until something valuable comes up. It could change your life.
  2. Don’t worry too much about hedonic calculations. If you feel like puking while running, sometimes you just have to say “fuck it” and keep running.
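Lesson 1 is essentially the exploration move written into a life. A hypothetical sketch on the same made-up landscape as above: mostly climb greedily, but every so often leap somewhere random, and remember the best state ever visited.

```python
import random

# Toy 1-D landscape: a low peak at x = 2, a higher one at x = 8.
def wellbeing(x):
    return max(0.0, 4 - (x - 2) ** 2) + max(0.0, 9 - (x - 8) ** 2)

def noisy_climb(start, steps=200, jump_prob=0.05, step=0.5, seed=0):
    """Mostly greedy hill climbing, but with probability jump_prob take
    a random leap instead of the locally best move. Tracks the best
    state ever visited."""
    rng = random.Random(seed)
    current = best = start
    for _ in range(steps):
        if rng.random() < jump_prob:
            current = rng.uniform(0.0, 10.0)  # the "do random stuff" move
        else:
            cand = max((current - step, current + step), key=wellbeing)
            if wellbeing(cand) > wellbeing(current):
                current = cand  # ordinary uphill move
        if wellbeing(current) > wellbeing(best):
            best = current
    return best
```

Pure greedy climbing from x = 0 stalls on the local peak (wellbeing 4); with the occasional random leap, the walker almost always wanders into the higher peak’s basin within a couple hundred steps.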


