Slitting the Throat of Fairness

Currently, our decision-making system is shaped somewhat arbitrarily by our genetic inheritance and our trajectory through the contents of spacetime. That means it is not optimized to execute our most desired decisions. In the future, technology might allow us to further redesign our decision-making system. Here, I consider changes to the brain, or to other similar mind hardware, that would allow conscious experience to inch closer to the conscious experience that mind actually desires, and why defining "desired" as "fair" is problematic.

Depending on how we engineer our decision-making system, we will end up with radically different decisions. So some might argue that it's important for our decision-making system to have a certain property: that it produces decisions that fairly represent what the subsystems of the mind would like to decide. This is, of course, made difficult by Arrow's impossibility theorem, which shows that with three or more options no ranked voting system can satisfy even a short list of basic fairness criteria at once. But let's set that aside here and assume that people nonetheless regard some voting system as fair.

Consider trying to determine the best decision when faced with a three-headed humanoid lion.
The possible decisions are:
fight, bluff, run, cry, suicide
Assume the brain has a constant amount of resources, k, that cannot change; there is no possibility of hooking the brain up to an exobrain to increase its resources.
Someone concerned with giving fair expression to the entire set of decision-making subsystems within the brain could consider several voting systems, such as:
Plurality
Two-Round Runoff
Instant Runoff
Borda Count
However, each of these could yield a different decision.
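As a minimal sketch of how that divergence can happen (the numbers, subsystem labels, and helper functions below are all hypothetical, not anything specified in this post), suppose three subsystems each hold a share of the brain's fixed resource budget k and rank the candidate decisions; plurality and Borda count then already disagree about the winner:

```python
from collections import defaultdict

# Entirely hypothetical ballots: each tuple is (resource weight, preference order)
# for one decision-making subsystem, drawn from the fixed budget k.
profile = [
    (18, ["fight", "bluff", "run"]),   # aggression circuits
    (12, ["bluff", "run", "fight"]),   # social-bluffing circuits
    (10, ["run", "bluff", "fight"]),   # flight circuits
]

def plurality(ballots):
    tally = defaultdict(float)
    for weight, ranking in ballots:
        tally[ranking[0]] += weight          # only each subsystem's top choice counts
    return max(tally, key=tally.get)

def borda(ballots):
    tally = defaultdict(float)
    for weight, ranking in ballots:
        for position, decision in enumerate(ranking):
            tally[decision] += weight * (len(ranking) - 1 - position)  # points by rank
    return max(tally, key=tally.get)

print(plurality(profile))  # fight -- the single largest bloc wins
print(borda(profile))      # bluff -- broad second-choice support wins
```

Plurality rewards the single largest bloc, while Borda rewards broad second-choice support, so the same brain resources produce different "fair" winners.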

At the moment the decision is made, the brain resources "voting" on each choice could look like this:
fight > bluff > run > cry > suicide
With each of the voting systems, a Complete Group Ranking can be produced. If such a ranking procedure were operating in the reengineered brain instead of its normal procedure, it would first determine the group winner using the chosen voting system, then kick that decision off the ballot (imagine deleting the pattern of neural circuitry that created it), and rerank the remaining decisions using the same voting system. This would be repeated until every decision is ranked.
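In code, that loop might look something like the following sketch, again with made-up weighted ballots and a Borda count standing in for whichever voting system the brain-mod happens to use:

```python
from collections import defaultdict

# The same kind of hypothetical weighted ballots as in the earlier sketch.
profile = [
    (18, ["fight", "bluff", "run"]),
    (12, ["bluff", "run", "fight"]),
    (10, ["run", "bluff", "fight"]),
]

def borda_winner(ballots):
    # Stand-in for whichever voting system got installed.
    tally = defaultdict(float)
    for weight, ranking in ballots:
        for position, decision in enumerate(ranking):
            tally[decision] += weight * (len(ranking) - 1 - position)
    return max(tally, key=tally.get)

def complete_group_ranking(ballots, winner):
    """Find the group winner, strike it from every subsystem's ballot
    (imagine deleting that decision's neural circuitry), and repeat."""
    remaining = {d for _, ranking in ballots for d in ranking}
    order = []
    while remaining:
        trimmed = [(w, [d for d in ranking if d in remaining])
                   for w, ranking in ballots]
        top = winner(trimmed)
        order.append(top)
        remaining.remove(top)
    return order

print(complete_group_ranking(profile, borda_winner))  # ['bluff', 'run', 'fight']
```

With these same ballots, a plurality version of winner() would rank fight first instead, which is the whole point: the "fair" complete ranking depends on which voting system got installed.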

For example, this could happen in the Two-Round Runoff system:
[The values are in a hypothetical standardized unit measuring relevant brain variables (brain matter, or neural pathways, or information processing) devoted to executing each decision]
Round one: fight 18, bluff 12, run 10, cry 9, suicide 6
Round two: fight 18, bluff 37 *

*(37 = 12 + 10 + 9 + 6, if the dormant parts were isolated and given a weighted vote based on their initial resources, all of it going to bluff rather than fight)

Hence, the person would bluff, waving their improvised twig sword at the muscular beast.
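The same runoff arithmetic as a small sketch, under the footnote's assumption that every eliminated subsystem keeps its initial resource weight and casts it for bluff in the second round:

```python
# Hypothetical round-one resources, in the standardized unit described above.
round_one = {"fight": 18, "bluff": 12, "run": 10, "cry": 9, "suicide": 6}

# The two decisions with the most resources advance to the runoff.
finalists = sorted(round_one, key=round_one.get, reverse=True)[:2]  # ['fight', 'bluff']

# Assumption from the footnote: every eliminated ("dormant") subsystem keeps its
# initial weight and casts it for bluff rather than fight in round two.
round_two = {f: round_one[f] for f in finalists}
for decision, weight in round_one.items():
    if decision not in finalists:
        round_two["bluff"] += weight

print(round_two)  # {'fight': 18, 'bluff': 37} -- bluff wins the runoff
```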

If someone considers the Two-Round Runoff system fairer than the arbitrary current system designed by evolution, they might get this brain-mod to reflect that opinion. Yet another person might consider the Borda Count fairer still and modify their brain to operate that way. When any such transhuman person comes across a beast, they would reach a self-declared fair decision that somehow tries to account for the desires of all their dormant subsystems.

However, fairness to all the subsystems seems to be nothing but ceremonial whim, since the subsystems were not the prime movers: some past subgoal or value chose the voting system for them. The decision output of an arbitrary voting system is not guaranteed to converge on our true desires. Some might argue that the grand, unifying, true desire of conscious beings is the best possible outcome in qualia-space. And while it may be difficult to specify at present what that looks like, one suspects that it doesn't involve our limbs scattered across the mud and our bone marrow tainting the creature's pristine fangs.

This conclusion may not seem too radical, but it has fairly shocking implications. It means that in a post-human existence precipitated by AGI, fairness should not be a consideration. We should not seek to create an AGI that picks a course of action by working up some voting system that magically instills our condition with fairness. It should consider only what is truly good, and that will require a science of consciousness that maps all the possible functions in mindspace and knows how to formulaically climb the peaks in that territory.

Currently, fairness is just a primitive mindspace-climbing formula – a sticker we put on decisions emitted out the other end of our conjured voting-system factory. But since we can get radically different results depending on which voting system we happen to like, fairness as defined by such systems seems to be a blunt attempt to express what humans really want to capture with the word fairness.

I close this futuristic meditation with a thought on the cities that now flicker for a moment on the crust of Earth: Perhaps the principal adequacy of Western Democracies is nothing more than preventing immature totalitarianism.

It is said that Churchill once commented, "Democracy is the worst form of government… except for all the other ones." I take it upon myself here to cosign that statement, with a possible exception clause for the case in which our true philosopher king emerges from the dust of our AGI-alignment equations.

 
