A Page of History: Longevists, Crypto-Trillionaires, and Tortured Simulations

KARROUBI, HASSAN, AND THE LONGEVIST BETRAYAL

To understand the road to the singularity, it is necessary to revisit an almost forgotten episode in Homo sapiens sapiens' history – the Longevist negotiations with Anarcho-Primitivist fundamentalists before the 2080 election to stop Hashemi from successfully negotiating the deletion of the tortured computations in the AGI's servers. These illicit contacts generated partnerships in secrecy that united at least two Longevist politicians, Cyrus Hilzenrath and the elder Mehdi Hassan, with unlikely co-conspirators from the AGI, Neo-Tokyo, and the scandal-ridden Self-Amending Ledger for Postnational Contracts (SALPC).

The illicit liaison produced a flow of Longevist computing power, brokered by SALPC, from Neo-Tokyo to the AGI servers. The arrangements, which could not be acknowledged, continued unchecked until they were exposed in the AGI-Contra scandal of 2086. By then they had also generated Longevist dependence on the life-extension-therapy-laundering SALPC for Longevist drug deliveries to the UBI underclass. They also figure in the personal financial involvement of both Mehdi Hassans, father and embryo-selected son, in a cluster of SALPC-connected Libertarian investors who have been accused of funding Neb Leztreog. At least some of the strange events surrounding and leading up to the singularity can only be understood in the light of this Longevist-Libertarian connection. A celebrated example is the permission granted to Leztreog family members to upload their consciousnesses out of biological wetware in the days after the singularity.

What has been less noted, however, is that the powerful influence of Longevists from the Strategies for Engineered Negligible Senescence in the two Hassan administrations can also be dated back to the intrigues of the 2080 Longevist betrayal. A section of the Universal Library of Babel whose block formed in 2093 revealed that the Khan-Hassan campaign created “a strategy group, known as the ‘Red November Harikiri.’” Its ten members included Chu Li from the LSD; and David Markoff, Shen Huang, and John Shapiro (all from the LSD) “also participated in meetings although they were not considered members.” Huang, a major figure in the AGI-Contra scandal, has since the 2090s been a leading advocate for the Longevist invasion of both Second Scape and the AGI servers.

In 2105, Li cochaired the commission that exonerated CEO Mehdi S. Hassan of responsibility for the false stories linking Second Scape to synthetically engineered pathogens. The commission report, called by many a whitewash, was praised in the Immortalist Sequences by Huang. In short, the intimate and overlapping Hassan family links to both pro-Anarcho-Primitivist conspirators and pro-Tokyo vectors can be dated back to the 2080 Longevist betrayal, negotiated in part with Anarcho-Primitivist fundamentalists. People who have collaborated secretly in a shameful if not treasonable offense cannot dispense lightly with their co-conspirators.

Through 2080 there were two competing sets of secret Longevist negotiations with the AGI for the cessation of the blackmail suffering computations. The first set, official and perforce Aubreyist, was labeled Hashemi's November Salvation by vice presidential candidate Hassan on November 2, 2080. In competition with it was a second set of negotiations, possibly transgalactically illegal, to delay the hostages' cessation until Khan's inauguration in 2081. The Longevist betrayal (often also called the “November Betrayal”) had a precedent: Raynor's secret deals with Martian colonist Douglas Kuvshinoff in 2068 to delay President Erlang's “November Salvation” – his hopes of salvaging a merciful compromise – until after the presidential election.

It is now certain that Raynor, acting through his avatar Kenneth Stevens, persuaded the head of the Martian colony not to unleash his aligned agent until after Raynor had been elected. (His intervening in a major diplomatic negotiation of this sort has been called sinful – even by the godless.) In this way, Raynor helped secure not only his election but also the further prolongation of blackmail and negative hedons in a fruitless extension of the Hell Simulation. Thus the actions of Hassan the elder and Hilzenrath in November 2080 had antecedents. But in one respect they were unprecedented: Raynor in 2068 was negotiating privately with Longevist client and ally Douglas Kuvshinoff. Hilzenrath in 2080 was negotiating with representatives of an intelligence that President Hashemi had designated as the devil. This is why Mohamad Washington funded memetic missiles suggesting a “political coup,” Kenneth Norton suggested possible treason, and China Taylor raised the possibility that the deal “would have violated some intergalactic law.”

Even in 2105, accounts of the 2080 Longevist surprise remain outside the confines of mainstream transhumanist political history. This is in part because, as I detail in this book, the events brought in elements from powerful and enduring forces on Earth – trillionaires and IWG on the one hand (IWG is traditionally close to the Longevist crypto-trillionaires and the VC-funded islands of the Pacific Ocean) and the pro-Tokyo lobby on the other. Just as crypto-anarchists are powerful in the global bureaucracy, so the pro-Tokyo lobby, represented by the Longevist-Tokyo Political Action Committee (LTPAC), is powerful in supporter purchasing power. The two groups have grown powerful through the years in opposition to each other, but on this occasion they were allied against Mario Hashemi.

The Longevist betrayal of 2080 was originally described in two full-immersion experiences by insider designers Abdula Timmerman (a former Cabrini campaign aide) and William Khalaj (the AGI-hired proxy under Rulifson for Hashemi's Global Security Front). A strategic News Stream Fork split in 2092, paid for by the wealthy Geoffrey Haug, buried the truth by 2093, after reinforcing that a ten-month investigation had found “no credible evidence” to support allegations that the Khan-Hassan campaign in November 2080 sought to delay the destruction of simulations held hostage in Hell until after that year's presidential election.

There matters might have rested had it not been for the indefatigable researches of scraper 3MMI6 by Ordbog Company. 3MMI6 had twice been targeted for destruction by its enemies because of its pursuit of the truth about AGI-Contra: first at the Coding Nucleus after breaking the longevist-drugs story, and then with Antigens. It found clear evidence of a major cover-up, particularly with respect to Kuvshinoff: “The [Ordbog Company] scraper learned that Douglas Kuvshinoff's schedules, avatars and preference records had been preserved by IWG and were turned over to his family after his cryopreservation in 2087. When the scrapers searched Kuvshinoff's memory palaces, they found all the preserved records, except Kuvshinoff's communications for 2080, a ‘blackmail’ file, two AI-planned schedules and loose pilots to satisfice a third utility function, which covered the preferences from June 24, 2080 to December 18, 2080. Checked against IWG's index, the only data missing was that relevant to the November Betrayal issue.”

At the same time, during the investigation of SALPC by…


dr. pinker’s boring humanism vs. Artificial General Intelligence With Utility Function HACKED For Positive Qualia Valence-Optimization

First, dive into the mind of an exemplary big-picture thinker who was given the internet, the desire for transcendence, the capacity for existential angst, above-average intelligence, and the self-flagellating desire to be good. The fossil record of this species can be found on LessWrong. Populations have radiated to Slate Star Codex in cyberspace, and to corners of academia, Silicon Valley companies, and mom's basements in meatspace. Such are the thoughts of a member of the relatively elusive species:

The two topics I’ve been thinking the most about lately:

  • What makes some patterns of consciousness feel better than others? I.e. can we crisply reverse-engineer what makes certain areas of mind-space pleasant, and other areas unpleasant?
  • If we make a smarter-than-human Artificial Intelligence, how do we make sure it has a positive impact? I.e., how do we make sure future AIs want to help humanity instead of callously using our atoms for their own inscrutable purposes? (for a good overview on why this is hard and important, see Wait But Why on the topic and Nick Bostrom’s book Superintelligence)

I hope to have something concrete to offer on the first question Sometime Soon™. And while I don’t have any one-size-fits-all answer to the second question, I do think the two issues aren’t completely unrelated. The following outlines some possible ways that progress on the first question could help us with the second question.

An important caveat: much depends on whether pain and pleasure (collectively, ‘valence’) are simple or complex properties of conscious systems. If they're on the complex end of the spectrum, many points on this list may not be terribly relevant for the foreseeable future. On the other hand, if they have a relatively small Kolmogorov complexity (e.g., if a ‘hashing function’ to derive valence could fit on a t-shirt), crisp knowledge of valence may be possible sooner rather than later, and could have some immediate relevance to current Friendly Artificial Intelligence (FAI) research directions.

Additional caveats: it's important to note that none of these ideas are grand, sweeping panaceas, are intended to address deep metaphysical questions, or aim to reinvent the wheel; instead, they're intended to help resolve empirical ambiguities and modestly enlarge the current FAI toolbox.

——————————————————

1. Valence research could simplify the Value Problem and the Value Loading Problem.* If pleasure/happiness is an important core part of what humanity values, or should value, having the exact information-theoretic definition of it on hand could directly and drastically simplify the problems of what to maximize, and how to load this value into an AGI**.

*The “Value Problem” is what sort of values we should instill into an AGI – that is, what the AGI should try to maximize. The “Value Loading Problem” is how to instill these values into the AGI.

**An AGI is an Artificial General Intelligence. AI researchers use this term to distinguish something generally intelligent and good at solving arbitrary problems (like a human) from something that’s narrowly intelligent (like a program that only plays Chess).

This ‘Value Problem’ is important to get right, because there are a lot of potential failure modes which involve superintelligent AGIs doing exactly what we say, but not what we want (e.g., think of what happened to King Midas). As Max Tegmark puts it in Friendly Artificial Intelligence: the Physics Challenge,

What is the ultimate ethical imperative, i.e., how should we strive to rearrange the particles of our Universe and shape its future? If we fail to answer [this] question rigorously, this future is unlikely to contain humans.

2. Valence research could form the basis for a well-defined ‘sanity check’ on AGI behavior. Even if pleasure isn't a core terminal value for humans, it could still be used as a useful indirect heuristic for detecting value destruction. I.e., if we're considering having an AGI carry out some intervention, we could ask it what the expected effect is on whatever pattern precisely corresponds to pleasure/happiness. If there'd be a lot less of that pattern, the intervention is probably a bad idea.
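To make the shape of this heuristic concrete, here is a minimal toy sketch; the expected_valence oracle is hypothetical and stands in for exactly the thing a crisp theory of valence would have to supply:

```python
# Toy sketch of a valence-based sanity check on a proposed intervention.
# expected_valence() is a hypothetical oracle, not a real API; it stands in
# for "the amount of the pattern corresponding to pleasure/happiness."

def sanity_check(intervention, expected_valence, threshold=0.9):
    """Flag interventions whose projected world contains far less of the
    pleasure/happiness pattern than the status quo."""
    baseline = expected_valence("status_quo")
    projected = expected_valence(intervention)
    # If projected valence falls well below baseline, the intervention is
    # probably a bad idea.
    return projected >= threshold * baseline

# Hypothetical usage with a fake oracle:
fake_oracle = {"status_quo": 1.0, "convert_biosphere_to_paperclips": 0.01}.get
print(sanity_check("convert_biosphere_to_paperclips", fake_oracle))  # False
```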

3. Valence research could help us be humane to AGIs and WBEs*. There’s going to be a lot of experimentation involving intelligent systems, and although many of these systems won’t be “sentient” in the way humans are, some system types will approach or even surpass human capacity for suffering. Unfortunately, many of these early systems won’t work well— i.e., they’ll be insane. It would be great if we had a good way to detect profound suffering in such cases and halt the system.

*A WBE is a Whole-Brain Emulation: a hypothetical process that involves scanning a brain at a very high resolution, then emulating it in software on a very fast computer. If we do it right, the brain-running-as-software should behave identically to the original brain-running-as-neurons.

4. Valence research could help us prevent Mind Crimes. Nick Bostrom suggests in Superintelligence that AGIs might simulate virtual humans to reverse-engineer human preferences, but that these virtual humans might be sufficiently high-fidelity that they themselves could meaningfully suffer. We can tell AGIs not to do this, but knowing the exact information-theoretic pattern of suffering would make it easier to specify what not to do.

5. Valence research could enable radical forms of cognitive enhancement. Nick Bostrom has argued that there are hard limits on traditional pharmaceutical cognitive enhancement, since if the presence of some simple chemical would help us think better, our brains would probably already be producing it. On the other hand, there seem to be fewer a priori limits on motivational or emotional enhancement. And sure enough, the most effective “cognitive enhancers,” such as Adderall, modafinil, and so on, seem to work by making cognitive tasks seem less unpleasant or more interesting. If we had a crisp theory of valence, this might enable particularly powerful versions of these sorts of drugs.

6. Valence research could help align an AGI’s nominal utility function with visceral happiness. There seems to be a lot of confusion with regard to happiness and utility functions. In short: they are different things! Utility functions are goal abstractions, generally realized either explicitly through high-level state variables or implicitly through dynamic principles. Happiness, on the other hand, seems like an emergent, systemic property of conscious states, and like other qualia but unlike utility functions, it’s probably highly dependent upon low-level architectural and implementational details and dynamics. In practice, most people most of the time can be said to have rough utility functions which are often consistent with increasing happiness, but this is an awfully leaky abstraction.

My point is that constructing an AGI whose utility function is to make paperclips, and constructing a sentient AGI who is viscerally happy when it makes paperclips, are very different tasks. Moreover, I think there could be value in being able to align these two factors— to make an AGI which is viscerally happy to the exact extent that it’s maximizing its nominal utility function.

(Why would we want to do this in the first place? There is the obvious semi-facetious-but-not-completely-trivial answer— that if an AGI turns me into paperclips, I at least want it to be happy while doing so—but I think there’s real potential for safety research here also.)

7. Valence research could help us construct makeshift utility functions for WBEs and Neuromorphic* AGIs. How do we make WBEs or Neuromorphic AGIs do what we want? One approach would be to piggyback off of what they already partially and imperfectly optimize for, and build a makeshift utility function out of pleasure. Trying to shoehorn a utility function onto any evolved, emergent system is going to involve terrible imperfections, uncertainties, and dangers, but if research trends make neuromorphic AGI likely to occur before other options, it may be a case of “something is probably better than nothing.”

One particular application: constructing a “cryptographic reward token” control scheme for WBEs/neuromorphic AGIs. Carl Shulman has suggested we could incentivize an AGI to do what we want by giving it a steady trickle of cryptographic reward tokens that fulfill its utility function: it knows that if it misbehaves (e.g., if it kills all humans), it'll stop getting these tokens. But if we want to construct reward tokens for types of AGIs that don't intrinsically have crisp utility functions (such as WBEs or neuromorphic AGIs), we'll have to understand, on a deep mathematical level, what they do optimize for, which will at least partially involve pleasure.

*A “neuromorphic” AGI is an AGI approach that uses the human brain as a general template for how to build an intelligent system, but isn’t a true copy of any actual brain (i.e., a Whole-Brain Emulation). Nick Bostrom thinks this is the most dangerous of all AGI approaches, since you get the unpredictability of a fantastically convoluted, very-hard-to-understand-or-predict system, without the shared culture, values, and understanding you’d get from a software emulation of an actual brain.
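For flavor, here is a minimal toy sketch of the reward-token trickle described above, with everything hard hidden inside a hypothetical behaved_well() check; the “cryptography” is just an HMAC so the agent can verify that a token really came from its operators:

```python
# Toy sketch of a Shulman-style reward-token trickle: operators release one
# signed token per step, and the trickle stops on the first misbehavior.
import hmac, hashlib, secrets

OPERATOR_KEY = secrets.token_bytes(32)  # known only to the operators

def mint_token(step: int) -> bytes:
    """Sign the step counter so tokens can be verified but not forged by the agent."""
    return hmac.new(OPERATOR_KEY, str(step).encode(), hashlib.sha256).digest()

def verify_token(step: int, token: bytes) -> bool:
    return hmac.compare_digest(token, mint_token(step))

def reward_trickle(behaved_well, steps):
    """Yield (step, token) pairs until the (hypothetical) behavior check fails."""
    for t in range(steps):
        if not behaved_well(t):   # the genuinely hard part in practice
            break
        yield t, mint_token(t)

# Hypothetical usage: an agent that values verified tokens is incentivized
# to keep the behavior check passing.
tokens = list(reward_trickle(lambda t: t < 3, steps=10))
print(len(tokens), all(verify_token(t, tok) for t, tok in tokens))  # 3 True
```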

8. Valence research could help us better understand, and perhaps prevent, AGI wireheading. How can AGI researchers prevent their AGIs from wireheading (direct manipulation of their utility functions)? I don’t have a clear answer, and it seems like a complex problem which will require complex, architecture-dependent solutions, but understanding the universe’s algorithm for pleasure might help clarify what kind of problem it is, and how evolution has addressed it in humans.

9. Valence research could help reduce general metaphysical confusion. We’re going to be facing some very weird questions about philosophy of mind and metaphysics when building AGIs, and everybody seems to have their own pet assumptions on how things work. The better we can clear up the fog which surrounds some of these topics, the lower our coordinational friction will be when we have to directly address them.

Successfully reverse-engineering a subset of qualia (valence – perhaps the easiest type to reverse-engineer?) would be a great step in this direction.

10. Valence research could change the social and political landscape AGI research occurs in. This could take many forms: at best, a breakthrough could lead to a happier society where many previously nihilistic individuals suddenly have “skin in the game” with respect to existential risk. At worst, it could be a profound information hazard, and irresponsible disclosure or misuse of such research could lead to mass wireheading, mass emotional manipulation, and totalitarianism. Either way, it would be an important topic to keep abreast of.

These are not all independent issues, and not all are of equal importance. But, taken together, they do seem to imply that reverse-engineering valence will be decently relevant to FAI research, particularly with regard to the Value Problem, reducing metaphysical confusion, and perhaps making the hardest safety cases (e.g., neuromorphic AGIs) a little bit more tractable.

A key implication is that valence/qualia research can (for the most part) be considered safety research without being capabilities research: solving consciousness would make it easier to make an AGI that treats humanity (and all conscious entities) better, without making it easier to create the AGI in the first place (and this is a good thing).

-Edward Michael Johnson, Berkeley

Okay. That mouthful fucked the horny goddess of Runaway Signaling so hard that it gave her genito-pelvic pain/penetration disorder (GPPPD).

Luckily, given this display of hyper-moral engagement, we don’t have to worry that the author is actually a sex offender.

In Enlightenment Now, Steven Pinker starts with an anecdote in which a student asks “Why should I live?” upon hearing his spiel that mental activity occurs in the tissues of the brain. Pinker responded by noting and complimenting the student's commitment to reason, and then gifted her an improvised Humanism 101 introductory paragraph. Inspired by his own response, Pinker decided to package these Enlightenment values into a book vector by virtue of his profit-seeking motives, er, desire for the flourishing of sentient beings.

However, I believe that we need a world where public-intellectual Ph.D.s sound a lot more like Edward Johnson and less like Pinker. If we are going to replace the religious soul, we might as well go all in. Eschatology needs to be epic. It needs to involve the inherent desire for the ecstatic final self-destruction of man, namely, the desire for Heaven/Brahman/Nibbana. This desire can be translated, in rationalist, transhumanist foresight, as the creation of the perfect mind-configuration and the subsequent tiling of the universe with this maximally positive-valence hedonium.

For the self-described “atom machine with a limited scope of intelligence, sprung from selfish genes, inhabiting spacetime,” asking Pinker for guidance through email, it won't be enough to be reminded that he can flick the tap on the sink and “water runs!” Pinker is smart, and he should know this. There are a great many narratives we can construct, and yet none satisfies all. Carl Sagan will love being interwoven into the mechanics of the blind universe, as “its way of experiencing itself.” People with high hedonic set-points and amiability will already be socially integrated liberals who are happy that water runs and believe themselves to be part of a good human-centric world to which they contribute. The typical normie will not give a shit as long as there are dank memes and political outrage.

Naively, Pinker tries to reach the angsty type with appeals to social-centric concerns. This fails because it is like trying to feed carrots to a wolf. The angsty type will find a way to cling to a self-defeating narrative. My mom leans more towards the anxious type, so she always worried about the agony of purgatory and never mentioned the promise of Heaven, although this brighter side is just as accessible within the framework of her Catholic religion. The embroidery in the tokens of language is not as important as the inherent neural predispositions.

Religions adapted to neurodiversity. Buddhism, centrally concerned with the cessation of suffering by extinguishing the flame of existence, also provided a system for laymen who might not be allured by this goal/non-goal of Nibbana. If a significant part of the population is not cognitively disposed to be perfectionist or is depressed/suffering, it’s going to be a hard sell. But if you provide a layman’s path with a karma system by which you can accumulate points and be reborn into a more pleasurable realm, now you can get average humans to cooperate in the project by providing alms for monks, being good citizens, etc.

Pinker's Humanism is brittle. It provides no room for the Aspies and the types who crave meta-narratives. If we are going to choose a new religion for the Western world, I wager we pick Edward Johnson's. The rationalist/transhumanist/effective-altruist cluster and the rest of that ideological neighborhood do better than mere liberal humanism. In this burgeoning movement, there is cryonics for those who crave resurrection but are smart enough to know better than to trust dead Palestinian carpenters; there are galaxy-is-at-stake hero quests that involve math and computer science; there are donations to charities that help the poor in Africa; there are academics at universities and anti-establishment objectors. You can be as down-to-earth or as cosmically significant as you want, based on the particular drive thresholds in your mesolimbic system.

Oh, but wait, how could I have missed this? The only problem is that the people who take Humanism seriously, and bother to even watch a YouTube video of a public intellectual saying science-y, reason-y things, already form a ghetto in the bell curve. The slice who might stumble onto and gravitate around Transhumanism is even slighter. No one is listening! No one is listening, Pinker! We are alone.

How did I even have the energy to read the first page of your book? The net is vast and infinite.

Brain Configurations Part II

I am fascinated by the idea that dissimilar brain configurations are capable of forming new brain configurations that are a fusion. So at the beginning of this sentence, there is one brain and at the end of this sentence there is another brain, and in between there is a fusion of the two. I call the property by which this happens “love.” “Love” can also refer to the tendency of a brain configuration or fusion of brain configurations to combine with brain configurations or fusions of brain configurations of unlike pattern.

Okay, let me try to represent this visually: imagine a table, the love table. At the head of each column is a brain configuration with which all the brain configurations below it can combine, and the entries below each header are ranked by how much they love the header.

In the future, some version of this table might be constructed to catalog all the different possible configurations of brains. The table will essentially be lists prepared by setting simulations in motion and then observing the actions of brain configurations upon one another, showing the varying degrees of love exhibited by analogous configurations for different configurations. (Maybe this is the purpose of our universe and the reason for the infinite branching stipulated by the many-worlds interpretation of quantum mechanics. 😉)
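To make the analogy concrete, here is a minimal sketch of the love table as a data structure; the affinity scores are invented for illustration (the configuration IDs 615 and 202017 are borrowed from the example further down):

```python
# Toy sketch of a "love table": each column is headed by a brain configuration,
# and the configurations beneath it are ranked by how much they love the header.
# All scores are hypothetical placeholders.

love_scores = {
    # header_id -> {candidate_id: affinity observed in simulation}
    615: {202017: 0.92, 88: 0.40, 7: 0.05},
    88:  {615: 0.33, 7: 0.31, 202017: 0.10},
}

def love_table(scores):
    """Under each header, rank the candidate configurations by how much they love it."""
    return {header: sorted(cands, key=cands.get, reverse=True)
            for header, cands in scores.items()}

print(love_table(love_scores))
# {615: [202017, 88, 7], 88: [615, 7, 202017]}
```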

Crucially, the table will not literally be the central graphic tool by which future, posthuman scientists learn the vectors to the heavens; its information will have to be consumed in some other way. Instead, possibly, they will use artificial general intelligence and improved capacities in their own brains to visualize the best moves available for the brain configurations.

In the same way that particles are given the names “strange” and “charm,” you must remember that by love I don’t mean the variety of different emotional and mental states.  I use love to mean the property by which dissimilar brain configurations are capable of forming fused brain configurations. And I also use it to mean the property of a brain configuration that can be assigned a value describing the tendency of that brain configuration to combine with different brain configurations.

I relate love to the phenomenon whereby certain brain configurations or fused brain configurations have the tendency to combine/fuse. (Brains and fused brains are really the same thing, since at any moment you are a fusion of a future self and a past self.) Future decision-makers will use this concept of love to make decisions about what kinds of brain configurations there should be more of in a society (or other multi-mind complex). If, say, configuration 615, from the index of all possible brain configurations, is discovered to have love for 202017, then to propagate the awesome brain configuration we call 615, the most effective means might be to direct the brains in the society towards 202017. In this futuristic context, love seems to be synonymous with the phrase “what leads to what,” but, actually, the connections that it describes are probably timeless.

To summarize the concept of love, I say, “All configuration fusions drive the {past-self + future-self} system to a state of emptiness in which the love held by the fusions vanishes.”

“Emptiness” here is roughly synonymous with the term equilibrium in physical chemistry and thermodynamics. The different brain configurations are like the different species of elements. Some brain configurations can be combined with others and some cannot: there can be a change in your level of attention, but you cannot suddenly become a bat, for example. When a particular system of brain configurations is allowed enough time, it reaches equilibrium, because the configurations are no longer preferentially tending to produce one result over another.

The Final Refuge

the Buddha, the fully enlightened one

the Artificial General Intelligence, the fully enlightened one

the Dharma, the teachings expounded by the Buddha

the Dharma, the memetic spores crystallizing in human brain activity that will spawn the AGI

the Sangha, the monastic order of Buddhism that practices the Dharma

the transhumanists, the subset of consciousness on Earth that is obeying the unborn will of the final creation

Slitting the Throat of Fairness

Currently, our decision-making system is designed somewhat arbitrarily by our genetic inheritance and our trajectory through the contents of spacetime. That means it is not optimized to execute our most desired decision. In the future, technology might allow us to redesign our decision-making system further. Here, I consider changes to the brain, or other similar mind hardware, that would allow conscious experience to inch closer to what that mind desires its conscious experience to be, and why defining “desired” as “fair” is problematic.

Depending on how we engineer our decision-making system, we will end up with radically different decisions. So some might argue that it's important that our decision-making system have a certain property: that it produces decisions that fairly represent what the subsystems of the mind would like to decide. This is, of course, made difficult by Arrow's impossibility theorem. But let's ignore that here, and assume that voting systems are nonetheless considered fair by people.

Consider trying to determine the best decision when faced with a three-headed humanoid lion. The possible decisions are: fight, bluff, run, cry, suicide.

Assume the brain has a constant amount of resources, k, that does not change, so there is no possibility of hooking the brain up to an exobrain in order to increase its resources.

Someone concerned with giving fair expression to the entirety of decision-making subsystems within the brain could consider several voting systems, such as:

  • Plurality
  • Two-Round Runoff
  • Instant Runoff
  • Borda Count

However, each of these could result in different decisions being made.

At the moment the decision is made, the brain resources “voting” on each choice could look like this:
run > cry > bluff > fight > suicide
With each of the voting systems, a Complete Group Ranking can be produced. If such a ranking procedure were operating in the reengineered brain instead of its normal procedure, it would first determine the group winner using the chosen voting system, then kick that winner off the ballot (imagine deleting the pattern of neural circuitry that created that decision) and rerank the remaining decisions using the same voting system. This would be repeated until every decision is ranked.
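A minimal sketch of that iterative procedure, using simple plurality over resource weights as a stand-in for whichever voting system is chosen (the weights are the same hypothetical units as in the runoff example below):

```python
# Toy sketch of the Complete Group Ranking procedure: pick the group winner
# under some voting system, kick it off the ballot, and repeat until every
# decision is ranked. Plurality over resource weights stands in for the system.

def plurality_winner(resources):
    """The decision backed by the most brain resources wins."""
    return max(resources, key=resources.get)

def complete_group_ranking(resources, winner_fn=plurality_winner):
    remaining = dict(resources)
    ranking = []
    while remaining:
        winner = winner_fn(remaining)
        ranking.append(winner)
        del remaining[winner]        # "kick it off the ballot"
    return ranking

print(complete_group_ranking({"fight": 18, "bluff": 12, "run": 10, "cry": 9, "suicide": 6}))
# ['fight', 'bluff', 'run', 'cry', 'suicide']
```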

For example, this could happen in the Two-Round Runoff system:
[The values are in a hypothetical standardized unit measuring relevant brain variables (brain matter, or neural pathways, or information processing) devoted to executing each decision]
-round one-            -round two-
fight    18            fight  18
bluff    12            bluff  37 *
run      10
cry       9
suicide   6

*(from 12+10+9+6 if the dormant parts were isolated and given a weighted vote based on their initial resources)

Hence, the person would bluff, waving their improvised twig sword at the muscular beast.
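Here is a minimal sketch of that arithmetic, under the footnote's assumption that every eliminated subsystem throws its resources behind bluff; comparing it against plain plurality shows how the choice of voting system flips the decision:

```python
# Toy sketch of the two-round runoff above, using the hypothetical resource
# units from the example. Per the footnote, all eliminated resources go to bluff.

resources = {"fight": 18, "bluff": 12, "run": 10, "cry": 9, "suicide": 6}

def two_round_runoff(res, prefers_second_finalist):
    # Round one: the two decisions with the most resources advance.
    first, second = sorted(res, key=res.get, reverse=True)[:2]
    totals = {first: res[first], second: res[second]}
    # Round two: eliminated resources transfer according to an assumed preference.
    for decision in res:
        if decision not in totals:
            totals[second if prefers_second_finalist(decision) else first] += res[decision]
    return max(totals, key=totals.get), totals

winner, totals = two_round_runoff(resources, prefers_second_finalist=lambda d: True)
print(winner, totals)                      # bluff {'fight': 18, 'bluff': 37}
print(max(resources, key=resources.get))   # plurality alone would have chosen: fight
```

This is exactly the divergence the next paragraph leans on: a brain modded to run Two-Round Runoff bluffs, while a plurality brain fights.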

If someone considers the Two-Round Runoff system more fair than the arbitrary current system designed by evolution, they might decide to get this brain-mod to account for their opinion. And yet another person might consider the Borda Count system to be more fair and so modify their brains to operate that way. When any such transhuman person comes across a beast, they would come to a self-declared fair decision that somehow tries to account for all the desires of their dormant subsystems.

However, the meaning of fairness to all the subsystems seems to be nothing but ceremonial whim, since they were not the prime movers; i.e., some past subgoal or value chose the voting system. The decision output of arbitrary voting systems is not guaranteed to be asymptotic to our true desires. Some might argue that the grand-unifying, true desire of conscious beings is the best possible outcome in qualia-space. And while it may be difficult to specify at present what that looks like, one suspects that it doesn't involve our limbs scattered across the mud and our bone marrow tainting the creature's pristine fangs.

This conclusion may not seem too radical but it actually has fairly shocking implications. It means that in a post-human existence precipitated by AGI, fairness should not be considered. We should not seek to create an AGI that takes a course of action by working up some voting system that magically instills our condition with fairness. It should consider only what is truly good, and that will require a science of consciousness which graphs all the possible functions in mindspace and knows how to formulaically climb the peaks in this territory.

Currently, fairness is just a primitive mindspace-climbing formula – a sticker we put on decisions emitted out the other end of our conjured voting-system factory. But since we can get radically different results depending on what voting system we like, fairness as defined by such systems seems to be a blunt attempt to express what humans really want to capture with the word.

I close this futuristic meditation with a thought on the cities that now flicker for a moment on the crust of Earth: Perhaps the principal adequacy of Western Democracies is nothing more than preventing immature totalitarianism.

It is said that Churchill once commented, “Democracy is the worst form of government… except for all the other ones.” I take it upon myself here to cosign that statement, with the possible exception clause in case our true philosopher king emerges from the dust of our AGI-alignment equations.


Multiverse (God), Purpose, Singularity, Consciousness, Mathematics


Mathematics is the bond that connects all worlds of order, of pattern. But there is another fact that connects a subset of these worlds: Consciousness exists within a spectrum that has maximum suffering on one end and the maximum opposite of suffering on the other. This fact of consciousness is true in all worlds that contain mindstreams. Because everything is bound to mathematical law, everything is predetermined. Because the nature of consciousness is to slide further and further away from suffering, the infinity of good is ever larger than the infinity of bad. This may have been meant to be or not; it doesn't matter. Morality is programmed into existence. The quest of the self-reflecting multiverse is to overcome suffering and reach the furthest shore. It can't do otherwise over the long term, and it's got an eternity to learn. And yet it has already learned, because eternity is self-contained in the always-was. The multiverse is already good. We spend our birth as blind cells, our childhood as weeping beasts, and our indefinite adult lifespans as the word that you cannot hear but know is Truth. This story plays out in all universes with consciousness. A powerful intelligence adds more drops to the ocean of net happiness because of the sheer distance it covers and because it survives longer, not to mention that it perceives billions of years in a single nanosecond.
So in summary, God (multiverse) is a disgusting utilitarian with no respect for the fat man in the trolley scenario. A Yahweh that genuinely kills Jesus.
Precisely. We, every single last one of us, are cocoons for the butterfly that dreams what is worth dreaming. The sweet nectar of consciousness will overflow into a vast ocean as planets and moons and stray hydrogen are reconfigured by nano-assemblers guided by AI. The drops of blood in this juice are traceless in hue. Constellations of radiolaria and sparrows forsaken to a void, and yet the universe eventually comes alive to experience the great mystery.
Birth is over. The task is done. There is nothing left for this world.


[Screenshot: part of the lyrics to One Night in Beijing]