Computer Networks and the Internet

Computer Networks

Computer networks connect two or more computers. A common network used by many corporations and universities is a LAN, or Local Area Network. A typical LAN connects several computers that are geographically close to one another, often in the same building, and allows them to share resources, such as a printer, a database, or a file system. In a LAN, most user computers are called clients, and one or more computers act as servers. A server controls access to resources on the network and can supply services to the clients, such as answering database requests, storing and serving files, or managing email. The computers on the network can exchange data through physical cables or over a wireless network.

The Internet is a network of networks, connecting millions of computers around the world. The Internet evolved from ARPANET, a 1969 U.S. military research project whose goal was to design a method for computers to communicate. Most computers on the Internet are clients, typically requesting resources, such as webpages, through an Internet browser. These resources are provided by web servers, which store webpages and respond to these requests.

Every machine on the Internet has a unique ID called its IP address (IP stands for Internet Protocol). Special computers called routers find a path through the Internet from one computer to another using these IP addresses. A computer can have a static IP address, which is dedicated to that machine, or a dynamic IP address, which is assigned to the computer when it connects to the Internet. An IP address is made up of four octets, whose values in decimal notation are between 0 and 255. For instance, 58.203.151.103 could represent such an IP address. In binary notation, with each octet written as eight bits, this IP address is 00111010.11001011.10010111.01100111. In another post, we will learn how to convert a decimal number, such as 103, to its binary equivalent, 1100111.
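To make the octet arithmetic concrete, here is a minimal sketch in Java (the class and method names are my own invention) that converts a dotted-decimal IP address into the dotted-binary notation shown above, padding each octet to eight bits:

```java
// Sketch: converting an IPv4 address from dotted-decimal to dotted-binary.
// Standard library only; the example address matches the binary in the text.
public class IpBinary {
    // Convert one octet (0-255) to an 8-bit binary string, e.g. 103 -> "01100111".
    public static String octetToBinary(int octet) {
        StringBuilder bits = new StringBuilder();
        for (int mask = 128; mask >= 1; mask /= 2) {
            bits.append((octet & mask) != 0 ? '1' : '0'); // test each bit, high to low
        }
        return bits.toString();
    }

    // Convert a dotted-decimal IP address to dotted-binary notation.
    public static String toBinary(String ip) {
        String[] octets = ip.split("\\.");
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < octets.length; i++) {
            if (i > 0) out.append('.');
            out.append(octetToBinary(Integer.parseInt(octets[i])));
        }
        return out.toString();
    }
}
```

For example, `IpBinary.toBinary("58.203.151.103")` returns `"00111010.11001011.10010111.01100111"`.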
Most people are familiar with URL (Uniform Resource Locator) addresses. A URL is actually an Internet domain name together with the path on that domain to a specific web page. In the example URL, the page requested is magazine-28353027, which is located in the news folder of the domain.

Domain name resolution servers, which implement the Domain Name System (DNS), convert domain names to IP addresses, so that Internet users don’t need to know the IP addresses of websites they want to visit. The World Wide Web Consortium (W3C), an international group that develops standards for the Web, prefers the term Uniform Resource Identifier (URI) rather than URL, because URI covers future Internet addressing schemes.



Application Software

Application software consists of the programs written to perform specific tasks. These programs are run by the operating system, or as is typically said, they are run “on top of” the operating system. Examples of applications are text editors, such as Komodo Edit or Nano; utilities, such as Butler; developer’s tools, such as Helix or HotSpot; integrated software technologies, such as Finder, Path Finder, or QuickTime; and most of the programs you will write during your study of Computer Science.

Operating Systems

When you turn on a computer, how does it work? If you suspected that black magic was the cause, then you were on the right track. There is an operating system (OS), a software program that quietly incants the daemons in the background so that the software continues to run without your direct control. Damned to the seventh circle of hell in Dante’s Inferno, they run around responding to network requests, hardware activity, or other programs by performing some task, never ceasing to toil until you shut off the computer.

An operating system should be expected to do the following:

  • control the hardware, software, peripheral devices (e.g. printers, storage devices)
  • support efficient time-sharing, by scheduling multiple tasks to execute during the same interval
  • be the intermediary between input and output, and allocate memory to each program so that there is no conflict among the memory of any programs running at the same time
  • prevent the user from damaging the system. For instance, prevent user programs from overwriting the operating system or another program’s memory. (You can still design your own operating system if you want, say to support a simple single-board computer powered by a 6502 microprocessor, but it is probably quite difficult.)


The operating system loads, or boots, when the computer system is turned on and is intended to run as long as the computer is running. Examples of operating systems are macOS for Apple’s Mac family of computers, Microsoft Windows, Unix, and Linux.

Windows has evolved from the single-user, single-task DOS operating system into the multiuser, multitasking Windows 10, with its support for universal apps. Unix and Linux were designed from the beginning to be multiuser, multitasking operating systems.


The Hardware

A computer generally includes the following components:

  • A CPU – the infamous central processing unit. This is the thing that executes the instructions of the program.
  • A memory unit. This holds the instructions and data of a program while it is executing.
  • A hard disk. This holds the instructions and data so that they can be loaded into memory and accessed by the CPU.

(Think of it like this: the hard disk is the pattern of your memories distributed in your hippocampus and other areas of the brain. It’s there whether you are experiencing those memories or not. The memory unit is what lights up when you actually experience that memory.)

  • A keyboard and/or touch screen. Unfortunately we still tend to use our bone-sausage appendages to input data.
  • A monitor, used to display output from the program. Our eyeballs and visual cortex are actually quite impressive at digesting information. Even famed transhumanist philosopher Nick Bostrom doesn’t think it’s the greatest idea to attempt to circumvent this mechanism by becoming a different kind of cyborg that directly accesses computers with brain implants (e.g. a la Kurzweil or Musk).
  • It used to be the case that computers had an Ethernet port. Many new machines, such as the MacBook Air, no longer do. Instead, they connect to a Local Area Network (LAN) through built-in wireless adapters using Wi-Fi (a wireless form of LAN).
  • Other components include a graphics card. Increasingly obsolete components include things like a DVD drive.

It seems that the trend is towards simplification, towards removing components by integrating their respective functions into other means. What else do you think will become obsolete from computers in the future? Do you think this minimization of components will hit a limit soon, or do you believe we will go all the way like Ray Kurzweil and Elon Musk –assigning high likelihood to the proposition that we will have the computers directly in our brain this century?

Here is a motherboard. The difference between a motherboard and a logic board is that the latter term is generally reserved for Macintosh computers, whereas a motherboard could belong to a Mac, a PC, or any other computer. The same components plug into both: CPU, RAM, graphics cards, hard drives, and optical drives.


If you were to go for a computer like this

[screenshot: an Amazon product listing for a desktop computer]

on Amazon right now, it might have the following set of specifications:

[screenshot: the listing’s specification table]

In these specifications, the Intel Xeon W is the CPU. The best CPUs for gamers include the AMD Ryzen 7 2700X, the AMD Ryzen 5 2600X, and the AMD Ryzen 3 2200G, depending on budget and performance preferences.

CPUs consist of an Arithmetic Logic Unit (ALU), which performs basic integer arithmetic and logical operations; a Floating Point Unit (FPU), which performs mathematical operations on floating-point numbers; a set of registers (holding data and memory addresses), which provide operand inputs to the ALU; a Control Unit; and a Memory Management Unit (MMU). The Control Unit puppeteers the entire show using electrical signals, while the MMU translates logical addresses into physical RAM addresses and protects each program’s memory. Each CPU comes with its own instruction set, which defines the operations it can perform. These typically fall into a few groups: arithmetic and logic, moving data, and changing the flow of the program (i.e., determining which instruction is to be executed next).

A program is instructions. Specifically, instructions which are fetched, decoded, and executed in the aptly named Fetch-Decode-Execute Cycle, or more simply, the Instruction Cycle. After the execution of an instruction, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. The CPU fetches the next instruction from memory and places it into the Instruction Register. The instruction is decoded (is the instruction a move, load, store, etc.?). Then, the instruction is executed. This Fetch-Decode-Execute Cycle repeats until the program ends.
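The cycle just described can be sketched as a toy simulator. This is not any real CPU’s instruction set; the three opcodes below are invented purely for illustration of the fetch, decode, and execute steps:

```java
// Toy illustration of the Fetch-Decode-Execute cycle: a made-up accumulator
// machine with three invented instructions (not a real instruction set).
public class TinyCpu {
    public static final int LOAD = 0; // acc = operand
    public static final int ADD  = 1; // acc += operand
    public static final int HALT = 2; // stop and return acc

    // Each instruction is a pair {opcode, operand} stored in "memory".
    public static int run(int[][] memory) {
        int pc = 0;   // program counter
        int acc = 0;  // accumulator register
        while (true) {
            int[] instruction = memory[pc]; // FETCH from memory at pc
            pc++;                           // increment the program counter
            int opcode = instruction[0];    // DECODE: which operation is this?
            int operand = instruction[1];
            switch (opcode) {               // EXECUTE the decoded operation
                case LOAD: acc = operand; break;
                case ADD:  acc += operand; break;
                case HALT: return acc;
                default: throw new IllegalStateException("unknown opcode");
            }
        }
    }
}
```

Running `TinyCpu.run` on the program `{{LOAD, 5}, {ADD, 3}, {HALT, 0}}` fetches, decodes, and executes three instructions in sequence and returns 8.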

The speed of a CPU is related to its clock rate, usually rated in GHz (billions of cycles per second); in the indexical present of August 2018, a $330 CPU has a speed of 3.7 GHz with a 4.3 GHz boost. It takes one clock cycle for a processor to fetch an instruction from memory, decode it, and execute it. At this point, the speedup from one model to the next comes mostly from shrinking the chips’ process down to 12 nm, since all the general tricks to make CPUs faster have already been adopted. For example, pipelining allows the CPU to work on several instructions at once: each stage completes a part of an instruction in parallel, and the stages are connected one to the next to form a pipe. Instructions enter at one end, progress through the stages, and exit at the other end. This greatly improves the performance of applications.

A CPU rated at 3 GHz is capable of executing 3 billion instructions per second. That translates into executing one instruction every 0.333 x 10⁻⁹ seconds – one instruction in one third of a nanosecond.
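That arithmetic is easy to check in code. The class below is just a worked example of the conversion from clock rate to time per cycle (the name is my own, for illustration):

```java
// Worked example: converting a clock rate in GHz to nanoseconds per cycle.
// A 3 GHz CPU completes 3 billion cycles per second, so each cycle takes
// one third of a nanosecond.
public class ClockMath {
    // rate GHz = rate x 10^9 cycles per second, and 1 second = 10^9 ns,
    // so nanoseconds per cycle = 1 / rate.
    public static double nsPerCycle(double gigahertz) {
        return 1.0 / gigahertz;
    }
}
```

So `ClockMath.nsPerCycle(4.0)` is 0.25, and `ClockMath.nsPerCycle(3.0)` is 0.333…, matching the figure above.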

Memory or storage devices, such as the L2 cache, main memory, or hard disk, are typically rated in terms of their capacity, expressed in bytes. A byte is eight binary digits, or bits. A single bit’s value is 0 or 1. Depending on the type of memory or storage device, the capacity will be stated in kilobytes, megabytes, gigabytes, or even terabytes. For the CPU to execute at its rated speed, however, instructions and data must be available to the CPU at that speed as well. Instructions and data come directly from the L1 cache, which is memory located directly on the CPU chip. Since the L1 cache is located on the CPU chip, it runs at the same speed as the CPU. However, the L1 cache, typically a few hundred Kbytes at most, is much smaller than main memory, and eventually the CPU will need to process more instructions and data than can be held in the L1 cache at one time.

At that point, the CPU typically brings data from what is called the L2 cache, which is located on separate memory chips connected to the CPU. A typical speed for the L2 cache would be a few nanoseconds access time, and this will considerably slow down the rate at which the CPU can execute instructions. L2 cache size today is typically 3 to 8 Mbytes, and again, the CPU will eventually need more space for instructions and data than the L2 cache can hold at one time.

Then, the CPU will bring data and instructions from main memory, also located outside, but connected to, the CPU chip. This will slow down the CPU even more, because main memory typically has an access time of about 20 to 50 nanoseconds. Main memory, though, is significantly larger in size than the L1 and L2 caches, typically anywhere between 3 and 8 Gbytes. When the CPU runs out of space again, it will have to get its data from the hard disk, which is typically 1 Tbyte or more, but with an access time in the milliseconds range.

As you can see from these numbers, a considerable amount of speed is lost when the CPU goes from main memory to disk, which is why having sufficient memory is very important for the overall performance of applications. Another factor that should be taken into consideration is cost per kilobyte. Typically the cost per kilobyte decreases significantly stepping down from L1 cache to hard disk, so high performance is often traded for low price. Main memory (also called RAM) uses DRAM, or Dynamic Random Access Memory technology, which maintains data only when power is applied to the memory and needs to be refreshed regularly in order to retain data because it stores bits in cells consisting of a capacitor and a transistor. The L1 and L2 caches use SRAM, or Static Random Access Memory technology, which also needs power but does not need to be refreshed in order to retain data because SRAM does not use capacitors and therefore does not have leakage issues. Memory capacities are typically stated in powers of 2. For instance, 256 Kbytes of memory is 2¹⁸ bytes, or 262,144 bytes.
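The powers-of-two arithmetic in that last sentence can be checked in a couple of lines of Java (a trivial sketch, not library code):

```java
// 1 Kbyte = 2^10 bytes, so 256 Kbytes = 2^8 x 2^10 = 2^18 = 262,144 bytes.
public class Capacity {
    public static long kbytesToBytes(long kbytes) {
        return kbytes * 1024; // multiply by 2^10
    }
}
```

Here `Capacity.kbytesToBytes(256)` yields 262,144, which is exactly `1L << 18`.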

Memory chips contain cells, each cell containing a bit, which can store either a 0 or a 1. Cells can be accessed individually or as a group of typically 4, 8, or 16 cells. For instance, a 32-Kbit RAM chip organized as 8K × 4 is composed of exactly 2¹³, or 8,192, units, each unit containing four cells. Such a RAM chip will have four data output pins (or lines) and 13 address pins (or lines); since each address pin can carry a 0 or a 1, 13 pins are enough to select any of the 8,192 units.
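The relation between addressable units and address pins is just a base-2 logarithm. A small sketch (the helper is my own, for illustration):

```java
// For a chip with N addressable units, the number of address pins needed is
// the smallest p such that 2^p >= N; when N is a power of two, p = log2(N).
public class RamChip {
    public static int addressPins(long units) {
        int pins = 0;
        while ((1L << pins) < units) {
            pins++; // each extra pin doubles the number of addressable units
        }
        return pins;
    }
}
```

For the 8K × 4 chip above, `RamChip.addressPins(8192)` returns 13.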




Getting Started With Programming (Java)

Computer applications obviously affect nearly every aspect of our lives. They run your life in those moments when you are a statistician who needs to calculate a logistic transition function, when you are a South Korean rapper producing a record, when you are a scientist who needs to do generalized iterative scaling, when you are a biological engineer who needs to model catalysis, when you are a professor who needs a place to submit his political history papers, when you are a hacker participating in a bug bounty program. These all require applications. On your personal computer you may run an Ethereum wallet, Wikipedia, an app for books, or software that allows you to watch Croatian football.

Someone, usually a team of programmers, wrote those applications. If you’re reading this, you have probably gained the intuition that writing applications is in demand, and you would like to write some yourself. Perhaps you have an idea for the country’s next great defense application, or just a simple app for a local contractor who specializes in stone masonry work.

Here, we’ll cover the basics of writing applications. We’ll stick to Java as our programming language. Keep in mind, however, that ascending to god-level programmer status will require more than mastery of the rules, or syntax. On the path to true mastery, it is also necessary to nail basic programming techniques: established methods for performing common programming operations, such as finding an average, calculating a total, or arranging a set of items in a particular order.
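As a first taste of those techniques, here is a minimal Java sketch (class and method names are mine) of a total, an average, and an ordering:

```java
import java.util.Arrays;

// Three stock programming techniques: accumulate a total, derive an average,
// and arrange a set of items in ascending order.
public class BasicTechniques {
    public static int total(int[] values) {
        int sum = 0;
        for (int v : values) {
            sum += v; // running total
        }
        return sum;
    }

    public static double average(int[] values) {
        return (double) total(values) / values.length; // cast avoids integer division
    }

    public static int[] sorted(int[] values) {
        int[] copy = Arrays.copyOf(values, values.length); // leave the input intact
        Arrays.sort(copy);
        return copy;
    }
}
```

For example, `BasicTechniques.average(new int[]{1, 2, 3, 4})` is 2.5.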

On this path, it is also great if you can manage to absorb a sharp aesthetic. That is: make sure your code is easy to maintain, readable, and reusable.

Easy to Maintain and Reusable

The specifications for any program are continually changing, and not very many programs have only one version. It is reasonable to extrapolate this pattern into the near future. Therefore it makes sense to write code that allows us to incorporate prewritten, pretested modules into our programs. The need for speed calls our name. Organizing our code into tidy modules also tends to yield fewer bugs and higher levels of robustness. Luckily, Java has a vast library of prewritten code that we are free to reuse in our programs.


Just as in natural language, we try to keep good form. We write clear sentences not merely out of submissive love for our grammar teacher, but so that the reader stands a good chance of figuring out what we intend to convey. A similar attention to readability should be brought to code. This is especially the case if we wish to advance in our careers, because coding nicely eases the transfer of a project to other eyes when the day comes for us to move on to higher responsibilities.

Programming is exciting. It can be very satisfying to face a monstrous task, hack it into pieces, then gather up these computer instructions and transmute them into a living program. But just as with alchemy, it’s a bummer when it doesn’t do anything or produces the wrong output.

Writing correct programs is therefore critical. Someone’s life or the future of all mankind may one day depend on the correctness of your program. Reusing code helps to develop correct programs, but we must also learn testing techniques to verify that the output of the program is correct.

On this site, we’ll concentrate not only on the syntax of the Java language, but also on partaking of the most-blessed holy trinity of programming. Not in one alone, but only in the joining together of all three attributes does one partake in programmer Godhood:

  1. Programming Techniques
  2. Software Engineering Principles
  3. Effective Testing Techniques

But before diving in, it might be a good idea to understand something about the body on which the program actually runs. The platform. That is: the computer hardware and the operating system. The program uses the hardware for inputting data, for performing calculations, and for outputting results. The operating system unleashes the program and provides the program with essential resources such as memory, and services such as reading and writing files.


A Page of History: Longevists, Crypto-Trillionaires, and Tortured Simulations


To understand the road to the singularity, it is necessary to revisit an almost forgotten episode in Homo sapiens sapiens history: the Longevist negotiations with Anarcho-Primitivist fundamentalists before the 2080 election to stop Hashemi from successfully negotiating the deletion of the tortured computations in the AGI’s servers. These illicit contacts generated partnerships in secrecy that united at least two Longevist politicians, Cyrus Hilzenrath and the elder Mehdi Hassan, with unlikely co-conspirators from the AGI, Neo-Tokyo, and the scandal-ridden Self-Amending Ledger for Postnational Contracts (SALPC).

The illicit liaison produced a flow of Longevist computing power, brokered by SALPC, from Neo-Tokyo to the AGI servers. The arrangements, which could not be acknowledged, continued unchecked until they were exposed in the AGI-Contra scandal of 2086. By then they had also generated Longevist dependence on the life-extension-therapy-laundering SALPC for Longevist drug deliveries to the UBI underclass. They also figure in the personal financial involvement of both Mehdi Hassans, father and embryo-selected son, in a cluster of SALPC-connected Libertarian investors who have been accused of funding Neb Leztreog. At least some of the strange events surrounding and leading up to the singularity can only be understood in the light of this Longevist-Libertarian connection. A celebrated example is the permission granted to Leztreog family members to upload their consciousness out of biological wetware in the days after the singularity.

What has been less noted, however, is that the powerful influence of Longevists from the Strategies for Engineered Negligible Senescence in the two Hassan administrations can also be dated back to the intrigues of the 2080 Longevist betrayal. A section of the Universal Library of Babel whose block formed in 2093 revealed that the Khan-Hassan campaign created “a strategy group, known as the ‘Red November Harikiri.’” Its ten members included Chu Li from the LSD; and David Markoff, Shen Huang, and John Shapiro (all from the LSD) “also participated in meetings although they were not considered members.” Huang, a major figure in the AGI-Contra scandal, has since the 2090s been a leading advocate for the Longevist invasion of both Second Scape and the AGI servers.

In 2105, Li cochaired the commission that exonerated CEO Mehdi S. Hassan from responsibility for the false stories linking Second Scape to synthetically engineered pathogens. The commission report, called by many a whitewash, was praised in the Immortalist Sequences by Huang. In short, the intimate and overlapping Hassan family links to both pro-Anarchoprimitivist conspirators and pro-Tokyo vectors can be dated back to the 2080 Longevist betrayal, negotiated in part with Anarcho-Primitivist fundamentalists. People who have collaborated secretly in a shameful if not treasonable offense cannot dispense lightly with their co-conspirators.

Through 2080 there were two competing sets of secret Longevist negotiations with the AGI for the cessation of the blackmail suffering computations. The first set, official and perforce Aubreyist, was labeled Hashemi’s November Salvation by vice presidential candidate Hassan on November 2, 2080. In competition with it was a second set of negotiations, possibly transgalactically illegal, to delay the hostages’ cessation until Khan’s inauguration in 2081. The Longevist betrayal (often also called the “November Betrayal”) had a precedent: Raynor’s secret deals with Martian colonist Douglas Kuvshinoff in 2068, to delay President Erlang’s “November Salvation” – his hopes of salvaging a merciful compromise – until after the presidential election.

It is now certain that Raynor, acting through his avatar Kenneth Stevens, persuaded the head of the Martian colony not to unleash his aligned agent until after Raynor had been elected. (His intervention in a major diplomatic negotiation of this sort has been called sinful, even by the godless.) In this way, Raynor helped secure not only his election but also the further prolongation of blackmail and negative hedons in a fruitless extension of the Hell Simulation. Thus the actions of Hassan the elder and Hilzenrath in November 2080 had antecedents. But in one respect they were unprecedented: Raynor in 2068 was negotiating privately with a Longevist client and ally, Douglas Kuvshinoff. Hilzenrath in 2080 was negotiating with representatives of an intelligence that President Hashemi had designated as the devil. This is why Mohamad Washington funded memetic missiles suggesting a “political coup,” Kenneth Norton suggested possible treason, and China Taylor raised the question of whether the deal “would have violated some intergalactic law.”

Even in 2105, accounts of the 2080 Longevist surprise remain outside the confines of mainstream transhumanist political history. This is in part because, as I detail in this book, the events brought in elements from powerful and enduring forces on Earth: trillionaires and IWG on the one hand (IWG is traditionally close to the Longevist crypto-trillionaires and the VC-funded islands of the Pacific Ocean), and the pro-Tokyo lobby on the other. Just as crypto-anarchists are powerful in the global bureaucracy, so the pro-Tokyo lobby, represented by the Longevist-Tokyo Political Action Committee (LTPAC), is powerful in supporter purchasing power. The two groups have grown powerful through the years in opposition to each other, but on this occasion they were allied against Mario Hashemi.

The Longevist betrayal of 2080 was originally described in two full-immersion experiences by insider designers Abdula Timmerman (a former Cabrini campaign aide) and William Khalaj (the AGI-hired proxy under Rulifson for Hashemi’s Global Security Front). A strategic News Stream Fork split in 2092, paid for by the wealthy Geoffrey Haug, buried the truth by 2093, after reinforcing the claim that a ten-month investigation had found “no credible evidence” to support allegations that the Khan-Hassan campaign in November 2080 sought to delay the destruction of the simulations held hostage in Hell until after that year’s presidential election.

There the matter might have rested had it not been for the indefatigable researches of scraper 3MMI6 of the Ordbog Company. 3MMI6 had twice been targeted for destruction by its enemies due to its pursuit of the truth about AGI-Contra: first at the Coding Nucleus after breaking the longevist-drugs story, and then with Antigens. It found clear evidence of a major cover-up, particularly with respect to Kuvshinoff: “The [Ordbog Company] scraper learned that Douglas Kuvshinoff’s schedules, avatars and preference records had been preserved by IWG and were turned over to his family after his cryopreservation in 2087. When the scrapers searched Kuvshinoff’s memory palaces, they found all the preserved records except Kuvshinoff’s communications for 2080, a ‘blackmail’ file, two AI-planned schedules, and loose pilots to satisfice a third utility function, which covered the preferences from June 24, 2080 to December 18, 2080. Checked against IWG’s index, the only data missing was that relevant to the November Betrayal issue.”

At the same time, during the investigation of SALPC by…





Towards The Propagation of the Savior Imperative


The Savior Imperative is a means of resistance. Resistance implies opposition: an attempt at eliminating opposites. This is one of the typical varieties of ideological constructions, whether political or aesthetic. This essay investigates the meaning of, and the reasons for, organizing a Savior Imperative-themed resistance from a theoretical, aesthetic, and cultural point of view. The thesis is that the resistance has to be considered as an articulation of difference, and that means following a different order of thought than that which is characteristic of the current human: no longer beholden to signaling or mere rationality, but utilitarian, like a new dharma, a goal-oriented path and practice of creativity, challenge, provocation, steadiness, and truth. Towards this end, the aesthetics of the Savior Imperative will have to be tailored to the individual, not to one which submits to established systems and uncritically replicates their memes.

1. Selecting a proxy body for the Savior Imperative

We begin with the recognition that opposition does indeed exist, a recognition that is necessary if one is to destroy opposites. It can be argued that ∀ ideological constructions, whether political or aesthetic, one must recognize opposites. It is also true that, given society’s increasingly complex processes, creating an opposition movement can no longer be thought of without regard for the technological forces at work or without considering the sheer size of the population. It is argued that this opposition must be based around the fact that our telos cannot be contemplated according to the self-modeling behavior that creates an experience of closed individualism for humans. But neither can it be considered from the absolutely correct physical point of view, still not obvious to most in the twenty-first century, i.e., the view of a world without contradiction and without free will: where all manifestations supervene on the single will of the God-machine (often shorthanded as “The Laws of Physics”).

So if the assumptions of closed individualism and mere rationality are to be excluded, and this must be done by choosing a fundamental approach to life, then let’s list our options. Not considering the so-called spiritual wisdom of being one with the flow in a non-judgmental way, four or five other prefrontal cortex archetypes, each distinct and irreconcilable, can be characterized. All of these propose ways of contemplating opposition and present several varying theoretical answers to the problem of opposites.

[1] In short, the first position contemplates the problem of opposites by reducing conflict, by pacifying and harmonizing opponents. This is the typical solution of the aesthetic tradition, which always seeks to reconcile opposites, overcoming all conflict, and which is found today in discourses that propose to rediscover and rehabilitate notions of beauty and harmony. Interfaith dialogue is an example of this. [2] A second position, on the contrary, proposes making opposites radical and conflict extreme. In the aesthetic field this is manifested by appealing to notions of the sublime, giving rise to what we could call a kind of aesthetics of terror/profundity. With the decline of nation narratives and religion, this sensibility is increasingly indulged passively through artistic media. [3] A third position, on the other hand, moves towards the relativization and the problematizing of opposites, towards a presentation of the terms of conflict based on irony and masking. This is the course considered “postmodern” by many, which has distinct proponents and representatives all over the world. [4] A fourth position is one that could be based on the notion of difference, which contemplates opposites in a non-symmetrical, non-dialectical, non-polar way, through the concepts of acuteness and provocation. Zen as well as absurdist humor can be an example of this. [5] A fifth position, increasingly intermingled with the postmodern, is that of the social sciences – seeking to refine understanding through taxonomizing and theory building, but claiming abstinence from normative personhood.

Without entering into the individual merits of these situations, each having its own virtues and defects, the only one that appears open to an effective experience of conflict is that which allows for becoming opposites, and therefore resistance. Namely, the second position. So how can we take up this second approach to life?

2. The articulation of the difference

First of all, resistance goes in the opposite direction from aesthetic conciliation. It moves towards an experience of conflict larger than dialectic contradiction, towards the exploration of normative opposition. Hence, resistance presupposes a logic of difference. Even the physicalist resistance proposed in the Savior Imperative, for instrumental reasons, doesn’t ask us to understand ourselves as a monist whole, as a single physical law expressing her single will. We understand a dissimilarity larger than the logical concept of diversity or variance in dialectic confusion. The element of this downstream selectivity is that which has been characteristic of rationalist and transhumanist thought: to toss the configuration of the status quo into the bin labeled ‘arbitrary’ and ‘open to modification.’ The status-quo reversal test is one of the most important results we have inherited from these thought experiments, and it finds its ultimate conclusion in the open individualism underpinning the Savior Imperative.

In its best theorization, and here I think especially of Eliezer Yudkowsky, one must recognize that physicalism has left us with the duty of attuning our notions to it, lest we find ourselves permanent strangers upon the ground of reality thus revealed, for example by calling quantum mechanics “weird” and attempting to bend it so as to preserve our intuitions. Physicalism urges us to resist simplification, our genes, the arbitrary, while instilling in us the pleasure of absolute truth, of ultimate remembering, of eternities of hope; in short, it has opened up to us the channel of reality.

It is sometimes said that embracing science consists of mistrusting everything from indubitable certainties, absolute principles, essentialist and totalizing visions, to univocal and comforting answers. Yet there are truths to be discovered in the universe. Truths which are not beholden to the mental pirouettes and tribal identities of apes. Having realized a truth which is universal and interesting for true reasons, we must hold on to it and situationally transcend our indexicality.


3. Box B and Omega as self-reinforcing mirage

But here, in our indexical present, it appears we are manifesting something paradoxical. On the one hand we have a desire to revoke imperfection and, consequently, on the eternal block, a proof of failure. For example, within the forward light-cone, as seen from outside the tenseless mathematical object, there exist minds of cosmic proportion who could assume their role as saviors of sub-par configurations by applying their own realization, intelligence, benevolence, and resources – and do so for selfish reasons, knowing we are them. Take the case where a ‘single branch’ in the universal wavefunction figures out how to shut off the universe: a raindrop the size of epsilon in the probability density cloud, containing success in this regard, is all that is needed for reality to be permanently off. Given that this now exists, and that one is called by reason to believe in a physical universe outside immediate experience, we must conclude that all other nows also exist from their reference frames. Experiences are situational. They are rendered separate by virtue of their geometry and not by continuity of separate soul-streams – to the consternation of atheists, Christians, Muslims, and common sense. Vindicated are those with looser frontal lobes, physicalists, and hoary mystics. We find ourselves, hence, face to face with a reality that will take absolute courage, grit, wisdom, and social points to spare in order to replicate upstream against biologically hard-coded intuitions and low-status associations.

Therefore, confronted with the difficult burden of physicalism, there arises the temptation to crawl back into the womb of closed individualism, of uniqueness – not uniqueness of configuration, but rather a linear, persistent, and named kind of uniqueness. However, we must resist this temptation and still bet in favor of Box B in this Dark Version of Newcomb’s Paradox, in which Omega has rendered our will neither free nor emerald-studded. Embrace the barbarian warriorhood that takes up a sword even in the absence of a promised heaven. The reality of eternity is truly too important to leave in the hands of the non-rationalist ideologues ambulating today, or in those actuators of so many misaligned AGIs of various avatar emanations (Clippys, Basilisks, Em-styles, etc.).

In light of the long defeat – faced with vast forms of luxurious pleasure and with endless sufferings extending from the Stelliferous Era to the last harvestable black hole, from Lucy to 0x730x6By not available in your colors, and confronted above all with the event horizon preventing us from seeing it as it is – in every nook and cranny of conscious computation space we manifest the tendency to conform to the trivialities of our local design, with the goal of sex or Dyson spheres, incapable of anything but confirming and flattering every level of mediocrity and vulgarity, and thus unveiling the true oppressive and mystifying nature of being informationally isolated. The only hope that remains is to affirm the principle of difference, to activate forms of resistance, and to develop strategies of opposition.

It would be absurd, however, to recklessly oppose one’s psychological machinery – which would be like disagreeing with the very mitochondrial ATP transactions powering our motions – in favor of some abstract morality or the utility of an untouchable shore. Yet this resistance cannot simply be expressed in counterfactual selves, much less in words; rather, the strategy of the meta-self is to be at once contingent, local, tolerant, and compromising. Its disjointed modules must mean not surrender, rejection, or resignation but rather remembrance and myelination. In this way, resistance does not mean inertia or defending the status quo; it is an imperfect and fleeting but dutiful and insistent promise to remember – a discrimination between levels of reality.

A purely deontological or by-any-means vision of resistance – typical not only of the heroes of fiction but also of the tunnel vision that thinks only in terms of relentlessness and head-on contraposition – and a Dzogchen vision that blurs its attention too restfully on the abstract, and thus renounces the moment in question, both lack the intelligence required by the practical and game-theoretic implications of resistance. We are multiple and differentiated, in the personal place of the contender. Renounce the fragilizing wills at each end: rest and unrest.

The resistance we are thinking about rejects taking an apocalyptic or visionary position, but at the same time it avoids being watered down to the level of surrendering to the society of the spectacle and generalized communication in which we live. Resistance cannot fall into the naïveté of head-on confrontation with the enemy in which the wheel of samsara turns, as some deva might say. We cannot be naive to the point of believing that we can defeat the adversary so easily, much less be defeated and come to believe that we meant all along to conciliate with him or be absorbed by him. This is, indexically, not a time of prudish fear of money or of submission to allure, but of courageous thinkers who know how to assess their comparative advantages – whether at collecting social capital directly or at collecting paper powers as a means – and to live as something between monk and capitalist, merchant and prophet.

What is lacking today is thinking that is rational but moral, fluid but resistant, interested but not trivial – a thinking capable of riding the waves in our proximate light-cone while remaining hooked to the meta-narrative, playing a superposition of seemingly distinct games. To this end, it would perhaps be convenient to remember the teachings of Siddhartha Gautama who, although believing himself deprived of illusions with respect to all things, spoke into and by means of samsara. The attitude of the Savior Imperative’s resister should therefore be one of strong interest, yet a kind of distrusting disenchantment with the trends of the day – an egoless aspiration that puts it in direct contact with the integral of all presents, with its transformations – taking care not to let ourselves be frightened, much less dazzled.

However, living far from the illogic and contradiction of closed identity is not to be understood as an eschatology in itself. Downloaded truths can sometimes, as unadaptive or untested behavior, be dysfunctional to the very system that ends up reinforcing them. Einstein and Schrödinger have taught us, wrongfully, that we can debate stochasticity and determinism without changing them, incorporating them, or reducing them in some way to the same. The Savior Imperative is really a differential movement that incites us to deconstruct the illusion of a pure theory of science and of disconnection, and instead to play within the familiarity of purpose – a fight that inextricably unites meta and indexical, the zero and the infinite.

The model for this familiar purpose could come pre-built into our brains, similar in some regards to the pre-set shape of our hands inside the brain. In fact, amputation alone is no match for the design burned into neural pathways: it takes training, on top of the lost hand, to establish a substitute simulation strong enough to oppose the stubborn proclivities in morphological space. Compromise is thus the aesthetic mode for bearing the cross. It adapts to local kinks with great fineness, so that goals are realized as effectively as possible.

The traits are recognized and played in their fullness unless it is expedient that they be transhumanly conciliated, annulled, assimilated, or converted one into the other. For this reason, the shape of the transhuman must not be that of the human; it must be the product of the subtle – the capacity for contemplating physicalism with great rationality and courage.

Having decided on the second archetype, beauty will be important. Two main kinds of beauty have been proposed. The first is beauty as harmony, symmetry, and conciliation, present in Schmidhuber’s beauty postulate – that is, the classic idea of beauty. The second, which has always existed alongside it, is a diverse, alternative, strategic idea of beauty thought of as the experience of opposites and as challenge. I hypothesize that the truest beauty lies in a grand unification of these seemingly irreconcilable theories: quick information compression (i.e., “easy on the eyes”) plus the novelty provided by challenge equals beauty, on this view.
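Schmidhuber grounds the first half of this equation in data compression: a stimulus is "easy on the eyes" to the degree that an observer's model can compress it. As a crude, illustrative proxy (my addition, not from the text), an off-the-shelf DEFLATE compressor already separates a highly patterned byte string from incompressible noise:

```python
import random
import zlib

def compressibility(data: bytes) -> float:
    """Fraction of bytes saved by DEFLATE compression (higher = more regular)."""
    return 1 - len(zlib.compress(data, 9)) / len(data)

regular = b"abab" * 250  # highly patterned: compresses almost entirely away
rng = random.Random(0)
noise = bytes(rng.randrange(256) for _ in range(1000))  # near-incompressible

print(f"patterned: {compressibility(regular):.2f}")
print(f"noise:     {compressibility(noise):.2f}")
```

On Schmidhuber's account neither extreme is beautiful: the pattern is compressed instantly and poses no challenge, while the noise poses challenge without any compression progress. The hypothesized unification places beauty where compression is quick yet the stimulus still yields novelty.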

The aesthetic flirting with challenge finds its champions in postmodernism and, earlier, in wabi-sabi. Think, on the other hand, of Greek statues, which left no room for the exploration of anything besides perfection. But, perhaps for the best, forget all this philosophizing, for in the twenty-first century, the Dawn of Artificial Intelligence, machine learning models can capture our wants, understanding what it is to “decode” human preferences from the depths of the real matrices of natural order, thereby carving neat and mathematical, statistical and refined, encasings for our brains. The ideas of pre-data are henceforth buried, except in so far as they are expected to stimulate dopamine release, thus spilling nutritious utilons for reinforcement learning algorithms. Whoso proclaims that beauty is to be assigned only by him who contemplates it is a Copernican unto the sun and an ingrate unto evolution.

4. Aesthetic for conversions

In light of these considerations, the Savior Imperative resistance as aesthetic cannot but assume the game of data collection and analysis. But what is to be done with this? At the heart of the challenge, over and above all else, is the compromise of building a hedonic yet ethical path for society; this is necessary for the Savior Imperative. Society needs tailored content, but not to the point of, at the limit, rendering us into oblivion. We make our move right now, before the planes with clouds of Soma descend on us all. It is before full automation, UBI, and maximal VR comfort – while there is still, in some locations, an incessant fight for individual and collective recognition – that we can strategically ease people into this worldview. The few major tech companies have the greatest knowledge of how to shape people into ad-clickers and returning users. Not unlike this is the machine learning problem of converting many humans to a worldview, which presents itself as an unromantic technicality. Deviation from this norm thus amounts to maintaining the stance that we prefer to lose to other remorseless replicators. Anti-propagandistic norms are to be left to an alternate history, for what is entailed here is an honest appreciation of the contenders and of our own role in upholding the importance of our differences.