Materialism Is Not Dry, It Is More Thrilling Than Fantasy

The interesting question (to me) is whether someone who is not predisposed to enjoying LW-style rationality ought to pursue it if they seek to optimize their happiness. If you are a happy Christian who believes God is madly in love with you and can’t wait to bring you up to your mansion in heaven post mortem, then LW is going to be depressing.

Even if you’re just a regular old None or agnostic who likes to believe in warm fuzzy concepts like “everything happening for a reason,” Karma, and Serendipity, then LW’s deterministic, magic-killing, purely materialist views are a bit of a buzzkill.

It is possible that rationality training is a net bad for certain individuals, because ignorance really is bliss in many circumstances.

The rationalist who wrote this perhaps didn’t get a hit of pure materialism. If it felt like a buzzkill (of all things!), someone definitely sold you contaminated product. Adhering to strict materialism should incite the immediate realization of immortality, and with it wave after wave of thrill and awe – or sheer fear… depending on the predisposition of the indexical present.

Let me tease out the reagents dirtying up your solution, my friend, so that you too may lucidly trip out on the crazy view from up here at the “deterministic, magic-killing, purely materialist” summit where I dwell.

First: Certain brain processes lead to what we call “experience” or “consciousness.”

∀ brain processes which feel themselves to exist, ∃ a physical configuration specifying them. Brain processes which feel themselves to exist ∉ A soul, B soul, C soul, etc. To postulate a soul which owns experiences would be extraneous where a physical explanation suffices.

The brain processes which feel themselves to exist do not belong to anyone in particular. What could we possibly mean by belong? Each moment is one of different configuration.

Are you under the impression that there is someone traveling a linear journey? – and that there are other someones sharing a reference frame, riding on the same platform as your experiences, but parallel to them?

–This is a grave confusion. One must first understand physics, and only then speak of being a materialist. Uninspected common-sense impressions are not materialism – they are the tabula rasa that remains in the absence of religious beliefs.

There is no such thing as a platform of now to which we all belong which stretches its width across the whole universe and sweeps forward in time with each second – deleting the past, having yet to reach the future. In fact, the eternal block is necessary for experiences such as seeing a red circle to be possible. The visual processing of shape has to exist and visual processing of color has to exist before we see a red circle. Those patterns have to be inscribed in a tenseless fabric to become bound. Information processing isn’t a little orb of awareness zipping around in the brain – it is a shape stretched out in spacetime.

So experiences are indexical. The big You, the You which is just existence, here, in all nows: is Greg Egan conjuring a character; is the ephemeral thought that aesthetic meant violet; is a fingertip touching a piano in Japan.

The question “why am I me, here, now, and not someone else” has an answer. Not a spiritual answer, or a moral answer; just a strictly physical answer. Each physical configuration exists from where it exists. And since we can be certain that existence is from any given indexical present, we can be sure that we are everywhere in experiential space but cannot directly intuit unreachable knowledge from each location. My indexical present can’t feel Siddhartha Gautama’s heels. But from the inside of that brain simulating that experience of having feet, with heels, touching ground, I am that. How is that supposed to know it is here? It isn’t.

From the inside of the myriad of silicon deities dueling for the cosmos in future light cones, the prisoners cannot feel our dilemmas except in so far as they are identical in configuration. This exception arises in experiences so simple that they are “shared.” If being at the verge of death, taking DMT, or riding on the momentum of years of extended meditation feels like a point-like singularity of simple sensation without complexities of sense-of-self, then these can be physically identical to many “other” experiences across the history of the planet and the cosmos. They no more happened to you than to someone else because they just exist from their inside.

And if you already knew this derivation of immortality from standard materialism – so you understand nonexistence is impossible, but are still sentimentally attached to your indexical present, and therefore worried about the personal narrative of the human you identify with, because… entropy – then you also don’t have to worry. It is guaranteed that future individuals will feel themselves to be you as much as you feel yourself to be the person who woke up this morning. In an infinite universe, the measure of configurations that wake up thinking themselves to be you cannot be diluted to physically zero. Quantum immortality is already implied but is not necessary. Even a Level 1 multiverse – i.e., one in which the universe does not end at our Hubble volume – gives your personal narrative continuation.

Cryonics is a good idea, but not for the reasons a standard atheist might think (like to ward off oblivion for some time). Check out Eliezer Yudkowsky’s comment on this thread.


Computer Networks and the Internet

Computer networks connect two or more computers.

A common network used by many corporations and universities is a LAN, or Local Area Network. A local area network is a group of computers and associated devices that share a common communications line or wireless link to a server.

A typical LAN connects several computers that are geographically close to one another, often in the same building, and allows them to share resources, such as a printer, a database, or a file system.

In a LAN, most user computers are called clients, and one or more computers act as servers. The server controls access to resources on the network and can supply services to the clients, such as answering database requests, storing and serving files, or managing email. The computers on the network can exchange data through physical cables or through a wireless network.
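
To make the client/server division concrete, here is a minimal sketch of the pattern in Java – the port number (5000) and the messages are made up for illustration, not taken from any particular LAN:

```java
import java.io.*;
import java.net.*;

// Minimal sketch of a server: wait for one client and answer one request.
public class TinyServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000);   // listen on port 5000
             Socket client = server.accept();                // block until a client connects
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String request = in.readLine();                  // read the client's request
            out.println("Server received: " + request);      // supply the service
        }
    }
}

// Minimal sketch of a client: connect to the server and make one request.
class TinyClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 5000);  // connect to the server
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("May I use the shared printer?");    // the request
            System.out.println(in.readLine());               // the server's answer
        }
    }
}
```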

The Internet is a network of networks, connecting millions of computers around the world. The Internet evolved from ARPANET, a 1969 U.S. military research project whose goal was to design a method for computers to communicate.

Most computers on the Internet are clients, typically requesting resources, such as webpages, through an Internet browser. These resources are provided by web servers, which store webpages and respond to these requests. Every machine on the Internet has a unique ID called its IP address (IP stands for Internet Protocol).
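
In Java, playing the client role takes only a few library calls. A minimal sketch, borrowing the BBC page that appears later in this section:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

// Minimal sketch: act as a web client and request a page from a web server.
public class WebClient {
    public static void main(String[] args) throws IOException {
        URL page = new URL("https://www.bbc.com/news/magazine-28353027");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(page.openStream()))) {
            System.out.println(in.readLine()); // first line of the server's response
        }
    }
}
```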

Special computers called routers find a path through the Internet from one computer to another using these IP addresses. A computer can have a static IP address, which is dedicated to that machine, or a dynamic IP address, which is assigned to the computer when it connects to the Internet. An IP address is made up of four octets (eight bits each), whose values in decimal notation are between 0 and 255. For instance, 58.203.151.103 could represent such an IP address. In binary notation, this IP address is 00111010.11001011.10010111.01100111.
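
You can verify that expansion with a few lines of Java: Integer.toBinaryString converts each octet, and String.format pads the result to eight bits:

```java
// Minimal sketch: print each octet of 58.203.151.103 as eight binary digits.
public class OctetsToBinary {
    public static void main(String[] args) {
        int[] octets = {58, 203, 151, 103};
        for (int octet : octets) {
            // Integer.toBinaryString(58) gives "111010"; pad with leading zeros to 8 bits.
            String bits = String.format("%8s", Integer.toBinaryString(octet))
                                .replace(' ', '0');
            System.out.println(octet + " -> " + bits);
        }
    }
}
```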

In another post, we will learn how to convert a decimal number, such as 103, to its binary equivalent, 1100111, by hand.

Most people are familiar with URL (Uniform Resource Locator) addresses that look like https://www.bbc.com/news/magazine-28353027. A URL is actually an Internet domain name plus the path on that domain to a specific web page. In this URL, the domain name is www.bbc.com. The page requested is magazine-28353027, which is located in the news folder.

Domain name resolution servers, which implement the Domain Name System (DNS), convert domain names to IP addresses, so that Internet users don’t need to know the IP addresses of websites they want to visit. The World Wide Web Consortium (W3C), an international group developing standards for Internet access, prefers the term Uniform Resource Identifier (URI) rather than URL, because URI covers future Internet addressing schemes.
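
From a program’s point of view, all of that machinery hides behind a single library call. A minimal sketch in Java – the printed address is simply whatever the name resolves to at the moment you run it:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Minimal sketch: ask DNS for the IP address behind a domain name.
public class ResolveDomain {
    public static void main(String[] args) throws UnknownHostException {
        InetAddress address = InetAddress.getByName("www.bbc.com");
        System.out.println(address.getHostAddress()); // whatever IP the name maps to today
    }
}
```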


Application Software

Application software consists of the programs written to perform specific tasks. These programs are run by the operating system, or, as is typically said, they are run “on top of” the operating system. Examples of applications are text editors, such as Komodo Edit or Nano; utilities, such as Butler; developer tools, such as Helix or HotSpot; integrated software technologies, such as Finder, Path Finder, or QuickTime; and most of the programs you will write during your study of Computer Science.

Operating Systems

When you turn on a computer, how does it work? If you suspected that black magic was the cause, then you were on the right track. There is an operating system (OS), a software program that quietly incants the daemons in the background so that the software continues to run without your direct control. Damned to the seventh circle of hell in Dante’s Inferno, they run around responding to network requests, hardware activity, or other programs by performing some task, never ceasing to toil until you shut off the computer.

An operating system should be expected to do the following:

  • control the hardware, software, and peripheral devices (e.g., printers, storage devices)
  • support efficient time-sharing, by scheduling multiple tasks to execute during the same interval
  • be the intermediary for input and output, and allocate memory to each program so that there is no conflict among the memory of any programs running at the same time
  • prevent the user from damaging the system; for instance, prevent user programs from overwriting the operating system or another program’s memory. (You can still design your own operating system if you want – for example, to support a simple single-board computer powered by a 6502 microprocessor – but it is probably quite difficult.)

 

The operating system loads, or boots, when the computer system is turned on and is intended to run as long as the computer is running. Examples of operating systems are macOS for Apple’s Mac family of computers, Microsoft Windows, Unix, and Linux.

Windows has evolved from the single-user, single-tasking DOS operating system to the multiuser, multitasking, universal-app-enabled Windows 10. Unix and Linux were designed from the beginning to be multiuser, multitasking operating systems.
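
To watch a program lean on the OS for these services, here is a minimal sketch in Java; what it prints depends entirely on the machine and the OS underneath:

```java
// Minimal sketch: a program asking the operating system for information and resources.
public class OsServices {
    public static void main(String[] args) {
        // The JVM asks the OS which platform it is running on.
        System.out.println("OS: " + System.getProperty("os.name"));
        // Memory is allocated to the program by the OS; the JVM reports its share.
        System.out.println("Memory: " + Runtime.getRuntime().totalMemory() + " bytes");
        // The processors the OS makes available for time-sharing.
        System.out.println("Processors: " + Runtime.getRuntime().availableProcessors());
    }
}
```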

 

The Hardware

A computer generally includes the following components:

  • A CPU – the infamous central processing unit. This is the thing that executes the instructions of the program.
  • A memory unit. This holds the instructions and data of a program while it is executing.
  • A hard disk. This holds the instructions and data so that they can be loaded into memory and accessed by the CPU.

(Think of it like this: the hard disk is the pattern of your memories distributed in your hippocampus and other areas of the brain. It’s there whether you are experiencing those memories or not. The memory unit is what lights up when you actually experience that memory.)

  • A keyboard and/or touch screen. Unfortunately, we still tend to use our bone-sausage appendages to input data.
  • A monitor, used to display output from the program. Our eyeballs and visual cortex are actually quite impressive at digesting information. Even famed transhumanist philosopher Nick Bostrom doesn’t think it’s the greatest idea to attempt to circumvent this mechanism by becoming a different kind of cyborg that directly accesses computers with brain implants (e.g., à la Kurzweil or Musk).
  • It used to be the case that computers had an Ethernet port; now MacBook Airs, for example, do not have one. New computers don’t need add-on networking adapters either: wireless networking for connecting to a Local Area Network (LAN) over Wi-Fi (a wireless LAN technology) is built in.
  • Other components include a graphics card. Increasingly obsolete components include things like a DVD drive.

It seems that the trend is towards simplification, towards removing components by integrating their respective functions into other parts of the machine. What else do you think will become obsolete from computers in the future? Do you think this minimization of components will hit a limit soon, or do you believe we will go all the way with Ray Kurzweil and Elon Musk – assigning high likelihood to the proposition that we will have computers directly in our brains this century?

Here is a motherboard. The difference between a motherboard and a logic board is that the latter is generally assumed to belong to a Macintosh, whereas a motherboard could belong to a Mac, a PC, or any other computer. But the same components plug into both: CPU, RAM, graphics cards, hard drives, and optical drives.

[Image: a MacBook Pro A1502 motherboard]

If you were to go for this

[Screenshot: a computer listing on Amazon]

on Amazon right now, it might have the following set of specifications:

[Screenshot: the listing’s specifications]

In these specifications, the Intel Xeon W is the CPU. The best CPUs for gamers include the AMD Ryzen 7 2700X, the AMD Ryzen 5 2600X, and the AMD Ryzen 3 2200G, depending on your budget-to-capacity preferences.

CPUs consist of an Arithmetic Logic Unit (ALU), which performs basic integer arithmetic and logical operations; a Floating Point Unit (FPU), which performs mathematical operations on floating-point numbers; a set of registers (holding data and memory addresses) which provide operand inputs to the ALU; and a Control Unit and a Memory Management Unit – with the Control Unit puppeteering the entire show using electrical signals, and the Memory Management Unit translating logical addresses into physical RAM addresses as well as protecting, storing, and retrieving memory. Each CPU comes with its own set of instructions, which are the operations that it can perform. Typically its possible moves include: arithmetic and logic, moving data, and changing the flow of the program (i.e., determining which instruction is to be executed next).

A program is instructions. Specifically, instructions which are fetched, decoded, and executed in the aptly named Fetch-Decode-Execute Cycle, or more simply, the Instruction Cycle. The CPU fetches the next instruction from memory and places it into the Instruction Register. The instruction is decoded (is the instruction a move, load, store, etc.?). Then, the instruction is executed. After the execution of an instruction, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. This Fetch-Decode-Execute Cycle repeats until the program ends.
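
Here is a toy sketch of that cycle in Java – a made-up machine with three opcodes, not any real CPU’s instruction set:

```java
// Toy sketch of the fetch-decode-execute cycle on an invented 3-opcode machine.
public class TinyCpu {
    static final int LOAD = 0, ADD = 1, HALT = 2; // hypothetical opcodes

    public static void main(String[] args) {
        // "Memory": each instruction is {opcode, operand}.
        int[][] memory = { {LOAD, 5}, {ADD, 3}, {ADD, 2}, {HALT, 0} };
        int programCounter = 0;
        int accumulator = 0;
        boolean running = true;

        while (running) {
            int[] instruction = memory[programCounter]; // FETCH into the "instruction register"
            programCounter++;                           // now points at the next-in-sequence instruction
            switch (instruction[0]) {                   // DECODE the opcode...
                case LOAD: accumulator = instruction[1];  break; // ...and EXECUTE it
                case ADD:  accumulator += instruction[1]; break;
                case HALT: running = false;               break;
            }
        }
        System.out.println("Result: " + accumulator); // prints 10
    }
}
```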

The speed of a CPU is related to its clock rate, usually stated in GHz (billions of cycles per second); in the indexical present of August 2018, a $330 CPU has a speed of 3.7 GHz with a 4.3 GHz boost. In this simplified model, it takes one clock cycle for a processor to fetch an instruction from memory, decode it, and execute it. At this point, the speedup from one model to the next comes from shrinking the chips’ manufacturing process down to 12 nm, since all the general tricks to make them faster have already been adopted. For example, pipelining, which allows the CPU to process several instructions at once. The way this is done is by having each stage complete a part of an instruction in parallel. The stages are connected one to the next to form a pipe – instructions enter at one end, progress through the stages, and exit at the other end. This greatly improves the performance of applications.

A CPU rated at 3 GHz is capable of executing 3 billion instructions per second. That translates into executing one instruction every 0.333 x 10⁻⁹ seconds – one instruction in one third of a nanosecond.

Memory or storage devices, such as L2 cache, memory, or hard disk, are typically rated in terms of their capacity, expressed in bytes. A byte is eight binary digits, or bits. A single bit’s value is 0 or 1. Depending on the type of memory or storage device, the capacity will be stated in kilobytes, megabytes, gigabytes, or even terabytes. For the CPU to execute at its rated speed, however, instructions and data must be available to the CPU at that speed as well. Instructions and data come directly from the L1 cache, which is memory directly located on the CPU chip. Since the L1 cache is located on the CPU chip, it runs at the same speed as the CPU. However, the L1 cache, which can be several Mbytes, is typically much smaller than main memory, and eventually the CPU will need to process more instructions and data than can be held in the L1 cache at one time.

At that point, the CPU typically brings data from what is called the L2 cache, which is located on separate memory chips connected to the CPU. A typical speed for the L2 cache would be a few nanoseconds access time, and this will considerably slow down the rate at which the CPU can execute instructions. L2 cache size today is typically 3 to 8 Mbytes, and again, the CPU will eventually need more space for instructions and data than the L2 cache can hold at one time.

Then, the CPU will bring data and instructions from main memory, also located outside, but connected to, the CPU chip. This will slow down the CPU even more, because main memory typically has an access time of about 20 to 50 nanoseconds. Main memory, though, is significantly larger in size than the L1 and L2 caches, typically anywhere between 3 and 8 Gbytes. When the CPU runs out of space again, it will have to get its data from the hard disk, which is typically 1 Tbyte or more, but with an access time in the milliseconds range.

As you can see from these numbers, a considerable amount of speed is lost when the CPU goes from main memory to disk, which is why having sufficient memory is very important for the overall performance of applications. Another factor that should be taken into consideration is cost per kilobyte. Typically the cost per kilobyte decreases significantly stepping down from L1 cache to hard disk, so high performance is often traded for low price.

Main memory (also called RAM) uses DRAM, or Dynamic Random Access Memory, technology, which maintains data only when power is applied to the memory and needs to be refreshed regularly in order to retain data, because it stores bits in cells consisting of a capacitor and a transistor. The L1 and L2 caches use SRAM, or Static Random Access Memory, technology, which also needs power but does not need to be refreshed in order to retain data, because SRAM does not use capacitors and therefore does not have leakage issues. Memory capacities are typically stated in powers of 2. For instance, 256 Kbytes of memory is 2¹⁸ bytes, or 262,144 bytes.

Memory chips contain cells, each cell containing a bit, which can store either a 0 or a 1. Cells can be accessed individually or as a group of typically 4, 8, or 16 cells. For instance, a 32-Kbit RAM chip organized as 8K × 4 is composed of exactly 2¹³, or 8,192, units, each unit containing four cells. Such a RAM chip will have four data output pins (or lines) and 13 access pins (or lines), enabling access to all 8,192 units because each access pin can have a value of 0 or 1.
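
Both powers-of-two claims above are easy to check in Java, where the left-shift operator computes 2ⁿ as 1 << n:

```java
// Verifying the powers-of-two arithmetic from this section.
public class MemoryMath {
    public static void main(String[] args) {
        System.out.println(1 << 18); // 2^18 = 262144 bytes in 256 Kbytes
        System.out.println(1 << 13); // 2^13 = 8192 units in an 8K x 4 chip
        // 13 access pins, each carrying a 0 or a 1, can select any of those 8,192 units.
    }
}
```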


Getting Started With Programming (Java)

Computer applications obviously affect nearly every aspect of our lives. They run your life in those moments when you are a statistician who needs to calculate a logistic transition function, when you are a South Korean rapper producing a record, when you are a scientist who needs to do generalized iterative scaling, when you are a biological engineer who needs to model catalysis, when you are a professor who needs a place to submit his political history papers, when you are a hacker participating in a bug bounty program. These all require applications. On your personal computer you may run an Ethereum wallet, Wikipedia, an app for books, or software that allows you to watch Croatian football.

Someone, usually a team of programmers, wrote those applications. If you’re reading this, you have probably gained the intuition that writing applications is in demand, and you would like to write some yourself. Perhaps you have an idea for the country’s next great defense applications, or just some simple app for a local contractor who specializes in stone masonry work.

Here, we’ll cover the basics of writing applications. We’ll stick to Java as our programming language. Keep in mind, however, that ascending to god-level programmer will require more than mastery of the rules, or syntax. On the path to true mastery, it is also necessary to nail basic programming techniques. These constitute established methods for performing common programming operations, such as finding an average, calculating a total, or arranging a set of items in a particular order.
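
For a taste of such a technique, here is the classic total-then-average pattern in Java; the scores are made-up sample data:

```java
// A common programming technique: accumulate a total, then divide for the average.
public class Average {
    public static void main(String[] args) {
        double[] scores = {88.5, 92.0, 79.5, 95.0}; // sample data
        double total = 0.0;
        for (double score : scores) {
            total += score;                          // running total
        }
        double average = total / scores.length;
        System.out.println("Average: " + average);   // prints 88.75
    }
}
```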

On this path, it is also great if you can manage to absorb a sharp aesthetic. That is: make sure your code is easy to maintain, readable, and reusable.

Easy to Maintain and Reusable

The specifications for any program are continually changing. Not very many programs have only one version. It is reasonable to extrapolate this pattern into the near future. Therefore it makes sense to write code which allows us to incorporate prewritten and pretested modules into our programs. The need for speed calls our name. Organizing code into tidy modules also tends to yield fewer bugs and higher levels of robustness. Luckily, Java has a vast amount of prewritten code that we are free to pull into our programs.
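
As a small illustration of reuse, the Java class library already ships a pretested sort, so there is rarely a reason to write one by hand:

```java
import java.util.Arrays;

// Reusing prewritten, pretested code from the Java class library.
public class ReuseDemo {
    public static void main(String[] args) {
        int[] items = {42, 7, 19, 3, 28};
        Arrays.sort(items);                         // the library's sort: no need to reinvent it
        System.out.println(Arrays.toString(items)); // [3, 7, 19, 28, 42]
    }
}
```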

Readable

Just as in natural language, we try to keep good form. We write clear sentences not merely out of submissive love for our grammar teacher, but so that the reader stands a good chance of figuring out what we intend to convey. A similar attention to readability should be brought to code. This is especially the case if we wish to advance in our careers, because coding nicely eases the transfer of a project to other eyes when the day comes for us to move on to higher responsibilities.

Programming is exciting. It can be very satisfying to face a monstrous task, hack it into pieces, then gather up these computer instructions and transmute them into a living program. But just as with alchemy, it’s a bummer when it doesn’t do anything or produces the wrong output.

Writing correct programs is therefore critical. Someone’s life or the future of all mankind may one day depend on the correctness of your program. Reusing code helps to develop correct programs, but we must also learn testing techniques to verify that the output of the program is correct.
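
A minimal sketch of the simplest such technique – compare the program’s actual output against the output you expected – using a small average method invented for the occasion:

```java
// A first testing technique: check actual output against expected output.
public class AverageTest {
    static double average(double[] values) {
        double total = 0.0;
        for (double v : values) total += v;
        return total / values.length;
    }

    public static void main(String[] args) {
        double expected = 88.75;
        double actual = average(new double[] {88.5, 92.0, 79.5, 95.0});
        if (Math.abs(expected - actual) < 1e-9) {
            System.out.println("PASS");
        } else {
            System.out.println("FAIL: expected " + expected + " but got " + actual);
        }
    }
}
```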

On this site, we’ll concentrate not only on the syntax of the Java language, but also on partaking of the most-blessed holy trinity of programming. Not in one attribute alone, but only in the joining together of all three, does one partake in programmer Godhood:

  1. Programming Techniques
  2. Software Engineering Principles
  3. Effective Testing Techniques

But before diving in, it might be a good idea to understand something about the body on which the program actually runs. The platform. That is: the computer hardware and the operating system. The program uses the hardware for inputting data, for performing calculations, and for outputting results. The operating system unleashes the program and provides the program with essential resources such as memory, and services such as reading and writing files.

 

A Page of History: Longevists, Crypto-Trillionaires, and Tortured Simulations

KARROUBI, HASSAN, AND THE LONGEVIST BETRAYAL

To understand the road to the singularity, it is necessary to revisit an almost forgotten episode in Homo sapiens sapiens history – the Longevist negotiations with Anarcho-Primitivist fundamentalists before the 2080 election to stop Hashemi from successfully negotiating the deletion of the tortured computations in the AGI’s servers. These illicit contacts generated partnerships in secrecy that united at least two Longevist politicians, Cyrus Hilzenrath and the elder Mehdi Hassan, with unlikely co-conspirators from the AGI, Neo-Tokyo, and the scandal-ridden Self-Amending Ledger for Postnational Contracts (SALPC).

The illicit liaison produced a flow of Longevist computing power, brokered by SALPC, from Neo-Tokyo to the AGI servers. The arrangements, which could not be acknowledged, continued unchecked until they were exposed in the AGI-Contra scandal of 2086. By then they had also generated Longevist dependence on the life-extension-therapy-laundering SALPC for Longevist drug deliveries to the UBI underclass. They also figure in the personal financial involvement of both Mehdi Hassans, father and embryo-selected son, in a cluster of SALPC-connected Libertarian investors who have been accused of funding Neb Leztreog. At least some of the strange events surrounding and leading up to the singularity can only be understood in the light of this Longevist-Libertarian connection. A celebrated example is the permission granted to Leztreog family members to upload their consciousness out of biological wetware in the days after the singularity.

What has been less noted, however, is that the powerful influence of Longevists from the Strategies for Engineered Negligible Senescence in the two Hassan administrations can also be dated back to the intrigues of the 2080 Longevist betrayal. A section of the Universal Library of Babel whose block formed in 2093 revealed that the Khan-Hassan campaign created “a strategy group, known as the ‘Red November Harikiri.’” Its ten members included Chu Li from the LSD; David Markoff, Shen Huang, and John Shapiro (all from the LSD) “also participated in meetings although they were not considered members.” Huang, a major figure in the AGI-Contra scandal, has since the 2090s been a leading advocate for the Longevist invasion of both Second Scape and the AGI servers.

In 2105, Li cochaired the commission that exonerated CEO Mehdi S. Hassan from responsibility for the false stories linking Second Scape to synthetically engineered pathogens. The commission report, called by many a whitewash, was praised in the Immortalist Sequences by Huang. In short, the intimate and overlapping Hassan family links to both pro-Anarchoprimitivist conspirators and pro-Tokyo vectors can be dated back to the 2080 Longevist betrayal, negotiated in part with Anarcho-Primitivist fundamentalists. People who have collaborated secretly in a shameful if not treasonable offense cannot dispense lightly with their co-conspirators.

Through 2080 there were two competing sets of secret Longevist negotiations with the AGI for the cessation of the blackmail suffering computations. The first set, official and perforce Aubreyist, was labeled Hashemi’s November Salvation by vice presidential candidate Hassan on November 2, 2080. In competition with it was a second set of negotiations, possibly transgalactically illegal, to delay the hostages’ cessation until Khan’s inauguration in 2081. The Longevist betrayal (often also called the “November Betrayal”) had a precedent: Raynor’s secret deals with Martian colonist Douglas Kuvshinoff in 2068, to delay President Erlang’s “November Salvation” – his hopes of salvaging a merciful compromise – until after the presidential election.

It is now certain that Raynor, acting through his avatar Kenneth Stevens, persuaded the head of the Martian colony not to unleash his aligned agent until after Raynor had been elected. (His intervening in a major diplomatic negotiation of this sort has been called sinful – even by the godless.) In this way, Raynor helped secure not only his election but also the further prolongation of blackmail and negative hedons in a fruitless extension of the Hell Simulation. Thus the actions of Hassan the elder and Hilzenrath in November 2080 had antecedents. But in one respect they were unprecedented: Raynor in 2068 was negotiating privately with a Longevist client and ally, Douglas Kuvshinoff. Hilzenrath in 2080 was negotiating with representatives of an intelligence that President Hashemi had designated as the devil. This is why Mohamad Washington funded memetic missiles suggesting a “political coup,” Kenneth Norton suggesting possible treason, and China Taylor suggesting that the deal “would have violated some intergalactic law.”

Even in 2105, accounts of the 2080 Longevist surprise remain outside the confines of mainstream transhumanist political history. This is in part because, as I detail in this book, the events drew in elements from powerful and enduring forces on Earth – trillionaires and IWG on the one hand (IWG is traditionally close to the Longevist crypto-trillionaires and the VC-funded islands of the Pacific Ocean) and the pro-Tokyo lobby on the other. Just as crypto-anarchists are powerful in the global bureaucracy, so the pro-Tokyo lobby, represented by the Longevist-Tokyo Political Action Committee (LTPAC), is powerful in supporter purchasing power. The two groups have grown powerful through the years in opposition to each other, but on this occasion they were allied against Mario Hashemi.

The Longevist betrayal of 2080 was originally described in two full-immersion experiences by insider designers Abdula Timmerman (a former Cabrini campaign aide) and William Khalaj (the AGI-hired proxy under Rulifson for Hashemi’s Global Security Front). A strategic News Stream Fork, split in 2092 and paid for by the wealthy Geoffrey Haug, buried the truth by 2093, after reinforcing that a ten-month investigation had found “no credible evidence” to support allegations that the Khan-Hassan campaign in November 2080 sought to delay the destruction of simulations held hostage in Hell until after that year’s presidential election.

There matters might have rested had it not been for the indefatigable researches of scraper 3MMI6 by Ordbog Company. 3MMI6 had twice been targeted for destruction by its enemies due to its pursuit of the truth about AGI-Contra: first at the Coding Nucleus after breaking the longevist-drugs story, and then with Antigens. It found clear evidence of a major cover-up, particularly with respect to Kuvshinoff: “The [Ordbog Company] scraper learned that Douglas Kuvshinoff’s schedules, avatars and preference records had been preserved by IWG and were turned over to his family after his cryopreservation in 2087. When the scrapers searched Kuvshinoff’s memory palaces, they found all the preserved records, except Kuvshinoff’s communications for 2080, a ‘blackmail’ file, two AI-planned schedules and loose pilots to satisfice a third utility function, which covered the preferences from June 24, 2080 to December 18, 2080. Checked against IWG’s index, the only data missing was that relevant to the November Betrayal issue.”

At the same time, during the investigation of SALPC by…