The Hardware

A computer generally includes the following components:

  • A CPU – the infamous central processing unit. This is the thing that executes the instructions of the program.
  • A memory unit. This holds the instructions and data of a program while it is executing.
  • A hard disk. This holds the instructions and data so that they can be loaded into memory and accessed by the CPU.

(Think of it like this: the hard disk is the pattern of your memories distributed in your hippocampus and other areas of the brain. It’s there whether you are experiencing those memories or not. The memory unit is what lights up when you actually experience that memory.)

  • A keyboard and/or touch screen. Unfortunately we still tend to use our bone-sausage appendages to input data.
  • A monitor, used to display output from the program. Our eyeballs and visual cortex are actually quite impressive at digesting information. Even famed transhumanist philosopher Nick Bostrom doesn’t think it’s the greatest idea to attempt to circumvent this mechanism by becoming a different kind of cyborg that directly accesses computers with brain implants (à la Kurzweil or Musk).
  • It used to be the case that computers had an Ethernet port. Now it is indeed the case that MacBook Airs, for example, do not have one. New computers instead rely on wireless networking adapters to connect to a Local Area Network (LAN) over Wi-Fi (a wireless kind of LAN).
  • Other components include a graphics card. Increasingly obsolete components include things like a DVD drive.

It seems that the trend is towards simplification: removing components by folding their functions into others. What else do you think will become obsolete in computers of the future? Do you think this minimization of components will hit a limit soon, or do you believe we will go all the way, like Ray Kurzweil and Elon Musk, assigning high likelihood to the proposition that we will have computers directly in our brains this century?

Here is a motherboard. The difference between a motherboard and a logic board is that the latter is generally assumed to be Macintosh, whereas a motherboard could be from a Mac, PC, or any other computer. But the same components plug into both: CPU, RAM, graphics cards, hard drives, and optical drives.

[Image: a MacBook Pro A1502 motherboard]

If you were to go for this product on Amazon right now,

[Screenshot: Amazon product listing, August 2018]

it might have the following set of specifications:

[Screenshot: the product’s specifications, listing an Intel Xeon W CPU]

In these specifications, the Intel Xeon W is the CPU. The best CPUs for gamers include the AMD Ryzen 7 2700X, the AMD Ryzen 5 2600X, and the AMD Ryzen 3 2200G, depending on budget and performance preferences.

CPUs consist of an Arithmetic Logic Unit (ALU), which performs basic integer arithmetic and logical operations; a Floating Point Unit (FPU), which performs mathematical operations on floating-point numbers; a set of registers (holding data and memory addresses), which provide operand inputs to the ALU; a Control Unit, which puppeteers the entire show using electrical signals; and a Memory Management Unit, which translates logical addresses into physical RAM addresses and protects memory during storage and retrieval. Each CPU comes with its own set of instructions, which are the operations that it can perform. Typically its possible moves include arithmetic and logic operations, moving data, and changing the flow of the program (i.e., determining which instruction is to be executed next).

A program is instructions. Specifically, instructions which are fetched, decoded, and executed in the aptly named Fetch-Decode-Execute Cycle, or more simply, the Instruction Cycle. The CPU fetches the next instruction from memory and places it into the Instruction Register. The instruction is decoded (is the instruction a move, load, store, etc.?). Then, the instruction is executed. After the execution of an instruction, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. This Fetch-Decode-Execute Cycle repeats until the program ends.
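
To make the cycle concrete, here is a minimal sketch of a toy CPU in Python. The instruction set (LOAD, ADD, JUMP, HALT), the register names, and the tuple-based program format are all invented for illustration; no real instruction set looks quite like this.

    # A toy fetch-decode-execute loop. The instructions and registers
    # here are made up for illustration only.

    def run(program):
        registers = {"A": 0, "B": 0}
        pc = 0                            # program counter
        while True:
            instruction = program[pc]     # FETCH: read from "memory"
            pc += 1                       # increment the program counter
            op, *args = instruction       # DECODE: what kind of instruction?
            if op == "LOAD":              # EXECUTE: carry it out
                reg, value = args
                registers[reg] = value
            elif op == "ADD":             # A <- A + B
                registers["A"] += registers["B"]
            elif op == "JUMP":            # change the flow of the program
                pc = args[0]
            elif op == "HALT":
                return registers

    program = [
        ("LOAD", "A", 2),
        ("LOAD", "B", 3),
        ("ADD",),
        ("HALT",),
    ]

    print(run(program))                   # {'A': 5, 'B': 3}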

The speed of a CPU is related to its clock rate, usually rated in GHz (billions of cycles per second); in the indexical present of August 2018, a $330 CPU has a speed of 3.7 GHz with a 4.3 GHz boost. In the simplest model, it takes one clock cycle for a processor to fetch an instruction from memory, decode it, and execute it. At this point, the speedup from one model to the next comes mostly from shrinking the chips down to a 12 nm process, since all the general tricks to make CPUs faster have already been adopted. For example, pipelining, which allows the CPU to process several instructions at once. The way this is done is by having each stage complete a part of an instruction in parallel. The stages are connected one to the next to form a pipe: instructions enter at one end, progress through the stages, and exit at the other end. This greatly improves the performance of applications.
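
To see why pipelining helps, here is a minimal sketch in Python that prints a timing diagram for a toy three-stage pipe (the stage names and instruction labels are invented): once the pipe fills, one instruction finishes on every cycle, even though each instruction still passes through all three stages.

    # Toy 3-stage pipeline timing diagram. Stage and instruction names
    # are made up for illustration.

    STAGES = ["Fetch", "Decode", "Execute"]
    instructions = ["I1", "I2", "I3", "I4", "I5"]

    total_ticks = len(instructions) + len(STAGES) - 1
    for tick in range(total_ticks):
        busy = []
        for stage_index, stage in enumerate(STAGES):
            i = tick - stage_index        # instruction i occupies this stage now
            if 0 <= i < len(instructions):
                busy.append(f"{stage}:{instructions[i]}")
        print(f"cycle {tick + 1}: " + "  ".join(busy))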

A CPU rated at 3 GHz is capable of executing 3 billion instructions per second. That translates into executing one instruction every 0.333 x 10⁻⁹ seconds – one instruction in one third of a nanosecond.
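
As a quick sanity check on that arithmetic:

    clock_rate = 3e9                        # 3 GHz: three billion cycles per second
    seconds_per_instruction = 1 / clock_rate
    print(seconds_per_instruction)          # 3.333...e-10 s
    print(seconds_per_instruction * 1e9)    # ~0.333: one third of a nanosecond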

Memory and storage devices, such as the L2 cache, main memory, or the hard disk, are typically rated in terms of their capacity, expressed in bytes. A byte is eight binary digits, or bits; a single bit’s value is 0 or 1. Depending on the type of memory or storage device, the capacity will be stated in kilobytes, megabytes, gigabytes, or even terabytes. For the CPU to execute at its rated speed, however, instructions and data must be available to the CPU at that speed as well. Instructions and data come directly from the L1 cache, which is memory located directly on the CPU chip. Since the L1 cache is located on the CPU chip, it runs at the same speed as the CPU. However, the L1 cache, typically tens of kilobytes per core, is much smaller than main memory, and eventually the CPU will need to process more instructions and data than can be held in the L1 cache at one time.

At that point, the CPU typically brings data from what is called the L2 cache, which is located on separate memory chips connected to the CPU. A typical speed for the L2 cache would be a few nanoseconds access time, and this will considerably slow down the rate at which the CPU can execute instructions. L2 cache size today is typically 3 to 8 Mbytes, and again, the CPU will eventually need more space for instructions and data than the L2 cache can hold at one time.

Then, the CPU will bring data and instructions from main memory, also located outside, but connected to, the CPU chip. This will slow down the CPU even more, because main memory typically has an access time of about 20 to 50 nanoseconds. Main memory, though, is significantly larger in size than the L1 and L2 caches, typically anywhere between 3 and 8 Gbytes. When the CPU runs out of space again, it will have to get its data from the hard disk, which is typically 1 Tbyte or more, but with an access time in the milliseconds range.
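
To see how steep this hierarchy is, here is a small back-of-the-envelope comparison in Python. The numbers are round figures picked from the ranges quoted above, not measurements of any particular machine:

    # Round-number access times for each level of the memory hierarchy,
    # picked from the ranges quoted above (actual values vary by machine).
    hierarchy = {
        "L1 cache":    0.3e-9,   # about one CPU cycle at ~3 GHz
        "L2 cache":    3e-9,     # a few nanoseconds
        "Main memory": 30e-9,    # in the 20-50 ns range
        "Hard disk":   5e-3,     # milliseconds
    }

    l1_time = hierarchy["L1 cache"]
    for level, seconds in hierarchy.items():
        ratio = seconds / l1_time
        print(f"{level:12}  {seconds:.1e} s  ({ratio:>12,.0f}x L1 access time)")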

As you can see from these numbers, a considerable amount of speed is lost when the CPU goes from main memory to disk, which is why having sufficient memory is very important for the overall performance of applications. Another factor that should be taken into consideration is cost per kilobyte. Typically the cost per kilobyte decreases significantly stepping down from L1 cache to hard disk, so high performance is often traded for low price. Main memory (also called RAM) uses DRAM, or Dynamic Random Access Memory technology, which maintains data only when power is applied to the memory and needs to be refreshed regularly in order to retain data because it stores bits in cells consisting of a capacitor and a transistor. The L1 and L2 caches use SRAM, or Static Random Access Memory technology, which also needs power but does not need to be refreshed in order to retain data because SRAM does not use capacitors and therefore does not have leakage issues. Memory capacities are typically stated in powers of 2. For instance, 256 Kbytes of memory is 2¹⁸ bytes, or 262,144 bytes.
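
The power-of-two arithmetic is easy to verify:

    kilobyte = 2 ** 10                  # memory capacities use powers of 2
    capacity = 256 * kilobyte           # 256 Kbytes
    print(capacity)                     # 262144
    print(capacity == 2 ** 18)          # True: 256 Kbytes is 2^18 bytes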

Memory chips contain cells, each cell containing a bit, which can store either a 0 or a 1. Cells can be accessed individually or as a group of typically 4, 8, or 16 cells. For instance, a 32-Kbit RAM chip organized as 8K × 4 is composed of exactly 2¹³, or 8,192, units, each unit containing four cells. Such a RAM chip will have four data output pins (or lines) and 13 address pins (or lines); since each address pin can carry a 0 or a 1, 13 pins are enough to select any of the 2¹³ = 8,192 units.
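
And a short sketch of the addressing arithmetic for that 8K × 4 example, again in Python:

    import math

    # The 8K x 4 organization from the example above: 32 Kbits of cells,
    # grouped into units of 4 cells each.
    total_bits = 32 * 1024
    cells_per_unit = 4
    units = total_bits // cells_per_unit     # 8,192 units

    address_lines = math.ceil(math.log2(units))
    print(units)                             # 8192
    print(address_lines)                     # 13: enough pins to select any unit
    print(2 ** address_lines == units)       # True, since 8,192 is exactly 2**13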
