LEARN Architecture

Welcome

Fertile Questions for the Architecture chapter of “How to Learn Computer Science”.


Thank you for buying my book! This page discusses the content in the Architecture chapter and answers the “Fertile Questions” I asked there. There are no perfect answers, and you may even disagree, but the point of a fertile question is to make you think.

Here are the questions, and my suggested answers. Do you agree?

Where did the idea of an instruction cycle of Fetch-Decode-Execute come from?

John von Neumann (pronounced “noy-man” because he was Hungarian) first had the idea of storing programs in the same memory used for data. He described this “stored-program” concept in his 1945 First Draft of a Report on the EDVAC, and the EDVAC machine itself was completed in 1949. Before EDVAC, computer programs were hard-wired using plug-boards or read in from paper tape.

Just before von Neumann could complete EDVAC, however, two British engineers, Freddie Williams and Tom Kilburn, finished their Small-Scale Experimental Machine, also known as the Manchester Baby. In 1948 it became the first stored-program electronic computer to run von Neumann’s fetch-decode-execute cycle, successfully calculating the highest proper factor of 2¹⁸ (262,144), finding the answer, 131,072, in just 52 minutes.
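
To make the cycle concrete, here is a minimal sketch of a stored-program machine in Python. Instructions and data share one memory, and the CPU loops through fetch, decode and execute. The opcodes and memory layout are invented for illustration, not taken from any real machine.

```python
# A toy stored-program machine: instructions and data share ONE memory
# (the von Neumann idea). The opcodes are made up for this sketch.
memory = [
    ("LOAD", 6),     # 0: acc = memory[6]
    ("ADD", 7),      # 1: acc = acc + memory[7]
    ("STORE", 8),    # 2: memory[8] = acc
    ("HALT", None),  # 3: stop
    None, None,      # 4-5: unused
    40,              # 6: data
    2,               # 7: data
    0,               # 8: the result will be written here
]

pc, acc, running = 0, 0, True       # program counter and accumulator
while running:
    opcode, operand = memory[pc]    # FETCH the next instruction
    pc += 1
    if opcode == "LOAD":            # DECODE the opcode, then EXECUTE it
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        running = False

print(memory[8])  # 42
```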

Do modern computers have a Von Neumann or a Harvard architecture?

The answer is really “neither” or “a mix of the two”.

Early computers were largely limited by the processing speed of the CPU. In the 1960s, however, fast-switching transistors changed all this: suddenly CPUs were sitting idle, waiting for instructions or data. Splitting main storage into data memory and instruction memory allowed the CPU to fetch an instruction on one bus, while simultaneously fetching data on another. This twin-memory solution to the “Von Neumann bottleneck” is sometimes known as the Harvard architecture. Modern computers combine von Neumann and Harvard ideas in a “hybrid architecture” to maximise performance.

“How to Learn Computer Science” page 124
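
Continuing the toy machine above, a Harvard-style split simply keeps instructions and data in separate memories, so real hardware could fetch from both at once over independent buses. Python runs one statement at a time, so this sketch only shows the structure, not the parallelism; the names are again invented.

```python
# Harvard-style split: instructions and data live in SEPARATE memories,
# so a real CPU could read both simultaneously over different buses.
instruction_memory = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
data_memory = [40, 2, 0]

pc, acc, running = 0, 0, True
while running:
    opcode, operand = instruction_memory[pc]  # fetched on the instruction bus
    pc += 1
    if opcode == "LOAD":
        acc = data_memory[operand]            # fetched on the data bus
    elif opcode == "ADD":
        acc += data_memory[operand]
    elif opcode == "STORE":
        data_memory[operand] = acc
    elif opcode == "HALT":
        running = False

print(data_memory[2])  # 42, and data writes can never overwrite the program
```
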
Why don’t we make the cache huge and get rid of RAM and secondary storage?

Two reasons, really: cache is more expensive than RAM, and beyond a certain size the time spent searching the cache grows so large that adding more becomes self-defeating.

In order to speed up execution of programs, instructions can be cached. Cache memory responds in as little as one nanosecond, or just two CPU cycles. When the cache holds both the instructions and the data they need, code completes far more quickly than if everything had to come from RAM. Cache is the second of the three Cs that improve a computer’s performance.

Why don’t we make all main memory into cache? The answer is that cache is expensive and, ironically, the bigger it gets, the slower it performs. All CPUs since Intel’s 8086 (released in 1978) have therefore included prefetching technology that loads instructions from RAM before the CPU needs them, preventing idleness. Processors now have several levels of cache: a tiny level 1 cache tightly integrated with the CPU has lightning performance, while level 2 and sometimes level 3 sit further away and are relatively slower but cheaper per byte.

“How to Learn Computer Science” page 125
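
To see why a small, fast cache backed by slower levels beats one enormous cache, work out the average memory access time (AMAT). The timings and hit rates below are invented round numbers for illustration, not measurements of any real processor.

```python
# Average memory access time: AMAT = hit_time + miss_rate * miss_penalty
# (all figures are illustrative, not real hardware numbers).
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

# A tiny 1 ns level 1 cache catching 90% of accesses, backed by 100 ns RAM:
l1_only = amat(1, 0.10, 100)        # 11.0 ns on average

# Add a 10 ns level 2 cache that catches 80% of the L1 misses:
l2_cost = amat(10, 0.20, 100)       # an L1 miss now costs 30 ns, not 100 ns
hierarchy = amat(1, 0.10, l2_cost)  # 4.0 ns on average

# One huge cache responds more slowly (say 8 ns) even if it rarely misses:
huge_cache = amat(8, 0.01, 100)     # 9.0 ns, worse than the hierarchy

print(l1_only, hierarchy, huge_cache)
```

The hierarchy wins because most accesses hit the tiny, fast level, and most of the rare misses are caught by level 2 before they ever reach RAM.
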
Why is Moore’s law slowing down and what can we do about it?

In 1965, electronics pioneer Gordon Moore noted in an article in the 35th anniversary edition of Electronics magazine that computing power was doubling every two years.[1] Moore’s law remained largely reliable until recent years, when chip manufacturers began to hit limits imposed by physics, such as the wavelength of the ultraviolet light used to etch patterns into the silicon. Manufacturers are even grappling with “quantum tunnelling” – electrons simply “jumping the gate” when transistors become too small.

“How to Learn Computer Science” page 120
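
As a rough illustration of what “doubling every two years” means, here is the compound growth written out in Python. The starting count is a real-world ballpark (the Intel 4004 of 1971 had about 2,300 transistors), but the projection is only arithmetic, not a claim about actual chips.

```python
# Moore's law as compound growth: doubling every 2 years multiplies
# the transistor count by 2 ** (years / 2).
start = 2_300  # roughly the Intel 4004 of 1971
for years in (10, 20, 40):
    print(years, "years:", f"{int(start * 2 ** (years / 2)):,}")
# 10 years = x32, 20 years = x1,024, 40 years = x1,048,576
```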

Moore’s law hit the buffers once transistors shrank to around 5 nanometres (5 nm): at that size they can no longer switch current reliably because of quantum effects. So we have to think “outside the box” and speed up computers in other ways. These are some of the solutions:

  • Multiple cores – most laptops now have quad-core chips, and most smartphones now have CPUs with at least six cores. Each core can process a different program or program segment, running its own fetch-decode-execute cycle (see the sketch after this list).
  • Multilevel cache – a small number of instructions can be stored in the super-fast level 1 cache right inside the CPU, with further, larger caches called level 2 and level 3 behind it.
  • Intelligent prefetching – the CPU will look ahead in the program and fetch instructions into cache before they are needed, which speeds up execution.
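
Here is a minimal sketch of the multiple-cores idea using Python’s standard-library concurrent.futures module. The prime-counting job and the numbers are invented for the demonstration; the point is that each worker process runs on its own core, executing its own fetch-decode-execute cycles.

```python
# Split a CPU-bound job across cores: one worker process per core.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Deliberately slow, CPU-bound work for the demonstration."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [50_000] * 4                       # four equal pieces of work
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(count_primes, chunks))  # run in parallel
    print(sum(results))
```

On a quad-core machine the four streams really do run at the same time, so the job should finish in roughly a quarter of the single-core time.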

And of course quantum computers, which use the weird properties of subatomic particles, may one day solve problems that no conventional computer could tackle!

What innovation helped ARM chips take over the world?

Reduced Instruction Set Computing (RISC). The ARM chip executed far fewer kinds of instruction than typical CPUs of its day, so its logic circuits were simpler, taking up less space and drawing less power. That made it ideal for mobile devices, just in time for the revolution in device formats as people moved to laptops, smartphones and tablets.

The Acorn RISC Machine (ARM) chip was designed to improve the performance of desktop microcomputers, not to save power and space. But when Apple needed a low-power, cool-running chip for its ill-fated Newton handheld computer, begun in 1987, it chose the ARM. Apple got behind Acorn, and in 1990 the ARM company was spun off. After the ARM-powered iPod launched in 2001, the world went ARM-crazy: by 2010 more than 95% of smartphones contained an ARM processor, and more than 20 billion ARM-based chips were shipped in 2017.

“How to Learn Computer Science” page 130
What do the Manchester Baby and the iPhone still have in common?

From the Baby to ARM, all CPUs still contain an ALU, registers and a control unit, and perform a fetch-decode-execute cycle first described by von Neumann in 1945.

“How to Learn Computer Science” page 132
If you are grateful for my blog, please buy my books here or buy me a coffee at ko-fi.com/mraharrisoncs, thanks!