The history of the central processing unit (CPU) - IBM Blog

The central processing unit (CPU) is the computer's brain. It handles the assignment and processing of tasks, in addition to functions that make a computer run.

There's no way to overstate the importance of the CPU to computing. Virtually all computer systems contain, at the least, some type of basic CPU. Whether they're used in personal computers (PCs), laptops, tablets, smartphones or even in supercomputers whose output is measured in floating-point operations per second, CPUs are the one piece of equipment that can't be sacrificed. No matter what technological advancements occur, the truth remains: if you remove the CPU, you simply no longer have a computer.

In addition to managing computer activity, the CPU mediates the push-and-pull relationship between data storage and memory. It interacts with primary storage (or main memory) whenever it needs to read or write data held in the system's random-access memory (RAM). Read-only memory (ROM), by contrast, is built for permanent and typically long-term data storage.

CPU components

Modern CPUs in electronic computers usually contain the following components:

  • Control unit: Circuitry that directs the rest of the computer system, fetching and interpreting instructions and issuing the electrical signals that tell other components which operations to carry out.
  • Arithmetic/logic unit (ALU): Executes all arithmetic and logical operations, including math equations and logic-based comparisons that are tied to specific computer actions.
  • Memory unit: Manages memory usage and flow of data between RAM and the CPU. Also supervises the handling of the cache memory.
  • Cache: Small areas of memory built into the CPU's processor chip, offering data retrieval speeds even faster than RAM can achieve (a toy cache-lookup sketch follows this list).
  • Registers: Small, built-in storage locations that hold the data and addresses the CPU needs regularly and immediately while executing instructions.
  • Clock: Manages the CPU's circuitry by transmitting electrical pulses. The delivery rate of those pulses is referred to as clock speed, measured in hertz (Hz); modern CPUs run in the gigahertz (GHz) range.
  • Instruction register and pointer: Holds the current instruction and the location of the next instruction to be executed by the CPU.
  • Buses: Ensures proper data transfer and data flow between the components of a computer system.
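
To illustrate the cache bullet above, here is a minimal, purely illustrative Python sketch (not IBM code, and not how real hardware is built) of a tiny cache sitting in front of a larger, slower main memory. The sizes, the eviction rule and the load() helper are invented for the example.

```python
# A toy model of a cache in front of "RAM": repeated accesses to a small
# working set are served from the faster level after the first miss.

RAM = {addr: addr * 2 for addr in range(1024)}   # pretend main memory
CACHE_SIZE = 8                                    # hypothetical, tiny cache
cache = {}                                        # address -> value
hits = misses = 0

def load(addr):
    """Return the value at addr, filling the cache on a miss."""
    global hits, misses
    if addr in cache:
        hits += 1
        return cache[addr]
    misses += 1
    value = RAM[addr]                 # slow path: go out to main memory
    if len(cache) >= CACHE_SIZE:      # evict an arbitrary entry when full
        cache.pop(next(iter(cache)))
    cache[addr] = value
    return value

# A loop that reuses a small working set mostly hits the cache.
for _ in range(100):
    for addr in (0, 1, 2, 3):
        load(addr)

print(f"hits={hits}, misses={misses}")   # hits=396, misses=4
```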

How do CPUs work?

CPUs function by running a repeated command cycle that is administered by the control unit in coordination with the computer clock, which keeps everything synchronized.

The work a CPU does follows an established cycle, called the CPU instruction cycle. The CPU repeats this cycle continuously, as many times per second as the computer's clock speed and processing power permit.

The basic computing instructions include the following:

  • Fetch: Fetches occur anytime data is retrieved from memory.
  • Decode: The decoder within the CPU translates binary instructions into electrical signals that engage with other parts of the CPU.
  • Execute: Execution occurs when the CPU interprets and carries out a program's instruction (a toy simulation of the full cycle follows this list).
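
As a rough illustration of this fetch-decode-execute cycle, the following toy Python simulation steps a made-up "program" through the three phases. The instruction format, register names and opcodes are invented for the example; real CPUs decode binary machine instructions, not tuples.

```python
# A minimal, hypothetical sketch of the fetch-decode-execute cycle.

program = [                      # pretend this sits in main memory
    ("LOAD", "A", 7),            # A <- 7
    ("LOAD", "B", 5),            # B <- 5
    ("ADD",  "A", "B"),          # A <- A + B   (the ALU's job)
    ("PRINT", "A", None),
    ("HALT", None, None),
]

registers = {"A": 0, "B": 0}     # tiny register file
pc = 0                           # instruction pointer / program counter

while True:
    instruction = program[pc]            # FETCH: read the next instruction
    opcode, op1, op2 = instruction       # DECODE: work out what to do
    pc += 1

    # EXECUTE: carry the instruction out
    if opcode == "LOAD":
        registers[op1] = op2
    elif opcode == "ADD":
        registers[op1] = registers[op1] + registers[op2]
    elif opcode == "PRINT":
        print(op1, "=", registers[op1])  # prints "A = 12"
    elif opcode == "HALT":
        break
```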

With some basic tinkering, the clock that drives a CPU can be set to run faster than its rated speed. Some users do this to run their computers at higher speeds. However, this practice ("overclocking") is not advisable, since it can cause computer parts to wear out earlier than normal and can even violate CPU manufacturer warranties.

Processing styles are also subject to tweaking. One approach is instruction pipelining, which seeks to instill instruction-level parallelism in a single processor. The goal of pipelining is to keep every part of the processor busy by breaking incoming instructions into smaller steps and overlapping those steps across the processor's stages.
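
To make the pipelining idea concrete, here is a rough, back-of-the-envelope Python sketch, purely illustrative and not tied to any particular processor, comparing how many clock cycles a batch of instructions takes with and without a simple three-stage pipeline. Real pipelines have more stages, plus stalls and hazards that this ignores.

```python
# Pipelining in miniature: with a 3-stage pipeline (fetch, decode, execute),
# a new instruction can start every cycle instead of waiting for the
# previous instruction to finish all three stages.

STAGES = ["fetch", "decode", "execute"]

def cycles_without_pipeline(num_instructions):
    # each instruction occupies the whole processor for all stages
    return num_instructions * len(STAGES)

def cycles_with_pipeline(num_instructions):
    # the first instruction fills the pipeline, then one finishes per cycle
    return len(STAGES) + (num_instructions - 1)

n = 10
print(cycles_without_pipeline(n))  # 30 cycles
print(cycles_with_pipeline(n))     # 12 cycles (ignoring stalls and hazards)
```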

Another method for achieving instruction-level parallelism inside a single processor is the superscalar processor. Whereas a scalar processor can execute at most one instruction per clock cycle, a superscalar processor can dispatch more than one instruction per cycle, sending them to several of the processor's execution units and thereby boosting throughput.
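
Here is a similarly hedged sketch of the superscalar idea: if a processor can dispatch a given number of independent instructions per clock cycle, a batch of work finishes in proportionally fewer cycles. The dispatch widths below are illustrative assumptions; real superscalar designs are limited by dependencies between instructions and by the number of available execution units.

```python
# Comparing a scalar design (width = 1) with hypothetical superscalar
# widths, ignoring dependencies, stalls and limited execution units.

import math

def cycles_needed(num_instructions, dispatch_width):
    return math.ceil(num_instructions / dispatch_width)

for width in (1, 2, 4):
    print(f"width={width}: {cycles_needed(100, width)} cycles for 100 instructions")
# width=1: 100 cycles, width=2: 50 cycles, width=4: 25 cycles
```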

Who invented the CPU?

Breakthrough technologies often have more than one parent. The more complex and earth-shaking the technology, the more individuals are usually responsible for its birth.

In the case of the CPU, one of history's most important inventions, we're really talking about who invented the computer itself.

Anthropologists use the term "independent invention" to describe situations where different individuals, who may be countries apart and working in relative isolation, each come up with similar or complementary ideas or inventions without knowing about the parallel experiments taking place.

In the case of the CPU (or computer), independent invention has occurred repeatedly, leading to different evolutionary shifts during CPU history.

Twin giants of computing

While this article can't honor all the early pioneers of computing, there are two people whose lives and work need to be illuminated. Both had a direct connection to computing and the CPU:

Grace Hopper: Saluting "Grandma COBOL"

American Grace Brewster Hopper (1906-1992) weighed a mere 105 pounds when she enlisted in the US Navy, 15 pounds under the required minimum weight. And in one of US naval history's wisest decisions, the Navy granted an exemption and took her anyway.

What Grace Hopper lacked in physical size, she made up for with energy and versatile brilliance. She was a polymath of the first order: a gifted mathematician who earned a Ph.D. from Yale University in mathematics and mathematical physics, a noted professor of mathematics at Vassar College, a pioneering computer scientist credited with writing a computer language and authoring the first computer manual, and a naval commander (at a time when women rarely rose above administrative roles in the military).

Because of her work on the leading computer projects of her time, such as the development of the UNIVAC computer after WWII, Hopper always seemed to be in the thick of the action, at the right place at the right time. She personally witnessed much of modern computing history. She is also credited with popularizing the term "computer bug," after describing an actual moth that had become caught in a piece of computing equipment. (The original moth remains on display at the Smithsonian Institution's National Museum of American History in Washington, DC.)

While working on the UNIVAC project (and later running it for the Remington Rand Corporation), Hopper grew frustrated that there was no simpler, more English-like programming language to use. So she set about writing one of her own, and that work led directly to COBOL (an acronym for COmmon Business-Oriented Language).

Robert Noyce: The Mayor of Silicon Valley

Robert Noyce was a mover and shaker in the classic business sense: a person who could make amazing activity start happening just by showing up.

American Robert Noyce (1927-1990) was a whiz-kid boy inventor. He later channeled his intellectual curiosity into his undergraduate work, especially after being shown two of the original transistors created by Bell Laboratories. By age 26, Noyce had earned a Ph.D. in physics from the Massachusetts Institute of Technology (MIT).

In 1959, he followed up on Jack Kilby's 1958 invention of the first hybrid integrated circuit by making substantial changes to the original design. Noyce's improvements led to a new kind of device: the monolithic integrated circuit (also called the microchip), which was fabricated from silicon. The silicon chip soon became a revelation, changing industries and shaping society in new ways.

Noyce co-founded two hugely successful corporations during his business career: Fairchild Semiconductor Corporation (1957) and Intel (1968). He was the first CEO of Intel, which is still known globally for manufacturing processing chips.

His partner in both endeavors was Gordon Moore, who became famous for a prediction about the semiconductor industry that proved so reliable it has seemed almost like an algorithm. Called "Moore's Law," it posited that the number of transistors within an integrated circuit reliably doubles about every two years.
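
As a quick worked example of the arithmetic behind Moore's Law, the short sketch below projects transistor counts forward from an assumed starting point (the roughly 2,300 transistors of the Intel 4004, used here only for illustration), doubling every two years.

```python
# Back-of-the-envelope Moore's Law projection: count * 2^(years / 2).

def projected_transistors(start_count, start_year, year, doubling_period=2):
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

print(round(projected_transistors(2_300, 1971, 1981)))   # ~73,600
print(round(projected_transistors(2_300, 1971, 1991)))   # ~2.4 million
```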

While Noyce oversaw Intel, the company produced the Intel 4004, now recognized as the chip that launched the microprocessor revolution of the 1970s. The creation of the Intel 4004 was a three-way collaboration among Intel's Ted Hoff, Stanley Mazor and Federico Faggin, and it became the first commercially available microprocessor.

Late in his tenure, the company also produced the Intel 8080, its second 8-bit microprocessor, which first appeared in April 1974. A few years later, in 1978, the manufacturer rolled out the Intel 8086, a 16-bit microprocessor.

During his illustrious career, Robert Noyce amassed 12 patents for various creations and was honored by three different US presidents for his work on integrated circuits and the massive global impact they had.

ENIAC: Marching off to war

It seems overly dramatic, but in 1943, the fate of the world truly was hanging in the balance. The outcome of World War II (1939-1945) was still very much undecided, and both the Allied and Axis forces were eagerly scouting any kind of technological advantage to gain leverage over the enemy.

Computing devices were still in their infancy when the US government launched a project that was, in its own way, as monumental as the Manhattan Project. The government hired a group of engineers from the Moore School of Electrical Engineering at the University of Pennsylvania and tasked them with building an electronic computer capable of calculating the yardage figures needed for artillery range tables.

The project was led by John Mauchly and J. Presper Eckert, Jr. at the military's request. Work began in early 1943 and didn't end until three years later.

The creation produced by the project, dubbed ENIAC (which stood for "Electronic Numerical Integrator and Computer"), was a massive installation requiring 1,500 sq. ft. of floor space, not to mention 17,000 glass vacuum tubes, 70,000 resistors, 10,000 capacitors, 6,000 switches and 1,500 relays. In 2024 currency, the project would have cost USD 6.7 million.

It could perform up to 5,000 simple operations per second (depending on the operation), an amazing figure from that historical vantage point. And the ENIAC was so large that operators could literally stand inside it, programming the machine by rewiring connections between its functional units.

ENIAC was completed too late to see service in WWII itself. But when that conflict ended, the Cold War began and ENIAC was given new marching orders. This time it would perform calculations that would help enable the building of a bomb with more than a thousand times the explosive force of the atomic weapons that ended WWII: the hydrogen bomb.

UNIVAC: Getting back to business

Following WWII, the two leaders of the ENIAC project decided to set up shop and bring computing to American business. The newly dubbed Eckert-Mauchly Computer Corporation (EMCC) set out to prepare its flagship product: a smaller and cheaper version of the ENIAC, with various improvements like added tape drives, a keyboard and a converter device that accepted punch cards.

Though sleeker than the ENIAC, the UNIVAC unveiled to the public in 1951 was still mammoth, weighing over 8 tons and drawing 125 kW of power. And it was still expensive: around USD 11.6 million in today's money.

Its central processing unit was developed at the same time as the rest of the project and, like the rest of the machine, was built from glass vacuum tubes, making the CPU large, unwieldy and slow.

The original batch of UNIVAC I machines was limited to a run of 11, meaning that only the biggest, best-funded and best-connected companies or government agencies could gain access to one. Nearly half of those went to US defense agencies, such as the US Air Force and the Central Intelligence Agency (CIA). The very first machine was purchased by the US Census Bureau.

CBS News had one of the machines and famously used it to correctly predict the outcome of the 1952 US Presidential election, against long-shot odds. It was a bold publicity stunt that introduced the American public to the wonders that computers could do.

Transistors: Going big by going small

As computing became more widely adopted and celebrated, its main weakness was clear: CPUs had an ongoing issue with the vacuum tubes they relied on. It was really a mechanical issue: glass vacuum tubes were extremely delicate and prone to routine breakage.

The problem was so pronounced that tube manufacturers went to great lengths to provide workarounds for their many agitated customers, whose computers stopped dead without working tubes.

Tubes were regularly tested at the factory and subjected to varying amounts of use and abuse, and the "toughest" tubes from those batches were held in reserve, ready for emergency customer requests.

The other problem with vacuum tubes in CPUs was the size of the computing machines themselves. The tubes were bulky, and designers craved a way to get the processing power of the tube from a much smaller device.

By 1953, a research student at the University of Manchester had shown that a completely transistor-based computer could be built.

Early transistors were hard to work with, in large part because they were made from germanium, a material that was tricky to purify and had to be kept within a precise temperature range.

Bell Laboratories scientists started experimenting with other materials, including silicon, in 1954. Two Bell scientists, Mohamed Atalla and Dawon Kahng, kept refining their use of silicon and by 1960 had arrived at the metal-oxide-semiconductor field-effect transistor (MOSFET, or MOS transistor). The modern MOS transistor has been celebrated by the Computer History Museum as the "most widely manufactured device in history." In 2018 it was estimated that 13 sextillion MOS transistors had been manufactured.

The advent of the microprocessor

The quest for miniaturization continued until computer scientists created a CPU so small that it could be contained within a small integrated circuit chip, called the microprocessor.

Microprocessors are often described by the number of cores they contain. A CPU core is the "brain within the brain": an individual physical processing unit inside the CPU. A microprocessor can contain multiple cores on a single chip occupying a single socket, allowing all of the cores to share the same memory and computing environment.

Here are some of the other main terms used in relation to microprocessors:

  • Single-core processors: Single-core processors contain a single processing unit. They are typically marked by slower performance, run on a single thread and perform the CPU instruction cycle one at a time.
  • Dual-core processors: Dual-core processors are equipped with two processing units contained within one integrated circuit. Both cores can run at the same time, which can nearly double performance for workloads that parallelize well.
  • Quad-core processors: Quad-core processors contain four processing units within a single integrated circuit. All four cores can run simultaneously, further multiplying the performance available to parallel workloads.
  • Multi-core processors: Multi-core processors are integrated circuits equipped with at least two processor cores, so they can deliver strong performance with optimized power consumption (see the sketch after this list).
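
As a small illustration of how software can take advantage of multiple cores, the hedged Python sketch below uses the standard-library multiprocessing module to spread an invented CPU-bound task across the available cores. The task and job sizes are arbitrary, and any speedup depends heavily on the workload; it is not an automatic doubling or quadrupling.

```python
# Spreading independent, CPU-bound work across physical cores.

from multiprocessing import Pool, cpu_count

def busy_work(n):
    """A deliberately CPU-bound task used only for illustration."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [2_000_000] * 8
    print(f"available cores: {cpu_count()}")
    with Pool() as pool:                     # one worker per core by default
        results = pool.map(busy_work, jobs)  # jobs run on separate cores
    print(len(results), "tasks completed")
```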

Leading CPU manufacturers

Several companies now build CPUs under different brand lines. However, this market niche has changed dramatically: it formerly attracted numerous players, including mainstream manufacturers such as Motorola, but today the PC and server CPU market is dominated by just a couple of main players: Intel and AMD.

Both companies build their processors around the x86 instruction set architecture (ISA), a Complex Instruction Set Computer (CISC) design, although modern chips from both vendors internally break x86 instructions down into simpler, RISC-like micro-operations.

  • Advanced Micro Devices (AMD): AMD sells processors and microprocessors through two product types: CPUs and APUs (accelerated processing units), APUs being CPUs equipped with proprietary Radeon graphics. AMD's Ryzen processors are high-speed, high-performance microprocessors aimed at the video-game market. The Athlon line was formerly considered AMD's high-end offering, but AMD now positions it as a general-purpose alternative.
  • Arm: Arm doesn't actually manufacture equipment; instead it licenses its valued processor designs and other proprietary technologies to companies that do. Apple, for example, no longer uses Intel chips in Mac CPUs but makes its own customized processors based on Arm designs. Other companies are following suit.
  • Intel: Intel sells processors and microprocessors through four product lines. Its premium line is Intel Core, with processor models such as the Core i3. Intel's Xeon processors are marketed toward offices and businesses. Intel's Celeron and Pentium lines (represented by models such as the Pentium 4 single-core CPUs) are considered slower and less powerful than the Core line.

Understanding the dependable role of CPUs

When considering CPUs, we can think about the various components that CPUs contain and use. We can also contemplate how CPU design has moved from its early super-sized experiments to its modern period of miniaturization.

But despite any transformations to its dimensions or appearance, the CPU remains steadfastly itself, still on the job, because it's so good at its particular job. You know you can trust it to work correctly, each time out.

Smart computing depends upon having proper equipment you can rely upon. IBM builds its servers strong, to withstand any problems the modern workplace can throw at them. Find the IBM servers you need to get the results your organization relies upon.

Explore IBM servers
