
Central Processing Unit.

Published on May 11, 2013


I INTRODUCTION

Central Processing Unit (CPU), in computer science, microscopic circuitry that serves as the main information processor in a computer. A CPU is generally a single microprocessor made from a wafer of semiconducting material, usually silicon, with millions of electrical components on its surface. On a higher level, the CPU is actually a number of interconnected processing units, each responsible for one aspect of the CPU's function. Standard CPUs contain processing units that interpret and implement software instructions, perform calculations and comparisons, make logical decisions (determining whether a statement is true or false based on the rules of Boolean algebra), temporarily store information for use by another of the CPU's processing units, keep track of the current step in the execution of the program, and allow the CPU to communicate with the rest of the computer.

II HOW A CPU WORKS

A CPU Function

A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device, such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer's main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data it wants.

As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder, which interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers, where they can be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing them in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the program instructions are followed by the CPU in the correct order.

B Branching Instructions

The program counter in the CPU usually advances sequentially through the instructions. However, special instructions called branch or jump instructions allow the CPU to shift abruptly to an instruction location out of sequence. These branches are either unconditional or conditional. An unconditional branch always jumps to a new, out-of-order instruction stream. A conditional branch tests the result of a previous operation to see whether the branch should be taken. For example, a branch might be taken only if the result of a previous subtraction produced a negative result. Data that are tested for conditional branching are stored in special locations in the CPU called flags.
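The fetch-decode-execute cycle of subsection A and the flag-based conditional branch of subsection B can be illustrated with a short simulation. The sketch below is a minimal toy, not a model of any real CPU: the four-instruction set, the register names, and the single negative flag are invented purely for illustration.

```python
# Minimal sketch of a CPU's fetch-decode-execute loop. The instruction
# set (LOAD, SUB, JMP, JNEG), the registers, and the negative flag are
# hypothetical, chosen only to show the roles of the program counter,
# instruction decoder, ALU, registers, and conditional branching.

def run(program, registers):
    pc = 0                    # program counter: index of the next instruction
    negative = False          # flag set by the ALU after each subtraction
    while pc < len(program):
        op, *args = program[pc]   # fetch the instruction and decode it
        pc += 1                   # by default, advance sequentially
        if op == "LOAD":          # LOAD reg, value
            registers[args[0]] = args[1]
        elif op == "SUB":         # SUB dst, src: dst = dst - src (in the ALU)
            registers[args[0]] -= registers[args[1]]
            negative = registers[args[0]] < 0
        elif op == "JMP":         # unconditional branch: always jump
            pc = args[0]
        elif op == "JNEG":        # conditional branch: jump only if the
            if negative:          # negative flag is set
                pc = args[0]
    return registers

# Repeatedly subtract r1 from r0, leaving the loop once r0 goes negative.
program = [
    ("LOAD", "r0", 10),
    ("LOAD", "r1", 3),
    ("SUB", "r0", "r1"),   # address 2: the loop body
    ("JNEG", 5),           # taken once the subtraction goes negative
    ("JMP", 2),            # otherwise branch back to the subtraction
]
print(run(program, {}))    # {'r0': -2, 'r1': 3}
```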
C Clock Pulses

The CPU is driven by one or more repetitive clock circuits that send a constant stream of pulses throughout the CPU's circuitry. The CPU uses these clock pulses to synchronize its operations. The smallest increments of CPU work are completed between sequential clock pulses; more complex tasks take several clock periods to complete. Clock rates are measured in hertz, or pulses per second. For instance, a 2-gigahertz (2-GHz) processor has 2 billion clock pulses passing through it per second. The clock rate is one measure of a processor's speed.

D Fixed-Point and Floating-Point Numbers

Most CPUs handle two different kinds of numbers: fixed-point and floating-point. Fixed-point numbers have a specific number of digits on either side of the decimal point. This restriction limits the range of values these numbers can take, but it also allows for the fastest arithmetic. Floating-point numbers are expressed in scientific notation, in which a number is represented as a decimal number multiplied by a power of ten. Scientific notation is a compact way of expressing very large or very small numbers and allows a wide range of digits before and after the decimal point. This is important for representing graphics and for scientific work, but floating-point arithmetic is more complex and can take longer to complete. Performing an operation on a floating-point number may require many CPU clock periods, so a CPU's floating-point computation rate is lower than its clock rate. Some computers use a special floating-point processor, called a coprocessor, that works in parallel with the CPU to speed up calculations on floating-point numbers. Such a coprocessor has become standard on many personal computer CPUs, such as Intel's Pentium chip.
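A short calculation makes these figures concrete: the sketch below works out the cycle time of the 2-GHz processor mentioned above and shows why floating-point throughput falls below the clock rate, then demonstrates fixed-point arithmetic as scaled integers. The eight-cycle cost per floating-point operation and the two-decimal-digit scale factor are illustrative assumptions, not figures from the article.

```python
# Clock rate vs. floating-point throughput. The 2 GHz rate is from the
# text; 8 cycles per floating-point operation is an assumed, purely
# illustrative cost, not a figure for any real CPU.
clock_hz = 2_000_000_000              # 2 GHz: 2 billion pulses per second
cycle_time_s = 1 / clock_hz           # 0.5 nanoseconds between pulses
cycles_per_flop = 8                   # assumed cycles per floating-point op
flops = clock_hz / cycles_per_flop    # 250 million ops/s, below the clock rate
print(cycle_time_s, flops)            # 5e-10 250000000.0

# Fixed-point arithmetic as scaled integers: every value carries exactly
# two digits after the decimal point (scale factor 100), so fast integer
# hardware does all the work.
SCALE = 100
a = 312                               # represents 3.12
b = 250                               # represents 2.50
print((a + b) / SCALE)                # 5.62: addition needs no rescaling
print((a * b) // SCALE / SCALE)       # 7.8: multiplication divides out one scale

# Floating point: a decimal number times a power of ten, trading extra
# work per operation for a far wider range of magnitudes.
print(6.02e23 * 1.6e-19)              # ~9.632e4
```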
III HISTORY

A Early Computers

In the first computers, CPUs were made of vacuum tubes and electric relays rather than microscopic transistors on computer chips. These early computers were immense and needed a great deal of power compared with today's microprocessor-driven computers. The first general-purpose electronic computer, the ENIAC (Electronic Numerical Integrator And Computer), was introduced in 1946 and filled a large room. About 18,000 vacuum tubes were used to build ENIAC's CPU and input/output circuits. Between 1946 and 1956 all computers had bulky CPUs that consumed massive amounts of energy and needed continual maintenance, because the vacuum tubes burned out frequently and had to be replaced.

B The Transistor

A solution to the problems posed by vacuum tubes came in 1948, when American physicists John Bardeen, Walter Brattain, and William Shockley first demonstrated a revolutionary new electronic switching and amplifying device called the transistor. The transistor had the potential to work faster and more reliably and to consume much less power than a vacuum tube. Despite the overwhelming advantages transistors offered over vacuum tubes, it took nine years before they were used in a commercial computer. The first commercially available computer to use transistors in its circuitry was the UNIVAC (UNIVersal Automatic Computer), delivered to the United States Air Force in 1956.

C The Integrated Circuit

Development of the computer chip started in 1958, when Jack Kilby of Texas Instruments demonstrated that it was possible to integrate the various components of a CPU onto a single piece of silicon. These computer chips were called integrated circuits (ICs) because they combined multiple electronic circuits on the same chip. Subsequent design and manufacturing advances allowed transistor densities on integrated circuits to increase tremendously. The first ICs had only tens of transistors per chip, compared with the millions or even billions of transistors per chip on today's CPUs.

In 1967 Fairchild Semiconductor introduced a single integrated circuit that contained all the arithmetic logic functions for an eight-bit processor. (A bit is the smallest unit of information used in computers. Multiples of a bit are used to describe the largest piece of data that a CPU can manipulate at one time.) However, a fully working integrated-circuit computer required additional circuits to provide register storage, data-flow control, and memory and input/output paths. Intel Corporation accomplished this in 1971 when it introduced the Intel 4004 microprocessor. Although the 4004 could only manage four-bit arithmetic, it was powerful enough to become the core of many useful hand calculators of the time.

In 1975 Micro Instrumentation and Telemetry Systems introduced the Altair 8800, the first personal computer kit to feature an eight-bit microprocessor. Because microprocessors were so inexpensive and reliable, computing technology rapidly advanced to the point where individuals could afford to buy a small computer. The concept of the personal computer was made possible by the advent of the microprocessor CPU. In 1978 Intel introduced the first of its x86 CPUs, the 8086 16-bit microprocessor. Although 32-bit microprocessors are most common today, microprocessors are becoming increasingly sophisticated, with many 64-bit CPUs available. High-performance processors can run with internal clock rates that exceed 3 GHz, or 3 billion clock pulses per second.

IV CURRENT DEVELOPMENTS

The competitive nature of the computer industry and the demand for faster, more cost-effective computing continue the drive toward faster CPUs. The minimum transistor size that can be manufactured using current technology is fast approaching the theoretical limit. In the standard technique for microprocessor manufacture, ultraviolet (short-wavelength) light is used to expose a light-sensitive covering on the silicon chip. Various methods are then used to etch the base material along the pattern created by the light. These etchings form the paths that electricity follows in the chip. The theoretical limit for transistor size using this type of manufacturing process is approximately equal to the wavelength of the light used to expose the light-sensitive covering. By using light of shorter wavelength, greater detail can be achieved and smaller transistors can be manufactured, resulting in faster, more powerful CPUs. Printing integrated circuits with X-rays, which have a much shorter wavelength than ultraviolet light, may provide further reductions in transistor size that would translate into improvements in CPU speed.

Many other avenues of research are being pursued in the attempt to make faster CPUs. New base materials for integrated circuits, such as composite layers of gallium arsenide and gallium aluminum arsenide, may contribute to faster chips. Alternatives to the standard transistor-based model of the CPU are also being considered. Experimental ideas in computing may radically change the design of computers and the concept of the CPU in the future. These ideas include quantum computing, in which single atoms hold bits of information; molecular computing, in which certain types of problems may be solved using recombinant DNA techniques; and neural networks, computer systems with the ability to learn.
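The wavelength limit described above can be made concrete with a back-of-the-envelope comparison. The article gives no specific figures, so the wavelengths below are assumptions chosen for illustration: deep-ultraviolet lithography has used light around 193 nm, while X-rays have wavelengths on the order of 1 nm.

```python
# Rough comparison of exposure wavelengths, assuming (as the text states)
# that the smallest printable feature is roughly one wavelength across.
# Both figures are illustrative assumptions, not values from the article.
uv_wavelength_nm = 193      # assumed deep-ultraviolet exposure wavelength
xray_wavelength_nm = 1.0    # assumed X-ray wavelength

# Moving from UV to X-ray exposure would shrink the minimum feature
# size, and hence the smallest transistor, by about this factor:
print(uv_wavelength_nm / xray_wavelength_nm)   # ~193x
```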
Contributed By: Peter M. Kogge

Microsoft ® Encarta ® 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
