Computer Hardware
1.1 History
The notion of a mechanical calculator is due to Charles Babbage. In 1821 he proposed his Difference Engine, consisting of a set of linked adding mechanisms. Its primary purpose was to manipulate polynomials (by computing finite differences). He built a prototype which could compute second-order differences, and hence values of quadratic polynomials. Although he designed a machine capable of dealing with a sixth-degree polynomial, he never built it. His Analytical Engine is the forerunner of the architecture of modern computers: it had a storage unit, an arithmetic unit (the "mill") and punched-card input/output. The building technology was based on gears and linkages. By his own estimate, an addition could be done in a second and a multiplication in a minute. Babbage died in 1871. His designs were never implemented, since his ideas were far too advanced for the technology of his time.
First Generation Machines (1945-58)
These machines used thermionic valves. Two prototypes were built at the
Second Generation Machines (1959-65)
In the second-generation machines, the vacuum tube gave way to the transistor, developed by the American physicists Bardeen, Brattain and Shockley of AT&T Bell Labs, who shared the Nobel Prize in 1956 for their invention. A transistor is much smaller than a vacuum tube, so a machine of the same size could house many more active components. Transistors were also very reliable and produced very little heat, and computers based on them were very compact. The GDT (gate delay time) for such machines had been reduced to 0.3 microseconds. A typical machine of this generation is the IBM 7090. The minicomputer (pioneered by DEC) also belongs to this generation.
Third Generation Machines (1965-1975)
A new component technology entered computing around 1965: the solid-state integrated circuit, invented by Jack Kilby at Texas Instruments. An IC the size of an ice cube can contain hundreds of miniature transistors. The chief characteristic of these circuits is the silicon chip, on which several components are combined in a single wafer. In the beginning only a few gates were placed on such a chip, with a GDT of 10 nanoseconds (1 ns = 10⁻⁹ seconds). By 1975 the GDT had improved to 1 nanosecond while the number of gates per chip continued to increase. Typical machines of this generation are the hugely successful IBM 360 series, the Burroughs 6500 and the UNIVAC 1108.
Fourth Generation Computers (1980 - )
This age belongs to the microchip, more commonly known as VLSI - the very large scale integrated circuit. A microchip the size of a postage stamp contains hundreds of thousands of electronic components. This is the age of the micro, or "personal", computer, which is still evolving.
The GDT had thus improved from 1 microsecond to 1 nanosecond (a 1000-fold decrease) in 30 years. Typical machine performances from these generations are:
1st generation: 100 arithmetic operations per second (EDSAC)
2nd generation: 100,000 operations per second (IBM 7090)
3rd generation: 10 million operations per second (10 MFLOPS, IBM 360)
1.2 The von Neumann Architecture
All the computers designed in these generations, including the Babbage Analytical Engine, were based on essentially one model of architecture: the von Neumann model. This consisted essentially of one each of the following:
[Figure: block diagram of the von Neumann model, with data and control paths linking the units below]
I/O : input/output device
MEMORY : storage for data and instructions
CU : control unit for instruction interpretation
ALU : arithmetic and logic unit for processing data.
(Dotted lines: data; solid lines: control)
In this model, under the control of the control unit, the following operations are repeated over and over until a "stop" instruction is reached (a toy simulation in C follows the list):
1. Read an instruction from memory;
2. Read any data required by the instruction from memory;
3. Perform the operation(s) on the data;
4. Store results in memory;
5. Go to 1.
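To make the cycle concrete, here is a toy simulation in C. The four-instruction set (LOAD, ADD, STORE, HALT), the memory layout and all names are invented for illustration; real instruction sets are far richer.

    #include <stdio.h>

    /* Hypothetical 4-instruction machine: code and data share one memory. */
    enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

    int main(void) {
        /* Each instruction occupies two cells: an opcode, then an address. */
        int mem[16] = {
            LOAD,  10,    /* 0: acc = mem[10]         */
            ADD,   11,    /* 2: acc = acc + mem[11]   */
            STORE, 12,    /* 4: mem[12] = acc         */
            HALT,  0,     /* 6: stop                  */
            0, 0,
            7, 35, 0      /* 10, 11: data; 12: result */
        };
        int pc  = 0;      /* program counter: address of next instruction */
        int acc = 0;      /* accumulator inside the ALU                   */

        for (;;) {
            int op   = mem[pc];     /* step 1: read an instruction from memory */
            int addr = mem[pc + 1];
            pc += 2;
            if (op == HALT) break;  /* ...repeated until a "stop" instruction  */
            switch (op) {
            case LOAD:  acc = mem[addr];       break;  /* step 2: read data    */
            case ADD:   acc = acc + mem[addr]; break;  /* step 3: operate      */
            case STORE: mem[addr] = acc;       break;  /* step 4: store result */
            }
        }
        printf("mem[12] = %d\n", mem[12]);  /* prints 42 */
        return 0;
    }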
The hardware of modern-day computers acts in a much more complex manner; however, for purposes of understanding, this simple model suffices. The important fact to note is that the processing speed of the machine is limited by the rate at which instructions and data can be transferred from memory to the processing unit. This narrow connection between the instructions and data held in memory and the single processing unit forms the so-called von Neumann bottleneck.
In spite of the bottleneck, over these generations of computer evolution the processing speed of machines, measured by the number of operations per second, improved by a factor of about 10⁵. There has been a real need for ever faster machines to solve larger and more demanding problems.
1.3 Memory and the Bus System
Operation of a computer requires a means of storing binary encoded information. This includes representations of numerical or other data as well as machine language programs as discussed above. Computer memory serves this function.
Binary data within the computer is packaged into units of a fixed number of bits. Almost always this unit is 8 bits (a byte), so we will assume that the computer stores binary data as a sequence of bytes. In order to distinguish one byte from another, each memory location is associated with a specific binary string, its "address".
Example: Suppose a computer uses 4-bit addresses. What is the maximum number of bytes that can be stored in memory at any one time?
Solution: Since there are 2⁴ different 4-bit binary codes, 16 bytes of memory can be addressed. The figure shows a diagram of memory in which every byte is associated with a 4-bit address. The address of the first byte is 0000b, the address of the second byte is 0001b, and so on through the maximum number of storage locations available. The address of the last storage location is 1111b.
[Figure: 16 bytes of memory, each labelled with its 4-bit address from 0000b to 1111b]
Example: (a) What is the largest possible memory size for a computer which uses 10-bit addresses? (b) What about 20-bit addresses?
Solution: The solution to (a) is 2¹⁰ = 1024 bytes (1 kilobyte, or 1 KB) and the solution to (b) is 2²⁰ = 1,048,576 bytes (1 megabyte, or 1 MB).
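The pattern behind both examples is that n address bits can form 2^n distinct bit strings, so an n-bit address can reach 2^n bytes. A minimal C sketch of the calculation (the function name is ours):

    #include <stdio.h>

    /* 2^n distinct addresses from an n-bit address (valid while n is
       smaller than the bit width of unsigned long). */
    unsigned long addressable_bytes(unsigned n_bits) {
        return 1UL << n_bits;
    }

    int main(void) {
        unsigned widths[] = { 4, 10, 20 };
        for (int i = 0; i < 3; i++)
            printf("%2u-bit addresses -> %lu bytes\n",
                   widths[i], addressable_bytes(widths[i]));
        return 0;   /* 16 bytes; 1024 bytes (1 KB); 1048576 bytes (1 MB) */
    }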
The bus system is composed of three components: the address bus, the data bus and the control bus. Data travels between the CPU, memory and I/O over the data bus. The CPU uses the address bus to specify a unique address corresponding to a specific memory location or a specific input/output device. The CPU uses the control bus to indicate whether data is to go into or out of the CPU, and also to synchronize the actual data transfer between the CPU, memory and I/O.
When the CPU performs an operation, as directed by a program instruction, memory may serve as either the source or the destination of an operand. The CPU accesses memory in the following steps (a sketch in C follows the list):
(i) The CPU uses the address bus to communicate the address of the relevant memory location to computer memory.
(ii) The CPU uses the control bus to send a control signal to computer memory specifying whether the access is to be a "READ" or a "WRITE". By convention, movement of data to the CPU is always referred to as a READ; movement of data from the CPU is always referred to as a WRITE.
(iii) Data is placed on the data bus, either by memory (a READ) or by the CPU (a WRITE).
(iv) Data is retrieved from the data bus by the other component.
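As a rough illustration of steps (i)-(iv), this C sketch models one bus transaction as a function of the address and control lines. The types and names are invented for illustration; on real hardware these are electrical signals, not function arguments.

    #include <stdio.h>
    #include <stdint.h>

    typedef enum { READ, WRITE } Control;    /* control-bus signal  */

    static uint8_t ram[16];                  /* 4-bit address space */

    /* Memory's side of a transaction: it samples the address and control
       buses, then either drives the data bus (READ) or latches it (WRITE). */
    uint8_t memory_cycle(uint8_t address, Control ctrl, uint8_t data_bus) {
        if (ctrl == WRITE) {
            ram[address] = data_bus;   /* CPU drove the data bus     */
            return data_bus;
        }
        return ram[address];           /* memory drives the data bus */
    }

    int main(void) {
        memory_cycle(0x3, WRITE, 42);            /* CPU writes 42 to 0011b */
        uint8_t v = memory_cycle(0x3, READ, 0);  /* CPU reads it back      */
        printf("mem[0011b] = %u\n", (unsigned)v);  /* prints 42            */
        return 0;
    }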
There are two kinds of computer memory. Random Access Memory (RAM) is the general sort of memory discussed above. Read Only Memory (ROM) differs from RAM in that the binary data in this sort of memory is placed there by the "factory" and cannot be changed by the CPU. RAM is sometimes called volatile memory: when the computer is turned off, any data in RAM is lost. ROM is sometimes called non-volatile memory since it always contains the same data. When a computer is first powered up, the program stored at a particular address is always executed first. This must be the address of a location in ROM; otherwise the computer will behave unpredictably.
REMARKS:
(1) Memory can be accessed one byte at a time by specifying its byte address. However, some computers allow access to multiple consecutive bytes by specifying a single byte address; depending on the computer, either the lowest or the highest byte address of the group is the one specified.
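This choice is related to byte order, or "endianness": whether the least or the most significant byte of a multi-byte value sits at the lowest address. A short standard-C sketch reveals the convention of whatever machine runs it:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t word = 0x11223344;          /* a 4-byte value       */
        uint8_t *bytes = (uint8_t *)&word;   /* its individual bytes */

        /* Print the bytes in order of increasing address. On a
           little-endian machine byte 0 is 0x44 (least significant byte
           at the lowest address); on a big-endian machine it is 0x11. */
        for (int i = 0; i < 4; i++)
            printf("byte at offset %d: 0x%02X\n", i, (unsigned)bytes[i]);
        return 0;
    }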
(2) Memory may sometimes be organized into "segments". Segmentation gives a logical separation of machine instructions and data.
(3) All computers use an area of RAM as a "stack". Access to the stack is normally restricted to a "push" operation, which writes data onto the stack, and a "pop" operation, which retrieves from the stack the last value pushed. This type of access is referred to as "last-in-first-out" (LIFO). These stack operations may be invoked explicitly by the programmer and may be used by the CPU in some situations.
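A minimal sketch of last-in-first-out access in C, using an array as the stack region and an index as the stack pointer. Real hardware stacks usually grow downward from high addresses; this illustrative one grows upward, and its size and names are ours.

    #include <stdio.h>

    #define STACK_SIZE 8
    static int stack[STACK_SIZE];
    static int sp = 0;               /* stack pointer: next free slot */

    /* push: write a value onto the stack (ignoring overflow for brevity) */
    void push(int value) {
        if (sp < STACK_SIZE) stack[sp++] = value;
    }

    /* pop: retrieve the last value pushed (0 on underflow, for brevity) */
    int pop(void) {
        return (sp > 0) ? stack[--sp] : 0;
    }

    int main(void) {
        push(1); push(2); push(3);
        int a = pop(), b = pop(), c = pop();
        printf("%d %d %d\n", a, b, c);   /* 3 2 1: last in, first out */
        return 0;
    }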
(4) Most modern-day computers organize memory in a hierarchical fashion: registers (the lowest level of the hierarchy), cache memory, main memory (RAM/ROM), and secondary memory (the highest level of the hierarchy). At lower levels of the hierarchy, memory is more expensive but much faster; economic factors restrict the amount of memory at those levels. Depending on the particular type of computer used, the distinctions between the various levels of the hierarchy may or may not be visible to the programmer.
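The speed difference between cache and main memory can be observed from an ordinary program. This illustrative C experiment sums the same large array twice: once in sequential order (cache-friendly) and once in a scattered order (cache-hostile). On most machines the second pass is noticeably slower, though the exact timings depend on the hardware.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)   /* 16M ints = 64 MB, far larger than any cache */

    int main(void) {
        int *a = malloc((size_t)N * sizeof *a);
        if (!a) return 1;
        for (int i = 0; i < N; i++) a[i] = i;   /* initialize the array */

        long long sum = 0;
        clock_t t0 = clock();
        for (int i = 0; i < N; i++) sum += a[i];   /* sequential pass     */
        clock_t t1 = clock();
        for (int s = 0; s < 4096; s++)             /* scattered pass:     */
            for (int i = s; i < N; i += 4096)      /* each access jumps   */
                sum += a[i];                       /* 16 KB, missing the  */
        clock_t t2 = clock();                      /* cache repeatedly    */

        printf("sequential %.3fs, scattered %.3fs (sum=%lld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
        free(a);
        return 0;
    }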