Organization & Architecture UNISA Studies – Chap 3
Mark Pearl

 

Computer Components

Virtually all contemporary computer designs are based on concepts developed by John von Neumann, which include the following three concepts…

  1. Data and instructions are stored in a single read-write memory
  2. The contents of this memory are addressable by location, without regard to the type of data contained there
  3. Execution occurs in a sequential fashion (unless explicitly modified) from one instruction to the next

 

[Figure: Hardware and software approaches]

Keep in mind that hardware and software are, in a sense, interchangeable – any function performed in software can also be built directly into hardware. A hardware implementation is typically faster, but more fixed in its purpose. The above diagram illustrates both approaches…

[Figure: Computer components – top-level view]

The above diagram illustrates the top level components and suggests the interactions among them…

  • The CPU exchanges data with memory – typically making use of two internal registers, an MAR and an MBR (a minimal sketch of this exchange follows the list)
  • A Memory Address Register (MAR) specifies the address in memory for the next read or write
  • A Memory Buffer Register (MBR) contains the data to be written into memory or receives the data read from memory
  • An I/O Address Register (I/OAR) specifies a particular I/O device
  • An I/O Buffer Register (I/OBR) is used for the exchange of data between an I/O module and the CPU
  • A memory module consists of a set of locations, defined by sequentially numbered addresses; each location contains a binary number that can be interpreted as either an instruction or data
  • An I/O module transfers data from external devices to the CPU and memory, and vice versa. It contains internal buffers for temporarily holding the data until it can be sent on
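
To make the register roles concrete, here is a minimal sketch (my own illustration, not from the notes) of a CPU-memory exchange through the MAR and MBR. The class names and memory size are purely illustrative.

```python
# Minimal sketch of a CPU-memory exchange using MAR/MBR registers.
# All names and sizes here are illustrative, not a real machine.

class Memory:
    def __init__(self, size):
        self.cells = [0] * size              # sequentially numbered locations

    def read(self, address):
        return self.cells[address]

    def write(self, address, value):
        self.cells[address] = value


class CPU:
    def __init__(self, memory):
        self.memory = memory
        self.mar = 0   # Memory Address Register: address for the next read or write
        self.mbr = 0   # Memory Buffer Register: data to be written / data just read

    def load(self, address):
        self.mar = address
        self.mbr = self.memory.read(self.mar)    # read result arrives in the MBR
        return self.mbr

    def store(self, address, value):
        self.mar = address
        self.mbr = value
        self.memory.write(self.mar, self.mbr)    # MBR contents are written to memory


mem = Memory(256)
cpu = CPU(mem)
cpu.store(0x10, 42)
print(cpu.load(0x10))   # 42
```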

Computer Function

The basic function performed by a computer is execution of a program, which consists of a set of instructions stored in memory. The processor does the actual work by executing instructions specified in the program.

In its simplest form, instruction processing consists of two steps…

  1. The processor reads (fetches) instructions from memory one at a time
  2. The processor executes each instruction

The processing required for a single instruction is called an instruction cycle.

Instruction Fetch and Execute

  • At the beginning of each cycle, the processor fetches an instruction from memory (typically a register called the program counter (PC) holds the address of the instruction to be fetched next)
  • Unless told otherwise, the processor increments the PC after each instruction fetch so that it will fetch the next instruction in sequence
  • The fetched instruction is loaded into a register in the processor known as the instruction register (IR)
  • The instruction contains bits that specify the action the processor is to take
  • The processor interprets the instruction and performs the required action (a minimal fetch and execute sketch follows this list)
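
As a rough illustration (my own, not part of the notes), the fetch and execute steps can be sketched in Python with a made-up instruction format; the 4-bit opcode / 12-bit address encoding and the opcodes themselves are invented for the example.

```python
# Minimal fetch-decode-execute sketch. The instruction format (4-bit opcode,
# 12-bit address) and the opcodes are invented purely for illustration.

LOAD, ADD, STORE, HALT = 0x1, 0x2, 0x3, 0x0

def run(memory):
    pc, acc = 0, 0                            # program counter, accumulator
    while True:
        ir = memory[pc]                       # fetch: instruction goes into the IR
        pc += 1                               # increment PC for the next fetch
        opcode, addr = ir >> 12, ir & 0xFFF   # decode the instruction bits
        if opcode == LOAD:                    # execute
            acc = memory[addr]
        elif opcode == ADD:
            acc += memory[addr]
        elif opcode == STORE:
            memory[addr] = acc
        elif opcode == HALT:
            return acc

# Program: load memory[100], add memory[101], store the result in memory[102], halt.
mem = [0] * 256
mem[0:4] = [(LOAD << 12) | 100, (ADD << 12) | 101, (STORE << 12) | 102, (HALT << 12)]
mem[100], mem[101] = 3, 4
print(run(mem), mem[102])   # 7 7
```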

Basic Instruction Cycle

In general, the action performed can fall into one of four categories, or a combination of them…

  1. Processor-memory – Data may be transferred from processor to memory or from memory to processor
  2. Processor-I/O – Data may be transferred to or from a peripheral device by transferring between the processor and an I/O module
  3. Data processing – The processor may perform some arithmetic or logic operation on data
  4. Control – An instruction may specify that the sequence of execution be altered

[Figure: Instruction cycle state diagram]

The above diagram provides a more detailed look at the basic instruction cycle. For any given instruction, some states may be null and others may be visited more than once. The states can be described as follows…

  • Instruction address calculation (IAC) – Determine the address of the next instruction to be executed. Usually involves adding a fixed number to the address of the previous instruction
  • Instruction fetch (IF) – Read the instruction from its memory location into the processor
  • Instruction operation decoding (IOD) – Analyse the instruction to determine the type of operation to be performed and the operands to be used
  • Operand address calculation (OAC) – If the operation involves reference to an operand in memory or available via I/O, then determine the address of the operand
  • Operand fetch (OF) – Fetch the operand from memory or read it from I/O
  • Data operation (DO) – Perform the operation indicated in the instruction
  • Operand store (OS) – Write the result into memory or out to I/O

Interrupts

Interrupts are provided primarily as a way to improve processing efficiency (most external devices, e.g. a scanner, are much slower than the processor and would otherwise leave it waiting). With interrupts, the processor can be engaged in executing other instructions while an I/O operation is in progress. When the I/O operation is completed or ready to accept more data, the I/O device sends an interrupt request signal to the processor. The processor responds by suspending operation of the current program, branching off to a program to service that particular I/O device, known as an interrupt handler, and resuming the original execution after the device is serviced.

To accommodate interrupts, an interrupt cycle is added to the instruction cycle.

[Figure: Instruction cycle with interrupts]

 

If an interrupt is pending, the processor does the following…

  • Suspends execution of the current program being executed and saves its context. This means saving the address of the next instruction to be executed and any other data relevant to the processor's current activity
  • It sets the program counter to the starting address of an interrupt handler routine

Note that allowing for interrupts does add extra processing to the cycle; however, because most I/O devices are so much slower than the processor, the loss in performance from adding an interrupt is negligible compared to the time saved by not waiting on the I/O device.
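
The following sketch (mine, with invented device names and handler addresses) shows how an interrupt check can be folded into the instruction cycle: after each instruction the processor checks for a pending interrupt, saves the address of the next instruction, and jumps to the handler. Returning from the handler (restoring the saved PC) is left out to keep the sketch short.

```python
# Sketch of an instruction cycle extended with an interrupt cycle.
# The pending queue, device names and handler addresses are invented.

from collections import deque

pending = deque()                      # interrupt requests raised by I/O modules
HANDLER_ADDR = {"keyboard": 0x80}      # hypothetical handler start addresses

def run(program, steps):
    pc, saved = 0, []                  # saved holds suspended-program contexts
    for _ in range(steps):
        print(f"execute instruction at {pc:#x}: {program.get(pc, 'nop')}")
        pc += 1                        # fetch / execute stage (simplified)
        if pending:                    # interrupt cycle: is an interrupt pending?
            device = pending.popleft()
            saved.append(pc)           # save context: address of the next instruction
            pc = HANDLER_ADDR[device]  # set the PC to the handler's starting address

pending.append("keyboard")             # an I/O device raises an interrupt request
run({0: "add", 1: "store", 0x80: "read keyboard buffer"}, steps=3)
```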

Multiple Interrupts

Two approaches can be taken when dealing with multiple interrupts

  1. The first approach is to disable interrupts while an interrupt is being processed. In practice this means that if an interrupt occurs while the processor is servicing another interrupt, it is ignored and left pending until the current handler completes. This approach is simple, since interrupts are handled in the order in which they occur. The problem with this approach is that it does not cater for higher-priority or time-critical interrupts.
  2. The second approach is to define priorities for interrupts and to allow an interrupt of higher priority to cause a lower-priority interrupt handler to be itself interrupted (a rough sketch of this follows the list).
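
As a rough sketch of the second approach (my own; the device names and priority numbers are made up), a request pre-empts the current handler only if its priority is strictly higher, otherwise it is deferred.

```python
# Sketch of priority-based interrupt handling. Higher number = higher priority.
# With the first (disable-interrupts) approach, every request arriving while a
# handler runs would simply be deferred instead.

def on_interrupt(device, priority, current_level, deferred):
    if priority > current_level:
        print(f"pre-empt: start {device} handler (priority {priority})")
        return priority                       # this handler now sets the level
    print(f"defer: {device} must wait for the current handler to finish")
    deferred.append((priority, device))
    return current_level

deferred = []
level = on_interrupt("printer", 2, current_level=0, deferred=deferred)    # accepted
level = on_interrupt("disk", 4, current_level=level, deferred=deferred)   # pre-empts printer
level = on_interrupt("comms", 3, current_level=level, deferred=deferred)  # deferred behind disk
```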

Below is a diagram of the program flow of control, including interrupt handling…

[Figure: Program flow of control with interrupts]

I/O Function

In some cases it is desirable to allow I/O exchanges to occur directly with memory. In such a case, the processor grants to an I/O module the authority to read from or write to memory so that the I/O-memory transfer can occur without tying up the processor. During such a transfer, the I/O module issues read or write commands to memory, relieving the processor of responsibility for the exchange. This operation is known as direct memory access (DMA).
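
A minimal sketch (mine, all names invented) of the DMA idea: the I/O module writes a block of data straight into memory and interrupts the processor only once, when the whole block has been transferred.

```python
# Sketch of a DMA transfer: the I/O module issues the memory writes itself,
# so the processor is involved only when the transfer completes.
# (True concurrency is not modelled; the point is which module drives the writes.)

memory = [0] * 64

def dma_transfer(device_data, start_address, on_complete):
    for offset, word in enumerate(device_data):
        memory[start_address + offset] = word    # I/O module writes directly to memory
    on_complete()                                # single interrupt at the end

dma_transfer([7, 8, 9], start_address=16,
             on_complete=lambda: print("DMA complete interrupt"))
print(memory[16:19])   # [7, 8, 9]
```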

Interconnection Structures

A computer consists of a set of components or modules of three basic types that communicate with each other.

  1. Processor
  2. Memory
  3. I/O

In effect, a computer is a network of basic modules, thus there must be paths for connecting the modules. This is called the interconnection structure.

  • Memory – typically a memory module will consist of N words of equal length. Each word is assigned a unique numerical address. A word of data can be read from or written into the memory. The nature of the operation is indicated by read and write control signals. The location for the operation is specified by an address.
  • I/O module – from an internal point of view, I/O is functionally similar to memory. There are two operations, read and write. Further, an I/O module may control more than one external device. We can refer to each of the interfaces to an external device as a port and give each a unique address. In addition, there are external data paths for the input and output of data with an external device. Finally an I/O module may be able to send interrupt signals to the processor.
  • Processor – the processor reads in instructions and data, writes out data after processing and uses control signals to control the overall operation of the system. It also receives interrupt signals.

The above list defines the data to be exchanged. The interconnection structure must support the following types of transfers…

  • Memory to processor
  • Processor to memory
  • I/O to processor
  • Processor to I/O
  • I/O to or from memory

Many different interconnection structures have been tried, but by far the most common has been the bus and various multiple bus structures.

Bus Interconnection

  • A bus is a communication pathway connecting two or more devices.
  • A key characteristic of a bus is that it is a shared transmission medium (meaning multiple devices connect to the bus, and a signal transmitted by any one device is available for reception by all other devices attached to the bus)
  • If two devices transmit during the same time period, their signals will overlap and become garbled – thus only one device at a time can successfully transmit
  • A bus that connects major computer components (processor, memory, I/O) is called a system bus

Bus Structure

  • A bus typically consists of about 50 to several hundred separate lines
  • Each line is assigned a particular meaning or function
  • A line on a bus can usually be classified into one of three functional groups – data, address and control lines (there may also be power distribution lines that supply power to the attached modules)
  • Data lines – provide a path for moving data among system modules; collectively these lines are called the data bus. A data bus can consist of 32, 64, 128 or more separate lines, the number of lines being referred to as the width of the data bus. Because each line can carry only 1 bit at a time, the number of lines determines how many bits can be transferred at a time and is an important factor in overall system performance.
  • Address lines – are used to designate the source or destination of the data on the data bus. The width of the address bus determines the maximum possible memory capacity of the system. Address lines are also used to address I/O ports
  • Control lines – used to control the access to and the use of the data and address lines. Because the data and address lines are shared by all components, there must be a means of controlling their use. Control signals transmit both command and timing information among system modules. Timing signals indicate the validity of data and address information. Command signals specify operations to be performed.

Typical control lines include…

  • Memory write – data on the bus to be written into the addressed location
  • Memory read – data from the addressed location to be placed on the bus
  • I/O write – data on the bus to be output to the addressed I/O port
  • I/O read – data from the addressed I/O port to be placed on the bus
  • Transfer ACK – indicates that data have been accepted from or placed on the bus
  • Bus request – indicates that a module needs to gain control of the bus
  • Bus grant – indicates that a requesting module has been granted control of the bus
  • Interrupt request – indicates that an interrupt is pending
  • Interrupt ACK – acknowledges that the pending interrupt has been recognized
  • Clock – is used to synchronize operations
  • Reset – Initializes all modules

The operation of the bus is as follows.

If one module wishes to send data to another, it must do two things…

  1. obtain the use of the bus
  2. transfer data via the bus

If one module wishes to request data from another module, it must…

  1. obtain the use of the bus
  2. transfer a request to the other module over the appropriate control and address lines (a minimal sketch of such a transaction follows)
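
Here is a rough sketch (my own; the Bus class and its methods are invented) of a single memory-read transaction expressed with the control lines listed above: the requester obtains the bus, places the address, issues a memory read, and samples the data lines.

```python
# Sketch of one memory-read transaction over a shared bus.
# The Bus class and its methods are illustrative, not a real protocol.

class Bus:
    def __init__(self):
        self.owner = None       # current bus master
        self.address = None     # address lines
        self.data = None        # data lines

    def request(self, module):  # Bus request -> Bus grant
        if self.owner is None:
            self.owner = module
            return True
        return False            # another module holds the bus

    def release(self, module):
        if self.owner is module:
            self.owner = None

def memory_read(bus, memory, addr, module="cpu"):
    if not bus.request(module):            # 1. obtain the use of the bus
        raise RuntimeError("bus busy")
    bus.address = addr                     # place the address on the address lines
    bus.data = memory[addr]                # 'Memory read': addressed data onto the data lines
    value = bus.data                       # requester samples the data (Transfer ACK)
    bus.release(module)
    return value

print(memory_read(Bus(), {0x40: 99}, 0x40))   # 99
```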

Multiple Bus Hierarchies

If there are too many devices connected to the bus, performance will suffer. There are two main causes…

  1. The more devices attached to the bus, the greater the bus length, and hence the greater the propagation delay, which can noticeably affect performance
  2. The bus may become a bottleneck as the aggregate data transfer demand approaches the capacity of the bus. This can be countered by increasing the bus width and clock speed (see the worked example below)
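
As a rough back-of-the-envelope example (the numbers are mine, purely illustrative), the peak capacity of a bus is roughly its data width multiplied by its clock rate, which is why widening or speeding up the bus pushes the bottleneck out.

```python
# Illustrative peak-bandwidth figure for a bus that completes one transfer per clock.

def peak_bandwidth_bytes_per_second(data_bus_width_bits, clock_hz):
    return (data_bus_width_bits // 8) * clock_hz

print(peak_bandwidth_bytes_per_second(32, 100_000_000))   # 400000000, i.e. ~400 MB/s
print(peak_bandwidth_bytes_per_second(64, 100_000_000))   # 800000000, i.e. ~800 MB/s
```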

To keep performance up, multiple bus designs are becoming common where there is a hierarchy of buses.

Elements of Bus Design

Bus Types

  • Bus lines can be separated into two generic types, dedicated and multiplexed.
  • Dedicated bus line – is permanently assigned either to one function or to a physical subset of computer components
  • Multiplexed bus line – the same lines are used for more than one purpose (for example, address and data may share the same lines, separated in time by a control signal); this saves lines at the cost of more complex circuitry in each module

Methods of Arbitration

  • More than one module may need control of the bus. Because only one unit at a time can successfully transmit over the bus, some method of arbitration is needed. These methods fall roughly into two categories, centralized and distributed
  • In a centralized scheme, a single hardware device, referred to as a bus controller or arbiter, is responsible for allocating time on the bus
  • In a distributed scheme, there is no central controller; each module contains access control logic and the modules act together to share the bus
  • With both approaches, the purpose is to designate one device, either the processor or an I/O module, as master (a minimal sketch of a centralized scheme follows this list)
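
A minimal sketch of a centralized scheme (my own; the module names are illustrative): a single arbiter grants the bus to one requesting module at a time and queues the rest.

```python
# Sketch of centralized bus arbitration: one arbiter decides which requesting
# module becomes the bus master. Module names are illustrative.

class Arbiter:
    def __init__(self):
        self.master = None       # module currently granted the bus
        self.waiting = []        # queued bus requests

    def bus_request(self, module):
        if self.master is None:
            self.master = module                 # bus grant
        else:
            self.waiting.append(module)          # wait for the bus to be released

    def bus_release(self, module):
        if self.master is module:
            self.master = self.waiting.pop(0) if self.waiting else None

arbiter = Arbiter()
arbiter.bus_request("cpu")       # cpu becomes bus master
arbiter.bus_request("dma")       # dma has to wait
print(arbiter.master)            # cpu
arbiter.bus_release("cpu")
print(arbiter.master)            # dma
```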

Timing

  • Timing refers to the way in which events are coordinated on the bus
  • Buses use either synchronous timing or asynchronous timing
  • Synchronous timing – the occurrence of events on the bus is determined by a clock. The bus includes a clock line which transmits a regular sequence of alternating 1s and 0s of equal duration. All the devices on the bus can read the clock line, and all events start at the beginning of a clock cycle
  • Asynchronous timing – the occurrence of one event on a bus follows and depends on the occurrence of a previous event. In a simple read example, the processor places address and status signals on the bus. After pausing for these signals to stabilize, it issues a read command, indicating the presence of valid address and control signals. The appropriate memory module decodes the address and responds by placing the data on the data lines. Once the data lines have stabilized, the memory module asserts the acknowledge line to signal the processor that the data is available. Once the master has read the data from the data lines, it de-asserts the read signal. This causes the memory module to drop the data and acknowledge lines. Finally, once the acknowledge line is dropped, the master removes the address information.

Synchronous timing is simpler to implement and test; however, it is less flexible than asynchronous timing. With asynchronous timing, a mixture of slow and fast devices, using older and newer technology, can share a bus (the read handshake described above is sketched below).
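
A rough sketch (mine; the line names and structure are invented) of the asynchronous read handshake just described, where each step is triggered by the previous event rather than by a clock edge.

```python
# Sketch of an asynchronous read handshake: each event follows from the
# previous one instead of from a clock edge. The Lines object is invented.

class Lines:
    def __init__(self):
        self.address = None
        self.read = False        # read command line
        self.data = None         # data lines
        self.ack = False         # acknowledge line

def async_read(lines, memory, addr):
    lines.address = addr                  # master places the address on the bus
    lines.read = True                     # ...then asserts the read command
    lines.data = memory[addr]             # addressed memory module decodes and responds
    lines.ack = True                      # acknowledge: data lines are now valid
    value = lines.data                    # master reads the data...
    lines.read = False                    # ...and de-asserts the read signal
    lines.data, lines.ack = None, False   # memory drops the data and acknowledge lines
    lines.address = None                  # finally the master removes the address
    return value

print(async_read(Lines(), {0x10: 5}, 0x10))   # 5
```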

Data Transfer Type

A bus supports various data transfer types. All buses support both write (master to slave) and read (slave to master) transfers as well as a number of other transfer types including…

  • Read-modify-write operation
  • Read-after-write operation
  • Block data transfer
  • Write (multiplexed) operation
  • Read (multiplexed) operation

Posted on Wednesday, February 8, 2012 5:41 AM | UNISA COS 2621 Computer Organization

