Stored Program Processing Part 1 (PIC Microcontroller)

If we take the Arithmetic Logic Unit (ALU) Working register pair depicted in Fig. 2.19  and feed it with function codes, then we have in essence a programmable processing unit. These command codes may be stored in digital memory and constitute the system’s program. By fetching these instructions down one at a time we can execute this program. Memory can also hold data on which the ALU operates. This structure, together with its associated data paths, decoders and logic circuitry is known as a digital computer.

In Part 2 we will see that microcontroller architecture is modelled on that of the computer. As a prelude to this we will look at the architecture and operating rhythm of the computer structure and some characteristics of its programming. Although this computer is strictly hypothetical, it has been very much ‘designed’ with our topic’s target microcontroller in mind.

After reading this topic you will:

• Understand the Harvard structure with its separate program and data memories, and how it compares to the more common von Neumann architecture.

• Understand the parallel fetch and execute rhythm and its interaction with the Program and Data stores and the internal processor registers.

• Understand the concept of a File address as a pointer to where data is located in the Data store.

• Comprehend the structure of an instruction and appreciate that the string of instructions necessary to implement the task is known as a program.

• Have an understanding of a basic instruction set, covering data movement, arithmetic, logic and skipping categories.

• Understand how Literal, Register Direct, File Direct, File Indirect and Absolute address modes permit an instruction to target an operand for processing.

• Be able to write short programs using a symbolic assembly-level language and appreciate its one-to-one relationship to machine code.

The architecture of the great majority of general-purpose computers and microprocessors is modelled after the von Neumann model shown in Fig. 3.1. The few electronic computers in use up to the late 1940s either only ever ran one program (like the wartime code-breaking Colossus) or else needed to be partly rewired to change their behavior (for example, the ENIAC). The web site entry for this topic gives historical and technical details of these prehistoric machines.


Fig. 3.1 An elementary von Neumann computer.

Von Neumann’s great leap forward was to recognise that the program could be stored in memory along with any data. The advantage of this approach is flexibility. To alter the program simply load the bit pattern into the appropriate area of memory. In essence, the von Neumann architecture comprises a Central Processing Unit (CPU), a memory and a common connecting highway carrying data back and forth. In practice the CPU must also communicate with the environment outside the computer. For this purpose data to and from suitable interface ports are also funnelled through the data highway.

Let us look at these elements in a little more detail.

The Central Processing Unit

The CPU consists of the ALU/working register together with the associated control logic. Under the management of the control unit, program instructions are fetched from memory, decoded and executed. Data resulting from, or used by, the program is also accessed from memory. This fetch and execute cycle constitutes the operating rhythm of the computer and continues indefinitely, as long as the system is activated.


Memory

Memory holds the bit patterns which define the program. These sequences of instructions are known as the software. The word is a play on the term hardware; as such patterns do not correspond to any physical rearrangement of the circuitry. Memory holding software should ideally be as fast as the CPU, and normally uses semiconductor technologies, such as that described in the last topic. This memory also holds data being processed by the program.

Program memories appear as an array of cells, each holding a bit pattern. As each cell ultimately feeds the single data highway, a decoding network is necessary to select only one cell at a time for interrogation. The computer must target its intended cell for connection by driving this decoder with the appropriate code or address. Thus if location 602Eh is to be read, then the pattern 0110 0000 0010 1110b must be presented to the decoder. For simplicity, this address highway is not shown in Figs. 3.1 and 3.2.
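As a quick check of the worked address above, the decoder pattern for location 602Eh can be reproduced with a throwaway Python sketch (illustrative only, not part of the original text):

```python
# The 16-bit pattern presented to the address decoder to select
# location 602Eh, grouped into 4-bit nibbles for readability.
address = 0x602E
pattern = format(address, "016b")   # zero-padded 16-bit binary string
nibbles = " ".join(pattern[i:i + 4] for i in range(0, 16, 4))
print(nibbles)   # 0110 0000 0010 1110
```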

This addressing technique is known as random access, as it takes the same time to access a cell regardless of where it is situated in memory. Most computers have large backup memories, usually magnetic or optical disk-based or magnetic tape, in which case access does depend on the cell’s physical position. Apart from this sequential access problem, such media are normally too slow to act as the main memory and are used for backup storage of large arrays of data (eg. student exam records) or programs that must be loaded into main memory before execution.

The Interface Ports

To be of any use, a computer must be able to interact with its environment. Although conventionally one thinks of a keyboard and screen, any of a range of physical devices may be read and controlled. Thus the flow of fuel injected into a cylinder together with engine speed may be used to alter the instant of spark ignition in the combustion chamber of a gas/petrol engine.

Data Highway

All the elements of the von Neumann computer are wired together with the one common data highway, or bus. With the CPU acting as the master controller, all information flow is back and forth along these shared wires. Although this is efficient, it does mean that only one thing can happen at any time. This phenomenon is sometimes known as the von Neumann bottleneck.


Fig. 3.2 An elementary Harvard architecture computer.

The Harvard architecture illustrated in Fig. 3.2 is an adaptation of the standard von Neumann structure, which separates the shared memory into entirely separate Program and Data stores. The diagram shows two physically distinct buses used to carry information to the CPU from these disjoint memories. Each memory has its own Address bus and thus there is no interaction between a Program cell and a Data cell’s address. The two memories are said to lie in separate memory spaces. The Data store is sometimes known as the File store, with each location n being described as File n.

The fetch the instruction down, decode it, execute it sequence, the so-called fetch and execute cycle, is fundamental to the understanding of the operation of the computer. To illustrate this operating rhythm we look at a simple program that takes a variable called NUM_1, then adds 65h (101d) to it and finally assigns the resultant value to the variable called NUM_2. In the high-level language C this may be written as:

    NUM_2 = NUM_1 + 101;



Fig. 3.3 A snapshot of the CPU executing the first instruction whilst fetching down the second instruction.

A rather more detailed close-up of our computer, which I have named BASIC (for Basic All-purpose Stored Instruction Computer) is shown in Fig. 3.3. This shows the CPU and memories, together with the two data highways (or buses) and corresponding address buses.

The CPU can broadly be partitioned into two sectors. The leftmost circuitry deals with fetching down the instruction codes and sequentially presenting them to the Instruction decoder. The rightmost sector executes each instruction, as controlled by this Instruction decoder. Looking first at the fetch process:

Program Counter

Instructions are normally stored sequentially in Program memory, and the Program Counter (PC) is the counter register that keeps track of the current instruction word. This up-counter is sometimes called (perhaps more sensibly) an Instruction Pointer.

As the PC is connected to the Execution unit – via the internal data bus – the ALU can be used to manipulate this register and disrupt the orderly execution sequence. In this way various Goto and Skip to another part of the program operations can be implemented.

Instruction Register 1

The contents of the Program store cell pointed to by the PC, that is instruction word n, is latched into IR1 and held for processing during the next cycle.

Instruction Register 2

During the same cycle as instruction word n is being fetched, the previously fetched instruction word n – 1 in IR1 is moved into IR2 and feeds the Instruction decoder.

Instruction Decoder

The ID is the ‘brains’ of the CPU, deciphering the instruction word in IR2 and sending out the appropriate sequence of signals to the execution unit as necessary to locate the operand in the Data store (if any) and to configure the ALU to its appropriate mode. In the diagram the instruction shown is movf 5,w (Move File 5 to the Working register).

The Execution sector deals with accesses to the Data store and configuring the ALU. Execution circuitry is controlled from the Instruction Decoder, which is in turn commanded by Instruction word n – 1 in IR2.

File Address Register

When the CPU wishes to access a cell (or file) in the Data store, it places the file address in the FAR. This directly addresses the memory via the File address bus. As shown in the diagram, File 5 is being read from the Data store and the resulting datum is latched into the CPU’s File Data Register.

File Data Register

This is a bi-directional register which either:

• Holds the contents of an addressed file if the CPU is executing a Read cycle. This is the case for instruction 1 (movf 5,w) that moves (reads) a datum from File 5 into the Working register.

• Holds the datum that a CPU wishes to send out (Write) to an addressed file. This Write cycle is implemented for instruction 3 (movwf 6) that moves (writes) out the contents of the Working register to File 6.

Arithmetic Logic Unit

The ALU carries out an arithmetic or logic operation as commanded by its function code, which is generated by the Instruction Decoder.

Working Register

W is the ALU’s working register, generally holding one of an instruction’s operands, either source or destination. For example, subwf 20,w subtracts the contents of the Working register from the contents of File 20 and places the difference back in W. Some computers call this a Data register or Accumulator register.

In our BASIC computer, each instruction word in the Program store is 14 bits long. Some of these bits code the operation, for example 000111b for Add and 000110b for Exclusive-OR. This portion of the instruction word is called the operation code or op-code. The rest of the instruction word bits generally relate to where in the Data store the operand is, or sometimes to a literal (constant) operand, such as in addlw 6 (ADD Literal 6 to W). For example, the instruction word for SUBtract W from File (subwf) is structured as op-code d fffffff, where:

• The op-code for Subtract is 000010b or 02h.

• d is the destination for the difference, with 0 for W and 1 for a file, as specified below.

• fffffff is the 7-bit address of the minuend file (and destination if d is 1), from 00h through to 7Fh.

For example, subwf 20h,w is coded as 000010 0 0100000b or 0220h.
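Assuming only the layout just described (6-bit op-code, the d bit, then the 7-bit file address), the coding of subwf can be checked with a short Python sketch; the function name is illustrative:

```python
# Assemble the 14-bit instruction word for subwf f,d as
# op-code (6 bits) | d (1 bit) | fffffff (7-bit file address).
def encode_subwf(f, d):
    """Build the word for subwf f,d (d: 0 = W destination, 1 = file)."""
    OPCODE_SUBWF = 0b000010          # op-code for Subtract, as stated
    if not 0 <= f <= 0x7F:
        raise ValueError("file address is only 7 bits (00h-7Fh)")
    return (OPCODE_SUBWF << 8) | (d << 7) | f

print(hex(encode_subwf(0x20, 0)))    # subwf 20h,w -> 0x220
```

This also makes the one-to-one assembly-to-machine-code relationship discussed later in the text concrete: each mnemonic plus its operands maps to exactly one word.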

As the Program and Data stores are separate entities, their cell size need not be the same. In our case each file holds an 8-bit byte datum. In consequence both the ALU and Working register are also byte sized. Generally the ALU size defines the size of the computer, and so BASIC could be described as an 8-bit machine. Real computers range in size from one bit up to 64 bits. From the previous example we see that seven bits of the instruction code are reserved for the file address, and thus the Data store has a maximum capacity for direct access limited to 128 (2⁷) 8-bit files.
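The address-space arithmetic is easy to verify with a throwaway Python line (illustrative only):

```python
# 7 file-address bits give 2**7 directly addressable files;
# a 10-bit Program Counter would give 2**10 instruction cells.
print(2 ** 7, 2 ** 10)   # 128 1024
```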

The Program memory capacity is a function of the Program Counter. If this were 10 bits wide, then the Program store could directly hold 1024 (2¹⁰) 14-bit instructions.

OK. We have got our CPU with its Program and Data stores. Let us look at the program itself. There are three instructions in our illustrative software, and as we have already observed the task is to copy the value of a byte-sized variable NUM_1 plus 101d (65h) into a variable called NUM_2. This is symbolized as:

    NUM_2 <- NUM_1 + 65h


We see from our diagram that the variable named NUM_1 is simply a symbolic representation for "the contents of File 5" (which is shown as DATA1), and similarly NUM_2 is a much prettier way of saying "the contents of File 6" (shown as DATA2).

Now as far as the computer is concerned, starting at location 000h our program is:


Unless you are a CPU this is not much fun! Using hexadecimal is a little better,


but is still instantly forgettable. Furthermore, the CPU still only understands binary, so you are likely to have to use a translator program running on, say a PC, to translate from hexadecimal to binary.

If you are going to use a computer as an aid to translate your program, known as source code, to binary machine code, known as object code, then it makes sense to go the whole hog and express the program symbolically. Here the various instructions are represented by mnemonics (eg. clrf for CLeaR File, subwf for SUBtract W from File) and variables’ addresses are given names. Doing this our program becomes:

    movf  5,w     ; Copy the contents of File 5 (NUM_1) to W
    addlw 101     ; Add the literal 101d (65h) to it
    movwf 6       ; Copy the sum in W out to File 6 (NUM_2)


where the text after a semicolon is comment, which makes the program easier to understand by the tame human programmer.

Topic 8 is completely devoted to the process of translation from this assembly-level source code to machine-readable binary. Here it is only necessary to look at the general symbolic form of an instruction, which is:

    operation  operand A[, operand B]


A few instructions have no explicit operand, such as return (RETURN from subroutine) and nop (No OPeration); however, the majority have one, two or even three operands. For instance, the operation Clear on its own does not make sense. Clear what? Thus we need to say "clear the destination file"; for example clrf 20h to clear File 20h. Here Operand A is the file address 20h. This could be written as (f) <- 00, where the brackets mean "contents of" and <- means "becomes". This notation is called register transfer language (rtl).

Many instructions have two operands. Thus the instruction incf f,d takes a copy of the contents of the specified file plus one and places this in either the Working register (d = w) or back in the file itself (d = f). For example, incf 20h,w means "deposit the contents of File 20h plus one in the Working register". In rtl this is W <- (f20) + 1.

Three-operand instructions are common. For example, addwf f,d adds the W register’s contents to the specified file’s contents and deposits the result either in W or in the file itself. Thus addwf 20h,f means "add the contents of W to that of File 20h and put the outcome in File 20h" or (f20) <- W + (f20). Of course this is not a true 3-operand instruction as the destination must be one of the two source locations; that is W or File 20h. It is more accurately described as a 2½-operand instruction!
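The register-transfer descriptions above can be mimicked with a small Python sketch. The list-of-files model, the global W and the function names are illustrative assumptions, not the actual machine:

```python
# Data store modelled as 128 byte-wide files, plus a Working register.
files = [0] * 128
W = 0

def clrf(f):                 # (f) <- 00
    files[f] = 0

def incf(f, d):              # d='w': W <- (f)+1 ; d='f': (f) <- (f)+1
    global W
    result = (files[f] + 1) & 0xFF   # byte-sized machine, wrap at 256
    if d == "w":
        W = result
    else:
        files[f] = result

def addwf(f, d):             # d='w': W <- W+(f) ; d='f': (f) <- W+(f)
    global W
    result = (W + files[f]) & 0xFF
    if d == "w":
        W = result
    else:
        files[f] = result

files[0x20] = 4
incf(0x20, "w")              # W becomes 5; File 20h still holds 4
addwf(0x20, "f")             # File 20h becomes W + 4 = 9
```

Note how the d argument selects the destination, exactly as the single d bit does in the instruction word.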

All three instructions in our exemplar program have two operands, of which the Working register is either the source and/or the destination. Where an instruction has a choice of destination d, as in movf f,d, then this is indicated appropriately as Operand B; thus in our program we have movf 5,w. Where the source or destination is fixed as the Working register then this is indicated in the mnemonic itself, as in addlw (ADD Literal to W) and movwf (MOVe W to File).

The first and last instructions specify an absolute file, which is an actual location in the Data store. In a large program it is easier for us humans to give these variables symbolic names, such as NUM_1 for File 5 and NUM_2 for File 6. Of course we must somewhere tell the assembler (the program that does the translation) that NUM_1 and NUM_2 equate to addresses File 5 and File 6 respectively.

The middle instruction addlw 101 adds a constant number or literal (that is 101d or 65h) to W rather than a variable in memory. This literal is actually stored as part of the instruction word bit pattern. In rtl this instruction implements the function W <- W + 65h. In some cases it may be desirable to give a literal a symbolic name.

In writing programs using assembly-level symbolic representation, it is important to remember that each instruction has a one-to-one correspondence to the underlying machine instruction and its binary code. In Topic 9 we will see that high-level languages lose that 1:1 relationship.

The essence of computer operation is the rhythm of the fetch and execute cycles. Here each instruction is successively brought down from the Program store (fetched), interpreted and then executed. As any memory access during execution will be on the Data store, and as each store has its own buses, the fetch and execution processes can progress in parallel. Thus while instruction n is being fetched, instruction n – 1 is being executed. In Fig. 3.3 the instruction codes for both the imminent and current instructions are held in the two Instruction registers IR1 and IR2 respectively. This structure is known as a pipeline, with instructions being fetched into one end and ‘popping out’ into the Instruction decoder at the other end. Figure 3.4 below shows the time line of our 3-instruction exemplar program, quantized in clock cycles. During each clock cycle, except for the first, both a fetch and an execution are proceeding simultaneously.


Fig. 3.4 Parallel fetch and execute streams.
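The interleaving of Fig. 3.4 can be sketched in a few lines of Python. This is an illustration of the timing only, not of the hardware; the function name is an assumption:

```python
# Two-stage pipeline rhythm: while instruction n is fetched,
# instruction n-1 is executed.
program = ["movf 5,w", "addlw 101", "movwf 6"]

def pipeline_timeline(program):
    """Return (cycle, fetching, executing) tuples; None = idle stage."""
    timeline = []
    for cycle in range(1, len(program) + 2):
        fetching = program[cycle - 1] if cycle <= len(program) else None
        executing = program[cycle - 2] if cycle >= 2 else None
        timeline.append((cycle, fetching, executing))
    return timeline

for cycle, fetching, executing in pipeline_timeline(program):
    print(f"Cycle {cycle}: fetch={fetching!r}  execute={executing!r}")
```

Notice that every cycle but the first does a fetch and an execution at the same time, which is exactly the overlap shown in the figure.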

In order to illustrate the sequence in a little more detail, let us trace through our specimen program. We assume that our computer (that is the Program Counter) has been reset to 000h and has just finished the Cycle 1 fetch.

Fetch (Fig. 3.3) ………………………………………………Cycle 2

• Increment the Program Counter to point to instruction 2.

• Simultaneously move the instruction word 1 down the pipeline (from Instruction register 1 to Instruction register 2).

• Program Counter (001h) to Program address bus.

• The instruction word 2 then appears on the Program data bus and is loaded into Instruction register 1.

Execute (Fig. 3.3)…………………………………………….Cycle 2

• The operand address 05h (i.e. NUM_1) to the File Address register and out onto the File address bus.

• The resulting datum at NUM_1 is read onto the File data bus and loaded into the File Data register.

• The ALU is configured to the Pass Through mode, which feeds the datum through to the Working register.

Fetch ……………………………………………………….Cycle 3

• Increment the Program Counter to point to instruction 3.

• Simultaneously move the instruction word 2 down the pipeline (from Instruction register 1 to Instruction register 2).

• Program Counter (002h) to Program address bus.

• The instruction word 3 then appears on the Program data bus and is loaded into the pipeline at Instruction register 1.

Execute……………………………………………………..Cycle 3

• The ALU is configured to the Add mode and the literal (which is part of instruction word 2) is added to the datum in W.

• The ALU output, NUM_1 + 65h, is placed in W.

Fetch ……………………………………………………….Cycle 4

• Increment the Program Counter to point to instruction 4.

• Simultaneously move instruction word 3 down the pipeline to IR2.

• Program Counter (003h) to Program address bus.

• The instruction word 4 then appears on the Program data bus and is loaded into the pipeline at IR1.

Execute……………………………………………………..Cycle 4

• The operand address 06h (i.e. NUM_2) to the File Address register and out onto the File address bus.

• The ALU is configured to the Pass Through mode, which feeds the contents of W through to the File Data register and onto the File data bus.

• The datum in the File Data register is written into the Data store at the address on the File address bus and becomes the new datum in NUM_2.
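The three execute phases traced above condense to a few lines of Python (the initial value taken for NUM_1 is an arbitrary assumption, and the sketch ignores the FAR/FDR plumbing the real machine moves data through):

```python
# Data store as 128 byte-wide files; File 5 is NUM_1, File 6 is NUM_2.
files = [0] * 128
files[5] = 0x12          # assumed starting value of NUM_1

W = files[5]             # Cycle 2 execute: movf 5,w  (read File 5 into W)
W = (W + 0x65) & 0xFF    # Cycle 3 execute: addlw 65h (byte-wide literal add)
files[6] = W             # Cycle 4 execute: movwf 6   (write W out to File 6)

print(hex(files[6]))     # NUM_2 now holds NUM_1 + 65h
```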

Notice how the Program Counter is automatically advanced during each fetch cycle. This sequential advance will continue indefinitely unless an instruction to modify the PC occurs, such as goto 200h. This would place the address 200h into the PC, overwriting the normal incrementing process, and effectively causing the CPU to jump to whatever instruction was located at 200h. Thereafter, the linear progression would continue.
