1. Basics

Computer

A data-processing device capable of storing, inputting, and outputting data
Logical components of a computer:

Data representation

Units of measure and multiples

Defined by IEEE 1541 and IEC 80000-13 standards
Information units:

Value Short Word
2^10 Ki kibi
2^20 Mi mebi
2^30 Gi gibi
2^40 Ti tebi
2^50 Pi pebi
2^60 Ei exbi
Use of binary and decimal multiples

Sizes of primary memories (RAM, Flash, etc.) - binary multiples
Transfer speeds - decimal multiples
Mass storage devices - usually decimal multiples
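
The practical effect: a drive sold as "500 GB" (decimal multiples) is reported by most operating systems as roughly 465 GiB (binary multiples). A minimal C sketch of the conversion:

```c
#include <stdio.h>

int main(void) {
    /* A drive sold as "500 GB" uses decimal multiples: 500 * 10^9 bytes. */
    unsigned long long bytes = 500ULL * 1000 * 1000 * 1000;

    /* Operating systems usually report capacity in binary multiples (GiB). */
    double gib = (double)bytes / (1024.0 * 1024.0 * 1024.0);

    printf("500 GB = %llu bytes = %.2f GiB\n", bytes, gib);
    return 0;
}
```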

Octal representation

Each octal digit represents all possible combinations of a 3-bit binary group

Hexadecimal representation

Each hexadecimal digit represents all possible combinations of a 4-bit binary group
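
A minimal C sketch of both groupings; printf's %o and %X conversions expose the 3-bit and 4-bit groups directly:

```c
#include <stdio.h>

int main(void) {
    unsigned int value = 429u;      /* binary: 1 1010 1101 */

    /* Each octal digit encodes exactly 3 bits: 110 101 101 -> 6 5 5 */
    printf("octal: %o\n", value);   /* prints 655 */

    /* Each hex digit encodes exactly 4 bits: 1 1010 1101 -> 1 A D */
    printf("hex:   %X\n", value);   /* prints 1AD */
    return 0;
}
```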

Taxonomies

Classification of computer architectures

Flynn's taxonomy

Michael J. Flynn, 1972

Assumes that a computer processes streams of instructions and data (does not define what a stream is)

Classifies architectures based on the number of streams (1...n)

SISD - Single Instruction Stream, Single Data Stream
most common architecture
SIMD - Single Instruction Stream, Multiple Data Streams
vector/matrix processor (see the sketch after this list)
MIMD - Multiple Instruction Streams, Multiple Data Streams
MISD - Multiple Instruction Streams, Single Data Stream - doesn't really exist in the real world
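
A hedged sketch of the SISD vs. SIMD distinction, assuming an x86 CPU with SSE (intrinsics from <xmmintrin.h>): the scalar loop issues one add instruction per data element, while a single addps instruction adds four elements at once.

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics; assumes an x86 CPU with SSE */

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    /* SISD view: one instruction per data element */
    for (int i = 0; i < 4; i++)
        c[i] = a[i] + b[i];

    /* SIMD view: one instruction (addps) operates on 4 data elements at once */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);
    _mm_storeu_ps(c, vc);

    printf("%.0f %.0f %.0f %.0f\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```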

There are also versions of SISD and SIMD where the number of instruction streams is 0.

Devices without data streams are not computers - a computer must process some data
Devices without instruction streams can still process data
Data may contain information on the required processing
These machines are called dataflow computers
Dataflow uniprocessors and multiprocessors may be built

Dataflow computers

The processed unit is called a token
Contains data and a tag describing its contents (see the sketch after this list)
The tag serves as the instruction - it says what to do with the token
During processing a new token is created, containing new data and a new tag
The processed token doesn't have to be voided (it can be processed multiple times)
Dataflow computers are no longer in production (NEC produced one in 1985)
Dataflow paradigm is used to describe information systems without any relation to hardware
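
A minimal, purely illustrative sketch of a token as data plus tag; the type and field names are hypothetical and not taken from any real dataflow machine:

```c
#include <stdio.h>

/* Hypothetical token layout: the tag tells the machine what to do with the data. */
enum tag { TAG_ADD_OPERAND, TAG_RESULT };

struct token {
    enum tag tag;    /* describes the contents and the required processing */
    int      data;   /* the value carried through the dataflow graph */
};

int main(void) {
    /* Processing a token produces a new token with new data and a new tag. */
    struct token in  = { TAG_ADD_OPERAND, 21 };
    struct token out = { TAG_RESULT, in.data + in.data };

    printf("in: tag=%d data=%d -> out: tag=%d data=%d\n",
           in.tag, in.data, out.tag, out.data);
    return 0;
}
```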

Skillicorn's taxonomy

David Skillicorn, 1988
Describes the structure of a computer composed of several components
Two levels

Abstract components of computer architecture

"Abstract" because there is no relation between the model and physical machine
Components:

Processors don't contain any storage elements
By definition every processor has its own memory hierarchy

Composition of models in Skillicorn's taxonomy

Must have connections:
Instruction Processor <---> Instruction Memory Hierarchy
Data Processor <---> Data Memory Hierarchy
Number of data processors may be 1 or N, number of instruction processors - 0, 1, or N
University/WUT/ECOAR/pictures/Pasted image 20250412152952.png

Collaboration of components

IP <-> IM

Models of computers in Skillicorn's taxonomy

About 30 can be designed
7 (maybe 9) are reasonable
Connection of processors of the same kind


Memory hierarchy

It is not possible to build arbitrarily big and arbitrarily fast memory
Access time grows with capacity
Layered (hierarchical) structure

Memory hierarchy - layers

University/WUT/ECOAR/pictures/Pasted image 20250412153153.png
From top (smallest, fastest) to bottom (largest, slowest):
Registers
Caches (3 levels)
Main (primary) memory
Mass storage used as virtual memory (secondary memory)
Mass storage used as file system
Removable media and network resources

Controlling the memory hierarchy

Most frequently/recently used objects are moved towards the top of the hierarchy (see the sketch below)
Control of object movements
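
A rough C sketch of why the hierarchy matters, assuming the caches are much smaller than the 64 MiB array: both passes perform the same number of accesses, but the strided pass touches a new cache line on nearly every access, so the upper layers of the hierarchy help far less (actual timings depend on the machine).

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)     /* 16 Mi ints = 64 MiB, far larger than typical caches */
#define STRIDE 16       /* 16 ints = 64 bytes, roughly one cache line per access */

int main(void) {
    int *a = malloc((size_t)N * sizeof *a);
    if (!a) return 1;
    for (int i = 0; i < N; i++)
        a[i] = i;

    long long sum = 0;

    /* Sequential pass: consecutive accesses reuse the same cache lines. */
    clock_t t0 = clock();
    for (int i = 0; i < N; i++)
        sum += a[i];
    clock_t t1 = clock();

    /* Strided pass: same number of accesses, but each one lands on a
       different cache line, defeating the caches. */
    for (int s = 0; s < STRIDE; s++)
        for (int i = s; i < N; i += STRIDE)
            sum += a[i];
    clock_t t2 = clock();

    printf("sequential: %ld ticks, strided: %ld ticks (sum=%lld)\n",
           (long)(t1 - t0), (long)(t2 - t1), sum);
    free(a);
    return 0;
}
```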

von Neumann machine

Program is stored in the same way as the data

Memory is a vector of numbered cells

Based on the above assumptions:

Physical organization of memory in uniprocessors

Harvard - separate IM and DM
considered non-von Neumann architecture
Princeton - common IM and DM

Harvard

University/WUT/ECOAR/pictures/Pasted image 20250412153736.png
Separate IM and DM
High performance - parallel access to instructions and data
Instruction memory cannot be written:
computer is not programmable (sold with a fixed program)
not a suitable architecture for a g.p. computer

Princeton

University/WUT/ECOAR/pictures/Pasted image 20250412153912.png
Generic implementation of von Neumann arch. with single, common IM and DM hierarchy
Single hierarchy makes it impossible to access data and instructions at the same time
von Neumann bottleneck
Unlimited possibilities of program modification
any object written by DP as data may be fetched by IP as instruction
programming capability - needed in g.p. computers
program may modify itself by writing its own instructions
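
A small, non-portable C sketch of the stored-program property: on a Princeton-style machine the bytes of a compiled function live in the same addressable memory as data, so they can be read like any other object. The cast from a function pointer is not strictly conforming C, but works on common platforms; on a pure Harvard machine this would not work at all.

```c
#include <stdio.h>

static int add(int a, int b) { return a + b; }

int main(void) {
    /* Treat the machine code of add() as ordinary data: possible because code
       and data share one addressable memory (code pages are typically
       read-only, but reading them is allowed). */
    const unsigned char *code = (const unsigned char *)&add;

    printf("add(2, 3) = %d\n", add(2, 3));
    printf("first bytes of add():");
    for (int i = 0; i < 8; i++)
        printf(" %02X", code[i]);
    printf("\n");
    return 0;
}
```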

Harvard-Princeton (program modification)

Software doesn't have full control over the placement of data in the memory hierarchy (memory vs. cache)
self-modification is not possible, as the effect would be unpredictable
OS may enforce placement of objects in memory
one process may modify instructions of another process
code loaded from file
Architecture provides programmability without a risk involved with self-modification
Most contemporary g.p. machines are based on this architecture.

Classification of computer memories

Persistency

Volatile - power is required to preserve information
Dynamic - requires periodic refresh
Static - requires only power (no refresh)
Non-volatile - keeps information after power is off

Access method

Random, sequential, associative, hybrid
Random Access Memory is a memory whose content is selected by address (volatile or non-volatile)
The RAM acronym is used incorrectly; Read-Write Memory (RWM) is the proper name