Operating system/Multiprogramming

This is a lesson in the course Operating system, which is a part of The School of Computer Science

Objective

Introduction

Multiprogramming became possible when disks were introduced to the computing world. The concept of multiprogramming relies on the capability of a computer to store instructions (programs) for long-term use. The goal is to reduce CPU idle time by allowing a new job to take over the CPU whenever the currently running job has to wait (e.g. for user I/O).

It was also at this point that operating systems received a new responsibility: decision making. Before multiprogramming was introduced, the role of the operating system was simple and straightforward: load a program into memory and execute it on the CPU.

With the advent of multiprogramming, operating systems faced new mechanics for program execution: multiple jobs had to be loaded into memory at the same time, and several options existed for allocating CPU time.

Two types of scheduling were introduced to handle this decision making: job scheduling and CPU scheduling. Job scheduling refers to the selection of jobs to load into memory. CPU scheduling refers to the selection of a job already in memory to execute on the CPU. In a computer system, both of these decisions are made by the operating system.

Scheduling Algorithms

Scheduling Basics

Process scheduling is one of the most important functions of an operating system that supports multiprogramming. This function relies heavily on queues. Three types of queues are used in process scheduling (a sketch follows the list):

  • Job Queue - Contains all processes that have been introduced into the system
  • Ready Queue - Contains processes that are waiting for CPU time, and can be selected to run at any time
  • Device Queue - Contains processes waiting on a certain device. Each device has its own queue for processes that need input/output from it.
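
To make these queues concrete, here is a minimal Python sketch that models them with double-ended queues. The Process class and the queue names are illustrative assumptions, not part of any real kernel:

    from collections import deque

    # Minimal model of the three scheduling queues.
    class Process:
        def __init__(self, pid):
            self.pid = pid

    job_queue = deque()                # every process admitted to the system
    ready_queue = deque()              # processes waiting for CPU time
    device_queues = {"disk": deque()}  # one queue per I/O device

    # A new job enters the system and then becomes ready.
    p = Process(pid=1)
    job_queue.append(p)
    ready_queue.append(job_queue.popleft())

    # When the running process requests disk I/O, it joins that device's queue.
    device_queues["disk"].append(ready_queue.popleft())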

Scheduling Statistics

To gauge the performance of a scheduling algorithm, several statistics are used (a worked example follows the list).

  • Turnaround Time = Time Job Finished - Time Job First Ready
  • Average Turnaround Time = Average of Turnaround Time of all jobs
  • Throughput = Number of Jobs Finished / Total CPU Time
  • CPU Utilization = (Total CPU Time - Idle Time)/Total CPU Time
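
As a worked example, suppose three jobs all become ready at time 0 and finish at times 3, 7, and 12, with the CPU idle for 2 of the 12 elapsed time units. All numbers here are made up for illustration:

    # Hypothetical jobs as (time first ready, time finished) pairs.
    jobs = [(0, 3), (0, 7), (0, 12)]
    total_time = 12   # total elapsed time in this example
    idle_time = 2

    turnarounds = [finish - ready for ready, finish in jobs]  # [3, 7, 12]
    avg_turnaround = sum(turnarounds) / len(turnarounds)      # 7.33
    throughput = len(jobs) / total_time                       # 0.25 jobs per time unit
    cpu_utilization = (total_time - idle_time) / total_time   # 0.83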

The general goals of scheduling are:

  • Reduce average turnaround time
  • Increase throughput
  • Increase CPU utilization

First Come First Serve

The name of this algorithm is self-explanatory. It is the simplest, though often the least efficient, algorithm for scheduling processes.

The first process to arrive in the ready queue is executed first. Since this is not a preemptive algorithm, an arriving process cannot take the CPU away from the running process; the running process continues until it completes.
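
A minimal non-preemptive FCFS simulation, assuming every job's arrival and burst time are known up front (the job values are hypothetical):

    # First Come First Serve: run jobs in arrival order, never preempting.
    jobs = [(1, 0, 8), (2, 1, 4), (3, 2, 2)]  # (pid, arrival, burst) - hypothetical

    time = 0
    for pid, arrival, burst in sorted(jobs, key=lambda j: j[1]):
        time = max(time, arrival)  # the CPU may sit idle until the job arrives
        time += burst              # run the job to completion
        print(f"job {pid}: finished at {time}, turnaround {time - arrival}")

With these values, job 1 finishes at time 8 while jobs 2 and 3 wait behind it, which is why FCFS tends to produce long average turnaround times when a long job arrives first.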

Shortest Job First

In this algorithm, if more than one process is ready to execute, the scheduler selects the process with the shortest "burst time" (time needed to finish) and assigns it to the CPU. This is also a non-preemptive algorithm, so once a process is assigned to the CPU, it is not replaced until it completes.
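
Under the same assumptions (burst times known in advance, hypothetical job values), non-preemptive SJF can be sketched with a priority queue keyed on burst time:

    import heapq

    # Shortest Job First: of the jobs that have arrived, run the one
    # with the smallest burst time to completion.
    jobs = [(1, 0, 8), (2, 1, 4), (3, 2, 2)]  # (pid, arrival, burst) - hypothetical

    time, ready, pending = 0, [], sorted(jobs, key=lambda j: j[1])
    while ready or pending:
        while pending and pending[0][1] <= time:  # move arrived jobs to the ready heap
            pid, arrival, burst = pending.pop(0)
            heapq.heappush(ready, (burst, pid, arrival))
        if not ready:                             # CPU idles until the next arrival
            time = pending[0][1]
            continue
        burst, pid, arrival = heapq.heappop(ready)
        time += burst
        print(f"job {pid}: finished at {time}, turnaround {time - arrival}")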

Shortest Remaining Time First

This is similar to Shortest Job First, but the algorithm is preemptive. When a new process becomes ready, the scheduler checks whether its burst time is shorter than the remaining time of the currently running process; if so, the new process "preempts" the current one, and the current process is returned to the ready queue.
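
The preemptive variant can be sketched by advancing time one unit at a time and always running the job with the least remaining time; the job set is the same hypothetical one as above:

    import heapq

    # Shortest Remaining Time First: each time unit, run whichever ready job
    # has the least remaining time; a new arrival may preempt the current job.
    jobs = [(1, 0, 8), (2, 1, 4), (3, 2, 2)]  # (pid, arrival, burst) - hypothetical

    time, ready, pending = 0, [], sorted(jobs, key=lambda j: j[1])
    while ready or pending:
        while pending and pending[0][1] <= time:
            pid, arrival, burst = pending.pop(0)
            heapq.heappush(ready, (burst, pid, arrival))  # key = remaining time
        if not ready:
            time = pending[0][1]
            continue
        remaining, pid, arrival = heapq.heappop(ready)
        time += 1                                 # run the shortest job for one unit
        if remaining > 1:
            heapq.heappush(ready, (remaining - 1, pid, arrival))
        else:
            print(f"job {pid}: finished at {time}, turnaround {time - arrival}")

Pushing the preempted job back onto the heap with its reduced remaining time is what returns it to the ready queue.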

Threads

The concept of threading was introduced to address the need for concurrent execution within a single process. Because the overhead of switching between processes is expensive in terms of computing resources, programmers needed a way to achieve concurrency without resorting to multiple processes.

A thread is essentially a lightweight process. Although each thread executes its own code path, all threads belonging to the same process share the same memory space and context. This makes communication and data sharing more convenient. More importantly, switching between threads takes much less time than a context switch between processes.
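
The shared-memory property can be seen in a small Python sketch; the counter variable and worker function are illustrative:

    import threading

    counter = 0
    lock = threading.Lock()

    def worker():
        global counter
        for _ in range(100_000):
            with lock:        # both threads see the same 'counter', so guard updates
                counter += 1

    # Both threads run inside one process and share its memory space.
    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)            # 200000: each thread updated the shared variable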

Threads come in two types:

  1. Kernel Threads - Kernel threads are managed by the kernel itself. They are created in kernel space, and switching between them is performed by the kernel. Although management is more efficient, the trade-off is the overhead required to create them.
  2. User Threads - User threads are created as part of a user process. Implementation-wise, the mechanism for creating and managing user threads is found within a programming language's application programming interface.

Assignments
