Sunday, 10 July 2022

The History and Evolution of Operating Systems.

Serial Processing

The history of the operating system began in 1950. Before 1950, programmers interacted directly with the hardware; there was no operating system at that time. If a programmer wished to execute a program in those days, the following serial steps were necessary.

  • Type the program onto punched cards.
  • Load the punched cards into the card reader.
  • Submit the cards to the computing machine; if there were any errors, they were indicated by lights.
  • The programmer examined the registers and main memory to identify the cause of the error.
  • Take the output from the printer.
  • Only then was the programmer ready for the next program.

Drawback:

This type of processing was difficult for users: it took a great deal of time, and the next program had to wait for the previous one to complete. Because programs were submitted to the machine one after another, the method is said to be serial processing.

Batch Processing

Before 1960, executing a program on a computer was difficult because the computer was spread across three different rooms: one room for the card reader, one for executing the program, and another for printing the results.

The user or machine operator had to run between these three rooms to complete a single job. Batch processing solves this problem.

In the batch processing technique, jobs of the same type are batched together and executed as a group. The operator carries the whole batch of jobs from one room to another at once, so the programmer need not run between these three rooms several times.

Multiprogramming

Multiprogramming is a technique in which a single processor executes a number of programs concurrently. In multiprogramming, several processes reside in main memory at the same time. The operating system (OS) picks one of the jobs in main memory and begins to execute it. Consider the following figure, which depicts the layout of a multiprogramming system: main memory holds five jobs at a time, and the CPU executes them one by one.


Figure: Layout of a multiprogramming system (five jobs residing in main memory).

In a non-multiprogramming system, the CPU can execute only one program at a time. If the running program is waiting for an I/O device, the CPU sits idle, which degrades CPU performance.

But in a multiprogramming environment, when a process waits for I/O, the CPU switches from that job to another job in the job pool, so the CPU is rarely left idle. A minimal sketch of this switching behavior is shown below.
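
The following is a small Python sketch of the idea (the job names, the burst lengths, and the assumption that I/O completes on its own while the CPU runs other jobs are all invented purely for illustration):

    # Minimal simulation of multiprogramming: when the running job blocks on
    # I/O, the CPU immediately switches to another ready job instead of idling.
    from collections import deque

    # Each job is a queue of bursts: ("cpu", units) or ("io", units).
    jobs = {
        "job1": deque([("cpu", 3), ("io", 2), ("cpu", 2)]),
        "job2": deque([("cpu", 4), ("io", 3), ("cpu", 1)]),
        "job3": deque([("cpu", 2), ("io", 1), ("cpu", 3)]),
    }

    ready = deque(jobs)               # jobs residing in main memory
    while ready:
        name = ready.popleft()        # the OS picks one job
        _, length = jobs[name].popleft()
        print(f"CPU runs {name} for {length} units")
        # If the job now needs I/O, the CPU does not wait for it.
        if jobs[name] and jobs[name][0][0] == "io":
            _, io_len = jobs[name].popleft()
            print(f"  {name} waits {io_len} units on I/O; CPU switches to another job")
        if jobs[name]:                # job still has CPU work left
            ready.append(name)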

Advantages:

  • Efficient memory utilization.
  • The CPU is rarely idle, so CPU performance increases.
  • The throughput of the CPU may also increase.
  • In a non-multiprogramming environment, a program may have to wait a long time for the CPU; in multiprogramming, this waiting time is limited.

Time Sharing System

Time-sharing, or multitasking, is a logical extension of multiprogramming. Multiple jobs are executed by the CPU switching between them: the CPU scheduler selects a job from the ready queue and switches the CPU to that job, and when its time slice expires, the CPU switches from this job to another.

In this method, CPU time is shared by different processes, which is why it is called a time-sharing system. The length of the time slice is generally defined by the operating system. A small round-robin sketch of this scheme follows below.
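
Below is a minimal round-robin sketch in Python (the process names, the remaining burst times, and the QUANTUM value are assumptions made purely for illustration; real schedulers are far more sophisticated):

    # Round-robin time sharing: each process runs for at most one time slice,
    # then goes to the back of the ready queue if it is not finished.
    from collections import deque

    QUANTUM = 2                                        # time slice set by the "OS"
    ready = deque([("p1", 5), ("p2", 3), ("p3", 4)])   # (process, remaining time)

    clock = 0
    while ready:
        name, remaining = ready.popleft()
        run = min(QUANTUM, remaining)                  # run until the slice expires
        clock += run
        remaining -= run
        print(f"t={clock:2}: ran {name} for {run} unit(s), {remaining} remaining")
        if remaining:
            ready.append((name, remaining))            # not done: back of the queue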

Advantages:

  • The main advantage of the time-sharing system is efficient CPU utilization. It was developed to provide interactive use of a computer system at a reasonable cost. A time-shared OS uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.
  • Another advantage of the time-sharing system over batch processing is that the user can interact with a job while it is executing, which is not possible in a batch system.

Parallel System

There is a trend toward multiprocessor systems. Such systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices.

These systems are referred to as tightly coupled systems, and such a system is called a parallel system. In a parallel system, a number of processors execute their jobs in parallel.

Advantages:

  • It increases throughput.
  • By increasing the number of processors (CPUs), more work can be done in a shorter period of time (see the sketch after this list).
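
The following is a minimal Python sketch of spreading work across several processors with the multiprocessing module (the work function, the inputs, and the pool size of 4 are assumptions chosen purely for illustration):

    # Four CPU-bound tasks handed to four worker processes; on a machine with
    # four CPUs they can run in parallel, finishing in roughly the time of the
    # longest single task instead of the sum of all four.
    from multiprocessing import Pool

    def work(n):
        return sum(range(n))          # a deliberately slow, CPU-bound task

    if __name__ == "__main__":
        inputs = [10_000_000, 20_000_000, 30_000_000, 40_000_000]
        with Pool(processes=4) as pool:
            results = pool.map(work, inputs)
        print(results)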

Distributed System

In a distributed operating system, the processors do not share memory or a clock; each processor has its own local memory. The processors communicate with one another through various communication lines, such as high-speed buses. These systems are referred to as loosely coupled systems.

Advantages:

  • If a number of sites are connected by high-speed communication lines, it is possible to share resources from one site to another. For example, suppose s1 and s2 are two sites connected by a communication line, and site s1 has a printer while site s2 does not. A user at s2 can print on s1's printer without moving from s2 to s1. Therefore, resource sharing is possible in a distributed operating system (see the sketch after this list).
  • A large computation can be partitioned into a number of sub-computations that run concurrently on different sites of the distributed system.
  • If a resource or system at one site fails due to technical problems, the systems and resources at the other sites can still be used, so reliability increases in a distributed system.
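
Below is a minimal Python sketch of the printer example above: two "sites" with no shared memory exchange a message over a network socket (the port number, the file name, and running both sites on localhost are assumptions made only for the demonstration):

    # Site s1 owns the printer and answers print requests; site s2 has no
    # printer and sends its job to s1 over a communication line (a TCP socket).
    import socket
    import time
    from multiprocessing import Process

    HOST, PORT = "127.0.0.1", 50007   # assumed free port for the demo

    def site_s1():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                job = conn.recv(1024).decode()
                conn.sendall(f"s1 printed {job}".encode())

    def site_s2():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(b"report.txt")
            print(cli.recv(1024).decode())

    if __name__ == "__main__":
        server = Process(target=site_s1)
        server.start()
        time.sleep(0.5)               # give s1 a moment to start listening
        site_s2()                     # s2 uses s1's printer remotely
        server.join()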

Source: Operating Systems: Internals and Design Principles by Stallings (Pearson)
