[Solved] Operating System (3140702) Paper Solution Summer 2021 | GTU SEM 4 Paper Solution | GTU Medium

 Q.1 (a) Define the essential properties of the following types of operating systems:

(1) Batch 

Users of a batch operating system do not interact with the computer directly. Each user prepares a job on an offline device such as a punch card and submits it to the computer operator. Jobs with similar requirements are grouped and executed together to speed up processing. Once the programmers have left their programs with the operator, the operator sorts the programs with similar needs into batches.

The batch operating system groups jobs that perform similar functions. Each group is treated as a batch and executed as a unit. A computer system with this operating system performs the following batch processing activities:

The steps to be followed by batch operating system are as follows −

Step 1 − The user prepares the job using punch cards.

Step 2 − After that the user submits the job to the operator.

Step 3 − The operator collects the jobs from different users and sorts them into batches with similar needs.

Step 4 − Finally, the operator submits the batches to the processor one by one.

Step 5 − All the jobs of a single batch are executed together.
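The five steps above can be sketched in miniature (an illustrative Python sketch; the job names and their "needs" are made up):

```python
# Hypothetical jobs: (user, need) pairs, e.g. which compiler a job requires.
jobs = [("u1", "fortran"), ("u2", "cobol"), ("u3", "fortran"), ("u4", "cobol")]

# Step 3: the operator sorts jobs into batches with similar needs.
batches = {}
for user, need in jobs:
    batches.setdefault(need, []).append(user)

# Steps 4-5: batches are submitted one by one; all jobs of a batch run together.
run_order = [u for need in sorted(batches) for u in batches[need]]
print(run_order)   # the cobol batch, then the fortran batch
```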

Advantages

The advantages of batch operating system are as follows −

  • The time taken by the system to execute all the programs will be reduced.

  • It can be shared between multiple users.

Disadvantages

The disadvantages of batch operating system are as follows −

  • Manual interrupts are required between two batches.

  • Priority of jobs is not set, they are executed sequentially.

  • It may lead to starvation.

  • The CPU utilization is low, and the CPU has to remain idle for a long time, because the time taken in loading and unloading batches is very high compared to execution time.

(2) Time-sharing 

Time-sharing enables many people, located at various terminals, to use a particular computer system at the same time. A time-sharing (or multitasking) system is a logical extension of multiprogramming: the processor's time is shared among multiple users simultaneously.

 

In the figure above, user 5 is in the active state; users 1, 2, 3, and 4 are in the waiting state; and user 6 is in the ready state.

  1. Active State – The user’s program is under the control of the CPU. Only one program can be in this state at a time.
  2. Ready State – The user’s program is ready to execute but is waiting for its turn to get the CPU. More than one user can be in the ready state at a time.
  3. Waiting State – The user’s program is waiting for some input/output operation. More than one user can be in the waiting state at a time.

Requirements of a Time-Sharing Operating System: an alarm clock mechanism to send an interrupt signal to the CPU after every time slice, and a memory protection mechanism to prevent one job’s instructions and data from interfering with other jobs. 
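The alarm-clock mechanism is what makes round-robin time slicing possible; a minimal simulation (the quantum and burst values are made up for illustration):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Return job names in finishing order. burst_times: {name: CPU units}."""
    queue = deque(burst_times)            # ready queue (dict preserves order)
    remaining = dict(burst_times)
    finished = []
    while queue:
        job = queue.popleft()             # dispatch: this job becomes active
        remaining[job] -= min(quantum, remaining[job])
        if remaining[job] == 0:
            finished.append(job)          # job terminates
        else:
            queue.append(job)             # alarm-clock interrupt: back to ready
    return finished

order = round_robin({"user1": 4, "user2": 2, "user3": 6}, quantum=2)
print(order)   # ['user2', 'user1', 'user3']
```

Each job gets the CPU for at most one quantum before the timer interrupt sends it back to the ready queue, which is exactly the fairness property described above.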

Advantages :

  1. Each task gets an equal opportunity.
  2. Less chances of duplication of software.
  3. CPU idle time can be reduced.

Disadvantages :

  1. Reliability problem.
  2. One must take care of the security and integrity of user programs and data.
  3. Data communication problem.

(3) Real-time

Real-time operating systems (RTOS) are used in environments where a large number of events, mostly external to the computer system, must be accepted and processed in a short time or within certain deadlines. Such a system is time-bound and has fixed deadlines.

Examples of the real-time operating systems: Airline traffic control systems, Command Control Systems, Airlines reservation system, Heart Pacemaker, Network Multimedia Systems, Robot etc.

The real-time operating systems can be of 3 types – 

  1. Hard Real-Time operating system: 
    These operating systems guarantee that critical tasks are completed within a strict range of time. 

    For example, consider a robot hired to weld a car body. If the robot welds too early or too late, the car cannot be sold, so this is a hard real-time system: the welding must be completed exactly on time. 

  2. Soft real-time operating system: 
    This operating system provides some relaxation in the time limit. 

    For example – multimedia systems, digital audio systems, etc. Explicit, programmer-defined and controlled processes are encountered in real-time systems. A separate process is charged with handling a single external event. The process is activated upon occurrence of the related event, signalled by an interrupt. 

    Multitasking operation is accomplished by scheduling processes for execution independently of each other. Each process is assigned a priority that corresponds to the relative importance of the event it services, and the processor is allocated to the highest-priority ready process. This type of scheduling, called priority-based preemptive scheduling, is used by real-time systems.

  3. Firm Real-time Operating System
    RTOSs of this type must also meet deadlines; missing one, while its impact is small, can have unintended consequences, such as a reduction in the quality of the product. Example: multimedia applications.
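The priority-based preemptive dispatching described above can be sketched as follows (the process names and priority values are invented; a larger number means higher priority here):

```python
def dispatch(ready):
    """ready: {process: priority}. Pick the highest-priority ready process."""
    return max(ready, key=ready.get)

ready = {"audio": 5, "logger": 1, "ui": 3}
running = dispatch(ready)                 # 'audio' has the highest priority

ready["sensor_event"] = 9                 # a more urgent event arrives...
running = dispatch(ready)                 # ...and preempts: 'sensor_event' runs
print(running)   # sensor_event
```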

Advantages: 

The advantages of real-time operating systems are as follows- 

  1. Maximum consumption – 
    Maximum utilization of devices and the system, and thus more output from all the resources. 
  2. Task Shifting – 
    The time assigned for shifting between tasks in these systems is very small: older systems take about 10 microseconds to shift from one task to another, while the latest systems take about 3 microseconds. 
  3. Focus On Application – 
    Focus on running applications and less importance to applications that are in the queue. 
  4. Real-Time Operating System In Embedded System – 
    Since the size of programs is small, RTOS can also be embedded systems like in transport and others. 
  5. Error Free – 
    These types of systems are designed to be error-free. 
  6. Memory Allocation – 
    Memory allocation is best managed in these types of systems.

Disadvantages: 
The disadvantages of real-time operating systems are as follows- 
 

  1. Limited Tasks – 
    Very few tasks run simultaneously, and the system concentrates on only a few applications in order to avoid errors. 
  2. Heavy Use Of System Resources – 
    These systems demand a lot of system resources, which are also expensive. 
  3. Complex Algorithms – 
    The algorithms are very complex and difficult for the designer to write. 
  4. Device Drivers And Interrupt Signals – 
    An RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as quickly as possible. 
  5. Thread Priority – 
    Setting thread priorities is difficult, because these systems switch tasks only rarely.
  6. Minimum Switching – RTOS performs minimal task switching.


 Q.1 (b) What are the advantages of multiprogramming?

Multiprogramming: A multiprogramming operating system allows multiple processes to execute by monitoring their process states and switching between them. It executes multiple programs to avoid CPU and memory underutilization. It is also called a multiprogram task system, and it processes work faster than a batch processing system. Advantages of multiprogramming:

  • CPU never becomes idle
  • Efficient resources utilization
  • Response time is shorter
  • Short time jobs completed faster than long time jobs
  • Increased Throughput
 Q.1 (c) What is the thread? What are the difference between user-level threads and kernel supported threads? Under what circumstances is one type “better” than the other?

A thread is a single sequential flow of execution of tasks within a process, so it is also known as a thread of execution or thread of control. A process may contain more than one thread. Each thread of the same process has its own program counter, stack of activation records, and control blocks, which is why a thread is often referred to as a lightweight process.


The process can be split down into so many threads. For example, in a browser, many tabs can be viewed as threads. MS Word uses many threads - formatting text from one thread, processing input from another thread, etc.

User-level threads vs. kernel-level threads:
  • The existence of user-level threads is unknown to the kernel, whereas kernel-level threads are known to the kernel.
  • User-level threads are managed without kernel support, whereas kernel-level threads are managed by the operating system.
  • User-level threads are faster to create and manage than kernel-level threads.
  • User-level threads are scheduled by the thread library, whereas kernel-level threads are scheduled by the kernel.

 

In a multiprocessor environment, the kernel-level threads are better than user-level threads, because kernel-level threads can run on different processors simultaneously while user-level threads of a process will run on one processor only even if multiple processors are available.
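Most modern systems expose kernel-level threads directly; in CPython, for instance, `threading.Thread` creates a real OS thread that the kernel schedules. A small sketch of several such threads sharing process memory under a lock:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # threads share the process's memory,
            counter += 1         # so shared data needs mutual exclusion

# Create and start four kernel-level threads, then wait for them to finish.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 4000: every increment survives thanks to the lock
```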


Q.2 (a) What is Process? Give the difference between a process and a program. 

 Process: In the operating system, a process is something that is currently under execution, so an active program can be called a process. For example, when you want to search for something on the web, you start a browser; that running browser is a process.

  1. A program contains a set of instructions designed to complete a specific task, whereas a process is an instance of an executing program.
  2. A program is a passive entity, as it resides in secondary memory; a process is an active entity, created during execution and loaded into main memory.
  3. A program exists in a single place and continues to exist until it is deleted; a process exists for a limited span of time and terminates after completing its task.
  4. A program is a static entity; a process is a dynamic entity.
  5. A program has no resource requirements beyond memory space for storing its instructions; a process has high resource requirements, needing resources like CPU, memory, and I/O during its lifetime.
  6. A program does not have any control block; a process has its own control block, called the Process Control Block.
  7. A program has two logical components, code and data; a process additionally requires information needed for its management and execution.
  8. A program does not change itself; many processes may execute a single program, sharing the same program code while their program data differ.

Q.2 (b) What is Process State? Explain different states of a process with various queues generated at each stage. 

The process executes when it changes the state. The state of a process is defined by the current activity of the process.

Each process may be in any one of the following states −

  • New − The process is being created.

  • Running − In this state the instructions are being executed.

  • Waiting − The process is in waiting state until an event occurs like I/O operation completion or receiving a signal.

  • Ready − The process is waiting to be assigned to a processor.

  • Terminated − the process has finished execution.

It is important to know that only one process can be running on any processor at any instant. Many processes may be ready and waiting.

Now let us see the state diagram of these process states −


OS Process State Diagram


Explanation

Step 1 − Whenever a new process is created, it is admitted into ready state.

Step 2 − If no other process is present at running state, it is dispatched to running based on scheduler dispatcher.

Step 3 − If the running process requests I/O or must wait for an event, it moves to the waiting state; if it is preempted by a higher-priority process or its time slice expires, it moves back to the ready state.

Step 4 − Whenever the I/O operation or event completes, the process is sent back to the ready state, signalled by an interrupt.

Step 5 − When a process in the running state completes its execution, it moves to the terminated state, which is the completion of the process.
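The steps above can be expressed as a small state-transition table (the event names are made up; the transitions follow the standard five-state diagram):

```python
# Allowed (state, event) -> next-state moves of the five-state process model.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "timeout"): "ready",        # preempted: back to ready queue
    ("running", "io_wait"): "waiting",      # requested I/O or an event
    ("waiting", "io_done"): "ready",        # interrupt signals completion
    ("running", "exit"): "terminated",
}

def step(state, event):
    return TRANSITIONS[(state, event)]      # KeyError = illegal transition

s = "new"
for e in ["admit", "dispatch", "io_wait", "io_done", "dispatch", "exit"]:
    s = step(s, e)
print(s)   # terminated
```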

Q.2 (c) Write a bounded-buffer monitor in which the buffers (portions) are embedded within the monitor itself.

monitor bounded_buffer {
    int items[MAX_ITEMS];
    int numItems = 0;
    condition full, empty;

    void produce(int v)
    {
        while (numItems == MAX_ITEMS)
            full.wait();
        items[numItems++] = v;
        empty.signal();
    }

    int consume()
    {
        int retVal;
        while (numItems == 0)
            empty.wait();
        retVal = items[--numItems];
        full.signal();
        return retVal;
    }
}
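As a runnable counterpart, the monitor above maps onto a lock plus two condition variables; a Python sketch using `threading.Condition` (the `not_full`/`not_empty` conditions play the roles of `full`/`empty`):

```python
import threading

class BoundedBuffer:
    def __init__(self, max_items):
        self.items = []
        self.max_items = max_items
        self.lock = threading.Lock()                     # the monitor lock
        self.not_full = threading.Condition(self.lock)   # ~ condition `full`
        self.not_empty = threading.Condition(self.lock)  # ~ condition `empty`

    def produce(self, v):
        with self.lock:
            while len(self.items) == self.max_items:
                self.not_full.wait()
            self.items.append(v)
            self.not_empty.notify()

    def consume(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()
            v = self.items.pop()          # LIFO, like items[--numItems] above
            self.not_full.notify()
            return v

buf = BoundedBuffer(2)
buf.produce(1)
buf.produce(2)
print(buf.consume(), buf.consume())   # 2 1
```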

OR Q.2 (c) What is Semaphore? Give the implementation of Readers-Writers Problem using Semaphore

A semaphore is a variable or abstract data type used to control access to a common resource by multiple threads and avoid critical section problems in a concurrent system such as a multitasking operating system.

The readers-writers problem is a classical process synchronization problem. It concerns a data set, such as a file, that is shared among several processes at a time. Some of these processes are readers, which only read the data set and never update it; others are writers, which can both read and write it.

The goal is to synchronize access so that the shared data stays consistent. If two readers access the object at the same time, there is no problem; but if two writers, or a reader and a writer, access it simultaneously, problems may arise. The solution is to give a writer exclusive access: while a writer is accessing the object, no other reader or writer may access it, whereas multiple readers can access it at the same time.

This can be implemented using semaphores. The codes for the reader and writer process in the reader-writer problem are given as follows −

Reader Process

The code that defines the reader process is given below −

wait (mutex);
rc ++;
if (rc == 1)
wait (wrt);
signal(mutex);
.
. READ THE OBJECT
.
wait(mutex);
rc --;
if (rc == 0)
signal (wrt);
signal(mutex);

In the above code, mutex and wrt are semaphores that are initialized to 1. Also, rc is a variable that is initialized to 0. The mutex semaphore ensures mutual exclusion and wrt handles the writing mechanism and is common to the reader and writer process code.

The variable rc denotes the number of readers accessing the object. As soon as rc becomes 1, the wait operation is used on wrt, which means a writer can no longer access the object. After the read operation is done, rc is decremented. When rc becomes 0, the signal operation is used on wrt, so a writer can access the object again.

Writer Process

The code that defines the writer process is given below:

wait(wrt);
.
. WRITE INTO THE OBJECT
.
signal(wrt);

If a writer wants to access the object, wait operation is performed on wrt. After that no other writer can access the object. When a writer is done writing into the object, signal operation is performed on wrt.
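The reader and writer pseudo-code above can be run directly with counting semaphores from Python's `threading` module (`mutex` and `wrt` initialized to 1 and `rc` to 0, exactly as described; the shared object and log are stand-ins for illustration):

```python
import threading

mutex = threading.Semaphore(1)   # protects rc
wrt = threading.Semaphore(1)     # exclusive access for writers
rc = 0
shared = {"value": 0}
log = []

def reader():
    global rc
    mutex.acquire(); rc += 1
    if rc == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    log.append(("read", shared["value"]))   # READ THE OBJECT
    mutex.acquire(); rc -= 1
    if rc == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer(v):
    wrt.acquire()                # exclusive: no readers, no other writer
    shared["value"] = v          # WRITE INTO THE OBJECT
    wrt.release()

writer(42)
reader()
print(log)   # [('read', 42)]
```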

Q.3 (a) Define the difference between preemptive and non preemptive scheduling. 

  • Basic – In preemptive scheduling, resources (CPU cycles) are allocated to a process for a limited time. In non-preemptive scheduling, once resources are allocated to a process, it holds them until it completes its burst time or switches to the waiting state.
  • Interrupt – A preemptive process can be interrupted in between; a non-preemptive process cannot be interrupted until it terminates or its time is up.
  • Starvation – In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. In non-preemptive scheduling, if a process with a long burst time is running on the CPU, a later process with a shorter burst time may starve.
  • Overhead – Preemptive scheduling has the overhead of scheduling processes; non-preemptive scheduling has no such overhead.
  • Flexibility – Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
  • Cost – Preemptive scheduling has an associated cost; non-preemptive scheduling has none.
  • CPU Utilization – High in preemptive scheduling; low in non-preemptive scheduling.
  • Waiting Time – Less in preemptive scheduling; high in non-preemptive scheduling.
  • Response Time – Less in preemptive scheduling; high in non-preemptive scheduling.
  • Examples – Preemptive: Round Robin, Shortest Remaining Time First. Non-preemptive: First Come First Serve, Shortest Job First.

Q.3 (c) What is deadlock? Explain deadlock prevention in detail.

Deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process.

Deadlock arises only if four conditions hold simultaneously; we can prevent deadlock by eliminating any one of the following conditions.

  1. Mutual Exclusion
  2. Hold and Wait
  3. No preemption
  4. Circular wait

Eliminate Mutual Exclusion 
It is not possible to violate mutual exclusion, because some resources, such as the tape drive and printer, are inherently non-shareable. 

Eliminate Hold and wait 

  1. Allocate all required resources to the process before the start of its execution; this eliminates the hold-and-wait condition, but it leads to low device utilization. For example, if a process requires a printer only at a later time and we allocate the printer before the start of its execution, the printer will remain blocked until the process has completed its execution. 
     
  2. The process will make a new request for resources after releasing the current set of resources. This solution may lead to starvation.

 


Eliminate No Preemption 
Preempt resources from a process when those resources are required by other, higher-priority processes. 

 Eliminate Circular Wait 
Each resource is assigned a numerical number, and a process can request resources only in increasing order of numbering. 
For example, if process P1 has been allocated resource R5, then a later request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted. 
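The numbering rule can be captured in a few lines (a sketch; the resource numbers follow the R4/R5 example above):

```python
def grant(held, requested):
    """Grant resource `requested` only if its number exceeds all held ones."""
    return all(requested > r for r in held)

p1_holds = {5}                  # P1 already holds R5
print(grant(p1_holds, 7))       # True: R7 > R5, increasing order preserved
print(grant(p1_holds, 4))       # False: R4 < R5, request refused
```

Because every process climbs the numbering in one direction, no cycle of waits can form, which is exactly why circular wait is eliminated.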

Q.3 (b) What are the Allocation Methods of a Disk Space? 

The allocation methods define how the files are stored in the disk blocks. There are three main disk space or file allocation methods.
  1. Contiguous Allocation
  2. Linked Allocation
  3. Indexed Allocation

1. Contiguous Allocation

In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file requires n blocks and is given a block b as the starting location, then the blocks assigned to the file will be: b, b+1, b+2,……b+n-1. This means that given the starting block address and the length of the file (in terms of blocks required), we can determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains

  • Address of starting block
  • Length of the allocated portion.

The file ‘mail’ in the following figure starts from the block 19 with length = 6 blocks. Therefore, it occupies 19, 20, 21, 22, 23, 24 blocks.

Advantages:

  • Both sequential and direct access are supported. For direct access, the address of the kth block of a file that starts at block b can easily be obtained as (b + k).
  • This is extremely fast, since the number of seeks is minimal because of the contiguous allocation of file blocks.

Disadvantages:

  • This method suffers from both internal and external fragmentation. This makes it inefficient in terms of memory utilization.
  • Increasing file size is difficult because it depends on the availability of contiguous memory at a particular instance.
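The directory-entry arithmetic above (start address plus length, with block k at b + k) can be sketched as follows, using the 'mail' example:

```python
directory = {"mail": (19, 6)}      # (starting block, length), as in the figure

def blocks_of(name):
    start, length = directory[name]
    return list(range(start, start + length))   # b, b+1, ..., b+n-1

def block_address(name, k):
    start, length = directory[name]
    if not 0 <= k < length:
        raise IndexError("beyond end of file")
    return start + k               # direct access: a single computation (b+k)

print(blocks_of("mail"))           # [19, 20, 21, 22, 23, 24]
print(block_address("mail", 3))    # 22
```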

2. Linked List Allocation

In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block contains a pointer to the next block occupied by the file.

The file ‘jeep’ in following image shows how the blocks are randomly distributed. The last block (25) contains -1 indicating a null pointer and does not point to any other block.

Advantages:

  • This is very flexible in terms of file size. File size can be increased easily since the system does not have to look for a contiguous chunk of memory.
  • This method does not suffer from external fragmentation. This makes it relatively better in terms of memory utilization.

Disadvantages:

  • Because the file blocks are distributed randomly on the disk, a large number of seeks are needed to access every block individually. This makes linked allocation slower.
  • It does not support random or direct access. We can not directly access the blocks of a file. A block k of a file can be accessed by traversing k blocks sequentially (sequential access ) from the starting block of the file via block pointers.
  • Pointers required in the linked allocation incur some extra overhead.
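A sketch of why direct access is lost: reaching block k means following k pointers from the starting block (the block chain below is invented for illustration; -1 marks the end of the file, as with 'jeep' above):

```python
# Invented chain for a file starting at block 9; -1 marks end of file.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}

def kth_block(start, k):
    block = start
    for _ in range(k):             # k sequential hops through the pointers
        block = next_block[block]
        if block == -1:
            raise IndexError("past end of file")
    return block

print(kth_block(9, 0))   # 9: the starting block itself
print(kth_block(9, 3))   # 10: reached only after following three pointers
```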

3. Indexed Allocation

In this scheme, a special block known as the Index block contains the pointers to all the blocks occupied by a file. Each file has its own index block. The ith entry in the index block contains the disk address of the ith file block. The directory entry contains the address of the index block as shown in the image:

Advantages:

  • This supports direct access to the blocks occupied by the file and therefore provides fast access to the file blocks.
  • It overcomes the problem of external fragmentation.

Disadvantages:

  • The pointer overhead for indexed allocation is greater than linked allocation.
  • For very small files, say files that span only 2–3 blocks, indexed allocation keeps one entire block (the index block) for the pointers, which is inefficient in terms of memory utilization. In linked allocation, by contrast, we lose the space of only one pointer per block.

For files that are very large, single index block may not be able to hold all the pointers.
Following mechanisms can be used to resolve this:

  1. Linked scheme: This scheme links two or more index blocks together to hold the pointers. Every index block then contains a pointer or the address of the next index block.
  2. Multilevel index: In this policy, a first-level index block points to second-level index blocks, which in turn point to the disk blocks occupied by the file. This can be extended to three or more levels depending on the maximum file size.
  3. Combined scheme: In this scheme, a special block called the inode (information node) contains all the information about the file, such as its name, size, and permissions, and the remaining space of the inode stores the disk block addresses that contain the actual file. The first few of these pointers point to direct blocks, i.e., they contain the addresses of disk blocks holding file data. The next few pointers point to indirect blocks, which may be single, double, or triple indirect. A single indirect block does not contain file data but the disk addresses of blocks that do; similarly, a double indirect block contains the disk addresses of blocks that hold the addresses of the blocks containing the file data.

OR Q.3 (a) What are the disadvantages of FCFS scheduling algorithm as compared to shortest job first (SJF) scheduling? 
Disadvantages :
(i) Waiting time can be huge if short requests wait behind long ones.
(ii) It is not appropriate for time-sharing systems, where it is important that each user gets the CPU for an equal amount of time.
(iii) An appropriate mix of jobs is required to achieve good results from FCFS scheduling.
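Disadvantage (i), the convoy effect, is easy to quantify: average waiting time for the same CPU bursts under FCFS order versus SJF order (the burst values are illustrative; all jobs are assumed to arrive at time 0):

```python
def avg_wait(bursts):
    """Average waiting time when jobs run in the given order."""
    wait = elapsed = 0
    for b in bursts:
        wait += elapsed            # this job waited for all earlier jobs
        elapsed += b
    return wait / len(bursts)

bursts = [24, 3, 3]                # one long job arrives first
fcfs = avg_wait(bursts)            # FCFS: run in arrival order
sjf = avg_wait(sorted(bursts))     # SJF: shortest bursts first
print(fcfs, sjf)                   # 17.0 3.0
```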
OR Q.3 (b) Distinguish between CPU bounded, I/O bounded processes. 
CPU-bound processes execute at the speed of the processor (CPU). The CPU performs their work with the assistance of the arithmetic logic unit (ALU) and control unit (CU). For example, the process of adding two vector arrays is CPU-bound.
I/O-bound processes progress and complete at the speed of the I/O subsystem. Jobs like reading data from disk or writing data to disk are I/O-bound.
The main difference between I/O-bound, CPU-bound, and memory-bound processes is which device's speed determines the rate at which the process completes. 
OR Q.3 (c) What is deadlock? Explain deadlock Avoidance in detail.
Deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process.
In deadlock avoidance, the request for any resource will be granted if the resulting state of the system doesn't cause deadlock in the system. The state of the system will continuously be checked for safe and unsafe states.
In order to avoid deadlocks, the process must tell OS, the maximum number of resources a process can request to complete its execution.
The simplest and most useful approach states that the process should declare the maximum number of resources of each type it may ever need. The Deadlock avoidance algorithm examines the resource allocations so that there can never be a circular wait condition.

Q.4 (a) What is Access control?
Access control for an operating system determines how the operating system implements accesses to system resources by satisfying the security objectives of integrity, availability, and secrecy.
OR 
Access control is a security technique that regulates who or what can view or use resources in a computing environment.
Q.4 (b) What are Pages and Frames? What is the basic method of Segmentation? 
Page is a fixed-length contiguous block of virtual memory, described by a single entry in the page table. It is the smallest unit of data for memory management in a virtual memory operating system.
A frame refers to a storage frame or central storage frame. In terms of physical memory, it is a fixed sized block in physical memory space, or a block of central storage. In computer architecture, frames are analogous to logical address space pages. 
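The page/frame relationship can be shown with a toy address translation (a 4 KiB page size and the page-to-frame mappings below are assumptions for illustration):

```python
PAGE_SIZE = 4096                   # assumed 4 KiB pages

def page_translate(vaddr, page_table):
    page, offset = divmod(vaddr, PAGE_SIZE)   # split into page number / offset
    frame = page_table[page]                  # page table maps page -> frame
    return frame * PAGE_SIZE + offset         # offset carries over unchanged

page_table = {0: 5, 1: 2}          # hypothetical page-to-frame mappings
print(page_translate(4100, page_table))   # page 1, offset 4 -> 2*4096 + 4
```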
A computer system that uses segmentation has a logical address space that can be viewed as a collection of segments, and each segment is of variable size: it may grow or shrink during execution. Each segment has a name and a length, and an address specifies both the segment name and the displacement within the segment.
Therefore the user specifies each address with the help of two quantities: segment name and offset.
For simplified Implementation segments are numbered; thus referred to as segment number rather than segment name.
Thus the logical address consists of two tuples:
<segment-number,offset>
where,
Segment Number (s): identifies the segment; it is used as an index into the segment table.
Offset (d): the displacement within the segment; it must lie within the segment's limit.
In Operating Systems, Segmentation is a memory management technique in which the memory is divided into the variable size parts. Each part is known as a segment which can be allocated to a process.
The details about each segment are stored in a table called a segment table. Segment table is stored in one (or many) of the segments.
Segment table contains mainly two information about segment:
  1. Base: It is the base address of the segment
  2. Limit: It is the length of the segment.
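Base and limit give a direct translation rule for a logical address <segment-number, offset>: the address is valid only if the offset is less than the limit, and it maps to base + offset. A sketch with made-up (base, limit) entries:

```python
# Hypothetical segment table: entry s holds (base, limit).
segment_table = [(1400, 1000), (6300, 400)]

def seg_translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                            # offset must fit in the segment
        raise MemoryError("offset beyond segment limit")
    return base + d                           # resulting physical address

print(seg_translate(0, 53))    # 1400 + 53 = 1453
print(seg_translate(1, 399))   # 6300 + 399 = 6699
```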
Q.4 (c) Briefly explain and compare, fixed and dynamic memory partitioning schemes

1. Fixed Partitioning : 
Multi-programming with fixed partitioning is a contiguous memory management technique in which the main memory is divided into fixed-sized partitions of equal or unequal size. Whenever we have to allocate memory to a process, a free partition big enough to hold the process is found and the memory is allocated to it. If no free partition is available, the process waits in a queue to be allocated memory. It is one of the oldest memory management techniques and is easy to implement. 

2. Variable Partitioning : 
Multi-programming with variable partitioning is a contiguous memory management technique in which the main memory is not divided into fixed partitions; instead, each process is allocated a chunk of free memory big enough for it to fit. The space left over is considered free space, which can be used by other processes. It also provides the concept of compaction: the free spaces scattered across memory are combined into a single large block. 


Difference between Fixed Partitioning and Variable Partitioning : 

  1. In multi-programming with fixed partitioning, the main memory is divided into fixed-sized partitions; in variable partitioning, it is not.
  2. In fixed partitioning, only one process can be placed in a partition; in variable partitioning, a process is allocated a chunk of free memory of the size it needs.
  3. Fixed partitioning does not utilize main memory effectively; variable partitioning does.
  4. Fixed partitioning suffers from both internal and external fragmentation; variable partitioning suffers only from external fragmentation.
  5. The degree of multi-programming is lower with fixed partitioning and higher with variable partitioning.
  6. Fixed partitioning is easier to implement; variable partitioning is harder to implement.
  7. Fixed partitioning limits the size of a process; variable partitioning imposes no such limit.
OR Q.4 (a) Explain difference between Security and Protection?

Security

  • Security grants access to specific users of the system only.

  • There are external security threats associated with the system.

  • Convoluted queries are handled by security systems.

  • Security uses mechanisms like encryption and authentication (also known as certification).

Protection

  • It deals with the access to certain system resources.

  • There are internal threats associated with protection of the system.

  • Simple queries are handled in protection.

  • It tries to determine the files that can be accessed by a particular user.

  • It implements authorization mechanism.

OR Q.4 (b) Differentiate external fragmentation with internal fragmentation. 

As processes are loaded into and removed from memory, the free memory space is broken into little pieces. It sometimes happens that a process cannot be allocated to the memory blocks because they are too small, so the memory blocks remain unused. This problem is known as fragmentation.

Internal Fragmentation

The memory block assigned to a process may be bigger than requested; some portion of the memory is then left unused, as it cannot be used by another process. Internal fragmentation can be reduced by assigning the smallest partition that is still large enough for the process.

External Fragmentation

The total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used. External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic.

Following are the important differences between Internal Fragmentation and External Fragmentation.

  1. Internal fragmentation occurs when the memory block assigned to a process is larger than the process requires; external fragmentation occurs when processes are loaded and removed, leaving free holes between allocations.

  2. Internal fragmentation arises with fixed-sized memory blocks; external fragmentation arises with variable-sized memory blocks.

  3. Internal fragmentation happens when memory is split into fixed-sized partitions; external fragmentation happens when memory is split into variable-sized partitions.

  4. The best-fit block is a solution to internal fragmentation; paging, compaction, and segmentation are solutions to external fragmentation.
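The internal-fragmentation case can be made concrete with a small sketch. Assuming fixed-sized partitions (the partition size and function name below are illustrative, not from the syllabus), the wasted space is the difference between the allocated partitions and the process size:

```python
# Hypothetical sketch: internal fragmentation with fixed-size partitions.
PARTITION_SIZE = 4096  # bytes per fixed partition (assumed value)

def internal_fragmentation(process_size: int) -> int:
    """Wasted space inside the partition(s) holding one process."""
    partitions_needed = -(-process_size // PARTITION_SIZE)  # ceiling division
    return partitions_needed * PARTITION_SIZE - process_size

print(internal_fragmentation(5000))  # needs 2 partitions: 8192 - 5000 = 3192 wasted
print(internal_fragmentation(4096))  # exact fit: 0 wasted
```
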

OR Q.4 (c) Explain the best fit, first fit and worst fit algorithm
First Fit : The first fit approach allocates the first free partition or hole that is large enough to accommodate the process. The search finishes as soon as the first suitable free partition is found.
Best Fit : The best fit approach allocates the smallest free partition that meets the requirement of the requesting process. This algorithm searches the entire list of free partitions and selects the smallest hole that is adequate, i.e. the hole closest to the actual size the process needs.
Worst fit : The worst fit approach locates the largest available free portion, so that the portion left over will be big enough to be useful. It is the reverse of best fit.
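The three strategies above can be sketched over a list of free hole sizes (the hole sizes and request below are made-up sample values); each function returns the index of the chosen hole, or None if nothing fits:

```python
# Illustrative sketch of first fit, best fit, and worst fit placement.
def first_fit(holes, size):
    # Take the first hole large enough for the request.
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    # Take the smallest hole that is still adequate.
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    # Take the largest hole available.
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # -> 1 (500 is the first hole that fits)
print(best_fit(holes, 212))   # -> 3 (300 is the smallest adequate hole)
print(worst_fit(holes, 212))  # -> 4 (600 is the largest hole)
```
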

Q.5 (a) Explain the concept of virtual machines.
A Virtual Machine (VM) is a compute resource that uses software instead of a physical computer to run programs and deploy apps. One or more virtual “guest” machines run on a physical “host” machine.
An example of a process virtual machine is the Java Virtual Machine (JVM) which allows any system to run Java applications as if they were native to the system.

Q.5 (b) Compare virtual machine and non virtual machine.

Virtual machine :
  • A VM is a piece of software that allows you to install other software inside it, so you control it virtually instead of installing the software directly on the computer.
  • Applications running on a VM system can run different operating systems.
  • A VM virtualizes the complete computer system.
  • A VM image is very large.
  • A VM takes minutes to start, due to its large size.
  • A VM uses a lot of system memory.
  • A VM is more secure.
  • VMs are useful when we require all of the OS resources to run various applications.
  • Examples of VMs are: KVM, Xen, VMware.
Container :
  • A container is a piece of software that packages different functionalities of an application so they can run independently.
  • Applications running in a container environment share a single OS.
  • Containers virtualize the operating system only.
  • A container is very lightweight, i.e. a few megabytes in size.
  • Containers take a few seconds to start.
  • Containers require much less memory.
  • Containers are less secure.
  • Containers are useful when we need to maximise the number of running applications using minimal servers.
  • Examples of containers are: RancherOS, PhotonOS, containers by Docker.

Q.5 (c) What is “inode”? Explain File and Directory Management of Unix Operating System.
In Unix-based operating systems, each file is indexed by an inode. Inodes are special disk blocks that are created when the file system is created. The number of inodes limits the total number of files/directories that can be stored in the file system.
Directory Management : A directory is a file whose sole job is to store file names and the related information. All files, whether ordinary, special, or directory, are contained in directories.
Unix uses a hierarchical structure for organizing files and directories. This structure is often referred to as a directory tree. The tree has a single root node, the slash character (/), and all other directories are contained below it.
File Management : All data in Unix is organized into files. All files are organized into directories. These directories are organized into a tree-like structure called the file system.
In Unix, there are three basic types of files −
Ordinary Files − An ordinary file is a file on the system that contains data, text, or program instructions.
Directories − Directories store both special and ordinary files. For users familiar with Windows or Mac OS, Unix directories are equivalent to folders.
Special Files − Some special files provide access to hardware such as hard drives, CD-ROM drives, modems, and Ethernet adapters. Other special files are similar to aliases or shortcuts and enable you to access a single file using different names.
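On a Unix system, a file's inode number and type can be inspected from Python's standard library; a minimal sketch (the temporary file here exists only for illustration):

```python
# Read a file's inode number and type via os.stat, the stat(2) wrapper.
import os
import stat
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

info = os.stat(path)
print("inode:", info.st_ino)                        # index node number
print("regular file?", stat.S_ISREG(info.st_mode))  # True for ordinary files
print("directory?", stat.S_ISDIR(info.st_mode))     # False here

os.unlink(path)  # clean up the illustration file
```
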

OR Q.5 (a) What is marshalling and unmarshalling?
Marshalling is the process of transforming the memory representation of an object to a data format suitable for the storage and transmission.
Unmarshalling refers to the process of transforming a representation of an object that is used for storage or transmission to a representation of the object that is executable.
In a few words, “marshalling” refers to the process of converting data or objects into a byte-stream, and “unmarshalling” is the reverse process of converting the byte-stream back into the original data or objects. The conversion is achieved through “serialization”.

OR Q.5 (b) What are components of Linux systems?
Linux architecture has the following components: 
Kernel: Kernel is the core of the Linux based operating system. It virtualizes the common hardware resources of the computer to provide each process with its virtual resources. This makes the process seem as if it is the sole process running on the machine. The kernel is also responsible for preventing and mitigating conflicts between different processes. Different types of the kernel are: 
  • Monolithic Kernel
  • Hybrid kernels
  • Exo kernels
  • Micro kernels
System Library: System libraries are special functions that are used to implement the functionality of the operating system.
Shell: It is an interface to the kernel which hides the complexity of the kernel’s functions from the users. It takes commands from the user and executes the kernel’s functions.
Hardware Layer: This layer consists of all peripheral devices like RAM, HDD, CPU, etc.
System Utility: It provides the functionalities of an operating system to the user.

OR Q.5 (c) Explain Disk arm scheduling algorithm. 
Disk scheduling is a method used by the OS (operating system) to schedule upcoming I/O requests to the disk. Disk scheduling is also called input/output scheduling.

The time required to read or write a disk block is determined by three factors:

  1. Seek time (the time to move the arm to the proper cylinder).
  2. Rotational delay (the time for the proper sector to rotate under the head).
  3. Actual data transfer time.

For most disks, the seek time dominates the other two times, so reducing the mean seek time can improve system performance substantially.

Various types of disk arm scheduling algorithms are available to decrease mean seek time.

  1. FCFS (First come first serve)
  2. SSTF (Shortest seek time first)
  3. SCAN
  4. C-SCAN
  5. LOOK (Elevator)
  6. C-LOOK
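The effect of the scheduling choice on total head movement can be sketched for the first two algorithms; the request queue and starting cylinder below are sample values for illustration:

```python
# Total head movement under FCFS vs SSTF for a sample request queue.
def fcfs(start, requests):
    # Serve requests strictly in arrival order.
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf(start, requests):
    # Always serve the pending request closest to the current head position.
    pending, total, pos = list(requests), 0, start
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print("FCFS head movement:", fcfs(53, queue))  # 640 cylinders
print("SSTF head movement:", sstf(53, queue))  # 236 cylinders
```

SSTF reduces the mean seek distance sharply here, which matches the point above that seek time dominates disk access cost.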
