[Solved] Winter 2021 Operating System (3140702) SEM 4 GTU Paper Solution

GUJARAT TECHNOLOGICAL UNIVERSITY

BE - SEMESTER–IV (NEW) EXAMINATION – WINTER 2021

Subject Code: 3140702    Date: 31/12/2021

Subject Name: Operating System

Time: 10:30 AM TO 01:00 PM    Total Marks: 70


 

Q.1 (a) Define the following:

(1) System bus :  System bus is a single computer bus that connects the major components of a computer system, combining the functions of a data bus to carry information, an address bus to determine where it should be sent or read from, and a control bus to determine its operation.

(2) Auxiliary memory : Auxiliary memory, also known as auxiliary storage, secondary storage, secondary memory or external memory, is a non-volatile memory (does not lose stored data when the device is powered down).

  1. The most common forms of auxiliary storage have been magnetic disks, magnetic tapes, and optical discs.
  2. Programs not currently needed in main memory are transferred into auxiliary memory to provide space in main memory for other programs that are currently in use.
  3. Auxiliary memory is the lowest-cost, largest-capacity, and slowest-access storage in a computer system.

(b) What do you mean by cache memory? Explain the cache read operation.
Cache memory is a small, fast, chip-based memory that sits between the CPU and main memory. It stores the program data and instructions currently being used by the CPU, so they can be accessed much faster than from main memory.
OR
It acts as a temporary storage area from which the computer's processor can retrieve data easily.

Cache read operation:
  1. When the CPU needs a word, it first checks the cache.
  2. Cache hit: if the word is present in the cache, it is delivered to the CPU immediately.
  3. Cache miss: if the word is not present, the block of main memory containing it is read into the cache, and the word is then delivered to the CPU. Because of locality of reference, subsequent accesses to the same block are likely to be hits.

(c) What is process? Explain the process creation and termination.
A process is the instance of a computer program that is being executed by one or many threads. It contains the program code and its activity. 
OR
A Process is something that is currently under execution. So, an active program can be called a Process.
For example, when you search for something on the web, you start a browser; that running browser is a process.

The process creation and termination :

1. Process creation: 

1. When a new process is created, the operating system assigns a unique Process Identifier (PID) to it and inserts a new entry in the primary process table. 

2. Then required memory space for all the elements of the process such as program, data, and stack is allocated including space for its Process Control Block (PCB). 

3. Next, the various values in PCB are initialized such as, 

  1. The process identification part is filled with the PID assigned in step 1, along with its parent's PID.
  2. The processor register values are mostly filled with zeroes, except for the stack pointer and program counter. The stack pointer is filled with the address of the stack allocated in step 2, and the program counter is filled with the address of the program's entry point.
  3. The process state information would be set to ‘New’.
  4. Priority would be lowest by default, but the user can specify any priority during creation.

4. Then the operating system will link this process to the scheduling queue and the process state would be changed from ‘New’ to ‘Ready’. Now the process is competing for the CPU. 

5. Additionally, the operating system will create other data structures, such as log files or accounting files, to keep track of the process's activity. 

2. Process termination: 

Processes terminate themselves when they finish executing their last statement; the operating system then uses the exit() system call to delete their context. All the resources held by the process, such as physical and virtual memory, I/O buffers, and open files, are taken back by the operating system. A process P can be terminated either by the operating system or by the parent process of P. 

A parent may terminate a process due to one of the following reasons:

  1. The task given to the child is no longer required.
  2. The child has taken more resources than its allowed limit.
  3. The parent of the process is exiting; as a result, all its children are deleted. This is called cascaded termination.
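
A minimal POSIX sketch of process creation and termination using fork(), exec(), wait() and exit(); the program being launched (ls) is just an example:

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/wait.h>

  int main(void)
  {
      pid_t pid = fork();               /* create a new process; child gets its own PID */
      if (pid < 0) {
          perror("fork");               /* creation failed */
          exit(EXIT_FAILURE);
      } else if (pid == 0) {
          /* child: replace its image with a new program */
          execlp("ls", "ls", "-l", (char *)NULL);
          perror("execlp");             /* reached only if exec fails */
          _exit(EXIT_FAILURE);
      } else {
          int status;
          waitpid(pid, &status, 0);     /* parent waits for the child to terminate */
          printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
      }
      return 0;                         /* implicit exit(): the OS reclaims all resources */
  }
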
Q.2 (a) Define the term critical section.
Critical Section: When more than one process accesses the same code segment, that segment is known as the critical section. The critical section contains shared variables or resources that need to be synchronized to maintain the consistency of data. In simple terms, a critical section is a group of instructions or a region of code that must be executed atomically, such as code accessing a shared resource (a file, an input or output port, global data, etc.).
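
A minimal sketch of a critical section protected by a POSIX mutex; the shared counter and iteration count are illustrative:

  #include <pthread.h>
  #include <stdio.h>

  static long counter = 0;                          /* shared resource */
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void *worker(void *arg)
  {
      for (int i = 0; i < 100000; i++) {
          pthread_mutex_lock(&lock);                /* entry section */
          counter++;                                /* critical section */
          pthread_mutex_unlock(&lock);              /* exit section */
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, worker, NULL);
      pthread_create(&t2, NULL, worker, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      printf("counter = %ld\n", counter);           /* always 200000 with the lock */
      return 0;
  }

Compile with -lpthread; without the lock, the two threads race on counter and the final value is usually less than 200000.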

(b) Difference between user level and kernel level thread.
| User level thread | Kernel level thread |
|-------------------|---------------------|
| User threads are implemented by a user-level thread library. | Kernel threads are implemented by the OS. |
| The OS does not recognize user level threads. | Kernel threads are recognized by the OS. |
| Implementation of user threads is easy. | Implementation of kernel threads is complicated. |
| Context switch time is less. | Context switch time is more. |
| Context switching requires no hardware support. | Hardware support is needed. |
| If one user level thread performs a blocking operation, the entire process is blocked. | If one kernel thread performs a blocking operation, another thread can continue execution. |
| User level threads are designed as dependent threads. | Kernel level threads are designed as independent threads. |
| Example: Java threads, POSIX threads. | Example: Windows, Solaris. |
(c) The processes are assumed to have arrived in the order 

P1, P2, P3, P4, P5, all at time 0. Calculate the average turnaround time and average waiting time for each scheduling algorithm.

| Process | Burst time | Priority |
|---------|------------|----------|
| P1 | 10 | 3 |
| P2 | 1  | 1 |
| P3 | 2  | 4 |
| P4 | 1  | 5 |
| P5 | 5  | 2 |


a) FCFS scheduling:

Here the processes are scheduled as per the order in which they arrive.

Gantt chart:

P1 (0-10) | P2 (10-11) | P3 (11-13) | P4 (13-14) | P5 (14-19)

Waiting time for P1=0

Waiting time for P2=10

Waiting time for P3=11

Waiting time for P4=13

Waiting time for P5=14

Average waiting time = (0 + 10 + 11 + 13 + 14) / 5 = 9.6

Average execution time = (10 + 1 + 2 + 1 + 5) / 5 = 3.8

Average turn-around time = Average waiting time + Average execution time = 9.6 + 3.8 = 13.4

b) SJF non-preemptive scheduling:

Here the processes are scheduled as per their burst time, with the shortest process being given the highest priority.

P2 (0-1) | P4 (1-2) | P3 (2-4) | P5 (4-9) | P1 (9-19)

Waiting time for P1=9

Waiting time for P2=0

Waiting time for P3=2

Waiting time for P4=1

Waiting time for P5=4

Average waiting time = (9 + 0 + 2 + 1 + 4) / 5 = 3.2

Average execution time = (10 + 1 + 2 + 1 + 5) / 5 = 3.8

Average turn-around time= Average waiting time + Average execution time = 3.2 + 3.8 = 7

c) Priority scheduling :

Here the scheduling happens on the basis of the priority number assigned to each process (priority 1 is treated as the highest priority).

P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19)

Waiting time for P1=6

Waiting time for P2=0

Waiting time for P3=16

Waiting time for P4=18

Waiting time for P5=1

Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2

Average execution time = (10 + 1 + 2 + 1 + 5) / 5 = 3.8

Average turn-around time= Average waiting time + Average execution time = 8.2 + 3.8 = 12

d) Round Robin scheduling:

Here every process executes in FCFS order for the given time quantum. This is a pre-emptive method of scheduling. Here time quantum = 1.

P1 (0-1) | P2 (1-2) | P3 (2-3) | P4 (3-4) | P5 (4-5) | P1 (5-6) | P3 (6-7) | P5 (7-8) | P1 (8-9) | P5 (9-10) | P1 (10-11) | P5 (11-12) | P1 (12-13) | P5 (13-14) | P1 (14-19)

Waiting time for P1 = 0 + (5 − 1) + (8 − 6) + (10 − 9) + (12 − 11) + (14 − 13) = 9

Waiting time for P2=1

Waiting time for P3 = 2 + (6 − 3) = 5

Waiting time for P4=3

Waiting time for P5 = 4 + (7 − 5) + (9 − 8) + (11 − 10) + (13 − 12) = 9

Average waiting time = (9 + 1 + 5 + 3 + 9) / 5 = 5.4

Average execution time = (10 + 1 + 2 + 1 + 5) / 5 = 3.8

Average turn-around time= Average waiting time + Average execution time = 5.4 + 3.8 = 9.2
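
These figures can be cross-checked programmatically. A small C sketch that reproduces the FCFS averages from the burst times used above (all processes arrive at time 0):

  #include <stdio.h>

  int main(void)
  {
      int burst[] = {10, 1, 2, 1, 5};               /* P1..P5, all arrive at t = 0 */
      int n = 5, start = 0, total_wait = 0, total_tat = 0;

      for (int i = 0; i < n; i++) {
          total_wait += start;                      /* FCFS waiting time = start time */
          total_tat  += start + burst[i];           /* turnaround = waiting + burst   */
          start      += burst[i];
      }
      printf("avg waiting    = %.1f\n", (double)total_wait / n);   /* 9.6  */
      printf("avg turnaround = %.1f\n", (double)total_tat  / n);   /* 13.4 */
      return 0;
  }
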


(c) What are various criteria for a good process scheduling algorithm? Explain any two preemptive scheduling algorithms in brief.
Criteria include CPU utilization, response time, throughput, turnaround time, waiting time and fairness.
Utilization of CPU
A good scheduling algorithm keeps the CPU as busy as possible.
Throughput
The number of processes completed per unit of time; a good algorithm maximizes it.
Response Time:
The time from the submission of a request until the first response is produced; the scheduler minimizes it.
Turnaround time:
The time span between the submission of a job and its completion.
Waiting time:
In a multiprogramming system, the time a task spends in the ready queue waiting for the allocation of resources; the algorithm minimizes it.
Fairness:
The scheduler ensures that each process receives a fair share of the CPU.

Algorithms based on preemptive scheduling are round robin (RR), priority, and SRTF (shortest remaining time first).
Shortest Remaining Time
  1. Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
  2. The processor is allocated to the job closest to completion, but it can be preempted by a newly arrived ready job with a shorter time to completion.
  3. It is impossible to implement in interactive systems where the required CPU time is not known in advance.
  4. It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling

  1. Round Robin is a preemptive process scheduling algorithm.
  2. Each process is given a fixed time to execute, called a quantum.
  3. Once a process has executed for the given time period, it is preempted and another process executes for its time period.
  4. Context switching is used to save the states of preempted processes.
Example (quantum = 3):

| Process | Arrival time | Execution time |
|---------|--------------|----------------|
| P0 | 0 | 5 |
| P1 | 1 | 3 |
| P2 | 2 | 8 |
| P3 | 3 | 6 |

Gantt chart: P0 (0-3) | P1 (3-6) | P2 (6-9) | P3 (9-12) | P0 (12-14) | P2 (14-17) | P3 (17-20) | P2 (20-22)

Wait time of each process (service time − arrival time):

P0 = (0 − 0) + (12 − 3) = 9
P1 = (3 − 1) = 2
P2 = (6 − 2) + (14 − 9) + (20 − 17) = 12
P3 = (9 − 3) + (17 − 12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5


Q.3 (a) What is meant priority inversion?

Priority inversion is a scheduling problem that occurs when a high priority task is indirectly preempted by a low priority task. For example, the low priority task holds a mutex that the high priority task must acquire to continue executing.

(b) What is the criterion used to select the time quantum in case of round-robin scheduling algorithm? Explain it with a suitable example.

Round Robin is a preemptive process scheduling algorithm. Each process is given a fixed time to execute, called a quantum. Once a process has executed for the given time period, it is preempted and another process executes for its time period. Context switching is used to save the states of preempted processes. The quantum is chosen as a trade-off: it should be large compared to the context-switch time (otherwise too much CPU time is wasted on switching), but small enough to keep response time low. If the quantum is very large, round robin degenerates into FCFS.

Consider the following three processes, scheduled with a time quantum of 2. P1 has burst time 4 and P2 has burst time 3 (as used in the steps below); P3 runs from t = 9 until it completes.
Step 1) 

The execution begins with process P1, which has burst time 4. Here, every process executes for 2 seconds. P2 and P3 are still in the waiting queue.

Step 2) At time = 2, P1 is added to the end of the queue and P2 starts executing.



Step 3) At time = 4, P2 is preempted and added at the end of the queue. P3 starts executing.


Step 4) At time = 6, P3 is preempted and added at the end of the queue. P1 starts executing again.


Step 5) At time = 8, P1 (burst time 4) has completed its execution. P2 starts executing again.


Step 6) P2 has a burst time of 3, and has already executed for 2 intervals. At time = 9, P2 completes execution. Then P3 starts executing until it completes.


Step 7) Let's calculate the average waiting time for the above example.

Wait time:
P1 = 0 + 4 = 4
P2 = 2 + 4 = 6
P3 = 4 + 3 = 7

Average waiting time = (4 + 6 + 7) / 3 ≈ 5.67

(c) What is Semaphore? Give the implementation of Bounded Buffer Producer Consumer Problem using Semaphore

Semaphores are integer variables that are used to solve the critical section problem by using two atomic operations, wait and signal that are used for process synchronization.

To solve race-condition problems, we use binary semaphores and counting semaphores.

Binary Semaphore: A binary semaphore can take only the values 0 and 1. It is used to enforce mutual exclusion: at most one process can be inside its critical section at any point in time.

Counting Semaphore: A counting semaphore can take any non-negative integer value. It is used to control access to a resource that has multiple instances; mutual exclusion on the semaphore's own operations is still preserved.
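
The bounded-buffer problem uses three semaphores: a counting semaphore empty_slots (initially N) for free slots, a counting semaphore full_slots (initially 0) for filled slots, and a binary semaphore mutex for the buffer itself. A minimal sketch using POSIX semaphores and pthreads; the buffer size, item count, and names are illustrative:

  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  #define N 8                                   /* bounded buffer capacity */

  static int buffer[N];
  static int in = 0, out = 0;                   /* next free slot / next full slot */
  static sem_t empty_slots;                     /* counts free slots, starts at N  */
  static sem_t full_slots;                      /* counts filled slots, starts at 0 */
  static sem_t mutex;                           /* binary semaphore for the buffer */

  static void *producer(void *arg)
  {
      for (int item = 0; item < 20; item++) {
          sem_wait(&empty_slots);               /* wait for a free slot */
          sem_wait(&mutex);                     /* enter critical section */
          buffer[in] = item;
          in = (in + 1) % N;
          sem_post(&mutex);                     /* leave critical section */
          sem_post(&full_slots);                /* signal: one more full slot */
      }
      return NULL;
  }

  static void *consumer(void *arg)
  {
      for (int i = 0; i < 20; i++) {
          sem_wait(&full_slots);                /* wait for a filled slot */
          sem_wait(&mutex);
          int item = buffer[out];
          out = (out + 1) % N;
          sem_post(&mutex);
          sem_post(&empty_slots);               /* signal: one more free slot */
          printf("consumed %d\n", item);
      }
      return NULL;
  }

  int main(void)
  {
      sem_init(&empty_slots, 0, N);
      sem_init(&full_slots, 0, 0);
      sem_init(&mutex, 0, 1);
      pthread_t p, c;
      pthread_create(&p, NULL, producer, NULL);
      pthread_create(&c, NULL, consumer, NULL);
      pthread_join(p, NULL);
      pthread_join(c, NULL);
      return 0;
  }

Note the ordering: the producer acquires empty_slots before mutex. Swapping the two sem_wait() calls can deadlock the system (the producer could hold the mutex while sleeping on a full buffer).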


Q.3 (a) What is Deadlock? List the conditions that lead to deadlock.
Deadlock is a situation that occurs in an OS when a process enters a waiting state because a resource it has requested is being held by another waiting process.
OR
Deadlock is a situation where two or more processes are waiting for each other. For example, let us assume, we have two processes P1 and P2. Now, process P1 is holding the resource R1 and is waiting for the resource R2. At the same time, the process P2 is having the resource R2 and is waiting for the resource R1.

The four necessary conditions for a deadlock to arise are as follows.

  • Mutual Exclusion: Only one process can use a resource at any given time i.e. the resources are non-sharable.
  • Hold and wait: A process is holding at least one resource at a time and is waiting to acquire other resources held by some other process.
  • No preemption: A resource cannot be forcibly taken away from a process; it can only be released voluntarily by the process holding it, after that process has finished using it.
  • Circular Wait: A set of processes wait for each other in a circular fashion. For example, suppose there is a set of processes {P0, P1, P2, P3} such that P0 waits for P1, P1 waits for P2, P2 waits for P3, and P3 waits for P0. This creates a circular dependency among the processes, and they wait forever.
(b) List criterions used to evaluate the performance of CPU scheduling algorithms.

The criteria include the following: 

  1. CPU utilisation – 
    The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilisation can range from 0 to 100 but in a real-time system, it varies from 40 to 90 percent depending on the load upon the system. 
     
  2. Throughput – 
    A measure of the work done by the CPU is the number of processes being executed and completed per unit of time. This is called throughput. The throughput may vary depending upon the length or duration of the processes. 
     
  3. Turnaround time – 
    For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the sum of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. The formula: Turnaround Time = Completion Time − Arrival Time.
     
  4. Waiting time – 
    A scheduling algorithm does not affect the time required to complete the process once it starts execution. It only affects the waiting time of a process i.e. time spent by a process waiting in the ready queue. The formula for calculating Waiting Time = Turnaround Time – Burst Time.
     
  5. Response time – 
    In an interactive system, turnaround time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being output to the user. Thus another criterion is the time taken from the submission of a request until the first response is produced. This measure is called response time. The formula: Response Time = Time of first CPU allocation − Arrival Time.
(c) What is advantage of using Monitor? Give the implementation of Bounded Buffer
Producer Consumer Problem using Monitor.

Monitors make solving the producer-consumer a little easier. Mutual exclusion is achieved by placing the critical section of a program inside a monitor. In the code below, the critical sections of the producer and consumer are inside the monitor ProducerConsumer. Once inside the monitor, a process is blocked by the Wait and Signal primitives if it cannot continue.

  monitor ProducerConsumer
    condition full, empty;
    integer count;

    procedure enter();
    {
      if (count == N) wait(full);        // if buffer is full, block
      put_item(widget);                  // put item in buffer
      count = count + 1;                 // increment count of full slots
      if (count == 1) signal(empty);     // if buffer was empty, wake consumer
    }

    procedure remove();
    {
      if (count == 0) wait(empty);       // if buffer is empty, block
      remove_item(widget);               // remove item from buffer
      count = count - 1;                 // decrement count of full slots
      if (count == N-1) signal(full);    // if buffer was full, wake producer
    }

    count = 0;                           // initialization
  end monitor;

  procedure Producer();
  {
    while (TRUE)
    {
      make_item(widget);                 // make a new item
      ProducerConsumer.enter();          // call enter procedure in the monitor
    }
  }

  procedure Consumer();
  {
    while (TRUE)
    {
      ProducerConsumer.remove();         // call remove procedure in the monitor
      consume_item(widget);              // consume an item
    }
  }
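
C has no built-in monitor construct, so as a hedged equivalent, the monitor above can be approximated with a pthread mutex (playing the role of the monitor lock) and condition variables; put_item/remove_item are left as hypothetical helpers, and N is illustrative:

  #include <pthread.h>

  #define N 8                                    /* buffer capacity */

  static int count = 0;                          /* full slots */
  static pthread_mutex_t mon = PTHREAD_MUTEX_INITIALIZER;       /* monitor lock */
  static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
  static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

  void enter(int item)                           /* producer side */
  {
      pthread_mutex_lock(&mon);                  /* only one thread inside */
      while (count == N)
          pthread_cond_wait(&not_full, &mon);    /* buffer full: block */
      /* put_item(item);  -- hypothetical buffer insert */
      count++;
      pthread_cond_signal(&not_empty);           /* wake a waiting consumer */
      pthread_mutex_unlock(&mon);
  }

  int remove_one(void)                           /* consumer side */
  {
      pthread_mutex_lock(&mon);
      while (count == 0)
          pthread_cond_wait(&not_empty, &mon);   /* buffer empty: block */
      /* item = remove_item();  -- hypothetical buffer removal */
      count--;
      pthread_cond_signal(&not_full);            /* wake a waiting producer */
      pthread_mutex_unlock(&mon);
      return 0;                                  /* would return the removed item */
  }

The while loops (rather than if) re-check the condition after waking, which is required for Mesa-style condition variables such as pthreads'.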

Q.4 (a) What is resource allocation graph?
The Resource Allocation Graph, also known as RAG, is a graphical representation of the state of a system. It contains all the information about the resources allocated to each process and the resources requested by each process.

(b) Explain paging technique. 
The paging technique divides physical memory (main memory) into fixed-size blocks known as frames, and divides logical memory (a process's address space) into blocks of the same size known as pages.
  • This technique keeps track of all the free frames.
  • A frame has the same size as a page. A frame is basically a place where a (logical) page can be (physically) placed.
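
Under paging, a logical address is split into a page number and an offset, and the page table maps the page to a frame. A small sketch of the translation, assuming a 4 KB page size and a toy page table:

  #include <stdio.h>

  #define PAGE_SIZE 4096                        /* assumed page/frame size */

  int main(void)
  {
      int page_table[] = {5, 2, 7, 1};          /* toy mapping: page -> frame */
      unsigned logical = 2 * PAGE_SIZE + 100;   /* page 2, offset 100 */

      unsigned page   = logical / PAGE_SIZE;    /* page number */
      unsigned offset = logical % PAGE_SIZE;    /* offset within the page */
      unsigned physical = page_table[page] * PAGE_SIZE + offset;  /* frame 7 */

      printf("logical %u -> physical %u\n", logical, physical);
      return 0;
  }
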


(c) Explain the following allocation algorithms
(1) First-fit


Consider the following example.

Memory blocks available are : {100, 50, 30, 120, 35}

Process P1, size: 20

  • The OS searches memory sequentially from the start
  • Block 1 (100) fits, so P1 is assigned to block 1

Process P2, size: 60

  • The OS searches memory sequentially from block 1 again
  • Block 1 is unavailable; blocks 2 and 3 can't fit
  • Block 4 (120) fits, so P2 is assigned to block 4

Process P3, size: 70

  • The OS searches memory sequentially from block 1 again
  • Block 1 is unavailable; blocks 2 and 3 can't fit; block 4 is unavailable; block 5 can't fit
  • P3 remains unallocated

Similarly, P4 is assigned to block 2

(2) Best-fit


In the best-fit memory allocation scheme, the operating system searches for the block that can:

Accommodate the process
Also leave the minimum memory wastage

Example:
Suppose a process of size 40 must be placed in the blocks above ({100, 50, 30, 120, 35}).

While blocks 1, 2 and 4 can all accommodate the process, block 2 (50) is chosen, as it leaves the lowest memory wastage (50 − 40 = 10).


(3) Worst-fit 


  • Step 1: Input the memory blocks with their sizes.
  • Step 2: Input the processes with their sizes.
  • Step 3: For each process, find the largest available block that can hold it; worst fit deliberately picks the biggest hole so that the leftover fragment is as large, and hence as reusable, as possible.
  • Step 4: If such a block exists, assign it to the process.
  • Step 5: If no block fits, leave the process unallocated and check the next process.
  • Step 6: Stop.
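
All three strategies differ only in which free block they pick. A compact C sketch over the block sizes from this example, assuming (as the example does) that a block holds at most one process:

  #include <stdio.h>

  /* Return the index of the chosen free block, or -1.
     strategy: 'f' = first fit, 'b' = best fit, 'w' = worst fit. */
  int choose(int blocks[], int used[], int n, int size, char strategy)
  {
      int pick = -1;
      for (int i = 0; i < n; i++) {
          if (used[i] || blocks[i] < size) continue;         /* can't hold the process */
          if (strategy == 'f') return i;                     /* first fit: stop at once */
          if (pick == -1 ||
              (strategy == 'b' && blocks[i] < blocks[pick]) ||   /* smallest that fits */
              (strategy == 'w' && blocks[i] > blocks[pick]))     /* largest that fits  */
              pick = i;
      }
      return pick;
  }

  int main(void)
  {
      int blocks[] = {100, 50, 30, 120, 35};
      int used[5] = {0};
      int procs[] = {20, 60, 70};                            /* P1, P2, P3 */

      for (int p = 0; p < 3; p++) {
          int i = choose(blocks, used, 5, procs[p], 'f');    /* try 'b' or 'w' too */
          if (i >= 0) { used[i] = 1; printf("P%d -> block %d\n", p + 1, i + 1); }
          else        printf("P%d unallocated\n", p + 1);
      }
      return 0;
  }

With 'f' this reproduces the walkthrough above: P1 goes to block 1, P2 to block 4, and P3 stays unallocated.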

Q.4 (a) When is a system in a safe state?
A state is safe if the system can allocate resources to each process (up to its maximum requirement) in some order and still avoid a deadlock. Formally, a system is in a safe state only if there exists a safe sequence. So a safe state is not a deadlocked state, and conversely, a deadlocked state is an unsafe state.



In an unsafe state, the operating system cannot prevent processes from requesting resources in such a way that a deadlock occurs. Not all unsafe states are deadlocks, but an unsafe state may lead to a deadlock.
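
Whether a given state is safe can be checked with the safety algorithm at the heart of Banker's algorithm: repeatedly find a process whose remaining need fits within the available resources, let it finish, and reclaim its allocation. A sketch with illustrative allocation and need matrices:

  #include <stdio.h>

  #define P 3   /* processes */
  #define R 2   /* resource types */

  int main(void)
  {
      int avail[R]    = {1, 1};                    /* illustrative data */
      int alloc[P][R] = {{1, 0}, {0, 1}, {1, 1}};
      int need[P][R]  = {{1, 1}, {1, 0}, {0, 1}};
      int done[P] = {0}, finished = 0, progress = 1;

      while (progress) {
          progress = 0;
          for (int p = 0; p < P; p++) {
              if (done[p]) continue;
              int fits = 1;
              for (int r = 0; r < R; r++)
                  if (need[p][r] > avail[r]) fits = 0;
              if (!fits) continue;                 /* p cannot finish yet */
              for (int r = 0; r < R; r++)          /* p finishes: its resources */
                  avail[r] += alloc[p][r];         /* come back to the pool     */
              done[p] = 1; finished++; progress = 1;
              printf("P%d can finish\n", p);       /* next entry in the safe sequence */
          }
      }
      printf(finished == P ? "state is SAFE\n" : "state is UNSAFE\n");
      return 0;
  }

For this data the safe sequence P0, P1, P2 exists, so the state is safe.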

(b) Explain segmentation.

In operating systems, segmentation is a memory management technique in which memory is divided into variable-sized parts. Each part is known as a segment, which can be allocated to a process.

The details about each segment are stored in a table called a segment table. Segment table is stored in one (or many) of the segments.

The mapping between the logical address, which is a two-dimensional address (segment number, offset), and the physical address is done with the help of the segment table.

Base Address
The starting address of the memory where the data is stored in its logical format in the system.

Say the base address is 8000; then the data will be available at addresses from 8000 onwards.

Limit
Since the size of each segment is different, thus each segment has its own limit defined and it tells the system about the size or length of any particular segment.

Further for mapping to physical form for the CPU the following are required –

  • Segment Number (s) – Selects the entry in the segment table; its width is the number of bits required to address any particular segment.
  • Segment Offset (d) – The offset within the segment; its width is the number of bits required to represent the size of a segment. The offset must be less than the segment's limit.


There are following types of segmentation –

Virtual Memory Segmentation – Each process is divided into n segments, but they are not all loaded into memory at once.

Simple Segmentation – Each process is divided into n segments, and all of them are loaded at once at load time; however, they may be scattered in memory, i.e. non-contiguous.
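
A small sketch of segmentation address translation, reusing the base address 8000 from above; the limit values and second segment are illustrative:

  #include <stdio.h>

  struct segment { unsigned base, limit; };       /* one segment-table entry */

  int main(void)
  {
      struct segment table[] = {{8000, 1200}, {4300, 600}};  /* toy segment table */
      unsigned s = 0, d = 100;                    /* logical address = (segment, offset) */

      if (d < table[s].limit)                     /* offset must be within the segment */
          printf("physical address = %u\n", table[s].base + d);   /* 8100 */
      else
          printf("trap: offset beyond segment limit\n");
      return 0;
  }
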

(c) What is fragmentation? Explain the difference between internal and external
fragmentation
Fragmentation is an unwanted problem in the operating system: as processes are loaded into and unloaded from memory, the free memory space becomes fragmented. Processes can't be assigned to the free memory blocks because the blocks are too small, so the blocks stay unused.
OR
Memory in the system constantly goes through the loading and releasing of processes and their resources, because of which the total free memory gets broken into many small pieces. This creates small unutilized fragments of memory that are so small that normal processes cannot fit into them, so those spaces go unused. This is called memory fragmentation in an operating system.

| S.No | Internal fragmentation | External fragmentation |
|------|------------------------|------------------------|
| 1 | Fixed-sized memory blocks are assigned to processes. | Variable-sized memory blocks are assigned to processes. |
| 2 | Happens when the process is smaller than the memory block allocated to it. | Happens when processes are loaded into and removed from memory, leaving scattered holes. |
| 3 | The solution to internal fragmentation is the best-fit block. | The solution to external fragmentation is compaction and paging. |
| 4 | Occurs when memory is divided into fixed-sized partitions. | Occurs when memory is divided into variable-sized partitions based on the sizes of processes. |
| 5 | The difference between the memory allocated and the space actually required is called internal fragmentation. | The unused spaces formed between non-contiguous memory fragments, too small to serve a new process, are called external fragmentation. |
| 6 | Occurs with paging and fixed partitioning. | Occurs with segmentation and dynamic partitioning. |
| 7 | Occurs when a process is allocated a partition larger than its requirement; the leftover space inside the partition is wasted and degrades system performance. | Occurs when free memory outside all partitions is split into pieces, none of which is large enough for a new process, even though the total free space would suffice. |
| 8 | Occurs in the worst-fit memory allocation method. | Occurs in the best-fit and first-fit memory allocation methods. |
Q.5 (a) Explain RAID. How it is helpful to increase CPU performance?
RAID stands for Redundant Array of Independent Disks, and combines multiple hard drives together in order to improve efficiency. Depending on how your RAID is configured, it can increase your computer's speed while giving you a single drive with a huge capacity. RAIDs can also increase reliability.
OR 
RAID (redundant array of independent disks) is the method of combining multiple physical disk drives into a single unit for data storage. The idea of RAID is to have a number of disks co-operating as one big disk to ensure data redundancy and performance improvement.

There are two main benefits of using a RAID system. They are improved performance and fault tolerance. There are several different RAID configurations that provide fault tolerance but RAID 0 is the only one that does not. Instead RAID 0 provides a performance boost. So how does RAID 0 provide that performance boost?

RAID 0 provides a performance boost by dividing data into blocks and spreading them across multiple drives using what is called disk striping. By spreading data across multiple drives, it means multiple disks can access the file, resulting in faster read/write speeds.

RAID uses disk striping, which can spread files over two or more hard drives; striping improves performance by retrieving data from multiple drives in parallel. RAID also uses disk mirroring, which writes the same data to two hard drives at the same time, providing a backup if one hard drive fails.

(b) Explain the following Linux commands:
(1) mkdir
mkdir command in Linux allows the user to create directories (also referred to as folders in some operating systems). This command can create multiple directories at once, and can also set the permissions for the directories.
(2) touch
Using the touch command, you can change the existing file's timestamp, the time of last modification, and the time of last access.
(3) cat
cat command allows us to create single or multiple files, view the contents of a file, concatenate files, and redirect output to the terminal or to files. This command displays the contents of one or more files without opening them for editing.
(4) rm
rm command is used to remove objects such as files, directories, symbolic links and so on from UNIX-like file systems.
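
Typical usage of the four commands (all file and directory names are illustrative):

  $ mkdir -p projects/os          # create a directory (and missing parents)
  $ touch notes.txt               # create an empty file / update its timestamp
  $ cat notes.txt                 # print the file's contents
  $ cat a.txt b.txt > both.txt    # concatenate two files into a third
  $ rm notes.txt                  # remove a file
  $ rm -r projects                # remove a directory and its contents
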


(c) What do you mean by security? Discuss in brief access control list. 
Operating system security (OS security) is the process of ensuring OS integrity, confidentiality and availability. OS security refers to the specified steps or measures used to protect the OS from threats, viruses, worms, malware and remote hacker intrusions.

An access control list (ACL) contains rules that grant or deny access to certain digital environments. 

Each system resource has a security attribute that identifies its access control list. The list includes an entry for every user who can access the system. The most common privileges for a file system ACL include the ability to read a file or all the files in a directory, to write to the file or files, and to execute the file if it is an executable file or program. ACLs are also built into network interfaces and operating systems (OSes), including Linux and Windows. On a computer network, access control lists are used to prohibit or allow certain types of traffic to the network. They commonly filter traffic based on its source and destination.

There are two types of ACLs:

  1. Filesystem ACLs – filter access to files and/or directories. Filesystem ACLs tell operating systems which users can access the system, and what privileges the users are allowed.
  2. Networking ACLs – filter access to the network. Networking ACLs tell routers and switches which type of traffic can access the network, and which activity is allowed.

Originally, ACLs were the only way to achieve firewall protection. Today, there are many types of firewalls and alternatives to ACLs. However, organizations continue to use ACLs in conjunction with technologies like virtual private networks (VPNs) that specify which traffic should be encrypted and transferred through a VPN tunnel.
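
On Linux, filesystem ACLs can be inspected and edited with getfacl and setfacl; the user name alice and the file name here are illustrative:

  $ setfacl -m u:alice:rw report.txt   # grant user alice read/write on the file
  $ getfacl report.txt                 # list the file's ACL entries
  $ setfacl -x u:alice report.txt      # remove alice's entry
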

Q.5 (a) Explain i/o buffering.
I/O buffering is the process of temporarily storing data that is passing between a processor and a peripheral. The usual purpose is to smooth out the difference in the rates at which the two devices can handle data.
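
A hedged illustration using C's stdio buffering: the program reads one byte at a time, but the library refills its buffer from the device in large blocks, smoothing out the speed difference; the file name and buffer size are illustrative:

  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("data.bin", "rb");        /* illustrative file name */
      if (!f) return 1;

      char buf[8192];
      setvbuf(f, buf, _IOFBF, sizeof buf);      /* fully buffered: stdio refills
                                                   buf with large block reads */
      int c, count = 0;
      while ((c = getc(f)) != EOF)              /* byte-at-a-time for the program, */
          count++;                              /* block-at-a-time for the device  */
      printf("%d bytes read\n", count);
      fclose(f);
      return 0;
  }
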

(b) What is virtualization? Explain the benefits of virtualization.
Virtualization relies on software to simulate hardware functionality and create a virtual computer system. This enables IT organizations to run more than one virtual system – and multiple operating systems and applications – on a single server. The resulting benefits include economies of scale and greater efficiency.
The benefits of virtualization :

1. Protection from system failures
 
No matter how careful you are with the technology you use, technology in general can sometimes be prone to system issues. Businesses can handle a few glitches, but if your developer is working on an important application that needs to be finished immediately, the last thing you need is a system crash. 
One advantage of virtualization in cloud computing is the automatic backup that takes place across multiple devices. By storing all your backup data through virtualized cloud services or networks, you can easily access files from any device. This multi-layered access prevents you from losing any files, even if one system goes down for a time. 

2. Hassle-free data transfers

Another benefit of virtualization in cloud computing is expedited data transfer. You can easily transfer data from physical storage to a virtual server, and vice versa. Virtualization in cloud computing can also handle long-distance data transfers. Administrators don’t have to waste time digging through hard drives to find data. Instead, dedicated server and cloud storage space allow you to easily locate required files and transfer them appropriately.

3. Firewall and security support

Security remains a central focus in the IT space. Through virtual firewalls, made possible through computer virtualization, you can restrict access to your data at much lower costs compared to traditional data protection methods. Virtualization earns you protection from many potential cybersecurity issues, through a virtual switch that protects your data and applications from harmful malware, viruses and other threats.
Firewall features for network virtualization allow you to create segments within the system. Server virtualization storage on cloud services will save you from the risks of lost or corrupted data. Cloud services are also encrypted with high-end protocols that protect your data from various other threats. When data security is on the line, virtualization offers premium-level protection without many of the associated firewall costs.

4. Smoother IT operations

Virtual networks help IT professionals improve efficiency in the workplace. These networks are easy to operate and faster to process, eliminating downtime and helping you save progress in real time. Before virtual networks were introduced in the digital world, technical workers could take days, sometimes weeks, to create and sufficiently support the same data across physical servers. 
Apart from the operations, virtualization also helps IT support teams solve critical, sometimes nuanced technical problems in cloud computing environments. Because data is always available on a virtual server, technicians don’t have to waste time recovering files from crashed or corrupted devices. 

5. Cost-effective strategies

Virtualization is a great way to reduce operational costs. With all the data stored on virtual servers or clouds, there’s hardly a need for physical systems or hardware, saving businesses a significant amount in waste, electricity and maintenance fees. In fact, 70% of senior executives support the integration of virtualization at some level across their organization, specifically for its time-saving properties. Virtualization also saves companies a significant amount of server space, which can then be utilized to further improve daily operations.


(c) Why is segmented paging important (as compared to a paging system)? What are
the different pieces of the virtual address in a segmented paging?

Segmented paging combines segmentation's programmer-centred view with paging's simple memory management: each segment is itself divided into pages, so there is one page table per segment.

Advantages of Segmented Paging
The page table size is reduced, as pages are present only for the data of each segment, reducing memory requirements.
Gives a programmer's view along with the advantages of paging.
Reduces external fragmentation in comparison with segmentation.
Since the entire segment need not be swapped out, swapping out to virtual memory becomes easier.

The virtual address in segmented paging has three pieces: a segment number (s), which indexes the segment table; a page number (p), which indexes that segment's page table; and an offset (d) within the page.
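
A small sketch of the three-part translation, with illustrative per-segment page tables and a 4 KB page size:

  #include <stdio.h>

  #define PAGE_SIZE 4096

  int main(void)
  {
      /* toy per-segment page tables: seg_page_table[s][p] = frame number */
      int seg_page_table[2][4] = {{3, 6, 1, 0}, {7, 2, 5, 4}};

      unsigned s = 1, p = 2, d = 300;           /* (segment, page, offset) */
      unsigned frame = seg_page_table[s][p];    /* look up page p in segment s's table */
      unsigned physical = frame * PAGE_SIZE + d;

      printf("(s=%u, p=%u, d=%u) -> physical %u\n", s, p, d, physical);  /* 20780 */
      return 0;
  }
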

