Concurrent Programming and Shared Memory Pitfalls

Objectives

  • Process
  • Process Scheduling
  • Interprocess Communication
  • Synchronization Mechanisms

What is a Process?

Typically, a process is:
-> A program in execution.
-> Though note that some programs run as multiple processes.

Process: A New Perspective
We can view a process as a data structure.

Example: Stack
A data structure is defined by the operations that can be performed on its associated data. In the case of the stack, Push and Pop give the structure its behavior; a few more operations, such as create to allocate a stack and destroy to deallocate it, complete the picture.
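Here is that stack as a minimal C sketch (array-backed; bounds checks and error handling omitted for brevity):

#include <stdlib.h>

// The associated data...
struct stack {
   int *items;
   int top;                      // index of the next free slot
   int capacity;
};

// ...and the operations that define the data structure.
struct stack *create(int capacity) {
   struct stack *s = malloc(sizeof *s);
   s->items = malloc(capacity * sizeof *s->items);
   s->top = 0;
   s->capacity = capacity;
   return s;
}

void push(struct stack *s, int v) { s->items[s->top++] = v; }
int  pop(struct stack *s)         { return s->items[--s->top]; }

void destroy(struct stack *s) {
   free(s->items);
   free(s);
}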
We will discuss the idea of the process in the same terms: a set of operations and the associated data.

Operations on processes ?

Process-related system calls (operations initiated by the operating system on behalf of a process):
-> fork(), exec(), wait(), exit() (a short sketch follows the list below)
What is the data manipulated by these process operations ?

  • Stack, heap, text, and data segments
  • CPU time used by the process, in user/system mode
  • File-related info: open files, file pointers
  • Identifiers: pid [process id], ppid [parent process id], uid [the user who initiated the process]
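A minimal sketch of those system calls in C (error checking omitted for brevity):

// The parent creates a child with fork(); the child exits;
// the parent waits for it.
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
   pid_t pid = fork();              // create a new process

   if (pid == 0) {
       // Child: it has its own pid; exec() could be called here
       // to run a different program in this process.
       printf("child: pid=%d ppid=%d\n", (int)getpid(), (int)getppid());
       exit(0);                     // terminate the child
   }

   wait(NULL);                      // parent blocks until the child exits
   printf("parent: child %d finished\n", (int)pid);
   return 0;
}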

Process scheduling ?
What if a process performs a long operation (an I/O operation, a network request, etc.)? While such an operation is in progress, the CPU is free, doing nothing. To maximize CPU utilization, the OS lets other processes execute in the meantime; the process performing the long operation goes into a waiting state.

What if there is an infinite loop, e.g. something unintentionally caused by a programmer? To handle this, the OS gives each process a maximum time slot; when it expires, the running process is moved back to the ready state and another ready process comes into execution.

Time Slice: the maximum time given to each process on the CPU before it is preempted.
The OS keeps track of this using a timer clock built into the CPU.

When a process's time slice is completed, or a long operation comes, the next ready process moves into the running state. Before this, the OS saves the state of the previously running process and then restores the state of the newly scheduled process.

Context:
The current state of a process: its register values and, more generally, all the hardware state of the process.
Context Switching:
-> Saving the hardware state of the previously running process.
-> Restoring the hardware state of the newly scheduled process.

A process can mainly be in three states:
-> Running: currently executing on the CPU.
-> Waiting: blocked until a long operation completes.
-> Ready: waiting for the OS to cause it to run.

WaitingQueue: a queue maintained by the OS containing processes that are waiting for a long operation to complete; when it completes, they are moved to the ReadyQueue.

ReadyQueue: a queue maintained by the OS containing processes that are to be executed next, ordered by arrival or by whatever priority is set.

Question: Can 2 processes be running at the same time ?
Answer: Not if there is only one PC (program counter), one IR (instruction register), or one set of general-purpose registers.

What should the OS do when a process does something that will involve a long time ?
Anything that involves hard disk access, file read/write operations, page faults, etc. is a long operation. In that time window the CPU does nothing, so:

  • OS should try to maximize processor utilization.
  • OS could change the status of that process to 'Waiting' and make another process 'Running'.

Then the question arises: which process should the OS choose from the ReadyQueue ?

Which other Process ?
Determined by the process scheduler.

Process Scheduler
Decides which process runs next, and for how long.
There are many scheduling policies, depending on the operating system, that determine which process should next be in the running state.

Non-Preemptive Scheduling Policies

-> FCFS [First Come First Serve]

Preemptive Scheduling Policies

-> Round Robin
Maintains a ReadyQueue. A process is evicted after one CPU time slice and put at the end of the ReadyQueue (see the simulation sketch after this list).
-> Priority based
The ReadyQueue need not be ordered on a FCFS basis; it could be ordered by some other priority instead. For example, the process that has not run for the longest time could get the highest priority.
The scheduler could even assign a longer CPU time slice to certain processes.
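To make round robin concrete, here is a minimal simulation sketch (the process count, burst times, and 4-unit time slice are all assumptions for illustration):

#include <stdio.h>

#define NPROC 3
#define TIME_SLICE 4

int main(void)
{
   int remaining[NPROC] = {10, 5, 8};   // assumed CPU bursts
   int done = 0, clock = 0;

   // Scan the processes in circular order, approximating the ReadyQueue.
   while (done < NPROC) {
       for (int p = 0; p < NPROC; p++) {
           if (remaining[p] == 0)
               continue;                // this process has finished
           int run = remaining[p] < TIME_SLICE ? remaining[p] : TIME_SLICE;
           clock += run;
           remaining[p] -= run;         // evicted after its time slice
           printf("P%d ran %d units (t=%d)\n", p, run, clock);
           if (remaining[p] == 0)
               done++;
       }
   }
   return 0;
}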

What is concurrent Programming ?

Now we know a program can contain more than one flow of control [multiple processes], and the CPU manages them based on scheduling policies as discussed above.
Without concurrency: program execution involves one flow of control through the program.
Concurrent programming is about programs with multiple flows of control.

For example: a program that runs as multiple processes cooperating to achieve a common goal.
To cooperate, processes must somehow communicate.
We can create processes, or threads [lightweight processes], using the operating system API: fork(), exec(), ... (a thread-creation sketch follows).
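For instance, a minimal sketch using the POSIX threads API (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

// Each extra flow of control runs this function.
void *worker(void *arg)
{
   printf("hello from flow %ld\n", (long)arg);
   return NULL;
}

int main(void)
{
   pthread_t t1, t2;

   // Create two more flows of control in the same program.
   pthread_create(&t1, NULL, worker, (void *)1);
   pthread_create(&t2, NULL, worker, (void *)2);

   pthread_join(t1, NULL);      // wait for both flows to finish
   pthread_join(t2, NULL);
   return 0;
}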

Interprocess Communication models (Shared Memory Pitfalls):

1. Processes can communicate using files
Suppose there are two processes P1 and P2; they can use two files [F1, F2] for communication. For example, F1 can be used for writing by P1 and for reading by P2, and F2 for writing by P2 and for reading by P1. But this puts a high demand on I/O operations, which are long calls, and there is going to be a context switch for each communication (a sketch of one direction is below).
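A minimal sketch of the P1 side (P2's read side shown in comments):

// P1: write a message into F1 for P2 to read later.
#include <stdio.h>

int main(void)
{
   FILE *f1 = fopen("F1", "w");     // F1: written by P1, read by P2
   fprintf(f1, "hello from P1\n");
   fclose(f1);                      // each exchange costs long I/O calls
   return 0;
}

// P2 would do the reverse:
//    FILE *f1 = fopen("F1", "r");
//    char buf[64];
//    fgets(buf, sizeof buf, f1);
//    fclose(f1);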
2. The OS supports something called a pipe

This is like the one above, but instead of files we use a pipe, which is created by a system call [a long operation]. Each process can write on one end of the pipe and read from the other end. Using system calls, which are long operations, for every exchange is costly in time (see the sketch below).
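A minimal sketch: pipe() plus fork() gives the parent and child a shared channel.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
   int fd[2];
   char buf[32];

   pipe(fd);                   // system call: fd[0] = read end, fd[1] = write end

   if (fork() == 0) {
       close(fd[0]);           // child writes on one end of the pipe...
       write(fd[1], "ping", 5);
       close(fd[1]);
       return 0;
   }

   close(fd[1]);               // ...parent reads from the other end
   read(fd[0], buf, sizeof buf);
   printf("parent received: %s\n", buf);
   close(fd[0]);
   return 0;
}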

3. Processes could communicate through variables that are shared between them

-> Shared variables, shared memory; other variables are private to a process.
-> A problem arises when two or more processes try to update a shared variable. For example, let there be three processes: the first reads the data, the second performs some operations on it, and the third sends it to the client. Suppose the first process hasn't finished reading the data when its time slice completes; then a context switch happens, the second process performs operations on garbage data, and the third process sends garbage to the client. So there is a need for synchronization: the second process should not perform its operations until the first has finished reading.
-> Critical Section: the part of a program where a shared variable is accessed like this.
-> Necessary to synchronize processes that are interacting using shared variables.

Two processes or threads [lightweight processes] trying to access the same resource at the same time creates a race condition.

In Assembly

A single statement such as counter = counter + 1 compiles into several machine instructions (load, add, store), as sketched below. When two processes run such code concurrently, their instructions can interleave in different orders and produce different results; to avoid such things, synchronization mechanisms are used.
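The sketch (pseudo-assembly in comments; exact instructions vary by architecture):

int counter = 0;          // shared variable

// Each process executes:  counter = counter + 1;
// which the compiler expands into roughly:
//
//    load  R1, counter   ; 1. read counter into a register
//    add   R1, 1         ; 2. increment the register
//    store R1, counter   ; 3. write the register back
//
// If P1 is preempted after step 1, P2 may load the same old value;
// both then store old+1, and one of the two increments is lost.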

Synchronization primitives [provided by operating system]:

Synchronization mechanisms allow the processes to access critical sections in a synchronized manner to avoid inconsistent results.

Examples: mutex locks and semaphores.

Critical Section Problem: Mutual Exclusion

We must synchronize processes so that they access a shared variable one at a time in the critical section; this requirement is called Mutual Exclusion.

Lock: a synchronization mechanism
AcquireLock(L):
   done before the critical section of code;
   returns when it is safe for the process to enter the critical section.
ReleaseLock(L):
   done after the critical section;
   allows another process to acquire the lock.

Implementation of Lock

int L = 0;                 // 0 = free, 1 = held

void AcquireLock(void)
{
   while (L == 1);         // busy waiting // bad approach
   L = 1;
}

void ReleaseLock(void)
{
   L = 0;
}

This fails: a context switch after either process's time slice completes can fall between the test (L == 1) and the store (L = 1), so both processes may read the same value of L and enter the critical section together.

Test & Set (Mutex, Spin-wait Lock, Spinlock, Busy-wait Lock): a synchronization mechanism

Test and Set Lock (TSL) is a synchronization mechanism.
It uses a test-and-set instruction to provide synchronization among the processes executing concurrently.

Snippet:

TestAndSet(L):
   oldValue = L;
   L = 1;
   return oldValue;

This returns the old value of a memory location and sets the memory location to 1 as a single atomic operation (all of this happens in a single instruction, with hardware support), which ensures that if one process is currently executing test-and-set, no other process may begin another test-and-set until the first process's test-and-set is finished.

Lock = 0 => available
Lock = 1 => occupied

It ensures mutual exclusion and is deadlock-free, but it requires hardware support, and when a process tries to acquire an occupied lock it goes into a busy-waiting loop; in that time window, CPU utilization suffers (see the sketch below).
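For example, a minimal spinlock sketch using GCC's __sync_lock_test_and_set builtin to play the role of the test-and-set instruction (the actual primitive is hardware- and compiler-specific):

static int lock = 0;                  // 0 => available, 1 => occupied

void AcquireLock(void)
{
   // Atomically set lock to 1 and return its old value;
   // keep spinning while the old value was already 1.
   while (__sync_lock_test_and_set(&lock, 1) == 1)
       ;                              // busy waiting
}

void ReleaseLock(void)
{
   __sync_lock_release(&lock);        // atomically sets lock back to 0
}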

Semaphore ?

A semaphore is a simple integer which is used to provide synchronization among multiple processes running concurrently.
There are two types of semaphores:

  1. Counting Semaphore
  2. Binary Semaphore

Counting Semaphore:
This has two components

  • An integer value
  • An associated waiting list [queue]

The value of a semaphore may be positive or negative.
A positive value indicates the number of processes that can still enter the critical section at the same time, while a negative value indicates (in magnitude) the number of processes blocked in the waiting list.

struct semaphore
{
   int value;
   Queue type L;
}
 
Wait (semaphore s) // P,Down
{
   s.value = s.value - 1;
   if (s.value < 0)
   {
       put process (PCB) in L;
       sleep();
   }
   else return;
}
 
Signal (semaphore s) // V, Post
{
   s.value = s.value + 1;
   if (s.value <= 0)
   {
       select a process (PCB) from L;
       wake up();
   }
}

A process that wants to enter the critical section runs Wait(). If the semaphore value is negative after being decreased by one, the process is put in the queue; otherwise it goes into the critical section. When a process leaves the critical section, it runs Signal(), which increases the semaphore value, meaning there is a vacancy for another process; a process is then moved from the waiting list to the ready queue so that it can run Wait() again.

Binary Semaphore:

If we limit the value of a semaphore to 0 or 1, only one process can enter the critical section at a time.

struct semaphore
{
   int value;      // restricted to 0 or 1
   Queue type L;
}
 
Wait (semaphore s)
{
   if (s.value == 1)
   {
       s.value=0;
   }
   else
   {
       put process (PCB) in s.L;
       sleep();
   }
}
 
Signal (semaphore s)
{
   if (s.L is empty)
   {
       s.value=1;
   }
   else
   {
       select a process (PCB) from s.L;
       wake up();
   }
}
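As a usage sketch, POSIX exposes this interface directly: initializing with value 1 gives a binary semaphore, while a larger initial value gives a counting semaphore (compile with -pthread):

#include <semaphore.h>
#include <stdio.h>

sem_t s;

void critical_section(void)
{
   sem_wait(&s);                // P / Down: blocks if the value is 0
   printf("inside the critical section\n");
   sem_post(&s);                // V / Post: wakes a waiter if any
}

int main(void)
{
   sem_init(&s, 0, 1);          // 0 = shared between threads; initial value 1
   critical_section();
   sem_destroy(&s);
   return 0;
}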

4. Processes could communicate by sending and receiving messages to each other

This usually happens through:

  • Defining a message structure
  • Creating the message queue
  • Process A sending a message through the message queue
  • Process B receiving the message through the message queue

A queue must first be created by msgget() [a system call]; processes can then send and receive messages through msgsnd() and msgrcv().

There are two files in C: ProcessA.c and ProcessB.c.

ProcessA.c:

// C Program for Message Queue (Writer Process)
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#define MAX 100   // must not exceed the size of mesg_text
 
// structure for message queue
struct mesg_buffer {
   long mesg_type;
   char mesg_text[100];
} message;
 
int main()
{
   key_t key;
   int msgid;
 
   // ftok to generate a unique key
   // (the file "progfile" must already exist)
   key = ftok("progfile", 65);
 
   // msgget creates a message queue
   // and returns identifier
   msgid = msgget(key, 0666 | IPC_CREAT);
   message.mesg_type = 1;
 
   printf("Write Data : ");
   fgets(message.mesg_text,MAX,stdin);
 
   // msgsnd to send message
   msgsnd(msgid, &message, sizeof(message), 0);
 
   // display the message
   printf("Data send is : %s \n", message.mesg_text);
 
   return 0;
}

When run, ProcessA prompts "Write Data : ", reads a line, and prints the data it sent.

ProcessB.c:

// C Program for Message Queue (Reader Process)
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
 
// structure for message queue
struct mesg_buffer {
   long mesg_type;
   char mesg_text[100];
} message;
 
int main()
{
   key_t key;
   int msgid;
 
   // ftok to generate unique key
   key = ftok("progfile", 65);
 
   // msgget creates a message queue
   // and returns identifier
   msgid = msgget(key, 0666 | IPC_CREAT);
 
   // msgrcv to receive message; the size again excludes mesg_type
   msgrcv(msgid, &message, sizeof(message.mesg_text), 1, 0);
 
   // display the message
   printf("Data Received is : %s \n",
                   message.mesg_text);
 
   // to destroy the message queue
   msgctl(msgid, IPC_RMID, NULL);
 
   return 0;
}

ProcessB prints the data it received and then removes the message queue.

Summary

  • Concurrency: multiple computations running simultaneously
  • Shared-memory & message-passing paradigms
  • Processes & threads
    Process is like a virtual computer; thread is like a virtual processor
  • Race conditions
    When correctness of result (postconditions and invariants) depends on the relative timing of events

These ideas connect to our three key properties of good software mostly in bad ways. Concurrency is necessary but it causes serious problems for correctness. We’ll work on fixing those problems in the next few readings.

Safe from bugs. Concurrency bugs are some of the hardest bugs to find and fix, and require careful design to avoid.

Easy to understand. Predicting how concurrent code might interleave with other concurrent code is very hard for programmers to do. It’s best to design in such a way that programmers don’t have to think about that.