Multi-Threaded Programming - Sahyadri


Nov 18, 2013



Multithreaded programming

Multithreaded Processes

An application may need to be implemented as a set of related units of execution. There are two possible approaches:

Multiple processes

Heavy weight

Less efficient


Multiple threads

A thread

is referred to as a lightweight process (LWP).

is a basic unit of CPU utilization

has a thread ID, a program counter, a register set, and a stack.

shares a code section, data section, open files etc. with other threads belonging to the
same process.

A traditional (or heavyweight) process has a single thread of control.

If a process has multiple threads, it can perform more than one task at a time.

Multiple threads

Lightweight because of shared memory.

Thread creation and termination are less time consuming.

Switching between two threads takes less time.

Communication between threads is fast because they share an address space; kernel intervention is not required.
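The fast, kernel-free communication described above can be sketched with Python's `threading` module (used here only as an illustration; Pthreads in C follows the same pattern): several threads update one shared variable directly, with a lock keeping the updates consistent.

```python
import threading

counter = 0              # shared data: lives in the process's single address space
lock = threading.Lock()  # protects the shared counter from concurrent updates

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:       # no kernel IPC needed: threads touch the same memory
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)           # 40000: every thread updated the same variable
```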

Multithreading refers to the ability of an OS to support multiple threads of execution within a single process.

A multithreaded process contains several different flows of control within the same address space.

Threads are tightly coupled.

A single PCB and user space are associated with the process, and all the threads belong to this process.
Each thread has its own thread control block (TCB) containing the following information: thread ID, program counter, register set and other information.
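As a purely illustrative sketch (the field names below are hypothetical; a real TCB is internal to the kernel or thread library and far richer), the per-thread state listed above could be modelled like this:

```python
from dataclasses import dataclass, field

@dataclass
class ThreadControlBlock:
    """Toy model of the per-thread state named above (illustrative only)."""
    thread_id: int                                 # unique thread ID
    program_counter: int                           # where the thread resumes
    registers: dict = field(default_factory=dict)  # saved register set
    stack: list = field(default_factory=list)      # private per-thread stack

tcb = ThreadControlBlock(thread_id=1, program_counter=0)
print(tcb.thread_id)  # 1
```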

Why multithreading?

Use of traditional processes incurs high overhead due to process switching.

There are two reasons for high process-switching overhead:

Unavoidable overhead of saving the state of running process and loading the state of
new process.

A process is considered to be a unit of resource allocation, resource accounting and inter-process communication.

Use of threads splits the process state into two parts: resource state remains with the process, while execution state is associated with a thread.

The thread state consists only of the state of the computation.

This state needs to be saved and restored while switching between threads of a process; the resource state remains with the process.

Resource state is saved only when the kernel needs to perform switching between
threads of different processes.


Web browser

Displaying message or text or image

Retrieves data from network

Word processor

Displaying graphics

Reading key strokes

Spelling and grammar checking

Benefits of Multithreading


Responsiveness

A process continues running even if one thread is blocked or performing a lengthy operation, thus increasing responsiveness.

Resource Sharing

Threads share memory and other resources.

It allows an application to have several different threads of activity, all within the same
address space.


Economy

Sharing memory and other resources is economical. Thread management (creation, termination and switching between threads) is less time consuming than process management.

Utilization of Multiprocessor Architectures

Each thread may run in parallel on a different processor, which increases concurrency.

A single-threaded process can run on only one processor irrespective of the number of processors present.
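The responsiveness benefit can be demonstrated with a small Python sketch (illustrative only): one thread blocks on a lengthy operation while another finishes immediately, so the process as a whole keeps making progress.

```python
import threading
import time

results = []

def lengthy_operation():
    time.sleep(0.2)         # stands in for a blocking call or long computation
    results.append("slow")

def quick_operation():
    results.append("fast")  # not held up by the slow thread

t_slow = threading.Thread(target=lengthy_operation)
t_fast = threading.Thread(target=quick_operation)
t_slow.start()
t_fast.start()
t_slow.join()
t_fast.join()

print(results)              # ['fast', 'slow']: the quick thread responded first
```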

Multithreading Models: User & Kernel Threads

Support for threads may be provided either at the user level (user threads) or by the kernel (kernel threads).

User threads are supported above the kernel and are managed without kernel support.

Kernel threads are supported by OS.


User Threads

These are supported above the kernel and are implemented by a thread library at
user level.

The library provides thread creation and scheduling management with no support from the kernel.

No kernel intervention is required.

User level threads are fast to create and manage.






Kernel Threads

Kernel performs thread creation, scheduling, management in kernel space.

Kernel threads are slower to create and manage.

If a thread performs a blocking system call, the kernel can schedule another
kernel thread.

In a multiprocessor environment, the kernel can schedule threads on different processors.
Most operating systems also support kernel threads:

Windows XP


Mac OS X

Tru64 UNIX


Multithreading Models

Many-to-One Model

Many user-level threads are mapped to a single kernel thread.

Thread management is done by the thread library in user space.

It is fast, efficient.

Drawback: if one user thread makes a blocking system call and only one kernel thread is available to handle it, the entire process will be blocked.

Green threads: a library for Solaris

GNU Portable Threads

One-to-One Model

Each user-level thread is mapped to one kernel thread.

It provides more concurrency than the many-to-one model.

If one user thread makes a blocking system call, another kernel thread can still run a different user thread.

It allows multiple threads to run in parallel.

Drawback: creating a user thread requires creating a corresponding kernel thread, which is overhead.

Windows 95/98/NT/2000/XP


Solaris 9 and later

Many-to-Many Model

Many user-level threads are mapped to a smaller or equal number of kernel threads.

The number of kernel threads may be specific to either a particular application or a particular machine.

It allows many user level threads to be created.

Corresponding kernel threads can run in parallel on multiprocessor.

Thread Libraries

A thread library provides the programmer with an API for creating and managing threads.

There are two ways of implementing a thread library:


Provide a library entirely in user space with no kernel support.

All code and data structure for a library exist in user space.

This means that invoking a function in the library results in a local function call in user space, and not a system call.


Implement kernel level library

Supported directly by the OS.

Code and Data structure of library exist in kernel space.

Invoking a function in the API for the library typically results in a system call to the kernel.

The Java thread API allows thread creation and management directly in Java programs. Since a Java program executes inside the JVM, which runs on top of a host operating system, the Java thread API is typically implemented using a thread library available on the host system.

On Windows systems, Java threads are typically implemented using the Win32 API.

Linux and UNIX systems use Pthreads.
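The basic create/join pattern that these libraries expose (e.g. `pthread_create`/`pthread_join` in Pthreads) can be sketched with Python's `threading` module, used here only as an analogous user-level API:

```python
import threading

def worker(name, out):
    out.append(f"hello from {name}")  # runs in its own thread of control

out = []
# roughly analogous to pthread_create(&tid, NULL, worker, arg)
t = threading.Thread(target=worker, args=("thread-1", out))
t.start()
t.join()                              # roughly analogous to pthread_join(tid, NULL)
print(out)                            # ['hello from thread-1']
```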

Threading Issues


Thread Cancellation

Cancellation is the task of terminating a thread before it has completed.

For example: if multiple threads are concurrently searching a database and one thread returns the result, the remaining threads can be cancelled.

A thread that is to be cancelled is called the target thread.

Asynchronous Cancellation

One thread immediately terminates the target thread.

Deferred cancellation

The target thread periodically checks whether it should terminate or not.
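Deferred cancellation can be sketched in Python (illustrative only; Pthreads expresses the same idea through cancellation points): the target thread polls a flag and terminates itself cleanly when cancellation is requested.

```python
import threading
import time

cancel_requested = threading.Event()      # set by the cancelling thread

def target_thread():
    while not cancel_requested.is_set():  # periodic check: "should I terminate?"
        time.sleep(0.01)                  # do one small unit of work
    # point where the thread could release its resources before exiting

t = threading.Thread(target=target_thread)
t.start()
cancel_requested.set()                    # request deferred cancellation
t.join()
print(t.is_alive())                       # False: the target terminated itself safely
```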

Signal Handling

A signal is used to notify a process that a particular event has occurred.

A signal is generated by the occurrence of a particular event.

A generated signal is delivered to a process

Once delivered, the signal must be handled.

Synchronous signal

Example: illegal memory access.

Division by zero.

Here signals are sent to the process that caused the signal.

Asynchronous signal

When a signal is generated by an event external to a running process, that signal is sent to another process.

It may be handled by :


A default signal handler


A user defined signal handler.
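A user-defined handler can be sketched with Python's `signal` module (POSIX-only here, since `SIGUSR1` does not exist on Windows): the process installs a handler in place of the default one, then a signal is generated and delivered to it.

```python
import os
import signal

events = []

def handler(signum, frame):           # user-defined signal handler
    events.append(signum)

# replace the default handler for SIGUSR1 with our own
signal.signal(signal.SIGUSR1, handler)

os.kill(os.getpid(), signal.SIGUSR1)  # generate the signal; it is delivered to us

print(events == [signal.SIGUSR1])     # True: the user-defined handler ran
```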

Thread pools

Unlimited threads could exhaust the system.

One solution to this problem is a thread pool.

Create a number of threads at process start up and place them into pool.


When a server receives a request, it awakens a thread from this pool.

Once a thread completes its service, it returns to the pool and awaits more work.
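The pool pattern above maps directly onto Python's `concurrent.futures.ThreadPoolExecutor` (an illustrative stand-in for a server's worker pool): a fixed set of threads is created once and reused for every request.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    return f"served {request_id}"  # work performed by a pooled thread

# four threads are created up front and reused; requests never spawn new threads
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results[0], results[-1])     # served 0 served 7
```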

Thread specific data

Threads belonging to a process share the data of the process.

However, in some circumstances, each thread might need its own copy of data.
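Thread-specific data is exposed directly by thread libraries (e.g. `pthread_key_create` in Pthreads); a minimal sketch with Python's `threading.local`:

```python
import threading

tls = threading.local()   # each thread sees its own independent attributes

seen = {}

def worker(name):
    tls.value = name      # this assignment is private to the current thread
    seen[name] = tls.value

threads = [threading.Thread(target=worker, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(seen)               # each thread kept its own copy of the value
```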

Scheduler Activation

Many systems implement an intermediate data structure between user and kernel threads. This data structure is known as a lightweight process (LWP).

To the user-thread library, an LWP appears to be a virtual processor on which the application can schedule a user thread to run.

Each LWP is attached to a kernel thread, and it is kernel threads that the operating system schedules to run on physical processors.

If a kernel thread blocks, the LWP blocks as well.

Up the chain, the user level thread attached to the LWP also blocks.

Figure: user thread → LWP (lightweight process) → kernel thread.

If a process has four LWPs, then the fifth request must wait for one of the LWPs to return
from the kernel.

Communication between the user-thread library and the kernel is known as scheduler activation.

It works as follows: the kernel provides an application with a set of virtual processors (LWPs), and the application can schedule user threads onto an available virtual processor.

The kernel must also inform the application about certain events. This procedure is known as an upcall.

Upcalls are handled by the thread library with an upcall handler.