Multi-Threaded Programming - Sahyadri


Chapter 4: Multithreaded Programming

Multithreaded Processes



- An application may need to be implemented as a set of related units of execution. There are two possible ways:

  o Multiple processes

    - Heavyweight
    - Less efficient

  o Multiple threads

- A thread:

  o is referred to as a lightweight process (LWP);
  o is a basic unit of CPU utilization;
  o has a thread ID, a program counter, a register set, and a stack;
  o shares the code section, data section, open files, etc. with the other threads belonging to the same process (see the short Pthreads sketch at the end of this section).





- A traditional (or heavyweight) process has a single thread of control.

- If a process can have multiple threads, it can do more than one task.



- Multiple threads:

  o Lightweight because of shared memory.
  o Thread creation and termination are less time consuming.
  o A context switch between two threads takes less time.
  o Communication between threads is fast because they share the same address space; kernel intervention is not required.

- Multithreading refers to the ability of an OS to support multiple threads of execution within a single process.

- A multithreaded process contains several different flows of control within the same address space.

- Threads are tightly coupled.

- A single PCB and user space are associated with the process, and all of its threads belong to this process.

- Each thread has its own thread control block (TCB) containing the following information: thread ID, program counter, register set, and other information.
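To make the sharing concrete, here is a minimal Pthreads sketch (not taken from these notes; the variable and function names are our own): a global variable lives in the shared data section and is visible to both threads, while each thread's local variables live on its own stack.

    /* Minimal sketch: threads of one process share the data section, but
     * each thread has its own stack. Build with: gcc demo.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;              /* data section: visible to all threads */

    void *worker(void *arg)
    {
        int local = 0;                   /* lives on this thread's own stack */
        for (int i = 0; i < 1000; i++) {
            local++;
            shared_counter++;            /* shared; a real program would lock this */
        }
        printf("worker done, local = %d\n", local);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;                               /* thread ID */
        pthread_create(&tid, NULL, worker, NULL);    /* create a second thread */
        pthread_join(tid, NULL);                     /* wait for it to finish */
        printf("shared_counter = %d\n", shared_counter);
        return 0;
    }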

Why multithreading?


- Use of traditional processes incurs high overhead due to process switching.

- There are two reasons for high process-switching overhead:

  o The unavoidable overhead of saving the state of the running process and loading the state of the new process.
  o A process is considered to be a unit of resource allocation, resource accounting, and inter-process communication, so all of this state is involved in a switch.

- Use of threads splits the process state into two parts: the resource state remains with the process, while the execution state is associated with a thread.

- The thread state consists only of the state of the computation.

- Only this state needs to be saved and restored while switching between threads of a process; the resource state remains with the process.

- The resource state is saved only when the kernel needs to switch between threads of different processes.

Examples



- Web browser

  o One thread displays messages, text, or images.
  o Another thread retrieves data from the network.

- Word processor

  o One thread displays graphics.
  o Another thread reads keystrokes.
  o Another thread performs spelling and grammar checking.

Benefits of Multithreading



- Responsiveness

  o A process continues running even if one of its threads is blocked or performing a lengthy operation, so responsiveness increases.

- Resource Sharing

  o Threads share the memory and other resources of their process.
  o This allows an application to have several different threads of activity, all within the same address space.

- Economy

  o Because threads share the memory and other resources of the process, thread management (creation, termination, switching between threads) is less time consuming than process management.

- Utilization of Multiprocessor Architectures

  o Each thread may run in parallel on a different processor, which increases concurrency (a parallel-sum sketch follows this list).
  o A single-threaded process can run on only one processor, irrespective of the number of processors present.
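As a rough illustration of the last point, the sketch below (our own example, with hypothetical names such as sum_chunk) splits a sum over an array across several threads; on a multiprocessor the chunks can run in parallel.

    /* Illustrative sketch: partial sums computed by several threads and
     * combined after joining. Build with: gcc sum.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    #define N        1000000
    #define NTHREADS 4

    static int data[N];

    struct chunk { int start, end; long sum; };

    static void *sum_chunk(void *arg)
    {
        struct chunk *c = arg;
        for (int i = c->start; i < c->end; i++)
            c->sum += data[i];           /* each thread works on its own slice */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        struct chunk chunks[NTHREADS];

        for (int i = 0; i < N; i++) data[i] = 1;

        for (int t = 0; t < NTHREADS; t++) {
            chunks[t] = (struct chunk){ t * (N / NTHREADS),
                                        (t + 1) * (N / NTHREADS), 0 };
            pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
        }

        long total = 0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);  /* wait, then combine partial results */
            total += chunks[t].sum;
        }
        printf("total = %ld\n", total);  /* prints 1000000 */
        return 0;
    }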

Multithreading Models: User & Kernel Threads



- Support for threads may be provided either at the user level (user threads) or by the kernel (kernel threads).

- User threads are supported above the kernel and are managed without kernel support.

- Kernel threads are supported and managed directly by the OS.

  o User Threads

    - These are supported above the kernel and are implemented by a thread library at the user level.
    - The library provides thread creation, scheduling, and management with no support from the kernel.
    - No kernel intervention is required.
    - User-level threads are fast to create and manage.
    - Examples:

      - POSIX Pthreads
      - Mach C-threads
      - Solaris threads

  o Kernel Threads

    - The kernel performs thread creation, scheduling, and management in kernel space.
    - Kernel threads are slower to create and manage.
    - If a thread performs a blocking system call, the kernel can schedule another kernel thread.
    - In a multiprocessor environment, the kernel can schedule threads on different processors.

- Most operating systems also support kernel threads:

  o Windows XP
  o Solaris
  o Mac OS X
  o Tru64 UNIX
  o Linux

Multithreading Models

Many-to-One Model

- Many user-level threads are mapped to a single kernel thread.

- Thread management is done by the thread library in user space.

- It is fast and efficient.

- Drawback: if one user thread makes a blocking system call and only one kernel thread is available to handle it, the entire process will be blocked.

- Examples:

  o Green threads: a thread library for Solaris
  o GNU Portable Threads

One-to-One Model

- Each user-level thread is mapped to one kernel thread.

- It provides more concurrency than the many-to-one model.

- If one user thread blocks in a system call on its kernel thread, another kernel thread mapped to a different user thread can still run.

- It allows multiple threads to run in parallel.

- Drawback: creating a user thread requires the creation of a corresponding kernel thread, which is an overhead.

- Examples:

  o Windows 95/98/NT/2000/XP
  o Linux
  o Solaris 9 and later

Many-to-Many Model

- Many user-level threads are mapped to a smaller or equal number of kernel threads.

- The number of kernel threads may be specific to either a particular application or a particular machine.

- It allows many user-level threads to be created.

- The corresponding kernel threads can run in parallel on a multiprocessor.


Thread Libraries



- A thread library provides the programmer with an API for creating and managing threads.

- There are two ways of implementing a thread library:

  1. Provide a library entirely in user space with no kernel support.

     - All code and data structures for the library exist in user space.
     - This means that invoking a function in the library results in a local function call in user space and not a system call.

  2. Implement a kernel-level library supported directly by the OS.

     - Code and data structures of the library exist in kernel space.
     - Invoking a function in the API for the library typically results in a system call to the kernel.

- The Java thread API allows thread creation and management directly in Java programs. Since a Java program executes inside the JVM, which runs on top of a host operating system, the Java thread API is typically implemented using a thread library available on the host system.

  o On Windows systems, Java threads are typically implemented using the Win32 API.
  o Linux and UNIX systems use Pthreads (a small Pthreads sketch follows).
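As an example of what a thread-library API looks like to the programmer, the Pthreads sketch below (our own illustration, not part of the notes) passes an argument to a thread and collects its result through pthread_join.

    /* Illustrative sketch of typical thread-library usage: pass an argument
     * to a thread and collect its result. Build with: gcc fact.c -pthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *factorial(void *arg)
    {
        long n = (long)arg;
        long *result = malloc(sizeof *result);
        *result = 1;
        for (long i = 2; i <= n; i++)
            *result *= i;
        return result;                   /* returned pointer is handed to join */
    }

    int main(void)
    {
        pthread_t tid;
        void *ret;

        pthread_create(&tid, NULL, factorial, (void *)5L);  /* argument: n = 5 */
        pthread_join(tid, &ret);         /* wait and receive the return value */

        printf("5! = %ld\n", *(long *)ret);
        free(ret);
        return 0;
    }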

Threading Issues



- Cancellation

  o Cancellation is the task of terminating a thread before it has completed.
  o For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads can be cancelled.
  o A thread that is to be cancelled is called the target thread.
  o Asynchronous cancellation: one thread immediately terminates the target thread.
  o Deferred cancellation: the target thread periodically checks whether it should terminate (see the sketch just below).
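A minimal sketch of deferred cancellation with Pthreads (our own example; deferred cancellation is the Pthreads default): the target thread checks for a pending cancellation request at points it chooses via pthread_testcancel().

    /* Deferred cancellation sketch. Build with: gcc cancel.c -pthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *target(void *arg)
    {
        (void)arg;
        for (;;) {
            /* ... do a unit of work ... */
            pthread_testcancel();        /* safe point: honour a pending cancel */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, target, NULL);

        sleep(1);                        /* let the target run for a while */
        pthread_cancel(tid);             /* request cancellation of the target */
        pthread_join(tid, NULL);         /* target exits at its next check */

        puts("target thread cancelled");
        return 0;
    }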



- Signal Handling

  o Signal handling is used to notify a process that a particular event has occurred.
  o A signal is generated by the occurrence of a particular event.
  o A generated signal is delivered to a process.
  o Once delivered, the signal must be handled.
  o Synchronous signals
    - Examples: illegal memory access, division by zero.
    - Such signals are delivered to the same process that caused them.
  o Asynchronous signals
    - When a signal is generated by an event external to a running process, that signal is sent to another process.
  o A signal may be handled by (a sigaction sketch follows these notes):
    1. A default signal handler (provided by the kernel), or
    2. A user-defined signal handler.
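The sketch below (our own single-threaded example, not from the notes) replaces the default handler for SIGINT with a user-defined handler using sigaction().

    /* User-defined signal handler sketch. Build with: gcc sig.c */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    static void on_sigint(int signo)
    {
        (void)signo;
        got_signal = 1;                  /* only async-signal-safe work here */
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_sigint;       /* user-defined handler */
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGINT, &sa, NULL);    /* replaces the default handler */

        puts("press Ctrl-C ...");
        while (!got_signal)
            pause();                     /* wait until a signal is delivered */

        puts("SIGINT handled by user-defined handler");
        return 0;
    }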



- Thread Pools

  o Unlimited thread creation could exhaust system resources.
  o One solution to this problem is a thread pool: create a number of threads at process start-up and place them into a pool.
  o Example: when a server receives a request, it awakens a thread from this pool; once the thread completes its service, it returns to the pool and awaits more work (a minimal pool sketch follows).
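A minimal thread-pool sketch along these lines (our own simplified example; a real server pool would need queue-full handling, error checking, and a fuller shutdown protocol): a fixed number of worker threads created at start-up take tasks from a shared queue protected by a mutex and a condition variable.

    /* Fixed-size thread pool sketch. Build with: gcc pool.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4
    #define QSIZE    16

    typedef void (*task_fn)(int);

    static task_fn         queue_fn[QSIZE];
    static int             queue_arg[QSIZE];
    static int             head = 0, tail = 0, count = 0, shutting_down = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0 && !shutting_down)
                pthread_cond_wait(&not_empty, &lock);   /* sleep until work arrives */
            if (count == 0 && shutting_down) {
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            task_fn fn = queue_fn[head];
            int a     = queue_arg[head];
            head = (head + 1) % QSIZE;
            count--;
            pthread_mutex_unlock(&lock);
            fn(a);                                      /* run the task */
        }
    }

    static void submit(task_fn fn, int arg)
    {
        pthread_mutex_lock(&lock);
        queue_fn[tail]  = fn;                           /* assumes queue is not full */
        queue_arg[tail] = arg;
        tail = (tail + 1) % QSIZE;
        count++;
        pthread_cond_signal(&not_empty);                /* wake one waiting worker */
        pthread_mutex_unlock(&lock);
    }

    static void handle_request(int id)
    {
        /* cast is for printing only; pthread_t is an integer type on Linux */
        printf("request %d handled by thread %lu\n", id, (unsigned long)pthread_self());
    }

    int main(void)
    {
        pthread_t workers[NWORKERS];
        for (int i = 0; i < NWORKERS; i++)              /* create pool at start-up */
            pthread_create(&workers[i], NULL, worker, NULL);

        for (int r = 0; r < 8; r++)
            submit(handle_request, r);                  /* "server" submits requests */

        pthread_mutex_lock(&lock);
        shutting_down = 1;
        pthread_cond_broadcast(&not_empty);             /* let idle workers exit */
        pthread_mutex_unlock(&lock);

        for (int i = 0; i < NWORKERS; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }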



- Thread-Specific Data

  o Threads belonging to a process share the data of the process.
  o However, in some circumstances each thread might need its own copy of certain data (a sketch using the Pthreads key interface follows).
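One way to get a per-thread copy of data is the Pthreads key interface, sketched below (our own example): each thread stores its own value under the same key.

    /* Thread-specific data sketch. Build with: gcc tsd.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_key_t key;            /* one key, a distinct value per thread */

    static void *worker(void *arg)
    {
        pthread_setspecific(key, arg);   /* this thread's private copy */
        /* ... later, possibly deep inside other functions ... */
        printf("my thread-specific value: %ld\n", (long)pthread_getspecific(key));
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_key_create(&key, NULL);  /* no destructor needed for plain values */

        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        pthread_key_delete(key);
        return 0;
    }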



- Scheduler Activation

  o Many systems implement an intermediate data structure between the user and kernel threads. This data structure is known as a lightweight process, or LWP.
  o To the user-thread library, an LWP appears to be a virtual processor on which the application can schedule a user thread to run.
  o Each LWP is attached to a kernel thread, and it is kernel threads that the operating system schedules to run on physical processors.
  o If a kernel thread blocks, its LWP blocks as well; up the chain, the user-level thread attached to the LWP also blocks.

        User thread
            |
        Lightweight process (LWP)
            |
        Kernel thread

  o If a process has four LWPs, then a fifth request must wait for one of the LWPs to return from the kernel.
  o Communication between the user-thread library and the kernel is known as scheduler activation.
  o It works as follows: the kernel provides the application with a set of virtual processors (LWPs), and the application can schedule user threads onto an available virtual processor.
  o The kernel must also inform the application about certain events; this procedure is known as an upcall.
  o Upcalls are handled by the thread library with an upcall handler.



















