Chapter 4

Threads

Patricia Roy

Manatee Community College, Venice, FL

©2008, Prentice Hall


Operating Systems:

Internals and Design Principles, 6/E

William Stallings

Processes and Threads


Resource ownership - process includes a virtual address space to hold the process image


Scheduling/execution - follows an execution path that may be interleaved with other processes


These two characteristics are treated
independently by the operating system


Processes and Threads


The unit of dispatching is referred to as a thread or lightweight process


The unit of resource ownership is referred to as a process or task


Multithreading


To support multiple, concurrent paths of
execution within a single process
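
A minimal sketch (not from the original slides) of this idea using POSIX threads: two independent paths of execution run inside one process and share its address space. The function and thread names are purely illustrative.

/* Minimal sketch: two concurrent paths of execution inside one process,
 * using POSIX threads.  Compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    const char *name = arg;            /* each thread gets its own argument */
    for (int i = 0; i < 3; i++)
        printf("%s: step %d\n", name, i);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Spawn two threads; both run worker() within the same address space. */
    pthread_create(&t1, NULL, worker, "thread A");
    pthread_create(&t2, NULL, worker, "thread B");

    /* Wait for both paths of execution to finish. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}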

Multithreading


MS-DOS supports a single user process and thread


Some variants of UNIX support multiple user
processes but only one thread per process


The Java run-time environment is a single process with multiple threads


Windows, Solaris, and other modern
versions of Unix support multiple processes
and multiple threads per process

Threads and Processes

Processes


A virtual address space which holds the
process image


Protected access to processors, other processes (for interprocess communication), files, and I/O resources


One or More Threads in
Process


An execution state (running, ready, etc.)


Saved thread context when not running


An execution stack


One or More Threads in
Process


Some per-thread static storage for local variables


Access to the memory and resources of its
process


all threads of a process share this
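
An illustrative sketch (assumed example, not from the slides) of what this sharing means in practice: a global variable is visible to every thread of the process, while each thread's local variables live on its own private stack.

/* Sketch: all threads of a process see the same globals and heap, while
 * each thread has its own stack.  Compile with: gcc shared.c -pthread */
#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;                 /* visible to every thread */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg)
{
    int local = 0;                   /* lives on this thread's own stack */
    for (int i = 0; i < 100000; i++) {
        local++;
        pthread_mutex_lock(&lock);   /* protect the shared variable */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("local=%d (private to this thread)\n", local);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, bump, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared_counter=%d (shared by all threads)\n", shared_counter);
    return 0;
}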


Threads

Uses of Threads in a Single-User Multiprocessing System


Modular program structure


Foreground and background work


One thread for the user interface, another for data processing


Speed of execution


Asynchronous processing


For example, a thread to do backup for a
word processor
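
A rough sketch of the foreground/background idea along the lines of the word-processor example; the autosave behavior below is a stand-in, not an actual implementation.

/* Sketch: a foreground thread keeps handling input while a background
 * thread periodically performs an autosave.
 * Compile with: gcc autosave.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *autosave(void *arg)
{
    for (;;) {
        sleep(5);                       /* wake up every few seconds */
        printf("[background] document backed up\n");
    }
    return NULL;
}

int main(void)
{
    pthread_t bg;
    pthread_create(&bg, NULL, autosave, NULL);
    pthread_detach(bg);                 /* fire and forget */

    char line[256];
    printf("[foreground] type text, Ctrl-D to quit\n");
    while (fgets(line, sizeof line, stdin))
        printf("[foreground] got: %s", line);
    return 0;
}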


Remote Procedure Call Using
Single Thread


Download and display a web page in a
web browser


RPC Using One Thread per
Server
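
An illustrative comparison of the two RPC slides (the rpc_call() stub below is hypothetical and simply sleeps to stand in for a remote call): with a single thread the two calls complete one after the other; with one thread per server they overlap.

/* Sketch: sequential RPC vs. one thread per server.
 * Compile with: gcc rpc.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void rpc_call(const char *server)
{
    sleep(2);                           /* stand-in for network + server time */
    printf("reply from %s\n", server);
}

static void *call_server(void *arg)
{
    rpc_call(arg);
    return NULL;
}

int main(void)
{
    /* Single thread: total time is roughly the sum of both calls. */
    rpc_call("server A");
    rpc_call("server B");

    /* One thread per server: the two calls proceed concurrently. */
    pthread_t ta, tb;
    pthread_create(&ta, NULL, call_server, "server A");
    pthread_create(&tb, NULL, call_server, "server B");
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    return 0;
}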

Benefits of Threads


Takes less time to create a new thread
than a process


Experiments show it is roughly 10 times faster


Less time to terminate a thread than a
process


Less time to switch between two threads
within the same process


No memory reallocation is involved
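
A rough micro-benchmark sketch of the creation-cost claim; absolute numbers vary widely by hardware and OS, so treat the output as indicative only.

/* Sketch: compare the cost of creating a thread with creating a process.
 * Compile with: gcc bench.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

static void *noop(void *arg) { return NULL; }

static double elapsed_us(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3;
}

int main(void)
{
    struct timespec t0, t1;
    enum { N = 1000 };

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {           /* create and join N threads */
        pthread_t t;
        pthread_create(&t, NULL, noop, NULL);
        pthread_join(t, NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("thread create+join: %.1f usec each\n", elapsed_us(t0, t1) / N);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {           /* fork and wait for N processes */
        pid_t pid = fork();
        if (pid == 0) _exit(0);
        waitpid(pid, NULL, 0);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("fork+wait:          %.1f usec each\n", elapsed_us(t0, t1) / N);
    return 0;
}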


Benefits of Threads


Since threads within the same process
share memory and files, they can
communicate with each other without
invoking the kernel
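
A sketch of kernel-free communication between threads of one process, assuming C11 atomics: the producer publishes data through shared memory and a flag, and the consumer picks it up without any system call for the exchange itself (thread creation and join still enter the kernel).

/* Sketch: threads communicate through shared memory, no kernel call for
 * the data exchange.  Compile with: gcc handoff.c -pthread */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int message;                       /* shared data                    */
static atomic_int ready = 0;              /* shared flag, set in user space */

static void *producer(void *arg)
{
    message = 42;                         /* write the data first           */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return NULL;
}

static void *consumer(void *arg)
{
    /* Busy-wait on the flag; the handoff happens entirely in user space. */
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;
    printf("received %d\n", message);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}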


Threads


Suspending a process involves
suspending all threads of the process
since all threads share the same address
space


Termination of a process terminates all threads within the process


Thread States


Operations associated with a change in thread state


Spawn - spawn another thread


Block


Unblock


Finish - deallocate register context and stacks


Thread Implementation - Packages


Threads are provided as a package,
including operations to create, destroy,
and synchronize them



A package can be implemented as:


User-level threads


Kernel threads


User-Level Threads


All thread management is done by the
application


The kernel is not aware of the existence of
threads


User-Level Threads


Thread library entirely executed in user mode


Kernel is not involved!


Cheap to manage threads


Create: set up a stack


Destroy: free up memory


Cheap to do context switch


Just save CPU registers


Switching is done where the program logic decides (e.g., an explicit yield), not by the kernel


A blocking system call blocks all peer threads
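
A minimal sketch of why user-level switching is cheap, using the ucontext API (obsolescent in POSIX but still available on Linux/glibc): swapcontext() saves and restores register state entirely in user mode, with no kernel scheduling involved.

/* Sketch of a user-level context switch.  Compile with: gcc ult.c */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, ult_ctx;
static char ult_stack[64 * 1024];          /* the "thread's" private stack */

static void ult_body(void)
{
    printf("user-level thread: running\n");
    swapcontext(&ult_ctx, &main_ctx);      /* yield back to main */
    printf("user-level thread: resumed, finishing\n");
    /* returning resumes uc_link (main_ctx) */
}

int main(void)
{
    getcontext(&ult_ctx);
    ult_ctx.uc_stack.ss_sp = ult_stack;
    ult_ctx.uc_stack.ss_size = sizeof ult_stack;
    ult_ctx.uc_link = &main_ctx;           /* where to go when the thread finishes */
    makecontext(&ult_ctx, ult_body, 0);

    printf("main: switching to user-level thread\n");
    swapcontext(&main_ctx, &ult_ctx);      /* "dispatch" the thread */
    printf("main: back, switching again\n");
    swapcontext(&main_ctx, &ult_ctx);      /* resume it until it finishes */
    printf("main: user-level thread finished\n");
    return 0;
}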

Kernel-Level Threads


Kernel is aware of and schedules threads


A blocking system call will not block all peer threads



Windows is an example of this approach


Kernel maintains context information for
the process and the threads


Scheduling is done on a thread basis



Kernel-Level Threads


Kernel is aware of and schedules threads


A blocking system call will not block all peer threads



More expensive to manage threads


More expensive to do context switch


Kernel intervention, mode switches are
required
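
An illustrative sketch of this difference: one kernel-scheduled thread blocks in read() while a peer thread keeps running. Under a purely user-level package, the same blocking call would stall the whole process.

/* Sketch: a blocking system call in one kernel-level thread does not
 * block its peers.  Compile with: gcc block.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *blocker(void *arg)
{
    char buf[64];
    printf("blocker: waiting in read() on stdin...\n");
    read(STDIN_FILENO, buf, sizeof buf);   /* blocks in the kernel */
    printf("blocker: got input, done\n");
    return NULL;
}

static void *worker(void *arg)
{
    for (int i = 0; i < 5; i++) {
        printf("worker: still making progress (%d)\n", i);
        sleep(1);
    }
    return NULL;
}

int main(void)
{
    pthread_t b, w;
    pthread_create(&b, NULL, blocker, NULL);
    pthread_create(&w, NULL, worker, NULL);
    pthread_join(w, NULL);
    pthread_join(b, NULL);
    return 0;
}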

Thread/Process Operation Latencies

Operation      User-level threads   Kernel-level threads   Processes
null fork      34 usec              948 usec               11,300 usec
signal-wait    37 usec              441 usec               1,840 usec

User- vs. Kernel-Level Threads


User-level threads


Cheap to manage and to do context switch


A blocking system call blocks all peer threads



Kernel-level threads


A blocking system call will not block all peer
threads


Expensive to manage and to do context switch



Light-Weight Processes (LWP)


Support for hybrid (user-level and kernel) threads; Solaris is an example


A process contains several LWPs


In addition, the system provides user-level threads


Developer: creates multi-threaded applications


System: Maps threads to LWPs for
execution

Thread Implementation - LWP

Combining kernel-level lightweight processes and user-level threads

Thread Implementation - LWP


Each LWP offers a virtual CPU


LWPs are created by system calls


They all run the scheduler to schedule a thread


Thread table is kept in user space


Thread table is shared by all LWPs


LWPs switch context between threads


Thread Implementation - LWP


When a thread blocks, the LWP schedules another ready thread


Thread context switch is completely done
in user mode


When a thread blocks on a system call, execution mode changes from user to kernel but continues in the context of the current LWP


When current LWP can no longer execute,
context is switched to another LWP

Thread Implementation - LWP

Combining kernel-level lightweight processes and user-level threads

LWP Features



Cheap thread management


A blocking system call may not suspend
the whole process


LWPs are transparent to the application


LWPs can be easily mapped to different
CPUs


Managing LWPs is expensive (like kernel
threads)
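
A sketch of how a hybrid package exposes the thread-to-LWP mapping through the POSIX contention-scope attribute (assuming an M:N implementation such as classic Solaris; Linux supports only system scope, so the process-scope request fails there).

/* Sketch: unbound (library-scheduled) vs. bound (LWP-scheduled) threads
 * via POSIX contention scope.  Compile with: gcc scope.c -pthread */
#include <pthread.h>
#include <stdio.h>

static void *work(void *arg) { return NULL; }

int main(void)
{
    pthread_attr_t attr;
    pthread_t t;

    pthread_attr_init(&attr);

    /* Ask for an unbound thread: multiplexed onto LWPs by the library. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS) != 0)
        printf("process scope not supported here; library is 1:1\n");

    if (pthread_create(&t, &attr, work, NULL) != 0) {
        /* Fall back to a bound thread, scheduled directly by the kernel. */
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
        pthread_create(&t, &attr, work, NULL);
    }
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}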

A Short Survey


On awareness
of CSE Bits & Bytes