Multithreading and Synchronization in Win32 applications


Synchronization - Overview

Win32 provides synchronization mechanisms to coordinate the concurrent execution of threads and processes.

The following methods/objects are used to achieve synchronization:

Critical Section
– single process; not a kernel object

Mutexes
– mutually exclusive; like a critical section, but works across multiple processes

Semaphores
– keep track of resource usage by a number of threads
– when a thread acquires a resource, the count is decremented (Counter--)
– when a thread releases a resource, the count is incremented (Counter++)

Event Objects
– signaled or nonsignaled; Set – Reset events
– auto reset and manual reset

Resource: a memory location, data structure, file, etc.

(Diagram: Thread 1 and Thread 2 both accessing shared data in a buffer.)

Critical Sections

• Critical Section
– A portion of code that accesses a shared resource.
• Critical Section Object
– Provided by the Win32 API.
– Used to enforce mutual exclusion between threads within a single process.
– Only one thread at a time is allowed to be “inside” the critical section.
– A local object, not a kernel object.

• Defining a critical section object
– CRITICAL_SECTION critical_section;
• Initializing the critical section
– VOID InitializeCriticalSection(LPCRITICAL_SECTION lpCriticalSection);
• Deleting the critical section object
– VOID DeleteCriticalSection(LPCRITICAL_SECTION lpCriticalSection);


• Entering the critical section
– VOID EnterCriticalSection(LPCRITICAL_SECTION lpCriticalSection);
• Leaving the critical section
– VOID LeaveCriticalSection(LPCRITICAL_SECTION lpCriticalSection);

• Example: Linked list

typedef struct _Node {
    struct _Node *next;
    int data;
} Node;

typedef struct _List {
    Node *head;
    CRITICAL_SECTION Critical_sec;
} List;

List *CreateList()
{
    List *pList = malloc(sizeof(List));          /* allocate the list structure itself */
    pList->head = NULL;
    InitializeCriticalSection(&pList->Critical_sec);
    return pList;
}

void DeleteList(List *pList)
{
    DeleteCriticalSection(&pList->Critical_sec);
    free(pList);
}

void AddHead(List *pList, Node *node)
{
    EnterCriticalSection(&pList->Critical_sec);  /* only one thread at a time past this point */
    node->next = pList->head;
    pList->head = node;
    LeaveCriticalSection(&pList->Critical_sec);
}
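A minimal usage sketch (not from the original slides): two threads add nodes to the same shared list at the same time, and the critical section inside AddHead() keeps the insertions from corrupting the list. The thread routine name AddNodesThread and the count NUM_NODES are illustrative.

#include <windows.h>
#include <process.h>
#include <stdlib.h>

#define NUM_NODES 1000                            /* illustrative */

unsigned __stdcall AddNodesThread(void *param)
{
    List *pList = (List *)param;
    for (int i = 0; i < NUM_NODES; i++) {
        Node *node = malloc(sizeof(Node));
        node->data = i;
        AddHead(pList, node);                     /* serialized by the critical section */
    }
    return 0;
}

int main(void)
{
    List *pList = CreateList();
    HANDLE hThreads[2];

    hThreads[0] = (HANDLE)_beginthreadex(NULL, 0, AddNodesThread, pList, 0, NULL);
    hThreads[1] = (HANDLE)_beginthreadex(NULL, 0, AddNodesThread, pList, 0, NULL);

    WaitForMultipleObjects(2, hThreads, TRUE, INFINITE);
    CloseHandle(hThreads[0]);
    CloseHandle(hThreads[1]);

    /* every insertion was protected, so the list holds 2 * NUM_NODES nodes;
       the node memory itself is leaked here -- the sketch focuses on the locking */
    DeleteList(pList);
    return 0;
}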

• Example: Linked list – AddHead() (diagram: the new node becomes the head of the List and points to the previous head.)

void Insert(List *pList, Node *afterNode, Node *newNode)
{
    EnterCriticalSection(&pList->Critical_sec);
    if (afterNode == NULL) {
        AddHead(pList, newNode);   /* the owning thread may enter the critical section again */
    }
    else {
        newNode->next = afterNode->next;
        afterNode->next = newNode;
    }
    LeaveCriticalSection(&pList->Critical_sec);
}

Node *next(List *pList, Node *node)
{
    Node *nextNode;
    EnterCriticalSection(&pList->Critical_sec);
    nextNode = node->next;
    LeaveCriticalSection(&pList->Critical_sec);
    return nextNode;
}

• Example: Linked list – Insert() (diagram: newNode is linked into the chain immediately after afterNode.)

• Example: Linked list – next() (diagram: given a node in the List, next() returns the node that follows it.)

– Critical Section: the owning thread may enter the same critical section more than once, as long as each EnterCriticalSection() is matched by a LeaveCriticalSection():

    EnterCriticalSection(pCS)
        EnterCriticalSection(pCS)
        LeaveCriticalSection(pCS)
    LeaveCriticalSection(pCS)


• Minimizing Lock Time
– Don’t lock a resource for a long time!
– Never call Sleep() or any of the Wait…() APIs inside a critical section.
– Consider how often the resource will be used and how quickly a thread must release it.
• Avoiding Dangling Critical Sections
– What if a thread dies without calling LeaveCriticalSection()?
– A critical section is not a kernel object, so there is no way to tell whether the thread currently inside a critical section is still alive.

Deadlock

Example: SwapLists() – p.82

void SwapLists(List *list1, List *list2)
{
    Node *tmp_head;
    EnterCriticalSection(&list1->Critical_sec);
    /* a context switch here, with only list1 locked, opens the door to deadlock */
    EnterCriticalSection(&list2->Critical_sec);
    tmp_head = list1->head;
    list1->head = list2->head;
    list2->head = tmp_head;
    LeaveCriticalSection(&list1->Critical_sec);
    LeaveCriticalSection(&list2->Critical_sec);
}

Thread A: SwapLists(home_address_list, work_address_list);
Thread B: SwapLists(work_address_list, home_address_list);

(Diagram: tmp_list, List1, and List2, each pointing to a chain of nodes ending in NULL, as used by the two SwapLists() calls above.)

• Deadlock = “the deadly embrace”
• Deadlock can be avoided by acquiring all the resources you need as a single operation with WaitForMultipleObjects() (covered later, p.91).
• It is an all-or-nothing proposition: you obtain every lock at once, or none of them.

The Dining Philosophers

(Diagram: the table set for the dining philosophers; at any moment each philosopher is eating, thinking, or waiting to eat.)

Let’s see the Dining.c

Mutexes

• MUTual EXclusion
• Only one thread at a time is allowed to own a mutex, just as only one thread at a time can enter a critical section.
• A kernel object that enforces mutual exclusion between threads even if they are in different processes.

Mutexes vs. critical sections:

              Mutexes                           Critical Section
  Scope       Between processes                 Within the same process
  Timeout     Yes (see the Wait…() functions)   No
  Speed       Almost 100 times slower           Fast

The comparison of the functions:

  CRITICAL_SECTION                Mutex Kernel Object
  InitializeCriticalSection()     CreateMutex(), OpenMutex()
  EnterCriticalSection()          WaitForSingleObject(), WaitForMultipleObjects(), MsgWaitForMultipleObjects()
  LeaveCriticalSection()          ReleaseMutex()
  DeleteCriticalSection()         CloseHandle()

Creating a Mutex

HANDLE CreateMutex(
    LPSECURITY_ATTRIBUTES lpAttribute,
    BOOL bInitialOwner,
    LPCTSTR lpName
);

– lpAttribute: use NULL to get the default security attributes.
– bInitialOwner: set to TRUE if you want the thread that called CreateMutex() to own the mutex.
– lpName: any thread or process can refer to this mutex by name.

• A mutex has a reference count, so you should call CloseHandle():
– CreateMutex(): reference count++
– CloseHandle(): reference count--

Opening a Mutex

HANDLE OpenMutex(
    DWORD dwDesiredAccess,   // access
    BOOL bInheritHandle,     // inheritance option
    LPCTSTR lpName           // object name
);

– dwDesiredAccess: specifies the requested access to the mutex.
– bInheritHandle: specifies whether the returned handle is inheritable.
– lpName: the name used to access the mutex.
– A mutex has a reference count, so you should call CloseHandle().

• Locking a Mutex
– To acquire ownership of the mutex, use one of the Win32 Wait…() functions.
– If a thread waits on a nonsignaled mutex, the thread blocks.
– A mutex is signaled when no thread owns it.
– Scenario (textbook p.87 side effect)

BOOL ReleaseMutex( HANDLE hMutex );

– A thread releases ownership of the mutex by calling ReleaseMutex().
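A minimal create/wait/release/close sketch, assuming a hypothetical counter protected by a named mutex; the name "MyCounterMutex" and the function IncrementShared() are illustrative, not from the slides.

#include <windows.h>

LONG g_sharedCounter = 0;                 /* the protected resource (illustrative) */
HANDLE g_hMutex;                          /* created once at startup */

void IncrementShared(void)
{
    /* block until the mutex is signaled (unowned), then take ownership */
    WaitForSingleObject(g_hMutex, INFINITE);
    g_sharedCounter++;                    /* only one thread at a time gets here */
    ReleaseMutex(g_hMutex);               /* give up ownership */
}

int main(void)
{
    /* default security, no initial owner, a name visible to other processes */
    g_hMutex = CreateMutex(NULL, FALSE, TEXT("MyCounterMutex"));
    if (g_hMutex == NULL)
        return 1;

    IncrementShared();

    CloseHandle(g_hMutex);                /* drop our reference on the kernel object */
    return 0;
}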

• Handling Abandoned Mutexes
– If a thread that owns a mutex exits without calling ReleaseMutex(), the mutex is not destroyed.
– The mutex is left unowned and is marked abandoned.
– Wait…() then returns WAIT_ABANDONED_0 (through WAIT_ABANDONED_0 + n - 1 when waiting on n handles with WaitForMultipleObjects()).

• Letting the Philosophers Eat

– Source
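A minimal sketch of the philosopher loop (not the book's Dining.c): each philosopher picks up both chopstick mutexes in a single WaitForMultipleObjects() call, so no philosopher ever sits holding one chopstick while waiting for the other. The names Philosopher, NUM_PHILOSOPHERS, and NUM_ROUNDS are illustrative.

#include <windows.h>
#include <process.h>

#define NUM_PHILOSOPHERS 5
#define NUM_ROUNDS 10

HANDLE g_hChopstick[NUM_PHILOSOPHERS];    /* one mutex per chopstick */

unsigned __stdcall Philosopher(void *param)
{
    int i = (int)(INT_PTR)param;
    HANDLE hBoth[2];

    hBoth[0] = g_hChopstick[i];
    hBoth[1] = g_hChopstick[(i + 1) % NUM_PHILOSOPHERS];

    for (int round = 0; round < NUM_ROUNDS; round++) {
        Sleep(50);                                        /* think */
        /* grab both chopsticks atomically: all or nothing */
        WaitForMultipleObjects(2, hBoth, TRUE, INFINITE);
        Sleep(50);                                        /* eat */
        ReleaseMutex(hBoth[0]);
        ReleaseMutex(hBoth[1]);
    }
    return 0;
}

int main(void)
{
    HANDLE hThreads[NUM_PHILOSOPHERS];
    int i;

    for (i = 0; i < NUM_PHILOSOPHERS; i++)
        g_hChopstick[i] = CreateMutex(NULL, FALSE, NULL);
    for (i = 0; i < NUM_PHILOSOPHERS; i++)
        hThreads[i] = (HANDLE)_beginthreadex(NULL, 0, Philosopher,
                                             (void *)(INT_PTR)i, 0, NULL);

    WaitForMultipleObjects(NUM_PHILOSOPHERS, hThreads, TRUE, INFINITE);
    for (i = 0; i < NUM_PHILOSOPHERS; i++) {
        CloseHandle(hThreads[i]);
        CloseHandle(g_hChopstick[i]);
    }
    return 0;
}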

• Fixing SwapLists

/* assumes each List now carries a HANDLE hMutex instead of a CRITICAL_SECTION */
void SwapLists(List *list1, List *list2)
{
    Node *tmp_head;
    HANDLE arrHandles[2];

    arrHandles[0] = list1->hMutex;
    arrHandles[1] = list2->hMutex;
    /* acquire both mutexes as one all-or-nothing operation */
    WaitForMultipleObjects(2, arrHandles, TRUE, INFINITE);
    tmp_head = list1->head;
    list1->head = list2->head;
    list2->head = tmp_head;
    ReleaseMutex(arrHandles[0]);
    ReleaseMutex(arrHandles[1]);
}

Why Have an Initial Owner?

– The bInitialOwner parameter of CreateMutex().
– It prevents a race condition: with the two-step version below, another thread or process could acquire the mutex in the gap between CreateMutex() and WaitForSingleObject().

HANDLE hMutex = CreateMutex(NULL, FALSE, “Simple Name”);   // << another thread can grab the mutex here
DWORD result = WaitForSingleObject(hMutex, INFINITE);
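A minimal sketch of the atomic alternative, assuming the creating thread should start out owning the mutex; the name "Simple Name" is carried over from the snippet above.

/* create the mutex and take ownership of it in one atomic step */
HANDLE hMutex = CreateMutex(NULL, TRUE, TEXT("Simple Name"));

if (hMutex != NULL && GetLastError() == ERROR_ALREADY_EXISTS) {
    /* the mutex already existed, so bInitialOwner was ignored:
       we still have to wait for ownership explicitly */
    WaitForSingleObject(hMutex, INFINITE);
}

/* ... use the protected resource ... */
ReleaseMutex(hMutex);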

– Critical Section (the owning thread may re-enter it)

    EnterCriticalSection(pCS)
        EnterCriticalSection(pCS)
        LeaveCriticalSection(pCS)
    LeaveCriticalSection(pCS)

– Mutex (the owning thread may likewise wait on it again without blocking)

    WaitForSingleObject(hMtx)
        WaitForSingleObject(hMtx)
        ReleaseMutex(hMtx)
    ReleaseMutex(hMtx)

Semaphores

• They are the key ingredient in solving various producer/consumer problems, where a buffer is being read and written at the same time.
• A semaphore in Win32 may be locked at most n times, where n is specified when the semaphore is created.
• Example: the car-rental problem.

• Creating Semaphores

HANDLE CreateSemaphore(
    LPSECURITY_ATTRIBUTES lpAttribute,
    LONG InitialCount,
    LONG MaximumCount,
    LPCTSTR lpName
);

– lpAttribute: use NULL to get the default security attributes.
– InitialCount: specifies an initial count for the semaphore object (0 <= InitialCount <= MaximumCount).
– MaximumCount: specifies the maximum count for the semaphore object.
– lpName: any thread or process can refer to this semaphore by name.

• Acquiring Locks
– The value of the semaphore represents the number of resources currently available.
– You acquire a lock on a semaphore by using any of the Wait…() functions; each successful wait decrements the value.
– There is no concept of ownership (a semaphore has no owner):
  • several threads can lock a semaphore at the same time;
  • a semaphore can be released by any thread.

• Releasing Locks
– To release a lock, call ReleaseSemaphore().
– This function increments the semaphore’s value.

BOOL ReleaseSemaphore(
    HANDLE hSemaphore,
    LONG lReleaseCount,
    LPLONG lpPreviousCount
);

– hSemaphore: handle to the semaphore.
– lReleaseCount: amount by which to increment the value of the semaphore.
– lpPreviousCount: returns the previous value of the semaphore. Note that there is no way to get the current value.
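A minimal sketch of the car-rental idea mentioned above (the names NUM_CARS and RentCarAndDrive() are illustrative): the semaphore's count is the number of cars still on the lot, renting waits on the semaphore, and returning the car releases one count.

#include <windows.h>

#define NUM_CARS 3                        /* illustrative fleet size */

HANDLE g_hCarSemaphore;

void RentCarAndDrive(void)
{
    /* blocks while the count is zero, i.e. while no car is available */
    WaitForSingleObject(g_hCarSemaphore, INFINITE);

    /* ... drive the rented car ... */

    /* return the car: add 1 back to the semaphore's count */
    ReleaseSemaphore(g_hCarSemaphore, 1, NULL);
}

int main(void)
{
    /* initial count == maximum count: every car starts out available */
    g_hCarSemaphore = CreateSemaphore(NULL, NUM_CARS, NUM_CARS, NULL);
    if (g_hCarSemaphore == NULL)
        return 1;

    RentCarAndDrive();                    /* in a real program, many threads would call this */

    CloseHandle(g_hCarSemaphore);
    return 0;
}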

• Why Have an Initial Count?
– It is the initial value of the semaphore.
– Must be greater than or equal to zero and less than or equal to MaximumCount.
– ReleaseSemaphore() can increment the count up to the maximum, so starting below the maximum leaves room to “add” resources later.

Event Objects

• The most flexible type of synchronization object in Win32: a kernel object that is either signaled or nonsignaled.

HANDLE CreateEvent(
    LPSECURITY_ATTRIBUTES lpEventAttributes,
    BOOL bManualReset,    // TRUE = manual reset, FALSE = auto reset
    BOOL bInitialState,   // initial state; TRUE = signaled, FALSE = nonsignaled
    LPCTSTR lpName        // object name
);

SetEvent()
– A “memory” operation: the signaled state persists until the event is reset.
– Sets the event object to the signaled state.

ResetEvent()
– Sets the event object to the nonsignaled state.

PulseEvent()
– A memoryless operation: if nothing is waiting, nothing is remembered.
– Manual reset event: set the event object to the signaled state, wake everything that is currently waiting, then return to the nonsignaled state.
– Auto reset event: set the event object to the signaled state, wake up a single waiting thread (if any), then return to the nonsignaled state.

Manual Reset Event Object: Set/Reset
1. Two threads are waiting for an event object (EO).
2. SetEvent(EO) will activate all threads waiting for the EO.
3. The EO returns to the nonsignaled state only after ResetEvent() is called.

Auto Reset Event Object: Set
1. Two threads are waiting for an event object (EO).
2. SetEvent(EO) will activate only one thread waiting for the EO, then the EO returns to the nonsignaled state.
3. If no threads are waiting, the event object's state remains signaled
4. until a thread waits on it (which resets it) or ResetEvent() is called.

Manual Reset Event Object: Pulse
1. Assume two threads are waiting for an event object (EO).
2. PulseEvent(EO) will activate all threads waiting for the EO, then the EO returns to the nonsignaled state.
3. If no threads are waiting, the event object's state remains nonsignaled.

Auto Reset Event Object: Pulse
1. Assume two threads are waiting for an event object (EO).
2. PulseEvent(EO) will activate a single thread waiting for the EO, then the EO returns to the nonsignaled state.
3. If no threads are waiting, the event object's state remains nonsignaled.
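A minimal sketch, assuming a manual-reset “start” event used as a starting gun for several worker threads; the names g_hStartEvent and Worker are illustrative, not from the slides.

#include <windows.h>
#include <process.h>

HANDLE g_hStartEvent;                    /* manual-reset event, initially nonsignaled */

unsigned __stdcall Worker(void *param)
{
    /* every worker blocks here until the event becomes signaled */
    WaitForSingleObject(g_hStartEvent, INFINITE);
    /* ... do the work ... */
    return 0;
}

int main(void)
{
    HANDLE hThreads[3];

    /* bManualReset = TRUE, bInitialState = FALSE (nonsignaled) */
    g_hStartEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

    for (int i = 0; i < 3; i++)
        hThreads[i] = (HANDLE)_beginthreadex(NULL, 0, Worker, NULL, 0, NULL);

    /* signal the manual-reset event: all waiting workers are released,
       and the event stays signaled until ResetEvent() is called */
    SetEvent(g_hStartEvent);

    WaitForMultipleObjects(3, hThreads, TRUE, INFINITE);
    for (int i = 0; i < 3; i++)
        CloseHandle(hThreads[i]);
    CloseHandle(g_hStartEvent);
    return 0;
}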

Event Objects: Summary

• If threads are waiting:

                 SetEvent()                           ResetEvent()   PulseEvent()
  Auto Reset     Activate one, then nonsignaled       ---            Activate one, then nonsignaled
  Manual Reset   Activate all, then stay signaled     Nonsignaled    Activate all, then nonsignaled

• If no threads are waiting:

                 SetEvent()                           ResetEvent()   PulseEvent()
  Auto Reset     Stay signaled until a wait function  ---            Stay nonsignaled
  Manual Reset   Stay signaled                        Nonsignaled    Stay nonsignaled


Summary & Questions

• PulseEvent() is a memoryless operation.
• SetEvent() is a “memory” operation.
• Manual/auto reset:
– No threads are waiting, and the current state is signaled. What happens when PulseEvent() is called?

Interlocked Variables

• The simplest type of synchronization mechanism.
• Operate on a standard 32-bit LONG variable.
• There is no ability to wait.
• InterlockedIncrement(LPLONG lpTarget)
• InterlockedDecrement(LPLONG lpTarget)
– These functions return a value indicating how the result compares against zero (less than, equal to, or greater than).
– Use these functions to maintain your object reference counts (e.g. AddRef(), Release()).
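A minimal reference-counting sketch using interlocked operations; the MyObject type and functions are illustrative, not from the slides.

#include <windows.h>
#include <stdlib.h>

typedef struct {
    LONG refCount;
    /* ... object data ... */
} MyObject;

MyObject *MyObject_Create(void)
{
    MyObject *obj = malloc(sizeof(MyObject));
    obj->refCount = 1;                       /* the creator holds the first reference */
    return obj;
}

void MyObject_AddRef(MyObject *obj)
{
    InterlockedIncrement(&obj->refCount);    /* atomic, no lock needed */
}

void MyObject_Release(MyObject *obj)
{
    /* when the decremented count compares equal to zero,
       nothing references the object any more and it can be freed */
    if (InterlockedDecrement(&obj->refCount) == 0)
        free(obj);
}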

Summary of Synchronization Mechanisms

• Critical Section
– Is a local object, not a kernel object
– Is fast and efficient
– Cannot be waited on more than one at a time (no WaitForMultipleObjects())
– Cannot tell you whether it was abandoned by a thread

• Mutex
– Is a kernel object
– Generates an “abandoned” error if the owning thread terminates without releasing it
– Can be used in Wait…() calls
– Is named, and can be opened across processes
– Can only be released by the thread that owns it

• Semaphore
– Is a kernel object
– Has no owner
– Is named
– Can be released by any thread

• Event Object
– Is a kernel object
– Is completely under program control
– Is good for designing new synchronization objects
– Does not queue up wake-up requests
– Is named

• Interlocked Variable
– Useful for reference counting
