Multithreading

 

 

Multithreading allows your scripts and drivers to perform background processing or monitoring operations.

 

Surgeon General's Warning on Multithreading

Multithreading is a very advanced programming topic, with many synchronization problems, conundrums, techniques and theories that are way beyond the scope of this topic.

 

This is meant as a lightweight introduction to the possibilities of multithreading and a few very basic techniques.  As a casual developer, you will want to ensure that your mission-critical Stardraw Control application functions predictably.

 

Other Resources

Someone in the Support Forums or a Certified Programmer may be able to help.

 

About Multithreading

At its simplest, multithreading allows a program to perform several operations simultaneously.  Strictly speaking, on most desktop computers nothing really happens simultaneously: it's just that the processor is fast enough to give that illusion.

 

Threads

In a program executable, actions, methods and events execute on threads.  A thread can only perform one operation at a time; however, a program can have multiple threads all doing different tasks.

 

An example is an auto-save feature in an application: rather than interrupting the user and pausing the application whilst the backup files are written to disk, a new thread can be launched which performs the save operation in the background.

 

This means that the auto-save operation goes almost unnoticed by the user: their interaction with the application is not affected by the background thread.  Once the background thread completes the auto-save, it has finished its work, and simply terminates.

 

Other types of threads may continue to process information in the background throughout the lifetime of the application, performing tasks such as processing event schedules, communicating with networks or other programs, or monitoring ports or devices.

 

Uses for Threads

Creating extra threads allows your program to perform asynchronous operations.  (An asynchronous operation executes independently from the action which started it; the application can continue performing other tasks while the asynchronous operation does its work.)  Some practical uses for threads are described in the sections below.

 

Starting and Stopping Threads

The Start and Stop a Thread pattern shows how to start and stop threads in your port script.  It relies on the port script's base class being either TcpInPortInstance or SerialInPortInstance.
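
 

Independently of that pattern, the general shape of a start/stop arrangement in C# might look something like the sketch below.  The class and member names here are purely illustrative and are not taken from the pattern itself.

 

using System;
using System.Threading;

public class BackgroundWorkerSketch
{
   private Thread workerThread;
   private volatile bool keepRunning;       // volatile so the worker sees the flag change promptly

   // Start the background thread, for example when the port starts up.
   public void StartWorker()
   {
      keepRunning = true;
      workerThread = new Thread( WorkerLoop );
      workerThread.IsBackground = true;     // don't keep the process alive at shutdown
      workerThread.Start();
   }

   // Ask the background thread to stop, and wait briefly for it to finish.
   public void StopWorker()
   {
      keepRunning = false;
      if ( workerThread != null )
      {
         workerThread.Join( 1000 );         // wait up to one second for a clean exit
      }
   }

   // This method runs on the background thread.
   private void WorkerLoop()
   {
      while ( keepRunning )
      {
         // ... perform the background task here ...
         Thread.Sleep( 100 );
      }
   }
}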

 

Interaction between Threads

Usually threads exist for some purpose that is related to the rest of the application, and at some point need to interact with other threads in the program.  This might be to alert the user that an event has occurred, or to invoke a series of actions, or to simply update a piece of data somewhere.

 

In .Net classes, threads can communicate state information by accessing any member variable in the class.  This is a very easy way to initialize a value or prepare a data buffer in one thread and store it in a private variable, which can then be accessed by another thread, perhaps some time later.

 

This shared memory feature, whilst sounding straightforward, throws up the big gotcha when it comes to multithreading: that is, how can one thread be sure that its changes to shared memory aren't overwritten by another thread?

 

Take the following example functions:

 

 

private int x = 0;

private void TaskA()
{
   x = 5;
   this.Logger.Debug( "The value of x in TaskA is {0}", x );
   x = 1;
}

private void TaskB()
{
   x = 10;
   this.Logger.Debug( "The value of x in TaskB is {0}", x );
   x = 0;
}

 

 

 

Variable x is initialized to zero, and can be accessed by functions TaskA and TaskB.

If TaskA and TaskB are executed on different threads, they will run "concurrently".
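
 

To make the example concrete, here is one way the two functions might be started on separate threads.  This fragment is illustrative only; it would live in a method of the same class and assumes a using System.Threading; directive.

 

// Illustrative only: run TaskA and TaskB concurrently on two threads.
Thread threadA = new Thread( TaskA );
Thread threadB = new Thread( TaskB );

threadA.Start();
threadB.Start();

// Wait for both threads to finish before continuing.
threadA.Join();
threadB.Join();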

 

Surely we'd get the result shown below, right?  In short, no.  Or at least, only maybe.  Each individual operation within a thread may be temporarily interrupted by the processor in order to give some time to another thread.

 

 

The value of x in TaskA is 5

The value of x in TaskB is 10

 

 

The problem highlighted above is that the precise sharing of processor time between TaskA and TaskB cannot be predicted, so one scenario might be:

  1. TaskB begins first and sets x to 10,

  2. TaskA executes, sets x to 5 thus overwriting the value,

  3. TaskB outputs its message; x is no longer 10, but 5,

  4. TaskB continues executing ahead of TaskA, and sets x to 0,

  5. TaskA outputs its message; x is certainly not 5, but 0.

 

Many other combinations could occur; the point is that the results are unpredictable:

 

 

The value of x in TaskB is 5

The value of x in TaskA is 0

 

 

 

Not only can we not predict which task will output its message first, but we can't even guarantee that the data value has any integrity.  Not good!

 

Whilst multithreading has its obvious benefits, it is a technology that is inherently non-deterministic.  Thankfully, we can solve this and similar (actually, worse) problems using the techniques of thread synchronization.

 

Synchronization Objects

Synchronization is a set of techniques that aims to make multithreaded programs deterministic, or at least to make their behavior predictable and controllable.

 

In order to make the interaction between threads deterministic, threads must wait their turn before accessing shared data or performing operations that use it.  A waiting thread consumes no real processor time until the condition it is waiting for is met, at which point it continues execution.

 

Clearly a thread can't do any work whilst it's waiting, but in the normal course of multithreaded programs these wait times are usually over in the blink of an eye, giving the illusion that everything happens simultaneously.

 

Object Lock

The simplest type of control over a thread is the object lock.  An object lock is like a room that only one person can be in at a time.  If a person enters the empty room, a second person must wait for the first to leave, no matter how long that takes.  As soon as the person leaves the room, the second person can enter it.

 

The Lock an Object pattern shows how to use an object lock to control two (or more) functions that require shared access to the same data.
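
 

Applied to the earlier TaskA/TaskB example, the idea might look something like the sketch below, where the lock keyword and a private object act as the "room".  This is only an illustration of the technique, not the pattern itself.

 

private readonly object xLock = new object();   // the "room" that only one thread may enter at a time
private int x = 0;

private void TaskA()
{
   lock ( xLock )
   {
      // No other thread can enter a lock( xLock ) block until this one exits,
      // so TaskB cannot change x part-way through these statements.
      x = 5;
      this.Logger.Debug( "The value of x in TaskA is {0}", x );
      x = 1;
   }
}

private void TaskB()
{
   lock ( xLock )
   {
      x = 10;
      this.Logger.Debug( "The value of x in TaskB is {0}", x );
      x = 0;
   }
}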

 

AutoResetEvent, again

Sometimes it may be better for a thread to sit and wait without consuming processor time.  For example, a thread may need to send a heartbeat message once every 10 seconds.  The message may only take a fraction of a second to prepare and send; the thread can then sit idle for the remainder of the 10 seconds until the next heartbeat is due.

 

Or, a waiting thread may need to be signalled from another thread to wake up and perform an action.  For example, when a message is received from a serial port, a background waiting thread could be signalled that a new message is ready for processing.

 

The AutoResetEvent object allows us to achieve both of these scenarios.  It can be used by a thread to pause until a method on a separate thread signals it to wake up, or it can be used to wait for an amount of time before continuing.  In both cases, the waiting thread is idle and consumes no processor time.
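
 

As a rough sketch of the heartbeat scenario described above, a thread can wait on an AutoResetEvent with a timeout so that it sits idle between messages.  The field name and the SendHeartbeat method here are hypothetical, and a using System.Threading; directive is assumed.

 

private AutoResetEvent stopSignal = new AutoResetEvent( false );

private void HeartbeatLoop()
{
   // WaitOne returns false each time the 10 second timeout expires,
   // and true when another thread calls stopSignal.Set() to end the loop.
   while ( !stopSignal.WaitOne( 10000 ) )
   {
      SendHeartbeat();   // hypothetical method that prepares and sends the heartbeat message
   }
}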

 

The Wait for AutoResetEvent pattern shows how a method executing on one thread can wait for another thread to signal it.  It also shows how a timeout period can be used to pause a thread for an amount of time.
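
 

A minimal sketch of the signalling half of that idea is shown below; the field and method names are illustrative, not taken from the pattern.  One thread blocks in WaitOne until another thread calls Set.  In a real script the shared latestMessage field would also need protecting with the lock technique shown earlier.

 

private AutoResetEvent messageReady = new AutoResetEvent( false );
private string latestMessage;

// Runs on a background thread: sits idle until another thread signals
// that a new message is ready, consuming no processor time while waiting.
private void ProcessingLoop()
{
   while ( true )
   {
      messageReady.WaitOne();         // block until Set() is called
      this.Logger.Debug( "Processing message: {0}", latestMessage );
   }
}

// Called from the thread that receives the data, for example a serial port handler.
private void OnMessageReceived( string message )
{
   latestMessage = message;
   messageReady.Set();                // wake the waiting thread
}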

 

See Also

Design Patterns Index

Support