March 26, 2011, 7:29 a.m.
posted by tekkero
Running a new thread is a relatively simple programming task. What makes multithreaded programming difficult, however, is recognizing the data that multiple threads could access simultaneously. The program needs to synchronize such data, the state, in order to prevent simultaneous access. Consider Listing 15.7.
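The listing itself is not reproduced in this post, but a minimal sketch of the kind of unsynchronized code it contains might look like this (the _Total constant, its value, and the class layout are assumptions, not the book's exact code):

```csharp
using System;
using System.Threading;

class Program
{
    const int _Total = 1000000;   // iteration count is illustrative
    static long _Count = 0;

    static void Main()
    {
        // Run Decrement() on a separate thread while Main() increments.
        Thread thread = new Thread(Decrement);
        thread.Start();

        // Increment the shared counter with no synchronization.
        for (int i = 0; i < _Total; i++)
        {
            _Count++;
        }

        thread.Join();
        Console.WriteLine("Count = {0}", _Count);
    }

    static void Decrement()
    {
        // Decrement the shared counter, racing with Main().
        for (int i = 0; i < _Total; i++)
        {
            _Count--;
        }
    }
}
```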
The output is not 0, as it would have been had Decrement() been called directly rather than on a separate thread. Instead, a race condition is introduced because the _Count++ and _Count-- statements can interrupt each other. Although in C# these statements appear to be single operations, each takes three steps, as discussed earlier in this chapter.
The problem with this is that a thread context switch could take place during any of these steps. Consider the sample execution in Figure.
Figure shows a thread context switch by the transition of instructions appearing from one column to the other. The value of _Count after a particular line has completed appears in the last column. In this sample execution, _Count++ executes twice and _Count-- occurs once. However, the resulting _Count value is 0, not 1. Copying a result back to _Count essentially wipes out any _Count value changes that occurred since the read of _Count on the same thread.
The problem in Listing 15.7 is a race condition, which occurs when multiple threads have simultaneous access to the same data elements. As this sample execution demonstrates, simultaneous access to data by multiple threads undermines data integrity, even on a single-processor computer. To remedy this, the code needs synchronization around the data (state). Code or data that is appropriately synchronized for simultaneous access by multiple threads is known as thread safe.
There is one important point to note about the atomicity of reading and writing variables. The runtime guarantees that a type whose size is no bigger than a native integer will not be read or written partially. Assuming a 32-bit operating system, therefore, reads and writes of an int (System.Int32) will be atomic. However, reads and writes of a long (System.Int64), for example, are not guaranteed to be atomic. Therefore, a write that changes a long variable may be interrupted after copying only 32 bits, allowing another thread to read an incorrect, "torn" value.
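A hedged sketch of how torn reads of a long can be avoided with the Interlocked class (the class and member names here are illustrative, not from the book's listings):

```csharp
using System.Threading;

class WideValue
{
    // 64-bit field: plain reads/writes are not atomic on a 32-bit platform.
    static long _Value;

    public static void Write(long newValue)
    {
        // Interlocked.Exchange writes all 64 bits as one atomic operation.
        Interlocked.Exchange(ref _Value, newValue);
    }

    public static long Read()
    {
        // Interlocked.Read returns the full 64-bit value atomically,
        // so a reader can never observe two halves from different writes.
        return Interlocked.Read(ref _Value);
    }
}
```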
Synchronization Using Monitor
To synchronize two threads so that they cannot execute particular sections of code simultaneously, you need a monitor to block the second thread from entering a protected code section until the first thread has exited that particular section. The monitor functionality is part of a class called System.Threading.Monitor, and the beginning and end of protected code sections are marked with calls to the static methods Monitor.Enter() and Monitor.Exit(), respectively.
Listing 15.8 demonstrates synchronization using the Monitor class explicitly. As this listing shows, it is important that all code between calls to Monitor.Enter() and Monitor.Exit() be surrounded with a try/finally block. Without this, an exception could occur within the protected section and Monitor.Exit() may never be called, thereby blocking other threads indefinitely.
Synchronizing with a Monitor Explicitly
Note that Monitor.Enter() and Monitor.Exit() are associated with each other by sharing the same object reference passed as the parameter (in this case, _Sync).
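A minimal sketch of the Monitor pattern the listing demonstrates, with the try/finally block in place (class and method names are assumptions):

```csharp
using System.Threading;

class SafeCounter
{
    static readonly object _Sync = new object();
    static int _Count = 0;

    public static void Increment()
    {
        Monitor.Enter(_Sync);
        try
        {
            // Protected section: only one thread at a time executes this.
            _Count++;
        }
        finally
        {
            // Guarantee the lock is released even if an exception occurs,
            // so other threads are not blocked indefinitely.
            Monitor.Exit(_Sync);
        }
    }
}
```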
Using the lock Keyword
Because of the frequent need for synchronization using Monitor in multithreaded code, and the fact that the try/finally block could easily be forgotten, C# provides a special keyword to handle this locking synchronization pattern. Listing 15.9 demonstrates the use of the new lock keyword, and Output 15.8 shows the results.
Listing 15.9. Synchronizing Using the lock Keyword
By locking the section of code accessing _Count (using either lock or Monitor), you are making the Main() and Decrement() methods thread safe, meaning they can be safely called from multiple threads simultaneously.
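A sketch of the equivalent lock-based version (again with illustrative names); the compiler expands the lock statement into the Monitor.Enter()/try/finally/Monitor.Exit() pattern shown earlier:

```csharp
using System.Threading;

class SafeCounter
{
    static readonly object _Sync = new object();
    static int _Count = 0;

    public static void Decrement()
    {
        // lock acquires _Sync on entry and releases it on exit,
        // even if the body throws an exception.
        lock (_Sync)
        {
            _Count--;
        }
    }
}
```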
Synchronization does not come without a cost. First, synchronization has an impact on performance. Listing 15.9, for example, takes an order of magnitude longer to execute than Listing 15.7 does, which demonstrates lock's relatively slow execution compared with that of incrementing and decrementing the count.
Even when the cost of lock is insignificant in comparison with the work it synchronizes, programmers should avoid adding synchronization indiscriminately, so as to avoid the complexities of deadlocks and unnecessary constraints on multiprocessor computers. The general best practice for object design is to synchronize static state and to leave instance data unsynchronized. Programmers who allow multiple threads to access a particular object must provide their own synchronization for the object. Any class that explicitly deals with threads itself, however, is likely to want to make its instances thread safe to some extent.
Choosing a lock Object
Regardless of whether using the lock keyword or the Monitor class explicitly, it is crucial that programmers carefully select the lock object.
In the previous examples, the synchronization variable, _Sync, is declared as both private and read only. It is declared as read only to ensure that the value is not changed between calls to Monitor.Enter() and Monitor.Exit(). This is important because there would otherwise be no correlation between the entering and exiting of the synchronized block of code.
Similarly, the code declares _Sync as private so that no other synchronization block outside the class can synchronize on the same object instance, thereby inappropriately causing the code to block.
If the data is public, then the synchronization object could be public so that other classes can synchronize using the same synchronization object instance. The problem is that this makes deadlock avoidance more difficult. Fortunately, the need for this pattern occurs rarely. For public data, it is preferable to leave synchronization entirely outside the class, allowing the calling code to take locks with its own synchronization object.
One more important factor is that the synchronization object cannot be a value type. If the lock keyword is used on a value type, then the compiler will report an error. (In the case of the System.Threading.Monitor class, however, no such error will occur at compile time. Instead, the code will throw an exception with the call to Monitor.Exit(), indicating there was no corresponding Monitor.Enter() call.) The issue is that when using a value type, the runtime makes a copy of the value, places it in the heap (boxing occurs), and passes the boxed value to Monitor.Enter(). Similarly, Monitor.Exit() receives a boxed copy of the original variable. The result is that Monitor.Enter() and Monitor.Exit() receive different synchronization object instances so that no correlation between the two calls occurs.
Why to Avoid Locking on this and typeof(type)
One common pattern is to lock on the this keyword for instance data in a class, and on the type instance obtained from typeof(type) (for example, typeof(MyType)) for static data. Such a pattern provides a synchronization target for all state associated with a particular object instance when this is used, and for all static data of a type when typeof(type) is used. The problem is that the object that this (or typeof(type)) refers to could also serve as the synchronization target for an entirely different synchronization block in an entirely unrelated block of code. In other words, although only the code within the instance itself can lock using the this keyword, the caller that created the instance can still pass that instance into a synchronization lock.
The result is that two different synchronization blocks that synchronize two entirely different sets of data could potentially block each other. Although perhaps unlikely, sharing the same synchronization target could have an unintended performance impact and, in extreme cases, even cause a deadlock. Instead of locking on this or even typeof(type), it is better to define a private, read-only field on which no one will block, except for the class that has access to it.
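A sketch of the recommended pattern, with private read-only fields in place of this and typeof(type) (the class and field names are illustrative):

```csharp
class MyType
{
    // Private, read-only targets that no code outside this class can lock on.
    readonly object _InstanceSync = new object();
    static readonly object _StaticSync = new object();

    int _InstanceCount;
    static int _StaticCount;

    public void IncrementInstance()
    {
        lock (_InstanceSync)      // instead of lock(this)
        {
            _InstanceCount++;
        }
    }

    public static void IncrementStatic()
    {
        lock (_StaticSync)        // instead of lock(typeof(MyType))
        {
            _StaticCount++;
        }
    }
}
```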
Declaring Fields as volatile
On occasion, the compiler may optimize code in such a way that the instructions don't occur in the exact order they are coded, or some instructions are optimized out. Such optimizations are generally innocuous when code executes on one thread. However, with multiple threads, such optimizations may have unintended consequences because the optimizations may change the order of execution of a field's read or write operations relative to an alternate thread's access to the same field.
One way to stabilize this is to declare fields using the volatile keyword. This keyword forces all reads and writes to the volatile field to occur at the exact location the code identifies instead of at some other location that the optimization produces. The volatile modifier identifies that the field is susceptible to modification by the hardware, operating system, or another thread. As such, the data is "volatile," and the keyword instructs the compiler and runtime to handle it more exactly.
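A common illustration of the keyword is a stop flag polled in a loop; without volatile, the optimizer could hoist the read out of the loop. This sketch and its names are assumptions, not from the book's listings:

```csharp
class Worker
{
    // volatile: every read and write of _Stop happens exactly where coded,
    // so a write from another thread is observed by the polling loop.
    volatile bool _Stop;

    public void Run()
    {
        while (!_Stop)
        {
            // ... do a unit of work ...
        }
    }

    public void Stop()
    {
        _Stop = true;   // signal Run() (on another thread) to exit
    }
}
```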
Using the System.Threading.Interlocked Class
Within a particular process, you have all the necessary tools for handling synchronization. However, synchronization with System.Threading.Monitor is a relatively expensive operation and there is an alternative solution, generally supported by the processor directly, that targets specific synchronization patterns.
Listing 15.10 sets _Data to a new value as long as the preceding value was null. As indicated by the method name, this pattern is the Compare/Exchange pattern. Instead of manually placing a lock around behaviorally equivalent compare and exchange code, the Interlocked.CompareExchange() method provides a built-in method for a synchronous operation that does the same check for a value (null) and swaps the first two parameters if the value is equal. Figure shows other synchronization methods supported by Interlocked.
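A minimal sketch of the Compare/Exchange pattern the listing demonstrates (the _Data field and method name are illustrative):

```csharp
using System.Threading;

class DataInit
{
    static object _Data;

    public static object GetOrCreate()
    {
        // If _Data still equals null (the third argument), atomically
        // replace it with the new object; otherwise leave it unchanged.
        Interlocked.CompareExchange(ref _Data, new object(), null);
        return _Data;
    }
}
```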
Listing 15.10. Synchronizing Using System.Threading.Interlocked
Most of these methods are overloaded with additional data type signatures, such as support for long. Figure provides the general signatures and descriptions. Note, for example, that the threading namespace did not include generic method signatures until C# 2.0, although earlier versions did include nongeneric equivalents.
Note that Increment() and Decrement() can be used in place of the synchronized ++ and -- operators from Listing 15.9, and doing so will yield better performance. Also note that if a different thread accessed the location using a noninterlocked method, then the two accesses would not be synchronized correctly.
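A sketch of that substitution, with the lock statement replaced by atomic Interlocked calls (class and member names are assumptions):

```csharp
using System.Threading;

class InterlockedCounter
{
    static int _Count;

    public static void Increment()
    {
        // Atomic increment; no lock statement or _Sync object required.
        Interlocked.Increment(ref _Count);
    }

    public static void Decrement()
    {
        // Atomic decrement, correctly synchronized with Increment() above.
        Interlocked.Decrement(ref _Count);
    }
}
```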
Event Notification with Multiple Threads
One area where developers often overlook synchronization is when firing events. Thread-unsafe code for publishing an event is similar to Listing 15.11.
Firing an Event Notification
This code is valid when it appears in an instance method that multiple threads do not access. However, when multiple threads may access it, the code is not atomic. It is possible that between the time when OnTemperatureChange is checked for null and the event is actually fired, OnTemperatureChange could be set to null, thereby throwing a NullReferenceException. In other words, if multiple threads could possibly access a delegate simultaneously, it is necessary to synchronize the assignment and firing of the delegate.
Fortunately, the operators for adding and removing listeners are thread safe and static (operator overloading is done with static methods). To correct Listing 15.11 and make it thread safe, assign a copy, check the copy for null, and fire the copy (see Listing 15.12).
Thread-Safe Event Notification
Given that a delegate is a reference type, it is perhaps surprising that assigning a local variable and then firing with the local variable is sufficient for making the null check thread safe. Since localOnChange points to the same location that OnTemperatureChange points to, one would think that any changes in OnTemperatureChange would be reflected in localOnChange as well.
However, this is not the case because any calls to OnTemperatureChange += <listener> will not add a new delegate to OnTemperatureChange, but rather, will assign it an entirely new multicast delegate without having any effect on the original multicast delegate to which localOnChange also points. This makes the code thread safe because only one thread will access the localOnChange instance, and OnTemperatureChange will be an entirely new instance if listeners are added or removed.
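A sketch of the thread-safe firing pattern described above (the Thermostat class and event name follow the text's OnTemperatureChange example; the surrounding details are assumptions):

```csharp
using System;

class Thermostat
{
    public event EventHandler<EventArgs> OnTemperatureChange;

    void FireTemperatureChange()
    {
        // Copy the delegate reference. Adding or removing a listener on
        // OnTemperatureChange assigns an entirely new multicast delegate,
        // so the copy held here cannot become null between the check
        // and the invocation.
        EventHandler<EventArgs> localOnChange = OnTemperatureChange;
        if (localOnChange != null)
        {
            localOnChange(this, EventArgs.Empty);
        }
    }
}
```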
Synchronization Design Best Practices
Along with the complexities of multithreaded programming come several best practices for handling the complexities.
With the introduction of synchronization comes the potential for deadlock. Deadlock occurs when two or more threads wait for each other to release a synchronization lock. For example, Thread 1 requests a lock on _Sync1, and then later requests a lock on _Sync2 before releasing the lock on _Sync1. At the same time, Thread 2 requests a lock on _Sync2, followed by a lock on _Sync1, before releasing the lock on _Sync2. This sets the stage for the deadlock. The deadlock actually occurs if both Thread 1 and Thread 2 successfully acquire their initial locks (_Sync1 and _Sync2, respectively) before obtaining their second locks.
Two conditions cause the deadlock: Two or more threads need to lock on the same two or more synchronization targets, and the locks are requested in different orders. To avoid deadlocks like this, developers should be careful when acquiring multiple locks to code each thread to obtain the locks in the same order.
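The lock-ordering rule can be sketched as follows; the class name and lock bodies are illustrative only:

```csharp
using System;

class Transfer
{
    static readonly object _Sync1 = new object();
    static readonly object _Sync2 = new object();

    // Both methods acquire _Sync1 before _Sync2. Because every thread
    // requests the locks in the same order, the circular wait required
    // for a deadlock cannot form.
    public static void Thread1Work()
    {
        lock (_Sync1)
        {
            lock (_Sync2)
            {
                Console.WriteLine("Thread 1 holds both locks");
            }
        }
    }

    public static void Thread2Work()
    {
        lock (_Sync1)   // _Sync1 first here too, never _Sync2 first
        {
            lock (_Sync2)
            {
                Console.WriteLine("Thread 2 holds both locks");
            }
        }
    }
}
```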
For each synchronization mechanism discussed here, a single thread cannot cause a deadlock with itself. If a thread acquires a lock and then recursively calls back on the same method and re-requests the lock, the thread will not block because it already is the owner of the lock. (Although not discussed in this chapter, System.Threading.Semaphore is one example of a synchronization mechanism that could potentially deadlock with itself.)
When to Provide Synchronization
As already discussed, all static data should be thread safe. Therefore, synchronization needs to surround static data. Generally, this means that programmers should declare private static variables and then provide public methods for modifying the data. Such methods should internally handle the synchronization.
In contrast, instance state is not expected to include synchronization. Synchronization significantly decreases performance and increases the chance of a lock contention or deadlock. With the exception of classes that are explicitly designed for multithreaded access, programmers sharing objects across multiple threads are expected to handle their own synchronization of the data being shared.
Avoiding Unnecessary Locking
Although not at the cost of data integrity, programmers should avoid synchronization where possible. For example, if static method A() calls static method B() and both methods include taking locks, the redundant locks will decrease performance and perhaps decrease scalability. Carefully code APIs to minimize the number of locks required.
More Synchronization Types
System.Threading.Mutex is virtually identical in concept to the System.Threading.Monitor class, except that the lock keyword does not use it, and it synchronizes across multiple processes. Using the Mutex class, you can synchronize access to a file or some other cross-process resource. Since Mutex is a cross-process resource, .NET 2.0 added support for setting the access control via a System.Security.AccessControl.MutexSecurity object. One use for the Mutex class is to limit an application so that it cannot run multiple times simultaneously, as Listing 15.13 demonstrates.
Creating a Single Instance Application
The results from running the first instance of the application appear in Output 15.9.
The results of the second instance of the application while the first instance is still running appear in Output 15.10.
In this case, the application can run only once on the machine, even if it is launched by different users. To restrict the instances to once per user, prefix Assembly.GetEntryAssembly().FullName with System.Windows.Forms.Application.UserAppDataPath.Replace("\\", "+") instead. This requires a reference to the System.Windows.Forms assembly.
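A sketch of the single-instance pattern using a named Mutex (the exact structure of Listing 15.13 is assumed, not quoted):

```csharp
using System;
using System.Reflection;
using System.Threading;

class Program
{
    static void Main()
    {
        bool firstInstance;
        // A named mutex is visible machine-wide; using the entry
        // assembly's full name makes the name unique per application.
        using (Mutex mutex = new Mutex(
            false, Assembly.GetEntryAssembly().FullName, out firstInstance))
        {
            if (!firstInstance)
            {
                // Another process already created the named mutex.
                Console.WriteLine("This application is already running.");
                return;
            }
            Console.WriteLine("ENTER to shut down");
            Console.ReadLine();
        }   // disposing the mutex releases the name for the next instance
    }
}
```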
Reset Events: ManualResetEvent and AutoResetEvent
One way to control uncertainty about when particular instructions in a thread will execute relative to instructions in another thread is with reset events, of which there are two. In spite of the term events, reset events have nothing to do with C# delegates and events. Instead, reset events are a way to force code to wait for the execution of another thread until the other thread signals. These are especially useful for testing multithreaded code because it is possible to wait for a particular state before verifying the results.
The reset event types are System.Threading.AutoResetEvent and System.Threading.ManualResetEvent. The key methods on the reset events are Set() and WaitOne(). Calling the WaitOne() method will cause a thread to block until a different thread calls Set(), or until the wait period times out. Listing 15.14 demonstrates how this works, and Output 15.11 shows the results.
Listing 15.14. Waiting for AutoResetEvent
Listing 15.14 begins by starting a new thread. Following thread.Start(), it calls ResetEvent.WaitOne(). This causes the thread executing Main() to suspend and wait for the AutoResetEvent called ResetEvent to be set. The thread running DoWork() continues, however. Inside DoWork() is a call to ResetEvent.Set(), and once this method has been called, the call to ResetEvent.WaitOne() back in Main() is signaled, meaning it is allowed to continue. As a result, "DoWork() started" and "DoWork() ending" appear before "Application shutting down...." in spite of the fact that DoWork() includes a call to Thread.Sleep() and DoWork() is running on a different thread.
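A sketch of the structure just described (field and output strings approximate the listing; exact details are assumptions):

```csharp
using System;
using System.Threading;

class Program
{
    // false: the event starts in the unsignaled state.
    static readonly AutoResetEvent _ResetEvent = new AutoResetEvent(false);

    static void Main()
    {
        Thread thread = new Thread(DoWork);
        thread.Start();

        // Block Main()'s thread until DoWork() signals.
        _ResetEvent.WaitOne();
        Console.WriteLine("Application shutting down....");
    }

    static void DoWork()
    {
        Console.WriteLine("DoWork() started....");
        Thread.Sleep(1000);
        Console.WriteLine("DoWork() ending....");
        _ResetEvent.Set();   // release the thread blocked in WaitOne()
    }
}
```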
Calling a reset event's WaitOne() method blocks the calling thread until another thread signals and allows the blocked thread to continue. Instead of blocking indefinitely, WaitOne() includes a parameter, either in milliseconds or as a TimeSpan object, for the maximum amount of time to block. When specifying a timeout period, the return from WaitOne() will be false if the timeout occurs before the reset event is signaled.
The only difference between System.Threading.AutoResetEvent and System.Threading.ManualResetEvent is that AutoResetEvent automatically returns to the unsignaled state after releasing a single thread from WaitOne(). As a result, a second call to WaitOne() will block until another call to Set() occurs. Given this behavior, it is possible for two different threads to call WaitOne() simultaneously, and only one will be allowed to continue with each call to Set(). In contrast, ManualResetEvent requires a call to Reset() before it will block any additional threads.
The remainder of this chapter, and Chapter 16, use a call to an AutoResetEvent's Set() method within the worker thread's implementation. In addition, AutoResetEvent's WaitOne() method blocks on Main()'s thread until Set() has been called. In this way, it demonstrates that the worker thread executes before Main() exits.
Although not exactly the same, System.Threading.Monitor includes Wait() and Pulse() methods that provide similar functionality to reset events in some circumstances.
Thread Local Storage
In some cases, using synchronization locks can lead to unacceptable performance and scalability restrictions. In other instances, providing synchronization around a particular data element may be too complex, especially when it is added after the original coding.
One alternate solution to synchronization is thread local storage. Thread local storage creates a new instance of a static field for every thread. This provides each thread with its own instance; as a result, there is no need for synchronization, as there is no point in synchronizing data that occurs within only a single thread's context.
Decorating a field with a ThreadStaticAttribute, as in Listing 15.15, designates it as one instance per thread.
Listing 15.15. Using the ThreadStaticAttribute
As Output 15.12 demonstrates, the value of _Count for the thread executing Main() is never decremented by the thread executing Decrement(). Since _Count is decorated with the ThreadStaticAttribute, the thread running Main() and the thread running Decrement() are operating on entirely different instances of _Count.
There is one important caveat to the ThreadStaticAttribute. If the value of _Count is assigned during declaration (private static int _Count = 42;, for example), then only the thread static instance associated with the thread running the static constructor will be initialized. In Listing 15.15, only the thread executing Main() will have a thread local storage variable of _Count that is initialized. The value of _Count that Decrement() decrements will never be initialized. Similarly, if a constructor initializes a thread local storage field, only the thread calling the constructor will have an initialized thread local storage instance. For this reason, it is a good practice to initialize a thread local storage field within the method that each thread initially calls.
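A sketch of the pattern, initializing the thread local field inside each thread's entry method as recommended (values and names are illustrative):

```csharp
using System;
using System.Threading;

class Program
{
    [ThreadStatic]
    static int _Count;   // one instance of this field per thread

    static void Main()
    {
        _Count = 100;    // initialize on Main()'s thread
        Thread thread = new Thread(Decrement);
        thread.Start();
        thread.Join();

        // Main()'s copy of _Count is untouched by Decrement().
        Console.WriteLine("Main() count = {0}", _Count);
    }

    static void Decrement()
    {
        // This thread sees its own _Count, which starts at the
        // default 0 (not 100), so it is initialized here.
        _Count = 100;
        _Count--;
        Console.WriteLine("Decrement() count = {0}", _Count);
    }
}
```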
The decision to use thread local storage requires some degree of cost-benefit analysis. For example, consider using thread local storage for a database connection. Depending on the database management system, database connections are relatively expensive, so creating a connection for every thread could be costly. Similarly, locking a connection so that all database calls are synchronized places a significantly lower ceiling on scalability. Each pattern has its costs and benefits, and the correct choice depends largely on the individual implementation.
Another reason to use ThreadStatic is to make commonly needed context information available to other methods without explicitly passing the data via parameters. If multiple methods in the call stack require user security information, for example, you can pass the data using thread local storage fields instead of as parameters. This keeps APIs cleaner while still making the information available to methods in a thread-safe manner.
Synchronizing a Method Using MethodImplAttribute
The last synchronization method to point out is the MethodImplAttribute. Used in conjunction with the MethodImplOptions.Synchronized option, this attribute marks a method as synchronized so that only one thread can execute it at a time. In the current implementation, this causes the JIT to treat the method as though it were surrounded by lock(this). Such an implementation means that, in fact, all methods on the same class decorated with the same attribute and option are synchronized with one another, not just each method relative to itself. In other words, given two or more methods on the same class decorated with the attribute, only one of them will be able to execute at a time, and the one executing will block all calls by other threads to itself or to any other method in the class with the same decoration.
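A minimal sketch of the attribute in use (class and method names are assumptions):

```csharp
using System.Runtime.CompilerServices;

class SynchronizedOps
{
    // Equivalent (in the current implementation) to wrapping each body
    // in lock(this), so these two methods also block each other:
    // while one thread runs OperationA(), another thread's call to
    // OperationB() on the same instance must wait.
    [MethodImpl(MethodImplOptions.Synchronized)]
    public void OperationA()
    {
        // ... work on shared instance state ...
    }

    [MethodImpl(MethodImplOptions.Synchronized)]
    public void OperationB()
    {
        // ... work on shared instance state ...
    }
}
```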
Timers
One area where threading issues relating to the user interface may arise unexpectedly is when using one of the timer classes. The trouble is that when timer notification callbacks fire, the thread on which they fire may not be the user interface thread, and therefore, it cannot safely access user interface controls and forms (see Chapter 16).
Several timer classes are available, including System.Windows.Forms.Timer, System.Timers.Timer, and System.Threading.Timer. In creating System.Windows.Forms.Timer, the development team designed it specifically for use within a rich client user interface. Programmers can drag it onto a form as a nonvisual control and control the behavior from within the Properties window. Most importantly, it will always safely fire an event from a thread that can interact with the user interface.
The other two timers are very similar. System.Threading.Timer is essentially a lighter-weight implementation of System.Timers.Timer. Specifically, System.Threading.Timer does not derive from System.ComponentModel.Component, and therefore cannot be used as a component within a component container (something that implements System.ComponentModel.IContainer). Another difference is that System.Threading.Timer enables the passing of state, an object parameter, from the call that starts the timer into the callback that fires the timer notification. The remaining differences concern API usability, with System.Timers.Timer supporting a synchronization object and having slightly more intuitive calls. Both System.Timers.Timer and System.Threading.Timer are designed for use in server-type processes, and neither should interact directly with the user interface. Furthermore, both timers use the system thread pool. Figure provides an overall comparison of the various timers.
Using System.Windows.Forms.Timer is a relatively obvious choice for user interface programming. The only caution is that a long-running operation on the user interface thread may delay the arrival of a timer's expiration. Choosing between the other two options is less obvious, and generally, the difference between the two is insignificant. If hosting within an IContainer is necessary, then System.Timers.Timer is the right choice. However, if no specific System.Timers.Timer feature is required, then choose System.Threading.Timer by default, simply because it is a slightly lighter-weight implementation.
Listing 15.16 and Listing 15.17 provide sample code for using System.Timers.Timer and System.Threading.Timer, respectively. Their code is very similar, including the fact that both support instantiation within a using statement because both implement IDisposable. The output for both listings is identical, as shown in Output 15.13.
Listing 15.16. Using System.Timers.Timer
In Listing 15.16, you have using directives for both System.Threading and System.Timers. This makes the Timer type ambiguous. Therefore, use an alias to explicitly associate Timer with System.Timers.Timer.
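A sketch of the alias technique together with a minimal System.Timers.Timer setup (the interval, event body, and class layout are assumptions):

```csharp
using System.Threading;
using System.Timers;
// Both namespaces define a Timer type, making the bare name ambiguous.
// The alias explicitly binds Timer to System.Timers.Timer.
using Timer = System.Timers.Timer;

class Program
{
    static void Main()
    {
        using (Timer timer = new Timer(1000))   // interval in milliseconds
        {
            // Elapsed fires on a thread-pool thread each interval.
            timer.Elapsed += (sender, e) =>
                System.Console.WriteLine("tick");
            timer.Start();

            System.Console.WriteLine("ENTER to exit");
            System.Console.ReadLine();
        }
    }
}
```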
One noteworthy characteristic of System.Threading.Timer is that it takes the callback delegate and interval within the constructor.
Listing 15.17. Using System.Threading.Timer
You can change the interval or due time after instantiation on System.Threading.Timer via the Change() method. However, you cannot change the callback listeners after instantiation. Instead, you must create a new instance.
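A sketch of System.Threading.Timer usage, including a Change() call (the state string, timings, and sleep durations are illustrative):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Constructor takes the callback, a state object, the due time
        // before the first callback (ms), and the repeat interval (ms).
        using (Timer timer = new Timer(
            state => Console.WriteLine("tick: {0}", state),
            "status",   // state passed into each callback
            1000,       // due time: first tick after 1 second
            1000))      // interval: then every second
        {
            Thread.Sleep(3500);

            // The due time and interval can be changed after
            // instantiation...
            timer.Change(0, 500);
            Thread.Sleep(2000);
            // ...but the callback cannot; that requires a new Timer.
        }
    }
}
```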