•Nonblocking synchronization attempts (for example, tryLock) as well as blocking versions.
•Optional timeouts, so applications can give up waiting.
•Cancellability via interruption, usually separated into one version of acquire that is cancellable, and one that isn't.
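These acquisition modes all appear on the standard Lock interface. The following sketch exercises each of them on a ReentrantLock; the class and method names are the real java.util.concurrent.locks API:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class AcquireModes {
    public static void main(String[] args) throws InterruptedException {
        Lock lock = new ReentrantLock();

        // Nonblocking attempt: returns immediately with success or failure
        if (lock.tryLock()) {
            lock.unlock();
        }

        // Timed attempt: gives up if the lock is unavailable after 50ms
        if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
            lock.unlock();
        }

        // Cancellable acquire: throws InterruptedException if interrupted
        lock.lockInterruptibly();
        lock.unlock();

        // Plain blocking acquire: not cancellable while waiting
        lock.lock();
        try {
            System.out.println("acquired");
        } finally {
            lock.unlock();
        }
    }
}
```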
Synchronizers may vary according to whether they manage only exclusive states – those in which only one thread at a time may continue past a possible blocking point – versus possible shared states in which multiple threads can at least sometimes proceed. Regular lock classes of course maintain only exclusive state, but counting semaphores, for example, may be acquired by as many threads as the count permits. To be widely useful, the framework must support both modes of operation.
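The exclusive/shared distinction is visible directly in the standard classes. A minimal illustration, using the real ReentrantLock and Semaphore APIs:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class Modes {
    public static void main(String[] args) throws InterruptedException {
        // Exclusive mode: a lock admits one holder at a time;
        // any other thread calling lock() here would block.
        ReentrantLock lock = new ReentrantLock();
        lock.lock();

        // Shared mode: a counting semaphore admits up to `permits`
        // concurrent holders, so several threads may proceed at once.
        Semaphore sem = new Semaphore(3);
        sem.acquire();
        sem.acquire();  // two of the three permits are now held
        System.out.println(sem.availablePermits()); // prints 1

        sem.release();
        sem.release();
        lock.unlock();
    }
}
```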
The java.util.concurrent package also defines interface Condition, supporting monitor-style await/signal operations that may be associated with exclusive Lock classes, and whose implementations are intrinsically intertwined with their associated Lock classes.
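The canonical use of Condition is a bounded buffer, where waiters on "not full" and "not empty" conditions are tied to the buffer's lock. A minimal sketch (the BoundedQueue class is illustrative; Lock, Condition, and ReentrantLock are the real APIs):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A minimal bounded queue whose Conditions are created from its Lock
public class BoundedQueue<T> {
    private final Queue<T> items = new ArrayDeque<>();
    private final int capacity;
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public BoundedQueue(int capacity) { this.capacity = capacity; }

    public void put(T x) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity)
                notFull.await();      // atomically releases lock while waiting
            items.add(x);
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty())
                notEmpty.await();
            T x = items.remove();
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedQueue<Integer> q = new BoundedQueue<>(2);
        q.put(42);
        System.out.println(q.take());
    }
}
```

Because await must atomically release the lock and suspend, and signal must be issued while holding that same lock, a Condition implementation cannot be written independently of its Lock, which is the intertwining the text refers to.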
2.2 Performance Goals
Java built-in locks (accessed using synchronized methods and blocks) have long been a performance concern, and there is a sizable literature on their construction (e.g., [1], [3]). However, the main focus of such work has been on minimizing space overhead (because any Java object can serve as a lock) and on minimizing time overhead when used in mostly-single-threaded contexts on uniprocessors. Neither of these is an especially important concern for synchronizers: Programmers construct synchronizers only when needed, so there is no need to compact space that would otherwise be wasted, and synchronizers are used almost exclusively in multithreaded designs (increasingly often on multiprocessors) under which at least occasional contention is to be expected. So the usual JVM strategy of optimizing locks primarily for the zero-contention case, leaving other cases to less predictable "slow paths" [12], is not the right tactic for typical multithreaded server applications that rely heavily on java.util.concurrent.
Instead, the primary performance goal here is scalability: to predictably maintain efficiency even, or especially, when synchronizers are contended. Ideally, the overhead required to pass a synchronization point should be constant no matter how many threads are trying to do so. Among the main goals is to minimize the total amount of time during which some thread is permitted to pass a synchronization point but has not done so. However, this must be balanced against resource considerations, including total CPU time requirements, memory traffic, and thread scheduling overhead. For example, spinlocks usually provide shorter acquisition times than blocking locks, but usually waste cycles and generate memory contention, so are not often applicable.
These goals carry across two general styles of use. Most applications should maximize aggregate throughput, tolerating, at best, probabilistic guarantees about lack of starvation. However, in applications such as resource control, it is far more important to maintain fairness of access across threads, tolerating poor aggregate throughput. No framework can decide between these conflicting goals on behalf of users; instead, different fairness policies must be accommodated.
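In java.util.concurrent, this choice is exposed as a constructor flag on the synchronizer classes themselves. A brief sketch using the real ReentrantLock and Semaphore constructors and their isFair() accessors:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class FairnessPolicies {
    public static void main(String[] args) {
        // Default (nonfair) mode favors aggregate throughput:
        // an arriving thread may "barge" ahead of queued waiters.
        ReentrantLock throughputLock = new ReentrantLock();

        // Fair mode grants access to waiting threads in FIFO order,
        // trading throughput for starvation-freedom.
        ReentrantLock fairLock = new ReentrantLock(true);
        Semaphore fairSem = new Semaphore(10, true);

        System.out.println(throughputLock.isFair()); // false
        System.out.println(fairLock.isFair());       // true
        System.out.println(fairSem.isFair());        // true
    }
}
```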
No matter how well-crafted they are internally, synchronizers will create performance bottlenecks in some applications. Thus, the framework must make it possible to monitor and inspect basic operations to allow users to discover and alleviate bottlenecks. This minimally (and most usefully) entails providing a way to determine how many threads are blocked.
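The synchronizer classes expose exactly this kind of instrumentation. A small sketch using ReentrantLock's real monitoring methods, hasQueuedThreads() and getQueueLength() (the latter returns an estimate, since threads may be queuing concurrently):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Monitoring {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();

        // A second thread that will block trying to acquire the lock
        Thread waiter = new Thread(() -> {
            lock.lock();
            lock.unlock();
        });
        waiter.start();

        // Spin until the waiter has actually queued on the lock
        while (!lock.hasQueuedThreads())
            Thread.onSpinWait();

        // Report how many threads are blocked waiting for this lock
        System.out.println(lock.getQueueLength());

        lock.unlock();
        waiter.join();
    }
}
```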
3. DESIGN AND IMPLEMENTATION
The basic ideas behind a synchronizer are quite straightforward. An acquire operation proceeds as: