Fine-grained Locking

In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive that prevents state from being modified or accessed by multiple threads of execution at once. Locks enforce mutual exclusion concurrency control policies, and with a variety of possible methods there exist multiple unique implementations for different applications.


Types

Generally, locks are ''advisory locks'', where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implement ''mandatory locks'', where attempting unauthorized access to a locked resource will force an exception in the entity attempting to make the access. The simplest type of lock is a binary semaphore. It provides exclusive access to the locked data. Other schemes also provide shared access for reading data. Other widely implemented access modes are exclusive, intend-to-exclude and intend-to-upgrade.

Another way to classify locks is by what happens when the lock strategy prevents the progress of a thread. Most locking designs block the execution of the thread requesting the lock until it is allowed to access the locked resource. With a spinlock, the thread simply waits ("spins") until the lock becomes available. This is efficient if threads are blocked for a short time, because it avoids the overhead of operating system process rescheduling. It is inefficient if the lock is held for a long time, or if the progress of the thread that is holding the lock depends on preemption of the locked thread.

Locks typically require hardware support for efficient implementation. This support usually takes the form of one or more atomic instructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation.
Uniprocessor architectures have the option of using uninterruptible sequences of instructions—using special instructions or instruction prefixes to disable interrupts temporarily—but this technique does not work for multiprocessor shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial synchronization issues.

The reason an atomic operation
is required is because of concurrency, where more than one task executes the same logic. For example, consider the following C code:

    if (lock == 0)
    {
        /* lock appears free, so try to take it */
        lock = myPID;
    }
The above example does not guarantee that the task has the lock, since more than one task can be testing the lock at the same time. Since both tasks will detect that the lock is free, both tasks will attempt to set the lock, not knowing that the other task is also setting the lock. Dekker's or Peterson's algorithms are possible substitutes if atomic locking operations are not available. Careless use of locks can result in deadlock or livelock.
A number of strategies can be used to avoid or recover from deadlocks or livelocks, both at design-time and at run-time. (The most common strategy is to standardize the lock acquisition sequences so that combinations of inter-dependent locks are always acquired in a specifically defined "cascade" order.)

Some languages do support locks syntactically. An example in C# follows.
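The sample is a class that acts as a monitor of an account; a minimal sketch of such a class, with an illustrative balance field guarded by a private lock object, could be:

    public class Account // This is a monitor of an account
    {
        // Illustrative members: a private object guards the balance.
        private decimal _balance = 0;
        private readonly object _balanceLock = new object();

        public void Deposit(decimal amount)
        {
            // Only one thread at a time may execute this block.
            lock (_balanceLock)
            {
                _balance += amount;
            }
        }

        public void Withdraw(decimal amount)
        {
            lock (_balanceLock)
            {
                _balance -= amount;
            }
        }
    }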
C# 13 on .NET 9 introduced the dedicated System.Threading.Lock type for use with the lock statement. The code lock(this) can lead to problems if the instance can be accessed publicly. Similar to Java, C# can also synchronize entire methods, by applying MethodImplAttribute with MethodImplOptions.Synchronized:

    [MethodImpl(MethodImplOptions.Synchronized)]
    public void SomeMethod()
    {
        // The runtime takes a lock on the instance (or on the type, for a
        // static method) for the duration of the call.
    }
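As a hedged illustration of the "cascade" ordering strategy mentioned above (the BankAccount type and its members are invented for the example), two inter-dependent locks can always be acquired in a fixed order derived from a stable key, so that no pair of transfers can deadlock:

    // Always lock the two accounts in ascending id order, regardless of the
    // order in which they are passed in; lock is reentrant, so from == to is safe.
    public static class OrderedLocking
    {
        public static void Transfer(BankAccount from, BankAccount to, decimal amount)
        {
            BankAccount first = from.Id < to.Id ? from : to;
            BankAccount second = from.Id < to.Id ? to : from;

            lock (first.SyncRoot)
            {
                lock (second.SyncRoot)
                {
                    from.Balance -= amount;
                    to.Balance += amount;
                }
            }
        }
    }

    // Minimal illustrative account type exposing an id and a lock object.
    public sealed class BankAccount
    {
        public BankAccount(int id) { Id = id; }
        public int Id { get; }
        public decimal Balance { get; set; }
        public object SyncRoot { get; } = new object();
    }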


Granularity

Before being introduced to lock granularity, one needs to understand three concepts about locks:
* ''lock overhead'': the extra resources for using locks, like the memory space allocated for locks, the CPU time to initialize and destroy locks, and the time for acquiring or releasing locks. The more locks a program uses, the more overhead associated with the usage;
* ''lock contention'': this occurs whenever one process or thread attempts to acquire a lock held by another process or thread. The more fine-grained the available locks, the less likely one process/thread will request a lock held by the other (for example, locking a row rather than the entire table, or locking a cell rather than the entire row);
* ''deadlock'': the situation when each of at least two tasks is waiting for a lock that the other task holds. Unless something is done, the two tasks will wait forever.

There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization. An important property of a lock is its ''granularity''. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less ''lock overhead'' when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increased ''lock contention''. The coarser the lock, the higher the likelihood that the lock will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking where each process must hold multiple locks from a common set of locks can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a ''deadlock''.

In a database management system, for example, a lock could protect, in order of decreasing granularity, part of a field, a field, a record, a data page, or an entire table. Coarse granularity, such as using table locks, tends to give the best performance for a single user, whereas fine granularity, such as record locks, tends to give the best performance for multiple users.
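As a hedged sketch of this tradeoff (all names invented for illustration), a coarse-grained design would guard an entire map with one lock, while the fine-grained design below "stripes" keys across several locks so that updates to unrelated keys rarely contend:

    using System.Collections.Generic;

    // Fine-grained ("striped") locking: each bucket of keys has its own lock,
    // so updates that hash to different stripes can proceed in parallel.
    // A coarse-grained variant would use a single lock and a single dictionary.
    public sealed class StripedCounters
    {
        private const int StripeCount = 16;
        private readonly object[] _locks = new object[StripeCount];
        private readonly Dictionary<string, long>[] _buckets =
            new Dictionary<string, long>[StripeCount];

        public StripedCounters()
        {
            for (int i = 0; i < StripeCount; i++)
            {
                _locks[i] = new object();
                _buckets[i] = new Dictionary<string, long>();
            }
        }

        public void Increment(string key)
        {
            // Pick a stripe from the key's hash; only that stripe is locked.
            int stripe = (key.GetHashCode() & int.MaxValue) % StripeCount;
            lock (_locks[stripe])
            {
                _buckets[stripe].TryGetValue(key, out long current);
                _buckets[stripe][key] = current + 1;
            }
        }
    }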


Database locks

Database locks can be used as a means of ensuring transaction synchronicity, i.e. when making transaction processing concurrent (interleaving transactions), using two-phase locks ensures that the concurrent execution of the transactions turns out to be equivalent to some serial ordering of the transactions. However, deadlocks become an unfortunate side-effect of locking in databases. Deadlocks are either prevented by pre-determining the locking order between transactions or are detected using waits-for graphs. An alternative to locking for database synchronicity while avoiding deadlocks involves the use of totally ordered global timestamps.

There are mechanisms employed to manage the actions of multiple concurrent users on a database—the purpose is to prevent lost updates and dirty reads. The two types of locking are ''pessimistic locking'' and ''optimistic locking'':
* ''Pessimistic locking'': a user who reads a record with the intention of updating it places an exclusive lock on the record to prevent other users from manipulating it. This means no one else can manipulate that record until the user releases the lock. The downside is that users can be locked out for a very long time, thereby slowing the overall system response and causing frustration.
:: Where to use pessimistic locking: this is mainly used in environments where data contention (the degree to which users request data from the database system at any one time) is heavy, and where the cost of protecting data through locks is less than the cost of rolling back transactions if concurrency conflicts occur. Pessimistic concurrency is best implemented when lock times will be short, as in programmatic processing of records. Pessimistic concurrency requires a persistent connection to the database and is not a scalable option when users are interacting with data, because records might be locked for relatively long periods of time. It is not appropriate for use in Web application development.
* ''Optimistic locking'': this allows multiple concurrent users access to the database whilst the system keeps a copy of the initial read made by each user. When a user wants to update a record, the application determines whether another user has changed the record since it was last read. The application does this by comparing the initial read held in memory to the database record to verify any changes made to the record (see the sketch after this list). Any discrepancy between the initial read and the database record violates concurrency rules and causes the system to disregard the update request; an error message is generated and the user is asked to start the update process again. Optimistic locking improves database performance by reducing the amount of locking required, thereby reducing the load on the database server. It works efficiently with tables that require limited updates, since no users are locked out. However, some updates may fail; the downside is constant update failures due to high volumes of update requests from multiple concurrent users, which can be frustrating for those users.
:: Where to use optimistic locking: this is appropriate in environments where there is low contention for data, or where read-only access to data is required. Optimistic concurrency is used extensively in .NET to address the needs of mobile and disconnected applications, where locking data rows for prolonged periods of time would be infeasible. Also, maintaining record locks requires a persistent connection to the database server, which is not possible in disconnected applications.
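The version check described for optimistic locking can be sketched as follows; the Accounts table, its Version column, and the use of ADO.NET's Microsoft.Data.SqlClient are assumptions made for the illustration:

    using Microsoft.Data.SqlClient;

    public static class OptimisticUpdate
    {
        // Returns false when another user changed the row since it was read;
        // the caller should then re-read the record and retry or report a conflict.
        public static bool TryUpdateBalance(
            string connectionString, int accountId, decimal newBalance, long expectedVersion)
        {
            using var connection = new SqlConnection(connectionString);
            connection.Open();

            using var command = new SqlCommand(
                "UPDATE Accounts SET Balance = @balance, Version = Version + 1 " +
                "WHERE Id = @id AND Version = @version", connection);
            command.Parameters.AddWithValue("@balance", newBalance);
            command.Parameters.AddWithValue("@id", accountId);
            command.Parameters.AddWithValue("@version", expectedVersion);

            // Exactly one affected row means the record was unchanged and the update won.
            return command.ExecuteNonQuery() == 1;
        }
    }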


Lock compatibility table

Several variations and refinements of these major lock types exist, with respective variations of blocking behavior. If a first lock blocks another lock, the two locks are called ''incompatible''; otherwise the locks are ''compatible''. Often, lock types' blocking interactions are presented in the technical literature by a ''lock compatibility table''. The following is an example with the common, major lock types: a read lock, which permits shared access, and a write lock, which requires exclusive access.

                   read-lock    write-lock
    read-lock          ✔             X
    write-lock         X             X

* ✔ indicates compatibility
* X indicates incompatibility, i.e., a case when a lock of the first type (in the left column) on an object blocks a lock of the second type (in the top row) from being acquired on the same object (by another transaction).

An object typically has a queue of waiting requested (by transactions) operations with respective locks. The first blocked lock for an operation in the queue is acquired as soon as the existing blocking lock is removed from the object, and then its respective operation is executed. If a lock for an operation in the queue is not blocked by any existing lock (multiple compatible locks may exist on the same object concurrently), it is acquired immediately.

Comment: In some publications, the table entries are simply marked "compatible" or "incompatible", or respectively "yes" or "no".
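As a hedged illustration of these compatibility rules in application code (class and member names invented for the example), .NET's ReaderWriterLockSlim grants any number of concurrent read locks while making the write lock incompatible with every other lock:

    using System.Collections.Generic;
    using System.Threading;

    public sealed class SharedCatalog
    {
        private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
        private readonly Dictionary<int, string> _items = new Dictionary<int, string>();

        public string Find(int id)
        {
            _lock.EnterReadLock();   // compatible with other read locks
            try
            {
                return _items.TryGetValue(id, out string name) ? name : null;
            }
            finally
            {
                _lock.ExitReadLock();
            }
        }

        public void Add(int id, string name)
        {
            _lock.EnterWriteLock();  // incompatible with all other locks
            try
            {
                _items[id] = name;
            }
            finally
            {
                _lock.ExitWriteLock();
            }
        }
    }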


Disadvantages

Lock-based resource protection and thread/process synchronization have many disadvantages:
* Contention: some threads/processes have to wait until a lock (or a whole set of locks) is released. If one of the threads holding a lock dies, stalls, blocks, or enters an infinite loop, other threads waiting for the lock may wait indefinitely until the computer is power cycled.
* Overhead: the use of locks adds overhead for each access to a resource, even when the chances for collision are very rare. (However, any chance for such collisions is a race condition.)
* Debugging: bugs associated with locks are time dependent and can be very subtle and extremely hard to replicate, such as deadlocks.
* Instability: the optimal balance between lock overhead and lock contention can be unique to the problem domain (application) and sensitive to design, implementation, and even low-level system architectural changes. These balances may change over the life cycle of an application and may entail tremendous changes to update (re-balance).
* Composability: locks are only composable (e.g., managing multiple concurrent locks in order to atomically delete item X from table A and insert X into table B) with relatively elaborate (overhead) software support and perfect adherence by applications programming to rigorous conventions.
* Priority inversion: a low-priority thread/process holding a common lock can prevent high-priority threads/processes from proceeding. Priority inheritance can be used to reduce priority-inversion duration. The priority ceiling protocol can be used on uniprocessor systems to minimize the worst-case priority-inversion duration, as well as prevent deadlock.
* Convoying: all other threads have to wait if a thread holding a lock is descheduled due to a time-slice interrupt or page fault.

Some concurrency control strategies avoid some or all of these problems. For example, a funnel or serializing tokens
can avoid the biggest problem: deadlocks. Alternatives to locking include non-blocking synchronization methods, such as lock-free programming techniques and transactional memory.
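As a hedged sketch of the lock-free style of non-blocking synchronization (names invented for the example), a shared counter can be updated without holding any lock by retrying an atomic compare-and-swap until it succeeds:

    using System.Threading;

    public static class LockFreeCounter
    {
        private static long _value;

        // Lock-free increment: no thread ever blocks, so a preempted or stalled
        // thread cannot prevent the others from making progress.
        public static long Increment()
        {
            while (true)
            {
                long observed = Volatile.Read(ref _value);
                long desired = observed + 1;
                if (Interlocked.CompareExchange(ref _value, desired, observed) == observed)
                {
                    return desired;
                }
                // Another thread won the race; re-read the value and try again.
            }
        }
    }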