        wake_up_process(waiter->task);  /* wake the first waiting thread */
    }
    spin_unlock_mutex(&lock->wait_lock, flags); /* release internal spinlock */
}
Notice that this function always sets count to 1, even if there are waiting
threads. As a result, a new thread that is not waiting may swoop in and acquire
the lock on its fast path, setting count=0. However, in this case, the waiting
thread will still eventually run, and when it does, the main loop above will set
count=-1.
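As a rough sketch of the fast path (using GCC-style atomic builtins; the
real Linux code uses hand-tuned, architecture-specific instructions), both
acquire and release come down to a single atomic operation on count:

/* Sketch of the fast path only. count follows the convention above:
 * 1 = unlocked, 0 = locked with no waiters, negative = contended. */
struct mutex { int count; };

void mutex_lock_slowpath(struct mutex *lock);   /* routines discussed above */
void mutex_unlock_slowpath(struct mutex *lock);

void mutex_lock(struct mutex *lock) {
    /* Fast path: atomic decrement; old value 1 means we took the lock. */
    if (__atomic_fetch_sub(&lock->count, 1, __ATOMIC_ACQUIRE) == 1)
        return;
    mutex_lock_slowpath(lock);    /* contended: enqueue and sleep */
}

void mutex_unlock(struct mutex *lock) {
    /* Fast path: atomic increment; old value 0 means no one was waiting. */
    if (__atomic_fetch_add(&lock->count, 1, __ATOMIC_RELEASE) == 0)
        return;
    mutex_unlock_slowpath(lock);  /* sets count to 1 and wakes a waiter */
}

Only when the atomic instruction observes contention does control fall
through to the slow-path routines discussed above.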
Discussion: Mutex fast path. An important thing to remember from this
example is that many implementations of locks include a fast path so that
acquiring and releasing an uncontended lock is cheap. Programmers will
sometimes go to great lengths to convince themselves that they can avoid acquiring
a lock in a particular situation. However, the reasoning in such cases can be
subtle, and omitting needed locks is dangerous. In cases where there is little
contention, avoiding locks is unlikely to significantly improve performance, so it
is usually better just to keep things simple and rely on locks to ensure mutual
exclusion when accessing shared state.
Locks for user-level and kernel-supported threads
The discussion above focused on implementing locks for in-kernel threads. In
that case everything (code, shared state, lock data structures, thread control
blocks, and the ready list) is in kernel memory, and all threads run in kernel
mode. Fortunately, although some details change, the basic approaches work
when we implement locks for threads that run within user-level processes.
Kernel-supported threads. In a kernel-supported threads library, the kernel
provides multiple threads to a process. In this situation, the kernel scheduler
needs to know when a thread is waiting for the lock so that it can suspend the
waiting thread and run a different one.
In the simplest case, we can place the lock data structure, including all of
a lock's state (e.g., value, waiting, and the spinlock), in the kernel's address
space and replace each method call on the lock object with a system call. Then,
the implementations described above for kernel-level locks can be used without
significant changes.
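For example, a user-level wrapper might look like the following sketch,
where sys_mutex_lock and sys_mutex_unlock are invented names standing in
for whatever lock system calls a particular kernel provides:

/* Hypothetical wrappers: because all lock state lives in the kernel,
 * every acquire and release traps into the kernel, even when the lock
 * is uncontended. The system call names below are invented. */
typedef int kmutex_t;    /* handle naming a lock object in kernel memory */

extern int sys_mutex_lock(kmutex_t m);
extern int sys_mutex_unlock(kmutex_t m);

void mutex_lock(kmutex_t m)   { sys_mutex_lock(m); }   /* may suspend caller */
void mutex_unlock(kmutex_t m) { sys_mutex_unlock(m); } /* may wake a waiter */

The drawback is that even an uncontended acquire pays for a kernel crossing,
which motivates the split design described next.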
A more sophisticated approach splits the lock's state and implementation
into a fast path and slow path, similar to the Linux lock described above. For
example, each lock has two data structures: one in the process's address space
that holds something similar to the count field, and one in the kernel that
holds the spinlock and the waiting list.
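This split is essentially how Linux futexes work. The sketch below, a
simplified version of the classic pattern from Ulrich Drepper's paper
"Futexes Are Tricky" (error handling omitted), keeps an integer in process
memory with three values: 0 for unlocked, 1 for locked, and 2 for locked
with possible waiters. Threads enter the kernel only to sleep or to wake
a sleeper:

#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sleep or wake via the kernel's wait queue, keyed by the count's address. */
static long futex_op(atomic_int *addr, int op, int val) {
    return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

void futex_lock(atomic_int *m) {
    int c = 0;
    /* Fast path: 0 -> 1 with one atomic instruction, no system call. */
    if (atomic_compare_exchange_strong(m, &c, 1))
        return;
    /* Slow path: mark the lock contended (2), then sleep in the kernel.
     * FUTEX_WAIT blocks only while *m is still 2, so a wakeup that races
     * with this code is never lost. */
    if (c != 2)
        c = atomic_exchange(m, 2);
    while (c != 0) {
        futex_op(m, FUTEX_WAIT, 2);
        c = atomic_exchange(m, 2);
    }
}

void futex_unlock(atomic_int *m) {
    /* Fast path: old value 1 means no thread ever waited; stay in user space. */
    if (atomic_exchange(m, 0) == 2)
        futex_op(m, FUTEX_WAKE, 1);   /* wake one waiting thread */
}

On an uncontended lock, both operations complete with a single atomic
instruction in user space; the kernel's spinlock and waiting list are
touched only on the contended path.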
 