	mutex_remove_waiter(lock, &waiter, current_thread_info());

	/* set it to 0 if there are no waiters left: */
	if (likely(list_empty(&lock->wait_list)))
		atomic_set(&lock->count, 0);

	spin_unlock_mutex(&lock->wait_lock, flags);

	preempt_enable();
	return 0;
}
Here, the thread removes itself from the list of waiting threads. If there are
no other waiting threads, it resets the lock's state to "locked" (count = 0) so that
when it releases the lock it can use the fast path if nothing has changed. Finally,
the thread releases the lock's guard, restoring the saved interrupt flags, and re-enables preemption.
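Keeping the counting scheme in mind helps here: count == 1 means unlocked, count == 0 means locked with no waiters, and a negative count means locked with waiters queued. The following stand-alone sketch is not kernel code; it merely models those fast-path decisions with C11 atomics, using made-up names (fastpath_lock, fastpath_unlock) purely for illustration:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative model only: 1 = unlocked, 0 = locked with no waiters,
 * negative = locked with threads queued on a wait list. */
static atomic_int count = 1;

/* Lock fast path: decrement; if the old value was 1 we now own the
 * lock, otherwise the slow path (queueing, sleeping) would be needed. */
static bool fastpath_lock(void)
{
	return atomic_fetch_sub(&count, 1) == 1;
}

/* Unlock fast path: increment; if the old value was 0 there were no
 * waiters and we are done, otherwise the slow path must wake someone. */
static bool fastpath_unlock(void)
{
	return atomic_fetch_add(&count, 1) == 0;
}

int main(void)
{
	printf("lock:   %s\n", fastpath_lock() ? "fast path" : "slow path");
	printf("unlock: %s\n", fastpath_unlock() ? "fast path" : "slow path");
	return 0;
}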
Releasing the lock. Lock release is similar. The function mutex_unlock()
in kernel/mutex.c first tries a fast path and falls back on a slow path if needed:
void __sched mutex_unlock(struct mutex *lock)
{
	__mutex_fastpath_unlock(&lock->count, __mutex_unlock_slowpath);
}
The fast path for unlock mirrors the one for lock. Here are the
relevant excerpts from arch/x86/include/asm/mutex_32.h, for example:
#define __mutex_fastpath_unlock(count, fail_fn)		\
do {								\
	unsigned int dummy;					\
	asm volatile(LOCK_PREFIX "   incl (%%eax)\n"		\
		     "   jg 1f\n"				\
		     "   call " #fail_fn "\n"			\
		     "1:\n"					\
		     : "=a" (dummy)				\
		     : "a" (count)				\
		     : "memory", "ecx", "edx");			\
} while (0)
Here, when we atomically increment count, if we find that the new value is 1,
then the previous value must have been 0, so there can be no waiting threads
and we are done. Otherwise, we fall back to the slow path by calling fail_fn,
which corresponds to the following function in kernel/mutex.c.
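Before looking at that slow path, it is worth noting that architectures without a hand-tuned assembly fast path get essentially the same "increment and test" logic in portable C (include/asm-generic/mutex-dec.h in kernels of this vintage). The sketch below is reproduced from memory, so treat the exact details as approximate:

/* Sketch of the generic C fallback for the unlock fast path
 * (roughly include/asm-generic/mutex-dec.h; details from memory). */
static inline void
__mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
	/* A new value > 0 means the old value was 0: no waiters, done.
	 * Otherwise the old value was negative, so fail_fn() must run
	 * the slow path and wake a waiter. */
	if (unlikely(atomic_inc_return(count) <= 0))
		fail_fn(count);
}

Either way, a failed fast path ends up in the slow-path function below.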
static inline void
__mutex_unlock_common_slowpath(atomic_t *lock_count, int nested)
{
	struct mutex *lock = container_of(lock_count, struct mutex, count);
	unsigned long flags;

	spin_lock_mutex(&lock->wait_lock, flags);

	/*
	 * Unlock lock here
	 */
	atomic_set(&lock->count, 1);

	if (!list_empty(&lock->wait_list)) {
		struct mutex_waiter *waiter =
				list_entry(lock->wait_list.next,
					   struct mutex_waiter, list);
 