Waiman Long | 81d3dc9a34 | locking/qspinlock: Add stat tracking for pending vs. slowpath | 7 years ago
Will Deacon | ae75d9089f | locking/qspinlock: Use try_cmpxchg() instead of cmpxchg() when locking | 7 years ago
Will Deacon | 9d4646d14d | locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb() | 7 years ago
Will Deacon | c131a198c4 | locking/qspinlock: Use smp_cond_load_relaxed() to wait for next node | 7 years ago
Will Deacon | f9c811fac4 | locking/qspinlock: Use atomic_cond_read_acquire() | 7 years ago
Will Deacon | c61da58d8a | locking/qspinlock: Kill cmpxchg() loop when claiming lock from head of queue | 7 years ago
Will Deacon | 59fb586b4a | locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath | 7 years ago
Will Deacon | 6512276d97 | locking/qspinlock: Bound spinning on pending->locked transition in slowpath | 7 years ago
Will Deacon | 625e88be1f | locking/qspinlock: Merge 'struct __qspinlock' into 'struct qspinlock' | 7 years ago
Will Deacon | 11dc13224c | locking/qspinlock: Ensure node->count is updated before initialising node | 7 years ago
Will Deacon | 95bcade33a | locking/qspinlock: Ensure node is initialised before updating prev->next | 7 years ago
Paul E. McKenney | 548095dea6 | locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath() | 7 years ago
Paul E. McKenney | d3a024abbc | locking: Remove spin_unlock_wait() generic definitions | 8 years ago
Stafford Horne | 5671360f29 | locking/qspinlock: Explicitly include asm/prefetch.h | 8 years ago
Pan Xinhui | 0dceeaf599 | locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec() | 9 years ago
Peter Zijlstra | 33ac279677 | locking/barriers: Introduce smp_acquire__after_ctrl_dep() | 9 years ago
Peter Zijlstra | 1f03e8d291 | locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire() | 9 years ago
Peter Zijlstra | 055ce0fd1b | locking/qspinlock: Add comments | 9 years ago
Peter Zijlstra | 8d53fa1904 | locking/qspinlock: Clarify xchg_tail() ordering | 9 years ago
Peter Zijlstra | 2c61002271 | locking/qspinlock: Fix spin_unlock_wait() some more | 9 years ago
Waiman Long | cb037fdad6 | locking/qspinlock: Use smp_cond_acquire() in pending code | 9 years ago
Waiman Long | cd0272fab7 | locking/pvqspinlock: Queue node adaptive spinning | 9 years ago
Waiman Long | 1c4941fd53 | locking/pvqspinlock: Allow limited lock stealing | 9 years ago
Peter Zijlstra | b3e0b1b6d8 | locking, sched: Introduce smp_cond_acquire() and use it | 9 years ago
Waiman Long | aa68744f80 | locking/qspinlock: Avoid redundant read of next pointer | 9 years ago
Waiman Long | 81b5598665 | locking/qspinlock: Prefetch the next node cacheline | 9 years ago
Waiman Long | 64d816cba0 | locking/qspinlock: Use _acquire/_release() versions of cmpxchg() & xchg() | 9 years ago
Peter Zijlstra | 43b3f02899 | locking/qspinlock/x86: Fix performance regression under unaccelerated VMs | 10 years ago
Waiman Long | 75d2270280 | locking/pvqspinlock: Only kick CPU at unlock time | 10 years ago
Waiman Long | a23db284fe | locking/pvqspinlock: Implement simple paravirt support for the qspinlock | 10 years ago