Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:
"The main changes are:
- lockless wakeup support for futexes and IPC message queues
(Davidlohr Bueso, Peter Zijlstra)
- Replace spinlocks with atomics in thread_group_cputimer(), to
improve scalability (Jason Low)
- NUMA balancing improvements (Rik van Riel)
- SCHED_DEADLINE improvements (Wanpeng Li)
- clean up and reorganize preemption helpers (Frederic Weisbecker)
- decouple page fault disabling machinery from the preemption
counter, to improve debuggability and robustness (David
Hildenbrand)
- SCHED_DEADLINE documentation updates (Luca Abeni)
- topology CPU masks cleanups (Bartosz Golaszewski)
- /proc/sched_debug improvements (Srikar Dronamraju)"
* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (79 commits)
sched/deadline: Remove needless parameter in dl_runtime_exceeded()
sched: Remove superfluous resetting of the p->dl_throttled flag
sched/deadline: Drop duplicate init_sched_dl_class() declaration
sched/deadline: Reduce rq lock contention by eliminating locking of non-feasible target
sched/deadline: Make init_sched_dl_class() __init
sched/deadline: Optimize pull_dl_task()
sched/preempt: Add static_key() to preempt_notifiers
sched/preempt: Fix preempt notifiers documentation about hlist_del() within unsafe iteration
sched/stop_machine: Fix deadlock between multiple stop_two_cpus()
sched/debug: Add sum_sleep_runtime to /proc/<pid>/sched
sched/debug: Replace vruntime with wait_sum in /proc/sched_debug
sched/debug: Properly format runnable tasks in /proc/sched_debug
sched/numa: Only consider less busy nodes as numa balancing destinations
Revert 095bebf61a ("sched/numa: Do not move past the balance point if unbalanced")
sched/fair: Prevent throttling in early pick_next_task_fair()
preempt: Reorganize the notrace definitions a bit
preempt: Use preempt_schedule_context() as the official tracing preemption point
sched: Make preempt_schedule_context() function-tracing safe
x86: Remove cpu_sibling_mask() and cpu_core_mask()
x86: Replace cpu_**_mask() with topology_**_cpumask()
...
commit 23b7776290
138 changed files with 1444 additions and 974 deletions
@@ -60,6 +60,28 @@ void lg_local_unlock_cpu(struct lglock *lg, int cpu)
 }
 EXPORT_SYMBOL(lg_local_unlock_cpu);
 
+void lg_double_lock(struct lglock *lg, int cpu1, int cpu2)
+{
+	BUG_ON(cpu1 == cpu2);
+
+	/* lock in cpu order, just like lg_global_lock */
+	if (cpu2 < cpu1)
+		swap(cpu1, cpu2);
+
+	preempt_disable();
+	lock_acquire_shared(&lg->lock_dep_map, 0, 0, NULL, _RET_IP_);
+	arch_spin_lock(per_cpu_ptr(lg->lock, cpu1));
+	arch_spin_lock(per_cpu_ptr(lg->lock, cpu2));
+}
+
+void lg_double_unlock(struct lglock *lg, int cpu1, int cpu2)
+{
+	lock_release(&lg->lock_dep_map, 1, _RET_IP_);
+	arch_spin_unlock(per_cpu_ptr(lg->lock, cpu1));
+	arch_spin_unlock(per_cpu_ptr(lg->lock, cpu2));
+	preempt_enable();
+}
+
 void lg_global_lock(struct lglock *lg)
 {
 	int i;
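The stop_two_cpus() deadlock fix listed above lands as the lg_double_lock()/lg_double_unlock() pair shown in this hunk: when two per-CPU locks must be held at once, the indices are sorted first so that every caller acquires the lower-numbered lock before the higher-numbered one, making a lock-order inversion between concurrent callers impossible. Below is a minimal userspace sketch of that ordering rule using POSIX mutexes instead of per-CPU arch spinlocks; NCPUS, cpu_lock[], double_lock() and double_unlock() are invented names for illustration, not kernel APIs.

/*
 * Sketch of the ordering rule used by lg_double_lock(): always take
 * the lower-numbered lock first. Hypothetical userspace code, not
 * part of the kernel patch.
 */
#include <pthread.h>
#include <stdio.h>

#define NCPUS 4

static pthread_mutex_t cpu_lock[NCPUS] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

static void double_lock(int cpu1, int cpu2)
{
	/* Same trick as lg_double_lock(): sort the indices first. */
	if (cpu2 < cpu1) {
		int tmp = cpu1;
		cpu1 = cpu2;
		cpu2 = tmp;
	}
	pthread_mutex_lock(&cpu_lock[cpu1]);
	pthread_mutex_lock(&cpu_lock[cpu2]);
}

static void double_unlock(int cpu1, int cpu2)
{
	/* Unlock order does not matter for deadlock avoidance. */
	pthread_mutex_unlock(&cpu_lock[cpu1]);
	pthread_mutex_unlock(&cpu_lock[cpu2]);
}

int main(void)
{
	/* Callers asking for (0,3) and (3,0) both end up locking 0 then 3. */
	double_lock(3, 0);
	printf("holding locks 0 and 3\n");
	double_unlock(3, 0);
	return 0;
}

Because both the (0,3) and (3,0) callers converge on the same acquisition order, neither can hold one lock while waiting for the other's first lock, which is exactly the property stop_two_cpus() needed.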