Barrier Synchronization in OpenMP
- Micro Topics
- 2025-05-29
 do_release:
  if (nested)
    gomp_barrier_wait (&team->barrier);
  else
    gomp_simple_barrier_wait (&pool->threads_dock);

  /* Decrease the barrier threshold to match the number of threads
     that should arrive back at the end of this team.  The extra
     threads should be exiting.  Note that we arrange for this test
     to never be true for nested teams.  If AFFINITY_COUNT is non-zero,
     the barrier as well as gomp_managed_threads was temporarily
     set to NTHREADS + AFFINITY_COUNT.  For NTHREADS < OLD_THREADS_COUNT,
     AFFINITY_COUNT if non-zero will be always at least
     OLD_THREADS_COUNT - NTHREADS.  */
  if (__builtin_expect (nthreads < old_threads_used, 0)
      || __builtin_expect (affinity_count, 0))
    {
      long diff = (long) nthreads - (long) old_threads_used;

      if (affinity_count)
        diff = -affinity_count;

      gomp_simple_barrier_reinit (&pool->threads_dock, nthreads);

#ifdef HAVE_SYNC_BUILTINS
      __sync_fetch_and_add (&gomp_managed_threads, diff);
#else
      gomp_mutex_lock (&gomp_managed_threads_lock);
      gomp_managed_threads += diff;
      gomp_mutex_unlock (&gomp_managed_threads_lock);
#endif
    }
  if (__builtin_expect (gomp_display_affinity_var, 0))
    {
      if (nested
          || nthreads != old_threads_used
          || force_display)
        {
          gomp_display_affinity_thread (gomp_thread_self (), &thr->ts,
                                        thr->place);
          if (nested)
            {
              start_data -= nthreads - 1;
              for (i = 1; i < nthreads; ++i)
                {
                  gomp_display_affinity_thread (
#ifdef LIBGOMP_USE_PTHREADS
                                                start_data->handle,
#else
                                                gomp_thread_self (),
#endif
                                                &start_data->ts,
                                                start_data->place);
                  start_data++;
                }
            }
          else
            {
              for (i = 1; i < nthreads; ++i)
                {
                  gomp_thread_handle handle
                    = gomp_thread_to_pthread_t (pool->threads[i]);
                  gomp_display_affinity_thread (handle, &pool->threads[i]->ts,
                                                pool->threads[i]->place);
                }
            }
        }
    }
  if (__builtin_expect (affinity_thr != NULL, 0)
      && team->prev_ts.place_partition_len > 64)
    free (affinity_thr);
}
#endif
The code above is the closing phase of gomp_team_start(). It does four things:
✅ 一、Synchronize all threads: wait for the workers to finish starting up
  if (nested)
    gomp_barrier_wait (&team->barrier);
  else
    gomp_simple_barrier_wait (&pool->threads_dock);
🔍 Meaning
- The master thread blocks here until every worker thread has finished its initialization and arrived at the barrier, bringing them all into sync.
- For nested parallelism (nested), it waits on team->barrier.
- Otherwise (a top-level team), it waits on pool->threads_dock.
This is a synchronization barrier: it guarantees the master and all worker threads have completed startup before any of them moves on to subsequent work.
✅ 二、Adjust gomp_managed_threads and the barrier threshold
  if (nthreads < old_threads_used || affinity_count)
    {
      long diff = nthreads - old_threads_used;
      if (affinity_count)
        diff = -affinity_count;
      gomp_simple_barrier_reinit (&pool->threads_dock, nthreads);
      __sync_fetch_and_add (&gomp_managed_threads, diff);
    }
🔍 Meaning
- If this team needs fewer threads than last time's old_threads_used, or extra threads were temporarily added for affinity, the surplus must now be retired.
- gomp_simple_barrier_reinit() dynamically adjusts the barrier threshold (the number of threads that must arrive).
- gomp_managed_threads (the count of currently active managed threads) is updated at the same time, either with an atomic add or under a lock.
⚠️ Why decrease it?
- The thread pool may retain more threads than the current team needs.
- Those surplus threads must exit on their own; otherwise the barrier the master waits on would never reach its target count.
✅ 三、Display thread affinity (if requested)
  if (gomp_display_affinity_var)
    {
      if (nested || nthreads != old_threads_used || force_display)
        {
          gomp_display_affinity_thread (...);
        }
    }
🔍 Meaning
- If OMP_DISPLAY_AFFINITY=TRUE, libgomp prints each thread's binding (its place number).
- This is purely diagnostic output and has no functional effect.
✅ 四、Free the large scratch array (affinity_thr)
  if (affinity_thr != NULL && team->prev_ts.place_partition_len > 64)
    free (affinity_thr);
- Frees the large array allocated for places, avoiding a memory leak.
- place_partition_len > 64 is a heuristic threshold: small arrays are deliberately kept unfreed so they can be reused next time.
🔚 One-sentence summary
The core of this section is: the master thread blocks at the barrier until all threads are ready, then dynamically adjusts the thread count and barrier state as needed, and finishes with resource cleanup and diagnostic output.
