In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
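To make the two-hop path concrete, here is a rough Python sketch of the dispatch plan as I read it. The 8-GPU node size, the `plan_dispatch` name, and the data layout are my own assumptions for illustration only, not DeepSeek's actual implementation (which runs on-device inside the kernels).

```python
from collections import defaultdict

GPUS_PER_NODE = 8          # assumed node size for this sketch
MAX_NODES_PER_TOKEN = 4    # the paper's cap on how many nodes one token may touch

def plan_dispatch(token_id, expert_gpus, src_gpu):
    """Sketch of the two-hop dispatch path for one token.

    expert_gpus: global GPU ids hosting the token's routed experts.
    Returns (ib_hops, nvlink_hops): the first hop crosses IB to the GPU with the
    same in-node index on each target node; the second hop fans out over NVLink.
    """
    src_node, src_local = divmod(src_gpu, GPUS_PER_NODE)

    # Group the routed experts' GPUs by target node.
    by_node = defaultdict(list)
    for g in expert_gpus:
        node, local = divmod(g, GPUS_PER_NODE)
        by_node[node].append(local)
    assert len(by_node) <= MAX_NODES_PER_TOKEN, "gating must keep a token on <= 4 nodes"

    ib_hops, nvlink_hops = [], []
    for node, locals_ in by_node.items():
        # Hop 1: one IB transfer per target node, landing on the GPU that has
        # the same in-node index as the sender.
        landing_gpu = node * GPUS_PER_NODE + src_local
        if node != src_node:
            ib_hops.append((src_gpu, landing_gpu))
        # Hop 2: forward over NVLink to every GPU on that node hosting a target expert.
        for local in locals_:
            dst_gpu = node * GPUS_PER_NODE + local
            if dst_gpu != landing_gpu:
                nvlink_hops.append((landing_gpu, dst_gpu))
    return ib_hops, nvlink_hops

# Example: a token routed to experts spread over 3 nodes, sent from GPU 2.
print(plan_dispatch(0, expert_gpus=[10, 13, 18, 21, 26, 29, 30, 31], src_gpu=2))
```

The point of the structure is that each IB transfer lands on exactly one GPU per target node, and the NVLink fan-out happens afterwards, so adding more experts on a node that is already targeted costs no extra IB traffic; that is how the paper arrives at the roughly 13-expert ceiling at the same communication cost as 8.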
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
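The dynamic warp allocation can also be sketched at a high level. Below is a toy host-side Python model that splits one channel's warp budget across the three dispatch-side roles in proportion to pending work; the 32-warp budget, the proportional rule, and the names are all my own assumptions, and the real kernels do this per SM on the device with PTX-level tuning that a sketch like this cannot capture.

```python
# Toy model of dynamically splitting one channel's warps across the three
# dispatch-side roles named in the paper: IB send, IB->NVLink forward, NVLink recv.
DISPATCH_ROLES = ("ib_send", "ib_to_nvlink_forward", "nvlink_recv")
WARPS_PER_CHANNEL = 32  # hypothetical budget, for illustration only

def allocate_warps(pending_bytes, total_warps=WARPS_PER_CHANNEL):
    """Split total_warps across roles in proportion to their pending work,
    keeping at least one warp per role so no pipeline stage stalls."""
    total = sum(pending_bytes.values()) or 1
    alloc = {r: max(1, round(total_warps * pending_bytes[r] / total))
             for r in DISPATCH_ROLES}
    # Trim or pad so the allocation sums exactly to the budget.
    while sum(alloc.values()) > total_warps:
        alloc[max(alloc, key=alloc.get)] -= 1
    while sum(alloc.values()) < total_warps:
        alloc[min(alloc, key=alloc.get)] += 1
    return alloc

# Example: IB sending is the current bottleneck, so it receives most of the warps.
print(allocate_warps({"ib_send": 6_000_000,
                      "ib_to_nvlink_forward": 1_500_000,
                      "nvlink_recv": 500_000}))
```

The combining direction mirrors this, with NVLink sending, NVLink-to-IB forwarding plus accumulation, and IB receiving plus accumulation as the three roles.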
To put it more plainly, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine copy scenario described in 唐家山's log. Generally, GPUs within a single node talk to each other over NVLink, while GPUs across nodes rely on the IB network, but NVLink's rate is 3.2 times that of IB, so some optimization is needed to get a better transmission strategy. This is a complete, end-to-end scheme.

My understanding is that PTX is used here to customize thread execution more precisely and reduce the interference between communication chunk allocation and transmission on one side and other work on the other.

The purpose is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.

As an analogy: it is as if you discovered that when the NIC driver copies certain memory blocks it ends up serialized with the application's threads and efficiency drops, so instead of going through the OS-defined interface to the NIC driver, you optimize by directly using the instruction set the NIC supports.