In the paper, this is the content of Section 3.2.2.
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
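To make the two-hop routing concrete, here is a minimal sketch of how a token's path could be computed. This is my own illustration, not DeepSeek's code: it assumes experts are sharded contiguously (experts_per_gpu per GPU) and that a GPU's global rank is node_id * gpus_per_node + in-node index; the function and parameter names are hypothetical.

// Hypothetical two-hop routing helper (CUDA/C++ sketch): hop 1 over IB keeps the
// sender's in-node index, hop 2 over NVLink reaches the GPU that hosts the expert.
struct Hop {
    int ib_target_rank;      // GPU reached over IB: same in-node index, on the target node
    int nvlink_target_rank;  // GPU that actually hosts the expert, reached over NVLink
};

__host__ __device__ Hop route_token(int expert_id, int src_rank,
                                    int experts_per_gpu, int gpus_per_node) {
    int dst_rank  = expert_id / experts_per_gpu;   // GPU hosting the expert (assumed contiguous layout)
    int dst_node  = dst_rank / gpus_per_node;      // node that GPU sits on
    int local_idx = src_rank % gpus_per_node;      // sender's in-node index
    Hop h;
    h.ib_target_rank     = dst_node * gpus_per_node + local_idx;  // hop 1: IB
    h.nvlink_target_rank = dst_rank;                              // hop 2: NVLink
    return h;
}

As I read the paragraph, the point of this layout is that the IB hop does not depend on which expert within the node the token targets, so tokens bound for several experts on the same node cross IB only once and fan out locally over NVLink.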
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
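For intuition, here is a minimal CUDA sketch of the warp-specialization pattern this paragraph describes. It is not the actual DeepSeek kernel: the real role split is adjusted dynamically, and real IB/NVLink transfers go through RDMA-registered and peer-mapped buffers, none of which is shown; the fixed warp counts and all names below are hypothetical.

// Warp-specialization sketch (.cu): warps in one block take on different pipeline roles.
constexpr int IB_SEND_WARPS = 6;   // role (1): IB sending
constexpr int FORWARD_WARPS = 6;   // role (2): IB-to-NVLink forwarding
// remaining warps in the block: role (3), NVLink receiving

// Each warp copies one chunk of 16-byte elements cooperatively (lane-strided).
__device__ void warp_copy(const int4* src, int4* dst, int n, int lane) {
    for (int i = lane; i < n; i += 32) dst[i] = src[i];
}

__global__ void dispatch_sketch(const int4* ib_src,  int4* ib_dst,
                                const int4* fwd_src, int4* fwd_dst,
                                const int4* rcv_src, int4* rcv_dst,
                                int chunk) {
    int warp = threadIdx.x / 32;
    int lane = threadIdx.x % 32;
    if (warp < IB_SEND_WARPS) {
        // (1) "IB sending": stage tokens into send buffers bound for other nodes.
        warp_copy(ib_src + warp * chunk, ib_dst + warp * chunk, chunk, lane);
    } else if (warp < IB_SEND_WARPS + FORWARD_WARPS) {
        // (2) "IB-to-NVLink forwarding": push tokens that arrived over IB into the
        //     peer GPU's buffer for the expert that owns them.
        int w = warp - IB_SEND_WARPS;
        warp_copy(fwd_src + w * chunk, fwd_dst + w * chunk, chunk, lane);
    } else {
        // (3) "NVLink receiving": land forwarded tokens in local expert buffers.
        int w = warp - IB_SEND_WARPS - FORWARD_WARPS;
        warp_copy(rcv_src + w * chunk, rcv_dst + w * chunk, chunk, lane);
    }
}

The attraction of warp specialization is that different warps in the same thread block run different code paths concurrently, so the three pipeline stages overlap inside one SM instead of being serialized across separate kernel launches.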
To put it plainly, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same kind of scenario as the machine-to-machine copying described in 唐家山's log. In general, GPUs within a single machine talk to each other over NVLink, while GPUs across machines rely on the IB network; but NVLink is about 3.2 times as fast as IB, so some optimization is needed to arrive at a better transfer strategy. This is a complete, end-to-end scheme.
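A back-of-the-envelope reading of where the 3.2 experts/node figure comes from (my own interpretation; the paper does not spell the argument out): 160 GB/s ÷ 50 GB/s = 3.2, so in the time one copy of a token crosses IB into a node, NVLink can fan that copy out to roughly 3.2 GPUs inside the node. As long as the per-node fan-out stays at or below 3.2, the NVLink hop hides entirely behind the IB hop, and with the 4-node cap that allows 4 × 3.2 = 12.8 ≈ 13 experts at the same IB-bound cost as the 8 experts actually used.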
My understanding is that PTX is used here to customize thread execution more precisely and to reduce the crosstalk between the communication chunk transfers and the other work sharing the GPU.
The purpose is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
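The paper does not say which PTX instructions were customized, so the following is purely illustrative. One thing PTX exposes that plain CUDA C++ does not is per-instruction cache-policy hints: the ld.global.cs / st.global.cs forms below mark data as streaming (likely touched once, evict first), which is the kind of control you would want for large communication buffers that should not crowd compute kernels out of the L2 cache. The helper names are mine.

// Hypothetical helpers showing PTX cache-policy hints via inline assembly.
__device__ float load_streaming(const float* p) {
    float v;
    // .cs = "cache streaming": likely accessed once, marked evict-first,
    // so the load pollutes L1/L2 less than a default ld.global.
    asm volatile("ld.global.cs.f32 %0, [%1];" : "=f"(v) : "l"(p));
    return v;
}

__device__ void store_streaming(float* p, float v) {
    // Same evict-first hint on the store path.
    asm volatile("st.global.cs.f32 [%0], %1;" :: "l"(p), "f"(v) : "memory");
}

The chunk-size auto-tuning mentioned alongside the PTX work is, as I read it, a simpler knob: sweep the per-warp transfer granularity and keep whatever interferes least with the compute stream.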
As an analogy: it is as if you discovered that the NIC driver, when copying certain memory blocks, ends up serializing with the application's threads and dragging efficiency down, so instead of going through the interface the operating system defines for the NIC driver, you optimize by directly using the instruction set the NIC itself supports.