In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
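To make the two-hop dispatch path concrete, here is a small host-side planning sketch of my own (CUDA/C++ host code; not DeepSeek's actual implementation, and the constants GPUS_PER_NODE, EXPERTS_PER_GPU and the contiguous expert-to-GPU placement are assumptions). For one token it groups the routed experts by target node (the gating limits this to at most 4 nodes), picks the IB first-hop destination as the GPU with the same in-node index on each target node, and leaves the per-node fan-out to NVLink:

```cpp
// Host-side sketch (illustration only, not DeepSeek's code) of the dispatch
// plan for a single token. Assumptions: 8 GPUs per node, experts placed
// contiguously so that expert e lives on global GPU rank e / EXPERTS_PER_GPU.
#include <cstdio>
#include <map>
#include <vector>

constexpr int GPUS_PER_NODE   = 8;    // assumed node size
constexpr int EXPERTS_PER_GPU = 32;   // assumed expert placement

struct Hop {
    int ib_dst_rank;            // first hop over IB: same in-node index on the target node
    std::vector<int> nvl_dsts;  // second hop over NVLink: GPUs hosting the target experts
};

std::vector<Hop> plan_dispatch(int my_rank, const std::vector<int>& routed_experts) {
    const int my_local = my_rank % GPUS_PER_NODE;    // my in-node index
    std::map<int, std::vector<int>> by_node;         // target node -> expert-hosting GPUs
    for (int e : routed_experts) {
        int gpu = e / EXPERTS_PER_GPU;
        by_node[gpu / GPUS_PER_NODE].push_back(gpu);
    }
    // The gating algorithm guarantees by_node.size() <= 4, so IB carries at most
    // 4 copies of the token no matter how many experts it hits in total.
    std::vector<Hop> plan;
    for (const auto& [node, gpus] : by_node) {
        plan.push_back({node * GPUS_PER_NODE + my_local, gpus});
    }
    return plan;
}

int main() {
    // A token on rank 5 routed to 8 experts that happen to live on 3 nodes:
    // one IB send per node, the rest of the fan-out rides on NVLink.
    std::vector<int> experts = {300, 350, 420, 515, 600, 700, 800, 900};
    for (const Hop& h : plan_dispatch(/*my_rank=*/5, experts)) {
        std::printf("IB -> rank %d, then NVLink -> ", h.ib_dst_rank);
        for (int g : h.nvl_dsts) std::printf("%d ", g);
        std::printf("\n");
    }
    return 0;
}
```

Under such a plan, IB carries at most 4 copies of a token regardless of how many experts it hits on a node, and since NVLink is roughly 3.2 times faster than IB, each node can absorb about 3.2 experts' worth of fan-out without NVLink becoming the bottleneck. That is where the 4 nodes × 3.2 experts/node ≈ 13 experts of headroom in the paper comes from.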
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
In plainer terms, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine direct-copy scenario described in 唐家山老师's log. Generally, GPUs within a single node talk to each other over NVLink, while GPUs across nodes rely on the IB network; but NVLink is about 3.2 times as fast as IB, so some optimization is needed to arrive at a better transfer strategy. This is a complete, end-to-end scheme.

My understanding is that PTX is used here to tailor thread execution more precisely, reducing the interference between the transfer of communication chunks and everything else running on the GPU.

The purpose is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.

As an analogy: it is as if you discovered that the NIC driver, while copying certain memory blocks, ends up serialized with the application's threads and drags efficiency down, so you bypass the OS-defined interface to the NIC driver and optimize directly against the instruction set the NIC itself supports.
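The paper does not say which PTX instructions were customized, so the following CUDA sketch is only my guess at the kind of thing involved: loads and stores carrying the .cs ("cache streaming, evict first") cache operator, so that communication chunks pass through L2 without evicting data that the overlapped compute kernels still need.

```cpp
// Purely illustrative guess, not DeepSeek's code: copy a communication chunk
// with .cs (cache-streaming, evict-first) PTX cache operators so the staging
// traffic does not linger in L2 at the expense of the overlapped compute kernels.
#include <cstdio>
#include <cuda_runtime.h>

__device__ __forceinline__ int4 load_streaming(const int4* p) {
    int4 v;
    asm volatile("ld.global.cs.v4.s32 {%0,%1,%2,%3}, [%4];"
                 : "=r"(v.x), "=r"(v.y), "=r"(v.z), "=r"(v.w)
                 : "l"(p));
    return v;
}

__device__ __forceinline__ void store_streaming(int4* p, int4 v) {
    asm volatile("st.global.cs.v4.s32 [%0], {%1,%2,%3,%4};"
                 :: "l"(p), "r"(v.x), "r"(v.y), "r"(v.z), "r"(v.w)
                 : "memory");
}

// Grid-stride copy of one communication chunk (n_vec int4 elements).
__global__ void copy_chunk_streaming(const int4* __restrict__ src,
                                      int4* __restrict__ dst, size_t n_vec) {
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x; i < n_vec; i += stride)
        store_streaming(dst + i, load_streaming(src + i));
}

int main() {
    const size_t n_vec = 1 << 20;   // 16 MiB chunk (16 bytes per int4)
    int4 *src, *dst;
    cudaMalloc(&src, n_vec * sizeof(int4));
    cudaMalloc(&dst, n_vec * sizeof(int4));
    copy_chunk_streaming<<<128, 256>>>(src, dst, n_vec);
    cudaDeviceSynchronize();
    std::printf("copied %zu vectors with .cs cache hints\n", n_vec);
    cudaFree(src); cudaFree(dst);
    return 0;
}
```

The auto-tuned chunk size presumably serves the same goal: it bounds how much communication data is in flight, and therefore resident in L2, at any one time.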