In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication

In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
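The arithmetic behind the "3.2 experts per node" and "13 experts" figures can be checked in a few lines. Only the bandwidth numbers come from the paper; the rounding of 12.8 up to 13 is the paper's own:

```python
# Bandwidths quoted in the section above.
NVLINK_GBPS = 160  # intra-node bandwidth
IB_GBPS = 50       # cross-node (IB) bandwidth

# While one copy of a token crosses IB, NVLink can fan the same token
# out to roughly this many GPUs inside the node "for free":
experts_per_node = NVLINK_GBPS / IB_GBPS  # 3.2

# With dispatch capped at 4 target nodes per token, the routed-expert
# count could grow to about 4 * 3.2 = 12.8 (the paper rounds to 13)
# with no increase in per-token IB traffic.
max_experts = 4 * experts_per_node

print(experts_per_node, max_experts)
```

The point is that IB, the slower link, is the binding constraint: each token pays for at most 4 IB transfers regardless of how many experts it ultimately reaches over NVLink.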
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
Put simply, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine direct-copy scenario described in 唐家山's log. Generally, GPUs within a single machine talk over NVLink, while GPUs across machines rely on the IB network; but NVLink is 3.2 times faster than IB, so some optimization is needed to get a better transfer strategy. This is a complete, end-to-end scheme.

My understanding is that PTX is used here to customize thread execution more precisely, reducing the interference between communication-chunk allocation/transfer and other work.

The purpose is not to bypass CUDA; on the contrary, it is to make CUDA more efficient.

As an analogy: it is as if you discovered that the NIC driver, when copying certain memory blocks between machines, ends up serializing with the application's threads and hurting efficiency, so instead of going through the OS-defined interface to the NIC driver, you optimize by programming directly against the instruction set the NIC supports.