In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
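To make the node-limited dispatch concrete, here is a minimal host-side sketch in CUDA C++ of how one token's routed experts can be grouped by node so that at most 4 IB transfers are issued, with the intra-node fan-out left to NVLink. The expert layout (8 GPUs per node, experts placed round-robin across GPUs), the helper `plan_dispatch`, and the constants are illustrative assumptions of mine, not DeepSeek-V3's actual code.

```cpp
// Hypothetical sketch of node-limited dispatch planning (not DeepSeek's code).
// Assumptions: 8 GPUs per node, experts laid out round-robin across GPUs, and
// the gating step has already produced the token's routed expert ids.
#include <cassert>
#include <cstdio>
#include <map>
#include <vector>

constexpr int GPUS_PER_NODE      = 8;   // assumption: 8 GPUs per node
constexpr int EXPERTS_PER_GPU    = 8;   // assumption: illustrative expert layout
constexpr int MAX_DISPATCH_NODES = 4;   // paper: at most 4 nodes per token

// node id -> in-node GPU indices that host this token's experts
using DispatchPlan = std::map<int, std::vector<int>>;

DispatchPlan plan_dispatch(const std::vector<int>& routed_experts) {
    DispatchPlan plan;
    for (int e : routed_experts) {
        int gpu      = e / EXPERTS_PER_GPU;     // global GPU hosting expert e
        int node     = gpu / GPUS_PER_NODE;
        int local_id = gpu % GPUS_PER_NODE;
        plan[node].push_back(local_id);         // reached via NVLink after the IB hop
    }
    // The MoE gating algorithm is co-designed so that one token's routed
    // experts never span more than MAX_DISPATCH_NODES nodes.
    assert(static_cast<int>(plan.size()) <= MAX_DISPATCH_NODES);
    return plan;
}

int main() {
    // One token with 8 routed experts: only plan.size() <= 4 IB transfers are
    // needed (one per node, to the GPU with the same in-node index); NVLink,
    // at roughly 3.2x the IB bandwidth, absorbs the intra-node fan-out, which
    // is why up to ~3.2 experts per node (13 total) cost the same IB traffic.
    DispatchPlan plan = plan_dispatch({3, 17, 30, 70, 85, 90, 130, 200});
    for (const auto& [node, gpus] : plan)
        std::printf("node %d: 1 IB transfer, %zu NVLink forwards\n",
                    node, gpus.size());
    return 0;
}
```

The point of the sketch is that the IB cost per token is bounded by the number of distinct target nodes (at most 4), while the faster NVLink handles the per-node fan-out to the GPUs that actually host the experts.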
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
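The kernel-only sketch below illustrates the warp-specialization pattern and the kind of PTX-level cache control this paragraph refers to; it is a generic illustration under names I made up (`dispatch_channel`, `copy_role`, the buffer parameters), not the actual DeepSeek-V3 kernel. Warps within one communication channel branch on their warp index to take the IB-send, IB-to-NVLink-forward, or NVLink-receive role, and loads of streaming communication data use the PTX `ld.global.cs` (evict-first) cache operator so they leave a smaller footprint in L2.

```cuda
// A generic warp-specialization sketch (hypothetical buffer names and trivial
// copy bodies, not the actual DeepSeek-V3 kernel). Warps inside one
// communication channel diverge on their warp index rather than on data, so
// each role runs as straight-line SIMT code; the warps-per-role counts would
// be tuned dynamically from the observed workload.
#include <cuda_runtime.h>

constexpr int WARP_SIZE = 32;

// Cache-streaming load via inline PTX: "ld.global.cs" marks the data as
// streaming (evict-first), limiting L2 pollution and hence interference with
// the computation kernels that overlap these communication kernels.
__device__ __forceinline__ int load_streaming(const int* addr) {
    int v;
    asm volatile("ld.global.cs.s32 %0, [%1];" : "=r"(v) : "l"(addr));
    return v;
}

// Placeholder role body: a strided copy standing in for the real chunk
// movement between RDMA buffers, NVLink peer memory, and local buffers.
__device__ void copy_role(int tid, int nthreads,
                          const int* src, int* dst, int n) {
    for (int i = tid; i < n; i += nthreads)
        dst[i] = load_streaming(src + i);
}

__global__ void dispatch_channel(const int* src, int* rdma_buf,
                                 int* peer_buf, int* local_buf, int n,
                                 int warps_ib_send, int warps_forward) {
    int warp        = threadIdx.x / WARP_SIZE;   // warp index in this channel
    int lane        = threadIdx.x % WARP_SIZE;
    int total_warps = blockDim.x / WARP_SIZE;

    if (warp < warps_ib_send) {
        // Role (1): IB sending -- stage token chunks into the RDMA send buffer.
        copy_role(warp * WARP_SIZE + lane, warps_ib_send * WARP_SIZE,
                  src, rdma_buf, n);
    } else if (warp < warps_ib_send + warps_forward) {
        // Role (2): IB-to-NVLink forwarding -- relay arrived chunks toward the
        // peer GPU that hosts the target expert.
        int w = warp - warps_ib_send;
        copy_role(w * WARP_SIZE + lane, warps_forward * WARP_SIZE,
                  rdma_buf, peer_buf, n);
    } else {
        // Role (3): NVLink receiving -- land forwarded chunks in local buffers.
        int w      = warp - warps_ib_send - warps_forward;
        int nwarps = total_warps - warps_ib_send - warps_forward;
        copy_role(w * WARP_SIZE + lane, nwarps * WARP_SIZE,
                  peer_buf, local_buf, n);
    }
}
```

Because the branch is taken per warp rather than per thread, there is no intra-warp divergence, and the warps-per-role arguments can be changed per launch to track the actual dispatch or combine workload.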
Put simply, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine memory-copy scenario described in 唐家山's blog post. In general, GPUs within a single machine talk over NVLink, while multi-machine multi-GPU setups rely on the IB network; but NVLink is about 3.2 times faster than IB, so some optimization is needed to get a better transfer strategy. It is a complete, end-to-end scheme.
My understanding is that PTX is used here to tailor thread execution more precisely and to reduce the crosstalk between the allocation and transfer of communication chunks and the other work running on the SMs.
The purpose is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
" z G. p0 Q7 `; I类比一下,就好比发现网卡驱动在对拷特定内存块的时候会和应用的线程执行出现串行导致效率降低,而绕开操作系统定义的与网卡驱动的接口,直接使用网卡支持的指令集进行了优化。 |