In the paper, this is the content of Section 3.2.2.
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
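To make the two-hop path concrete, here is a minimal host-side sketch of how a per-token dispatch plan could be derived under these rules. It is my own illustration, not DeepSeek's code: the 8-GPUs-per-node layout and names such as plan_token_dispatch and DispatchPlan are assumptions.

```cuda
// Hypothetical sketch of the two-hop dispatch plan: each token crosses IB at most
// once per target node (<= 4 nodes), landing on the GPU with the same intra-node
// index as the sender; that GPU then fans it out over NVLink to the GPUs that
// actually host its target experts.
#include <cstdio>
#include <set>
#include <vector>

constexpr int GPUS_PER_NODE = 8;        // assumption: 8 GPUs per node
constexpr int MAX_NODES_PER_TOKEN = 4;  // node limit enforced by the gating algorithm

struct Hop { int node; int local_gpu; };   // one NVLink forward inside a node
struct DispatchPlan {
    std::vector<int> ib_target_nodes;      // one IB send per distinct target node
    std::vector<Hop> nvlink_hops;          // NVLink forwards after the IB landing
};

// expert_gpus: global GPU index hosting each of the token's routed experts
DispatchPlan plan_token_dispatch(const std::vector<int>& expert_gpus, int src_local_gpu) {
    DispatchPlan plan;
    std::set<int> nodes;
    for (int gpu : expert_gpus) nodes.insert(gpu / GPUS_PER_NODE);
    // The routing is co-designed so that this constraint always holds.
    if ((int)nodes.size() > MAX_NODES_PER_TOKEN) { /* such a routing would be disallowed */ }

    for (int node : nodes) {
        plan.ib_target_nodes.push_back(node);      // IB hop lands on the same in-node index
        for (int gpu : expert_gpus) {
            if (gpu / GPUS_PER_NODE != node) continue;
            int local = gpu % GPUS_PER_NODE;
            if (local != src_local_gpu)            // expert on the landing GPU needs no forward
                plan.nvlink_hops.push_back({node, local});
        }
    }
    return plan;
}

int main() {
    // A token routed to 8 experts spread over 3 nodes: IB carries it only 3 times.
    std::vector<int> expert_gpus = {1, 3, 6, 9, 12, 14, 17, 22};
    DispatchPlan p = plan_token_dispatch(expert_gpus, /*src_local_gpu=*/2);
    std::printf("IB sends: %zu, NVLink forwards: %zu\n",
                p.ib_target_nodes.size(), p.nvlink_hops.size());
    return 0;
}
```

The point of the node limit is visible in the counts: IB carries the token at most 4 times regardless of how many experts it is routed to, and the faster NVLink hops absorb the fan-out inside each node.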
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
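A rough sketch of what warp specialization can look like in such a dispatch kernel is below. The queue structure, role functions, and launch shape are hypothetical placeholders of mine; the paper only states the three roles and that the warp count per role is adjusted dynamically.

```cuda
// Hypothetical illustration of warp specialization (Bauer et al., 2014): warps inside
// one communication-channel block are split by role, so IB sending, IB->NVLink
// forwarding, and NVLink receiving proceed concurrently on the same SM.
#include <cuda_runtime.h>

struct ChannelQueues {        // hypothetical per-channel work queues
    int* ib_send_queue;       // tokens waiting to be pushed over IB
    int* forward_queue;       // tokens that landed via IB, to be forwarded via NVLink
    int* recv_queue;          // tokens arriving from NVLink peers
    int  n_ib_send, n_forward, n_recv;
};

__device__ void ib_send_warp(ChannelQueues q, int lane)            { /* issue RDMA writes over IB */ }
__device__ void ib_to_nvlink_forward_warp(ChannelQueues q, int lane){ /* copy landed tokens via NVLink */ }
__device__ void nvlink_recv_warp(ChannelQueues q, int lane)        { /* gather into expert input buffers */ }

// One block per communication channel (the paper partitions 20 SMs into 10 channels).
// warps_for_send / warps_for_fwd would be retuned dynamically in the real implementation.
__global__ void dispatch_kernel(ChannelQueues q, int warps_for_send, int warps_for_fwd) {
    int warp_id = threadIdx.x / warpSize;
    int lane    = threadIdx.x % warpSize;

    if (warp_id < warps_for_send)
        ib_send_warp(q, lane);                     // role 1: IB sending
    else if (warp_id < warps_for_send + warps_for_fwd)
        ib_to_nvlink_forward_warp(q, lane);        // role 2: IB -> NVLink forwarding
    else
        nvlink_recv_warp(q, lane);                 // role 3: NVLink receiving
}
```

The combining kernel would mirror this with NVLink sending, NVLink-to-IB forwarding plus accumulation, and IB receiving plus accumulation as the three roles.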
To put it in plain terms, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine copy scenario described in 唐家山's log. Generally, GPUs within a single machine communicate over NVLink, while multi-machine, multi-GPU traffic relies on the IB network; but NVLink is 3.2 times faster than IB, so some optimization is needed to get a better transmission strategy. This is a complete, end-to-end scheme.
My understanding is that PTX is used here to customize thread execution more precisely and to reduce the crosstalk around allocating and transferring communication chunks.
The purpose is not to bypass CUDA; on the contrary, it is to make CUDA more efficient.
As an analogy, it is as if you discovered that when the NIC driver copies certain memory blocks, it ends up serialized with the application's threads and efficiency drops; so instead of going through the OS-defined interface to the NIC driver, you optimize directly with the instruction set the NIC itself supports.
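As a generic illustration of what "customized PTX instructions" for communication traffic can look like (the paper does not publish the actual instructions, so the .cs cache-streaming hint, load_streaming, and CHUNK_ELEMS below are my own assumptions), inline PTX lets a copy kernel mark its loads and stores as evict-first so they occupy as little L2 as possible while the overlapped compute kernels keep running:

```cuda
// Hypothetical example of inline PTX with the .cs (cache-streaming) policy: the
// communication data is touched once, so marking it evict-first keeps it from
// crowding out the L2 working set of the compute kernels it overlaps with.
#include <cuda_runtime.h>

constexpr int CHUNK_ELEMS = 4096;   // stand-in for the auto-tuned communication chunk size

__device__ __forceinline__ float load_streaming(const float* p) {
    float v;
    // ld.global.cs = streaming load, expected to be used once, evict first
    asm volatile("ld.global.cs.f32 %0, [%1];" : "=f"(v) : "l"(p));
    return v;
}

__device__ __forceinline__ void store_streaming(float* p, float v) {
    // st.global.cs = streaming store with the same evict-first policy
    asm volatile("st.global.cs.f32 [%0], %1;" :: "l"(p), "f"(v));
}

// Copy one communication chunk (e.g. token hidden states into a send buffer)
// without letting it occupy L2 capacity needed by the overlapped compute stream.
__global__ void copy_chunk_streaming(const float* __restrict__ src,
                                     float* __restrict__ dst) {
    for (int i = threadIdx.x; i < CHUNK_ELEMS; i += blockDim.x)
        store_streaming(dst + i, load_streaming(src + i));
}
```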