In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication

In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
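As a rough sketch of the node-limited dispatch described above (not DeepSeek's code; the layout constants, helper names and expert-to-GPU mapping below are my own assumptions for illustration), the idea is that a token crosses IB once per target node and then fans out over NVLink inside each node:

```cuda
// Hypothetical host-side sketch: given the routed expert ids already chosen for
// one token, derive its dispatch plan under the "at most 4 target nodes" limit.
// Assumed mapping: expert e lives on GPU (e / EXPERTS_PER_GPU), which sits on
// node (gpu / GPUS_PER_NODE).
#include <cstdio>
#include <set>
#include <utility>
#include <vector>

constexpr int GPUS_PER_NODE   = 8;  // assumption for illustration
constexpr int EXPERTS_PER_GPU = 8;  // assumption for illustration
constexpr int MAX_NODES       = 4;  // node cap from the paper

struct DispatchPlan {
    std::set<int> target_nodes;             // one IB transfer per target node
    std::vector<std::pair<int, int>> hops;  // (node, local GPU) NVLink fan-out
};

DispatchPlan plan_dispatch(const std::vector<int>& routed_experts) {
    DispatchPlan p;
    for (int e : routed_experts) {
        int gpu  = e / EXPERTS_PER_GPU;     // global GPU index hosting expert e
        int node = gpu / GPUS_PER_NODE;     // node hosting that GPU
        p.target_nodes.insert(node);
        p.hops.push_back({node, gpu % GPUS_PER_NODE});
    }
    // The gating algorithm is co-designed so this never fires: experts for a
    // token come from at most MAX_NODES nodes, keeping IB traffic bounded.
    if ((int)p.target_nodes.size() > MAX_NODES)
        std::printf("routing violates the %d-node limit\n", MAX_NODES);
    return p;
}

int main() {
    // 8 routed experts spread over 4 nodes: the token is sent once per node
    // over IB, then NVLink forwards it to each expert-owning GPU.
    DispatchPlan p = plan_dispatch({3, 7, 70, 75, 130, 135, 200, 210});
    std::printf("IB sends: %zu, NVLink hops: %zu\n",
                p.target_nodes.size(), p.hops.size());
    return 0;
}
```

With 4 target nodes, IB carries only one copy of the token per node, so the IB cost is the same whether the token hits 8 experts or the theoretical maximum of about 13 (4 nodes × roughly 3.2 experts per node).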
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
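To make the warp-specialization pattern concrete, here is a minimal sketch (my own illustration, not the actual kernels; the role_* functions are empty placeholders, and the real work of driving IB/RDMA and NVLink transfers is omitted): warps inside a block pick one of the three dispatch roles named above based on their warp id.

```cuda
// Warp specialization (Bauer et al., 2014), sketched for a dispatch channel:
// each communication channel is modeled as one thread block, and the warps in
// the block split into the three roles from the paper. In the real design,
// 20 SMs host 10 channels (2 SMs per channel) and the warp count per role is
// adjusted dynamically; here everything is fixed for simplicity.
#include <cuda_runtime.h>
#include <cstdio>

__device__ void role_ib_send(int channel, int lane)              { /* post IB sends (placeholder) */ }
__device__ void role_ib_to_nvlink_forward(int channel, int lane) { /* NVLink forwarding (placeholder) */ }
__device__ void role_nvlink_recv(int channel, int lane)          { /* receive into local buffers (placeholder) */ }

__global__ void dispatch_channel_kernel(int warps_per_role) {
    const int warp_id = threadIdx.x / warpSize;   // which warp within this block
    const int lane    = threadIdx.x % warpSize;
    const int channel = blockIdx.x;               // one block == one communication channel

    if (warp_id < warps_per_role)                 // first group of warps: IB sending
        role_ib_send(channel, lane);
    else if (warp_id < 2 * warps_per_role)        // second group: IB -> NVLink forwarding
        role_ib_to_nvlink_forward(channel, lane);
    else                                          // remaining warps: NVLink receiving
        role_nvlink_recv(channel, lane);
}

int main() {
    // 10 channels, 8 warps (256 threads) each; the 2-2-4 role split is only an example.
    dispatch_channel_kernel<<<10, 256>>>(/*warps_per_role=*/2);
    cudaDeviceSynchronize();
    std::printf("launched 10 dispatch channels\n");
    return 0;
}
```

The combining direction would follow the same shape, with the roles replaced by NVLink sending, NVLink-to-IB forwarding plus accumulation, and IB receiving plus accumulation.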
Putting it in plainer terms: the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the dual-machine copy scenario that 唐家山 described in his post. Generally, GPUs within a single node talk to each other over NVLink, while multi-node, multi-GPU traffic relies on the IB network; but NVLink's bandwidth is about 3.2 times that of IB, so some optimization is needed to get a better transfer strategy. This is a complete, end-to-end scheme.

My understanding is that PTX is used here to customize thread execution more precisely and to reduce the interference between communication-chunk allocation and transfers on the one hand and other work on the other.

The goal is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.

As an analogy: it is as if you found that the NIC driver, when copying certain memory blocks, ends up serializing with the application's threads and hurting efficiency, so instead of going through the OS-defined interface to the NIC driver, you optimize by using the instruction set the NIC itself supports.
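On the PTX point, here is a hedged illustration (the paper does not say which instructions are customized, so the specific hint below is just one plausible example): from CUDA you can emit inline PTX loads and stores with cache-control modifiers such as `.cs` (cache-streaming, evict-first), so a communication buffer streams through without crowding the L2 cache that compute kernels on the other SMs depend on.

```cuda
// Illustration only: a grid-stride copy that uses inline PTX cache hints.
// ld.global.cs / st.global.cs mark the data as streaming (evict-first),
// which limits how much of L2 the communication buffers occupy.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void streaming_copy(const float* __restrict__ src,
                               float* __restrict__ dst, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    for (; i < n; i += (size_t)gridDim.x * blockDim.x) {
        float v;
        asm volatile("ld.global.cs.f32 %0, [%1];" : "=f"(v) : "l"(src + i));
        asm volatile("st.global.cs.f32 [%0], %1;" :: "l"(dst + i), "f"(v) : "memory");
    }
}

int main() {
    const size_t n = 1 << 20;                  // stand-in for a communication chunk; tuned in practice
    float *src, *dst;
    cudaMalloc(&src, n * sizeof(float));
    cudaMalloc(&dst, n * sizeof(float));
    streaming_copy<<<64, 256>>>(src, dst, n);  // a modest number of blocks, like a few reserved SMs
    cudaDeviceSynchronize();
    std::printf("streamed %zu floats with .cs cache hints\n", n);
    cudaFree(src);
    cudaFree(dst);
    return 0;
}
```

This is exactly in the "not bypassing CUDA" spirit: the kernel is ordinary CUDA C++, and the inline PTX only gives finer control over how those loads and stores interact with the cache hierarchy.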