vllm.v1.attention.ops.deepseek_v4_ops.cache_utils ¶
Triton kernels for DeepseekV4 paged K-cache management and sparse-attention index preparation.
- quantize_and_insert_k_cache: quantize bf16 K to UE8M0 FP8 and insert into the paged cache.
- dequantize_and_gather_k_cache: gather and dequantize FP8 K from the paged cache for sparse/SWA prefill.
- compute_global_topk_indices_and_lens: map local topk indices to global KV cache slots and count valid entries.
- combine_topk_swa_indices: concatenate topk compressed indices with SWA window indices for sparse prefill.
compute_global_topk_indices_and_lens ¶
compute_global_topk_indices_and_lens(
    topk_indices: Tensor,
    token_to_req_indices: Tensor,
    block_table: Tensor,
    block_size: int,
    is_valid_token: Tensor,
) -> tuple[Tensor, Tensor]
Map local topk indices to global KV cache slots and count valid entries.
Fuses three operations into a single kernel:
1. Block-table lookup (local index → global slot id)
2. Valid-entry counting (topk_lens per token)
3. Masking padding tokens to length 0
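A minimal eager-mode PyTorch sketch of the same semantics, useful as a reference for what the fused kernel computes. The tensor shapes and the convention that negative entries in topk_indices mark invalid slots are assumptions for illustration, not taken from the source.

```python
import torch

def compute_global_topk_indices_and_lens_ref(
    topk_indices: torch.Tensor,          # (num_tokens, topk), assumed: negative = invalid
    token_to_req_indices: torch.Tensor,  # (num_tokens,), request id per token (assumed)
    block_table: torch.Tensor,           # (num_reqs, max_blocks_per_req), physical block ids
    block_size: int,
    is_valid_token: torch.Tensor,        # (num_tokens,), bool padding mask (assumed)
) -> tuple[torch.Tensor, torch.Tensor]:
    valid = topk_indices >= 0                       # per-entry validity
    safe = topk_indices.clamp(min=0)
    blocks = block_table[token_to_req_indices]      # (num_tokens, max_blocks_per_req)
    # 1. Block-table lookup: local KV index -> global KV cache slot id.
    global_idx = (
        blocks.gather(1, (safe // block_size).long()) * block_size
        + safe % block_size
    )
    global_idx = torch.where(valid, global_idx, torch.full_like(global_idx, -1))
    # 2. Count valid entries per token; 3. mask padding tokens to length 0.
    topk_lens = valid.sum(dim=-1) * is_valid_token.to(torch.int64)
    return global_idx, topk_lens
```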
Source code in vllm/v1/attention/ops/deepseek_v4_ops/cache_utils.py
quantize_and_insert_k_cache ¶
quantize_and_insert_k_cache(
    k: Tensor,
    k_cache: Tensor,
    slot_mapping: Tensor,
    block_size: int = 64,
    is_ue8m0: bool = True,
)
Quantize K tensor and insert into paged K cache.
K Cache block layout (block_size=64 tokens):
- First 64 * 576 = 36864 bytes: Token data
  - Each token: 448 bytes (fp8) + 128 bytes (bf16)
- Next 64 * 8 = 512 bytes: Scales
  - Each token: 8 bytes (uint8 scales, 7 real + 1 padding)
- Padded to multiple of 576
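To make the byte arithmetic concrete, here is a small worked computation of the sizes implied by the layout above. Reading "padded to multiple of 576" as rounding the whole block up to the next 576-byte boundary is an assumption.

```python
# Layout arithmetic for one K-cache block (block_size = 64 tokens),
# following the byte counts in the docstring above.
BLOCK_SIZE = 64
TOKEN_BYTES_FP8 = 448                               # fp8 portion of one token
TOKEN_BYTES_BF16 = 128                              # bf16 portion of one token
TOKEN_BYTES = TOKEN_BYTES_FP8 + TOKEN_BYTES_BF16    # 576 bytes per token
SCALE_BYTES_PER_TOKEN = 8                           # 7 real uint8 scales + 1 padding byte

token_data_size = BLOCK_SIZE * TOKEN_BYTES          # 36864 bytes
scales_size = BLOCK_SIZE * SCALE_BYTES_PER_TOKEN    # 512 bytes
raw_block_size = token_data_size + scales_size      # 37376 bytes

# Assumed padding rule: round the block up to the next multiple of 576 bytes.
block_stride = -(-raw_block_size // TOKEN_BYTES) * TOKEN_BYTES

print(token_data_size, scales_size, block_stride)   # 36864 512 37440
```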
Source code in vllm/v1/attention/ops/deepseek_v4_ops/cache_utils.py
quantize_and_insert_k_kernel ¶
quantize_and_insert_k_kernel(
    k_ptr,
    slot_mapping_ptr,
    k_cache_ptr,
    num_tokens,
    input_dim: constexpr,
    fp8_dim: constexpr,
    bf16_dim: constexpr,
    scale_dim: constexpr,
    quant_block: constexpr,
    cache_block_size: constexpr,
    token_data_size: constexpr,
    block_stride: constexpr,
    fp8_max: constexpr,
    n_quant_blocks: constexpr,
)
Quantize K tensor and insert into paged K cache.
K Cache block layout (block_size=64 tokens):
- [0, 64 * 576): Token data, each token has 448 fp8 + 128 bf16
- [64 * 576, 64 * 576 + 64 * 8): Scales, each token has 8 uint8 scales
- [64 * 576 + 64 * 8, block_stride): Padding
One program per token.
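For orientation, a hedged pure-PyTorch sketch of what one program might compute for its token: per-block power-of-two (UE8M0) scales over the fp8 portion, fp8 quantization, and an untouched bf16 tail. The element counts (448 fp8 values split into 7 blocks of quant_block = 64, a 64-element bf16 tail) follow from the byte layout above, but the scale byte encoding (biased exponent, bias 127) and the rounding rule are assumptions, not the kernel's verified behavior.

```python
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max   # 448.0
QUANT_BLOCK = 64                                 # assumed quant_block

def quantize_one_token(k_token: torch.Tensor, fp8_dim: int = 448):
    fp8_part = k_token[:fp8_dim].float().view(-1, QUANT_BLOCK)   # 7 quant blocks
    bf16_part = k_token[fp8_dim:]                                # stored unquantized

    amax = fp8_part.abs().amax(dim=-1).clamp(min=1e-12)
    # UE8M0 scale: round the per-block scale up to a power of two.
    exp = torch.ceil(torch.log2(amax / FP8_MAX))
    scale = torch.exp2(exp)
    q = (fp8_part / scale[:, None]).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)

    # Store the exponent as a biased uint8 (assumed encoding, bias 127 as in E8M0).
    scale_u8 = (exp + 127).to(torch.uint8)
    return q.reshape(-1), scale_u8, bf16_part
```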
Source code in vllm/v1/attention/ops/deepseek_v4_ops/cache_utils.py