vllm.model_executor.model_loader.weight_utils
Utilities for downloading and initializing model weights.
_BAR_FORMAT
module-attribute
_BAR_FORMAT = "{desc}: {percentage:3.0f}% Completed | {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}]\n"
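The string follows tqdm's `bar_format` syntax. A minimal sketch of how such a format string is consumed (the loop is illustrative only; the constant is reproduced locally so the snippet runs without vLLM installed):

```python
from tqdm import tqdm

# Mirrors the module attribute above so the snippet is self-contained.
_BAR_FORMAT = ("{desc}: {percentage:3.0f}% Completed | "
               "{n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}]\n")

for _ in tqdm(range(100), desc="Loading safetensors checkpoint shards",
              bar_format=_BAR_FORMAT):
    pass
```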
runai_model_streamer
module-attribute
runai_model_streamer = PlaceholderModule(
"runai_model_streamer"
)
DisabledTqdm
Bases: tqdm
_shared_pointers
composed_weight_loader
composed_weight_loader(
loader: LoaderFunction, fn: Callable[[Tensor], Tensor]
) -> LoaderFunction
Create a weight loader that post-processes the weights after loading
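A minimal sketch of composing a post-processing step onto an existing loader. The scaling function and tensor shapes here are hypothetical, and the `(param, loaded_weight)` calling convention is the one used by the other loaders in this module:

```python
import torch

from vllm.model_executor.model_loader.weight_utils import (
    composed_weight_loader, default_weight_loader)

def scale_down(t: torch.Tensor) -> torch.Tensor:
    # Hypothetical post-processing applied to the weight after loading.
    return t * 0.5

loader = composed_weight_loader(default_weight_loader, scale_down)

param = torch.nn.Parameter(torch.empty(8, 8))
loaded_weight = torch.randn(8, 8)  # stands in for a checkpoint tensor
loader(param, loaded_weight)       # copies the weight, then post-processes it
```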
convert_bin_to_safetensor_file
convert_pyslice_to_tensor
Convert a PySafeSlice object from safetensors to a torch.Tensor.
A PySafeSlice object supports indexing, which is applied before the actual tensor is loaded and can therefore reduce the amount of memory being read. However, it does not support more advanced functionality such as .view() or .t(), so if the loaded tensor needs to be modified with such operators, it must be converted to a tensor first.
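A sketch of the intended usage, assuming a safetensors file opened with `safe_open` (the file path and tensor name are placeholders):

```python
from safetensors import safe_open

from vllm.model_executor.model_loader.weight_utils import (
    convert_pyslice_to_tensor)

with safe_open("model.safetensors", framework="pt") as f:
    weight_slice = f.get_slice("decoder.embed_tokens.weight")  # PySafeSlice
    first_rows = weight_slice[:1024]  # indexing reads only the rows it needs
    # .t()/.view() are not available on the slice itself, so materialize first.
    full = convert_pyslice_to_tensor(weight_slice)
    transposed = full.t()
```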
default_weight_loader
Default weight loader.
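In model `load_weights` implementations this is typically the fallback used when a parameter does not define its own `weight_loader`. A toy sketch of that pattern (the model and checkpoint tensors below are made up):

```python
import torch
import torch.nn as nn

from vllm.model_executor.model_loader.weight_utils import default_weight_loader

model = nn.Linear(16, 16)  # stands in for a real vLLM model
params_dict = dict(model.named_parameters())

# (name, tensor) pairs such as those produced by the *_weights_iterator helpers.
checkpoint = [("weight", torch.randn(16, 16)), ("bias", torch.randn(16))]

for name, loaded_weight in checkpoint:
    param = params_dict[name]
    # Parameters created by vLLM's parallel layers may carry a custom
    # weight_loader attribute; fall back to the default copy otherwise.
    weight_loader = getattr(param, "weight_loader", default_weight_loader)
    weight_loader(param, loaded_weight)
```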
download_safetensors_index_file_from_hf
download_safetensors_index_file_from_hf(
model_name_or_path: str,
index_file: str,
cache_dir: Optional[str],
revision: Optional[str] = None,
) -> None
Download the safetensors index file of a model from the Hugging Face Hub.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_name_or_path` | `str` | The model name or path. | *required* |
| `index_file` | `str` | The safetensors index file name. | *required* |
| `cache_dir` | `Optional[str]` | The cache directory to store the model weights. If None, will use HF defaults. | *required* |
| `revision` | `Optional[str]` | The revision of the model. | `None` |
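For example (the repository name is illustrative; `model.safetensors.index.json` is the standard index file name for sharded safetensors checkpoints):

```python
from vllm.model_executor.model_loader.weight_utils import (
    download_safetensors_index_file_from_hf)

download_safetensors_index_file_from_hf(
    "mistralai/Mistral-7B-v0.1",      # example repo with a sharded checkpoint
    "model.safetensors.index.json",
    cache_dir=None,                   # use the default HF cache
)
```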
download_weights_from_hf
download_weights_from_hf(
model_name_or_path: str,
cache_dir: Optional[str],
allow_patterns: list[str],
revision: Optional[str] = None,
ignore_patterns: Optional[Union[str, list[str]]] = None,
) -> str
Download model weights from Hugging Face Hub.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model_name_or_path` | `str` | The model name or path. | *required* |
| `cache_dir` | `Optional[str]` | The cache directory to store the model weights. If None, will use HF defaults. | *required* |
| `allow_patterns` | `list[str]` | The allowed patterns for the weight files. Files matched by any of the patterns will be downloaded. | *required* |
| `revision` | `Optional[str]` | The revision of the model. | `None` |
| `ignore_patterns` | `Optional[Union[str, list[str]]]` | The patterns to filter out the weight files. Files matched by any of the patterns will be ignored. | `None` |

Returns:

| Name | Type | Description |
|---|---|---|
| `str` | `str` | The path to the downloaded model weights. |
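A sketch of a typical call that fetches only the safetensors shards (the repository and patterns are illustrative):

```python
import glob
import os

from vllm.model_executor.model_loader.weight_utils import download_weights_from_hf

hf_folder = download_weights_from_hf(
    "facebook/opt-125m",                # example repository
    cache_dir=None,                     # default HF cache location
    allow_patterns=["*.safetensors"],
    ignore_patterns=["original/**/*"],  # e.g. skip duplicated original weights
)
hf_weights_files = glob.glob(os.path.join(hf_folder, "*.safetensors"))
```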
enable_hf_transfer
Automatically activates hf_transfer.
fastsafetensors_weights_iterator
fastsafetensors_weights_iterator(
hf_weights_files: list[str], use_tqdm_on_load: bool
) -> Generator[tuple[str, Tensor], None, None]
Iterate over the weights in the model safetensor files using the fastsafetensors library.
filter_duplicate_safetensors_files
filter_duplicate_safetensors_files(
hf_weights_files: list[str],
hf_folder: str,
index_file: str,
) -> list[str]
filter_files_not_needed_for_inference
Exclude files that are not needed for inference.
See https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/trainer.py#L227-L233
get_gguf_extra_tensor_names
get_lock
get_quant_config
get_quant_config(
model_config: ModelConfig, load_config: LoadConfig
) -> QuantizationConfig
get_sparse_attention_config
get_sparse_attention_config(
model_config: ModelConfig,
load_config: LoadConfig,
sparse_attention_config_filename: str = "sparse_attention_config.json",
) -> dict[str, Any]
gguf_quant_weights_iterator
gguf_quant_weights_iterator(
gguf_file: str, gguf_to_hf_name_map: dict[str, str]
) -> Generator[tuple[str, Tensor], None, None]
Iterate over the quant weights in the model gguf files and convert them to torch tensors
initialize_dummy_weights
initialize_dummy_weights(
model: Module,
low: float = -0.001,
high: float = 0.001,
seed: int = 1234,
) -> None
Initialize model weights with random values.
The model weights must be randomly initialized for accurate performance measurements. Additionally, the model weights should not cause NaNs in the forward pass. We empirically found that initializing the weights with values between -1e-3 and 1e-3 works well for most models.
We use a per-parameter random seed so that the dummy weights are consistent even when the model is partitioned across multiple devices. When the seed is fixed, the random values generated by this function depend only on the parameter's number of elements and its data type.
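For example, when profiling a model without real checkpoints (e.g. what the `dummy` load format relies on; the toy model below is illustrative):

```python
import torch.nn as nn

from vllm.model_executor.model_loader.weight_utils import initialize_dummy_weights

model = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 32))
initialize_dummy_weights(model)  # parameters now hold small values in [-1e-3, 1e-3]
```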
maybe_remap_kv_scale_name
Remap the name of FP8 k/v_scale parameters.
This function handles the remapping of FP8 k/v_scale parameter names. It detects if the given name ends with a suffix and attempts to remap it to the expected name format in the model. If the remapped name is not found in the params_dict, a warning is printed and None is returned.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | The original loaded checkpoint parameter name. | *required* |
| `params_dict` | `dict` | Dictionary containing the model's named parameters. | *required* |

Returns:

| Name | Type | Description |
|---|---|---|
| `str` | `Optional[str]` | The remapped parameter name if successful, or the original name if no remapping is needed. |
| `None` | `Optional[str]` | If the remapped name is not found in params_dict. |
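A sketch of the typical call site inside a model's `load_weights` loop (the function, its arguments, and the "scale" guard are illustrative, not the exact vLLM implementation):

```python
from vllm.model_executor.model_loader.weight_utils import (
    default_weight_loader, maybe_remap_kv_scale_name)

def load_weights(model, weights):
    """Illustrative load_weights() loop that remaps FP8 k/v scale names."""
    params_dict = dict(model.named_parameters())
    for name, loaded_weight in weights:
        if "scale" in name:
            remapped = maybe_remap_kv_scale_name(name, params_dict)
            if remapped is None:  # remapped name not present in this model: skip
                continue
            name = remapped
        param = params_dict[name]
        weight_loader = getattr(param, "weight_loader", default_weight_loader)
        weight_loader(param, loaded_weight)
```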
np_cache_weights_iterator
np_cache_weights_iterator(
model_name_or_path: str,
cache_dir: Optional[str],
hf_folder: str,
hf_weights_files: list[str],
use_tqdm_on_load: bool,
) -> Generator[tuple[str, Tensor], None, None]
Iterate over the weights in the model np files.
Will dump the model weights to numpy files if they are not already dumped.
pt_weights_iterator
pt_weights_iterator(
hf_weights_files: list[str],
use_tqdm_on_load: bool,
pt_load_map_location: Union[
str, dict[str, str]
] = "cpu",
) -> Generator[tuple[str, Tensor], None, None]
Iterate over the weights in the model bin/pt files.
row_parallel_weight_loader
Load weights that are row-parallelized.
runai_safetensors_weights_iterator
runai_safetensors_weights_iterator(
hf_weights_files: list[str], use_tqdm_on_load: bool
) -> Generator[tuple[str, Tensor], None, None]
Iterate over the weights in the model safetensor files using the Run:ai Model Streamer.
safetensors_weights_iterator
safetensors_weights_iterator(
hf_weights_files: list[str], use_tqdm_on_load: bool
) -> Generator[tuple[str, Tensor], None, None]
Iterate over the weights in the model safetensor files.
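Typical usage pairs this with `download_weights_from_hf`; the file path below is a placeholder:

```python
from vllm.model_executor.model_loader.weight_utils import safetensors_weights_iterator

hf_weights_files = ["/path/to/model.safetensors"]  # placeholder path
for name, tensor in safetensors_weights_iterator(hf_weights_files,
                                                 use_tqdm_on_load=True):
    print(name, tuple(tensor.shape))
```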
sharded_weight_loader
sharded_weight_loader(shard_axis: int) -> LoaderFunction
Create a weight loader that shards the weights along the given axis
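A sketch of attaching the returned loader to a tensor-parallel parameter. Actually invoking the loader requires the tensor-parallel group to be initialized, and the shapes and the combination with `composed_weight_loader` below are illustrative:

```python
import torch

from vllm.model_executor.model_loader.weight_utils import (
    composed_weight_loader, sharded_weight_loader)

# Each rank copies only its slice of dim 0 from the full checkpoint tensor.
param = torch.nn.Parameter(torch.empty(128, 64))
param.weight_loader = sharded_weight_loader(0)

# Loaders compose, e.g. shard first and then post-process the loaded shard.
param.weight_loader = composed_weight_loader(
    sharded_weight_loader(0), lambda x: x.float())
```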