vllm.transformers_utils.tokenizer
AnyTokenizer
module-attribute
¶
AnyTokenizer = Union[
    PreTrainedTokenizer,
    PreTrainedTokenizerFast,
    TokenizerBase,
]
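AnyTokenizer is handy as an annotation for helpers that must accept any supported tokenizer backend. A minimal sketch (the helper function below is illustrative, not part of this module):

from vllm.transformers_utils.tokenizer import AnyTokenizer

def describe(tokenizer: AnyTokenizer) -> str:
    # Behaves the same no matter which member of the Union was passed in.
    return type(tokenizer).__name__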
get_lora_tokenizer_async
module-attribute
¶
get_lora_tokenizer_async = make_async(get_lora_tokenizer)
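An awaitable wrapper around get_lora_tokenizer. A minimal sketch of calling it from a coroutine; the LoRARequest field values are assumptions, not defaults of this module:

import asyncio

from vllm.lora.request import LoRARequest
from vllm.transformers_utils.tokenizer import get_lora_tokenizer_async

async def main() -> None:
    request = LoRARequest(
        lora_name="my-adapter",        # hypothetical adapter name
        lora_int_id=1,
        lora_path="/path/to/adapter",  # hypothetical local path
    )
    tokenizer = await get_lora_tokenizer_async(request)
    print(tokenizer is not None)

asyncio.run(main())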
cached_tokenizer_from_config
¶
cached_tokenizer_from_config(
    model_config: ModelConfig, **kwargs: Any
)
Source code in vllm/transformers_utils/tokenizer.py
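A minimal usage sketch, assuming a ModelConfig can be constructed directly from a model id with default settings; in practice the config usually comes from an already-built engine configuration, and the model id below is a placeholder:

from vllm.config import ModelConfig
from vllm.transformers_utils.tokenizer import cached_tokenizer_from_config

model_config = ModelConfig(model="facebook/opt-125m")  # assumed constructor usage
tokenizer = cached_tokenizer_from_config(model_config)
print(type(tokenizer).__name__)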
decode_tokens
¶
decode_tokens(
    tokenizer: AnyTokenizer,
    token_ids: list[int],
    *,
    skip_special_tokens: Optional[bool] = None,
) -> str
Backend-agnostic equivalent of HF's tokenizer.decode(token_ids, ...).
skip_special_tokens=None means to use the backend's default settings.
Source code in vllm/transformers_utils/tokenizer.py
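A minimal usage sketch; the model id is a placeholder and the token ids come from a plain HF encode call:

from vllm.transformers_utils.tokenizer import decode_tokens, get_tokenizer

tokenizer = get_tokenizer("facebook/opt-125m")  # placeholder model id
token_ids = tokenizer.encode("Hello, world!")
# Passing skip_special_tokens=None instead would defer to the backend's default.
print(decode_tokens(tokenizer, token_ids, skip_special_tokens=True))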
encode_tokens
¶
encode_tokens(
    tokenizer: AnyTokenizer,
    text: str,
    *,
    truncation: Optional[bool] = None,
    max_length: Optional[int] = None,
    add_special_tokens: Optional[bool] = None,
) -> list[int]
Backend-agnostic equivalent of HF's tokenizer.encode(text, ...).
add_special_tokens=None means to use the backend's default settings.
Source code in vllm/transformers_utils/tokenizer.py
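A minimal usage sketch showing an explicit truncation budget; the model id and prompt text are placeholders:

from vllm.transformers_utils.tokenizer import encode_tokens, get_tokenizer

tokenizer = get_tokenizer("facebook/opt-125m")  # placeholder model id
token_ids = encode_tokens(
    tokenizer,
    "A long prompt that should be capped at a fixed token budget.",
    truncation=True,
    max_length=16,
    add_special_tokens=None,  # None -> keep the backend's default behaviour
)
print(len(token_ids))  # at most 16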
get_cached_tokenizer
¶
get_cached_tokenizer(
    tokenizer: AnyTokenizer,
) -> AnyTokenizer
By default, transformers will recompute multiple tokenizer properties each time they are called, leading to a significant slowdown. This proxy caches these properties for faster access.
Source code in vllm/transformers_utils/tokenizer.py
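A minimal usage sketch; loading the raw tokenizer via transformers' AutoTokenizer is an assumption and the model id is a placeholder:

from transformers import AutoTokenizer

from vllm.transformers_utils.tokenizer import get_cached_tokenizer

raw_tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")  # placeholder
cached = get_cached_tokenizer(raw_tokenizer)
# Properties such as all_special_ids are now served by the caching proxy instead
# of being recomputed on every access.
print(cached.all_special_ids)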
get_lora_tokenizer
¶
get_lora_tokenizer(
    lora_request: LoRARequest, *args, **kwargs
) -> Optional[AnyTokenizer]
Source code in vllm/transformers_utils/tokenizer.py
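A minimal sketch of the synchronous variant; the LoRARequest field values are assumptions, and the Optional return is handled explicitly below:

from vllm.lora.request import LoRARequest
from vllm.transformers_utils.tokenizer import get_lora_tokenizer

request = LoRARequest(
    lora_name="my-adapter",        # hypothetical adapter name
    lora_int_id=1,
    lora_path="/path/to/adapter",  # hypothetical local path
)
tokenizer = get_lora_tokenizer(request)
if tokenizer is None:
    print("No adapter-specific tokenizer found; fall back to the base model's.")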
get_tokenizer
¶
get_tokenizer(
    tokenizer_name: Union[str, Path],
    *args,
    tokenizer_mode: str = "auto",
    trust_remote_code: bool = False,
    revision: Optional[str] = None,
    download_dir: Optional[str] = None,
    **kwargs,
) -> AnyTokenizer
Gets a tokenizer for the given model name via HuggingFace or ModelScope.
Source code in vllm/transformers_utils/tokenizer.py
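A minimal usage sketch; the model id is a placeholder and the keyword arguments mirror the signature above:

from vllm.transformers_utils.tokenizer import get_tokenizer

tokenizer = get_tokenizer(
    "facebook/opt-125m",       # placeholder HF / ModelScope model id
    tokenizer_mode="auto",
    trust_remote_code=False,
)
print(tokenizer.encode("Hello, world!"))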
patch_padding_side
¶
Patch the _pad method to accept padding_side for older tokenizers.
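A minimal sketch, assuming the function takes the tokenizer instance and patches it in place (the full signature is not shown in this section); the model id is a placeholder:

from transformers import AutoTokenizer

from vllm.transformers_utils.tokenizer import patch_padding_side

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")  # placeholder
patch_padding_side(tokenizer)
# The tokenizer's _pad method should now tolerate a padding_side argument even on
# older transformers releases.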