vllm.model_executor.models.nvlm_d
NVLMDummyInputsBuilder
Bases: BaseInternVLDummyInputsBuilder[NVLMProcessingInfo]
Source code in vllm/model_executor/models/nvlm_d.py
get_dummy_mm_data
get_dummy_mm_data(seq_len: int, mm_counts: Mapping[str, int]) -> MultiModalDataDict
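Dummy-inputs builders of this kind produce placeholder multimodal data sized for memory profiling. The following is a minimal pure-Python sketch of that idea, not the vLLM implementation; the `448` tile resolution and the tuple stand-in for an image are assumptions for illustration.

```python
from collections.abc import Mapping


def get_dummy_mm_data(seq_len: int, mm_counts: Mapping[str, int]) -> dict[str, list]:
    """Hypothetical stand-in for profiling-time dummy multimodal data."""
    # Assumed InternVL-style tile resolution; a real builder returns images
    # sized for the worst case so memory profiling observes peak usage.
    target_width, target_height = 448, 448
    num_images = mm_counts.get("image", 0)
    return {
        "image": [("dummy_image", target_width, target_height)] * num_images,
    }


data = get_dummy_mm_data(seq_len=4096, mm_counts={"image": 2})
```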
get_dummy_text
NVLMMultiModalProcessor
Bases: BaseInternVLMultiModalProcessor[NVLMProcessingInfo]
_get_prompt_updates
_get_prompt_updates(
    mm_items: MultiModalDataItems,
    hf_processor_mm_kwargs: Mapping[str, object],
    out_mm_kwargs: MultiModalKwargs,
) -> Sequence[PromptUpdate]
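Conceptually, a prompt update expands each image placeholder in the text prompt into the right number of image-context tokens so that token positions line up with the vision features. A simplified sketch of that expansion, with hypothetical placeholder and token names rather than vLLM's actual `PromptUpdate` machinery:

```python
def apply_prompt_updates(prompt: str, feature_sizes: list[int],
                         placeholder: str = "<image>",
                         context_token: str = "<IMG_CONTEXT>") -> str:
    # Expand placeholders one image at a time, since each image may occupy
    # a different number of feature tokens after dynamic tiling.
    for size in feature_sizes:
        prompt = prompt.replace(placeholder, context_token * size, 1)
    return prompt


expanded = apply_prompt_updates("Q: <image> and <image>?", feature_sizes=[3, 2])
```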
NVLMProcessingInfo
Bases: BaseInternVLProcessingInfo
get_hf_processor
get_hf_processor(
    *,
    min_dynamic_patch: Optional[int] = None,
    max_dynamic_patch: Optional[int] = None,
    dynamic_image_size: Optional[bool] = None,
    **kwargs: object,
) -> NVLMProcessor
NVLMProcessor
Bases: BaseInternVLProcessor
get_image_repl
get_image_repl(feature_size: int, num_patches: Optional[int]) -> PromptUpdateDetails[str]
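NVLM-D's prompt format interleaves 1-based tile tags with repeated image-context tokens and wraps the result in `<Image>...</Image>`. The sketch below illustrates that shape only; the exact tag names, thumbnail handling, and token accounting are assumptions here, not copied from the vLLM source:

```python
def get_image_repl(feature_size: int, num_patches: int,
                   context_token: str = "<IMG_CONTEXT>") -> str:
    # Each tile tag is followed by its share of the image-context tokens;
    # the whole replacement is wrapped in <Image>...</Image> (illustrative).
    per_tile = feature_size // num_patches
    tiles = [f"<tile_{i}>" for i in range(1, num_patches + 1)]
    body = "".join(tag + context_token * per_tile for tag in tiles)
    return "<Image>" + body + "</Image>"


repl = get_image_repl(feature_size=8, num_patches=2)
```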
NVLM_D_Model
Bases: InternVLChatModel
_init_mlp1
_init_mlp1(config: PretrainedConfig) -> Sequential
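InternVL-family projectors like `mlp1` typically pixel-shuffle the ViT feature grid (which multiplies the channel dimension by `1/ratio**2`) before projecting to the LLM hidden size with two linear layers. A shape-only sketch under those assumptions, with illustrative dimensions rather than NVLM-D's actual configuration:

```python
def mlp1_layer_dims(vit_hidden_size: int, llm_hidden_size: int,
                    downsample_ratio: float = 0.5) -> list[tuple[int, int]]:
    # Pixel shuffle trades spatial resolution for channels: at ratio 0.5,
    # the input to the first linear is 4x the ViT hidden size (assumed).
    in_dim = int(vit_hidden_size * (1 / downsample_ratio) ** 2)
    return [(in_dim, llm_hidden_size), (llm_hidden_size, llm_hidden_size)]


dims = mlp1_layer_dims(vit_hidden_size=3200, llm_hidden_size=4096)
```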
_init_vision_model
_init_vision_model(
    config: PretrainedConfig,
    quant_config: Optional[QuantizationConfig],
    *,
    is_mono: bool,
    prefix: str,
)