vllm.inputs

Modules:

data
parse
preprocess

DecoderOnlyInputs module-attribute

DecoderOnlyInputs: TypeAlias = (
    TokenInputs | EmbedsInputs | MultiModalInputs
)

A processed prompt from InputPreprocessor which can be passed to InputProcessor for decoder-only models.

ProcessorInputs module-attribute

A processed prompt from InputPreprocessor which can be passed to InputProcessor.

PromptType module-attribute

Schema for any prompt, regardless of model type.

This is the input format accepted by most LLM APIs.
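
Below is a minimal sketch of the prompt forms covered by this alias; the model name and token IDs are illustrative only.

from vllm import LLM

llm = LLM(model="facebook/opt-125m")  # hypothetical model choice

# A plain string is treated as a text prompt.
outputs = llm.generate("Hello, my name is")

# The same request as an explicit TextPrompt dict.
outputs = llm.generate({"prompt": "Hello, my name is"})

# A pre-tokenized request as a TokensPrompt dict.
outputs = llm.generate({"prompt_token_ids": [1, 15043, 29892]})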

SingletonInputs module-attribute

SingletonPrompt module-attribute

Schema for a single prompt, as opposed to a data structure that encapsulates multiple prompts, such as ExplicitEncoderDecoderPrompt.

__all__ module-attribute

__all__ = [
    "DataPrompt",
    "TextPrompt",
    "TokensPrompt",
    "PromptType",
    "SingletonPrompt",
    "ExplicitEncoderDecoderPrompt",
    "TokenInputs",
    "EmbedsInputs",
    "EmbedsPrompt",
    "token_inputs",
    "embeds_inputs",
    "DecoderOnlyInputs",
    "EncoderDecoderInputs",
    "ProcessorInputs",
    "SingletonInputs",
    "StreamingInput",
]

DataPrompt

Bases: _PromptOptions

Represents generic inputs that are converted to PromptType by IO processor plugins.

Source code in vllm/inputs/data.py
class DataPrompt(_PromptOptions):
    """
    Represents generic inputs that are converted to
    [`PromptType`][vllm.inputs.data.PromptType] by IO processor plugins.
    """

    data: Any
    """The input data."""

    data_format: str
    """The input data format."""

data instance-attribute

data: Any

The input data.

data_format instance-attribute

data_format: str

The input data format.
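
As a minimal sketch, a DataPrompt is simply a dict conforming to this schema; the payload and format label below are hypothetical and depend on the IO processor plugin in use.

from vllm.inputs import DataPrompt

data_prompt = DataPrompt(
    data={"image_url": "https://example.com/cat.png"},  # hypothetical payload
    data_format="image_url",  # hypothetical format label
)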

EmbedsInputs

Bases: _InputOptions

Represents embeddings-based inputs.

Source code in vllm/inputs/data.py
class EmbedsInputs(_InputOptions):
    """Represents embeddings-based inputs."""

    type: Literal["embeds"]
    """The type of inputs."""

    prompt_embeds: torch.Tensor
    """The embeddings of the prompt."""

prompt_embeds instance-attribute

prompt_embeds: Tensor

The embeddings of the prompt.

type instance-attribute

type: Literal['embeds']

The type of inputs.

EmbedsPrompt

Bases: _PromptOptions

Schema for a prompt provided via token embeddings.

Source code in vllm/inputs/data.py
class EmbedsPrompt(_PromptOptions):
    """Schema for a prompt provided via token embeddings."""

    prompt_embeds: torch.Tensor
    """The embeddings of the prompt."""

    prompt: NotRequired[str]
    """The prompt text corresponding to the token embeddings, if available."""

prompt instance-attribute

prompt: NotRequired[str]

The prompt text corresponding to the token embeddings, if available.

prompt_embeds instance-attribute

prompt_embeds: Tensor

The embeddings of the prompt.
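
A minimal sketch of passing precomputed embeddings as a prompt, assuming the model accepts prompt embeddings and the engine is started with prompt-embedding inputs enabled; the model name and tensor shape are illustrative.

import torch

from vllm import LLM

llm = LLM(model="facebook/opt-125m", enable_prompt_embeds=True)  # assumed engine flag

# Illustrative embeddings of shape (num_tokens, hidden_size).
prompt_embeds = torch.randn(8, 768)

outputs = llm.generate({"prompt_embeds": prompt_embeds})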

EncoderDecoderInputs

Bases: TypedDict

A processed pair of encoder and decoder singleton prompts from InputPreprocessor which can be passed to InputProcessor for encoder-decoder models.

Source code in vllm/inputs/data.py
class EncoderDecoderInputs(TypedDict):
    """
    A processed pair of encoder and decoder singleton prompts from
    [`InputPreprocessor`][vllm.inputs.preprocess.InputPreprocessor]
    which can be passed to
    [`InputProcessor`][vllm.v1.engine.input_processor.InputProcessor]
    for encoder-decoder models.
    """

    encoder: EncoderInputs
    """The inputs for the encoder portion."""

    decoder: DecoderInputs
    """The inputs for the decoder portion."""

decoder instance-attribute

decoder: DecoderInputs

The inputs for the decoder portion.

encoder instance-attribute

encoder: EncoderInputs

The inputs for the encoder portion.

ExplicitEncoderDecoderPrompt

Bases: TypedDict

Schema for a pair of encoder and decoder singleton prompts.

Note

This schema is not valid for decoder-only models.

Source code in vllm/inputs/data.py
class ExplicitEncoderDecoderPrompt(TypedDict):
    """
    Schema for a pair of encoder and decoder singleton prompts.

    Note:
        This schema is not valid for decoder-only models.
    """

    encoder_prompt: EncoderPrompt
    """The prompt for the encoder part of the model."""

    decoder_prompt: DecoderPrompt | None
    """
    The prompt for the decoder part of the model.

    Passing `None` will cause the prompt to be inferred automatically.
    """

decoder_prompt instance-attribute

decoder_prompt: DecoderPrompt | None

The prompt for the decoder part of the model.

Passing None will cause the prompt to be inferred automatically.

encoder_prompt instance-attribute

encoder_prompt: EncoderPrompt

The prompt for the encoder part of the model.
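
A minimal sketch, assuming an encoder-decoder model (e.g. a BART checkpoint) that is supported by the running vLLM version:

from vllm import LLM

llm = LLM(model="facebook/bart-large-cnn")  # hypothetical encoder-decoder model

enc_dec_prompt = {
    "encoder_prompt": "An article to summarize ...",
    # Passing None lets the decoder prompt be inferred automatically.
    "decoder_prompt": None,
}

outputs = llm.generate(enc_dec_prompt)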

StreamingInput dataclass

Input data for a streaming generation request.

This is used with generate() to support multi-turn streaming sessions where inputs are provided via an async generator.

Source code in vllm/inputs/data.py
@dataclass
class StreamingInput:
    """Input data for a streaming generation request.

    This is used with generate() to support multi-turn streaming sessions
    where inputs are provided via an async generator.
    """

    prompt: PromptType
    sampling_params: SamplingParams | None = None

prompt instance-attribute

prompt: PromptType

sampling_params class-attribute instance-attribute

sampling_params: SamplingParams | None = None

__init__

__init__(
    prompt: PromptType,
    sampling_params: SamplingParams | None = None,
) -> None
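
A minimal sketch of an async generator that yields StreamingInput items for a multi-turn streaming session; the prompts and parameters are illustrative, and the generator would be handed to generate() as described above.

import asyncio

from vllm import SamplingParams
from vllm.inputs import StreamingInput

async def turns():
    # First turn with explicit sampling parameters.
    yield StreamingInput(
        prompt="Hello!",
        sampling_params=SamplingParams(max_tokens=32),
    )
    # Later turns can omit sampling_params to reuse the defaults.
    await asyncio.sleep(0)  # e.g. wait for the previous turn to complete
    yield StreamingInput(prompt="Tell me more.")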

TextPrompt

Bases: _PromptOptions

Schema for a text prompt.

Source code in vllm/inputs/data.py
class TextPrompt(_PromptOptions):
    """Schema for a text prompt."""

    prompt: str
    """The input text to be tokenized before passing to the model."""

prompt instance-attribute

prompt: str

The input text to be tokenized before passing to the model.

TokenInputs

Bases: _InputOptions

Represents token-based inputs.

Source code in vllm/inputs/data.py
class TokenInputs(_InputOptions):
    """Represents token-based inputs."""

    type: Literal["token"]
    """The type of inputs."""

    prompt_token_ids: list[int]
    """The token IDs of the prompt."""

prompt_token_ids instance-attribute

prompt_token_ids: list[int]

The token IDs of the prompt.

type instance-attribute

type: Literal['token']

The type of inputs.

TokensPrompt

Bases: _PromptOptions

Schema for a tokenized prompt.

Source code in vllm/inputs/data.py
class TokensPrompt(_PromptOptions):
    """Schema for a tokenized prompt."""

    prompt_token_ids: list[int]
    """A list of token IDs to pass to the model."""

    prompt: NotRequired[str]
    """The prompt text corresponding to the token IDs, if available."""

    token_type_ids: NotRequired[list[int]]
    """A list of token type IDs to pass to the cross encoder model."""

prompt instance-attribute

prompt: NotRequired[str]

The prompt text corresponding to the token IDs, if available.

prompt_token_ids instance-attribute

prompt_token_ids: list[int]

A list of token IDs to pass to the model.

token_type_ids instance-attribute

token_type_ids: NotRequired[list[int]]

A list of token type IDs to pass to the cross encoder model.
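
A minimal sketch of building a TokensPrompt from an externally tokenized prompt; the model and tokenizer names are illustrative.

from transformers import AutoTokenizer

from vllm import LLM
from vllm.inputs import TokensPrompt

model = "facebook/opt-125m"  # hypothetical model choice
tokenizer = AutoTokenizer.from_pretrained(model)
llm = LLM(model=model)

token_ids = tokenizer.encode("Hello, my name is")
outputs = llm.generate(TokensPrompt(prompt_token_ids=token_ids))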

embeds_inputs

embeds_inputs(
    prompt_embeds: Tensor, cache_salt: str | None = None
) -> EmbedsInputs

Construct EmbedsInputs from optional values.

Source code in vllm/inputs/data.py
def embeds_inputs(
    prompt_embeds: torch.Tensor,
    cache_salt: str | None = None,
) -> EmbedsInputs:
    """Construct [`EmbedsInputs`][vllm.inputs.data.EmbedsInputs] from optional
    values."""
    inputs = EmbedsInputs(type="embeds", prompt_embeds=prompt_embeds)

    if cache_salt is not None:
        inputs["cache_salt"] = cache_salt

    return inputs
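
A minimal usage sketch; the embedding shape and cache salt are illustrative.

import torch

from vllm.inputs import embeds_inputs

prompt_embeds = torch.randn(4, 768)  # illustrative (num_tokens, hidden_size)

inputs = embeds_inputs(prompt_embeds=prompt_embeds, cache_salt="session-42")
assert inputs["type"] == "embeds"
assert inputs["cache_salt"] == "session-42"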

token_inputs

token_inputs(
    prompt_token_ids: list[int],
    cache_salt: str | None = None,
) -> TokenInputs

Construct TokenInputs from optional values.

Source code in vllm/inputs/data.py
def token_inputs(
    prompt_token_ids: list[int],
    cache_salt: str | None = None,
) -> TokenInputs:
    """Construct [`TokenInputs`][vllm.inputs.data.TokenInputs] from optional
    values."""
    inputs = TokenInputs(type="token", prompt_token_ids=prompt_token_ids)

    if cache_salt is not None:
        inputs["cache_salt"] = cache_salt

    return inputs
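
A minimal usage sketch; the token IDs and cache salt are illustrative and would normally come from a tokenizer.

from vllm.inputs import token_inputs

inputs = token_inputs(prompt_token_ids=[1, 15043, 29892], cache_salt="session-42")
assert inputs["type"] == "token"
assert inputs["prompt_token_ids"] == [1, 15043, 29892]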