vllm.distributed.kv_transfer.kv_connector.v1.lmcache_connector
LMCacheConnectorV1
Bases: KVConnectorBase_V1
Source code in vllm/distributed/kv_transfer/kv_connector/v1/lmcache_connector.py
_lmcache_engine
instance-attribute
__init__
__init__(vllm_config: VllmConfig, role: KVConnectorRole)
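The connector is normally constructed by vLLM itself, once per role, when the engine's KV-transfer config selects it. A minimal usage sketch, assuming LMCache is installed and that the `KVTransferConfig` fields shown here behave as in current vLLM releases (the model name is only an example):

```python
from vllm import LLM
from vllm.config import KVTransferConfig

# Sketch: select LMCacheConnectorV1 via the KV-transfer config and let vLLM
# instantiate it for the scheduler and worker roles.
ktc = KVTransferConfig(
    kv_connector="LMCacheConnectorV1",
    kv_role="kv_both",  # both save (produce) and load (consume) KV cache
)

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", kv_transfer_config=ktc)
print(llm.generate(["Hello, my name is"])[0].outputs[0].text)
```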
build_connector_meta
build_connector_meta(
    scheduler_output: SchedulerOutput,
) -> KVConnectorMetadata
Build the connector metadata for this step.
This function should NOT modify fields in the scheduler_output. Also, calling this function will reset the state of the connector.
Parameters:

Name | Type | Description | Default
---|---|---|---
scheduler_output | SchedulerOutput | the scheduler output object. | required
Source code in vllm/distributed/kv_transfer/kv_connector/v1/lmcache_connector.py
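A scheduler-side sketch of where this call sits in a step (hypothetical helper; `connector` and `scheduler_output` are assumed to exist, and the attribute used to carry the metadata is illustrative):

```python
def attach_connector_meta(connector, scheduler_output):
    # Build the per-step metadata after scheduling decisions are final.
    # This also resets the connector's internal per-step state, so it
    # should be called exactly once per scheduler step.
    meta = connector.build_connector_meta(scheduler_output)
    # Ship the metadata to the workers together with the scheduler output.
    scheduler_output.kv_connector_metadata = meta  # illustrative field name
    return scheduler_output
```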
get_finished
get_finished(
    finished_req_ids: set[str],
) -> tuple[Optional[set[str]], Optional[set[str]]]
Notifies worker-side connector ids of requests that have finished generating tokens.
Returns:

Type | Description
---|---
tuple[Optional[set[str]], Optional[set[str]]] | ids of requests that have finished asynchronous transfer (requests that previously returned True from request_finished()), as a tuple of (sending/saving ids, recving/loading ids). The finished saves/sends req ids must belong to a set provided in a call to this method (this call or a prior one).
Source code in vllm/distributed/kv_transfer/kv_connector/v1/lmcache_connector.py
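A worker-side polling sketch (hypothetical; the surrounding loop, `connector`, and `finished_req_ids` are assumed):

```python
def poll_async_transfers(connector, finished_req_ids: set[str]) -> None:
    # Report requests that finished generating this step and collect the ids
    # whose asynchronous KV saves/loads have completed since the last call.
    done_sending, done_recving = connector.get_finished(finished_req_ids)
    for req_id in done_sending or ():
        print(f"async save/send finished for {req_id}; its blocks may be freed")
    for req_id in done_recving or ():
        print(f"async load finished for {req_id}; it can resume scheduling")
```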
get_num_new_matched_tokens
get_num_new_matched_tokens(
    request: Request, num_computed_tokens: int
) -> tuple[int, bool]
Get number of new tokens that can be loaded from the external KV cache beyond the num_computed_tokens.
Parameters:

Name | Type | Description | Default
---|---|---|---
request | Request | the request object. | required
num_computed_tokens | int | the number of locally computed tokens for this request | required
Returns:

Type | Description
---|---
tuple[int, bool] | the number of tokens that can be loaded from the external KV cache beyond what is already computed.
Source code in vllm/distributed/kv_transfer/kv_connector/v1/lmcache_connector.py
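A scheduler-side sketch of the hit accounting this enables (hypothetical; assumes the return is `(num_external_tokens, load_async)` as in the table above, and that `request.prompt_token_ids` holds the prompt tokens):

```python
def plan_external_load(connector, request, num_computed_tokens: int):
    # Ask how many additional prompt tokens the external KV cache can supply
    # beyond the tokens already computed/cached locally.
    num_external_tokens, load_async = connector.get_num_new_matched_tokens(
        request, num_computed_tokens
    )
    # Tokens that still need a genuine prefill pass on this worker.
    to_prefill = (
        len(request.prompt_token_ids) - num_computed_tokens - num_external_tokens
    )
    return num_external_tokens, load_async, to_prefill
```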
request_finished
request_finished(
    request: Request, block_ids: list[int]
) -> tuple[bool, Optional[dict[str, Any]]]
Called when a request has finished, before its blocks are freed.
Returns:

Type | Description
---|---
tuple[bool, Optional[dict[str, Any]]] | True if the request is being saved/sent asynchronously and blocks should not be freed until the request_id is returned from get_finished(); optionally, KVTransferParams to be included in the request outputs returned by the engine.
Source code in vllm/distributed/kv_transfer/kv_connector/v1/lmcache_connector.py
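A sketch of the call site (hypothetical; `connector`, `request`, and `block_ids` are assumed to come from the scheduler and KV-cache manager):

```python
def on_request_finished(connector, request, block_ids: list[int]):
    # Called before the request's KV blocks are freed. If the connector is
    # still saving/sending them asynchronously, freeing must be deferred
    # until get_finished() later reports this request id.
    delay_free, kv_transfer_params = connector.request_finished(request, block_ids)
    if not delay_free:
        pass  # blocks can be returned to the pool immediately
    return kv_transfer_params  # optionally attached to the request's outputs
```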
save_kv_layer
save_kv_layer(
    layer_name: str,
    kv_layer: Tensor,
    attn_metadata: AttentionMetadata,
    **kwargs,
) -> None
Start saving a layer of KV cache from vLLM's paged buffer to the connector. This is called from within the attention layer to enable async copying during execution.
Parameters:

Name | Type | Description | Default
---|---|---|---
layer_name | str | the name of the layer. | required
kv_layer | Tensor | the paged KV buffer of the current layer in vLLM. | required
attn_metadata | AttentionMetadata | the attention metadata. | required
**kwargs | | additional arguments for the save operation. | {}
Source code in vllm/distributed/kv_transfer/kv_connector/v1/lmcache_connector.py
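A per-layer sketch of how the save is kicked off from the attention layer (hypothetical wrapper; `compute_attention` is an assumed callable, and real vLLM wires this through the forward context):

```python
import torch

def run_layer_and_save(connector, layer_name: str,
                       kv_layer: torch.Tensor, attn_metadata,
                       compute_attention):
    # Once this layer's attention has written its keys/values into the paged
    # buffer, kick off an async copy to the connector. The copy overlaps with
    # later layers and is fenced by wait_for_save() when the forward pass exits.
    out = compute_attention(kv_layer, attn_metadata)
    connector.save_kv_layer(layer_name, kv_layer, attn_metadata)
    return out
```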
start_load_kv
start_load_kv(
    forward_context: ForwardContext, **kwargs
) -> None
Start loading the KV cache from the connector to vLLM's paged KV buffer. This is called from the forward context before the forward pass to enable async loading during model execution.
Parameters:

Name | Type | Description | Default
---|---|---|---
forward_context | ForwardContext | the forward context. | required
**kwargs | | additional arguments for the load operation. | {}
Note
The number of elements in kv_caches and layer_names should be the same.
Source code in vllm/distributed/kv_transfer/kv_connector/v1/lmcache_connector.py
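A sketch of the forward-entry call (hypothetical; `connector` and `forward_context` are assumed to be provided by the model runner):

```python
def begin_forward(connector, forward_context) -> None:
    # Kick off asynchronous loading of externally cached KV into vLLM's paged
    # buffers before the forward pass starts; individual layers then block on
    # wait_for_layer_load() right before they need their KV.
    connector.start_load_kv(forward_context)
```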
update_state_after_alloc
update_state_after_alloc(
    request: Request,
    blocks: KVCacheBlocks,
    num_external_tokens: int,
)
Update KVConnector state after block allocation.
Source code in vllm/distributed/kv_transfer/kv_connector/v1/lmcache_connector.py
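A sketch of the call site (hypothetical; in vLLM the scheduler side invokes this right after the KV cache manager allocates blocks for the external tokens):

```python
def after_allocation(connector, request, blocks, num_external_tokens: int) -> None:
    # Tell the connector which blocks were just allocated for this request so
    # it knows where the externally cached tokens should be loaded.
    connector.update_state_after_alloc(request, blocks, num_external_tokens)
```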
wait_for_layer_load
wait_for_layer_load(layer_name: str) -> None
Block until the KV for a specific layer is loaded into vLLM's paged buffer. This is called from within the attention layer to ensure the async copying from start_load_kv is complete.
This interface will be useful for layer-by-layer pipelining.
Parameters:

Name | Type | Description | Default
---|---|---|---
layer_name | str | the name of that layer | required
Source code in vllm/distributed/kv_transfer/kv_connector/v1/lmcache_connector.py
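A layer-by-layer pipelining sketch (hypothetical; `connector`, `layer_names`, and `compute_layer` are assumptions, not vLLM objects):

```python
def run_layers_pipelined(connector, layer_names, compute_layer):
    # For each layer, block only until *that* layer's KV has landed in the
    # paged buffer, letting loads for later layers continue in the background.
    for name in layer_names:
        connector.wait_for_layer_load(name)
        compute_layer(name)
```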
wait_for_save
wait_for_save() -> None
Block until all the save operations are done. This is called as the forward context exits to ensure that the async saving from save_kv_layer is complete before finishing the forward pass.
This prevents the paged KV buffer from being overwritten before saving is done.
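A forward-exit sketch (hypothetical; vLLM's forward context handles this internally, the helper below only illustrates the ordering):

```python
def end_forward(connector) -> None:
    # Fence all save_kv_layer() copies issued during this forward pass so the
    # paged KV buffer is not overwritten by the next step before the data has
    # been fully handed off to the connector.
    connector.wait_for_save()
```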