vllm.attention.ops.ipex_attn
PagedAttention (module attribute)

PagedAttention = (
    _IPEXPagedAttention if _use_ipex else _PagedAttention
)
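
The alias is resolved once at import time: when IPEX (Intel Extension for PyTorch) is available, callers transparently get the IPEX-accelerated implementation, otherwise the pure-PyTorch fallback. Below is a minimal sketch of the same dispatch pattern; how the module actually derives `_use_ipex` is an assumption here:

```python
import importlib.util

# Probe for Intel Extension for PyTorch once at import time.
# (An assumption: the module may derive `_use_ipex` differently.)
_use_ipex = importlib.util.find_spec("intel_extension_for_pytorch") is not None

class _PagedAttention:                       # pure-PyTorch fallback
    pass

class _IPEXPagedAttention(_PagedAttention):  # IPEX-accelerated subclass
    pass

# Downstream code imports the single `PagedAttention` name and never
# branches on the backend itself.
PagedAttention = _IPEXPagedAttention if _use_ipex else _PagedAttention
```
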
_IPEXPagedAttention

Bases: _PagedAttention
forward_decode (staticmethod)

forward_decode(
    output: Tensor,
    query: Tensor,
    key_cache: Tensor,
    value_cache: Tensor,
    block_tables: Tensor,
    context_lens: Tensor,
    max_context_len: int,
    kv_cache_dtype: str,
    num_kv_heads: int,
    scale: float,
    alibi_slopes: Optional[Tensor],
    k_scale: Tensor,
    v_scale: Tensor,
    *args,
) -> None
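
forward_decode runs one decode step of attention per sequence over the paged KV cache, writing the result into output in place (hence the None return). The sketch below is a pure-PyTorch reference of the computation under assumed conventions: a (num_blocks, block_size, num_kv_heads, head_size) cache layout, num_kv_heads == num_heads (no GQA), no ALiBi, and no kv-cache quantization. The real kernel fuses all of this and honors kv_cache_dtype, k_scale, and v_scale:

```python
import torch

def forward_decode_reference(
    query: torch.Tensor,         # (num_seqs, num_heads, head_size)
    key_cache: torch.Tensor,     # (num_blocks, block_size, num_heads, head_size)
    value_cache: torch.Tensor,   # same layout as key_cache
    block_tables: torch.Tensor,  # (num_seqs, max_blocks_per_seq), int
    context_lens: torch.Tensor,  # (num_seqs,), int
    scale: float,
) -> torch.Tensor:
    num_seqs, num_heads, head_size = query.shape
    block_size = key_cache.shape[1]
    output = torch.empty_like(query)
    for i in range(num_seqs):
        ctx = int(context_lens[i])
        # Gather this sequence's K/V block by block via its block table.
        n_blocks = (ctx + block_size - 1) // block_size
        blocks = block_tables[i, :n_blocks].long()
        k = key_cache[blocks].reshape(-1, num_heads, head_size)[:ctx]
        v = value_cache[blocks].reshape(-1, num_heads, head_size)[:ctx]
        # Standard scaled dot-product attention for the one new token.
        attn = torch.einsum("hd,thd->ht", query[i], k) * scale
        probs = attn.softmax(dim=-1)
        output[i] = torch.einsum("ht,thd->hd", probs, v)
    return output

# Toy usage: 2 sequences, 4 heads, head_size 8, 16 blocks of 4 slots each.
q = torch.randn(2, 4, 8)
kc, vc = torch.randn(16, 4, 4, 8), torch.randn(16, 4, 4, 8)
bt = torch.tensor([[3, 7, 0], [5, 1, 2]])
cl = torch.tensor([6, 9])
out = forward_decode_reference(q, kc, vc, bt, cl, scale=8 ** -0.5)
```
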
split_kv_cache (staticmethod)

split_kv_cache(
    kv_cache: Tensor,
    num_kv_heads: int,
    head_size: int,
    *args,
) -> Tuple[Tensor, Tensor]
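
split_kv_cache carves one combined cache tensor into separate key and value caches. Here is a minimal sketch of the idea, assuming the combined tensor simply stacks keys and values along its first dimension; the actual layout and reshapes in this backend may differ:

```python
import torch
from typing import Tuple

def split_kv_cache_sketch(
    kv_cache: torch.Tensor,  # assumed (2, num_blocks, block_size * num_kv_heads * head_size)
    num_kv_heads: int,
    head_size: int,
) -> Tuple[torch.Tensor, torch.Tensor]:
    num_blocks = kv_cache.shape[1]
    # Indexing dim 0 yields views, so no data is copied.
    key_cache = kv_cache[0].view(num_blocks, -1, num_kv_heads, head_size)
    value_cache = kv_cache[1].view(num_blocks, -1, num_kv_heads, head_size)
    return key_cache, value_cache

kv = torch.empty(2, 16, 4 * 4 * 8)  # 16 blocks of 4 slots, 4 kv heads, head_size 8
k, v = split_kv_cache_sketch(kv, num_kv_heads=4, head_size=8)
assert k.shape == (16, 4, 4, 8) and v.shape == (16, 4, 4, 8)
```
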
write_to_paged_cache (staticmethod)

write_to_paged_cache(
    key: Tensor,
    value: Tensor,
    key_cache: Tensor,
    value_cache: Tensor,
    slot_mapping: Tensor,
    kv_cache_dtype: str,
    k_scale: Tensor,
    v_scale: Tensor,
    *args,
) -> None
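
write_to_paged_cache scatters the new tokens' key/value vectors into the paged caches; slot_mapping gives each token its flat destination slot (block_number * block_size + block_offset). A pure-PyTorch illustration of the scatter under that assumption follows; the real method additionally handles quantization via kv_cache_dtype, k_scale, and v_scale:

```python
import torch

def write_to_paged_cache_sketch(
    key: torch.Tensor,           # (num_tokens, num_kv_heads, head_size)
    value: torch.Tensor,
    key_cache: torch.Tensor,     # (num_blocks * block_size, num_kv_heads, head_size),
    value_cache: torch.Tensor,   # flattened (block, offset) view for simplicity
    slot_mapping: torch.Tensor,  # (num_tokens,), slot = block * block_size + offset
) -> None:
    # In-place scatter: each token's K/V lands in its assigned slot.
    key_cache[slot_mapping] = key
    value_cache[slot_mapping] = value

kc = torch.zeros(64, 4, 8)  # 16 blocks * 4 slots, 4 kv heads, head_size 8
vc = torch.zeros(64, 4, 8)
k_new, v_new = torch.randn(3, 4, 8), torch.randn(3, 4, 8)
write_to_paged_cache_sketch(k_new, v_new, kc, vc, torch.tensor([12, 13, 30]))
```
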
_PagedAttention

copy_blocks (staticmethod)
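
No signature is rendered above, but in vLLM's paged-attention backends copy_blocks duplicates whole physical KV blocks, e.g. for copy-on-write when sequences fork during beam search. A hedged sketch of the core loop, assuming per-layer (key_cache, value_cache) pairs and a list of (src, dst) block indices:

```python
import torch
from typing import List, Tuple

def copy_blocks_sketch(
    kv_caches: List[Tuple[torch.Tensor, torch.Tensor]],  # one (K, V) pair per layer
    block_mapping: List[Tuple[int, int]],                # (src_block, dst_block) pairs
) -> None:
    for key_cache, value_cache in kv_caches:
        for src, dst in block_mapping:
            # Block-granular, in-place copies on both caches.
            key_cache[dst].copy_(key_cache[src])
            value_cache[dst].copy_(value_cache[src])
```
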
forward_decode (staticmethod)

forward_decode(
    output: Tensor,
    query: Tensor,
    key_cache: Tensor,
    value_cache: Tensor,
    block_tables: Tensor,
    context_lens: Tensor,
    max_context_len: int,
    kv_cache_dtype: str,
    num_kv_heads: int,
    scale: float,
    alibi_slopes: Optional[Tensor],
    k_scale: Tensor,
    v_scale: Tensor,
    *args,
) -> None
get_kv_cache_shape (staticmethod)
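
The signature is not rendered above; typically this helper reports the shape the engine should allocate for the combined cache that split_kv_cache consumes. Purely as an illustration consistent with the sketches above (the backend's real shape is an assumption):

```python
from typing import Tuple

def get_kv_cache_shape_sketch(
    num_blocks: int, block_size: int, num_kv_heads: int, head_size: int
) -> Tuple[int, ...]:
    # Hypothetical: keys and values stacked along dim 0.
    return (2, num_blocks, block_size * num_kv_heads * head_size)
```
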
get_supported_head_sizes (staticmethod)
split_kv_cache (staticmethod)

split_kv_cache(
    kv_cache: Tensor,
    num_kv_heads: int,
    head_size: int,
    *args,
) -> Tuple[Tensor, Tensor]
write_to_paged_cache (staticmethod)

write_to_paged_cache(
    key: Tensor,
    value: Tensor,
    key_cache: Tensor,
    value_cache: Tensor,
    slot_mapping: Tensor,
    kv_cache_dtype: str,
    k_scale: Tensor,
    v_scale: Tensor,
    *args,
) -> None